Cognitive Readiness Assessment and Reporting
Cognitive readiness (CR) and performance in operational, time-critical environments are continuing points of focus for the military and academic communities. In response to this need, we designed an open-source interactive CR assessment application: a highly adaptive and efficient test administration and analysis tool. It is capable of evaluating the CR of individual personnel and combining these results into intuitive, visually based summaries of readiness. The components, logic, and architecture are presented.
THE BROAD RESEARCH DOMAINS of cognitive readiness (CR) and human performance for operational time-critical environments such as emergency response and military operations have long been critical focus areas within military and academic research communities. While the research community has firmly established assessment methods for measuring an individual's cognitive state and capability level, a current challenge within ongoing dynamic field operations is having the ability to employ these metrics proactively, obtain accurate assessments of individual and group CR, and then identify supplemental training or preparation needs. Given the advanced state of computer systems today, an integrated mobile software application that can employ subject matter expert (SME) measures to evaluate CR effectively and accurately for mission readiness is now feasible. The solution and methods must offer rapid testing, together with reliable prediction and evaluation measures, allowing supervisors to improve their abilities to assess, monitor, and mitigate issues leading to poor performance. Another primary intent is to have this system fully functional for field deployment environments, thus removing the dependency on classrooms and workstations while simultaneously taking advantage of the periodic personnel downtimes typically associated with deployments.

Any CR software for field use must be easily customizable, mobile-platform ready, and extensible. Similarly, the reporting system must be able to aggregate individual assessments to address the needs of both small mission-focused specialized subgroups and whole groups. For this, we designed an open-source interactive cognitive readiness assessment application (iCogRA2), an adaptive and efficient test administration and analysis tool for evaluating CR. Computer-based testing (CBT) and computerized-adaptive testing (CAT), based predominantly on item response theory (IRT) methods, provide a highly informative platform structure that minimizes workload.

iCogRA2 was functionally defined by technologists in the fields of test design, measurement, and CR. The system logic is purposefully linear and can leverage inputs from willing contributors through an open-source design.
TRAIT: DESCRIPTION

Situation awareness: Ability to perceive and comprehend one's place in the environment and how to react to changing conditions
Transfer: The ability to apply what is learned in one context to devise solutions in another performance context
Metacognition: The executive functions of thought used to monitor, assess, and regulate one's own cognitive processes
Automaticity: The ability to regulate processes that require limited conscious attention
Problem solving: The ability to analyze the current situation, devise strategies to understand the situation, and develop resolutions through a series of executable steps
Decision making: The ability to review various courses of action and then allocate sufficient resources to the problem
Mental flexibility and creativity: The ability to generate, adapt, and modify courses of action in response to variable situations
Leadership: A combination of technical, conceptual, ethical, and interpersonal competencies that encourage and support others in carrying out a designated course of action
Emotion: The ability to perform complex tasks under confusing, high-stress situations
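A taxonomy like the one above can be tagged onto test items in a library. The following sketch is purely illustrative (the enum and helper names are assumptions, not part of the iCogRA2 source): it represents the nine traits as an enumeration and maps free-text item tags onto them.

```python
from enum import Enum

class CRTrait(Enum):
    """Cognitive readiness traits from the taxonomy above (illustrative)."""
    SITUATION_AWARENESS = "situation awareness"
    TRANSFER = "transfer"
    METACOGNITION = "metacognition"
    AUTOMATICITY = "automaticity"
    PROBLEM_SOLVING = "problem solving"
    DECISION_MAKING = "decision making"
    FLEXIBILITY_CREATIVITY = "mental flexibility and creativity"
    LEADERSHIP = "leadership"
    EMOTION = "emotion"

def traits_for_item(tags: set[str]) -> set[CRTrait]:
    """Map free-text tags on a test item to known CR traits,
    ignoring tags that do not correspond to a trait."""
    known = {t.value: t for t in CRTrait}
    return {known[tag.lower()] for tag in tags if tag.lower() in known}
```

Tagging items this way lets scoring and training-recommendation code group question performance by trait without string comparisons scattered through the logic.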
categorization. This library is queried when generating test content for users as well as when performing scoring analysis.

To make training recommendations based on user test scores, the system needs information on available training courses. To that end, the training library table contains data organized into structured columns, including training course name, course identification number, identification numbers of prerequisite courses, and a list of topics that the course in question addresses. These categorizations serve as criteria for matching individual questions (and thereby performance on tests including those questions) to relevant training.

Users of the iCogRA2 software have various aspects of their testing experience and training history tracked. To this end, the system stores data relevant to individual users. The user library table sorts by individual identification numbers (which need not be linked to any confidential identifying information within the database) and contains each user's testing history (including individual test question responses), as well as information to support supervisors' testing assignments. Individual identification numbers can then be assigned (by an administrator) to a subsidiary role under a specified command user. These relationships are reflected in the stored data in the user library table. Test assignment may be performed on the server side or from the handheld: administrators simply select a test from a list of available examinations and then select the users to whom it should be assigned.

Test Logic. The CAT-IRT logic is a server-side solution. The system generates tests for individual users appropriate to the subject of interest and their individual performance during the course of the examination. Test questions are drawn from the iCogRA2 test library database as needed, and scores update after each answer submitted by writing the relevant answer information to the user library table. Because IRT uses a per-item approach to performance analysis, the system need not track users over their entire test session (beyond ensuring that they answer enough questions from the test library to fulfill the criteria of the examination) for purposes of content generation. Only the response to the question at hand is necessary to determine which question should be selected and dispensed to the user next.

A key component of iCogRA2 is an SME input interface for questions, answers, and precoded item response function options defined by the data stored in the test library. For example, as knowledge and research expand, SMEs can establish test question content and store the input in a defined XML format describing the necessary components of each question (e.g., question ID, text, answer type, question rating). This information is transformed at run time into a rich-text visualization for the user. SMEs can also use multimedia content by embedding links to images, video, and audio files in the question and answer text using simple inline syntax and HTML content tags.

Application Run-Time Environment and Software Design. The iCogRA2 computations are performed on the server, thus reducing the demands placed on the handhelds. Because handheld software is somewhat defined by the hardware, the systems essentially act
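The per-item selection idea behind CAT-IRT can be sketched with a standard two-parameter logistic (2PL) model: each item has a discrimination and a difficulty, and the next item served is the unanswered one carrying maximum Fisher information at the current ability estimate. This is a minimal illustration of the general maximum-information CAT rule, not the iCogRA2 implementation; the item parameters and function names are assumptions.

```python
import math

def p_correct(theta: float, a: float, b: float) -> float:
    """2PL item response function: probability of a correct answer
    for ability theta, discrimination a, difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def item_information(theta: float, a: float, b: float) -> float:
    """Fisher information contributed by a 2PL item at ability theta."""
    p = p_correct(theta, a, b)
    return a * a * p * (1.0 - p)

def next_item(theta: float,
              items: dict[str, tuple[float, float]],
              answered: set[str]) -> str:
    """Select the unanswered item that is most informative at the
    current ability estimate (maximum-information selection rule)."""
    candidates = {k: v for k, v in items.items() if k not in answered}
    return max(candidates, key=lambda k: item_information(theta, *candidates[k]))
```

Because information peaks where the success probability is near one half, this rule naturally serves items whose difficulty sits close to the examinee's current ability, which is why only the most recent response (used to update theta) is needed for content generation.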
entire group. For easy understanding, the scores are summarized in interactive rose diagrams depicting the relative strengths of various metrics for the group or individuals. A rose diagram simultaneously presents continuous or discrete values of multiple psychometric factors with ease. The values for a group along a given axis are calculated by any function from the observed results of each individual to an overall score, including the mean, median, or even multiple percentiles of the group. If the user desires to view a higher-resolution account of a given quality, he or she may click on the trait to expand it into a rose chart detailing the subfactors that influence the combined psychometric. For instance, a quality such as knowledge could be broken down into a set of measurements for each individual skill necessary for mission success.

Figure 6, an example chart, graphs the average and individual scores of several users side-by-side for comparison.
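The per-axis aggregation described above (any function from individual results to one group value per trait) can be sketched with the standard library; the function name and the choice of mean, median, and quartiles are illustrative assumptions, not the application's fixed behavior.

```python
from statistics import mean, median, quantiles

def axis_summary(scores_by_trait: dict[str, list[float]]) -> dict[str, dict[str, float]]:
    """Collapse individual scores into summary values for each
    rose-diagram axis: mean, median, and 25th/75th percentiles."""
    out = {}
    for trait, scores in scores_by_trait.items():
        q1, _, q3 = quantiles(scores, n=4)  # quartiles of the group
        out[trait] = {"mean": mean(scores), "median": median(scores),
                      "p25": q1, "p75": q3}
    return out
```

Each summary value would then set the radial extent of that trait's sector in the rose diagram, with percentile bands giving the "higher-resolution account" when a trait is expanded.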
FIGURE 7. USER PERFORMANCE LEVELS. Each axis of the rose chart represents an individually scaled score for one subject area, with the radial line indicating either the average of all users' scores or a single user's score. The intersection point of the radial line with the subject axis gives the performance of the group (or that user) for that metric.

Single Topic Assessment. When considering a single topic for assessment, the supervisor is presented with a scatter plot (see Figure 7). The vertical axis represents score and the horizontal axis individual users. In both the single-subject and multisubject graphing interfaces, the overall graph is color-coded for rapid analysis. According to preset thresholds, a given score for an individual represents poor, adequate, or good performance within that subject area. Here, those categorizations are represented by scaled gray shades (or by red, yellow, and green color coding on a phone). Selection of an individual point on the scatter plot or an individual line on the rose chart takes the administrator directly to test answers, training recommendations, and test assignment options for the selected user.

Part 4: Actionable Intelligence

Results Storage and Exportation. iCogRA2 generates time-stamped raw data; other programs can access the storage tables in an internal relational database that records individual soldier responses, date of response, and time taken to respond. The database also stores summary and analysis results of the raw data, including unit averages, medians, and reports. Each individual's personal record is updated with the date, time, and identification key for his or her results to support harvesting by SQL queries. Results can be packaged and exported from within iCogRA2 into .xls and .csv data tables for easy access by standard graphical and analytical tools.

Training Recommendations. The exported results can be fed into a program that automatically recommends and even schedules actions to remedy poor performance. Each evaluation criterion is categorically labeled depending on the type and nature of the quality being measured. These labels may be binary pass or fail, or they may have greater granularity, such as an A through F scale. The recommendation system requires a database consisting of all of the courses and supplemental tutorials available to augment categorical skills or knowledge.

Course recommendations are based on optimal individual and group readiness criteria and can be set accordingly, yet it remains the supervisor's task to determine whether the results indicate adequate CR. Indeed, logic similar to that of CAT could be used to assign individuals to an appropriate task in accordance with the recommendations, essentially because tasks intuitively have differing degrees of difficulty (physical or intellectual). If an individual is found to be unable to learn acceptable task details, the program suggests reassignment to another task of lesser difficulty where he or she may be better able to support operational objectives. Conversely, when a group deficiency for difficult tasks is observed, units with mastery of marginally easier tasks may be selected to train in the more complicated task. In each situation, shifting is performed to maximize the individual's value to the group while simultaneously maximizing his or her CR performance potential.

The final recommendation tool is based on temporally tracked operational readiness metrics. Even if a group shows competency on a given metric, there may be a downward trend in the metric when viewed as a function of time. Various modeling tools can be used to predict the trajectories, and if a downward trend predicts that a group will soon lose competency or readiness, the software will indicate a preventive measure necessary to maintain satisfactory levels.
MATTHEW HERIC, PhD, is CEO of IAVO Research and Scientific and e-Educational Systems and
Research and has been involved with military and educational research programs since the early
1980s. He has more than 25 years of experience as a researcher, consultant, and business owner. His
primary academic focus is the application of statistical models to evaluate learning and educational pro-
grams. He has led several instructional programs for the U.S. Air Force (U-2 program), NASA (Defense
Landsat Program Office), and the National Exploitation Lab (Washington, DC) and has worked as a
guest lecturer, invited speaker, and contract writer. He may be reached at mheric@iavo-rs.com.
JENN CARTER is the division director of the behavioral sciences group at IAVO Research and
Scientific and is leading three long-term Office of Naval Research projects in the field of human,
social, cultural, and behavioral research. She has expertise in rhetorical communications and has
worked with the North Carolina Population Center studying the human impact on land use and land
change in developing nations, automata model development, spatial analysis of human events, pre-
dictive model creation, and statistical applications. She may be reached at jcarter@iavo-rs.com.