
MIDTERM

ERGONOMICS 2
DEFINITION
● Cognitive ergonomics, as defined by the International Ergonomics Association, "is concerned with mental
processes, such as perception, memory, reasoning, and motor response, as they affect interactions among
humans and other elements of a system." The relevant topics include:
○ mental workload
○ decision-making
○ skilled performance
○ human-computer interaction
○ human reliability
○ work stress
○ training
(These topics are studied as they relate to human-system design. Cognitive ergonomics studies cognition in
work and operational settings in order to optimize human well-being and system performance. It is a subset
of the larger field of human factors and ergonomics.)

HISTORY
 The field of cognitive ergonomics emerged predominantly in the 1970s with the advent of the personal
computer and new developments in the fields of cognitive psychology and artificial intelligence.
 "Cognitive ergonomics is the application of psychology to work, to achieve the optimization between
people and their work."
 It is an applied science and has developed rapidly over the last 27 years.
 According to van der Veer, Enid Mumford was one of the pioneers of interactive systems engineering,
and advocated the notion of user-centered design.
 Criteria for developing user-centered design:
o task analysis (the evaluation of cognitive task demands)
o analyzing motor control and cognition during visual tasks (e.g., operating machinery, or the
evaluation of attention)

APPLICATIONS OF COGNITIVE ERGONOMICS


 Designing a software interface to be "easy to use"
 Designing icons and visual cues so that the majority of people will understand and act in the
intended manner
 Designing an airplane cockpit or nuclear power plant control system so that the operators will not
make catastrophic errors

WHY DO WE NEED COGNITIVE ERGONOMICS?


 The way people perceive and act has direct implications for the design of the objects and environments
that they use
 The mind should be as comfortable at work as the body
 If physical surroundings reflect and support people's natural cognitive tendencies:
o fewer errors
o a positive boost to performance and productivity

PERCEPTION
● Perception: the top-down way our brain organizes and interprets sensory information and puts it into context.
● Perceptual set: the psychological factors that determine how you perceive your environment.
● Factors affecting perception:
○ Context
○ Culture
○ Emotions or moods
○ Position

RULES OF GROUPING

● Form perception:
○ Proximity: grouping objects that are near one another
○ Continuity: perceiving smooth, continuous patterns rather than broken ones
○ Closure: filling in gaps to create a whole object

● Depth perception: the ability to see objects in three dimensions even though the images that strike the
retina are two-dimensional
○ Binocular cues: depth cues, such as retinal disparity, that depend on the use of both eyes. Since the
eyes are only about 2.5 inches apart, the two retinas receive slightly different images. The brain uses
the difference between the two images to judge distance; the closer an object, the greater the
difference. This is what we call retinal disparity.

○ Monocular cues: depth cues, such as interposition and linear perspective, available to either eye alone
RULES OF GROUPING
● The monocular cues are:
○ Relative size, height
○ Perspective
○ Texture gradient
○ Interposition
● Motion perception: infers the speed and direction of a moving object. "Shrinking objects are retreating and
enlarging objects are approaching." Large objects appear to move more slowly than small objects moving at
the same speed.
● Constancy: the tendency to recognize an object irrespective of its distance, viewing angle, motion, or
illumination

AREAS OF APPLICATION
● Industrial Areas
○ Human computer interaction
○ Transportation
○ Control processes
● Intervention Areas
○ Design
○ Technological Innovation
○ Safety and accident Investigations

HUMAN COMPUTER INTERACTION (HCI)

● HCI involves the study, planning, and design of the interaction between humans (users) and computers.
● It draws on supporting knowledge of both the computer and the human
● Computer side - techniques in computer graphics, operating systems, programming languages and
development environments are relevant
● Human side - communication theory, graphic and industrial design disciplines, linguistics, social sciences,
cognitive psychology and human factors such as computer user satisfaction are relevant
● Interaction occurs at the user interface, which includes both hardware and software
DESIGN AND DEVELOPMENT OF SOFTWARE SYSTEMS
In designing and developing a software system, the following two items are most important: that it
provides adaptability of use to fulfill each individual's needs, and that it enables the user easily to correct the
errors they create. The software should enable the user to perform the task in the logical manner in which the
user would like to perform it.

USER INTERFACE DESIGN


For this attribute, the two most important considerations are to minimize mental workload and
ambiguity, and to maximize task strength.

EVALUATION OF HCI

● HCI should be evaluated for consistency in design, the extent of usability of the interface, and the degree of
satisfaction associated with using the interface. For each of these three attributes, standardized checklists and
questionnaires are available.

● The ultimate objective of evaluating HCI is to derive an evaluation index that would provide a quantitative
evaluation of the effectiveness of each HCI. This could be tested by an independent laboratory (such as the
independent laboratories used for hardware products in the USA). By so doing, consumers would be well
informed regarding which products may be best for their use.

INDIVIDUAL AND CULTURAL DIFFERENCES

● Regarding individual and cultural differences, there are two main considerations: the spectrum from novice
to expert, and cultural differences. As an example, experts perform better when using commands, whereas
novices perform better when using menus. For cultural differences, both format and structure are important.
For example, Americans perform better when menus are presented horizontally, whereas people in Mainland
China perform better when menus are presented vertically.

MULTIMEDIA AND VISUALIZATION

● The term "multimedia" implies the use of more than one of such items as voice communication, graphics,
color and alphanumeric presentation. Multimedia is most helpful either in presenting complex problems or
when one wishes to present both a trend and an exact value. An example of the latter is well illustrated in
watch design: presenting the time digitally (which gives the exact time but takes longer to read) together
with an analogue dial (which shows the trend at a glance).

● Visualization is most effective for presenting abstract concepts such as the sum of squares in a statistics
procedure of analysis of variance.

INTELLIGENT INTERFACE AND SYSTEMS

● Prime candidates for intelligent system design are tasks that humans cannot perform very effectively, or
systems for which intelligent interfaces can be designed economically, such as the tuning of electronic
machines. In this case, the procedure and thought processes involved in the tuning were documented, and
based on this information a fuzzy-logic-based combined neural-network and knowledge-based system was
developed to perform the tuning intelligently (without human intervention). The machines can now be tuned
in a fraction of the time and with greater accuracy than was possible with human tuning.
● Intelligent interfaces can also act as decision supports in HCI operations. For example, in supervisory control
of flexible manufacturing systems, a real-time look-ahead capability is provided at the computer interface,
whose information the supervisor uses to better schedule machine utilization and, thus, minimize
throughput time.

INPUT DEVICES AND DESIGN OF WORKSTATION

● The physical part of HCI is concerned with input devices and workstation design. The effective design and
implementation of these in the workplace will result in reduced fatigue and lower musculoskeletal discomfort
than would otherwise be possible. Three-way adjustable chairs with good back support and arm rests are as
essential as wrist and foot rests and the use of input devices that require minimum force.

● Carpal tunnel syndrome frequently results from prolonged use of computers. This effect can be reduced by
using ergonomically designed input devices and workstations, and by providing rest periods during the
workday. However, the most critical factor in determining the occurrence of carpal tunnel syndrome in
computerized work is the length of time one works on the computer. Hence, job rotation between
computerized and non-computerized work is an essential element for reducing the probability of carpal tunnel
syndrome occurrence.

HUMAN-COMPUTER INTERACTION (HCI) STANDARDS

WHY ARE STANDARDS IMPORTANT IN IMPROVING USABILITY: CONSISTENCY


Most users have horror stories about inconsistency between, and even within, systems. Standards
provide a consistent reference across design teams or across time to help avoid confusion. In other fields,
consistency, for example, between components that should interconnect, is the prime motivation for
standards. It is certainly a worthwhile target for user interface standards.

WHY ARE STANDARDS IMPORTANT IN IMPROVING USABILITY: GOOD PRACTICE


In most fields, standards provide definitive statements of good practice. In user interface design there
are many conflicting viewpoints about good practice, and standards, especially international ones, can provide
independent and authoritative guidance. International standards are developed slowly, by consensus, using
extensive consultation and development processes. This has its disadvantages in such a fast-moving field as
user interface design and some have criticized any attempts at standardization as premature. However, there
are areas where a great deal is known that can be made accessible to designers through appropriate
standards and there are approaches to user interface standardization, based on human characteristics, that
are relatively independent of specific technologies.

WHY ARE STANDARDS IMPORTANT IN IMPROVING USABILITY: COMMON UNDERSTANDING


Standards do not guarantee good design but they do provide a means for different parties to share a
common understanding when specifying interface quality in design, procurement and use.
○ For users: standards allow them to set appropriate procurement requirements and to evaluate
competing suppliers' offerings.
○ For suppliers: standards allow them to check their product during design and manufacture and to
provide a basis for making claims about the quality of their products.
○ For regulators: standards allow them to assess quality and provide a basis for testing products.

WHY ARE STANDARDS IMPORTANT IN IMPROVING USABILITY: APPROPRIATE PRIORITIZATION OF USER INTERFACE ISSUES


One of the most significant benefits of standardization is that it places user interface issues squarely
on the agenda. Standards are serious business and whereas many organizations pay little regard to research
findings, few organizations can afford to ignore standards. Indeed, in Europe, and increasingly in other parts
of the world, compliance with relevant standards is a mandatory requirement in major contracts.

CURRENT USER INTERFACE STANDARDIZATION INITIATIVES


● The International Organization for Standardization (ISO) is the worldwide standardization organization
responsible for developing international standards across a wide range of fields.
● ISO is organized into technical committees (TC) and the ergonomics committee responsible for HCI
standards is ISO TC159 SC4.
USING STANDARDS IN SYSTEM DEVELOPMENT
● The contents of the main user interface standards, including those under development, are described briefly
in the context of how these standards might be used during a number of typical HCI activities (Table 4).
● The activities follow an approximate time sequence with some additional activities added towards the end
of the list. They are not intended to represent the “right way” to approach user interface design nor are all
the activities necessary in every project.

INTERFACE SPECIFICATION AND TASK SPECIFICATION, INITIAL DESIGN, SIMULATION, PROTOTYPING AND
MODELING, DESIGN AND BUILD
Specification, design and build involve different HCI activities, but the same standards are relevant.
Standards offer specific guidance on well-established design requirements, e.g. the pressure required to
operate a key, or the contrast required for an image to be distinct from its background.

I. DIALOGUE DESIGN AND INTERFACE NAVIGATION

● ISO 9241-10: 1996: Dialogue Principles presents high-level ergonomic principles that apply to the design of
dialogues between humans and information systems. These include suitability for the task, controllability and
error tolerance, among others. The principles are supported by scenarios that indicate the relative priorities
and importance of the different principles in practical applications.

● ISO 9241-14: 1997: Menu Dialogues provides recommendations on menu structure, navigation, option
selection and execution, and menu presentation (by various techniques including windowing, panels,
buttons, fields, etc.).

● ISO 9241-15: 1998: Command Dialogues provides recommendations on command language structure and
syntax, command representations, input and output considerations, feedback and help.

● ISO FDIS 9241-16: 1998: Direct Manipulation Dialogues provides recommendations on the manipulation of
objects, and the design of metaphors, objects and attributes. It covers those aspects of “graphical user
interfaces” that are directly manipulated and not covered by other parts of ISO 9241.

● ISO 9241-17: 1998: Form-filling Dialogues provides recommendations on form structure and output
considerations, input considerations and form navigation.
II. DISPLAY DESIGN
“Display design” refers both to the design of display hardware and to the presentation of information
on the display.
● Display hardware specification and design for office VDTs is covered in ISO 9241-3: 1992: Display
Requirements. This part deals with the design of screen hardware for visual display terminals. In addition to
design specifications, this part also contains a proposed user performance test as an alternative route to
conformance. (Note that the ergonomic requirements for flat panel displays are dealt with in ISO FDIS 13406-2.)

● ISO 9241-7: 1998: Display Requirements with Reflection deals with the ergonomic requirements for, and
details of, methods of measurement of reflections from the surface treatments.

● ISO 9241-8: 1997: Requirements for Displayed Colours deals with the ergonomic requirements for
multicolor displays that supplement the monochrome requirements in Part 3. Displays for control rooms are
dealt with separately in ISO 11064.

● Software aspects of display design are covered in ISO FDIS 9241-12: 1998: Presentation of Information. This
part deals with the specific ergonomics issues involved in representing and presenting information in visual
form. It includes guidance on ways of representing complex information, screen layout and design, as well as
the use of windows.

(There is already a substantial body of material available in guidelines and recommendations and this part
represents a distillation of the most useful and relevant ones.)

III. KEYBOARD AND INPUT DESIGN

● Keyboard specification and design (in terms of the operation of the keys and its ergonomic qualities) is
covered in ISO 9241-4: 1998: Keyboard Requirements. This deals with alphanumeric keyboard design. In
addition to design specifications, this part also contains a proposed user performance test as an alternative
route to conformance. It deals with the ergonomic aspects of the keyboard, not the layout, which is specified
in ISO 9995: Keyboard Layouts for Text Office Systems.

● Non-keyboard input devices are becoming increasingly popular and ISO FDIS 9241-9: 1998: Requirements
for Non-keyboard Input Devices deals with the ergonomic requirements for pointing devices including the
mouse, tracker ball, etc. that can be used in conjunction with a visual display terminal.

IV. WORKPLACE AND CONSOLE DESIGN

● Office workplaces incorporating VDTs are covered in some detail in ISO 9241-5: 1998: Workstation layout
and Postural Requirements. This part deals with the ergonomic requirements for a visual display terminal
workstation that will allow the user to adopt a comfortable and efficient posture. Workplaces in control
rooms are dealt with separately in ISO DIS 11064-3: 1998: Control Room Layout.

● Environment considerations (visual, acoustic, thermal) are covered in ISO 9241-6: 1998: Environment
Requirements.

USER SUPPORT, DOCUMENTATION, MANUALS AND TRAINING


In assessing the usability of a product in practice, real users take account of the documentation,
manuals and training received as well as the specific characteristics of the product. ISO 9241-13: 1998: User
Guidance covers some of these aspects and provides recommendations for the design and evaluation of user
guidance attributes of software user interfaces including prompts, feedback, status, on-line help and error
management.

SAFETY CRITICAL SYSTEMS

● Although many user interface standards are generic and apply to a wide range of different task situations,
safety critical systems pose special risks and may demand special standards. For example, the size, spacing,
force and travel of keys on a keyboard to be used in an office environment may not be achievable when the
same keyboard has to be used in a control room where protection from sparks or use by operators wearing
gloves may override normal requirements.

● ISO 11064 Parts 4-8 will address a number of HCI issues in safety critical systems.

USER-CENTERED DESIGN METHODS

● Although there may be disagreement about what user-centered design means in detail, most HCI specialists
would argue that it is fundamental to the practice of designing human-computer systems. Indeed, much of
ISO 9241 has an implicit user-centered design philosophy behind it.

● ISO FDIS 13407: 1998: Human-centered Design Process for Interactive Systems describes the ergonomic
process to be applied within the design process to ensure that proper attention is paid to the human and
user issues. Coverage includes usability design and evaluation, user-centered design methods, the use of
ergonomics standards in design activities and evaluation methods. The standard is aimed at those
responsible for managing design processes, and it presents a high-level overview of the activities
recommended for human-centered design.

HEALTH AND SAFETY ISSUES

● Health and safety concerns were the starting point for a number of the standardization and regulatory
measures. The Council of the European Economic Community (as it then was called) published a Directive on
29 May 1990 on the minimum safety and health requirements for work with display screen equipment
(90/270/EEC). This Directive comes under Article 118a and is concerned to ensure that there is a minimum
level of health and safety in member states; it is aimed at employers. The Directive therefore does not refer
to the product standards (e.g. ISO 9241) even though they deal with much of the same technical content.
With the exception of the UK, they are unlikely to be used formally in national implementations of the Display
Screen Directive.

● In practice, the European Standards EN 29241 developed by TC122 (based on ISO 9241) can be used by
suppliers to demonstrate compliance with the ergonomics state-of-the-art.

JOB DESIGN, GROUP WORKING AND ORGANIZATIONAL ISSUES


In developing ISO 9241, there was a clear recognition that many of the problems often attributed to
poor equipment or workplace design may in fact stem from poor job design. Thus ISO 9241-2: 1992:
Guidance on Task Requirements provides guidance on the design of display screen tasks based on nearly half
a century of research and organizational practice in socio-technical systems.

USERS WITH SPECIAL NEEDS, CHILDREN, THE DISABLED AND THE AGED
Many users of information technology feel that little account is taken of the requirements even of so-called
"normal" users. Users with special needs - whether transitory or permanent - are even less well catered for.
One of the major benefits of information technology is its potential to extend human capacity and to
complement human skill. A new work item has recently been started on accessibility for users with special
needs.

CONTROL PROCESS

● The term "process" refers here to the industry known as "processing industry". A processing industry is one
where energy and matter interact and transform one into another (Woods, O’Brien and Hanes, 1987).

● A typical example of such industries is nuclear power plants. But the industries of paper production and the
pasteurization of milk also belong to this category. There is one ergonomically relevant characteristic that
distinguishes among the examples of processing industries.

DESIGN
● There are two aspects of interest in system design:
○ Human Beings who design the system
○ Human Beings who interact with system

● In the early times of human factors engineering, ergonomists were called in to explain why a particular design
had not worked. Later on, they were called in to intervene directly in the design process (Wickens and Hollands,
2000). Today, the processes of innovation require that ergonomists "proactively" supply ideas and empirical
data for the design of future artefacts, improving human performance and public acceptance of new
technologies (Akoumianakis and Stephanidis, 2003; Kohler, Pannasch and Velichkovsky, 2008).

TECHNOLOGICAL INNOVATION

● The new concept of design is "user-centered" design


● It bases its development on a description, from the viewpoint of cognitive science, of the human being who
interacts with the system.
● Based on those characteristics, cognitive ergonomists provide engineers with a set of principles to be
considered in the design.
● This paradigm has led to the establishment of usability research, which has
contributed greatly to the effectiveness, efficiency and satisfaction of users in their interaction with
technologies and to a better interaction between users through technology (Holzinger, 2005)

SAFETY AND ACCIDENT INVESTIGATION

● "Human reliability analysis" (HRA): These techniques are based on the assumption that the actions of a
person in a workplace can be considered from the same point of view as the operations of a machine. The
objective is to predict the likelihood of human error and evaluate how the entire work system is degraded as
a result of this error, alone or in connection with the operation of the machines, the characteristics of the task,
the system design and the characteristics of individuals (Swain and Guttmann, 1983). This approach has led to
considerable progress in the efforts to predict the occurrence of human error. However, it has been
criticized as insufficient. Reason (1992) particularly notes that the main difficulty is the estimation of error
probability. In designing new systems, there are no prior data on the error probabilities. One can count on
data for simple components, such as errors committed in reading data from a dial or entering them on a
keyboard, but not the errors that may be committed in interacting with the system.
● "Cognitive psychology": ergonomists seek to know the mental processes responsible for committing an error
(Norman, 1981; Reason, 1992). They assume that errors are not caused by irresponsible behaviour or
defective mental functioning. They may rather be the consequence of not having taken into account how a
person perceives, attends, remembers, makes decisions, communicates and acts in a particular work-system
design. This standpoint suggests investigating the causes of human errors by analyzing the
characteristics of human information processing. Three types of errors can be largely attributed to the
familiarity that the person has with the system:
o Errors based on skills: When a person is very familiar with the task, actions are overlearned as a low-
level pre-programmed sequence of operations that do not require, and often are not under, conscious
control. If one of these actions is poorly performed or the sequence is applied in an unusual order, a
skill-based error occurs.
o Errors based on rules: The selection of actions in a situation often depends on the implementation of a
set of rules of the type IF(condition) THEN (action). The activation of the right rules depends on the
interpretation of the situational conditions. If a situation is misinterpreted the retrieved rule will be
inappropriate as well. In other words, an error based on rules will occur.
o Errors based on knowledge: When we encounter a new problem situation, such that existing skills and
learned rules are of little help, it is necessary to plan a novel action sequence for its resolution. This is a
higher-order cognitive activity demanding a lot of conscious control. If the actions are not planned
correctly, a knowledge-based error will occur.

ATTENTION
● Driving involves various subtasks: tracking, decision making, navigation, adherence to warnings and
signals, tending to the environment and mechanical systems (music system, AC), communicating, and
observing inside and outside events
● Points that can be noted: our ability to attend to stimuli is limited, and the direction of attention determines
how well we perceive, remember and act on information
● We may therefore conclude: observations that fall outside our region of awareness have little influence on
performance, so information on a display panel becomes irrelevant if not attended to by the operator
● It may also be noted that when a single overlearned response has been executed to a stimulus many times in
the past, attention is not needed. In such situations, however, familiar or irrelevant stimuli may interfere and
affect the operator's level of performance.

TYPES OF ATTENTION
● Selective attention: attention is given only to selected stimuli
● Divided attention: attention is given to multiple things at once
● Demanding attention: the task is difficult, so full attention must be given

MODELS OF ATTENTION
● Bottleneck model: specifies a particular stage in the information-processing sequence at which the
amount of information to which we can attend is limited.
● Resource model: attention is viewed as a limited-capacity resource that can be allocated to one or
more tasks.
● Further classification divides the bottleneck model into early-selection and late-selection models, and the
resource model into single-resource and multiple-resource models.

BOTTLENECK MODEL (FILTER THEORY)


● Proposed by Broadbent (1958)
● According to the theory, stimuli enter the central processing channel one at a time to be identified.
Thus the filtering of unwanted messages takes place prior to identification.
● Assumes the existence of a limited-capacity channel (typically with a capacity of one item) at some specific
stage of human information processing.
● Filtering can be done according to gross physical characteristics.
● Evidence for the theory is based on Cherry's studies of the "cocktail-party phenomenon".
● These studies of selective attention yielded the following highlights:
o Physically distinct messages can easily be repeated back
o If distinct messages are presented simultaneously, little attention is given to the unattended message,
even if it is repeated many times
o A change of language in the unattended ear is also rarely noticed
o Rapidly spoken words in one ear are blocked until the attended ear's information has been processed

EARLY (ATTENUATION) AND LATE SELECTION THEORY

● Early selection advocates argue that the locus of selection is at early stages of processing and that
therefore, unattended stimuli are not fully processed. In contrast, late selection theorists argue that attention
operates only after stimuli have been fully processed.

● Early Selection Theory is any theory of attention proposing that selection of stimuli for processing occurs
prior to stimulus identification. According to early-selection theory, unattended stimuli receive only a slight
degree of processing that does not encompass meaning, whereas attended stimuli proceed through a
significant degree of deep, meaningful analysis.

● Late Selection Theory is any theory of attention proposing that selection occurs after stimulus identification.
According to late-selection theory, within sensory limits, all stimuli - both attended and unattended - are
processed to the same deep level of analysis until stimulus identification occurs; subsequently, only the most
important stimuli are selected for further processing.

ATTENUATION/EARLY SELECTION THEORY


● Proposed by Treisman (1964)
● She proposed a filter-attenuation model in which an early filter serves only to attenuate the signal of an
unattended message rather than to block it entirely
● It thus proposes a "leaky" filter
● An attenuated message would not be identified under normal conditions, but the message could be
identified if familiarity or context sufficiently lowered the identification threshold

LATE SELECTION THEORY


● Proposed by Deutsch and Norman (1968)
● They argued that all messages are identified but decay rapidly if not selected or attended
● Both attenuation and late selection theory presume that the bottleneck may not be fixed but may vary as a
function of specific task requirements
● They suggested that as the information-processing system shifts from early-selection to late-selection mode,
more information is gathered from irrelevant sources, requiring a greater amount of effort to focus on the
relevant source

RESOURCE MODELS
● The resource model views attentional limitations as arising from a limited pool of resources for mental activity
● Performance suffers when resource demand exceeds the supply
● There are two models
○ Unitary resource model
○ Multiple resource model

UNITARY RESOURCE MODEL


● Proposed by Kahneman (1973)
● He proposed attention as a limited-capacity resource that can be applied to a variety of processes and tasks
● The execution of multiple tasks is not difficult unless the available capacity of attentional resources is
exceeded
● The allocation policy depends on momentary intentions and evaluation of the demands being placed on
these resources
● The Dual Task Procedure was proposed by Posner and Boies (1971)
● According to the unitary resource model of attention, there is a single resource of attention divided among
different tasks in different amounts, and attention is voluntarily shifted when the demands on attention
exceed the limited supply of attentional resources available.
● In the dual-task procedure, a person is required to perform two tasks at once, a primary and a secondary
task, with instructions to perform the primary task as well as possible
● According to the resource model, the whole attentional pool should therefore be devoted to the primary
task and any spare capacity to the secondary task

MULTIPLE RESOURCES MODEL


● Proposed by Navon and Gopher (1979)
● According to this model there is no single attentional resource; rather, several distinct subsystems each have
their own limited pool of resources
● The model assumes that two tasks can be performed together more effectively to the extent that they
require separate pools of resources
● This model was developed because the amount of performance decrement for multiple tasks often depends
on the stimulus and response modalities required for each task
● Based on the idea that multiple attentional resources exist and in some cases are separate from one
another (i.e., we can perform different attentional tasks at the same time without interference)

PROBLEM-SOLVING
● Solutions depend on the type of response we require, i.e. quick or accurate (a sketch contrasting the two
approaches follows this list)
○ Trial and error
○ Algorithms
■ Logical, methodical, step-by-step procedures that eventually guarantee a solution but may be
slow to work with
○ Heuristics
■ Simple strategies that allow us to solve problems faster, although they are more error-prone
than algorithms
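The contrast can be made concrete with a small search problem. The Python sketch below (all city names and coordinates are made up for illustration) compares exhaustive search, which guarantees the shortest round trip but scales factorially with the number of cities, against the nearest-neighbour heuristic, which is fast but may miss the optimum.

```python
from itertools import permutations
from math import dist

# Hypothetical city coordinates, for illustration only.
cities = {"A": (0, 0), "B": (1, 5), "C": (4, 1), "D": (6, 4)}

def tour_length(order):
    """Length of a round trip visiting the cities in the given order."""
    legs = list(order) + [order[0]]
    return sum(dist(cities[a], cities[b]) for a, b in zip(legs, legs[1:]))

# Algorithm: exhaustive search guarantees the shortest tour,
# but the number of orderings grows factorially with city count.
optimal = min(permutations(cities), key=tour_length)

# Heuristic: nearest neighbour is fast but may return a longer tour.
def nearest_neighbour(start="A"):
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        tour.append(min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c])))
        unvisited.remove(tour[-1])
    return tour

print(optimal, tour_length(optimal))
print(nearest_neighbour(), tour_length(nearest_neighbour()))
```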

A PRODUCTION SYSTEM FRAMEWORK


● Problem solving is modeled as search within an abstract mental problem space. This requires the following
aspects to be clearly defined:
o Declarative knowledge: the representation of the problem space - facts and relations we are able to
verbalize.
o Procedural knowledge: the allowable actions as defined by the problem
o Control knowledge: strategies used to coordinate the overall problem-solving process
o Production system: includes a global database, production rules that operate on the database, and a
control system that decides which rules to apply (see the sketch after this list).
■ The heart of a production system is its "if-then" statements.
o Task environment: characterized by a set of states and a set of operators that bring about allowable
changes
■ Newell and Simon's (1972) work helped determine a key performance parameter, i.e. "how to
represent the problem mentally, which is based on the task environment and other knowledge".
■ Limitations of problem solving:
● Inaccurate and incomplete problem statements
● The limited capacity of short-term memory
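As a rough illustration of the production-system idea, the minimal Python sketch below pairs a global database of facts with "if-then" rules and a control loop. The facts and rules are hypothetical, chosen only to show the cycle of matching and firing.

```python
# Minimal production-system sketch (hypothetical facts and rules).
database = {"alarm_on"}  # global database of known facts

# Each rule: (condition facts, facts to add when the rule fires)
rules = [
    ({"alarm_on"}, {"check_pressure"}),
    ({"check_pressure"}, {"pressure_high"}),
    ({"pressure_high"}, {"open_relief_valve"}),
]

def run(database, rules):
    fired = True
    while fired:                      # control system: keep cycling
        fired = False
        for condition, action in rules:
            # IF the condition facts are all in the database
            # THEN add the action facts (unless already known)
            if condition <= database and not action <= database:
                database |= action
                fired = True
                break                 # conflict resolution: first match wins
    return database

print(run(database, rules))
```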

LOGIC AND REASONING


● Two types of reasoning are usually distinguished: 1. Deduction 2. Induction
● DEDUCTION: refers to reasoning in which a conclusion about a particular case follows necessarily
from general premises about the problem. Formal logic involves arguments in the form of a list of premises
and a conclusion, which together are called a syllogism.
○ There are two inference rules: modus ponens and modus tollens (schematized after this list)
■ Modus Ponens states that given the major premise that A implies B and the minor premise
that A is true, it can be inferred that B is true
(Latin for "method of affirming." A rule of inference used to draw logical conclusions, which states that if p is
true, and if p implies q (p → q), then q is true.)

■ Modus Tollens states that given the major premise that A implies B and the minor premise
that B is false, it can be inferred that A is false
(Latin for "method of denying." A rule of inference drawn from the combination of modus ponens and the
contrapositive. If q is false, and if p implies q (p → q), then p is also false.)

○ Conditional Reasoning: deductive reasoning with conditional statements of an "if-then" form is
called conditional reasoning. Example: If the system was shut down, then there was a system failure
○ Categorical Reasoning: involves statements that include the quantifiers some, all, no, and some ... not.
Example: All A's are B's and all B's are C's; therefore all A's are C's.
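In standard logical notation (a conventional rendering, not from the source), the two inference rules can be schematized as:

```latex
\[
\text{Modus ponens: } \frac{p \rightarrow q \qquad p}{\therefore\; q}
\qquad\qquad
\text{Modus tollens: } \frac{p \rightarrow q \qquad \neg q}{\therefore\; \neg p}
\]
% Using the conditional from the text: if the system was shut down (p),
% then there was a system failure (q). Observing p licenses q (ponens);
% observing not-q licenses not-p (tollens).
```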

● INDUCTION is reasoning in which a generalized conclusion is drawn from particular cases. It involves
inferential processes that expand knowledge under uncertainty
● Two types of knowledge are modified:
○ Procedural
○ Conceptual
● Adaptive heuristics: human inductions are constrained by adaptive heuristics that favor certain classes of
hypotheses over others, and by cognitive limitations. One such heuristic is availability

DECISION-MAKING
● Normative Theory and Descriptive Theory
● Normative theory: concerns how we should choose among possible actions under ideal conditions. It relies
on the notion of utility, i.e. how much a particular outcome is worth to the decision maker
● Descriptive theory: concerns how decision makers actually choose. They pay attention to a single dominant
attribute and are less willing to make trade-offs on other dimensions, especially in stressful situations, and
are prone to preference reversals.

DECISION-MAKING MODELS
● Normative decision models date back to the early application of economics and statistics to specify how to
make optimal decisions (von Neumann and Morgenstern, 1947; Savage, 1954); thus, they focus heavily on the
notion of rationality (Savage, 1954; von Winterfeldt and Edwards, 1986).
● In contrast, behavioral decision models acknowledge the limitations of human decision makers: they are
systematically biased and use heuristics to overcome cognitive limitations. In these models, decision makers
are not rational but boundedly rational (Simon, 1955).

● Naturalistic decision models extend this perspective to understand how people actually make decisions in
cognitively complex tasks and realistic, demanding situations that cannot easily be replicated in a laboratory
setting.

NORMATIVE MODELS
● Classical decision theory represents preference and choice problems in terms of four basic elements: (1) a
set of potential actions (Ai) to choose between, (2) a set of events or world states (Ej), (3) a set of
consequences (Cij) obtained for each combination of action and event, and (4) a set of probabilities (Pij) for
each combination of action and event.

● For example, a decision maker might be deciding whether to wear a seat belt when traveling in an
automobile. Wearing or not wearing a seat belt corresponds to two actions, A1 and A2. The expected
consequence (Cij) of either action depends on whether an accident occurs. Having or not having an accident
corresponds to two events, E1 and E2. Wearing a seat belt reduces the severity of the consequence (C11) of
having an accident (E1). As the probability of having an accident increases, use of a belt should therefore
become more attractive.
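A minimal sketch of how the four elements combine for the seat-belt example is shown below in Python; the probabilities and consequence values are hypothetical, chosen only to make the expected-value comparison concrete.

```python
# Expected-consequence calculation for the seat-belt example.
# All probabilities and consequence values are hypothetical, chosen
# only to show how the elements (Ai, Ej, Cij, Pij) combine.

p_accident = 0.01                      # P(E1); P(E2) = 1 - p_accident

# Consequences Cij on an arbitrary utility scale (higher = better):
consequences = {
    "wear_belt": {"accident": -20, "no_accident": -1},   # -1: minor discomfort
    "no_belt":   {"accident": -100, "no_accident": 0},
}

def expected_value(action):
    c = consequences[action]
    return p_accident * c["accident"] + (1 - p_accident) * c["no_accident"]

for action in consequences:
    print(action, round(expected_value(action), 2))
# As p_accident grows, wearing the belt becomes increasingly attractive:
# with these numbers the expected values cross near p_accident = 0.0123.
```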

● Normative models are based on basic axioms (or what are felt to be self-evident assumptions) of rational
choice.

I. AXIOMS OF RATIONAL CHOICE


● Numerous axioms have been proposed that are essential either for a particular model of choice or for the
method of eliciting the numbers used in a particular model (von Winterfeldt and Edwards, 1986). The best-
known set of axioms (Table 1) establishes the normative principle of subjective expected utility (SEU) as a
basis for making decisions.
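Stated as a formula (a standard rendering, using the elements Pij and Cij defined in the preceding section), the SEU principle says to choose the action with the highest probability-weighted utility:

```latex
\[
\mathrm{SEU}(A_i) \;=\; \sum_{j} P_{ij}\, U(C_{ij}),
\qquad
A^{*} \;=\; \arg\max_{i}\; \mathrm{SEU}(A_i)
\]
% U(.) maps each consequence Cij onto the decision maker's utility scale.
```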

● On an individual basis, these axioms are intuitively appealing (Stokey and Zeckhauser, 1978), but people's
preferences can deviate significantly from the SEU model in ways that conflict with certain axioms.
Consequently, there has been a movement toward developing less restrictive standards of normative decision
making (Zey, 1992; Frisch and Clemen, 1994).
● Frisch and Clemen (1994, p. 49) propose that “a good decision should (a) be based on the relevant
consequences of the different options (consequentialism), (b) be based on an accurate assessment of the
world and a consideration of all relevant consequences (thorough structuring), and (c) make tradeoffs of
some form (compensatory decision rule).”

● Consequentialism and the need for thorough structuring are both assumed by all normative decision rules.
Most normative rules are also compensatory.

● However, when people make routine habitual decisions, they often do not consider the consequences of
their choices. In addition, because of cognitive limitations and the difficulty of obtaining information, it
becomes unrealistic in many settings for the decision maker to consider all the options and possible
consequences.

● To make decisions under such conditions, decision makers may limit the scope of the analysis by applying
principles such as satisficing and other noncompensatory decision rules. They may also apply heuristics, based
on their knowledge or experience, leading to performance that can approximate the results of applying
compensatory decision rules.

II. DOMINANCE

Dominance is perhaps the most fundamental normative decision rule. Dominance is said to occur between two
alternative actions, Ai and Aj, when Ai is at least as good as Aj for all events E, and for at least one event Ek, Ai is
preferred to Aj. For example, one investment might yield a better return than another regardless of whether the stock
market goes up or down. Dominance can also be described for the case where the consequences are multidimensional.
This occurs when, for all events E, the kth consequence associated with action i (Cik) and action j (Cjk) satisfies the
relation Cik ≥ Cjk for all k, and for at least one consequence Cik > Cjk. For example, a physician choosing between
alternative treatments has an easy decision if one treatment is both cheaper and more effective for all patients.

Dominance is obviously a normative decision rule, since a dominated alternative can never be better than the
alternative that dominates it. Dominance is also conceptually simple, but it can be difficult to detect when there are
many alternatives to consider or many possible consequences. The use of tests for dominance by decision makers in
naturalistic settings is discussed further in Section 2.3.
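The dominance test described above is easy to mechanize. The short Python sketch below checks every pair of alternatives against the definition; the payoff numbers are hypothetical.

```python
# Dominance check between alternatives (hypothetical payoffs).
# Ai dominates Aj if Ai is at least as good for every event and
# strictly better for at least one.

payoffs = {
    # consequences per event: [market_up, market_down]
    "A1": [8, 2],
    "A2": [5, 1],
    "A3": [6, 3],
}

def dominates(a, b):
    ca, cb = payoffs[a], payoffs[b]
    return (all(x >= y for x, y in zip(ca, cb))
            and any(x > y for x, y in zip(ca, cb)))

for a in payoffs:
    for b in payoffs:
        if a != b and dominates(a, b):
            print(f"{a} dominates {b}")   # here: A1 and A3 each dominate A2
```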

BEHAVIORAL DECISION MODELS

As a normative ideal, classical decision theory has influenced the study of decision making in a major way. Much of the
earlier work in behavioral decision theory compared human behavior to the prescriptions of classical decision theory
(Edwards, 1954; Slovic et al., 1977; Einhorn and Hogarth, 1981). Numerous departures were found, including the
influential finding that people use heuristics during judgment tasks (Tversky and Kahneman, 1974). On the basis of such
research, psychologists have concluded that other approaches are needed to describe the process of human decision
making. Descriptive models that relax assumptions of the normative models but retain much of their essence are now
being evaluated in the field of judgment and decision theory (Stevenson et al., 1993). One of the most exciting
developments is the finding that fast and frugal heuristics can perform very well even when compared to sophisticated
optimization models (Gigerenzer, 2008).

STATISTICAL ESTIMATION AND INFERENCE

The ability of people to perceive, learn, and draw inferences accurately from uncertain sources of information has been
a topic of much research. In the following discussion we first consider briefly human abilities and limitations on such
tasks. Attention then shifts to several heuristics that people may use to cope with their limitations and how their use can
cause certain biases. In the next section, we then consider briefly the role of memory and selective processing of
information from a similar perspective. Attention then shifts to mathematical models of human judgment that provide
insight into how people judge probabilities, the biases that might occur, and how people learn to perform probability
judgment tasks. In the final section, we summarize findings on debiasing human judgments.

Human Abilities and Limitations. Research conducted in the early 1960s tested the notion that people behave as
“intuitive statisticians” who gather evidence and apply it in accordance with the Bayesian model of inference (Peterson
and Beach, 1967). Much of the earlier work focused on how good people are at estimating statistical parameters such as
means, variances, and proportions. Other studies have compared human inferences obtained from probabilistic evidence
to the prescriptions of Bayes' rule.
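For reference, Bayes' rule, the normative standard against which these human inferences were compared, can be written as follows; the numbers in the comment are a hypothetical diagnostic illustration, not data from the source:

```latex
\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)}
\]
% Example: prior P(H) = 0.01, hit rate P(E|H) = 0.9, false-alarm rate
% P(E|not-H) = 0.1 gives P(H|E) = 0.009 / (0.009 + 0.099) ~ 0.083,
% far lower than the intuitive estimates people give when they neglect
% the low base rate.
```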

Part of the issue is that when events occur rarely, people will not be able to base their judgments on a representative
sample of their own observations. Most of the information they receive about unlikely events will come from secondary
sources, such as media reports, rather than from their own experience. This tendency might explain why risk estimates
are often related more strongly to factors other than likelihood, such as catastrophic potential or familiarity
(Lichtenstein et al., 1978; Slovic, 1978, 1987; Lehto et al., 1994). Media reporting focuses on “newsworthy” events, which
tend to be more catastrophic and unfamiliar. Consequently, judgments based on media reports might reflect the latter
factors instead of likelihood.

Heuristics and Biases. Tversky and Kahneman (1973, 1974) made a key contribution to the field when they showed that
many of the above-mentioned discrepancies between human estimates of probability and Bayes’ rule could be
explained by the use of three heuristics. The three heuristics they proposed were those of representativeness,
availability, and anchoring and adjustment. The representativeness heuristic holds that the probability of an item A
belonging to some category B is judged by considering how representative A is of B. For example, a person is typically
judged more likely to be a librarian than a farmer when described as “a meek and tidy soul who has a desire for order
and structure and a passion for detail.” Applications of this heuristic will often lead to good probability estimates but can
lead to systematic biases.

The availability heuristic holds that the probability of an event is determined by how easy it is to remember the event
happening. Tversky and Kahneman state that perceived probabilities will therefore depend on familiarity, salience,
effectiveness of memory search, and imaginability. The implication is that people will judge events as more likely when
the events are familiar, highly salient (such as an airplane crash), or easily imaginable. Events will also be judged more
likely if there is a simple way to search memory. For example, it is much easier to search for words in memory by the
first letter rather than by the third letter. It is easy to see how each item above affecting the availability of information
can influence judgments. Biases should increase when people lack experience or when their experiences are too
focused.

The anchoring-and-adjustment heuristic holds that people start from an initial estimate and then adjust it to reach a
final value. The point chosen initially has a major impact on the final value selected when adjustments are insufficient.
Tversky and Kahneman (1974) refer to this source of bias as an anchoring effect. They show how this effect can explain
under- and overestimates of disjunctive and conjunctive events. This happens when the subject starts with a probability
estimate of a single event. The probability of a single event is, of course, less than that of the disjunctive event and
greater than that of the conjunctive event. If the adjustment is too small, underestimates and overestimates occur,
respectively, for the disjunctive and conjunctive events. Tversky and Kahneman also discuss how anchoring and
adjustment may cause biases in subjective probability distributions.
NATURALISTIC DECISION MODELS

In a dynamic and realistic environment, actions taken by a decision maker are made sequentially in time. Taking actions
can change the environment, resulting in a new set of decisions. The decisions might be made under time pressure and
stress by groups or by single decision makers. This process might be performed on a routine basis or might involve
severe conflict. For example, either a group of soldiers or an individual officer might routinely identify marked vehicles
as friends or foes. When a vehicle has unknown or ambiguous markings, the decision changes to a conflict-driven process.
Naturalistic decision theory has emerged as a new field that focuses on such decisions in real-world environments
(Klein, 1998; Klein et al., 1993). The notion that most decisions are made in a routine, nonanalytical way is the driving
force of this approach. Areas where such behavior seems prominent include juror decision making, troubleshooting of
complex systems, medical diagnosis, management decisions, and numerous other examples.

These models assume that people rarely weigh alternatives and compare them in terms of expected value or utility.
Each model is also descriptive rather than prescriptive. Perhaps the most general conclusion that can be drawn from
this work is that people use different decision strategies, depending on their experience, the task, and the decision
context. Several of the models also postulate that people choose between decision strategies by trading off
effectiveness against the effort required.

I. LEVELS OF TASK PERFORMANCE

There is growing recognition that most decisions are made on a routine basis in which people simply follow past
behavior patterns (Rasmussen, 1983; Svenson, 1990; Beach, 1993). Rasmussen (1983) follows this approach to
distinguish among skill-based, rule-based, and knowledge-based levels of task performance. Lehto (1991) further
considers judgment-based behavior as a fourth level of performance. Performance is said to be at either a skill-based or
a rule-based level when tasks are routine in nature. Skill-based performance involves the smooth, automatic flow of
actions without conscious decision points. As such, skill-based performance describes the decisions made by highly
trained operators performing familiar tasks. Rule-based performance involves the conscious perception of
environmental cues, which trigger the application of rules learned on the basis of experience. As such, rule-based
performance corresponds closely to recognition-primed decisions (Klein, 1989). Knowledge-based performance is said to
occur during learning or problem-solving activity, during which people cognitively simulate the influence of various
actions and develop plans for what to do. The judgment-based level of performance occurs when affective reactions of a
decision maker cause a change in goals or priorities between goals (Janis and Mann, 1977; Lehto, 1991).
Distinctive types of errors in decision making occur at each of the four levels (Reason, 1990; Lehto, 1991).

At the skill-based level, errors occur due to perceptual variability and when people fail to shift up to rule-based or higher
levels of performance. At the rule-based level, errors occur when people apply faulty rules or fail to shift up to a
knowledge-based level in unusual situations where the rules they normally use are no longer appropriate. The use of
faulty rules leads to an important distinction between running risks and taking risks. Along these lines, Wagenaar (1992)
discusses several case studies in which people following risky forms of behavior do not seem to be consciously
evaluating the risk. Drivers, in particular, seem habitually to take risks. Wagenaar explains such behavior in terms of
faulty rules derived on the basis of benign experience. At the knowledge-based level, errors occur because of cognitive
limitations or faulty mental models, or when the testing of hypotheses causes unforeseen changes to systems. At the
judgment-based level, errors (or violations) occur because of inappropriate affective reactions, such as anger or fear
(Lehto, 1991). As noted by Isen (1993), there also is growing recognition that positive affect can influence decision
making. For example, positive affect can promote the efficiency and thoroughness of decision making but may cause
people to avoid negative materials. Positive affect also seems to encourage risk-averse preferences. Decision making
itself can be anxiety provoking, resulting in violations of rationality (Janis and Mann, 1977).
II. RECOGNITION-PRIMED DECISION MAKING

Klein (1998, 2004) developed the theory of recognition-primed decision making on the basis of observations of
firefighters and other professionals in their naturalistic environments. He found that up to 80% of the decisions made by
firefighters involved some sort of situation recognition, where the decision makers simply recognized the situation and
followed a familiar course of action. The model he developed distinguishes between three basic conditions. In the
simplest case, the decision maker recognizes the situation and takes the obvious action. A second case occurs when the
decision maker consciously simulates the action to check whether it should work before taking it. In the third and most
complex case, the action is found to be deficient during the mental simulation and is consequently rejected. An
important point of the model is that decision makers do not begin by comparing all the options. Instead, they begin
with options that seem feasible based on their experience. This tendency, of course, differs from the SEU approach but
is comparable to applying the satisficing decision rule (Simon, 1955) discussed earlier.

III. DOMINANCE STRUCTURING

Dominance structuring (Montgomery, 1989; Montgomery and Willen, 1999) holds that decision making in real contexts
involves a sequence of four steps. The process begins with a preediting stage in which alternatives are screened from
further analysis. The next step involves selecting a promising alternative from the set of alternatives that survive the
initial screening. A test is then made to check whether the promising alternative dominates the other surviving
alternatives. If dominance is not found, the information regarding the alternatives is restructured in an attempt to force
dominance. This process involves both the bolstering and deemphasizing of information in a way that eliminates
disadvantages of the promising alternative. Empirical support can be found for each of the four stages of the bolstering
process (Montgomery and Willen, 1999). Consequently, this theory may have value as a description of how people make
nonroutine decisions.

IV. EXPLANATION-BASED DECISION MAKING

Explanation-based decision making (Oskarsson et al., 2009; Pennington and Hastie, 1986, 1988) assumes that people
begin their decision-making process by constructing a mental model that explains the facts they have received. While
constructing this explanatory model, people are also assumed to be generating potential alternatives to choose
between. The alternatives are then compared to the explanatory model rather than to the facts from which it was
constructed.

IMPROVING DECISIONS

● Training and task environment: research suggests that people with formal training make the same types of errors as
those without training. The deficiency in decision making that seems most amenable to instructional training is the
judgment of probability

● The importance of the way in which information is presented should not be underestimated, owing to the effect
called framing. Information should usually be framed in such a way that its attributes are encoded positively rather
than negatively

● Decision aids: given the cognitive limitations above, decision makers should be provided with decision aids such as
decision analysis and decision support systems.

○ Decision analysis: a set of techniques for structuring a set of problems and decomposing them into simpler
components
○ Decision Support System (DSS): a computer-based tool used to guide the operator through the decision-making
process. It provides information that may involve data retrieval, filtering, and simulation. Example: MAUD
(Multi-Attribute Utility Decomposition); a sketch of the weighted-utility aggregation such a tool performs follows.
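A minimal Python sketch of the weighted additive aggregation that a MAUD-style multiattribute decision aid performs; the attributes, weights, and scores are hypothetical, and a real tool would elicit them from the decision maker rather than hard-code them.

```python
# Weighted additive multiattribute utility (hypothetical numbers).
# Each alternative is scored 0-1 on each attribute; a weighted sum
# ranks the alternatives.

weights = {"cost": 0.5, "safety": 0.3, "comfort": 0.2}   # sum to 1

alternatives = {
    "design_A": {"cost": 0.9, "safety": 0.6, "comfort": 0.4},
    "design_B": {"cost": 0.5, "safety": 0.9, "comfort": 0.8},
}

def utility(scores):
    return sum(weights[a] * scores[a] for a in weights)

ranked = sorted(alternatives, key=lambda alt: utility(alternatives[alt]),
                reverse=True)
for alt in ranked:
    print(alt, round(utility(alternatives[alt]), 2))
# With these weights, design_A (0.71) edges out design_B (0.68);
# shifting weight from cost to safety would reverse the ranking.
```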

DECISION SUPPORT

Decision support should have an objective (i.e., optimal or satisfactory choices, easier choices, more justifiable
choices, etc.). Also, it must have a means (i.e., decision analysis or another method of decision support), and it must have
a current state (i.e., the decision quality, effort expended, knowledge, etc., of the supported decision makers). The
effectiveness of decision support can then be defined in terms of how well the means move the current state toward the
objective.

DECISION ANALYSIS

The application of classical decision theory to improve human decision making is the goal of decision analysis (Howard,
1968, 1988; Raiffa, 1968; Keeney and Raiffa, 1976). Decision analysis requires inputs from decision makers, such as
goals, preference and importance measures, and subjective probabilities. Elicitation techniques have consequently been
developed that help decision makers provide these inputs. Particular focus has been placed on methods of quantifying
preferences, trade-offs between conflicting objectives, and uncertainty (Raiffa, 1968; Keeney and Raiffa, 1976). As a first
step in decision analysis, it is necessary to do some preliminary structuring of the decision, which then guides the
elicitation process. The following discussion first presents methods of structuring decisions and then covers techniques
for assessing subjective probabilities, utility functions, and preferences.

STRUCTURING DECISIONS

Decision Matrices and Trees. Decision matrices are often used to represent single-stage decisions (Figure 4). The
simplicity of decision matrices is their primary advantage. They also provide a very convenient format for applying the
decision rules discussed in Section 2.1. Decision trees are also commonly used to represent single-stage decisions
(Figure 5) and are particularly useful for describing multistage decisions. Analysis of a single- or multistage decision tree
involves two basic steps, averaging out and folding back (Raiffa, 1968). These steps occur at chance and decision
nodes, respectively. Averaging out occurs when the expected value (or utility) at each chance node is calculated. In
Figure 5 this corresponds to calculating the expected values of A1 and A2, respectively. Folding back refers to choosing
the action with the greatest expected value at each decision node.
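Averaging out and folding back can be sketched in a few lines of Python. The tree below is hypothetical: two actions, each leading to a chance node with made-up probabilities and payoffs.

```python
# "Averaging out and folding back" on a small decision tree
# (all probabilities and payoffs are hypothetical).

tree = ("decision", {
    "A1": ("chance", [(0.7, 100), (0.3, -50)]),   # (probability, payoff)
    "A2": ("chance", [(0.5, 60), (0.5, 20)]),
})

def solve(node):
    kind, body = node
    if kind == "chance":
        # Averaging out: expected value over the chance branches,
        # recursing if a branch leads to a further subtree.
        return sum(p * (v if isinstance(v, (int, float)) else solve(v))
                   for p, v in body)
    # Folding back: choose the action with the greatest expected value.
    values = {action: solve(sub) for action, sub in body.items()}
    return max(values.values())

print(solve(tree))   # EV(A1) = 55, EV(A2) = 40, so A1 is chosen: 55.0
```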
Value Trees. Value trees hierarchically organize objectives, attributes, goals, and values (Figure 6). From this perspective, an objective corresponds to satisficing or maximizing a goal or set of goals. When there is more than one goal, the decision maker will have multiple objectives, which may differ in importance. Objectives and goals are both measured on a set of attributes. Attributes may provide (1) objective measures of a goal, such as when fatalities and injuries are used as a measure of highway safety; (2) subjective measures of a goal, such as when people are asked to rate the quality of life in the suburbs versus the city; or (3) proxy or indirect measures of a goal, such as when the quality of ambulance service is measured in terms of response time.

In generating objectives and attributes, it becomes important to consider their relevance, completeness, and
independence. Desirable properties of attributes (Keeney and Raiffa, 1976) include:

1. Completeness: the extent to which the attributes measure whether an objective is met
2. Operationality: the degree to which the attributes are meaningful and feasible to measure
3. Decomposability: whether the whole is described by its parts
4. Nonredundancy: the fact that correlated attributes give misleading results
5. Minimum size: the fact that considering irrelevant attributes is expensive and may be misleading

Once a value tree has been generated, various methods can be used to assess preferences directly between the alternatives, for example the weighted-additive scoring sketched below.
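One common direct-assessment method is a weighted-additive evaluation across the attributes of the value tree. The Python sketch below is hypothetical: the attribute names, importance weights, and scores are invented, and in practice these quantities would be elicited from the decision makers.

# Hypothetical weighted-additive scoring over attributes from a value tree.
weights = {"safety": 0.5, "cost": 0.3, "response_time": 0.2}   # importance weights, sum to 1

alternatives = {    # attribute scores normalized to 0..1 (invented)
    "site_A": {"safety": 0.9, "cost": 0.4, "response_time": 0.7},
    "site_B": {"safety": 0.6, "cost": 0.8, "response_time": 0.8},
}

def utility(scores):
    return sum(weights[a] * scores[a] for a in weights)

for name, scores in alternatives.items():
    print(name, round(utility(scores), 3))
# site_A: 0.45 + 0.12 + 0.14 = 0.71 ; site_B: 0.30 + 0.24 + 0.16 = 0.70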

Event Trees or Networks. Event trees or networks show how a sequence of events can lead from primary events to one or more outcomes. Human reliability analysis (HRA) event trees are a classic example of this approach (Figure 7). If probabilities are attached to the primary events, it becomes possible to calculate the probability of the outcomes, as illustrated in Section 4.1.2 and in the sketch below. This approach has been used in the field of risk assessment to estimate the reliability of human operators and other elements of complex systems (Gertman and Blackman, 1994). Fault trees work backward from a single undesired event to its causes (Figure 8). Fault trees are commonly used in risk assessment to help infer the chance of an accident occurring (Hammer, 1993; Gertman and Blackman, 1994). Inference trees relate a set of hypotheses at the top level of the tree to evidence depicted at the lower levels. The latter approach has been used by expert systems such as Prospector (Duda et al., 1979), which applies a Bayesian approach to infer the presence of a mineral deposit from uncertain evidence.
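When the primary events can be treated as independent, the probability of each path through an event tree is the product of its branch probabilities. The Python sketch below is a hypothetical illustration; the two primary events and their success probabilities are invented.

from itertools import product

# Two hypothetical primary events, each succeeding (S) or failing (F) independently:
events = {"detect_alarm": 0.98, "correct_action": 0.95}   # P(success)

for outcomes in product([True, False], repeat=len(events)):
    p = 1.0
    for (name, p_success), ok in zip(events.items(), outcomes):
        p *= p_success if ok else (1 - p_success)
    label = ", ".join(f"{n}={'S' if ok else 'F'}" for n, ok in zip(events, outcomes))
    print(f"{label}: {p:.4f}")
# The all-success path has probability 0.98 * 0.95 = 0.9310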
INDIVIDUAL DECISION SUPPORT

The concept of DSSs dates back to the early 1970s. It was first articulated by Little (1970) under the term decision calculus and by Scott-Morton (1977) under the term management decision systems. DSSs are interactive computer-based systems that help decision makers utilize data and models to solve unstructured or semistructured problems (Scott-Morton, 1977; Keen and Scott-Morton, 1978). Given the unstructured nature of these problems, the goal of such systems is to support, rather than replace, human decision making. The three key components of a DSS are (1) a model base, (2) a database, and (3) a user interface. The model base comprises quantitative models (e.g., financial or statistical models) that provide the analysis capabilities of a DSS. The database manages and organizes the data in meaningful formats that can be extracted or queried. The user interface component manages the dialogue between the DSS and its users; for example, visualization tools can be used to facilitate communication between them.

DSSs are generally classified into two types: model driven and data driven. Model-driven DSSs utilize a collection of
mathematical and analytical models for the decision analysis. Examples include forecasting and planning models,
optimization models, and sensitivity analysis models (i.e., for asking “what-if” questions). The analytical capabilities of
such systems are powerful because they are based on strong theories or models. On the other hand, data-driven DSSs
are capable of analyzing large quantities of data to extract useful information. The data may be derived from
transaction-processing systems, enterprise systems, data warehouses, or Web warehouses. Online analytical processing
and data mining can be used to analyze the data. Multidimensional data analysis enables users to view the same data in
different ways using multiple dimensions. The dimensions could be product, salesperson, price, region, and time period.
Data mining refers to a variety of techniques that can be used to find hidden patterns and relationships in large databases, to infer rules from them to guide decision making, and to predict future behavior (Laudon, 2003). Data mining can yield information on associations, sequences, classifications, clusters, and forecasts.

Associations are occurrences linked to a single event (e.g., beer is purchased along with diapers); sequences are events linked over time (e.g., the purchase of a new oven after the purchase of a house); classifications recognize patterns and rules in order to categorize an item or object into a predefined group (e.g., customers who are likely to default on loans); clustering categorizes items or objects into groups that have not yet been defined (e.g., identifying customers with similar preferences). Data mining can also be used for forecasting (e.g., projecting sales demand).

SKILL ACQUISITION

● Phases of Skill Acquisition:

○ Cognitive: the novice learns to understand the task environment and must attend to cues and events that will not require attention in later phases.

○ Associative: an intermediate phase in which the skills acquired during the cognitive phase are linked together.

○ Autonomous: characterized by the procedure becoming more automatic and less subject to cognitive control. Automatic processes are mandatory (occurring without intention), do not interfere with other mental activities, and occur without awareness.

CHARACTERISTICS OF EXPERT PERFORMANCE


HUMAN FACTORS ISSUES
Human factors concerns to be addressed during the construction of an expert system include:

○ Selection of the task or problem:

■ Deductive problems are easily framed
■ The task should be easily structured
■ Focus should be on a well-defined area of knowledge
■ The human expert should be well matched to the task

○ Representation of knowledge:
■ Objects in the knowledge base must reflect the expert's knowledge structure
■ Different objects should be discriminable
■ The knowledge engineer and the subject-matter expert should maintain a common frame of reference
■ Procedures should be compatible with the expert's mental model
■ Biases and exaggerations should be detected and compensated for
■ End-user expectations should be kept in mind

○ Design of the interface:
■ Poor interface design leads to user aversion and operator errors
■ A human factors specialist should determine the information to be presented so as to optimize efficiency
■ Use natural language and direct graphical representation

○ Performance of the expert system:

■ The knowledge base should be tested for potential errors
■ Prototype testing

○ Successful expert systems:

■ Minimal acceptance problems
■ Training programs should focus on the various aspects of integrating the expert with the expert system
■ Maintenance of the system ensures long-term reliability
■ Possible areas of intervention not dealt with initially should be identified

EYE-HAND COORDINATION

● Eye–hand coordination (also known as hand–eye coordination) is the coordinated control of eye movement with hand
movement, and the processing of visual input to guide reaching and grasping along with the use of proprioception of the
hands to guide the eyes. Eye–hand coordination has been studied in activities as diverse as the movement of solid
objects such as wooden blocks, archery, sporting performance, computer gaming, copy-typing, and even tea-making.
● It is part of the mechanisms of performing everyday tasks; in its absence most people would be unable to carry out
even the simplest of actions such as picking up a book from a table or playing a video game.

RESPONSE TIME

Response time is the sum of reaction time and movement time. Reaction time is the elapsed time between the
presentation of a sensory stimulus and the subsequent behavioral response. It indicates how fast the thinker can
execute the mental operations needed by the task at hand. In turn, speed of processing is considered an index of
processing efficiency. The behavioral response is typically a button press but can also be an eye movement, a vocal
response, or some other observable behavior.

TYPES OF RESPONSE TIME

● Simple reaction time is the time required for an observer to respond to the presence of a stimulus.

● Recognition or Go/No-Go reaction time tasks require that the subject press a button when one stimulus type appears
and withhold a response when another stimulus type appears. For example, the subject may have to press the button
when a green light appears and not respond when a blue light appears.

● Choice reaction time (CRT) tasks require distinct responses for each possible class of stimulus. For example, the subject
might be asked to press one button if a red light appears and a different button if a yellow light appears. The Jensen box
is an example of an instrument designed to measure choice reaction time.

● Discrimination reaction time involves comparing pairs of simultaneously presented visual displays and then pressing
one of two buttons according to which display appears brighter, longer, heavier, or greater in magnitude on some
dimension of interest.

(Due to momentary attentional lapses, there is a considerable amount of variability in an individual's response time,
which does not tend to follow a normal (Gaussian) distribution. To control for this, researchers typically require a subject
to perform multiple trials, from which a measure of the 'typical' response time can be calculated. Taking the mean of the
raw response time is rarely an effective method of characterizing the typical response time, and alternative approaches
(such as modeling the entire response time distribution) are often more appropriate.)
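As a minimal illustration of this point, the Python sketch below compares the mean and the median of a small synthetic sample of reaction times containing one attentional lapse; all numbers are invented.

from statistics import mean, median

# Synthetic reaction times (ms); the 1240 ms trial represents an attentional lapse.
rts_ms = [312, 298, 305, 341, 290, 322, 308, 297, 1240, 315]

print(f"mean   = {mean(rts_ms):.0f} ms")    # 403 ms: pulled upward by the lapse
print(f"median = {median(rts_ms):.0f} ms")  # 310 ms: closer to the 'typical' response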

COGNITIVE ERGONOMICS SUPPORTS USED IN INDUSTRIAL PRODUCTION

Having introduced the concept of cognitive ergonomics and the capabilities of the human mind, we will now bring the topic closer to home and look at how it affects the operator in the production industry. Many different tools and methods that aid the operator from a cognitive perspective exist in the assembly environment, limiting the mental capacity required. Interestingly, a number of these methods came about purely from the desire to optimize system performance rather than to provide operators with cognitive support; the cognitive benefits arrived almost unintentionally, as a bonus. This section introduces various ways in which cognitive ergonomic considerations are being used effectively in the production environment.

I. DESIGN FOR ASSEMBLY

A recurring problem in industry is that all too often the product is designed without consideration of the fact that the
product has to be put together by an assembler in a production facility. Design for assembly (DFA) is a method which
aims to encourage designers to think about the assembly implications of their design, for instance by minimizing the
number of required components and enabling as simple an assembly method as possible (Boothroyd, 2002). This should
in turn lead to reduced times and cost during the manufacturing stage, while maintaining quality. DFA aims to enhance
the level of communication between the manufacturing and design teams, to ensure an optimized solution meeting the
requirements of both parties is achieved. Taking DFA into consideration during all stages of the product's design and
development right from its conception reduces the need to make design changes late in the process. The DFA
procedures and design rules that should be followed differ depending on whether the product is manually or
automatically assembled. The general DFA guidelines try to address two key areas, the handling of parts and the way in
which parts are connected or fastened.

The following general guidelines should be considered during the design of products, as they will have a positive impact on the assembly stage, aiding the operator with their work tasks from both a physical and a cognitive perspective. Where possible, designers should:

 Use geometrical features: make parts symmetrical, or obviously asymmetric where symmetry cannot be achieved.
 Design parts that cannot be attached incorrectly.
 Use shapes or features that ensure parts will not stick together when in mass storage containers.
 Avoid shapes or features that cause parts to tangle when in mass storage containers.
 Make parts easy to handle: avoid very small, excessively large, slippery, or sharp parts that could be difficult or hazardous to handle.
 Reduce the part count and the number of part types.
 Ensure sufficient access and visibility are provided.

II. THE USE OF FIXTURES

Providing assemblers with nothing but a table and a few tools would likely result in high levels of frustration, dissatisfaction, disorder, confusion, poor posture, MSDs, and eventually absenteeism. To remedy these problems, carefully designed fixtures are installed at workstations to ease the mental workload on operators and improve performance and efficiency. A fixture is a device that holds or supports the workpiece during manufacturing operations. It enables the part to be held securely in a specific orientation, freeing the user's hands so that other parts can be attached to it and necessary processes, such as tightening, carried out. Fixtures can also be used to hold tools, supporting their weight so that the operator only needs to control the tool's position relative to the product rather than bear its weight.

A jig is a device similar to a fixture, but it also supports the processing operations by guiding cutting tools. The complexity and usefulness of fixtures vary: in some cases a simple device locking the part to the table top is sufficient, while for more complex, heavier products a much more sophisticated fixture with additional capabilities (such as the ability to rotate) is necessary.

A number of considerations should be taken into account when designing fixtures to ensure they optimize the operator's capabilities both physically and cognitively, as fixtures can play a significant role in providing the operator with cues and clues. The alignment of fixtures on the workstation should correspond to the order in which the assembly tasks are carried out. The alignment should also take into consideration the way in which material will be supplied to the workstation, so as to reduce the time spent orienting the material. By having a fixture that determines how the product should be oriented, the need to recall details from memory is reduced, which is particularly beneficial for operators who work on several product variants. The shape of a fixture is often a negative form of the part or component to be assembled, so it also acts as a cue to aid the assembler.

III. KITTING

Kitting is a method where all the required components necessary to make a product or subassembly are delivered to the
operator's workstation inside a container called a kitting bin. The container often uses templates or is structured in such
a way that the components can only be stored one way. Having a structured layout provides support for the assembler
indicating in which order the parts should be removed and assembled, while supporting the kitter by visually showing
what parts are required and in what quantity. The kit also acts as a memory trigger or early-warning signal: if the box is not empty when the worker has completed their task, it is clear they have made an error somewhere during the assembly. (The value of this technique from a materials-handling perspective has, however, been questioned.)

IV. STANDARDIZED WORK

Standardized work is a key part of the lean manufacturing philosophy; it stops everyone from taking the "this is my way of doing things" approach and instead provides an optimized standard method that all workers should follow (assemblers, machine maintenance, managers, etc.). Workers then do not need to choose between numerous possible ways of completing a task; there is only one clearly defined way, the best way. By providing workers with a specific, set method for carrying out tasks, the process over time becomes ingrained in their memory, reducing the time and energy associated with memory recall. By combining all the different elements of the worker's task into a sequence, efficiency and productivity are achieved, as well as cognitive support for the worker. The use of other methods such as kitting contributes to standardized work, as the material is presented in a certain order based on the standardized way the part should be assembled. Standardized work applies not only to the sequence of tasks the assembler should carry out but also to the state of the workstation, so pictures are often displayed showing the normal condition of the workstation and how it should be left at the end of a shift. In Toyota's Total Quality Management philosophy, having standardized processes is key, as it provides the baseline needed to facilitate continuous improvement (kaizen: the implementation of incremental change) (Womack, 1996). Workers are encouraged to identify potential improvements that could become the new standardized procedure, which helps to create a satisfying and fulfilling work environment. Standardized work generally involves a high level of documentation. This can be particularly beneficial for training purposes, making it easier for new personnel to get to grips with the work quickly.

V. WORK INSTRUCTIONS

In its simplest form, a work instruction provides the operator with written guidelines or pictures of how the part should
be assembled. Some work instructions can be quite open, only specifying the key distances or torque required with little
guidance on the specific details of how the operator should actually perform the task. Other instructions utilize
standardized work principles, ensuring operators are aware of the only correct way of implementing the necessary tasks.
Instructions can be provided in paper form or through specialized training; however, the recent trend is for production
facilities to have computers and screens located at the workstation. These provide operators with information and the
necessary instructions (both text and pictures) as to how parts should be assembled. The operator has access to all the
parts stored on the system and can obtain the necessary information by entering the part identification number. In
some systems instead of manually typing in the part identification number to view the instructions, the operator simply
scans an ID card and instructions for the part in question are provided; this method contributes to quality control as all
defects can be traced.

More complex systems utilize picking by light. Initially the user is guided to the necessary material; then, for assembly operations, a light situated where the production step is carried out is illuminated when necessary. A sensor then picks up the assembler's presence and provides them with the current work instruction on the display screen. Only when the task has been correctly carried out can the assembler move on to the next step. For instance, if only five screws have been mounted instead of the required six, the system will not allow the operator to proceed and an alarm will sound, alerting them to the error (a minimal sketch of this gating logic follows below). This method also limits the need for operators to spend time and energy retrieving information from memory or trying to correctly interpret a scenario, which is particularly valuable in environments where a high number of similar product variants exist, providing operators with the correct level of support. Such systems can be used anywhere and by operators of any nationality, as the on-screen instructions can be provided in several languages. This ensures that a standardized way of working is followed throughout the whole company, regardless of the geographic location of the different sites. Using a software-based system also means that, should any modifications to the assembly instructions be needed, the system can be updated easily, with changes being made to all stations on the line simultaneously.
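The step-gating behavior described above can be sketched in a few lines of code. The following Python fragment is a hypothetical illustration rather than an actual work-instruction system; the step names and required counts are invented.

# Hypothetical step-gating: the operator cannot advance until the sensed
# count matches the required count for the current instruction.
REQUIRED = [("mount_bracket", 1), ("drive_screws", 6), ("attach_cover", 1)]

def next_step(step_index, sensed_count):
    name, required = REQUIRED[step_index]
    if sensed_count < required:
        print(f"ALARM: {name} incomplete ({sensed_count}/{required})")
        return step_index              # stay on the current instruction
    print(f"{name} confirmed - showing next instruction")
    return step_index + 1

i = next_step(0, 1)   # bracket mounted -> advance
i = next_step(i, 5)   # only five screws -> alarm, stay on this step
i = next_step(i, 6)   # six screws -> advance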

VI. POKA YOKE

A number of mistakes in production leading to defects and reduced quality are a result of assemblers simply forgetting
to do something. Poka yoke was introduced as an attempt to combat this issue, eliminating defects by correcting or
alerting humans of their errors as soon as they occur. Poka yoke, a term that originated in Japan, means “mistake
proofing” and is concerned with preventing errors from becoming defects before the fact. Many production facilities
purposely implement tools, equipment or procedures for error proofing, making it very difficult for mistakes to be made.
By only providing one way of holding or storing the part, both kitting containers and fixtures at the workstation act as
poka yokes.

VII. PICK BY BARCODES

This method utilizes barcodes and an optical barcode scanner. A terminal provides the operator with real-time data
collection information about where they need to go, what they need to pick and in what quantity, using either text or
images. The operator uses the device to scan the barcode on the storage box and the terminal provides them with
information regarding the desired quantity. The barcode scanner and terminal are either hand held, secured around the
lower arm, or truck-mounted. This system tends to be more cost-effective than pick by light in lower-volume environments. However, unlike other picking systems, the operator needs to look at the screen to retrieve the necessary information, which can be an inconvenience. This system is considered one step up from using a paper sheet to carry out picking tasks; however, it is not suitable for work environments where operators need to wear protective clothing such as gloves.

VIII. PICK BY LIGHT

This method uses lights positioned on shelves, flow racks or work benches to direct and indicate to the operator what
they should do next. At the right point in the sequence, the light will guide the operator to a certain location. Once they
have completed the task the light will either go off automatically based on sensors or the operator will manually confirm
the action by clicking the illuminated button, triggering the next light in the sequence to illuminate. In addition to a light, some systems are fitted with a display showing the necessary quantity or other information.

Despite being called pick by light, this method is not limited to picking. It can also be used to provide information about
assembly tasks, for instance which tool should be used and what torque should be applied. The system can also indicate
the correct storage container for items to be placed in after assembly (“put to light”). This system is considered more
user-friendly than picking by barcode, as the operator's hands are kept free. Many argue that this system is the fastest
picking method, as users don't need to refer to a screen or wait to hear instructions; rather, their attention is
instinctively drawn towards the light. Should changes be made to the assembly line, the light modules can be easily
moved and updates made to the software infrastructure.

IX. PICK BY VOICE

This system is similar to pick by light but uses the sense of hearing to gain the operators’ attention, rather than lights.
Each operator wears a headset and is provided with the necessary information to know what to pick, in what quantity
and where it is located. In this method both the user's hands and eyes are free. To confirm the pick, the operator can
use voice control where they will repeat some of the product information (e.g. the last four digits of the barcode), or a
sensor positioned in the container will detect their selection. Unlike pick by light, this technique can be used even when
multiple operators are working in the same area. The use of both pick by voice and pick by light make it relatively easy
for new workers to learn their new work tasks quickly.

X. ANDON

Andon systems provide a visual display that all workers can see to show the status of the plant floor. Enhanced
visualization is said to not only create a sense of belonging in teams, but also point out when problems in the process
occur, alerting management, maintenance and other workers down the line who depend on the affected station.
Empowering operators to stop the production processes encourages an immediate response, which in turn should
enhance the overall quality and reduce waste (Alzatex, 2014). Generally the worker at the directly affected station pulls
a cord triggering an alarm or flashing lights to alert the rest of the workforce that a problem has occurred; this can also
be automated. Once the issue has been resolved, the andon is deactivated so that work can continue as normal. Many
industry facilities have andon coaches whose role is to resolve any issues as soon as they arise.

HUMAN INFORMATION PROCESSING MODEL

SEVERAL PROBLEMS FROM THE INTERFACE DESIGN:

1. Too many controls: not all are required.
2. Rather than turning a dial, you have to key in the cooking time.
3. Too many contingent actions: even the "start" button has to be pressed; it is no longer automatic after step 2 above.
4. Functions that seem deceptive. For example, if you want to heat up your cup of coffee, you press "beverage", and it will then cook for 1 min 20 sec. But the microwave oven does not know that your cup is half full, or that you want the temperature to be less than burning hot (about 65°C).
Information processing lies at the heart of human performance. In a plethora of situations in which humans interact with systems, the operator must perceive information, transform that information into different forms, take actions on the basis of the perceived and transformed information, and process the feedback from that action, assessing its effect on the environment. These characteristics apply whether information processing is defined in terms of the classic open-loop information processing model that derives from much psychological research (Figure 1a) or the closed-loop model of Figure 1b, which has its roots both in control engineering (e.g., Pew and Baron, 1978; McRuer, 1980; Jagacinski and Flach, 2003) and in more recent conceptualizing in ecological psychology (Flach et al., 1995; Hancock et al., 1995). In either case, transformations must be made on the information as it flows through the human operator.

WHY DO HUMANS MAKE ERRORS?

When compared to machines, humans appear to be vastly inferior. We cannot process information as fast as computers.
We cannot work as fast or as long as machines without getting tired. We cannot exert as much force as machines or
exert it as accurately or as long. We have an advantage over machines with our abilities to reason and learn but not with
our physical abilities to work quickly, to work for long hours without rest, or to lift heavy objects. Even so, research in
human factors founded in experimental and applied psychology provides us the rationale for why even our exceptional
abilities break down and we make errors. By understanding the limitations and capabilities of humans, we can design
systems to take advantage of their strengths and minimize the impact of their weaknesses.
SENSORY PROCESSING

Information and events that occur in the environment gain access to the brain through our senses (e.g., sight, sound, smell, touch), which all affect the quality of information that reaches the brain. Each sensory system has an associated short-term sensory store in the brain that can hold up to about 4 seconds of sensory data.
PERCEPTION

Raw data relayed from the environment to the brain must be interpreted and decoded through perception. Perceptual processing has two key features: it is automatic and rapid, requiring little attention, and it is driven both by sensory input and by our long-term memory.

COGNITION

Cognitive processes generally require more time, mental effort, or attention, as the brain carries out operations such as rehearsal, reasoning, or image translation using working memory. Our cognitive functions can be affected by our emotions or stress levels.

MEMORY
Memory is built and retrieved in three different ways:
● Recall: a measure of memory in which the person must retrieve information learned earlier.
● Recognition: a measure of memory in which the person has to only identify the previously learned information.
● Relearning: a measure of memory that assesses the time saved when learning material again.
American psychologists Richard Atkinson and Richard Shiffrin proposed in the 1960s that memory can be broken into three stages.

MEMORY

● We initially record the things we want to remember as fleeting sensory memory.
● Short-term memory is where we keep memories encoded through rehearsal; this is how we briefly remember things like passwords and phone numbers. Without rehearsal, a stimulus can typically be recalled for only about 30 seconds.
● This is because the mind can hold and retrieve only about 4-7 bits of information at a time.
● Information in short-term memory either decays or is transferred to long-term memory.
● Long-term memory is a spacious, durable store that holds all the knowledge and experience we have.
● Retention depends both on the time we take to learn material and on the process we use to retain it.
● Long-term memory shapes and reshapes the brain, and eventually our identity.

● Working memory: conscious, active processing of incoming auditory and visual-spatial information and of information retrieved from long-term memory
● Classically conditioned associations: associations learned between stimuli and events
● Automatic processing: non-conscious encoding of incidental information, such as time, space, and frequency, and of well-learned information
LONG-TERM MEMORY
● Different kinds of long-term memory:
1. Procedural memory: how we remember to do things
2. Episodic memory: memory connected with particular episodes of life

HOW OUR MEMORIES ARE STORED?

● We are all constantly retrieving memories: where we parked the vehicle, whether we locked the room or turned the lights off, dates, days, birthdays, names, events, and so on.
● Implicit memory: retention independent of conscious recollection; skills such as riding a bike, talking, eating, and dressing are handled at a mostly automatic, nonconscious level.
● Explicit memory: memory of facts and experiences that one can consciously know or "declare", i.e., the chronicles of our personal experience and general knowledge, often requiring conscious and effortful work.

RETRIEVAL CUES AND PRIMING

● Our memories are not like books neatly kept on a shelf that we take down to read, retrieve, relate, and reproduce. Memory is more like a cobweb in a dark catacomb of the mind: a closely interrelated series of associations that builds all sorts of diverse links among bits of information.
● For example, if we try to remember a person seen in dull light or vague darkness, we may remember certain features of the scene, such as the color of his jacket, his hair, or his complexion, but we cannot explicitly identify the person with certainty. Here the darkness, jacket, color, complexion, etc. act as retrieval cues.
● Memories are built in our mind as bits and pieces lying in this cobweb and are retrieved via retrieval cues; the more retrieval cues we have, the better we can remember things.
● Priming: the act of activating associations unconsciously. It is sometimes called "memory-less" memory, in which invisible images are captured that may later awaken associations.

MEMORY

● Serial position effect: memory depends on the order in which we receive information, so there is a tendency to recall the first and last items on a list best
1. Primacy effect
2. Recency effect

● We forget in three ways :


1. We forget to encode it
2. We fail to retrieve it
3. Memory suffers storage decay

● These techniques help you encode explicit memories, but how well we retain them depends on how deeply the information has penetrated the different levels of processing:
○ Mnemonics: memory aids, especially techniques that use vivid imagery and organizational devices
○ Chunking: organizing information into familiar, manageable units; this often occurs automatically
● Shallow processing: encoding information at a basic auditory or visual level, based on the sound, structure, or appearance of a word

● Deep processing: encoding semantically, based on the actual meaning associated with the word, connecting it to something meaningful such as a personal or emotional experience

RESPONSE SELECTION AND EXECUTION

The understanding of a situation, achieved through perception and cognitive processing, will often trigger an action. The selection of a response is separate from the execution of the action, which requires the muscle coordination needed to move the body so that the selected goal, whatever it may be, is achieved.

FEEDBACK

The feedback loop indicates that actions are sensed by the human, that the flow of information can start at any point, and that it is continuous; feedback establishes whether the goal has been achieved.

ATTENTIONAL RESOURCES

● Attentional resources are limited.
● Selective attention: choosing to process some stimuli and not others.
● Focused attention: the ability, once a stimulus is selected, to process that stimulus and not others.
● Divided attention: the ability to process two or more inputs at a time.

SELECTIVE ATTENTION

● DEFINITION: The ability to pick one source of information to process (attend) over other sources.
○ This is the hand that guides the search light.
● Information is conceived as coming in from the world in channels.
○ A channel is a potential source of information.
○ Vision vs. audition are channels; within vision or audition, different locations may be different channels, e.g.,
different dials on dashboard.
○ Can be demonstrated in shadowing.
● Limits of memory influence how often we sample a channel.
○ If we do not remember when we last sampled a channel, we may return to it earlier than necessary.
● Under stress, the range of sampling narrows and becomes driven by salience.
○ Thus, under stress, we sample fewer channels.
○ Channel choice is then determined not by importance but by the attention-getting quality of the channel.
○ Salience and importance may or may not coincide.

FOCUSED AND DIVIDED ATTENTION

● Focused Attention: The range of information that will be processed at one time.
○ Information in the beam of the flashlight.
○ Relates to ability to ignore irrelevant information.
● Divided Attention: The ability to include two or more sources of information and process them
simultaneously.
○ Having the beam fall on two or more objects.
○ Ability to handle two or more tasks at once.
● Channels (How they are defined):
○ Within a channel information is processed in parallel, that is, at the same time.
○ In different channels information must be processed sequentially, that is, one after the other.
○ A major unit that defines a channel is space
■ Location defines a channel for both audition and vision.
■ In shadowing, it is harder to do if both voices come from the same side.
■ In vision, depth can also define a channel (Neisser and Becklen, 1975).
○ Pitch may also define a channel for audition.

SIGNAL DETECTION THEORY

Have you ever:


● Looked for the “perfectly ripe” bananas?
● Sorted through your inbox for an important email?
● Clicked on a fake download link on an ad-filled website?

Can you find the common elements in the following situations?

● A physician is examining a patient and trying to make a diagnosis. The patient shows a set of symptoms, and
the doctor tries to decide whether a particular disorder is present or not. To complicate the problem, the
symptoms are ambiguous, some of them pointing in the direction of the disorder, others pointing away from
it; moreover, the patient is a bit confused and does not describe the symptoms clearly or consistently. The
correct diagnosis is not obvious.

● A seismologist is trying to decide whether to predict that a large earthquake will occur during the next
month. As it was for the doctor, the evidence is complex. Many bits of data can be brought to bear on the
decision, but they are ambiguous and sometimes contradictory: records of earthquake activity in the distant
past (some of which are not very accurate), seismographic records from recent times (whose form is clear, but
whose interpretation is not), and so forth.

● A witness to a crime is asked to identify a suspect. Was this person present at the time of the crime or not?
The witness tries to remember the event, but the memory is unclear. It was dark, many things happened at once, and the scene was stressful, confusing, and poorly seen. Moreover, the crime occurred some time ago, and the witness has been interviewed repeatedly and has talked to others about the event.

SIGNAL DETECTION THEORY


 Essentially, signal detection theory (SDT) is about:
o how an operator can tell the difference between a signal (what they are looking for) and noise (anything that is not a signal)
 Signal Detection Theory (SDT) is not merely a theory but also a mathematical technique for analysing
perceptual performance.
 It originated as a model of how human observers perform when they must detect ambiguous visual
stimuli of a certain type, such as targets on a radar screen (Tanner & Swets, 1954).
 The theory describes the task of sensory discrimination as one of distinguishing between specific target
stimuli and other, irrelevant stimuli, referred to as signals and noise respectively.
 For example, the radar observer’s task is to detect meaningful radar ‘blips’ (signals) whilst ignoring or
rejecting all irrelevant stimuli (noise). The theory also posits two important internal factors influencing
an observer’s performance on a signal detection task:
o The observer’s sensitivity in being able to discriminate the true signals from non- signals.
o The observer’s response criterion (or bias) when it comes to ambiguous stimuli. In other words,
the observer’s strategy for handling those stimuli that require a deliberate judgement.
 SDT thus recognises that individuals are not merely passive receivers of stimuli or information; when
confronted with uncertainty, they also become engaged in the conscious process of deciding whether
what they perceive signifies one thing rather than another.

SIGNAL DETECTION PARADIGM

4 OUTCOMES IN A SIGNAL DETECTION TASK

● Hit: the operator responds to a signal (second quadrant)
● Miss: the operator does not respond to a signal (third quadrant)
● False alarm: the operator responds to noise (fourth quadrant)
● Correct rejection: the operator does not respond to noise (first quadrant); see the tallying sketch below
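Tallying the four outcomes from raw trial data is straightforward. The Python sketch below is a hypothetical illustration; the list of (signal present, operator responded) trials is invented.

# Each trial records whether a signal was present and whether the operator responded.
trials = [(True, True), (True, False), (False, True), (False, False),
          (True, True), (False, False), (True, True), (False, True)]

counts = {"hit": 0, "miss": 0, "false_alarm": 0, "correct_rejection": 0}
for signal, responded in trials:
    if signal:
        counts["hit" if responded else "miss"] += 1
    else:
        counts["false_alarm" if responded else "correct_rejection"] += 1

print(counts)  # {'hit': 3, 'miss': 1, 'false_alarm': 2, 'correct_rejection': 2}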
HOW DO I DESIGN USING SDT?

● Salience is how much something stands out from the background. It helps the operator become better at a signal detection task by "highlighting" the important stimuli. Salience should be the priority when designing with SDT in mind.
1. Let the users know what to expect
2. Bigger importance = bigger visual weight
3. The more important the information, the more it should stand out
4. Reduce number of distractors

HOW TO IMPROVE SALIENCE


 Let users know what to expect
 Use visual hierarchy
 Remove distractors

RELEVANCE TO SITUATION AWARENESS

● The task of situation assessment is to arrive at a mentally perceived situation with a full understanding of its
implications. This is obviously far more complex and abstract than perceptual signal detection, and involves
acquiring information and interpreting that information on the basis of prior knowledge and expectations.

● Nevertheless, there is a degree of equivalence in the fact that both signal detection and situation
assessment involve discrimination. Specifically, the ‘observer’ in situation assessment must be able to
discriminate between at least the following:

● Valid versus invalid information. Invalid information is that which appears to represent the current situation
but is in fact erroneous or unreliable. For example, an item of information may be too old to be current, or
originate from an untrustworthy source.

● Valid versus invalid interpretations. It is important to correlate different items of information to establish a
coherent picture of the situation as it actually is. For example, one might interpret a warning light as indicating
a system fault, whereas other evidence indicates that the warning light itself is at fault.

● Valid versus invalid inferences. In this case, the discrimination is to do with the validity of one’s logic rather
than one’s interpretations. For example, the idea that “the enemy will surrender as soon as they see us
coming,” an inference based on the assumption that lesser powers are intimidated by our technological
supremacy, may not be valid.

● There are also parallels between the types of error that can be made in signal detection and situation
assessment.
○ Aside from the failure to detect critical stimuli, the possible errors of situation assessment
include accepting invalid information and rejecting valid interpretations of the situation. Such errors can arise
because of the human vulnerability to confirmation bias: the automatic tendency to seek primarily those
sources of information that confirm what we already believe to be true.
○ The USS Vincennes incident in the Persian Gulf is a case in point: because of an expectancy of
imminent air attack, some cues were overlooked and others were completely misread, resulting in the
shooting down of an Iranian passenger jet (Klein, 1998). Confirmation bias can be regarded as a low criterion setting in the observer, that is, an excessive willingness to accept stimuli as evidence for a particular situation.
STEPS TO AVOID THE “ALARM FALSE ALARM”

● readjustment of criterion
● improve the sensitivity of the alarm system
● training users about inevitable trade-offs between misses and false alarms
● graded alarm systems: more than a single level of alert is provided

APPLICATIONS OF SDT

● The early application of SDT to studies of sensory performance is considered a major advance in the
understanding of human perception. To some extent, though, these origins obscure the theory’s more general
applicability.

● In essence, SDT models the ability of an agent to compare and match some given input with one or more
known categories. There is no particular reason why such a model should be confined to treating only raw
sensory features as possible inputs; the same principle of comparing given-to-known and dealing with
uncertainty applies equally to higher-level cognitive judgments.

● Arguably, the model applies to any instance of perceptual judgement or diagnosis (i.e. deciding what is the
case on the basis of perceived evidence) under conditions of uncertainty.

SIGNAL DETECTION THEORY

● The SDT model assumes that (Green & Swets, 1966):


○ Sensory evidence is aggregated concerning the presence or absence of the signal.
○ A decision is made about whether this evidence indicates a signal or not.

● Neural evidence (X): the rate of firing of neurons at a hypothetical "detection center"

● Critical threshold / response criterion (X_C):

○ X > X_C : the operator decides YES
○ X < X_C : the operator decides NO
○ P(hit) + P(miss) = 1
○ P(false alarm) + P(correct rejection) = 1

< Figure: hypothetical signal and noise distributions underlying SDT, for (a) high sensitivity and (b) low sensitivity >


CRITERION / BIAS
● a measure of the willingness of a respondent to say 'Signal Present' in an ambiguous situation.
● 3 TYPES OF CRITERION:
○ Neutral

○ Liberal

○ Conservative

SENSITIVITY

D-prime (d′) is a measure of sensitivity: the perceptual distance between the means of the "signal-present" and "signal-absent" distributions. This distance is expressed in z-score units:

d′ = z(P(hit)) − z(P(false alarm))
THE EFFECT OF BIAS
CALCULATING SENSITIVITY AND BIAS
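Under the standard equal-variance Gaussian SDT model, sensitivity and bias can be computed directly from the hit and false-alarm rates. The Python sketch below assumes SciPy is available; the two rates are hypothetical.

from scipy.stats import norm

p_hit, p_fa = 0.84, 0.16                    # hypothetical rates from an experiment

z_hit, z_fa = norm.ppf(p_hit), norm.ppf(p_fa)

d_prime = z_hit - z_fa                      # sensitivity: separation in z units
criterion_c = -0.5 * (z_hit + z_fa)         # 0 = neutral, >0 conservative, <0 liberal
beta = norm.pdf(z_hit) / norm.pdf(z_fa)     # likelihood-ratio form of the criterion

print(f"d' = {d_prime:.2f}, c = {criterion_c:.2f}, beta = {beta:.2f}")
# Here z_hit = +0.99 and z_fa = -0.99, so d' is about 1.99 and the criterion is neutral.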

PAYOFFS

In this case, β_opt maximizes the total expected financial gains (or minimizes the expected losses):

β_opt = [P(noise) / P(signal)] × [(V_CR + C_FA) / (V_H + C_M)]

V : value of desirable events (hit: H; correct rejection: CR)
C : cost of undesirable events (miss: M; false alarm: FA)
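A minimal sketch of this calculation follows, assuming the formula above; the probabilities and payoff values are hypothetical.

def beta_opt(p_signal, v_hit, c_miss, v_cr, c_fa):
    # beta_opt = [P(noise)/P(signal)] * [(V_CR + C_FA) / (V_H + C_M)]
    return ((1 - p_signal) / p_signal) * ((v_cr + c_fa) / (v_hit + c_miss))

# A rare signal raises beta (be conservative), while a costly miss lowers it:
print(beta_opt(p_signal=0.1, v_hit=10, c_miss=50, v_cr=1, c_fa=5))  # 9 * (6/60) = 0.9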

RECEIVER OPERATING CHARACTERISTICS (ROC) CURVE

This graphical method is used to portray the equivalence of sensitivity across changing levels of bias. The ROC curve is useful for understanding the joint effects of sensitivity and response bias on the data from a signal detection analysis.

Of the four values, only two are critical (hit and false alarm).

The ROC curve plots P(hit) against P(false alarm) for different settings of the response criterion (see the sketch below).

The value of beta at any given point along the ROC curve is equal to the slope of a tangent drawn to the curve at that point.
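Under the equal-variance Gaussian model, an entire ROC curve can be traced by fixing d′ and sweeping the criterion. The Python sketch below (again assuming SciPy) is illustrative; the d′ value and criterion placements are arbitrary.

from scipy.stats import norm

d_prime = 1.5                             # hypothetical, fixed sensitivity
for lam in [-1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0]:   # criterion placements
    p_fa = 1 - norm.cdf(lam)              # noise distribution ~ N(0, 1)
    p_hit = 1 - norm.cdf(lam - d_prime)   # signal distribution ~ N(d', 1)
    print(f"lambda={lam:+.1f}  P(fa)={p_fa:.3f}  P(hit)={p_hit:.3f}")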

Absolute Judgement
 Imagine a task in which you assign stimuli to categories or levels
 Stimuli range along a continuum, like the pitches on a piano
 They might be line lengths, tone pitches, light intensities, or texture roughness
 Stimuli are presented individually to the subject in random order
 There are many types of absolute judgement tasks, and all of them involve assigning stimuli to different categories
o Stimuli may be auditory, visual, tactile, or olfactory
o Examples of auditory stimuli are the pitches on a piano; visual stimuli may be line lengths, angles, or sizes
 Human performance in absolute judgement tasks depends on information processing capacity
 If there are only two stimuli, people perform perfectly; as the number of stimuli increases, they start making errors
 Based on studies, the maximum channel capacity of humans is about 2.6 bits, corresponding to 7 ± 2 categories. This is related to the capacity of a person's working memory
 Start with 2 pitches (high, low): perfect performance
 Go to 4, and people start making errors
 H_T (information transmitted) can be computed
 Maximum channel capacity is ~2.6 bits (Miller's 7 ± 2)

 Performance falls off after about four alternatives, remaining constant at roughly 2.6 bits (compare Miller's 7 ± 2)
o [Figure: information transmitted (H_T) plotted against stimulus information (H_S)]
o Perfect performance occurs when these two values are equal, but human performance is limited by memory capacity
o The limit in the general population is about 2.6 bits of information

Limit Not Sensory

 The limit on our performance is not our sensory capacity but our working memory
 People cannot store much information in their heads and retrieve it efficiently
 Our senses can make the discriminations, but we cannot remember the levels or categories that we have set
 For example, we can discriminate different shades of red, but we cannot name and remember them
 The level of the asymptote does not reflect a sensory limitation
 Rather, it reflects a working memory (STM) limitation

Single Dimension

Unidimensional judgments (Ex: size)


 Stimuli vary along one dimension only
 Observer places stimuli into 2 or more categories (S, M, L)

Channel Capacity (Experimental Results)


○ The level of the flat, late part of the curve indicates the channel capacity of the operator (2-3 bits)
○ Errors begin to occur (H_T < H_S)
Edge Effect (Experimental Results)
○ Stimuli located in the middle of the range of presented stimuli are generally identified with poorer accuracy than those at the extremes

Multidimensional Judgement

 Stimuli vary along more than one dimension (e.g., size and colour)
 Observer places stimuli into 2 or more categories spread across
multiple dimensions
 Most of our recognition is based on the identification of some
combination of two or more stimulus dimensions rather than
levels along a single dimension.

○ Orthogonal dimensions
■ The level of the stimulus on one dimension can take on any value,
independent of the other
■ For example, Weight and Hair color
■ As more dimensions are added, more total information is transmitted, but less information is transmitted
per dimension.

○ Correlated Dimensions
■ The level on one constrains the level on another.
■ For example, Height and weight
■ As more dimensions are added, the security of the channel improves, but H_S limits the amount of information that can be transmitted.

● Orthogonal dimensions maximize H_T, the efficiency of the channel.

● Correlated dimensions minimize H_loss; that is, they maximize the security of the channel.

Applications of absolute judgement

 Knowledge of absolute judgement enables a supervisor to make use of workers more efficiently while considering their limitations.
 In a sorting task, workers should not be expected to deliver perfect work if there are several levels.
 Sorting colors of different shades will also be difficult for a worker unless the edge effect is used to advantage.
 It is useful for tasks in which a worker has to sort stimuli into levels along some physical continuum,
o e.g., fruit inspection: fruit size, color; classification of wool quality.
 Asking a novice to sort into 8 levels and expecting them to be perfect will not work.
 With millions of colors on a display, users would not know the difference.

Separable vs. Integral Dimensions

 With orthogonal and correlated dimensions, we are referring to the properties of the information in a
stimulus
 How that stimulus is related to other stimuli
 Not the perceived form of the stimulus
 Separable vs. integral dimensions refers to the way that the dimensions of a multidimensional stimulus
interact

Separable: each dimension perceived as independent of other dimension(s)


Example: color and fill texture of object, perpendicular vectors

Integral: one dimension of the stimulus affects perception of other dimension


Dimensions are dependent
Examples: color and brightness of an object, rectangle height and width

Vigilance
● To be watchful
● To be alert
● What is happening
● And what can happen

VIGILANCE DECREMENT

● "deterioration in the ability to remain vigilant for critical signals with time, as indicated by a decline in the
rate of the correct detection of signals"
● It is most commonly associated with monitoring to detect a weak target signal. Detection performance loss
is less likely to occur in cases where the target signal exhibits a high saliency. For example, a radar operator
would be unlikely to miss a rare target at the end of a watch if it were a large, bright, flashing signal, but might miss a small, dim one.

Measuring Vigilance Performance


● Influences on sensitivity
● Changes in bias

THEORIES OF VIGILANCE
Arousal theory
Expectancy Theory
○ The theory suggests that although individuals may have different sets of goals, they can be motivated if they
believe that:
■ There is a positive correlation between efforts and performance,
■ Favorable performance will result in a desirable reward,
■ The reward will satisfy an important need,
■ The desire to satisfy the need is strong enough to make the effort worthwhile.

4 FACTORS/CAUSES OF VIGILANCE DECREMENT


● Stimulus duration
● Interdimensional interference
● Distraction
● Response complexity

TECHNIQUES TO COMBAT THE LOSS OF VIGILANCE


● INCREASING SENSITIVITY
○ Show target examples
○ Increase target salience
○ Reduce the event rate
○ Train observers
● SHIFT IN RESPONSE CRITERION
○ Instructions, knowledge of results, false signal, confidence level

INFORMATION THEORY

● How can we quantify this flow of information so that the different tasks confronting the human operator can be compared?
● Measure task difficulty by determining the rate at which information is presented.
● Measure processing efficiency, using the amount of information an operator processes per unit of time.
● Provides metrics to compare human performance across a wide number of different tasks.

PRINCIPLES OF INFORMATION THEORY

● The basic elements of any general communications system include


○ a source of information and a transmitting device that transforms the information or "message" into a form suitable for transmission by a particular means.
○ the means or channel over which the message is transmitted.
○ a receiving device which decodes the message back into some approximation of its original form.
○ the destination or intended recipient of the message.
○ a source of noise (i.e., interference or distortion) which changes the message in unpredictable ways during
transmission

THE NUMBER OF EVENTS

● When all alternatives are equally likely, the information conveyed by an event, H_S in bits, can be expressed by the formula

H_S = log2(N)

where N is the number of equally likely alternatives.

● Information theory also embodies a notion of optimal performance: an optimal strategy asks the minimum number of questions and therefore arrives at a solution in the minimum time.

PROBABILITIES OF EVENTS

● The probabilistic element of information is quantified by making rare events convey more bits:

H_S = log2(1 / P_i)

where P_i is the probability of occurrence of event i.

● Psychologists are often more interested in measuring the average information conveyed by a series of events with differing probabilities occurring over time:

H_ave = Σ P_i × log2(1 / P_i)

PROBABILITIES OF EVENTS
● Low-probability events convey more information because they occur infrequently. However, the fact that low-probability events are infrequent causes their high information content to contribute less to the average (see the sketch below).
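These formulas are simple to compute. The Python sketch below illustrates them with invented probabilities: first the information of equally likely events, then single-event information and the probability-weighted average.

from math import log2

# Equally likely alternatives: Hs = log2(N)
print(log2(4))                            # 2.0 bits for N = 4

# Unequal probabilities: h_i = log2(1/p_i); rare events carry more bits
probs = [0.5, 0.25, 0.125, 0.125]         # hypothetical event probabilities
for p in probs:
    print(f"p={p}: {log2(1 / p):.1f} bits")

# Average information: H_ave = sum(p_i * log2(1/p_i)) = 1.75 bits here
print(sum(p * log2(1 / p) for p in probs))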

SEQUENTIAL CONSTRAINTS AND CONTEXT


● Given a particular context, an event may be highly expected, and therefore its occurrence conveys very little information in that context.

● The absolute probability of the event, P_i, is then replaced by a contingent probability P_i|X (the probability of event i given context X).

INFORMATION TRANSMISSION OF DISCRETE SIGNALS

● Channel capacity: how much information is transmitted from stimulus to response.

● Bandwidth: how rapidly information is transmitted.

H_S : stimulus information
H_R : response information
H_T : information faithfully transmitted
H_loss : information lost (see the sketch below)
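Transmitted information can be estimated from a stimulus-response confusion matrix as H_T = H_S + H_R − H_SR, where H_SR is the joint entropy of the matrix. The Python sketch below uses an invented 2×2 matrix of joint probabilities.

from math import log2

def entropy(ps):
    return -sum(p * log2(p) for p in ps if p > 0)

# Rows = stimuli, columns = responses; cells are joint probabilities (invented).
matrix = [[0.4, 0.1],
          [0.1, 0.4]]

h_s = entropy([sum(row) for row in matrix])          # stimulus information H_S
h_r = entropy([sum(col) for col in zip(*matrix)])    # response information H_R
h_sr = entropy([p for row in matrix for p in row])   # joint entropy H_SR
h_t = h_s + h_r - h_sr                               # transmitted information H_T
print(f"Hs={h_s:.2f}  Hr={h_r:.2f}  Ht={h_t:.2f}  Hloss={h_s - h_t:.2f}")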
