
Visual Perceptual Issues of the Integrated Helmet and Display

Sighting System (IHADSS): Four Expert Perspectives


Clarence E. Rash1, Kevin Heinecke2, Gregory Francis3, and Keith L. Hiatt4
1 U.S. Army Aeromedical Research Laboratory, Fort Rucker, Alabama 36362
2 2nd Aviation Detachment, United States Military Academy, New Windsor, New York 12553
3 Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47907
4 U.S. Army Research Institute of Environmental Medicine, Natick, MA 01760-5007

ABSTRACT

The Integrated Helmet and Display Sighting System (IHADSS) helmet-mounted display (HMD) has been flown for
over a quarter of a century on the U.S. Army’s AH-64 Apache Attack Helicopter. The aircraft’s successful deployment
in both peacetime and combat has validated the original design concept for the IHADSS HMD. During its 1970s
development phase, a number of design issues were identified as having the potential of introducing visual perception
problems for aviators. These issues include monocular design, monochromatic imagery, reduced field-of-view (FOV),
sensor spectrum, reduced resolution (effective visual acuity), and displaced visual input eye point. From their diverse
perspectives, a panel of four experts (an HMD researcher, a cognitive psychologist, a flight surgeon, and a veteran AH-64
aviator) discuss the impact of the design issues on visual perception and related performance.

Keywords: Helmet-mounted display, HMD, static visual illusions, dynamic visual illusions, Integrated Helmet and
Display Sighting System, IHADSS

1. INTRODUCTION

Military applications for helmet-mounted displays (HMDs) have exploded in the past decade. Originally conceived for
aviation, HMDs have been developed for an expanding array of mounted and dismounted applications. One of the
longest-fielded HMDs is the Integrated Helmet and Display Sighting System (IHADSS), developed and first employed
on the U.S. Army’s AH-64 Apache attack helicopter in the late 1970s. The IHADSS is a monocular design, providing
pilotage and targeting imagery and symbology to the right eye only (Figure 1). From its initial fielding, aviators have
reported a host of physical visual symptoms and visual illusions associated with this HMD. These issues have been
thoroughly reviewed by Rash, Hiatt, and Heinecke.1-2

Figure 1. The AH-64D monocular HMD (IHADSS) (left) and two views of the helmet display unit (HDU).



There are a number of visual perception issues associated with the optical design and implementation of the IHADSS
HMD. These include a monocular design, presentation of monochromatic imagery, reduced field-of-view (FOV), sensor
spectrum, reduced resolution (effective visual acuity), and displaced visual input eye point. These issues are not
restricted to the IHADSS but also are present in other HMD designs.

In this paper, a panel of four experts will discuss these issues from their differing perspectives. The panelists include an
HMD researcher, a cognitive psychologist, an Army flight surgeon, and an Apache aviator. The approach will be to
present a brief introduction to each issue, followed by the input of each panelist. The intent of the panel discussion is to
provide the HMD community with parallel insights into important visual perception issues that are central to both
current and future HMD designs.

2. PANELISTS

Clarence E. Rash (HMD Researcher) - Mr. Rash is a research physicist with the Visual Sciences Branch, Sensory
Research Division, U.S. Army Aeromedical Research Laboratory (USAARL), Fort Rucker, AL. He has 28 years
experience in HMD research and development. Mr. Rash has investigated optical and visual issues associated with the
IHADSS HMD since 1979, the final year of its development. He has published over 200 scientific papers relating to
cockpit and helmet-mounted displays. He currently serves as the U.S. lead investigator on a 10-year joint US-UK
longitudinal study to determine if the use of the monocular helmet-mounted display (IHADSS) in the British Apache
AH Mk 1 attack helicopter has any long-term effect on visual performance.3-4

Gregory Francis (Cognitive Psychologist) - Dr. Francis is a professor of Psychological Sciences at Purdue University.
He has worked for 12 years with the USAARL, Fort Rucker, AL, to study glass cockpits,5 accident rates,6 and various
issues associated with the Apache AH-64.7 Dr. Francis develops, analyzes, and tests computational models of visual
perception and human cognition. These models are used to understand human behavior and to promote the development
of human-computer systems that are optimized for human performance.

Keith L. Hiatt (Flight Surgeon) - COL Hiatt is a Physician and Aerospace Medicine Specialist. He is currently the
Medical Director of the U.S. Army Research Institute of Environmental Medicine at Natick, MA.
He has 16 years of experience as an Army Flight Surgeon and is also a Federal Aviation Administration Senior AME.
He has collaborated with the USAARL on numerous research projects centering on the Apache Helicopter and the
IHADSS in particular. He was the initial investigator of the US-UK longitudinal study of the IHADSS in the British
version of the Apache Longbow3-4 and was the first individual to conduct in-theater HMD research in Operation Iraqi
Freedom One.8 He has published numerous scientific papers in the area of rotary-wing specific medicine,8-13 a large
proportion of which deal with the Apache HMD.

J. Kevin Heinecke (Apache Aviator) – CW4 Heinecke is a career U.S. Army Attack Helicopter aviator qualified in the
AH-1 Cobra and AH-64 Apache in 1990 and 1999, respectively. He has flown the Apache's IHADSS HMD in
operational deployments in Korea, Bosnia Herzegovina, and Iraq, in addition to peacetime training operations. Presently
he has over 1000 operational hours with the AH-64’s HMD. He earned an MS degree in Aviation Systems, Aircraft
Research and Design, from the University of Tennessee Space Institute, Tullahoma, TN, and has authored papers on the
AH-64 IHADSS.2, 14 He is presently serving as an adjunct instructor for the Department of Civil and Mechanical
Engineering (Aero and Thermo Group) with the U.S. Military Academy at West Point.

3. MONOCULAR DESIGN

During early development of the IHADSS HMD, head-supported weight was a critical factor driving its design. The
result was an optical design decision to adopt a monocular approach. The final IHADSS design was the HDU (Figure 1),
which is an optical relay unit incorporating a miniature 1-inch cathode ray tube (CRT) and relay optics. The HDU is
mounted on the right side of the IHADSS helmet. Video imagery originating from the nose-mounted forward-looking
infrared (FLIR) sensors (one for pilotage and one for targeting) is presented on the face of the CRT and is optically
relayed and reflected off a beamsplitter (combiner) into the pilot’s right eye. The presented imagery subtends a FOV of



30° vertically (V) by 40° horizontally (H). This imagery, presented to the right eye only, is what the pilot uses to fly and
operate the AH-64 at night. The left eye is unaided, allowing the pilot to view cockpit displays and the outside world.

Clarence E. Rash (HMD Researcher) - While the 1970s selection of a monocular design for the IHADSS was dictated
by head-supported weight and center-of-mass concerns, monocular HMDs actually offer certain advantages, e.g.,
smaller packaging, lower cost, and the ability to have one eye free for viewing both inside and outside the cockpit. Their
smaller packaging permits them to be placed closer to the head, resulting in less occlusion of the pilot’s look-around
vision.15 However, monocular designs also present disadvantages, which include FOV limitations, small exit pupils, the
potential for binocular rivalry and visual illusions, eye dominance problems, reduced depth cues, increased workload,
and reduced reaction time.16

The FOV of an HMD design correlates directly with its size and weight, i.e., a reduced FOV allows for a smaller
objective lens and related optics, which reduces head-supported weight (mass). However, to compensate for a reduced
FOV, the pilot must increase head movement.

The IHADSS’ small 10-mm exit pupil requires the HMD optics to be extremely close to the eye to prevent vignetting of
the FOV. When in use, the HDU frequently rests in direct contact with the pilot’s maxilla/zygomatic arch (right
cheekbone).3 The small exit pupil makes stability of fit an extremely important factor. This is especially true in the high
vibration environment of helicopter flight.
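
A first-order geometric sketch (in Python) illustrates how little helmet slip such an exit pupil tolerates; the dark-adapted
eye pupil diameter below is an assumed value, and the actual onset of vignetting depends on the specific optical design.

# First-order sketch of why a small exit pupil makes fit stability critical.
# The 10-mm exit pupil diameter is taken from the text above; the eye pupil
# diameter is an assumed dark-adapted value.
EXIT_PUPIL_MM = 10.0
EYE_PUPIL_MM = 6.0   # assumed dark-adapted pupil diameter

# Lateral slip of the HDU that can be tolerated before the eye pupil begins
# to leave the exit pupil and the FOV starts to vignette.
slip_budget_mm = (EXIT_PUPIL_MM - EYE_PUPIL_MM) / 2.0
print(f"approximate lateral slip budget: +/- {slip_budget_mm:.1f} mm")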

As a monocular display, the IHADSS causes the complete loss of stereopsis (the visual appreciation of three dimensions
available with binocular vision). Stereopsis is thought to be particularly important in tactical helicopter flight that is
within the 200-meter limit of effective vision.17 Fortunately, monocular depth cues (e.g., retinal size, motion parallax,
interposition and perspective) are generally accepted as more important for routine flight. Pilots using monocular HMDs
can improve their nonstereo depth perception with training.

During the development of the IHADSS, there were two major concerns with the proposed monocular display format,
namely eye dominance and binocular rivalry. Eye (or "sighting”) dominance refers to a tendency to use one eye in
preference to the other during monocular viewing.18 Critical cost and weight considerations favored a monocular display
format for the IHADSS, which logically would be located on the right side of the helmet since the majority of the human
population is right-eye dominant. However, there were serious questions whether a left-eye dominant pilot could learn to
attend to a right-eye display. In fact, there is evidence linking sighting dominance with handedness and various facets of
cognitive ability, including tracking ability and rifle marksmanship performance.19 One study addressing head aiming
and tracking accuracy with helmet-mounted display systems indicated eye dominance to be a statistically significant
factor.20 However, this small amount of research is far from compelling and the IHADSS still is produced to fit the
right-eyed majority. Since the AH-64 has been fielded, there have been no reports in the literature addressing the
influence of eye dominance on IHADSS targeting accuracy or AH-64 pilot proficiency.

Probably a more troublesome concern raised during the IHADSS development was binocular rivalry, which occurs
when the eyes receive dissimilar input. This ocular conflict apparently is resolved by the brain through the suppression
of one of the images. The IHADSS presents the eyes with a multitude of dissimilar stimuli: color, resolution, field-of-
view, motion, and luminance. Pilots report difficulty making the necessary attention switches between the eyes,
particularly as mission time progresses.21 For example, the relatively bright green phosphor in front of the right eye can
make it difficult to attend to a darker visual scene in front of the left (unaided) eye. Conversely, if there are bright city
lights in view, it may be difficult to shift attention away to the right (HDU) eye.22 AH-64 pilots report occasional
difficulty in adjusting to one dark-adapted eye and one light-adapted eye.23 It may be hard to read instruments or maps
inside the cockpit with the unaided eye, since the IHADSS eye "sees" through the instrument panel or floor of the
aircraft, continuously presenting the pilot with a conflicting outside view. In addition, attending to the unaided eye may
be difficult if the symbology presented to the right eye is changing or jittering.22 Some pilots resort to flying for very
short intervals with one eye closed, an extremely fatiguing endeavor.21-23 The published user surveys generally agree
that the problems of binocular rivalry tend to ease with practice - although under conditions of a long, fatiguing mission,
particularly if there are system problems (e.g., display focus or flicker, poor FLIR imagery, etc.), rivalry is a recurrent



pilot stressor.21-23 It is likely that sighting dominance interacts with binocular rivalry, affecting a pilot's ability to attend
to one or the other eye.

Gregory Francis (Cognitive Psychologist) - Binocular rivalry is an important concern for the monocular IHADSS.
Images from one eye may perceptually interfere with the ability of the user to see items with the other eye. Switching
between eyes is a challenging task, and subjects in the laboratory are not always very good at performing such
switches.24 A more significant issue may be the attentional focusing on information in one eye to the exclusion of
information in the other eye. Cognitive tunneling (sometimes called attentional tunneling or cognitive capture) refers to
a difficulty in dividing attention between two superimposed fields of information (e.g., HMD symbology as one field
and see-through images as another field). In the aviation environment, such effects can lead to serious problems. Fischer
et al.25 and Wickens and Long26 found that pilots sometimes did not detect an airplane on a runway when landing while
using a HUD system. Clearly, the importance of the detection task is not enough, by itself, to overcome some change
blindness effects. Cognitive tunneling is probably related to a phenomenon known as change blindness, where changes
in a display are very difficult to identify unless an observer is attending to the object that changes. A simple version of this
effect can be seen in Figure 2. The difference between the two images is large once found, but it is not easily detected because in
each image the scene is self-consistent. Similar effects can be found for overlapping images as might be produced by the
IHADSS.27

Figure 2. These two similar images have a significant difference that is surprisingly difficult to find.

Another significant issue involves the loss of binocular stereopsis as a cue for depth perception. The scene on the
IHADSS display typically does not correspond to the scene for the unaided eye. As a result, the different views available
to the two eyes do not contain the disparity cues that are normally used to judge relative distances of objects. The
disparity cues that support stereopsis are most effective up to about 30 meters (100 feet),28 a range that is quite important
for tactical helicopter flight.
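
A short calculation (a sketch only, using textbook values for interpupillary distance and stereoacuity rather than anything
specific to the IHADSS) shows why disparity loses its usefulness beyond a few tens of meters: the smallest depth difference
resolvable by stereopsis grows with the square of the viewing distance.

import math

# Assumed values (not from the paper): 65-mm interpupillary distance and a
# 20-arcsecond stereoacuity threshold, both typical textbook figures.
IPD_M = 0.065
DISPARITY_THRESHOLD_RAD = math.radians(20.0 / 3600.0)

def smallest_discriminable_depth(distance_m):
    """Approximate depth difference resolvable by stereopsis alone,
    from the small-angle relation delta_d ~= d**2 * disparity / IPD."""
    return distance_m ** 2 * DISPARITY_THRESHOLD_RAD / IPD_M

for d in (1, 10, 30, 100, 200):
    print(f"at {d:>3} m: depth resolution ~ {smallest_discriminable_depth(d):.2f} m")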

Keith L. Hiatt (Flight Surgeon) - Although the selection of a monocular design for the IHADSS was primarily dictated
by head-supported weight and center-of-mass issues, the fact remains that the IHADSS still presents the end user with a
number of concerns. These concerns were often voiced by my Apache pilot patients, but they really became obvious to
me when I had the opportunity to fly the Apache on a number of training missions.

First, the IHADSS is the only helmet approved for the AH-64 and has been in use for over 25 years. A unique feature of
the IHADSS helmet is that it serves as a platform for the HMD which provides pilotage and fire control imagery and
flight symbology. In order to view the HMD imagery, the helmet/HMD must be fitted such that the exit pupil of the
HMD is properly aligned with the aviator's eye each time it is worn. This makes the fit and stability of the IHADSS
helmet critical considerations. Achieving proper fit of the IHADSS helmet is complicated by its very intricate system of
straps and pads. To function properly, pilots require a customized, repeatable fit that maintains the
exit pupil position and provides the full FOV. Fitting of the IHADSS helmet often takes several hours to
complete and frequent adjustments are required between sorties and even during sorties due to the high-vibration



environment inherent in rotary wing aviation. Unfortunately, this fitting process must be repeated every time aviators are
transferred to a new duty station, as they cannot take the IHADSS helmet with them; it is part of the AH-64 aircraft
system and is unit property.

Second, the display is not interchangeable between eyes and is only used with the right eye. Those individuals with a
dominant left eye have some difficulty in adapting to using their right eye for primary visual input. Many of my left eye
dominant pilots had increased complaints of eye strain and headaches after long missions.

Third, the device is not easily adjustable to fit different head sizes and viewing angles. Although the display arm of the
device provides for fore-aft and roll adjustment and the combiner can be moved up and down, the primary adjustment
limitation is the helmet to which the display is attached. In fact, many individuals’ anthropometrics do not allow for a
full FOV since the combiner is not low enough or reasonable eye clearance is not obtained. Any misalignment of the
HDU and IHADSS helmet outside a specific value will not allow a proper boresight with the total system.

The fourth concern is the issue of “rivalry” as described by my fellow panelists. From a medical standpoint the system
presents the brain with two very dissimilar sets of stimuli, which requires the pilot to consciously and continually switch
their attention back and forth between eyes. Trainee and junior pilots report difficulty making the necessary attention
switches between the eyes and often complain of eye fatigue and headaches. It appears, however, that with time and
training this issue may be overcome in most individuals unless the mission is particularly long or if the visual input
becomes complex due to bright lights or system failures. Of note, there are some very experienced Apache pilots who
claim that they can actually look at two different objects (one with each eye) in different axial planes – an example is
reading a document on the desk while describing an object in the distance. This myth is explained by the fact that these
pilots have trained themselves to alternate and scan at such a rapid rate that only an astute observer can see their rapid
eye movements.

Fifth is the issue of individuals requiring visual correction to 20/20. Early on, for those pilots requiring corrective
spectacles, significant spectacle modifications were required so that corrective lenses did not interfere with the display.
This was further complicated with the requirement to also use the IHADSS in a nuclear-biological-chemical (NBC)
environment requiring protective masks. These issues were later somewhat minimized by the introduction of the
disposable contact lens program and later with the use of refractive surgery. However not all individuals’ vision can be
corrected to 20/20 with refractive surgery or with contact lenses. Lastly, contact lenses are less than ideal in some of our
current operations due to dust and hygiene issues.

Sixth and finally, even though the IHADSS HMD is overall lighter than other helmets, the
system adds significant mass to the head and neck and moves the center of gravity forward and to the right. Added head
mass and eccentric placement have been associated with increased complaints of back and neck pain both anecdotally
and in the published literature.12

J. Kevin Heinecke (Apache Aviator) - The monocular design, per se, has never produced a functional problem for
pilots, just a physiological one, and the physical problems only manifest themselves under night aided flight.

Up until the advent of the nose-mounted FLIR system on the AH-64 and the use of the IHADSS’ monocular design in
the early 1980s, pilots had been using binocular night vision goggles (NVGs). As an AH-1 Cobra attack helicopter pilot,
NVGs provided me and my colleagues a 30° (V) X 40° (H) FOV with a Snellen visual acuity of 20/50. NVGs provided
us the ability to fly our attack helicopters at night, but earlier versions of NVGs did not provide flight data symbology as
is present in today’s advanced NVG models, nor were they integrated into the airframe. In contrast, the AH-64 Apache
enhanced optics do provide us with flight data in the form of a heading tape, vertical speed indicator, airspeed value,
velocity vector, and an acceleration cue that came as part of the aircraft. So, functionally speaking, the choice to go with
the IHADSS’ monocular design allowed the Army to provide pilots the ability to receive “unaided” visual imagery from
the cockpit displays in addition to providing an aided view of the nighttime environment overlaid with flight safety data,
which was previously unavailable. The system also provides an unobstructed view of the night time environment since
the viewer isn’t viewing the outside world from behind airframe obstructions. In the event of a system failure the
Apache pilot can easily “flip up” the HDU monocle (combiner) on his helmet and immediately transition to visual flight
rules (VFR) mode while exiting the environment unaided. The only problem under this scenario is the need for one’s



right eye to “night adapt” or transition from day vision (recall the pilot is seeing an optically enhanced view of the
nighttime scenery with the right eye) to night vision. Although system failures are quite rare, the effect can still be
noticed during aircraft shutdown from night missions when the pilot experiences the condition known as monochromatic
adaptation or “brown eye”. After shutdown of the aircraft’s night vision equipment, the merging of night vision (left
eye) and day vision (right eye) results in the pilot seeing “brown”, which can be uncomfortable at first. In the event of a
system failure in flight, one could either close their right eye momentarily trying to facilitate a faster transition to
unaided vision (not that it ever seems faster) or just suffer through the experience. Under this scenario, the most striking
concern for the pilot is the loss of the flight symbology data mentioned above, especially if operating in mountainous
terrain (i.e. Korean Demilitarized Zone [DMZ]) or close to the earth’s surface during NOE (nap of the earth) flight.

A second area of physical discomfort handled by the pilot due to the monocular design is the issue of “binocular
rivalry.” In this case, the right and left eyes fight for dominance trying to provide the pilot’s brain “their” view of the
world. Since both eyes are seeing different versions of the same environment, the brain can’t just “merge” the two as a
computer software program might. This being the case, the pilot flying under FLIR must choose the right eye imagery
over the unaided left eye imagery. The problem surfaces when a pilot is either left eye dominant and the brain keeps
forcing him/her to accept the left eye imagery, or when the lighting in the cockpit (or outside explosions, lighting, etc.)
becomes more dominant than the FLIR image, and the brain forcefully pops the pilot’s attention back to the left eye
image. In addition to causing headaches and fatigue, this condition becomes stressful at the junior flight level and
requires hours of training over several years (in some cases) to adequately overcome. At the experienced level, pilots are
able to switch back and forth at will between views, but even this ability becomes task saturated in the current operating
environment where AH-64 pilots fly 4- to 6-hour night missions in Iraq and Afghanistan.

4. MONOCHROMATIC IMAGERY

The IHADSS imagery is monochromatic (having no variation in hue), green on black. The “green” is defined by the
choice of phosphor. The original phosphor choice was P43, which was primarily selected for its high luminous
efficiency and peak luminous output at 543 nanometers.

Clarence E. Rash (HMD Researcher) - The luminous efficiency of the eye is a function of wavelength and adaptation
state. For example, at photopic (day-adapted eye) levels of illumination, the eye is most efficient at 555 nm; at other
wavelengths, more energy is required to perceive the same brightness. The 543-nanometer peak of the P43 phosphor is very
close to the eye’s peak response. Since the IHADSS imagery is monochromatic, it is devoid of color contrast, and the
pilot must depend solely on luminous contrast.
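
The consequence of the phosphor choice can be illustrated with a small Python sketch; the Gaussian below is only a crude
stand-in for the tabulated CIE photopic luminous efficiency curve, and the actual phosphor emission is a spectrum rather
than a single line.

import math

# Crude Gaussian stand-in for the CIE photopic V(lambda) curve:
# peak at 555 nm, roughly 100-nm full width at half maximum (assumed).
PEAK_NM = 555.0
SIGMA_NM = 42.0  # ~100 nm FWHM / 2.355

def v_lambda_approx(wavelength_nm):
    return math.exp(-0.5 * ((wavelength_nm - PEAK_NM) / SIGMA_NM) ** 2)

print(f"relative efficiency at 543 nm (P43 peak): {v_lambda_approx(543.0):.2f}")
print(f"relative efficiency at 650 nm (a red phosphor): {v_lambda_approx(650.0):.2f}")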

Gregory Francis (Cognitive Psychologist) - Color is an important and dramatic aspect of visual perception in normal
viewing. Many stimuli and objects in natural scenes are best identified by their color differences relative to their
surroundings. In a similar way, color in an HMD could be used to easily encode information that might be difficult to
convey in a monochromatic display of symbology. For example, color differences could be used to indicate when key
variables are beyond acceptable values. On the other hand, assigning colors to FLIR imagery might introduce significant
problems. Such imagery contains information about objects in the world, but it is not colored information in the sense
that the human visual system is designed to interpret. Adding false color to FLIR imagery may be a way to indicate
additional information about the energy of sources in the scene, but the presented colors may simply confuse a visual
system that uses color contrast to distinguish between different objects in the world. Overall, there is not a simple
answer to the question of whether adding color to IHADSS imagery would be worthwhile. It depends on how the color
would be used and what tasks the user must perform.

Keith L. Hiatt (Flight Surgeon) - From a medical standpoint, to my knowledge and in my experience, there are no real
problems presented by the monochromatic green on black visual display of the IHADSS. The medical selection
standards for U.S. Army aviators preclude individuals with colorblindness, so color is not an issue. In fact, even if a
pilot were colorblind, the system relies on luminance intensity and not individual colors – thus a monochromatic system
would likely be superior. As to the issue of “brown eye syndrome,” the rod and cone components of this phenomenon
would most likely be affected with high saturation by whatever color or colors were used.



J. Kevin Heinecke (Apache Aviator) - After years of flying NVGs and FLIR, both with varying shades of green, pilots
have become comfortable with monochromatic imagery. Furthermore, reports from the field with respect to the
Modernized FLIR sensor being used in their AH-64D aircraft indicate that the imagery is almost “TV-like” in
appearance. Switching to a new image base would be tantamount to a paradigm shift in the night flying community as a
whole, not just AH-64 pilots.

5. REDUCED FIELD-OF-VIEW (FOV)

FOV refers to the maximum image angle of view that can be seen through an optical device, e.g., the HMD. The
IHADSS provides a 30º (V) by 40º (H) rectangular FOV, presenting an image to the pilot which is equivalent to a 7-foot
(diagonal) CRT being viewed from 10 feet away.29 Although the monocular HDU design obstructs unaided lateral
vision to the lower right, the IHADSS provides an unimpeded external view throughout the range of the FLIR sensor’s
movement (± 90º azimuth and +20º to -45º elevation). The IHADSS is designed to present the FLIR sensor’s FOV in
such a manner that the image on the combiner occupies the same visual angle in front of the eye, resulting in unity
magnification.

Clarence E. Rash (HMD Researcher) - When HMD imagery is provided by a sensor such as the FLIR sensor mounted
on the nose of the Apache helicopter, the optimal, maximum FOV possible is determined solely by the FOV of the
sensor. Mapping the sensor’s FOV to the HMD’s FOV produces a 1 to 1 or unity magnification — the desired condition
for pilotage. If the sensor’s output is displayed over a larger (magnified) FOV, image resolution severely suffers, as the
number of available pixels must be spread out over larger angles of viewing.
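
A simple sketch makes the trade concrete; the horizontal pixel count used below is illustrative only and is not the
specification of either FLIR sensor.

# Angular resolution when a fixed sensor pixel count is mapped onto a display FOV.
SENSOR_PIXELS_H = 800        # assumed value, for illustration only
SENSOR_FOV_H_DEG = 40.0      # horizontal FOV from the text

def pixels_per_degree(display_fov_deg):
    return SENSOR_PIXELS_H / display_fov_deg

unity = pixels_per_degree(SENSOR_FOV_H_DEG)   # 1:1 (unity magnification) mapping
stretched = pixels_per_degree(60.0)           # same pixels spread over 60 degrees
print(f"unity magnification: {unity:.0f} px/deg; stretched to 60 deg: {stretched:.1f} px/deg")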

The human eye has an instantaneous FOV that can be described as oval in shape and measures 120º (V) by 150º (H).
Considering both eyes together, the overall binocular FOV measures approximately 120º (V) by 200º (H).30 In
comparison, the IHADSS has a 30º (V) by 40º (H) and is rectangular in shape.

A number of studies have been conducted in an attempt to understand the role of FOV in pilotage and targeting tasks.
Sandor and Leger31 investigated tracking with two restricted FOVs (20º and 70º). They found that tracking performance
appeared to be "moderately" impaired for both FOVs. Further investigation on FOV targeting effects found negative
impacts on coordinated head and eye movements32 and reinforced decreased tracking performance with decreasing FOV
size.33-34 Kasper et al.35 also examined the effect of restricted FOVs on rotary-wing aviator head movement and found
that aviators respond to such restrictions by making significant changes in head movement patterns. These changes
consist of shifts in the center of the aviator’s horizontal scan patterns and movements through larger angles of azimuth.
They also concluded that these pattern shifts are highly individualized and change as the restrictions on FOV change.
This work was an extension of Haworth et al.36 which looked at FOV effects on flight performance, aircraft handling,
and visual cue rating.

A seminal study of the importance of FOV to rotary-wing aviation is the Center for Night Vision and Electro-Optics
(Fort Belvoir, VA) investigation of the tradeoff between FOV and resolution.37 In this study, five pilots using binocular
simulation goggles performed terrain flights in an AH-1S Cobra helicopter. Seven combinations of FOV (40º circular to
60º [V] by 75º [H]), resolutions (20/20 to 20/70), and overlap percentages (50% to 100%) were studied. They reported
the lowest and fastest terrain flights were achieved using the 40º - 20/60 - 100% and 40º - 20/40 - 100% conditions, with
the pilots preferring the wider (60º) condition. However, the authors did not feel that the results justified increasing FOV
without also increasing resolution.

In spite of this research, the question of how large a FOV is required still has not been fully answered. Pilots want it to
be as large as possible. HMD designers must perform tradeoffs between FOV, resolution, weight, size, and cost. The
task of determining FOV required for flying is not a simple one. Obviously, the selected FOV should reflect the
aircraft’s mission, providing optimal visual search performance, object recognition, and spatial orientation.38 Seeman et
al.39 recommend an instantaneous FOV of 50º (V) by 100º (H) for flight tasks involving control of airspeed, altitude, and
vertical speed. This estimate does not include considerations for other flight tasks, such as hover. Both Haworth et al.36
and Edwards et al.40 found that performance gains could be tied to increasing FOVs up to about 60º, where performance



seems to encounter a ceiling effect. This raises the question as to whether increased FOV designs are worth the tradeoff
costs.

Gregory Francis (Cognitive Psychologist) - Restrictions on FOV influence many different aspects of visual perception.
For example, depth perception, distance estimation, and size estimation are all dependent on relative occlusion of
objects in a scene. This cue is particularly important for a monocular IHADSS that loses stereoscopic information. With
a restricted FOV, fewer objects in the scene are visible and it becomes more difficult to observe the depth of objects in
the scene. As discussed by other panelists, crewmembers must scan the scene for a better understanding of the external
world. The limited FOV introduces additional issues because the IHADSS display has at least two separate uses. It is
used to provide a view of the external world through the PNVS and also to provide flight symbology. Flight symbology
information is placed on the edges of the CRT display, so as not to interfere with views of the external world. However,
by being on the edge of the display, the symbols are difficult to resolve without an eye movement to place the symbol
image on the fovea of the eye. Such eye movements require planning and attention that distract from other flight duties
and can lead to cognitive tunneling effects.

Keith L. Hiatt (Flight Surgeon) - In discussions with my patients and in my experience, both the FOV of the IHADSS
(30° by 40°) and the FOV for the current NVG system (40° circular) are less than optimal. Both systems, in effect, cause
a “looking at the world through a toilet-paper tube” phenomenon which results in a serious and significant loss in
peripheral vision that requires pilots to constantly use a multi-axis scanning technique that is fatiguing, causes eye and
neck strain, and headaches. Although NVGs have undergone numerous upgrades in development (primarily resolution
and weight), the IHADSS remains at a late-1970s technology level even though the airframe itself has undergone numerous
upgrades to become the AH-64D (Longbow). Having had the opportunity to view through the multi-tube “Panoramic”
goggle prototype, I found its 100° (H) by 40° (V) FOV to be an “eye-opening” experience. User complaints both in the office and
in the literature regarding user satisfaction point to a need for an increased FOV,1 although what the optimum FOV
would be is debatable. In addition to the increase in fatigue and other physical complaints, the FOV of the current
system may also cause pilots to fixate on targets and lose situational awareness – a complaint voiced by several of my
patients.

J. Kevin Heinecke (Apache Aviator) - The 30° by 40° FOV for the Apache FLIR sensor and circular 40° FOV for
NVGs remains the norm. Aviators are trained to adjust their scan to facilitate receiving as much of the outside scenery
as possible. The training entails using a systematic left to right (or right to left) scan technique to compensate for the loss
of peripheral cues omnipresent in daytime visual flight. The pilot continues his scan focused on safety of flight cues,
whereas the copilot continues his/her scan focused on a reconnaissance of the battlefield. The problem is, once a pilot
identifies a point of interest, his/her attention tends to focus on this area of concern, drawing him/her in. It requires a
special skill set to maintain situational awareness and not lose focus on the bigger picture. Another problem is that,
when fatigued, a pilot finds he/she is too tired to maintain the optimal scanning technique, causing him/her to miss
pertinent battlefield observations that would otherwise be taken for granted in the day.

Is the FOV too small? Probably, yes. But the logistics of reinventing the current design (an airframe nose-mounted night
vision sensor) would almost certainly outweigh the cost of the training effort that appears to be working today. By the time an
individual graduates from novice pilot to effective copilot, and finally to Pilot-in-Command (PIC), he/she has amassed a
minimum of 100 hours of night imaging system time. Pilots have shown that they can successfully negotiate the rigors
of any mission. Once this has been accomplished, the pilot becomes more skilled with practice. Increased resolution
(discussed later) is always the best solution.

6. SENSOR SPECTRUM

Normal vision has evolved using only light energy over a specific part of the electromagnetic spectrum. As Figure 3
shows, the visible portion of this spectrum falls roughly between 400 and 700 nanometers (0.4 to 0.7 microns). Within
this range, light behaves in certain ways as it reflects off of objects of different properties. Within the human visual
system, different wavelengths of light that hit the eye are, to a first approximation, interpreted as different perceptual
colors that identify properties of object surfaces. The Apache’s thermal imaging (FLIR) sensors operate in the 8-12
micron spectral range (Figure 3).



Figure 3. The electromagnetic spectrum. The gray areas indicate the wavelengths for normal vision and
the Apache FLIR sensor.

Clarence E. Rash (HMD Researcher) - Thermal imaging theory is based on the fact that every object emits radiation.
This radiated energy is a direct result of the vibration of the molecules making up the object. An object's temperature is
a measure of its vibrational energy. Hence, the higher the temperature of a body, the greater its amount of radiated
energy. In turn, the temperature of a body is determined by several factors, including: a) the object's recent thermal
history, b) the reflectance and absorbance characteristics of the object, and c) the ambient temperature of the object’s
surroundings. In the end, the representation of the outside scene seen by the pilot on the IHADSS display is in a
completely different spectral range (8 to 12 microns) with visual cues that do not match those present in a normal-vision
representation of the scene.
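
A short sketch using Wien's displacement law shows why an 8-12 micron band suits terrestrial scenes; real objects are not
ideal blackbodies, so this is only a first-order argument, and the example temperatures are assumed.

WIEN_CONSTANT_UM_K = 2898.0  # micron-kelvin

def peak_emission_um(temperature_k):
    """Wavelength of peak blackbody emission (Wien's displacement law)."""
    return WIEN_CONSTANT_UM_K / temperature_k

for label, temp_k in [("cool night terrain (~270 K)", 270.0),
                      ("ambient objects (~300 K)", 300.0),
                      ("warm engine deck (~350 K)", 350.0)]:
    print(f"{label}: peak emission near {peak_emission_um(temp_k):.1f} microns")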

Figure 4 shows a scene with a photograph taken by a normal (visible light) camera on the left and with an infrared
camera on the right. There are many similarities between the two images. In each image, many of the major buildings
are detected, and contrast between adjacent objects is notable. On the other hand, there are many significant differences
in the two images. For example, some of the electrical wires visible in the normal image are almost invisible in the
infrared image. Similarly, the writing on the railcars is visible in the normal image but washed out in the infrared image.
The emissions from the buildings are nearly invisible in the normal image but are quite clear in the infrared image. Most
of these differences are due to the properties of the sensors. They detect different types of electromagnetic energy and so
are sensitive to different parts of the scene.41

Figure 4. Pictures of a scene taken with a normal camera (left) and with an infrared camera (right).

Gregory Francis (Cognitive Psychologist) - Thermal imaging provides information about a nighttime scene where the
lack of light would otherwise prevent a person from knowing anything about the scene. Some information is almost
always better than no information, so the information from the FLIR system can provide significant benefits to users.
The frustration reported by some AH-64 pilots is probably due to inconsistencies between the visual information
provided by the FLIR and the visual information expected by a pilot’s visual system. The human visual system has



evolved to interpret patterns of reflected light in a way that allows us to identify objects in the environment. Differences
in the wavelength, gradients, and spatial arrangements of light are all important cues for identifying objects. Most of
these cues are absent or misleading in thermal imagery. Consider the image on the right side of Figure 4. To a naïve
observer, the spaces between the supports of the boxcars produce what look like shadows that are interpreted as part of a
three-dimensional structure similar to a small tent. The bright white from the boxcars’ sides looks more like some sort of
wall than a structure above the “tents.” With sufficient training, pilots can learn to interpret the thermal imaging
information to gain an understanding of the world around them. However, it is unlikely that such training will ever
become as proficient as normal vision.

Keith L. Hiatt (Flight Surgeon) - The main issues brought to the attention of flight surgeons regarding the FLIR sensors
in the Apache are indeed based on Mr. Rash’s statements that the parameters of recent thermal history, reflectance and
absorbance characteristics and the ambient temperature of the surroundings determine or greatly affect the ability of
pilots to perceive and understand what they are actually looking at. In perfect conditions, the objects viewed under FLIR
are quite recognizable. Unfortunately, perfect conditions are rare in most training and operational environments.
Personally, as a flight surgeon in the Mojave Desert, CA, and in Iraq during the summer, looking out at a relatively
vegetation-free and target-free landscape (using the FLIR), there was little to see, but the dunes and flat expanses were
recognizable; if vehicles or structures were added to the scene, they were quite visible as such. At late night/early pre-
dawn morning, after these vehicles and buildings have had time to emit most of their absorbed heat, they were not
visible or distinct. In another environment (England), with the ambient temperature and humidity of spring or fall and
little or no sunshine, the FLIR image I was provided was not much better than an analog television displaying a snow
pattern, regardless of vehicles or structures (although they could be seen by the naked eye). My patients have
complained of the same issues regarding clarity, and they have noted some anxiety in friend vs. foe issues as well
(although this has been improved with the fielding of certain systems). In discussions with patients and colleagues, the
main complaints of most pilots are echoed in Mr. Heinecke’s remarks below, i.e., the modernized FLIR system being
employed may assist with these issues.

J. Kevin Heinecke (Apache Aviator) - The FLIR sensor spectrum lies above that of visible light in wavelength. This
being the case, a simple one-for-one visual recognition of terrain objects, people, or mechanical objects is not possible.
What pilots see is a “heat signature” of these objects, which does not usually look anything like their visual spectrum
image. Picture a tank in your mind’s eye; now picture a bright “green blob,” heavy on the brightness at the aft end. A
burning object is just one complete bright green blob, while a cold object can fade into the surrounding scenery. The easiest
FLIR images to recognize on a one-for-one basis are terrestrial objects like trees and manmade objects that distribute
heat uniformly (airport runways, buildings, etc.). Because stateside training is heavy in these objects and U.S.-made
vehicles, they become more recognizable, so they can be easier to detect, though not always at a distance. Enemy
vehicles and equipment are another story. A mental picture of a visual spectrum image almost never looks like its heated
infrared image. Under the stress of combat and during decisive moments, the AH-64 pilot at times strains to make a
positive identification. I have been told that learning is enhanced through experiencing significant emotional events.
Pilots “re-learn” visual associations of these known objects and refer back to these recognitions when surveying the
battlefield; it truly is a learned skill that requires much effort. The Army provides pilots with the best learning aids
possible, but learning aids are no replacement for actual experience. Unfortunately, this occurs far too often during life
or death situations. In short, at the earliest night system flight level, it is extremely frustrating to try to make out what
you are seeing. To compound this problem, the cockpit is a tandem design (front and back seats) requiring extra explicit
audio interaction between seats to identify targets, obstacles, other aircraft, and general points of interest. If the co-pilot
is having difficulty recognizing what he/she is seeing, he/she can’t be of much help to the pilot when you are trying to
help the Warfighters on the ground.

Fortunately, the legacy system I’ve been discussing is being replaced by the modernized FLIR versions. The sensor
spectrum remains the same, but the image has been enhanced to the state that pilots claim they feel like they are viewing
a TV quality image. The system is able to display a much sharper (higher resolution) image, making it easier to relate to
its visual spectrum counterpart. This in turn cuts down on response delays, confusion, and mistakes due to faulty visual
analysis.



7. REDUCED RESOLUTION

The most frequently asked HMD design question is "How much resolution must the system have?" Resolution refers to
the ability of an optical system to reproduce detail in the “viewed” scene. This, in turn, defines the fidelity of the image.
Spatial resolution is, perhaps, the most important parameter in determining the image quality of a display system. An
HMD’s resolution delineates the smallest size target that can be displayed. An image’s resolution usually is given as the
number of vertical and horizontal pixels that can be presented.

For the human visual system, visual acuity is a measure of the ability to resolve fine detail. A common metric is Snellen
visual acuity, which is expressed as a comparison of the distance at which a given set of letters is correctly read to the
distance at which the letters would be read by someone with clinically normal vision. A value of 20/80 indicates an
individual reads letters at 20 feet that normally can be read at 80 feet. Normal visual acuity is 20/20. Visual acuity, as
measured through imaging systems, is a subjective measure of the user’s visual performance using these systems. Target
acquisition is a primary performance task. For this task, a reduced acuity value implies the user would achieve
acquisition at closer distances. Providing an acuity value for FLIR (thermal) systems is difficult since the parameter of
target angular subtense is confounded by the emission characteristics of the target objects. However, for comparison
purposes, Snellen visual acuity with the AH-64 FLIR sensor/IHADSS combination is cited as being 20/60.37
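
By the usual convention that 20/20 corresponds to resolving 1 arcminute of detail, the cited 20/60 figure can be converted
to a resolvable angle and, for illustration, to a linear detail size at an arbitrary range (a sketch only; it ignores the
target-signature effects noted above).

import math

def snellen_to_arcmin(denominator):
    """20/X acuity corresponds to resolving detail of X/20 arcminutes."""
    return denominator / 20.0

def smallest_resolvable_detail_m(denominator, range_m):
    angle_rad = math.radians(snellen_to_arcmin(denominator) / 60.0)
    return range_m * math.tan(angle_rad)

for acuity in (20, 60):
    print(f"20/{acuity}: {snellen_to_arcmin(acuity):.0f} arcmin, "
          f"~{smallest_resolvable_detail_m(acuity, 500.0):.2f} m detail at 500 m")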

Clarence E. Rash (HMD Researcher) - FOV and resolution are two major trade-offs made in order to acquire the
capability for operationally useful nighttime flight. Reduced resolution is most likely a contributor to pilot reports
of visual fatigue. It also has a possible impact on flight tasks, specifically height estimation. Crowley et al.42 compared
the differences in 13 Army pilots’ ability to judge and maintain height above terrain using binocular unaided day vision,
40-degree FOV day vision, ANVIS monocular night time, ANVIS binocular night time, and AH-64 FLIR monocular
night time vision. Aircraft type was an AH-1 Cobra equipped with an Apache FLIR and extensive data collection
capability (radar altimeter). Instrument information or flight symbology on the FLIR image for altitude was removed.
The results showed that subjects performed poorly when asked to provide absolute altitude estimates under any
condition, but were more consistent in estimating changes in altitude. Performance with the AH-64 FLIR was
consistently worse than with the other viewing conditions. The authors attributed the more variable results with the
FLIR to poorer resolution and changing thermal conditions over the data collection period.

The major contributors to the imaging system resolution and ultimately the correlated visual acuity available to the
pilot’s eye are the sensor and the display (including relay optics). In any system, there is always the weakest link.
Ideally, this would be the eye. Such a system is said to be eye-limited. In the legacy Apache imaging system, the FLIR
sensor was the weakest link, making the system “sensor-limited.” Since the upgrade to the modernized FLIR sensor,
currently in progress, the system has transitioned to being “display-limited.” The IHADSS is in need of a redesign. With
the exception of a cathode-ray-tube upgrade, the optical design and the display system as a whole are of 1970s vintage.

Gregory Francis (Cognitive Psychologist) - As with most aspects of perception, the role of resolution for identifying
objects in an environment is quite complicated. Deviations from normal acuity are usually due to imperfections in the
optics of the eye, typically related to inflexibility in the lens. These imperfections lead to a blurring of the retinal image.
A reduction in resolution on a display has a quite different effect on visual discrimination. For example, consider the
images in Figure 5.43 The image on the far right of each row shows the original image. Moving from right to left, the
images are generated with lower resolution by averaging and combining nearby pixels. The image on the far left of each
row is six pixels wide and seven pixels high. Clearly, with fewer pixels (lower resolution), the images look less like the
original and are more difficult to recognize. However, the difficulty is not just a function of the number of pixels
because squinting the eyes will make the middle two images of each row look more similar to the original image. There
is, in fact, significant information in the pixelated images,44 but the visual system has trouble using that information
because the sharp edges between the pixels mask that information. Squinting blurs the sharp edges of the pixels and
allows the underlying information to become visible.
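
The paper does not state how the Figure 5 images were produced, but a plausible Python sketch is block averaging followed
by an optional blur to mimic squinting; the image below is a random stand-in rather than the photographs used in the figure.

import numpy as np

def block_average(image, block):
    """Downsample a 2-D grayscale image by averaging non-overlapping blocks,
    then re-expand it so the result keeps hard pixel edges at the original size."""
    h, w = image.shape
    h, w = h - h % block, w - w % block          # crop to a multiple of the block size
    small = image[:h, :w].reshape(h // block, block, w // block, block).mean(axis=(1, 3))
    return np.kron(small, np.ones((block, block)))

def box_blur(image, radius):
    """Crude separable box blur standing in for the optical blur of squinting."""
    kernel = np.ones(2 * radius + 1) / (2 * radius + 1)
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, rows)

img = np.random.rand(70, 60)          # stand-in image
pixelated = block_average(img, 10)    # roughly 6 x 7 effective pixels, as in Figure 5
softened = box_blur(pixelated, 5)     # blurring the pixel edges unmasks the content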

Figure 5. Decreases in resolution make images more difficult to recognize.43

Keith L. Hiatt (Flight Surgeon) - Resolution and FOV are most likely the weakest elements of the Apache imaging and
display system and are the cause of most of the complaints I have heard from my patients. The fact that visual acuity is
reduced to about 20/60 inherently makes the job of being a pilot that much harder. Vision is by far the most important
sense for an aviator (which is echoed in the fact that a large proportion of a flight physical is based on visual testing).
Reducing a pilot’s ability to see clearly causes a much higher cockpit workload and results in fatigue, headaches, eye
strain, and reduced performance. The requirement for increased pilot workload (secondary to decreased resolution)
results in difficulty maintaining situational awareness and may lead to accidents. As a flight surgeon, I have
unfortunately been on many accident investigations dealing with the Apache on routine training flights (peacetime) – a
large number of them were due to blade strikes or tail rotor strikes because the pilot was not aware of how close he was
to a tree or wire. Invariably, these accidents were attributed to human error regardless of the fact that the pilot’s visual
sense was severely hampered by the system he operated. When the Apache was conceived, it was designed primarily for
standoff missions where the tradeoff between increased distance from targets and objects and the loss of visual acuity
was not as big of a concern. When you then add the current operational parameters of dust, close air support in
urbanized areas, bright lights, enemy fire and friendly forces - workload may become overwhelming and lead to
accidents – regardless of how senior and proficient the pilot may be. The modernized FLIR sensor is certainly a step in
the right direction, but the whole 1970s system really needs to be upgraded for the kind of missions we are now called
upon to perform.

J. Kevin Heinecke (Apache Aviator) - Reduced resolution in the AH-64 legacy FLIR system has always been a
problem. Based upon ambient conditions, user skill level, and individual FLIR system nuances, you may have acceptable
resolution for a mission or merely be flying with a “green blob” on your monocle. Ambient conditions are worst at dusk
and dawn when the minimum resolvable temperature allows for a larger percentage of objects having a like temperature.
Like temperatures all show up as the same shade of green. Pilots need to be constantly vigilant with re-optimizing their
FLIR system throughout the night. Couple this fact with a pilot who is not the most skilled at optimizing the FLIR and
you can see where the problem of reduced visual resolution worsens. The final culprit is the system itself. Although each
system is built to a specification they all have individual performance levels. It is important to picture many parts
working in concert to provide the pilot the final image on the monocle. Any one of many parts can fail to be operating at
100% causing a diminished image. This being the case, AH-64 pilots have relied for years on flight symbology and
increased vigilance to maintain adequate flight safety. Unfortunately, it then becomes necessary to fly slower and lower to do
a proper reconnaissance. This problem seems to have been solved in the modernized FLIR system.



The modernized FLIR (fielded in two battalions at this time) allows for the positive detection of enemy combatants at
ranges that far exceed those of the legacy system. The system is such an improvement that its detection range exceeds the
weapons range of the aircraft, allowing much greater standoff prior to engagement. Stories of high-altitude loitering while
watching insurgents emplace IEDs (improvised explosive devices), all while remaining unobserved, demonstrate the improved
safety, stealth, and user capability of the system. If insurgents don’t know an aircraft is there, they cannot defend against the
aircraft. A new problem with the system is one of a paradigm shift in landing the aircraft. In the past, pilots relied on the
radar altimeter to advise when they were close to the ground; in addition, the closer they were to the ground, the clearer the
image became. Now pilots need to force themselves to maintain a proper descent rate because the image is so clear they think
they are closer to the ground than they actually are.

8. DISPLACED VISUAL INPUT EYE POINT

When flying the AH-64, the primary visual input for night and foul weather flight is the FLIR sensor. This sensor is
located in the nose turret approximately 9 feet (3 meters) forward and 3 feet (1 meter) below the pilot's design eye
position (Figure 6).


Figure 6. The position of the FLIR and other sensors on the nose of the AH-64.

Clarence E. Rash (HMD Researcher) - The exocentric positioning of the FLIR sensor has the advantage of providing
an unobstructed view of areas below the physical aircraft. This "see-through" capability is very useful when landing
must be made in cluttered or unfamiliar landing areas. On the other hand, this exocentric positioning of the sensor can
introduce problems of parallax, motion estimation, and distance estimation.22 Pilots also must learn how to manipulate
the aircraft from a point of view that is different than their visual system is used to.

Gregory Francis (Cognitive Psychologist) - In some respects, people are quite good at learning to operate with
displaced sensory information. Indeed, this ability is probably related to our skills as efficient tool users. People can
quickly learn how far they can reach with a pole, hammer, or sword. With extreme practice, the tool becomes an
extended “part” of the user. While it may seem that we precisely know the positions, sizes, and properties of our limbs,
research shows that this knowledge can be easily changed so that people believe their limbs are in incorrect positions.45
In a similar way, people can learn to operate devices from a viewpoint different from normal. In general, this is not a
problem, but it may require substantial training to become proficient with an unnatural viewpoint. Moreover, transitions
from one viewpoint to another may introduce problems. An important issue here is whether training provides the user
with knowledge about how to interpret the new viewpoint in terms of the original, or if the pilot literally sees
information in a new way from the new viewpoint. The former would suggest that transitions from one view to another
require cognitive changes that may take time to accomplish but are essentially all or none. The latter would suggest that
such transitions may involve a gradual change from one perceptual state to another. Errors would be common during
such changes.

Keith L. Hiatt (Flight Surgeon) - In many situations, most, if not all, of the visual input relied upon by the pilot to know
where he/she is in 4-dimensional space comes from the FLIR. The fact that the FLIR is exocentrically located roughly 3
meters forward of and 1 meter below the pilot's eye may very well lead to problems with spatial orientation/perception and
with estimation of distance and motion. However, with adequate training and proficiency in the use of the system, most
pilots are able to overcome the fact that what they are seeing is not from where they actually are, and they can then
mentally correct this visual input to re-orient themselves. As Mr. Heinecke has stated, failure to continually re-orient one's
own position to that of the actual helicopter can be catastrophic. I personally witnessed an AH-64 Longbow hovering at an
airfield drift aft until its tail rotor struck the cockpit of a parked (and thankfully unoccupied) sister airframe. Damage to
both aircraft was significant, but luckily no one was injured. Although I am not an engineer, I doubt that there is any place
on the airframe where the FLIR could be mounted that would not cause some of these issues.

J. Kevin Heinecke (Apache Pilot) - The problem of viewing an image from a vantage point displaced roughly 10 feet in
front of and 3 feet below oneself is, above all, one of adequate training. The pilot must always know how far back the tail
rotor is from the object being viewed just in front of the aircraft. This becomes an issue when transitioning into or out of a
battle position where natural and manmade obstructions are used for cover and concealment. The problem is at its worst
when pilots are hovering in such a battle position close to buildings and terrain, where a few moments of rearward or
sideward drift can be catastrophic. The problem of maintaining "situational awareness" improves with time and
training. The modernized FLIR has not changed this situation relative to the legacy system: the sensor is still mounted on
the nose of the aircraft, and pilots still need to "check the data" their brain is receiving and account for where
they really are in 3-dimensional space.
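As a back-of-the-envelope illustration of why even a few moments of unnoticed drift matter, the arithmetic below is a
sketch only; the drift rate, drift duration, and nose-to-tail-rotor distance are all assumed values, not AH-64 specifications.

# Back-of-the-envelope drift arithmetic; every value below is assumed and
# purely illustrative (none are AH-64 specifications).
KNOT_TO_M_PER_S = 0.5144

drift_rate_kts = 2.0            # assumed slow, easily missed drift rate
drift_time_s = 5.0              # assumed "few moments" of inattention
nose_to_tail_rotor_m = 15.0     # assumed distance from nose sensor to tail rotor

drift_m = drift_rate_kts * KNOT_TO_M_PER_S * drift_time_s
print(f"Drift in {drift_time_s:.0f} s at {drift_rate_kts:.0f} kt: {drift_m:.1f} m")
print(f"Clearance needed behind the viewed reference: {nose_to_tail_rotor_m + drift_m:.1f} m")

Even a two-knot drift sustained for a few seconds moves the aircraft on the order of five meters, which must be added to
the distance from the sensor's viewpoint back to the tail rotor when judging obstacle clearance.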

9. CONCLUDING STATEMENTS

After nearly three decades of fielding, the AH-64 Apache helicopter, with its integral monocular HMD, the IHADSS, has
been considered an operational success. Piloting and operating this aircraft in a military environment using the IHADSS
place extraordinary demands on the human visual system. In this paper, a panel of experts has discussed several of the
more demanding design issues associated with the IHADSS monocular HMD. From a subjective point of view, Apache
pilots, represented in this panel by Mr. Heinecke, have stated that they have compensated for the HMD's demands. From
an operational view, Heinecke, Hiatt, and Rash have shown that pilots appear to adapt, experiencing fewer visual issues
with the system as flight hours increase, and all authors agree that the system is in need of a technological upgrade.
From a more objective point of view, an investigation of causal factors associated with Apache helicopter accidents over
the period FY85-02 was conducted by Rash et al.46 Of the 228 accidents that occurred during this period, 21 (9.2%) were
categorized as ones in which the HMD and FLIR sensor played a major or subsidiary role in the accident sequence.
However, the IHADSS HMD was actually in use during only 98 of these 228 accidents. When only accidents in which
the IHADSS/FLIR system was identified as the major contributor were considered, they represented less than
1% of all AH-64 accidents and only 2% of the accidents in which the IHADSS was in use. The study concluded that the
IHADSS HMD was not a major contributor to AH-64 accidents.
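As a quick check, these percentages can be reproduced from the counts reported above. Note that the number of accidents
in which the IHADSS/FLIR system was the major contributor is not stated explicitly; the value of 2 used below is inferred
from the quoted "less than 1%" and "2%" figures and should be read as an assumption.

# Reproducing the accident percentages cited from Rash et al. (reference 46).
total_accidents = 228      # AH-64 accidents, FY85-02
hmd_flir_role = 21         # HMD/FLIR played a major or subsidiary role
ihadss_in_use = 98         # accidents during which the IHADSS was in use
major_contributor = 2      # assumed; inferred from the "less than 1%" and "2%" figures

print(f"Major or subsidiary role: {hmd_flir_role / total_accidents:.1%} of all accidents")
print(f"Major contributor:        {major_contributor / total_accidents:.1%} of all accidents")
print(f"Major contributor:        {major_contributor / ihadss_in_use:.1%} of IHADSS-in-use accidents")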

10. DISCLAIMER

The views, opinions, and/or findings contained in this report are those of the authors and should not be construed as an
official Department of the Army position, policy, or decision unless designated by other documentation.

11. REFERENCES

1. Rash, C.E. (2008). A 25-Year Retrospective Review of Visual Complaints and Illusions Associated with a
Monocular Helmet-Mounted Display. Health and Safety Aspects of Visual Displays, Displays, Vol. 29, 2, 70-80.
2. Hiatt, K.L., Rash, C.E., and Heinecke, J.K. (2008). Visual Issues Associated with the Use of the Integrated Helmet
and Display Sighting System (IHADSS) in the Apache helicopter: Three Decades in Review, in Proceedings of
SPIE, Vol. 6955.
3. Hiatt, K.L., Braithwaite, M.G., Crowley, J.S., Rash, C.E., van de Pol, C., Ranchino, D.J., Statz, W.K., and Eke., A.J.
(2002). The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Cohort Study of Apache AH
MK1 Pilots Initial Report. USAARL Report No. 2002-04. Fort Rucker, AL: U.S. Army Aeromedical Research
Laboratory.
4. Rash, C.E., van de Pol, C., Crowley, J.S., Isaak, M.I., Lewis, L.J., Hiatt, K.L., Braithwaite, M.G., and Ranchino, D.J.
(2004). The Effect of a Monocular Helmet-Mounted Display on Aircrew Health: A Cohort Study of Apache AH
Mk1 Pilots Two-Year Baseline Review. USAARL Report No. 2004-18. Fort Rucker, AL: U.S. Army Aeromedical
Research Laboratory.
5. Francis, G., Rash, C.E., Adam, G., LeDuc, P., and Archie, S. (2003). A Comparison of AH-64D and OH-58D Pilot
Attitudes Toward Glass Cockpit Crewstation Designs. USAARL Report No. 2003-02. Fort Rucker, AL: U.S. Army
Aeromedical Research Laboratory.
6. Rash, C.E., Suggs, C.L., LeDuc, P.A., Adam, G.A., Manning, S.D., Francis, G., and Noback, R. (2001). Accident
Rates in Glass Cockpit Model U.S. Army Rotary-Wing Aircraft. Fort Rucker, AL: U.S. Army Aeromedical Research
Laboratory. USAARL Report No. 2001-12.
7. Francis, G., and Rash, C.E. (2005). Analysis and Design of Keyboards for the AH-64D Helicopter. USAARL Report
No. 2005-11. Fort Rucker, AL: U.S. Army Aeromedical Research Laboratory.
8. Hiatt, K.L., Rash, C.E., Harris, E.S., and McGilberry, W.H. (2004). Apache Aviator Visual Experiences with the
IHADSS Helmet-Mounted Display in Operation Iraqi Freedom. USAARL Report No. 2004-20. Fort Rucker, AL:
U.S. Army Aeromedical Research Laboratory.
9. Hiatt, K.L. (1997). Lyme Disease in a Rotary-Wing Aviator. Presented at the 68th Annual Scientific Meeting of the
Aerospace Medical Association, Chicago, IL, 11-15 May.
10. Hiatt, K.L. (1997). The Association between High Time Helmet Mounted Systems Use and Spinal Conditions in
Rotary-Wing Aviators. Presented at the NDIA Symposia, Framingham, MA, 2-5 Dec.
11. Hiatt, K.L. (1998). Health Effects of Helmet Mounted Systems. Presented at the 1st Combined Operational
Aeromedical Problems Course, Fort Walton Beach, FL, 22-28 Feb.
12. Hiatt, K.L. (2000). Helmet Mounted Systems Use and Spinal Conditions in Army Aviators. U.S. Army Medical
Department Journal, Vol. PB 8-00, 47-53.
13. Hiatt, K.L. and Braithwaite, M.G. (2002). An Internal Validation of the British Army Air Corps Spatial
Disorientation Sortie. Presented at NATO R&T Organization – Human Factors and Medicine Panel, Spatial
Disorientation in Military Vehicles: Causes, Consequences, and Cures, La Coruna, Spain, 15-17 May.
14. Heinecke, J.K. (2008). Apache Aviator Evaluation of Dual-Technology Night Vision Systems in Operation Iraqi
Freedom (OIF) Urban Combat (Master’s Thesis) USAARL Report No. 2007-05. Fort Rucker, AL: U.S. Army
Aeromedical Research Laboratory.
15. Laycock, J., and Chorley, R.A. (1980). The electro-optical display/visual system interface: Human factors
considerations. Advancements on visualization techniques, AGARD, 3/1 – 3/15.
16. Conticelli, M., and Fujiwara, S. (1964). Visuo-motor reaction time under differential binocular adaptation. Atti della
Fondazione Giorgio Ronchi.
17. Tredici, T.J. (1985). Ophthalmology in aerospace medicine. In: Dehart, R.L., editor, Fundamentals of Aerospace
Medicine, Philadelphia: Lea and Febiger, 465-510.
18. Beaton, A. (1985). Left side, right side: A review of laterality research. New Haven: Yale University Press.
19. Crowley, J.S. (1989). "Cerebral laterality and handedness in pilots: performance and selection implications,"
USAFSAM-TP-88-11, Brooks AFB, Texas: School of Aerospace Medicine, USAF Systems Command.
20. Verona, R.W., Johnson, J.C., and Jones, H. (1979). Head aiming/tracking accuracy in a helicopter environment.
USAARL Report No. 79-9, Fort Rucker, AL.
21. Bennett, C.T., and Hart, S.G. (1987). "PNVS-related problems: pilots' reflections on visually coupled systems, 1976-
1987," Working paper. Moffett Field, CA, NASA-Ames Research Center.
22. Hale, S., and Piccione, D. (1989). Pilot performance assessment of the AH-64 helmet display unit. Aberdeen
Proving Ground, MD: U.S. Army Human Engineering Laboratory. Technical Note 1-90.
23. Crowley, J.S. (1991). Human factors of night vision devices: Anecdotes from the field concerning visual illusions
and other effects. Fort Rucker, AL: U.S. Army Aeromedical Research Laboratory. USAARL Report No. 91-15.
24. Meng, M., and Tong, F. (2004). Can attention selectively bias bistable perception? Difference between binocular
rivalry and ambiguous figures. Journal of Vision, 4(7), 539–551, http://www.journalofvision.org/4/7/2/,
doi:10.1167/4.7.2
25. Fischer, E., Haines, R. F., and Price, T. A. (1980). Cognitive issues in head-up displays. NASA Technical Paper
1711, NASA Ames Res. Ctr., Moffett Field, CA.
26. Wickens, C. D. and Long, J. (1995). Object versus space-based models of visual attention: Implications for the
design of head-up displays. Journal of Experimental Psychology: Applied. 1, 179-193.
27. Neisser, U. (1979). The control of information pickup in selective looking. In Perception and its Development: A
Tribute to Eleanor J Gibson. Ed. A D Pick (Hillsdale, NJ: Lawrence Erlbaum Associates) pp 201-219.
28. Cutting, J. E. and Vishton, P. M. (1995). Perceiving layout and knowing distances: The integration, relative potency,
and contextual use of different information about depth. In W. Epstein and S. Rogers (Eds.), Handbook of perception
and cognition: Perception of space and motion (pp. 69-117). New York: Academic Press.
29. Berry, J., Dyer, R., Park, R., Sellers, A., and Talton, M. (1984). PNVS Handbook. Fort Rucker, AL: U.S. Army
Training and Doctrine Command.
30. Zuckerman, J. (1954). Perimetry, J.B. Lippincott Company, Philadelphia, PA, p. 237.
31. Sandor, P. B., and Leger, A. (1991). Tracking with a restricted field of view: Performance and eye-head coordination
aspects. Aviation, Space, and Environmental Medicine, Vol. 66, No. 6, pp. 1026-1031.
32. Venturino, M., and Wells, M. J. (1990). Head movements as a function of field-of-view size on a helmet-mounted
display. Proceedings of the Human Factors Society 34th Annual Meeting. Santa Monica, CA: Human Factors
Society, Inc., pp. 1572-1576.
33. Kenyon, R. V., and Kneller, E. W. (1992). Human performance and field of view. Proceedings of SID, Vol. XXIII,
pp. 290-293.
34. Wells, M. J., and Venturino, M. (1989). Head movements as a function of field-of-view size on a helmet-mounted
display. Proceedings of the Human Factors Society 33rd Annual Meeting. Santa Monica, CA: Human Factors
Society, Inc., pp. 91-95.
35. Kasper, E. F., Haworth, L. A., Szoboszlay, Z. P., King, R. D., and Halmos, Z. L. (1997). Effects of in-flight field of
view restriction on rotorcraft pilot head movement. Head-Mounted Displays II, Proceedings of SPIE, Vol. 3058, pp.
34-45.
36. Haworth, L. A., Szoboszlay, Z. P., Kasper, E. F., DeMaio, J., and Halmos, Z. L. (1996). In-flight simulation of
visionic field-of-view restrictions on rotorcraft pilot’s workload, performance and visual cuing. 52nd Annual Forum
of the American Helicopter Society, Washington, DC.
37. Greene, D. A. (1988). Night vision pilotage system field-of-view (FOV)/resolution tradeoff study flight experiment
report. Fort Belvoir, VA: U.S. Army Night Vision Laboratory. NV 1-26.
38. Lohmann, R. A. and Weisz, A. Z. (1989). Helmet-Mounted Displays for helicopter pilotage: design configurations
tradeoffs, analysis, and test. Helmet-Mounted Displays, Proceedings of SPIE, Vol. 1116, pp. 27-32.
39. Seeman, J., De Maio, J., Justice, S., Wasson, J., Derenski, P., Hunter, M., and Walrath, L. (1992). Advanced
helicopter pilotage visual requirements. Proceedings of the American Helicopter Society, Vol. 48, pp. 233-252.
40. Edwards, K. L., Buckle, J. W., Dotherty, M. J., Lee, L. J., Pratty, A. C., and White, J. F. (1997). An operationally-
applicable objective method for the analysis and evaluation of the flights of helicopter mission task elements during
field-of-view trials. Head-Mounted Displays II, Proceedings of SPIE, Vol. 3058, pp. 235-251.
41. Francis, G., and Rash, C.E. (2008). Cognitive factors in helmet-mounted displays. In C. E. Rash, M. Russo and T.
Letowski (Eds.), Helmet-Mounted Displays: Sensory, Perceptual and Cognitive Issues. Unpublished manuscript.
42. Crowley, J. S., Haworth, L., Szoboszlay, Z., and Lee, A. (1997). Helicopter pilot estimation of self-altitude in a
degraded visual environment. Presented at the Aerospace Medical Association Annual Scientific Meeting, Atlanta,
GA.
43. Munger, D. and Munger, G. (2007). Casual Fridays: We can identify "mystery faces" just 6 pixels wide! Cognitive
Daily, http://scienceblogs.com/cognitivedaily/2007/03/casual_fridays_we_can_identify.php
44. Yip, A. and Sinha, P. (2002). Role of color in face recognition, Perception, 31, 995–1003.
45. Tsakiris, M. and Haggard, P. (2005). The rubber hand illusion revisited: Visuotactile integration and self-attribution.
Journal of Experimental Psychology: Human Perception and Performance, 31, 80-91.
46. Rash, C. E., Reynolds, B. S., Stelle, J. A., Peterson, R. D., and LeDuc, P. A. (2003). The Role of the Pilot's Night
Vision System (PNVS) and Integrated Helmet and Display Sighting System (IHADSS) in AH-64 Apache Accidents.
Fort Rucker, Alabama: U. S. Army Aeromedical Research Laboratory. USAARL Report 2003-08.
