Abstract
Eye tracking technology is becoming easier and cheaper to use, resulting in its increasing application to numerous
fields of research. The data collected during an eye tracking experiment can be analyzed by statistical methods
and/or with visualization techniques. Visualizations can reveal characteristics of fixations, saccades, and scanpath
structures. In this survey, we present an overview of visualization techniques for eye tracking data and describe their
functionality. We classify the visualization techniques using nine categories. The categories are based on properties
of eye tracking data, including aspects of the stimuli and the viewer, and on properties of the visualization techniques.
The classification of about 90 publications, including technical as well as application papers with modifications of
common visualization techniques, is described in more detail. Finally, we present possible directions for further
research in the field of eye tracking data visualization.
Categories and Subject Descriptors (according to ACM CCS): [General and reference]: Document types—Surveys and
overviews [Human-centered computing]: Visualization—Visualization techniques [Human-centered computing]:
Visualization—Visualization design and evaluation methods
© The Eurographics Association 2014.
DOI: 10.2312/eurovisstar.20141173
T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf & T. Ertl / State-of-the-Art of Visualization for Eye Tracking Data
Figure 1: Histogram of all publications included in this survey that are relevant for eye tracking data visualization techniques. The number of published articles, conference papers, and books has strongly increased during the last decade.
an increasingly important role in future eye tracking analysis. Thus, the contribution of the survey is manifold: First, based on a taxonomy for eye tracking terms, a classification system for stimuli and visualization techniques is formulated. Second, we assign papers from the literature to those classes. Third, based on these results, we identify questions for the future development of eye tracking visualization techniques.

Evaluation has become an important step in the development of new visualization techniques. Eye tracking is one means of evaluating those newly developed approaches. Thus, analyzing eye tracking data with visualization techniques is just a logical step that followed. The number of published articles, conference papers, and books about eye tracking visualizations has strongly increased during the last decade. Figure 1 shows a histogram of all relevant publications with visualization techniques for eye tracking data that are included in this survey. Especially after the first IEEE Symposium on Information Visualization in 1995 [GE95], the number of visualization techniques for eye tracking has steadily increased. However, a comprehensive survey of visualization techniques for eye tracking that structures and discusses the approaches is still missing. Since eye tracking is used in a wide application range and new challenges arise, such as eye tracking in 3D environments, an exhaustive survey is necessary to find missing visualization techniques.

Due to the wide application range of eye tracking, the development of visualization techniques has become an interdisciplinary challenge that requires comprehensive literature research across many different disciplines. To satisfy this requirement, we reviewed the main visualization and visual analytics conferences (VIS, EuroVis, EuroVA), the ACM Digital Library, and the IEEE Xplore Digital Library, but also the top conference on eye tracking research (ETRA), and main journals of usability research (Journal of Human Factors), psychology (Behavior Research Methods), and cognitive sciences (Cognitive Science Journal). This literature research resulted in about 90 papers on eye tracking data visualization. To allow the reader to better find adequate visualization techniques for his or her analysis, we subdivided the visualization techniques into three main classes: point-based visualization techniques, AOI-based visualization techniques, and visualization techniques using both. Next, we tagged the visualization techniques with information about stimulus type, in-context visualizations, animated visualizations, interactive visualizations, and active stimulus content. A detailed description of this classification will be given in the next section. The main part of the paper is the presentation of the different visualization techniques with respect to this classification system. At the end, we will discuss possibilities for future research.

2. Taxonomy

Before we present our taxonomy classifying visualization techniques, we first define the basic terminology related to eye tracking data (Section 2.1). The taxonomy is subdivided into two categories: those related to stimuli of eye tracking experiments (Section 2.2) and those related to visualization techniques (Section 2.3). Other taxonomy papers target areas different from ours, e.g., taxonomies restricted to the stimulus alone [SNDC09], the dimensionality of the visualization [Špa08], or the eye tracking data [RTSB04]. Our taxonomy includes more categories to obtain a fine-grained categorization of the visualization techniques and to find out what visualization techniques might be missing.

2.1. Terminology

Eye tracking devices record the gaze points of a participant on a stimulus. The recording rates depend on the characteristics of the devices. State-of-the-art devices allow rates between 60 and 120 Hz; some newer high-speed eye trackers support recording rates of 240 Hz or more. The recording rate specifies how many gaze points are recorded per second. However, eye tracking research usually does not work with raw gaze points directly. Rather, different data types, shown in Figure 2, can be distinguished; they will be defined in more detail in the following. For each data type, different metrics are used in an eye tracking analysis. The most important metrics for each data type will be explained shortly. A comprehensive collection of eye tracking metrics can be found in the book by Holmqvist et al. [HNA∗11].
Figure 2: Gaze points are spatially and temporally aggregated into fixations. Fixations are connected by saccades and have a certain duration, represented by the radius. A temporal sequence of fixations within an AOI is called a gaze. An AOI is a region of specific interest on the stimulus. A saccade from one AOI to the next is called a transition. A complete sequence of fixations and saccades is called a scanpath.
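The spatio-temporal aggregation of gaze points into fixations sketched in Figure 2 can be illustrated with a simple dispersion-threshold approach. The following sketch is not taken from any of the surveyed tools; the function name and data layout are our assumptions, and the thresholds follow the ranges reported in this section (an aggregation area of a few tens of pixels and a minimum duration of roughly 200 ms).

```python
# Illustrative sketch: aggregating raw gaze points into fixations with a
# dispersion-threshold criterion. A window of samples grows as long as its
# spatial spread stays below max_dispersion; windows that last at least
# min_duration become fixations (centroid position plus duration).

def detect_fixations(gaze, max_dispersion=30.0, min_duration=200.0):
    """gaze: list of (t_ms, x_px, y_px) samples, sorted by time."""
    fixations = []
    i, n = 0, len(gaze)
    while i < n:
        j = i
        # Grow the window while the bounding box stays within the limit.
        while j + 1 < n:
            xs = [p[1] for p in gaze[i:j + 2]]
            ys = [p[2] for p in gaze[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > max_dispersion:
                break
            j += 1
        duration = gaze[j][0] - gaze[i][0]
        if duration >= min_duration:
            window = gaze[i:j + 1]
            fixations.append({
                "x": sum(p[1] for p in window) / len(window),
                "y": sum(p[2] for p in window) / len(window),
                "duration": duration,
            })
            i = j + 1
        else:
            i += 1
    return fixations

# 60 Hz samples: a stable cluster, then a jump to a second cluster.
gaze = [(t * 16.7, 100.0, 100.0) for t in range(20)] + \
       [(t * 16.7, 400.0, 300.0) for t in range(20, 40)]
fixations = detect_fixations(gaze)
print(len(fixations))  # → 2, one fixation per cluster
```

Real eye tracking software additionally distinguishes saccades and smooth pursuits during this event detection; this sketch only shows the basic fixation aggregation.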
Fixation. A fixation is an aggregation of gaze points. Gaze points are aggregated based on a specified area and timespan. The aggregation area is usually about 20 to 50 pixels, the timespan between 200 and 300 ms [HNA∗11]. Common metrics for fixations are the fixation count (i.e., the number of fixations), the fixation duration in milliseconds, and the fixation position given as x- and y-coordinates in pixels.

Saccade. Saccades describe a rapid eye movement from one fixation to another. They typically last about 30 to 80 ms and are the fastest movement the human body can perform. During this timespan, visual information is suppressed [HNA∗11]. Typical metrics are the saccadic amplitude (i.e., the distance the saccade traveled), the saccadic duration in milliseconds, and the saccadic velocity in degrees per second.

Smooth Pursuit. During the presentation of dynamic stimuli, smooth pursuits can occur. They occur unintentionally and only if viewers follow a movement in a presented stimulus. The velocity of the eye during smooth pursuits is about 10 to 30 degrees per second [HNA∗11].

Scanpath. A sequence of alternating fixations and saccades is called a scanpath. A scanpath can give information about the search behavior of a participant. An ideal scanpath would be a straight line to a specified target [CHC10]. Deviance from this ideal scanpath can be interpreted as poor search [GK99]. Scanpath metrics include the convex hull (i.e., which area is covered by the scanpath), the scanpath length in pixels, or the scanpath duration in milliseconds.

Stimulus. A stimulus can be any visual content presented to participants during an eye tracking experiment. Typically, static and dynamic stimuli, with either active or passive content, are differentiated. Usually, 2D stimuli are presented to participants. However, in recent years, 3D stimuli have become a focus of research as well.

Area of Interest. Areas of interest (AOIs) or regions of interest (ROIs) are parts of a stimulus that are of high importance for a hypothesis. In 3D stimuli, AOIs can also be models or objects of interest (OOIs). For dynamic stimuli, dynamic AOIs have to be defined. AOIs can be created either beforehand or after an eye tracking experiment. Usually, AOIs are created based on the semantic information of the stimulus. A transition is a movement from one AOI to another. Typical metrics for AOIs are the transition count (i.e., the number of transitions between two AOIs), the dwell time within an AOI in milliseconds, and the AOI hit, which defines whether a fixation is within an AOI or not.

2.2. Stimulus-Related Categories

The first part of the taxonomy is based on a categorization of stimuli. The type of stimulus can have a great influence on the choice and design of a visualization technique. The
first categorization divides stimuli into point-based and AOI-based data. Furthermore, stimuli can be distinguished as being static or dynamic, with active or passive content, and representing 2D or 3D content. The viewing task would be another type for classification; however, it is not included in our taxonomy.

Viewing Task. Although a given task during an eye tracking experiment has a significant influence on the eye movement of viewers [AHR98], we decided not to use the task as a categorization factor, since many visualization techniques do not depend explicitly on the given task for a successful interpretation of the data.
image in the sequence. Additionally, aesthetic criteria for animation [BBD09] have to be applied to preserve a viewer's mental map.

Single User Versus Multiple Users. An important factor for eye tracking analysis is the number of users represented in a visualization technique. Representing single users allows the analyst to inspect the viewing behavior of one participant. However, visualizing multiple users at the same time can allow one to find strategies of groups, but these representations might suffer from visual clutter if too much data is represented at the same time. Here, optimization strategies, such as averaging or bundling of lines, might be used to achieve better results [HFG06, HEF∗13].

can be a convenient option for explorative data analysis. Algorithmic concepts such as data mining or knowledge discovery in databases, combined with visualization techniques and the perceptual abilities of the human viewer, can be a good means to uncover common structures or strategies among the study participants. In the eye tracking domain, different visual analytics systems have been developed over the last years. One example of how existing visual analytics systems can be used with eye tracking data is described by Andrienko et al. [AABW12]. The authors investigated how visual analytics approaches for geographic information science (GIScience) can be adapted for analyzing spatio-temporal data.
Table 1: List of all references that introduced a new visualization technique, or an improvement or adaptation of an existing one. The references are classified by the visualization-related and stimulus-related categories described in Sections 2.2 and 2.3: spatial, temporal, or spatio-temporal; animated or static; interactive or non-interactive; in-context or not in-context; single user or multiple users; 2D or 3D visualization; and static or dynamic, 2D or 3D, active or passive stimulus content. The table groups the techniques into point-based and AOI-based entries; the listed references include Grindinger et al. [GDS10], Goldberg & Helfman [GH10c], Bojko [Boj09], Velichkovsky & Hansen [VH96], Wooding [Woo02], Latimer [Lat88], Kurzhals & Weiskopf [KW13], Paletta et al. [PSF∗13b], Noton & Stark [NS71b], Pfeiffer [Pfe12], Špakov & Räihä [ŠR08], Beymer & Russel [BR05], Räihä et al. [RAM∗05], Kim et al. [KDX∗12], Crowe & Narayanan [CN00], Holsanova [Hol01], Itoh et al. [ITS00], Pellacini et al. [PLG06], and Kurzhals et al. [KHW14].
Figure 3: Example of a line chart, bar chart, and scatter plot. Each graph represents the same data set.
visualization technique allows a visual scanpath comparison to measure the similarity of aggregated scanpaths [GDS10]. Additionally, a general overview of scanning tendencies can be seen, such as downward and upward scanning, or horizontal shifts [GH10c]. However, the separation into x- and y-coordinates makes it difficult to perceive the combined behavior in the two spatial dimensions.

The aggregation of fixations over time and/or participants is known under the terms attention map, fixation map, or heat map and can be found in numerous publications as a summarizing illustration of the analyzed data. The main purpose of attention maps is to obtain an overview of the eye movements and identify regions on the stimulus that attract much attention; the latter is often used to determine AOIs. There are numerous papers describing how to create attention maps (e.g., [ŠM07, Bli10]).
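The basic construction of such an attention map can be sketched as follows: each fixation adds a Gaussian kernel to a grid over the stimulus. This is an illustrative sketch only; the grid size, kernel width, and the duration weighting are our assumptions, and real tools differ in kernel choice and normalization.

```python
# Illustrative attention map sketch: accumulate a duration-weighted Gaussian
# kernel per fixation into a grid covering the stimulus. Hot cells mark
# regions that attracted much visual attention.
import math

def attention_map(fixations, width, height, sigma=15.0):
    """fixations: list of (x, y, duration_ms). Returns a height x width grid."""
    grid = [[0.0] * width for _ in range(height)]
    for fx, fy, dur in fixations:
        r = int(3 * sigma)  # evaluate the kernel only within 3 sigma
        for gy in range(max(0, int(fy) - r), min(height, int(fy) + r + 1)):
            for gx in range(max(0, int(fx) - r), min(width, int(fx) + r + 1)):
                d2 = (gx - fx) ** 2 + (gy - fy) ** 2
                grid[gy][gx] += dur * math.exp(-d2 / (2 * sigma ** 2))
    return grid

heat = attention_map([(40, 30, 300), (120, 60, 100)], width=160, height=80)
# The longer fixation dominates: the hottest cell lies at its position.
peak = max((v, x, y) for y, row in enumerate(heat) for x, v in enumerate(row))
print(peak[1], peak[2])  # → 40 30
```

In practice, the resulting grid would be color-mapped and blended over the stimulus image to produce the familiar heat map overlay.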
Figure 9: Different types of scaled traces, where the scanpath representation is shown without fixation durations. In (a) the normal scaled trace is shown, the line type in (b) is changed from light to dark, and in (c) multiple colors are used for each saccade. In (d) the line thickness changes from thin to thick. Figure reprinted with kind permission from Goldberg and Helfman [GH10c].

Another challenge of scanpath visualization is how to show 3D fixation information. Basically, there are two approaches. The first one, shown in Figure 10, is to visualize scanpaths over the original stimulus [DMC∗02, SND10b, Pfe12, PSF∗13a]. The other one is to warp the stimulus into a 2D representation and to draw scanpath lines on this 2D image [RTSB04]. Besides the question of how to adequately show 3D data on a 2D computer screen, which might lead to data loss, visualizations of scanpaths from 3D data have the same limitations as their 2D counterparts.

4.4. Space-Time Cube

As an alternative spatio-temporal visualization approach, space-time cube (STC) visualizations are commonly used in various research fields [Häg82, Kra03]. For the application to eye tracking data, the 2D spatial domain of the stimulus is extended by a third, temporal dimension. This representation provides an overview of the complete data set and can be applied to static [LÇK10] and dynamic stimuli [DM98, KW13]. With the STC, scanpaths, fixations, and cluster information are visualized statically, allowing a direct identification of interesting timespans that would otherwise require a sequential search in the data.

Figure 12 shows an STC for the visualization of fixations and gaze clusters of multiple participants, recorded from a dynamic stimulus. A sliding video plane along the temporal dimension is applied to relate timespans with the dynamic content of the stimulus. Since this 3D visualization bears issues resulting from occlusion, distortion, and inaccurate depth perception, 2D projections of the data can be applied to walls on the sides of the STC.

The main advantage of the STC in comparison to alternative visualization techniques is the direct overview of data from multiple participants, which allows an efficient identification of important AOIs. To this point, the STC has been applied to data from multiple participants for synchronizable stimuli only. The application of this visualization technique to asynchronous data (i.e., head-mounted eye tracking data) has not been investigated so far, since it bears additional issues with the spatio-temporal coherence between participants, and an AOI-based analysis of this kind of data provides more effective approaches.

5. AOI-Based Visualization Techniques

Other than the point-based visualization techniques, AOI-based visualization techniques use additional information of the recorded fixation data that annotates regions or objects on the stimulus that are of special interest to the user. The annotation of AOIs in a static stimulus is often solved by defining bounding shapes around an area or an object. Automatic fixation clustering algorithms are also a common
Figure 11: Saccade plots represent the horizontal and vertical positions of saccades on the four sides of a stimulus. Each position is represented by a dot, and color coding is used to distinguish between the different saccades. The distance of the dot from the stimulus represents the length of the saccade. Whether a dot is placed on the left or right side, or on the top or bottom, depends on the position of the saccade in the stimulus. For example, if the saccade is in the top part of the stimulus, it is shown above the stimulus. Figure based on [BSRW14].
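The mapping behind such a saccade plot can be sketched as a small function that assigns each saccade to a side and a distance. This is an illustrative simplification: the side rule used here (the stimulus border nearest to the saccade's midpoint) is our own reading, not the exact placement rule of [BSRW14], and the function name and data layout are assumptions.

```python
# Sketch of a saccade plot mapping: each saccade becomes a dot outside the
# stimulus, on the side matching its position in the stimulus, at a distance
# proportional to the saccade length.
import math

def saccade_plot_dot(start, end, width, height):
    """start, end: (x, y) fixation positions in pixels. Returns (side, length)."""
    mx, my = (start[0] + end[0]) / 2, (start[1] + end[1]) / 2
    length = math.hypot(end[0] - start[0], end[1] - start[1])
    # The stimulus border closest to the saccade's midpoint decides the side.
    borders = {"left": mx, "right": width - mx, "top": my, "bottom": height - my}
    side = min(borders, key=borders.get)
    return side, length

# A short saccade in the top part of an 800x600 stimulus.
side, length = saccade_plot_dot((390, 90), (410, 110), 800, 600)
print(side)  # → top
```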
approach to identify AOIs on a stimulus [PS00, SD04]. With AOI information, various metrics can be applied to the data [PB06, JK03], depending on the user's research questions. A simple in-context approach is an overlay of AOI boundaries on the stimulus with the values of the chosen metric in each AOI (e.g., dwell time in an AOI [RVM12]). The visualization techniques in this section mainly visualize temporal aspects of the data, or relations between AOIs.

5.1. Timeline AOI Visualizations

Similar to point-based data, timelines can also be applied to show temporal aspects of AOI-based data. As in the case of point-based timeline visualizations, time is again represented as one axis of a coordinate system. The other axis can represent AOIs or participants.

If AOIs are represented on the second axis, either one user, averaged information of multiple users, or multiple users shown separately can be displayed. In general, all of these techniques represent the AOIs on separate timelines placed horizontally or vertically next to each other. A visualization technique representing only one user at a time is shown in Figure 13. Here, either fixations can be represented [RAM∗05] or the timespan of an AOI visit can be shown with a rectangle [CN00, Hol01]. The first variant is similar to a scanpath, as each fixation is represented individually. Therefore, showing multiple participants on top of each other would lead to visual clutter. However, for investigating individual participants, for example in reading tasks [ŠR08, BR05] where each word represents one AOI, this technique can be helpful as it allows one to see backtracks (i.e., backward movements to re-read words). Furthermore, attention maps allow fixation frequencies on visual regions of interest to be displayed over time [Coc09]. There is a condensed version of this visualization technique [KDX∗12] that uses only one horizontal axis for time. Each fixation is displayed as a circle on the horizontal axis. The radius of the circle corresponds to the fixation duration, and the color of each fixation corresponds to the AOI. This visualization indicates how long and how often an AOI was visited and how many fixations belong to an AOI.

Figure 12: Space-time cube visualization for video eye tracking data. A temporal dimension (green) extends the spatial dimension of the video to a static overview of the data. Gaze clusters are generated automatically from the gaze data of all participants. The gaze points are projected onto the sidewalls of the space-time cube. A frame-by-frame navigation is realized to be able to investigate interesting frames. Figure based on [KW13].

Figure 13: The time plot (on the right) uses a horizontal timeline for each AOI; the AOIs are highlighted as rectangles on the stimulus (on the left). Each fixation is drawn as a circle at its point in time, placed according to its AOI. The circle radius indicates the fixation duration. The fixations are connected by a line showing the path. Figure reprinted with kind permission from Räihä et al. [RAM∗05].

Figure 14: Parallel scanpath visualization maps time onto the y-axis and the AOIs of a stimulus onto the x-axis. For each participant the fixations within an AOI are displayed according to time, and the saccades are represented by lines. Figure based on [RCE12].

Representing averaged eye tracking data makes it possible to show data of multiple participants in the same visualization without causing visual clutter. This can be represented by a timeline for each AOI where the AOI visit is indicated by a rectangle [KHW14], or by using AOI Rivers [BKW13], shown in Figure 15. Here, the time spent in each AOI and the distribution amongst participants can be seen.

When each participant is represented individually, a timeline for each is shown. Here, a representation with rectangles [HNA∗11, WFE∗12] or as a scanpath can be used [ITS00, RCE12]. The second visualization, shown in Figure 14, allows a visual scanpath comparison when applying statistical measures [RHB∗14].

When participants are used for the second dimension, AOI hits are represented as rectangles on the timeline. This visualization technique is called a scarf plot [RD05]. It can be extended by displaying each fixation with its corresponding duration separately, the color of the fixation indicating the AOI it belongs to [RHOL13]. Participant groups instead of individual participants can be visualized by displaying AOIs with a thumbnail image on the x-axis [TTS10]. This can be used both for static and dynamic stimuli. The scarf plot can be condensed even more by placing the scarf lines of all participants belonging to one group next to each other. A scarf plot technique can also be combined
Figure 15: Based on ThemeRiver by Havre et al. [HHWN02], the AOI Rivers display the change of fixation frequencies for each
AOI and transitions between AOIs. Time is represented on the x-axis, and each AOI is color-coded individually. Figure based
on [BKW13].
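The quantity underlying such an AOI River, the fixation frequency per AOI over time, can be sketched as a simple binned count. This is an illustrative sketch only: the bin width, function name, and data layout are our assumptions, and the actual technique additionally encodes transitions between AOIs as flows between the bands.

```python
# Sketch of the data behind an AOI River: for each time bin, count the
# fixations falling into each AOI, accumulated over all participants.
from collections import Counter

def aoi_frequencies(fixations, bin_ms=500):
    """fixations: list of (aoi_name, start_ms). Returns {bin_index: Counter}."""
    bins = {}
    for aoi, start in fixations:
        b = int(start // bin_ms)
        bins.setdefault(b, Counter())[aoi] += 1
    return bins

data = [("A", 0), ("A", 100), ("B", 300), ("B", 600), ("A", 900)]
freq = aoi_frequencies(data)
print(freq[0]["A"], freq[0]["B"], freq[1]["A"], freq[1]["B"])  # → 2 1 1 1
```

Stacking these per-bin counts as band widths along the time axis, one band per color-coded AOI, yields the river layout shown in Figure 15.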
Figure 17: The upper part shows AOI timelines with attention histograms. The lower part shows scarf plots of ten participants.
For each participant a timeline is shown with colored timespans that correspond to the colors of visited AOIs. Black spaces
indicate that no AOI was looked at. Figure based on [KHW14].
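The data behind one scarf line can be sketched as follows: each fixation is mapped to the AOI it hits (or to none, yielding the black spaces mentioned above), and consecutive fixations in the same AOI are merged into one colored timespan. This is an illustrative sketch; the rectangular AOIs, function names, and data layout are our assumptions.

```python
# Sketch of scarf plot data for one participant: map fixations to AOI hits
# and merge adjacent same-AOI fixations into timeline segments.

def aoi_hit(aois, x, y):
    """aois: {name: (x0, y0, x1, y1)}. Returns the hit AOI name or None."""
    for name, (x0, y0, x1, y1) in aois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return None

def scarf_segments(fixations, aois):
    """fixations: list of (x, y, start_ms, end_ms), sorted by time.
    Returns merged (aoi_or_None, start, end) segments for one scarf line."""
    segments = []
    for x, y, start, end in fixations:
        aoi = aoi_hit(aois, x, y)
        if segments and segments[-1][0] == aoi and segments[-1][2] == start:
            prev = segments.pop()  # extend the previous same-AOI segment
            segments.append((aoi, prev[1], end))
        else:
            segments.append((aoi, start, end))
    return segments

aois = {"A": (0, 0, 100, 100), "B": (200, 0, 300, 100)}
fixes = [(50, 50, 0, 200), (60, 40, 200, 450),
         (250, 30, 450, 700), (150, 50, 700, 900)]
print(scarf_segments(fixes, aois))
# → [('A', 0, 450), ('B', 450, 700), (None, 700, 900)]
```

Rendering each segment as a colored rectangle on a horizontal timeline, one line per participant, produces the scarf plots of Figure 17.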
Figure 18: Models of a 3D stimulus are mapped to a timeline depending on the time when and how long they were fixated. Figure
reprinted with kind permission from Stellmach et al. [SND10b] and ACM.
represents one participant. If all participants are represented in the same matrix, a visual comparison of search strategies is possible. This is achieved by concatenating the matrices of all participants and assigning a value to each matching sequence between them, as shown in Figure 19 [GH10b]. A similar visualization technique using matrices is to place AOIs horizontally as rows and, for example, the task or query of a study vertically as columns. Each cell can then contain different metrics such as fixation duration or fixation count [ETK∗08, SLHR09]. A matrix representation mixes statistical measures with a visual representation. This representation has the advantage of comparing 2D data sets visually. However, if the number of AOIs is high, the matrix becomes large.

Table 2: A classical transition matrix orders the AOIs of one participant horizontally in rows and vertically in columns. Each cell represents the number of transitions between two AOIs. The diagonal is empty, as no transitions within AOIs can exist [GK99].

        AOI 1   AOI 2   AOI 3   AOI 4
AOI 1     -       1       0       3
AOI 2     2       -       4       1
AOI 3     0       7       -       3
AOI 4     1       0       2       -

The following visualization techniques are not focused on representing statistical information as the transition matrix is. Rather, the relation between AOIs is analyzed by using well-established visualization techniques such as a directed graph or tree visualization. A directed graph can be used to show transitions between AOIs (cf. Figures 20 and 21). Each node of the graph depicts one AOI and the links depict the transitions. Node size can be varied or color coding can be used to represent different metrics such as fixation count or fixation duration. The thickness of a link can depict the number of transitions between two AOIs, as shown in the example in Figure 20. Usually, this type of diagram represents only the data of one participant. Averaged values of multiple participants can also be used. Most visualization techniques show the graph independently of the stimulus [IHN98, FJM05, TAK∗05, SSF∗11, BRE13]. Losing the topological order makes the graph harder to interpret. However, the transition information allows one to see in which order AOIs have been looked at or how often the participant returned to look at an AOI. One example of such a visualization technique is shown in Figure 21. Like the transition matrix, the graph representation does not scale well to a vast amount of AOIs.
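Building a transition matrix like the one in Table 2 from a participant's AOI sequence can be sketched in a few lines. The sketch assumes the AOI sequence is already given; deriving it from fixation coordinates would use an AOI hit test as described in Section 2.1. Function name and data layout are illustrative.

```python
# Sketch: count transitions between consecutive, differing AOIs in one
# participant's AOI sequence. The diagonal stays empty, as no transitions
# within an AOI can exist.

def transition_matrix(aoi_sequence, aois):
    index = {a: i for i, a in enumerate(aois)}
    n = len(aois)
    matrix = [[0] * n for _ in range(n)]
    for src, dst in zip(aoi_sequence, aoi_sequence[1:]):
        if src != dst:
            matrix[index[src]][index[dst]] += 1
    return matrix

aois = ["AOI 1", "AOI 2", "AOI 3"]
seq = ["AOI 1", "AOI 2", "AOI 2", "AOI 3", "AOI 1", "AOI 2"]
print(transition_matrix(seq, aois))
# → [[0, 2, 0], [0, 0, 1], [1, 0, 0]]
```

The same matrix also serves as the adjacency structure of the directed transition graphs discussed above: nonzero cells become weighted links between AOI nodes.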
7. Conclusion

In this state-of-the-art report, we have reviewed existing papers describing eye tracking visualizations. To classify the different approaches, we have created a taxonomy that divides the papers into point-based and AOI-based visualization techniques, as well as into visualizations that use both types of data. Furthermore, we have classified the papers on a second level according to the data represented: temporal, spatial, or spatio-temporal. An overview of all visualization techniques was given, as well as a detailed description of the different visualization techniques. Based on the results, we have presented future directions for research.

References

[AABW12] Andrienko G., Andrienko N., Burch M., Weiskopf D.: Visual analytics methodology for eye movement studies. IEEE Transactions on Visualization and Computer Graphics 18, 12 (2012), 2889–2898.

[AHR98] Aaltonen A., Hyrskykari A., Räihä K.-J.: 101 spots, or how do users read menus? In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (1998), CHI '98, pp. 132–139.

[AJTZ12] Atkins S., Jiang X., Tien G., Zheng B.: Saccadic delays on targets while watching videos. In Proceedings of the 2012 Symposium on Eye Tracking Research & Applications (2012), ETRA '12, pp. 405–408.

[BBD09] Beck F., Burch M., Diehl S.: Towards an aesthetic dimensions framework for dynamic graph visualisations. In Proceedings of the 13th International Conference on Information Visualization (2009), IV '09, pp. 592–597.

[BBM∗09] Berg D. J., Boehnke S. E., Marino R. A., Munoz D. P., Itti L.: Free viewing of dynamic stimuli by humans and monkeys. Journal of Vision 9, 5 (2009), 1–15.

[BFH10] Baldauf M., Fröhlich P., Hutter S.: KIBITZER: A wearable system for eye-gaze-based mobile urban exploration. In Proceedings of the 1st Augmented Human International Conference (2010), AH '10, pp. 9:1–9:5.

[BG08] Brasel S. A., Gips J.: Points of view: Where do we look when we watch TV? Perception 37, 12 (2008), 1890–1894.

[BKW13] Burch M., Kull A., Weiskopf D.: AOI Rivers for visualizing dynamic eye gaze frequencies. Computer Graphics Forum 32, 3 (2013), 281–290.

[Bli10] Blignaut P.: Visual span and other parameters for the generation of heatmaps. In Proceedings of the 2010 Symposium on Eye Tracking Research & Applications (2010), ETRA '10, pp. 125–128.

[BM13] Brehmer M., Munzner T.: A multi-level typology of abstract visualization tasks. IEEE Transactions on Visualization and Computer Graphics 19, 12 (2013), 2376–2385.

[Boj09] Bojko A.: Informative or misleading? Heatmaps deconstructed. In Human-Computer Interaction. New Trends, Jacko J. (Ed.), LNCS '09. Springer, 2009, pp. 30–39.

[BRE13] Blascheck T., Raschke M., Ertl T.: Circular heat map transition diagram. In Proceedings of the 2013 Conference on Eye Tracking South Africa (2013), ETSA '13, pp. 58–61.

[BSRW14] Burch M., Schmauder H., Raschke M., Weiskopf D.: Saccade plots. In Proceedings of the 2014 Symposium on Eye Tracking Research & Applications (2014), ETRA '14, pp. 307–310.

[CAS13] Chen M., Alves N., Sol R.: Combining spatial and temporal information of eye movement in goal-oriented tasks. In Human Factors in Computing and Informatics. Springer, 2013, pp. 827–830.

[CCY∗03] Convertino G., Chen J., Yost B., Ryu Y.-S., North C.: Exploring context switching and cognition in dual-view coordinated visualizations. In Proceedings of the 2003 International Conference on Coordinated and Multiple Views in Exploratory Visualization (2003), pp. 55–62.

[CHC10] Conversy S., Hurter C., Chatty S.: A descriptive model of visual scanning. In Proceedings of the 3rd BELIV'10 Workshop: BEyond Time and Errors: Novel evaLuation Methods for Information Visualization (2010), BELIV '10, pp. 35–42.

[Chi00] Chi E. H.: A taxonomy of visualization techniques using the data state reference model. In IEEE Symposium on Information Visualization (2000), InfoVis '00, pp. 69–75.

[CM97] Card S., Mackinlay J.: The structure of the information visualization design space. In IEEE Symposium on Information Visualization (1997), InfoVis '97, pp. 92–99.

[CN00] Crowe E. C., Narayanan N. H.: Comparing interfaces based on what users watch and do. In Proceedings of the 2000 Symposium on Eye Tracking Research & Applications (2000), ETRA '00, pp. 29–36.

[Coc09] Coco M. I.: The statistical challenge of scan-path analysis. In Proceedings of the 2nd Conference on Human System Interactions (2009), HSI '09, pp. 372–375.

[CPS06] Croft J., Pittman D., Scialfa C.: Gaze behavior of spotters during an air-to-ground search. In Proceedings of the 2006 Symposium on Eye Tracking Research & Applications (2006), ETRA '06, pp. 163–179.

[DDJ∗10] Duchowski A. T., Driver J., Jolaoso S., Tan W., Ramey B. N., Robbins A.: Scan path comparison revisited. In Proceedings of the 2010 Symposium on Eye Tracking Research & Applications (2010), ETRA '10, pp. 219–226.

[DJB10] Dorr M., Jarodzka H., Barth E.: Space-variant spatio-temporal filtering of video for gaze visualization and perceptual learning. In Proceedings of the 2010 Symposium on Eye Tracking Research & Applications (2010), ETRA '10, pp. 307–314.

[DLN∗06] Dixon T. D., Li J., Noyes J., Troscianko T., Nikolov S., Lewis J., Canga E., Bull D., Canagarajah C.: Scan path analysis of fused multi-sensor images with luminance change: A pilot study. In Proceedings of the 9th International Conference on Information Fusion (2006), pp. 1–8.

[DM98] Duchowski A., McCormick B. H.: Gaze-contingent video resolution degradation. In Proceedings of the Society of Photographic Instrumentation Engineers (1998), vol. 3299 of SPIE '98, pp. 318–329.
[DMC∗ 02] D UCHOWSKI A., M EDLIN E., C OURINA N.,
[BR05] B EYMER D., RUSSELL D. M.: WebGazeAnalyzer: A G RAMOPADHYE A., M ELLOY B., NAIR S.: 3D eye movement
system for capturing and analyzing web reading behavior using analysis for VR visual inspection training. In Proceedings of
eye gaze. In CHI ’05 Extended Abstracts on Human Factors in the 2002 Symposium on Eye Tracking Research & Applications
Computing Systems (2005), CHI EA ’05, pp. 1913–1916. 6, 11 (2002), ETRA ’02, pp. 103–155. 6, 10
c The Eurographics Association 2014.
80 T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf & T. Ertl / State-of-the-Art of Visualization for Eye Tracking Data
[DMGB10] D ORR M., M ARTINETZ T., G EGENFURTNER K. R., [HH02] H ORNOF A. J., H ALVERSON T.: Cleaning up systematic
BARTH E.: Variability of eye movement when viewing dynamic error in eye-tracking data by using required fixation locations. Be-
natural scenes. Journal of Vision 10, 10 (2010), 1–17. 7, 8 havior Research Method, Instruments, & Computers 34, 4 (2002),
[DPMO12] D UCHOWSKI A. T., P RICE M. M., M EYER M., 592–604. 7
O RERO P.: Aggregate gaze visualization with real-time heatmaps. [HHBL03] H OLMQVIST K., H OLSANOVA J., BARTHELSON M.,
In Proceedings of the 2012 Symposium on Eye Tracking Research L UNDQVIST D.: Reading or scanning? A study of newspaper
& Applications (2012), ETRA’12, pp. 13–20. 6, 8 and net paper reading. In The Mind’s Eye, Hyönä J., Radach R.,
[Duc02] D UCHOWSKI A.: A breadth-first survey of eye-tracking Deubel H., (Eds.). Elsevier Science BV, 2003, pp. 657–670. 15
applications. Behavior Research Method, Instruments, & Comput- [HHWN02] H AVRE S., H ETZLER E., W HITNEY P., N OWELL
ers 34, 4 (2002), 455–470. 1 L.: ThemeRiver: Visualizing thematic changes in large document
[Duc07] D UCHOWSKI A.: Eye Tracking Methodology: Theory collections. IEEE Transactions on Visualization and Computer
and Practice, 2nd ed. Science+Business Media. Springer, 2007. 4 Graphics 8, 1 (2002), 9–20. 13
[ETK∗ 08] E GUSA Y., T ERAI H., K ANDO N., TAKAKU M., [HNA∗ 11] H OLMQVIST K., N YSTRÖM M., A NDERSSON R., D E -
S AITO H., M IWA M.: Visualization of user eye movement for WHURST R., JARODZKA H., VAN DE W EIJER J.: Eye Tracking:
search result pages. The 2nd International Workshop on Evaluat- A Comprehensive Guide to Methods and Measures, 1 ed. Oxford
ing Information Access (EVIA) - (2008), 42–46. 6, 14 University Press, 2011. 1, 2, 3, 8, 9, 12
[FJM05] F ITTS P. M., J ONES R. E., M ILTON J. L.: Eye move- [Hol01] H OLSANOVA J.: Picture viewing and picture description:
ment of aircraft pilots during instrument-landing approaches. Er- Two windows on the mind. PhD thesis, Lund University, 2001. 6,
gonomics: Psychological Mechanisms and Models in Ergonomics 11
3 (2005), 56–66. 14 [HRO11] H UTCHERSON D., R ANDALL R., O UZTS A.: Shelf real
[GDS10] G RINDINGER T., D UCHOWSKI A., S AWYER M.: estate: Package spacing in a retail environment. 2011. 8
Group-wise similarity and classification of aggregate scanpaths. [IHN98] I TOH K., H ANSEN J., N IELSEN F.: Cognitive modelling
In Proceedings of the 2010 Symposium on Eye Tracking Research of a ship navigator based on protocol and eye-movement analysis.
& Applications (2010), ETRA ’10, pp. 101–104. 6, 7, 8 Le Travail Humain (1998), 99–127. 6, 14
[GE95] G ERSHON N. D., E ICK S. G. (Eds.):. IEEE Symposium [ITS00] I TOH K., TANAKA H., S EKI M.: Eye-movement analysis
On Information Visualization 1995, (InfoVis) (1995), InfoVis ’95, of track monitoring patterns of night train operators: Effects of
IEEE Computer Society. 2 geographic knowledge and fatigue. In Proceedings of the Human
[GH10a] G OLDBERG J. H., H ELFMAN J. I.: Comparing infor- Factors and Ergonomics Society Annual Meeting (2000), vol. 44,
mation graphics: a critical look at eye tracking. In Proceedings pp. 360–363. 6, 12
of the 3rd BELIV’10 Workshop: BEyond Time and Errors: Novel
[JC76] J UST M. A., C ARPENTER P. A.: Eye fixations and cogni-
evaLuation method for Information Visualization (2010), BELIV
tive processes. Cognitive Psychology 8 (1976), 441–480. 7
’10, pp. 71–78. 15
[GH10b] G OLDBERG J. H., H ELFMAN J. I.: Scanpath clustering [JK03] JACOB R., K ARN K.: Eye tracking in human-computer in-
and aggregation. In Proceedings of the 2010 Symposium on Eye teraction and usability research: Ready to deliver the promises. In
Tracking Research & Applications (2010), ETRA ’10, pp. 227– The Mind’s Eye, Hyönä J., Radach R., Deubel H., (Eds.). Elsevier
234. 6, 14, 16 Science BV, 2003, pp. 573–605. 11
[GH10c] G OLDBERG J. H., H ELFMAN J. I.: Visual scanpath [KDX∗ 12] K IM S.-H., D ONG Z., X IAN H., U PATISING B., Y I
representation. In Proceedings of the 2010 Symposium on Eye J. S.: Does an eye tracker tell the truth abou visualizations?
Tracking Research & Applications (2010), ETRA ’10, pp. 203– findings while investigating visualizations for decision making.
210. 6, 7, 8, 9, 10 IEEE Transactions on Visualization and Computer Graphics 18,
12 (2012), 2421–2430. 6, 11
[GK99] G OLDBERG J. H., KOTVAL X. P.: Computer interface
evaluation using eye movements: Methods and constructs. Inter- [KHW14] K URZHALS K., H EIMERL F., W EISKOPF D.:
national Journal of Industrial Ergonomics 24 (1999), 631–645. 3, ISeeCube: Visual analysis of gaze data for video. In Proceedings
6, 9, 13, 14 of the 2014 Symposium on Eye Tracking Research & Applications
(2014), ETRA’14, pp. 43–50. 6, 12, 13, 14
[GWG∗ 64] G UBA E., W OLF W., G ROOT S. D ., K NEMEYER
M., ATTA R. V., L IGHT L.: Eye movement and TV viewing in [Kra03] K RAAK M.-J.: The space-time cube revisited from a geo-
children. AV Communication Review 12, 4 (1964), 386–401. 7 visualization perspective. In Proceedings of the 21st International
Cartographic Conference (2003), pp. 1988–1995. 10
[GWP07] G OLDSTEIN R., W OODS R., P ELI E.: Where people
look when watching movies: Do all viewers look at the same [KW13] K URZHALS K., W EISKOPF D.: Space-time visual analyt-
place? Computers in Biology and Medicine 37, 7 (2007), 957–964. ics of eye-tracking data for dynamic stimuli. IEEE Transactions
8 on Visualization and Computer Graphics 19, 12 (2013), 2129 –
2138. 6, 8, 9, 10, 12
[HEF∗ 13] H URTER C., E RSOY O., FABRIKANT S., K LEIN T.,
T ELEA A.: Bundled visualization of dynamic graph and trail data. [Lan00] L ANKFORD C.: Gazetracker: Software designed to facili-
IEEE Transactions on Visualization and Computer Graphics PP, tate eye movement analysis. In Proceedings of the 2000 Sympo-
99 (2013), 1–1. 5, 6, 9 sium on Eye Tracking Research & Applications (2000), ETRA
’00, pp. 51–55. 6, 9, 16
[HFG06] H EMBROOKE H., F EUSNER M., G AY G.: Averaging
scan patterns and what they can tell us. In Proceedings of the 2006 [Lat88] L ATIMER C. R.: Eye-movement data: Cumulative fixation
Symposium on Eye Tracking Research & Applications (2006), time and cluster analysis. Behavior Research Method, Instruments,
ETRA ’06, p. 41. 5, 6, 9 & Computers 20 (1988), 437–470. 6, 8
[Häg82] H ÄGERSTRAND T.: Diorama, path and project. Tijd- [LÇK10] L I X., Ç ÖLTEKIN A., K RAAK M.-J.: Visual exploration
schrift voor Economische en Sociale Geografie 73, 6 (1982), 323– of eye movement data using the space-time-cube. In Geographic
339. 10 Information Science, Fabrikant S., Reichenbacher T., Kreveld M.,
c The Eurographics Association 2014.
T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf & T. Ertl / State-of-the-Art of Visualization for Eye Tracking Data 81
Christoph S., (Eds.), LNCS ’10. Springer, 2010, pp. 295–309. 6, data. In Human-Computer Interaction-INTERACT 2005, Costa-
10 bile M. F., Paternò F., (Eds.), vol. 3585 of LNCS ’05. Springer,
2005, pp. 946–949. 6, 11, 12
[LHB∗ 08] L ORIGO L., H ARIDASAN M., B RYNJARSDÓTTIR H.,
X IA L., J OACHIMS T., G AY G., G RANKA L., P ELLACINI F., [RBB14] R ASCHKE M., B LASCHECK T., B URCH M.: Visual
PAN B.: Eye tracking and online search: Lessons learned and Analysis of Eye Tracking Data. Springer New York, 2014, ch. 15,
challenges ahead. Journal of the American Society for Information pp. 391–409. 16
Science and Technology 59, 7 (2008), 1041–1052. 13 [RCE12] R ASCHKE M., C HEN X., E RTL T.: Parallel scan-path
[LPP∗ 06] L EE B., P LAISANT C., PARR C. S., F EKETE J.-D., visualization. In Proceedings of the 2012 Symposium on Eye
H ENRY N.: Task taxonomy for graph visualization. In Proceed- Tracking Research & Applications (2012), ETRA’12, pp. 165–168.
ings of the 2006 AVI Workshop on BEyond Time and Errors: Novel 6, 12
Evaluation Methods for Information Visualization (2006), BELIV [RD05] R ICHARDSON D. C., DALE R.: Looking to understand:
’06, pp. 1–5. 4 The coupling between speakers’ and listeners’ eye movements and
[LPS∗ 12] L I R., P ELZ J., S HI P., OVESDOTTER A LM C., H AAKE its relationship to discourse comprehension. Cognitive Science
A.: Learning eye movement pattens for characterization of per- 29, 6 (2005), 1045–1060. 6, 12
ceptual expertise. In Proceedings of the 2012 Symposium on Eye [RHB∗ 14] R ASCHKE M., H ERR D., B LASCHECK T., B URCH M.,
Tracking Research & Applications (2012), ETRA’12, pp. 393–396. S CHRAUF M., W ILLMANN S., E RTL T.: A visual approach for
13 scan path comparison. In Proceedings of the 2014 Symposium on
[MM58] M ACKWORTH J. F., M ACKWORTH N. H.: Eye fixations Eye Tracking Research & Applications (2014), ETRA’14, pp. 135–
recorded on changing visual scenes by the television eye-marker. 142. 12
Journal of the Optical Society of America 48, 7 (1958), 439–444. [RHOL13] R ISTOVSKI G., H UNTER M., O LK B., L INSEN L.:
6, 8 EyeC: Coordinated views for interactive visual data exploration
[MSHH11] M ITAL P. K., S MITH T. J., H ILL R. L., H ENDERSON of eye-tracking data. Internation Conference on Information
J. M.: Cluster analysis of gaze during dynamic scene viewing is Visualization (2013), 239. 12, 15, 16
predicted by motion. Cognitive Computation 3, 1 (2011), 5–24. 8 [RTSB04] R AMLOLL R., T REPAGNIER C., S EBRECHTS M.,
[NH10] NAKAYAMA M., H AYASHI Y.: Estimation of viewer’s B EEDASY J.: Gaze data visualization tools: Opportunities and
response for contextual understanding of tasks using features of challenges. In Proceedings of the 8th International Conference on
eye-movements. In Proceedings of the 2010 Symposium on Eye Information Visualization (2004), IV ’04, pp. 173–180. 2, 6, 10
Tracking Research & Applications (2010), ETRA ’10, pp. 53–56. [RVM12] RODRIGUES R., V ELOSO A., M EALHA O.: A televi-
7 sion news graphical layout analysis method using eye tracking. In
[NS71a] N OTON D., S TARK L.: Scanpaths in eye movements Proceedings of the 16th International Conference on Information
during pattern perception. Science 171, 3968 (1971), 308–311. 9 visualization (2012), IV ’12, pp. 357–362. 11
[SD04] S ANTELLA A., D E C ARLO D.: Robust clustering of eye
[NS71b] N OTON D., S TARK L.: Scanpaths in saccadic eye move-
movement recordings for quantification of visual interest. In
ments while viewing and recognizing patterns. Vision Research
Proceedings of the 2004 Symposium on Eye Tracking Research &
11 (1971), 929Ű942. 6, 9
Applications (2004), ETRA ’04, pp. 27–34. 11
[PB06] P OOLE A., BALL L.: Eye tracking in Human-Computer [Shn96] S HNEIDERMAN B.: The eyes have it: a task by data type
Interaction and Usability Research: Current Status and Future taxonomy for information visualizations. In IEEE Symposium on
Prospects. Idea Group Inc., 2006, ch. E, pp. 211–219. 11 Visual Languages (1996), pp. 336–343. 4
[Pfe12] P FEIFFER T.: Measuring and visualizing attention in space [SLHR09] S IIRTOLA H., L AIVO T., H EIMONEN T., R AIHA K.-
with 3d attention volumes. In Proceedings of the 2012 Sympo- J.: Visual perception of parallel coordinate visualizations. In
sium on Eye Tracking Research & Applications (2012), ETRA’12, Proceedings of the 13th International Conference on Information
pp. 29–36. 6, 8, 10 Visualization (2009), IV ’09, pp. 3–9. 6, 14
[PLG06] P ELLACINI F., L ORIGO L., G AY G.: Visualizing Paths [ŠM07] Š PAKOV O., M INIOTAS D.: Visualization of eye gaze
in Context, 2006. 6, 13 data using heat maps. Electronics and Electrical Engineering 2,
[PS00] P RIVITERA C., S TARK L.: Algorithms for defining visual 74 (2007), 55–58. 8
regions-of-interest: Comparison with eye fixations. IEEE Transac- [SM13] S MITH T. J., M ITAL P. K.: Attentional synchrony and the
tions on Pattern Analysis and Machine Intelligence 22, 9 (2000), influence of viewing task on gaze behavior in static and dynamic
970–982. 1, 11 scenes. Journal of Vision 13, 8 (2013), 1–24. 7
[PSF∗ 13a] PALETTA L., S ANTNER K., F RITZ G., H OFMANN A., [SMI14] SMI BeGaze eye tracking analysis software.
L ODRON G., T HALLINGER G., M AYER H.: A computer vision http://www.smivision.com, 2014. 16
system for attention mapping in SLAM based 3d models. CoRR
(2013). 8, 10 [SND10a] S TELLMACH S., NACKE L., DACHSELT R.: 3D atten-
tional maps: Aggregated gaze visualizations in three-dimensional
[PSF∗ 13b] PALETTA L., S ANTNER K., F RITZ G., M AYER H., virtual environments. In Proceedings of the International Confer-
S CHRAMMEL J.: 3D attention: Measurement of visual saliency us- ence on Advanced Visual Interfaces (2010), AVI ’10, pp. 345–348.
ing eye tracking glasses. In CHI ’13 Extended Abstracts on Human 8
Factors in Computing Systems (2013), CHI EA ’13, pp. 199–204.
[SND10b] S TELLMACH S., NACKE L., DACHSELT R.: Advanced
6, 8
gaze visualizations for three-dimensional virtual environments. In
[Pur97] P URCHASE H. C.: Which aesthetic has the greatest effect Proceedings of the 2010 Symposium on Eye Tracking Research &
on human understanding? In Proceedings of Graph Drawing Applications (2010), ETRA ’10, pp. 109–112. 6, 9, 10, 13, 14
(1997), pp. 248–261. 4
[SNDC09] S TELLMACH S., NACKE L., DACHSELT R., C RAIG
[RAM∗ 05] R ÄIHÄ K.-J., AULA A., M AJARANTA P., R ANTALA L.: Trends and techniques in visual gaze analysis. CoRR (2009),
H., KOIVUNEN K.: Static visualization of temporal eye-tracking 89–93. 2
c The Eurographics Association 2014.
82 T. Blascheck, K. Kurzhals, M. Raschke, M. Burch, D. Weiskopf & T. Ertl / State-of-the-Art of Visualization for Eye Tracking Data
c The Eurographics Association 2014.