
*Keep everything on Default*

ASL Eye-Trac 6 Net User Interface: Setup


1. Start → All Programs → Applied Science Laboratories → ASL Eye-Trac 6 Net User Interface
2. On the User Interface main screen, click Upload → green light = online (eye tracker data online)
3. Click on the Eye/Scene Video shortcut → click on Advanced → click on the Exposure tab (if needed)

*ONE-TIME SET TARGET POINTS CALIBRATION*


Enter Target Points (needs to be done only ONCE, unless a change is made)
1. Click on the Calibrate tab → Set Target Points
2. Roll the mouse over the POG display on the main User Interface screen, but match the cross hairs with the corresponding point on the Eye/Scene video screen; the cross hairs should be over the middle of the target points on the Eye/Scene video screen (they may not match exactly with the POG display, but they must match the Eye/Scene video). Left-click for target points 1-9 when the cross hairs are lined up with the corresponding point on the Eye/Scene video screen.
3. Click OK in the Set Target Points (STP) dialog box

Check Target Point Locations


1. Click Calibrate → Check Target Points → the Check Target Points window shows the coordinate value of the target point number in the Target Point Selection box
2. The coordinates for our calibration:
Point 1: H = 24; V = 40
Point 2: H = 125; V = 40
Point 3: H = 224; V = 40
Point 4: H = 24; V = 123
Point 5: H = 125; V = 122
Point 6: H = 225; V = 123
Point 7: H = 24; V = 206
Point 8: H = 125; V = 207
Point 9: H = 224; V = 206

If the gaze coordinates do not match the horizontal and vertical coordinates of each point, there will be a calibration error.
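
For reference, the nine-point grid above can be captured in a small lookup table for sanity checks; a minimal Python sketch (the tolerance value is an illustrative assumption, not an ASL specification):

```python
# Nine-point calibration grid (eye tracker coordinates from the list above).
TARGET_POINTS = {
    1: (24, 40),  2: (125, 40),  3: (224, 40),
    4: (24, 123), 5: (125, 122), 6: (225, 123),
    7: (24, 206), 8: (125, 207), 9: (224, 206),
}

def gaze_matches_target(point_num, gaze_h, gaze_v, tolerance=5):
    """Return True if a measured gaze coordinate falls within `tolerance`
    units of the expected target point. The tolerance is an illustrative
    assumption, not an ASL-specified value."""
    target_h, target_v = TARGET_POINTS[point_num]
    return abs(gaze_h - target_h) <= tolerance and abs(gaze_v - target_v) <= tolerance
```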

Display Target Points


1. On the subject screen, click Start Run → Display Target Points → the 9-point target appears
2. The same 9-point target shown on the subject screen will also be seen on the eye tracker monitor's Eye/Scene video screen

*For everyday system use, start here:*


Head Tracker setup
1. Click Head Tracker → Activate Head Tracker (or click the HT shortcut)
2. Make sure the subject's face (eyes, nose, and mouth) is in the middle of the screen; when it is in the middle, the head tracker automatically recognizes the face
3. In the head tracker display/data window, click Advanced Controls; this shows the subject's head distance from the camera (~24 inches)

Set up a subject
1. Use dim to moderate lighting
2. Proper head image recognition is indicated by a coordinate frame with red, green, and blue lines (x, y, z axes). These three colored lines form the head coordinate frame and should meet at the bridge of the nose; blue dots may be seen at the corners of the mouth, the nostrils, and the corners of the eyes
3. If the face is not recognized, click Restart Head Tracking

Eye image Centering

1. On the User Interface screen, click Eye Camera → Set Default Sensor-to-Eye Vector → Right Eye (only do this ONCE)
2. Eye Camera → Pan/Tilt Control → click Camera Auto Focus (only do this ONCE)
3. To turn on the illuminator, check the Illuminator Power box in the Eye Discrimination window; the Illuminator Level slider should be slightly to the left of center
4. Have the subject look at the center of the display screen (the 9-point target pattern on the subject display screen)
5. While the subject is fixated on the center of the display, click Auto under Pan/Tilt Tracking on the main User Interface screen
6. If the subject's eye is not in the center of the Eye/Scene video, right-click and hold while moving the image around in the POG display until the eye is centered
7. The pupil will appear as a bright white circle and the corneal reflection (CR) as a small white spot; the CR is brighter than the pupil and may be inside or just outside of it
8. The CR must always be brighter than the pupil: adjust the Illuminator Level if needed
9. Small white dot = corneal reflection (CR); small black dot = CR discrimination outline; white cross hairs = pupil center; black cross hairs = CR center

Calibrate Subject
1. Calibration measures the offset between the subject's pupil center and the corneal reflection (see the sketch after this list)
2. On the User Interface main screen, click Calibrate → Standard Calibration
3. Calibration type → Standard Calibration
4. Tell the participant to look at point 1, 2, etc.; after the subject gazes at each point, click Save Current Point. This saves the point of gaze for target points 1-9. You must manually change the calibration point number in the box using the arrows
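
Conceptually, the quantity being calibrated is the vector from the CR center to the pupil center; a minimal sketch of that computation (coordinate names are illustrative, not ASL's internals):

```python
# The pupil-to-CR vector that calibration measures at each target point.
# Coordinates are eye-camera pixels; names are illustrative.
def pupil_cr_vector(pupil_center, cr_center):
    """Return the (dh, dv) offset from the corneal reflection center to
    the pupil center. Calibration maps this vector, sampled at each of
    the 9 target points, onto scene coordinates."""
    ph, pv = pupil_center
    ch, cv = cr_center
    return (ph - ch, pv - cv)
```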

Testing the Calibration


1. Have the subject look at each target point again and watch the point-of-gaze cursor (cross hairs) on the scene video monitor
2. Make sure the cross hairs on the scene video match target points 1-9
3. Calibrate pull-down → Show Raw Calibration Data
4. Window → Eye Data: any distortion must be in a consistent direction

Record Data
1. On the User Interface: File → File Configuration → Eye Data File tab → check what you want recorded (the Head Tracker must be activated)
2. File → New Data File → .eyd (eye-data) file → name the file → Save → File Description (visible in EyeNal and saved in a header with the data file). When a data file is open, the file path and name appear in the File Info section of the User Interface
3. Start Recording → each start and stop creates a new segment within the data file; the File Info section indicates that the system is in the recording state and shows the current segment number
4. Mark Flags: manually insert marks into the data file in real time while recording. Pressing a number key or a mark button places the corresponding number in the Markers column of the data file. (These are distinct from XDAT flags: marks can only be input manually and cannot be sent by an external device.)

ASL Results Plus


1. Open through the Start menu option ASL Results Plus → ASLResultsPlus.exe
(The ASL_Results_Plus folder contains a Docs folder with manuals and tutorials)
2. Window pop-up → Create New Project OR Open Existing
OR File → New Project → name the project
File → Open Project (or the Open Project icon) → Choose Project Folder window
This shows all project folders that can be selected for editing
(All project folders can also be found by clicking Documents on the left-hand side → Applied Science Laboratories → ASL Results Plus data)
Highlight a project folder → Select Folder → puts the participant data files in the left-hand project tree

3. Add participant data files: File → Open Participant File (or the Open Participant File icon) → Documents → Applied Science Laboratories → Eye-Trac 6 data (shows all participant files)

ASL Results can be used to:


• examine and plot raw data
• associate scene images with sections of gaze data
• define areas of interest on images
• reduce gaze data to fixations (see the sketch after this list)
• reduce gaze data to dwells (periods of continuous gaze on one area of interest)
• display data graphically:
o time plots
o X/Y scan plots superimposed on a scene image
o heat map plots on a scene image
• compute various statistics that relate fixations or dwells to areas of interest
• combine results across trials or subjects by averaging statistical data from each, or by pooling the original data
• export results in Excel or ASCII text format for further custom analyses
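
For orientation, fixation reduction is typically done with a dispersion-threshold algorithm; a minimal sketch under assumed thresholds follows (ASL Results Plus's actual algorithm and parameters may differ):

```python
# Dispersion-threshold (I-DT) fixation detection: a common way to reduce
# gaze samples to fixations. The thresholds here are illustrative
# assumptions, not ASL Results Plus defaults.
def dispersion(window):
    """Sum of horizontal and vertical spread over a window of (h, v) samples."""
    hs = [p[0] for p in window]
    vs = [p[1] for p in window]
    return (max(hs) - min(hs)) + (max(vs) - min(vs))

def detect_fixations(samples, max_dispersion=10.0, min_samples=6):
    """samples: list of (h, v) gaze coordinates at a fixed sample rate.
    Returns (start_index, end_index, centroid) tuples, one per fixation."""
    fixations = []
    i = 0
    while i <= len(samples) - min_samples:
        j = i + min_samples
        if dispersion(samples[i:j]) <= max_dispersion:
            # Grow the window while the dispersion stays under threshold.
            while j < len(samples) and dispersion(samples[i:j + 1]) <= max_dispersion:
                j += 1
            window = samples[i:j]
            centroid = (sum(p[0] for p in window) / len(window),
                        sum(p[1] for p in window) / len(window))
            fixations.append((i, j - 1, centroid))
            i = j
        else:
            i += 1
    return fixations
```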
Project tree (left-hand panel):
• Nodes are added to the tree as data and analysis results are added to the project
• Segments: the start and stop portions of a data file
• Events: subdivisions of data files defined by some beginning and end criteria
o Each segment must have at least one event sub-node
o Events usually correspond to experiment trials
o The default event is the entire segment
• Sub-nodes under each event are all created by data processing in the ASL Results program
o Gaze data in an event can be reduced to a set of fixations (forming a sub-node under the event)
o Fixation sets can be further processed to match fixations with areas of interest on the scene, forming fixation sequence and dwell nodes under the fixation node
o Various statistics can be computed from the fixation sequence and dwell data to form additional sub-nodes
• Right-clicking a node on the tree diagram opens additional actions for further processing of the data defined by that node (parsing original data into additional events, computing fixations, etc.)

Event Parsing
Right-click either the file name or an individual segment on the tree diagram → Parse Events → Configure Events dialog box
Start Trigger
XDAT. The event starts on the first record that has an XDAT value contained in the user-defined list of Start values; or, if the user has set the radio button to "Any change", on the first record with an XDAT value different from the previous record. Note: the XDAT value on the very first field is always considered a change and will trigger an event if "Any change" has been selected.
Mark_Flag. The same as XDAT, except that the event starts on the first record that contains one of the specified Mark Flags. If the "Any change" radio button is set, the event will start on the first record containing any mark flag.
Time. The event starts at a specified time, given in seconds and measured from the beginning of the segment. The user can specify more than one start time value to create several events. For example, with start times of 10.5, 20, and 30, the first event would start 10.5 seconds after the beginning of the data segment, the next event 20 seconds after, and the third 30 seconds after. It is important to note that this can create overlapping events if one event ends after the start time of a subsequent event. If you create multiple events by start time, pay careful attention to the stop condition (described in the next section).
Skip seconds before start. If the Start Trigger is None, XDAT, or Mark_Flag, the user can also specify an additional interval to skip before the event starts. In other words, if the skip time is t, the event will start t seconds after the Start Trigger is encountered. Suppose, for example, that the Start Trigger is XDAT with "Any change" selected and "Skip 5 seconds" specified, and that a change in XDAT occurs 7 seconds from the start of the segment. In this case the event will start 7 + 5 = 12 seconds from the beginning of the data segment.
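
The trigger rules above are simple to express in code; a minimal sketch of the XDAT "Any change" rule with a skip interval (the record layout is an illustrative assumption about exported data, not the .eyd format):

```python
# Event start times from XDAT changes with a skip interval, following the
# "Any change" rule described above.
def xdat_event_starts(records, skip_seconds=0.0):
    """records: list of (time_sec, xdat) tuples in segment order.
    The very first record always counts as a change; each event starts
    skip_seconds after its trigger."""
    starts = []
    prev_xdat = None
    for time_sec, xdat in records:
        if prev_xdat is None or xdat != prev_xdat:
            starts.append(time_sec + skip_seconds)
        prev_xdat = xdat
    return starts

# Example from the text: an XDAT change at 7 s with "Skip 5 seconds"
# yields an event start at 7 + 5 = 12 s from the segment start.
```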
Batch Event Parsing applies parsing rules to selected segments from different data files:
Highlight the Data Files node → click Batch in the ASL Results main window → Parse Events → check off the segment nodes to be parsed → OK → Configure Events dialog
Configure Backgrounds
Image file formats: BMP, JPEG (JPG), and GIF
*In order to superimpose point of gaze on the image, the program needs to know how to translate the eye tracker coordinates to the pixel location on the image (VGA coordinates).*

So we specify Attachment Points: ideally these are easily identifiable landmarks near two opposite corners of the image.
*(Attachment points on an image file can be determined in advance: display the image just as it was displayed to the subject and use the eye tracker's Set Target Points function to find the coordinates of the points in question.)*
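
The translation implied here is a per-axis linear interpolation between the two attachment points; a minimal sketch (assuming a simple affine mapping, which may differ from the program's internals):

```python
# Translate eye tracker gaze coordinates to image pixel (VGA) coordinates
# using two attachment points, per the explanation above. Each axis is
# scaled and shifted independently; names are illustrative.
def tracker_to_vga(gaze, p1_tracker, p1_vga, p2_tracker, p2_vga):
    """gaze and p*_tracker are (h, v) eye tracker coordinates; p*_vga are
    the (x, y) pixel coordinates of the same two landmarks in the image."""
    scale_x = (p2_vga[0] - p1_vga[0]) / (p2_tracker[0] - p1_tracker[0])
    scale_y = (p2_vga[1] - p1_vga[1]) / (p2_tracker[1] - p1_tracker[1])
    x = p1_vga[0] + (gaze[0] - p1_tracker[0]) * scale_x
    y = p1_vga[1] + (gaze[1] - p1_tracker[1]) * scale_y
    return (x, y)

# With the blank-background defaults below, tracker (0, 0) maps to the
# image's top-left pixel and (260, 240) to its bottom-right pixel.
```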
Configure Backgrounds → Configure Backgrounds dialog → Add New Background → Select New Background Image dialog
Blank Background Image
If plots are to be superimposed on a blank screen, use a blank background image (and select the color and size of the desired image); type a Background Name (the image will subsequently be associated with its configuration parameters). Under Eye Tracker Coordinates, select the horizontal and vertical gaze coordinate values that will correspond to the top left and bottom right corners of the blank image (usually top left (h = 0, v = 0) and bottom right (h = 260, v = 240)).
OK → the blank image will appear with attachment points labeled P1 and P2 → OK to close the image window; this background image with attachment points is now part of the project and will be available for superimposing scan plots and heat maps
Read from File
Read from File → browse to the image file → type a Background Name → OK → Add/Edit Attachment Points dialog. Select two landmarks near opposite corners of the image and enter their coordinates into the Point 1 column (usually a point near the top left) and the Point 2 column (usually near the bottom right); with Point 1 selected, click on the corresponding point in the image (do the same for Point 2; the VGA coordinates are entered automatically).
If the image files all have the same resolution, there is no need to find unique attachment points for each one; the same points can be reused as the default.
