
HELPING HONORS

Usability Testing Plan

Version 2

Julia Croston and Anna Walker


April 17, 2023

Table of Contents

Executive Summary

Methodology

    Participants

    Procedure

Usability Tasks

Usability Metrics

Reporting Results

Executive Summary
We are conducting usability tests on the Hicks Honors Website. We recruited four current
Honors students from the University of North Florida for the testing, with the goal of discovering
strengths and weaknesses within the website. We anticipate that these Honors students, two
freshmen and two upperclassmen, represent the website's typical users and will therefore reveal
common issues.
We are conducting the testing sessions in private study rooms within the UNF library, which
provide a quiet, focused space for users. Participants will be asked to complete three tasks: two
are website-based, and the third is a card sort. The website tasks will be recorded with eye-tracking
software and notetaking; the card-sorting task will be recorded with photographs and
notetaking. With these tests, we are focusing on the information available about the Capstone,
the process to book an appointment, and the organization of the drop-down menu. We will also
be using pre-test, post-task, and post-test interviews to gather additional data about users’
experiences with the website.
The metrics we plan to use include scenario completion, scenario completion time, non-critical
errors, critical errors, and subjective evaluations. After the tests are complete, we will compile
the data based on these metrics and organize it. We plan to represent the data using screenshots
from GazeRecorder, graphs, tables, and written explanations to ensure clarity within our report.
After the compilation and organization of the data, we will form a final report of our findings in
InDesign to present to Hicks Honors College.

Methodology
To start, users will be given a consent form to read and sign prior to any testing. The
form explains that the testing is voluntary, their consent can be withdrawn at any time, and they
are free to ask us any questions. After this form is read and signed, the testing session begins.
Next, users will be asked questions by the testing facilitator. Answers will be recorded in a
shared document by the designated note-taker. Responses will provide basic demographic
information, including major and year, as well as detail on past student experience. Questions
will pertain to their favorite part of Honors and future goals in the program.
For the first task, users will search the website for Capstone information. Users will be asked to
think out loud while the computer records the screen and tracks eye movement, using
GazeRecorder. Eye-tracking calibration will be part of this process, in addition to a brief
explanation of the task. The task will be timed to measure efficiency. This task will help answer
our first research question: Can students easily and efficiently access appropriate information
about the Capstone project? A sub-question of this is: do users get distracted by any elements of
the website during the task?
In the second task of the usability testing, users will be instructed to imagine that they are
looking to book an appointment with UNF Honors faculty or staff. They will be asked to voice
their thoughts, process, and emotions while they navigate the website. This second task will also
be tracked through GazeRecorder. The eye tracking video will be used to both calculate the time
it takes a user to reach their desired goal and how the website’s features draw the attention of the
users.
This specific function is being tested because of previously observed issues with figuring out
how to book an appointment and get into contact with Honors faculty and staff. There are
multiple ways to reach the ‘Book an Appointment’ page, although none of them seem particularly
visible to users. Our goal in testing this function is to discover whether users struggle to book
appointments and to learn whether the ‘Book an Appointment’ page can be reached through the
“Three Clicks” method. This task will help answer our second research question: Can students
quickly access the information and links they need to make appointments with Honors faculty and staff?
In the third and final task of the testing session, the user will engage in the card sorting method.
The goal of this task is to discern how users would group different sets of information and
organize the top drop-down menu. The users will be given 20 note cards with words or phrases
written on them that reflect current subheadings or informational pages on the Hicks Honors
Website. We collected these words and phrases primarily from the current drop-down menu,
though we did add additional note cards based on informational links on the home page. Users
will also be given a pen and blank notecards to add additional headings as they see fit. We will
also explain to them that they are allowed to set aside any headings they do not find to be
helpful. This third task will provide clarity to our final research question: What is the most
efficient, user-friendly way to organize the information contained under the drop-down menus?
We decided to use card sorting to test this final issue of organization because of frustrations
expressed in earlier parts of the research project. Students that we interviewed previously
disclosed frustration and confusion regarding the organization and presence (or absence) of
certain headings in the drop-down menu.

After the final task in the session, we will perform an end-of-session verbal interview, which we
plan to voice-record in order to keep track of common thoughts and answers between users. We
decided an interview would be less intimidating than a written survey, and it will allow us to ask
follow-up questions if need be.

Participants
Testing will be conducted with four users: two freshmen and two upperclassmen. Recruitment
consisted of reaching out to three personal contacts and emailing the dean of the Honors college,
Dr. J, to request an available freshman. All four students are eligible for testing due to their
participation in the UNF Honors program, even if they are not planning to continue in it. The
other eligibility criterion pertains to year, since the study will work best with an even number of
first-year students and upperclassmen to better represent the population in Honors.
Past user experience with the Honors website will be taken into account, but will not alter
participation eligibility. Upperclassmen experience with the website will be contrasted with
freshmen experience, reflecting how perceptions of the website change over a student’s time at the university. It will
be more likely that upperclassmen have experience with the first task, finding Capstone
information. Additionally, freshmen will more likely be familiar with task two, booking an
appointment. However, each individual will have their own unique experience interacting with
the website based on urgency in completing Honors requirements and personal desire to find
information.
Each of the four individual sessions will begin with a disclaimer that any negative response to
Honors will not be used against the participant, and that responses should be honest reflections of
their experience over the course of the requested tasks. The final verbal survey will consist of
open-ended questions reflecting on the participants’ experiences throughout the tasks and
anything else they want to add.

Procedure
Participants will be asked to meet us at a reserved study room on the second floor of the Thomas
Carpenter Library at UNF. We plan to conduct the testing on an HP Pavilion laptop, using a
Google Chrome browser with the GazeRecorder software. The software will be used for both
website tasks, running for three minutes in each session, which should give users plenty of time.
These study rooms include a table and multiple chairs, as well as a monitor and computer; the
room resembles a small office or conference room. The user’s interaction
with the website and notecards will be observed by the main testing facilitator, who will sit
silently near the user as the testing is being conducted. The notetaker will sit across the table and
take notes quietly.
The facilitator of the testing will ask the user introductory questions about the user’s major and
involvement in the Honors college while the other group member takes notes. Then, the
facilitator will explain the guidelines of the testing and demonstrate how to use the eye-tracking
application. After this explanation, the facilitator will refer back to the consent form that had
already been read and signed and remind the user that participation is voluntary, consent can be
withdrawn at any time, and that we are happy to answer any questions before or after the testing.

The facilitator will remind the user to think aloud as they complete the tasks and reassure the
user that the notes being taken will help us determine how the website is perceived and
experienced. The notetaker will proceed with writing notes, collecting data, and filling out the
rolling issues log as the tasks are completed. After each task, the facilitator and notetaker will
discuss the task briefly with the user.
After all the tasks have been completed, the end-of-session survey will be conducted to gather
final impressions, issues, and feelings about the website.

Usability Tasks
The three tasks for users are based on information from previously developed personas and
scenarios. Due to the limited time and capacity of testing, we chose to only focus on those three
functions of the website. The tasks will remain the same for all participants in the testing.
As previously noted, the tasks will be performed in the study rooms in UNF’s library. Because of
this location, we cannot guarantee total silence, but it seems to be the quietest option available
that will accommodate our testing. Additionally, we are reliant upon the library’s WiFi and a
third-party software (GazeRecorder) to be running properly during the testing. We do not
anticipate any issues with accessing the Hicks Honors Website during our testing sessions. The
presence of the testers in the room might impact the way in which the users interact with the
website, but we will attempt to limit this impact by developing a rapport with the user and remaining
silent during the task sessions.
Reciting a pre-written script, the facilitator will introduce users to the first task: locating
information about the Honors Capstone, which is essential to graduating with Honors. This scenario was based
on previously developed personas drawn from interviews with one upperclassman and one
freshman student. Both persona scenarios dealt specifically with the Capstone, branching off into
different goals, since the interviewed students were equally interested in this information. The
participants will search for Capstone information while the computer screen records. Eye
tracking will also be used in order to see what draws the user’s eye. Because we are relying on
third-party resources, technical issues are possible. Users may also be confused about the specific
Capstone information requested by the task; however, the facilitator will be there to clarify.
Using a scenario that we developed prior to the testing sessions, users will be instructed to book
an appointment (of whatever sort they’d like) with an Honors faculty or staff member. As all
of the participants are Honors students, they will likely have some familiarity with the website,
although we do not anticipate that everyone has had to make appointments yet. However, when
wanting to discuss concerns or questions with someone from the Hicks Honors college, these
appointments seem to be the most straightforward way of getting those questions answered.
Whether for advising, fellowships, or concerns about the program, appointments with Honors
faculty and staff were common occurrences among the students we interviewed. This is one of
the primary functions of the website, which can be used by freshmen, upperclassmen, and
potential applicants.
For the final task, users will be introduced to notecards with headings and subheadings from the
Honors website’s menus and drop-down menus that link to informational pages. On a large table,
the user will be asked to organize the cards in a way they think works best without re-examining
the website. Blank note cards will be provided for the participant to add any information they see
fit. As potential pitfalls, the participant may get frustrated by searching for a “correct” answer,
may feel too indecisive, or may rush through the task without detailed, spoken thought. The
participant is asked to complete this task because the organization of the website is something
that affects every user. By documenting these participants’ opinions of how the information
should be organized and categorized, the current organization of the website can be evaluated to
see how it compares to user expectations.

Usability Metrics
The metrics we plan to measure are scenario completion, critical errors, non-critical errors,
subjective evaluations, and scenario completion time. We will be measuring each participant’s
individual experience against the performance goals outlined below. These metrics are informed
by standard usability-testing practice.

Scenario Completion
The scenarios are complete when the participant indicates to the testing facilitator that they have
finished the task, even if the intended goal is not fulfilled. The intended goal of each scenario is
to complete the proposed task. These goals are outlined by the testing facilitator’s dialogue script
and include finding specific Capstone information and booking an appointment. The final task in
this section will be to organize cards that convey different website headings and subheadings to
the participant’s subjective specifications. Frustration or confusion may distract from achieving
these goals; however, once a participant feels they are finished, the task will be deemed as
complete.

Critical Errors
During the course of each participant’s testing, critical errors that occur will be measured.
Critical errors include those that prevent scenario completion. As previously mentioned, scenario
completion is the primary goal of these tests, so any critical errors that occur during testing will
be noted extensively. Critical errors may frustrate the participant or cause them to give up
entirely, in which case the website has failed in its goal of providing information. The
performance goal for this metric is zero critical errors, which gives us a baseline to measure
participants’ testing against.

Non-critical Errors
Another important metric of this testing is the presence of non-critical errors. Non-critical errors
are ones that participants can recover from on their own, completing the task in spite of the
error. These errors do not prevent scenario completion, but they are still
important to note and measure because of how non-critical errors can cause frustration, extend
length of tasks, or confuse the participants. The main non-critical errors we will be measuring are
related to wrong clicks due to confusion over subheadings or unclear navigation. These errors
may lead to inefficiency in completing tasks, which may affect scenario completion time as
outlined below. The performance goal for this metric is zero non-critical errors, so any non-
critical errors that do occur could point to issues or ineffective design on the website. We will not
be counting exploration through the mouse hovering over certain sections or accidental clicks as
non-critical errors.
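As a hypothetical illustration (not part of our actual protocol), the error tallies described above could be logged in a simple structure like the following. The class names, task names, and participant IDs here are placeholders, not committed parts of our testing materials:

```python
# Hypothetical sketch of a per-participant log of the metrics described above.
# Task names, participant IDs, and counts are illustrative placeholders.
from dataclasses import dataclass, field

@dataclass
class TaskResult:
    completed: bool              # did the participant indicate they finished?
    critical_errors: int = 0     # errors that prevented scenario completion
    noncritical_errors: int = 0  # recoverable errors (e.g., wrong clicks)

@dataclass
class Session:
    participant: str
    results: dict = field(default_factory=dict)  # task name -> TaskResult

def meets_performance_goal(session):
    """Performance goal: zero critical and zero non-critical errors."""
    return all(r.critical_errors == 0 and r.noncritical_errors == 0
               for r in session.results.values())

# Example: one participant made a single wrong click on the Capstone task.
s = Session("P1")
s.results["capstone_info"] = TaskResult(completed=True, noncritical_errors=1)
s.results["book_appointment"] = TaskResult(completed=True)
print(meets_performance_goal(s))  # False: one non-critical error occurred
```

A log in this shape would make it straightforward to compare each session against the zero-error performance goals when compiling the final report.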

Subjective Evaluations
An evaluation will take place after the three scenario tasks are completed. Questions will be
asked verbally, and responses will be recorded through iPhone software and notetaking. These
questions are open-ended to allow respondents to speak about their experience interacting with

the website. Some questions could confuse participants; if so, the testing facilitator or notetaker
will expand on the question until the participant understands what is being asked. Tone
will be taken into account to determine if questions were asked in a leading way or if the
participant used sarcasm or any other deviation from a conventionally serious tone. There will be
one question asking the participant to rate, on a scale from 1 to 10, how likely they are to return
to the website. Responses to this question may be subjective, so tone and timing of the response
will be taken into account. Debriefing will be less of a priority, since participants will become
more aware of our goals as the questions are asked. Participants can also ask any questions about the completed
tasks during this time.

Scenario Completion Time
Using GazeRecorder and screen recording software, the time needed to complete each of the
website tasks will be recorded. We will not be recording completion time of the card sorting task
because it is more of an exercise of thought and we do not want the participants to feel rushed.
Instead, the results of the card sorting tasks will be noted and compared to the results of other
participants’ sorting and the current organization of the website.
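One simple way to carry out that comparison is to score the overlap between each current menu group and each participant group with the Jaccard index. This is only a sketch of one possible approach, and the headings used below are placeholders rather than the actual card-sort items:

```python
# Sketch: comparing one participant's card-sort groups to the current menu
# using the Jaccard index, |A ∩ B| / |A ∪ B|. Headings are placeholders.

def jaccard(a, b):
    """Similarity between two groups of cards: 0.0 (disjoint) to 1.0 (identical)."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

current_menu = {
    "About": ["Mission", "Staff Directory"],
    "Students": ["Capstone", "Book an Appointment"],
}
participant_sort = {
    "Get Help": ["Book an Appointment", "Staff Directory"],
    "Graduation": ["Capstone"],
}

# For each current menu group, find the participant group that matches best.
for menu_name, menu_cards in current_menu.items():
    best_name, best_cards = max(participant_sort.items(),
                                key=lambda kv: jaccard(menu_cards, kv[1]))
    print(menu_name, "->", best_name, round(jaccard(menu_cards, best_cards), 2))
```

Scores near 1.0 would suggest the current organization already matches user expectations, while low scores would flag groups that participants arranged very differently.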

Reporting Results
All recorded and noted information from the stages of testing will be examined and used to
develop a final report. This report will convey results of the two goal-oriented scenarios, card
sorting, and a verbal survey. The goal of the report is to accurately represent users and their
experiences to give tested users agency in potential website revision. Usability metrics will be
described in detail using various formats depending on what portrays each measurement best,
including graphs and word cloud assortments. The final report will be presented objectively to
stakeholders, members of staff at the Honors college. Recommendations to improve the website
will be part of this presentation as a way to offer assistance to the Honors college. The
presentation will also outline what is working well for the users. The report is anticipated to be
delivered by April 10 and presented to Honors stakeholders on April 17.
