
Fundamentals of HCI

1st Semester, 2023

Week#5 – Lecture 6
HCI Design (Part 1)

Dr. Wajanat Rayes


Information Science Department
Lecture Outline
• HCI Design Iterative Steps
• Interface Selection Choices
– Hardware/Platform
– Software Interface Components

• Wire-framing
• Participatory Design

HCI Design

• HCI design includes all preparatory activities for developing interactive software, to ensure high usability and a good user experience, up to (but not including) actual implementation.

• Iterative and creative refinement of:

1. Requirements analysis

2. User analysis

3. Scenarios and task modeling

4. Interface selection and consolidation

The Overall Design Process

1- Requirements Analysis
• Any software design starts with a careful analysis of the functional
requirements
• For interactive software with a focus on the user experience, we take a particular look at:

1 – Functions that are to be activated by the user (functional-task requirements) through interaction. They are what the system does or must not do, and how the system responds to inputs.

2 – Functions that are important in realizing certain aspects of the user experience (functional-UI requirements), even though they may not be directly activated by the user.

– E.g., an automatic feature that adjusts the display resolution of a streamed video based on the network traffic
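To make the distinction concrete, here is a minimal sketch (hypothetical thresholds and names, not from the lecture) of such a functional-UI requirement: the function runs automatically and shapes the user experience, but the user never invokes it directly.

```python
# Hypothetical resolution ladder: (minimum Mbps required, resolution label).
RESOLUTION_LADDER = [
    (25.0, "2160p"),
    (8.0, "1080p"),
    (4.0, "720p"),
    (1.5, "480p"),
    (0.0, "240p"),
]

def pick_resolution(measured_mbps: float) -> str:
    """Return the highest resolution the measured bandwidth can sustain."""
    for min_mbps, label in RESOLUTION_LADDER:
        if measured_mbps >= min_mbps:
            return label
    return "240p"  # conservative fallback

print(pick_resolution(6.2))  # -> 720p
```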
1- Requirements Analysis – Cont.

3 – Non-functional interactional requirements (non-functional UI requirements), i.e., those that are not directly related to accomplishing the application task. These include system attributes such as security, reliability, performance, maintainability, scalability, and usability.

• E.g., requiring a certain font size or type according to a corporate guideline may not be a critical functional requirement but a purely HCI requirement.
2- User Analysis
• User analysis is an essential step in HCI design “Know thy user”

• The results of the user analysis will be reflected back into the requirements, and this can identify additional UI requirements (functional or non-functional). It is simply a process of reinforcing the original requirements analysis to accommodate the potential users more completely.

• E.g., a particular age group might necessitate certain interaction features such as a large font size and high contrast

– E.g., a functional UI feature to adjust the scroll speed
3- Scenario and Task modeling
• Process of defining how the user will interact

• Identifying the application task structure and its sequential relationships

• With a crude task model, we can also draw a more detailed scenario or storyboard to envision how the system would be used and assess both the appropriateness of the task model and the feasibility of the given requirements

• Again, one can regard this as simply an iterative process to refine the original rough requirements
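Where a crude task model is needed, a simple tree of tasks and subtasks is often enough to start with. Below is a lightweight sketch (invented task names, not part of the lecture) of such a hierarchical task model:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A node in a hierarchical task model."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)
    ordered: bool = True  # must the subtasks be performed in sequence?

# Hypothetical task model for playing music on a smart TV
play_music = Task("Play music on smart TV", [
    Task("Turn on the TV"),
    Task("Select the music player app"),
    Task("Choose playback options", [
        Task("Pick a playlist"),
        Task("Set shuffle/repeat"),
    ], ordered=False),  # these options can be set in any order
    Task("Control playback"),
])

def print_model(task: Task, depth: int = 0) -> None:
    """Print the task hierarchy with indentation."""
    print("  " * depth + task.name)
    for sub in task.subtasks:
        print_model(sub, depth + 1)

print_model(play_music)
```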
3- Scenario and Task modeling – Cont.

• Through the process of storyboarding, a rough visual profile of the interface can be sketched. Furthermore, the storyboard will serve as another helpful medium in selecting the actual software or hardware interface.

• A storyboard communicates a story through images displayed in a sequence of panels that chronologically map the story's main events.

Creating a storyboard on sticky notes allows you to be collaborative and rearrange the sequence as you discover new information.
https://www.nngroup.com/articles/storyboards-visualize-ideas/
Scenario Development
“Day-in-the-life” scenarios:

• Characterize what happens when users perform typical tasks

• Can be acted out as a form of walkthrough

• May be used as the basis for a video-based scenario

• Helpful for selecting the actual software or hardware interface (visual, aural, haptic, …)

• A rough visual look of the interface can be sketched

• Task model as seed

• A starting point for drawing the object-class diagram, message diagrams, and use cases for preliminary implementation (programming)

• Useful tools – storyboards
Example: Interaction modeling for Smart TVs

1. User analysis

2. Operational context

3. Task analysis

1. How does the user do it? (task model)

2. Choosing the interface (by preference, ergonomics, technology maturity and reliability, performance?)

– New interface design?

– Interface characteristics (e.g., accuracy, speed, mental load, …)
Preliminary study

• Objective: Basic user characteristics

– 76 people (average age 23 years old)

– Their background and distinguishing characteristics?

– Operational/Usage situation?

– What applications/functionalities (in the smart TV) do they use?

– (Any multitasking behaviors?)
Summary of Results
User:               Early-20s college student / company man
Use configuration:  TV at 2 m; one-room apartment or home (living room); uses the TV alone late in the evening; skilled enough to change TV settings
Contents:           Music / MTV; entertainment programs
Application/Tasks:  Music player; TV program guide; multitasking with eating, laundry, …

• Preferred interface: smart phone and conventional remote control
• No table in the living room
• Sofa and comfy chairs in homes / floor-bed in one-room apartments
Main interaction modeling (2nd interview)

• Interview and study using the preliminary results

– User: early-20s college students (15 people)

– Interface used: Smart phone and remote control

– Interaction model for using applications (on the smart TV) such as:

• Music player (as coupled with personal mp3 player)

• VOD and TV program selector

• Internet

– Study methods

• Survey (see next page)

• Draw pictorial and annotated scenarios and capture user videos

Survey

• Free-form interview, but used a basic survey asking about the important aspects of the interaction process of using the three smart TV applications

• Considered what if the user used a smart phone or a conventional remote

4. Rank the most preferred and often used interface for the smart TV among:
a. Motion gesture
b. Voice
c. Smart Phone
d. Remote Control

5. What would you use to turn the TV on if you cannot find the remote control?
a. Motion gesture
b. Voice
c. Smart Phone
d. Remote Control
e. Will find the remote
f. Use the switch on the TV

6. There are many applications available on the smart TV. What is your preferred way to show what is available?
a. 1D layout of icons (linear menu)
b. 2D layout of icons
c. Others
Scenario 1.

(1) User comes home listening to his smart phone mp3 player; as he approaches the TV, a pop-up window is shown on the phone, and by touching it the smart TV is turned on and the music player on the smart TV is activated.

(2) User takes off the headset, sets the music play option to “random,” and continues to listen to the music through the smart TV.

(3) Uses voice commands and gestures to go to the next song.

(4) Checks the lyrics using the remote control (dual usage of remote and smart phone).

(5) Turns off the TV using the remote.
4- Interface selection and consolidation
• For each subtask and scene in the storyboard, particular software interface components (e.g., widgets), interaction techniques (e.g., voice recognition), and hardware (sensors, actuators, buttons, displays, etc.) will be chosen.

• Consider response time

• The chosen individual interface components need to be consolidated into a practical package, because not all of these interface components may be available on a working platform (e.g., Android-based smart phone, desktop PC, mp3 player). Certain choices will have to be retracted in the interest of employing a particular interaction platform.

• For instance, for a particular subtask, the designer might have chosen voice recognition as the most fitting interaction technique. However, if the required platform does not support a voice sensor or network access to a remote recognition server, an alternative will have to be devised, as in the sketch below.
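A minimal sketch (hypothetical capability names and platform profiles, not from the lecture) of checking a preferred interaction technique against a target platform and falling back to an alternative:

```python
# Hypothetical capability profiles; a real check would query device/OS APIs.
PLATFORM_CAPABILITIES = {
    "android_phone": {"touch", "microphone", "network"},
    "basic_mp3_player": {"buttons"},
}

# Preferred technique first, then fallbacks, with required capabilities
TECHNIQUE_PREFERENCES = {
    "search_song": [
        ("voice_recognition", {"microphone", "network"}),
        ("onscreen_keyboard", {"touch"}),
        ("button_scroll_list", {"buttons"}),
    ],
}

def select_technique(subtask: str, platform: str) -> str:
    """Pick the first preferred technique the platform can support."""
    available = PLATFORM_CAPABILITIES[platform]
    for technique, required in TECHNIQUE_PREFERENCES[subtask]:
        if required <= available:  # all required capabilities are present
            return technique
    raise ValueError(f"no viable technique for {subtask!r} on {platform!r}")

print(select_technique("search_song", "android_phone"))     # voice_recognition
print(select_technique("search_song", "basic_mp3_player"))  # button_scroll_list
```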
Interface Selection Choices
• Different interactions and subtasks may require various individual
devices (sensors and displays).

• We look at the hardware options in terms of the larger computing platforms, which are composed of the usual devices.

• The choice of a design configuration for the hardware interaction platform is largely determined by the characteristics of the task/application, which necessitate a certain operating environment.
Interface Selection Choices: Hardware/Platform

• Desktop

• Mobile

• Pad

• Kiosk

• Embedded

• TV / Console

• VR / AR

• Free form

Interface Selection Choices: Hardware/Platform – Cont.

• Desktop (stationary)
– Monitor (typical size 17–42 in.; resolution 1280 × 1024 or higher); keyboard, mouse, speakers/headphones (microphone)
– Suited for: Office-related tasks, time-consuming/serious tasks, multitasking

• Smartphones/Handhelds (mobile)
– Suited for: Simple and short tasks, special-purpose tasks

• Tablets/Pads
– Suited for: Simple, mobile, and short tasks, but those that require a relatively large screen (e.g., a sales …)

• Embedded (stationary/mobile)
– Suited for: Special tasks and situations where interaction and computation are needed on the spot (e.g., printer, rice cooker, MP3 player)
Interface Selection Choices: Hardware/Platform – Cont.

• TV/Consoles (stationary)
– Suited for: TV-centric tasks, limited interaction, tasks that need privacy (e.g., wild-gesture-based games in the living room)

• Kiosks
– Suited for: Public users and installations, limited interaction, short series of selection tasks, monitoring tasks

• Virtual Reality
– Suited for: Spatial training, tele-experience and tele-presence, immersive entertainment
Interface Choices: Software Components

• Command line

• WIMP/2D

– Windows/Layers
– Icons (simple and intuitive)
– Menu

– Direct interaction

– GUI

• Non-WIMP

– 3D, gesture, voice, multimodal, …
Interface Choices: Types of Menus

Menus allow activation of commands and tasks through selection (recognition) rather than recall.

Menu type                     Usage
Pull down                     Top-level (main) categorical menu
Pop up                        Object specific, context specific
Tool bar                      Functional / operational tasks
Tabs                          File-folder metaphor (categorical menu)
Scroll menu                   Long menu (many menu items)
2D array / Image maps         Identification of items by icons (vs. by long names) or pictures
Buttons / Hyperlinks          Short menu (few choices)
Check boxes / Radio buttons   Multiple choice / Exclusive choice
Hot keys                      For expert users
Aural menu                    Telemarketing, for the disabled
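As a concrete illustration (a minimal Tkinter sketch, not from the lecture), the same commands can be exposed both through a pull-down menu on the menu bar and through a context-specific pop-up menu:

```python
import tkinter as tk

root = tk.Tk()

# Pull-down menu: a top-level categorical entry on the menu bar
menubar = tk.Menu(root)
file_menu = tk.Menu(menubar, tearoff=0)
file_menu.add_command(label="Open", command=lambda: print("Open"))
file_menu.add_separator()
file_menu.add_command(label="Quit", command=root.destroy)
menubar.add_cascade(label="File", menu=file_menu)
root.config(menu=menubar)

# Pop-up (context) menu: shown at the pointer on right-click
popup = tk.Menu(root, tearoff=0)
popup.add_command(label="Copy", command=lambda: print("Copy"))
popup.add_command(label="Paste", command=lambda: print("Paste"))
root.bind("<Button-3>", lambda e: popup.tk_popup(e.x_root, e.y_root))

root.mainloop()
```

Either route invokes a command by selection (recognition) rather than by typed recall.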


Direct Interaction
• Mouse/touch-based interaction is strongly tied to the concept of direct and visual interaction.

• Before the mouse era, HCI was mostly in the form of keyboard input of text commands. The mouse made it possible for users to apply a direct metaphoric “touch” to the target objects (which are visually and metaphorically represented as concrete objects by their icons) rather than “commanding” the operating system (via keyboard input) to invoke the job indirectly. In addition to this virtual “touch” for simple enactment, direct and visual interaction has been further extended to direct manipulation, e.g., moving and gesturing with the cursor against the target interaction objects.

• “Dragging and dropping,” “cutting and pasting,” and “rubber banding” are typical examples of these extensions.
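A minimal Tkinter sketch (illustrative only, not from the lecture) of direct manipulation: an on-screen object the user can grab and drag with the pointer:

```python
import tkinter as tk

root = tk.Tk()
canvas = tk.Canvas(root, width=400, height=300, bg="white")
canvas.pack()

# The target object, visually represented as a concrete shape
item = canvas.create_rectangle(50, 50, 110, 110, fill="steelblue")
last = {"x": 0, "y": 0}

def on_press(event):
    # Remember where the drag started
    last["x"], last["y"] = event.x, event.y

def on_motion(event):
    # Move the object with the cursor: dragging as direct manipulation
    canvas.move(item, event.x - last["x"], event.y - last["y"])
    last["x"], last["y"] = event.x, event.y

canvas.tag_bind(item, "<ButtonPress-1>", on_press)
canvas.tag_bind(item, "<B1-Motion>", on_motion)

root.mainloop()
```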
Interface Choices: Types of Menus
Figure 4.13 Different styles of menus 2: (a) buttons, (b) check boxes and radio buttons, (c) slider menu, (d) image map.
Interface Choices: GUI Components
The GUI is also sometimes referred to as the WIMP (window, icon, mouse, and pointer) interface.

GUI interface components:

• Text box – for making short/medium alphanumeric input

• Toolbar – a small group of frequently used icons/functions organized horizontally or vertically for quick direct access

• Forms – a mixture of menus, buttons, and text boxes for long thematic input

• Dialog/Combo boxes – a mixture of menus, buttons, and text boxes for short mixed-mode input
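To ground these components, here is a minimal Tkinter sketch (illustrative only) combining a text box, a combo box, and a button into a small form:

```python
import tkinter as tk
from tkinter import ttk

root = tk.Tk()
root.title("Form sketch")

# Text box: short alphanumeric input
tk.Label(root, text="Name:").grid(row=0, column=0, sticky="e")
name_entry = tk.Entry(root, width=24)
name_entry.grid(row=0, column=1)

# Combo box: short mixed-mode input (typed or selected from a menu)
tk.Label(root, text="Platform:").grid(row=1, column=0, sticky="e")
platform_box = ttk.Combobox(root, values=["Desktop", "Mobile", "TV", "Kiosk"])
platform_box.grid(row=1, column=1)

# Button submitting the form as a whole
def submit():
    print(f"{name_entry.get()} chose {platform_box.get()}")

tk.Button(root, text="Submit", command=submit).grid(row=2, column=1, sticky="e")

root.mainloop()
```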
Interface Choices: GUI Components – Cont.

Figure 4.14 GUI interface components: (a) form, (b) toolbar, (c) dialog box, (d) combo box.
Interface Choices: 3D in 2D
• Standard GUI elements are operated and presented in the 2-D space, i.e., they are controlled by a mouse or touch screen and laid out on a 2-D screen.

• However, 2-D control in a 3-D application is often not sufficient (e.g., 3-D games).

• The mismatch in the degrees of freedom brings about fatigue and inconvenience. For this reason, non-WIMP-based interfaces such as 3D motion gestures are gaining popularity.

Figure 4.15 3-D interface in 2-D interaction input space
Other (NON-WIMP) Interfaces

• The WIMP interface is synonymous with the GUI. It has been a huge success since its introduction in the early 1980s, when it revolutionized computer operations.

• Thanks to continuing advances in interface technologies (e.g., voice recognition, language understanding, gesture recognition, 3-D tracking) and changes in the computing environment (e.g., personal to ubiquitous computing, sensors everywhere), new interfaces are starting to make their way into our everyday lives.

• In addition, the cloud-computing environment has enabled running the computationally expensive interface algorithms that non-WIMP interfaces often require on behalf of less powerful (e.g., mobile) devices and large service populations, as sketched below.
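A sketch (hypothetical endpoint and response schema; real speech services have their own SDKs) of how a thin mobile client might offload voice recognition to a cloud service:

```python
import requests  # third-party HTTP client library

RECOGNIZER_URL = "https://speech.example.com/v1/recognize"  # hypothetical service

def recognize_speech(audio_path: str) -> str:
    """Send recorded audio to a (hypothetical) cloud recognizer; return text."""
    with open(audio_path, "rb") as f:
        response = requests.post(
            RECOGNIZER_URL,
            data=f.read(),
            headers={"Content-Type": "audio/wav"},
            timeout=10,
        )
    response.raise_for_status()
    return response.json()["transcript"]  # assumed response field

# e.g., command_text = recognize_speech("command.wav")
```

The heavy recognition model runs server-side; the device only records and displays, which is what makes such interfaces viable on low-powered hardware.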
Wire-framing
• Concrete interaction flow with specific choices

• Wire-framing originated from making rough specifications for web site page design, and it resembles scenarios or storyboards

• Usually wire-frames look like page schematics or screen blueprints: a visual guide that represents the skeletal framework of a website or interface

• The wireframe depicts the page layout or arrangement of the UI objects and how they respond to each other. The wireframe usually focuses only on what a screen does, not what it looks like.

• Wireframes can be pencil drawings or sketches on a whiteboard, or they can be produced by means of a broad array of free or commercial software applications.
Wire-framing – Cont.
A wireframe is an outline of a webpage or app. It is usually a two-dimensional outline of what is included in each view of an app or website (a blueprint). Wireframes can be hand-drawn on paper (low-fidelity wireframes), built out digitally (high-fidelity wireframes), or a mix of the two (mid-fidelity wireframes).

https://looka.com/blog/wireframe-examples/
https://usabilitygeek.com/smart-ux-high-fidelity-wireframes/
Wire-framing Tools

• Wire-frames produced by these tools can be simulated to show interface behavior, and depending on the tool, the interface logic can be exported for actual code implementation (though usually it is not).

• Note that there are tools that allow the user to visually specify UI elements and their configuration and automatically generate code.

• Regardless of which type of tool is used, it is important that the design and implementation stages be separated. Through wire-framing, the developer can specify and flesh out, in a more concrete manner, the kinds of information displayed, the range of functions available, their priorities, alternatives, and the interaction flow (see the sketch below).
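As a toy illustration (invented screen and widget names, not any particular tool's format), a wireframe's skeleton and interaction flow can be captured declaratively and even "simulated," while staying entirely separate from the implementation:

```python
# Hypothetical declarative wireframe: each screen lists its widgets and
# which screen each user action leads to.
WIREFRAME = {
    "home": {
        "widgets": ["logo", "search_box", "browse_button"],
        "actions": {"browse_button": "catalog", "search_box": "results"},
    },
    "catalog": {
        "widgets": ["item_grid", "back_button"],
        "actions": {"item_grid": "detail", "back_button": "home"},
    },
    "results": {
        "widgets": ["result_list", "back_button"],
        "actions": {"back_button": "home"},
    },
    "detail": {
        "widgets": ["photo", "buy_button", "back_button"],
        "actions": {"back_button": "catalog"},
    },
}

def simulate(screen: str, action: str) -> str:
    """Follow one interaction step through the wireframe's flow."""
    return WIREFRAME[screen]["actions"].get(action, screen)

print(simulate("home", "browse_button"))  # -> catalog
```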
Web based “FluidUI” wire-framing tool (www.fluidui.com)

Participatory Design: The Good

• More accurate information about tasks

• More opportunity for users to influence design decisions

• A sense of participation that builds users' ego investment in successful implementation

• Potential for increased user acceptance of the final system
Participatory Design: The Bad
• More costly
• Lengthens the implementation period
• Builds antagonism with people not involved or whose suggestions are rejected
• Forces designers to compromise their design to satisfy incompetent participants
• Builds opposition to implementation
• Exacerbates personality conflicts between design-team members and users
• Shows that organizational politics and the preferences of certain individuals are more important than technical issues
Other Things to Consider

• Ethnographic observation

• Social impact

• Legal impact

• Deadlines

