DESIGN PROCESS AND TASK ANALYSIS

LESSON 7 – DIALOG DESIGN

A dialog is the construction of interaction between two or more beings or systems. In HCI, a
dialog is studied at three levels:
● Lexical − The shape of icons, the actual keys pressed, etc., are dealt with at this level.
● Syntactic − The order of inputs and outputs in an interaction is described at this level.
● Semantic − The effect of the dialog on the internal application/data is handled at this level.

Lesson Objectives:

At the end of this lesson, you will be able to:


● Learn the aspects of dialog levels and representation.
● Introduce formalism techniques that we can use to represent dialogs.
● Learn about visual materials used in the communication process.
● Understand direct manipulation as a good form of interface design.
● Know the factors that determine item presentation sequence.
● Understand the proper use of menu layouts and form fill-in dialog boxes.

7.1 Dialog Representation

To represent dialogs, we need formal techniques that serve two purposes:


● They help in understanding the proposed design in a better way.
● They help in analyzing dialogs to identify usability issues. For example, questions such as “does
the design actually support undo?” can be answered.

7.2 Introduction to Formalism

There are many formalism techniques that we can use to represent dialogs. In this lesson, we will
discuss three of these techniques:
● The state transition networks (STN)
● The state charts
● The classical Petri nets

State Transition Network (STN)

STNs are the most intuitive of these techniques: they capture the idea that a dialog is
fundamentally a progression from one state of the system to the next.

The syntax of an STN consists of the following two entities:


● Circles − A circle represents a state of the system, which is labeled by giving the state a
name.
● Arcs − The circles are connected by arcs that represent the action/event causing the
transition from the state where the arc originates to the state where it ends.
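The circles-and-arcs syntax maps naturally onto a lookup table. Below is a minimal sketch of an STN in Python for a hypothetical drawing-tool dialog; the state and action names are illustrative, not taken from the text:

```python
# A minimal sketch of an STN as a Python dictionary. States play the
# role of circles; each (state, action) key is an arc to the next state.
STN = {
    # (current state, action/event on the arc) -> next state
    ("menu", "select_circle"): "circle_start",
    ("circle_start", "click_center"): "circle_center",
    ("circle_center", "click_rim"): "finished",
    ("menu", "select_line"): "line_start",
    ("line_start", "click_first_point"): "line_point",
    ("line_point", "click_second_point"): "finished",
}

def run_dialog(start, actions):
    """Follow the arcs for a sequence of user actions; a KeyError
    means the action is not legal in the current state."""
    state = start
    for action in actions:
        state = STN[(state, action)]
    return state

print(run_dialog("menu", ["select_circle", "click_center", "click_rim"]))
# -> finished
```

A representation like this also supports the kind of usability analysis mentioned earlier: for instance, one can mechanically check whether every state has an arc corresponding to an undo action.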
STN Diagram

Image Courtesy of TutorialsPoint

StateCharts

StateCharts represent complex reactive systems. They extend Finite State Machines (FSMs) to
handle concurrency, add memory to the FSM, and simplify the representation of complex systems.
StateCharts distinguish the following kinds of states:
● Active state − The present state of the underlying FSM.
● Basic states − These are individual states and are not composed of other states.
● Super states − These states are composed of other states.

Illustration

For each basic state b, the super state containing b is called its ancestor state. A super state
is called an OR super state if, whenever it is active, exactly one of its sub-states is active.

Let us see the StateChart construction of a machine that dispenses bottles when coins are inserted.

Image Courtesy of TutorialsPoint

The diagram explains the entire procedure of a bottle dispensing machine. On pressing the
button after inserting a coin, the machine toggles between bottle-filling and dispensing modes. When
a requested bottle is available, it dispenses the bottle. In the background, another procedure
runs in which any stuck bottle is cleared. The ‘H’ symbol in Step 4 indicates that a procedure is
added to History for future access.
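The machine described above can be sketched as a hierarchical state machine in code. This is a rough, hypothetical rendering of the diagram: the state names, the shape of the history mechanism, and the background region are assumptions made for illustration:

```python
# Hedged sketch of the bottle-dispensing statechart. The main FSM
# toggles between filling and dispensing; a separate counter stands in
# for the concurrent background region that clears stuck bottles, and
# a list stands in for the 'H' (history) mechanism.

class BottleMachine:
    def __init__(self):
        self.state = "idle"        # active basic state of the main FSM
        self.history = []          # the 'H' (history) mechanism
        self.stuck_cleared = 0     # concurrent background region

    def insert_coin(self):
        if self.state == "idle":
            self.state = "coin_inserted"

    def press_button(self):
        # toggle between filling and dispensing modes
        if self.state == "coin_inserted":
            self.state = "filling"
        elif self.state == "filling":
            self.state = "dispensing"

    def dispense(self):
        if self.state == "dispensing":
            self.history.append("dispensed")  # recorded for future access
            self.state = "idle"

    def clear_stuck_bottle(self):
        # the background region runs regardless of the main region's state
        self.stuck_cleared += 1

m = BottleMachine()
m.insert_coin(); m.press_button(); m.press_button(); m.dispense()
print(m.state, m.history)  # idle ['dispensed']
```

A real statechart engine would model super states and history explicitly; the point here is only that the active state, the toggle, and the concurrent region are all visible in the structure.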

Petri Nets

A Petri Net is a simple model of active behavior, built from four kinds of elements:
places, transitions, arcs, and tokens. Petri Nets provide a graphical notation that is easy to understand.
● Place − This element is used to symbolize passive elements of the reactive system. A place is
represented by a circle.
● Transition − This element is used to symbolize active elements of the reactive system. Transitions
are represented by squares/rectangles.

● Arc − This element is used to represent causal relations. Arcs are represented by arrows.
● Token − These are the elements subject to change. Tokens are represented by small filled circles.

Petri Nets were originally developed by Carl Adam Petri [Pet62] and were the subject of his
dissertation in 1962. Since then, Petri Nets and their concepts have been extended, developed,
and applied in a variety of areas: office automation, workflows, flexible manufacturing, programming
languages, protocols and networks, hardware structures, real-time systems, performance evaluation,
operations research, embedded systems, defense systems, telecommunications, the Internet,
e-commerce and trading, railway networks, and biological systems.
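The dynamics of a Petri Net come from its firing rule: a transition is enabled when every input place holds a token, and firing it consumes a token from each input place and produces one in each output place. The toy net below (the place and transition names are invented for illustration) applies this rule to a coin-and-bottle scenario:

```python
# Minimal Petri net sketch: places hold tokens, arcs are encoded as the
# input/output place lists of each transition.

places = {"coin_inserted": 1, "bottle_ready": 1, "dispensed": 0}

transitions = {
    # transition name: (input places, output places)
    "dispense": (["coin_inserted", "bottle_ready"], ["dispensed"]),
}

def enabled(t):
    """A transition is enabled when every input place has a token."""
    inputs, _ = transitions[t]
    return all(places[p] >= 1 for p in inputs)

def fire(t):
    """Consume one token from each input place, produce one in each output."""
    inputs, outputs = transitions[t]
    if not enabled(t):
        raise ValueError(f"transition {t} is not enabled")
    for p in inputs:
        places[p] -= 1
    for p in outputs:
        places[p] += 1

fire("dispense")
print(places)  # {'coin_inserted': 0, 'bottle_ready': 0, 'dispensed': 1}
```

After firing, the transition is no longer enabled, because the tokens in its input places have been consumed.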
Here is an example of a Petri Net model, one for the control of a metabolic pathway. Tool
used: Visual Object Net++

Image Courtesy of TechFak: Petri Nets



7.3 Visual Thinking

Visual materials have assisted the communication process for ages in the form of paintings,
sketches, maps, diagrams, photographs, etc. In today’s world, as technology continues to grow, new
possibilities are offered for visual information, such as visual thinking and reasoning. Studies suggest
that the power of visual thinking in human-computer interaction (HCI) design is still not fully
explored. So, let us learn the theories that support visual thinking in sense-making
activities in HCI design.

An initial terminology for talking about visual thinking was developed, in the context of
information design for the web, that included concepts such as visual immediacy, visual impetus,
visual impedance, and visual metaphors, analogies, and associations.

Visual thinking is the use of imagery and other visual forms to make sense of the world and to
create meaningful content. Digital imagery is a special form of visual thinking, one that is particularly
salient for HCI and interaction design. Digital photographs qualify as digital imagery only when they
are also visual thinking, that is, when they are instrumental in making sense or creating meaning.

As such, visual thinking is well suited as a logical and collaborative method during the design
process. Let us briefly discuss the concepts individually.

Visual Immediacy

Visual immediacy is a reasoning process that helps in understanding information in a visual
representation. The term is chosen to highlight its time-related quality, which also serves as an
indicator of how well the reasoning has been facilitated by the design.

Visual Impetus

Visual impetus is defined as a stimulus that aims at the increase in engagement in the contextual
aspects of the representation.

Visual Impedance

Visual impedance is perceived as the opposite of visual immediacy: it is a hindrance in the design
of the representation. In relation to reasoning, impedance can be expressed as slower cognition.

Visual Metaphors, Association, Analogy, Abduction and Blending



● When a visual representation is used to understand an idea in terms of another, familiar idea, it is
called a visual metaphor.
● Visual analogy and conceptual blending are similar to metaphors. An analogy can be defined as
an inference from one particular to another. Conceptual blending can be defined as the
combination of elements and vital relations from varied situations.

HCI design can benefit greatly from the use of the above-mentioned concepts. The
concepts are pragmatic in supporting the use of visual procedures in HCI, as well as in design
processes.

7.4 Direct Manipulation Programming

Let’s say that you’re looking at an image of yourself on a roller coaster and want to see if your
terrified expression has been caught on camera. What do you do? Something like this?

Image Courtesy of Nielsen Norman Group

On a mobile phone you can pinch out to zoom into an image and pinch in to zoom out.

The action of using your fingertips to zoom in and out of the image is an example of a direct-
manipulation interaction. Another classic example is dragging a file from a folder to another one in
order to move it.

Image Courtesy of Nielsen Norman Group


Moving a file on MacOS using direct manipulation involves dragging that file from the source folder
and moving it into the destination folder.
Definition: Direct manipulation (DM) is an interaction style in which users act on displayed objects
of interest using physical, incremental, reversible actions whose effects are immediately visible on the
screen.

Ben Shneiderman first coined the term “direct manipulation” in the early 1980s, at a time when
the dominant interaction style was the command line. In command-line interfaces, the user must
remember the system label for a desired action, and type it in together with the names for the objects
of the action.

Image Courtesy of Nielsen Norman Group


Moving a file in a command-line interface involves remembering the name of the command (“mv”
in this case), the names of the source and destination folders, as well as the name of the file to be
moved.

Direct manipulation is one of the central concepts of graphical user interfaces (GUIs) and is
sometimes equated with “what you see is what you get” (WYSIWYG). These interfaces combine menu-
based interaction with physical actions such as dragging and dropping in order to help the user use
the interface with minimal learning.

The Characteristics of Direct Manipulation

In his analysis of direct manipulation, Shneiderman identified several attributes of this interaction
style that make it superior to command-line interfaces:

● Continuous representation of the object of interest. Users can see visual representations of the
objects that they can interact with. As soon as they perform an action, they can see its effects
on the state of the system. For example, when moving a file using drag-and-drop, users can see
the initial file displayed in the source folder, select it, and, as soon as the action is completed,
see it disappear from the source and appear in the destination, an immediate confirmation
that their action had the intended result. Thus, direct-manipulation UIs satisfy, by definition,
the first usability heuristic: visibility of system status. In contrast, in a command-line interface,
users usually must explicitly check that their actions indeed had the intended result (for
example, by listing the contents of the destination directory).
● Physical actions instead of complex syntax. Actions are invoked physically via clicks, button
presses, menu selections, and touch gestures. In the move-file example, drag-and-drop has a
direct analog in the real world, so this implementation for the move action has the right signifiers
and can be easily learned and remembered. In contrast, the command-line interface requires
users to recall not only the name of the command (“mv”), but also the names of the objects
involved (files and paths to the source and destination folders). Thus, unlike DM interfaces,
command-line interfaces are based on recall instead of recognition and violate an important
usability heuristic.
● Continuous feedback and reversible, incremental actions. Because of the visibility of the system
state, it’s easy to validate that each action caused the right result. Thus, when users make
mistakes, they can see right away the cause of the mistake and they should be able to easily
undo it. In contrast, with command-line interfaces, one single user command may have multiple
components that can cause the error. For instance, in the example below, the name of the
destination folder contains a typo “Measuring Usablty” instead of “Measuring Usability”. The
system simply assumed that the file name should be changed to “Measuring Usablty”. If users
check the destination folder, they will discover that there was a problem, but will have no way
of knowing what caused it: did they use the wrong command, the wrong source filename, or
the wrong destination?

Image Courtesy of Nielsen Norman Group


The command contains a typo in the destination name. Users have no way of identifying this
error and must do detective work to understand what went wrong.

This type of problem is familiar to everyone who has written a computer program. Finding a bug
when there are a variety of potential causes often takes more time than actually producing the code.

● Rapid learning. Because the objects of interest and the potential actions in the system are
visually represented, users can use recognition instead of recall to see what they could do and
select an operation most likely to fulfill their goal. They don’t have to learn and remember
complex syntax. Thus, although direct-manipulation interfaces may require some initial
adjustment, the learning required is likely to be less substantial.

Direct Manipulation vs. Skeuomorphism

When direct manipulation first appeared, it was based on the office-desk metaphor: the
computer screen was an office desk, and different documents (or files) were placed in folders, moved
around, or thrown in the trash. This underlying metaphor indicates the skeuomorphic origin of the concept.
The DM systems originally described by Shneiderman are also skeuomorphic; that is, they are based
on resemblance to a physical object in the real world. Thus, he talks about software interfaces that
copy Rolodexes and physical checkbooks to support tasks done (at the time) with these tools.

As we all know, skeuomorphism saw a huge revival in the early iPhone days, and has now gone
out of fashion.

Image Courtesy of Nielsen Norman Group


A skeuomorphic direct-manipulation interface for “playing” the piano on a phone.

While skeuomorphic interfaces are indeed based on direct manipulation, not all direct-
manipulation interfaces need to be skeuomorphic. In fact, today’s flat interfaces are a reaction to
skeuomorphism and depart from the real-world metaphors, yet they do rely on direct manipulation.

Disadvantages of Direct Manipulation



Almost every DM characteristic has a directly corresponding disadvantage:


● Continuous representation of the objects? It means that you can only act on the small number
of objects that can be seen at any given time. And objects that are out of sight, but not out of
mind, can only be dealt with after the user has laboriously navigated to the place that holds
those objects so that they can be made visible.
● Physical actions? One word: RSI (repetitive strain injury). It’s a lot of work to move all those icons
and sliders around the screen. Actually, two more words: accidental activation, which is
particularly common on touchscreens, but can also happen on mouse-driven systems.
● Continuous feedback? Only if you attempt an operation that the system feels like letting you
do. If you want to do something that’s not available, you can push and drag buttons and icons
as much as you want with no effect whatsoever. No feedback, only frustration. (A good UI will
show in-context help to explain why the desired action isn’t available and how to enable it.
Sadly, UIs this good are not very common.)
● Rapid learning? Yes, but only if the design is good; in practice, learnability depends on how well
the interface is designed. We’ve all seen menus with poorly chosen labels, buttons that did not
look clickable, and drop-down boxes with more options than fit on the screen.
And there are even more disadvantages:
● DM is slow. If the user needs to perform a large number of actions, on many objects, using direct
manipulation takes a lot longer than a command-line UI. Have you encountered any software
engineers who use DM to write their code? Sure, they might use DM elements in their software-
development interfaces, but the majority of the code will be typed in.
● Repetitive tasks are not well supported. DM interfaces are great for novices because they are
easy to learn, but because they are slow, experts who have to perform the same set of tasks
with high frequency usually rely on keyboard shortcuts, macros, and other command-language
interactions to speed up the process. For example, when you need to send an email
attachment to one recipient, it is easy to drag the desired file and drop it into the attachment
section. However, if you need to do this for 50 different recipients with customized subject
lines, a macro or script will be faster and less tedious.
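The 50-recipient scenario is exactly where a script wins over dragging and dropping 50 times. A sketch of the command-language alternative (the addresses, file name, and message structure here are hypothetical; a real script would hand these objects to an email library for sending):

```python
# Build 50 customized messages, each carrying the same attachment.
# With DM this is 50 drag-and-drop operations; as a script it is a loop.
recipients = [f"user{i}@example.com" for i in range(50)]

def build_message(to_addr, attachment_name):
    # Customize the subject line from the recipient's address.
    subject = f"Report for {to_addr.split('@')[0]}"
    return {"to": to_addr, "subject": subject, "attachment": attachment_name}

messages = [build_message(r, "report.pdf") for r in recipients]
print(len(messages), messages[0]["subject"])  # 50 Report for user0
```

This is the trade-off in miniature: the script is harder to discover than drag-and-drop, but for a high-frequency task it is faster, less tedious, and less error-prone.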
● Some gestures can be more error-prone than typing. Whereas in theory, because of the
continuous feedback, DM minimizes the chance of certain errors, in practice, there are
situations when a gesture is harder to perform than typing equivalent information. For example,
good luck trying to move the 50th column of a spreadsheet into the 2nd position using drag and
drop. For this exact reason, Netflix offers 3 interaction techniques for reordering subscribers’ DVD
queues: dragging the movie to the desired position (easy for short moves), a one-button shortcut
for moving into the #1 position (handy when you must watch a particular movie ASAP), and the
indirect option of typing the number of the desired new position (useful in most other cases).

Image Courtesy of Nielsen Norman Group


Netflix allows 3 interactions for rearranging a queue: dragging a movie to the desired position (not
shown), moving it directly to top (Move to top option), or typing in the position where it needs to be
moved (Move to option).

● Accessibility may suffer. DM UIs may fail visually impaired users or users with motor skill
impairments, especially if they are heavily based on physical actions, as opposed to button
presses and menu selections. (Workarounds exist, but it can be difficult to implement them.)

Direct manipulation has been acclaimed as a good form of interface design and is well
received by users. Such processes use many sources to get the input and finally convert them into
the output desired by the user, using built-in tools and programs.

“Directness” has been considered a phenomenon that contributes greatly to direct
manipulation programming. It has the following two aspects: distance and direct engagement.

Distance

Distance describes the gulfs between a user’s goal and the level of description provided by
the system with which the user deals. These gulfs are referred to as the Gulf of
Execution and the Gulf of Evaluation.

The Gulf of Execution

The Gulf of Execution is the gap between a user's goal and the means of implementing
that goal. One of the principal objectives of usability is to diminish this gap by removing barriers and
taking steps to minimize the user’s distraction from the intended task, which would otherwise
interrupt the flow of work.

The Gulf of Evaluation

The Gulf of Evaluation is the gap between the system's representation of its state and the
user's expectations of it. As per Donald Norman, the gulf is small when the system provides information
about its state in a form that is easy to get, is easy to interpret, and matches the way the person thinks
of the system.

Direct Engagement

Direct engagement occurs when the design gives the user direct control of the objects
presented by the system, making the system less difficult to use.

Scrutiny of the execution and evaluation processes illuminates the effort involved in using a
system. It also suggests ways to minimize the mental effort required to use a system.

Problems with Direct Manipulation

● Even though the immediacy of response and the conversion of objectives into actions has made
some tasks easy, not all tasks are best done this way. For example, a repetitive operation is
probably best done via a script rather than through immediacy.
● Direct manipulation interfaces find it hard to manage variables, or the representation of discrete
elements from a class of elements.
● Direct manipulation interfaces may not be accurate, as the dependency is on the user rather
than on the system.
● An important problem with direct manipulation interfaces is that they directly support only the
techniques the user thinks of.

7.5 Item Presentation Sequence

In HCI, the presentation sequence can be planned according to the task or application
requirements. The natural sequence of items in the menu should be taken into account. The main
factors in presentation sequence are:
● Time
● Numeric ordering
● Physical properties

A designer must select one of the following options when there is no task-related ordering:
● Alphabetic sequence of terms
● Grouping of related items
● Most frequently used items first
● Most important items first

7.6 Menu Layout

Helping users navigate should be a high priority for almost every website and application. After
all, even the coolest feature or the most compelling content is useless if people can’t find it. And even
if you have a search function, you usually shouldn’t rely on search as the only way to navigate. Most
designers recognize this, and include some form of navigation menu in their designs.

Definition: Navigation menus are lists of content categories or features, typically
presented as a set of links or icons grouped together with visual styling distinct from the
rest of the design.

Navigation menus include, but are not limited to, navigation bars and hamburger menus.

Menus are so important that you find them in virtually every website or piece of software you
encounter, but not all menus are created equal. Too often we observe users struggling with menus
that are confusing, difficult to manipulate, or simply hard to find.

Avoid common mistakes by following these guidelines for usable navigation menus:

A. Make It Visible
1. Don’t use tiny menus (or menu icons) on large screens. Menus shouldn’t be hidden when you
have plenty of space to display them.
2. Put menus in familiar locations. Users expect to find UI elements where they’ve seen them
before on other sites or apps (e.g., left rail, top of the screen). Make these expectations work in
your favor by placing your menus where people expect to find them.
3. Make menu links look interactive. Users may not even realize that it’s a menu if the options don’t
look clickable (or tappable). Menus may seem to be just decorative pictures or headings if you
incorporate too many graphics, or adhere too strictly to principles of flat design.
4. Ensure that your menus have enough visual weight. In many cases menus that are placed in
familiar locations don’t require much surrounding white space or color saturation in order to be
noticeable. But if the design is cluttered, menus that lack visual emphasis can easily be lost in a
sea of graphics, promotions, and headlines that compete for the viewer’s attention.
5. Use link text colors that contrast with the background color. It’s amazing how many designers
ignore contrast guidelines; navigating through digital space is disorienting enough without
having to squint at the screen just to read the menu.
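Guideline 5 can even be checked numerically. The sketch below applies the WCAG 2.x relative-luminance formula, a standard the text does not mandate but which is the usual way to quantify contrast; WCAG recommends a ratio of at least 4.5:1 for normal body text:

```python
# Contrast ratio per WCAG 2.x: convert sRGB channels to linear light,
# compute relative luminance, then take (L_light + 0.05) / (L_dark + 0.05).

def srgb_to_linear(c):
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (srgb_to_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
# A light gray on white fails the 4.5:1 recommendation:
print(contrast_ratio((150, 150, 150), (255, 255, 255)) < 4.5)  # True
```

Black on white yields the maximum ratio of 21:1; link colors that score below 4.5:1 against their background are the ones that force users to squint.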

Even designers familiar with all of these guidelines can still end up creating menus that are
overlooked by users, because it is so difficult to objectively evaluate your own work — especially for
subjective criteria like whether something is visible. If you know where it is (because you put it there),
then of course you can see it! That’s why it’s so important to test your menus with users.

Image Courtesy of hci.ilikecake.ie

B. Communicate the Current Location


6. Tell users ‘where’ the currently visible screen is located within the menu options. “Where am I?”
is one of the fundamental questions users need to answer to successfully navigate. Users rely on
visual cues from menus (and other navigation elements such as breadcrumbs) to answer this
critical question. Failing to indicate the current location is probably the single most common
mistake we see on website menus. Ironically, these menus have the greatest need to orient
users, since visitors often don’t enter from the homepage.

Image Courtesy of hci.ilikecake.ie


C. Coordinate Menus with User Tasks
7. Use understandable link labels. Figure out what users are looking for, and use category labels
that are familiar and relevant. Menus are not the place to get cute with made-up words and
internal jargon. Stick to terminology that clearly describes your content and features.
8. Make link labels easy to scan. You can reduce the amount of time users need to spend reading
menus by left-justifying vertical menus and by front-loading key terms.
9. For large websites, use menus to let users preview lower-level content. If typical user journeys
involve drilling down through several levels, mega-menus (or traditional drop-downs) can save
users time by letting them skip a level (or two).
10. Provide local navigation menus for closely related content. If people frequently want to
compare related products or complete several tasks within a single section, make those nearby
pages visible with a local navigation menu, rather than forcing people to ‘pogo stick’ up and
down your hierarchy.
11. Leverage visual communication. Images, graphics, or colors that help users understand the
menu options can aid comprehension. But make sure the images support user tasks (or at least
don't make the tasks more difficult).

Image Courtesy of hci.ilikecake.ie

D. Make It Easy to Manipulate


12. Make menu links big enough to be easily tapped or clicked. Links that are too small or too close
together are a huge source of frustration for mobile users, and also make large-screen designs
unnecessarily difficult to use.
13. Ensure that drop-downs are not too small or too big. Hover-activated drop-downs that are too
short quickly become an exercise in frustration, because they tend to disappear while you’re
trying to mouse over them to click a link. On the other hand, vertical drop-downs that are too
long make it difficult to access links near the bottom of the list, because they may be cut off
below the edge of the screen and require scrolling. Finally, hover-activated drop-downs that
are too wide are easily mistaken for new pages, creating user confusion about why the page
has seemingly changed even though they haven’t clicked anything.
14. Consider ‘sticky’ menus for long pages. Users who have reached the bottom of a long page
may face a lot of tedious scrolling before they can get back to the menus at the top. Menus
that remain visible at the top of the viewport even after scrolling can solve that problem and
are especially welcome on smaller screens.

15. Optimize for easy physical access to frequently used commands. For drop-down menus, this
means putting the most common items close to the link target that launches the drop-down (so
the user's mouse or finger won't have to travel as far). Recently, some mobile apps have even
begun reviving pie menus, which keep all the menu options nearby by arranging them in a
circle (or semicircle).

7.7 Form Fill-in Dialog Boxes

A dialog box is a graphical user interface element that appears as a small window, provides
information to the user, and waits for the user’s response in order to act upon the user’s input.
Dialog boxes are also used to display a confirmation message/notice to the user, with an “OK”
button to confirm that the message/notice has been read.

Example: The most often used dialog boxes are alerts.

Image Courtesy of HCI-sec02

Form fill-in is appropriate for the entry of multiple data fields:

● Complete information should be visible to the user.
● The display should resemble familiar paper forms.
● Some instructions should be given for the different types of entries.

Users must be familiar with:


● Keyboards
● Use of TAB key or mouse to move the cursor
● Error correction methods
● Field-label meanings
● Permissible field contents
● Use of the ENTER and/or RETURN key.

Microsoft Word Example Dialog Box

One reason why dialog boxes are very important is to ensure that users avoid mistakes such as
the one addressed by the dialog box shown in Figure 1. The user may be trying to close the
application while working on a document that has not yet been saved.

Form Fill-in Design Guidelines:


● Title should be meaningful.
● Instructions should be comprehensible.
● Fields should be logically grouped and sequenced.
● The form should be visually appealing.
● Familiar field labels should be provided.
● Consistent terminology and abbreviations should be used.
● Convenient cursor movement should be available.
● Error correction should be available for individual characters and entire fields.
● Errors should be prevented where possible.
● Error messages for unacceptable values should be displayed.
● Optional fields should be clearly marked.
● Explanatory messages for fields should be available.
● A completion signal should be displayed.
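Several of these guidelines, such as familiar labels, marked optional fields, error prevention, and explanatory error messages, can be sketched as a small validation routine. The field names and rules below are hypothetical, chosen only to illustrate the guidelines:

```python
# Each field carries a familiar label, a required flag (optional fields
# are marked in the label), and a validation rule.
FIELDS = [
    {"label": "Full name", "required": True,
     "validate": lambda v: bool(v.strip())},
    {"label": "Email", "required": True,
     "validate": lambda v: "@" in v},
    {"label": "Phone (optional)", "required": False,
     "validate": lambda v: v == "" or v.isdigit()},
]

def check_form(values):
    """Check all values before submission (error prevention) and return
    explanatory messages for unacceptable values."""
    errors = []
    for field, value in zip(FIELDS, values):
        if field["required"] and not value.strip():
            errors.append(f"{field['label']} is required.")
        elif not field["validate"](value):
            errors.append(f"{field['label']}: unacceptable value '{value}'.")
    return errors

print(check_form(["Ada Lovelace", "ada@example.com", ""]))  # []
print(check_form(["", "not-an-email", "abc"]))              # three messages
```

An empty error list is the completion signal; a non-empty one gives the user a specific, per-field explanation instead of a generic failure.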

Summary of the Lesson:

● A dialog is the construction of interaction between two or more beings or systems. In HCI, a
dialog is studied at three levels: lexical, syntactic, and semantic.
● To represent dialogs, we need formal techniques that help in understanding the proposed
design in a better way and in analyzing dialogs to identify usability issues.
● Three such formalism techniques are state transition networks (STNs), state charts, and
classical Petri nets.
● Direct manipulation is an interaction style in which the objects of interest in the UI are visible and
can be acted upon via physical, reversible, incremental actions that receive immediate
feedback.
● The presentation sequence can be planned according to the task or application requirements.
