
Introduction

HCI (human-computer interaction) is the study of how people interact with computers and
to what extent computers are or are not developed for successful interaction with human
beings. As its name implies, HCI consists of three parts: the user, the computer itself, and
the ways they work together.

User
By "user", we may mean an individual user or a group of users working together. An
appreciation of the way people's sensory systems (sight, hearing, touch) relay information
is vital. Also, different users form different conceptions or mental models about their
interactions, and have different ways of learning and retaining knowledge. In addition,
cultural and national differences play a part.

Computer
When we talk about the computer, we're referring to any technology, ranging from desktop
computers to large-scale computer systems. For example, if we were discussing the
design of a website, then the website itself would be referred to as "the computer".
Devices such as mobile phones or VCRs can also be considered to be "computers".

Interaction

There are obvious differences between humans and machines. In spite of these, HCI
attempts to ensure that they both get on with each other and interact successfully. In order
to achieve a usable system, you need to apply what you know about humans and
computers, and consult with likely users throughout the design process. In real systems,
the schedule and the budget are important, and it is vital to find a balance between what
would be ideal for the users and what is feasible in reality.

The Goals of HCI

The goals of HCI are to produce usable and safe systems, as well as functional systems. In
order to produce computer systems with good usability, developers must attempt to:

 understand the factors that determine how people use technology
 develop tools and techniques to enable building suitable systems
 achieve efficient, effective, and safe interaction
 put people first

Underlying the whole theme of HCI is the belief that people using a computer system
should come first. Their needs, capabilities and preferences for conducting various tasks
should direct developers in the way that they design systems. People should not have to
change the way that they use a system in order to fit in with it. Instead, the system should
be designed to match their requirements.

Usability

Usability is one of the key concepts in HCI. It is concerned with making systems easy to
learn and use. A usable system is:
 easy to learn
 easy to remember how to use
 effective to use
 efficient to use
 safe to use
 enjoyable to use

Why is usability important?

Many everyday systems and products seem to be designed with little regard to usability.
This leads to frustration, wasted time and errors. This list contains examples of interactive
products:
mobile phone, computer, personal organizer, remote control, soft drink machine, coffee
machine, ATM, ticket machine, library information system, the web, photocopier, watch,
printer, stereo, calculator, video game, etc.

How many are actually easy, effortless, and enjoyable to use?

For example, consider the buttons on a typical photocopier's control panel.

Imagine that you just put your document into the photocopier and set the photocopier to
make 15 copies, sorted and stapled. Then you push the big button with the "C" to start
making your copies.
What do you think will happen?
(a) The photocopier makes the copies correctly.
(b) The photocopier settings are cleared and no copies are made.
If you selected (b) you are right! The "C" stands for clear, not copy. The copy button is
actually the button on the left with the "line in a diamond" symbol. This symbol is widely
used on photocopiers, but is of little help to someone who is unfamiliar with this.

Factors in HCI

There are a large number of factors which should be considered in the analysis and design
of a system using HCI principles. Many of these factors interact with each other, making
the analysis even more complex. The main factors are listed below:
Organisation Factors: training, job design, politics, roles, work organisation
Environmental Factors: noise, heating, lighting, ventilation
Health and Safety Factors
The User: cognitive processes and capabilities; motivation, enjoyment, satisfaction, personality, experience
Comfort Factors: seating, equipment, layout
User Interface: input devices, output devices, dialogue structures, use of colour, icons, commands, navigation, graphics, natural language, user support, multimedia
Task Factors: easy, complex, novel, task allocation, monitoring, skills
Constraints: cost, timescales, budgets, staff, equipment, buildings
System Functionality: hardware, software, application
Productivity Factors: increase output, increase quality, decrease costs, decrease errors, increase innovation

Disciplines contributing to HCI

The field of HCI covers a wide range of topics, and its development has relied on
contributions from many disciplines. Some of the main disciplines which have contributed
to HCI are:

Computer Science
o technology
o software design, development & maintenance
o User Interface Management Systems (UIMS) & User Interface Development
Environments (UIDE)
o prototyping tools
o graphics

Cognitive Psychology
o information processing
o capabilities
o limitations
o cooperative working
o performance prediction

Social Psychology
o social & organizational structures

Ergonomics/Human Factors
o hardware design
o display readability

Linguistics
o natural language interfaces

Artificial Intelligence
o intelligent software

Philosophy, Sociology & Anthropology
o Computer supported cooperative work (CSCW)

Engineering & Design
o graphic design
o engineering principles

USER INTERFACES
A user interface is that portion of an interactive computer system that communicates with
the user. Design of the user interface includes any aspect of the system that is visible to
the user. Once, all computer users were specialists in computing, and interfaces consisted
of jumper wires in patch boards, punched cards prepared offline, and batch printouts.
Today a wide range of non-specialists use computers, and keyboards, mice, and graphical
displays are the most common interface hardware. The user interface is becoming a larger
and larger portion of the software in a computer system—and a more important portion, as
broader groups of people use computers. As computers become more powerful, the critical
bottleneck in applying computer-based systems to solve problems is now more often in the
user interface, rather than the computer hardware or software.
Because the design of the user interface includes anything that is visible to the user,
interface design extends deep into the design of the interactive system as a whole. A good
user interface cannot be applied to a system after it is built but must be part of the design
process from the beginning. Proper design of a user interface can make a substantial
difference in training time, performance speed, error rates, user satisfaction, and the user’s
retention of knowledge of operations over time. The poor designs of the past are giving
way to elegant systems.
Descriptive taxonomies of users and tasks, predictive models of performance, and
explanatory theories are being developed to guide designers and evaluators. Haphazard
and intuitive development strategies with claims of ‘‘user friendliness’’ are yielding to a
more scientific approach. Measurement of learning time, performance, errors, and
subjective satisfaction is now a part of the design process.
Design of a User Interface
Design of a user interface begins with task analysis—an understanding of the user’s
underlying tasks and the problem domain. The user interface should be designed in terms
of the user’s terminology and conception of his or her job, rather than the programmer’s.
A good understanding of the cognitive and behavioral characteristics of people in general
as well as the particular user population is thus important. Good user interface design
works from the user’s capabilities and limitations, not the machine’s; this applies to
generic interfaces for large groups of people as well as to designing special interfaces for
users with physical or other disabilities. Knowledge of the nature of the user’s work and
environment is also critical. The task to be performed can then be divided and portions
assigned to the user or machine, based on knowledge of the capabilities and limitations of
each.
Levels of Design
It is useful to consider the user interface at several distinct levels of abstraction and to
develop a design and implementation for each. This simplifies the developer’s task by
allowing it to be divided into several smaller problems. The design of a user interface is
often divided into the conceptual, semantic, syntactic, and lexical levels. The conceptual
level describes the basic entities underlying the user’s view of the system and the actions
possible upon them. The semantic level describes the functions performed by the system.
This corresponds to a description of the functional requirements of the system, but it does
not address how the user will invoke the functions. The syntactic level describes the
sequences of inputs and outputs necessary to invoke the functions described. The lexical
level determines how the inputs and outputs are actually formed from primitive hardware
operations.

The syntactic-semantic object-action model is a related approach; it, too, separates the task
and computer concepts (i.e., the semantics in the previous paragraph) from the syntax for
carrying out the task. For example, the task of writing a scientific journal article can be
decomposed into the sub-tasks for writing the title page, the body, and the references.
Similarly, the title page might be decomposed into a unique title, one or more authors, an
abstract, and several keywords. To write a scientific article, the user must understand these
task semantics.
To use a word processor, the user must learn about computer semantics, such as
directories, filenames, files, and the structure of a file. Finally, the user must learn the
syntax of the commands for opening a file, inserting text, editing, and saving or printing
the file. Novices often struggle to learn how to carry out their tasks on the computer and to
remember the syntactic details. Once learned, the task and computer semantics are
relatively stable in human memory, but the syntactic details must be frequently rehearsed.
A knowledgeable user of one word processor who wishes to learn a second one only needs
to learn the new syntactic details.
Syntactic Level Design: Interaction Styles
The principal classes of user interfaces currently in use are command languages, menus,
forms, natural language, direct manipulation, virtual reality, and combinations of these.
Each interaction style has its merits for particular user communities or sets of tasks.
Choosing a style or a combination of styles is a key step, but within each there are
numerous minute decisions that determine the efficacy of the resulting system.
Command Language
Command language user interfaces use artificial languages, much like programming
languages. They are concise and unambiguous, but they are often difficult for a novice to
learn and remember. However, since they usually permit a user to combine constructs in
new and complex ways, they can be more powerful for advanced users. For them,
command languages provide a strong feeling that they are in charge and that they are
taking the initiative rather than responding to the computer. Command language users
must learn the syntax, but they can often express complex possibilities rapidly, without
having to read distracting prompts. However, error rates are typically high, training is
necessary, and retention may be poor. Error messages and on-line assistance are difficult
to provide because of the diversity of possibilities and the complexity of relating tasks to
computer concepts and syntax. Command languages and lengthier query or programming
languages are the domain of the expert frequent users (power users), who often derive
satisfaction from mastering a complex set of concepts and syntax. Command language
interfaces are also the style most amenable to programming, that is, writing programs or
scripts of user input commands.
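The flavor of a command-language interface can be sketched with a toy dispatcher. The commands (`mk`, `cp`, `rm`) and the in-memory file store below are invented for illustration; they are not drawn from any real shell:

```python
# A hypothetical command language: terse verbs applied to a simple
# dictionary standing in for a file system.

def run_command(line, files):
    """Parse one command line and apply it to the file store."""
    parts = line.split()
    verb, args = parts[0], parts[1:]
    if verb == "mk":                    # create an empty file
        files[args[0]] = ""
    elif verb == "cp":                  # copy source to destination
        files[args[1]] = files[args[0]]
    elif verb == "rm":                  # delete a file
        del files[args[0]]
    else:
        raise ValueError(f"unknown command: {verb}")
    return files

store = {}
for cmd in ["mk notes.txt", "cp notes.txt backup.txt"]:
    run_command(cmd, store)
print(sorted(store))  # both files now exist
```

Note how the terseness cuts both ways: an expert can script these commands, but a novice who mistypes a verb gets only an error, with no visible list of options to fall back on.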
Menu
Menu-based user interfaces explicitly present the options available to a user at each point
in a dialogue. Users read a list of items, select the one most appropriate to their task, type
or point to indicate their selection, verify that the selection is correct, initiate the action,
and observe the effect. If the terminology and meaning of the items are understandable
and distinct, users can accomplish their tasks with little learning or memorization and few
keystrokes. The menu requires only that the user be able to recognize the desired entry
from a list rather than recall it, placing a smaller load on long-term memory. The greatest
benefit may be that there is a clear structure to decision making, since only a few choices
are presented at a time. This interaction style is appropriate for novice and intermittent
users. It can also be appealing to frequent users if the display and selection mechanisms
are very rapid. A principal disadvantage is that they can be annoying for experienced users
who already know the choices they want to make and do not need to see them listed. Well-
designed menu systems, however, can provide bypasses for expert users. Menus are also
difficult to apply to ‘‘shallow’’ languages, which have large numbers of choices at a few
points, because the option display becomes too big.
For designers, menu selection systems require careful task analysis to ensure that all
functions are supported conveniently and that terminology is chosen carefully and used
consistently. Software tools to support menu selection help in ensuring consistent screen
design, validating completeness, and supporting maintenance.
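The recognize-rather-than-recall cycle described above can be sketched in a few lines. The menu items here are hypothetical:

```python
# Sketch of a menu dialogue: the full option list is always displayed,
# so the user need only recognize the desired entry, not recall it.

MENU = ["Copy", "Print", "Scan", "Quit"]

def select(menu, choice_number):
    """Return the menu item for a 1-based numbered selection."""
    if not 1 <= choice_number <= len(menu):
        raise ValueError("choice out of range")
    return menu[choice_number - 1]

for i, item in enumerate(MENU, start=1):
    print(f"{i}. {item}")               # display the options
picked = select(MENU, 2)
print("You selected:", picked)
```

Bounds checking matters here: because the system, not the user, enumerates the valid choices, an out-of-range selection can be rejected immediately rather than causing an obscure failure later.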
Form Fill-in
Menu selection usually becomes cumbersome when data entry is required; form fill-in
(also called fill-in-the-blanks) is useful here. Users see a display of related fields, move a
cursor among the fields, and enter data where desired, much as they would with a paper
form for an invoice, personnel data sheet, or order form. Seeing the full set of related
fields on the screen at one time in a familiar format is often very helpful. Form fill-in
interaction does require that users understand the field labels, know the permissible
values, be familiar with typing and editing fields, and be capable of responding to error
messages. These demands imply that users must have some training or experience.
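The requirement that users know the field labels and permissible values can be sketched as a small form validator. The fields and their rules below are assumptions chosen for illustration:

```python
# Sketch of form fill-in: related fields with labels and permissible
# values, checked together so the user sees all errors at once.

FIELDS = {
    "name":     lambda v: len(v) > 0,                 # must not be empty
    "quantity": lambda v: v.isdigit() and int(v) > 0, # positive integer
    "country":  lambda v: v in {"US", "UK", "IN"},    # known codes only
}

def validate_form(entries):
    """Return per-field error messages; an empty dict means valid."""
    errors = {}
    for field, check in FIELDS.items():
        value = entries.get(field, "")
        if not check(value):
            errors[field] = f"invalid value for '{field}'"
    return errors

print(validate_form({"name": "Ada", "quantity": "3", "country": "UK"}))  # {}
print(validate_form({"name": "", "quantity": "-1", "country": "ZZ"}))
```

Returning all errors at once, rather than rejecting the form at the first bad field, mirrors the paper-form experience the text describes: the user can see and correct the whole sheet in one pass.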
Natural Language
The principal benefit of natural language user interfaces is, of course, that the user
already knows the language. The hope that computers will respond properly to arbitrary
natural language sentences or phrases has engaged many researchers and system
developers, but with limited success thus far. Natural language interaction usually
provides little context for issuing the next command, frequently requires ‘‘clarification
dialog,’’ and may be slower and more cumbersome than the alternatives. Therefore, given
the state of the art, such an interface must be restricted to some subset of natural language,
and the subset must be chosen carefully—both in vocabulary and range of syntactic
constructs. Such systems often behave poorly when the user veers even slightly away from
the subset. Since they begin by presenting the illusion that the computer really can ‘‘speak
English,’’ the systems can trap or frustrate novice users. For this reason, the techniques of
human factors engineering can help. A human factors study of the task and the terms and
constructs people normally use to describe it can be used to restrict the subset of natural
language in an appropriate way, based on empirical observation.
Human factors study can also identify tasks for which natural language input is good or
bad. Although future research in natural language offers the hope of human-computer
communication that is so natural it is ‘‘just like talking to a person,’’ such conversation
may not always be the most effective way of commanding a machine. It is often more
verbose and less precise than computer languages. In settings such as surgery, air traffic
control, and emergency vehicle dispatching, people have evolved terse, highly formatted
languages, similar to computer languages, for communicating with other people. For a
frequent user, the effort of learning such an artificial language is outweighed by its
conciseness and precision, and it is often preferable to natural language.
Direct Manipulation
In a graphical or direct manipulation style of user interface (GUI), a set of objects is
presented on a screen, and the user has a repertoire of manipulations that can be performed
on any of them. This means that the user has no command language to remember beyond
the standard set of manipulations, few cognitive changes of mode, and a reminder of the
available objects and their states shown continuously on the display. Examples of this
approach include painting programs, spreadsheets, manufacturing or process control
systems that show a schematic diagram of the plant, air traffic control systems, some
educational and flight simulations, video games, and the Xerox Star desktop and its
descendants (Macintosh, Windows, and various X Window file managers). By pointing at
objects and actions, users can rapidly carry out tasks, immediately observe the results, and,
if necessary, reverse the action. Keyboard entry of commands or menu choices is replaced
by cursor motion devices, such as a lightpen, joystick, touch screen, trackball, or mouse,
to select from a visible set of objects and actions. Direct manipulation is appealing to
novices, is easy to remember for intermittent users, encourages exploration, and, with
careful design, can be rapid for power users. The key difficulty in designing such
interfaces is to find suitable manipulable graphical representations or visual metaphors for
the objects of the problem domain, such as the desktop and filing cabinet. A principal
drawback of direct manipulation is that it is often difficult to create scripts or
parameterized programs in such an inherently dynamic and ephemeral language.
In a well-designed direct manipulation interface, the user’s input actions should be as
close as possible to the user’s thoughts that motivated those actions; the gap between the
user’s intentions and the actions necessary to input them into the computer should be
reduced. The goal is to build on the equipment and skills humans have acquired through
evolution and experience and exploit these for communicating with the computer. Direct
manipulation interfaces have enjoyed great success, particularly with new users, largely
because they draw on analogies to existing human skills (pointing, grabbing, moving
objects in space), rather than trained behaviors.
Virtual Reality
Virtual reality environments carry the user’s illusion of manipulating real objects and the
benefit of natural interaction still further. By coupling the motion of the user’s head to
changes in the images presented on a head-mounted display, the illusion of being
surrounded by a world of computer-generated images, or a virtual environment, is created.
Hand-mounted sensors allow the user to interact with these images as if they were real
objects located in space surrounding him or her. Augmented reality interfaces blend the
virtual world with a view of the real world through a half-silvered mirror or a TV camera,
allowing virtual images to be superimposed on real objects and annotations or other
computer data to be attached to real objects. The state of the art in virtual reality requires
expensive and cumbersome equipment and provides a very low-resolution display, so such
interfaces are currently used mainly where a feeling of ‘‘presence’’ in the virtual world is
of paramount importance, such as training of fire fighters or treatment of phobias. As the
technology improves they will likely find wider use.
Virtual reality interfaces, like direct manipulation interfaces, gain their strength by
exploiting the user’s pre-existing abilities and expectations. Navigating through a
conventional computer system requires a set of learned, unnatural commands, such as
keywords to be typed in, or function keys to be pressed. Navigating through a virtual
reality system exploits the user’s existing, natural ‘‘navigational commands,’’ such as
positioning his or her head and eyes, turning his or her body, or walking toward something
of interest. The result is a more natural user interface, because interacting with it is more
like interacting with the rest of the world.
Other Issues
Blending several styles may be appropriate when the required tasks and users are diverse.
Commands may lead the user to a form fill-in where data entry is required or pop-up (or
pulldown) menus may be used to control a direct manipulation environment when a
suitable visualization of operations cannot be found. The area of computer-supported
cooperative work extends the notion of a single user-computer interface to an interface
that supports the collaboration of a group of users.
Although interfaces using modern techniques such as direct manipulation are often easier
to learn and use than conventional ones, they are considerably more difficult to build,
since they are currently typically programmed in a low-level, ad-hoc manner. Appropriate
higher level software engineering concepts and abstractions for dealing with these new
interaction techniques are still needed. Direct manipulation techniques for the actual
design and building of direct manipulation interfaces are one solution. Specifying the
graphical appearance of the user interface (the ‘‘look’’) via direct manipulation is relatively
straightforward and provided by many current tools, such as Visual Basic, but describing
the behavior of the dialogue (the ‘‘feel’’) is more difficult and not yet well supported;
predefined or ‘‘canned’’ controls and widgets represent the current state of the art.

DESIGN OF HCI
The design and usability of interactive systems affects the quality of people's relationship
with technology. Web applications, games, embedded devices, etc., are all part of this
ecosystem, which has become an integral part of our lives. Let us now discuss some of its
major components.

Concept of Usability Engineering

Usability engineering is a method used in the development of software and systems that
includes user input from the inception of the process and assures the effectiveness of the
product through the use of usability requirements and metrics.

It thus refers to the usability aspects of the entire process of specifying, implementing and
testing hardware and software products. Everything from the requirements gathering stage
to the installation, marketing and testing of products falls within this process.

Goals of Usability Engineering

 Effective to use − Functional
 Efficient to use − Efficient
 Error free in use − Safe
 Easy to use − Friendly
 Enjoyable in use − Delightful Experience

Usability

Usability has three components − effectiveness, efficiency and satisfaction − through
which users accomplish their goals in particular environments. Let us look briefly at these
components.

 Effectiveness − The accuracy and completeness with which users achieve their goals.
 Efficiency − How economically resources are expended in achieving those goals.
 Satisfaction − How comfortable and acceptable the system is to its users.
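One common way to quantify the first two components in a usability test is to measure effectiveness as the task completion rate and efficiency as effectiveness per unit time; this convention is an assumption of typical practice, not a definition given in the text:

```python
# Illustrative usability metrics: completion rate and rate per minute.
# The figures below are made up for the example.

def effectiveness(tasks_completed, tasks_attempted):
    """Fraction of attempted tasks that users completed."""
    return tasks_completed / tasks_attempted

def efficiency(tasks_completed, tasks_attempted, total_time_min):
    """Effectiveness achieved per minute of user effort."""
    return effectiveness(tasks_completed, tasks_attempted) / total_time_min

# 9 of 10 tasks completed, taking 3 minutes in total.
print(effectiveness(9, 10))
print(round(efficiency(9, 10, 3), 3))
```

Satisfaction has no comparable formula; it is usually measured with questionnaires, which is why it is listed separately in most usability frameworks.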

Usability Study

The methodical study of the interaction between people, products, and environments,
based on experimental assessment. It draws on fields such as psychology and behavioral
science.

Usability Testing

The scientific evaluation of the stated usability parameters against the users'
requirements, capabilities, expectations, safety and satisfaction is known as usability
testing.

Acceptance Testing

Acceptance testing, also known as User Acceptance Testing (UAT), is a testing procedure
performed by the users as a final checkpoint before accepting delivery from a vendor. Let
us take the example of a handheld barcode scanner.
Assume that a supermarket has bought barcode scanners from a vendor. The supermarket
gathers a team of counter employees and has them test the device in a mock store setting.
Through this procedure, the users determine whether the product is acceptable for their
needs. The user acceptance testing must "pass" before the supermarket accepts the final
product from the vendor.

Software Tools

A software tool is a program used to create, maintain, or otherwise support
other programs and applications. Some of the commonly used software tools in HCI are as
follows −

 Specification Methods − The methods used to specify the GUI. Even though
these methods are lengthy and can be ambiguous, they are easy to understand.
 Grammars − Written instructions or expressions that a program can
understand. They provide checks for completeness and correctness.
 Transition Diagrams − Sets of nodes and links that can be displayed as text, link
frequency, state diagram, etc. They are difficult to evaluate for usability, visibility,
modularity and synchronization.
 Statecharts − Chart methods developed for simultaneous user activities and
external actions. They provide link-specification with interface building tools.
 Interface Building Tools − Design methods that help in designing command
languages, data-entry structures, and widgets.
 Interface Mockup Tools − Tools to develop a quick sketch of a GUI. E.g.,
Microsoft Visio, Visual Studio .NET, etc.
 Software Engineering Tools − Extensive programming tools to provide a user
interface management system.
 Evaluation Tools − Tools to evaluate the correctness and completeness of
programs.
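The transition-diagram idea above can be sketched as a table of (state, input) links that a dialogue follows. The states and inputs below are hypothetical:

```python
# A dialogue specified as a transition diagram: nodes are dialogue
# states, links are user inputs. Unknown inputs leave the state unchanged.

TRANSITIONS = {
    ("main_menu", "open"):     "file_dialog",
    ("file_dialog", "ok"):     "editing",
    ("file_dialog", "cancel"): "main_menu",
    ("editing", "save"):       "editing",
    ("editing", "close"):      "main_menu",
}

def step(state, user_input):
    """Follow one link in the transition diagram."""
    return TRANSITIONS.get((state, user_input), state)

state = "main_menu"
for action in ["open", "ok", "save", "close"]:
    state = step(state, action)
print(state)  # back at "main_menu"
```

Writing the dialogue as an explicit table makes completeness easy to check mechanically (every state/input pair is either listed or deliberately ignored), which is exactly the kind of validation the tools above are meant to support.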

HCI and Software Engineering

Software engineering is the study of the design, development and maintenance of
software. It meets HCI in making human-machine interaction more vibrant and
interactive.

Let us consider the following models in software engineering for interactive design.

The Waterfall Method


Interactive System Design

The uni-directional movement of the waterfall model of software engineering shows that
every phase depends on the preceding phase and not vice-versa. However, this model is
not suitable for interactive system design.
Interactive system design shows that every phase depends on the others to serve the
purpose of design and product creation. It is a continuous process, as there is so much to
know and users keep changing all the time. An interactive system designer should
recognize this diversity.

Prototyping

Prototyping is another software engineering model; a prototype can have the complete
range of functionalities of the projected system.

In HCI, a prototype is a trial, partial design that helps users test design ideas without
building a complete system.

Sketches are one example of a prototype: sketches of an interactive design can later be
developed into a graphical interface.

A paper sketch can be considered a Low Fidelity Prototype, as it uses manual procedures
like drawing on paper.

A Medium Fidelity Prototype involves some but not all procedures of the system. E.g.,
first screen of a GUI.

Finally, a Hi Fidelity Prototype simulates all the functionalities of the system in a design.
This prototype requires time, money and manpower.

User Centered Design (UCD)

The process of collecting feedback from users to improve the design is known as user
centered design or UCD.

UCD Drawbacks

 Passive user involvement.
 Users' perception of the new interface may be inappropriate.
 Designers may ask incorrect questions of users.

Interactive System Design Life Cycle (ISLC)

The stages of the interactive system design life cycle are repeated until a satisfactory
solution is reached.

GUI Design & Aesthetics

A Graphical User Interface (GUI) is the interface through which a user operates programs,
applications or devices of a computer system. This is where the icons, menus, widgets and
labels exist for the users to access.

It is important that everything in the GUI is arranged in a way that is recognizable and
pleasing to the eye, which reflects the aesthetic sense of the GUI designer. GUI aesthetics
give a character and identity to any product.

DEVICES USED FOR INTERACTIVITY

Several interactive devices are used for the human computer interaction. Some of them are
known tools and some are recently developed or are a concept to be developed in the
future.

Touch Screen

The touch screen concept was envisioned decades ago, but a practical platform emerged
only recently. Today there are many devices that use touch screens. After careful selection
of these devices, developers customize their touch screen experiences.
The cheapest and easiest touch screens to manufacture are those using electrodes and a
voltage gradient. Beyond hardware differences, software alone can create major
differences from one touch device to another, even when the same hardware is used.

Along with innovative designs and new hardware and software, touch screens are likely to
grow in a big way in the future. Further development can be made by synchronizing touch
with other devices.

In HCI, touch screen can be considered as a new interactive device.

Gesture Recognition

Gesture recognition is a subject in language technology that has the objective of
understanding human movement via mathematical procedures. Hand gesture recognition
is currently the main field of focus. This technology is future-oriented.

This new technology promises a more advanced relationship between human and
computer in which no mechanical devices are used. Such an interactive device might
displace older devices like keyboards and even encroach on newer devices like touch
screens.

Speech Recognition

Speech recognition is the technology of transcribing spoken phrases into written text.
Such technologies can be used for advanced control of many devices, such as switching
electrical appliances on and off. For such control, only a limited set of commands needs to
be recognized rather than a complete transcription; recognition remains much harder for
large vocabularies.

This HCI technology helps users work hands-free and keeps instruction-based interaction
up to date with users.

Keyboard

A keyboard can be considered a primitive device known to all of us today. A keyboard
uses an arrangement of keys/buttons that serves as a mechanical input device for a
computer. Each key on a keyboard corresponds to a single written symbol or character.

This is the oldest and one of the most effective interactive devices between man and
machine. It has inspired the development of many more interactive devices and has seen
advancements of its own, such as soft-screen keyboards for computers and mobile phones.

Response Time

Response time is the time taken by a device to respond to a request. The request can be
anything from a database query to loading a web page. The response time is the sum of the
service time and wait time. Transmission time becomes a part of the response time when
the response has to travel over a network.

In modern HCI devices, there are several applications installed, and most of them run
simultaneously or on demand. This lengthens the response time, and the increase comes
mainly from increased wait time, as requests queue up behind one another while earlier
requests are still running.
It is therefore important that a device's response time be fast, which is why advanced
processors are used in modern devices.
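The relationship described above can be written as a small calculation; the timing figures below are illustrative only:

```python
# Response time = service time + wait time, plus transmission time
# when the response must travel over a network.

def response_time(service_ms, wait_ms, transmission_ms=0.0):
    """Total response time in milliseconds."""
    return service_ms + wait_ms + transmission_ms

# A local request needing 20 ms of service, queued behind three
# earlier 20 ms requests (so 60 ms of wait time).
local = response_time(service_ms=20, wait_ms=3 * 20)
# The same request over a network adds 15 ms of transmission time.
remote = response_time(service_ms=20, wait_ms=3 * 20, transmission_ms=15)
print(local, remote)  # 80 95
```

The example makes the text's point concrete: of the 80 ms seen locally, only 20 ms is actual service; the rest is queueing, which is why adding applications slows a device even when each request is individually cheap.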

HCI Design

HCI design is considered a problem-solving process with components such as planned
usage, target area, resources, cost, and viability. It weighs product requirements against
one another to balance trade-offs.

The following points are the four basic activities of interaction design −

 Identifying requirements
 Building alternative designs
 Developing interactive versions of the designs
 Evaluating designs

The three principles of the user-centered approach are −

 Early focus on users and tasks
 Empirical measurement
 Iterative design

Design Methodologies

Various methodologies outlining techniques for human–computer interaction have emerged since the field's inception. The following are a few design methodologies −

 Activity Theory − This is an HCI method that describes the framework where the
human-computer interactions take place. Activity theory provides reasoning,
analytical tools and interaction designs.
 User-Centered Design − It provides users the center-stage in designing where
they get the opportunity to work with designers and technical practitioners.
 Principles of User Interface Design − Tolerance, simplicity, visibility,
affordance, consistency, structure and feedback are the seven principles used in
interface designing.
 Value Sensitive Design − This method is used for developing technology and
includes three types of studies − conceptual, empirical and technical.
o Conceptual investigations work towards understanding the values of the
stakeholders who use the technology.
o Empirical investigations are qualitative or quantitative design research
studies that inform the designer’s understanding of the users’ values.
o Technical investigations examine the use of technologies and designs in
support of the conceptual and empirical investigations.

Participatory Design

The participatory design process involves all stakeholders in the design process, so that the end result meets the needs they desire. This approach is used in various areas such as software design, architecture, landscape architecture, product design, sustainability, graphic design, planning, urban design, and even medicine.

Participatory design is not a style but a focus on the processes and procedures of designing. It is seen as a way of sharing design responsibility and origination, rather than leaving them solely with designers.

Task Analysis

Task Analysis plays an important part in User Requirements Analysis.

Task analysis is the procedure of studying users and their abstract frameworks, the patterns used in workflows, and the chronological order of interaction with the GUI. It analyzes the ways in which users partition tasks and sequence them.

What is a TASK?

A task is a human action that contributes to a useful objective directed at the system. Task analysis describes the performance of users, not of computers.

Hierarchical Task Analysis

Hierarchical Task Analysis is the procedure of decomposing tasks into subtasks that can be analyzed in their logical sequence of execution. This helps in achieving the goal in the best possible way.

"A hierarchy is an organization of elements that, according to prerequisite relationships, describes the path of experiences a learner must take to achieve any single behavior that appears higher in the hierarchy" (Seels & Glasgow, 1990, p. 94).
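A task hierarchy of this kind can be sketched as a nested structure. The "make a phone call" task and its numbering scheme below are a hypothetical example, not taken from the text, but they follow the standard HTA convention of numbered subtasks in execution order.

```python
# A minimal sketch of a Hierarchical Task Analysis, assuming a hypothetical
# "make a phone call" task. Each task is a (name, subtasks) tuple; indentation
# in the printout reflects the level of decomposition.

def hta(task, depth=0):
    """Return the task hierarchy as indented lines, in execution order."""
    name, subtasks = task
    lines = ["  " * depth + name]
    for sub in subtasks:
        lines.extend(hta(sub, depth + 1))
    return lines

make_call = ("0. Make a phone call", [
    ("1. Pick up handset", []),
    ("2. Dial number", [
        ("2.1 Recall number", []),
        ("2.2 Press digits in order", []),
    ]),
    ("3. Talk", []),
    ("4. Hang up", []),
])

print("\n".join(hta(make_call)))
```

Each level of the printout corresponds to one level of decomposition, which is exactly the "disintegration into subtasks" the section describes.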

Techniques for Analysis

 Task decomposition − Splitting tasks into sub-tasks arranged in a sequence.


 Knowledge-based techniques − Any instructions that users need to know.

The ‘user’ is always the starting point for a task.

 Ethnography − Observation of users’ behavior in the use context.


 Protocol analysis − Observation and documentation of the user's actions. This is achieved by examining the user's thinking: the user is asked to think aloud so that the user's mental logic can be understood.

Engineering Task Models

Unlike Hierarchical Task Analysis, Engineering Task Models can be specified formally, which makes them more useful.

Characteristics of Engineering Task Models

 Engineering task models have flexible notations, which describe the possible
activities clearly.
 They have systematic approaches to support the specification, analysis, and use
of task models in the design.
 They support the reuse of ready-made design solutions to problems that recur
across applications.
 Finally, they make automatic tools available to support the different phases of
the design cycle.

ConcurTaskTree (CTT)

CTT is an engineering methodology used for modeling tasks; it consists of tasks and
operators. Operators in CTT are used to express the temporal relationships between tasks.
Following are the key features of a CTT −

 Focus on actions that users wish to accomplish.
 Hierarchical structure.
 Graphical syntax.
 Rich set of temporal operators.
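The features above can be sketched as a small tree data structure. The class, the simplified operator strings, and the "withdraw cash" example below are all hypothetical illustrations; a real CTT uses a richer set of temporal operators (e.g. ">>" for enabling, "|||" for interleaving, "[]" for choice) and a dedicated graphical notation.

```python
# A minimal sketch of a ConcurTaskTree-style task model, assuming simplified
# operator names. The operator joins a node's children; the hierarchy mirrors
# CTT's hierarchical structure.

class Task:
    def __init__(self, name, operator=None, children=None):
        self.name = name              # action the user wishes to accomplish
        self.operator = operator      # temporal operator joining the children
        self.children = children or []

    def describe(self, depth=0):
        """Return the tree as indented lines, operators shown in brackets."""
        op = f" [{self.operator}]" if self.operator else ""
        lines = ["  " * depth + self.name + op]
        for child in self.children:
            lines.extend(child.describe(depth + 1))
        return lines

# "Withdraw cash" decomposed with enabling (">>") between sequential steps.
withdraw = Task("Withdraw cash", ">>", [
    Task("Insert card"),
    Task("Enter PIN"),
    Task("Select amount"),
    Task("Take cash and card"),
])
print("\n".join(withdraw.describe()))
```

The enabling operator states that each subtask becomes possible only after the previous one finishes, which is the kind of temporal relationship CTT operators are designed to capture.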

Dialog Design

A dialog is the construction of interaction between two or more beings or systems. In HCI,
a dialog is studied at three levels −

 Lexical − The shape of icons, the actual keys pressed, etc., are dealt with at this level.
 Syntactic − The order of inputs and outputs in an interaction is described at this
level.
 Semantic − At this level, the effect of the dialog on the internal application/data is
taken care of.

Dialog Representation

To represent dialogs, we need formal techniques that serve two purposes −

 They help in understanding the proposed design in a better way.
 They help in analyzing dialogs to identify usability issues. For example, questions
such as “does the design actually support undo?” can be answered.

Introduction to Formalism

There are many formal techniques that we can use to represent dialogs. In this chapter,
we will discuss three of them −

 State transition networks (STN)
 StateCharts
 Classical Petri nets

State Transition Network (STN)

STNs are the most intuitive of these techniques: they capture the idea that a dialog fundamentally denotes a progression from one state of the system to the next.

The syntax of an STN consists of the following two entities −

 Circles − A circle refers to a state of the system, which is labeled by giving a
name to the state.
 Arcs − The circles are connected with arcs that refer to the action/event causing
the transition from the state where the arc starts to the state where it ends.

STN Diagram
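An STN of this kind can be sketched as a dictionary mapping (state, event) pairs to next states. The drawing-tool dialog below is a hypothetical example; the states are the circles and the dictionary entries are the arcs.

```python
# A minimal sketch of a state transition network as a Python dict, assuming a
# hypothetical drawing-tool dialog. States are the circles of the STN; each
# (state, event) key is an arc leading to the next state.

STN = {
    ("menu", "select circle"): "circle-1",   # circle tool chosen
    ("circle-1", "click"): "circle-2",       # centre point placed
    ("circle-2", "click"): "finish",         # radius chosen, circle drawn
    ("menu", "select line"): "line-1",
    ("line-1", "click"): "line-2",
    ("line-2", "click"): "finish",
}

def run_dialog(events, start="menu"):
    """Follow the arcs for a sequence of events; KeyError means an illegal event."""
    state = start
    for event in events:
        state = STN[(state, event)]
    return state

print(run_dialog(["select circle", "click", "click"]))  # -> finish
```

Tracing event sequences through the dictionary is a simple way to answer the analysis questions mentioned earlier, such as whether a given action is reachable from a given state.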

StateCharts

StateCharts represent complex reactive systems. They extend Finite State Machines (FSM), handle concurrency, and add memory to the FSM; they also simplify the representation of complex systems. StateCharts have the following kinds of states −

 Active state − The present state of the underlying FSM.
 Basic states − These are individual states and are not composed of other states.
 Super states − These states are composed of other states.

Illustration

For each basic state b, the super state containing b is called its ancestor state. A super state is called an OR super state if exactly one of its sub-states is active whenever it is active.

Let us look at the StateChart construction of a machine that dispenses bottles when coins are inserted.

The StateChart describes the entire procedure of the bottle-dispensing machine. On pressing the button after inserting a coin, the machine toggles between bottle-filling and dispensing modes. When a requested bottle is available, it dispenses the bottle. In the background, another procedure runs in which any stuck bottle is cleared. The ‘H’ (history) symbol indicates that the interrupted state is remembered for future access.

Petri Nets

A Petri net is a simple model of dynamic behavior built from four elements − places, transitions, arcs, and tokens. Petri nets provide a graphical notation that is easy to understand.

 Place − This element is used to symbolize passive elements of the reactive system.
A place is represented by a circle.
 Transition − This element is used to symbolize active elements of the reactive
system. Transitions are represented by squares/rectangles.
 Arc − This element is used to represent causal relations. Arcs are represented by
arrows.
 Token − These elements are subject to change and capture the dynamic state of
the system. Tokens are represented by small filled circles.
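The firing rule that animates a Petri net can be sketched in a few lines: a transition is enabled when every input place holds a token, and firing it moves tokens from inputs to outputs. The bottle-machine place names below are hypothetical, echoing the earlier StateChart example.

```python
# A minimal sketch of Petri-net firing, assuming a hypothetical net where each
# arc has weight 1: a transition consumes one token from each input place and
# produces one token in each output place.

def enabled(marking, transition):
    """A transition is enabled when every input place holds at least one token."""
    inputs, _ = transition
    return all(marking[p] >= 1 for p in inputs)

def fire(marking, transition):
    """Return the new marking after firing, or None if the transition is not enabled."""
    if not enabled(marking, transition):
        return None
    inputs, outputs = transition
    marking = dict(marking)          # leave the original marking untouched
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1
    return marking

# "coin inserted" + "bottle in stock" -> "bottle dispensed"
dispense = (["coin", "stock"], ["dispensed"])
m0 = {"coin": 1, "stock": 2, "dispensed": 0}
m1 = fire(m0, dispense)
print(m1)  # {'coin': 0, 'stock': 1, 'dispensed': 1}
```

Firing again from `m1` returns `None`, since the "coin" place is empty; tokens are the only element that changes between markings, which is why the text singles them out.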
Direct Manipulation Programming

Direct manipulation has been acclaimed as a good form of interface design and is well received by users. Such systems use many sources to get the input and finally convert it into the output desired by the user, using inbuilt tools and programs.

“Directness” has been considered a phenomenon that contributes majorly to manipulation programming. It has the following two aspects.

 Distance
 Direct Engagement

Distance

Distance describes the gulf between a user’s goals and the level of description provided by the system with which the user deals. These gulfs are referred to as the Gulf of Execution and the Gulf of Evaluation.

The Gulf of Execution

The Gulf of Execution is the gap between a user’s goal and the actions the device provides to accomplish that goal. One of the principal objectives of usability is to diminish this gap by removing barriers and minimizing the distractions that would interrupt the flow of the intended task.

The Gulf of Evaluation

The Gulf of Evaluation is the gap between the system’s representation of its state and the user’s expectations of it. As Donald Norman puts it, “The gulf is small when the system provides information about its state in a form that is easy to get, is easy to interpret, and matches the way the person thinks of the system.”

Direct Engagement

Direct engagement occurs when the design lets the user directly control the objects presented by the system, which makes the system less difficult to use.

Scrutiny of the execution and evaluation processes illuminates the effort involved in using a system, and it also suggests ways to minimize the mental effort required to use a system.

Problems with Direct Manipulation

 Even though the immediacy of response and the conversion of goals into actions
makes some tasks easy, not all tasks are best done this way. For example, a
repetitive operation is probably best done via a script rather than through
direct manipulation.
 Direct manipulation interfaces find it hard to manage variables or to represent
discrete elements from a class of elements.
 Direct manipulation interfaces may be less accurate, as they depend on the
user rather than on the system.
 An important problem with direct manipulation interfaces is that they directly
support only the techniques the user already thinks of.
Item Presentation Sequence

In HCI, the presentation sequence can be planned according to task or application requirements. The natural sequence of items in a menu should be respected. The main factors in presentation sequence are −

 Time
 Numeric ordering
 Physical properties

When there is no task-related ordering, the designer must select one of the following options −

 Alphabetic sequence of terms
 Grouping of related items
 Most frequently used items first
 Most important items first

Menu Layout

 Menus should be organized using task semantics.
 Broad-shallow menu trees should be preferred to narrow-deep ones.
 Positions should be shown by graphics, numbers or titles.
 Subtrees should use items as titles.
 Items should be grouped meaningfully.
 Items should be sequenced meaningfully.
 Brief items should be used.
 Consistent grammar, layout and technology should be used.
 Type ahead, jump ahead, or other shortcuts should be allowed.
 Jumps to previous and main menu should be allowed.
 Online help should be considered.

Guidelines for consistency should be defined for the following components −

 Titles
 Item placement
 Instructions
 Error messages
 Status reports

Form Fill-in Dialog Boxes

Form fill-in dialog boxes are appropriate for the entry of multiple data fields −

 Complete information should be visible to the user.
 The display should resemble familiar paper forms.
 Instructions should be given for the different types of entries.

Users must be familiar with −

 Keyboards
 Use of TAB key or mouse to move the cursor
 Error correction methods
 Field-label meanings
 Permissible field contents
 Use of the ENTER and/or RETURN key.

Form Fill-in Design Guidelines −

 The title should be meaningful.
 Instructions should be comprehensible.
 Fields should be logically grouped and sequenced.
 The form should be visually appealing.
 Familiar field labels should be provided.
 Consistent terminology and abbreviations should be used.
 Convenient cursor movement should be available.
 Error correction for individual characters and for entire fields should be
available.
 Errors should be prevented where possible.
 Error messages for unacceptable values should be shown.
 Optional fields should be clearly marked.
 Explanatory messages for fields should be available.
 A completion signal should be shown.
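Several of the guidelines above can be sketched as a small validation routine. The registration form, field names, and rules below are hypothetical; the sketch illustrates error messages for unacceptable values, clearly marked optional fields, and a completion signal (an empty error list).

```python
# A minimal sketch of form fill-in validation, assuming a hypothetical
# registration form. Each field has an "optional" flag and a check for
# permissible contents, per the guidelines above.

FIELDS = {
    "name":  {"optional": False, "check": lambda v: len(v) > 0},
    "age":   {"optional": False, "check": lambda v: v.isdigit() and 0 < int(v) < 120},
    "phone": {"optional": True,  "check": lambda v: v == "" or v.isdigit()},
}

def validate(form):
    """Return a list of error messages; an empty list signals completion."""
    errors = []
    for label, rule in FIELDS.items():
        value = form.get(label, "")
        if value == "" and not rule["optional"]:
            errors.append(f"{label}: this field is required")
        elif not rule["check"](value):
            errors.append(f"{label}: unacceptable value '{value}'")
    return errors

print(validate({"name": "Ada", "age": "200"}))
# -> ["age: unacceptable value '200'"]
```

Per-field messages support error correction for individual fields, while the required/optional flags make the optional fields explicit in one place.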
