
Esoft computer studies

User interface
designing

Preface

This book covers the User Interface Design subject in the Edexcel HND in Computing. In the software industry, the role of the UI designer is becoming more and more important, because there is much more to be gained by concentrating on the viewpoint of the user than by simply developing software that will eventually be accepted or rejected.


Contents

1. Understanding User interface designing
   1.1. History of User interface designing
   1.2. Common facts
   1.3. Principles of UI designing
   1.4. Things to consider
   1.5. Changing working environment of a computer
        1.5.1. Screens
        1.5.2. Keyboards
        1.5.3. Pointing devices
        1.5.4. Speech recognition
        1.5.5. Information storing
   1.6. Look and feel development
        1.6.1. Graphical interfaces
        1.6.2. Screen design for data entry
        1.6.3. Intelligent human-computer interfaces
        1.6.4. Virtual persons support
   1.7. User experience issues
        1.7.1. Range of users
             1.7.1.1. Expert
             1.7.1.2. Occasional
             1.7.1.3. Novice
        1.7.2. Special needs of users
             1.7.2.1. Ergonomics
   1.8. Interaction systems development
        1.8.1. Event driven systems
        1.8.2. Use of multimedia in UI
        1.8.3. Modeling techniques
   1.9. Application of new features in user interfaces
        1.9.1. Selection of UI
             1.9.1.1. Touch screen
                  1.9.1.1.1. Capacitive
                  1.9.1.1.2. Resistive
             1.9.1.2. Voice activation
                  1.9.1.2.1. Voice recognition
                  1.9.1.2.2. Voice command

2. Understanding issues related to selection of user interface
   2.1.1. Identifying the characteristics of a user
        2.1.1.1. Human memory
             2.1.1.1.1. Knowledge representation of humans
             2.1.1.1.2. Perception
             2.1.1.1.3. Attention of user
             2.1.1.1.4. Reasoning
             2.1.1.1.5. Communication
             2.1.1.1.6. Skills of user and their acquisition
             2.1.1.1.7. Users' cognitive model
             2.1.1.1.8. Use of metaphors and the impact on HCI
   2.1.2. Health conditions of the user
        2.1.2.1. Ergonomics and surrounding environment
        2.1.2.2. Specific concerns of a user
             2.1.2.2.1. Repetitive strain injury
   2.1.3. Wider considerations of HCI
        2.1.3.1. Costs of implementation
        2.1.3.2. User level training
        2.1.3.3. System requirements costs
             2.1.3.3.1. Hardware
             2.1.3.3.2. Software
        2.1.3.4. Communications
        2.1.3.5. Information storage
        2.1.3.6. Health and safety

3. Developing a Human computer interface
   3.1. Modeling of the interface
        3.1.1. Mapping the system functions to conceptual design
        3.1.2. Grouping of tasks into logical sets
   3.2. Analyzing of the factors
        3.2.1. Task based analyzing
        3.2.2. User centered methods
             3.2.2.1. Storyboarding
             3.2.2.2. User need analysis
   3.3. Evaluating a User interface
        3.3.1. Functionality characteristics of UI
             3.3.1.1. Use of quality measuring metrics
                  3.3.1.1.1. Fitts's law
                  3.3.1.1.2. Keystroke-level method
                  3.3.1.1.3. Test documentation


1. Understanding User interface designing

"The way that you accomplish tasks with a product, what you do and how it responds, that's the interface." - Jef Raskin

User interface design isn't just about buttons and menus; it's about the interaction between the user and the application or device, and in many cases, it's about the interaction between multiple users through that device. This means that user interface design isn't about how a product looks, but rather about how it works. It's not just about arranging buttons and picking colors, but rather about choosing the right tools for the job. Does a particular interface even need buttons? If so, what do those buttons need to do? What do I need to provide users with so that they can figure out how my application works and accomplish what they want to do with ease?

Working on the user interface early in the product development life cycle is vital because, as Jef Raskin succinctly puts it, "As far as the customer is concerned, the interface is the product." The user sees and interacts with the user interface, not the underlying backend architecture of your application. Getting this element right will thus have a big impact on how much your customers enjoy using your product and how easy your product is to use. Start by designing the interface first and then coding the backend engine that powers it, rather than building the backend first and then putting an interface wrapper over top.

1.1. History of User interface designing

- Earlier systems used a character user interface (CUI).
- Later, some programs introduced a few colors.
- Then came the graphical user interface (GUI).
- Interfaces became more focused on graphics and animations.
- The evolution of UI development accelerated.
- Conventional desktop applications moved to web applications.
- Commercial web applications demand even more attention to UI.

1.2. Common facts

Due to designers' inexperience, several common misconceptions about interface design have emerged:

Myth: Interface design is about navigation.
Reality: A user of a program is trying to do something, not simply to go somewhere.

Myth: If the user always knows what to do next in a program, the program is usable, and therefore the program is a success.
Reality: Certainly it is true that if the program is unusable, it is not a success, at least in the immediate sense of being useful (it may make contributions to future works, even as a failure).

Myth: Most of the work involved in interface design requires expertise in video and Flash.
Reality: While there can be many useful roles for video and Flash animations, both should generally be used only in service of the goals of the software or program.

Myth: Graphical art skills are necessary to be an interface designer.
Reality: While graphics skills are needed to create a nice-looking interface, the graphic artist's role in a software project is somewhat similar to that of a construction worker implementing the design of an architect.

1.3. Principles of UI designing

"Make it simple, but no simpler."
- Albert Einstein

"Before you buy software, make sure it believes in the same things you do. Whether you realize it or not, software comes with a set of beliefs built in. Before you choose software, make sure it shares yours."
- PeopleSoft advertisement (1996)

"The golden rule of design: Don't do to others what others have done to you. Remember the things you don't like in software interfaces you use. Then make sure you don't do the same things to users of interfaces you design and develop."
- Tracy Leonard (1996)

Why should you need to follow user interface principles? In the past, computer software was designed with little regard for the user, so the user had to somehow adapt to the system. This approach to system design is not at all appropriate today: the system must adapt to the user. This is why design principles are so important.

Computer users should have successful experiences that allow them to build confidence in themselves and establish self-assurance about how they work with computers. Their interactions with computer software should follow the principle of "success begets success." Each positive experience with a software program allows users to explore outside their area of familiarity and encourages them to expand their knowledge of the interface. Well-designed software interfaces, like good educators and instructional materials, should build a teacher-student relationship that guides users to learn and enjoy what they are doing. Good interfaces can even challenge users to explore beyond their normal boundaries and stretch their understanding of the user interface and the computer. When you see this happen, it is a beautiful experience.

You should have an understanding and awareness of the user's mental model and the physical, physiological, and psychological abilities of users. This information (discussed in Chapters 3 and 4) has been distilled into general principles of user interface design, which are agreed upon by most experts in the field. User interface design principles address each of the key components of the Look and Feel iceberg: presentation, interaction, and object relationships.

(Chapter 5: The Golden Rules of User Interface Design)

Key Idea! The trick to using interface design principles is knowing which ones are more important when making design tradeoffs. For certain products and specific design situations, these design principles may be in conflict with each other or at odds with product design goals and objectives. Principles are not meant to be followed blindly; rather, they are meant as guiding lights for sensible interface design.

The three areas of user interface design principles are:
1. Place users in control of the interface.
2. Reduce users' memory load.
3. Make the user interface consistent.
1.4. Things to consider

1. Clarity. The interface avoids ambiguity by making everything clear through language, flow, hierarchy, and metaphors for visual elements. Clear interfaces don't need manuals. They also ensure users make fewer mistakes while using them.
2. Concision. It's easy to make the interface clear by over-clarifying and labeling everything, but this leads to interface bloat, where there is just too much stuff on the screen at the same time. If too many things are on the screen, finding what you're looking for is difficult, and so the interface becomes tedious to use. The real challenge in making a great interface is to make it concise and clear at the same time.
3. Familiarity. Something is familiar when you recall a previous encounter you've had with it. Even if someone uses an interface for the first time, certain elements can still be familiar. You can use real-life metaphors to communicate meaning; for example, folder-style tabs are often used for navigation on websites and in applications. People recognize them as navigation items because the metaphor of the folder is familiar to them.
4. Responsiveness. This means a couple of things. First, responsiveness means speed: a good interface should not feel sluggish. Secondly, the interface should provide good feedback to the user about what's happening and whether the user's input is being successfully processed.
5. Consistency. Keeping your interface consistent across your application is important because it allows users to recognize usage patterns. Once your users learn how certain parts of the interface work, they can apply this knowledge to new areas and features, provided that the user interface there is consistent with what they already know.
6. Aesthetics. While you don't need to make an interface attractive for it to do its job, making something look good will make the time your users spend using your application more enjoyable; and happier users can only be a good thing.
7. Efficiency. Time is money, and a great interface should make the user more productive through shortcuts and good design. After all, this is one of the core benefits of technology: it allows us to perform tasks with less time and effort by doing most of the work for us.
8. Forgiveness. Everyone makes mistakes, and how your application handles those mistakes will be a test of its overall quality. Is it easy to undo actions? Is it easy to recover deleted files? A good interface should not punish users for their mistakes but should instead provide the means to remedy them.
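The forgiveness principle can be made concrete with an undo mechanism. The sketch below (Python, with illustrative names such as `Action` and `Editor`, not taken from any real toolkit) records every action together with its reverse on a history stack, so a mistake is never fatal:

```python
# A minimal "forgiving" interface layer: every action knows how to
# reverse itself, and completed actions are kept on a history stack.

class Action:
    """An operation paired with the operation that undoes it."""
    def __init__(self, do, undo):
        self.do = do
        self.undo = undo

class Editor:
    def __init__(self):
        self.text = ""
        self._history = []          # stack of completed actions

    def insert(self, s):
        action = Action(
            do=lambda: setattr(self, "text", self.text + s),
            undo=lambda n=len(s): setattr(self, "text", self.text[:-n]),
        )
        action.do()
        self._history.append(action)

    def undo(self):
        # Forgiving: undoing with an empty history is a harmless no-op.
        if self._history:
            self._history.pop().undo()

editor = Editor()
editor.insert("Hello")
editor.insert(", world")
editor.undo()                       # removes ", world"
print(editor.text)                  # Hello
```

A production undo system would also group related actions and bound the history size, but the stack of reversible actions is the core idea.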

1.5. Changing working environment of a computer

1.5.1. Screens

A screen is an output device for the presentation of information in visual or tactile form (the latter used, for example, in tactile electronic displays for blind people). When the input information is supplied as an electrical signal, the display is called an electronic display. Screens are used by many applications in order to interact with humans.

1.5.2. Keyboards

Don't assume that since users have a mouse attached to their computer, they will use it all of the time. Although you may design the interface to be optimized for mouse users, provide a way to do most actions and tasks using the keyboard. One of the key CUA design principles is that users must be able to do any action or task using either the keyboard or the mouse.

Key Idea! Keyboard access means users can perform an action using the keyboard rather than the mouse. It does not mean that it will be easier for users to use the keyboard, just that they don't have to use the mouse if they don't want to, or can't. Toolbars, for example, are fast-path buttons for mouse users. However, users can't get to the toolbar from the keyboard; they must be able to use the menu bar drop-downs to navigate to the action they want to perform.

Users have very different habits when using keyboards and mice, and they often switch between them during any one task or while using one program. With the push toward mouse-driven, direct-manipulation interfaces, not all of the major design guides follow this philosophy of implementing both a keyboard and mouse interface. There is no total consensus on this principle. Many Macintosh products do not provide complete keyboard access.

However, designers may want to follow this principle for the sake of users as they migrate to graphical interfaces and for consistency with other programs that may only have keyboard input. Users with special needs usually require an interface with keyboard access. Some new interface techniques also may need keyboard support to ensure user productivity. Also, as a user who has had a laptop mouse break or become disabled, lost the mouse pointer on the screen, or been on an airplane with no room to use a mouse, I appreciate being able to access all important actions from the keyboard. Special-purpose software may choose not to follow this principle, but I recommend that all general-purpose software programs offer keyboard access unless there are compelling reasons to do otherwise.
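The principle that every action must be reachable from both the mouse and the keyboard can be sketched as a single action registry that both kinds of input dispatch into. The names below (`ActionRegistry`, `invoke`, `key_pressed`, the "save" action) are illustrative, not from any real GUI toolkit:

```python
# One registry of named actions; toolbar buttons call invoke() directly,
# while the keyboard event loop routes key chords through key_pressed().
# Both paths end in the same handler, so no action is mouse-only.

class ActionRegistry:
    def __init__(self):
        self._actions = {}      # action name -> callable
        self._shortcuts = {}    # key chord  -> action name

    def register(self, name, handler, shortcut=None):
        self._actions[name] = handler
        if shortcut:
            self._shortcuts[shortcut] = name

    def invoke(self, name):
        """Called by mouse-driven widgets such as toolbar buttons."""
        self._actions[name]()

    def key_pressed(self, chord):
        """Called by the keyboard event loop; unbound chords are ignored."""
        name = self._shortcuts.get(chord)
        if name:
            self._actions[name]()

saved = []
actions = ActionRegistry()
actions.register("save", lambda: saved.append("saved"), shortcut="Ctrl+S")

actions.invoke("save")          # toolbar button path (mouse)
actions.key_pressed("Ctrl+S")   # keyboard path: same handler
print(saved)                    # ['saved', 'saved']
```

Real toolkits (menu accelerators, mnemonics) implement the same idea: the shortcut table and the widgets are two front ends to one set of actions.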

1.5.3. Pointing devices

A pointing device is an external tool that is used to move objects around and also to select options from menus. The pointing device is an element of the graphical user interface: it manipulates on-screen objects to issue commands. Examples of pointing devices include the mouse, trackball, light pen, pen for a graphics tablet, joystick, touch screen, wand, head-mounted display, virtual reality glasses, and 3-D mouse.

The concept of the pointing device was developed in 1970 by Douglas Engelbart as another way to input information into the computer other than through the keyboard. This input device has become popular, and with the growth of the graphical user interface it has become one of the most necessary and important tools of the computer.

The mouse is included with almost every computer that is sold today. Besides becoming an important input tool, it has provided access to the computer for many individuals with disabilities who might not otherwise have the opportunity to use the computer.

The pointing device also lets you double-click on an icon to start a program application; and in the Windows operating system you can use the mouse to drag a file or document to the Recycle Bin to delete it.

1.5.4. Speech recognition

In computer science, speech recognition (SR) is the translation of spoken words into text. It is also known as "automatic speech recognition" (ASR), "computer speech recognition", "speech to text", or just "STT". Some SR systems use "training", where an individual speaker reads sections of text into the SR system. These systems analyze the person's specific voice and use it to fine-tune the recognition of that person's speech, resulting in more accurate transcription. Systems that do not use training are called "speaker independent" systems; systems that use training are called "speaker dependent" systems.

Speech recognition applications include voice user interfaces such as voice dialing (e.g., "Call home"), call routing (e.g., "I would like to make a collect call"), domotic appliance control, search (e.g., finding a podcast where particular words were spoken), simple data entry (e.g., entering a credit card number), preparation of structured documents (e.g., a radiology report), speech-to-text processing (e.g., word processors or emails), and aircraft control (usually termed Direct Voice Input).

The term voice recognition refers to finding the identity of "who" is speaking, rather than what they are saying. Recognizing the speaker can simplify the task of translating speech in systems that have been trained on a specific person's voice, or it can be used to authenticate or verify the identity of a speaker as part of a security process.

1.5.5. Information storing

Information storage and retrieval is the systematic process of collecting and cataloging data so that they can be located and displayed on request. Computers and data processing techniques have made possible the high-speed, selective retrieval of large amounts of information for government, commercial, and academic purposes. There are several basic types of information-storage-and-retrieval systems. Document-retrieval systems store entire documents, which are usually retrieved by title or by key words associated with the document. In some systems, the text of documents is stored as data. This permits full-text searching, enabling retrieval on the basis of any words in the document. In others, a digitized image of the document is stored, usually on a write-once optical disc. Database systems store the information as a series of discrete records that are, in turn, divided into discrete fields (e.g., name, address, and phone number); records can be searched and retrieved on the basis of the content of the fields (e.g., all people who have a particular telephone area code). The data are stored within the computer, either in main storage or auxiliary storage, for ready access. Reference-retrieval systems store references to documents rather than the documents themselves. Such systems, in response to a search request, provide the titles of relevant documents and frequently their physical locations. Such systems are efficient when large amounts of different types of printed data must be stored. They have proven extremely effective in libraries, where material is constantly changing.
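The keyword-based document retrieval described above can be sketched with an inverted index, the data structure that maps each word to the documents containing it. The documents and their contents below are made up for illustration:

```python
# A tiny document-retrieval sketch: build an inverted index
# (word -> set of document ids), then answer conjunctive keyword queries.

from collections import defaultdict

documents = {
    "doc1": "user interface design principles",
    "doc2": "database design and information storage",
    "doc3": "principles of information retrieval",
}

# Build the inverted index.
index = defaultdict(set)
for doc_id, text in documents.items():
    for word in text.lower().split():
        index[word].add(doc_id)

def search(*words):
    """Return ids of documents containing ALL of the given key words."""
    results = [index.get(w.lower(), set()) for w in words]
    return sorted(set.intersection(*results)) if results else []

print(search("design"))                  # ['doc1', 'doc2']
print(search("information", "storage"))  # ['doc2']
```

Full-text systems add stemming, ranking, and phrase queries on top, but the inverted index remains the underlying structure.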

1.6. Look and feel development
In software design, look and feel is a term used in respect of a graphical user
interface and comprises aspects of its design, including elements such as colors,
shapes, layout, and typefaces (the "look"), as well as the behavior of dynamic
elements such as buttons, boxes, and menus (the "feel"). The term can also refer to
aspects of an API, mostly to parts of an API which are not related to its functional
properties. The term is used in reference to both software and websites.
Look and feel also applies to other products. In documentation, for example, it refers to
the graphical layout (document size, color, font, etc.) and the writing style. In the
context of equipment, it refers to consistency in controls and displays across a
product line.
Look and feel in operating system user interfaces serves two general purposes.
First, it provides branding, helping to identify a set of products from one company.
Second, it increases ease of use, since users will become familiar with how one
product functions (looks, reads, etc.) and can translate their experience to other
products with the same look and feel.

1.6.1. Graphical interfaces

In computing, a graphical user interface (GUI, sometimes pronounced "gooey") is a type of user interface that allows users to interact with electronic devices using images rather than text commands. GUIs can be used in computers, hand-held devices such as MP3 players, portable media players or gaming devices, household appliances, and office equipment. A GUI represents the information and actions available to a user through graphical icons and visual indicators such as secondary notation, as opposed to text-based interfaces, typed command labels, or text navigation. The actions are usually performed through direct manipulation of the graphical elements.

The term GUI is restricted to the scope of two-dimensional display screens with display resolutions able to describe generic information, in the tradition of the computer science research at PARC (Palo Alto Research Center). The term GUI is rarely applied to other low-resolution types of interfaces that are non-generic, such as video games (where HUD, head-up display, is preferred), or to interfaces not restricted to flat screens, like volumetric displays.

1.6.2. Screen design for data entry

Paper forms are extremely flexible. They can be carried virtually anywhere and
completed using such simple technology as a pen or a pencil. Except for running out
of forms or ink, they are not subject to failure. However, the data recorded on a
paper form must subsequently be entered into a computer through a keyboard, a
scanner, or similar equipment. Because the data capture and data entry steps are
separated by time, the data might not be available in a timely fashion, and the
process of obtaining the feedback needed to correct errors is lengthy and complex.
Although laptop computers are quite portable, screens generally require the user to
stay near a source of electrical power and to avoid certain environments, and a
screen can fail. (Field personnel sometimes carry an appropriate set of paper forms
as a backup.) However, because a screen is directly linked to a computer, the data
can be utilized as soon as they are entered (thus enhancing timeliness) and verified
and corrected interactively (thus enhancing data accuracy).
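The interactive verify-and-correct loop that screens make possible can be sketched as per-field validation that runs the moment a value is entered, so the user gets feedback immediately rather than after the batch entry that paper forms require. The field rules below are illustrative, not a standard:

```python
# Immediate per-field validation: each rule is checked as soon as the
# user leaves the field, and a corrective hint is returned on failure.

import re

FIELD_RULES = {
    "name":  (re.compile(r"^[A-Za-z ,.'-]+$"),           "letters only"),
    "age":   (re.compile(r"^\d{1,3}$"),                  "a number from 0 to 999"),
    "email": (re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"), "something like user@host.com"),
}

def validate(field, value):
    """Return (ok, message); intended to run on every field-exit event."""
    pattern, hint = FIELD_RULES[field]
    if pattern.fullmatch(value):
        return True, "ok"
    return False, f"'{value}' is not valid for {field}: expected {hint}"

print(validate("age", "42"))      # (True, 'ok')
print(validate("email", "nope"))  # rejected with a hint
```

Contrast this with a paper form, where the same error would only surface days later during keying, after the respondent is long gone.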

1.6.3. Intelligent human-computer interfaces

The general goal of research in the human-computer intelligent interaction (HCII)


area is to improve the ways a human operator interacts with a computer by
studying not only input-output techniques, but also human factors involved in the
interchange. Within this research theme, programs range from artificial intelligence,
robotics, computer vision, cognitive science, human perception and performance to
virtual reality environment experiments carried out in collaboration with the
National Center for Supercomputing Applications.
The technology of HCII holds great promise. With the explosive growth of raw
computing power and accompanying technologies, such as the Information
Superhighway, comes the potential for revolutionary applications in all areas of life
and on every level, from simple word processing to intelligent automobiles, from
high-tech entertainment to three-dimensional scientific visualization, and from
automated tutors to large-scale manufacturing systems. Exploiting this tremendous
potential can bring profound benefits in all areas of human concern. To take full
advantage of this potential, it is imperative to invent and develop new paradigms
for HCII that are more efficient and effective than existing methods.
In every use of computers to solve human problems, a central and crucial factor is
the flow of information and control between human and machine. However, present
human-computer interaction technology constitutes a significant bottleneck for
realizing the potential of computer technology. The scientific and technological
challenges that must be addressed to eliminate this bottleneck and increase the
efficiency of human-machine interaction constitute a research area to which
Beckman Institute researchers can make fundamental contributions.

1.6.4. Virtual persons support

Virtual People Support, a team of able online marketing experts, provides a unique approach to the Web development process for businesses, specifically for small and medium enterprises (SMEs). Having a deep grasp and understanding of what the Internet has evolved into over the years, it has the ability to maximize the potential of any website to serve its real purpose as a powerful marketing tool in boosting the growth of any business or enterprise, whether it is big, medium, or small.

Virtual People Support is a great partner in establishing any venture's positioning in its chosen field or industry because it does not simply gather bits and pieces of information about the firm's profile, but rather makes a site more interactive and available 24/7 online, with the updates needed for the new prospects and present clientele a business has and will have in the coming days. The team behind Virtual People Support also makes sure that during the Web development process the promotion of the site is steady and will always land on top of all online search engines.

That's what makes Virtual People Support different. It always puts value on service and goes the extra mile, both during the process of Web development and after its completion. It is committed to bringing the site as a marketing tool to the next level so that it does not stagnate as a brochure-type website. It creates means or venues to be more visible. Visibility even during the Web development stages is already a major concern for Virtual People Support, so the businesses they serve will not just be satisfied but will also highly recommend them for exceeding expectations.

1.7. User experience issues

You must understand the user to be able to put a happy face on your application. You should understand the user's job, how the software fits in with that job, and how the user goes about getting the job done. You need to approach the design of software from the user's viewpoint, not from an abstract requirements document. Specifically, you should understand what the user will be doing with the application. If you can think like a user, you can create a much better user interface.

Here are some basic principles to remember about users:

1. Your software is like a hammer: the user doesn't really care how well crafted it is, the user just wants nails put in the wall. Users just want to do their job (or play their game). They don't care about you or your software. Your software is just an expedient tool to take the user where the user wants to go.
2. Given a selection of hammers to buy at the hardware store, the user will select the one which will be most fun to use. Of course, this varies by user: some will want the plastic handle, some the wood, some the green one, etc. When evaluating your software, users are often swayed by looks, not function. Thus, steps taken to make the product look good (nice icons, pictures, a good color scheme, aligned fields, etc.) will often favorably enhance evaluations of your software.
3. It had better drive nails. The user will not really know if your software is adequate to the job until the user has used the software to do actual work. From an interface perspective, the software should not look like it can do more than it can.
4. Some users will try to use a hammer to drive a screw. If your software is good, some user somewhere will try to use the software for some purpose for which you never intended it to be used. Obviously, you cannot design a user interface to deal with uses you cannot foresee. There is no single rigid model of the right way to use the software, so build in flexibility.
5. Users won't read an instruction manual for a hammer. They won't read one for your software either, unless they really have to. Users find reading instruction manuals almost as pleasurable as dental work.
6. A user reading the instruction manual for a hammer is in serious trouble. When you create your help system (and printed manual), remember that the user will only resort to those materials if the user is in trouble. The user will want the problem solved as fast and as easily as possible.
7. Hammers don't complain. You should try to eliminate error messages, and any error messages your program does need should have the right attitude.

1.7.1. Range of users

1.7.1.1. Expert
An expert user can be described as a domain expert who uses computers and computer programs to accomplish tasks within that domain: a user who knows what needs to be done and is determined to use the capabilities of the computer to accomplish project or process targets. This kind of user is not usually concerned so much with how something is accomplished within the computer, as long as it is consistent and agreeable with what would be accomplished manually.

1.7.1.2. Occasional
This individual typically uses the computer for less than 3 hours per day. This user tends to have an extensive variety of different tasks (computer and other), and they are unlikely to regularly spend extended amounts of time sitting and working at the computer.

1.7.1.3. Novice
Novice users are new to the system and will need a simple and basic interface. Since they are new to the system, they will expect safer, more guided ways of doing things (for example, they will choose templates or wizards for their first steps in the system). A novice user's interface should provide simple ways to achieve important, frequently performed tasks. When designing for novice users, we should keep the main use cases in mind and not overshadow them with unnecessary features.

1.7.2. Special needs of users

1.7.2.1. Ergonomics
Human factors and ergonomics (HF&E) is a multidisciplinary field incorporating contributions from psychology, engineering, industrial design, graphic design, statistics, operations research, and anthropometry. In essence it is the study of designing equipment and devices that fit the human body and its cognitive abilities. The two terms "human factors" and "ergonomics" are essentially synonymous.

The International Ergonomics Association defines ergonomics or human factors as follows:

Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

HF&E is employed to fulfill the goals of health, safety, and productivity. It is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines and equipment. Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability.

Human factors and ergonomics is concerned with the fit between the user, the equipment, and their environments. It takes account of the user's capabilities and limitations in seeking to ensure that tasks, functions, information, and the environment suit each user.

To assess the fit between a person and the technology used, human factors specialists or ergonomists consider the job (activity) being done and the demands on the user; the equipment used (its size, shape, and how appropriate it is for the task); and the information used (how it is presented, accessed, and changed). Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology, and psychology.

1.8.
1.8.1.

Interaction systems development


Event driven systems

To achieve operational responsiveness, you need technology that can help you
identify and respond more quickly to opportunities, threats, and changing business
conditions. Event driven systemsapplications and infrastructure built with event
driven architectureenable you to achieve responsiveness quickly and affordably.
Event driven systems are designed to work with streaming event data as it enters
your systemsmonitoring it and analyzing it even before it is written to a database
giving you faster access to insight and enabling quicker response to fast-moving
data. Event driven systems are, by design, more responsive to unpredictable
situations and can help you navigate shifting business landscapes with more agility.
And event driven systems can easily facilitate service-oriented architecture (SOA),
providing greater flexibility and efficiency throughout your organization.
When leading companies around the world want to design and deploy superior
event driven systems, they turn to solutions from Progress Software.
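The idea above can be made concrete with a small sketch. This is an illustrative event bus, not code from any particular product: the bus name, the "price_change" event type, and the handler API are all invented for the example. The point is that each event is examined the moment it arrives, before anything is persisted.

```python
# Minimal event-driven sketch: handlers subscribe to event types and
# react to each event as it streams in, rather than polling a database.
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self._handlers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Every subscribed handler sees the event immediately on arrival.
        for handler in self._handlers[event_type]:
            handler(payload)

bus = EventBus()
alerts = []
# React only to events that matter: large price moves trigger an alert.
bus.subscribe("price_change",
              lambda e: alerts.append(e) if e["delta"] > 5 else None)

bus.publish("price_change", {"symbol": "XYZ", "delta": 7})
bus.publish("price_change", {"symbol": "ABC", "delta": 2})
print(alerts)  # [{'symbol': 'XYZ', 'delta': 7}]
```

Because the response happens inside `publish`, the system reacts at the moment the event enters it, which is the responsiveness property the text describes.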

1.8.2. Use of multimedia in UI

Multimedia user interfaces combine several kinds of media to help people use a
computer. These media can include text, graphics, animation, images, voice, music,
and touch. Multimedia user interfaces are gaining popularity because they are very
effective at keeping users' interest, they improve the amount of information
users remember, and they can be very cost-effective (Alexander, 1992; Fletcher, 1990;
Verano, 1987).
Several reports support the value of multimedia. Bethlehem Steel found that
multimedia training courses cut training time by 20% to 40%, improved retention
rates 20% to 40%, and allowed employees to take training when it was most
convenient (Alexander, 1992). A Department of Defense literature survey (Fletcher,
1990) concluded that, in higher education, interactive videodisc training roughly
improved the performance of students in the 50th percentile to about the 75th
percentile of performance. Big 5 Sporting Goods reported (Wilder, 1992) that
training time for a point-of-sale cashier dropped about 75% to 50%, cashiers made
fewer errors at the point of sale, cashiers and sales representatives retained their
skills, and the company standardized training. Another study (Verano, 1987) found
that the greater the level of interactivity in the course materials, the more students
learned. Compared to standard classroom lecture, students who used interactive
videodiscs retained 19% more information when tested four weeks after the training
period.
Successful multimedia designers build their products with primary emphasis on the
user. Multimedia designers determine which human sensory system is most efficient
at receiving specific information, then use the media that involves that human
sensory system. For example, to teach someone how to fix a part on a jet engine it
is probably most efficient for the student to watch someone else fixing the part
rather than hearing a lecture on how to fix the part. The human visual system is
better at receiving this complex information. So, the designer for this multimedia
product should probably use video as the medium of communication.
This heavy emphasis on the user's senses, rather than the media, means that we
should probably call these product user interfaces "multisensory" rather than
"multimedia". The human senses that designers use most frequently in their
multimedia products are sight, sound, and touch. Multimedia products often
stimulate these senses simultaneously. For example, a user can see and hear a
video, and then touch a graphical button to freeze the video image.
Since so many media are available to the multimedia user interface designer, it is
very easy to overwhelm and confuse the users of these products. The following
guidelines are based on the way people think and perceive information. These
guidelines will help you build multimedia user interfaces that are easy and
comfortable for people to learn and use.

1.8.3. Modeling techniques

The object-modeling technique (OMT) is an object modeling language for software
modeling and designing. It was developed around 1991 by Rumbaugh, Blaha,
Premerlani, Eddy and Lorensen as a method to develop object-oriented systems and
to support object-oriented programming. It describes the object model, or static
structure, of the system.
OMT was developed as an approach to software development. The purposes of
modeling, according to Rumbaugh, are:
- testing physical entities before building them (simulation),
- communication with customers,
- visualization (alternative presentation of information), and
- reduction of complexity.
OMT has proposed three main types of models:
- Object model: The object model represents the static and most stable
phenomena in the modeled domain. Main concepts are classes and
associations with attributes and operations. Aggregation and generalization
(with multiple inheritance) are predefined relationships.
- Dynamic model: The dynamic model represents a state/transition view on the
model. Main concepts are states, transitions between states, and events to
trigger transitions. Actions can be modeled as occurring within states.
Generalization and aggregation (concurrency) are predefined relationships.
- Functional model: The functional model handles the process perspective of
the model, corresponding roughly to data flow diagrams. Main concepts are
process, data store, data flow, and actors.
OMT is a predecessor of the Unified Modeling Language (UML). Many OMT modeling
elements are common to UML.
Functional Model in OMT: In brief, a functional model in OMT defines the function of
the whole internal processes in a model with the help of "Data Flow Diagrams"
(DFDs). It details how processes are performed independently.

1.9. Applications of new features in User interface

1.9.1. Selection of UI

1.9.1.1. Touch screen


Look around and you will notice that you are surrounded by touch screens. If you
are reading this in any public area, you can probably find at least three gadgets
with a touch screen user interface within a four meter radius of you. Touch screens
have really made our digital life
painless; you no longer require an external keyboard, mouse or stylus. Virtually
any touch screen system is ready to carry out your bidding, and you can control it
with just your fingertips.
Did you know that the powerful computer systems which monitor and control the
experiments deep below ground in the CERN laboratory are run by touch screen
devices? Touch screens are also orbiting the Earth at breakneck speed aboard the
International Space Station. It should make you feel special to know that the
technology you use on your smartphone or computer is genuinely cutting-edge.
But do you know which touch screen technology you are using?
Let us study the different types of touch screens available today. Touch screens
can be divided into three types:
1. Resistive touch screen
2. Capacitive touch screen
3. Surface acoustic touch screen
Dr. Sam Hurst filed a patent for resistive touch screen technology in 1971. Even
though it is a relatively old technology, resistive touch screens have gone through
many improvements and remain a preferred choice for many manufacturers because
they are reliable.
Anyone familiar with how the electrical switches in their own home operate will
understand the principle behind the resistive touch screen. A household electrical
switch separates the wires in the off position and does not let electricity flow.
When you turn the switch on, contact is made between the wires and electricity
flows, supplying current to the appliance connected to the switch. Resistive touch
screens work on the same principle: they are made of the actual screen plus two
layers of conductive and resistive material respectively. Some touch screens also
have a fourth protective layer.

The layers are coated with a transparent resistive material called indium tin oxide.
The layers are separated by a very thin matrix of transparent dots. An electrical
current of around 5 volts keeps moving between the layers. When you touch a
specific area of the screen, the pressure exerted by your finger or stylus makes
contact between the conductive and resistive layers, and a circuit is
formed. The controller registers the change in voltage and calculates the
coordinates with the help of an analog-to-digital converter.
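That last step can be sketched in a few lines. Assuming a hypothetical 10-bit ADC and an 800x480 panel (both figures are made up for the example, not taken from any real controller), mapping a raw voltage reading to pixel coordinates is a simple scaling:

```python
# Hypothetical resistive touch controller: the ADC reports the divided
# voltage on each axis as a raw integer, which is scaled to pixels.
ADC_MAX = 1023          # 10-bit analog-to-digital converter
SCREEN_W, SCREEN_H = 800, 480

def adc_to_coords(raw_x, raw_y):
    """Map raw ADC readings (0..ADC_MAX) to pixel coordinates."""
    x = raw_x * (SCREEN_W - 1) // ADC_MAX
    y = raw_y * (SCREEN_H - 1) // ADC_MAX
    return x, y

print(adc_to_coords(0, 0))        # (0, 0): one corner of the panel
print(adc_to_coords(1023, 1023))  # (799, 479): the opposite corner
print(adc_to_coords(512, 512))    # (399, 239): roughly the centre
```

Real controllers also calibrate for the panel's actual resistance range and filter noisy readings, but the core voltage-to-coordinate conversion is this scaling.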
Most companies prefer resistive touch screens because they are easy to
manufacture and production costs are low, a saving that can be passed on to the
consumer through a cheaper price for the device. Also, because the resistive
touch screen has existed for quite a while and is a tried and trusted technology, it
is more dependable.
You can use almost any object to register a touch on the screen, as the technology
does not rely on the conductive properties of the object used. Variations in
temperature and other atmospheric conditions have no bearing on the functioning
of a resistive touch screen either.

1.9.1.1.1. Capacitive
Most people who use gadgets regularly understand what a touch screen is. Even
if you do not own an electronic device with an embedded touch screen, chances
are you have come in contact with one, since touch screens are found on ATMs
and information kiosks. Touch screens provide a great deal of control and are
user-friendly. Working with pointing devices such as a track pad or computer
mouse entails some amount of maneuvering: you have to move the cursor all the
way around the display screen to click on different icons. Imagine how much
more annoying it would be to use a pointing device to type with the aid of an on
screen keyboard. With a touch screen you can precisely touch the icon or key you
want to use, and it will acknowledge your touch.
The touch screen is an input as well as an output device combined into one.
Although all touch screens fulfill a similar function, the way in which they
acknowledge a touch differs, depending on the underlying manner in which the
touch screen works. There are three different types of touch screens:
Capacitive touch screen
Resistive touch screen
Surface acoustic touch screen
Let us look at how capacitive touch screens function. We will use the surface of
still, undisturbed water to describe in principle how a capacitive touch screen
works. The surface of still water is even, with no disruption, but the moment you
touch it with your finger, small concentric circles are created that move away
from your finger and grow bigger until they reach the sides of the vessel holding
the water.

Capacitive touch screens operate much like the above mentioned illustration. A
capacitive touch screen consists of the actual display that shows images and
text; over this screen there is a thin, transparent conductor, typically made of
indium tin oxide, that is capable of holding an electrical charge. The human
body, as you may already know, is an effective conductor of electrical charge, so
when you touch the screen a tiny amount of charge is absorbed by your finger.
The charge is minimal and does no harm to your body, but it disrupts the
screen's electric field, just as your finger disrupts the surface of water. Sensors
placed on the edges of the screen work out the X and Y coordinates, effectively
registering the input, and send it to a controller, which in turn transmits the data
to the software.
Since the capacitive touch screen has only one layer, it does not deflect light and
offers a bright color display that can easily be viewed even in sunlight. It is also
far more precise and does not require pressure to be exerted on the screen: the
slightest touch is registered and recognized by the controller. Capacitive touch
screens also support multi-touch, so you can touch two different icons
simultaneously and the sensors will detect each individual touch.
The capacitive touch screen has its share of disadvantages. These screens are
expensive to produce and are therefore avoided by some electronics companies
mindful of the price of their devices. Because this type of touch screen relies on
the conductive properties of the object touching it, you will not be able to use it
with gloves or any other object that is a poor conductor of electric charge.
Capacitive touch screens are a fairly new technology and are still evolving.
Companies are looking for inexpensive components for producing these screens
so that they can lower the price. At a lower cost, capacitive touch screens could
find their way into entry-level gadgets in the future and enjoy wide-scale
adoption.

1.9.1.1.2. Resistive
Touch screens have found their way into everyday gadgets and even the space
station. We are surrounded by touch screens, and quite frankly they have made our
digital life much easier. You no longer need an external keyboard, mouse or stylus
as the input device; you can make a computer or electronic device carry out your
command with nothing more than your fingertip. Based on the method used to
register a touch on screen, touch screens fall into three categories:
Resistive touch screen
Capacitive touch screen
Surface acoustic touch screen
Let us find out more about the resistive touch screen. Dr. Sam Hurst filed the patent
for resistive touch screens in 1971. The resistive touch screen has experienced wide
scale popularity and is still a well-known type of touch screen used in many
electronic devices. Although it is a fairly old technology, it is preferred by many
corporations due to its durability.
People who are aware of how the electrical switches in their home work will
understand the rationale behind resistive touch screen technology. A household
electrical switch separates the wires in the off position and does not allow electric
current to flow. Once you turn the switch on, contact is formed between the wires
and electricity flows, supplying current to the appliance attached to the switch.
Resistive touch screens work on a similar principle: they are made of the actual
screen and two layers of conductive and resistive material respectively. Certain
touch screens also have a fourth protective layer.

The sheets are coated with a transparent resistive material known as indium tin
oxide. The layers are separated by a very thin matrix of transparent dots. An
electrical current of roughly 5 volts keeps circulating between the layers. Any time
you press a specific section of the screen, the pressure applied by your finger or
stylus causes contact between the conductive and resistive layers, and a circuit is
formed. The controller records the change in voltage and computes the coordinates
using an analog-to-digital converter.

Resistive touch screens are a popular choice because they are reasonably cheap to
produce, and because they have been in existence for a long time they are regarded
as trusted by manufacturers. The display is also immune to changes in temperature
and pollution in the environment: the electrical current running between the layers
is not affected by temperature fluctuations. Besides your fingers you can also use
styluses, pencils and other objects, since the screen does not rely on the conductive
capability of the object used to register a touch.
If the resistive touch screen is so good, why was there a need to invent new kinds of
touch screens, you may ask? Well, despite its numerous benefits there are a couple
of disadvantages. We all want our output devices, such as our screens, to be clear
and highly detailed, producing high-definition images in any kind of lighting
conditions. Because the resistive touch screen is made up of multiple layers, light
emanating from the screen is deflected somewhat. This makes it hard to view
images and read text in direct sunlight. And while some older resistive touch
screens did not register multi-touch, the new
generation of resistive touch screens has better, more powerful controllers that can
register multi-touch.

1.9.1.2. Voice activation


In radio communications, "voice activation" or "VOX" has been used for decades
where convenience or the need for hands-free operation rule out push to talk. The
idea is simple: monitor the microphone input and transmit only when the user is
actually talking. In practice, voice activation has a number of subtleties that make
implementing it and getting it to work well quite challenging. Fortunately, Dave
Hawkes, a Speak Freely user who is also an expert Windows application and driver
developer, took on the task of adding voice activation to Speak Freely and
contributed the feature so that all of us can benefit from it.
Voice activation can be tricky to set up; it's important to use a good microphone,
well isolated from your speakers, set the input level correctly, and make sure
echoes don't trigger transmission. Transmissions from Apollo astronauts on the
Moon were voice activated, and if you listen to tapes of them, you'll hear the odd
dropped word, echo feedback, and other inevitable artifacts of voice activation. Still,
it got the job done and allowed the astronauts to concentrate on what they were
doing rather than operating the radio. Speak Freely's voice activation can do the
same for you.
It's best to become familiar with Speak Freely in push to talk mode; once you've
mastered the basics of establishing connections, setting compression modes, and
coping with the inevitable problems of sending voice over the network, you'll be
ready to tackle the additional complexities of voice activation.
You enable voice activation with the Options/Voice Activation menu item. The
default setting, "None", disables Voice Activation and selects the usual push to talk
mode. To enable Voice Activation, check one of the three voice activation speed
items, "Fast", "Medium", or "Slow". To avoid transmission breakups due to short
pauses in speech, voice activation continues transmitting until a period of silence of
a given duration has occurred; the choices refer to the length of silence that deems
a transmission complete. For most purposes, "Medium" will work well.
Once you've selected voice activation, you need to adjust the level which causes
Speak Freely to begin transmitting. This depends on your microphone, input gain
setting, and the amount of background noise, so you have to set the level
appropriate for your own environment. Use the Options/Voice Activation/Monitor
menu item to display the Voice Activation Monitor dialogue box. Open a connection
(perhaps to one of the echo servers), and press the space bar to begin transmitting.
The green bar graph at the left shows you, in real time, the sound level received
from the microphone (while you're transmitting to any connection). The red line is
the level above which voice activation enables transmission. You can move this
level up and down with respect to the audio input level with the scroll bar. Adjust it
so it's slightly above the level of the green bar when you're not speaking into the
microphone. You'll see that when it's adjusted correctly transmission will stop (an X
is drawn through the ear cursor) shortly after you stop speaking and resume (the X
disappears) at your next utterance.
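The logic just described, a trigger threshold plus a silence hold time, can be sketched as a small state machine. This frame-based model with made-up numeric levels is a simplification for illustration, not Speak Freely's actual code:

```python
# VOX sketch: transmit while the input level is above a threshold, and
# keep transmitting through short pauses until silence has lasted longer
# than the hold time (the "Fast"/"Medium"/"Slow" setting in the text).
def vox(levels, threshold, hold_frames):
    """Return a transmit on/off decision for each input frame."""
    transmitting = False
    silent = 0
    decisions = []
    for level in levels:
        if level >= threshold:
            transmitting = True    # speech detected: (re)start transmitting
            silent = 0
        elif transmitting:
            silent += 1
            if silent > hold_frames:
                transmitting = False  # pause lasted too long: stop
        decisions.append(transmitting)
    return decisions

# A two-frame pause is bridged; the longer silence ends the transmission.
print(vox([9, 8, 1, 1, 9, 1, 1, 1, 1], threshold=5, hold_frames=2))
```

Setting the threshold just above the background-noise level, as the Voice Activation Monitor dialogue has you do with the red line, is exactly the `threshold` parameter here; the hold time corresponds to the Fast/Medium/Slow choices.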
When using voice activated transmission, if your audio hardware is half-duplex, you
should also select the Options/Break Input menu item to allow received sound to
interrupt your transmissions. Otherwise, the continuous monitoring of the
microphone would prevent you from hearing remote users.

1.9.1.2.1. Voice recognition


In computer science, speech recognition (SR) is the translation of spoken words
into text. It is also known as "automatic speech recognition", "ASR", "computer
speech recognition", "speech to text", or just "STT". Some SR systems use "training"
where an individual speaker reads sections of text into the SR system. These
systems analyze the person's specific voice and use it to fine tune the recognition of
that person's speech, resulting in more accurate transcription. Systems that do not
use training are called "Speaker Independent" systems. Systems that use training
are called "Speaker Dependent" systems.
Speech recognition applications include voice user interfaces such as voice dialing
(e.g., "Call home"), call routing (e.g., "I would like to make a collect call"), domotic
appliance control, search (e.g., find a podcast where particular words were spoken),
simple data entry (e.g., entering a credit card number), preparation of structured
documents (e.g., a radiology report), speech-to-text processing (e.g., word
processors or emails), and aircraft (usually termed Direct Voice Input).
The term voice recognition[1][2][3] refers to finding the identity of "who" is speaking,
rather than what they are saying. Recognizing the speaker can simplify the task of
translating speech in systems that have been trained on specific person's voices or
it can be used to authenticate or verify the identity of a speaker as part of a security
process.

1.9.1.2.2. Voice command


A voice command device (VCD) is a device controlled by means of the human voice.
By removing the need to use buttons, dials and switches, consumers can easily
operate appliances with their hands full or while doing other tasks. Some of the first
examples of VCDs can be found in home appliances with washing machines that

allow consumers to operate washing controls through vocal commands and mobile
phones with voice-activated dialing.
Newer VCDs are speaker-independent, so they can respond to multiple voices,
regardless of accent or dialectal influences. They are also capable of responding to
several commands at once, separating vocal messages, and providing appropriate
feedback, accurately imitating a natural conversation. They can understand around
50 different commands and retain up to 2 minutes of vocal messages. VCDs can be
found in computer operating systems, commercial software for computers, mobile
phones, cars, call centers, and internet search engines such as Google.
In 2007, a CNN business article reported that voice command was over a billion
dollar industry and that companies like Google and Apple were trying to create
voice recognition features. It has been five years since the article was published,
and since then the world has witnessed a variety of voice command devices. In
addition, Google included a text-to-speech engine called Pico TTS in Android, and
Apple has released Siri. Voice command devices are becoming more widely available, and
innovative ways for using the human voice are always being created. For example,
Business Week suggests that the future remote controller is going to be the human
voice. Currently Xbox Live allows such features and Jobs hinted at such a feature on
the new Apple TV.

2. Understanding issues related to selection of user interface


2.1.1. Identifying the characteristics of a user

2.1.1.1. Human memory
In psychology, memory comprises the processes by which information is encoded,
stored, and retrieved. Encoding allows information from the outside world to reach
our senses in the form of chemical and physical stimuli; in this first stage the
information must be changed so that it can enter the encoding process. Storage is
the second stage: it entails maintaining information over periods of time. Finally,
the third process is the retrieval of the information we have stored; we must locate
it and return it to our consciousness. Some retrieval attempts may be effortless,
depending on the type of information.
From an information processing perspective there are three main stages in the
formation and retrieval of memory:
- Encoding or registration: receiving, processing and combining of received
information
- Storage: creation of a permanent record of the encoded information
- Retrieval, recall or recollection: calling back the stored information in
response to some cue for use in a process or activity

2.1.1.1.1. Knowledge representation of humans


The best way to understand what Knowledge Representation is, is simply to
state what it is intended for. Its mission is to make knowledge as explicit
as possible. This is necessary because knowledge is stored in implicit form
(tacit knowledge, not observable from the outside) inside minds and
spread around in community social habits. To facilitate knowledge sharing, it is
necessary to make it explicit.
Tacit Knowledge
Tacit knowledge is what an agent obtains when it observes its environment
and makes internal representations of what it perceives. Here "agent" stands
for an entity capable of choice. An agent's choices are built from its internal
representation, its model of the world. The model captures what there is and
how it works, allowing the agent to predict what would happen if it does
something or not; a complete view of this from a Systems Theory
perspective [Klir92] is shown in the figure.

Figure: Knowledge viewed from a Systems Theory perspective


In other words, tacit knowledge allows an agent to choose the best options
that, hopefully, will help it achieve its goals. These goals are unimportant
from a generic point of view: they might range from survival to booking a
ticket, or getting a favorable transoceanic export rate, for instance.

2.1.1.1.2. Perception
Perception (from the Latin perceptio, percipio) is the organization, identification and
interpretation of sensory information in order to represent and understand the
environment. All perception involves signals in the nervous system, which in turn
result from physical stimulation of the sense organs. For example, vision involves
light striking the retinas of the eyes, smell is mediated by odor molecules and
hearing involves pressure waves. Perception is not the passive receipt of these
signals, but can be shaped by learning, memory and expectation. Perception
involves these "top-down" effects as well as the "bottom-up" process of processing
sensory input. The "bottom-up" processing is basically low-level information that's
used to build up higher-level information (e.g., shapes for object recognition). The
"top-down" processing refers to a person's concept and expectations (knowledge)
that influence perception. Perception depends on complex functions of the nervous
system, but subjectively seems mostly effortless because this processing happens
outside conscious awareness.

2.1.1.1.3. Attention of user


Selective attention made the test subjects unable to see the gorilla, and it is the
same phenomenon that contributes to usability test subjects' inability to see certain
interface elements.
Psychological research abounds in this, but there are several points related to
selective attention that are particularly relevant to UX design:
1. Human visual perception is much more incomplete and inaccurate than most
people realize. Our eyes are not able to process everything that comes into
their field of view. Our minds simply do not have enough cognitive resources.
Emily Balcetis and David Dunning discuss this in their article, Wishful Seeing:
Motivational Influences on Visual Perception in a Physical Environment:

The naive assumption among most laypeople is that the eye functions like a
camera, in that the visual system captures everything in the environment in all its
detail. However, the assumption of comprehensive vision is wrong... Perception is
not the cold, calculated processing of light, but is instead a result of concurrent
interactions among experienced sensations, memory and thinking, and social
influences.
2. More focus in one area means less attention elsewhere. Attention is a zero-sum
game. If we pay more attention to one object, we consequently pay less
attention to others. Difficult or important tasks require a great deal of
attention, which leaves fewer cognitive resources for gorilla-noticing, or
observing whatever else happens to be in one's field of view.
3. Expectations manipulate our perceptions. Because we have limited visual
intake, we use our biases, expectations, and memories to fill in the gaps. As a
result, what we process are highly subjective interpretations of what's
actually there, interpretations that vary drastically from person to person.
4. Motivations manipulate our perceptions. When we take an action, we do so
with intent. We have some task or goal in mind and we want to take steps
that bring us closer to achieving that goal or accomplishing that task. Balcetis
and Dunning use the term "wishful seeing", meaning that we interpret
things in a way that fits with our goals; in other words, we see what we wish
to see. Again, these interpretations are highly subjective and vary drastically
from person to person.

2.1.1.1.4. Reasoning
Reasoning is the capacity for consciously making sense of things, for establishing and verifying
facts, and changing or justifying practices, institutions, and beliefs based on new or
existing information. It is closely associated with such characteristically human
activities as philosophy, science, language, mathematics, and art, and is normally
considered to be a definitive characteristic of human nature.
The concept of reason is sometimes referred to as rationality and sometimes as
discursive reason, in opposition to intuitive reason.
Reason or "reasoning" is associated with thinking, cognition, and intellect. Reason,
like habit or intuition, is one of the ways by which thinking comes from one idea to a
related idea. For example, it is the means by which rational beings understand
themselves to think about cause and effect, truth and falsehood, and what is good
or bad.
In contrast to reason as an abstract noun, a reason is a consideration which explains
or justifies some event, phenomenon or behaviour. The ways in which human
beings reason through argument are the subject of inquiries in the field of logic.
Reason is closely identified with the ability to self-consciously change beliefs,
attitudes, traditions, and institutions, and therefore with the capacity for freedom
and self-determination.

2.1.1.1.5. Communication
We need to communicate by nature and we communicate by choice. There are
physical needs, identity needs, social needs, and practical goals, and all of these
drive the ways we communicate. When it comes to physical needs, communication is
so important that its presence or absence affects physical health. [1] It is almost like
a survival tool: if we find ourselves in danger we need to communicate to find help,
and vice versa. Beyond that come identity needs, where communication does
more than enable us to survive. It is the way, indeed the only way, we learn who
we are. Are we smart or stupid, attractive or ugly, skillful or inept? The answers to
these questions don't come from looking in the mirror; we decide who we are based
on how others react to us. Besides helping to define who we are, communication
provides a vital link with others. That is where we have social needs. Researchers
and theorists have identified a whole range of social needs that we satisfy by
communicating. These include pleasure, affection, companionship, escape,
relaxation, and control, all through our interpersonal relations. The
author adds, "Two are better than one, because they have a good reward for their
labor. For if they fall, one will lift up his companion. But woe to him who is alone
when he falls, for he has no one to help him up." Then we have practical goals:
besides satisfying social needs and shaping our identity, communication is the most
widely used approach to satisfying what communication scholars call instrumental
goals: getting others to behave in ways we want. Some instrumental goals are quite
basic: communication is the tool that lets you tell the hair stylist to take "just a little
off the sides", lets you negotiate household duties, and lets you convince the
plumber that the broken pipe needs attention now! These are the main ways we
communicate, and all of them include talking, looking, nonverbal communication,
and listening, showing how, by nature or by choice, we react and communicate
differently.

3.1.1.1.6. Skills of user and their acquisition

3.1.1.1.7. User's cognitive model


Cognitive models of information retrieval rest on a mix of areas such as cognitive
science, human-computer interaction, information retrieval, and library science.
They describe the relationship between a person's cognitive model of the
information sought and the organization of this information in an information
system. These models attempt to understand how a person is searching for
information so that the database and the search of this database can be designed in
such a way as to best serve the user. Information retrieval may incorporate multiple
tasks and cognitive problems, particularly because different people may have
different methods for attempting to find this information and expect the information
to be in different forms. Cognitive models of information retrieval may be attempts
at something as apparently prosaic as improving search results or may be
something more complex, such as attempting to create a database which can be
queried with natural language search.
Berrypicking
One way of understanding how users search for information has been described by
Marcia Bates at the University of California at Los Angeles. Bates argues that
"berrypicking" better reflects how users search for information than previous models of
information retrieval. This may be because previous models were strictly linear and
did not incorporate cognitive questions. For instance, one typical model is of a
simple linear match between a query and a document. However, Bates points out
that there are simple modifications that can be made to this process. For instance,
Salton has argued that user feedback may help improve the search results.
Bates argues that searches are evolving and occur bit by bit. That is to say, a
person constantly changes his or her search terms in response to the results
returned from the information retrieval system. Thus, a simple linear model does
not capture the nature of information retrieval because the very act of searching
causes feedback which causes the user to modify his or her cognitive model of the
information being searched for. In addition, information retrieval can be bit by bit.
Bates gives a number of examples. For instance, a user may look through footnotes
and follow these sources. Or, a user may scan through recent journal articles on the
topic. In each case, the user's question may change and thus the search evolves.
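The kind of user feedback Salton described can be sketched with the classic Rocchio update, in which the query vector is nudged toward documents the user judged relevant. This is a simplified illustration of iterative query refinement, not Bates's berrypicking model itself; the term vectors, weights and documents below are all invented for the example.

```python
# A minimal sketch of relevance feedback in the spirit of Salton's work:
# the Rocchio update moves the query vector toward the centroid of the
# documents the user marked relevant. All values here are illustrative.

def rocchio(query, relevant_docs, alpha=1.0, beta=0.75):
    """Return an updated query vector: alpha*q + beta*centroid(relevant)."""
    n = len(relevant_docs)
    centroid = [sum(doc[i] for doc in relevant_docs) / n
                for i in range(len(query))]
    return [alpha * q + beta * c for q, c in zip(query, centroid)]

# Term axes: ["interface", "usability", "forklift"]
query = [1.0, 0.0, 0.0]          # the user started by searching "interface"
relevant = [[0.8, 0.9, 0.0],     # but the documents the user kept were
            [0.6, 1.0, 0.0]]     # also strongly about "usability"
print(rocchio(query, relevant))  # the query weight shifts toward "usability"
```

Each round of feedback modifies the query, so the search evolves bit by bit, exactly the non-linear behaviour Bates argues a simple query-document match cannot capture.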
Exploratory Search
Researchers in the areas of human-computer interaction and cognitive science
focus on how people explore for information when interacting with the WWW. This
kind of search, sometimes called exploratory search, focuses on how people
iteratively refine their search activities and update their internal representations of
the search problems.[3] Existing search engines were designed based on traditional
library science theories related to retrieving basic facts and simple information
through an interface. However, exploratory information retrieval often involves ill-defined search goals and evolving criteria for evaluation of relevance. The
interactions between humans and the information system will therefore involve
more cognitive activity, and systems that support exploratory search will therefore
need to take into account the cognitive complexities involved during the dynamic
information retrieval process.

Natural language searching
Another way in which cognitive models of information may help in information
retrieval is with natural language searching. For instance, How Stuff Works imagines
a world in which, rather than searching for local movies, reading the reviews, then
searching for local Mexican restaurants, and reading their reviews, you will simply
type "I want to see a funny movie and then eat at a good Mexican restaurant. What
are my options?" into your browser, and you will receive a useful and relevant
response.[4] Although such a thing is not possible today, it represents a holy grail
for researchers into cognitive models of information retrieval. The goal is to
somehow program information retrieval programs to respond to natural language
searches. This would require a fuller understanding of how people structure queries.

3.1.1.1.8. Use of metaphors and the impact on HCI


The word "metaphor" is well known. A widely quoted example for it can be found in
Shakespeare's As You Like It: "All the world's a stage...".
It is originally a Greek word which means "carrying across". The essence of
metaphor is to give an idea of some unknown thing or concept, by illustrating it with
something else which is known and which originally has nothing to do with it. The
metaphor identifies the two things with each other, for the purpose of illustration. In
fact the metaphor is an error committed on purpose, because the two things which
are said to be identical are in fact not identical.
It technically differs from the simile: it is an implicit simile in which the fact of
comparing is not explicitly mentioned. The metaphor in the broader sense includes
the simile (and the allegory, the hyperbole, the symbol, etc.) as well. Here the
word 'metaphor' will always be used in this broader sense. The important point
is the figurative speech.
The metaphor puts aside conventional expressions; it takes us back into the pre-verbal world from which words and concepts emerged.
As Fig. 1. shows, the metaphor is usually directed (the arrow cannot be reversed).
Metaphors can be found not only in poetic and dramatic works of great literary
value; they are also a part of our everyday life and of journalism. Their role in
literature and in rhetoric is to illustrate the imaginary and the unknown with
something concrete and known, and to evoke associations in the reader or listener
that enrich the impression. The role of the metaphor in science is to illustrate
unknown, not easily imaginable things. An example of a scientific metaphor is
Rutherford's model of the atom, which compares the structure of the hydrogen
atom to the solar system. Metaphoric thinking dominates in those early phases of scientific thinking
(like during formulation of a new theory) when the scientist does not yet see clearly
and thinks in metaphors. The formulation of metaphors usually precedes the
formulation of clear concepts.

3.1.2. Health conditions of the user

Over the past twenty years a great many questions have arisen concerning the links
that may exist between the use of computers and the health and safety of those
who use them. Some health effects -- such as joint pain and eye strain following an
extended period huddled typing at a screen and keyboard -- are recognised as an
actuality by many. However, proving with any degree of certainty the longer-term
health impacts of computer use remains problematic. This is not least because
widespread computer use is still a relatively modern phenomenon, with the
boundaries between computers and other electronic devices also continuing to blur.
Current UK legislation, for example, makes it clear that use of a computer should
not induce a seizure in a person with epilepsy. However, given that it is also accepted in the UK that
watching flashing video images on a television can induce such a fit, it becomes
immediately obvious that both current guidance and legislation are inadequate. The
existing Display Screen Equipment Regulations 1992 were written at a time when
viewing a photograph -- let alone a movie or TV programme -- on a computer was
not possible. At the time, any health risks associated with mobile phones or wireless
computer networks were also yet to be aired. Hence, whilst the following does
report the current legislation and guidance in respect of the use of computer
equipment, its historical context also needs to be remembered.
POTENTIAL COMPUTING-RELATED DISORDERS
The health problems most highly associated with the use of computer equipment
are upper limb disorders, eye problems, stress and fatigue, and skin complaints.
'Upper limb disorders' is a term used to describe a range of conditions affecting the
fingers, hands, arms and shoulders. Such conditions may range from mild aches and
pains, through to chronic tissue and/or muscular complaints. Repetitive strain injury
(RSI) is one such condition. This is attributed to the excessive performance of
repetitive, dextrous operations. As a result of such repetitive activity, tenosynovitis
(inflammation of the tendon sheath) or carpal tunnel syndrome (compression of the
median nerve at the wrist) may develop.
Repetitive strain injury can result from prolonged high-speed typing, intensive use
of a mouse, or indeed the long-term use of a computer gaming control pad. Early
signs of repetitive strain injury include tingling or numbness in the affected finger
or fingers, and pain or even swelling across the hands and even the upper arms.
In the United States, the National Institute for Occupational Safety and Health has reported that
40 per cent of people working predominantly with computers suffer some RSI
symptoms, with over ten per cent experiencing constant discomfort.

3.1.2.1. Ergonomics and surrounding environment

Ergonomics can be defined as the application of knowledge of human
characteristics to the design of systems. People in systems operate within an
environment, and environmental ergonomics is concerned with how they interact
with that environment from the perspective of ergonomics. Although there have
been many studies, over hundreds of years, of human responses to the
environment (light, noise, heat, cold, etc.) and much is known, it is only with the
development of ergonomics as a discipline that the unique features of
environmental ergonomics are beginning to emerge. In principle, environmental
ergonomics encompasses the social, psychological, cultural and organizational
environments of systems; however, to date it has been viewed as concerned with
the individual components of the physical environment. Typically, ergonomists
have considered the environment in a mechanistic way, in terms such as a lighting
or noise survey, rather than as an integral part of an ergonomics investigation.

3.1.2.2. Specific concerns of a user


3.1.2.2.1. Repetitive strain injury
Repetitive strain injuries (RSIs) are "injuries of the musculoskeletal and nervous
systems that may be caused by repetitive tasks, forceful exertions, vibrations,
mechanical compression (pressing against hard surfaces), or sustained or awkward
positions". RSI is also known as cumulative trauma disorders, repetitive stress
injuries, repetitive motion injuries or disorders, musculoskeletal disorders, and
[occupational] overuse syndromes.

3.2. Analyzing the factors
3.2.1. Task-based analyzing
A task analysis defines a job in terms of the knowledge, skills, and abilities (KSA)
necessary to perform daily tasks. It is a structured framework that dissects a job
and arrives at a reliable method of describing it across time and people by
composing a detailed listing of all the tasks. The first product of a task analysis is a
task statement for each task on the list. When writing the task statement, start
each task with a verb, indicate how it is performed, and state the objective. For
example: "Loads pallets using a forklift."
One way of getting a comprehensive list is to have the employees prepare their own
list, starting with the most important tasks. Then, compare these lists with yours.
Finally, discuss any differences with the employees, and make changes where
appropriate. This helps to ensure that you have accounted for all tasks and that
they are accurate. It also gets them involved in the analysis activity.
Task or needs analysis should be performed whenever there are new processes or
equipment, when job performance is below standards, or when requests for changes
to current training or for new training are received. An analysis helps ensure that
training is the appropriate solution, rather than another performance solution.
Once the task statement has been defined, the task analysis will then go into
further detail by describing the:
- task frequency
- difficulty of learning
- importance to train
- task criticality
- task difficulty
- overall task importance
This in turn provides you with the information for identifying the KSA required for
successful task performance. The analysis might also go into further detail by
describing the task steps required to perform the task.
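As a sketch of how such an analysis might be recorded, the following shows one possible data structure for task statements and their attributes. The field names, the 1-5 rating scales and the priority formula are illustrative assumptions, not part of any standard task-analysis notation.

```python
from dataclasses import dataclass, field

@dataclass
class TaskRecord:
    """One task statement plus the analysis attributes listed above.

    The 1-5 rating scales are an assumption made for this sketch."""
    statement: str            # verb-first task statement
    frequency: int            # 1 = rarely performed, 5 = constantly performed
    learning_difficulty: int  # 1 = trivial to learn, 5 = very hard to learn
    importance_to_train: int
    criticality: int
    difficulty: int
    overall_importance: int
    steps: list = field(default_factory=list)  # optional task-step breakdown

def training_priority(task: TaskRecord) -> int:
    """Toy priority score: tasks that are frequent, critical and hard
    to learn are the strongest candidates for formal training."""
    return task.frequency + task.criticality + task.learning_difficulty

tasks = [
    TaskRecord("Loads pallets using a forklift", 5, 3, 4, 4, 3, 4),
    TaskRecord("Files the daily loading report", 5, 1, 2, 2, 1, 2),
]
for t in sorted(tasks, key=training_priority, reverse=True):
    print(t.statement, training_priority(t))
```

Keeping each task as a structured record like this makes it easy to compare the employees' lists with your own and to sort the combined list by whatever attribute matters for the training decision.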

3.2.2. User-centered methods

The options that enable testing the user in various ways are as follows:

3.2.2.1. Storyboarding

Storyboards are visual representations that aid in the creation process of digital
storytelling. Storyboards lay out images in sequential order to create the flow of the
production. They can also include technical aspects and explanations of design. The
following flowchart demonstrates how the basic scenes from a digital story might be
organized.

Then, when all of the scenes have been planned, a more detailed storyboard can
be created. This gives the digital story developer a much clearer idea of what
elements will go into their digital story and how they will fit together.

3.2.2.2. User needs analysis

User-centered design, according to Katz-Haas, is really about defining who the users
are, defining their tasks and goals, their experience levels, what functions they want
and need from a system, what information they want and need and understanding
how the users think the system should work. User-centered design has also been
linked to the identification of required job performance skills, the assessment of
prospective trainees' skills and the development of objectives.
The first step in any user-centered design process is to understand the users' needs.
Put simply: whereas requirements analysis focuses on the elements that need to be
represented in the system, needs analysis focuses on the requirements related to
the goals, aspirations and needs of the users and/or the user community and feeds
them into the system requirements analysis process. The main purpose of needs
analysis is the user's satisfaction.
As it focuses on the needs of the human, needs analysis is not limited to addressing
the requirements of just software, but can be applied to any domain, such as
automotive, consumer product or services such as banking. Although it is not a
business development tool, it can be used to help in the development of a business
case.

3.3. Evaluating a user interface
3.3.1. Functionality characteristics of UI
3.3.1.1. Use of quality-measuring metrics
3.3.1.1.1. Fitts's Law
First of all, note the spelling: the researcher's name is Paul Fitts, so the correct
form is Fitts's Law (often written Fitts' Law), not "Fitt's Law". Fitts's Law is basically
an empirical model explaining the speed-accuracy tradeoff characteristics of human
muscle movement, with some analogy to Shannon's channel-capacity theorem.
Today, with the advent of graphical user interfaces and different styles of
interaction, Fitts's Law seems to have more importance than ever before.
The early experiments on pointing movements targeted tasks related to worker
efficiency, such as production-line and assembly tasks. They were not aimed at the
HCI field because there was no GUI yet. Paul Fitts substantially extended
Woodworth's research, which focused on telegraph-operator performance, with his
famous reciprocal tapping task to define the well-referenced Fitts's Law (Fitts 1954).
In the Fitts's Law description of pointing, the parameters of interest are:
a. The time to move to the target
b. The movement distance from the starting position to the target center
c. Target width
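These three parameters can be combined in the widely used Shannon formulation of Fitts's Law, MT = a + b * log2(D/W + 1), where D is the movement distance, W the target width, and a and b are constants fitted to measured data. The sketch below uses arbitrary illustrative values for a and b, not empirically measured ones.

```python
import math

def fitts_movement_time(distance, width, a=0.1, b=0.15):
    """Predicted movement time (seconds) using the Shannon formulation
    of Fitts's Law: MT = a + b * log2(D/W + 1).

    a and b are device- and user-specific constants normally found by
    regression; the defaults here are arbitrary illustrative values.
    """
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

# A distant, small target takes longer to acquire than a near, large one:
print(fitts_movement_time(distance=960, width=30))   # small far-away button
print(fitts_movement_time(distance=120, width=120))  # big nearby button
```

This is why UI guidelines favour large click targets placed close to where the pointer already is: both changes reduce the index of difficulty and hence the predicted movement time.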

3.3.1.1.2. Keystroke level method


Keystroke-level model, sometimes referred to as KLM or KLM-GOMS, is an approach
to human-computer interaction (HCI) developed by Card, Moran, and Newell, and
explained in their book The Psychology of Human-Computer Interaction (1983); it is
a simplified variant of the fuller CMN-GOMS model by the same authors. The
model is an 11-step method that can be used by individuals or companies seeking
ways to estimate the time it takes to complete simple data input tasks using a
computer and mouse. By using KLM-GOMS, individuals often find more efficient or
better ways to complete a task simply by analyzing the steps required in the
process and rearranging or eliminating unneeded steps.
It is designed to be easier to use than other GOMS methods, so that companies
who cannot afford human-computer interaction specialists can use it. KLM-GOMS is
usually applied in situations that require minimal amounts of work and interaction
with a computer interface or software design. The calculations and the number of
steps required to accurately compute the overall task time increase quickly as the
number of tasks involved increases. Thus, KLM-GOMS is best suited to evaluate and
time specific tasks that require, on average, less than 5 minutes to complete.
The KLM-GOMS model is designed to be as straightforward as possible. A task is
modeled as a sequence drawn from a small set of primitive operators. Each
operator is assigned a duration, which is intended to model the average amount of
time an experienced user would take to perform it.
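The operator-and-duration idea can be sketched as a simple table lookup and sum. The durations below are the commonly quoted textbook approximations attributed to Card, Moran and Newell; treat them as illustrative defaults, not measurements of your own users.

```python
# Commonly quoted KLM operator durations in seconds; these are textbook
# approximations, not measurements of any particular user population.
OPERATORS = {
    "K": 0.20,  # keystroke (skilled typist)
    "P": 1.10,  # point with the mouse to a target
    "B": 0.10,  # press or release a mouse button
    "H": 0.40,  # home the hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def klm_time(sequence):
    """Total predicted time for a sequence of KLM operators, e.g. 'MPBB'."""
    return sum(OPERATORS[op] for op in sequence)

# Example task: mentally prepare, point at a field, click it (press +
# release), home to the keyboard, then type a 4-letter word.
print(klm_time("MPBB" + "H" + "K" * 4))
```

Comparing the summed times of two candidate operator sequences for the same task is exactly how KLM-GOMS reveals the more efficient design, without any user testing.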

3.3.1.1.3. Test documentation


Test documentation is the vital element which raises any "try out" or "experimental"
activity to the level of a proper test. Even if the way something is tested is good, it
is worthless if it is not documented. How test documentation should look is
specified, for example, in IEEE Standard 829. The IEEE standard allows scaling the
set of documents to a certain degree, but basically it boils down to:
Test plan
The scope of the test activities, the methods and tools, and the schedule and
sequence of all test activities related to the software have to be stated and defined
in this plan. The test objects have to be identified, as well as the attributes which
have to be tested, and the related end-of-test criteria must be fixed.
Responsibilities and risks have to be identified and documented.
Test design specification
The method and approach for the tests has to be defined. In some cases it may be
necessary to design a sophisticated test environment (test stubs, make files,
recording facilities, etc.). All the technical details have to be specified, designed and
documented.
Test case specification / Test procedure specification
In the test case specification the test object has to be identified as well as the
attributes which have to be tested. It has to be made clear which steps and
measures have to be applied to execute the test cases and which results are
expected.
Test log / Test recording / Test reporting
The test results have to be documented, and it has to be identified whether the test
ended with the expected results, i.e. whether it passed or failed. The test recording strongly
depends on the test environment. In some cases this can be an automatic printout.
In other cases this may be a check list which is ticked by the tester (may be even
included in the test procedure / test case spec!). It may be even necessary to apply
different methods within the same project, depending on the kind of test object and
the kind of test employed. A summary report of the test results is required.
Test logs may be voluminous and have to be condensed to have their contents
prepared for a quick overview and reference as well as for management or
customer presentations.
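A minimal machine-readable test-log record, in the spirit of the documents above, might look like the following sketch. The field names are illustrative choices for this example, not taken from the IEEE 829 templates themselves.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class TestLogEntry:
    """One row of a test log; fields are illustrative, not from IEEE 829."""
    test_case_id: str
    description: str
    expected: str
    actual: str
    executed_on: date

    @property
    def verdict(self):
        # A test passes only when the recorded actual result matches
        # the expected result from the test case specification.
        return "PASS" if self.actual == self.expected else "FAIL"

entry = TestLogEntry(
    test_case_id="TC-042",
    description="Login rejects an empty password",
    expected="error message shown",
    actual="error message shown",
    executed_on=date(2024, 1, 15),
)
print(entry.test_case_id, entry.verdict)
```

Keeping the log in a structured form like this is what makes the later condensing step cheap: pass/fail counts and overview tables for management can be generated instead of compiled by hand.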
