
User interface

From Wikipedia, the free encyclopedia


For the boundary between computer systems, see Interface (computing). For other
uses, see Interface.

The Reactable, an example of a tangible user interface

In the industrial design field of human–computer interaction, a user interface (UI) is
the space where interactions between humans and machines occur. The goal of this
interaction is to allow effective operation and control of the machine from the human
end, whilst the machine simultaneously feeds back information that aids the
operators' decision-making process. Examples of this broad concept of user
interfaces include the interactive aspects of computer operating systems,
hand tools, heavy machinery operator controls, and process controls. The design
considerations applicable when creating user interfaces are related to, or involve
such disciplines as, ergonomics and psychology.
Generally, the goal of user interface design is to produce a user interface which
makes it easy, efficient, and enjoyable (user-friendly) to operate a machine in the
way which produces the desired result (i.e. maximum usability). This generally
means that the operator needs to provide minimal input to achieve the desired
output, and also that the machine minimizes undesired outputs to the user.
User interfaces are composed of one or more layers, including a human-machine
interface (HMI) that interfaces machines with physical input hardware such as
keyboards, mice, or game pads, and output hardware such as computer monitors,
speakers, and printers. A device that implements an HMI is called a human interface
device (HID). Other terms for human-machine interfaces are man-machine
interface (MMI) and, when the machine in question is a computer, human–
computer interface. Additional UI layers may interact with one or more human
senses, including: tactile UI (touch), visual UI (sight), auditory UI (sound), olfactory
UI (smell), equilibrial UI (balance), and gustatory UI (taste).
Composite user interfaces (CUIs) are UIs that interact with two or more senses.
The most common CUI is a graphical user interface (GUI), which is composed of a
tactile UI and a visual UI capable of displaying graphics. When sound is added to a
GUI, it becomes a multimedia user interface (MUI). There are three broad categories
of CUI: standard, virtual and augmented. Standard CUIs use standard human
interface devices like keyboards, mice, and computer monitors. When the CUI blocks
out the real world to create a virtual reality, the CUI is virtual and uses a virtual
reality interface. When the CUI does not block out the real world and
creates augmented reality, the CUI is augmented and uses an augmented reality
interface. When a UI interacts with all human senses, it is called a qualia interface,
named after the theory of qualia. CUIs may also be classified by how many senses
they interact with as either an X-sense virtual reality interface or X-sense augmented
reality interface, where X is the number of senses interfaced with. For example,
a Smell-O-Vision is a 3-sense (3S) standard CUI with visual display, sound and
smells; when virtual reality interfaces interface with smells and touch it is said to be a
4-sense (4S) virtual reality interface; and when augmented reality interfaces interface
with smells and touch it is said to be a 4-sense (4S) augmented reality interface.
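
The sense-count classification above can also be expressed as a small data structure. The following Python sketch is purely illustrative: the CompositeUI class, its fields, and its labelling logic are assumptions made for this example and are not part of any published standard.

from dataclasses import dataclass

@dataclass
class CompositeUI:
    # Hypothetical model of the classification described above.
    senses: set               # e.g. {"sight", "sound", "touch", "smell", "taste", "balance"}
    blocks_real_world: bool   # True -> virtual reality, False -> standard or augmented
    uses_vr_or_ar_hardware: bool = False

    def label(self) -> str:
        kind = "standard"
        if self.uses_vr_or_ar_hardware:
            kind = "virtual reality" if self.blocks_real_world else "augmented reality"
        return f"{len(self.senses)}-sense ({len(self.senses)}S) {kind} interface"

# Smell-O-Vision as described above: visual display, sound and smell.
print(CompositeUI({"sight", "sound", "smell"}, blocks_real_world=False).label())
# -> 3-sense (3S) standard interface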
Contents

 1 Overview
 2 Terminology
 3 History
o 3.1 1945–1968: Batch interface
o 3.2 1969–present: Command-line user interface
o 3.3 1985: SAA User Interface or Text-Based User Interface
o 3.4 1968–present: Graphical User Interface
 4 Interface design
o 4.1 Principles of quality
o 4.2 Principle of least astonishment
o 4.3 Principle of habit formation
o 4.4 A model of design criteria: User Experience Honeycomb
 5 Types
 6 Gallery
 7 See also
 8 References
 9 External links

Overview

A graphical user interface following the desktop metaphor

The user interface or human–machine interface is the part of the machine that
handles the human–machine interaction. Membrane switches, rubber keypads and
touchscreens are examples of the physical part of the human–machine interface
that we can see and touch.
In complex systems, the human–machine interface is typically computerized. The
term human–computer interface refers to this kind of system. In the context of
computing, the term typically extends as well to the software dedicated to controlling the
physical elements used for human–computer interaction.
The engineering of human–machine interfaces is enhanced by
considering ergonomics (human factors). The corresponding disciplines are human
factors engineering (HFE) and usability engineering (UE), which is part of systems
engineering.
Tools used for incorporating human factors in the interface design are developed
based on knowledge of computer science, such as computer graphics, operating
systems, and programming languages. Nowadays, the expression graphical user
interface is generally used for the human–machine interface on computers, as nearly
all of them now use graphics.[citation needed]
Multimodal interfaces allow users to interact using more than one modality of user
input.[1]

Terminology

A human–machine interface usually involves peripheral hardware for the INPUT and for the OUTPUT.
Often, there is an additional component implemented in software, such as a graphical user interface.

There is a difference between a user interface and an operator interface or a
human–machine interface (HMI).

 The term "user interface" is often used in the context of (personal)
computer systems and electronic devices.
o Where a network of equipment or computers is interlinked
through an MES (Manufacturing Execution System) or host to
display information.
o A human-machine interface (HMI) is typically local to one
machine or piece of equipment, and is the interface method
between the human and the equipment/machine. An operator
interface is the interface method by which multiple pieces of
equipment that are linked by a host control system are
accessed or controlled.[clarification needed]
o The system may expose several user interfaces to serve
different kinds of users. For example, a computerized library
database might provide two user interfaces, one for library
patrons (limited set of functions, optimized for ease of use) and
the other for library personnel (wide set of functions, optimized
for efficiency).[clarification needed]
 The user interface of a mechanical system, a vehicle or
an industrial installation is sometimes referred to as the human–machine
interface (HMI).[2] HMI is a modification of the original term MMI (man-
machine interface).[3] In practice, the abbreviation MMI is still frequently
used[3] although some may claim that MMI stands for something different
now.[citation needed] Another abbreviation is HCI, but it is more commonly used
for human–computer interaction.[3] Other terms used are operator interface
console (OIC) and operator interface terminal (OIT).[4] However it is
abbreviated, the terms refer to the 'layer' that separates a human who is
operating a machine from the machine itself.[3] Without a clean and usable
interface, humans would not be able to interact with information systems.
In science fiction, HMI is sometimes used to refer to what is better described as
a direct neural interface. However, this latter usage is seeing increasing application
in the real-life use of (medical) prostheses—the artificial extension that replaces a
missing body part (e.g., cochlear implants).[5][6]
In some circumstances, computers might observe the user and react according to
their actions without specific commands. A means of tracking parts of the body is
required, and sensors noting the position of the head, direction of gaze and so on
have been used experimentally. This is particularly relevant to immersive interfaces.[7][8]

History
The history of user interfaces can be divided into the following phases according to
the dominant type of user interface:
1945–1968: Batch interface

IBM 029

In the batch era, computing power was extremely scarce and expensive. User
interfaces were rudimentary. Users had to accommodate computers rather than the
other way around; user interfaces were considered overhead, and software was
designed to keep the processor at maximum utilization with as little overhead as
possible.
The input side of the user interfaces for batch machines was mainly punched
cards or equivalent media like paper tape. The output side added line printers to
these media. With the limited exception of the system operator's console, human
beings did not interact with batch machines in real time at all.
Submitting a job to a batch machine involved, first, preparing a deck of punched
cards describing a program and a dataset. Punching the program cards wasn't done
on the computer itself, but on keypunches, specialized typewriter-like machines that
were notoriously bulky, unforgiving, and prone to mechanical failure. The software
interface was similarly unforgiving, with very strict syntaxes meant to be parsed by
the smallest possible compilers and interpreters.

Holes are punched in the card according to a prearranged code transferring the facts from the census
questionnaire into statistics

Once the cards were punched, one would drop them in a job queue and wait.
Eventually, operators would feed the deck to the computer, perhaps
mounting magnetic tapes to supply another dataset or helper software. The job
would generate a printout, containing final results or an abort notice with an attached
error log. Successful runs might also write a result on magnetic tape or generate
some data cards to be used in a later computation.
The turnaround time for a single job often spanned entire days. If one were very
lucky, it might be hours; there was no real-time response. But there were worse fates
than the card queue; some computers required an even more tedious and error-
prone process of toggling in programs in binary code using console switches. The
very earliest machines had to be partly rewired to incorporate program logic into
themselves, using devices known as plugboards.
Early batch systems gave the currently running job the entire computer; program
decks and tapes had to include what we would now think of as operating
system code to talk to I/O devices and do whatever other housekeeping was
needed. Midway through the batch period, after 1957, various groups began to
experiment with so-called “load-and-go” systems. These used a monitor
program which was always resident on the computer. Programs could call the
monitor for services. Another function of the monitor was to do better error checking
on submitted jobs, catching errors earlier and more intelligently and generating more
useful feedback to the users. Thus, monitors represented the first step towards both
operating systems and explicitly designed user interfaces.
1969–present: Command-line user interface
Main article: Command-line interface

Teletype Model 33 ASR

Command-line interfaces (CLIs) evolved from batch monitors connected to the
system console. Their interaction model was a series of request-response
transactions, with requests expressed as textual commands in a specialized
vocabulary. Latency was far lower than for batch systems, dropping from days or
hours to seconds. Accordingly, command-line systems allowed the user to change
his or her mind about later stages of the transaction in response to real-time or near-
real-time feedback on earlier results. Software could be exploratory and interactive in
ways not possible before. But these interfaces still placed a relatively
heavy mnemonic load on the user, requiring a serious investment of effort and
learning time to master.[9]
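
The request-response model described above amounts to a read-evaluate-print loop. The Python sketch below is a minimal, hypothetical illustration of that interaction pattern; the command vocabulary (date, echo, quit) and the dispatch helper are assumptions made for this example, not the commands of any historical system.

# Minimal sketch of the command-line interaction model: read a textual command,
# match it against a small fixed vocabulary, and print a response immediately.
import datetime

def dispatch(line: str) -> str:
    verb, _, args = line.strip().partition(" ")
    if verb == "date":
        return datetime.datetime.now().isoformat(timespec="seconds")
    if verb == "echo":
        return args
    return f"?{verb}"               # terse error message, in the spirit of early CLIs

def repl() -> None:
    while True:
        try:
            line = input("$ ")      # request: a textual command
        except EOFError:
            break
        if line.strip() == "quit":
            break
        print(dispatch(line))       # response: real-time or near-real-time feedback

if __name__ == "__main__":
    repl()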
The earliest command-line systems combined teleprinters with computers, adapting
a mature technology that had proven effective for mediating the transfer of
information over wires between human beings. Teleprinters had originally been
invented as devices for automatic telegraph transmission and reception; they had a
history going back to 1902 and had already become well-established in newsrooms
and elsewhere by 1920. In reusing them, economy was certainly a consideration, but
psychology and the Rule of Least Surprise mattered as well; teleprinters provided a
point of interface with the system that was familiar to many engineers and users.
DEC VT100 terminal

The widespread adoption of video-display terminals (VDTs) in the mid-1970s
ushered in the second phase of command-line systems. These cut latency further,
because characters could be thrown on the phosphor dots of a screen more quickly
than a printer head or carriage could move. They helped quell conservative resistance
to interactive programming by cutting ink and paper consumables out of the cost
picture, and were to the first TV generation of the late 1950s and 60s even more
iconic and comfortable than teleprinters had been to the computer pioneers of the
1940s.
Just as importantly, the existence of an accessible screen — a two-dimensional
display of text that could be rapidly and reversibly modified — made it economical for
software designers to deploy interfaces that could be described as visual rather than
textual. The pioneering applications of this kind were computer games and text
editors; close descendants of some of the earliest specimens, such as rogue(6),
and vi(1), are still a live part of Unix tradition.
1985: SAA User Interface or Text-Based User Interface
In 1985, with the beginning of Microsoft Windows and other graphical user
interfaces, IBM created what is called the Systems Application Architecture (SAA)
standard, which includes the Common User Access (CUA) derivative. CUA
successfully created what we know and use today in Windows, and most of the more
recent DOS or Windows console applications use that standard as well.
It defined that a pulldown menu system should be at the top of the screen, a status
bar at the bottom, and that shortcut keys should stay the same for all common
functionality (F2 to Open, for example, would work in all applications that followed
the SAA standard). This greatly increased the speed at which users could learn an
application, so it caught on quickly and became an industry standard.[10]
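
To make that layout convention concrete, the following Python curses sketch draws a pulldown-style menu bar on the top row and a status bar on the bottom row, and handles F2 the way an SAA-conforming application would map it to Open. It is a hypothetical illustration of the convention, not IBM's CUA specification or code; the menu titles and key bindings shown are assumptions.

# Rough sketch of a CUA-style text screen: menu bar on the top row, status bar
# on the bottom row, F2 handled the same way it would be in every application.
import curses

def main(stdscr):
    curses.curs_set(0)                      # hide the cursor
    height, width = stdscr.getmaxyx()
    stdscr.addstr(0, 0, " File  Edit  View  Help ".ljust(width - 1), curses.A_REVERSE)
    stdscr.addstr(height - 1, 0, " F2=Open  Esc=Exit ".ljust(width - 1), curses.A_REVERSE)
    stdscr.addstr(2, 2, "Work area")
    while True:
        key = stdscr.getch()
        if key == curses.KEY_F2:            # the shared "Open" shortcut
            stdscr.addstr(4, 2, "An Open dialog would appear here")
        elif key == 27:                     # Esc key
            break

curses.wrapper(main)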
1968–present: Graphical User Interface
AMX Desk made a basic WIMP GUI

Linotype WYSIWYG 2000, 1989

 1968 – Douglas Engelbart demonstrated NLS, a system which uses
a mouse, pointers, hypertext, and multiple windows.[11]
 1970 – Researchers at Xerox Palo Alto Research Center (many from SRI)
develop the WIMP paradigm (Windows, Icons, Menus, Pointers)[11]
 1973 – Xerox Alto: commercial failure due to expense, poor user interface,
and lack of programs[11]
 1979 – Steve Jobs and other Apple engineers visit Xerox PARC.
Though Pirates of Silicon Valley dramatizes the events, Apple had already
been working on developing a GUI, such as the Macintosh and Lisa
projects, before the visit.[12][13]
 1981 – Xerox Star: focus on WYSIWYG. Commercial failure (25K sold)
due to cost ($16K each), performance (minutes to save a file, a couple of
hours to recover from a crash), and poor marketing
 1982 – Rob Pike and others at Bell Labs designed the Blit, which was released
in 1984 by AT&T and Teletype as the DMD 5620 terminal.
 1984 – Apple Macintosh popularizes the GUI. Its Super Bowl
commercial, shown twice, was the most expensive commercial ever made
at that time
 1984 – MIT's X Window System: hardware-independent platform and
networking protocol for developing GUIs on UNIX-like systems
 1985 – Windows 1.0 – provided a GUI for MS-DOS. No overlapping
windows (tiled instead).
 1985 – Microsoft and IBM start work on OS/2 meant to eventually replace
MS-DOS and Windows
 1986 – Apple threatens to sue Digital Research because their GUI
desktop looked too much like Apple's Mac.
 1987 – Windows 2.0 – Overlapping and resizable windows, keyboard and
mouse enhancements
 1987 – Macintosh II: first full-color Mac
 1988 – OS/2 1.10 Standard Edition (SE) has a GUI written by Microsoft
and looks a lot like Windows 2
