
User & Usability

PDF generated using the open source mwlib toolkit. PDF generated at: Sun, 13 Jan 2013 14:41:17 UTC

Nielsen Norman Group
Donald Norman
Jakob Nielsen (usability consultant)
Bruce Tognazzini
John M. Carroll (information scientist)

Design & Cognition

Design
Design elements and principles
Cognitive science

User Experience engineering [Vol 1 to 6] + [Special A to C]

User experience
User experience design
User experience evaluation

1. Usability or User engineering

Usability engineering
Usability
Usability testing
Usability goals
Focus group
Cognitive walkthrough
Heuristic evaluation
RITE Method
Think aloud protocol

2. User Interface engineering

User interface design
Interface design
Human interface guidelines

3. User Interaction engineering

Interaction design
Human–computer interaction
Outline of human–computer interaction
Human-machine interface
Principles of user interface design
User-centered design
Use-centered design
Activity theory
Participatory design


4. Visual engineering
Communication design

5. Information Architecture engineering

Information architecture
Infographic

6. Accessibility

Web design
Web design
Web usability
Web accessibility
Website architecture
Web navigation
Web typography
Website wireframe
Web colors
Web interoperability
Web modeling
Web template

Web Analytics & Optimization

Web analytics
List of web analytics software
Google Analytics
Google Website Optimizer
Performance indicator

Session replay
Heat map
Click-through rate
Conversion rate
Landing page
Landing page optimization
A/B testing
Multivariate testing
Multivariate landing page optimization
Purchase funnel
Customer lifecycle management
Customer lifetime value
Predictive analytics


Consumer behaviour
Consumer confusion

Special A: Human factors and ergonomics

Human factors and ergonomics
Iterative design
User analysis
Work sampling
Kansei engineering
Systems analysis
Meta-analysis

Special B
Eye tracking
Voice user interface

Special C: Human–Computer Interaction

Computer accessibility
Adaptive autonomy
Affordance
Banner blindness
Computer user satisfaction
Contextual inquiry
Contextual design

Gender HCI
Gulf of evaluation
Gulf of execution
Habituation
Human action cycle
Human interface device
User interface
Interaction
Interaction technique
Look and feel
Mode (computer interface)
Physiological interaction
Principle of least astonishment
Progressive disclosure
Sonic interaction design
Thanatosensitivity
Transparency (human–computer interaction)
User (computing)
luser
Human–computer information retrieval
Information retrieval
Software agent
Universal usability
Knowbility
Usage-centered design
Activity-centered design
Bodystorming
Pictive
Rapid prototyping
Task analysis
Scenario (computing)
Wizard of Oz experiment
Hick's law
Fitts's law
Steering law
GOMS
Keystroke-level model


Minimalism (technical communication)
Structured writing
Topic-based authoring
Information mapping
Darwin Information Typing Architecture


Article Sources and Contributors
Image Sources, Licenses and Contributors

Article Licenses
License

Nielsen Norman Group
The Nielsen Norman Group (NN/g) is a computer user interface and user experience consulting firm, founded in 1998 by Jakob Nielsen, Donald Norman and Bruce Tognazzini. They describe themselves as providing "Evidence-Based User Experience Research, Training, and Consulting". The three founding partners are highly regarded in the field of user interface design. Their work includes an analysis of the interface of Microsoft's Windows 8 operating system.[1][2] They have also analyzed the user experience of mobile devices and intranets.[3][4][5]

[1] http://www.inquisitr.com/407450/windows-8-start-screen-interface-analyzed-by-nielsen-norman-group/
[2] http://www.techworld.com.au/article/442563/windows_8_ui_strategic_mistake_argues_design_guru/
[3] http://internet2go.net/news/data-and-forecasts/nielsen-norman-group-says-mobile-user-experience-stinks
[4] http://www.informationweek.com/software/information-management/nielsen-norman-group-evaluates-intranet/229210670
[5] http://articles.cnn.com/2011-05-27/tech/ipad.usability.gahran_1_ipad-apps-ipad-users-web-sites?_s=PM:TECH

External links
NN/g website

Donald Norman

Don Norman

Norman at the About, With and For conference in 2005

Born: December 25, 1935
Residence: United States
Nationality: American
Fields: Cognitive science, Usability engineering
Institutions: Nielsen Norman Group, Korea Advanced Institute of Science and Technology
Alma mater: MIT, University of Pennsylvania
Known for: The Design of Everyday Things, Cognitive ergonomics, User-centered design

Donald Arthur Norman (born December 25, 1935) is an academic in the fields of cognitive science, design and usability engineering, and a co-founder of and consultant with the Nielsen Norman Group. He is the author of the book The Design of Everyday Things. Much of Norman's work involves the advocacy of user-centered design. His books all have the underlying purpose of furthering the field of design, from doors to computers. Norman has recently taken a controversial stance in saying that the design research community has had little impact on the innovation of products, and that whereas academics can help in refining existing products, it is technologists who accomplish the breakthroughs.[1] Norman splits his time between co-directing the dual-degree MBA and Engineering program at Northwestern University and consulting with the Nielsen Norman Group. Norman announced that he would no longer teach full-time after the 2009–2010 academic year.[2] He is an active Distinguished Visiting Professor at the Korea Advanced Institute of Science and Technology, where he spends two months a year teaching. He also holds the title of Professor Emeritus of Cognitive Science at the University of California, San Diego.[3] He serves on numerous educational, private, and public sector advisory boards, including the editorial board of the Encyclopædia Britannica.


Early academics
In 1957 Norman received a Bachelor of Science in Electrical Engineering and Computer Science (EECS) from MIT. He continued his studies until 1962, earning an M.S. in EECS and a Doctorate of Philosophy in Mathematical Psychology from the University of Pennsylvania. After graduating, Norman took up a postdoctoral fellowship at the Center for Cognitive Studies at Harvard University and within a year became a Lecturer. After four years with the Center, Norman took a position as an Associate Professor in the Psychology Department at the University of California, San Diego (UCSD). Norman applied his training as an engineer and computer scientist, and as an experimental and mathematical psychologist, to the emerging discipline of cognitive science. He eventually became founding chair of the Department of Cognitive Science and chair of the Department of Psychology. At UCSD, Norman was a founder of the Institute for Cognitive Science and one of the organizers of the Cognitive Science Society (along with Roger Schank, Allan Collins, and others), which held its first meeting on the UCSD campus in 1979.[3] Together with psychologist Tim Shallice, Norman proposed a framework of attentional control of executive functioning. One of the components of the Norman–Shallice model is the supervisory attentional system.[4]

Cognitive engineering career

Norman made the transition from cognitive science to cognitive engineering by entering the field as a consultant and writer. The article "The Trouble with Unix" in Datamation catapulted him to a position of prominence in the computer world.[5] Soon after, his career took off outside of academia, although he remained active at UCSD until 1993. Norman continued his work to further human-centered design by serving on numerous university and government advisory boards, such as that of the Defense Advanced Research Projects Agency (DARPA). He currently serves on numerous committees and advisory boards, including those of Motorola, the Toyota Information Technology Center, the TED Conference, Panasonic, the Encyclopædia Britannica and many more. Norman published several important books during his time at UCSD, one of which, User Centered System Design, obliquely referred to the university in the initials of its title. In 1995, Norman left UCSD to join Apple Computer, initially as an Apple Fellow serving as a User Experience Architect (the first use of the phrase "User Experience" in a job title), and then as Vice President of the Advanced Technology Group. He later worked for Hewlett-Packard before joining with Jakob Nielsen to form the Nielsen Norman Group in 1998. He returned to academia as a professor of computer science at Northwestern University, where he is co-director of the Segal Design Institute. Norman has received many awards for his work. He received an honorary degree from the University of Padua in Padua, Italy. In 2001 he was inducted as a Fellow of the Association for Computing Machinery, and in 2006 he received the Benjamin Franklin Medal in Computer and Cognitive Science.[6]


User-centered design
In his book The Design of Everyday Things, originally titled The Psychology of Everyday Things, Norman describes the psychology behind what he deems good and bad design through case studies, and proposes design principles. He exalts the importance of design in our everyday lives and the consequences of errors caused by bad design. In the book, Norman uses the term "user-centered design" to describe design based on the needs of the user, leaving aside what he deems secondary issues like aesthetics. User-centered design involves simplifying the structure of tasks, making things visible, getting the mapping right, exploiting the powers of constraint, designing for error, explaining affordances, and the seven stages of action. Other topics of the book include:

The Psychopathology of Everyday Things
The Psychology of Everyday Actions
Knowledge in the Head and in the World
Knowing What to Do
To Err Is Human
The Design Challenge

"Academics get paid for being clever, not for being right."[7]

Partial bibliography
Human information processing: An introduction to psychology (1972), in collaboration with Peter H. Lindsay (first author)[8]
Memory and attention (1977)
Learning and memory (1982)
Direct manipulation interfaces (1985), in collaboration with E. L. Hutchins (first author) and J. D. Hollan
User Centered System Design: New Perspectives on Human-Computer Interaction (1986), editor, in collaboration with Stephen Draper
The Design of Everyday Things (1988, originally under the title The Psychology of Everyday Things; reprinted 2002)
Turn Signals Are the Facial Expressions of Automobiles (1992)
Things That Make Us Smart (1993)
The Invisible Computer (1998)
Emotional Design (2004)
The Design of Future Things (2007)
Living with Complexity (2010)
Defending Human Attributes in the Age of the Machine (1994), CD-ROM by the Voyager Company combining The Design of Everyday Things, Turn Signals Are the Facial Expressions of Automobiles, Things That Make Us Smart, and various technical reports


[1] Norman, Donald. "Technology First, Needs Last" (http://jnd.org/dn.mss/technology_first_needs_last.html). Retrieved January 26, 2010.
[2] Norman, Donald. "My change of status" (http://jnd.org). Retrieved January 26, 2010.
[3] Norman, Donald. "Donald Norman Curriculum Vitae" (http://jnd.org/docs/Don_Norman_Academic_Vita.pdf). Retrieved January 26, 2010.
[4] Friedenberg, Jay; Gordon Silverman (2010). Cognitive Science: An Introduction to the Study of Mind. United States of America: SAGE Publications. pp. 180–182. ISBN 978-1-4129-7761-6.
[5] Norman, Donald. "The trouble with UNIX: The user interface is horrid." Datamation, 27, No. 12, 139–150.
[6] "Donald A. Norman" (http://www.fi.edu/tfi/exhibits/bower/06/ccscience.html). Laureate Database. The Franklin Institute Awards. Retrieved 2011-06-24.
[7] "Annual conference" (http://books.google.com/books?id=FUkXAQAAMAAJ&q="academics+get+paid+for+being+clever"&dq="academics+get+paid+for+being+clever"&hl=en&ei=xGaDTarAA5DmsQPLo52IAg&sa=X&oi=book_result&ct=result&resnum=2&ved=0CDAQ6AEwAQ). Google Books. 2010-12-21. Retrieved 2011-06-24.
[8] Review of Human Information Processing: An Introduction to Psychology by Peter H. Lindsay and Donald A. Norman. Reviewed by Gregg C. Oden and Lola L. Lopes. The American Journal of Psychology, Vol. 110, No. 4 (Winter 1997), pp. 635–641. doi:10.2307/1423414. At JSTOR.

External links
Official website
Publications by Donald Norman
List of Donald Norman articles
Donald Norman at Userati
Lecture by Donald Norman on "The Design of Future Things" (Stanford University, February 9, 2007)

Jakob Nielsen (usability consultant)


Jakob Nielsen

Born: October 5, 1957, Copenhagen, Denmark
Occupation: Web usability consultant

Jakob Nielsen (born 1957 in Copenhagen, Denmark) is a leading web usability consultant.[1] He holds a Ph.D. in human–computer interaction from the Technical University of Denmark in Copenhagen.

Early life and background

Nielsen's earlier affiliations include Bellcore (now Telcordia Technologies) (Bell Communications Research), the Technical University of Denmark, and the IBM User Interface Institute at the Thomas J. Watson Research Center.

Sun Microsystems
From 1994 to 1998, he was a Sun Microsystems Distinguished Engineer. He was hired to make heavy-duty enterprise software easier to use, since large-scale applications had been the focus of most of his projects at the phone company and at IBM. But the job definition of a Distinguished Engineer is "you're supposed to be the world's leading expert in your field, so you figure out what would be most important for the company for you to work on", and Nielsen ended up spending most of his time at Sun defining the emerging field of web usability. He was the usability lead for several design rounds of Sun's website and intranet (SunWeb), including the original SunWeb design in 1994.

Current activities
Nielsen is on the editorial board of Morgan Kaufmann Publishers' book series in Interactive Technologies. Nielsen continues to write a fortnightly newsletter, Alertbox, on web design matters and has published several books on the subject of web design. After his regular articles on his Web site about usability research attracted media attention, he subsequently co-founded usability consulting company Nielsen Norman Group with fellow usability expert Donald Norman.


Nielsen founded the "discount usability engineering" movement for fast and cheap improvements of user interfaces and has invented several usability methods, including heuristic evaluation. He holds 79 United States patents, mainly on ways of making the Web easier to use. Nielsen gave his name to Nielsen's Law, in which he stated that network connection speeds for high-end home users would increase 50% per year, or double every 21 months. As a corollary, he noted that, since this growth rate is slower than that predicted by Moore's Law of processor power, user experience would remain bandwidth-bound.[2] Nielsen has also defined the five quality components of his "Usability Goals", which are:[3]

Learnability
Efficiency
Memorability
Errors (as in a low error rate)
Satisfaction
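The arithmetic behind Nielsen's Law can be checked directly: 50% annual growth implies a doubling time of ln 2 / ln 1.5 years, close to the 21 months stated. The following is a quick illustrative calculation, not part of the original article:

```python
import math

# Nielsen's Law: high-end home connection speed grows ~50% per year.
annual_growth = 1.5

# Doubling time: solve 1.5^t = 2  =>  t = ln 2 / ln 1.5 (in years)
doubling_years = math.log(2) / math.log(annual_growth)
doubling_months = doubling_years * 12
print(round(doubling_months, 1))  # 20.5, i.e. roughly the 21 months stated

# Compare with an 18-month doubling (a common reading of Moore's Law):
moore_growth = 2 ** (12 / 18)  # annual growth factor implied by 18-month doubling
print(moore_growth > annual_growth)  # True: bandwidth growth lags processor growth
```

The second comparison is the corollary in the text: because the bandwidth growth factor (1.5) is below the processor growth factor (about 1.59), user experience stays bandwidth-bound.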

Nielsen has been criticized by some graphic designers[4][5] for failing to balance the importance of other user experience considerations such as typography, readability, visual cues for hierarchy and importance, and eye appeal.

His published books include:

Hypertext and Hypermedia (1990) (ISBN 0-12-518410-7)
Usability Engineering (1993) (ISBN 0-12-518406-9)
Designing Web Usability: The Practice of Simplicity (1999) (ISBN 1-56205-810-X)
E-Commerce User Experience (2001) (ISBN 0-970-60720-2) (coauthors: Rolf Molich, Carolyn Snyder, Susan Farrell)
Homepage Usability: 50 Websites Deconstructed (2001) (ISBN 0-7357-1102-X) (coauthor: Marie Tahir)
Prioritizing Web Usability (2006) (ISBN 0-321-35031-6) (coauthor: Hoa Loranger)
Eyetracking Web Usability (2008) (ISBN 0-321-49836-4) (coauthor: Kara Pernice)

Nielsen publishes a biweekly column, Alertbox (ISSN 1548-5552), on current issues in usability. A list of Jakob Nielsen's research publications is maintained online.[6]

[1] "Study Shows People Ignore Generic Photos Online" (http://bits.blogs.nytimes.com/2010/11/02/study-shows-people-ignore-generic-photos-online/?src=me&ref=technology). New York Times, November 2, 2010.
[2] Nielsen, Jakob (1998-04-05). "Nielsen's Law of Internet Bandwidth" (http://www.useit.com/alertbox/980405.html). Retrieved 2008-02-27.
[3] Nielsen, Jakob (1994). Usability Engineering. Morgan Kaufmann Publishers. ISBN 0-12-518406-9.
[4] "The Backlash against Jakob Nielsen and What it Teaches Us" (http://www.usabilitynews.com/news/article603.asp). Usability News, July 31, 2002.
[5] Curt Cloninger, "Usability experts are from Mars, graphic designers are from Venus" (http://www.alistapart.com/articles/marsvenus/), July 28, 2000.
[6] http://www.interaction-design.org/references/authors/jakob_nielsen.html


External links

Jakob Nielsen's website
List of articles by Jakob Nielsen
Jakob Nielsen Interview
Jakob Nielsen Profile/Criticism

Bruce Tognazzini

Bruce Tognazzini

Born: Bruce Tognazzini, March 26, 1945, San Francisco, California, USA
Residence: A Country Coach motorhome cruising the USA
Nationality: American
Occupation: Principal, Nielsen Norman Group
Spouse(s): Julie F. Moran, MD (1986–present)

Bruce "Tog" Tognazzini (born 1945) is a usability consultant in partnership with Donald Norman and Jakob Nielsen in the Nielsen Norman Group, which specializes in human–computer interaction. He was with Apple Computer for fourteen years, then with Sun Microsystems for four years, then WebMD for another four years. He has written two books, Tog on Interface and Tog on Software Design, published by Addison-Wesley, and he publishes the webzine Asktog, with the tagline "Interaction Design Solutions for the Real World".

Tog (as he is widely known in computer circles) built his first electro-mechanical computer in 1957, landing a job in 1959 working with the world's first check-reading computer, NCR's ERMA (Electronic Recording Method of Accounting), at Bank of America in San Francisco. Tog was an early and influential employee of Apple Computer, there from 1978 to 1992. In June 1978, Steve Jobs, having seen one of his early programs, The Great American Probability Machine, had Jef Raskin hire him as Apple's first applications software engineer. He is listed on the back of his book Tog on Interface (Addison-Wesley, 1991) as "Apple Employee #66" (the same employee number he held later at WebMD). In his early days at Apple, while developing Apple's first human interface for the Apple II computer, he published Super Hi-Res Chess, a novelty program for the Apple II that, despite its name, did not play chess or have any hi-res (high-resolution) graphics; instead, it seemed to crash to the Applesoft BASIC prompt with an error message, but was actually a parody of Apple's BASIC command line interface that seemingly took over control of one's computer, refusing to give it back until the magic word was discovered.[1] His extensive work in user-interface testing and design, including publishing the first edition, in September 1978, and seven subsequent editions of The Apple Human Interface Guidelines, played an important role in the direction of Apple's product line from the early days of Apple into the 1990s. (Steve Smith and Chris Espinosa also played a key role, incorporating the initial material on the Lisa and Macintosh computers in the fourth and fifth editions in the early 1980s.)[2] He and his partner, John David Eisenberg, wrote Apple Presents...Apple, the disk that taught new Apple II owners how to use the computer. This disk became a self-fulfilling prophecy: at the time of its authoring, there was no standard Apple II interface.
Because new owners were all being taught Tog and David's interface, developers soon began writing to it, aided by Tog's Apple Human Interface Guidelines, and reinforced by AppleWorks, a suite of productivity applications for the Apple II into which Tog had also incorporated the same interface.[2] Others often report him as one of the fathers of the Macintosh interface, a claim he has always been careful to refute. Although he did consult with Jef Raskin in the early days of the Macintosh, during the later, critical development period of the Mac he was assigned to scale down the Lisa interface, not for the Mac, but for the Apple II. Although he and James Batson were able to develop a viable interface for the Apple II that matched the mousing speed of the much faster Macintosh, the Apple executive staff elected not to ship a mouse with the Apple II for fear of cannibalizing Macintosh sales, blunting its success. It was only after Steve Jobs's early departure from Apple, in 1985, that Tog came to oversee the interface for both machines. During this period, Tog was responsible for the design of the Macintosh's hierarchical menus and invented time-out dialog boxes, which, after a visible countdown, carry out the default activity without the user explicitly clicking. He also invented the "package" illusion later used by Apple for Macintosh applications: applications, along with all their supporting files, reside inside a "package" that, in turn, appears to be the application itself, appearing as an application icon, not as a folder. This illusion makes possible the simple drag-and-drop installation and deletion of Mac applications. While working at Sun, in 1992 and 1993, he produced the Starfire video prototype in order to give an idea of a usability-centered vision of the office of the future. The video predicted the rise of a new technology that would become known as the World Wide Web. Popular Science reported, in March 2009, that Microsoft had just produced a new video showing life in the year 2019: "The 2019 Microsoft details with this video is almost identical to the 2004 predicted in this video produced by Sun Microsystems in 1992."[3] While at Sun, Tog also filed for 58 US patents, with 57 issued, in the areas of aviation safety, GPS, and human-computer interaction. Among them is US Patent 6278660, the time-zone-tracking wristwatch with built-in GPS and simple time-zone maps that sets itself using the GPS satellites' atomic clocks and re-sets itself automatically whenever crossing into a new time zone.[4] In 2000, after his four-year stint at WebMD, Tog joined his colleagues as the third principal at the Nielsen Norman Group, along with Jakob Nielsen and Don Norman.
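The time-out dialog pattern credited to Tognazzini above, a visible countdown after which the default action fires without an explicit click, can be sketched in a few lines. This is an illustrative simulation with hypothetical names, not Apple's implementation:

```python
# Sketch of a time-out dialog: count down visibly, then fall back to the
# default choice if the user never responds. All names are illustrative.
def timeout_dialog(prompt, default, seconds, poll_for_choice):
    """poll_for_choice() returns the user's choice, or None if no input yet."""
    for remaining in range(seconds, 0, -1):
        print(f"{prompt} [{default} in {remaining}s]")   # the visible countdown
        choice = poll_for_choice()
        if choice is not None:
            return choice          # user answered before the countdown ran out
    return default                 # countdown expired: take the default action

# With a user who never responds, the default activity is carried out:
result = timeout_dialog("Save changes?", default="Save", seconds=3,
                        poll_for_choice=lambda: None)
print(result)  # Save
```

Injecting `poll_for_choice` keeps the sketch deterministic and testable; a real dialog would poll an event queue and redraw on a timer instead.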


Bibliography

The Apple Human Interface Guidelines (1987) ISBN 0-201-17753-6 (uncredited; author is Apple Computer, Inc.)
Tog on Interface (1992) ISBN 0-201-60842-1
Tog on Software Design (1995) ISBN 0-201-48917-1

[1] "Interview: Bruce Tognazzini," Elizabeth Dykstra-Erickson, Interactions, vol. 7, no. 2 (2000), pp. 41–46, ACM.
[2] "Origins of the Apple Human Interface" (http://www.computerhistory.org/events/lectures/appleint_10281997/appleint_xscript.shtml), transcript of a talk by Larry Tesler & Chris Espinosa, Oct 28, 1997, Computer History Museum.
[3] http://www.popsci.com/scitech/article/2009-03/future-isnt-what-it-used-be
[4] "Time-zone-tracking timepiece - Patent 6278660" (http://www.freepatentsonline.com/6278660.html). Retrieved 2010-04-30.

External links
Ask Tog - Bruce Tognazzini's official site.
The Starfire Home Page, including link to download film

John M. Carroll (information scientist)



John M. "Jack" Carroll is currently Edward M. Frymoyer Professor of Information Sciences and Technology at Penn State. Carroll is perhaps best known for his theory of minimalism in computer instruction, training, and technical communication. He was awarded the ACM SIGCHI Lifetime Achievement Award in 2003 for his contributions to the field of human-computer interaction (HCI). Carroll was a founder of the study of human-computer interaction, one of the nine core areas of computer science identified by the Association for Computing Machinery (ACM). He served on the program committee of the 1982 National Bureau of Standards Conference on the Human Factors of Computing Systems, which in effect inaugurated the field and was the direct predecessor of the field's flagship conference series, the ACM CHI Conferences. Through the past two decades, Carroll has been involved in the development of the field of human-computer interaction. In 1984 he founded the User Interface Institute at the IBM Thomas J. Watson Research Center. In 1994, he joined Virginia Tech as Department Head of Computer Science to establish an HCI focus in research and teaching at the university's Center for Human-Computer Interaction. He was a founding associate editor of the field's premier journal, ACM Transactions on Computer-Human Interaction, and a founding member of the editorial boards of Transactions on Information Systems, Behavior and Information Technology, and the International Journal of Human-Computer Interaction.

Carroll, John M. (1990). The Nurnberg Funnel: Designing Minimalist Instruction for Practical Computer Skill. MIT Press.
Carroll, John M. (1998). Minimalism Beyond the Nurnberg Funnel. MIT Press.
Carroll, John M. (2000). Making Use: Scenario-Based Design of Human-Computer Interactions. MIT Press.

SIGCHI Awards: "SIGCHI Award Recipients (1998-2008)"

Home page of John Carroll at Penn State [1]
List of publications by John Carroll [2]

[1] http://jcarroll.ist.psu.edu/
[2] http://www.informatik.uni-trier.de/~ley/db/indices/a-tree/c/Carroll:John_M=.html


Design & Cognition

Design is the creation of a plan or convention for the construction of an object or a system (as in architectural blueprints, engineering drawings, business processes, circuit diagrams and sewing patterns).[1] Design has different connotations in different fields (see design disciplines below). In some cases the direct construction of an object (as in pottery, engineering, management, cowboy coding and graphic design) is also considered to be design. More formally, design has been defined as follows:

(noun) a specification of an object, manifested by an agent, intended to accomplish goals, in a particular environment, using a set of primitive components, satisfying a set of requirements, subject to constraints;
(verb, transitive) to create a design, in an environment (where the designer operates)[2]

Another definition for design is a roadmap or a strategic approach for someone to achieve a unique expectation. It defines the specifications, plans, parameters, costs, activities, processes, and how and what to do within legal, political, social, environmental, safety and economic constraints in achieving that objective.[3] Here, a "specification" can be manifested as either a plan or a finished product, and "primitives" are the elements from which the design object is composed. With such a broad denotation, there is no universal language or unifying institution for designers of all disciplines. This allows for many differing philosophies and approaches toward the subject (see Philosophies and studies of design, below).

[Image caption: Design, when applied to fashion, includes considering aesthetics as well as function in the final form.]
[Image caption: All Saints Chapel in the Cathedral Basilica of St. Louis by Louis Comfort Tiffany. The building structure and decorations are both examples of design.]
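The formal noun definition of design above names a fixed set of parts: an agent, goals, an environment, primitives, requirements, and constraints. That structure can be mirrored in a small data type. This is an illustrative sketch; the field names and the example are ours, not the cited authors':

```python
from dataclasses import dataclass, field

@dataclass
class Design:
    """One reading of the formal definition: a specification of an object."""
    agent: str                    # who manifests the specification
    goals: list[str]              # what the object is intended to accomplish
    environment: str              # where the object must operate
    primitives: list[str]         # components the object is composed from
    requirements: list[str]       # properties the design must satisfy
    constraints: list[str] = field(default_factory=list)  # limits it is subject to

# A hypothetical everyday example:
teapot = Design(
    agent="industrial designer",
    goals=["pour tea without dripping"],
    environment="domestic kitchen",
    primitives=["ceramic body", "spout", "handle", "lid"],
    requirements=["heat-safe handle"],
    constraints=["unit cost under target"],
)
print(teapot.goals[0])
```

The point of the sketch is only that the definition is structured, not free-form: every clause of the noun definition maps to one field.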

The person designing is called a designer, which is also a term used for people who work professionally in one of the various design areas, usually also specifying which area is being dealt with (such as a fashion designer, concept designer or web designer). A designer's sequence of activities is called a design process. The scientific study of design is called design science.[4][5][6] Designing often necessitates considering the aesthetic, functional, economic and sociopolitical dimensions of both the design object and the design process. It may involve considerable research, thought, modeling, interactive adjustment, and re-design.[7] Meanwhile, diverse kinds of objects may be designed, including clothing, graphical user interfaces, skyscrapers, corporate identities, business processes and even methods of designing.[8]


Design as a process
Substantial disagreement exists concerning how designers in many fields, whether amateur or professional, alone or in teams, produce designs. Dorst and Dijkhuis argued that "there are many ways of describing design processes" and discussed "two basic and fundamentally different ways",[9] both of which have several names. The prevailing view has been called "The Rational Model",[10] "Technical Problem Solving"[11] and "The Reason-Centric Perspective".[12] The alternative view has been called "Reflection-in-Action",[11] "co-evolution"[13] and "The Action-Centric Perspective".[12]

The Rational Model

The Rational Model was developed independently by Simon[14] and by Pahl and Beitz.[15] It posits that:

1. designers attempt to optimize a design candidate for known constraints and objectives,
2. the design process is plan-driven,
3. the design process is understood in terms of a discrete sequence of stages.

The Rational Model is based on a rationalist philosophy[10] and underlies the Waterfall Model,[16] the Systems Development Life Cycle[17] and much of the engineering design literature.[18] According to the rationalist philosophy, design is informed by research and knowledge in a predictable and controlled manner. Technical rationality is at the center of the process.[7]

Example sequence of stages

Typical stages consistent with The Rational Model include the following.

Pre-production design
- Design brief or parti pris: an early (often the beginning) statement of design goals
- Analysis: analysis of current design goals
- Research: investigating similar design solutions in the field or related topics
- Specification: specifying requirements of a design solution for a product (product design specification)[19] or service
- Problem solving: conceptualizing and documenting design solutions
- Presentation: presenting design solutions

Design during production
- Development: continuation and improvement of a designed solution
- Testing: in situ testing of a designed solution

Post-production design feedback for future designs
- Implementation: introducing the designed solution into the environment
- Evaluation and conclusion: summary of process and results, including constructive criticism and suggestions for future improvements

- Redesign: any or all stages in the design process repeated (with corrections made) at any time before, during, or after production

Each stage has many associated best practices.[20]

Criticism of The Rational Model

The Rational Model has been widely criticized on two primary grounds:

1. Designers do not work this way: extensive empirical evidence has demonstrated that designers do not act as the rational model suggests.[21]
2. Unrealistic assumptions: goals are often unknown when a design project begins, and the requirements and constraints continue to change.[22]
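The "discrete sequence of stages" that the Rational Model posits can be made concrete with a toy stage runner. The stage names follow the list above; the runner itself is our own illustrative sketch, not something from the design literature:

```python
# The Rational Model treats design as a plan-driven, discrete sequence of
# stages. A trivial runner that enforces that ordering (illustrative only):
STAGES = [
    "design brief", "analysis", "research", "specification",
    "problem solving", "presentation",          # pre-production design
    "development", "testing",                   # design during production
    "implementation", "evaluation",             # post-production feedback
]

def run_rational_model(execute_stage):
    """Run every stage exactly once, strictly in the planned order."""
    results = []
    for stage in STAGES:
        results.append(execute_stage(stage))
    return results

log = run_rational_model(lambda stage: f"completed {stage}")
print(log[0])   # completed design brief
print(log[-1])  # completed evaluation
```

The rigidity is the point: the criticisms above (designers do not work this way; goals change mid-project) are exactly the observation that real design activity does not fit a fixed `for` loop over stages.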


The Action-Centric Model

The Action-Centric Perspective is a label given to a collection of interrelated concepts which are antithetical to The Rational Model.[12] It posits that:
1. designers use creativity and emotion to generate design candidates,
2. the design process is improvised,
3. no universal sequence of stages is apparent; analysis, design and implementation are contemporaneous and inextricably linked.[12]
The Action-Centric Perspective is based on an empiricist philosophy and is broadly consistent with the Agile approach[23] and amethodical development.[24] Substantial empirical evidence supports the veracity of this perspective in describing the actions of real designers.[21] Like the Rational Model, the Action-Centric Model sees design as informed by research and knowledge. However, research and knowledge are brought into the design process through the judgment and common sense of designers, "thinking on their feet", more than through the predictable and controlled process stipulated by the Rational Model. Designers' context-dependent experience and professional judgment take center stage more than technical rationality.[7]

Descriptions of design activities

At least two views of design activity are consistent with the Action-Centric Perspective. Both involve three basic activities. In the Reflection-in-Action paradigm, designers alternate between "framing", "making moves", and "evaluating moves". "Framing" refers to conceptualizing the problem, i.e., defining goals and objectives. A "move" is a tentative design decision. The evaluation process may lead to further moves in the design.[11] In the Sensemaking-Coevolution-Implementation Framework, designers alternate between its three titular activities. Sensemaking includes both framing and evaluating moves. Implementation is the process of constructing the design object.
Coevolution is "the process where the design agent simultaneously refines its mental picture of the design object based on its mental picture of the context, and vice versa."[25]

Criticism of the Action-Centric Perspective

As this perspective is relatively new, it has not yet encountered much criticism. One possible criticism is that it is less intuitive than The Rational Model.


Design disciplines
Applied arts, Architecture, Engineering design, Fashion design, Game design, Graphic design, Industrial design engineering, Instructional design, Interaction design, Interior design, Landscape architecture, Military design methodology,[26] Product design, Process design, Service design, Software design, Web design, Urban design

Philosophies and studies of design

There are countless philosophies for guiding design, as design values and their accompanying aspects within modern design vary, both between different schools of thought and among practicing designers.[27] Design philosophies usually serve to determine design goals. A design goal may range from solving the least significant individual problem of the smallest element to the most holistic, influential, utopian goals. Design goals usually guide design. However, conflicts over immediate and minor goals may lead to questioning the purpose of design, perhaps in order to set better long-term or ultimate goals.

Philosophies for guiding design

Design philosophies are fundamental guiding principles that dictate how a designer approaches his or her practice. Reflections on material culture and environmental concerns (sustainable design) can guide a design philosophy. One example is the First Things First manifesto, launched within the graphic design community, which states: "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication – a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design."[28] In The Sciences of the Artificial, the polymath Herbert A. Simon asserts that design is a meta-discipline of all professions: "Engineers are not the only professional designers. Everyone designs who devises courses of action aimed at changing existing situations into preferred ones. The intellectual activity that produces material artifacts is no different fundamentally from the one that prescribes remedies for a sick patient or the one that devises a new sales plan for a company or a social welfare policy for a state. Design, so construed, is the core of all professional training; it is the principal mark that distinguishes the professions from the sciences. Schools of engineering, as well as

schools of architecture, business, education, law, and medicine, are all centrally concerned with the process of design."[29]


Approaches to design
A design approach is a general philosophy that may or may not include a guide for specific methods. Some approaches guide the overall goal of the design; others guide the tendencies of the designer. A combination of approaches may be used if they do not conflict. Some popular approaches include:
- KISS principle (Keep It Simple, Stupid), which strives to eliminate unnecessary complications.
- There is more than one way to do it (TIMTOWTDI), a philosophy permitting multiple methods of doing the same thing.
- Use-centered design, which focuses on the goals and tasks associated with the use of the artifact, rather than on the end user.
- User-centered design, which focuses on the needs, wants, and limitations of the end user of the designed artifact.
- Critical design, which uses designed artifacts as an embodied critique or commentary on existing values, morals, and practices in a culture.
- Service design, designing or organizing the experience around a product and the service associated with a product's use.
- Transgenerational design, the practice of making products and environments compatible with the physical and sensory impairments associated with human aging which limit major activities of daily living.
- Speculative design, which does not necessarily define a specific problem to solve but establishes a provocative starting point from which a design process emerges. The result is an evolution of fluctuating iteration and reflection, using designed objects to provoke questions and stimulate discussion in academic and research settings.

Methods of designing
Design methods is a broad area that focuses on:
- Exploring possibilities and constraints by focusing critical thinking skills to research and define problem spaces for existing products or services, or the creation of new categories (see also Brainstorming);
- Redefining the specifications of design solutions, which can lead to better guidelines for traditional design activities (graphic, industrial, architectural, etc.);
- Managing the process of exploring, defining, and creating artifacts continually over time;
- Prototyping possible scenarios or solutions that incrementally or significantly improve the inherited situation;
- Trendspotting: understanding the trend process.



The word "design" is often considered ambiguous, as it is applied differently in varying contexts.

Design and art

Today the term design is widely associated with the applied arts as initiated by Raymond Loewy and the teachings at the Bauhaus and Ulm School of Design (HfG Ulm) in Germany during the 20th century. The boundaries between art and design are blurred, largely due to the range of applications for both the term 'art' and the term 'design'. Applied arts has been used as an umbrella term to define fields of industrial design, graphic design, fashion design, etc. The term 'decorative arts' is a traditional term used in historical discourses to describe craft objects, and also sits within the umbrella of applied arts. In graphic arts (2D image making that ranges from photography to illustration), the distinction is often made between fine art and commercial art, based on the context within which the work is produced and how it is traded. To a degree, some methods for creating work, such as employing intuition, are shared across the disciplines within the applied arts and fine art. Mark Getlein suggests the principles of design are "almost instinctive", "built-in", "natural", and part of "our sense of 'rightness'."[30] However, the intended application and context of the resulting works will vary greatly.

[Image: The new terminal at Barajas airport in Madrid, Spain]

Design and engineering

In engineering, design is a component of the engineering process. Many overlapping methods and processes can be seen when comparing product design, industrial design and engineering. The American Heritage Dictionary defines design as "To conceive or fashion in the mind; invent" and "To formulate a plan", and defines engineering as "The application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems."[31][32] Both are forms of problem-solving, with a defined distinction being the application of "scientific and mathematical principles". The increasingly scientific focus of engineering in practice, however, has raised the importance of new, more "human-centered" fields of design.[33] How much science is applied in a design is a question of what is considered "science", including the distinction between social science and natural science. Scientists at Xerox PARC made the distinction between design and engineering as "moving minds" versus "moving atoms".

[Image: A drawing for a booster engine for steam locomotives. Engineering is applied to design, with emphasis on function and the utilization of mathematics and science.]



Design and production

The relationship between design and production is one of planning and executing. In theory, the plan should anticipate and compensate for potential problems in the execution process. Design involves problem-solving and creativity. In contrast, production involves a routine or pre-planned process. A design may also be a mere plan that does not include a production or engineering process, although a working knowledge of such processes is usually expected of designers. In some cases it may be unnecessary or impractical to expect a designer with the broad multidisciplinary knowledge required for such designs to also have the detailed specialized knowledge of how to produce the product.

Design and production are intertwined in many creative professional careers, meaning that problem-solving is part of execution and vice versa. As the cost of rearrangement increases, the need for separating design from production increases as well. For example, a high-budget project, such as a skyscraper, requires separating (design) architecture from (production) construction. A low-budget project, such as a locally printed office party invitation flyer, can be rearranged and printed dozens of times at the low cost of a few sheets of paper, a few drops of ink, and less than one hour's pay of a desktop publisher. This is not to say that production never involves problem-solving or creativity, nor that design always involves creativity. Designs are rarely perfect and are sometimes repetitive. The imperfection of a design may task a production position (e.g. production artist, construction worker) with using creativity or problem-solving skills to compensate for what was overlooked in the design process. Likewise, a design may be a simple repetition (copy) of a known preexisting solution, requiring minimal, if any, creativity or problem-solving skills from the designer.

[Image: Jonathan Ive has received several awards for his design of Apple Inc. products such as this MacBook. In some design fields, personal computers are also used for both design and production.]

Process design
"Process design" (in contrast to "design process" mentioned above) refers to the planning of routine steps of a process aside from the expected result. Processes (in general) are treated as a product of design, not the method of design. The term originated with the industrial designing of chemical processes. With the increasing complexities of the information age, consultants and executives have found the term useful to describe the design of business processes as well as manufacturing processes.

[Image: An example of a business workflow process using Business Process Modeling Notation.]



[1] Dictionary meanings in the Cambridge Dictionary of American English (http://dictionary.cambridge.org/), at Dictionary.com (http://dictionary.reference.com/browse/design) (esp. meanings 1–5 and 7–8) and at AskOxford (http://www.askoxford.com/concise_oed/design?view=uk) (esp. verbs).
[2] Ralph, P. and Wand, Y. (2009). "A proposal for a formal definition of the design concept". In Lyytinen, K., Loucopoulos, P., Mylopoulos, J., and Robinson, W., editors, Design Requirements Workshop (LNBIP 14), pp. 103–136. Springer-Verlag, p. 109. doi:10.1007/978-3-540-92966-6_6.
[3] Don Kumaragamage, Y. (2011). Design Manual Vol 1.
[4] Simon (1996)
[5] Alexander, C. (1964). Notes on the Synthesis of Form. Harvard University Press.
[6] Eekels, J. (2000). "On the Fundamentals of Engineering Design Science: The Geography of Engineering Design Science, Part 1". Journal of Engineering Design 11 (4): 377–397. doi:10.1080/09544820010000962.
[7] Inge Mette Kirkeby (2011). "Transferable Knowledge". Architectural Research Quarterly 15 (1): 9–14.
[8] Brinkkemper, S. (1996). "Method engineering: engineering of information systems development methods and tools". Information and Software Technology 38 (4): 275–280. doi:10.1016/0950-5849(95)01059-9.
[9] Dorst and Dijkhuis 1995, p. 261
[10] Brooks 2010
[11] Schön 1983
[12] Ralph 2010
[13] Dorst and Cross 2001
[14] Newell and Simon 1972; Simon 1969
[15] Pahl and Beitz 1996
[16] Royce 1970
[17] Bourque and Dupuis 2004
[18] Pahl et al. 2007
[19] Cross, N. (2006). T211 Design and Designing: Block 2, p. 99. Milton Keynes: The Open University.
[20] Ullman, David G. (2009). The Mechanical Design Process. McGraw-Hill, 4th edition. ISBN 0-07-297574-1.
[21] Cross et al. 1992; Ralph 2010; Schön 1983
[22] Brooks 2010; McCracken and Jackson 1982
[23] Beck et al. 2001
[24] Truex et al. 2000
[25] Ralph 2010, p. 67
[26] Headquarters, Department of the Army (May 2012). ADRP 5-0: The Operations Process. Washington, D.C.: United States Army. pp. 2-4 to 2-11.
[27] Holm, Ivar (2006). Ideas and Beliefs in Architecture and Industrial design: How attitudes, orientations and underlying assumptions shape the built environment. Oslo School of Architecture and Design. ISBN 82-547-0174-1.
[28] First Things First 2000: a design manifesto (http://maxbruinsma.nl/index1.html?ftf2000.htm). Manifesto published jointly by 33 signatories in: Adbusters, the AIGA journal, Blueprint, Emigre, Eye, Form, Items, fall 1999/spring 2000.
[29] Simon (1996), p. 111.
[30] Mark Getlein, Living With Art, 8th ed. (New York: 2008), 121.
[31] "design". The American Heritage Dictionary of the English Language, Fourth Edition (http://dictionary.reference.com/browse/design). Retrieved January 10, 2007.
[32] "engineering". The American Heritage Dictionary of the English Language, Fourth Edition (http://dictionary.reference.com/browse/engineering). Retrieved January 10, 2007.
[33] Faste 2001



Beck, K., Beedle, M., van Bennekum, A., Cockburn, A., Cunningham, W., Fowler, M., Grenning, J., Highsmith, J., Hunt, A., Jeffries, R., Kern, J., Marick, B., Martin, R.C., Mellor, S., Schwaber, K., Sutherland, J., and Thomas, D. Manifesto for agile software development, 2001.
Bourque, P., and Dupuis, R. (eds.) Guide to the Software Engineering Body of Knowledge (SWEBOK). IEEE Computer Society Press, 2004. ISBN 0-7695-2330-7.
Brooks, F.P. The Design of Design: Essays from a Computer Scientist. Addison-Wesley Professional, 2010. ISBN 0-201-36298-8.
Cross, N., Dorst, K., and Roozenburg, N. Research in Design Thinking. Delft University Press, Delft, 1992. ISBN 90-6275-796-0.
Dorst, K., and Cross, N. (2001). "Creativity in the design process: Co-evolution of problem–solution". Design Studies 22 (2): 425–437. doi:10.1016/0142-694X(94)00012-3.
Dorst, K., and Dijkhuis, J. "Comparing paradigms for describing design activity". Design Studies (16:2), 1995, pp. 261–274.
Faste, R. (2001). "The Human Challenge in Engineering Design". International Journal of Engineering Education 17 (4–5): 327–331.
McCracken, D.D., and Jackson, M.A. (1982). "Life cycle concept considered harmful". SIGSOFT Software Engineering Notes 7 (2): 29–32. doi:10.1145/1005937.1005943.
Newell, A., and Simon, H. Human Problem Solving. Prentice-Hall, Inc., 1972.
Pahl, G., and Beitz, W. Engineering Design: A Systematic Approach. Springer-Verlag, London, 1996. ISBN 3-540-19917-9.
Pahl, G., Beitz, W., Feldhusen, J., and Grote, K.-H. Engineering Design: A Systematic Approach (3rd ed.). Springer-Verlag, 2007. ISBN 1-84628-318-3.
Pirkl, James J. Transgenerational Design: Products for an Aging Population. Van Nostrand Reinhold, New York, NY, USA, 1994. ISBN 0-442-01065-6.
Ralph, P. "Comparing two software design process theories". International Conference on Design Science Research in Information Systems and Technology (DESRIST 2010), Springer, St. Gallen, Switzerland, 2010, pp. 139–153.
Royce, W.W. "Managing the development of large software systems: Concepts and techniques". Proceedings of Wescon, 1970.
Schön, D.A. The Reflective Practitioner: How Professionals Think in Action. Basic Books, USA, 1983.
Simon, H.A. The Sciences of the Artificial. MIT Press, Cambridge, MA, USA, 1996. ISBN 0-262-69191-4.
Truex, D., Baskerville, R., and Travis, J. (2000). "Amethodical systems development: The deferred meaning of systems development methods". Accounting, Management and Information Technologies 10 (1): 53–79. doi:10.1016/S0959-8022(99)00009-0.

Design elements and principles



Design elements and principles describe fundamental ideas about the practice of good visual design. As William Lidwell stated in Universal Principles of Design: "The best designers sometimes disregard the principles of design. When they do so, however, there is usually some compensating merit attained at the cost of the violation. Unless you are certain of doing as well, it is best to abide by the principles."[1]

Design Elements
Design elements are the basic units of a painting, drawing, design or other visual piece[2] and include:

Line

A line is a fundamental mark or stroke used in drawing in which the length is longer than the width. Two connected points form a line, and every line has a length, width, and, if it is straight, a direction.[3]
Uses
- A line that defines or bounds an edge, but not always the outside edge, could represent a fold or color change.[3]
- A line that defines the edge of space can also be created by a gap of negative space.
- Many uses include separating columns or rows of type, or showing a change in document type.[3]
- Lines are used in linear shapes and patterns to decorate many different substrates, and can be used to create shadows representing tonal value, called hatching.[3]

Color

Color can play a large role in the elements of design,[4] with the color wheel being used as a tool and color theory providing a body of practical guidance for color mixing and the visual impact of specific color combinations.
Uses
- Color can aid organization, so develop a color strategy and stay consistent with those colors.[4]
- It can give emphasis to create a hierarchy.

[Image: This image contains contour lines (the outline of the birds) and decoration lines (hatching).]

Attributes
- Hue[4]
- Values: tints and shades of a color, created by adding black to the color for a shade and white for a tint. Creating a tint or shade of a color reduces the saturation.[4]
- Saturation gives a color brightness or dullness.[4]
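The tint and shade rules above are simple enough to state numerically. The sketch below assumes sRGB channels in the 0–255 range and plain linear mixing toward white or black; the helper names `mix`, `tint`, and `shade` are illustrative, not taken from the text.

```python
def mix(color, target, amount):
    """Linearly mix an (r, g, b) color toward a target color.
    amount = 0.0 leaves the color unchanged; 1.0 yields the target."""
    return tuple(round(c + (t - c) * amount) for c, t in zip(color, target))

def tint(color, amount):
    # Adding white lightens the color (a tint).
    return mix(color, (255, 255, 255), amount)

def shade(color, amount):
    # Adding black darkens the color (a shade).
    return mix(color, (0, 0, 0), amount)

pure_red = (255, 0, 0)
print(tint(pure_red, 0.5))   # (255, 128, 128), a pink tint
print(shade(pure_red, 0.5))  # (128, 0, 0), a dark shade of red

# As the text notes, both operations reduce saturation: measured as
# (max - min) / max, pure red has saturation 1.0, while the 50% tint
# has (255 - 128) / 255, roughly 0.5.
```

Painting and design programs implement many variants of this (for example, mixing in a perceptual color space), but the lightening and darkening behavior is the same idea.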


Shape

A shape is defined as an area that stands out from the space next to or around it due to a defined or implied boundary, or because of differences of value, color, or texture.[5] All objects are composed of shapes, and all other elements of design are shapes in some way.[3]
Categories
- Mechanical shapes or geometric shapes are shapes that can be drawn using a ruler or compass. Mechanical shapes, whether simple or complex, produce a feeling of control or order.[3]
- Organic shapes are freehand-drawn shapes that are complex and normally found in nature. Organic shapes produce a natural feel.[3]

Texture

Texture means the way a surface feels, or is perceived to feel. Texture can be added to attract or repel interest to an element, depending on the pleasantness of the texture.[3]
Types of texture
- Tactile texture is the actual three-dimensional feel of a surface that can be touched. A painter can use impasto to build peaks and create texture.[3]
- Visual texture is the illusion of a surface's peaks and valleys, like the tree pictured. Any texture shown in a photo is a visual texture, meaning the paper is smooth no matter how rough the image makes it appear.[3]
Most textures have a natural feel but still seem to repeat a motif in some way. Regularly repeating a motif will result in a texture appearing as a pattern.[3]


[Image: The tree's visual texture is represented here in this image.]

Space

In design, space concerns the area on which a design will take place. For a two-dimensional design, space concerns creating the illusion of a third dimension on a flat surface:[3]
- Overlap is the effect in which objects appear to be on top of each other. This illusion makes the top element look closer to the observer. There is no way to determine the depth of the space, only the order of closeness.
- Shading adds gradation marks to make an object on a two-dimensional surface seem three-dimensional. Highlight, transitional light, core of the shadow, reflected light, and cast shadow give an object a three-dimensional look.[3]
- Linear perspective is the concept relating to how an object seems smaller the farther away it gets.
- Atmospheric perspective is based on how air acts as a filter to change the appearance of distant objects.
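Linear perspective amounts to a simple proportion: under an idealized pinhole-camera model (an assumption for illustration; `projected_size` and its `focal_length` parameter are hypothetical names, not from the text), an object's apparent size falls off as one over its distance from the viewer.

```python
def projected_size(true_size, distance, focal_length=1.0):
    """Apparent (projected) size of an object under a pinhole model:
    proportional to its true size, inversely proportional to its distance."""
    return focal_length * true_size / distance

# Two identical 2-unit-tall figures: the one twice as far away projects
# at half the height, which is what reads as depth on a flat surface.
print(projected_size(2.0, 5.0))   # 0.4
print(projected_size(2.0, 10.0))  # 0.2
```

A designer drawing in perspective applies this rule implicitly: doubling an object's distance halves its drawn size, and all such sizes converge toward a vanishing point.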



Form

Form is any three-dimensional object. Form can be measured from top to bottom (height), side to side (width), and back to front (depth). Form is also defined by light and dark. There are two types of form: geometric (man-made) and natural (organic). Form may be created by combining two or more shapes. It may be enhanced by tone, texture, and color, and can be illustrated or constructed.

Principles of Design
Principles of design are applied to the elements of design to bring them together into one design. How one applies these principles determines how successful a design may be.[2]

Unity

According to Alex White, author of The Elements of Graphic Design, achieving visual unity is a main goal of graphic design. When all elements are in agreement, a design is considered unified. No individual part is viewed as more important than the whole design. A good balance between unity and variety must be established to avoid a chaotic or a lifeless design.[4]
Methods
- Proximity
- Similarity
- Rhythm is achieved when recurring position, size, color, and use of a graphic element has a focal point interruption.
- Altering the basic theme achieves unity and helps keep interest.

Point, Line, and Plane

Point, Line, and Plane (PLP) are the three most basic shapes in visual design, and a good design contains all three. The key to using PLP is making the shapes overlap and share elements.[4]
- Point: In design, a point can be the smallest unit of marking, not simply a dot. Additionally, a point can be a small plane or a short line.[4]
- Line: The trace of a point in motion, a thin stroke, or even a narrow plane can be considered a line. Typed text automatically creates visual lines.[4]
- Plane: A plane can be perceived as the trace of a line in motion, like dragging a piece of chalk across a blackboard sideways (long side down). Wide lines and large points may also create a plane.[4]

Balance

Balance is a state of equalized tension and equilibrium, which may not always be calm.[4]
Types
- Symmetry
- Asymmetry produces an informal balance that is attention-attracting and dynamic.
- Radial balance is arranged around a central element. The elements placed in radial balance seem to 'radiate' out from a central point in a circular fashion.
- Overall is a mosaic form of balance, which normally arises from too many elements being put on a page. Due to the lack of hierarchy and contrast, this form of balance can look noisy.



Hierarchy

A good design contains elements that lead the reader through each element in order of its significance. The type and images should be arranged in order from the most important to the least important.[4]

Scale

Using the relative size of elements against each other can attract attention to a focal point. When elements are designed larger than life, scale is being used to show drama.[4]

Dominance is created by contrasting size, positioning, color, style, or shape. The focal point should dominate the design with scale and contrast without sacrificing the unity of the whole.[4]

Similarity and Contrast

Planning a consistent and similar design is an important aspect of a designer's work to make their focal point visible. Too much similarity is boring, but without similarity important elements will not exist, and an image without contrast is uneventful, so the key is to find the balance between similarity and contrast.[4]
Similar environment
There are several ways to develop a similar environment:[4]
- Build a unique internal organization structure.
- Manipulate shapes of images and text to correlate together.
- Express continuity from page to page in publications. Items to watch include headers, themes, borders, and spaces.
- Develop a style manual and stick with the format.
Contrasts
- Space: filled vs. empty, near vs. far, 2-D vs. 3-D
- Position: left vs. right, isolated vs. grouped, centered vs. off-center
- Form: simple vs. complex, beauty vs. ugly, whole vs. broken
- Direction: stability vs. movement
- Structure: organized vs. chaotic, mechanical vs. hand drawn
- Size: big vs. little

- deep vs. shallow, fat vs. thin
- Color: grayscale vs. color, light vs. dark
- Texture: fine vs. coarse, smooth vs. rough, sharp vs. dull
- Density: transparent vs. opaque, thick vs. thin, liquid vs. solid
- Gravity: light vs. heavy, stable vs. unstable

Movement

Movement is the path the viewer's eye takes through the artwork, often to focal areas. Such movement can be directed along lines, edges, shape, and color within the artwork.


[1] Lidwell, William; Holden, Kritina; Butler, Jill (2010). Universal Principles of Design (2nd ed.). Beverly, Massachusetts: Rockport Publishers. ISBN 978-1-59253-587-3.
[2] Lovett, John. "Design and Color" (http://www.johnlovett.com/test.htm). Retrieved 3 April 2012.
[3] Saw, James. "Design Notes" (http://daphne.palomar.edu/design/Default.htm). Palomar College. Retrieved 3 April 2012.
[4] White, Alex (2011). The Elements of Graphic Design. New York, NY: Allworth Press. pp. 81–105. ISBN 978-1-58115-762-8.
[5] Cindy Kovalik, Ph.D. and Peggy King, M.Ed. "Visual Literacy" (http://www.ehhs.kent.edu/community/VLO/design/elements/shape/index.html). Retrieved 2010-03-27.


External links
Art, Design, and Visual Thinking
Design Elements and Principles

Cognitive science


Cognitive science is the interdisciplinary scientific study of the mind and its processes. It examines what cognition is, what it does and how it works. It includes research on intelligence and behavior, especially focusing on how information is represented, processed, and transformed (in faculties such as perception, language, memory, reasoning, and emotion) within nervous systems (human or other animal) and machines (e.g. computers). Cognitive science consists of multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, and anthropology.[1] It spans many levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning; from neural circuitry to modular brain organization. The fundamental concept of cognitive science is "that thinking can best be understood in terms of representational structures in the mind and computational procedures that operate on those structures."[1]

Levels of analysis
A central tenet of cognitive science is that a complete understanding of the mind/brain cannot be attained by studying only a single level. An example would be the problem of remembering a phone number and recalling it later. One approach to understanding this process would be to study behavior through direct observation: a person could be presented with a phone number and asked to recall it after some delay, and the accuracy of the response could then be measured. Another approach would be to study the firings of individual neurons while a person is trying to remember the phone number. Neither of these experiments on its own would fully explain how the process of remembering a phone number works. Even if the technology to map out every neuron in the brain in real time were available, and it were known when each neuron was firing, it would still be impossible to know how a particular firing of neurons translates into the observed behavior. Thus an understanding of how these two levels relate to each other is needed. The Embodied Mind: Cognitive Science and Human Experience argues that the new sciences of the mind need to enlarge their horizon to encompass both lived human experience and the possibilities for transformation inherent in human experience.[2] This can be provided by a functional-level account of the process. Studying a particular phenomenon from multiple levels creates a better understanding of the processes that occur in the brain to give rise to a particular behavior. Marr[3] gave a famous description of three levels of analysis:
1. the computational theory, specifying the goals of the computation;
2. representation and algorithm, giving a representation of the input and output and the algorithm which transforms one into the other; and
3. the hardware implementation, how algorithm and representation may be physically realized.
(See also the entry on functionalism.)

Interdisciplinary nature
Cognitive science is an interdisciplinary field with contributors from various fields, including psychology, neuroscience, linguistics, philosophy of mind, computer science, anthropology, sociology, and biology. Cognitive science tends to view the world outside the mind much as other sciences do; thus it too has an objective, observer-independent existence. The field is usually seen as compatible with the physical sciences and uses the scientific method as well as simulation or modeling, often comparing the output of models with aspects of human behavior. Some doubt whether there is a unified cognitive science and prefer to speak of the cognitive sciences in the plural.[4] Many, but not all, who consider themselves cognitive scientists hold a functionalist view of the mind: the view that mental states are classified functionally, such that any system that performs the proper function for some mental state is considered to be in that mental state. According to some versions of functionalism, even non-human systems, such as other animal species, alien life forms, or advanced computers can, in principle, have mental states.


Cognitive science: the term

The term "cognitive" in "cognitive science" is "used for any kind of mental operation or structure that can be studied in precise terms" (Lakoff and Johnson, 1999). This conceptualization is very broad, and should not be confused with how "cognitive" is used in some traditions of analytic philosophy, where "cognitive" has to do only with formal rules and truth conditional semantics. The earliest entries for the word "cognitive" in the OED take it to mean roughly pertaining "to the action or process of knowing". The first entry, from 1586, shows the word was at one time used in the context of discussions of Platonic theories of knowledge. Most in cognitive science, however, presumably do not believe their field is the study of anything as certain as the knowledge sought by Plato.

Cognitive science is a large field, and covers a wide array of topics on cognition. However, it should be recognized that cognitive science is not equally concerned with every topic that might bear on the nature and operation of the mind or intelligence. Social and cultural factors, emotion, consciousness, animal cognition, comparative and evolutionary approaches are frequently de-emphasized or excluded outright, often based on key philosophical conflicts. Another important mind-related subject that the cognitive sciences tend to avoid is the existence of qualia, with discussions over this issue being sometimes limited to only mentioning qualia as a philosophically open matter. Some within the cognitive science community, however, consider these to be vital topics, and advocate the importance of investigating them.[5] Below are some of the main topics that cognitive science is concerned with. This is not an exhaustive list, but is meant to cover the wide range of intelligent behaviors. See List of cognitive science topics for a list of various aspects of the field.

Artificial intelligence
"... One major contribution of AI and cognitive science to psychology has been the information processing model of human thinking in which the metaphor of brain-as-computer is taken quite literally." (AAAI Web pages).[6] Artificial intelligence (AI) involves the study of cognitive phenomena in machines. One of the practical goals of AI is to implement aspects of human intelligence in computers. Computers are also widely used as a tool with which to study cognitive phenomena. Computational modeling uses simulations to study how human intelligence may be structured.[7] (See the section on computational modeling in the Research methods section.) There is some debate in the field as to whether the mind is best viewed as a huge array of small but individually feeble elements (i.e. neurons), or as a collection of higher-level structures such as symbols, schemas, plans, and rules. The former view uses connectionism to study the mind, whereas the latter emphasizes symbolic computations. One way to view the issue is whether it is possible to accurately simulate a human brain on a computer without accurately simulating the neurons that make up the human brain.

Attention

Attention is the selection of important information. The human mind is bombarded with millions of stimuli and it must have a way of deciding which of this information to process. Attention is sometimes seen as a spotlight, meaning one can only shine the light on a particular set of information. Experiments that support this metaphor include the dichotic listening task (Cherry, 1957) and studies of inattentional blindness (Mack and Rock, 1998). In the dichotic listening task, subjects are presented with two different messages, one in each ear, and told to focus on only one of the messages. At the end of the experiment, when asked about the content of the unattended message, subjects cannot report it.


Knowledge and processing of language

The ability to learn and understand language is an extremely complex process. Language is acquired within the first few years of life, and all humans under normal circumstances are able to acquire language proficiently. A major driving force in theoretical linguistics is discovering the nature that language must have in the abstract in order to be learned in such a fashion. Some of the driving research questions in studying how the brain itself processes language include: (1) To what extent is linguistic knowledge innate or learned? (2) Why is it more difficult for adults to acquire a second language than it is for infants to acquire their first language? (3) How are humans able to understand novel sentences?

A well known example of a Phrase structure tree. This is one way of representing human language that shows how different components are organized hierarchically.

The study of language processing ranges from the investigation of the sound patterns of speech to the meaning of words and whole sentences. Linguistics often divides language processing into orthography, phonology and phonetics, morphology, syntax, semantics, and pragmatics. Many aspects of language can be studied from each of these components and from their interaction. The study of language processing in cognitive science is closely tied to the field of linguistics. Linguistics was traditionally studied as a part of the humanities, including studies of history, art and literature. In the last fifty years or so, more and more researchers have studied knowledge and use of language as a cognitive phenomenon, the main problems being how knowledge of language can be acquired and used, and what precisely it consists of. Linguists have found that, while humans form sentences in ways apparently governed by very complex systems, they are remarkably unaware of the rules that govern their own speech. Thus linguists must resort to indirect methods to determine what those rules might be, if indeed rules as such exist. In any event, if speech is indeed governed by rules, they appear to be opaque to any conscious consideration.

Learning and development

Learning and development are the processes by which we acquire knowledge and information over time. Infants are born with little or no knowledge (depending on how knowledge is defined), yet they rapidly acquire the ability to use language, walk, and recognize people and objects. Research in learning and development aims to explain the mechanisms by which these processes might take place. A major question in the study of cognitive development is the extent to which certain abilities are innate or learned. This is often framed in terms of the nature versus nurture debate. The nativist view emphasizes that certain features are innate to an organism and are determined by its genetic endowment. The empiricist view, on the other hand, emphasizes that certain abilities are learned from the environment. Although clearly both genetic and environmental input is needed for a child to develop normally, considerable debate remains about how genetic information might guide cognitive development. In the area of language acquisition, for example, some (such as Steven Pinker)[8] have argued that specific information containing universal grammatical rules must be contained in the genes, whereas others (such as Jeffrey Elman and colleagues in Rethinking Innateness) have argued that Pinker's claims are biologically unrealistic. They argue that genes determine the architecture of a learning system, but that specific "facts" about how grammar works can only be learned as a result of experience.

Memory

Memory allows us to store information for later retrieval. Memory is often thought of as consisting of both a long-term and a short-term store. Long-term memory allows us to store information over prolonged periods (days, weeks, years); we do not yet know the practical limit of long-term memory capacity. Short-term memory allows us to store information over short time scales (seconds or minutes). Memory is also often grouped into declarative and procedural forms. Declarative memory, itself grouped into the subsets of semantic and episodic memory, refers to our memory for facts and specific knowledge, specific meanings, and specific experiences (e.g., "Who was the first president of the U.S.A.?" or "What did I eat for breakfast four days ago?"). Procedural memory allows us to remember actions and motor sequences (e.g., how to ride a bicycle) and is often dubbed implicit knowledge or memory. Cognitive scientists study memory just as psychologists do, but tend to focus more on how memory bears on cognitive processes, and on the interrelationship between cognition and memory. One example of this could be: what mental processes does a person go through to retrieve a long-lost memory? Or, what differentiates the cognitive process of recognition (seeing hints of something before remembering it, or memory in context) from recall (retrieving a memory, as in "fill-in-the-blank")?

Perception and action

Perception is the ability to take in information via the senses and process it in some way. Vision and hearing are two dominant senses that allow us to perceive the environment. Some questions in the study of visual perception, for example, include: (1) How are we able to recognize objects? (2) Why do we perceive a continuous visual environment, even though we only see small bits of it at any one time? One tool for studying visual perception is the examination of how people process optical illusions. The image on the right of a Necker cube is an example of a bistable percept, that is, the cube can be interpreted as being oriented in two different directions. The study of haptic (tactile), olfactory, and gustatory stimuli also falls into the domain of perception. Action is taken to refer to the output of a system. In humans, this is accomplished through motor responses. Spatial planning and movement, speech production, and complex motor movements are all aspects of action.

The Necker cube, an example of an optical illusion

Research methods
Many different methodologies are used to study cognitive science. As the field is highly interdisciplinary, research often cuts across multiple areas of study, drawing on research methods from psychology, neuroscience, computer science and systems theory.
An optical illusion. The square A is exactly the same shade of gray as square B. See checker shadow illusion.

Behavioral experiments
In order to have a description of what constitutes intelligent behavior, one must study behavior itself. This type of research is closely tied to that in cognitive psychology and psychophysics. By measuring behavioral responses to different stimuli, one can understand something about how those stimuli are processed. Lewandowski and Strohmetz (2009) review a collection of innovative uses of behavioral measurement in psychology, including behavioral traces, behavioral observations, and behavioral choice.[9] Behavioral traces are pieces of evidence that indicate behavior occurred, but the actor is not present (e.g., litter in a parking lot or readings on an electric meter). Behavioral observations involve the direct witnessing of the actor engaging in the behavior (e.g., watching how close a person sits next to another person). Behavioral choices are when a person selects between two or more options (e.g., voting behavior, choice of a punishment for another participant).

Reaction time. The time between the presentation of a stimulus and an appropriate response can indicate differences between two cognitive processes, and can indicate some things about their nature. For example, if in a search task the reaction times vary proportionally with the number of elements, then it is evident that this cognitive process of searching involves serial rather than parallel processing.

Psychophysical responses. Psychophysical experiments are an old psychological technique which has been adopted by cognitive psychology. They typically involve making judgments of some physical property, e.g., the loudness of a sound. Correlation of subjective scales between individuals can show cognitive or sensory biases as compared to actual physical measurements. Some examples include: sameness judgments for colors, tones, textures, etc.; threshold differences for colors, tones, textures, etc.

Eye tracking. This methodology is used to study a variety of cognitive processes, most notably visual perception and language processing. The fixation point of the eyes is linked to an individual's focus of attention; thus, by monitoring eye movements, we can study what information is being processed at a given time. Eye tracking allows us to study cognitive processes on extremely short time scales. Eye movements reflect online decision making during a task, and they provide us with some insight into the ways in which those decisions may be processed.
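The reaction-time reasoning above can be sketched numerically. The fragment below uses made-up reaction times (not data from any study) and fits an ordinary least-squares line to mean RT as a function of display set size; a clearly positive milliseconds-per-item slope is the classic signature of serial search, while a near-zero slope would instead suggest parallel processing.

```python
# Hypothetical visual-search reaction times (milliseconds), for illustration only.
set_sizes = [2, 4, 8, 16]
mean_rts  = [520, 590, 730, 1010]   # made-up values

# Ordinary least-squares slope and intercept, closed form (no libraries).
n = len(set_sizes)
mean_x = sum(set_sizes) / n
mean_y = sum(mean_rts) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(set_sizes, mean_rts)) \
        / sum((x - mean_x) ** 2 for x in set_sizes)
intercept = mean_y - slope * mean_x

print(f"search rate: {slope:.1f} ms per item, base RT: {intercept:.0f} ms")
# prints: search rate: 35.0 ms per item, base RT: 450 ms
# A substantially positive slope suggests serial search; the intercept
# estimates processing time that is independent of the number of items.
```

In real experiments the slope would be estimated from many trials per subject, but the inference pattern (slope as a signature of serial versus parallel processing) is the same.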


Brain imaging
Brain imaging involves analyzing activity within the brain while performing various cognitive tasks. This allows us to link behavior and brain function to help understand how information is processed. Different types of imaging techniques vary in their temporal (time-based) and spatial (location-based) resolution. Brain imaging is often used in cognitive neuroscience.

Single-photon emission computed tomography and positron emission tomography. SPECT and PET use radioactive isotopes, which are injected into the subject's bloodstream and taken up by the brain. By observing which areas of the brain take up the radioactive isotope, we can see which areas of the brain are more active than others. PET has similar spatial resolution to fMRI, but extremely poor temporal resolution.

Electroencephalography. EEG measures the electrical fields generated by large populations of neurons in the cortex by placing a series of electrodes on the scalp of the subject. This technique has an extremely high temporal resolution, but a relatively poor spatial resolution.

Functional magnetic resonance imaging. fMRI measures the relative amount of oxygenated blood flowing to different parts of the brain. More oxygenated blood in a particular region is assumed to correlate with an increase in neural activity in that part of the brain. This allows us to localize particular functions within different brain regions. fMRI has moderate spatial and temporal resolution.

Image of the human head with the brain. The arrow indicates the position of the hypothalamus.

Optical imaging. This technique uses infrared transmitters and receivers to measure the amount of light reflected by blood near different areas of the brain. Since oxygenated and deoxygenated blood reflect light in different amounts, we can study which areas are more active (i.e., those that have more oxygenated blood). Optical imaging has moderate temporal resolution but poor spatial resolution. It also has the advantage that it is extremely safe and can be used to study infants' brains.

Magnetoencephalography. MEG measures magnetic fields resulting from cortical activity. It is similar to EEG, except that it has improved spatial resolution, since the magnetic fields it measures are not blurred or attenuated by the scalp, meninges and so forth as the electrical activity measured in EEG is. MEG uses SQUID sensors to detect tiny magnetic fields.


Computational modeling
Computational models require a mathematically and logically formal representation of a problem. Computer models are used in the simulation and experimental verification of different specific and general properties of intelligence. Computational modeling can help us to understand the functional organization of a particular cognitive phenomenon. There are two basic approaches to cognitive modeling. The first, symbolic modeling, is focused on the abstract mental functions of an intelligent mind and operates using symbols; the second, subsymbolic modeling, follows the neural and associative properties of the human brain.

A neural network with two layers.

Symbolic modeling evolved from computer science paradigms using the technologies of knowledge-based systems, as well as a philosophical perspective; see, for example, "Good Old-Fashioned Artificial Intelligence" (GOFAI). Such models were developed by the first cognitive researchers and were later used in information engineering for expert systems. Since the early 1990s the approach has been generalized in systemics for the investigation of functional human-like intelligence models, such as personoids, and, in parallel, developed as the SOAR environment. Recently, especially in the context of cognitive decision making, symbolic cognitive modeling has been extended to a socio-cognitive approach, including social and organizational cognition interrelated with a sub-symbolic, non-conscious layer. Subsymbolic modeling includes connectionist/neural-network models. Connectionism relies on the idea that the mind/brain is composed of simple nodes and that the power of the system comes primarily from the existence and manner of connections between these simple nodes. Neural nets are textbook implementations of this approach. Some critics of this approach feel that while these models approach biological reality as a representation of how the system works, they lack explanatory power because complicated systems of connections with even simple rules are extremely complex and often less interpretable than the system they model. Other approaches gaining in popularity include the use of dynamical systems theory and techniques putting symbolic models and connectionist models into correspondence (neural-symbolic integration). Bayesian models, often drawn from machine learning, are also gaining popularity. All of the above approaches tend to be generalized into the form of integrated computational models of a synthetic/abstract intelligence, in order to be applied to the explanation and improvement of individual and social/organizational decision-making and reasoning.
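The connectionist claim that power comes from connections among simple nodes can be illustrated with a minimal sketch. The network below is hypothetical and hand-wired (the weights are chosen for illustration, not learned): each node only computes a weighted sum through a squashing function, yet the two-layer arrangement of such nodes computes exclusive-or, a function no single such node can compute.

```python
import math

def sigmoid(z):
    """Squashing function applied by every node."""
    return 1.0 / (1.0 + math.exp(-z))

def unit(inputs, weights, bias):
    """One simple node: weighted sum of its inputs, then the squashing function."""
    return sigmoid(sum(w * x for w, x in zip(weights, inputs)) + bias)

def xor_net(x1, x2):
    # Hidden layer: one node approximates OR, the other approximates AND.
    h1 = unit([x1, x2], [10, 10], -5)    # near 1 if either input is on
    h2 = unit([x1, x2], [10, 10], -15)   # near 1 only if both inputs are on
    # Output node: "OR but not AND", i.e. exclusive or.
    return unit([h1, h2], [10, -20], -5)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", round(xor_net(a, b)))
```

In a genuine connectionist model the weights would be learned from data rather than set by hand, but the structural point is the same: the behavior resides in the pattern of connections, not in any individual node.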



Neurobiological methods
Research methods borrowed directly from neuroscience and neuropsychology can also help us to understand aspects of intelligence. These methods allow us to understand how intelligent behavior is implemented in a physical system. They include single-unit recording, direct brain stimulation, animal models, and postmortem studies.

Key findings
Cognitive science has much to its credit. Among other accomplishments, it has given rise to models of human cognitive bias and risk perception, and has been influential in the development of behavioral finance, part of economics. It has also given rise to a new theory of the philosophy of mathematics, and many theories of artificial intelligence, persuasion and coercion. It has made its presence firmly known in the philosophy of language and epistemology - a modern revival of rationalism - as well as constituting a substantial wing of modern linguistics. Fields of cognitive science have been influential in understanding the brain's particular functional systems (and functional deficits) ranging from speech production to auditory processing and visual perception. It has made progress in understanding how damage to particular areas of the brain affect cognition, and it has helped to uncover the root causes and results of specific dysfunction, such as dyslexia, anopia, and hemispatial neglect.

History

Cognitive science has a pre-history traceable back to ancient Greek philosophical texts (see Plato's Meno), and certainly must include writers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz and John Locke. However, although these early writers contributed greatly to the philosophical discovery of mind, and this would ultimately lead to the development of psychology, they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.

The modern culture of cognitive science can be traced back to the early cyberneticists in the 1930s and 1940s, such as Warren McCulloch and Walter Pitts, who sought to understand the organizing principles of the mind. McCulloch and Pitts developed the first variants of what are now known as artificial neural networks, models of computation inspired by the structure of biological neural networks. Another precursor was the early development of the theory of computation and the digital computer in the 1940s and 1950s. Alan Turing and John von Neumann were instrumental in these developments. The modern computer, or von Neumann machine, would play a central role in cognitive science, both as a metaphor for the mind and as a tool for investigation.

In 1959, Noam Chomsky published a scathing review of B. F. Skinner's book Verbal Behavior. At the time, Skinner's behaviorist paradigm dominated psychology: most psychologists focused on functional relations between stimulus and response, without positing internal representations. Chomsky argued that in order to explain language, we needed a theory like generative grammar, which not only attributed internal representations but characterized their underlying order.
The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of Artificial Intelligence research.[10] In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded.[11] In 1982, Vassar College became the first institution in the world to grant an undergraduate degree in Cognitive Science.[12] In the 1970s and early 1980s, much cognitive science research focused on the possibility of artificial intelligence. Researchers such as Marvin Minsky would write computer programs in languages such as LISP to attempt to formally characterize the steps that human beings went through, for instance, in making decisions and solving

problems, in the hope of better understanding human thought, and also in the hope of creating artificial minds. This approach is known as "symbolic AI". Eventually the limits of the symbolic AI research program became apparent. For instance, it seemed unrealistic to comprehensively list human knowledge in a form usable by a symbolic computer program. The late 1980s and 1990s saw the rise of neural networks and connectionism as a research paradigm. Under this point of view, often attributed to James McClelland and David Rumelhart, the mind could be characterized as a set of complex associations, represented as a layered network. Critics argue that there are some phenomena which are better captured by symbolic models, and that connectionist models are often so complex as to have little explanatory power. Recently symbolic and connectionist models have been combined, making it possible to take advantage of both forms of explanation.[13]


In a paper written shortly before his death, B.F. Skinner stated that "cognitive science is the creation science of psychology."[14]

Notable researchers
Some of the more recognized names in cognitive science are usually either the most controversial or the most cited. Within philosophy, familiar names include Daniel Dennett, who writes from a computational systems perspective; John Searle, known for his controversial Chinese room argument; Jerry Fodor, who advocates functionalism; and Douglas Hofstadter, famous for writing Gödel, Escher, Bach, which questions the nature of words and thought. In the realm of linguistics, Noam Chomsky and George Lakoff have been influential (both have also become notable as political commentators). In artificial intelligence, Marvin Minsky, Herbert A. Simon, Allen Newell, and Kevin Warwick are prominent. Popular names in the discipline of psychology include George A. Miller, James McClelland, Philip Johnson-Laird, and Steven Pinker. Anthropologists Dan Sperber, Edwin Hutchins, Scott Atran, Pascal Boyer and Joseph Henrich have been involved in collaborative projects with cognitive and social psychologists, political scientists and evolutionary biologists in attempts to develop general theories of culture formation, religion and political association.

References

[1] Thagard, Paul, "Cognitive Science" (http://plato.stanford.edu/archives/fall2008/entries/cognitive-science/), The Stanford Encyclopedia of Philosophy (Fall 2008 Edition), Edward N. Zalta (ed.).
[2] Varela, F. J., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. Cambridge, Mass.: MIT Press.
[3] Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W. H. Freeman.
[4] Miller, G. A. (2003). The cognitive revolution: a historical perspective. Trends in Cognitive Sciences, 7, 141-144.
[5] A number of authors consider the qualia problem to be part of the cognitive science field, e.g. "Some philosophical issues in cognitive science: qualia, intentionality, and the mind-body problem" (http://portal.acm.org/citation.cfm?id=166791.166844) and "Qualia: The Hard Problem" (http://chil.rice.edu/byrne/Pubs/cogsci96.pdf), and indeed the entire discipline of philosophy as being part of the cog sci field, e.g. "What is Cognitive Science?" (http://ls.berkeley.edu/ugis/cogsci/major/about.php), while other reputable sources that cover both qualia and cog sci do not draw any obvious connection between them; e.g. the Stanford Encyclopedia of Philosophy (http://plato.stanford.edu) (Jan 2008 online edition) has full-size articles on both qualia (http://plato.stanford.edu/entries/qualia/) and cog sci (http://plato.stanford.edu/entries/cognitive-science/), but qualia are not even mentioned in the cog sci article, while cog sci is not mentioned in the qualia article.
[6] http://www.aaai.org/AITopics/html/cogsci.html#simon
[7] Sun, Ron (ed.) (2008). The Cambridge Handbook of Computational Psychology. Cambridge University Press, New York.
[8] Pinker, S., Bloom, P. (1990). "Natural language and natural selection". Behavioral and Brain Sciences 13 (4): 707-784. doi:10.1017/S0140525X00081061.
[9] Lewandowski, Gary; Strohmetz, David (2009). "Actions can speak as loud as words: Measuring behavior in psychological science". Social and Personality Psychology Compass 3 (6): 992-1002. doi:10.1111/j.1751-9004.2009.00229.
[10] Longuet-Higgins, H. C. (1973). "Comments on the Lighthill Report and the Sutherland Reply", in Artificial Intelligence: a paper symposium, Science Research Council, 35-37.

[11] Cognitive Science Society (http://www.cognitivesciencesociety.org/about_description.html)
[12] Box 729. "About - Cognitive Science - Vassar College" (http://cogsci.vassar.edu/about/index.html). Retrieved 2012-08-15.
[13] Artur S. d'Avila Garcez, Luis C. Lamb and Dov M. Gabbay. Neural-Symbolic Cognitive Reasoning. Cognitive Technologies. Springer, 2008, ISBN 978-3-540-73245-7.
[14] B. F. Skinner, "Can Psychology be a Science of Mind?", American Psychologist, November 1990, page 1209. At the APA Web site (http://psycnet.apa.org/journals/amp/45/11/1206.html), accessed 29 December 2009.


External links
Cognitive Science Society
Cognitive Science Movie Index: A broad list of movies showcasing themes in the Cognitive Sciences
Piero Scaruffi's annotated bibliography on the mind
List of leading thinkers in cognitive science
Dr. Carl Stahmer's history page at the University of Santa Barbara


User Experience engineering [Vol 1 to 6] + [Special A to C]

User experience
User experience (UX) is the way a person feels about using a product, system or service. User experience highlights the experiential, affective, meaningful and valuable aspects of human-computer interaction and product ownership, but it also includes a person's perceptions of the practical aspects such as utility, ease of use and efficiency of the system. User experience is subjective in nature, because it is about an individual's feelings and thoughts about the system. User experience is dynamic, because it changes over time as the circumstances change.

ISO 9241-210[1] defines user experience as "a person's perceptions and responses that result from the use or anticipated use of a product, system or service". According to the ISO definition, user experience includes all the users' emotions, beliefs, preferences, perceptions, physical and psychological responses, behaviors and accomplishments that occur before, during and after use. The ISO also lists three factors that influence user experience: system, user and the context of use. Note 3 of the standard hints that usability addresses aspects of user experience, e.g. "usability criteria can be used to assess aspects of user experience". Unfortunately, the standard does not go further in clarifying the relation between user experience and usability. Clearly, the two are overlapping concepts, with usability including pragmatic aspects (getting a task done) and user experience focusing on users' feelings stemming from both pragmatic and hedonic aspects of the system. In addition to the ISO standard, there exist several other definitions for user experience;[2] some of them have been studied by Law et al. (2009).[3]

The term user experience was brought to wider knowledge by Donald Norman, User Experience Architect, in the mid-1990s.[4] Several developments affected the rise of interest in the user experience:

1. Recent advances in mobile, ubiquitous, social, and tangible computing technologies have moved human-computer interaction into practically all areas of human activity. This has led to a shift away from usability engineering to a much richer scope of user experience, where users' feelings, motivations, and values are given as much, if not more, attention than efficiency, effectiveness and basic subjective satisfaction (i.e. the three traditional usability metrics[5]).[6]

2. In website design, it was important to combine the interests of different stakeholders: marketing, branding, visual design, and usability. Marketing and branding people needed to enter the interactive world where usability was important. Usability people needed to take marketing, branding, and aesthetic needs into account when designing websites. User experience provided a platform to cover the interests of all stakeholders: making websites easy to use, valuable, and effective for visitors. This is why several early user-experience publications focus on website user experience.[7][8][9][10]

The field of user experience was established to cover the holistic perspective on how a person feels about using a system. The focus is on pleasure and value rather than on performance. The exact definition, framework, and

elements of user experience are still evolving.


Influences on user experience

Many factors can influence a user's experience with a system. To address the variety, factors influencing user experience have been classified into three main categories: user's state and previous experience, system properties, and the usage context (situation).[11] Studying typical users, contexts and their interaction helps designing the system.

Momentary emotion or overall user experience

Single experiences influence the overall user experience:[12] the experience of a key click affects the experience of typing a text message, the experience of typing a message affects the experience of text messaging, and the experience of text messaging affects the overall user experience with the phone. The overall user experience is not simply a sum of smaller interaction experiences, because some experiences are more salient than others. Overall user experience is also influenced by factors outside the actual interaction episode: brand, pricing, friends' opinions, reports in the media, etc. One branch of user experience research focuses on emotions, that is, momentary experiences during interaction: designing affective interaction and evaluating emotions. Another branch is interested in understanding the long-term relation between user experience and product appreciation. Industry in particular sees good overall user experience with a company's products as critical for securing brand loyalty and enhancing the growth of the customer base. All temporal levels of user experience (momentary, episodic, and long-term) are important, but the methods to design and evaluate these levels can be very different.

References
[1] ISO FDIS 9241-210:2009. Ergonomics of human system interaction – Part 210: Human-centered design for interactive systems (formerly known as 13407). International Organization for Standardization (ISO), Switzerland.
[2] http://www.allaboutux.org/ux-definitions
[3] Law, E., Roto, V., Hassenzahl, M., Vermeeren, A., Kort, J.: Understanding, Scoping and Defining User Experience: A Survey Approach. In Proceedings of Human Factors in Computing Systems conference, CHI'09, 4–9 April 2009, Boston, MA, USA (2009)
[4] Donald Norman, Jim Miller, Austin Henderson: What You See, Some of What's in the Future, And How We Go About Doing It: HI at Apple Computer. Proceedings of CHI 1995, Denver, Colorado, USA
[5] ISO 9241-11:1998, Ergonomics of Human System Interaction: Guidance on usability
[6] COST Action IC0904-TwinTide: Towards the Integration of IT Design and Evaluation. (http://www.cost.esf.org/domains_actions/ict/Actions/IC0904-Towards-the-Integration-of-Transectorial-IT-Design-and-Evaluation-End-date-November-2013)
[7] Fleming, J. 1998, Web Navigation: Designing the User Experience. O'Reilly & Associates, Inc., USA.
[8] Garrett, J. 2002, Elements of User Experience: User-Centered Design for the Web. New Riders Press, USA.
[9] Kuniavsky, M. 2003, Observing The User Experience: A Practitioner's Guide to User Research. Morgan Kaufmann Publishers, Elsevier Science, USA.
[10] Berry, D. 2000, The user experience – The iceberg analogy of usability. Technical library of the IBM Ease of Use Team. http://www.ibm.com/developerworks/library/w-berry/
[11] Hassenzahl, M. & Tractinsky, N. 2006, User Experience – A Research Agenda. Behaviour and Information Technology, Vol. 25, No. 2, March–April 2006, pp. 91–97
[12] Forlizzi, J., Battarbee, K. 2004, Understanding Experience in Interactive Systems. Proceedings of DIS2004, 1–4 August 2004, Cambridge, USA.



External links
Peer-reviewed definition of User Experience ( user_experience_and_experience_design.html) with commentary by Don Norman

User experience design

User Experience Design (UXD or UED) is a broad term covering all aspects of a person's experience with a system, including the interface, graphics, industrial design, physical interaction, and the manual.[1] It also refers to the application of user-centered design practices to generate cohesive, predictive, and desirable designs based on a holistic consideration of users' experience. In most cases, user experience design fully encompasses traditional Human-Computer Interaction (HCI) design and extends it by addressing all aspects of a product or service as perceived by users.[2]

The field of user experience design has roots in human factors and ergonomics, a field that since the late 1940s has focused on the interaction between human users, machines, and the contextual environment to design systems that address the user's experience.[3] The term "user experience" itself came into existence in the early 1990s with the proliferation of computers in workplaces. It was Donald Norman, User Experience Architect, who coined the term and brought it to wider knowledge.[4] The term also has a more recent connection to user-centered design and human-computer interaction, and incorporates elements from similar user-centered design fields.

Elements of User Experience Design

The term user experience design grew rapidly in usage after the commencement of the information age, and many generalizations of its components are based on the building blocks of user experience design for digital systems. User experience design is largely defined in terms of broader topics, including users' emotions, the appeal of a UI, and visual design.

Visual Design
Visual design, also commonly known as graphic design, communication design, or visual communication, represents the aesthetics or look-and-feel of the front end of any user interface. Graphic treatment of interface elements (the "look" in look-and-feel) is often perceived as the visual design. The purpose of visual design is to use visual elements like colors, images, typography, and symbols to convey a message to its audience. Fundamentals of Gestalt psychology and visual perception give a cognitive perspective on how to create effective visual communication.[5]



Information Architecture
Information architecture is the art and science of structuring and organizing the information in products and services to support usability and findability. Some basic concepts attached to information architecture are described below.

Information
In the context of information architecture, information is separate from knowledge and data, lying somewhere between the two. It is information of all shapes and sizes: websites, documents, software applications, images, and more. It is also concerned with metadata: terms used to describe and represent content objects such as documents, people, processes, and organizations.

Structuring, Organization and Labeling
Structuring is reducing information to its basic building units and then relating those units to each other. Organization involves grouping these units in a distinctive and meaningful manner. Labeling is using appropriate wording to support easy navigation and findability.

Finding and Managing
Findability is the most critical success factor for information architecture. If users are not able to find the required information by browsing, searching, or asking, then the findability of the architecture fails. Navigation needs to be clearly conveyed to ease finding of the content.
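The structuring, labeling, and findability ideas above can be sketched as a labeled content tree with a simple search over its labels. The site structure and label names below are illustrative placeholders, not from the text:

```python
# Minimal sketch: content units structured into a tree, organized into
# groups, and labeled so that a search over labels can find them.
site = {
    "Products": {"Phones": ["Model A", "Model B"], "Tablets": ["Model C"]},
    "Support": {"FAQ": [], "Contact": []},
}

def find(tree, term, path=()):
    """Return label paths whose label contains the search term."""
    hits = []
    items = tree.items() if isinstance(tree, dict) else [(x, None) for x in tree]
    for label, children in items:
        here = path + (label,)
        if term.lower() in label.lower():
            hits.append(" > ".join(here))
        if children:
            hits += find(children, term, here)
    return hits

print(find(site, "model"))
# ['Products > Phones > Model A', 'Products > Phones > Model B',
#  'Products > Tablets > Model C']
```

A label that no search term matches is, in this toy model, content with poor findability — the failure mode the paragraph describes.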

Interaction Design
There are many key factors to understanding interaction design and how it can enable a pleasurable end-user experience. It is well recognized that building a great user experience requires interaction design to play a pivotal role in helping define what works best for the users. High demand for improved user experiences and a strong focus on end-users have made interaction designers critical in conceptualizing designs that match user expectations and the standards of the latest UI patterns and components. While working, interaction designers take several things into consideration. A few of them are listed below:[6]
Create the layout of the interface
Define interaction patterns best suited to the context
Incorporate user needs collected during user research into the designs
Features and information that are important to the user
Interface behavior, such as drag-and-drop, selections, and mouse-over actions
Effectively communicate the strengths of the system
Make the interface intuitive by building affordances
Maintain consistency throughout the system

In the last few years, the role of the interaction designer has shifted from merely specifying UI components and communicating them to the engineers. Designers now have more freedom to design contextual interfaces based on meeting user needs.[7]



Usability
Usability is the extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.[8] Usability applies to all tools used by humans and extends to both digital and non-digital devices. Thus it is a subset of user experience but not wholly contained by it. The section of usability that intersects with user experience design relates to a human's ability to use a system or application. Good usability certainly affects user experience in a positive way, but it is not guaranteed to do so.

Accessibility
Accessibility of a system describes its ease of reach, use, and understanding. In terms of user experience design, it can also be related to the overall comprehensibility of the information and features. It helps shorten the learning curve attached to the system. In many contexts accessibility relates to ease of use for people with disabilities and comes under usability.
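The three ISO 9241-11 measures named above — effectiveness, efficiency, and satisfaction — can be computed from task observations. The data format and the 1–7 satisfaction scale below are assumptions for illustration; the standard defines the concepts, not this encoding:

```python
# Sketch: the three ISO 9241-11 usability measures from task observations.
tasks = [
    {"completed": True,  "seconds": 42, "satisfaction": 6},  # 1..7 scale
    {"completed": True,  "seconds": 58, "satisfaction": 5},
    {"completed": False, "seconds": 90, "satisfaction": 2},
]

# Effectiveness: the proportion of tasks completed successfully.
effectiveness = sum(t["completed"] for t in tasks) / len(tasks)
# Efficiency: mean time spent on successfully completed tasks.
done = [t for t in tasks if t["completed"]]
efficiency = sum(t["seconds"] for t in done) / len(done)
# Satisfaction: mean subjective rating across all tasks.
satisfaction = sum(t["satisfaction"] for t in tasks) / len(tasks)

print(f"{effectiveness:.0%}  {efficiency:.0f}s  {satisfaction:.1f}/7")  # 67%  50s  4.3/7
```

All three are context-dependent yardsticks: the same numbers mean different things for different users, goals, and contexts of use.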

Human-Computer Interaction
Human-computer interaction (HCI) is concerned with the design, evaluation, and implementation of interactive computing systems for human use, and with the study of major phenomena surrounding them.[9] Human-computer interaction is a main contributor to user experience design because of its emphasis on human performance rather than on mere usability. It provides key research findings that inform the improvement of systems for people. HCI extends its study toward more integrated interactions, such as tangible interaction, which is generally not covered in the practice of user experience.

User experience design incorporates most or all of the above disciplines to positively impact the overall experience a person has with a particular interactive system and its provider. User experience design most frequently defines a sequence of interactions between a user (an individual person) and a system, virtual or physical, designed to meet or support user needs and goals, primarily, while also satisfying system requirements and organizational objectives. Typical outputs include:
Site audit (usability study of existing assets)
Flows and navigation maps
User stories or scenarios
Personas (fictitious users who act out the scenarios)
Site maps and content inventory
Wireframes (screen blueprints or storyboards)
Prototypes (for interactive or in-the-mind simulation)
Written specifications (describing the behavior or design)
Graphic mockups (precise visuals of the expected end result)

As with the fields mentioned above, user experience design is a highly multi-disciplinary field, incorporating aspects of psychology, anthropology, architecture, sociology, computer science, graphic design, industrial design, and cognitive science. Depending on the purpose of the product, UX may also involve content design disciplines such as communication design, instructional design, or game design. The subject matter of the content may also warrant collaboration with a subject-matter expert (SME) from business, government, or private-sector backgrounds on planning the UX. More recently, content strategy has come to represent a sub-field of UX.



User experience design is integrated into software development and other forms of application development to inform feature requirements and interaction plans based upon the user's goals. Newly introduced software must take into account the dynamic pace of technological advancement and the need for change. The benefits associated with integrating these design principles include:
Avoiding unnecessary product features
Simplifying design documentation and customer-facing technical publications
Improving the usability of the system and therefore its acceptance by customers
Expediting design and development through detailed and properly conceived guidelines
Incorporating business and marketing goals while catering to the user

References
[1] Peter Merholz (2007). "Peter in Conversation with Don Norman About UX & Innovation" (http://www.adaptivepath.com/ideas/e000862). Adaptive Path.
[2] "What is user experience design?" (http://www-01.ibm.com/software/ucd/designconcepts/whatisUXD.html). IBM.
[3] Human Factors and Ergonomics Society. HFES History.
[4] uxdesign, "UX Design Defined" (http://uxdesign.com/ux-defined), 16 August 2010
[5] Visual Design (http://webstyleguide.com/wsg3/7-page-design/3-visual-design.html), The gestalt of visual design.
[6] Steve Psomas (2007). "The Five Competencies of User Experience Design" (http://www.uxmatters.com/mt/archives/2007/11/the-five-competencies-of-user-experience-design.php). UX Matters.
[7] Jonas Lowgren. "Interaction Design" (http://www.interaction-design.org/encyclopedia/interaction_design.html).
[8] International standards for HCI and usability (http://www.usabilitynet.org/tools/r_international.htm#9241-11), ISO 9241-11: Guidance on Usability (1998)
[9] Definition of HCI (http://old.sigchi.org/cdg/cdg2.html#2_1), Chapter 2: Human-Computer Interaction, ACM SIGCHI Curricula for Human-Computer Interaction

Further reading
Donald Norman: The Design of Everyday Things, ISBN 978-0-465-06710-7
Alan Cooper: The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity, ISBN 0-672-31649-8
Bill Buxton: Sketching User Experiences: Getting the Design Right and the Right Design, ISBN 978-0-12-374037-3
Alan Cooper: About Face 3: The Essentials of Interaction Design, ISBN 978-0-470-08411-3
Jenifer Tidwell: Designing Interfaces, ISBN 978-1-4493-7970-4
Christian Moser: User Experience Design: Mit erlebniszentrierter Softwareentwicklung zu Produkten, die begeistern, ISBN 978-3642133626

External links
Usability Professionals' Association
Interaction Design Association

User experience evaluation



User experience (UX) evaluation means investigating how a person feels about using a system (a product, service, non-commercial item, or a combination of them). It is non-trivial to evaluate user experience and come up with solid results, since user experience is subjective, context-dependent, and dynamic over time.[1] Laboratory experiments may work well for studying a specific aspect of user experience, but holistic user experience is optimally studied over a longer period of time with real users in a natural environment. Detailed guidance for user experience evaluation is hard to give, since there are many different dimensions to consider when choosing the evaluation approach:
Goal: summative (score) or formative (areas for improvement)
Approach: objective or subjective
Data: quantitative or qualitative
Granularity: momentary, episodic, or overall UX
Setup: lab or field

In all cases, however, there are certain aspects of user experience that evaluators are interested in (measures), and certain procedures and techniques used for collecting the data (methods). When investigating user experience evaluation methods, we can identify methods for emotion assessment and overall UX assessment. The measures and methods for these two evaluation types are described below. Episodic UX can be evaluated with either approach, depending on the case.

Emotion evaluation
When investigating momentary user experiences, we can evaluate the level of positive affect, negative affect, joy, surprise, frustration, etc. The measures for emotions are bound to the methods used for emotion assessment, but typical emotion measures are valence and arousal. Objective emotion data can be collected by psychophysiological measurements or by observing expressed emotions. Subjective emotion data can be collected using self-report[2] methods, which can be verbal or non-verbal. Examples of emotion evaluation methods:
Psychophysiological emotion measurements, which aim to identify emotions from physiological changes in the muscles (e.g., the face), pupils, skin, heart, brain, etc.
Expression observers, who monitor a person's facial and other gestures or tone of voice to identify emotions manually
Think-aloud protocol, which can be used for reporting emotions (real-time verbal self-report)
PANAS (retrospective verbal self-report)
Geneva emotion wheel[3] (retrospective verbal self-report)
Emotion Slider[4] (continuous non-verbal self-report)
Sensual Evaluation Instrument[5] (snapshot non-verbal self-report)
PrEmo,[6] a new version of EmoCards for assessing emotion[7] (snapshot non-verbal self-report)
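Self-report data of this kind typically yields a time series of valence/arousal samples per interaction episode. The sketch below shows one plausible way to aggregate such samples; the sample format, -1..+1 scales, and values are invented for illustration:

```python
# Sketch: aggregating momentary valence/arousal self-reports collected
# during an interaction episode (scales and data are illustrative).
samples = [
    {"t": 0,  "valence": 0.2,  "arousal": 0.5},
    {"t": 5,  "valence": -0.5, "arousal": 0.9},  # a frustrating moment
    {"t": 10, "valence": 0.6,  "arousal": 0.3},
]

# Mean valence summarizes how positive the episode felt overall;
# the most arousing sample points at the moment worth investigating.
mean_valence = sum(s["valence"] for s in samples) / len(samples)
most_arousing = max(samples, key=lambda s: s["arousal"])

print(round(mean_valence, 2), most_arousing["t"])
```

In practice such momentary data feeds the design of affective interaction: the high-arousal, low-valence sample at t=5 marks the interaction step to redesign.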



Overall UX evaluation
In contrast to identifying a momentary emotion, overall UX evaluation investigates how a person feels about a system as a whole, typically after using it for a while. Many of the overall UX evaluation methods are also suitable for evaluating episodic UX, i.e., assessing how a person feels about a specific interaction episode or after executing a task. There is no agreement on the exact measures for evaluating overall UX with a system, largely because different products aim at different kinds of experiences. However, there are some high-level constructs of user experience that can be used as the basis for defining user experience measures, for example:
1. Utility: Does the user perceive the functions in the system as useful and fit for purpose?
2. Usability: Does the user feel that it is easy and efficient to get things done with the system?
3. Aesthetics:[8] Does the user see the system as visually attractive? Does it feel pleasurable in the hand?
4. Identification: Can I identify myself with the product? Do I look good when using it?
5. Stimulation: Does the system give me inspiration? Or wow experiences?
6. Value: Is the system important to me? What is its value to me?

Since the importance of the above user experience constructs differs from person to person, it is an interesting option to define the overall UX measures together with each study participant. Another option for evaluating overall UX is to use a simple scale from positive to negative, without further consideration of the user experience constructs. Overall UX assessment is methodologically different from objective emotion assessment, but similar to subjective emotion assessment. Generic subjective user experience evaluation methods include interviews, questionnaires, story-telling, and, often, a combination of these. An individual method can collect data about a specific set of user experience constructs, just as usability testing collects data about the usability construct. Examples of overall UX evaluation methods (excluding traditional usability methods):
Diary methods[9] for self-reporting experiences during field studies
Experience Sampling Method (ESM)[10] for self-reporting during field studies
Day Reconstruction Method (DRM)[11] story-telling to reveal meaningful experiences during field studies
AttrakDiff[12][13] questionnaire for overall UX evaluation
Ladder interviews, e.g., to find out the attitudes or values behind behaviour or experience
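Questionnaires like AttrakDiff score semantic-differential items (word pairs rated between two poles) and average them into construct scales. The sketch below shows that scoring mechanic only; the item names and the pragmatic/hedonic grouping are illustrative placeholders, not the actual AttrakDiff items:

```python
# Sketch: scoring an AttrakDiff-style semantic-differential questionnaire.
# Items and scale groupings below are invented for illustration.
responses = {                      # 1 (negative pole) .. 7 (positive pole)
    "confusing-clear": 6,
    "cumbersome-direct": 5,
    "dull-captivating": 3,
    "cheap-premium": 4,
}
scales = {
    "pragmatic": ["confusing-clear", "cumbersome-direct"],
    "hedonic": ["dull-captivating", "cheap-premium"],
}

# Each construct score is the mean of its items' ratings.
scores = {name: sum(responses[i] for i in items) / len(items)
          for name, items in scales.items()}

print(scores)  # {'pragmatic': 5.5, 'hedonic': 3.5}
```

A profile like this (high pragmatic, middling hedonic quality) is the kind of overall-UX result such questionnaires are designed to surface.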

UX in video games
A relatively new pursuit in video game playtesting is UX and usability research. An increasing number of companies, including some of the world's biggest publishers, have begun outsourcing UX evaluation or opening their own in-house labs.[14][15][16] Researchers use a variety of HCI and psychological techniques to examine the effectiveness of the user experience of games during the design process.[17] Some companies are also starting to use biometrics to scientifically measure the relationship between in-game events and the player's emotions and feelings (the UX), such as Vertical Slice and Serco ExperienceLab in the UK,[18][19] and Valve Software, Electronic Arts, BoltPeters, and VMC Labs in the USA and Canada.[20][21][22][23] Interest in this area comes from both academia and industry, sometimes enabling collaborative work.[24][25] Game UX work has been featured at professional venues such as the Game Developers Conference (GDC).[26][27]



References
[1] Law, E., Roto, V., Hassenzahl, M., Vermeeren, A., Kort, J.: Understanding, Scoping and Defining User Experience: A Survey Approach. In Proceedings of Human Factors in Computing Systems conference, CHI'09, 4–9 April 2009, Boston, MA, USA (2009)
[2] http://en.wiktionary.org/wiki/self_report
[3] Baenziger, T., Tran, V. and Scherer, K.R. (2005) The Emotion Wheel. A Tool for the Verbal Report of Emotional Reactions, poster presented at the conference of the International Society of Research on Emotion, Bari, Italy.
[4] Laurans, G., Desmet, P.M.A., & Hekkert, P.P.M. (2009). The emotion slider: a self-report device for the continuous measurement of emotion. Proceedings of the 2009 International Conference on Affective Computing and Intelligent Interaction. Amsterdam, The Netherlands.
[5] Isbister, K., Höök, K., Sharp, M., and Laaksolahti, J. 2006. The sensual evaluation instrument: developing an affective evaluation tool. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, 22–27 April 2006). CHI '06. ACM, New York, NY, 1163–1172
[6] http://www.premotool.com/
[7] Desmet, P.M.A., Overbeeke, C.J., Tax, S.J.E.T. (2001). Designing products with added emotional value: development and application of an approach for research through design. The Design Journal, 4(1), 32–47.
[8] Moshagen, M. & Thielsch, M. T. (2010). Facets of visual aesthetics. In: International Journal of Human-Computer Studies, 68 (10), 689–709.
[9] Bolger, N., Davis, A., & Rafaeli, E. (2003). Diary methods: Capturing life as it is lived. Annual Review of Psychology, 54, 579–616.
[10] Csikszentmihalyi, M., Larson, R. (1987). Validity and reliability of the Experience-Sampling Method. Journal of Nervous and Mental Diseases, Sep 1987; 175(9): 526–536.
[11] Kahneman, D., Krueger, A., Schkade, D., Schwarz, N., and Stone, A. (2004). A Survey Method for Characterizing Daily Life Experience: The Day Reconstruction Method. Science, 306:5702, pp. 1776–1780.
[12] http://www.attrakdiff.de/en/home/
[13] Hassenzahl, M., Burmester, M., & Koller, F. (2003). AttrakDiff: Ein Fragebogen zur Messung wahrgenommener hedonischer und pragmatischer Qualität. In J. Ziegler & G. Szwillus (Eds.), Mensch & Computer 2003. Interaktion in Bewegung (pp. 187–196). Stuttgart, Leipzig: B.G. Teubner.
[14] Halo 3: How Microsoft Labs Invented a New Science of Play (http://www.wired.com/gaming/virtualworlds/magazine/15-09/ff_halo?currentPage=all). Retrieved on 21 October 2011.
[15] Bolt, Nate. (2009-01-22) Researching Video Games the UX Way. Boxes and Arrows: The design behind the design (http://www.boxesandarrows.com/view/researching-video). Retrieved on 21 October 2011.
[16] THQ Chooses The Guildhall at SMU to House New Usability Lab | games industry | MCV (http://www.mcvuk.com/press-releases/56236/THQ-Usability-Lab). Retrieved on 21 October 2011.
[17] Hong, T. (2008) Shoot to Thrill: Bio-Sensory Reactions to 3D Shooting Games, Game Developer Magazine, October
[18] Video Game Usability and User Experience (http://www.verticalslice.co.uk). Vertical Slice. Retrieved on 21 October 2011.
[19] Game usability testing (http://www.playablegames.net). PlayableGames. Retrieved on 21 October 2011.
[20] Valve (http://www.valvesoftware.com/). Retrieved on 21 October 2011.
[21] EA Games – Electronic Arts (http://www.ea.com/). Retrieved on 21 October 2011.
[22] VMC Consulting – Tailored Solutions for Your Business (http://www.vmc.com/gamelabs.aspx). Retrieved on 21 October 2011.
[23] Bolt | Peters – Research, design, and products (http://boltpeters.com/). Retrieved on 21 October 2011.
[24] Nacke, L., Ambinder, M., Canossa, A., Mandryk, R., Stach, T. (2009). "Game Metrics and Biometrics: The Future of Player Experience Research". Panel at Future Play 2009
[25] 8–9 April 2010, Seminar Presentation at Games Research Methods Seminar, "Using physiological measures in conjunction with other UX approaches for better understanding of the player's gameplay experiences", University of Tampere, Finland
[26] Ambinder, M. (2011) Biofeedback in Gameplay: How Valve Measures Physiology to Enhance Gaming Experience. Game Developers Conference 2011
[27] Zammitto, V. (2011) The Science of Play Testing: EA's Methods for User Research. Game Developers Conference 2011


1. Usability or User engineering

Usability engineering
Usability engineering is a field that is concerned generally with human-computer interaction and specifically with making human-computer interfaces that have high usability or user-friendliness. In effect, a user-friendly interface is one that allows users to effectively and efficiently accomplish the tasks for which it was designed and that users rate positively on opinion or emotional scales. Assessing the usability of an interface and recommending ways to improve it is the purview of the Usability Engineer. The largest subsets of Usability Engineers work to improve the usability of software graphical user interfaces (GUIs), web-based user interfaces, and voice user interfaces (VUIs).

Several broad disciplines, including Psychology, Human Factors, and Cognitive Science, subsume usability engineering, but the theoretical foundations of the field come from more specific domains: human perception and action; human cognition; behavioral research methodologies; and, to a lesser extent, quantitative and statistical analysis techniques. When usability engineering began to emerge as a distinct area of professional practice in the mid- to late 1980s, many usability engineers had a background in Computer Science or in a sub-field of Psychology such as Perception, Cognition, or Human Factors. Today, these academic areas still serve as springboards for the professional practitioner of usability engineering, but Cognitive Science departments and academic programs in Human-Computer Interaction now also produce their share of practitioners in the field.

The term usability engineering (in contrast to interaction design and user experience design) implies more of a focus on assessing and making recommendations to improve usability than on design, though Usability Engineers may still engage in design to some extent, particularly the design of wire-frames or other prototypes.

Standards and guidelines

Usability engineers sometimes work to shape an interface such that it adheres to accepted operational definitions of user requirements. For example, the International Organisation for Standardisation-approved definitions of usability (see, e.g., ISO 9241 part 11) are held by some to be a context-dependent yardstick for the effectiveness, efficiency, and satisfaction with which specific users should be able to perform tasks. Advocates of this approach engage in task analysis, then prototype interface design, and usability testing on those designs. On the basis of such tests, the technology is (ideally) re-designed or (occasionally) the operational targets for user performance are revised [Dillon, 2000].

The National Institute of Standards and Technology [1] has collaborated with industry to develop the Common Industry Specification for Usability – Requirements [2], which serves as a guide for many industry professionals. The specifications for successful usability in biometrics [3] were also developed by NIST. Usability.gov [4] provides a tutorial and a wide general reference for the design of usable websites.

Usability, especially with the goal of Universal Usability, encompasses the standards and guidelines of design for accessibility. The aim of these guidelines is to facilitate the use of a software application for people with disabilities. Some primary guidelines for web accessibility are:
1. The Web Accessibility Initiative Guidelines [5]
2. The Section 508 [6] government guidelines applicable to all public-sector websites
3. The ADA Guidelines [7] for accessibility of state and local government websites
4. The IBM Guidelines [8] for accessibility of websites



Methods and tools

Usability Engineers conduct usability evaluations of existing or proposed interfaces, and their findings are fed back to the designer for use in design or redesign. Common usability evaluation methods include:
usability testing (the gold standard of usability engineering, but the most involved and expensive method)
interviews
focus groups
questionnaires/surveys
cognitive walkthroughs
heuristic evaluations
RITE method
cognitive task analysis
contextual inquiry
think-aloud protocol

Usability testing, the gold standard, is when participants are recruited and asked to use the actual or prototype interface, and their reactions, behaviors, errors, and self-reports in interviews are carefully observed and recorded by the Usability Engineer. On the basis of this data, the Usability Engineer recommends interface changes to improve usability.

There are a variety of online resources that make the job of the Usability Engineer a little easier. Some examples include:
1. The Web Metrics Tool Suite [9] is a product of the National Institute of Standards and Technology [1]. This toolkit is focused on evaluating the HTML of a website against a wide range of usability guidelines and includes:
Web Static Analyzer Tool (WebSAT) – checks web page HTML against typical usability guidelines
Web Category Analysis Tool (WebCAT) – lets the usability engineer construct and conduct a web category analysis
Web Variable Instrumenter Program (WebVIP) – instruments a website to capture a log of user interaction
Framework for Logging Usability Data (FLUD) – a file format and parser for representation of user interaction logs
FLUDViz Tool – produces a 2D visualization of a single user session
VisVIP Tool – produces a 3D visualization of user navigation paths through a website
TreeDec – adds navigation aids to the pages of a website
2. The Usability Testing Environment (UTE) [10], produced by Mind Design Systems [11], is available freely to federal government employees. According to the official company website, this tool consists of two tightly integrated applications. The first is the UTE Manager, which helps a tester set up test scenarios (tasks) as well as survey and demographic questions. The UTE Manager also compiles the test results and produces customized reports and summary data, which can be used as quantitative measures of usability observations and recommendations. The second UTE application is the UTE Runner. The UTE Runner presents the test participants with the test scenarios (tasks) as well as any demographic and survey questions. In addition, the UTE Runner tracks the actions of the subject throughout the test, including clicks, keystrokes, and scrolling.
3. The UsableNet Liftmachine [12] is a product of UsableNet and implements the Section 508 usability and accessibility guidelines as well as the W3C Web Accessibility Initiative Guidelines [5].
It is important to remember that online tools are only an aid, and do not substitute for a complete usability engineering analysis.
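Tools like WebVIP, FLUD, and the UTE Runner all reduce to the same idea: record timestamped interaction events (clicks, keystrokes, scrolling) and summarize them afterwards. The sketch below uses an invented log format for illustration; real tools define their own formats:

```python
# Sketch: summarizing a user-interaction log of the kind usability tools
# capture. Each line is "<timestamp-in-seconds> <event-type>".
log = """\
12.0 click
12.4 keystroke
13.1 keystroke
15.9 scroll
16.2 click
"""

counts = {}
times = []
for line in log.strip().splitlines():
    timestamp, event = line.split()
    counts[event] = counts.get(event, 0) + 1
    times.append(float(timestamp))

duration = times[-1] - times[0]   # elapsed time across logged events
print(counts)                     # {'click': 2, 'keystroke': 2, 'scroll': 1}
print(duration)                   # 4.2
```

Event counts and elapsed time per task are the raw material for the quantitative usability measures (e.g., efficiency) that such tool suites report.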



Research resources
Some well-known practitioners in the field are Donald Norman, Jakob Nielsen, and John M. Carroll. Nielsen and Carroll have both written books on the subject of usability engineering. Nielsen's book is aptly titled Usability Engineering, and was published in 1993. Carroll wrote "Making Use: Scenario-Based Design of Human-Computer Interactions" in 2000, and co-authored "Usability Engineering: Scenario-Based Development of Human-Computer Interaction" with Mary Beth Rosson in 2001. Some other field leaders are Alan Cooper [13], Larry Constantine, and Steve Krug [14], the author of "Don't Make Me Think! A Common Sense Approach to Web Usability".

There are many books written on usability engineering. A few of the more popular recently published books are as follows:
Nielsen, Jakob (1993). Usability Engineering. Morgan Kaufmann. pp. 362. ISBN 978-0-12-518406-9.
Spool, Jared; Tara Scanlon; Carolyn Snyder; Terri DeAngelo (1998). Web Site Usability: A Designer's Guide. Morgan Kaufmann. pp. 176. ISBN 978-1-55860-569-5.
Mayhew, Deborah (1999). The Usability Engineering Lifecycle: A Practitioner's Handbook. Morgan Kaufmann. pp. 560. ISBN 978-1-55860-561-9.
Faulkner, Xristine. Usability Engineering. Palgrave. pp. 256. ISBN 978-0-333-77321-5.
Smith, Michael J. (2001). Usability Evaluation and Interface Design: Cognitive Engineering, Intelligent Agents, and Virtual Reality, Volume 1 (Human Factors and Ergonomics). CRC Press. pp. 1592. ISBN 978-0-8058-3607-3.
Rosson, Mary Beth; John Millar Carroll (2002). Usability Engineering: Scenario-Based Development of Human-Computer Interaction. Morgan Kaufmann. pp. 422.
Jacko, Julie (2012). Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies, and Emerging Applications. CRC Press. ISBN 978-1-4398-2943-1.
Leventhal, Laura (2007). Usability Engineering: Process, Products & Examples. Prentice Hall. pp. 336. ISBN 978-0-13-157008-5.
Sears, Andrew; Julie A. Jacko (2007). The Human-Computer Interaction Handbook: Fundamentals, Evolving Technologies and Emerging Applications. CRC Press. pp. 1384. ISBN 978-0-8058-5870-9.

Andrew Dillon: Group dynamics meet cognition: combining socio-technical concepts and usability engineering in the design of information systems.
Jakob Nielsen: Usability Engineering. Academic Press, Boston, 1993. ISBN 0-12-518405-0.
John M. Carroll: Making Use: Scenario-Based Design of Human-Computer Interactions. MIT Press, Cambridge, MA, 2000. ISBN 0-262-03279-1.
Mary Beth Rosson, John M. Carroll: Usability Engineering: Scenario-Based Development of Human-Computer Interaction. Morgan Kaufmann Publishers Inc., San Francisco, CA, 2001. ISBN 1-55860-712-9.



[1] http://zing.ncsl.nist.gov/
[2] http://zing.ncsl.nist.gov/iusr/documents/CISU-R-IR7432.pdf
[3] http://zing.ncsl.nist.gov/biousa/docs/Usability_and_Biometrics_final2.pdf
[4] http://www.usability.gov/
[5] http://www.w3.org/WAI/intro/wcag.php
[6] http://www.section508.gov/index.cfm?FuseAction=Content&ID=12
[7] http://www.ada.gov/websites2.htm
[8] http://www-03.ibm.com/able/guidelines/web/accessweb.html
[9] http://zing.ncsl.nist.gov/WebTools/
[10] http://www.mindd.com/Content.aspx?pid=UTEStandard
[11] http://www.mindd.com/
[12] http://www.usablenet.com/usablenet_liftmachine.html
[13] http://www.cooper.com/
[14] http://www.sensible.com/

Usability

Usability is the ease of use and learnability of a human-made object. The object of use can be a software application, website, book, tool, machine, process, or anything a human interacts with. A usability study may be conducted as a primary job function by a usability analyst or as a secondary job function by designers, technical writers, marketing personnel, and others. It is widely used in consumer electronics, communication, and knowledge transfer objects (such as a cookbook, a document, or online help) and mechanical objects such as a door handle or a hammer. Usability includes methods of measuring usability, such as needs analysis[1] and the study of the principles behind an object's perceived efficiency or elegance. In human-computer interaction and computer science, usability studies the elegance and clarity with which the interaction with a computer program or a web site (web usability) is designed. Usability differs from user satisfaction insofar as the former also embraces usefulness (see Computer user satisfaction).

The primary notion of usability is that an object designed with a generalized users' psychology and physiology in mind is, for example: more efficient to use (takes less time to accomplish a particular task); easier to learn (operation can be learned by observing the object); and more satisfying to use. Complex computer systems find their way into everyday life, and at the same time the market is saturated with competing brands. This has made usability more popular and widely recognized in recent years, as companies see the benefits of researching and developing their products with user-oriented methods instead of technology-oriented methods. By understanding and researching the interaction between product and user, the usability expert can also provide insight that is unattainable by traditional company-oriented market research. For example, after observing and interviewing users, the usability expert may identify needed functionality or design flaws that were not anticipated. A method called contextual inquiry does this in the naturally occurring context of the user's own environment. In the user-centered design paradigm, the product is designed with its intended users in mind at all times. In the user-driven or participatory design paradigm, some of the users become actual or de facto members of the design team.[2]

The term user friendly is often used as a synonym for usable, though it may also refer to accessibility. Usability describes the quality of user experience across websites, software, products, and environments. There is no consensus about the relation of the terms ergonomics (or human factors) and usability. Some think of usability as the software specialization of the larger topic of ergonomics. Others view these topics as tangential, with ergonomics focusing on physiological matters (e.g., turning a door handle) and usability focusing on psychological matters (e.g., recognizing that a door can be opened by turning its handle). Usability is also important in website development (web usability). According to Jakob Nielsen, "Studies of user behavior on the Web find a low tolerance for difficult designs or slow sites. People don't want to wait. And they don't want to learn how to use a home page. There's no such thing as a training class or a manual for a Web site. People have to be able to grasp the functioning of the site immediately after scanning the home page, for a few seconds at most."[3] Otherwise, most casual users simply leave the site and browse or shop elsewhere.


ISO defines usability as "The extent to which a product can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use." The word "usability" also refers to methods for improving ease-of-use during the design process. Usability consultant Jakob Nielsen and computer science professor Ben Shneiderman have written (separately) about a framework of system acceptability, where usability is a part of "usefulness" and is composed of:[4]

Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
Efficiency: Once users have learned the design, how quickly can they perform tasks?
Memorability: When users return to the design after a period of not using it, how easily can they re-establish proficiency?
Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
Satisfaction: How pleasant is it to use the design?

Usability is often associated with the functionalities of the product (cf. ISO definition, below), in addition to being solely a characteristic of the user interface (cf. framework of system acceptability, also below, which separates usefulness into utility and usability). For example, in the context of mainstream consumer products, an automobile lacking a reverse gear could be considered unusable according to the former view, and lacking in utility according to the latter view. When evaluating user interfaces for usability, the definition can be as simple as "the perception of a target user of the effectiveness (fit for purpose) and efficiency (work or time required to use) of the interface". Each component may be measured subjectively against criteria, e.g., Principles of User Interface Design, to provide a metric, often expressed as a percentage. It is important to distinguish between usability testing and usability engineering.
Usability testing is the measurement of ease of use of a product or piece of software. In contrast, usability engineering (UE) is the research and design process that ensures a product with good usability. Usability is a non-functional requirement. As with other non-functional requirements, usability cannot be directly measured but must be quantified by means of indirect measures or attributes such as, for example, the number of reported problems with ease-of-use of a system.
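As a toy sketch of how such indirect measures might be quantified, the function below combines task completion rate with time-on-task into a single percentage. The equal weighting and the target-time parameter are assumptions made purely for illustration, not part of any standard.

```python
# Hypothetical usability metric built from two indirect measures:
# effectiveness (task success rate) and efficiency (time-on-task
# relative to a target). The 50/50 weighting is an arbitrary
# assumption for this sketch.
def usability_score(successes, attempts, mean_time, target_time):
    effectiveness = successes / attempts           # 0..1
    efficiency = min(target_time / mean_time, 1.0) # 0..1, capped
    return 100 * (0.5 * effectiveness + 0.5 * efficiency)

# Example: 8 of 10 users succeeded; tasks took 90 s on average
# against a 60 s target.
print(round(usability_score(8, 10, 90, 60), 1))  # 73.3
```

In practice the attributes, weights, and targets would be chosen per project; the point is only that a non-functional requirement like usability becomes measurable once such proxies are fixed.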



Intuitive interfaces
The term intuitive is often listed as a desirable trait in usable interfaces, often used as a synonym for learnable. Some experts such as Jef Raskin have discouraged using this term in user interface design, claiming that easy-to-use interfaces are often easy because of the user's exposure to previous similar systems, thus the term "familiar" should be preferred.[5] As an example: two vertical lines "||" on media player buttons do not intuitively mean "pause"; they do so by convention. Aiming for "intuitive" interfaces (based on reusing existing skills with interaction systems) could lead designers to discard a better design solution only because it would require a novel approach. This position is sometimes illustrated with the remark that "The only intuitive interface is the nipple; everything else is learned."[6] Bruce Tognazzini even denies the existence of "intuitive" interfaces, since such interfaces must be able to intuit, i.e., "perceive the patterns of the user's behavior and draw inferences."[7] Instead, he advocates the term "intuitable," i.e., "that users could intuit the workings of an application by seeing it and using it." He continues, however, "But even that is a less than useful goal since only 25 percent of the population depends on intuition to perceive anything."

The key principle for maximizing usability is to employ iterative design, which progressively refines the design through evaluation from the early stages of design. The evaluation steps enable the designers and developers to incorporate user and client feedback until the system reaches an acceptable level of usability. The preferred method for ensuring usability is to test actual users on a working system. Although there are many methods for studying usability, the most basic and useful is user testing, which has three components:

Get some representative users.
Ask the users to perform representative tasks with the design.
Observe what the users do, where they succeed, and where they have difficulties with the user interface.

It's important to test users individually and let them solve any problems on their own. If you help them or direct their attention to any particular part of the screen, you will bias the test. Rather than running a big, expensive study, it's better to run many small tests and revise the design between each one so you can fix the usability flaws as you identify them. Iterative design is the best way to increase the quality of user experience. The more versions and interface ideas you test with users, the better. Usability plays a role in each stage of the design process. The resulting need for multiple studies is one reason to make individual studies fast and cheap, and to perform usability testing early in the design process. Here are the main steps:

Before starting the new design, test the old design to identify good parts you should keep or emphasize, and bad parts that give users trouble.
Test competitors' designs to get data on a range of alternative designs.
Conduct a field study to see how users behave in their natural habitat.
Make mock-ups or paper prototypes of one or more new design ideas and test them. The less time you invest in these design ideas the better, because you'll need to change them based on the test results.
Refine the design ideas that test best through multiple iterations, gradually moving from low-fidelity prototyping to high-fidelity representations that run on the computer. Test each iteration.
Inspect the design relative to established usability guidelines, whether from your own earlier studies or published research.
Once you decide on and implement the final design, test it again. Subtle usability problems always creep in during implementation.

Don't defer user testing until you have a fully implemented design. If you do, it will be impossible to fix the vast majority of the critical usability problems that the test uncovers. Many of these problems are likely to be structural, and fixing them would require major rearchitecting. The only way to a high-quality user experience is to start user testing early in the design process, and to keep testing every step of the way.
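The advice to run many small tests is often justified with Nielsen and Landauer's problem-discovery model, in which the share of usability problems found by n test users is 1 - (1 - L)^n. The per-user detection rate L is commonly quoted at around 0.31, though it varies widely between projects, so treat the value below as a rough assumption. A brief sketch:

```python
# Nielsen/Landauer problem-discovery model: proportion of
# usability problems found by n test users, assuming each user
# independently uncovers a fraction L of them. L = 0.31 is the
# often-cited average across projects, not a universal constant.
def problems_found(n, L=0.31):
    return 1 - (1 - L) ** n

# A handful of users already uncover most problems, which is why
# several small iterated tests beat one large study.
for n in (1, 3, 5, 10):
    print(f"{n:2d} users: {problems_found(n):.0%}")
```

With these numbers, five users find roughly 85% of problems, so the budget for a single fifteen-user study is better spent on three five-user tests with design revisions in between.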


ISO standards
ISO/TR 16982:2002
ISO/TR 16982:2002 ("Ergonomics of human-system interactionUsability methods supporting human-centered design") is a standard that provides information on human-centered usability methods that can be used for design and evaluation. It details the advantages, disadvantages, and other factors relevant to using each usability method. It explains the implications of the stage of the life cycle and the individual project characteristics for the selection of usability methods and provides examples of usability methods in context. The main users of ISO/TR 16982:2002 are project managers. It therefore addresses technical human factors and ergonomics issues only to the extent necessary to allow managers to understand their relevance and importance in the design process as a whole. The guidance in ISO/TR 16982:2002 can be tailored for specific design situations by using the lists of issues characterizing the context of use of the product to be delivered. Selection of appropriate usability methods should also take account of the relevant life-cycle process. ISO/TR 16982:2002 is restricted to methods that are widely used by usability specialists and project managers. It does not specify the details of how to implement or carry out the usability methods described.

ISO 9241
ISO 9241 is a multi-part standard that covers a number of aspects of people working with computers. Although originally titled Ergonomic requirements for office work with visual display terminals (VDTs), it has been retitled to the more generic Ergonomics of Human System Interaction. As part of this change, ISO is renumbering the standard so that it can include many more topics. The first part to be renumbered was part 10 (now renumbered to part 110). Part 1 is a general introduction to the rest of the standard. Part 2 addresses task design for working with computer systems. Parts 3–9 deal with physical characteristics of computer equipment. Part 110 and parts 11–19 deal with usability aspects of software, including Part 110 (a general set of usability heuristics for the design of different types of dialogue) and Part 11 (general guidance on the specification and measurement of usability).

Usability considerations
Usability includes considerations such as: Who are the users, what do they know, what can they learn? What do users want or need to do? What is the users' general background? What is the users' context for working? What must be left to the machine?

Answers to these are obtained through user and task analysis at the start of the project.



Other considerations
Can users easily accomplish intended tasks at their desired speed?
How much training do users need?
What documentation or other supporting materials are available to help the user? Can users find solutions in these materials?
What and how many errors do users make when they interact with the product?
Can the user recover from errors? What do users have to do to recover from errors? Does the product help users recover from errors? For example, does software present comprehensible, informative, non-threatening error messages?
Does the product meet the special needs of disabled users? (Is it accessible?)
Are there substantial differences between the cognitive approaches of various users that affect the design, or does a one-size-fits-all approach work?

Ways to answer these and other questions include user-focused requirements analysis, building user profiles, and usability testing.

Even if software is usable as per the above considerations, it may still be hard to learn to use. Other questions that must be asked are:

Is the user ever expected to do something that is not obvious? (e.g., are important features only accessible by right-clicking on a menu header, on a text box, or on an unusual GUI element?)
Are there hints, tips, and shortcuts that appear as the user is using the software?
Should there be instructions in the manual that actually belong as contextual tips shown in the program?
Is the user at a disadvantage if they don't know certain keyboard shortcuts? A user has the right to know all major and minor keyboard shortcuts and features of an application.
Is the learning curve (of hints and tips) skewed towards point-and-click users rather than keyboard users?
Are there any "hidden" or undocumented keyboard shortcuts that would better be revealed in a "Keyboard shortcuts" Help-menu item?

A strategy to prevent this "undocumented feature disconnect" is to automatically generate a list of keyboard shortcuts from their definitions in the code.

Lund, 1997 usability maxims

When evaluating the design and usability of a website, consider the following:[8]

Know the user, and you are not the user.
Things that look the same should act the same.
The information for the decision must be there when the decision is needed.
Error messages should actually mean something to the user and tell the user how to fix the problem.
Every action should have a reaction.
Everyone makes mistakes, so every mistake should be fixable.
Don't overwhelm the user.
Consistency, consistency, consistency.
Minimize the need for a mighty memory.
Keep it simple.
The user should always know what is happening.
The more you do something, the easier it should be to do.

The user should control the system. The system should not control the user. The user is the boss and the system should show it.
Eliminate unnecessary decisions and illuminate the rest.

The best journey has the fewest steps. Shorten the distance between the user and the goal.
Users should be able to do what they want.
Alert users to an error before things get worse.
Users should always know how to find out what to do next.
Strive to empower the user, not speed up the system.
Things that look different should act different.


These are presented in a descending order determined by their mean rating of importance.

Designing for usability

Any system designed for people should be easy to use, easy to learn, easy to remember, and helpful to users. John Gould and Clayton Lewis recommend that designers striving for usability follow these three design principles:[9]

Early focus on users and tasks
Empirical measurement
Iterative design

Early focus on users and tasks

The design team should be user driven and in direct contact with potential users. Several evaluation methods, including personas, cognitive modeling, inspection, inquiry, prototyping, and testing methods may contribute to understanding potential users. Usability considerations such as who the users are and their experience with similar systems must be examined. As part of understanding users, this knowledge must be played against the tasks that the users will be expected to perform.[9] This includes the analysis of what tasks the users will perform, which are most important, and what decisions the users will make while using your system. Designers must understand how cognitive and emotional characteristics of users will relate to a proposed system. One way to stress the importance of these issues in the designers' minds is to use personas, which are made-up representative users. See below for further discussion of personas. Another more expensive but more insightful method is to have a panel of potential users work closely with the design team from the early stages.[10]

Empirical measurement
Test the system early on, and test the system on real users using behavioral measurements. This includes testing the system for both learnability and usability. (See Evaluation Methods). It is important in this stage to use quantitative usability specifications such as time and errors to complete tasks and number of users to test, as well as examine performance and attitudes of the users testing the system.[10] Finally, reviewing or demonstrating a system before the user tests it can result in misleading results. The emphasis of empirical measurement is on measurement, both informal and formal, which can be carried out through a variety of evaluation methods.[9]

Iterative design
Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research for informing and evolving a project, as successive versions, or iterations, of a design are implemented. The key requirements for iterative design are: identification of required changes, an ability to make changes, and a willingness to make changes. When a problem is encountered, there is no set method to determine the correct solution. Rather, there are empirical methods that can be used during system development or after the system is delivered, the latter usually being a more inopportune time. Ultimately, iterative design works towards meeting goals such as making the system user friendly, easy to use, easy to operate, simple, etc.[10]


Evaluation methods
There are a variety of usability evaluation methods. Certain methods use data from users, while others rely on usability experts. There are usability evaluation methods for all stages of design and development, from product definition to final design modifications. When choosing a method, consider cost, time constraints, and appropriateness. For a brief overview of methods, see Comparison of usability evaluation methods or continue reading below. Usability methods can be further classified into the subcategories below.

Cognitive modeling methods

Cognitive modeling involves creating a computational model to estimate how long it takes people to perform a given task. Models are based on psychological principles and experimental studies to determine times for cognitive processing and motor movements. Cognitive models can be used to improve user interfaces or predict problem errors and pitfalls during the design process. A few examples of cognitive models include:

Parallel Design
With parallel design, several people create an initial design from the same set of requirements. Each person works independently, and when finished, shares concepts with the group. The design team considers each solution, and each designer uses the best ideas to further improve their own solution. This process helps generate many different, diverse ideas, and ensures that the best ideas from each design are integrated into the final concept. This process can be repeated several times until the team is satisfied with the final concept.

GOMS
GOMS stands for goals, operators, methods, and selection rules. It is a family of techniques that analyzes the user complexity of interactive systems. Goals are what the user must accomplish. An operator is an action performed in pursuit of a goal. A method is a sequence of operators that accomplish a goal. Selection rules specify which method satisfies a given goal, based on context.

Human Processor Model
Sometimes it is useful to break a task down and analyze each individual aspect separately. This helps the tester locate specific areas for improvement. To do this, it is necessary to understand how the human brain processes information. A model of the human processor is shown below.



Many studies have been done to estimate the cycle times, decay times, and capacities of each of these processors. Variables that affect these can include subject age, aptitudes, ability, and the surrounding environment. For a younger adult, reasonable estimates are:
Parameter                                  Mean     Range
Eye movement time                          230 ms   70–700 ms
Decay half-life of visual image storage    200 ms   90–1000 ms
Perceptual processor cycle time            100 ms   50–200 ms
Cognitive processor cycle time             70 ms    25–170 ms
Motor processor cycle time                 70 ms    30–100 ms
Effective working memory capacity          2 items  2–3 items

Long-term memory is believed to have an infinite capacity and decay time.[11] Keystroke level modeling Keystroke level modeling is essentially a less comprehensive version of GOMS that makes simplifying assumptions in order to reduce calculation time and complexity. See Keystroke level model for more information.
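A minimal sketch of keystroke-level modeling: each low-level operator is assigned a fixed duration, and the predicted task time is their sum. The operator times below are commonly cited approximations in the tradition of Card, Moran, and Newell; real analyses calibrate them to the user population and add system-response operators.

```python
# Keystroke-Level Model (KLM) sketch. Operator times are rough,
# commonly cited approximations (seconds); treat them as
# assumptions to be calibrated, not fixed constants.
KLM_TIMES = {
    "K": 0.2,   # keystroke (average skilled typist)
    "P": 1.1,   # point with mouse to a target on screen
    "B": 0.1,   # press or release a mouse button
    "H": 0.4,   # home hands between keyboard and mouse
    "M": 1.35,  # mental preparation
}

def predict_time(operators):
    """Sum operator times for a sequence such as 'MHPBB'."""
    return sum(KLM_TIMES[op] for op in operators)

# Example: think (M), move hand to mouse (H), point at a menu
# item (P), then click (B, B = press + release).
print(round(predict_time("MHPBB"), 2))  # 3.05
```

Comparing such sums for two candidate interaction sequences gives a quick, no-user estimate of which design is faster for a practiced user, which is exactly the trade-off KLM makes: less fidelity than full GOMS in exchange for less calculation.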



Inspection methods
These usability evaluation methods involve observation of users by an experimenter, or the testing and evaluation of a program by an expert reviewer. They provide more quantitative data, as tasks can be timed and recorded.

Card sorts
Card sorting is a way to involve users in grouping information for a website's usability review. Participants in a card sorting session are asked to organize the content from a Web site in a way that makes sense to them. Participants review items from a Web site and then group these items into categories. Card sorting helps to learn how users think about the content and how they would organize the information on the Web site. Card sorting helps to build the structure for a Web site, decide what to put on the home page, and label the home page categories. It also helps to ensure that information is organized on the site in a way that is logical to users.

Tree tests
Tree testing is a way to evaluate the effectiveness of a website's top-down organization. Participants are given "find it" tasks, then asked to drill down through successive text lists of topics and subtopics to find a suitable answer. Tree testing evaluates the findability and labeling of topics in a site, separate from its navigation controls or visual design.

Ethnography
Ethnographic analysis is derived from anthropology. Field observations are taken at a site of a possible user, which track the artifacts of work such as Post-It notes, items on the desktop, shortcuts, and items in trash bins. These observations also gather the sequence of work and interruptions that determine the user's typical day.

Heuristic Evaluation
Heuristic evaluation is a usability engineering method for finding and assessing usability problems in a user interface design as part of an iterative design process. It involves having a small set of evaluators examine the interface using recognized usability principles (the "heuristics").
It is the most popular of the usability inspection methods, as it is quick, cheap, and easy. Heuristic evaluation was developed to aid in the design of computer user interfaces. It relies on expert reviewers to discover usability problems and then categorize and rate them by a set of principles (heuristics). It is widely used based on its speed and cost-effectiveness. Jakob Nielsen's list of ten heuristics is the most commonly used in industry. These are ten general principles for user interface design. They are called "heuristics" because they are more in the nature of rules of thumb than specific usability guidelines.

Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world: The system should speak the users' language, with words, phrases, and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.
Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions.
Error prevention: Even better than good error messages is a careful design that prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action.
Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate.

Flexibility and efficiency of use: Accelerators, unseen by the novice user, may often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions.
Aesthetic and minimalist design: Dialogues should not contain information that is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility.
Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution.
Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.

Thus, by determining which guidelines are violated, the usability of a device can be determined.

Usability Inspection
Usability inspection is a review of a system based on a set of guidelines. The review is conducted by a group of experts who are deeply familiar with the concepts of usability in design. The experts focus on a list of areas in design that have been shown to be troublesome for users.

Pluralistic Inspection
Pluralistic inspections are meetings where users, developers, and human factors people meet together to discuss and evaluate, step by step, a task scenario. As more people inspect the scenario for problems, the higher the probability of finding problems. In addition, the more interaction in the team, the faster the usability issues are resolved.

Consistency Inspection
In consistency inspection, expert designers review products or projects to ensure consistency across multiple products, checking whether a design does things in the same way as their own designs.
Activity Analysis
Activity analysis is a usability method used in preliminary stages of development to get a sense of the situation. It involves an investigator observing users as they work in the field. Also referred to as user observation, it is useful for specifying user requirements and studying currently used tasks and subtasks. The data collected is qualitative and useful for defining the problem. It should be used when you wish to frame what is needed, or "What do we want to know?"
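One common way to work with the output of a heuristic evaluation is to have each evaluator assign a 0-4 severity rating to every problem found and then rank problems by mean severity across evaluators. The sketch below assumes a made-up findings list; the field names and severity scale (0 = not a problem, 4 = usability catastrophe, a convention Nielsen popularized) are illustrative only.

```python
# Hypothetical aggregation of heuristic-evaluation findings:
# group each evaluator's severity ratings by the heuristic
# violated, then rank problems by mean severity to prioritize.
from collections import defaultdict
from statistics import mean

findings = [  # (evaluator, heuristic violated, severity 0-4)
    ("eval1", "Visibility of system status", 3),
    ("eval2", "Visibility of system status", 4),
    ("eval1", "Consistency and standards", 2),
    ("eval3", "Consistency and standards", 1),
]

by_heuristic = defaultdict(list)
for _, heuristic, severity in findings:
    by_heuristic[heuristic].append(severity)

# Highest mean severity first: these are the fixes to make now.
for heuristic, sevs in sorted(by_heuristic.items(),
                              key=lambda kv: -mean(kv[1])):
    print(f"{heuristic}: mean severity {mean(sevs):.1f} "
          f"({len(sevs)} evaluators)")
```

Aggregating across several independent evaluators matters because individual reviewers typically find only a fraction of the problems, and their severity judgments are noisy.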


Inquiry methods
The following usability evaluation methods involve collecting qualitative data from users. Although the data collected is subjective, it provides valuable information on what the user wants.

Task Analysis
Task analysis means learning about users' goals and users' ways of working. Task analysis can also mean figuring out what more specific tasks users must do to meet those goals and what steps they must take to accomplish those tasks. Along with user and task analysis, we often do a third analysis: understanding users' environments (physical, social, cultural, and technological environments).

Focus Groups
A focus group is a focused discussion where a moderator leads a group of participants through a set of questions on a particular topic. Although typically used as a marketing tool, focus groups are sometimes used to evaluate usability. Used in the product definition stage, a group of 6 to 10 users is gathered to discuss what they desire in a product. An experienced focus group facilitator is hired to guide the discussion to areas of interest for the developers. Focus groups are typically videotaped to help get verbatim quotes, and clips are often used to summarize opinions. The data gathered is not usually quantitative, but can help get an idea of a target group's opinion.

Questionnaires/Surveys
Surveys have the advantages of being inexpensive, requiring no testing equipment, and reflecting the users' opinions. When written carefully and given to actual users who have experience with the product and knowledge of design, surveys provide useful feedback on the strong and weak areas of the usability of a design. This is a very common method and often does not appear to be a survey, but just a warranty card.
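One widely used standardized questionnaire of this kind is the System Usability Scale (SUS): ten alternating positively and negatively worded items, each rated 1-5, rescaled to a 0-100 score. The scoring sketch below follows the standard SUS rules; the example responses are invented.

```python
# System Usability Scale (SUS) scoring. Responses are ten 1-5
# Likert ratings, item 1 first. Odd-numbered items are
# positively worded (score - 1); even-numbered items are
# negatively worded (5 - score); the 0-40 sum is scaled to 0-100.
def sus_score(responses):
    if len(responses) != 10:
        raise ValueError("SUS needs exactly 10 responses")
    total = 0
    for i, r in enumerate(responses):
        total += (r - 1) if i % 2 == 0 else (5 - r)
    return total * 2.5

# Invented example: a fairly positive respondent.
print(sus_score([4, 2, 4, 2, 4, 2, 4, 2, 4, 2]))  # 75.0
```

A single SUS number says little on its own; it becomes useful when averaged over many respondents and compared across releases or against published benchmark scores.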


Prototyping methods
Rapid Prototyping
Rapid prototyping is a method used in the early stages of development to validate and refine the usability of a system. It can be used to quickly and cheaply evaluate user-interface designs without the need for an expensive working model. This can help remove hesitation to change the design, since the prototype exists before any real programming begins. One such method of rapid prototyping is paper prototyping.

Testing methods
These usability evaluation methods involve testing of subjects and yield the most quantitative data. Usually recorded on video, they provide task completion times and allow for observation of attitude.

Remote usability testing
Remote usability testing (also known as unmoderated or asynchronous usability testing) involves the use of a specially modified online survey, allowing the quantification of user testing studies by providing the ability to generate large sample sizes, or a deep qualitative analysis without the need for dedicated facilities. Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal, and behavioural type. The tests are carried out in the users' own environment (rather than labs), helping further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas. There are two types: quantitative and qualitative. Quantitative studies use large sample sizes and task-based surveys; these are useful for validating suspected usability issues. Qualitative studies are best used as exploratory research, with small sample sizes but frequent, even daily, iterations. Qualitative studies usually allow for observing respondents' screens and verbal think-aloud commentary (Screen Recording Video, SRV) and, for a richer level of insight, may also include the webcam view of the respondent (Video-in-Video, ViV, sometimes referred to as Picture-in-Picture, PiP).

Remote usability testing for mobile devices
The growth in mobile platforms and associated services (e.g., mobile gaming experienced 20x growth in 2010-2012) has generated a need for unmoderated remote usability testing on mobile devices, both for websites and especially for app interactions.
One methodology consists of shipping cameras and special camera-holding fixtures to dedicated testers, and having them record the screens of the mobile smartphone or tablet device, usually using an HD camera. A drawback of this approach is that the finger movements of the respondent can obscure the view of the screen, in addition to the bias and logistical issues inherent in shipping special hardware to selected respondents. A newer approach uses a wireless projection of the mobile device screen onto the computer desktop screen of the respondent, who can then be recorded through their webcam; the resulting combined Video-in-Video view shows the participant and the screen interactions simultaneously while incorporating the respondent's verbal think-aloud commentary.

Thinking Aloud
The think aloud protocol is a method of gathering data that is used in both usability and psychology studies. It involves getting a user to verbalize their thought processes as they perform a task or set of tasks. Often an instructor is present to prompt the user into being more vocal as they work. Similar to the Subjects-in-Tandem method, it is useful in pinpointing problems and is relatively simple to set up. Additionally, it can provide insight into the user's attitude, which cannot usually be discerned from a survey or questionnaire.

RITE Method
Rapid Iterative Testing and Evaluation (RITE)[12] is an iterative usability method similar to traditional "discount" usability testing. The tester and team must define a target population for testing, schedule participants to come in to the lab, decide how the users' behaviors will be measured, construct a test script, and have participants engage in a verbal protocol (e.g., think aloud). However, it differs from these methods in that it advocates making changes to the user interface as soon as a problem is identified and a solution is clear. Sometimes this can occur after observing as few as one participant. Once the data for a participant has been collected, the usability engineer and team decide whether to make any changes to the prototype prior to the next participant. The changed interface is then tested with the remaining users.

Subjects-in-Tandem or Co-Discovery
Subjects-in-tandem (also called co-discovery) is the pairing of subjects in a usability test to gather important information on the ease of use of a product. Subjects tend to discuss the tasks they have to accomplish out loud, and through these discussions observers learn where the problem areas of a design are. To encourage co-operative problem-solving between the two subjects, and the attendant discussions leading to it, the tests can be designed to make the subjects dependent on each other by assigning them complementary areas of responsibility (e.g., for testing of software, one subject may be put in charge of the mouse and the other of the keyboard).

Component-based usability testing
Component-based usability testing is an approach which aims to test the usability of elementary units of an interaction system, referred to as interaction components.
The approach includes component-specific quantitative measures based on user interaction recorded in log files, and component-based usability questionnaires.
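As a sketch of the kind of component-specific measure such log files can support, the snippet below computes per-component dwell time and error counts from an interaction log. The log format, component names, and event names are illustrative assumptions, not part of any standard tool:

```python
from collections import defaultdict

# Hypothetical interaction log: (seconds_since_start, component, event).
log = [
    (0.0,  "search_box",   "focus"),
    (4.2,  "search_box",   "submit"),
    (4.3,  "results_list", "focus"),
    (9.0,  "results_list", "error"),
    (12.5, "results_list", "select"),
]

dwell = defaultdict(float)   # time attributed to each interaction component
errors = defaultdict(int)    # error events per interaction component
for i, (t, component, event) in enumerate(log):
    if event == "error":
        errors[component] += 1
    if i + 1 < len(log):
        # Attribute the interval until the next logged event to this component.
        dwell[component] += log[i + 1][0] - t

for component in dwell:
    print(component, round(dwell[component], 1), "s,", errors[component], "errors")
```

Measures like these can then be compared across interaction components or combined with component-based questionnaire scores.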


Other methods
Cognitive walkthrough
Cognitive walkthrough is a method of evaluating the user interaction of a working prototype or final product. It is used to evaluate the system's ease of learning. Cognitive walkthrough is useful for understanding the user's thought processes and decision making when interacting with a system, especially for first-time or infrequent users.

Benchmarking
Benchmarking creates standardized test materials for a specific type of design. Four key characteristics are considered when establishing a benchmark: time to do the core task, time to fix errors, time to learn applications, and the functionality of the system. Once there is a benchmark, other designs can be compared to it to determine the usability of the system. Many of the common objectives of usability studies, such as trying to understand user behavior or exploring alternative designs, must be put aside. Unlike many other usability methods or types of lab studies, benchmark studies more closely resemble true experimental psychology lab studies, with greater attention to detail on methodology, study protocol, and data analysis.[13]

Meta-Analysis
Meta-analysis is a statistical procedure for combining results across studies to integrate the findings. The phrase was coined in 1976 to describe a quantitative literature review. This type of evaluation is very powerful for determining the usability of a device because it combines multiple studies to provide very accurate quantitative support.

Persona
Personas are fictitious characters created to represent a site or product's different user types and their associated demographics and technographics. Alan Cooper introduced the concept of using personas as a part of interaction design in his 1999 book The Inmates Are Running the Asylum,[14] but had used the concept since as early as 1975.

Personas are a usability evaluation method that can be used at various design stages. The most typical time to create personas is at the beginning of design, so that designers have a tangible idea of who the users of their product will be. Personas are archetypes that represent actual groups of users and their needs, and can be a general description of a person, context, or usage scenario. This technique turns marketing data on the target user population into a few concrete characterizations of users, creating empathy among the design team, with the final aim of tailoring the product more closely to how the personas will use it. To gather the marketing data that personas require, several tools can be used, including online surveys, web analytics, customer feedback forms, usability tests, and interviews with customer-service representatives.[15]


Evaluating with tests and metrics

Regardless of how carefully a system is designed, all theories must be tested using usability tests. Usability tests involve typical users using the system (or product) in a realistic environment [see simulation]. Observation of users' behavior, emotions, and difficulties while they perform different tasks often identifies areas of improvement for the system.

It is often very difficult for designers to conduct usability tests with the exact system being designed. Cost, size, and design constraints usually lead the designer to create a prototype of the system. Instead of creating the complete final system, the designer may test different sections of the system, thus making several small models of each component. The types of usability prototypes vary from paper models, index cards, and hand-drawn models to storyboards.[16] Prototypes can be modified quickly, are often faster and easier to create with less time invested by designers, and are more amenable to design changes; however, they are sometimes not an adequate representation of the whole system, are often not durable, and testing results may not parallel those of the actual system.

When conducting usability tests, designers must identify what they are going to measure: the usability metrics. These metrics are variable, and change in conjunction with the scope and goals of the project. The number of subjects being tested can also affect usability metrics, as it is often easier to focus on specific demographics. Qualitative design phases, such as general usability (can the task be accomplished?) and user satisfaction, are typically assessed with smaller groups of subjects.[17] Using inexpensive prototypes on small user groups provides more detailed information, because of the more interactive atmosphere and the designer's ability to focus on the individual user. As designs become more complex, the testing must become more formalized: testing equipment becomes more sophisticated and testing metrics become more quantitative. With a more refined prototype, designers often test effectiveness, efficiency, and subjective satisfaction by asking the user to complete various tasks. These categories are measured by the percentage of users who complete the task, how long it takes to complete the tasks, ratios of success to failure, time spent on errors, the number of errors, satisfaction rating scales, the number of times the user seems frustrated, and so on.[18] Additional observations of the users give designers insight into navigation difficulties, controls, conceptual models, etc. The ultimate goal of analyzing these metrics is to find or create a prototype design that users like and can use to successfully perform given tasks.[16] After conducting usability tests, it is important for the designer to record what was observed, in addition to why such behavior occurred, and to modify the model according to the results. It is often quite difficult to distinguish the source of design errors from what the user did wrong.
However, even effective usability tests will not generate a solution to the problems; rather, they provide modified design guidelines for continued testing.
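As an illustration of how such metrics might be computed, the sketch below derives a completion rate, mean task time, and error rate from session records. The field names and values are assumptions for illustration, not from any particular testing tool:

```python
# Hypothetical results from one task across four test participants.
sessions = [
    {"completed": True,  "seconds": 140, "errors": 2},
    {"completed": True,  "seconds": 95,  "errors": 0},
    {"completed": False, "seconds": 300, "errors": 5},
    {"completed": True,  "seconds": 120, "errors": 1},
]

# Percentage of participants who completed the task.
completion_rate = sum(s["completed"] for s in sessions) / len(sessions)
# Average time on task, in seconds.
mean_task_time = sum(s["seconds"] for s in sessions) / len(sessions)
# Average number of errors per session.
mean_errors = sum(s["errors"] for s in sessions) / len(sessions)

print(f"completion rate:    {completion_rate:.0%}")
print(f"mean task time:     {mean_task_time:.1f} s")
print(f"errors per session: {mean_errors:.1f}")
```

With larger or more formal studies the same quantities would typically be reported per task and per user segment, alongside qualitative observations.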



Benefits of usability
The key benefits of usability are:
Higher revenues through increased sales
Increased user efficiency and satisfaction
Reduced development costs
Reduced support costs

Corporate integration
An increase in usability generally positively affects several facets of a company's output quality. In particular, the benefits fall into several common areas:[19]
Increased productivity
Decreased training and support costs
Increased sales and revenues
Reduced development time and costs
Reduced maintenance costs
Increased customer satisfaction

Increased usability in the workplace fosters several responses from employees. "Along with any positive feedback, workers who enjoy their work do it better, stay longer in the face of temptation, and contribute ideas and enthusiasm to the evolution of enhanced productivity."[20] In order to create standards, companies often implement experimental design techniques that create baseline levels. Areas of concern in an office environment include (though are not necessarily limited to):[21]
Working Posture
Design of Workstation Furniture
Screen Displays
Input Devices
Organizational Issues
Office Environment
Software Interface

By working to improve these factors, corporations can achieve their goals of increased output at lower cost, while potentially creating optimal levels of customer satisfaction. There are numerous reasons why each of these factors correlates to overall improvement. For example, making a piece of software's user interface easier to understand reduces the need for extensive training. An improved interface also tends to lower the time needed to perform tasks, and so both raises productivity levels for employees and reduces development time (and thus costs). It is important to note that the aforementioned factors are not mutually exclusive; rather, they should be understood to work in conjunction to form the overall workplace environment.



Usability is now recognized as an important software quality attribute, earning its place among more traditional attributes such as performance and robustness. Various academic programs focus on usability.[22] Several usability consultancy companies have emerged, and traditional consultancy and design firms offer similar services.

Professional development
Usability practitioners are sometimes trained as industrial engineers, psychologists, kinesiologists, or systems design engineers, or hold a degree in information architecture, information or library science, or human-computer interaction (HCI). More often, though, they are trained in specific applied fields and have taken on a usability focus within their organization. Anyone who aims to make tools easier to use and more effective for their desired function within the context of work or everyday living can benefit from studying usability principles and guidelines. For those seeking to extend their training, the Usability Professionals' Association offers online resources, reference lists, courses, conferences, and local chapter meetings. The UPA also sponsors World Usability Day each November.[23] Related professional organizations include the Human Factors and Ergonomics Society (HFES) and the Association for Computing Machinery's special interest groups in Computer-Human Interaction (SIGCHI) and Computer Graphics and Interactive Techniques (SIGGRAPH). The Society for Technical Communication also has a special interest group on Usability and User Experience (UUX), which publishes a quarterly newsletter called Usability Interface.[24]

References
[1] Karwowski, W., Soares, M. M., Stanton, N. A. Human Factors and Ergonomics in Consumer Product Design: Methods and Techniques (Handbook of Human Factors in Consumer Product Design): "Needs Analysis: Or, How Do You Capture, Represent, and Validate User Requirements in a Formal Manner/Notation before Design" (Chapter 26 by K. Tara Smith), CRC Press, 2011.
[2] Holm, Ivar (2006). Ideas and Beliefs in Architecture and Industrial design: How attitudes, orientations, and underlying assumptions shape the built environment. Oslo School of Architecture and Design. ISBN 82-547-0174-1.
[3] http://www.informationweek.com/773/web.htm
[4] Usability 101: Introduction to Usability (http://www.useit.com/alertbox/20030825.html), Jakob Nielsen's Alertbox. Retrieved 2010-06-01.
[5] Intuitive equals familiar (http://www.asktog.com/papers/raskinintuit.html), Communications of the ACM, 37:9, September 1994, p. 17.
[6] The Only Intuitive Interface is the Nipple (http://www.greenend.org.uk/rjk/2002/08/nipple.html)
[7] Tognazzini, B. (1992). Tog on Interface. Boston, MA: Addison-Wesley, p. 246.
[8] Lund, A. M. (1997). Expert ratings of usability maxims. Ergonomics in Design, 5(3), 15-20. A study of the heuristics design experts consider important for good design.
[9] Gould, J. D., Lewis, C.: "Designing for Usability: Key Principles and What Designers Think", Communications of the ACM, March 1985, 28(3).
[10] http://pages.cpsc.ucalgary.ca/~saul/wiki/uploads/HCIPapers/gould-howto-2-pageperside-ocr.pdf
[11] Card, S. K., Moran, T. P., & Newell, A. (1983). The Psychology of Human-Computer Interaction. Hillsdale, NJ: Lawrence Erlbaum Associates.
[12] Medlock, M. C., Wixon, D., Terrano, M., Romero, R., and Fulton, B. (2002). Using the RITE method to improve products: A definition and a case study. Presented at the Usability Professionals' Association 2002, Orlando, FL.
[13] The art of usability benchmarking (http://www.scottberkun.com/essays/27-the-art-of-usability-benchmarking/)
[14] Cooper, A. (1999). The Inmates Are Running the Asylum, Sams Publishers, ISBN 0-672-31649-8.
[15] Case study on making marketing-data driven Personas (http://seoroi.com/case-studies/4-5-personas-of-my-seo-site/)
[16] Wickens, C. D. et al. (2004). An Introduction to Human Factors Engineering (2nd ed.), Pearson Education, Inc., Upper Saddle River, NJ: Prentice Hall.
[17] Dumas, J. S. and Redish, J. C. (1999). A Practical Guide to Usability Testing (revised ed.), Bristol, U.K.: Intellect Books.
[18] Kuniavsky, M. (2003). Observing the User Experience: A Practitioner's Guide to User Research, San Francisco, CA: Morgan Kaufmann.
[19] Benefits of Usability (http://www.usabilityprofessionals.org/usability_resources/usability_in_the_real_world/benefits_of_usability.html)
[20] Landauer, T. K. (1996). The Trouble with Computers. Cambridge, MA: The MIT Press, p. 158.
[21] McKeown, Celine (2008). Office Ergonomics: Practical Applications. Boca Raton, FL: Taylor & Francis Group, LLC.
[22] Usability (http://www.dmoz.org/Computers/Human-Computer_Interaction/Academic/) at the Open Directory Project
[23] Usability Professionals' Association web site (http://www.usabilityprofessionals.org/). Retrieved December 1, 2009.
[24] STC Usability SIG Newsletter (http://www.stcsig.org/usability/newsletter/index.html). Retrieved December 1, 2009.


Further reading
Donald A. Norman (2002), The Design of Everyday Things, Basic Books, ISBN 0-465-06710-7
Jakob Nielsen (1994), Usability Engineering, Morgan Kaufmann Publishers, ISBN 0-12-518406-9
Jakob Nielsen (1994), Usability Inspection Methods, John Wiley & Sons, ISBN 0-471-01877-5
Ben Shneiderman (1980), Software Psychology, ISBN 0-87626-816-5
Andreas Holzinger, "Usability Engineering for Software Developers", Communications of the ACM (ISSN 0001-0782), Vol. 48, Issue 1 (January 2005), 71-74
Alan Cooper, The Origin of Personas
Alessandro Inversini, Lorenzo Cantoni and Davide Bolchini, Connecting Usages with Usability Analysis through the User Experience Risk Assessment Model: A Case Study in the Tourism Domain, http://www.springerlink.com/content/gq6257744712h050/

External links
Online guide to usability methods resources

Usability testing
Usability testing is a technique used in user-centered interaction design to evaluate a product by testing it on users. This can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.[1] This is in contrast with usability inspection methods where experts use different methods to evaluate a user interface without involving users. Usability testing focuses on measuring a human-made product's capacity to meet its intended purpose. Examples of products that commonly benefit from usability testing are foods, consumer products, web sites or web applications, computer interfaces, documents, and devices. Usability testing measures the usability, or ease of use, of a specific object or set of objects, whereas general human-computer interaction studies attempt to formulate universal principles.

History of usability testing

In the late 1940s, Henry Dreyfuss was contracted to design the staterooms for the twin ocean liners "Independence" and "Constitution." He built eight prototype staterooms and installed them in a warehouse. He then brought in a series of travelers to "live" in the rooms for a short time, bringing with them all the items they would normally take when cruising. His people were able to discover over time, for example, whether there was space for large steamer trunks, whether light switches needed to be added beside the beds to prevent injury, etc., before hundreds of staterooms had been built into the ship.[2] A Xerox Palo Alto Research Center (PARC) employee wrote that PARC used extensive usability testing in creating the Xerox Star, introduced in 1981.[3] The book Inside Intuit (page 22) says of 1984, "... in the first instance of the Usability Testing that later became standard industry practice, LeFevre recruited people off the streets... and timed their Kwik-Chek (Quicken) usage with a stopwatch. After every test... programmers worked to improve the program."[4] Scott Cook, Intuit

co-founder, said, "... we did usability testing in 1984, five years before anyone else... there's a very big difference between doing it and having marketing people doing it as part of their... design... a very big difference between doing it and having it be the core of what engineers focus on."[5]


Goals of usability testing

Usability testing is a black-box testing technique. The aim is to observe people using the product to discover errors and areas of improvement. Usability testing generally involves measuring how well test subjects respond in four areas: efficiency, accuracy, recall, and emotional response. The results of the first test can be treated as a baseline or control measurement; all subsequent tests can then be compared to the baseline to indicate improvement.
Efficiency -- How much time, and how many steps, are required for people to complete basic tasks? (For example, find something to buy, create a new account, and order the item.)
Accuracy -- How many mistakes did people make? (And were they fatal or recoverable with the right information?)
Recall -- How much does the person remember afterwards, or after periods of non-use?
Emotional response -- How does the person feel about the tasks completed? Is the person confident or stressed? Would the user recommend this system to a friend?
To assess the usability of the system under usability testing, quantitative and/or qualitative usability goals (also called usability requirements[6]) have to be defined beforehand.[7][6][8] If the results of the usability testing meet the usability goals, the system can be considered usable for the end-users whose representatives have tested it.
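A minimal sketch of the baseline comparison described above, comparing a later test round against the first-round baseline for the four areas. All field names and values are hypothetical:

```python
# First-round (baseline) vs. later-round measurements; values are illustrative.
baseline  = {"task_time_s": 180, "mistakes": 4, "recall_pct": 55, "satisfaction": 3.1}
round_two = {"task_time_s": 130, "mistakes": 2, "recall_pct": 70, "satisfaction": 2.9}

# For task time and mistakes, lower is better; for the rest, higher is better.
lower_is_better = {"task_time_s", "mistakes"}

report = {}
for metric, base in baseline.items():
    new = round_two[metric]
    improved = (new < base) if metric in lower_is_better else (new > base)
    report[metric] = "improved" if improved else "regressed"
    print(f"{metric}: {base} -> {new} ({report[metric]})")
```

In this made-up example, efficiency, accuracy, and recall improve while the satisfaction rating regresses, which is exactly the kind of trade-off a baseline comparison is meant to surface.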

What usability testing is not

Simply gathering opinions on an object or document is market research or qualitative research rather than usability testing. Usability testing usually involves systematic observation under controlled conditions to determine how well people can use the product.[9] However, qualitative research and usability testing are often used in combination, to better understand users' motivations and perceptions in addition to their actions. Rather than showing users a rough draft and asking, "Do you understand this?", usability testing involves watching people trying to use something for its intended purpose. For example, when testing instructions for assembling a toy, the test subjects should be given the instructions and a box of parts and, rather than being asked to comment on the parts and materials, they are asked to put the toy together. Instruction phrasing, illustration quality, and the toy's design all affect the assembly process.

Setting up a usability test involves carefully creating a scenario, or realistic situation, wherein the person performs a list of tasks using the product being tested while observers watch and take notes. Several other test instruments such as scripted instructions, paper prototypes, and pre- and post-test questionnaires are also used to gather feedback on the product being tested. For example, to test the attachment function of an e-mail program, a scenario would describe a situation where a person needs to send an e-mail attachment, and ask him or her to undertake this task. The aim is to observe how people function in a realistic manner, so that developers can see problem areas, and what people like. Techniques popularly used to gather data during a usability test include think aloud protocol, Co-discovery Learning and eye tracking.

Usability testing


Hallway testing
Hallway testing (or hall intercept testing) is a general methodology of usability testing. Rather than using an in-house, trained group of testers, just five to six random people are brought in to test the product or service. The name of the technique refers to the fact that the testers should be random people who pass by in the hallway.[10] Hallway testing is particularly effective in the early stages of a new design, when the designers are looking for "brick walls": problems so serious that users simply cannot advance. Anyone of normal intelligence other than designers and engineers can be used at this point. (Both designers and engineers immediately turn from being test subjects into being "expert reviewers." They are often too close to the project, so they already know how to accomplish the task, thereby missing ambiguities and false paths.)

Remote Usability Testing

In a scenario where usability evaluators, developers, and prospective users are located in different countries and time zones, conducting a traditional lab usability evaluation creates challenges from both cost and logistical perspectives. These concerns led to research on remote usability evaluation, with the user and the evaluators separated over space and time. Remote testing, which facilitates evaluations being done in the context of the user's other tasks and technology, can be either synchronous or asynchronous. Synchronous usability testing methodologies involve video conferencing or employ remote application sharing tools such as WebEx. The former involves real-time one-on-one communication between the evaluator and the user, while the latter involves the evaluator and user working separately.[11] Asynchronous methodologies include automatic collection of users' click streams, user logs of critical incidents that occur while interacting with the application, and subjective feedback on the interface by users.[12] Similar to an in-lab study, an asynchronous remote usability test is task-based, and the platform allows researchers to capture clicks and task times. Hence, for many large companies, this allows researchers to understand the "why" behind visitors' intents when visiting a website or mobile site. Additionally, this style of user testing provides an opportunity to segment feedback by demographic, attitudinal, and behavioural type. The tests are carried out in the users' own environment (rather than labs), helping further simulate real-life scenario testing. This approach also provides a vehicle to easily solicit feedback from users in remote areas quickly and with lower organisational overheads. Numerous tools are available to address the needs of both these approaches.
WebEx and GoToMeeting are the most commonly used technologies to conduct a synchronous remote usability test.[13] However, synchronous remote testing may lack the immediacy and sense of presence desired to support a collaborative testing process. Moreover, managing interpersonal dynamics across cultural and linguistic barriers may require approaches sensitive to the cultures involved. Other disadvantages include having reduced control over the testing environment and the distractions and interruptions experienced by the participants in their native environment.[14] One of the newer methods developed for conducting a synchronous remote usability test is the use of virtual worlds.[15]

Expert review
Expert review is another general method of usability testing. As the name suggests, this method relies on bringing in experts with experience in the field (possibly from companies that specialize in usability testing) to evaluate the usability of a product.

Automated expert review

Similar to expert reviews, automated expert reviews provide usability testing through the use of programs given rules for good design and heuristics. Though an automated review might not provide as much detail and insight as a review from people, it can be finished more quickly and consistently. The idea of creating surrogate users for usability testing is an ambitious direction for the artificial intelligence community.

Usability testing


How many users to test?

In the early 1990s, Jakob Nielsen, at that time a researcher at Sun Microsystems, popularized the concept of using numerous small usability tests, typically with only five test subjects each, at various stages of the development process. His argument is that, once it is found that two or three people are totally confused by the home page, little is gained by watching more people suffer through the same flawed design. "Elaborate usability tests are a waste of resources. The best results come from testing no more than five users and running as many small tests as you can afford."[10] Nielsen subsequently published his research and coined the term heuristic evaluation. The claim of "five users is enough" was later described by a mathematical model[16] which states, for the proportion of uncovered problems U:

U = 1 - (1 - p)^n

where p is the probability of one subject identifying a specific problem and n the number of subjects (or test sessions). This model shows up as an asymptotic graph towards the number of real existing problems (see figure below).
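The model is easy to evaluate numerically. The sketch below uses p = 0.31, the average problem-detection probability reported in the Nielsen/Landauer work cited above; treat the exact value as illustrative:

```python
def proportion_found(p: float, n: int) -> float:
    """Proportion of usability problems uncovered by n subjects: U = 1 - (1 - p)**n."""
    return 1 - (1 - p) ** n

# With p = 0.31, five users already uncover roughly 84% of the problems,
# and each additional user yields rapidly diminishing returns.
for n in (1, 2, 3, 5, 10, 15):
    print(f"n={n:2d}  U={proportion_found(0.31, n):.2f}")
```

The printed values trace the asymptotic curve the model describes: U climbs steeply for the first few subjects and then flattens toward 1.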

In later research, Nielsen's claim has been questioned with both empirical evidence[17] and more advanced mathematical models.[18] Two key challenges to this assertion are:
1. Since usability is related to the specific set of users, such a small sample size is unlikely to be representative of the total population, so the data from such a small sample is more likely to reflect the sample group than the population it may represent.
2. Not every usability problem is equally easy to detect. Intractable problems slow the overall process. Under these circumstances, the progress of the process is much shallower than predicted by the Nielsen/Landauer formula.[19]
It is worth noting that Nielsen does not advocate stopping after a single test with five users; his point is that testing with five users, fixing the problems they uncover, and then testing the revised site with five different users is a better use of limited resources than running a single usability test with 10 users. In practice, the tests are run once or twice

per week during the entire development cycle, using three to five test subjects per round, and with the results delivered within 24 hours to the designers. The number of users actually tested over the course of the project can thus easily reach 50 to 100 people. In the early stages, when users are most likely to immediately encounter problems that stop them in their tracks, almost anyone of normal intelligence can be used as a test subject. In stage two, testers recruit test subjects across a broad spectrum of abilities. For example, in one study, experienced users showed no problem using any design, from the first to the last, while naive users and self-identified power users both failed repeatedly.[20] Later on, as the design smooths out, users should be recruited from the target population. When the method is applied to a sufficient number of people over the course of a project, the objections raised above are addressed: the sample size ceases to be small, and usability problems that arise with only occasional users are found. The value of the method lies in the fact that specific design problems, once encountered, are never seen again because they are immediately eliminated, while the parts that appear successful are tested over and over. While it is true that the initial problems in the design may be tested by only five users, when the method is properly applied, the parts of the design that worked in that initial test will go on to be tested by 50 to 100 people.


[1] Nielsen, J. (1994). Usability Engineering, Academic Press Inc, p. 165
[2] NN/G Usability Week 2011 Conference "Interaction Design" Manual, Bruce Tognazzini, Nielsen Norman Group, 2011
[3] http://interactions.acm.org/content/XV/baecker.pdf
[4] http://books.google.com/books?id=lRs_4U43UcEC&printsec=frontcover&sig=ACfU3U1xvA7-f80TP9Zqt9wkB9adVAqZ4g#PPA22,M1
[5] http://news.zdnet.co.uk/itmanagement/0,1000000308,2065537,00.htm
[6] International Organization for Standardization. Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (ISO 9241-210). 2010.
[7] Nielsen, Usability Engineering, 1994
[8] Mayhew. The usability engineering lifecycle: a practitioner's handbook for user interface design. London: Academic Press; 1999
[9] http://jerz.setonhill.edu/design/usability/intro.htm
[10] "Usability Testing with 5 Users (Jakob Nielsen's Alertbox)" (http://www.useit.com/alertbox/20000319.html). 13.03.2000; references Jakob Nielsen, Thomas K. Landauer (April 1993). "A mathematical model of the finding of usability problems" (http://dl.acm.org/citation.cfm?id=169166). Proceedings of ACM INTERCHI'93 Conference (Amsterdam, The Netherlands, 24–29 April 1993).
[11] Andreasen, Morten Sieker; Nielsen, Henrik Villemann; Schrøder, Simon Ormholt; Stage, Jan (2007). "What happened to remote usability testing?". Proceedings of the SIGCHI conference on Human factors in computing systems – CHI '07. p. 1405. doi:10.1145/1240624.1240838. ISBN 9781595935939.
[12] Dray, Susan; Siegel, David (2004). "Remote possibilities?". Interactions 11 (2): 10. doi:10.1145/971258.971264.
[13] http://www.boxesandarrows.com/view/remote_online_usability_testing_why_how_and_when_to_use_it
[14] Dray, Susan; Siegel, David (March 2004). "Remote possibilities?: international usability testing at a distance". Interactions 11 (2): 10–17. doi:10.1145/971258.971264.
[15] Chalil Madathil, Kapil; Joel S. Greenstein (May 2011). "Synchronous remote usability testing: a new approach facilitated by virtual worlds". Proceedings of the 2011 annual conference on Human factors in computing systems. CHI '11: 2225–2234. doi:10.1145/1978942.1979267. ISBN 9781450302289.
[16] Virzi, R. A., Refining the Test Phase of Usability Evaluation: How Many Subjects Is Enough? Human Factors, 1992. 34(4): pp. 457–468.
[17] http://citeseer.ist.psu.edu/spool01testing.html
[18] Caulton, D. A., Relaxing the homogeneity assumption in usability testing. Behaviour & Information Technology, 2001. 20(1): pp. 1–7.
[19] Schmettow, Heterogeneity in the Usability Evaluation Process. In: England, D. & Beale, R. (eds.), Proceedings of the HCI 2008, British Computing Society, 2008, 1, 89–98.
[20] Bruce Tognazzini. "Maximizing Windows" (http://www.asktog.com/columns/000maxscrns.html).



External links
A Brief History of the Magic Number 5 in Usability Testing (five-history.php)

Usability goals
Tools, devices, and software (as diverse as a TV remote control, the interface of an oven, or a word processor) must be evaluated before their release on the market from different points of view, such as their technical properties or their usability. Usability evaluation assesses whether the product under evaluation is efficient enough (are users able to carry out their task while expending reasonable resources, such as time or cognitive and physical demand?), effective enough (can users complete the tasks they are supposed to perform with the tool? Is their performance complete and accurate?), and sufficiently satisfactory for the users (what is the users' attitude towards the system? Do they experience discomfort?)[1][2]. For this assessment to be objective, there is a need for measurable goals[3] (for instance, in terms of ease of use or of learning) that the system must achieve. Such goals are called usability goals (or usability requirements[1][4]). They are objective criteria against which the results of the usability evaluation are compared to assess the usability of the product under evaluation[2].
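The three components can each be operationalized as a simple metric over test-session data. The sketch below is illustrative only: the record layout and the particular metric choices (completion rate for effectiveness, mean task time for efficiency, mean questionnaire score for satisfaction) are assumptions for this example, not a standard API.

```python
# Hypothetical test-session records: task outcome, time spent, and a
# 1-5 satisfaction questionnaire score per participant.
sessions = [
    {"completed": True,  "minutes": 3.0, "satisfaction": 4},
    {"completed": True,  "minutes": 5.0, "satisfaction": 3},
    {"completed": False, "minutes": 8.0, "satisfaction": 2},
    {"completed": True,  "minutes": 4.0, "satisfaction": 5},
]

effectiveness = sum(s["completed"] for s in sessions) / len(sessions)
efficiency = sum(s["minutes"] for s in sessions) / len(sessions)
satisfaction = sum(s["satisfaction"] for s in sessions) / len(sessions)

print(f"effectiveness: {effectiveness:.0%}")   # share of users who completed
print(f"mean task time: {efficiency} min")     # resource expended
print(f"mean satisfaction: {satisfaction}/5")  # questionnaire average
```

Each of these numbers can then be compared against a corresponding measurable usability goal, as described below.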

Usability goals through the product design process

Usability goals must be included in every product design process that intends to follow a human-factors approach (for instance, a User-centered design[1] process or the Usability Engineering Lifecycle[5]). They have to be clearly stated from the onset of the process, as soon as the end users' needs, risks of use, contexts, and aims of use are identified (cf. the definition of usability goals below). Usability goals are then used at each usability evaluation phase of the design process. Whatever the type of evaluation phase (i.e. formative or summative evaluation[6]), they are used to assess the performance of the users against the result of the evaluation process:
During formative/constructive evaluations (i.e. evaluations that occur during the design process to contribute to further improvement of the object under evaluation[6]), comparing the evaluation results against the usability goals verifies whether those goals are met: as long as they are not met, the product under evaluation must be re-engineered to improve its usability. In this frame, usability goals also help identify usability flaws and therefore support the re-engineering process. They can also be used throughout the iterations of the User-centered design process as indicators to follow the evolution of the system in terms of usability.
During summative evaluations (i.e. evaluations that try to give a definitive statement on the quality properties of a system under evaluation[6]), meeting the usability goals means that the system is usable enough to exit the User-centered design[1] process and be released.



Definition of usability goals

How to define usability goals?
Usability goals must address the three usability components, i.e. effectiveness, efficiency, and satisfaction[2]. Their definition, for each of those components, must rest on the characteristics of the tasks that the tested system is supposed to support[2]. More practically, Mayhew[5] proposes that their definition should refer to:
The identified end-user profiles
The tasks that the different categories of identified end users are supposed to perform with the tested system in a given context of use (results from a Contextual Task Analysis)
Business goals
Moreover, for certain types of products that are used for sensitive purposes (for instance, medical devices or a nuclear plant control interface), usability goals must be defined in close relation to the risk assessment process for those products[7][8]. This kind of safety-oriented usability goal is used to prevent a tool from being released on the market while deficiencies remain in its interface design that could induce use errors. Thus, risks that may result in use errors must be identified; then, for each of them, usability goals must be defined taking into account the severity of the potential consequences of the risk[9][4] (for instance, in terms of operator, patient, or environmental safety).

Prioritization of usability goals

For a given tool under evaluation, several usability goals are defined. If some goals are related to safety issues while others are more "comfort of use" usability goals, they will not all require the same level of achievement. For instance, a comfort-of-use usability goal dealing with the ease of browsing the Internet that does not endanger users' safety could require only partial achievement (e.g. 80% of users must succeed in using a function that makes browsing easier, such as a shortcut), while a usability goal concerning a major risk to users' or environmental safety would require total achievement (no error tolerated; e.g. 100% of users must succeed in using a defibrillator on their first trial). For this kind of safety-oriented usability goal, non-achievement reveals that the use of the tool may lead to dramatic consequences. Those goals should be satisfied before any release of the system (for instance, a patient-safety-sensitive health information technology cannot be released if it has been shown to induce errors of use[8][7]). Therefore, the achievement levels of the defined usability goals should be prioritized[5].

Formulation and measure of usability goals

The goals are defined either in a qualitative or a quantitative way[5]. Nonetheless, whatever their nature, they have to be operationally defined. The achievement of qualitative usability goals can be assessed through verbal protocol analysis; the goal is then formulated in terms related to the coding scheme used for the analysis. Such qualitative goals can be turned into quantitative goals to support an objective, quantifiable assessment. This kind of goal can take the shape of: "U% of a sample of the intended user population should express positive comments about a specific function while using the tool", or "less than U% of the sample misinterprets the information provided by a display". Qualitative usability goals assessed through questionnaires can be formulated as: "The average score of the sample of the intended user population for scale S must be over N". Quantitative goals can be assessed by various methods, such as time measurement (for instance, keystroke analysis) or error-rate quantification. They may look like the following[3][10]:


"U% of a sample of the intended user population should accomplish T% of the benchmark tasks within M minutes and with no more than E errors."
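A goal in that template can be checked mechanically against test-session data. The sketch below is illustrative: the record layout and the helper name `goal_met` are assumptions for this example, not part of any cited method.

```python
# Hypothetical session records: fraction of benchmark tasks completed,
# time spent, and error count for each tested user.
sessions = [
    {"tasks_done": 0.9, "minutes": 8,  "errors": 1},
    {"tasks_done": 1.0, "minutes": 6,  "errors": 0},
    {"tasks_done": 0.5, "minutes": 12, "errors": 4},
    {"tasks_done": 0.8, "minutes": 9,  "errors": 2},
]

def goal_met(sessions, u_pct, t_pct, max_minutes, max_errors):
    """Check: at least U% of users accomplish T% of the benchmark tasks
    within M minutes and with no more than E errors."""
    passing = sum(
        s["tasks_done"] * 100 >= t_pct
        and s["minutes"] <= max_minutes
        and s["errors"] <= max_errors
        for s in sessions
    )
    return passing / len(sessions) * 100 >= u_pct

# "75% of users accomplish 80% of the tasks within 10 minutes, <= 2 errors"
print(goal_met(sessions, u_pct=75, t_pct=80, max_minutes=10, max_errors=2))
```

A safety-oriented goal would simply set u_pct to 100 and max_errors to 0, reflecting the "no error tolerated" achievement level discussed above.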


[1] International Organization for Standardization. Ergonomics of human-system interaction – Part 210: Human-centred design for interactive systems (ISO 9241-210). 2010.
[2] Nielsen, Usability Engineering, 1994
[3] Salvemini, A. V. Challenges for user-interface designers of telemedicine systems. Telemedicine Journal, 5 (2), 1999
[4] Van der Peijl, J. et al. Design for risk control: the role of usability engineering in the management of use-related risks. J Biomed Inform (2012), doi:10.1016/j.jbi.2012.03.006
[5] Mayhew. The usability engineering lifecycle: a practitioner's handbook for user interface design. London: Academic Press; 1999
[6] Brender, J. Handbook of evaluation methods for health informatics. Burlington, MA: Elsevier; 2006.
[7] Schertz et al. The redesigned follitropin alfa pen injector: results of the patient and nurse human factors usability testing. Expert Opin Drug Deliv, 2011
[8] Marcilly et al. Patient Safety Oriented Usability Goals: a pilot study. MIE 2013.
[9] Association for the Advancement of Medical Instrumentation. Human factors engineering – design of medical devices (ANSI/AAMI HE75). Arlington, VA: AAMI; 2009.
[10] Smith, E.; Siochi, A. Software usability: requirements by evaluation. In: Human factors perspectives on human-computer interaction. Santa Monica, CA: Human Factors and Ergonomics Society, 1995.

Focus group
A focus group is a form of qualitative research in which a group of people are asked about their perceptions, opinions, beliefs, and attitudes towards a product, service, concept, advertisement, idea, or packaging.[1] Questions are asked in an interactive group setting where participants are free to talk with other group members. The first focus groups were created at the Bureau of Applied Social Research in the USA, by associate director, sociologist Robert K. Merton.[2] The term itself was coined by psychologist and marketing expert Ernest Dichter.[3]

In the world of marketing, focus groups are seen as an important tool for acquiring feedback regarding new products, as well as various other topics. In particular, focus groups allow companies wishing to develop, package, name, or test-market a new product to discuss, view, and/or test the new product before it is made available to the public. This can provide invaluable information about the potential market acceptance of the product. A focus group interview is conducted by a trained moderator among a small group of respondents, in an unstructured and natural way where respondents are free to give views from any aspect. Today, using audience response keypads to collect questionnaire answers is the new industry trend.

Social sciences
In the social sciences and urban planning, focus groups allow interviewers to study people in a more natural setting than a one-to-one interview. In combination with participant observation, they can be used for gaining access to various cultural and social groups, selecting sites to study, sampling of such sites, and raising unexpected issues for exploration. Their main advantage is their fairly low cost compared to surveys, as one can get results relatively quickly and increase the sample size of a report by talking with several people at once.[4]



Usability engineering
In usability engineering, a focus group is a survey method used to collect the views of users on a piece of software or a website. This marketing method can be applied to computer products to better understand the motivations of users and their perception of the product. Unlike other ergonomics methods, a focus group involves several participants: users or future users of the application. A focus group can only collect subjective data, not objective data on the use of the application as, for example, a usability test does.[5] Alan Cooper, in his book The Inmates Are Running the Asylum, suggests that although focus groups might be effective in many industries, they should not be relied upon in the software industry.

Variants of focus groups include:
Two-way focus group - one focus group watches another focus group and discusses the observed interactions and conclusions
Dual moderator focus group - one moderator ensures the session progresses smoothly, while another ensures that all the topics are covered
Dueling moderator focus group - two moderators deliberately take opposite sides on the issue under discussion
Respondent moderator focus group - one and only one of the respondents is asked to act as the moderator temporarily
Client participant focus group - one or more client representatives participate in the discussion, either covertly or overtly
Mini focus group - groups are composed of four or five members rather than 6 to 12
Teleconference focus group - a telephone network is used
Online focus group - computers connected via the Internet are used
Traditional focus groups can provide accurate information and are less expensive than other forms of traditional marketing research. There can be significant costs, however: if a product is to be marketed on a nationwide basis, it would be critical to gather respondents from various locales throughout the country, since attitudes about a new product may vary due to geographical considerations. This would require a considerable expenditure in travel and lodging expenses. Additionally, the site of a traditional focus group may or may not be in a locale convenient to a specific client, so client representatives may have to incur travel and lodging expenses as well.

Group discussion produces data and insights that would be less accessible without the interaction found in a group setting: listening to others' verbalized experiences stimulates memories, ideas, and experiences in participants. This is also known as the group effect, where group members engage in "a kind of chaining or cascading effect; talk links to, or tumbles out of, the topics and expressions preceding it" (Lindlof & Taylor, 2002, p. 182).[6] Group members discover a common language to describe similar experiences, enabling the capture of a form of native language or vernacular speech to understand the situation. Focus groups also provide an opportunity for disclosure among similar others in a setting where participants are validated. For example, in the context of workplace bullying, targeted employees often find themselves in situations where they experience lack of voice and feelings of isolation. The use of focus groups to study workplace bullying therefore serves as both an efficacious and ethical venue for collecting data (see, e.g., Tracy, Lutgen-Sandvik, & Alberts, 2006).[7]



Problems and criticism

Focus groups are "one-shot case studies", especially if they are measuring a property-disposition relationship within the social sciences, unless they are repeated.[8] Focus groups can create severe issues of external validity, especially through the reactive effects of the testing arrangement.[9] A fundamental difficulty with focus groups (and other forms of qualitative research) is the issue of observer dependency: the results obtained are influenced by the researcher or their own reading of the group's discussion, raising questions of validity (see experimenter's bias). Other common (and related) criticisms involve groupthink and social desirability bias. Another issue is the setting itself: if the focus groups are held in a laboratory setting with a moderator who is a professor and the recording instrument is obtrusive, the participants may either hold back on their responses and/or try to give the answers they feel the moderator wants to hear. A further issue with the focus group setting is the lack of anonymity: with all the other participants present, confidentiality cannot be guaranteed, and the reactive effects of the testing arrangement arise again (see above). Douglas Rushkoff[10] argues that focus groups are often useless, and frequently cause more trouble than they are intended to solve, with focus groups often aiming to please rather than offering their own opinions or evaluations, and with data often cherry-picked to support a foregone conclusion. Rushkoff cites the disastrous introduction of New Coke in the 1980s as a vivid example of focus group analysis gone bad. In addition, there is anecdotal evidence of focus groups rebelling: for instance, the name of the Ford Focus was created by a focus group which had grown bored and impatient, and the irony of this was not picked up by the marketing team.
Jonathan Ive, Apple's senior vice president of industrial design, also said that Apple had found a good reason not to do focus groups: "They just ensure that you don't offend anyone, and produce bland inoffensive products."

United States government

The United States federal government makes extensive use of focus groups to assess public education materials and messages for its many programs. While many of these uses are appropriate for the purpose, many others are reluctant compromises which federal officials have had to make, independent of whether a focus group is the best or even an appropriate methodology.

Swedish artist Måns Wrange has used the concept of the focus group in his work The Good Rumor Project.[11] In this instance the focus group situation is used not only as a means to investigate the opinions of the group members, but also to spread an idea (the rumor) across society with the help of the group members.

[1] Henderson, Naomi R. (2009). Managing Moderator Stress: Take a Deep Breath. You Can Do This!. Marketing Research, Vol. 21 Issue 1, pp. 28–29.
[2] Michael T. Kaufman (February 24, 2003). "Robert K. Merton, Versatile Sociologist and Father of the Focus Group, Dies at 92" (http://www.nytimes.com/2003/02/24/nyregion/robert-k-merton-versatile-sociologist-and-father-of-the-focus-group-dies-at-92.html). The New York Times.
[3] Lynne Ames (August 2, 1998). "The View From/Peekskill; Tending the Flame of a Motivator" (http://www.nytimes.com/1998/08/02/nyregion/the-view-from-peekskill-tending-the-flame-of-a-motivator.html). The New York Times.
[4] Marshall, Catherine and Gretchen B. Rossman. 1999. Designing Qualitative Research. 3rd Ed. London: Sage Publications, p. 115.
[5] Jakob Nielsen (1993). Usability Engineering. Academic Press, Boston.
[6] Lindlof, T. R., & Taylor, B. C. (2002). Qualitative Communication Research Methods, 2nd Edition. Thousand Oaks, CA: Sage.
[7] Tracy, S. J., Lutgen-Sandvik, P., & Alberts, J. K. (2006). Nightmares, demons and slaves: Exploring the painful metaphors of workplace bullying. Management Communication Quarterly, 20, 148–185.

[8] Frankfort-Nachmias, Chava; Nachmias, David. 2008. Research Methods in the Social Sciences, Seventh Edition. New York, NY: Worth Publishers.
[9] Campbell, Donald T.; Stanley, Julian C. Experimental and Quasi-Experimental Designs for Research. Chicago, IL: Rand McNally.
[10] Rushkoff, Douglas. Get Back in the Box: Innovation from the Inside Out. New York: Collins, 2005.
[11] http://www.manswrange.com


External links
External links
Focus Groups
The British Market Research Association - the industry body governing focus groups in the UK
Focus Group Principles (archived) - American Marketing Association
Dos and don'ts for using marketing focus groups (marketing/market_research/dos_and_donts_for_using_marketing_focus_groups.mspx) - Microsoft
Focus Brands - Indian business consulting company

Cognitive walkthrough
The cognitive walkthrough method is a usability inspection method used to identify usability issues in a piece of software or web site, focusing on how easy it is for new users to accomplish tasks with the system. Cognitive walkthrough is task-specific, whereas heuristic evaluation takes a holistic view to catch problems not caught by this and other usability inspection methods. The method is rooted in the notion that users typically prefer to learn a system by using it to accomplish tasks, rather than, for example, studying a manual. The method is prized for its ability to generate results quickly with low cost, especially when compared to usability testing, as well as the ability to apply the method early in the design phases, before coding has even begun.

A cognitive walkthrough starts with a task analysis that specifies the sequence of steps or actions required by a user to accomplish a task, and the system responses to those actions. The designers and developers of the software then walk through the steps as a group, asking themselves a set of questions at each step. Data is gathered during the walkthrough, and afterwards a report of potential issues is compiled. Finally the software is redesigned to address the issues identified. The effectiveness of methods such as cognitive walkthroughs is hard to measure in applied settings, as there is very limited opportunity for controlled experiments while developing software. Typically measurements involve comparing the number of usability problems found by applying different methods. However, Gray and Salzman called into question the validity of those studies in their dramatic 1998 paper "Damaged Merchandise", demonstrating how very difficult it is to measure the effectiveness of usability inspection methods. The consensus in the usability community is that the cognitive walkthrough method works well in a variety of settings and applications.



Walking through the tasks

After the task analysis has been made, the participants perform the walkthrough by asking themselves a set of questions for each subtask. Typically four questions are asked[1]:
1. Will the user try to achieve the effect that the subtask has? Does the user understand that this subtask is needed to reach the user's goal?
2. Will the user notice that the correct action is available? E.g. is the button visible?
3. Will the user understand that the wanted subtask can be achieved by the action? E.g. the right button is visible but the user does not understand the text and will therefore not click on it.
4. Does the user get feedback? Will the user know that they have done the right thing after performing the action?
By answering the questions for each subtask, usability problems will be noticed.
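The bookkeeping for a walkthrough can be as simple as recording a yes/no answer, plus a note, for each of the four questions at every step; anything answered "no" becomes a potential usability problem. A minimal sketch, where the task, steps, and answers are all invented for illustration:

```python
# Abbreviated forms of the four walkthrough questions described above.
QUESTIONS = [
    "Will the user try to achieve the subtask's effect?",
    "Will the user notice the correct action is available?",
    "Will the user understand the action achieves the subtask?",
    "Does the user get feedback?",
]

# Hypothetical walkthrough record for a "save a document" task:
# one (answer, note) pair per question, for each step.
walkthrough = {
    "open File menu": [(True, ""), (True, ""), (True, ""), (True, "")],
    "choose Save As": [(True, ""), (False, "item buried in submenu"),
                       (True, ""), (True, "")],
}

def potential_problems(walkthrough):
    """Collect every 'no' answer as a (step, question, note) problem."""
    problems = []
    for step, answers in walkthrough.items():
        for question, (ok, note) in zip(QUESTIONS, answers):
            if not ok:
                problems.append((step, question, note))
    return problems

for step, question, note in potential_problems(walkthrough):
    print(f"{step}: {question} -> {note}")
```

The resulting list of "no" answers is what goes into the report of potential issues compiled after the walkthrough.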

Common mistakes
In teaching people to use the walkthrough method, Lewis & Rieman have found two common misunderstandings[2]:
1. The evaluator doesn't know how to perform the task themselves, so they stumble through the interface trying to discover the correct sequence of actions, and then evaluate the stumbling process. (Instead, the evaluator should identify and perform the optimal action sequence.)
2. The walkthrough does not test real users on the system. The walkthrough will often identify many more problems than would be found with a single, unique user in a single test session.

The method was developed in the early nineties by Wharton, et al., and reached a large usability audience when it was published as a chapter in Jakob Nielsen's seminal book on usability, "Usability Inspection Methods." The Wharton, et al. method required asking four questions at each step, along with extensive documentation of the analysis. In 2000 there was a resurgence in interest in the method in response to a CHI paper by Spencer who described modifications to the method to make it effective in a real software development setting. Spencer's streamlined method required asking only two questions at each step, and involved creating less documentation. Spencer's paper followed the example set by Rowley, et al. who described the modifications to the method that they made based on their experience applying the methods in their 1992 CHI paper "The Cognitive Jogthrough".

[1] C. Wharton et al. "The cognitive walkthrough method: a practitioner's guide" in J. Nielsen & R. Mack, "Usability Inspection Methods", pp. 105–140.
[2] http://hcibib.org/tcuid/chap-4.html#4-1

Further reading
Blackmon, M. H., Polson, P. G., Muneo, K., & Lewis, C. (2002) Cognitive Walkthrough for the Web. CHI 2002, vol. 4, no. 1, pp. 463–470.
Blackmon, M. H., Polson, P. G., Kitajima, M. (2003) Repairing Usability Problems Identified by the Cognitive Walkthrough for the Web (LSA-Handbook-Ch18.pdf). CHI 2003, pp. 497–504.
Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-Computer Interaction (3rd ed.). Harlow, England: Pearson Education Limited. p. 321.
Gabrielli, S., Mirabella, V., Kimani, S., Catarci, T. (2005) Supporting Cognitive Walkthrough with Video Data: A Mobile Learning Evaluation Study. MobileHCI '05, pp. 77–82.

Goillau, P., Woodward, V., Kelly, C., & Banks, G. (1998) Evaluation of virtual prototypes for air traffic control – the MACAW technique. In M. Hanson (Ed.), Contemporary Ergonomics 1998.
Good, N. S. & Krekelberg, A. (2003) Usability and Privacy: a study of KaZaA P2P file-sharing. CHI 2003, vol. 5, no. 1, pp. 137–144.
Gray, W. & Salzman, M. (1998). Damaged merchandise? A review of experiments that compare usability evaluation methods. Human-Computer Interaction, vol. 13, no. 3, pp. 203–261.
Gray, W. D. & Salzman, M. C. (1998) Repairing Damaged Merchandise: A rejoinder. Human-Computer Interaction, vol. 13, no. 3, pp. 325–335.
Hornbaek, K. & Frokjaer, E. (2005) Comparing Usability Problems and Redesign Proposals as Input to Practical Systems Development. CHI 2005, pp. 391–400.
Jeffries, R., Miller, J. R., Wharton, C., Uyeda, K. M. (1991) User Interface Evaluation in the Real World: A Comparison of Four Techniques. Conference on Human Factors in Computing Systems, pp. 119–124.
Lewis, C., Polson, P., Wharton, C., & Rieman, J. (1990) Testing a Walkthrough Methodology for Theory-Based Design of Walk-Up-and-Use Interfaces. CHI '90 Proceedings, pp. 235–242.
Mahatody, Thomas / Sagar, Mouldi / Kolski, Christophe (2010). State of the Art on the Cognitive Walkthrough Method, Its Variants and Evolutions. International Journal of Human-Computer Interaction, 2 (8), 741–785.
Rowley, David E., and Rhoades, David G. (1992). The Cognitive Jogthrough: A Fast-Paced User Interface Evaluation Procedure. Proceedings of CHI '92, pp. 389–395.
Sears, A. (1998) The Effect of Task Description Detail on Evaluator Performance with Cognitive Walkthroughs. CHI 1998, pp. 259–260.
Spencer, R. (2000) The Streamlined Cognitive Walkthrough Method, Working Around Social Constraints Encountered in a Software Development Company. CHI 2000, vol. 2, issue 1, pp. 353–359.
Wharton, C., Bradford, J., Jeffries, J., Franzke, M. Applying Cognitive Walkthroughs to More Complex User Interfaces: Experiences, Issues and Recommendations. CHI '92, pp. 381–388.


External links
Cognitive Walkthrough

Heuristic evaluation


A heuristic evaluation is a usability inspection method for computer software that helps to identify usability problems in the user interface (UI) design. It specifically involves evaluators examining the interface and judging its compliance with recognized usability principles (the "heuristics"). These evaluation methods are now widely taught and practiced in the New Media sector, where UIs are often designed in a short space of time on a budget that may restrict the amount of money available to provide for other types of interface testing.

The main goal of heuristic evaluations is to identify any problems associated with the design of user interfaces. Usability consultant Jakob Nielsen developed this method on the basis of several years of experience in teaching and consulting about usability engineering. Heuristic evaluations are one of the most informal methods[1] of usability inspection in the field of human-computer interaction. There are many sets of usability design heuristics; they are not mutually exclusive and cover many of the same aspects of user interface design. Quite often, the usability problems that are discovered are categorized, often on a numeric scale, according to their estimated impact on user performance or acceptance. Often the heuristic evaluation is conducted in the context of use cases (typical user tasks), to provide feedback to the developers on the extent to which the interface is likely to be compatible with the intended users' needs and preferences. The simplicity of heuristic evaluation is beneficial at the early stages of design. This usability inspection method does not require user testing, which can be burdensome due to the need for users, a place to test them, and payment for their time. Heuristic evaluation requires only one expert, reducing the complexity and time expended on evaluation. Most heuristic evaluations can be accomplished in a matter of days. The time required varies with the size of the artifact, its complexity, the purpose of the review, the nature of the usability issues that arise in the review, and the competence of the reviewers. Using heuristic evaluation prior to user testing will reduce the number and severity of design errors discovered by users. Although heuristic evaluation can uncover many major usability issues in a short period of time, a criticism that is often leveled is that results are highly influenced by the knowledge of the expert reviewer(s). Such a one-sided review repeatedly produces different results than software performance testing, each type of testing uncovering a different set of problems.
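When discovered problems are rated on a numeric severity scale, a common practice is to have several evaluators rate each problem independently and average the ratings to prioritize fixes. The 0–4 scale below follows Nielsen's severity-rating convention; the problems and ratings themselves are invented for illustration:

```python
# Hypothetical severity ratings (0 = not a problem ... 4 = usability
# catastrophe) given independently by three evaluators for each problem.
ratings = {
    "no undo on delete":        [4, 3, 4],
    "inconsistent button text": [2, 2, 1],
    "help hard to find":        [1, 2, 2],
}

# Average the independent ratings and list the worst problems first.
prioritized = sorted(
    ((sum(r) / len(r), problem) for problem, r in ratings.items()),
    reverse=True,
)
for severity, problem in prioritized:
    print(f"{severity:.2f}  {problem}")
```

Averaging across evaluators matters because, as noted above, any single reviewer's findings are strongly shaped by their individual knowledge.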

Nielsen's heuristics
Jakob Nielsen's heuristics are probably the most-used usability heuristics for user interface design. Nielsen developed the heuristics based on work together with Rolf Molich in 1990.[1][2] The final set of heuristics that are still used today was released by Nielsen in 1994.[3] The heuristics, as published in Nielsen's book Usability Engineering, are as follows:[4]
Visibility of system status: The system should always keep users informed about what is going on, through appropriate feedback within reasonable time.
Match between system and the real world: The system should speak the user's language, with words, phrases and concepts familiar to the user, rather than system-oriented terms. Follow real-world conventions, making information appear in a natural and logical order.
User control and freedom: Users often choose system functions by mistake and will need a clearly marked "emergency exit" to leave the unwanted state without having to go through an extended dialogue. Support undo and redo.

Heuristic evaluation Consistency and standards: Users should not have to wonder whether different words, situations, or actions mean the same thing. Follow platform conventions. Error prevention: Even better than good error messages is a careful design which prevents a problem from occurring in the first place. Either eliminate error-prone conditions or check for them and present users with a confirmation option before they commit to the action. Recognition rather than recall: Minimize the user's memory load by making objects, actions, and options visible. The user should not have to remember information from one part of the dialogue to another. Instructions for use of the system should be visible or easily retrievable whenever appropriate. Flexibility and efficiency of use: Acceleratorsunseen by the novice usermay often speed up the interaction for the expert user such that the system can cater to both inexperienced and experienced users. Allow users to tailor frequent actions. Aesthetic and minimalist design: Dialogues should not contain information which is irrelevant or rarely needed. Every extra unit of information in a dialogue competes with the relevant units of information and diminishes their relative visibility. Help users recognize, diagnose, and recover from errors: Error messages should be expressed in plain language (no codes), precisely indicate the problem, and constructively suggest a solution. Help and documentation: Even though it is better if the system can be used without documentation, it may be necessary to provide help and documentation. Any such information should be easy to search, focused on the user's task, list concrete steps to be carried out, and not be too large.
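The severity-rating practice mentioned earlier, where each discovered problem is scored on a numeric scale against the heuristic it violates, can be sketched as a small data model. The Finding class, the 0-4 scale, and the sample findings below are illustrative assumptions, not part of Nielsen's published method (though a 0-4 severity scale is commonly associated with it).

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class Finding:
    """One usability problem filed during a heuristic evaluation."""
    heuristic: str    # which heuristic the problem violates
    description: str
    severity: int     # 0 (not a problem) .. 4 (usability catastrophe)

def mean_severity(findings, heuristic):
    """Average estimated impact of the problems filed under one heuristic."""
    scores = [f.severity for f in findings if f.heuristic == heuristic]
    return mean(scores) if scores else 0.0

# Hypothetical findings from one evaluator's pass over an interface.
findings = [
    Finding("Visibility of system status", "No progress bar on upload", 3),
    Finding("Visibility of system status", "Save fails silently", 4),
    Finding("Error prevention", "Delete has no confirmation step", 4),
]

print(mean_severity(findings, "Visibility of system status"))  # 3.5
```

In practice, findings like these from several evaluators are merged and the severity scores averaged per problem before prioritizing fixes.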


Gerhardt-Powals cognitive engineering principles

Although Nielsen is considered the expert and field leader in heuristics, Jill Gerhardt-Powals[5] also developed a set of cognitive engineering principles for enhancing human-computer performance.[6] These heuristics, or principles, are similar to Nielsen's heuristics but take a more holistic approach to evaluation. Gerhardt-Powals' principles[7] are listed below.

1. Automate unwanted workload: free cognitive resources for high-level tasks; eliminate mental calculations, estimations, comparisons, and unnecessary thinking.
2. Reduce uncertainty: display data in a manner that is clear and obvious.
3. Fuse data: reduce cognitive load by bringing together lower-level data into a higher-level summation.
4. Present new information with meaningful aids to interpretation: use a familiar framework and everyday terms, metaphors, etc., making the information easier to absorb.
5. Use names that are conceptually related to function: context-dependent; attempt to improve recall and recognition.
6. Group data in consistently meaningful ways to decrease search time.
7. Limit data-driven tasks: reduce the time spent assimilating raw data; make appropriate use of color and graphics.
8. Include in the displays only the information needed by the user at a given time.
9. Provide multiple coding of data when appropriate.
10. Practice judicious redundancy.


Weinschenk and Barker classification

Susan Weinschenk and Dean Barker created a categorization of heuristics and guidelines from several major providers into the following twenty types:[8]

1. User Control: heuristics that check whether the user has enough control of the interface.
2. Human Limitations: the design takes into account human limitations, cognitive and sensorial, to avoid overloading them.
3. Modal Integrity: the interface uses the most suitable modality for each task: auditory, visual, or motor/kinesthetic.
4. Accommodation: the design is adequate to fulfill the needs and behaviour of each targeted user group.
5. Linguistic Clarity: the language used to communicate is efficient and adequate to the audience.
6. Aesthetic Integrity: the design is visually attractive and tailored to appeal to the target population.
7. Simplicity: the design does not use unnecessary complexity.
8. Predictability: users are able to form a mental model of how the system will behave in response to actions.
9. Interpretation: there are codified rules that try to guess the user's intentions and anticipate the actions needed.
10. Accuracy: there are no errors, i.e. the results of user actions correspond to their goals.
11. Technical Clarity: the concepts represented in the interface have the highest possible correspondence to the domain they are modeling.
12. Flexibility: the design can be adjusted to the needs and behaviour of each particular user.
13. Fulfillment: the user experience is adequate.
14. Cultural Propriety: the user's cultural and social expectations are met.
15. Suitable Tempo: the pace at which the user works with the system is adequate.
16. Consistency: different parts of the system have the same style, so that there are no different ways to represent the same information or behavior.
17. User Support: the design supports learning and provides the required assistance to usage.
18. Precision: the steps and results of a task are what the user wants.
19. Forgiveness: the user is able to recover to an adequate state after an error.
20. Responsiveness: the interface provides enough feedback information about the system status and the task completion.

References


[1] Nielsen, J., and Molich, R. (1990). Heuristic evaluation of user interfaces. Proc. ACM CHI'90 Conf. (Seattle, WA, 1-5 April), 249-256.
[2] Molich, R., and Nielsen, J. (1990). Improving a human-computer dialogue. Communications of the ACM 33, 3 (March), 338-348.
[3] Nielsen, J. (1994). Heuristic evaluation. In Nielsen, J., and Mack, R.L. (Eds.), Usability Inspection Methods. John Wiley & Sons, New York, NY.
[4] Nielsen, Jakob (1994). Usability Engineering. San Diego: Academic Press. pp. 115-148. ISBN 0-12-518406-9.
[5] http://loki.stockton.edu/~gerhardj/
[6] Gerhardt-Powals, Jill (1996). "Cognitive engineering principles for enhancing human-computer performance". International Journal of Human-Computer Interaction 8 (2): 189-211.
[7] Heuristic Evaluation - Usability Methods: What is a heuristic evaluation? (http://usability.gov/methods/test_refine/heuristic.html#WhatisaHeuristicEvaluation)
[8] Jeff Sauro. "What's the difference between a Heuristic Evaluation and a Cognitive Walkthrough?" (http://www.measuringusability.com/blog/he-cw.php)

Further reading
Dix, A., Finlay, J., Abowd, G. D., & Beale, R. (2004). Human-Computer Interaction (3rd ed.). Harlow, England: Pearson Education Limited. p. 324.
Gerhardt-Powals, Jill (1996). Cognitive engineering principles for enhancing human-computer performance. International Journal of Human-Computer Interaction, 8(2), 189-211.
Hvannberg, E., Law, E., & Lárusdóttir, M. (2007). Heuristic evaluation: Comparing ways of finding and reporting usability problems. Interacting with Computers, 19(2), 225-240.
Nielsen, J., and Mack, R.L. (eds.) (1994). Usability Inspection Methods. John Wiley & Sons Inc.

External links
Jakob Nielsen's introduction to heuristic evaluation, including fundamental points, methodologies and benefits.
First Principles of Interaction Design (Tognazzini), including Jakob Nielsen's ten rules of thumb.
Heuristic Evaluation in the RKBExplorer.



RITE Method
RITE Method, for Rapid Iterative Testing and Evaluation,[1] typically referred to as "RITE" testing, is an iterative usability method. It was defined by Michael Medlock, Dennis Wixon, Bill Fulton, Mark Terrano and Ramon Romero, and has been publicly championed by Dennis Wixon[2] while working in the games space for Microsoft. It has many similarities to "traditional"[3] or "discount"[4] usability testing: the tester and team must define a target population for testing, schedule participants to come into the lab, decide how the users' behaviors will be measured, construct a test script and have participants engage in a verbal protocol (e.g. think aloud). However, it differs from these methods in that it advocates making changes to the user interface as soon as a problem is identified and a solution is clear; sometimes this can occur after observing as few as one participant. Once the data for a participant has been collected, the usability engineer and team decide whether to change the prototype before the next participant. The changed interface is then tested with the remaining users. RITE was initially documented as being used in the PC games business, but in truth it has probably been in use "unofficially" since designers started prototyping products and watching users work with the prototypes. Since its official definition and naming, its use has rapidly expanded to many other software industries.[5]
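The RITE cycle described above (observe a participant, fix any problem whose solution is clear, then test the changed prototype on the next participant) can be sketched as a loop. The function names and the toy defect model below are hypothetical illustrations, not taken from the method's defining papers.

```python
def rite_study(prototype, participants, run_session, apply_fix):
    """Minimal RITE loop: change the prototype between participants
    as soon as a problem is identified and a solution is clear."""
    issue_log = []
    for participant in participants:
        issues = run_session(prototype, participant)     # observe one user
        issue_log.extend(issues)
        for issue in issues:
            if issue["solution_clear"]:                  # fix immediately,
                prototype = apply_fix(prototype, issue)  # before the next user
    return prototype, issue_log

# Toy stand-ins: the prototype is a set of known defects, a session
# "finds" every defect still present, and a fix removes one defect.
def run_session(prototype, participant):
    return [{"defect": d, "solution_clear": True} for d in sorted(prototype)]

def apply_fix(prototype, issue):
    return prototype - {issue["defect"]}

final, log = rite_study({"confusing label", "hidden button"},
                        ["P1", "P2"], run_session, apply_fix)
print(final, len(log))  # set() 2: both defects fixed after participant 1
```

The contrast with traditional testing is visible in the control flow: a classic study would collect `issue_log` across all participants first and only then change the prototype.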

[1] Medlock, M.C., Wixon, D., Terrano, M., Romero, R., and Fulton, B. (2002). Using the RITE method to improve products: A definition and a case study. Presented at the Usability Professionals Association 2002, Orlando, Florida. (http://download.microsoft.com/download/5/c/c/5cc406a0-0f87-4b94-bf80-dbc707db4fe1/mgsut_MWTRF02.doc.doc)
[2] Wixon, Dennis (2003). Evaluating usability methods: why the current literature fails the practitioner. Interactions 10 (4), July-August 2003.
[3] Dumas, J., and Redish, J.C. (1993). A Practical Guide to Usability Testing. Ablex, Norwood, NJ.
[4] Nielsen, Jakob (1989). Usability engineering at a discount. Proceedings of the Third International Conference on Human-Computer Interaction on Designing and Using Human-Computer Interfaces and Knowledge Based Systems (2nd ed.), pp. 394-401, September 1989, Boston, Massachusetts, United States.
[5] Medlock, M.C., Wixon, D., McGee, M., & Welsh, D. (2005). The Rapid Iterative Test and Evaluation Method: Better products in less time. In Bias, R., & Mayhew, D. (Eds.), Cost Justifying Usability (pp. 489-517). San Francisco: Morgan Kaufmann.



Think aloud protocol

Think-aloud protocol (or think-aloud protocols, or TAP; also talk-aloud protocol) is a method used to gather data in usability testing in product design and development, in psychology and a range of social sciences (e.g., reading, writing, translation research, decision making, and process tracing). The think-aloud method was introduced in the usability field by Clayton Lewis[1] while he was at IBM, and is explained in Task-Centered User Interface Design: A Practical Introduction by C. Lewis and J. Rieman.[2] The method was developed based on the techniques of protocol analysis by Ericsson and Simon.[3][4][5]

Think-aloud protocols involve participants thinking aloud as they perform a set of specified tasks. Users are asked to say whatever they are looking at, thinking, doing, and feeling as they go about their task. This enables observers to see first-hand the process of task completion (rather than only its final product). Observers at such a test are asked to objectively take notes of everything that users say, without attempting to interpret their actions and words. Test sessions are often audio- and video-recorded so that developers can go back and refer to what participants did and how they reacted. The purpose of this method is to make explicit what is implicitly present in subjects who are able to perform a specific task.

A related but slightly different data-gathering method is the talk-aloud protocol, which involves participants only describing their actions but not giving explanations. This method is thought to be more objective in that participants merely report how they go about completing a task rather than interpreting or justifying their actions (see the standard works by Ericsson & Simon). As Kuusela and Paul[6] state, the think-aloud protocol can be divided into two different experimental procedures: the concurrent think-aloud protocol, collected during the decision task, and the retrospective think-aloud protocol, gathered after the decision task.
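The concurrent/retrospective split can be made concrete with a toy protocol-analysis tally. Everything here (the utterance records, the field names, and the coding into two phases) is an illustrative sketch, not a standard instrument from the protocol-analysis literature.

```python
# A coded think-aloud transcript: each utterance is timestamped and
# tagged with the experimental procedure it came from.
utterances = [
    {"t": 12.4,  "phase": "concurrent",    "text": "I'm looking for the search box"},
    {"t": 31.0,  "phase": "concurrent",    "text": "Not sure what this icon does"},
    {"t": 310.2, "phase": "retrospective", "text": "At that point I felt lost"},
]

def tally_by_phase(utterances):
    """Count utterances per experimental procedure."""
    counts = {}
    for u in utterances:
        counts[u["phase"]] = counts.get(u["phase"], 0) + 1
    return counts

print(tally_by_phase(utterances))  # {'concurrent': 2, 'retrospective': 1}
```

A real analysis would go on to code each utterance against the task being performed; the point here is only that the two procedures yield separately analyzable streams of verbal data.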

[1] Lewis, C. H. (1982). Using the "Thinking Aloud" Method in Cognitive Interface Design (Technical report RC-9265). IBM.
[2] Lewis, Clayton, and Rieman, John. Task-Centered User Interface Design: A Practical Introduction. (http://grouplab.cpsc.ucalgary.ca/saul/hci_topics/tcsd-book/chap-1_v-1.html)
[3] Ericsson, K., & Simon, H. (May 1980). "Verbal reports as data". Psychological Review 87 (3): 215-251. doi:10.1037/0033-295X.87.3.215.
[4] Ericsson, K., & Simon, H. (1987). "Verbal reports on thinking". In C. Faerch & G. Kasper (eds.), Introspection in Second Language Research. Clevedon, Avon: Multilingual Matters. pp. 24-54.
[5] Ericsson, K., & Simon, H. (1993). Protocol Analysis: Verbal Reports as Data (2nd ed.). Boston: MIT Press. ISBN 0-262-05029-3.
[6] Kuusela, H., & Paul, P. (2000). "A comparison of concurrent and retrospective verbal protocol analysis". American Journal of Psychology (University of Illinois Press) 113 (3): 387-404. doi:10.2307/1423365. JSTOR 1423365. PMID 10997234.


2. User Interface engineering

User interface design
User interface design or user interface engineering is the design of computers, appliances, machines, mobile communication devices, software applications, and websites with a focus on the user's experience and interaction. The goal of user interface design is to make the user's interaction as simple and efficient as possible in terms of accomplishing user goals (an approach often called user-centered design). Good user interface design facilitates finishing the task at hand without drawing unnecessary attention to itself. Graphic design may be utilized to support its usability. The design process must balance technical functionality and visual elements (e.g., mental model) to create a system that is not only operational but also usable and adaptable to changing user needs. Interface design is involved in a wide range of projects, from computer systems to cars to commercial planes; all of these projects involve much of the same basic human interactions yet also require some unique skills and knowledge. As a result, designers tend to specialize in certain types of projects and have skills centered around their expertise, whether that be software design, user research, web design, or industrial design.

There are several phases and processes in user interface design, some of which are more demanded than others, depending on the project. (Note: for the remainder of this section, the word system is used to denote any project, whether it is a web site, application, or device.)

Functionality requirements gathering: assembling a list of the functionality required by the system to accomplish the goals of the project and the potential needs of the users.
User analysis: analysis of the potential users of the system, either through discussion with people who work with the users and/or the potential users themselves. Typical questions involve: What would the user want the system to do? How would the system fit in with the user's normal workflow or daily activities? How technically savvy is the user and what similar systems does the user already use? What interface look & feel styles appeal to the user?
Information architecture: development of the process and/or information flow of the system (e.g. for phone tree systems, an option-tree flowchart; for web sites, a site flow that shows the hierarchy of the pages).
Prototyping: development of wireframes, either in the form of paper prototypes or simple interactive screens. These prototypes are stripped of all look & feel elements and most content in order to concentrate on the interface.
Usability testing: testing of the prototypes on an actual user, often using a technique called think-aloud protocol, where you ask the user to talk about their thoughts during the experience.
Graphic interface design: actual look & feel design of the final graphical user interface (GUI). It may be based on the findings developed during the usability testing if usability is unpredictable, or based on communication objectives and styles that would appeal to the user. In rare cases, the graphics may drive the prototyping, depending on the importance of visual form versus function. If the interface requires multiple skins, there may be multiple interface designs for one control panel, functional feature or widget. This phase is often a collaborative effort between a graphic designer and a user interface designer, or handled by one who is proficient in both disciplines.

User interface design requires a good understanding of user needs.


The dynamic characteristics of a system are described in terms of the dialogue requirements contained in seven principles of part 10 of the ergonomics standard ISO 9241. This standard establishes a framework of ergonomic "principles" for dialogue techniques, with high-level definitions and illustrative applications and examples of the principles. The principles of the dialogue represent the dynamic aspects of the interface and can mostly be regarded as the "feel" of the interface. The seven dialogue principles are:

Suitability for the task: the dialogue is suitable for a task when it supports the user in the effective and efficient completion of the task.
Self-descriptiveness: the dialogue is self-descriptive when each dialogue step is immediately comprehensible through feedback from the system or is explained to the user on request.
Controllability: the dialogue is controllable when the user is able to initiate and control the direction and pace of the interaction until the point at which the goal has been met.
Conformity with user expectations: the dialogue conforms with user expectations when it is consistent and corresponds to the user characteristics, such as task knowledge, education, experience, and to commonly accepted conventions.
Error tolerance: the dialogue is error tolerant if, despite evident errors in input, the intended result may be achieved with either no or minimal action by the user.
Suitability for individualization: the dialogue is capable of individualization when the interface software can be modified to suit the task needs, individual preferences, and skills of the user.
Suitability for learning: the dialogue is suitable for learning when it supports and guides the user in learning to use the system.

The concept of usability is defined in the ISO 9241 standard by the effectiveness, efficiency, and satisfaction of the user.
Part 11 gives the following definition of usability, measured by:

The extent to which the intended goals of use of the overall system are achieved (effectiveness).
The resources that have to be expended to achieve the intended goals (efficiency).
The extent to which the user finds the overall system acceptable (satisfaction).

Effectiveness, efficiency, and satisfaction can be seen as quality factors of usability. To evaluate these factors, they need to be decomposed into sub-factors, and finally into usability measures.

The presentation of information is described in Part 12 of the ISO 9241 standard: the organization of information (arrangement, alignment, grouping, labels, location), the display of graphical objects, and the coding of information (abbreviation, color, size, shape, visual cues) by seven attributes. The "attributes of presented information" represent the static aspects of the interface and can generally be regarded as the "look" of the interface. The attributes are detailed in the recommendations given in the standard; each recommendation supports one or more of the seven attributes. The seven presentation attributes are:

Clarity: the information content is conveyed quickly and accurately.
Discriminability: the displayed information can be distinguished accurately.
Conciseness: users are not overloaded with extraneous information.
Consistency: a unique design, conformity with users' expectations.
Detectability: the user's attention is directed towards the information required.
Legibility: information is easy to read.
Comprehensibility: the meaning is clearly understandable, unambiguous, interpretable, and recognizable.
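One way to operationalize the three Part 11 usability factors is sketched below. The standard does not mandate these exact formulas; task completion rate, completions per minute, and a mean questionnaire rating are an assumed, though common, choice of measures.

```python
# Illustrative computation of the three ISO 9241-11 usability factors
# for one test session. These operationalizations are assumptions,
# not formulas prescribed by the standard.

def effectiveness(tasks_completed, tasks_attempted):
    """Share of intended goals achieved."""
    return tasks_completed / tasks_attempted

def efficiency(tasks_completed, total_minutes):
    """Goals achieved per unit of resource expended (here: time)."""
    return tasks_completed / total_minutes

def satisfaction(ratings):
    """Mean of post-test acceptability ratings (e.g. a 1-5 scale)."""
    return sum(ratings) / len(ratings)

print(effectiveness(8, 10))     # 0.8
print(efficiency(8, 40))        # 0.2 tasks per minute
print(satisfaction([4, 5, 3]))  # 4.0
```

Decomposing each factor further (per task, per user group, per error type) yields the sub-factors and usability measures the text refers to.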

The user guidance in Part 13 of the ISO 9241 standard describes that user guidance information should be readily distinguishable from other displayed information and should be specific to the current context of use. User guidance can be given by the following five means:

Prompts: indicating explicitly (specific prompts) or implicitly (generic prompts) that the system is available for input.
Feedback: informing about the user's input in a timely, perceptible, and non-intrusive way.
Status information: indicating the continuing state of the application, the system's hardware and software components, and the user's activities.
Error management: including error prevention, error correction, user support for error management, and error messages.
On-line help: for system-initiated and user-initiated requests, with specific information for the current context of use.


Research, past and ongoing

User interface design has been a topic of considerable research, including on its aesthetics.[1] Standards have been developed as far back as the 1980s for defining the usability of software products.[2] One of the structural bases has become the IFIP user interface reference model. The model proposes four dimensions to structure the user interface:

The input/output dimension (the look)
The dialogue dimension (the feel)
The technical or functional dimension (the access to tools and services)
The organizational dimension (the communication and co-operation support)

This model has greatly influenced the development of the international standard ISO 9241 describing the interface design requirements for usability. The desire to understand application-specific UI issues early in software development, even as an application was being developed, led to research on GUI rapid prototyping tools that might offer convincing simulations of how an actual application might behave in production use.[3] Some of this research has shown that a wide variety of programming tasks for GUI-based software can, in fact, be specified through means other than writing program code.[4] Research in recent years is strongly motivated by the increasing variety of devices that can, by virtue of Moore's Law, host very complex interfaces.[5] There is also research on generating user interfaces automatically, to match a user's level of ability for different kinds of interaction.[6]

[1] "The role of context in perceptions of the aesthetics of web pages over time". International Journal of Human-Computer Studies. 2009-01-05. (http://portal.acm.org/citation.cfm?id=1464532.1465384) Retrieved 2009-04-02.
[2] Oppermann, Reinhard (2001). "User-interface design". Institute for Applied Information Technology. (http://fit.fraunhofer.de/~oppi/publications/UserInterfaceLearningSystems.pdf) Retrieved 2010-12-01.
[3] "The HUMANOID model of interface design". Proceedings CHI'92. 1992. (http://citeseer.ist.psu.edu/old/szekely92facilitating.html)
[4] "Creating user interfaces using programming by example, visual programming, and constraints". ACM. 1990-04-11. (http://portal.acm.org/citation.cfm?id=78942.78943) Retrieved 2009-04-02.
[5] "Past, present, and future of user interface software tools". ACM. 2000-03-01. (http://portal.acm.org/citation.cfm?id=344949.344959) Retrieved 2009-04-02.
[6] "SUPPLE: Automatically Generating Personalized User Interfaces". Intelligent Interactive Systems Group (website), Harvard University. 2007-05-07. (http://www.eecs.harvard.edu/~kgajos/research/supple/) Retrieved 2010-07-07.



Interface design
Interface design deals with the process of developing a method for two (or more) modules in a system to connect and communicate. These modules can apply to hardware, software or the interface between a user and a machine.[1][2][3] Examples of user interfaces include a GUI, the control panel of a nuclear power plant,[4] and the cockpit of an aircraft.[5] In systems engineering, all the inputs and outputs of a system, subsystem, and its components are listed in an interface control document, often as part of the requirements of the engineering project.[6] The development of a user interface is a field in its own right; see User interface design for more information.

[1] "CMMI for Development, Version 1.3". Carnegie Mellon. p. 385. (http://www.sei.cmu.edu/reports/10tr033.pdf) Retrieved 28 June 2011.
[2] Sanders, Mark; McCormick, Ernest (1997). Human Factors in Engineering and Design (7th ed.). McGraw-Hill. pp. 11-21. ISBN 0-07-054901-X.
[3] "NASA Software Engineering Requirements". NASA. (http://nodis3.gsfc.nasa.gov/npg_img/N_PR_7150_002A_/N_PR_7150_002A_.pdf) Retrieved 28 June 2011.
[4] Shultz, E.E.; Johnson, G.L. "User interface design in safety parameter display systems: direction for enhancement". Lawrence Livermore National Laboratory. (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=27496) Retrieved 28 June 2011.
[5] Sherry, Lance; Polson, Peter; Feary, Michael. "Designing User-Interfaces for the Cockpit". Society of Automotive Engineers. (http://human-factors.arc.nasa.gov/publications/feary_CockpitUIDesignErrors.pdf) Retrieved 28 June 2011.
[6] "NASA Software Engineering Requirements". NASA. (http://nodis3.gsfc.nasa.gov/npg_img/N_PR_7150_002A_/N_PR_7150_002A_.pdf) Retrieved 28 June 2011.

Human interface guidelines

Human interface guidelines (HIG) are software development documents which offer application developers a set of recommendations. Their aim is to improve the experience for users by making application interfaces more intuitive, learnable, and consistent. Most guides limit themselves to defining a common look and feel for applications in a particular desktop environment. The guides enumerate specific policies. Policies are sometimes based on studies of human-computer interaction (so-called usability studies), but most are based on arbitrary conventions chosen by the platform developers. The central aim of a HIG is to create a consistent experience across the environment (generally an operating system or desktop environment), including the applications and other tools being used. This means both applying the same visual design and creating consistent access to and behaviour of common elements of the interface, from simple ones such as buttons and icons up to more complex constructions, such as dialog boxes. HIGs are recommendations and advice meant to help developers create better applications. Developers sometimes intentionally choose to break them if they think that the guidelines do not fit their application, or usability testing reveals an advantage in doing so. But in turn, the organization publishing the HIG might withhold endorsement of the application. Mozilla Firefox's user interface, for example, goes against the GNOME project's HIG, which is one of the main arguments for including Web instead of Firefox in the GNOME distribution.[1]



Human interface guidelines often describe the visual design rules, including icon and window design and style. Frequently they specify how user input and interaction mechanisms work. Aside from the detailed rules, guidelines sometimes also make broader suggestions about how to organize and design the application and write user-interface text. HIGs are also written for individual applications; in this case the HIG builds on a platform HIG by adding common semantics for a range of application functions.

Cross-platform guidelines
In contrast to platform-specific guidelines, cross-platform guidelines aren't tied to a distinct platform. These guidelines make recommendations which should be true on any platform. Since this isn't always possible, cross-platform guidelines may weigh the compliance against the imposed work load.

Examples of HIG
Android User Interface Guidelines [2]
Apple iOS Human Interface Guidelines [3]
Apple OS X Human Interface Guidelines [4]
Eclipse User Interface Guidelines [5]
elementary OS Human Interface Guidelines [6]
ELMER (guidelines for public forms on the internet)
GNOME Human Interface Guidelines [7]
Haiku Human Interface Guidelines [8]
Java Look and Feel Design Guidelines [9] (Advanced Topics [10])
KDE Human Interface Guidelines [11]
OLPC Human Interface Guidelines [12]
Windows User Experience Interaction Guidelines [13] (for Windows 7 and Windows Vista)
UX guidelines for Windows Store Apps [14] (for Windows 8 and Windows RT)
User Experience Design Guidelines for Windows Phone [15]
wyoGuide [16], a cross-platform HIG

[1] Epiphany/ProjectFAQ: What about Firefox as the default GNOME browser? (http://live.gnome.org/Epiphany/ProjectFAQ#What_about_Firefox_as_the_default_GNOME_browser.3F)
[2] http://developer.android.com/design/index.html
[3] http://developer.apple.com/library/ios/#documentation/UserExperience/Conceptual/MobileHIG/Introduction/Introduction.html
[4] https://developer.apple.com/library/mac/#documentation/UserExperience/Conceptual/AppleHIGuidelines/Intro/Intro.html#//apple_ref/doc/uid/TP30000894-TP6
[5] http://wiki.eclipse.org/User_Interface_Guidelines
[6] http://elementaryos.org/docs/human-interface-guidelines
[7] http://developer.gnome.org/hig-book/stable/
[8] http://api.haiku-os.org/HIG/
[9] http://java.sun.com/products/jlf/ed2/book/index.html
[10] http://java.sun.com/products/jlf/at/book/index.html
[11] http://techbase.kde.org/Projects/Usability/HIG
[12] http://wiki.sugarlabs.org/go/Human_Interface_Guidelines
[13] http://msdn2.microsoft.com/en-us/library/Aa511258.aspx
[14] http://msdn.microsoft.com/en-us/library/windows/apps/hh465424.aspx
[15] http://msdn.microsoft.com/en-us/library/hh202915%28v=VS.92%29.aspx
[16] http://wyoguide.sourceforge.net/guidelines/content.html


3. User Interaction engineering

Interaction design
In design, human-computer interaction, and software development, interaction design, often abbreviated IxD, is "about shaping digital things for people's use",[1] alternately defined as "the practice of designing interactive digital products, environments, systems, and services."[2]:xxxi,1 Like many other design fields, interaction design also has an interest in form, but its main focus is on behavior.[2]:1 What clearly marks interaction design as a design field, as opposed to a science or engineering field, is that it is synthesis and imagining things as they might be, more so than focusing on how things are.[2]:xviii Interaction design is heavily focused on satisfying the needs and desires of the people who will use the product,[2]:xviii whereas other disciplines, like software engineering, have a heavy focus on designing for the technical stakeholders of a project.

The term interaction design was first coined by Bill Moggridge[3] and Bill Verplank in the mid-1980s. It would be another 10 years before other designers rediscovered the term and started using it.[2]:xviii To Verplank, it was an adaptation of the computer science term user interface design to the industrial design profession.[4] To Moggridge, it was an improvement over soft-face, which he had coined in 1984 to refer to the application of industrial design to products containing software.[5] The first academic program officially named Interaction Design was established at Carnegie Mellon University in 1994 as a Master of Design in Interaction Design.[6] When the program started it focused mostly on screen interfaces, but today it focuses more on the big-picture aspects of interaction: people, organizations, culture, service, and system. In 1990, Gillian Crampton-Smith established the Computer-related Design MA at the Royal College of Art (RCA) in London, which later changed its name to Interaction Design.[7] In 2001, she helped found the Interaction Design Institute Ivrea, a small institute in Northern Italy dedicated solely to interaction design; the institute moved to Milan in October 2005 and merged courses with Domus Academy. In 2007, some of the people originally involved with IDII set up the Copenhagen Institute of Interaction Design (CIID). Today, interaction design is taught in many schools worldwide.

Goal-oriented design
Goal-oriented design (or goal-directed design) "is concerned most significantly with satisfying the needs and desires of the people who will interact with a product or service."[2]:xviii Alan Cooper argues in The Inmates Are Running The Asylum that we need a new approach to solving interactive software-based problems.[8]:1 The problems faced in designing computer-based interfaces are fundamentally different from the challenges faced when designing interfaces for products that do not include software (e.g. hammers). Cooper introduces the concept of cognitive friction, whereby we treat things as human when they are complex enough that we cannot always understand how they behave; computer interfaces are sufficiently complex to be treated this way.[8]:22

It is argued that we must truly understand the goals of a user (both personal and objective) in order to solve the problem in the best way possible, and that the current approach is oriented more towards solving individual problems from the perspective of a business or other interested parties.

Personas
Goal-oriented design as explained in The Inmates Are Running The Asylum advocates for the use of personas, which are created after interviewing a significant number of users. The aim of a persona is to "Develop a precise description of our user and what he wishes to accomplish." The method described in the book is to fabricate users with names and back stories who represent real users of a given product. These users are not so much a fabrication as a product of the investigation process. The reason for constructing back stories for a persona is to make them believable, such that they can be treated as real people and their needs can be argued for. Personas also help eliminate idiosyncrasies that may be attributed to a given individual.[8]:93


Cognitive dimensions
The cognitive dimensions framework[9] provides a specialized vocabulary to evaluate and modify particular design solutions. Cognitive dimensions are designed as a lightweight approach to analysis of a design quality, rather than an in-depth, detailed description. They provide a common vocabulary for discussing many factors in notation, UI or programming language design. Dimensions provide high-level descriptions of the interface and how the user interacts with it such as consistency, error-proneness, hard mental operations, viscosity or premature commitment. These concepts aid the creation of new designs from existing ones through design maneuvers that alter the position of the design within a particular dimension.
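The dimensions can be used as a lightweight checklist during design discussion. As a purely illustrative sketch (the framework itself prescribes no numeric scoring; the 1-5 scale and the ratings below are invented), a design can be rated along each dimension and the problematic ones flagged:

```python
# Hypothetical sketch: flagging problematic cognitive dimensions of a design.
# The dimension names come from the framework; the scoring scheme and example
# ratings are invented for illustration only.

DIMENSIONS = [
    "consistency",           # similar semantics expressed in similar forms
    "error-proneness",       # does the notation invite slips and mistakes?
    "hard mental operations",
    "viscosity",             # resistance to local change
    "premature commitment",  # decisions forced before information is available
]

def problematic(ratings):
    """Return the dimensions rated 3 or higher on a (hypothetical) 1-5 scale."""
    return [d for d in DIMENSIONS if ratings.get(d, 0) >= 3]

# A fabricated walkthrough of a form-based interface:
ratings = {"consistency": 2, "viscosity": 4, "premature commitment": 3}
print(problematic(ratings))  # ['viscosity', 'premature commitment']
```

A "design maneuver" then corresponds to changing the design so that its position along one of these dimensions moves, and re-rating.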

Affective interaction design

Throughout the process of interaction design, designers must be aware of key aspects of their designs that influence emotional responses in target users. The need for products to convey positive emotions and avoid negative ones is critical to product success.[10] These aspects include positive, negative, motivational, learning, creative, social and persuasive influences, to name a few. One method that can help convey such aspects is the use of expressive interfaces. In software, for example, the use of dynamic icons, animations and sound can help communicate a state of operation, creating a sense of interactivity and feedback. Interface aspects such as fonts, color palette, and graphical layouts can also influence an interface's perceived effectiveness. Studies have shown that affective aspects can affect a user's perception of usability.[10] Emotional and pleasure theories exist to explain people's responses to the use of interactive products. These include Don Norman's emotional design model, Patrick Jordan's pleasure model, and McCarthy and Wright's Technology as Experience framework.



The Five Dimensions of Interaction Design

The dimensions of interaction design were first introduced in the introduction of the book Designing Interactions, where Gillian Crampton Smith stated that there were four dimensions to an interaction design language.[11] A fifth dimension was later added by Kevin Silver.[12]

1D Words
This dimension defines the interactions: words are the element through which users interact with the system.

2D Visual Representations
The visual representations are the elements of the interface that the user interacts with. These may include, but are not limited to, "typography, diagrams, icons, and other graphics".

3D Physical objects or space

The space with which the user interacts is the third dimension of interaction design. It defines the objects or space "with which or within which users interact".

4D Time
The time during which the user interacts with the interface. Some examples of this are "content that changes over time such as sound, video, or animation".

5D Behavior
Behavior defines how users act on the interface and how the interface responds to those actions.

Related disciplines
Industrial design[13]: The core principles of industrial design overlap with those of interaction design. Industrial designers use their knowledge of physical form, color, aesthetics, human perception and desire, and usability to create a fit between an object and the person using it.

Human factors and ergonomics[13]: Certain basic principles of ergonomics provide grounding for interaction design. These include anthropometry, biomechanics, kinesiology, physiology and psychology as they relate to human behavior in the built environment.

Cognitive psychology[13]: Certain basic principles of cognitive psychology provide grounding for interaction design. These include mental models, mapping, interface metaphors, and affordances. Many of these are laid out in Donald Norman's influential book The Design of Everyday Things.

Human–computer interaction[13]: Academic research in human–computer interaction (HCI) includes methods for describing and testing the usability of interacting with an interface, such as cognitive dimensions and the cognitive walkthrough.

Design research: Interaction designers are typically informed through iterative cycles of user research. User research is used to identify the needs, motivations and behavior of end users. Designers work with an emphasis on user goals and experience, and evaluate designs in terms of usability and affective influence.

Architecture[13]: As interaction designers increasingly deal with ubiquitous computing and urban computing, the architects' ability to make place and create context becomes a point of contact between the disciplines.

User interface design: Like user interface design and experience design, interaction design is often associated with the design of system interfaces in a variety of media, but concentrates on the aspects of the interface that define and present its behavior over time, with a focus on developing the system to respond to the user's experience rather than the other way around.


References
[1] Encyclopedia of Interaction Design (http://interaction-design.org)
[2] Cooper, Alan; Reimann, Robert; Cronin, Dave (2007). About Face 3: The Essentials of Interaction Design (http://books.google.com/books?id=0gdRAAAAMAAJ). Indianapolis, Indiana: Wiley. pp. 610. ISBN 978-0-470-08411-3. Retrieved 18 July 2011.
[3] Integrate business modeling and interaction design (http://www.ibm.com/developerworks/library/ws-soa-busmodeling/index.html)
[4] Bill Verplank home site (http://www.billverplank.com/professional.html)
[5] Moggridge, Bill (2007). Designing Interactions. MIT Press. ISBN 0-262-13474-8.
[6] http://www.design.cmu.edu/show_program.php?s=2&t=3
[7] RCA Design Interactions Website (http://www.interaction.rca.ac.uk)
[8] Cooper, Alan (2004). The Inmates Are Running the Asylum: Why High-Tech Products Drive Us Crazy and How to Restore the Sanity (http://www.amazon.com/dp/0672316498). Sams Publishing. pp. 288. ISBN 0-672-32614-0. Retrieved 17 July 2011.
[9] T. R. G. Green. "Instructions and Descriptions: some cognitive aspects of programming and similar activities" (http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.32.8003).
[10] Sharp, Helen; Rogers, Yvonne; Preece, Jenny (2007). Interaction Design: Beyond Human–Computer Interaction (2nd ed.). John Wiley & Sons. p. 184.
[11] Moggridge, Bill (2007). Designing Interactions. The MIT Press. ISBN 978-0-262-13474-3.
[12] Silver, Kevin. "What Puts the Design in Interaction Design" (http://www.uxmatters.com/mt/archives/2007/07/what-puts-the-design-in-interaction-design.php). UX Matters. Retrieved 6 March 2012.
[13] http://www.interactiondesign.com.au/disciplines-and-domains

Further reading
Bolter, Jay D.; Gromala, Diane (2008). Windows and Mirrors: Interaction Design, Digital Art, and the Myth of Transparency. Cambridge, Massachusetts: MIT Press. ISBN 0-262-02545-0.
Buchenau, Marion; Suri, Jane Fulton (2000). "Experience Prototyping". DIS 2000. ISBN 1-58113-219-0.
Buxton, Bill (2005). Sketching the User Experience. New Riders Press. ISBN 0-321-34475-8.
Dawes, Brendan (2007). Analog In, Digital Out. Berkeley, California: New Riders Press.
Goodwin, Kim (2009). Designing for the Digital Age: How to Create Human-Centered Products and Services. ISBN 978-0-470-22910-1.
Houde, Stephanie; Hill, Charles (1997). "What Do Prototypes Prototype?". In Helander, M.; Landauer, T.; Prabhu, P. Handbook of Human–Computer Interaction (2nd ed.). Elsevier Science.
Jones, Matt; Marsden, Gary (2006). Mobile Interaction Design. John Wiley & Sons. ISBN 0-470-09089-8.
Kolko, Jon (2009). Thoughts on Interaction Design. ISBN 978-0-12-378624-1.
Laurel, Brenda; Lunenfeld, Peter (2003). Design Research: Methods and Perspectives. MIT Press. ISBN 0-262-12263-4.
Tinauli, Musstanser; Pillan, Margherita (2008). "Interaction Design and Experiential Factors: A Novel Case Study on Digital Pen and Paper". Mobility '08: Proceedings of the International Conference on Mobile Technology, Applications, and Systems. New York: ACM. doi:10.1145/1506270.1506400. ISBN 978-1-60558-089-0.
Norman, Donald (1988). The Design of Everyday Things. New York: Basic Books. ISBN 978-0-465-06710-7.
Raskin, Jef (2000). The Humane Interface. ACM Press. ISBN 0-201-37937-6.
Saffer, Dan (2006). Designing for Interaction. New Riders Press. ISBN 0-321-43206-1.



External links
A peer-reviewed encyclopedia, a comprehensive bibliography, and a calendar of interaction design events
Design Patterns in Interaction Design
Designing Interactions: Interviews, conversations with key figures in interaction design
Introducing Interaction Design, Boxes and Arrows ( introducing_interaction_design)

Human–computer interaction
Human–computer interaction (HCI) involves the study, planning, and design of the interaction between people (users) and computers. It is often regarded as the intersection of computer science, behavioral sciences, design and several other fields of study. The term was popularized by Card, Moran, and Newell in their seminal 1983 book The Psychology of Human-Computer Interaction, although the authors first used the term in 1980[1] and the first known use was in 1975[2]. The term connotes that, unlike other tools with only limited uses (such as a hammer, useful for driving nails but not much else), a computer has many affordances for use, and this takes place in an open-ended dialog between the user and the computer.

Human use of computers is a major focus of the field of HCI

Because human–computer interaction studies a human and a machine in conjunction, it draws from supporting knowledge on both the machine and the human side. On the machine side, techniques in computer graphics, operating systems, programming languages, and development environments are relevant. On the human side, communication theory, graphic and industrial design disciplines, linguistics, social sciences, cognitive psychology, and human factors such as computer user satisfaction are relevant. Engineering and design methods are also relevant. Due to the multidisciplinary nature of HCI, people with different backgrounds contribute to its success. HCI is also sometimes referred to as man–machine interaction (MMI) or computer–human interaction (CHI).

Attention to human–machine interaction is important because poorly designed human–machine interfaces can lead to many unexpected problems. A classic example is the Three Mile Island accident, where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster.[3][4][5] Similarly, accidents in aviation have resulted from manufacturers' decisions to use non-standard flight instrument and/or throttle quadrant layouts: even though the new designs were proposed to be superior in regards to basic human–machine interaction, pilots had already internalized the "standard" layout, and thus the conceptually good idea actually had undesirable results.



A basic goal of HCI is to improve the interactions between users and computers by making computers more usable and receptive to the user's needs. Specifically, HCI is concerned with:
- methodologies and processes for designing interfaces (i.e., given a task and a class of users, design the best possible interface within given constraints, optimizing for a desired property such as learnability or efficiency of use)
- methods for implementing interfaces (e.g. software toolkits and libraries; efficient algorithms)
- techniques for evaluating and comparing interfaces
- developing new interfaces and interaction techniques
- developing descriptive and predictive models and theories of interaction

A long-term goal of HCI is to design systems that minimize the barrier between the human's cognitive model of what they want to accomplish and the computer's understanding of the user's task. Professional practitioners in HCI are usually designers concerned with the practical application of design methodologies to real-world problems. Their work often revolves around designing graphical user interfaces and web interfaces. Researchers in HCI are interested in developing new design methodologies, experimenting with new hardware devices, prototyping new software systems, exploring new paradigms for interaction, and developing models and theories of interaction.

Differences with related fields

HCI differs from human factors (or ergonomics) in that HCI focuses more on users working specifically with computers rather than other kinds of machines or designed artifacts. There is also a focus in HCI on how to implement the computer software and hardware mechanisms to support human–computer interaction. Thus, human factors is a broader term; HCI could be described as the human factors of computers, although some experts try to differentiate these areas. HCI also differs from human factors in that there is less focus on repetitive work-oriented tasks and procedures, and much less emphasis on physical stress and the physical form or industrial design of the user interface, such as keyboards and mouse devices.

Three areas of study have substantial overlap with HCI even as the focus of inquiry shifts. In the study of personal information management (PIM), human interactions with the computer are placed in a larger informational context: people may work with many forms of information, some computer-based and many not (e.g., whiteboards, notebooks, sticky notes, refrigerator magnets), in order to understand and effect desired changes in their world. In computer-supported cooperative work (CSCW), emphasis is placed on the use of computing systems in support of the collaborative work of a group of people. The principles of human interaction management (HIM) extend the scope of CSCW to an organizational level and can be implemented without the use of computer systems.

Design principles
When evaluating a current user interface, or designing a new user interface, it is important to keep in mind the following experimental design principles: Early focus on user(s) and task(s): Establish how many users are needed to perform the task(s) and determine who the appropriate users should be; someone who has never used the interface, and will not use the interface in the future, is most likely not a valid user. In addition, define the task(s) the users will be performing and how often the task(s) need to be performed.

Empirical measurement: Test the interface early on with real users who come in contact with the interface on an everyday basis. Keep in mind that results may vary with the performance level of the user and may not be an accurate depiction of typical human–computer interaction. Establish quantitative usability specifics, such as the number of users performing the task(s), the time to complete the task(s), and the number of errors made during the task(s).

Iterative design: After determining the users, tasks, and empirical measurements to include, perform the following iterative design steps:
1. Design the user interface
2. Test
3. Analyze results
4. Repeat

Repeat the iterative design process until a sensible, user-friendly interface is created.[6]
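The quantitative specifics named under empirical measurement can be computed mechanically from recorded test sessions, and the resulting figures are what the analyze step of each iteration examines. The following Python sketch is illustrative only; the session tuple format and all figures in it are fabricated:

```python
# Hypothetical sketch: summarizing usability-test sessions into quantitative
# usability metrics (success rate, completion time, error count). The tuple
# format (user, seconds_to_complete, errors, completed) is an assumption.
from statistics import mean

sessions = [
    ("u1",  94.0, 2, True),
    ("u2", 120.5, 5, True),
    ("u3", 210.0, 9, False),   # gave up before completing the task
]

def summarize(sessions):
    times = [t for _, t, _, ok in sessions if ok]
    return {
        "success_rate": sum(1 for *_, ok in sessions if ok) / len(sessions),
        "mean_time_on_success": mean(times) if times else None,
        "total_errors": sum(e for _, _, e, _ in sessions),
    }

print(summarize(sessions))
# {'success_rate': 0.6666666666666666, 'mean_time_on_success': 107.25, 'total_errors': 16}
```

Comparing such figures against the targets set during the empirical-measurement step tells you whether another design-test-analyze pass is needed.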

Design methodologies
A number of diverse methodologies outlining techniques for human–computer interaction design have emerged since the rise of the field in the 1980s. Most design methodologies stem from a model of how users, designers, and technical systems interact. Early methodologies treated users' cognitive processes as predictable and quantifiable, and encouraged design practitioners to look to cognitive science results in areas such as memory and attention when designing user interfaces. Modern models tend to focus on constant feedback and conversation between users, designers, and engineers, and push for technical systems to be wrapped around the types of experiences users want to have, rather than wrapping user experience around a completed system.

Activity theory: used in HCI to define and study the context in which human interactions with computers take place. Activity theory provides a framework to reason about actions in these contexts, analytical tools in the format of checklists of items that researchers should consider, and informs design of interactions from an activity-centric perspective.[7]

User-centered design: user-centered design (UCD) is a modern, widely practiced design philosophy rooted in the idea that users must take center stage in the design of any computer system. Users, designers and technical practitioners work together to articulate the wants, needs and limitations of the user and create a system that addresses these elements. Often, user-centered design projects are informed by ethnographic studies of the environments in which users will be interacting with the system. This practice is similar, but not identical, to participatory design, which emphasizes the possibility for end users to contribute actively through shared design sessions and workshops.
Principles of user interface design: these are seven principles that may be considered at any time during the design of a user interface, in any order: tolerance, simplicity, visibility, affordance, consistency, structure and feedback.[8] See also the list of interface design methods.

Display designs
Displays are human-made artifacts designed to support the perception of relevant system variables and to facilitate further processing of that information. Before a display is designed, the task that the display is intended to support must be defined (e.g. navigating, controlling, decision making, learning, entertaining, etc.). A user or operator must be able to process whatever information that a system generates and displays; therefore, the information must be displayed according to principles in a manner that will support perception, situation awareness, and understanding.



Thirteen principles of display design

Christopher Wickens et al. defined 13 principles of display design in their book An Introduction to Human Factors Engineering.[9] These principles of human perception and information processing can be utilized to create an effective display design. A reduction in errors, a reduction in required training time, an increase in efficiency, and an increase in user satisfaction are a few of the many potential benefits that can be achieved through utilization of these principles. Certain principles may not be applicable to every display or situation. Some principles may seem to conflict, and there is no simple rule saying that one principle is more important than another. The principles may be tailored to a specific design or situation. Striking a functional balance among the principles is critical for an effective design.[10]

Perceptual principles
1. Make displays legible (or audible). A display's legibility is critical and necessary for designing a usable display. If the characters or objects being displayed cannot be discerned, the operator cannot effectively make use of them.
2. Avoid absolute judgment limits. Do not ask the user to determine the level of a variable on the basis of a single sensory variable (e.g. color, size, loudness). These sensory variables can contain many possible levels.
3. Top-down processing. Signals are likely perceived and interpreted in accordance with what is expected based on a user's past experience. If a signal is presented contrary to the user's expectation, more physical evidence of that signal may need to be presented to assure that it is understood correctly.
4. Redundancy gain. If a signal is presented more than once, it is more likely to be understood correctly. This can be done by presenting the signal in alternative physical forms (e.g. color and shape, voice and print, etc.), as redundancy does not imply repetition. A traffic light is a good example of redundancy, as color and position are redundant.
5. Similarity causes confusion: use discriminable elements. Signals that appear similar will likely be confused. The ratio of similar features to different features causes signals to be similar. For example, A423B9 is more similar to A423B8 than 92 is to 93. Unnecessarily similar features should be removed and dissimilar features should be highlighted.

Mental model principles
6. Principle of pictorial realism. A display should look like the variable that it represents (e.g. high temperature on a thermometer shown as a higher vertical level). If there are multiple elements, they can be configured in a manner that looks like they would in the represented environment.
7. Principle of the moving part. Moving elements should move in a pattern and direction compatible with the user's mental model of how the represented element actually moves in the system. For example, the moving element on an altimeter should move upward with increasing altitude.

Principles based on attention
8. Minimizing information access cost. When the user's attention is diverted from one location to another to access necessary information, there is an associated cost in time or effort. A display design should minimize this cost by placing frequently accessed sources at the nearest possible position. However, adequate legibility should not be sacrificed to reduce this cost.
9. Proximity compatibility principle. Divided attention between two information sources may be necessary for the completion of one task. These sources must be mentally integrated and are defined to have close mental proximity. Information access costs should be low, which can be achieved in many ways (e.g. proximity, linkage by common colors, patterns, shapes, etc.). However, close display proximity can be harmful by causing too much clutter.
10. Principle of multiple resources. A user can more easily process information across different resources. For example, visual and auditory information can be presented simultaneously rather than presenting all visual or all auditory information.

Memory principles
11. Replace memory with visual information: knowledge in the world. A user should not need to retain important information solely in working memory or retrieve it from long-term memory. A menu, checklist, or another display can aid the user by easing the load on memory. However, the use of memory may sometimes benefit the user by eliminating the need to reference some type of knowledge in the world (e.g. an expert computer operator would rather use direct commands from memory than refer to a manual). The use of knowledge in a user's head and knowledge in the world must be balanced for an effective design.
12. Principle of predictive aiding. Proactive actions are usually more effective than reactive actions. A display should attempt to eliminate resource-demanding cognitive tasks and replace them with simpler perceptual tasks to reduce the use of the user's mental resources. This allows the user not only to focus on current conditions, but also to think about possible future conditions. An example of a predictive aid is a road sign displaying the distance to a certain destination.
13. Principle of consistency. Old habits from other displays will easily transfer to support processing of new displays if they are designed in a consistent manner. A user's long-term memory will trigger actions that are expected to be appropriate. A design must accept this fact and utilize consistency among different displays.
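Principle 4 (redundancy gain) is easy to demonstrate in code. The sketch below is hypothetical: it encodes a status in three redundant physical forms (terminal color, symbol, and label) so the state survives the loss of any one channel, for example for color-blind users or monochrome logs. The mapping itself is invented for illustration.

```python
# Hypothetical sketch of redundancy gain: each state is signaled by color,
# symbol, and text at once, so no single channel carries the meaning alone.
STATUS = {
    "ok":      ("\033[32m", "+", "OK"),       # green
    "warning": ("\033[33m", "!", "WARNING"),  # yellow
    "error":   ("\033[31m", "x", "ERROR"),    # red
}

def render(state):
    color, symbol, label = STATUS[state]
    return f"{color}{symbol} {label}\033[0m"  # reset terminal color afterwards

print(render("warning"))  # prints "! WARNING" in yellow
```

Note that redundancy is not repetition: the three channels are different physical codings of the same state, exactly as a traffic light redundantly codes its state in both color and position.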


Human–computer interface
The human–computer interface can be described as the point of communication between the human user and the computer. The flow of information between the human and the computer is defined as the loop of interaction. The loop of interaction has several aspects to it, including:
- Task environment: The conditions and goals set upon the user.
- Machine environment: The environment that the computer is connected to, e.g. a laptop in a college student's dorm room.
- Areas of the interface: Non-overlapping areas involve processes of the human and computer not pertaining to their interaction, while the overlapping areas concern only the processes pertaining to their interaction.
- Input flow: The flow of information that begins in the task environment, when the user has some task that requires using the computer.
- Output: The flow of information that originates in the machine environment.
- Feedback: Loops through the interface that evaluate, moderate, and confirm processes as they pass from the human through the interface to the computer and back.
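The loop of interaction can be caricatured in a few lines of code. Everything below is invented for illustration: the commands stand in for the input flow, a lookup table stands in for the machine environment, and the returned strings stand in for the output flow back to the user as feedback.

```python
# Toy model of the loop of interaction (purely illustrative).
def machine(command):
    # machine environment: computes a response to each input
    responses = {"add": "item added", "undo": "last action undone"}
    return responses.get(command, "unknown command")

def interact(commands):
    feedback = []
    for cmd in commands:          # input flow: task environment -> interface
        output = machine(cmd)     # processing in the machine environment
        feedback.append(output)   # output flows back to the user, closing the loop
    return feedback

print(interact(["add", "undo", "redo"]))
# ['item added', 'last action undone', 'unknown command']
```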



Current research
Hot topics in HCI include:

User customization
End-user development studies how ordinary users could routinely tailor applications to their own needs and use this power to invent new applications based on their understanding of their own domains. Users, with their deeper knowledge of their own knowledge domains, could increasingly be important sources of new applications at the expense of generic systems programmers (with systems expertise but low domain expertise).

Embedded computation
Computation is passing beyond computers into every object for which uses can be found. Embedded systems make the environment alive with little computations and automated processes, from computerized cooking appliances to lighting and plumbing fixtures to window blinds to automobile braking systems to greeting cards. To some extent, this development is already taking place. The expected difference in the future is the addition of networked communications that will allow many of these embedded computations to coordinate with each other and with the user. Human interfaces to these embedded devices will in many cases be very different from those appropriate to workstations.

Augmented reality
A common staple of science fiction, augmented reality refers to the notion of layering relevant information into our vision of the world. Existing projects show real-time statistics to users performing difficult tasks, such as manufacturing. Future work might include augmenting our social interactions by providing additional information about those we converse with.

Factors of change
The means by which humans interact with computers continues to evolve rapidly. Human–computer interaction is affected by the forces shaping the nature of future computing. These forces include:
- Decreasing hardware costs leading to larger memory and faster systems
- Miniaturization of hardware leading to portability
- Reduction in power requirements leading to portability
- New display technologies leading to the packaging of computational devices in new forms
- Specialized hardware leading to new functions
- Increased development of network communication and distributed computing
- Increasingly widespread use of computers, especially by people outside of the computing profession
- Increasing innovation in input techniques (e.g., voice, gesture, pen), combined with lowering cost, leading to rapid computerization by people previously left out of the "computer revolution"
- Wider social concerns leading to improved access to computers by currently disadvantaged groups

The future for HCI, based on current promising research, is expected to include the following characteristics:
- Ubiquitous communication. Computers are expected to communicate through high-speed local networks, nationally over wide-area networks, and portably via infrared, ultrasonic, cellular, and other technologies. Data and computational services will be portably accessible from many if not most locations to which a user travels.
- High-functionality systems. Systems can have large numbers of functions associated with them. There are so many systems that most users, technical or non-technical, do not have time to learn them in the traditional way (e.g., through thick manuals).

- Mass availability of computer graphics. Computer graphics capabilities such as image processing, graphics transformations, rendering, and interactive animation are becoming widespread as inexpensive chips become available for inclusion in general workstations and mobile devices.
- Mixed media. Commercial systems can handle images, voice, sounds, video, text, and formatted data. These are exchangeable over communication links among users. The separate worlds of consumer electronics (e.g., stereo sets, VCRs, televisions) and computers are partially merging. Computer and print worlds are expected to cross-assimilate each other.
- High-bandwidth interaction. The rate at which humans and machines interact is expected to increase substantially due to changes in speed, computer graphics, new media, and new input/output devices. This can lead to some qualitatively different interfaces, such as virtual reality or computational video.
- Large and thin displays. New display technologies are finally maturing, enabling very large displays and displays that are thin, lightweight, and low in power consumption. This is having large effects on portability and will likely enable the development of paper-like, pen-based computer interaction systems very different in feel from present desktop workstations.
- Information utilities. Public information utilities (such as home banking and shopping) and specialized industry services (e.g., weather for pilots) are expected to proliferate. The rate of proliferation can accelerate with the introduction of high-bandwidth interaction and the improvement in quality of interfaces.


Academic conferences
One of the top academic conferences for new research in human–computer interaction, especially within computer science, is the annually held ACM Conference on Human Factors in Computing Systems, usually referred to by its short name, CHI (pronounced kai or khai). CHI is organized by ACM SIGCHI, the Special Interest Group on Computer–Human Interaction. CHI is a large, highly competitive conference with thousands of attendees, and is quite broad in scope. There are also dozens of other smaller, regional or specialized HCI-related conferences held around the world each year, the most important of which include:[11]

Special purpose
- ASSETS: ACM International Conference on Computers and Accessibility
- CSCW: ACM Conference on Computer Supported Cooperative Work
- DIS: ACM Conference on Designing Interactive Systems
- ECSCW: European Conference on Computer-Supported Cooperative Work (every second year)
- GROUP: ACM Conference on Supporting Group Work
- HRI: ACM/IEEE International Conference on Human–Robot Interaction
- ICMI: International Conference on Multimodal Interfaces
- ITS: ACM Conference on Interactive Tabletops and Surfaces
- IUI: International Conference on Intelligent User Interfaces
- MobileHCI: International Conference on Human–Computer Interaction with Mobile Devices and Services
- NIME: International Conference on New Interfaces for Musical Expression
- Ubicomp: International Conference on Ubiquitous Computing
- UIST: ACM Symposium on User Interface Software and Technology
- i-USEr: International Conference on User Science and Engineering

References


[1] Card, Stuart K.; Thomas P. Moran; Allen Newell (1980). "The keystroke-level model for user performance time with interactive systems". Communications of the ACM 23 (7): 396–410. doi:10.1145/358886.358895.
[2] Carlisle, James H. (1976). "Evaluating the impact of office automation on top management communication". Proceedings of the June 7–10, 1976, National Computer Conference and Exposition. pp. 611–616. doi:10.1145/1499799.1499885. "Use of 'human-computer interaction' appears in references"
[3] Ergoweb. "What is Cognitive Ergonomics?" (http://www.ergoweb.com/news/detail.cfm?id=352). Retrieved August 29, 2011.
[4] "NRC: Backgrounder on the Three Mile Island Accident" (http://www.nrc.gov/reading-rm/doc-collections/fact-sheets/3mile-isle.html). Retrieved August 29, 2011.
[5] http://www.threemileisland.org/downloads/188.pdf
[6] Green, Paul (2008). Iterative Design. Lecture presented in Industrial and Operations Engineering 436 (Human Factors in Computer Systems), University of Michigan, Ann Arbor, MI, February 4, 2008.
[7] Kaptelinin, Victor (2012): Activity Theory. In: Soegaard, Mads and Dam, Rikke Friis (eds.). "Encyclopedia of Human-Computer Interaction". The Foundation. Available online at http://www.interaction-design.org/encyclopedia/activity_theory.html
[8] Pattern Language (http://www.mit.edu/~jtidwell/common_ground_onefile.html)
[9] Wickens, Christopher D., John D. Lee, Yili Liu, and Sallie E. Gordon Becker. An Introduction to Human Factors Engineering. Second ed. Upper Saddle River, NJ: Pearson Prentice Hall, 2004. 185–193.
[10] Brown, C. Marlin. Human-Computer Interface Design Guidelines. Intellect Books, 1998. 23.
[11] http://www.confsearch.org/confsearch/faces/pages/topic.jsp?topic=hci&sortMode=1&graphicView=true

Further reading
Academic overview of the field by many authors:
- Julie A. Jacko (Ed.). (2012). Human-Computer Interaction Handbook (3rd Edition). CRC Press. ISBN 1-4398-2943-8
- Andrew Sears and Julie A. Jacko (Eds.). (2007). Human-Computer Interaction Handbook (2nd Edition). CRC Press. ISBN 0-8058-5870-9
- Julie A. Jacko and Andrew Sears (Eds.). (2003). Human-Computer Interaction Handbook. Mahwah: Lawrence Erlbaum & Associates. ISBN 0-8058-4468-6

Historically important classic:
- Stuart K. Card, Thomas P. Moran, Allen Newell (1983): The Psychology of Human–Computer Interaction. Erlbaum, Hillsdale 1983. ISBN 0-89859-243-7

Overview of history of the field:
- Jonathan Grudin: A moving target: The evolution of human–computer interaction. In Andrew Sears and Julie A. Jacko (Eds.). (2007). Human-Computer Interaction Handbook (2nd Edition). CRC Press. ISBN 0-8058-5870-9
- Brad Myers: A brief history of human–computer interaction technology. Interactions 5(2):44–54, 1998. ISSN 1072-5520. ACM Press.
- John M. Carroll: Human Computer Interaction: History and Status. Encyclopedia entry ( encyclopedia/human_computer_interaction_hci.html)
- John M. Carroll: Conceptualizing a possible discipline of human–computer interaction. Interacting with Computers, Volume 22, Issue 1, January 2010, Pages 3–12. ISSN 0953-5438. 10.1016/j.intcom.2009.11.008.

Academic journals:
- ACM Transactions on Computer-Human Interaction
- Behaviour & Information Technology
- EMinds International Journal on Human-Computer Interaction
- Interacting with Computers
- International Journal of Human-Computer Interaction
- International Journal of Human-Computer Studies

- Human–Computer Interaction

Collection of key papers:
- Ronald M. Baecker, Jonathan Grudin, William A. S. Buxton, Saul Greenberg (Eds.) (1995): Readings in Human–Computer Interaction. Toward the Year 2000. 2nd ed. Morgan Kaufmann, San Francisco 1995. ISBN 1-55860-246-1

Treatments by one or few authors, often aimed at a more general audience:
- Jakob Nielsen: Usability Engineering. Academic Press, Boston 1993. ISBN 0-12-518405-0
- Donald A. Norman: The Psychology of Everyday Things. Basic Books, New York 1988. ISBN 0-465-06709-3
- Jef Raskin: The Humane Interface. New directions for designing interactive systems. Addison-Wesley, Boston 2000. ISBN 0-201-37937-6
- Ben Shneiderman and Catherine Plaisant: Designing the User Interface: Strategies for Effective Human–Computer Interaction. 5th ed. Addison Wesley, 2009. ISBN 0-321-53735-1
- Ben Shneiderman and Catherine Plaisant: Designing the User Interface: Strategies for Effective Human–Computer Interaction. 4th ed. Addison Wesley, 2004. ISBN 0-321-19786-0
- Bruce Tognazzini: Tog on Interface. Addison-Wesley, Reading 1991. ISBN 0-201-60842-1

Textbooks that could be used in a classroom:
- Alan Dix, Janet Finlay, Gregory Abowd, and Russell Beale (2003): Human–Computer Interaction. 3rd Edition. Prentice Hall, 2003. ISBN 0-13-046109-1
- Yvonne Rogers, Helen Sharp & Jenny Preece: Interaction Design: Beyond Human–Computer Interaction, 3rd ed. John Wiley & Sons Ltd., 2011. ISBN 0-470-66576-9
- Helen Sharp, Yvonne Rogers & Jenny Preece: Interaction Design: Beyond Human–Computer Interaction, 2nd ed. John Wiley & Sons Ltd., 2007. ISBN 0-470-01866-6
- Matt Jones (interaction designer) and Gary Marsden (2006). Mobile Interaction Design. John Wiley and Sons Ltd.

See also List of user interface literature.


External links
- Human And Computer Interaction Review (HCI) ( human-and-computer-interaction-review.html)
- Bad Human Factors Designs
- The HCI Wiki Bibliography, with over 100,000 publications
- The HCI Bibliography, with over 71,000 publications about HCI
- Human-Centered Computing Education Digital Library
- Usability Views
- HCI Webliography, with a list of about 100 HCI organizations worldwide
- Interactive computer use impacts cognition

Outline of human–computer interaction



The following outline is provided as an overview of and topical guide to human–computer interaction:

What is human–computer interaction?

Human–computer interaction is the intersection of computer science and behavioral sciences; this field involves the study, planning, and design of the interaction between people (users) and computers. Attention to human–machine interaction is important, because poorly designed human–machine interfaces can lead to many unexpected problems. A classic example of this is the Three Mile Island accident, where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster.

What type of thing is human–computer interaction?

Human–computer interaction can be described as all of the following:
- A field of science: a systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe.[1]
- An applied science: a field that applies human knowledge to build or design useful things.
- A field of computer science: the scientific and practical approach to computation and its applications.
- An application of engineering: the science, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge in order to design and also build structures, machines, devices, systems, materials and processes.
- An application of software engineering: the application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software.[2] [3] [4]
- A subfield of computer programming: the process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in one or more programming languages (such as Java, C++, C#, Python, etc.). The purpose of programming is to create a set of instructions that computers use to perform specific operations or to exhibit desired behaviors.
- A social science: an academic discipline concerned with society and human behavior.
- A behavioural science: a discipline that explores the activities of and interactions among organisms. It involves the systematic analysis and investigation of human and animal behaviour through controlled and naturalistic observation, and disciplined scientific experimentation. Examples of behavioural sciences include psychology, psychobiology, and cognitive science.
- A type of system: a set of interacting or interdependent components forming an integrated whole, or a set of elements (often called 'components') and relationships which are different from relationships of the set or its elements to other elements or sets.
- A system that includes software: software is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system.
- A type of technology: the making, modification, usage, and knowledge of tools, machines, techniques, crafts, systems, and methods of organization, in order to solve a problem, improve a preexisting solution to a problem, achieve a goal, handle an applied input/output relation or perform a specific function. It can also refer to the collection of such tools, machinery, modifications, arrangements and procedures. Technologies significantly affect the ability of humans, as well as of other animal species, to control and adapt to their natural environments.
- A form of computer technology: computers and their application.


Styles of human–computer interaction

- Command line interface
- Graphical user interface (GUI)
  - Copy and paste, cut and paste
  - Single Document Interface, Multiple Document Interface, Tabbed Document Interface
  - Elements of graphical user interfaces: pointer, widget (computing), icons
  - WIMP (computing)
  - Point-and-click
  - Drag-and-drop
  - Window managers
  - WYSIWYG (what you see is what you get)
- Zooming user interface (ZUI)
- Brushing and linking
- Crossing-based interfaces

Related fields
Human–computer interaction draws from the following fields:
- psychology: human memory, human perception, sensory system
- sociology and social psychology
- cognitive science
- human factors / ergonomics: repetitive strain injury
- computer science: computer graphics, artificial intelligence, computer vision
- visualization: information visualization, scientific visualization, knowledge visualization
- design: industrial design, graphic design and aesthetics, information design, interaction design, sonic interaction design
- Interactive Art and HCI
- library and information science, information science
- information security: HCISec
- speech-language pathology
- personal information management
- phenomenology


History of human–computer interaction

- History of human–computer interaction
- Ivan Sutherland's Sketchpad
- History of automated adaptive instruction in computer applications
- History of the GUI

Interaction paradigms
- Time-sharing (1957)
- hypertext (Ted Nelson, 1963), hypermedia and hyperlinks
- Direct manipulation (e.g. light pen 1963, mouse 1968)
- Desktop metaphor (197x, Xerox PARC)
- Windows paradigm
- Personal computer (1981)
- CSCW: Computer Supported Collaborative (or Cooperative) Work, collaborative software
- WWW (Tim Berners-Lee, 1989)
- Ubiquitous computing ("ubicomp"), coined 1988
- "sensor-based / context-aware interaction" paradigm

Notable systems and prototypes

- Office of the future (1940s)
- Sketchpad (1963)
- The Mother of All Demos (1968)
- Dynabook (circa 1970)
- Xerox Alto (1973)
- Xerox Star (1981)
- Apple Macintosh (1984)
- Knowledge Navigator (1987)
- Project Looking Glass (circa 2003 or 2004)
- The Humane Environment (alpha release, 2004)



General human–computer interaction concepts

- accessibility and computer accessibility
- adaptive autonomy
- affordance
- banner blindness
- computer user satisfaction
- contextual design and contextual inquiry
- gender HCI
- gulf of evaluation
- gulf of execution
- habituation
- human action cycle
- human interface device
- human–machine interface
- interaction
- interaction technique
- look and feel
- mode
- physiological interaction
- principle of least astonishment
- progressive disclosure
- sonic interaction design
- thanatosensitivity
- transparency
- usability and usability testing
- user, luser
- user experience and user experience design
- user-friendliness
- user interface and user interface design
- user interface engineering and usability engineering
- handheld devices
- human–computer information retrieval
- information retrieval
- Internet and the World Wide Web
- multimedia
- software agents
- universal usability
- user experience design
- visual programming languages
- Knowbility



Hardware input/output devices and peripherals:
- List of input devices
  - unit record equipment
  - barcode scanner
  - keyboard
    - computer keyboard
    - keyboard shortcut
    - ways to make typing more efficient: command history, autocomplete, autoreplace and IntelliSense
  - microphone
  - pointing device
    - computer mouse
    - mouse chording
- List of output devices
  - visual devices
    - graphical output device
    - display device: computer display, video projector
    - computer printer, plotter
  - auditory devices: speakers, earphones
  - tactile devices: refreshable Braille display, braille embosser, haptic devices

Interface design methods

- activity-centered design
- affordance analysis
- bodystorming
- contextual design
- focus group
- iterative design
- participatory design
- PICTIVE (user interface workshop method)
- rapid prototyping
- scenario-based design (SBD)
- task analysis / task modeling
- user-centered design
- usage-centered design
- user scenario
- value-sensitive design
- Wizard of Oz experiment


Usability testing:
- heuristic evaluation
- cognitive walkthrough
- usability lab

Models and laws

- Hick's law
- Fitts' law
- Steering law
- GOMS (goals, operators, methods, and selection rules)
- Keystroke-level model (KLM)
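The first two of these models are simple predictive equations and can be demonstrated in a few lines. The sketch below uses illustrative regression constants; in practice a and b are fitted per device, task, and user population, so the default values here are placeholders, not empirical results:

```python
import math

def fitts_mt(distance, width, a=0.1, b=0.15):
    """Fitts' law (Shannon formulation): predicted time in seconds to hit a
    target of the given width at the given distance. a and b are device- and
    user-dependent regression constants; the defaults are placeholders."""
    index_of_difficulty = math.log2(distance / width + 1)  # in bits
    return a + b * index_of_difficulty

def hick_rt(n_choices, a=0.2, b=0.15):
    """Hick's law: predicted time to choose among n equally likely
    alternatives. Again, a and b are illustrative constants."""
    return a + b * math.log2(n_choices + 1)

# A large, nearby target is predicted to be faster to acquire than a small,
# distant one, and a 2-item menu faster to choose from than an 8-item menu:
assert fitts_mt(distance=100, width=50) < fitts_mt(distance=800, width=10)
assert hick_rt(2) < hick_rt(8)
```

Both laws are logarithmic, which is why doubling a menu's length or halving a button's size costs far less than double the predicted time.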

Cultural influences
Motion pictures featuring interesting user interfaces:
- 2001: A Space Odyssey (1968)
- Star Wars Episode IV: A New Hope (1977)
- Alien (1979)
- Blade Runner (1982)
- Tron (1982)
- The Last Starfighter (1984)
- Ghost in the Shell (1991/1995)
- The Lawnmower Man (1992)
- Johnny Mnemonic (1995)
- The Matrix (1999)
- Serial Experiments Lain
- Final Fantasy: The Spirits Within (2001)
- Minority Report (2002)
- I, Robot (2004)
- Iron Man (2008)



Human–computer interaction organizations

Industrial labs and companies
Industrial labs and companies known for innovation and research in HCI:
- Alias Wavefront
- Apple Computer
- AT&T Labs
- Bell Labs
- HP Labs
- Microsoft Research
- SRI International (formerly Stanford Research Institute)
- Xerox PARC

Persons influential in human–computer interaction

- Tim Berners-Lee
- Bill Buxton
- John M. Carroll (information scientist)
- Douglas Engelbart
- Paul Fitts
- Alan Kay
- Steve Mann
- Ted Nelson
- Jakob Nielsen (usability consultant)
- Donald Norman
- Jef Raskin
- George G. Robertson
- Ben Shneiderman
- Herbert A. Simon
- Ivan Sutherland
- Terry Winograd

[1] "... modern science is a discovery as well as an invention. It was a discovery that nature generally acts regularly enough to be described by laws and even by mathematics; and required invention to devise the techniques, abstractions, apparatus, and organization for exhibiting the regularities and securing their law-like descriptions." p. vii, J. L. Heilbron (2003, editor-in-chief), The Oxford Companion to the History of Modern Science. New York: Oxford University Press. ISBN 0-19-511229-6. "science" (http://www.merriam-webster.com/dictionary/science). Merriam-Webster Online Dictionary. Merriam-Webster, Inc. Retrieved 2011-10-16. "3 a: knowledge or a system of knowledge covering general truths or the operation of general laws especially as obtained and tested through scientific method b: such knowledge or such a system of knowledge concerned with the physical world and its phenomena"
[2] SWEBOK executive editors, Alain Abran, James W. Moore; editors, Pierre Bourque, Robert Dupuis (2004). Pierre Bourque and Robert Dupuis, ed. Guide to the Software Engineering Body of Knowledge - 2004 Version (http://www.swebok.org). IEEE Computer Society. pp. 11. ISBN 0-7695-2330-7.
[3] ACM (2006). "Computing Degrees & Careers" (http://computingcareers.acm.org/?page_id=12). ACM. Retrieved 2010-11-23.
[4] Laplante, Phillip (2007). What Every Engineer Should Know about Software Engineering (http://books.google.com/?id=pFHYk0KWAEgC). Boca Raton: CRC. ISBN 978-0-8493-7228-5. Retrieved 2011-01-21.



External links
This outline displayed as a mind map ( topic=Outline+of+humancomputer+interaction&Submit=Search)

Human-machine interface
The human-machine interface is the part of a machine that handles human-machine interaction.

In complex systems, the human-machine interface is typically computerized; the term human-computer interface refers to this kind of system. Human-machine interfaces are engineered with ergonomics (human factors) in mind. The corresponding disciplines are human factors engineering (HFE) and usability engineering (UE), which is part of systems engineering. Tools for incorporating human factors into interface design are developed based on knowledge of computer science, such as computer graphics, operating systems, and programming languages. Primary methods used in interface design include prototyping and simulation.

Interface design
Typical human-machine interface design consists of the following stages: interaction specification, interface software specification, and prototyping.
- Common practices for interaction specification include user-centered design, personas, activity-oriented design, scenario-based design, and resiliency design.
- Common practices for interface software specification include use cases and constraint enforcement by interaction protocols (intended to avoid use errors).
- Common practices for prototyping are based on interactive design using libraries of interface elements (controls, decoration, etc.).

Principles of user interface design



The principles of user interface design are intended to improve the quality of user interface design. According to Larry Constantine and Lucy Lockwood in their usage-centered design, these principles are:[1]
- The structure principle: Design should organize the user interface purposefully, in meaningful and useful ways based on clear, consistent models that are apparent and recognizable to users, putting related things together and separating unrelated things, differentiating dissimilar things and making similar things resemble one another. The structure principle is concerned with overall user interface architecture.
- The simplicity principle: The design should make simple, common tasks easy, communicating clearly and simply in the user's own language, and providing good shortcuts that are meaningfully related to longer procedures.
- The visibility principle: The design should make all needed options and materials for a given task visible without distracting the user with extraneous or redundant information. Good designs don't overwhelm users with alternatives or confuse them with unneeded information.
- The feedback principle: The design should keep users informed of actions or interpretations, changes of state or condition, and errors or exceptions that are relevant and of interest to the user, through clear, concise, and unambiguous language familiar to users.
- The tolerance principle: The design should be flexible and tolerant, reducing the cost of mistakes and misuse by allowing undoing and redoing, while also preventing errors wherever possible by tolerating varied inputs and sequences and by interpreting all reasonable actions.
- The reuse principle: The design should reuse internal and external components and behaviors, maintaining consistency with purpose rather than merely arbitrary consistency, thus reducing the need for users to rethink and remember.
According to Jef Raskin in his book The Humane Interface, there are two laws of user interface design, based on the fictional laws of robotics created by Isaac Asimov:[2]
- First Law: A computer shall not harm your work or, through inactivity, allow your work to come to harm.
- Second Law: A computer shall not waste your time or require you to do more work than is strictly necessary.

[1] http://www.foruse.com/
[2] Laws of Interface Design (http://wiki.osafoundation.org/Journal/HumaneUserInterface20041102)

User-centered design


In broad terms, user-centered design (UCD) is a type of user interface design and a process in which the needs, wants, and limitations of end users of a product are given extensive attention at each stage of the design process. User-centered design can be characterized as a multi-stage problem-solving process that requires designers not only to analyse and foresee how users are likely to use a product, but also to test the validity of their assumptions with regard to user behaviour in real-world tests with actual users. Such testing is necessary because it is often very difficult for the designers of a product to understand intuitively what a first-time user of their design experiences, and what each user's learning curve may look like. The chief difference from other product design philosophies is that user-centered design tries to optimize the product around how users can, want, or need to use it, rather than forcing the users to change their behavior to accommodate the product.

UCD models and approaches

For example, the user-centered design process can help software designers fulfill the goal of a product engineered for their users. User requirements are considered from the beginning and included in the whole product cycle. These requirements are noted and refined through investigative methods including ethnographic study, contextual inquiry, prototype testing, usability testing and other methods. Generative methods may also be used, including card sorting, affinity diagramming and participatory design sessions. In addition, user requirements can be inferred by careful analysis of usable products similar to the product being designed.
- Cooperative design: involving designers and users on an equal footing. This is the Scandinavian tradition of design of IT artifacts, and it has been evolving since 1970.[1]
- Participatory design (PD): a North American term for the same concept, inspired by Cooperative Design, focusing on the participation of users. Since 1990, there has been a biennial Participatory Design Conference.[2]
- Contextual design: customer-centered design in the actual context, including some ideas from Participatory design.[3]
All these approaches follow the ISO standard Human-centred design for interactive systems (ISO 9241-210, 2010).[4] The ISO standard describes six key principles that will ensure a design is user-centred:
1. The design is based upon an explicit understanding of users, tasks and environments.
2. Users are involved throughout design and development.
3. The design is driven and refined by user-centred evaluation.
4. The process is iterative.
5. The design addresses the whole user experience.
6. The design team includes multidisciplinary skills and perspectives.



UCD answers questions about users and their tasks and goals, then uses the findings to make decisions about development and design. UCD of a web site, for instance, seeks to answer the following questions:
- Who are the users of the document?
- What are the users' tasks and goals?
- What are the users' experience levels with the document, and documents like it?
- What functions do the users need from the document?
- What information might the users need, and in what form do they need it?
- How do users think the document should work?
- What are the extreme environments?
- Is the user multitasking?
- Does the interface utilize different input modes, such as touch, speech, gestures, or orientation?

As examples of UCD viewpoints, the essential elements of UCD of a web site are considerations of visibility, accessibility, legibility and language.

Visibility

Visibility helps the user construct a mental model of the document. Models help the user predict the effect(s) of their actions while using the document. Important elements (such as those that aid navigation) should be emphatic. Users should be able to tell from a glance what they can and cannot do with the document.

Accessibility

Users should be able to find information quickly and easily throughout the document, regardless of its length. Users should be offered various ways to find information (such as navigational elements, search functions, a table of contents, clearly labeled sections, page numbers, color coding, etc.). Navigational elements should be consistent with the genre of the document. Chunking is a useful strategy that involves breaking information into small pieces that can be organized into some type of meaningful order or hierarchy. The ability to skim the document allows users to find their piece of information by scanning rather than reading. Bold and italic words are often used.

Legibility

Text should be easy to read. Through analysis of the rhetorical situation, the designer should be able to determine a useful font style. Ornamental fonts and text in all capital letters are hard to read, but italics and bolding can be helpful when used correctly. Very large or very small body text is also hard to read. (On screen, 10–12 pixel sans serif and 12–16 pixel serif text is recommended.) High figure-ground contrast between text and background increases legibility. Dark text against a light background is most legible.
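Figure-ground contrast can also be checked quantitatively. The sketch below applies the WCAG 2.x relative-luminance and contrast-ratio formulas, which are not part of this article's guidance but are a widely used standard way to compare a text/background pair (black on white scores the maximum, 21:1):

```python
def _linearize(channel):
    """Convert an 8-bit sRGB channel to linear light (WCAG 2.x formula)."""
    c = channel / 255.0
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def relative_luminance(rgb):
    """Relative luminance of an (r, g, b) colour, each channel 0-255."""
    r, g, b = (_linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio, from 1:1 (identical colours) to 21:1."""
    lighter = max(relative_luminance(fg), relative_luminance(bg))
    darker = min(relative_luminance(fg), relative_luminance(bg))
    return (lighter + 0.05) / (darker + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))    # 21.0
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 1))  # mid-grey on white scores far lower
```

WCAG recommends a ratio of at least 4.5:1 for normal body text, which is one concrete reading of "high figure-ground contrast".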



Language

Depending on the rhetorical situation, certain types of language are needed. Short sentences are helpful, as are well-written texts used in explanations and similar bulk-text situations. Unless the situation calls for it, jargon and technical terms should not be used. Many writers will choose to use active voice, verbs (instead of noun strings or nominals), and simple sentence structure.

Rhetorical situation
A user-centered design is focused on the rhetorical situation. The rhetorical situation shapes the design of an information medium. There are three elements to consider in a rhetorical situation: audience, purpose, and context.

The audience is the people who will be using the document. The designer must consider their age, geographical location, ethnicity, gender, education, etc.

The purpose is what the document targets or what problem the document is trying to address.

The context is the circumstances surrounding the situation. The context often answers the question: What situation has prompted the need for this document? Context also includes any social or cultural issues that may surround the situation.

Analysis tools used in user-centered design

There are a number of tools used in the analysis of user-centered design, mainly personas, scenarios, and essential use cases.[5]

Persona

During the UCD process, a persona representing the user may be created: a fictional character with all the characteristics of the user. Personas are created after the field research process, which typically consists of members of the primary stakeholder (user) group being observed in their behaviour, and additionally answering questionnaires or participating in interviews, or a mixture of both. After results are gathered from the field research, they are used to create personas of the primary stakeholder group. Often there may be several personas concerning the same group of individuals, since it is almost impossible to apply all the characteristics of the stakeholder group to one character. The character depicts a "typical" stakeholder, not an "average" individual in the primary stakeholder group, and is referred to throughout the entire design process.[6] There is also what is called a secondary persona: a character who is not a member of the primary stakeholder group and is not the main target of the design, but whose needs should be met and problems solved if possible. Secondary personas exist to help account for further possible problems and difficulties that may occur even though the primary stakeholder group is satisfied with the solution. There is also an anti-persona: the character for whom the design process is not made. Personas usually include a name and picture; demographics; roles and responsibilities; goals and tasks; motivations and needs; environment and context; and a quote that can represent the character's personality. Personas are useful in the sense that they create a common shared understanding of the user group around which the design process is built. They also help to prioritize design considerations by providing a context of what the user needs and what functions are simply nice to add and have.
They can also provide a human face and existence to a diversified and scattered user group, and can create some empathy and add emotion when referring to the users. However, since personas are a generalized perception of the primary stakeholder group from collected data, the characteristics may be too broad and typical, or too much of an "average Joe". Sometimes personas can also have stereotypical properties, which may hurt the entire design process. Overall, personas are a useful tool because designers in the design process can have an actual person to make design decisions around, rather than referring to a set of data or a wide range of individuals.


Scenario

A scenario created in the UCD process is a fictional story about the "daily life of" or a sequence of events with the primary stakeholder group as the main character. Typically, a persona created earlier is used as the main character of this story. The story should be specific about the events that relate to the problems of the primary stakeholder group, and normally to the main research questions the design process is built upon. These may turn out to be a simple story about the daily life of an individual, but small details from the events should imply details about the users, and may include emotional or physical characteristics. There can be a "best case scenario", where everything works out best for the main character; a "worst case scenario", where the main character experiences everything going wrong around him or her; and an "average case scenario", the typical life of the individual, where nothing really special or really depressing occurs and the day just moves on. Scenarios create a social context in which the personas exist, and also create an actual physical world, instead of imagining a character with internal characteristics from gathered data and nothing else; there is more action involved in the persona's existence. A scenario is also more easily understood by people, since it is in the form of a story and is easier to follow.[7] Yet, like personas, these scenarios are assumptions made by the researcher and designer, and are also created from a set of organized data. Some even say such scenarios are unrealistic compared to real-life occurrences. Also, it is difficult to explain and inform low-level tasks that occur, like the thought process of the persona before acting.

Use case
In short, a use case describes the interaction between an individual and the rest of the world. Each use case describes an event that may occur over a short period of time in real life, but may consist of intricate details and interactions between the actor and the world.[8] It is represented as a series of simple steps for the character to achieve his or her goal, in the form of a cause-and-effect scheme. Use cases are normally written as a chart with two columns: the first column labelled actor, the second column labelled world, with the actions performed by each side written in order in the respective columns. The following is an example of a use case for performing a song on a guitar in front of an audience.
Actor: choose music to play; pick up guitar
World: display sheet music
Actor: perform each note on sheet music using guitar
World: convey note to audience using sound; audience provides feedback to performer
Actor: assess performance and adjust as needed based on audience feedback; complete song with required adjustments
World: audience applause
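The two-column actor/world chart can also be captured in a small data structure. The following Python sketch is illustrative only; the `Step` type and `render` helper are invented names for this example, not part of any UCD standard:

```python
from dataclasses import dataclass

# A minimal sketch of a two-column use case: each step is performed
# either by the actor or by the world. All names are illustrative.

@dataclass
class Step:
    performer: str  # "actor" or "world"
    action: str

guitar_use_case = [
    Step("actor", "choose music to play"),
    Step("actor", "pick up guitar"),
    Step("world", "display sheet music"),
    Step("actor", "perform each note on sheet music using guitar"),
    Step("world", "convey note to audience using sound"),
    Step("world", "audience provides feedback to performer"),
    Step("actor", "assess performance and adjust based on feedback"),
    Step("actor", "complete song with required adjustments"),
    Step("world", "audience applause"),
]

def render(steps):
    """Render the use case as the two-column actor/world chart."""
    rows = []
    for s in steps:
        left = s.action if s.performer == "actor" else ""
        right = s.action if s.performer == "world" else ""
        rows.append(f"{left:<55}| {right}")
    return "\n".join(rows)

print(render(guitar_use_case))
```

Printing the list this way reproduces the cause-and-effect ordering of the chart, with each row filled on only one side.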

The interaction between actor and the world is an act that can be seen in everyday life, and we take it for granted, rarely thinking about the small details that must happen for an act such as performing a piece of music to take place. It is similar to speaking our mother tongue: we do not think much about grammar and how to phrase words; they simply come out because we are so used to saying them. The actions between an actor and the world, notably the primary stakeholder (user) and the world in this case, should be thought about in detail, and use cases are therefore created to understand how these small interactions occur.

An essential use case, also called an "abstract use case", is a special kind of use case. Essential use cases describe the essence of the problem and deal with the nature of the problem itself. While writing use cases, no assumptions about unrelated details should be made. In addition, the goals of the subject should be separated from the process and implementation used to reach that particular goal. Below is an example of an essential use case with the same goal as the former example.
Actor: chooses sheet music to perform; gathers necessary resources
World: provides access to resources
Actor: performs piece sequentially
World: conveys and interprets performance; provides feedback
Actor: completes performance


Use cases are useful because they help identify useful levels of design work. They allow the designers to see the actual low-level processes involved in a certain problem, which makes the problem easier to handle, since the minor steps and details the user performs are exposed. The designers should take these small problems into consideration in order to arrive at a final solution that works. Put another way, use cases break a complicated task into smaller bits, where each bit is a useful unit: each bit completes a small task, and the small tasks build up to the final, bigger task.

This is like writing code on a computer: it is easier to write the basic smaller parts and make them work first, and then put them together to finish the larger, more complicated program, instead of tackling the entire program from the very beginning. The first approach is less risky because if something goes wrong, it is easier to find the problem in the smaller bits, since the segment with the problem will be the one that does not work; in the latter approach, the programmer may have to look through the entire code to search for a single error, which is time-consuming. The same reasoning applies to writing use cases in UCD. Lastly, use cases convey useful and important tasks so that the designer can see which ones are of higher importance than others.

Some drawbacks of writing use cases include the fact that each action, by the actor or the world, contains little detail and is simply a small action. This may lead different designers to imagine and interpret the actions differently. Also, during the process, it is easy to oversimplify a task, since a small task within a larger task may itself consist of even smaller tasks. Picking up a guitar may involve thinking of which guitar to pick up, which pick to use, and where the guitar is located. These tasks may then be divided into smaller tasks, such as first thinking of what colour of guitar fits the place where the piece will be performed, and other related details. Tasks may be split further into even tinier tasks, and it is up to the designer to determine a suitable place to stop splitting.[9] Tasks may not only be oversimplified; they may also be omitted entirely, so the designer should be aware of all the details and key steps involved in an event or action when writing use cases.
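The decomposition described above can be sketched as a small task tree in which the designer chooses how deep to split. The task names and the depth limit below are illustrative assumptions, not prescribed by any particular UCD method:

```python
# A minimal sketch of hierarchical task decomposition, using
# illustrative task names; the designer picks max_depth to decide
# where to stop splitting tasks into subtasks.

task_tree = {
    "perform song": {
        "pick up guitar": {
            "decide which guitar to pick up": {},
            "decide which pick to use": {},
            "locate the guitar": {},
        },
        "play each note": {},
    }
}

def flatten(tree, max_depth, depth=0):
    """List tasks down to max_depth; deeper subtasks are folded away."""
    tasks = []
    for name, subtasks in tree.items():
        tasks.append("  " * depth + name)
        if depth < max_depth:
            tasks.extend(flatten(subtasks, max_depth, depth + 1))
    return tasks

# Stopping at depth 1 hides the smallest subtasks:
for line in flatten(task_tree, max_depth=1):
    print(line)
```

Raising `max_depth` exposes ever tinier tasks; the stopping point is exactly the designer's judgment call described above.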



User-centered design, needs and emotions

The book "The Design of Everyday Things" (originally called "The Psychology of Everyday Things") was first published in 1988. In this book, Donald A. Norman describes the psychology behind what he deems 'good' and 'bad' design through examples and offers principles of 'good' design. He stresses the importance of design in our everyday lives, and the consequences of errors caused by bad designs. In the book, Norman uses the term "user-centered design" to describe design based on the needs of the user, leaving aside what he considers secondary issues such as aesthetics. User-centered design involves simplifying the structure of tasks, making things visible, getting the mapping right, exploiting the powers of constraint, and designing for error. Norman's overly reductive approach in this text was readdressed by him later in his own publication "Emotional Design". Other books in a similar vein include "Designing Pleasurable Products"[10] by Patrick W. Jordan, in which the author suggests that different forms of pleasure should be included in a user-centered approach in addition to traditional definitions of usability.

User-centered design in product lifecycle management systems

Software applications (or often suites of applications) used in product lifecycle management (typically including CAD, CAM and CAx processes) are typically characterized by the need to serve a broad range of users, each with a particular job role and skill level. For example, a CAD digital mockup might be utilized by a novice analyst, a design engineer of moderate skill, or a manufacturing planner of advanced skill.

References
[1] Greenbaum & Kyng (eds.): Design at Work: Cooperative Design of Computer Systems. Lawrence Erlbaum, 1991.
[2] Schuler & Namioka: Participatory Design. Lawrence Erlbaum, 1993; and chapter 11 in Helander's Handbook of HCI. Elsevier, 1997.
[3] Beyer & Holtzblatt: Contextual Design. Kaufmann, 1998.
[4] http://www.iso.org/iso/catalogue_detail.htm?csnumber=52075
[5] https://5011581039015022044-a-1802744773732722657-s-sites.googlegroups.com/site/csc318/spring-2011/CSC318S2011Lecture4-Fieldwork2.pdf
[6] http://people.clarkson.edu/~jsearlem/cs459/fa10/handouts/Persona-overview.pdf
[7] http://www.infodesign.com.au/usabilityresources/scenarios
[8] http://www.gatherspace.com/static/use_case_example.html
[9] http://www.markcollinscope.info/whitepaper_5b.pdf
[10] Designing Pleasurable Products at Google Books (http://books.google.com/books?id=0s3el8sDjHsC)

Further reading
What is User-Centered Design? - Usability Professionals' Association (usability_resources/about_usability/what_is_ucd.html)
The Fable of the User-Centred Designer, David Travis. An introduction to UCD principles through narrative.

Use-centered design


Use-centered design is a design philosophy in which the focus is on the goals and tasks associated with skill performance in specific work or problem domains, in contrast to the "user-centered design" approach, where the focus is on the needs, wants, and limitations of the end user of the designed artifact.

Bennett and Flach (2011) have drawn a contrast between dyadic and triadic approaches to the semiotics of display design. The classical 'user-centered' approach is based on a dyadic semiotic model in which the focus is on the human-interface dyad. This approach frames 'meaning' as a process of interpreting the symbolic representation; that is, meaning is constructed from internal information processes. From this dyadic perspective, the design goal is to build interfaces that 'match' the user's internal model (i.e., match user expectations).

In contrast, the 'use-centered' approach is based on a triadic semiotic model that includes the work domain (or ecology) as a third component of the semiotic system. In the triadic system, the work domain provides a ground for meaning outside of the human information processing system. In this triadic semiotic system, the focus is on the match between the constraints in the work domain and the mental representations. From this 'use-centered' perspective, the goal is to design displays that 'shape' the internal mental representations so that they reflect validated models of the work domain. In other words, the goal is to shape user expectations to conform with the validated 'deep structure' of the work domain. To do this, work analysis (e.g., Vicente, 1999) and multi-level means-ends representations of work domain constraints (i.e., Rasmussen's Abstraction Hierarchy) are the typical methods used to specify the 'deep structure' of a work domain. By building configural display representations that conform to this deep structure, it is possible to facilitate skilled interactions between the human and the work domain.

Thus, an emphasis on 'use' rather than 'user' suggests a more problem-centered focus for interface design. It remains important to respect the real limitations of human information processing systems through the use of graphical displays that support efficient chunking of information. However, the main point is that the organization must be consistent with the demands of the work or problem domain if the resulting interactions are expected to be skillful. In the end, the representations must be 'grounded' in the use domain.

C. S. Peirce is the inspiration for the triadic model of semiotics. Peirce was interested in the fixation of belief relative to the pragmatic demands of everyday experience. Peirce also introduced the construct of 'abduction' as an alternative to classical logic (deduction and induction). The 'use-centered' approach assumes abduction as the appropriate model for problem solving. Thus, use-centered design focuses on supporting the closed-loop dynamic of learning from experience: acting on hypotheses while simultaneously testing those hypotheses in terms of the practical consequences of the actions that they guide. The convergence, stability, and robustness of abduction processes depend critically on the information coupling between perception and action. When the coupling is rich, an abduction system will typically converge on 'beliefs' that lead to pragmatically successful (i.e., satisfying) interactions, that is, skilled interactions. This is the ultimate goal of use-centered design: to support skilled interactions between a person and a work domain.

The term "use-centered design" was first coined by John Flach and Cynthia Dominguez (Flach & Dominguez, 1995).

References
Bennett, K. B. and Flach, J. M. (2011). Display and Interface Design: Subtle Science, Exact Art.
Flach, J. M. and Dominguez, C. O. (1995). Use-centered design: Integrating the user, instrument, and goal. Ergonomics in Design, 3(3), 19-24.
Rasmussen, J. (1986). Information Processing and Human-Machine Interaction.
Vicente, K. J. (1999). Work Analysis.

Activity theory


For the psychosocial theory of aging, see Activity theory (aging).

Activity theory (AT) is an umbrella term for a line of eclectic social sciences theories and research with its roots in the Soviet psychological activity theory pioneered by Alexei Leont'ev and Sergei Rubinstein. These scholars sought to understand human activities as complex, socially situated phenomena and to go beyond the paradigms of reflexology (the teaching of Vladimir Bekhterev and his followers), the physiology of higher nervous activity (the teaching of Ivan Pavlov and his school), psychoanalysis and behaviorism. It became one of the major psychological approaches in the former USSR, being widely used in both theoretical and applied psychology, and in education, professional training, ergonomics and work psychology.[1]

Activity theory is more of a descriptive meta-theory or framework than a predictive theory. It considers an entire work/activity system (including teams, organizations, etc.) beyond just one actor or user. It accounts for the environment, the history of the person, culture, the role of the artifact, motivations, and the complexity of real-life activity. One of the strengths of AT is that it bridges the gap between the individual subject and the social reality: it studies both through the mediating activity. The unit of analysis in AT is the concept of object-oriented, collective and culturally mediated human activity, or activity system. This system includes the object (or objective), subject, mediating artifacts (signs and tools), rules, community and division of labor. The motive for the activity in AT is created through the tensions and contradictions within the elements of the system.[2]

According to ethnographer Bonnie Nardi, a leading theorist in AT, activity theory "focuses on practice, which obviates the need to distinguish 'applied' from 'pure' science: understanding everyday practice in the real world is the very objective of scientific practice. The object of activity theory is to understand the unity of consciousness and activity."[3] AT is particularly useful as a lens in qualitative research methodologies (e.g., ethnography, case study). AT provides a method of understanding and analyzing a phenomenon, finding patterns and making inferences across interactions, describing phenomena and presenting phenomena through a built-in language and rhetoric. A particular activity is a goal-directed or purposeful interaction of a subject with an object through the use of tools. These tools are exteriorized forms of mental processes manifested in constructs, whether physical or psychological. AT recognizes the internalization and externalization of cognitive processes involved in the use of tools, as well as the transformation or development that results from the interaction.[4]

The history of activity theory

The origins of activity theory can be traced to several sources, which have subsequently given rise to various complementary and intertwined strands of development. This account will focus on three of the most important of these strands.

The first is associated with the Moscow Institute of Psychology and in particular the "troika" of young Russian researchers Vygotsky, Leont'ev and Luria. Vygotsky founded cultural-historical psychology, a field that became the basis for modern AT; Leont'ev, one of the principal founders of activity theory, both developed and reacted against Vygotsky's work. Leont'ev's formulation of general activity theory is currently the most influential in post-Soviet developments in AT, which have largely been in social-scientific, organizational, and writing-studies research rather than psychological research.

The second major line of development within activity theory involves Russian scientists, such as P. K. Anokhin and N. A. Bernshtein, more directly concerned with the neurophysiological basis of activity; its foundation is associated with the Soviet philosopher of psychology S. L. Rubinshtein. This work was subsequently developed by researchers such as Pushkin, Zinchenko & Gordeeva, Ponomarenko, Zarakovsky and others, and is currently best known through the work on systemic-structural activity theory being carried out by G. Z. Bedny and his associates.

Finally, in the Western world, discussions and use of AT are primarily framed within the Scandinavian activity theory strand, developed by Yrjö Engeström.



Russian Activity Theory

After Vygotsky's early death, Leont'ev became the leader of the research group nowadays known as the Kharkov school of psychology and extended Vygotsky's research framework in significantly new ways.

Leont'ev first examined the psychology of animals, looking at the different degrees to which animals can be said to have mental processes. He concluded that Pavlov's reflexology was not a sufficient explanation of animal behaviour and that animals have an active relation to reality, which he called "activity". In particular, the behaviour of higher primates such as chimpanzees could only be explained by the ape's formation of multi-phase plans using tools.

Leont'ev then progressed to humans and pointed out that people engage in "actions" that do not in themselves satisfy a need, but contribute towards the eventual satisfaction of a need. Often, these actions only make sense in a social context of a shared work activity. This led him to a distinction between "activities", which satisfy a need, and the "actions" that constitute the activities. Leont'ev also argued that the activity in which a person is involved is reflected in their mental activity; that is (as he puts it), material reality is "presented" to consciousness, but only in its vital meaning or significance.

Scandinavian activity theory

AT remained virtually unknown outside the Soviet Union until the mid-1980s, when it was picked up by Scandinavian researchers. The first international conference on activity theory was not held until 1986. The earliest non-Soviet paper cited by Nardi is a 1987 paper by Yrjö Engeström: "Learning by expanding". This resulted in a reformulation of AT. Kuutti notes that the term "activity theory" "can be used in two senses: referring to the original Soviet tradition or referring to the international, multi-voiced community applying the original ideas and developing them further."[5]

The Scandinavian AT school of thought seeks to integrate and develop concepts from Vygotsky's cultural-historical psychology and Leont'ev's activity theory with Western intellectual developments such as cognitive science, American pragmatism, constructivism, and actor-network theory. It is known as Scandinavian activity theory. Work in the systems-structural theory of activity is also being carried on by researchers in the US and UK.

Some of the changes are a systematisation of Leont'ev's work. Although Leont'ev's exposition is clear and well structured, it is not as well structured as the formulation by Yrjö Engeström. Kaptelinin remarks that Engeström "proposed a scheme of activity different from that by Leont'ev; it contains three interacting entities (the individual, the object and the community) instead of the two components (the individual and the object) in Leont'ev's original scheme."[6] Some changes were introduced, apparently by importing notions from human-computer interaction theory. For instance, the notion of rules, which is not found in Leont'ev, was introduced. Also, the notion of a collective subject was introduced in the 1970s and 1980s (Leont'ev refers to "joint labour activity", but only has individuals, not groups, as activity subjects).



The goal of activity theory is understanding the mental capabilities of a single individual. However, it rejects the isolated individual as an insufficient unit of analysis, analyzing the cultural and technical aspects of human actions.[7]

Activity theory is most often used to describe actions in a socio-technical system through six related elements (Bryant et al.) of a conceptual system expanded by more nuanced theories:

[Figure: activity system diagram]

Object-orientedness - the objective of the activity system. Object refers to the objectiveness of reality; items are considered objective according to the natural sciences but also have social and cultural properties.
Subject or internalization - the actors engaged in the activities; the traditional notion of mental processes.
Community or externalization - the social context; all actors involved in the activity system.
Tools or tool mediation - the artifacts (or concepts) used by actors in the system. Tools influence actor-structure interactions; they change with accumulating experience. In addition to physical shape, the knowledge embodied in them also evolves. Tools are influenced by culture, and their use is a way of accumulating and transmitting social knowledge. Tools influence both the agents and the structure.
Division of labor - the social strata, the hierarchical structure of activity, and the division of activities among actors in the system.
Rules - the conventions, guidelines and rules regulating activities in the system.

Activity theory helps explain how social artifacts and social organization mediate social action (Bryant et al.).
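As a rough illustration, the six elements can be written down as a simple record. The sketch below borrows the call-centre illustration used later in this article; the type name and all field values are assumptions made for the example:

```python
from dataclasses import dataclass

# A minimal sketch of an activity system as a record whose fields
# follow the six elements listed above. Example values are illustrative.

@dataclass
class ActivitySystem:
    subject: str            # actors engaged in the activity
    object: str             # the objective the activity transforms
    tools: list             # mediating artifacts, physical or conceptual
    community: str          # the social context of the activity
    rules: list             # explicit and implicit norms
    division_of_labor: str  # how activities are divided among actors

billing = ActivitySystem(
    subject="telephone operator",
    object="customer's billing record",
    tools=["graphical front end", "database"],
    community="call-centre staff and customers",
    rules=["data-protection policy", "call-handling script"],
    division_of_labor="operators edit records; supervisors approve",
)

print(billing.subject)
```

The point of the record is only that every element must be filled in: describing an activity with one of the six slots empty is, in AT terms, an incomplete analysis.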

Activity theory and information systems

The application of activity theory to information systems derives from the work of Bonnie Nardi and Kari Kuutti. Kuutti's work is addressed below. Nardi's approach is, briefly, as follows. Nardi saw activity theory as "... a powerful and clarifying descriptive tool rather than a strongly predictive theory. The object of activity theory is to understand the unity of consciousness and activity... Activity theorists argue that consciousness is not a set of discrete disembodied cognitive acts (decision making, classification, remembering), and certainly it is not the brain; rather, consciousness is located in everyday practice: you are what you do." (Nardi, 1996) Nardi also argued that "activity theory proposes a strong notion of mediation: all human experience is shaped by the tools and sign systems we use."[8] Furthermore, she identifies "some of the main concerns of activity theory: [as] consciousness, the asymmetrical relation between people and things, and the role of artefacts in everyday life." (Nardi, 1996)

She explained that "a basic tenet of activity theory is that a notion of consciousness is central to a depiction of activity. Vygotsky described consciousness as a phenomenon that unifies attention, intention, memory, reasoning, and speech..." (Nardi, 1996) and "Activity theory, with its emphasis on the importance of motive and consciousness (which belongs only to humans), sees people and things as fundamentally different. People are not reduced to 'nodes' or 'agents' in a system; 'information processing' is not seen as something to be modelled in the same way for people and machines." (Nardi, 1996)

Nardi argued that the field of human-computer interaction has "largely ignored the study of artefacts, insisting on mental representations as the proper focus of study", and activity theory is seen as a way of addressing this deficit. In a later work, Nardi et al., comparing activity theory with cognitive science, argue that "activity theory is above all a social theory of consciousness" and therefore "... activity theory wants to define consciousness, that is, all the mental functioning including remembering, deciding, classifying, generalising, abstracting and so forth, as a product of our social interactions with other people and of our use of tools." For activity theorists, "consciousness" seems to refer to any mental functioning, whereas most other approaches to psychology distinguish conscious from unconscious functions.


Human-computer interaction
The rise of the personal computer challenged the focus in traditional systems development on mainframe systems for the automation of existing work routines. It furthermore brought forth a need to focus on how to work on materials and objects through the computer. In the search for theoretical and methodical perspectives suited to deal with issues of flexibility and more advanced mediation between the human being, materials and outcomes through the interface, it seemed promising to turn to the still rather young HCI research tradition that had emerged primarily in the US (for further discussion see Bannon & Bødker, 1991). Specifically, the cognitive science-based theories lacked means of addressing a number of issues that came out of the empirical projects (see Bannon & Bødker, 1991):
1. Many of the early advanced user interfaces assumed that the users were the designers themselves, and accordingly built on an assumption of a generic user, without concern for qualifications, work environment, division of work, etc.
2. In particular, the role of the artifact as it stands between the user and her materials, objects and outcomes was ill understood.
3. In validating findings and designs there was a heavy focus on novice users, whereas everyday use by experienced users and concerns for the development of expertise were hardly addressed.
4. Detailed task analysis and the idealized models created through task analysis failed to capture the complexity and contingency of real-life action.
5. From the point of view of complex work settings, it was striking how most HCI focused on one user and one computer, in contrast to the ever-ongoing cooperation and coordination of real work situations (this problem later led to the development of CSCW).
6. Users were mainly seen as objects of study.
Because of these shortcomings, it was necessary to move outside cognitive science-based HCI to find or develop the necessary theoretical platform.

European psychology had taken different paths than American psychology, with much inspiration from dialectical materialism (Hydén 1981, Engeström 1987). Philosophers such as Heidegger and Wittgenstein came to play an important role, primarily through discussions of the limitations of AI (Winograd & Flores 1986, Dreyfus & Dreyfus 1986). Suchman (1987), with a similar focus, introduced ethnomethodology into the discussions, and Ehn (1988) based his treatise on the design of computer artifacts on Marx, Heidegger and Wittgenstein. The development of the activity-theoretical angle was primarily carried out by Bødker (1991, 1996) and by Kuutti (Bannon & Kuutti, 1993; Kuutti, 1991, 1996), both with strong inspiration from Scandinavian activity theory groups in psychology. Bannon (1990, 1991) and Grudin (1990a and b) made significant contributions to furthering the approach by making it available to the HCI audience. The work of Kaptelinin (1996) has been important in connecting to the earlier development of activity theory in Russia. Nardi produced the hitherto most applicable collection of activity-theoretical HCI literature (Nardi, 1996).

Systemic-structural activity theory (SSAT)

At the end of the 1990s, a group of Russian and American activity theorists working in the systems-cybernetic tradition of Bernshtein and Anokhin began to publish English-language articles and books dealing with topics in human factors and ergonomics[9] and, latterly, human-computer interaction.[10] Under the rubric of systemic-structural activity theory (SSAT), this work represents a modern synthesis within activity theory which brings together the cultural-historical and systems-structural strands of the tradition (as well as other work within Soviet psychology, such as the psychology of set) with findings and methods from Western human factors/ergonomics and cognitive psychology.

The development of SSAT has been specifically oriented toward the analysis and design of the basic elements of human work activity: tasks, tools, methods, objects and results, and the skills, experience and abilities of the involved subjects. SSAT has developed techniques for both the qualitative and quantitative description of work activity.[11] Its design-oriented analyses specifically focus on the interrelationship between the structure and self-regulation of work activity and the configuration of its material components.


An explanation of activity theory

This section presents a short introduction to activity theory, and some brief comments on human creativity in activity theory and the implications of activity theory for tacit knowledge and learning.

Activity theory begins with the notion of activity. An activity is seen as a system of human "doing" whereby a subject works on an object in order to obtain a desired outcome. In order to do this, the subject employs tools, which may be external (e.g. an axe, a computer) or internal (e.g. a plan). As an illustration, an activity might be the operation of an automated call centre. As we shall see later, many subjects may be involved in the activity, and each subject may have one or more motives (e.g. improved supply management, career advancement or gaining control over a vital organisational power source). A simple example of an activity within a call centre might be a telephone operator (subject) who is modifying a customer's billing record (object) so that the billing data is correct (outcome) using a graphical front end to a database (tool).

Kuutti formulates activity theory in terms of the structure of an activity. "An activity is a form of doing directed to an object, and activities are distinguished from each other according to their objects. Transforming the object into an outcome motivates the existence of an activity. An object can be a material thing, but it can also be less tangible."[12] Kuutti then adds a third term, the tool, which mediates between the activity and the object. "The tool is at the same time both enabling and limiting: it empowers the subject in the transformation process with the historically collected experience and skill crystallised to it, but it also restricts the interaction to be from the perspective of that particular tool or instrument; other potential features of an object remain invisible to the subject...".[13] As Verenikina remarks, tools are "social objects with certain modes of operation developed socially in the course of labour and are only possible because they correspond to the objectives of a practical action."

The levels of activity theory

An activity is modelled as a three-level hierarchy: Kuutti schematises processes in activity theory as a three-level system of activities, actions and operations.

Verenikina paraphrases Leont'ev as explaining that "the non-coincidence of action and operations... appears in actions with tools, that is, material objects which are crystallised operations, not actions nor goals. If a person is confronted with a specific goal of, say, dismantling a machine, then they must make use of a variety of operations; it makes no difference how the individual operations were learned because the formulation of the operation proceeds differently to the formulation of the goal that initiated the action."

The levels of activity are also characterised by their purposes: "Activities are oriented to motives, that is, the objects that are impelling by themselves. Each motive is an object, material or ideal, that satisfies a need. Actions are the processes functionally subordinated to activities; they are directed at specific conscious goals... Actions are realised through operations that are determined by the actual conditions of activity."

Engeström developed an extended model of an activity, which adds another component, community ("those who share the same object"), and then adds rules to mediate between subject and community, and the division of labour to mediate between object and community. Kuutti asserts that "These three classes should be understood broadly. A tool can be anything used in the transformation process, including both material tools and tools for thinking. Rules cover both explicit and implicit norms, conventions, and social relations within a community. Division of labour refers to the explicit and implicit organisation of the community as related to the transformation process of the object into the outcome."
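The activity/action/operation levels can be sketched as nested records, each tied to its characteristic purpose (motive, goal, condition). The machine-dismantling content echoes Leont'ev's example above, while the type names and concrete strings are illustrative assumptions:

```python
from dataclasses import dataclass, field
from typing import List

# A minimal sketch of the activity/action/operation hierarchy.
# Activities answer to motives, actions to conscious goals, and
# operations to actual conditions; example content is illustrative.

@dataclass
class Operation:
    conditions: str          # determined by the actual conditions of activity

@dataclass
class Action:
    goal: str                # directed at a specific conscious goal
    operations: List[Operation] = field(default_factory=list)

@dataclass
class Activity:
    motive: str              # the object impelling the activity by itself
    actions: List[Action] = field(default_factory=list)

dismantle = Activity(
    motive="repair the machine",
    actions=[
        Action(
            goal="dismantle the machine",
            operations=[
                Operation(conditions="use a screwdriver on accessible bolts"),
                Operation(conditions="apply a wrench where bolts are tight"),
            ],
        )
    ],
)
```

The nesting makes the non-coincidence point concrete: the same goal (dismantling) can be realised through different operations depending on conditions, without the motive or goal changing.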

Activity theory therefore includes the notion that an activity is carried out within a social context, or specifically in a community. The way in which the activity fits into its context is established by two resulting concepts: rules, which are both explicit and implicit and define how subjects must fit into the community; and the division of labour, which describes how the object of the activity relates to the community.


The internal plane of action

Activity theory provides a number of useful concepts that can be used to address the lack of expression for soft factors which are inadequately represented by most process modelling frameworks. One such concept is the internal plane of action. Activity theory recognises that each activity takes place in two planes: the external plane and the internal plane. The external plane represents the objective components of the action while the internal plane represents the subjective components of the action. Kaptelinin defines the internal plane of actions as "[...] a concept developed in activity theory that refers to the human ability to perform manipulations with an internal representation of external objects before starting actions with these objects in reality."[14]

The concepts of motives, goals and conditions discussed above also contribute to the modelling of soft factors. One principle of activity theory is that many activities have multiple motivation (polymotivation). For instance, a programmer in writing a program may address goals aligned towards multiple motives such as increasing his or her annual bonus, obtaining relevant career experience and contributing to organisational objectives.

Activity theory further argues that subjects are grouped into communities, with rules mediating between subject and community and a division of labour mediating between object and community. A subject may be part of several communities and a community, itself, may be part of other communities.
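The polymotivation example above can be made concrete in a few lines. Again this is illustrative only: the `Action` class and the goal text are invented for this sketch, not drawn from activity theory literature.

```python
class Action:
    """A goal-directed step that may serve several motives at once."""
    def __init__(self, goal: str):
        self.goal = goal
        self.motives: list[str] = []

    def serves(self, motive: str) -> "Action":
        # Record one more motive behind the same action.
        self.motives.append(motive)
        return self

# The programmer example from the text: one action, three motives.
write_program = (
    Action("write the program")
    .serves("increase annual bonus")
    .serves("gain relevant career experience")
    .serves("contribute to organisational objectives")
)
print(len(write_program.motives))
```

The point the sketch makes is simply that motives are a list, not a single value: the same conscious goal can advance several motives simultaneously.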

Human creativity
Human creativity plays an important role in activity theory: human beings are seen as "essentially creative beings" with a "creative, non-predictable character". Tikhomirov also analyses the importance of creative activity, contrasting it with routine activity, and notes the important shift brought about by computerisation in the balance towards creative activity.

Learning and tacit knowledge

Activity theory has an interesting approach to the difficult problems of learning and, in particular, tacit knowledge. Learning has been a favourite subject of management theorists, but it has often been presented in an abstract way separated from the work processes to which the learning should apply. Activity theory provides a potential corrective to this tendency. For instance, Engeström's review of Nonaka's work on knowledge creation suggests enhancements based on activity theory, in particular suggesting that the organisational learning process includes preliminary stages of goal and problem formation not found in Nonaka. Lompscher, rather than seeing learning as transmission, sees the formation of learning goals and the student's understanding of which things they need to acquire as the key to the formation of the learning activity.

Of particular importance to the study of learning in organisations is the problem of tacit knowledge, which according to Nonaka, "is highly personal and hard to formalise, making it difficult to communicate to others or to share with others."[15] Leont'ev's concept of operation provides an important insight into this problem. In addition, the key idea of internalisation was originally introduced by Vygotsky as "the internal reconstruction of an external operation." Internalisation has subsequently become a key term of the theory of tacit knowledge and has been defined as "a process of embodying explicit knowledge into tacit knowledge." Internalisation has been described by Engeström as the "key psychological mechanism" discovered by Vygotsky and is further discussed by Verenikina.

Notes and references


[1] Bedny, Gregory; Meister, David (1997). The Russian Theory of Activity: Current Applications to Design and Learning. Series in Applied Psychology. Psychology Press. ISBN 978-0-8058-1771-3.
[2] Engeström, Yrjö; Miettinen, Reijo; Punamäki, Raija-Leena (1999). Perspectives on Activity Theory. Cambridge University Press. ISBN 0-521-43730-X.
[3] Nardi, Bonnie (1995). Context and Consciousness: Activity Theory and Human-Computer Interaction. MIT Press. ISBN 0-262-14058-6.
[4] Fjeld, M., Lauche, K., Bichsel, M., Voorhorst, F., Krueger, H., Rauterberg, M. (2002). Physical and Virtual Tools: Activity Theory Applied to the Design of Groupware. In B. A. Nardi & D. F. Redmiles (eds.), A Special Issue of Computer Supported Cooperative Work (CSCW): Activity Theory and the Practice of Design, Volume 11 (1-2), pp. 153-180.
[5] Engeström, Yrjö; Miettinen, Reijo; Punamäki, Raija-Leena (1999). Perspectives on Activity Theory. Cambridge University Press. ISBN 0-521-43730-X.
[6] Kaptelinin 1996, p. 57.
[7] Bertelsen, O. W. and S. Bødker (2003). "Activity theory."
[8] Nardi 1996, p. 10.
[9] Bedny, G. Z. & Meister, D. (1997). The Russian Theory of Activity: Current Applications to Design and Learning. Mahwah, NJ: Lawrence Erlbaum Associates.
[10] Bedny, G. Z. & Karwowski, W. (2003b). A Systemic-Structural Activity Approach to the Design of Human-Computer Interaction Tasks. International Journal of Human-Computer Interaction, 16, pp. 235-260.
[11] Bedny, G. Z. & Karwowski, W. (2006). A Systemic-Structural Theory of Activity: Applications to Human Performance and Work Design. Boca Raton: CRC Press/Taylor & Francis.
[12] Engeström, Yrjö; Miettinen, Reijo; Punamäki, Raija-Leena (1999). Perspectives on Activity Theory. Cambridge University Press. ISBN 0-521-43730-X.
[13] Engeström, Yrjö; Miettinen, Reijo; Punamäki, Raija-Leena (1999). Perspectives on Activity Theory. Cambridge University Press. ISBN 0-521-43730-X.
[14] Kaptelinin 1996, p. 51.
[15] Nonaka, Ikujiro; Takeuchi, Hirotaka (1995). The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation. Oxford University Press. ISBN 0-19-509269-4.

External links

What is Activity Theory?
The Future of Activity Theory (ISCAR keynote by Engeström)

Further reading
Bryant, Susan, Andrea Forte and Amy Bruckman. Becoming Wikipedian: Transformation of Participation in a Collaborative Online Encyclopedia. Proceedings of GROUP International Conference on Supporting Group Work, 2005, pp. 1-10.
Kaptelinin, Victor, and Bonnie A. Nardi (2006). Acting with Technology: Activity Theory and Interaction Design. MIT Press.
Mazzoni, E. (2006). "Extending Web Sites' Usability: From a Cognitive Perspective to an Activity Theory Approach". In S. Zappala and C. Gray (eds.), Impact of e-Commerce on Consumers and Small Firms. Aldershot, Hampshire: Ashgate.



Leont'ev, A. Problems of the Development of Mind. English translation, Progress Press, Moscow, 1981 (Russian original 1947).
Leont'ev, A. Activity, Consciousness, and Personality.
Engeström, Y. Learning by Expanding.
Yasnitsky, A. (2011). Vygotsky Circle as a Personal Network of Scholars: Restoring Connections Between People and Ideas. Integrative Psychological and Behavioral Science, doi:10.1007/s12124-011-9168-5.
Verenikina, I. & Gould, E. (1998). Cultural-Historical Psychology & Activity Theory. In Hasan, H., Gould, E. & Hyland, P. (eds.), Activity Theory and Information Systems (pp. 7-18), Vol. 1. Wollongong: UOW Press.

Participatory design
Participatory design (formerly known as 'Cooperative Design') is an approach to design that attempts to actively involve all stakeholders (e.g. employees, partners, customers, citizens, end users) in the design process, in order to help ensure that the product designed meets their needs and is usable. The term is used in a variety of fields, e.g. software design, urban design, architecture, landscape architecture, product design, sustainability, graphic design, planning and even medicine, as a way of creating environments that are more responsive and appropriate to their inhabitants' and users' cultural, emotional, spiritual and practical needs. It is one approach to placemaking. It has been used in many settings and at various scales.

Participatory design is an approach focused on the processes and procedures of design; it is not a design style. For some, this approach has a political dimension of user empowerment and democratization. For others, it is seen as a way of abrogating design responsibility and innovation by designers. In several Scandinavian countries during the 1960s and 1970s, it was rooted in work with trade unions; its ancestry also includes action research and sociotechnical design.[1]

In participatory design participants (putative, potential or future) are invited to cooperate with designers, researchers and developers during an innovation process. Potentially, they participate during several stages of an innovation process: they participate during the initial exploration and problem definition both to help define the problem and to focus ideas for solution, and during development, they help evaluate proposed solutions.

From the 1960s onwards there was a growing demand for greater consideration of community opinions in major decision-making. In Australia many people believed that they were not being planned for but planned at (Nichols 2009). A lack of consultation made the planning system seem paternalistic and without proper consideration of how changes to the built environment affected its primary users. In Britain the idea that the public should participate was first raised in 1965 (Taylor, 1998, p. 86). However, the level of participation is an important issue. At a minimum, public workshops and hearings have now been included in almost every planning endeavour (Wheeler, 2004, p. 46). Yet this level of consultation can simply mean information about change without detailed participation. Involvement that recognises an active part in plan making (Taylor, 1998, p. 86) has not always been straightforward to achieve. Participatory design has attempted to create a platform for active participation in the design process for end users.



History in Scandinavia
Participatory design was born in Scandinavia and named Cooperative Design. However, when the methods were presented to the US community, 'cooperation' was a word that did not resonate with the strong separation between workers and managers: they were not supposed to discuss ways of working face-to-face. Hence 'participatory' was used instead, because the initial Participatory Design sessions were not a direct cooperation between workers and managers sitting in the same room discussing how to improve their work environment and tools; there were separate sessions for workers and managers. Each group was participating in the process, not directly cooperating (as recounted in a historical review of Cooperative Design at a Scandinavian conference).

In Scandinavia, research projects on user participation in systems development date back to the 1970s (Bødker 1996). The so-called "collective resource approach" developed strategies and techniques for workers to influence the design and use of computer applications at the workplace: the Norwegian Iron and Metal Workers Union (NJMF) project took a first move from traditional research to working with people, directly changing the role of the union clubs in the project (Ehn & Kyng, 1987).

The Scandinavian projects developed an action research approach, emphasizing active co-operation between researchers and workers of the organization to help improve the latter's work situation. While researchers got their results, the people whom they worked with were equally entitled to get something out of the project. The approach built on people's own experiences, providing them with resources to be able to act in their current situation.
The view of organizations as fundamentally harmonious, according to which conflicts in an organization are regarded as pseudo-conflicts or "problems" dissolved by good analysis and increased communication, was rejected in favor of a view recognizing fundamental "un-dissolvable" conflicts in organizations (Ehn & Sandberg, 1979). In the Utopia project (Bødker et al., 1987; Ehn, 1988), the major achievements were the experience-based design methods, developed through the focus on hands-on experiences, emphasizing the need for technical and organizational alternatives (Bødker et al., 1987).

The parallel Florence project (Gro Bjerknes & Tone Bratteteig) started a long line of Scandinavian research projects in the health sector. In particular, it worked with nurses and developed approaches for nurses to get a voice in the development of work and IT in hospitals. The Florence project put gender on the agenda with its starting point in a highly gendered work environment.

The 1990s led to a number of projects including the AT project (Bødker et al., 1993) and the EureCoop/EuroCode projects (Grønbæk, Kyng & Mogensen, 1995). In recent years, it has been a major challenge to participatory design to embrace the fact that much technology development no longer happens as design of isolated systems in well-defined communities of work (Beck, 2002). At the dawn of the 21st century, we use technology at work, at home, in school, and while on the move.

Many groups and projects throughout Scandinavia apply participatory design research methods on a regular basis and, hence, are part of the development and appropriation of the methods, as well as of disseminating the methods to industrial practice. Among the more prominent has been the Center for User-oriented IT-Design [2] (CID) at the Royal Institute of Technology.
With his background in the Utopia project, Yngve Sundblad [3] and a number of collaborators have developed a platform for a number of projects where industrial partners as well as partners from the labor movement and NGOs participated.



Fields of Participatory Design

Community Planning and Placemaking
Major international organizations such as Project for Public Spaces create opportunities for rigorous participation in the design and creation of place, believing that it is the essential ingredient for successful environments. Rather than simply consulting the public, PPS creates a platform for the community to participate in and co-design new areas which reflect their intimate knowledge, providing insights which independent design professionals, such as architects or local government planners, may not have.

Using a method called Place Performance Evaluation (or 'Place Game'), groups from the community are taken to the site of a proposed development, where they use their knowledge to develop design strategies which would benefit the community. Whether the participants are schoolchildren or professionals, the exercise produces dramatic results because it relies on the expertise of people who use the place every day, or who are its potential users.[4] This successfully engages with the ultimate idea of participatory design, where the various stakeholders who will be the users of the end product are involved in the design process as a collective.

Similar projects have had success in Melbourne, Australia, particularly in relation to contested sites, where design solutions are often harder to establish. The Talbot Reserve in St Kilda faced numerous problems of use, such as becoming a regular spot for sex workers and drug users to congregate. A 'Design In', which asked a variety of key users in the community what they wanted for the future of the reserve, allowed traditionally marginalised voices to participate in the design process. Participants described it as a transforming experience as they saw the world through different eyes (Press, 2003, p. 62). This is perhaps the key attribute of participatory design: a process which allows multiple voices to be heard and involved in the design, resulting in outcomes which suit a wider range of users.
As planning affects everyone, it is believed that those whose livelihoods, environments and lives are at stake should be involved in the decisions which affect them (Sarkissian and Perglut, 1986, p. 3).

In the Built Environment

Participatory design has many applications in development and changes to the built environment. It has particular currency for planners and architects in relation to placemaking and community regeneration projects. It potentially offers a far more democratic approach to the design process because it involves more than one stakeholder, and by incorporating a variety of views there is greater opportunity for successful outcomes. Many universities and major institutions are beginning to recognise its importance. The UN Global Studio involved students from Columbia University, the University of Sydney and Sapienza University of Rome in providing design solutions for Vancouver's downtown eastside, which suffered from drug- and alcohol-related problems. The process allowed cross-discipline participation from planners, architects and industrial designers, and focused on collaboration and the sharing of ideas and stories, as opposed to rigid and singular design outcomes (Kuiper, 2007, p. 52).

From community consultation to community design

Many local governments, particularly in Melbourne, Australia, require community consultation for any major changes to the built environment. Community involvement in the planning process is almost a standard requirement in most strategic changes. The City of Melbourne's Swanston Street redevelopment project received over 5000 responses from the public, allowing people to participate in the design process by commenting on seven different design options.[5] The City of Yarra recently held a 'Stories in the Street'[6] consultation to record people's ideas about the future of Smith Street. It offered participants a variety of mediums to explore their opinions, such as mapping, photo surveys and storytelling.

Although local councils are taking positive steps towards participatory design, as opposed to traditional top-down approaches to planning, many communities are moving to take design into their own hands. Portland, Oregon's City Repair[7] project is a form of participatory design which involves the community co-designing problem areas together to make positive changes to their environment. It involves collaborative decision-making and design without traditional involvement from local government or professionals, running instead on volunteers from the community. The process has created successful projects such as intersection repair,[8] which saw a misused intersection develop into a successful community square. Peer-to-peer urbanism[9] is a form of decentralized, participatory design for urban environments and individual buildings. It borrows organizational ideas from the open-source software movement, so that knowledge about construction methods and urban design schemes is freely exchanged.


In software development
In the English-speaking world, the term has a particular currency in the world of software development, especially in circles connected to Computer Professionals for Social Responsibility (CPSR), who have put on a series of Participatory Design Conferences. It overlaps with the approach Extreme Programming takes to user involvement in design, but (possibly because of its European trade union origins) the Participatory Design tradition puts more emphasis on the involvement of a broad population of users rather than a small number of user representatives.

Participatory design can be seen as a move of end-users into the world of researchers and developers, whereas empathic design can be seen as a move of researchers and developers into the world of end-users. There is a significant differentiation between user-design and user-centered design in that user-design has an emancipatory theoretical foundation and a systems theory bedrock (Ivanov, 1972, 1995). User-centered design is a useful and important construct, but one that suggests that users are taken as centers in the design process: users are consulted heavily, but are not allowed to make the decisions, nor empowered with the tools that the experts use. For example, Wikipedia content is user-designed: users are given the necessary tools to make their own entries. Wikipedia's underlying wiki software, by contrast, is based on user-centered design: while users are allowed to propose changes or have input on the design, a smaller and more specialized group decides about features and system design.

Participatory work in software development has historically tended toward two distinct trajectories, one in Scandinavia and northern Europe, and the other in North America. The Scandinavian and northern European tradition has remained closer to its roots in the labor movement (e.g., Beck, 2002; Bjerknes, Ehn, and Kyng, 1987).
The North American and Pacific rim tradition has tended to be both broader (e.g., including managers and executives as "stakeholders" in design) and more circumscribed (e.g., design of individual features as contrasted with the Scandinavian approach to the design of entire systems and design of the work that the system is supposed to support) (e.g., Beyer and Holtzblatt, 1998; Noro and Imada, 1991). However, some more recent work has tended to combine the two approaches (Bødker et al., 2004; Muller, 2007).

Processes, Procedures and Methods of Participatory Design

Distributed participatory design
Distributed Participatory Design (DPD) is a design approach and philosophy that supports the direct participation of users and other stakeholders in system analysis and design work. Nowadays design teams are most often distributed, which stresses the need for support and for knowledge gathered from the design of distributed systems. Distributed Participatory Design aims to facilitate understanding between different stakeholders in distributed design teams by giving each the opportunity to engage in hands-on activities.



Notes and references

[1] Web Page on Participatory Design (http://cpsr.org/issues/pd/) on the site of CPSR. Retrieved 13 April 2006.
[2] http://cid.nada.kth.se/en/
[3] http://hci.csc.kth.se/personView.jsp?userName=yngve
[4] Projects for Public Spaces, "Building the Vision" (http://www.pps.org/info/services/our_approach/building_the_vision), May 15, 2009.
[5] The City of Melbourne, "Have Your Say" (http://www.melbourne.vic.gov.au/info.cfm?top=192&pa=1323&pg=4460), May 14, 2009.
[6] Andrea Cook, "Stories in the Street" (http://www.yarracity.vic.gov.au/Consultation/pdf/Stories%20in%20the%20Street%20Publicity%20Files.pdf), May 14, 2009.
[7] City Repair, "What is City Repair" (http://cityrepair.org/about/), May 13, 2009.
[8] Clarence Eckerson Jr (2007-05-31). "Intersection Repair" (http://www.streetfilms.org/archives/intersection-repair/). Streetfilms.
[9] "P2P Urbanism", collection of articles (http://zeta.math.utsa.edu/~yxk833/P2PURBANISM.pdf).

Asaro, Peter M. (2000). "Transforming society by transforming technology: the science and politics of participatory design." Accounting, Management and Information Technologies 10: 257-290.
Banathy, B. H. (1992). Comprehensive systems design in education: building a design culture in education. Educational Technology, 22(3), 33-35.
Beck, E. (2002). P for Political - Participation is Not Enough. SJIS, Volume 14.
Belotti, V. and Bly, S. (1996). Walking away from the desktop computer: distributed collaboration and mobility in a product design team. In Proceedings of CSCW 96, Cambridge, Mass., November 16-20, ACM Press: 209-218.
Beyer, H., and Holtzblatt, K. (1998). Contextual design: Defining customer-centered systems. San Francisco: Morgan Kaufmann.
Button, G. and Sharrock, W. (1996). Project work: the organisation of collaborative design and development in software engineering. CSCW Journal, 5(4), pp. 369-386.
Bødker, S. and Iversen, O. S. (2002). Staging a professional participatory design practice: moving PD beyond the initial fascination of user involvement. In Proceedings of the Second Nordic Conference on Human-Computer Interaction (Aarhus, Denmark, October 19-23, 2002). NordiCHI '02, vol. 31. ACM Press, New York, NY, 11-18.
Bødker, K., Kensing, F., and Simonsen, J. (2004). Participatory IT design: Designing for business and workplace realities. Cambridge, MA: MIT Press.
Bødker, S. (1996). Creating conditions for participation: Conflicts and resources in systems design. Human-Computer Interaction 11(3), 215-236.
Bødker, S., Christiansen, E., Ehn, P., Markussen, R., Mogensen, P., & Trigg, R. (1993). The AT Project: Practical research in cooperative design, DAIMI No. PB-454. Department of Computer Science, Aarhus University.
Bødker, S., Ehn, P., Kammersgaard, J., Kyng, M., & Sundblad, Y. (1987). A Utopian experience. In G. Bjerknes, P. Ehn, & M. Kyng (eds.), Computers and democracy: A Scandinavian challenge (pp. 251-278). Aldershot, UK: Avebury.
Carr, A. A. (1997). User-design in the creation of human learning systems. Educational Technology Research and Development, 45(3), 5-22.
Carr-Chellman, A. A., Cuyar, C., & Breman, J. (1998). User-design: A case application in health care training. Educational Technology Research and Development, 46(4), 97-114.
Divitini, M. & Farshchian, B. A. (1999). Using Email and WWW in a Distributed Participatory Design Project. SIGGROUP Bulletin 20(1), pp. 10-15.
Ehn, P. & Kyng, M. (1987). The Collective Resource Approach to Systems Design. In Bjerknes, G., Ehn, P., & Kyng, M. (eds.), Computers and Democracy - A Scandinavian Challenge (pp. 17-58). Aldershot, UK: Avebury.
Ehn, P. & Kyng, M. (1991). Cardboard Computers: Mocking-it-up or Hands-on the Future. In Greenbaum, J. & Kyng, M. (eds.), Design at Work, pp. 169-196. Hillsdale, NJ: Lawrence Erlbaum Associates.
Ehn, P. (1988). Work-oriented design of computer artifacts. Falköping: Arbetslivscentrum/Almqvist & Wiksell International; Hillsdale, NJ: Lawrence Erlbaum Associates.
Ehn, P. and Sandberg, Å. (1979). God utredning. In Sandberg, Å. (ed.), Utredning och förändring i förvaltningen [Investigation and change in administration]. Stockholm: Liber.

Grudin, J. (1993). Obstacles to Participatory Design in Large Product Development Organizations. In Namioka, A. & Schuler, D. (eds.), Participatory design: Principles and practices (pp. 99-122). Hillsdale, NJ: Lawrence Erlbaum Associates.
Grønbæk, K., Kyng, M. & Mogensen, P. (1993). CSCW challenges: Cooperative Design in Engineering Projects. Communications of the ACM, 36(6), pp. 67-77.
Ivanov, K. (1972). Quality-control of information: On the concept of accuracy of information in data banks and in management information systems. The University of Stockholm and The Royal Institute of Technology. Doctoral dissertation.
Ivanov, K. (1995). A subsystem in the design of informatics: Recalling an archetypal engineer. In B. Dahlbom (ed.), The infological equation: Essays in honor of Börje Langefors (pp. 287-301). Gothenburg: Gothenburg University, Dept. of Informatics (ISSN 1101-7422), Note #16.
Kensing, F. & Blomberg, J. (1998). Participatory Design: Issues and Concerns. Computer Supported Cooperative Work, Vol. 7, pp. 167-185.
Kensing, F. (2003). Methods and Practices in Participatory Design. ITU Press, Copenhagen, Denmark.
Kuiper, Gabrielle (June 2007). Participatory planning and design in the downtown eastside: reflections on Global Studio Vancouver. Australian Planner, 44(2), pp. 52-53.
Kyng, M. (1989). Designing for a dollar a day. Office, Technology and People, 4(2): 157-170.
Muller, M. J. (2007). Participatory design: The third space in HCI (revised). In J. Jacko and A. Sears (eds.), Handbook of HCI, 2nd edition. Mahwah, NJ: Erlbaum.
Naghsh, A. M. & Ozcan, M. B. (2004). Gabbeh - A Tool For Computer Supported Collaboration in Electronic Paper-Prototyping. In Dearden, A. & Watts, L. (eds.), Proceedings of HCI 04: Design for Life, volume 2. British HCI Group, pp. 77-80.
Näslund, T. (1997). Computers in Context - But in Which Context? In Kyng, M. & Mathiassen, L. (eds.), Computers and Design in Context. MIT Press, Cambridge, MA, pp. 171-200.
Nichols, Dave (2009). Planning Thought and History Lecture, The University of Melbourne.
Noro, K., & Imada, A. S. (eds.) (1991). Participatory Ergonomics. London: Taylor and Francis.
Perry, M. & Sanderson, D. (1998). Coordinating Joint Design Work: The Role of Communication and Artefacts. Design Studies, Vol. 19, pp. 273-288.
Press, Mandy (2003). "Communities for Everyone: redesigning contested public places in Victoria". Chapter 9 of Weeks et al. (eds), Community Practices in Australia (French Forests, NSW: Pearson Sprint Print), pp. 59-65.
Reigeluth, C. M. (1993). Principles of educational systems design. International Journal of Educational Research, 19(2), 117-131.
Sarkissian, W. & Perglut, D. (1986). Community Participation in Practice: The Community Participation Handbook, second edition. Murdoch University.
Schuler, D. & Namioka, A. (1993). Participatory design: Principles and practices. Hillsdale, NJ: Erlbaum.
Trainer, Ted (1996). Towards a sustainable economy: The need for fundamental change. Envirobook/Jon Carpenter, Sydney/Oxford, pp. 135-167.
Wojahn, P. G., Neuwirth, C. M., Bullock, B. (1998). Effects of Interfaces for Annotation on Communication in a Collaborative Task. In Proceedings of CHI 98, LA, CA, April 18-23, ACM Press: 456-463.
Wheeler, Stephen (2004). Planning for Sustainability. Routledge, pp. 34-52.
Von Bertalanffy, L. (1968). General systems theory. New York: Braziller.




External links
Web Page on Participatory Design, on the site of CPSR. Links to various papers and information about Participatory Design conferences.
Institute for Participatory Design. Participatory Design theory and practice, with project examples from Germany.
Participle. Creating new types of public services (London).
Human Centered Design Toolkit. IDEO's free toolkit.
"We build the parts, you build the product". Fast Company Magazine.
The World Seed Project.
Technical report on participatory theory and methods, emphasizing hybridity (methods and work practices that share attributes of multiple domains or disciplines).


4. Visual engineering
Communication design
Communication design is a mixed discipline between design and information development, concerned with how media (printed, crafted, electronic media or presentations) communicate with people. A communication design approach is concerned not only with developing the message, beyond the aesthetics of the medium, but also with creating new media channels to ensure the message reaches the target audience. Some designers use graphic design and communication design interchangeably.

Communication design can also refer to a systems-based approach, in which the totality of media and messages within a culture or organization are designed as a single integrated process rather than a series of discrete efforts. Communication design seeks to attract, inspire, create desires and motivate people to respond to messages, with a view to making a favorable impact on the bottom line of the commissioning body, whether to build a brand, move sales, or serve humanitarian purposes. Its process involves strategic business thinking, using market research, creativity, and problem-solving.

The term communication design is often used interchangeably with visual communication, but has an alternate, broader meaning that includes auditory, vocal, touch and smell. Examples of communication design include information architecture, editing, typography, illustration, web design, animation, advertising, ambient media, visual identity design, performing arts, copywriting and professional writing skills applied in the creative industries.

See also
Advertising
Art director
Brand management
Content strategy
Copywriting
Creative director
Information architecture
Information graphics
Instructional design
Marketing communications
Performing arts
Presentation
Technical writing
Visual arts

Visual Design
Visual design is design work in any medium or support of visual communication.[1][2][3] It is considered by some an umbrella term covering all types of design applied to communication through the visual channel,[4][5][6] precisely because the term relates to the visual language of a medium rather than being limited to supporting a particular form of content, as the terms graphic design (graphics)[7] or interface design (electronic media) are.

Communication design


References
[1] Munari, Bruno. Design and Visual Communication. Chronicle Books, 2006.
[2] Wollner, Alexandre. Visual Design: 50 Years. Cosac & Naify, 2003.
[3] Langenfelds, Ranya. Visual Design. TEAME, 1997.
[4] Van Leeuwen, Theo. Reading Images: The Grammar of Visual Design. Routledge, 2006, p. 4.
[5] Frascara, Jorge. Communication Design: Principles, Methods, and Practice. Allworth Communications, 2004, p. 4.
[6] Garret, Lillian. Visual Design: A Problem-Solving Approach. Michigan: R. E. Krieger Pub. Co., 1975.
[7] Meggs, Philip B. A History of Graphic Design. Michigan: Van Nostrand Reinhold, 1992, p. xiii (Preface).

External links
Simone Gilges, "Information Age", Triple Canopy.
Dossier: Communication Design in Germany, Goethe-Institut (kom/enindex.htm).


5. Information Architecture engineering

Information architecture
Information architecture (IA) is the art and science of organizing and labeling websites, intranets, online communities and software to support usability.[1] It is an emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.[2] Typically it involves a model or concept of information that is used and applied to activities requiring explicit details of complex information systems. These activities include library systems and database development. Historically, the term "information architect" is attributed to Richard Saul Wurman,[3] and there is now a growing network of active IA specialists who comprise the Information Architecture Institute.[4]

Information architecture has somewhat different meanings in different branches of IS or IT:
1. The structural design of shared information environments.[2]
2. The art and science of organizing and labeling web sites, intranets, online communities, and software to support findability and usability.[1][2]
3. The combination of organization, labeling, search and navigation systems within websites and intranets.[2]
4. An emerging discipline and community of practice focused on bringing principles of design and architecture to the digital landscape.[2]

The difficulty in establishing a common definition for "information architecture" arises partly from the term's existence in multiple fields. In the field of systems design, for example, information architecture is a component of enterprise architecture that deals with the information component when describing the structure of an enterprise. While the definition of information architecture is relatively well-established in the field of systems design, it is much more debatable within the context of online information systems (i.e., websites). Andrew Dillon refers to the latter as the "big IA-little IA debate".[5] In the little-IA view, information architecture is essentially the application of information science to web design, which considers, for example, issues of classification and information retrieval. In the big-IA view, information architecture involves more than just the organization of a website; it also factors in user experience, thereby considering usability issues of information design.

The role of IA
Information architecture is a specialized skill set that interprets information and expresses distinctions between signs and systems of signs. More concretely, it involves the categorization of information into a coherent structure, preferably one that the intended audience can understand quickly, if not inherently, and then easily retrieve the information for which they are searching.[2] The organization structure is usually hierarchical, but can have other structures, such as concentric or even chaotic.[2] Typically this is required in activities such as library systems, content management systems, web development, user interactions, database development, programming, technical writing, enterprise architecture, and critical system software design. Information architecture originates, to some degree, in the library sciences. Many schools with library and information science departments teach information architecture.[6]

In the context of information systems design, information architecture refers to the analysis and design of the data stored by information systems, concentrating on entities, their attributes, and their interrelationships. It refers to the modeling of data for an individual database and to the corporate data models an enterprise uses to coordinate the definition of data in several (perhaps scores or hundreds of) distinct databases. The "canonical data model" is applied to integration technologies as a definition for the specific data passed between the systems of an enterprise. At a higher level of abstraction it may also refer to the definition of data stores.
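As a toy illustration of the canonical-data-model idea, the sketch below gives each system one adapter into a shared canonical record, so N systems need N adapters rather than N×N pairwise translations. All field names, record formats, and system names here are hypothetical, invented purely for illustration:

```python
from dataclasses import dataclass

# Hypothetical canonical model; the fields are illustrative assumptions,
# not taken from any particular enterprise standard.
@dataclass
class CanonicalCustomer:
    customer_id: str
    full_name: str
    email: str

def from_crm(record: dict) -> CanonicalCustomer:
    """Translate a (hypothetical) CRM export row into the canonical form."""
    return CanonicalCustomer(
        customer_id=str(record["id"]),
        full_name=f'{record["first"]} {record["last"]}',
        email=record["mail"].lower(),
    )

def from_billing(record: dict) -> CanonicalCustomer:
    """Translate a (hypothetical) billing-system row into the canonical form."""
    return CanonicalCustomer(
        customer_id=record["cust_no"],
        full_name=record["name"],
        email=record["email_addr"].lower(),
    )

# Two systems describe the same customer differently; both adapters
# converge on one canonical record that integration code can rely on.
a = from_crm({"id": 7, "first": "Ada", "last": "Lovelace", "mail": "Ada@Example.org"})
b = from_billing({"cust_no": "7", "name": "Ada Lovelace", "email_addr": "ada@example.org"})
```

The design benefit is that adding a new system to the enterprise requires writing one new adapter, not one per existing system.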


Information architect
Richard Saul Wurman says of the term information architect: "used in the words architect of foreign policy. I mean architect as in the creating of systemic, structural, and orderly principles to make something work, the thoughtful making of either artifact, or idea, or policy that informs because it is clear."[3]

References
[1] "What is IA?" (PDF), Information Architecture Institute. http://www.iainstitute.org/documents/learn/What_is_IA.pdf
[2] Rosenfeld & Morville 1998.
[3] Wurman, R. S., Information Architects.
[4] "Join the IA Network", Information Architecture Institute. http://www.iainstitute.org/en/network/
[5] Dillon, A. (2002), "Information Architecture in JASIST: Just where did we come from?", Journal of the American Society for Information Science and Technology 53 (10): 821-23.
[6] "Schools Teaching IA", Education, IA Institute. http://www.iainstitute.org/en/learn/education/schools_teaching_ia.php

Rosenfeld, Louis; Morville, Peter (1998), Information Architecture for the World Wide Web, 1st ed., Sebastopol, CA: O'Reilly & Associates, ISBN 0-596-52734-9.

Further reading
Wei Ding; Xia Lin (15 May 2009). Information Architecture: The Design and Integration of Information Spaces. Morgan & Claypool. ISBN 978-1-59829-959-5.



Infographic

Information graphics or infographics are graphic visual representations of information, data or knowledge intended to present complex information quickly and clearly.[1][2] They can improve cognition by utilizing graphics to enhance the human visual system's ability to see patterns and trends.[3][4] The process of creating infographics can be referred to as data visualization, information design, or information architecture.[2]

Infographics have been around for many years, and recently the proliferation of a number of easy-to-use, free tools has made the creation of infographics available to a large segment of the population. Social media sites such as Facebook and Twitter have also allowed individual infographics to be spread among many people around the world.

The Washington Metro subway map

In newspapers, infographics are commonly used to show the weather, as well as maps, site plans, and graphs for statistical data. Some books are almost entirely made up of information graphics, such as David Macaulay's The Way Things Work. The Snapshots in USA Today are also an example of simple infographics used to convey news and current events.[5] Modern maps, especially route maps for transit systems, use infographic techniques to integrate a variety of information, such as the conceptual layout of the transit network, transfer points, and local landmarks. Public transportation maps, such as those for the Washington Metro and the London Underground, are well-known infographics. Public places such as transit terminals usually have some sort of integrated "signage system" with standardized icons and stylized maps.



History

Early humans created cave paintings and maps, artifacts that could be considered the very first instances of infographics. Some examples of early infographics include an ancient map at the Neolithic site of Çatalhöyük, which dates from around 7500 BCE, as well as imagery used by the Indians of Mesoamerica to depict the journeys of past generations. These artifacts often served as supportive elements to memory and storytelling, not as the stand-alone graphics often seen in the 21st century. In 1626, Christoph Scheiner published the Rosa Ursina sive Sol, a book that revealed his research about the rotation of the Sun; infographics appeared in the form of illustrations demonstrating the Sun's rotation patterns.

In 1786, William Playfair, an engineer and political economist, published the first data graphs in his book The Commercial and Political Atlas. To represent the economy of 18th-century England, Playfair used statistical graphs, bar charts, line graphs and histograms. In his work Statistical Breviary, he is credited with introducing the first area chart and pie chart.[6] Around 1820, modern geography was established by Carl Ritter.[7] His maps included shared frames, agreed map legends, scales, repeatability, and fidelity. Such a map can be considered a "supersign", which combines sign systems, as defined by Charles Sanders Peirce, consisting of symbols, icons, and indexes as representations.[8] Other examples can be seen in the works of the geographers Ritter and Alexander von Humboldt.[9] In 1857, the English nurse Florence Nightingale used information graphics to persuade Queen Victoria to improve conditions in military hospitals. The principal one she used was the Coxcomb chart, a combination of stacked bar and pie charts depicting the number and causes of deaths during each month of the Crimean War.

Pie chart from Playfair's Statistical Breviary (1801)

Polar area diagram by Florence Nightingale illustrating causes of mortality during the Crimean War (1857).

Charles Minard's information graphic of Napoleon's invasion of Russia.

1861 saw the release of an influential information graphic on the subject of Napoleon's disastrous march on Moscow. The graphic's creator, Charles Joseph Minard, captured four different changing variables that contributed to Napoleon's downfall in a single two-dimensional image: the army's direction as it traveled, the locations the troops passed through, the size of the army as troops died from hunger and wounds, and the freezing temperatures they experienced.

James Joseph Sylvester introduced the term "graph" in 1878 in the scientific magazine Nature and published a set of diagrams showing the relationship between chemical bonds and mathematical properties (Graph Theory 1736-1936, p. 65).[10] These were also some of the first mathematical graphs.



20th century
In 1942 Isidore Isou published the Lettrist manifesto, a document covering art, culture, poetry, film, and political theory. The included works, also called metagraphics and hypergraphics, are a synthesis of writing and visual art. In 1958 Stephen Toulmin proposed a graphical argument model, called the Toulmin Model of Argumentation. The diagram contained six interrelated components used for analyzing arguments and was considered Toulmin's most influential work, particularly in the fields of rhetoric, communication, and computer science; it became influential in argumentation theory and its applications. In 1972 and 1973, respectively, the Pioneer 10 and Pioneer 11 spacecraft carried the Pioneer plaques, a pair of gold-anodized aluminum plaques, each featuring a pictorial message. The pictorial messages included nude male and female figures as well as symbols intended to provide information about the origin of the spacecraft. The images were designed by Carl Sagan and Frank Drake and were unique in that their graphical meanings were meant to be understandable to extraterrestrial beings, who would have no conception of human language. A pioneer in data visualization, Edward Tufte, wrote a series of books on the subject of information graphics: Visual Explanations, The Visual Display of Quantitative Information, and Envisioning Information.[11][12][13] Referred to by The New York Times as "the da Vinci of Data", Tufte began to give day-long lectures and workshops on the subject of infographics in 1993. As of 2012, Tufte still gives these lectures.[14] To Tufte, good data visualizations represent every data point accurately and enable a viewer to see trends and patterns in the data. Tufte's contribution to the field of data visualization and infographics is considered immense, and his design principles can be seen in many websites, magazines, and newspapers today.[15]
The Pioneer Plaque.

The infographics created by Peter Sullivan for The Sunday Times in the 1970s, 1980s, and 1990s were some of the key factors in encouraging newspapers to use more infographics. Sullivan is also one of the few authors who have written about information graphics in newspapers. Likewise, the staff artists at USA Today, the United States newspaper that debuted in 1982, established the goal of using graphics to make information easier to comprehend. However, the paper has received criticism for oversimplifying news stories and for creating infographics that some find emphasize entertainment over content and data. Tufte coined the term "chartjunk" to refer to graphics that are visually appealing to the point of losing the information contained within them. With vector and raster graphics becoming ubiquitous in computing in the 21st century, data visualizations have been applied to commonly used computer systems, including desktop publishing and Geographic Information Systems (GIS). Closely related to the field of information graphics is information design, which is the creation of infographics. Richard Saul Wurman, author and founder of the TED conference, is considered the originator of the phrase "information architect", and many of his books, such as Information Anxiety, helped propel the phrase "information design" from a concept to a job category.[16]



21st century
By the year 2000, Adobe Flash-based animations on the Internet had made use of many key practices in creating infographics in order to create a variety of products and games. Likewise, television began to incorporate infographics into the viewer's experience in the early 2000s. One example of infographic usage in television and in pop culture is the 2002 music video for "Remind Me" by the Norwegian duo Röyksopp, which consisted entirely of animated infographics. Similarly, in 2004, a television commercial for the French energy company Areva used animated infographics as an advertising tactic. Both of these videos and the attention they received have conveyed to other fields the potential value of using information graphics to describe complex information efficiently. With the rise of alternatives to Adobe Flash, such as HTML5 and CSS3, infographics are now created in a variety of media with a number of software tools.[17] The field of journalism has also incorporated and applied information graphics to news stories. For stories that combine text, images, and graphics, a system called the maestro concept allows entire newsrooms to collaborate and organize a story to successfully incorporate all components. Across many newsrooms, this teamwork-integrated system is applied to improve time management and the presentation of stories for busy readers. Many businesses use infographics as a tool for communicating with and attracting potential customers.[18] Information graphics have become a tool for internet marketers and companies to create content that others will link to, thus possibly boosting a company's reputation and online presence.[19] Infographics are finding a home in the classroom as well.
Courses that teach students to create their own infographics using a variety of tools may encourage engagement in the classroom and may lead to a better understanding of the concepts they are mapping onto the graphics.[20]

The three parts of all infographics are the visual, the content, and the knowledge.[21] The visual consists of colors and graphics. There are two different types of graphics: theme and reference. Theme graphics are included in all infographics and represent the underlying visual representation of the data. Reference graphics are generally icons that can be used to point to certain data, although they are not always found in infographics. Statistics and facts usually serve as the content for infographics, and can be obtained from any number of sources, including census data and news reports. One of the most important aspects of infographics is that they contain some sort of insight into the data that they are presenting; this is the knowledge.[21]

Infographics are effective because of their visual element. Humans receive input from all five senses (sight, touch, hearing, smell, taste), but they receive significantly more information from vision than from any of the other four.[22] Fifty percent of the human brain is dedicated to visual functions, and images are processed faster than text. The brain processes pictures all at once, but processes text in a linear fashion, meaning it takes much longer to obtain information from text.[2] Furthermore, it is estimated that 65% of the population are visual learners (as opposed to auditory or kinesthetic), so the visual nature of infographics caters to a large portion of the population.[2] Entire business processes or industry sectors can be made relevant to a new audience through a guidance design technique that leads the eye. The page may link to a more complete report, but the infographic primes the reader, making the subject matter more accessible.[23]

A chart attempting to depict business expectations about emerging technologies as of July 2009.

When designing the visual aspect of an infographic, a number of considerations must be made to optimize the effectiveness of the visualization. The six components of visual encoding are spatial position, marks, connection, enclosure, retinal properties, and temporal encoding.[4] Each of these can be utilized in its own way to represent relationships between different types of data. However, studies have shown that spatial position is the most effective way to represent numerical data and leads to the fastest and easiest understanding by viewers.[3] Therefore, designers often spatially represent the most important relationship being depicted in an infographic. There are also three basic provisions of communication that need to be assessed when designing an infographic: appeal, comprehension, and retention.[24] Appeal is the idea that the communication needs to engage its audience. Comprehension implies that the viewer should be able to easily understand the information presented. Finally, retention means that the viewer should remember the data presented by the infographic. The order of importance of these provisions depends on the purpose of the infographic. If the infographic is meant to convey information in an unbiased way, such as in the domains of academia or science, comprehension should be considered first, then retention, and finally appeal. If the infographic is being used for commercial purposes, however, then appeal becomes most important, followed by retention and comprehension.
When infographics are used for editorial purposes, such as in a newspaper, appeal is again most important, but it is followed first by comprehension and then by retention.[24] When all of these factors are taken into consideration, infographics can be a highly efficient and effective way to convey large amounts of information visually.


Data visualization
Data visualizations are often used in infographics and may make up an entire infographic. There are many types of visualizations that can be used to represent the same set of data. Therefore, it is crucial to identify the appropriate visualization for the data set and infographic by taking into consideration graphical features such as position, size, shape, and color. There are five primary categories of visualization: time-series data, statistical distributions, maps, hierarchies, and networks.[25]

Time-series data is one of the most common forms of data visualization. It documents sets of values over time. Examples of graphics in this category include index charts, stacked graphs, small multiples, and horizon graphs. Index charts are ideal to use when raw values are less important than relative changes. An index chart is an interactive line chart that shows percentage changes for a collection of time-series data based on a selected index point. For example, stock investors could use one because they are less concerned with the specific price and more concerned with the rate of growth. Stacked graphs are area charts stacked on top of each other that depict aggregate patterns. They allow viewers to see both overall and individual patterns. However, they do not support negative numbers and make it difficult to accurately interpret trends. An alternative to stacked graphs is small multiples. Instead of stacking each area chart, each series is shown individually so the overall trends of each sector are more easily interpreted. Horizon graphs are a space-efficient method to increase the data density of a time-series while preserving resolution.[25]

A stacked graph showing processor families in Top500 supercomputers.
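The rebasing behind an index chart can be sketched in a few lines of plain Python; the price series below are invented solely to show that two series with different absolute levels but the same growth rate collapse onto the same curve:

```python
def index_chart(series, index_point=0):
    """Rebase a time series to percentage change relative to the value
    at the selected index point, as an index chart does."""
    base = series[index_point]
    return [100.0 * (v - base) / base for v in series]

# Hypothetical closing prices: absolute levels differ by 10x,
# but rebasing makes the growth rates directly comparable.
stock_a = [50.0, 55.0, 60.0]
stock_b = [500.0, 550.0, 600.0]
print(index_chart(stock_a))  # [0.0, 10.0, 20.0]
print(index_chart(stock_b))  # [0.0, 10.0, 20.0]
```

Choosing a different `index_point` simply shifts which moment counts as 0% change, which is what the interactive versions of these charts let the viewer do.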


Statistical distributions reveal trends based on how numbers are distributed. Common examples include histograms and box-and-whisker plots, which convey statistical features such as mean, median, and outliers. In addition to these common infographics, alternatives include stem-and-leaf plots, Q-Q plots, scatter plot matrices (SPLOM) and parallel coordinates. For assessing a collection of numbers and focusing on frequency distribution, stem-and-leaf plots can be helpful. The numbers are binned based on the first significant digit, and within each stack binned again based on the second significant digit. On the other hand, Q-Q plots compare two probability distributions by graphing quantiles against each other. This allows the viewer to see if the plot values are similar and if the two are linearly related. SPLOM is a technique that represents the relationships among multiple variables. It uses multiple scatter plots to represent a pairwise relation among variables. Another statistical distribution approach to visualize multivariate data is parallel coordinates. Rather than graphing every pair of variables in two dimensions, the data is repeatedly plotted on a parallel axis and corresponding points are then connected with a line. The advantage of parallel coordinates is that they are relatively compact, allowing many variables to be shown simultaneously.[25]
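The stem-and-leaf binning described above can be sketched in a few lines of Python, assuming for simplicity two-digit positive integers, so the stem is the tens digit and the leaf the units digit:

```python
from collections import defaultdict

def stem_and_leaf(values):
    """Group two-digit values by tens digit (stem); the units digits
    (leaves) are listed in sorted order within each stem."""
    stems = defaultdict(list)
    for v in sorted(values):
        stems[v // 10].append(v % 10)
    return dict(stems)

data = [12, 17, 23, 23, 28, 31, 45, 47]
plot = stem_and_leaf(data)
for stem in sorted(plot):
    print(f"{stem} | {' '.join(str(leaf) for leaf in plot[stem])}")
# 1 | 2 7
# 2 | 3 3 8
# 3 | 1
# 4 | 5 7
```

The printed rows double as a crude histogram turned on its side: the length of each leaf row is the frequency of that bin, while the individual digits preserve the raw values.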

Maps are a natural way to represent geographical data. Time and space can be depicted through the use of flow maps; line strokes of varying widths and colors help encode information. Choropleth maps, which encode data through color and geographical region, are also commonly used. Graduated symbol maps are another method of representing geographical data. They are an alternative to choropleth maps and overlay symbols, such as a pie chart for each area, on a map, allowing more dimensions to be represented using various shapes, sizes, and colors. Cartograms, on the other hand, completely distort the shape of a region and directly encode a data variable. Instead of using a geographic map, regions are redrawn proportionally to the data. For example, each region can be represented by a circle whose size or color is directly proportional to other information, such as population size.[25]

A cartogram showing the final electoral results of the 2008 US presidential election.
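A choropleth map reduces to classifying each region's value into a small number of color classes. The sketch below assumes hand-chosen class break points; the break values and region data are invented for illustration:

```python
def choropleth_class(value, breaks):
    """Return the index of the color class for a value, given
    ascending class break points (equal-interval or quantile
    breaks would both be supplied this way)."""
    for i, b in enumerate(breaks):
        if value < b:
            return i
    return len(breaks)  # top, open-ended class

# Hypothetical population densities per region, mapped to 4 shades.
breaks = [50, 200, 1000]  # people per square km
regions = {"A": 12, "B": 180, "C": 950, "D": 4500}
shades = {name: choropleth_class(d, breaks) for name, d in regions.items()}
print(shades)  # {'A': 0, 'B': 1, 'C': 2, 'D': 3}
```

How the breaks are chosen (equal intervals, quantiles, natural breaks) changes the map's message dramatically, which is why choropleth classification is a design decision rather than a mechanical step.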



Many data sets, such as spatial entities of countries or common structures for governments, can be organized into natural hierarchies. Node-link diagrams, adjacency diagrams, and enclosure diagrams are all types of infographics that effectively communicate hierarchical data. Node-link diagrams are a popular method due to their tidy and space-efficient results. A node-link diagram is similar to a tree, where each node branches off into multiple sub-sections. An alternative is the adjacency diagram, a space-filling variant of the node-link diagram. Instead of drawing a link between hierarchies, nodes are drawn as solid areas with sub-sections inside each section. This method allows size to be represented more easily than in node-link diagrams. Enclosure diagrams are also a space-filling visualization method, but they use containment rather than adjacency to represent the hierarchy. As in the adjacency diagram, the size of a node is easily represented in this model.[25]

A node-link diagram showing the exports of Uganda.
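In adjacency and enclosure diagrams, the area of an internal node is the sum of its children's areas, which is the aggregation any layout algorithm must compute first. A small sketch over a hypothetical export hierarchy (categories and values invented):

```python
def node_size(tree):
    """Area of an enclosure-diagram node: a leaf carries its own value;
    an internal node's area is the sum of its children's areas."""
    if isinstance(tree, (int, float)):
        return tree
    return sum(node_size(child) for child in tree.values())

# Hypothetical hierarchy of export values (arbitrary units).
exports = {
    "agriculture": {"coffee": 40, "tea": 10},
    "minerals": {"gold": 25},
}
print(node_size(exports))                 # 75
print(node_size(exports["agriculture"]))  # 50
```

Once every node's area is known, an enclosure layout (such as a treemap or circle packing) only has to partition each parent's region in proportion to these sums.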

Network visualization explores relationships, such as friendships and cliques. Three common types are force-directed layouts, arc diagrams, and matrix views. Force-directed layouts are a common and intuitive approach to network layout. In this system, nodes behave like charged particles that repel each other, while links pull related nodes together. Arc diagrams are one-dimensional layouts of nodes with circular arcs linking each node. When used properly, with good ordering of nodes, cliques and bridges are easily identified in this layout. Alternatively, mathematicians and computer scientists more often use matrix views, in which each link has an (x, y) position in the matrix corresponding to its pair of nodes. By using color and saturation instead of text, values associated with the links can be perceived rapidly. While this method makes it hard to follow paths between nodes, it has no line crossings, which in a large and highly connected network can quickly make a node-link layout too cluttered.[25]

Arc diagram representing the mathematical Farey sequence.

While all of these visualizations can be effectively used on their own, many modern infographics combine multiple types into one graphic, along with other features, such as illustrations and text. Some modern infographics do not even contain data visualization, and are instead simply a colorful and succinct way to present knowledge. Fifty-three percent of the 30 most-viewed infographics on the infographic sharing site did not contain actual data.[26]
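One iteration of the force-directed scheme described above, charged-particle repulsion between every pair of nodes plus spring attraction along links, can be sketched as follows; the force constants and rest length are arbitrary choices for illustration, not from any published algorithm's defaults:

```python
import math

def force_step(pos, edges, k_rep=0.05, k_spring=0.1, rest=1.0):
    """One iteration of a force-directed layout: every pair of nodes
    repels like charged particles; linked nodes are additionally
    pulled together by a spring toward a rest length."""
    forces = {n: [0.0, 0.0] for n in pos}
    nodes = list(pos)
    for i, a in enumerate(nodes):
        for b in nodes[i + 1:]:
            dx = pos[b][0] - pos[a][0]
            dy = pos[b][1] - pos[a][1]
            d = math.hypot(dx, dy) or 1e-9
            rep = k_rep / d ** 2                   # pushes a and b apart
            linked = (a, b) in edges or (b, a) in edges
            spr = k_spring * (d - rest) if linked else 0.0  # pulls them in
            f = spr - rep                          # net force along a -> b
            fx, fy = f * dx / d, f * dy / d
            forces[a][0] += fx; forces[a][1] += fy
            forces[b][0] -= fx; forces[b][1] -= fy
    return {n: (pos[n][0] + forces[n][0], pos[n][1] + forces[n][1]) for n in pos}

# Two linked nodes farther apart than the rest length drift closer...
linked = force_step({"a": (0.0, 0.0), "b": (2.0, 0.0)}, edges={("a", "b")})
# ...while two unlinked nodes that are too close get pushed apart.
unlinked = force_step({"a": (0.0, 0.0), "b": (0.5, 0.0)}, edges=set())
```

A real layout would run many such iterations with a cooling schedule until the positions settle; this single step only illustrates the two competing forces.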



Infographics can be created by hand using simple everyday tools such as graph paper, pencils, markers, and rulers. However, today they are more often created using computer software, which is often both faster and easier. They can be created with general illustration software, such as Adobe Illustrator or the free Inkscape. There are also a number of specialized websites and tools that can be used to construct infographics. Several online infographic creators, such as Infogr.am, Piktochart and Easel.ly, were launched in 2012. These sites allow users to create infographics from pre-designed templates, add custom data, and share the resulting infographics and charts on the web or download them as pictures for placing in presentations. Infogr.am is a free service that generates interactive, JavaScript-based online infographics and charts.[27] Piktochart is a site that allows users to create infographics using pre-defined themes that allow some customization.[28] Users can export an image of their infographic when they are done. Free access is limited, but a paid subscription allows users to create more infographics and utilize many more themes. Easel.ly is another free infographic-creation site utilizing themes.[29] Users have a canvas onto which they can drag themes and customizable graphics in order to personalize the look of their infographic. Diagrams can be manually created and drawn using Creately, which can be downloaded for the desktop or used online.[30] It also includes a number of templates to get users started on their diagrams, and it allows users to collaborate on diagrams in real time over the Internet. Gliffy is a similar diagram-creation tool that requires a paid subscription to use.[31] Tableau Public is a downloadable program that automatically parses datasets when users upload them.[32] It then suggests visualizations of the data and allows the user to customize the infographic using a simple drag-and-drop interface.
Users may also simultaneously make a number of infographics using different parts of the same dataset. Tableau Public provides users with HTML for their infographic so that they can share it on the web. ManyEyes is a project by IBM that allows users to create visualizations from either their own or other users' uploaded datasets.[33] They can then share their visualizations with all the other users, who can comment on and modify the visualizations. It is meant as a sharing and collaboration platform for infographics, allowing them to change over time based on input from numerous people. A wealth of global data from sources such as the OECD and the World Bank is built into the website and desktop program Gapminder.[34] Users can view and customize infographics of world data such as birth rates and GDP. Gapminder was built on a platform called Trendalyzer, which was sold to Google in 2007.[35] This explains some of the similarities between Gapminder and Google Public Data Explorer, a large online repository of publicly available data from resources such as the U.S. Census Bureau, the World Resources Institute, and Eurostat.[36] Users can also upload their own datasets, select specific data from a set, and have the site create visualizations of the data in the form of different graphs, such as bar and line graphs. There are a number of options for users to tailor the visualization by changing the scale, axes, and other variables. Visual.ly is a large infographics-sharing site that allows users to upload visualizations that they have created and explore other users' visualizations by topic area.[37] There are also several visualizations based on social network data that users can select and customize based on their own social network data. There are also numerous tools to create very specific types of visualizations. The Photo Stats App and InFoto can be used to create a visualization based on embedded data in the photos on a user's smartphone.
Users can create an infographic of their resume using Vizualize.me, or a picture of their digital life using Intel's What About Me?[38][39] The site Wordle allows users to provide text and create word clouds from it.[40]



References
[1] Doug Newsom and Jim Haynes (2004). Public Relations Writing: Form and Style. p. 236.
[2] Mark Smiciklas (2012). The Power of Infographics: Using Pictures to Communicate and Connect with Your Audience.
[3] Heer, J., Bostock, M., & Ogievetsky, V. (2010). A tour through the visualization zoo. Communications of the ACM, 53(6), 59-67.
[4] Card, Scott (2009). Information visualization. In A. Sears & J. A. Jacko (Eds.), Human-Computer Interaction: Design Issues, Solutions, and Applications (pp. 510-543). Boca Raton, FL: CRC Press.
[5] USA Today Snapshots. http://usatoday30.usatoday.com/news/snapshot.htm
[6] H. Gray Funkhouser (1937). Historical Development of the Graphical Representation of Statistical Data. Osiris, Vol. 3, pp. 269-404.
[7] The Profession of Geography: Alexander von Humboldt and Carl Ritter (http://www.valpo.edu/geomet/geo/courses/geo466/topics/humboldt.html)
[8] Benking, Heiner. Using Maps and Models, SuperSigns and SuperStructures, 2005. (http://benking.de/systems/codata/CODATA-MIST2005.htm)
[9] 1st Berlin Symposium on Internet and Society, Learnings from Alexander von Humboldt and Carl Ritter towards the Grand Global Modern Communication Challenges.
[10] Biggs, N., Lloyd, K., & Wilson, R. (1999).
[11] Tufte, Edward R. (1990). Envisioning Information. ISBN 0961392118.
[12] Tufte, Edward R. ISBN 0961392142.
[13] Tufte, Edward R. (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. ISBN 0961392126.
[14] Freymann-Weyr, Jeffrey. Edward Tufte, Offering Beautiful Evidence, August 20, 2006. (http://www.npr.org/templates/story/story.php?storyId=5673332)
[15] Romano, Andrew. How Master Information Designer Edward Tufte Can Help Obama Govern, March 9, 2010. (http://www.thedailybeast.com/newsweek/blogs/the-gaggle/2010/03/09/how-master-information-designer-edward-tufte-can-help-obama-govern.html)
[16] Knemeyer, Dirk. Richard Saul Wurman: The InfoDesign Interview, January 2004. (http://www.informationdesign.org/special/wurman_interview.htm)
[17] "Why you should build your infographics in HTML5 and CSS3." Paul Rouget. Retrieved 2012-07-10.
[18] Khazan, Olga. How can businesses use infographics?, April 8, 2012. (http://www.washingtonpost.com/blogs/on-small-business/post/how-can-businesses-use-infographics/2012/04/06/gIQAjbbh4S_blog.html)
[19] "SEO Guide to Creating Viral Linkbait and Infographics" (http://www.distilled.net/linkbait-guide/). Distilled. Retrieved 2012-07-19.
[20] MacQuarrie, Ashley. Infographics in Education, July 10, 2012. (http://blog.k12.com/2012/07/10/infographics-education)
[21] The Anatomy of an Infographic: 5 Steps to Create a Powerful Visual (http://spyrestudios.com/the-anatomy-of-an-infographic-5-steps-to-create-a-powerful-visual/)
[22] David McCandless (2010). The Beauty of Data Visualization. TED Talk. (http://www.ted.com/talks/david_mccandless_the_beauty_of_data_visualization.html)
[23] Turnbull, Dominic. "EPRA real economy infographic" (http://www.epra.com/regulation-and-reporting/the-property-business/). Retrieved 6 December 2012.
[24] Jason Lankow, Josh Ritchie, Ross Crooks (2012). Infographics: The Power of Visual Storytelling.
[25] Heer, J., Bostock, M., & Ogievetsky, V. (2010). A tour through the visualization zoo. Communications of the ACM, 53(6), 59-67.
[26] Van Slembrouck, Paul. Analyzing the Top 30 Infographics on Visually, June 2012. (http://blog.visual.ly/top-30-viral-infographics/)
[27] (http://infogr.am/)
[28] Piktochart (http://piktochart.com/)
[29] (http://www.easel.ly/)
[30] Creately (http://creately.com/)
[31] Gliffy (http://www.gliffy.com/)
[32] Tableau Public (http://www.tableausoftware.com/public/community)
[33] ManyEyes
[34] GapMinder (http://www.gapminder.org/)
[35] Rosmarin, Rachel. Google Buys Data Visualization Software, March 16, 2007. (http://www.forbes.com/2007/03/16/google-trendalyzer-gapminder-tech-internet_cx_rr_0316google.html)
[36] Google Public Data Explorer (http://www.google.com/publicdata/directory)
[37] (http://visual.ly/)
[38] Vizualize.me (http://vizualize.me/)
[39] Intel's What About Me? (http://www.intel.com/content/www/us/en/what-about-me/what-about-me.html)
[40] Wordle (http://www.wordle.net/)



Further reading
Heiner Benking (1981-1988). Requisite inquiry and time-line: computer graphics-infographics; see: Computer Graphics in the Environmental Sector - Possibilities and Limitations of Data-visualisation (Moglichkeiten_und_Grenzen_der_Datenprasentation_durch_Computergrafik_im_Umweltbereich). Cited in chapter 3 ("technical possibilities and human potentials and capacities"): "a picture is more than 10,000 words" and "10,000 miles equal 10,000 books".
Sullivan, Peter (1987). Newspaper Graphics. IFRA, Darmstadt.
Jacques Bertin (1983). Semiology of Graphics. Madison, WI: University of Wisconsin Press. Translation by William Berg of Semiologie Graphique. Paris: Mouton/Gauthier-Villars, 1967.
William S. Cleveland (1985). The Elements of Graphing Data. Summit, NJ: Hobart Press. ISBN 978-1584655121
Heiner Benking (1993). Visual Access Strategies for Multi-Dimensional Objects and Issues; "Our View of Life is too Flat" (ceptualinstitute/12theses.htm), WFSF, Turku. FAW Report TR-93019 (books?id=A-RGtwAACAAJ&dq=benking+FAW&source=bl&ots=6vMLbJV0Qb&sig=eOmHk6JIHOZxS_71ClX0uDQqu_E&hl=de&sa=X&ei=9tM-UM_uBsGo4gT1vIDgBQ&redir_esc=y)
William S. Cleveland (1993). Visualizing Data. Summit, NJ: Hobart Press. ISBN 978-0963488404
Sullivan, Peter (1993). Information Graphics in Colour. IFRA, Darmstadt.
John Emerson (2008). Visualizing Information for Advocacy: An Introduction to Information Design. New York: OSI.
Paul Lewi (2006). "Speaking of Graphics".
Thomas L. Hankins (1999). "Blood, dirt, and nomograms: A particular history of graphs". Isis, 90:50-80.
Robert L. Harris (1999). Information Graphics: A Comprehensive Illustrated Reference. Oxford University Press.
Eric K. Meyer (1997). Designing Infographics. Hayden Books.
Edward R. Tufte (1983). The Visual Display of Quantitative Information. Cheshire, CT: Graphics Press.
Edward R. Tufte (1990). Envisioning Information. Cheshire, CT: Graphics Press.
Edward R. Tufte (1997). Visual Explanations: Images and Quantities, Evidence and Narrative. Cheshire, CT: Graphics Press.
Edward R. Tufte (2006). Beautiful Evidence. Cheshire, CT: Graphics Press.
John Wilder Tukey (1977). Exploratory Data Analysis. Addison-Wesley.
Sandra Rendgen, Julius Wiedemann (2012). Information Graphics. Taschen Publishing. ISBN 978-3836528795
Jason Lankow, Josh Ritchie, Ross Crooks (2012). Infographics: The Power of Visual Storytelling. Wiley. ISBN 978-1118314043

External links
Milestones in the History of Thematic Cartography, Statistical Graphics and Data Visualization
Periodic Table of Visualization Methods
Society for News Design


6. Accessibility
Accessibility is the degree to which a product, device, service, or environment is available to as many people as possible. Accessibility can be viewed as the "ability to access" and benefit from some system or entity. The concept often focuses on people with disabilities or special needs (such as the Convention on the Rights of Persons with Disabilities) and their right of access, enabling the use of assistive technology. Accessibility is not to be confused with usability, which is the extent to which a product (such as a device, service, or environment) can be used by specified users to achieve specified goals with effectiveness, efficiency and satisfaction in a specified context of use. Accessibility is strongly related to universal design when the approach involves "direct access." This is about making things accessible to all people (whether they have a disability or not). An alternative is to provide "indirect access" by having the entity support the use of a person's assistive technology to achieve access (for example, computer screen readers).

Accessibility legislation
The disability rights movement advocates equal access to social, political, and economic life which includes not only physical access but access to the same tools, services, organizations and facilities which we all pay for. Article 9 of the United Nations Convention on the Rights of Persons with Disabilities commits signatories to provide for full accessibility in their countries.

Universal access is provided in Curitiba's public transport system, Brazil.



While it is often used to describe facilities or amenities to assist people with disabilities, as in "wheelchair accessible", the term can extend to Braille signage, wheelchair ramps, elevators, audio signals at pedestrian crossings, walkway contours, website design, reading accessibility, and so on. Accessibility modifications may be required to enable persons with disabilities to gain access to education, employment, transportation, housing, recreation, or even simply to exercise their right to vote.

National legislation
Various countries have legislation requiring physical accessibility, including (in order of enactment):

In the US, under the Americans with Disabilities Act of 1990, new public and private business construction generally must be accessible. Existing private businesses are required to increase the accessibility of their facilities when making any other renovations, in proportion to the cost of those renovations. The United States Access Board is "A Federal Agency Committed to Accessible Design for People with Disabilities." The Job Accommodation Network discusses accommodations for people with disabilities in the workplace. Many states in the US have their own disability laws.
In Australia, the Disability Discrimination Act 1992 has numerous provisions for accessibility.
In Canada, relevant federal legislation includes the Canadian Human Rights Act, the Employment Equity Act, and the Canadian Labour Code.
In the UK, the Equality Act 2010 has numerous provisions for accessibility.
In South Africa, the Promotion of Equality and Prevention of Unfair Discrimination Act 2000 has numerous provisions for accessibility.
Legislation may also be enacted at the state, provincial or local level. In Ontario, Canada, the Ontarians with Disabilities Act of 2001 is meant to "improve the identification, removal and prevention of barriers faced by persons with disabilities..."
The European Union (EU), which has signed the United Nations' Convention on the Rights of Persons with Disabilities, has also adopted a European Disability Strategy for 2010-20.
The Strategy includes the following goals, among others:[1]
devising policies for inclusive, high-quality education;
ensuring the European Platform Against Poverty includes a special focus on people with disabilities (the forum brings together experts who share best practices and experience);
working towards the recognition of disability cards throughout the EU to ensure equal treatment when working, living or travelling in the bloc;
developing accessibility standards for voting premises and campaign material;
taking the rights of people with disabilities into account in external development programmes and for EU candidate countries.
A European Accessibility Act is to be implemented in late 2012. This Act would establish standards within member countries for accessible products, services, and public buildings. The harmonization of accessibility standards within the EU "would facilitate the social integration of persons with disabilities and the elderly and their mobility across member states, thereby also fostering the free movement principle".[2]



Assistive technology and adaptive technology

Assistive technology is the creation of new devices that assist a person in completing tasks that would otherwise be impossible. Some examples include new computer software programs, and inventions such as assistive listening devices, including hearing aids, and traffic lights with a standard color code that enables colorblind individuals to understand the correct signal. Adaptive technology is the modification or adaptation of existing devices or methods, or the creation of new uses for existing devices, to enable a person to complete a task. Examples include the use of remote controls, and the autocomplete (word completion) feature in computer word processing programs, both of which help individuals with mobility impairments to complete tasks. Adaptations to wheelchair tires are another example; widening the tires enables wheelchair users to move over soft surfaces, such as deep snow on ski hills and sandy beaches. Assistive technology and adaptive technology have a key role in developing the means for people with disabilities to live more independently and to participate more fully in mainstream society. Making this technology available, however, has required educating the public and, in some cases, legislating requirements to incorporate it.

Accessibility of employment covers a wide range of issues, from skills training, to occupational therapy, finding employment, and retaining employment. Employment rates for workers with disabilities are lower than for the general workforce. Workers in Western countries fare relatively well, having access to more services and training as well as legal protections against employment discrimination. Despite this, in the United States the 2012 unemployment rate for workers with disabilities was 12.9%, while it was 7.3% for workers without disabilities.[3] Surveys of non-Western countries are limited, but the available statistics also indicate fewer jobs being filled by workers with disabilities. In India, a large 1999 survey found that "of the 'top 100 multinational companies' in the country [...] the employment rate of persons with disabilities in the private sector was a mere 0.28%, 0.05% in multinational companies and only 0.58% in the top 100 IT companies in the country".[4] India, like much of the world, has large sections of the economy that are without strong regulation or social protections, such as the informal economy. Other factors have been cited as contributing to the high unemployment rate, such as public service regulations. Although employment for workers with disabilities is higher in the public sector due to hiring programs targeting persons with disabilities, regulations currently restrict types of work available to persons with disabilities: "Disability-specific employment reservations are limited to the public sector and a large number of the reserved positions continue to be vacant despite nearly two decades of enactment of the PWD Act".[4] Expenses related to adaptive or assistive technology required to participate in the workforce may be tax deductible expenses for individuals with a medical practitioner's prescription in some jurisdictions.

Disability Management (DM)

Disability Management (DM) is a specialized area of human resources that supports efforts by employers to better integrate and retain workers with disabilities. Some workplaces have policies in place to provide "reasonable accommodation" for employees with disabilities; however, many do not. In some jurisdictions, employers may have legal requirements to end discrimination against persons with disabilities. Researchers have noted that where accommodations are in place for employees with disabilities, these frequently apply to individuals with "pre-determined or apparent disabilities as determined by national social protection or Equality Authorities",[5] which include persons with pre-existing conditions who receive an official disability designation. One of the biggest challenges for employers is in developing policies and practises to manage employees who develop disabilities during the course of employment. Even where these exist, they tend to focus on

workplace injuries, overlooking job retention challenges faced by employees who acquire a non-occupational injury or illness. Protecting employability is a factor that can help close the unemployment gap for persons with disabilities.[5]


Meeting and conference access

Meetings and conferences should consider the needs of all of their participants. Checklists such as this may make it easier to identify specific needs:

Mobility access
Wheelchair-accessible transportation
Reserved parking
Barrier-free meeting rooms, restrooms, and podium/speaker's platform
ADA-compliant ramp access to businesses and public places[6]
Accessible lodging

Hearing access
Advance copies of papers
An assistive listening system
Sign language interpreters
A quiet place to gather for social conversation (a quieter space that is still visible to others should be reserved at social events or dinners so that people who are hard of hearing may go there to talk with their colleagues)
TTY access or Internet-based TRS

Sight access
Large print/braille copies of the program and papers
A student volunteer to guide and describe the artwork, computer work, etc.
A tech to help with assistive devices and screen readers (e.g., JAWS)
Gloves to touch three-dimensional work (where permissible)

Other issues
Notification if social events include flashing lights and noises (these can cause seizures, so either avoid them or announce them ahead of time)
Notices asking participants to refrain from wearing allergy-producing products (e.g., perfumes)
Inform food providers of food allergies (e.g., peanuts, shellfish)
Referral information for local personal care attendant agencies
Referral information for veterinary care for service animals
Access to a place to rest during the day (if the conference venue is far from the lodgings)
For a complete checklist, consult Equal Access: Universal Design of Conference Exhibits and Presentations.[7]



In transportation, accessibility refers to the ease of reaching destinations. Academics have disputed how the term "ease" should be defined and measured. People in highly accessible places can reach many other activities or destinations quickly, while people in inaccessible places can reach fewer places in the same amount of time. A commonly used measure of accessibility in a traffic analysis zone i is:

A_i = Σ_j f(C_ij)

where:
i = index of origin zones
j = index of destination zones
f(C_ij) = function of generalized travel cost (so that nearer or less expensive places are weighted more than farther or more expensive places).

For a non-motorized mode of transport, such as walking or cycling, the generalized travel cost may include additional factors such as safety or gradient. Transport for London uses a calculated approach known as Public Transport Accessibility Level (PTAL) that uses the distance from any point to the nearest public transport stops, and service frequency at those stops, to assess the accessibility of a site to public transport services.
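The zonal measure above can be sketched in a few lines of code. This is a hypothetical illustration: the function name, the destination costs, and the choice of a negative-exponential impedance f(c) = exp(-beta*c) with beta = 0.1 are all illustrative assumptions, not taken from the text.

```python
import math

def accessibility(costs_from_zone, beta=0.1):
    """A_i = sum over destinations j of f(C_ij), using the common
    negative-exponential impedance f(c) = exp(-beta * c) so that nearer
    (cheaper) destinations are weighted more heavily than distant ones."""
    return sum(math.exp(-beta * c) for c in costs_from_zone)

# Zone A reaches three destinations at low generalized cost;
# zone B faces the same destinations at much higher cost.
zone_a = accessibility([5, 10, 15])
zone_b = accessibility([20, 30, 40])
print(zone_a > zone_b)  # → True
```

Larger beta values discount distant destinations more steeply, which is how the "ease" weighting in the formula can be tuned for different modes (e.g., walking versus driving).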

Accessibility to all buses is provided in Curitiba's public transport system, Brazil.

Adapted automobiles for persons with disabilities

Automobile accessibility also refers to ease of use by disabled people. Automobiles, whether a car or a van, can be adapted for a range of physical disabilities. Foot pedals can be raised, or replaced with hand-controlled devices. Wheelchair hoists, lifts or ramps may be customized according to the needs of the driver. Ergonomic adaptations, such as a lumbar support cushion, may also be needed.[8] Generally, the more limiting the disability, the more expensive the adaptation needed for the vehicle. Financial assistance is available through some organizations, such as Motability in the United Kingdom, which requires a contribution by the prospective vehicle owner. Motability makes vehicles available for purchase or lease.[9] A challenge for mobility-impaired drivers is renting a vehicle when they travel. Organizations that specialize in adaptive tourism can assist in finding a vehicle, when possible. In New Zealand, Enable Tourism is an organization that helps drivers with disabilities locate car rentals offering adapted cars or vans.[10] In France, adapted cars with hand controls are available from leading car rental businesses; however, it is advisable for drivers with disabilities to reserve a car well in advance of travelling.[11] When an employee with a disability requires an adapted car for work use, the employee does not have to pay for a "reasonable adjustment" in the United Kingdom; if the employer is unable to pay the cost, assistance is offered by government programs.[12]



Low floor
A significant development in transportation, and public transport in particular, for achieving accessibility is the move to "low-floor" vehicles. In a low-floor vehicle, access to part or all of the passenger cabin is unobstructed by steps at one or more entrances, enabling easier access for the infirm and for people with pushchairs. A further aspect may be that the entrances and corridors are wide enough to accommodate a wheelchair. Low-floor vehicles have been developed for buses, trolleybuses and trams. In the vehicular sense, a low floor is normally combined conceptually with normal pedestrian access from a standard kerb height. However, the accessibility of a low-floor vehicle can also be exploited by slightly raised portions of kerb at bus stops, or through level-boarding bus rapid transit 'stations' or tram stops. The combination of access from a kerb was the technological development of the 1990s, as step-free interior layouts for buses had existed in some cases for decades, with entrance steps being introduced as chassis designs and overall height regulations changed. Low-floor buses may also be designed with height adjustment controls that permit a stationary bus to temporarily lower itself to ground level, permitting wheelchair access; this is referred to as a kneeling bus. At rapid transit systems, vehicles generally have floors at the same height as the platforms, but the stations are often underground or elevated, so accessibility there is not a question of providing low-floor vehicles but of providing step-free access from street level to the platforms (generally by elevators, which are sometimes restricted to disabled passengers only, so that the step-free access is not obstructed by able-bodied people).

Accessibility planning for transportation

In the United Kingdom, the Department for Transport has mandated that each local authority produce an Accessibility Plan that is incorporated in their Local Transport Plan. An Accessibility Plan sets out how each local authority plans to improve access to employment, learning, health care, food shops and other services of local importance, particularly for disadvantaged groups and areas. Accessibility targets are defined in the accessibility plans; these are often the distance or time to access services by different modes of transport, including walking, cycling and public transport. Accessibility Planning was introduced as a result of the report "Making the Connections: Final Report on Transport and Social Exclusion".[13] This report was the result of research carried out by the Social Exclusion Unit. The United Kingdom also has a code of practice for making trains and stations accessible: "Accessible Train and Station Design for Disabled People: A Code of Practice".[14] This code of practice was first published in 2002 with the objective of compliance with Section 71B of the Railways Act 1993, and revised after a public consultation period in 2008. Making public services fully accessible to the public has led to some technological innovations. Public announcement systems using audio induction loop technology can broadcast announcements directly into the hearing aid of anyone with a hearing impairment, making them useful in such public places as auditoriums and train stations. Australia's government has supported the creation of the National Public Toilet Map to enable users to locate public toilet facilities throughout the country. GPS is also included as a feature. The service assists people with continence issues, estimated to affect up to 18% of the population, including the elderly and families with young children.

Accessibility in urban design

Accessibility modifications to conventional urban environments have become common in recent decades.
The use of a curb cut, or kassel kerb, to enable wheelchair or walker movement between sidewalk and street level is found in most major cities of wealthy countries. The creation of priority parking spaces and of disabled parking permits has made them a standard feature of urban environments. Features that assist people with visual impairments include braille signs and tactile paving to allow a user with a cane to easily identify stairways, train platforms, and similar

areas that could pose a physical danger to anyone who has a visual impairment. Urban design features that may appear to be simple conveniences for persons without disabilities are often essential to anyone who has a disability, and the loss of these features presents a significant barrier. For example, a lack of prompt snow-clearing on the sidewalks of major Canadian city streets sometimes means that wheelchair and walker users cannot reach pedestrian crossing buttons on crosswalk posts, because snow banks accumulate around the posts and make the buttons inaccessible. Public services must take into account the need to maintain accessibility features in the urban environment.


Most existing and new housing, even in the wealthiest nations, lacks basic accessibility features unless the designated, immediate occupant of a home currently has a disability. However, there are some initiatives to change typical residential practices so that new homes incorporate basic access features such as zero-step entries and door widths adequate for wheelchairs to pass through. Occupational therapists are a professional group skilled in assessing homes and making recommendations to improve access.[15] They are involved both in the adaptation of existing housing to improve accessibility[16] and in the design of future housing.[17] The broad concept of universal design is relevant to housing, as it is to all aspects of the built environment. Furthermore, a Visitability movement begun by grassroots disability advocates in the 1980s focuses specifically on changing construction practices in new housing. This movement, a network of interested people working in their locales, works on educating, passing laws, and spurring voluntary home access initiatives with the intention that basic access become a routine part of new home construction.

Accessibility and 'ageing in place'

Accessibility in the design of housing and household devices has become more prominent in recent decades due to a rapidly ageing population in developed countries. Ageing seniors may wish to continue living independently, but the ageing process naturally increases the disabilities that a senior citizen will experience. A growing trend is the desire for many senior citizens to 'age in place', living as independently as possible for as long as possible. Accessibility modifications that allow ageing in place are becoming more common. Housing may even be designed to incorporate accessibility modifications that can be made throughout the life cycle of the residents.

Disability, information technology (IT) and telecommunications

Advances in information technology and telecommunications have represented a leap forward for accessibility. Access to the technology is restricted to those who can afford it, but it has become more widespread in Western countries in recent years. For those who use it, it provides the ability to access information and services by minimizing the barriers of distance and cost as well as the accessibility and usability of the interface. In many countries this has led to initiatives, laws and/or regulations that aim toward providing universal access to the internet and to phone systems at reasonable cost to citizens.[18] A major advantage of advanced technology is its flexibility. Some technologies can be used at home, in the workplace, and in school, expanding the ability of the user to participate in various spheres of daily life. Augmentative and alternative communication technology is one such area of IT progress. It includes inventions such as speech-generating devices, Teletypewriter devices, adaptive pointing devices to replace computer mouse devices, and many others. They can be adapted to create accessibility to a range of tasks, and may be suitable for different kinds of disability. The following impairments are some of the disabilities that affect communications and technology access, as well as many other life activities: communication disorders;[19]

hearing impairments;[20] visual impairments;[21] mobility impairments; and learning disabilities or impairments in mental functioning.


Each kind of disability requires a different kind of accommodation, and determining the appropriate accommodation may require analysis by a medical specialist, an educational specialist, or a job analysis.[22]

Examples of common assistive technologies

Impairment: Assistive technology
Communication impairment: Blissymbols board or similar device; electronic speech synthesizer
Hearing impairment: earphones, headphones, headsets; real-time closed captioning; teletypewriter
Mobility impairment: page-turning device; adaptive keyboards and computer mice (pointing devices such as trackballs, vertical mouse, foot mouse, or a programmable pedal)
Physical or mental impairment: voice recognition software
Perceptual disability, learning disability: talking textbooks
Visual impairment, learning disability: modified monitor interface, magnification devices; reading service, e-text
Visual impairment: Braille note-taker; Braille printer; screen magnifiers; optical scanner

Mobility impairments

One of the first areas where information technology improved the quality of life of disabled individuals was the voice-operated wheelchair. Quadriplegics have the most profound disability, and voice-operated wheelchair technology was first developed in 1977 to provide increased mobility. The original version replaced the joystick system with a module that recognized 8 commands. Many other technology accommodations have evolved from this initial development.[23] Missing arms and fingers interfere with the use of a keyboard and pointing device (mouse). This can be one of the most devastating types of handicap, and technology has made great improvements in this area during the last 20 years. Speech recognition devices and software can improve technology access.

Communication (including speech) impairments

A communication disorder interferes with the ability to produce clearly understandable speech. There can be many different causes, such as nerve degeneration, muscle degeneration, stroke, and vocal cord injury. The modern method of dealing with speaking disabilities is to provide a text interface to a speech synthesizer for complete vocal disability. This can be a great improvement for people who had been limited to the use of a throat vibrator to produce speech since the 1960s.

Hearing impairment

An individual meets the definition of hearing disabled when hearing loss is about 30 dB for a single frequency, but this is not always perceptible as a handicap. For example, loss of sensitivity in one ear interferes with sound localization (directional hearing), which can interfere with communication in a crowd. This is often recognized when certain words are confused during normal conversation. It can also interfere with voice-only interfaces, like automated customer service telephone systems, because it is sometimes difficult to increase the volume and repeat the message.

Mild to moderate hearing loss may be accommodated with a hearing aid that amplifies ambient sounds. Portable devices with speech recognition that can produce text can reduce problems associated with understanding conversation. This kind of hearing loss is relatively common, and it often grows worse with age. The modern method of dealing with profound hearing disability is the Internet, using email or word processing applications. The Telecommunication Device for the Deaf (TDD) became available in the form of the teletype (TTY) during the 1960s. These devices consist of a keyboard, display and modem that connect two or more of them over a dedicated wire or plain old telephone service.

Visual impairments

A wide range of technology products is available to deal with visual impairment. These include screen magnification for monitors, mouse-over speech synthesis browsing, braille displays, braille printers, braille cameras, voice-operated phones and tablets. One emerging product that will make ordinary computer displays available to the blind is the refreshable tactile display,[24][25] which is very different from a conventional braille display: it provides a raised surface corresponding to the bright and dim spots on a conventional display. An example is the Touch Sight Camera for the Blind.[26][27] Speech Synthesis Markup Language and Speech Recognition Grammar Specification are relatively recent technologies intended to standardize communication interfaces, using BNF form and XML form. These technologies assist people with visual and physical impairments by providing interactive access to web content without the need to visually observe it. While they provide access for visually impaired individuals, the primary beneficiaries have been automated systems that replace live human customer service representatives handling telephone calls.


Web Accessibility
International standards and guidelines

There have been a few major movements to coordinate a set of guidelines for accessibility for the web. The first and most well known is the Web Accessibility Initiative (WAI), which is part of the World Wide Web Consortium (W3C). This organization developed the Web Content Accessibility Guidelines (WCAG) 1.0 and 2.0, which explain how to make Web content accessible to everyone, including people with disabilities. Web "content" generally refers to the information in a Web page or Web application, including text, images, forms, and sounds. (More specific definitions are available in the WCAG documents.)[28] The WCAG is separated into three levels of compliance: A, AA and AAA. Each level requires a stricter set of conformance guidelines, such as different versions of HTML (Transitional vs. Strict) and other techniques that need to be incorporated into the code before validation can succeed. Online tools allow users to submit their website, automatically check it against the WCAG guidelines and produce a report stating whether or not it conforms to each level of compliance. Adobe Dreamweaver also offers plugins which allow web developers to test these guidelines on their work from within the program. Another source of web accessibility guidance comes from the US government. In response to Section 508 of the US Rehabilitation Act, the Access Board developed standards with which U.S. federal agencies must comply in order to make their sites accessible. The U.S. General Services Administration has developed a website where one can take free online training courses to learn about these rules.[29]
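Many WCAG techniques come down to small markup choices. The fragment below is an illustrative sketch only (the page content, image, and label text are invented for the example); it shows a few of the most common techniques: a declared document language, a text alternative for an image, a form control tied to a visible label, and link text that is meaningful out of context:

```html
<!-- Hypothetical page fragment illustrating common WCAG techniques -->
<html lang="en">
  <body>
    <!-- Text alternative for non-text content -->
    <img src="chart.png" alt="Bar chart of monthly sales, peaking in June">

    <!-- Label programmatically associated with its control -->
    <label for="email">Email address</label>
    <input type="email" id="email" name="email">

    <!-- Link text that makes sense when read on its own -->
    <a href="report.pdf">Download the 2012 annual report (PDF)</a>
  </body>
</html>
```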

Features for Web accessibility

Examples of website features that can help to make it accessible include the following:
At least WAI-AA (preferably AAA) compliance with the WAI's WCAG
Semantic Web markup
(X)HTML validation from the W3C for the page's content
CSS validation from the W3C for the page's layout
Compliance with all guidelines from Section 508 of the US Rehabilitation Act
A high contrast version of the site for individuals with low vision, and a low contrast (yellow or blue) version of the site for individuals with dyslexia
Alternative media for any multimedia used on the site (video, Flash, audio, etc.)
Simple and consistent navigation
Device independence

While WCAG provides much technical information for use by web designers, coders and editors, BS 8878:2010 Web accessibility - Code of Practice[30] has been introduced, initially in the UK, to help site owners and product managers to understand the importance of accessibility. It includes advice on the business case behind accessibility, and on how organisations might usefully update their policies and production processes to embed accessibility in their business-as-usual. Another useful idea is for websites to include a web accessibility statement on the site.
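The "high contrast" recommendation above is quantified in WCAG 2.0 as a contrast ratio computed from the relative luminance of the foreground and background colours; level AA requires at least 4.5:1 for normal-sized text. A minimal sketch of that calculation follows (the function names are invented for this example, not part of any standard API):

```javascript
// Linearize one sRGB channel (0-255) as defined in WCAG 2.0.
function linearize(channel) {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance of an [r, g, b] colour.
function luminance([r, g, b]) {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio (L1 + 0.05) / (L2 + 0.05), always >= 1.
function contrastRatio(fg, bg) {
  const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG 2.0 level AA threshold for normal-sized text.
function meetsAA(ratio) {
  return ratio >= 4.5;
}

console.log(contrastRatio([0, 0, 0], [255, 255, 255]).toFixed(1)); // 21.0
```

Black text on a white background yields the maximum ratio of 21:1; two nearby greys fail the AA threshold, which is why automated checkers flag low-contrast colour pairs.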
Initially introduced in PAS 78,[31] the best practice for web accessibility statements has been updated in BS 8878[30] to emphasise the inclusion of:
information on how disabled and elderly people could get a better experience of using the website by using assistive technologies or the accessibility settings of browsers and operating systems (linking to BBC My Web My Way[32] can be useful here);
information on what accessibility features the site's creators have included, and on any user needs which the site doesn't currently support (for example, descriptive video to allow blind people to access the information in videos more easily); and
contact details that disabled people can use to let the site creators know if they have any problems in using the site.
While validations against WCAG and other accessibility badges can also be included, they should be put lower down the statement, as most disabled people still do not understand these technical terms. An example of an accessibility statement written by the lead author of BS 8878 is available.[33]




Education and accessibility for students

Equal access to education for students with disabilities is supported in some countries by legislation. It is still challenging for some students with disabilities to fully participate in mainstream education settings, but many adaptive technologies and assistive programs are making improvements. Students with a physical or mental impairment or learning disability may require note-taking assistance, which may be provided by a business offering such services, as with tutoring services. Talking books in the form of talking textbooks are available in Canadian secondary and post-secondary schools. Also, students may require adaptive technology to access computers and the Internet. These may be tax-exempt expenses in some jurisdictions with a medical prescription.

Test accessibility
Test accessibility is defined as the extent to which a test and its constituent item set eliminate barriers and permit the test-taker to demonstrate his or her knowledge of the tested content. Test accessibility involves an interaction between features of the test and individual test-taker characteristics. With the passage of the No Child Left Behind Act of 2001, student accountability in essential content areas such as reading, mathematics, and science has become a major area of focus in educational reform. As a result, test developers have needed to create tests to ensure all students, including those with special needs (e.g., students identified with disabilities), are given the opportunity to demonstrate the extent to which they have mastered the content measured on state assessments. Currently, states are permitted to develop two different types of tests in addition to the standard grade-level assessments to target students with special needs. First, the alternate assessment may be used to report proficiency for up to 1% of students in a state. Second, new regulations permit the use of alternate assessments based on modified academic achievement standards to report proficiency for up to 2% of students in a state.

A teacher helps her student at an orphanage in central Vietnam. The orphanage caters to many abandoned and disabled children - through education and communication programs they are able to have a life that would otherwise not be possible.

Construction of a ramp for a school latrine in Ukunda, Kenya, making the school building more accessible to students with disabilities.

To ensure these new tests generate results that permit valid inferences about student performance, they must be accessible to as many individuals as possible. The Test Accessibility and Modification Inventory (TAMI)[34] and its companion evaluation tool, the Accessibility Rating Matrix (ARM), were designed to facilitate the evaluation of tests and test items with a focus on enhancing their accessibility. Both instruments integrate principles of accessibility theory and were guided by research on universal design, assessment accessibility, cognitive load theory, and research on item-writing and test development. The TAMI is a non-commercial instrument that has been made available to all state assessment directors and testing companies. Assessment researchers have used the ARM to conduct accessibility reviews of state assessment items for several state departments of education.



[1] "EU disability strategy 2010-20: access and rights" (http://ec.europa.eu/news/justice/101115_en.htm). European Commission. Retrieved November 12, 2012.
[2] "European Accessibility Act proposed for 2012" (http://www.eurocities.eu/eurocities/news/European-Accessibility-Act-proposed-for-2012-WSPO-8SMHJQ). EUROCITIES. Retrieved November 12, 2012.
[3] "Disability Employment Resources by Topic" (http://www.dol.gov/odep/#.ULk_D47R3zI). U.S. Department of Labor - Office of Disability Employment Policy. Retrieved November 30, 2012.
[4] "Trapped Between Ableism And Neoliberalism: Critical Reflections On Disability And Employment In India" (http://dsq-sds.org/article/view/3235/3109). Disability Studies Quarterly 32 (3). 2012. Retrieved November 30, 2012.
[5] Geisen, Thomas, and Henry George Harder (2011). Disability Management and Workplace Integration: International Research Findings. Gower Publishing. p. 165. ISBN 9781409418887.
[6] "ADA Specifications for Wheelchair use" (http://www.modular-wheelchair-ramps.com/Modular_Ramps/ADA_Modular_Ramp_Specs.aspx). Retrieved February 2012.
[7] http://www.washington.edu/doit/Brochures/Programs/equal_conf.html
[8] Dimond, Bridget C. (2009). Legal Aspects of Physiotherapy. John Wiley & Sons. p. 263. ISBN 9781405176156.
[9] Dimond, Bridget C. (2011). Legal Aspects of Occupational Therapy. John Wiley & Sons. ISBN 9781444348163.
[10] Harper, Laura, Tony Mudd and Paul Whitfield (2002). Rough Guide to New Zealand 3. Rough Guides. p. 69. ISBN 9781858288963.
[11] Dodd, Jan (2004). Rough Guide to the Dordogne & the Lot 2. Rough Guides. p. 57. ISBN 9781843532484.
[12] Disability Rights Commission (2004). Disability Discrimination Act 1995: Code of Practice; Employment and Occupation. The Stationery Office. p. 5. ISBN 9780117034198.
[13] Office of the Deputy Prime Minister, Social Exclusion Unit: "Making the Connections: Final Report on Transport and Social Exclusion" (http://www.cabinetoffice.gov.uk/media/cabinetoffice/social_exclusion_task_force/assets/publications_1997_to_2006/making_transport_2003.pdf). February 2003.
[14] Department of Transport & Transport Scotland: "Accessible Train and Station Design for Disabled People: A Code of Practice" (http://www.dft.gov.uk/transportforyou/access/rail/railstations/accessiblestationdesigns/cop.pdf). July 2008.
[15] Fange et al. (2006). "Occupational therapy research on assistive technology and physical environmental issues: A literature review". Canadian Journal of Occupational Therapy.
[16] Fange and Iwarsson (2005). "Changes in accessibility and usability in housing: an exploration of the housing adaptation process". Occupational Therapy International.
[17] Fange and Iwarsson (2003). "Accessibility and usability in housing: construct validity and implications for research and practice". Disability and Rehabilitation.
[18] "Better Web Browsing: Tips for Customizing Your Computer" (http://www.w3.org/WAI/users/browsing.html). World Wide Web Consortium.
[19] "Speech and Communication Disorders" (http://health.nih.gov/topic/SpeechCommunicationDisorders). National Institutes of Health.
[20] "Hearing Disorders and Deafness" (http://www.nlm.nih.gov/medlineplus/hearingdisordersanddeafness.html). National Library of Medicine.
[21] "Visual Impairment and Blindness" (http://www.nlm.nih.gov/medlineplus/visionimpairmentandblindness.html). National Library of Medicine.
[22] "Pre-employment and periodical health examinations, job analysis and placement of workers" (1955). Bull. World Health Organ. 13 (4): 495-503. PMC 2538128. PMID 13276805.
[23] "Voice Operated Wheelchair" (April 1977). Arch Phys Med Rehabil 58 (4): 169-75. PMID 849131.
[24] http://www.yenra.com/refreshable-tactile-display/
[25] "Refreshable Tactile Display" (http://www.yenra.com/refreshable-tactile-display/). Yenra.
[26] http://current.com/16cvu4c
[27] "Touch Sight Camera for the Blind" (http://current.com/16cvu4c). Current TV.
[28] WAI Resources on Introducing Web Accessibility (http://www.w3.org/WAI/gettingstarted/Overview.html)
[29] Section 508: 508 Training (http://www.section508.gov/index.cfm?FuseAction=Content&ID=5)
[30] http://www.hassellinclusion.com/bs8878/
[31] http://www.equalityhumanrights.com/footer/accessibility-statement/general-web-accessibility-guidance/
[32] http://www.bbc.co.uk/accessibility/
[33] http://www.hassellinclusion.com/accessibility/
[34] "Peabody College of Education and Human Development | Vanderbilt University" (http://peabody.vanderbilt.edu/tami.xml). 2012-07-30. Retrieved 2012-08-13.



External links
The Center for Universal Design
The Center for Universal Design in Education


Web design
Web design encompasses many different skills and disciplines in the production and maintenance of websites.[1] The different areas of web design include web graphic design; interface design; authoring, including standardised code and proprietary software; user experience design; and search engine optimization. Often many individuals will work in teams covering different aspects of the design process, although some designers will cover them all.[2] The term web design is normally used to describe the design process relating to the front-end (client side) design of a website including writing mark up, but this is a grey area as this is also covered by web development. Web designers are expected to have an awareness of usability and if their role involves creating mark up then they are also expected to be up to date with web accessibility guidelines.

Although web design has a fairly recent history, it can be linked to other areas such as graphic design. However, web design can also be seen from a technological standpoint. It has become a large part of people's everyday lives. It is hard to imagine the Internet without animated graphics, different styles of typography, backgrounds and music.

The start of the web and web design

In 1989, whilst working at CERN, Tim Berners-Lee proposed to create a global hypertext project, which later became known as the World Wide Web. Between 1991 and 1993 the World Wide Web was born. Text-only pages could be viewed using a simple line-mode browser.[3] In 1993 Marc Andreessen and Eric Bina created the Mosaic browser. At the time there were multiple browsers, but the majority of them were Unix-based and naturally text heavy. There had been no integrated approach to graphical design elements such as images or sounds. The Mosaic browser broke this mould.[4] The W3C was created in October 1994, to "lead the World Wide Web to its full potential by developing common protocols that promote its evolution and ensure its interoperability."[5] This discouraged any one company from monopolizing a proprietary browser and programming language, which could have altered the effect of the World Wide Web as a whole. The W3C continues to set standards, which can today be seen with JavaScript. In 1994 Andreessen formed Mosaic Communications Corp., which later became known as Netscape Communications and released the Netscape 0.9 browser. Netscape created its own HTML tags without regard to the traditional standards process. For example, Netscape 1.1 included tags for changing background colours and formatting text with tables on web pages. Between 1996 and 1999 the browser wars began, as Microsoft and Netscape battled for ultimate browser dominance. During this time there were many new technologies in the field, notably Cascading Style Sheets, JavaScript, and Dynamic HTML.
On the whole, the browser competition led to many positive creations and helped web design evolve at a rapid pace.[6]

Evolution of web design

In 1996, Microsoft released its first competitive browser, complete with its own features and tags. It was also the first browser to support style sheets, which at the time was seen as an obscure authoring technique.[6] The HTML markup for tables was originally intended for displaying tabular data, but designers quickly realized the potential of using HTML tables for creating the complex, multi-column layouts that were otherwise not possible. At this time, as design and good aesthetics seemed to take precedence over good mark-up structure, little attention was paid to semantics and web accessibility. HTML sites were limited in their design options, even more so with earlier versions of HTML. To create complex designs, many web designers had to use complicated table structures or even blank spacer .GIF images to stop empty table cells from collapsing.[7] CSS was introduced in December 1996 by the W3C to support presentation and layout; this allowed HTML code to be semantic rather than both semantic and presentational, and improved web accessibility (see tableless web design). In 1996 Flash (originally known as FutureSplash) was developed. At the time it offered only a very simple layout, basic tools and a timeline, but it enabled web designers to go beyond what HTML could do at the time. It has since progressed to be very powerful, capable of producing entire sites.[7]

End of the first browser wars

During 1998 Netscape released the Netscape Communicator code under an open source licence, enabling thousands of developers to participate in improving the software. However, they decided to stop and start again from the beginning, which guided the development of the open source browser and soon expanded to a complete application platform.[6] The Web Standards Project was formed, and promoted browser compliance with HTML and CSS standards by creating the Acid1, Acid2, and Acid3 tests. 2000 was a big year for Microsoft.
Internet Explorer had been released for the Mac; this was significant as it was the first browser to fully support HTML 4.01 and CSS 1, raising the bar in terms of standards compliance. It was also the first browser to fully support the PNG image format.[6] During this time Netscape was sold to AOL, and this was seen as Netscape's official loss to Microsoft in the browser wars.[6]


Since the start of the 21st century the web has become more and more integrated into people's lives, and as this has happened the technology of the web has also moved on. There have also been significant changes in the way people use and access the web, and this has changed how sites are designed.

The Modern Browsers

Since the end of the browser wars, new browsers have come onto the scene. Many of these are open source, meaning that they tend to have faster development and to be more supportive of new standards. The new options are considered by many to be better than Microsoft's Internet Explorer.

New Standards

The W3C has released new standards for HTML (HTML5) and CSS (CSS3), as well as new JavaScript APIs, each as a new but individual standard. While the term HTML5 strictly refers only to the new version of HTML and some of the JavaScript APIs, it has become common to use it to refer to the entire suite of new standards (HTML5, CSS3 and JavaScript).

Tools and technologies

Web designers use a variety of different tools depending on what part of the production process they are involved in. These tools are updated over time by newer standards and software but the principles behind them remain the same. Web graphic designers use vector and raster graphics packages for creating web formatted imagery or design prototypes. Technologies used for creating websites include standardised mark up which could be hand coded or

generated by WYSIWYG editing software. There is also proprietary software based on plug-ins that bypasses the client's browser version; these are often WYSIWYG, but with the option of using the software's scripting language. Search engine optimisation tools may be used to check search engine ranking and suggest improvements. Other tools web designers might use include mark up validators[8] and other testing tools for usability and accessibility, to ensure their web sites meet web accessibility guidelines.[9]


Skills and techniques

Usually a successful website has only a few typefaces of a similar style, rather than a wide range; good design incorporates a few similar typefaces instead of many. Preferably a website should use either sans-serif or serif typefaces, not a combination of the two. Most browsers recognize a specific number of safe fonts, which designers mainly use in order to avoid complications. Font downloading was later included in the CSS3 fonts module, and has since been implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5. This has subsequently increased interest in web typography, as well as the usage of font downloading.[10] Most layouts on a site incorporate white space to break the text up into paragraphs, and also avoid centre-aligned text.[11]
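Font downloading in the CSS3 fonts module works through the @font-face rule, which names a font family and points the browser at a font file to fetch. A minimal sketch (the font name and file path here are invented for the example):

```css
/* Hypothetical downloadable font: family name and file path are examples */
@font-face {
  font-family: "ExampleSans";
  src: url("/fonts/example-sans.woff") format("woff");
}

/* Fall back to safe fonts if the download fails or is unsupported */
body {
  font-family: "ExampleSans", Helvetica, Arial, sans-serif;
}
```

Listing safe fonts after the downloadable one preserves the fallback behaviour the paragraph above describes for browsers without font-downloading support.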

Page layout
Web pages should be well laid out to improve navigation for the user, and the site's page layout should remain consistent across different pages.[12] When constructing sites, it is important to consider page width, as this is vital for aligning objects and in layout design. The most popular websites generally have a width close to 1024 pixels. Most pages are also centre-aligned, to make objects look more aesthetically pleasing on larger screens.[13] Fluid layouts developed around 2000 as a replacement for HTML-table-based layouts and as a rejection of grid-based design, both as a page layout design principle and as a coding technique, but were very slow to be adopted.[14] The axiomatic assumption is that readers will have screen devices, or windows thereon, of different sizes, and that there is nothing the page designer can do to change this. Accordingly, a design should be broken down into units (sidebars, content blocks, advert areas, navigation areas) that are sent to the browser and fitted into the display window by the browser as best it can. As the browser knows the details of the reader's screen (window size, font size relative to window, etc.), it can do a better job of this than a presumptive designer. Although such a display may often change the relative position of major content units (sidebars may be displaced below body text rather than beside it), this is usually a better and particularly a more usable display than a compromise attempt at a hard-coded grid that simply doesn't fit the device window. In particular, the relative position of content blocks may change while each block itself is less affected. Usability is also better, particularly through the avoidance of horizontal scrolling. Responsive web design is a newer approach, based on CSS3 and a deeper level of per-device specification within the page's stylesheet, through an enhanced use of the CSS @media rule.
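The @media mechanism mentioned above lets a single stylesheet serve different layout rules to different window sizes. A minimal sketch (the class names and the 600px breakpoint are arbitrary choices for the example):

```css
/* Two-column layout on wide screens */
.sidebar { float: left; width: 25%; }
.content { float: left; width: 75%; }

/* On narrow screens, let both blocks stack at full width */
@media (max-width: 600px) {
  .sidebar, .content { float: none; width: 100%; }
}
```

This is the responsive counterpart of the fluid-layout idea: the content units are the same, but the browser applies whichever rules match the current window.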

Web design


Quality of code
When creating a site it is good practice to conform to standards. This is usually done via a description specifying what each element is doing. Not conforming to standards will not necessarily make a website unusable or error prone, but standards can relate to the correct layout of pages for readability as well as making sure coded elements are closed appropriately. This includes errors in code, better layout for code, and making sure IDs and classes are identified properly. Poorly coded pages are sometimes colloquially called tag soup. Validating via the W3C[8] can only be done when a correct DOCTYPE declaration is made, which is used to highlight errors in code. The system identifies the errors and the areas that do not conform to web design standards, and this information can then be corrected by the user.[15]
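The DOCTYPE declaration mentioned above is the first thing a validator looks at. A minimal well-formed document might look like the sketch below (the HTML5 doctype is shown; older pages would instead declare an HTML 4.01 or XHTML doctype, and the title and content are placeholders):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Example page</title>
  </head>
  <body>
    <p>Hello, world.</p>
  </body>
</html>
```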

Visual design
Good visual design on a website identifies and works for its target market. This can be an age group or a particular strand of culture, so the designer should understand the trends of the audience. Designers should also understand the type of website they are designing; a business website, for example, should not be designed in the same way as a social media site. Designers should also understand the owner or business the site is representing, to make sure they are portrayed favourably. The aesthetics or overall design of a site should not clash with the content, making it easier for the user to navigate and find the desired information or products.[16]

User experience design

For a user to understand a website they must be able to understand how the website works; this affects their experience. User experience is related to layout, clear instructions and labelling on a website. The user must understand how they can interact on a site. In relation to continued use, a user must perceive the usefulness of that website if they are to continue using it. With users who are skilled and well versed in website use, this influence relates directly to how they perceive websites, which encourages further use. Users with less experience, by contrast, are less likely to see the advantages or usefulness of websites. Design should therefore focus on more universal use and ease of access, to accommodate as many users as possible regardless of skill.[17]

There are two primary jobs involved in creating a website: the web designer and the web developer, who often work closely together.[18] The web designers are responsible for the visual aspect, which includes the layout, colouring and typography of a web page. A web designer will also have a working knowledge of a variety of languages such as HTML, CSS, JavaScript, PHP and Flash, although the extent of this knowledge will differ from one web designer to another. Particularly in smaller organizations, one person will need the necessary skills for designing and programming the full web page, whilst larger organizations may have a web designer responsible for the visual aspect alone.[19] Further jobs which may become involved during the creation of a website include:
Graphic designers, to create visuals for the site such as logos, layouts and buttons
Internet marketing specialists, to help maintain web presence through strategic solutions for targeting viewers to the site, using marketing and promotional techniques on the internet
SEO writers, to research and recommend the correct words to be incorporated into a particular website and make the website more easily found on numerous search engines
Internet copywriters, to create the written content of the page to appeal to the targeted viewers of the site[2]
User experience (UX) designers, who incorporate aspects of user-focused design considerations, including information architecture, user-centred design, user testing, interaction design, and occasionally visual design[20]



[1] Pleasanton Web Design. "Web Design Definition" (http://pleasantonwebdesignblog.com/2007/01/web-design-definition.html). Retrieved 2012-03-17.
[2] Lester, Georgina. "Different jobs and responsibilities of various people involved in creating a website" (http://www.arts-wales.co.uk/index.php?option=com_content&task=view&id=152&Itemid=48). Arts Wales UK. Retrieved 2012-03-17.
[3] http://www.w3.org/People/Berners-Lee/Longer.html. Retrieved 2012-03-16.
[4] "Mosaic Browser" (http://www.techopedia.com/images/pdfs/history-of-the-internet.pdf). Retrieved 2012-03-16.
[5] Zwicky, E.D., Cooper, S. and Chapman, D.B. (2000). Building Internet Firewalls. United States: O'Reilly & Associates. p. 804. ISBN 1-56592-871-7.
[6] Niederst, Jennifer (2006). Web Design In a Nutshell (http://books.google.co.uk/books?id=bdf4vS2n7N8C). United States of America: O'Reilly Media. pp. 12-14. ISBN 0-596-00987-9.
[7] Chapman, Cameron. "The Evolution of Web Design" (http://sixrevisions.com/web_design/the-evolution-of-web-design/). Retrieved 2012-03-17.
[8] "W3C Markup Validation Service" (http://validator.w3.org/).
[9] W3C. "Web Accessibility Initiative (WAI)" (http://www.w3.org/WAI/).
[10] "Web typography" (http://en.wikipedia.org/wiki/Web_typography).
[11] Stone, John. "20 Do's and Don'ts of Effective Web Typography" (http://webdesignledger.com/tips/20-dos-and-donts-of-effective-web-typography). Retrieved 2012-03-19.
[12] Grantastic Designs. "5 Basic Rules of web page design and layout" (http://www.grantasticdesigns.com/5rules.html). Retrieved 2012-03-19.
[13] Iteracy. "Web page size and layout" (http://www.iteracy.com/resources/build-a-better-website/size-and-layout-of-a-web-page/). Retrieved 2012-03-19.
[14] <table>-based markup and spacer .GIF images
[15] W3C QA. "My Web site is standard! And yours?" (http://www.w3.org/QA/2002/04/Web-Quality). Retrieved 2012-03-21.
[16] Thorlacius, Lisbeth (2007). "The Role of Aesthetics in Web Design" (http://www.carlosmoreno.info/upn/2012/PDF-1.pdf). Nordicom Review (28): 63-76. Retrieved 2012-03-21.
[17] Castañeda, J.A., Francisco Muñoz-Leiva and Teodoro Luque. "Web Acceptance Model (WAM): Moderating effects of user experience" (http://ac.els-cdn.com/S0378720607000286/1-s2.0-S0378720607000286-main.pdf). Information & Management 44: 384-396. Retrieved 2012-03-21.
[18] Oleksy, Walter (2001). Careers in Web Design (http://books.google.co.uk/books?id=-OJSA5wS7kQC). New York: The Rosen Publishing Group, Inc. pp. 9-11. ISBN 0-8239-3919-9.
[19] "Web Designer" (http://www.myjobsearch.com/careers/web-designer.html). Retrieved 2012-03-19.
[20] Davies, Anthony J. "What is a UX/IA?" (http://www.userexperiencedesigner.co.uk/new-what-is-ux-designer-ia.htm). Retrieved 2012-03-19.

External links
W3C consortium for web standards
Web design and development at the Open Directory Project

Web usability


Web usability is the application of usability in those domains where web browsing can be considered as a general paradigm (or "metaphor") for constructing a GUI.

Web usability is an approach to making web sites easy to use for an end-user, without the requirement that any specialized training be undertaken.[1] The user should be able to intuitively relate the actions she needs to perform on the web page with other interactions she sees in similar contexts, e.g., pressing a button to perform some action. Some broad goals of usability could be:
1. Present the information to the user in a clear and concise way.
2. Give the correct choices to the users in an obvious way.
3. Remove any ambiguity regarding the consequences of an action (e.g. clicking on delete/remove/purchase).
4. Place important items in an appropriate area on a web page or a web application.

As more results of usability research become available, this leads to the development of methodologies for enhancing web-usability.[2]

In the context of eCommerce websites, the meaning of web usability is narrowed down to efficiency: triggering sales and/or performing other transactions valuable to the business. Web usability received renewed attention as many early e-commerce websites started failing in 2000. Whereas fancy graphical design had been regarded as indispensable for a successful e-business application during the emergence of the internet in the 1990s, web usability protagonists said quite the reverse was true. They advocated the KISS principle (keep it simple, stupid), which had proven effective at focusing end-user attention.

[1] Nielsen, Jakob (August 2003). "Usability 101: Introduction to Usability". Alertbox: Current Issues in Web Usability. Retrieved from http://www.useit.com/alertbox/20030825.html
[2] GOSS Interactive (October 2011). "Conducting a website review and implementing results for increased customer engagement and conversions". Retrieved from http://www.gossinteractive.com/community/whitepapers/conducting-a-website-review-and-implementing-results-for-increased-customer-engagement-and-conversions

External links
See also the "External links" section for the Usability article.
Usability basics with a focus on web usability
Evaluating Web Sites for Accessibility: accessibility is a crucial subset of usability for people with disabilities; this W3C/WAI suite includes a section on involving users in testing for accessibility
Usability News, from the Software Usability Research Laboratory at Wichita State University
Usability Professionals' Association, for people practicing and promoting usability

- The Usability Methods Toolbox
- Jakob Nielsen's Alertbox, a bi-weekly column about current issues in web usability


Online books
- The (Usable) Web Style Guide
- User In Your Face, a free online book about user interface design, written in installments

Web accessibility
Web accessibility refers to the inclusive practice of making websites usable by people of all abilities and disabilities. When sites are correctly designed, developed and edited, all users can have equal access to information and functionality.

For example, when a site is coded with semantically meaningful HTML, with textual equivalents provided for images and with links named meaningfully, this helps blind users using text-to-speech software and/or text-to-Braille hardware. When text and images are large and/or enlargeable, it is easier for users with poor sight to read and understand the content. When links are underlined (or otherwise differentiated) as well as coloured, this ensures that color blind users will be able to notice them. When clickable links and areas are large, this helps users who cannot control a mouse with precision. When pages are coded so that users can navigate by means of the keyboard alone, or a single switch access device alone, this helps users who cannot use a mouse or even a standard keyboard. When videos are closed captioned or a sign language version is available, deaf and hard-of-hearing users can understand the video. When flashing effects are avoided or made optional, users prone to seizures caused by these effects are not put at risk. And when content is written in plain language and illustrated with instructional diagrams and animations, users with dyslexia and learning difficulties are better able to understand the content. When sites are correctly built and maintained, all of these users can be accommodated without decreasing the usability of the site for non-disabled users.

The needs that Web accessibility aims to address include:
- Visual: visual impairments including blindness, various common types of low vision and poor eyesight, and various types of color blindness;
- Motor/Mobility: difficulty or inability to use the hands, including tremors, muscle slowness, loss of fine muscle control, etc., due to conditions such as Parkinson's disease, muscular dystrophy, cerebral palsy or stroke;
- Auditory: deafness or hearing impairments, including individuals who are hard of hearing;
- Seizures: photoepileptic seizures caused by visual strobe or flashing effects;
- Cognitive/Intellectual: developmental disabilities, learning disabilities (dyslexia, dyscalculia, etc.), and cognitive disabilities of various origins, affecting memory, attention, developmental "maturity," problem-solving and logic skills, etc.
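The practices described above translate directly into markup. The following is a minimal illustrative sketch (all file names, ids and text are hypothetical, not taken from any guideline) showing semantic structure, a textual equivalent for an image, a meaningfully named link, and an explicitly labelled form control:

```html
<!-- Semantic elements let screen readers build a navigable page outline -->
<main>
  <h1>Quarterly report</h1>
  <!-- alt text gives blind users a textual equivalent of the image -->
  <img src="sales-chart.png" alt="Bar chart: sales rose 12% from Q1 to Q2">
  <!-- link text is meaningful on its own, unlike "click here" -->
  <p><a href="report.pdf">Download the full quarterly report (PDF)</a></p>
  <!-- an explicit label ties the text to the control for assistive technology -->
  <label for="email">Email address</label>
  <input type="email" id="email" name="email">
</main>
```

Because native elements such as `<a>`, `<label>` and `<input>` carry built-in semantics and keyboard behaviour, using them (rather than generic containers) addresses several of the needs above at once.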

Web accessibility


Assistive technologies used for web browsing

Individuals living with a disability use assistive technologies such as the following to enable and assist web browsing:
- Screen reader software, which can read out, using synthesized speech, either selected elements of what is being displayed on the monitor (helpful for users with reading or learning difficulties), or everything that is happening on the computer (used by blind and vision impaired users).
- Braille terminals, consisting of a refreshable Braille display, which renders text as Braille characters (usually by means of raising pegs through holes in a flat surface), and either a QWERTY or Braille keyboard.
- Screen magnification software, which enlarges what is displayed on the computer monitor, making it easier to read for vision impaired users.
- Speech recognition software, which can accept spoken commands to the computer or turn dictation into grammatically correct text - useful for those who have difficulty using a mouse or a keyboard.
- Keyboard overlays, which can make typing easier and more accurate for those who have motor control difficulties.

Guidelines on accessible web design

Web Content Accessibility Guidelines
In 1999 the Web Accessibility Initiative (WAI), a project of the World Wide Web Consortium (W3C), published the Web Content Accessibility Guidelines, WCAG 1.0. In recent years, these have been widely accepted as the definitive guidelines on how to create accessible websites. On 11 December 2008, the WAI released WCAG 2.0 as a Recommendation. WCAG 2.0 aims to be up to date and more technology-neutral.

Criticism of WAI guidelines
For a general criticism of the W3C process, read Putting the user at the heart of the W3C process [1]. There was a formal objection, headed by Lisa Seeman and signed by 40 organisations and people, to WCAG's original claim that WCAG 2.0 would address requirements for people with learning disabilities and cognitive limitations.[2] In articles such as WCAG 2.0: The new W3C guidelines evaluated [3], To Hell with WCAG 2.0 [4] and Testability Costs Too Much [5], the WAI has been criticised for allowing WCAG 1.0 to get increasingly out of step with today's technologies and techniques for creating and consuming web content, for the slow pace of development of WCAG 2.0, for making the new guidelines difficult to navigate and understand, and for other argued failings.

Other guidelines
Canada
Canada has the Common Look and Feel Standards [6], requiring federal government internet websites to meet Web Content Accessibility Guidelines (WCAG) 1.0 Checkpoint Priorities 1 and 2 (Double-A conformance level). The standards have existed since 2000 and were updated in 2007.

Philippines
As part of the Web Accessibility Initiatives in the Philippines, the government, through the National Council for the Welfare of Disabled Persons (NCWDP) board, approved the recommendation of forming an ad hoc or core group of webmasters to help implement the Biwako Millennium Framework set by UNESCAP. The Philippines was also the venue of the Interregional Seminar and Regional Demonstration Workshop on Accessible Information and Communications Technologies (ICT) to Persons with Disabilities, at which eleven countries from Asia-Pacific were represented. The Manila Accessible Information and Communications Technologies Design Recommendations was drafted and adopted in 2003.

Spain
In Spain, UNE 139803 is the norm entrusted to regulate web accessibility. This standard is based on the Web Content Accessibility Guidelines 1.0.[7]

Sweden
In Sweden, Verva, the Swedish Administrative Development Agency, is responsible for a set of guidelines for Swedish public sector web sites. Through the guidelines, web accessibility is presented as an integral part of the overall development process, not as a separate issue. The Swedish guidelines contain criteria covering the entire lifecycle of a website, from its conception to the publication of live web content. These criteria address several areas, including:
- accessibility
- usability
- web standards
- privacy issues
- information architecture
- developing content for the web
- Content Management System (CMS) / authoring tool selection
- development of web content for mobile devices


An English translation was released in April 2008: Swedish National Guidelines for Public Sector Websites [8]. The translation is based on the latest version of the Guidelines, which was released in 2006.[9]

United Kingdom
In December 2010, the BSI (British Standards Institution) released the standard BS 8878:2010 Web accessibility. Code of practice. This standard effectively supersedes PAS 78 (published 2006). PAS 78, produced by the Disability Rights Commission and the British Standards Institution, provided guidance to organisations on how to commission an accessible website from a design agency. It describes what is expected of websites to comply with the UK Disability Discrimination Act 1995 (DDA), making websites accessible to and usable by disabled people.

BS 8878:2010 [10] Web accessibility - Code of Practice has been designed to introduce non-technical professionals to improved accessibility, usability and user experience for disabled and older people. It will be especially beneficial to anyone new to this subject, as it gives guidance on process rather than on technical and design issues. BS 8878 is consistent with the Equality Act 2010 [11] and is referenced in the UK government's e-Accessibility Action Plan as the basis of updated advice on developing accessible online services. It includes recommendations for:
- involving disabled people in the development process and using automated tools to assist with accessibility testing;
- the management of the guidance and process for upholding existing accessibility guidelines and specifications.

BS 8878 is intended for anyone responsible for the policies covering web product creation within their organization, and for governance against those policies (e.g. Chief Executive Officers, Managing Directors, Headteachers, ICT managers). It would also assist:
- people responsible for promoting and supporting equality and inclusion initiatives within an organization (e.g. Human Resources (HR) managers or those responsible for Corporate Social Responsibility (CSR));
- procurement managers (e.g. those responsible for procuring web products, or the tools to create them, such as content production systems or virtual learning environments);
- web production teams (e.g. product owners, project managers, technical architects, web developers, designers, usability and accessibility engineers, test engineers);
- people with responsibility for creating or shaping online content (e.g. website editors, marketing managers, web content authors);
- people who create web production, testing or validation tools;
- people who write and deliver training courses in web production, design or coding.

Other audiences that might also be interested in this British Standard include:
- assistive technology creators, vendors and trainers who need insight into how their technologies affect the production of accessible web products;
- those disabled and older people whose web accessibility needs the Standard aims to support and represent.

Its lead author, Jonathan Hassell, has created a summary of BS 8878 [30] to help organisations better understand how the standard can help them embed accessibility and inclusive design in their business-as-usual processes.

Japan
Web Content Accessibility Guidelines in Japan were established in 2004 as JIS (Japanese Industrial Standards) X 8341-3. JIS X 8341-3 was revised in 2010 to adopt WCAG 2.0. The new version has the same four principles, 12 guidelines, and 61 success criteria as WCAG 2.0.[12]


Essential components of web accessibility

The accessibility of websites relies on the cooperation of eight components:[13]
1. the website itself - natural information (text, images and sound) and the markup code that defines its structure and presentation
2. user agents, such as web browsers and media players
3. assistive technologies, such as screen readers and input devices used in place of the conventional keyboard and mouse
4. users' knowledge and experience using the web
5. developers
6. authoring tools
7. evaluation tools
8. a defined web accessibility standard, or a policy for your organization (against which to evaluate accessibility)

These components interact with each other to create an environment that is accessible to people with disabilities: "Web developers usually use authoring tools and evaluation tools to create Web content. People ('users') use Web browsers, media players, assistive technologies or other 'user agents' to get and interact with the content."[13]

Web accessibility


Guidelines for different components

Authoring Tool Accessibility Guidelines (ATAG)
ATAG[14] contains 28 checkpoints that provide guidance on:
- producing accessible output that meets standards and guidelines
- prompting the content author for accessibility-related information
- providing ways of checking and correcting inaccessible content
- integrating accessibility in the overall look and feel
- making the authoring tool itself accessible to people with disabilities

Web Content Accessibility Guidelines (WCAG)

WCAG 1.0: 14 guidelines that are general principles of accessible design
WCAG 2.0: 12 guidelines organized under four principles

User Agent Accessibility Guidelines (UAAG)

UAAG[15] contains a comprehensive set of checkpoints that cover:
- access to all content
- user control over how content is rendered
- user control over the user interface
- standard programming interfaces

Legally required web accessibility

A growing number of countries around the world have introduced legislation which either directly addresses the need for websites and other forms of communication to be accessible to people with disabilities, or which addresses the more general requirement for people with disabilities not to be discriminated against.

Australia
In 2000, an Australian blind man won a court case against the Sydney Organizing Committee of the Olympic Games (SOCOG). This was the first successful case under the Disability Discrimination Act 1992: SOCOG had failed to make its official Sydney Olympic Games website adequately accessible to blind users. The Human Rights and Equal Opportunity Commission (HREOC) also published World Wide Web Access: Disability Discrimination Act Advisory Notes [16]. All governments in Australia also have policies and guidelines that require accessible public websites; Vision Australia maintains a complete list of Australian web accessibility policies.

Ireland
In Ireland, the Disability Act 2005 [17] was supplemented in July 2006 by the National Disability Authority's Code of Practice on Accessible Public Services [18], a practical guide to help all Government Departments and nearly 500 public bodies comply with their obligations under the Disability Act 2005.

United Kingdom
In the UK, the Equality Act 2010 does not refer explicitly to website accessibility, but makes it illegal to discriminate against people with disabilities. The Act applies to anyone providing a service in the public, private and voluntary sectors. The Code of Practice: Rights of Access - Goods, Facilities, Services and Premises document,[19] published by the government's Equality and Human Rights Commission to accompany the Act, refers explicitly to websites as one of the "services to the public" which should be considered covered by the Act.

Web accessibility


Website accessibility audits

A growing number of organizations, companies and consultants offer website accessibility audits. These audits, a type of system testing, identify accessibility problems that exist within a website and provide advice and guidance on the steps needed to correct them.

A range of methods are used to audit websites for accessibility:
- Automated tools are available which can identify some of the problems that are present. Depending on the tool, results may vary widely, making it difficult to compare test results.[20]
- Expert technical reviewers, knowledgeable in web design technologies and accessibility, can review a representative selection of pages and provide detailed feedback and advice based on their findings.
- User testing, usually overseen by technical experts, involves setting tasks for ordinary users to carry out on the website and reviewing the problems these users encounter as they try to carry out the tasks.

Each of these methods has its strengths and weaknesses:
- Automated tools can process many pages in a relatively short time, but can only identify some of the accessibility problems that might be present in the website.
- Technical expert review will identify many of the problems that exist, but the process is time-consuming, and many websites are too large for a person to review every page.
- User testing combines elements of usability and accessibility testing, and is valuable for identifying problems that might otherwise be overlooked, but needs to be used knowledgeably to avoid the risk of basing design decisions on one user's preferences.

Ideally, a combination of methods should be used to assess the accessibility of a website.

Accessible Web applications and WAI-ARIA

For a Web page to be accessible, all important semantics about the page's functionality must be available so that assistive technology can understand and process the content and adapt it for the user. However, as content becomes more and more complex, the standard HTML tags and attributes become inadequate for conveying semantics reliably. Modern Web applications often apply scripts to elements to control their functionality and to enable them to act as a control or other dynamic component. These custom components or widgets do not provide a way to convey semantic information to the user agent. WAI-ARIA (Accessible Rich Internet Applications) is a specification[21] published by the World Wide Web Consortium that specifies how to increase the accessibility of dynamic content and user interface components developed with Ajax, HTML, JavaScript and related technologies. ARIA enables accessibility by allowing the author to provide the semantics that fully describe a component's supported behaviour. It also allows each element to expose its current states and properties, and its relationships to other elements. Accessibility problems with focus and tab order are also corrected.
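As an illustrative sketch of this idea (the widget, its id and its text are hypothetical, not taken from the specification), a scripted `<div>` acting as a toggle button can expose its role and state to assistive technology through ARIA attributes, updated by script and operable from the keyboard:

```html
<!-- Hypothetical custom toggle: role and aria-pressed expose semantics to assistive technology -->
<div id="mute" role="button" tabindex="0" aria-pressed="false">Mute</div>
<script>
  var btn = document.getElementById('mute');
  function toggle() {
    var pressed = btn.getAttribute('aria-pressed') === 'true';
    // the state change is announced by screen readers that track aria-pressed
    btn.setAttribute('aria-pressed', String(!pressed));
  }
  btn.addEventListener('click', toggle);
  // tabindex makes the div focusable; Enter and Space mimic native button behaviour
  btn.addEventListener('keydown', function (e) {
    if (e.key === 'Enter' || e.key === ' ') { e.preventDefault(); toggle(); }
  });
</script>
```

Without `role`, `tabindex` and `aria-pressed`, a user agent would see only a generic, unfocusable `<div>`; with them, the element's purpose and current state become available to the accessibility tree.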

[1] http://wiki.cetis.ac.uk/Accessibility_SIG_Meeting_24th_July_2007#Putting_the_User_at_the_Heart_of_the_W3C_Process
[2] Lisa Seeman (20 June 2006). "Formal Objection to WCAG 2.0" (http://lists.w3.org/Archives/Public/w3c-wai-gl/2006AprJun/0368.html). W3C Public Mailing List Archives. Retrieved 2012-12-16.
[3] http://www.webcredible.co.uk/user-friendly-resources/web-accessibility/wcag-guidelines-20.shtml
[4] http://alistapart.com/articles/tohellwithwcag2
[5] http://alistapart.com/articles/testability
[6] http://www.tbs-sct.gc.ca/clf2-nsi2/
[7] "La norma UNE 139803:2004 constituye la base de la certificación en Accesibilidad Web." (http://www.inteco.es/Accesibilidad/difusion/Normativa/Descarga/DescargaUNE_139803) (in Spanish). INTECO. Retrieved 2012-12-16.
[8] http://www.eutveckling.se/static/doc/swedish-guidelines-public-sector-websites.pdf
[9] Peter Krantz (2006). "New Version of Guidelines for Swedish Public Sector Web Sites" (http://www.standards-schmandards.com/2006/swe-guidelines/). Retrieved 2012-12-18.

Web accessibility
[10] http://shop.bsigroup.com/en/ProductDetail/?pid=000000000030180388
[11] http://www.legislation.gov.uk/ukpga/2010/15/contents
[12] "JIS X 8341-3" (http://ja.wikipedia.org/wiki/#.E3.81.9D.E3.81.AE.E4.BB.96.EF.BC.88.E5.85.89.E5.AD.A6.E6.96.87.E5.AD.97.E8.AA.8D.E8.AD.98_.28OCR.29_.E3.81.AA.E3.81.A9.EF.BC.89) (in Japanese). Wikipedia. Retrieved 2012-12-18.
[13] Shawn Lawton Henry (August 2005). "Essential Components of Web Accessibility" (http://www.w3.org/WAI/intro/components.php). World Wide Web Consortium. Retrieved 2012-12-18.
[14] Shawn Lawton Henry (December 2008). "Authoring Tool Accessibility Guidelines (ATAG) Overview" (http://www.w3.org/WAI/intro/atag.php). World Wide Web Consortium. Retrieved 2012-12-18.
[15] Shawn Lawton Henry (July 2005). "User Agent Accessibility Guidelines (UAAG) Overview" (http://www.w3.org/WAI/intro/uaag.php). World Wide Web Consortium. Retrieved 2012-12-18.
[16] http://www.hreoc.gov.au/disability_rights/standards/www_3/www_3.html
[17] http://www.oireachtas.ie/viewdoc.asp?DocID=4338&CatID=87
[18] http://www.nda.ie/CntMgmtNew.nsf/D587E497372667FC80256C200073124D/9EE7337F7BB12066802571B5004E0A71?OpenDocument
[19] "A guide to good practice in commissioning accessible websites" (http://www.equalityhumanrights.com/uploaded_files/pas78.pdf). Equality and Human Rights Commission. Retrieved 2012-12-18.
[20] Krantz, Peter. "Pitfalls of Web Accessibility Evaluation Tools" (http://www.standards-schmandards.com/2009/pitfalls-of-web-accessibility-evaluation-tools/). Retrieved 23 December 2012.
[21] "Accessible Rich Internet Applications (WAI-ARIA) 1.0" (http://www.w3.org/WAI/PF/aria/). World Wide Web Consortium. 12 December 2012. Retrieved 2012-12-18.


Further reading
Clark, Joe (2003). Building Accessible Websites. New Riders Press. ISBN 0-7357-1150-X.
Thatcher, Jim; Waddell, Cynthia; Henry, Shawn; Swierenga, Sarah; Urban, Mark; Burks, Michael; Bohman, Paul (2003). Constructing Accessible Web Sites (Reprint ed.). Apress (previously by Glasshaus). ISBN 1-59059-148-8.
Slatin, John; Rush, Sharron (2002). Maximum Accessibility: Making Your Web Site More Usable for Everyone. Addison-Wesley Professional. ISBN 0-201-77422-4.
Paciello, Michael (2000). Web Accessibility for People with Disabilities. CMP Books. ISBN 1-929629-08-7.
Bangeman, Eric (2006-09-10). "Judge: ADA lawsuit against Target can proceed". Ars Technica. Retrieved 2006-09-26.

External links
Standards and guidelines
- The main page for the W3C's Web Accessibility Initiative (WAI) (http://www.w3.org/WAI/)
- The W3C's WAI Web Content Accessibility Guidelines 2.0 - but read the WCAG Overview first
- BS 8878:2010 Web accessibility - Code of Practice (http://shop.bsigroup.com/en/ProductDetail/?pid=000000000030180388) - but read the summary of BS 8878 first
- Equality and Human Rights Commission: PAS 78: a guide to good practice in commissioning accessible websites (which BS 8878 supersedes)
- Wikipedia accessibility guidelines
- University of Illinois iCITA HTML Accessibility Best Practices

- New York State Mandatory Technology Standards for Accessibility of State Agency Web-Based Intranet and Internet Information and Applications - standards derived from both U.S. Section 508 and the WAI's WCAG 1.0, required for NYS agency web sites
- Guidelines for Complying with Section 508 of the Rehabilitation Act, and the Section 508 Homepage
- Unified Web Evaluation Methodology 1.2
- A website reviewing the WCAG 2.0
- An article with many informative comments about Web guidelines and WCAG 2.0
- E-accessibility
- eSSENTIAL Accessibility for Australian online shoppers with disabilities
- Open-source JavaScript-based rules, examples and test suites for implementing WCAG 2.0


Government regulations
- Searchable index of government web guidelines
- UK Equality Act 2010, which supersedes the UK Disability Discrimination Act
- The Americans with Disabilities Act of 1990 (ADA) does not require websites to be accessible; however, since June 2010 the U.S. Department of Justice has been considering amending the ADA on this particular point
- Section 508 of the Rehabilitation Act - requires U.S. government web sites to be accessible
- New York State Technology Policy P04-002 - requires all State entity web sites to be accessible according to NYS standards, which are a hybrid of Section 508 and the W3C's WCAG 1.0; updates Statewide Technology Policy 99-3, which required sites to conform to the W3C WCAG 1.0 Priority 1 checkpoints only
- Disability Act 2005 (Ireland)
- Common Look and Feel Standards 2.0 (Canada)
- The Foundation for Information Technology Accessibility (Malta)

Website architecture


Website architecture
Website architecture is an approach to the design and planning of websites which, like architecture itself, involves technical, aesthetic and functional criteria. As in traditional architecture, the focus is properly on the user and on user requirements. This requires particular attention to web content, a business plan, usability, interaction design, information architecture and web design. For effective search engine optimization it is necessary to have an appreciation of how a single website relates to the World Wide Web. Since web content planning, design and management come within the scope of design methods, the traditional Vitruvian aims of commodity, firmness and delight can guide the architecture of websites, as they do physical architecture and other design disciplines. Website architecture is coming within the scope of aesthetics and critical theory, and this trend may accelerate with the advent of the semantic web and Web 2.0. Both ideas emphasise the structural aspects of information. Structuralism is an approach to knowledge which has influenced a number of academic disciplines including aesthetics, critical theory and postmodernism. Web 2.0, because it involves user-generated content, directs the website architect's attention to the structural aspects of information.

"Website architecture" has the potential to be a term used for the intellectual discipline of organizing website content. "Web design", by way of contrast, describes the practical tasks, part-graphic and part-technical, of designing and publishing a website. The distinction compares to that between the task of editing a newspaper or magazine and its graphic design and printing. But the link between editorial and production activities is much closer for web publications than for print publications.

"Website architecture" is also used to refer to revising a site's code to improve its appearance, quality and speed. Website architects are often asked to edit code so that a site becomes easier to use and to navigate. Web architecture and design draws on scripting languages such as PHP, AJAX and JavaScript, though simple forms of web architecture may rely only on languages such as HTML, CSS and XML. To take advantage of opportunities in this field, practitioners must keep up with evolving languages, such as the upgrade from HTML to HTML5 and from CSS to CSS3. Web architecture has changed considerably over the years, and support for some media technologies has been dropped in favour of better speed and loading times; a notable example is the replacement of Flash with HTML5 on many websites, which improved both appearance and multimedia handling.

Website design styles

Over the short history of the web, various architectural and artistic styles have developed among different online language, national, social and cultural communities. Such differences in website design styles set European websites apart from North American ones, Taiwanese websites from those originating in Mainland China (marked by a tendency toward pop-up windows activated by left-click), and Japanese websites (marked by the use of bright colors and flashing cute anime characters) from Korean ones (marked by a clean, gray-text-on-white-background, "Apple"-style interface).

Web navigation


Web navigation
Web navigation refers to the process of navigating a network of web resources, and the user interface that is used to do so. A central theme in web design is the development of a web navigation interface that maximizes usability.

Akanda, Muhammed A.K.; German, Daniel M. (2005). "A System of Patterns in Web Navigation" [1]. In Lowe, David; Gaedke, Martin. Web Engineering: 5th International Conference, ICWE 2005, Sydney, Australia, July 27-29, 2005: Proceedings. Birkhäuser. p. 136. ISBN 978-3-540-27996-9.
Kalbach, James (2007). Designing Web Navigation. Worldcat [2]
Tauscher, Linda; Greenberg, Saul (1997). Copyright ACM. [3] Retrieved 23 September 2011.
Pemberton, Steven et al. Copyright held by the author/owner. [4] Retrieved 23 September 2011.
Genest, Aaron. "Looking Ahead: A Comparison of Page Preview Techniques for Goal-Directed Web Navigation" [5][6] Retrieved 23 September 2011.

External links
Sites about usability [7] at UsableWeb

[1] http://books.google.com/books?id=ueg1YMiF3SEC&pg=PA136
[2] http://www.worldcat.org/search?qt=wikipedia&q=isbn%3A9780596528102
[3] http://www.sigchi.org/chi97/proceedings/paper/sg.htm
[4] http://www10.org/cdrom/papers/599/index.html
[5] http://academia.edu/Papers/in/Web_Navigation
[6] http://usask.academia.edu/AaronGenest/Papers/633994/Looking_Ahead_A_Comparison_of_Page_Preview_Techniques_for_Goal-Directed_Web_Navigation
[7] http://usableweb.com/

Web typography


Web typography
Web typography refers to the use of fonts on the World Wide Web. When HTML was first created, font faces and styles were controlled exclusively by the settings of each Web browser. There was no mechanism for individual Web pages to control font display until Netscape introduced the <font> tag in 1995, which was then standardized in the HTML 3.2 specification. However, the font specified by the tag had to be installed on the user's computer, or a fallback font, such as a browser's default sans-serif or monospace font, would be used. The first Cascading Style Sheets specification was published in 1996 and provided the same capabilities.

Web fonts allow Web designers to use fonts that are not installed on the viewer's computer.

The CSS2 specification was released in 1998 and attempted to improve the font selection process by adding font matching, synthesis and download. These techniques did not gain much use, and were removed in the CSS2.1 specification. However, Internet Explorer added support for the font downloading feature in version 4.0, released in 1997.[1] Font downloading was later included in the CSS3 fonts module, and has since been implemented in Safari 3.1, Opera 10 and Mozilla Firefox 3.5. This has subsequently increased interest in Web typography, as well as the usage of font downloading.

In the first CSS specification,[2] authors specified font characteristics via a series of properties:
- font-family
- font-style
- font-variant
- font-weight
- font-size

All fonts were identified solely by name. Beyond the properties mentioned above, designers had no way to style fonts, and no mechanism existed to select fonts which were not present on the client system.
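A rule using these CSS1-era properties might look like the following sketch (the selector, class name and values are illustrative):

```html
<style>
  /* Illustrative CSS1-style rule: every font characteristic set via the listed properties */
  p.lead {
    font-family: Georgia, serif;   /* chosen by name; must exist on the client system */
    font-style: italic;
    font-variant: small-caps;
    font-weight: bold;
    font-size: 14px;
  }
</style>
```

If Georgia is not installed on the client system, nothing in CSS1 could fetch it; the browser simply falls through to its default serif face.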

Web typography


Web-safe fonts
Web-safe fonts are fonts likely to be present on a wide range of computer systems, and used by Web content authors to increase the likelihood that content will be displayed in their chosen font. If a visitor to a Web site does not have the specified font, their browser will attempt to select a similar alternative, based on the author-specified fallback fonts and generic families or it will use font substitution defined in the visitor's operating system.

Microsoft's Core fonts for the Web

In order to ensure that all Web users had a basic set of fonts, Microsoft started the Core fonts for the Web initiative in 1996 (terminated in 2002). The released fonts include Arial, Courier New, Times New Roman, Comic Sans, Impact, Georgia, Trebuchet, Webdings and Verdana, under an EULA that made them freely distributable but also limited some usage rights. Their high penetration rate has made them a staple for Web designers. However, some of these fonts are not included by default in all operating systems. CSS2 attempted to increase the tools available to Web developers by adding font synthesis, improved font matching and the ability to download remote fonts.[3] Some CSS2 font properties were removed in CSS2.1 and later included in CSS3.[4][5]
Since being released under Microsoft's Core fonts for the Web program, Arial, Georgia, and Verdana have become three de facto fonts of the Web.

Fallback fonts
The CSS specification allows for multiple fonts to be listed as fallback fonts.[6] In CSS, the font-family property accepts a list of comma-separated font faces to be used, like so:

font-family: Helvetica, "Nimbus Sans L", "Liberation Sans", Arial, sans-serif;

The first font specified is the preferred font. If this font is not available, the Web browser will attempt to use the next font in the list. If none of the fonts specified are found, the browser will resort to displaying its default font face. This same process also happens on a per-character basis if the browser is trying to display a character which is not present in the specified font.

Generic font families

In order to give Web designers some control over the appearance of fonts on their Web pages even when the specified fonts are not available, the CSS specification allows the use of several generic font families. These families are designed to split fonts into several categories based on their general appearance. They are commonly specified as the last in a series of fallback fonts, as a last resort in the event that none of the fonts specified by the author are available. There are five generic families:[6]

Sans-serif
Fonts that do not have decorative markings, or serifs, on their letters. These fonts are often considered easier to read on screens.[7]

Serif
Fonts that have decorative markings, or serifs, present on their characters.

Monospace
Fonts in which all characters are equally wide.

Cursive
Fonts that resemble cursive writing. These fonts may have a decorative appearance, but they can be difficult to read at small sizes, so they are generally used sparingly.

Fantasy
Fonts that may contain symbols or other decorative properties, but still represent the specified character.


Web fonts
A technique to download remote fonts was first specified in the CSS2 specification, which introduced the @font-face rule. It was (and remains[8]) controversial because using a remote font as part of a Web page allows the font to be freely downloaded. This could result in fonts being used against the terms of their license or illegally spread through the Web. TrueDoc (PFR), Embedded OpenType (EOT) and Web Open Font Format (WOFF) are formats designed to address these issues. Since the introduction of Internet Explorer 4, font embedding employing EOT has been used mainly for displaying characters in writing systems that are not supported by default fonts. Use on English-language Web sites was virtually non-existent. With the releases of Firefox 3.5, Opera 10 and Safari 3.1, usage employing other formats is expected to increase.

File formats
By using a specific CSS @font-face embedding technique[9] it is possible to embed fonts such that they work with IE4+, Firefox 3.5+, Safari 3.1+, Opera 10+ and Chrome 4.0+, allowing the vast majority of Web users to access this functionality. Some commercial foundries object to the redistribution of their fonts. For example, Hoefler & Frere-Jones says that, while they "enthusiastically [support] the emergence of a more expressive Web in which designers can safely and reliably use high-quality fonts online", the current delivery of fonts using @font-face is considered "illegal distribution" by the foundry and is not permitted.[10] Naturally this does not affect fonts and foundries under free licences.[11]

TrueDoc
Bitstream developed TrueDoc, the first standard for embedding fonts. TrueDoc was natively supported in Netscape Navigator 4, but was discontinued in Netscape Navigator 6 and Mozilla because Netscape could not release Bitstream's source code. A WebFont Player plugin was available for Internet Explorer, but the technology had to compete against Microsoft's Embedded OpenType fonts, natively supported since version 4.0.

Embedded OpenType
Internet Explorer has supported font embedding through the proprietary Embedded OpenType standard since version 4.0. It uses digital rights management techniques to help prevent fonts from being copied and used without a license. A simplified subset of EOT has been formalized under the name of CWT (Compatibility Web Type, formerly EOT-Lite).[12]
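A cross-browser @font-face declaration of the kind described above can be sketched roughly as follows. This is a minimal illustration, not the exact technique of reference [9]; the font file names and the family name "MyWebFont" are placeholders, and the set of formats a site actually serves depends on the browsers it targets:

```css
/* Hypothetical font files: EOT first for legacy Internet Explorer,
   then WOFF and raw TrueType for the other browsers listed above. */
@font-face {
  font-family: "MyWebFont";                       /* name used in later font stacks */
  src: url("mywebfont.eot");                      /* IE4+ (Embedded OpenType) */
  src: url("mywebfont.woff") format("woff"),      /* Firefox 3.6+, Chrome 5+, IE9 */
       url("mywebfont.ttf") format("truetype");   /* Safari 3.1+, Opera 10+, Chrome 4+ */
}

body {
  /* Fall back to a web-safe font and a generic family if the download fails. */
  font-family: "MyWebFont", Georgia, serif;
}
```

Listing the EOT source on its own line first lets old versions of Internet Explorer pick it up while other browsers use the second src declaration.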

Scalable Vector Graphics
Web typography applies to SVG in two ways:
1. All versions of the SVG 1.1 specification, including the SVGT subset, define a font module allowing the creation of fonts within an SVG document. Safari introduced support for many of these properties in version 3. Opera added preliminary support in version 8.0, with support for more properties in 9.0.
2. The SVG specification allows CSS to be applied to SVG documents in a similar manner to HTML documents, and the @font-face rule can be applied to text in SVG documents. Opera added support for this in version 10,[13] and WebKit since version 325 also supports this method using SVG fonts only.

TrueType/OpenType
Linking to industry-standard TrueType (TTF) and OpenType (TTF/OTF) fonts is supported by Mozilla Firefox 3.5+, Opera 10+,[14] Safari 3.1+,[15] and Google Chrome 4.0+.[16] Internet Explorer 9+ will support only those fonts with embedding permissions set to installable.[17]

Web Open Font Format
WOFF has been supported by Mozilla Firefox 3.6+,[18] Google Chrome 5+,[19][20] and Opera Presto,[21] and is supported by Internet Explorer 9 (since March 14, 2011).[22] Support is available in Safari on Mac OS X Lion since release 5.1.


Unicode fonts
Only two fonts available by default on the Windows platform, Microsoft Sans Serif and Lucida Sans Unicode, provide a wide Unicode character repertoire. A bug in Verdana (and the different handling of it by various user agents) hinders its usability where combining characters are desired.

A common hurdle in Web design is producing mockups that include fonts that are not Web-safe. There are a number of solutions for situations like this. One common solution is to replace the text with a similar Web-safe font or to use a series of similar-looking fallback fonts. Another technique is image replacement: overlaying text with an image containing the same text written in the desired font. This is good for search engine optimization and aesthetic purposes, but prevents text selection and increases bandwidth use. Also common is the use of Flash-based solutions such as sIFR. This is similar to image replacement techniques, though the text is selectable and rendered as a vector; however, this method requires the presence of a proprietary plugin on a client's system. Another solution is using JavaScript to replace the text with VML (for Internet Explorer) or SVG (for all other browsers). Font hosting services allow users to pay a subscription to host non-Web-safe fonts online. Most services host the font for the user and provide the necessary @font-face CSS declaration.



References

[1] Garaffa, Dave (2 September 1997). "Embedded Fonts In Microsoft IE4pr2" (http://web.archive.org/web/19980708194539/browserwatch.internet.com/news/story/microsoft265.html). Archived from the original (http://browserwatch.internet.com/news/story/microsoft265.html) on 8 July 1998.
[2] Cascading Style Sheets, level 1 (http://www.w3.org/TR/CSS1/), W3C, 1996-12-17.
[3] "Fonts" (http://www.w3.org/TR/2008/REC-CSS2-20080411/fonts.html), Cascading Style Sheets, level 2: CSS2 Specification (World Wide Web Consortium), 1998-05-12, retrieved 2009-07-28.
[4] CSS2.1 Changes, C.2.97 Chapter 15 Fonts (http://www.w3.org/TR/CSS21/changes.html#q104), World Wide Web Consortium, retrieved 2010-01-30.
[5] CSS3 module: Web Fonts (http://www.w3.org/TR/css3-webfonts), World Wide Web Consortium, retrieved 2010-01-30.
[6] "Fonts" (http://www.w3.org/TR/CSS2/fonts.html), CSS2 specification (World Wide Web Consortium).
[7] Poole, Alex (2005-04-07), Which Are More Legible: Serif or Sans Serif Typefaces? (http://www.alexpoole.info/academic/literaturereview.html), retrieved 2010-01-30.
[8] Hill, Bill (2008-07-21), Font Embedding on the Web (http://blogs.msdn.com/ie/archive/2008/07/21/font-embedding-on-the-web.aspx), Microsoft.
[9] Kimler, Scott Thomas (2009-07-04), xBrowser Fonts: Expand Your Font Palette Using CSS3 (http://randsco.com/index.php/2009/07/04/cross_browser_font_embedding), retrieved 2010-02-05.
[10] Wubben, Mark (February 27, 2009). "Geek Meet: Web Typography and sIFR 3 - Slides 15 and 16" (http://www.slideshare.net/novemberborn/geek-meet-web-typography-and-sifr-3#15). SlideShare. Retrieved 17 March 2010.
[11] See the Open source typefaces and Free software Unicode typefaces listings for such fonts.
[12] Daggett, John (2009-07-31), EOT-Lite File Format v.1.1 (http://lists.w3.org/Archives/Public/www-font/2009JulSep/0969.html), World Wide Web Consortium, retrieved 2010-01-30.
[13] Mills, Chris (2008-12-04), Opera Presto 2.2 and Opera 10: a first look (http://dev.opera.com/articles/view/presto-2-2-and-opera-10-a-first-look/#webfontssvg), Opera Software, retrieved 2010-01-30.
[14] Mills, Chris (2008-12-04), Opera Presto 2.2 and Opera 10: a first look (http://www.opera.com/docs/specs/presto22/#css), Opera Developer Community, retrieved 2010-01-29.
[15] Marsal, Katie (2008-02-07), Apple's Safari 3.1 to support downloadable web fonts, more (http://www.appleinsider.com/articles/08/02/07/apples_safari_3_1_to_support_downloadable_web_fonts_more.html), AppleInsider, retrieved 2010-02-05.
[16] Irish, Paul (2010-01-25), Chrome and @font-face: It's here! (http://paulirish.com/2009/chrome-and-font-face-a-summary/).
[17] Galineau, Sylvain (2010-07-15), The CSS Corner: Better Web Typography For Better Design (http://blogs.msdn.com/b/ie/archive/2010/07/15/the-css-corner-better-web-typography-for-better-design.aspx), Microsoft.
[18] Shapiro, Melissa (2009-10-20), Mozilla Supports Web Open Font Format (http://blog.mozilla.com/blog/2009/10/20/mozilla-supports-web-open-font-format/), Mozilla, retrieved 2010-02-05.
[19] Gilbertson, Scott (2010-04-26), Google Chrome to Support the Web Open Font Format (http://www.webmonkey.com/2010/04/google-chrome-to-support-the-web-open-font-format), Webmonkey.
[20] Bug 38217 - [chromium] Add WOFF support (https://bugs.webkit.org/show_bug.cgi?id=38217), WebKit.
[21] Web specifications support in Opera Presto 2.7 (http://www.opera.com/docs/specs/presto27/), Opera.
[22] Galineau, Sylvain (2010-04-23), Meet WOFF, The Standard Web Font Format (http://blogs.msdn.com/ie/archive/2010/04/23/meet-woff-the-standard-web-font-format.aspx), Microsoft.

External links

CSS @ Ten: The Next Big Thing, by Håkon Wium Lie, A List Apart
W3C Working Draft for CSS Fonts
Alberto Martinez Perez (2008-06-03). "Common fonts to all versions of Windows & Mac equivalents". Retrieved 2010-06-29.
Font Descriptions and @font-face
Font embedding for the Web
Håkon Wium Lie (2006-06-19). "Microsoft's forgotten monopoly" (Microsofts-forgotten-monopoly/2010-1032_3-6085417.html). CNET News. CNET Networks. Retrieved 2010-06-29.

"I have seen the shadow of the moon" by Golden Krishna (shadow_of_the_moon.html)
Real Web Type in Real Web Context, Tim Brown, A List Apart, Nov. 17, 2009
On Web Typography, Jason Santa Maria, A List Apart, Nov. 17, 2009
How to use CSS @font-face, Nice Web Type, Oct. 30, 2009
Web Fonts and Standards, Jeffrey Zeldman, Aug. 17, 2009
WebINK Web Fonts
Web Fonts
Google Web Fonts
Open Font Library
M+ Web Fonts (how to use M+ Fonts in web) (in English)


Website wireframe
A website wireframe, also known as a page schematic or screen blueprint, is a visual guide that represents the skeletal framework of a website.[1] Wireframes are typically created by interaction designers, user experience professionals with broad backgrounds in visual design, information architecture and user research, who arrange elements to best accomplish a particular purpose, usually one informed by a business objective and a creative idea. The wireframe depicts the page layout or arrangement of the website's content, including interface elements and navigational systems, and how they work together.[2] The wireframe usually lacks typographic style, color, or graphics, since the main focus lies in functionality, behavior, and priority of content.[3] In other words, it focuses on what a screen does, not what it looks like.[4] Wireframes can be pencil drawings or sketches on a whiteboard, or they can be produced by means of a broad array of free or commercial software applications. Wireframes focus on:
- The kinds of information displayed
- The range of functions available
- The relative priorities of the information and functions
- The rules for displaying certain kinds of information
- The effect of different scenarios on the display[5]

[Figure: A wireframe document for a person profile view]

The website wireframe connects the underlying conceptual structure, or information architecture, to the surface, or visual design of the website.[2] Wireframes help establish functionality and the relationships between different screen templates of a website. An iterative process, creating wireframes is an effective way to make rapid prototypes of pages while measuring the practicality of a design concept. Wireframing typically begins between high-level structural work, such as flowcharts or site maps, and screen designs.[3] Within the process of building a website, wireframing is where thinking becomes tangible.[6] Aside from websites, wireframes are utilized for the prototyping of mobile sites, computer applications, or other screen-based products that involve human-computer interaction.[7] Future technologies and media will force wireframes to adapt and evolve.


Uses of wireframes
Wireframes may be utilized by different disciplines. Developers use wireframes to get a more tangible grasp of the site's functionality, while designers use them to push the user interface (UI) process. User experience designers and information architects use wireframes to show navigation paths between pages. Business stakeholders use wireframes to ensure that requirements and objectives are met through the design.[3] Other professionals who create wireframes include information architects, interaction designers, user experience designers, graphic designers, programmers, and product managers.[7] Working with wireframes may be a collaborative effort, since it bridges the information architecture to the visual design. Due to overlaps in these professional roles, conflicts may occur, making wireframing a controversial part of the design process.[6] Since wireframes signify a bare-bones aesthetic, it is difficult for designers to assess how closely the wireframe needs to depict actual screen layouts.[4] Another difficulty with wireframes is that they don't effectively display interactive details. Modern UI design incorporates various devices such as expanding panels, hover effects, and carousels that pose a challenge for 2-D diagrams.[8] Wireframes may have multiple levels of detail and can be broken up into two categories in terms of fidelity, or how closely they resemble the end product.

Low-fidelity
Resembling a rough sketch or a quick mock-up, low-fidelity wireframes have less detail and are quick to produce. These wireframes help a project team collaborate more effectively since they are more abstract, using rectangles and labeling to represent content.[9] Dummy content, Latin filler text (lorem ipsum), and sample or symbolic content are used to represent data when real content is not available.[10]

High-fidelity
High-fidelity wireframes are often used for documenting because they incorporate a level of detail that more closely matches the design of the actual webpage, thus taking longer to create.[9]

For simple or low-fidelity drawings, paper prototyping is a common technique. Since these sketches are just representations, annotations (adjacent notes that explain behavior) are useful.[11] For more complex projects, rendering wireframes using computer software is popular. Some tools allow the incorporation of interactivity, including Flash animation and front-end web technologies such as HTML, CSS, and JavaScript.

Elements of wireframes
The skeleton plan of a website can be broken down into three components: information design, navigation design, and interface design. Page layout is where these components come together, while wireframing is what depicts the relationship between these components.[2]

Information design
Main article: Information design Information design is the presentationplacement and prioritization of information in a way that facilitates understanding. Information design is an area of graphic design, meant to display information effectively for clear communication. For websites, information elements should be arranged in a way that reflects the goals and tasks of the user.[12]



Navigation design
The navigation system provides a set of screen elements that allow the user to move page to page through a website. The navigation design should communicate the relationship between the links it contains so that users understand the options they have for navigating the site. Often, websites contain multiple navigation systems such as a global navigation, local navigation, supplementary navigation, contextual navigation, and courtesy navigation.[13]

Interface design
Main article: User interface design User interface design includes selecting and arranging interface elements to enable users to interact with the functionality of the system.[14] The goal is to facilitate usability and efficiency as much as possible. Common elements found in interface design are action buttons, text fields, check boxes, radio buttons and drop-down menus.

[1] Brown 2011, p. 166
[2] Garrett 2010, p. 131
[3] Brown 2011, p. 167
[4] Brown 2011, p. 168
[5] Brown 2011, p. 169
[6] Wodtke, Govella 2009, p. 186
[7] Konigi 2011
[8] Brown 2011, p. 169
[9] Wodtke, Govella 2009, p. 185
[10] Brown 2011, p. 175
[11] Brown 2011, p. 194
[12] Garrett 2010, p. 126
[13] Garrett 2010, pp. 120-122
[14] Garrett 2010, p. 30

Brown, Dan M. (2011). Communicating Design: Developing Web Site Documentation for Design and Planning, Second Edition. New Riders. ISBN 978-0-13-138539-9.
Garrett, Jesse James (2010). The Elements of User Experience: User-Centered Design for the Web and Beyond. New Riders. ISBN 978-0-321-68865-1.
"Konigi Wiki Wireframes". Retrieved 2011-03-25.
Wodtke, Christina; Govella, Austin (2009). Information Architecture: Blueprints for the Web, Second Edition. New Riders. ISBN 978-0-321-59199-9.



Web colors
Web colors are colors used in designing web pages, and the methods for describing and specifying those colors. Colors may be specified as an RGB triplet or in hexadecimal format (a hex triplet). They may also be specified according to their common English names in some cases. Often a color tool or other graphics software is used to generate color values. Hexadecimal color codes begin with a number sign (#).[1][2] A color is specified according to the intensity of its red, green and blue components, each represented by eight bits. Thus, there are 24 bits used to specify a web color, and 16,777,216 colors that may be so specified. The first versions of Mosaic and Netscape Navigator used the X11 color names as the basis for their color lists, as both started as X Window System applications.[3] Web colors have an unambiguous colorimetric definition, sRGB, which relates the chromaticities of a particular phosphor set, a given transfer curve, adaptive whitepoint, and viewing conditions.[4] These have been chosen to be similar to many real-world monitors and viewing conditions, so that even without color management rendering is fairly close to the specified values. However, user agents vary in the fidelity with which they represent the specified colors. More advanced user agents use color management to provide better color fidelity; this is particularly important for Web-to-print applications.

Hex triplet
A hex triplet is a six-digit, three-byte hexadecimal number used in HTML, CSS, SVG, and other computing applications to represent colors. The bytes represent the red, green and blue components of the color. One byte represents a number in the range 00 to FF (in hexadecimal notation), or 0 to 255 in decimal notation, expressing the least (0) to the most (255) intensity of each color component. Thus web colors specify colors in the Truecolor (24-bit RGB) color scheme. The hex triplet is formed by concatenating three bytes in hexadecimal notation, in the following order:

Byte 1: red value
Byte 2: green value
Byte 3: blue value

For example, consider the color where the red/green/blue values are the decimal numbers red=36, green=104, blue=160 (a greyish-blue color). The decimal numbers 36, 104 and 160 are equivalent to the hexadecimal numbers 24, 68 and A0 respectively. The hex triplet is obtained by concatenating the six hexadecimal digits together, 2468A0 in this example. Note that if any one of the three color values is less than 16 (decimal) or 10 (hex), it must be represented with a leading zero so that the triplet always has exactly six digits. For example, the decimal triplet 4, 8, 16 would be represented by the hex digits 04, 08, 10, forming the hex triplet 040810. The number of colors that can be represented by this system is 256 × 256 × 256 (256 cubed) = 16,777,216.
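The byte-by-byte concatenation described above is easy to sketch in code. The following is a minimal Python illustration; the function name is ours, not part of any standard library:

```python
def hex_triplet(r, g, b):
    # Each component (0-255) becomes a two-digit uppercase hex byte;
    # the 02 width keeps the leading zero, so the result is always six digits.
    return "{:02X}{:02X}{:02X}".format(r, g, b)

print(hex_triplet(36, 104, 160))  # 2468A0, the greyish-blue example above
print(hex_triplet(4, 8, 16))      # 040810, note the preserved leading zeros
```

The same formatting rule is what guarantees that a triplet like 4, 8, 16 never collapses to the ambiguous four-digit string "4810".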

Web colors


Shorthand hexadecimal form

An abbreviated, three-digit (hexadecimal) form is sometimes used.[5] Expanding this form to the six-digit form is as simple as doubling each digit: 09C becomes 0099CC, as in the following CSS example:

.threedigit { color: #09C; }
.sixdigit { color: #0099CC; } /* same color as above */

The three-digit form is described in the CSS specification, not in HTML. As a result, the three-digit form in an attribute other than "style" is not interpreted as a valid color in some browsers. This shorthand form reduces the palette to 4,096 colors, the equivalent of 12-bit color, as opposed to the 16,777,216 colors of the full six-digit form. However, this limitation is often sufficient for text-based documents.
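The digit-doubling rule can be expressed in one line. A small Python sketch (the helper name is illustrative):

```python
def expand_shorthand(color):
    # "09C" -> "0099CC": double each hex digit of the three-digit form.
    return "".join(digit * 2 for digit in color)

print(expand_shorthand("09C"))  # 0099CC
```

Doubling (rather than, say, padding with zeros) is what makes #FFF expand to #FFFFFF, so the shorthand palette still spans the full range from black to white.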

Converting RGB to hexadecimal

RGB values are usually given in the 0-255 range; if they are in the 0-1 range, the values are multiplied by 255 before conversion. The value divided by 16 (integer division, ignoring any remainder) gives the first hexadecimal digit (between 0 and F, where the letters A to F represent the numbers 10 to 15; see hexadecimal for more details). The remainder gives the second hexadecimal digit. For instance, the RGB value 201 divides into 12 groups of 16, so the first digit is C; the remainder of 9 gives the second digit, yielding the hexadecimal number C9. This process is repeated for each of the three color values. Conversion between number bases is a common feature of calculators, including both hand-held models and the software calculators bundled with most modern operating systems. Web-based tools specifically for converting color values are also available.[6][7][8]
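The division-and-remainder procedure above, applied to the example value 201, looks like this in Python (the helper name is illustrative):

```python
HEX_DIGITS = "0123456789ABCDEF"

def component_to_hex(value):
    # Integer division by 16 yields the first hex digit,
    # the remainder yields the second.
    return HEX_DIGITS[value // 16] + HEX_DIGITS[value % 16]

print(component_to_hex(201))  # C9: 201 = 12 * 16 + 9
```

Running this once per component and concatenating the three results reproduces the full hex triplet.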

HTML color names

The HTML 4.01 specification[9] defines sixteen named colors, as follows (names are defined in this context to be case-insensitive):

CSS 1-2.0 / HTML 3.2-4 / VGA color names

Color     Hex (RGB)   Hue    Satur (HSL)   Light (HSL)   CGA number (name); alias
White     #FFFFFF     0      0%            100%          15 (white)
Silver    #C0C0C0     0      0%            75%           7 (light gray)
Gray      #808080     0      0%            50%           8 (dark gray)
Black     #000000     0      0%            0%            0 (black)
Red       #FF0000     0      100%          50%           12 (high red)
Maroon    #800000     0      100%          25%           4 (low red)
Yellow    #FFFF00     60     100%          50%           14 (yellow)
Olive     #808000     60     100%          25%           6 (brown)
Lime      #00FF00     120    100%          50%           10 (high green); green
Green     #008000     120    100%          25%           2 (low green)
Aqua      #00FFFF     180    100%          50%           11 (high cyan); cyan
Teal      #008080     180    100%          25%           3 (low cyan)
Blue      #0000FF     240    100%          50%           9 (high blue)
Navy      #000080     240    100%          25%           1 (low blue)
Fuchsia   #FF00FF     300    100%          50%           13 (high magenta); magenta
Purple    #800080     300    100%          25%           5 (low magenta)

These 16 were labelled as sRGB and included in the HTML 3.0 specification, which noted they were "the standard 16 colors supported with the Windows VGA palette."[10]

X11 color names

In addition, a number of colors are defined by web browsers. A particular browser may not recognize all of these colors, but as of 2005 all modern general-use browsers support the full list of colors. Many of these colors are from the list of X11 color names distributed with the X Window System. These colors were standardized by SVG 1.0, and are accepted by SVG Full user agents. They are not part of SVG Tiny. The list of colors actually shipped with the X11 product varies between implementations, and clashes with certain of the HTML names such as green. Furthermore, X11 colors are defined as simple RGB (hence, no particular color space), rather than sRGB. This means that the list of colors found in X11 (e.g. in /usr/lib/X11/rgb.txt) should not directly be used to choose colors for the web.[11] The list of web "X11 colors" from the CSS3 specification, along with their hexadecimal and decimal equivalents, is shown below; compare the alphabetical lists in the W3C standards. Note that this includes the common synonyms: aqua (HTML4/CSS 1.0 standard name) and cyan (common sRGB name), magenta (common sRGB name) and fuchsia (HTML4/CSS 1.0 standard name), gray (HTML4/CSS 1.0 standard name) and grey.[12][13]
HTML name           Hex code (RGB)   Decimal code (RGB)

Pink colors
Pink                FFC0CB   255 192 203
LightPink           FFB6C1   255 182 193
HotPink             FF69B4   255 105 180
DeepPink            FF1493   255 20 147
PaleVioletRed       DB7093   219 112 147
MediumVioletRed     C71585   199 21 133

Red colors
LightSalmon         FFA07A   255 160 122
Salmon              FA8072   250 128 114
DarkSalmon          E9967A   233 150 122
LightCoral          F08080   240 128 128
IndianRed           CD5C5C   205 92 92
Crimson             DC143C   220 20 60
FireBrick           B22222   178 34 34
DarkRed             8B0000   139 0 0
Red                 FF0000   255 0 0

Orange colors
OrangeRed           FF4500   255 69 0
Tomato              FF6347   255 99 71
Coral               FF7F50   255 127 80
DarkOrange          FF8C00   255 140 0
Orange              FFA500   255 165 0
Gold                FFD700   255 215 0

Yellow colors
Yellow                 FFFF00   255 255 0
LightYellow            FFFFE0   255 255 224
LemonChiffon           FFFACD   255 250 205
LightGoldenrodYellow   FAFAD2   250 250 210
PapayaWhip             FFEFD5   255 239 213
Moccasin               FFE4B5   255 228 181
PeachPuff              FFDAB9   255 218 185
PaleGoldenrod          EEE8AA   238 232 170
Khaki                  F0E68C   240 230 140
DarkKhaki              BDB76B   189 183 107

Brown colors
Cornsilk            FFF8DC   255 248 220
BlanchedAlmond      FFEBCD   255 235 205
Bisque              FFE4C4   255 228 196
NavajoWhite         FFDEAD   255 222 173
Wheat               F5DEB3   245 222 179
BurlyWood           DEB887   222 184 135
Tan                 D2B48C   210 180 140
RosyBrown           BC8F8F   188 143 143
SandyBrown          F4A460   244 164 96
Goldenrod           DAA520   218 165 32
DarkGoldenrod       B8860B   184 134 11
Peru                CD853F   205 133 63
Chocolate           D2691E   210 105 30
SaddleBrown         8B4513   139 69 19
Sienna              A0522D   160 82 45
Brown               A52A2A   165 42 42
Maroon              800000   128 0 0


Green colors
DarkOliveGreen      556B2F   85 107 47
Olive               808000   128 128 0
OliveDrab           6B8E23   107 142 35
YellowGreen         9ACD32   154 205 50
LimeGreen           32CD32   50 205 50
Lime                00FF00   0 255 0
LawnGreen           7CFC00   124 252 0
Chartreuse          7FFF00   127 255 0
GreenYellow         ADFF2F   173 255 47
SpringGreen         00FF7F   0 255 127
MediumSpringGreen   00FA9A   0 250 154
LightGreen          90EE90   144 238 144
PaleGreen           98FB98   152 251 152
DarkSeaGreen        8FBC8F   143 188 143
MediumSeaGreen      3CB371   60 179 113
SeaGreen            2E8B57   46 139 87
ForestGreen         228B22   34 139 34
Green               008000   0 128 0
DarkGreen           006400   0 100 0

Cyan colors
MediumAquamarine    66CDAA   102 205 170
Aqua                00FFFF   0 255 255
Cyan                00FFFF   0 255 255
LightCyan           E0FFFF   224 255 255
PaleTurquoise       AFEEEE   175 238 238
Aquamarine          7FFFD4   127 255 212
Turquoise           40E0D0   64 224 208
MediumTurquoise     48D1CC   72 209 204
DarkTurquoise       00CED1   0 206 209
LightSeaGreen       20B2AA   32 178 170
CadetBlue           5F9EA0   95 158 160
DarkCyan            008B8B   0 139 139
Teal                008080   0 128 128

Blue colors
LightSteelBlue      B0C4DE   176 196 222
PowderBlue          B0E0E6   176 224 230
LightBlue           ADD8E6   173 216 230
SkyBlue             87CEEB   135 206 235
LightSkyBlue        87CEFA   135 206 250
DeepSkyBlue         00BFFF   0 191 255
DodgerBlue          1E90FF   30 144 255
CornflowerBlue      6495ED   100 149 237
SteelBlue           4682B4   70 130 180
RoyalBlue           4169E1   65 105 225
Blue                0000FF   0 0 255
MediumBlue          0000CD   0 0 205
DarkBlue            00008B   0 0 139
Navy                000080   0 0 128
MidnightBlue        191970   25 25 112


Purple colors
Lavender            E6E6FA   230 230 250
Thistle             D8BFD8   216 191 216
Plum                DDA0DD   221 160 221
Violet              EE82EE   238 130 238
Orchid              DA70D6   218 112 214
Fuchsia             FF00FF   255 0 255
Magenta             FF00FF   255 0 255
MediumOrchid        BA55D3   186 85 211
MediumPurple        9370DB   147 112 219
BlueViolet          8A2BE2   138 43 226
DarkViolet          9400D3   148 0 211
DarkOrchid          9932CC   153 50 204
DarkMagenta         8B008B   139 0 139
Purple              800080   128 0 128
Indigo              4B0082   75 0 130
DarkSlateBlue       483D8B   72 61 139
SlateBlue           6A5ACD   106 90 205
MediumSlateBlue     7B68EE   123 104 238

White/Gray/Black colors
White               FFFFFF   255 255 255
Snow                FFFAFA   255 250 250
Honeydew            F0FFF0   240 255 240
MintCream           F5FFFA   245 255 250
Azure               F0FFFF   240 255 255
AliceBlue           F0F8FF   240 248 255
GhostWhite          F8F8FF   248 248 255
WhiteSmoke          F5F5F5   245 245 245
Seashell            FFF5EE   255 245 238
Beige               F5F5DC   245 245 220
OldLace             FDF5E6   253 245 230
FloralWhite         FFFAF0   255 250 240
Ivory               FFFFF0   255 255 240
AntiqueWhite        FAEBD7   250 235 215
Linen               FAF0E6   250 240 230
LavenderBlush       FFF0F5   255 240 245
MistyRose           FFE4E1   255 228 225
Gainsboro           DCDCDC   220 220 220
LightGray           D3D3D3   211 211 211
Silver              C0C0C0   192 192 192
DarkGray            A9A9A9   169 169 169
Gray                808080   128 128 128
DimGray             696969   105 105 105
LightSlateGray      778899   119 136 153
SlateGray           708090   112 128 144
DarkSlateGray       2F4F4F   47 79 79
Black               000000   0 0 0

Web-safe colors
At one time many computer displays were only capable of displaying 256 colors, either dictated by the hardware or changeable via a "color table". When a color was encountered (e.g., in an image) that was not available, a different one had to be used: either the closest available color, which greatly speeds up the load time, or dithering, which produces more accurate results but takes longer to load because of the extra calculation. There were various attempts to make a "standard" color palette. A set of colors was needed that could be shown without dithering on 256-color displays; the number 216 was chosen partly because computer operating systems customarily reserved sixteen to twenty colors for their own use, and partly because it allows exactly six equally-spaced shades of red, green, and blue (6 × 6 × 6 = 216), each from 00 to FF (including both limits). The list of colors is often presented as if it has special properties that render them immune to dithering. In fact, on 256-color displays applications can set a palette of any selection of colors that they choose, dithering the rest. These colors were chosen specifically because they matched the palettes selected by the then-leading browser applications; fortunately, the palettes in use in the popular browsers did not differ radically. "Web-safe" colors had a flaw in that on systems such as X11, where the palette is shared between applications, browsers often allocated smaller color cubes (5×5×5 or 4×4×4), so the "web-safe" colors would actually dither on such systems. Better results were obtained by providing an image with a larger range of colors and allowing the browser to quantize the color space if needed, rather than suffer the quality loss of a double quantization.

As of 2011, personal computers typically[14] have 24-bit (TrueColor) displays, and the use of "web-safe" colors has fallen into practical disuse. Even mobile devices have at least 16-bit color, driven by the inclusion of cameras on cellphones. The "web-safe" colors do not all have standard names, but each can be specified by an RGB triplet: each component (red, green, and blue) takes one of the six values from the following table (out of the 256 possible values available for each component in full 24-bit color).


6 shades of each color

Key        0     3     6     9     C (12)   F (15)
Hex        00    33    66    99    CC       FF
Decimal    0     51    102   153   204      255
Fraction   0     0.2   0.4   0.6   0.8      1
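The 216-color palette is simply every combination of the six shades per channel. A short Python sketch that regenerates it from the shade table above:

```python
# The six web-safe shades per channel, as hex byte values.
shades = ["00", "33", "66", "99", "CC", "FF"]

# Every combination of red, green and blue shades: 6 * 6 * 6 = 216 colors.
web_safe = [r + g + b for r in shades for g in shades for b in shades]

print(len(web_safe))          # 216
print("660099" in web_safe)   # True: all three bytes are web-safe shades
print("2468A0" in web_safe)   # False: no byte is one of the six shades
```

Membership in the palette can thus be checked per byte, which is why shorthand codes built only from the digits 0, 3, 6, 9, C and F are always web-safe.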

The following table shows all of the "web-safe" colors, with the "really safe" colors marked. (One shortcoming of the web-safe palette is its poor selection of light background colors.) The intensities at the low end of the range, especially the two darkest, are often hard to distinguish.

Color table
In the table below, each color code listed is a shorthand for the RGB value; for example, code 609 is equivalent to RGB code 102-0-153 or hex code #660099.[15]
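The shorthand expansion works digit by digit, doubling each hex digit; a small illustration (the function name is ours):

```python
def expand_shorthand(code):
    """Expand a 3-digit web-safe shorthand (e.g. '609') into its
    6-digit hex code and RGB triple by doubling each hex digit."""
    full = ''.join(digit * 2 for digit in code)       # '609' -> '660099'
    rgb = tuple(int(full[i:i + 2], 16) for i in (0, 2, 4))
    return '#' + full, rgb

print(expand_shorthand('609'))  # ('#660099', (102, 0, 153))
```

This reproduces the worked example in the text: 609 expands to #660099, i.e. RGB 102-0-153.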

Web-Safe Colors
(really-safe colors marked with asterisks)

*000* *003* 006 009 00C *00F* 030 033 036 039 03C 03F 060 063 066 069 06C 06F 090 093 096 099 09C 09F 0C0 0C3 0C6 0C9 0CC 0CF *0F0* 0F3 *0F6* 0F9 *0FC* *0FF*
300 303 306 309 30C 30F 330 333 336 339 33C 33F 360 363 366 369 36C 36F 390 393 396 399 39C 39F 3C0 3C3 3C6 3C9 3CC 3CF 3F0 *3F3* *3F6* 3F9 *3FC* *3FF*
600 603 606 609 60C 60F 630 633 636 639 63C 63F 660 663 666 669 66C 66F 690 693 696 699 69C 69F 6C0 6C3 6C6 6C9 6CC 6CF *6F0* *6F3* 6F6 6F9 6FC *6FF*
900 903 906 909 90C 90F 930 933 936 939 93C 93F 960 963 966 969 96C 96F 990 993 996 999 99C 99F 9C0 9C3 9C6 9C9 9CC 9CF 9F0 9F3 9F6 9F9 9FC 9FF
C00 C03 C06 C09 C0C C0F C30 C33 C36 C39 C3C C3F C60 C63 C66 C69 C6C C6F C90 C93 C96 C99 C9C C9F CC0 CC3 CC6 CC9 CCC CCF CF0 CF3 *CF6* CF9 CFC CFF
*F00* *F03* F06 F09 F0C *F0F* F30 F33 F36 F39 F3C F3F F60 F63 F66 F69 F6C F6F F90 F93 F96 F99 F9C F9F FC0 FC3 FC6 FC9 FCC FCF *FF0* *FF3* *FF6* FF9 FFC *FFF*

Safest web colors

Designers were often encouraged to stick to these 216 "web-safe" colors in their websites; however, 8-bit color displays were much more common when the 216-color palette was developed than they are now. David Lehn and Hadley Stern have since discovered that only 22 of the 216 colors in the web-safe palette are reliably displayed without inconsistent remapping on 16-bit computer displays. They called these 22 colors the "really safe" palette; it consists mainly of shades of green and yellow, as can be seen in the table above, where the "really safe" colors are marked with asterisks.[16]
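The remapping Lehn and Stern observed comes from quantizing 8-bit components into the narrower channels of 16-bit displays. A rough sketch of why most web-safe shades cannot survive the round trip, assuming a 5-bit channel as in the common (but not universal) 5-6-5 layout:

```python
# Sketch: most web-safe component values (0x33, 0x66, ...) are not exactly
# representable in a 5-bit channel of a 16-bit display, so the displayed
# color differs slightly from the requested one.
def roundtrip_5bit(c):
    """Quantize an 8-bit component to 5 bits, then expand back to 8 bits
    using the common (top-bits replication) expansion."""
    c5 = c >> 3                    # keep the top 5 bits
    return (c5 << 3) | (c5 >> 2)   # replicate high bits into the low bits

for c in (0x00, 0x33, 0x66, 0x99, 0xCC, 0xFF):
    print(hex(c), '->', hex(roundtrip_5bit(c)))
# only 0x00 and 0xFF come back unchanged; 0x33 becomes 0x31, etc.
```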

CSS colors
The Cascading Style Sheets language defines the same number of named colors as the HTML 4 spec, namely the 16 listed previously. Additionally, CSS 2.1 adds the 'orange' color name to the list[17]:



Colors added in CSS 2.1

Color Name   Hex (RGB)   Red    Green   Blue   Hue (HSL/HSV)   Satur (HSL)   Light (HSL)   Satur (HSV)   Value (HSV)
orange       #FFA500     100%   65%     0%     39°             100%          50%           100%          100%

CSS 2, SVG and CSS 2.1 also allow web authors to use so-called system colors, which are color names whose values are taken from the operating system, for example, picking the operating system's highlighted text color, or the background color for tooltip controls. This enables web authors to style their content in line with the operating system of the user agent.[18] The CSS3 color module has deprecated the use of system colors in favor of the CSS3 UI System Appearance property,[19][20] which itself was subsequently dropped from CSS3.[21] The developing CSS3 specification will also introduce HSL color space values to style sheets:

/* RGB model */
p { color: #F00 }                 /* #rgb */
p { color: #FF0000 }              /* #rrggbb */
p { color: rgb(255, 0, 0) }       /* integer range 0 - 255 */
p { color: rgb(100%, 0%, 0%) }    /* float range 0.0% - 100.0% */

/* RGB with alpha channel, added to CSS3 */
p { color: rgba(255, 0, 0, 0.5) }  /* 0.5 opacity, semi-transparent */

/* HSL model, added to CSS3 */
p { color: hsl(0, 100%, 50%) }     /* red */
p { color: hsl(120, 100%, 50%) }   /* green */
p { color: hsl(120, 100%, 25%) }   /* dark green */
p { color: hsl(120, 100%, 75%) }   /* light green */
p { color: hsl(120, 50%, 50%) }    /* pastel green */

/* HSL model with alpha channel */
p { color: hsla(120, 100%, 50%, 1) }    /* green */
p { color: hsla(120, 100%, 50%, 0.5) }  /* semi-transparent green */
p { color: hsla(120, 100%, 50%, 0.1) }  /* very transparent green */
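The HSL values above name the same colors as their RGB forms. Python's standard colorsys module can illustrate the conversion (note that colorsys uses h, l, s argument order with all components in the 0-1 range; the wrapper function is ours):

```python
import colorsys

def hsl_to_rgb255(h_deg, s_pct, l_pct):
    """Convert CSS-style hsl(h, s%, l%) to an (r, g, b) triple in 0-255."""
    # colorsys.hls_to_rgb takes hue, LIGHTNESS, saturation, each in 0-1.
    r, g, b = colorsys.hls_to_rgb(h_deg / 360, l_pct / 100, s_pct / 100)
    return tuple(round(c * 255) for c in (r, g, b))

print(hsl_to_rgb255(0, 100, 50))     # (255, 0, 0) - red
print(hsl_to_rgb255(120, 100, 50))   # (0, 255, 0) - green
print(hsl_to_rgb255(120, 100, 25))   # (0, 128, 0) - dark green
```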

Some browsers and devices do not support colors. For these users, as well as for blind and colorblind users, Web content that depends on color can be unusable or difficult to use. Either no colors should be specified (to invoke the browser's default colors), or both the background and all foreground colors (primarily the colors of plain text, unvisited links, hovered links, active links, and visited links) should be specified, to avoid black-on-black or white-on-white effects.[22]



References

[1] Niederst Robbins, Jennifer. Web Design in a Nutshell, p. 103.
[2] York, Richard. Beginning CSS, pp. 71-72.
[3] Guide to Graphics (http://www.splus.com/support/splus80win/graphics.pdf). S-PLUS, p. 13.
[4] Digital Color Imaging Handbook, by Gaurav Sharma. ISBN 0-8493-0900-X.
[5] CSS3 color module (http://www.w3.org/TR/css3-color/#rgb-color)
[6] RGB to Hexadecimal Color Converter (http://www.telacommunications.com/nutshell/rgbform.htm)
[7] Color Converter Tool (http://www.colorhexa.com/)
[8] List of Web Safe Colors with conversions (http://hex-code.com/web-safe-colors)
[9] HTML 4.01 Specification, section 6.5 "Colors" (http://www.w3.org/TR/REC-html40/types.html#h-6.5)
[10] HTML 3.2 Specification, "The BODY element" (http://www.w3.org/TR/REC-html32#body)
[11] Public discussion on SVG mailing list: Re: color names in SVG-1.0 conflict with /usr/lib/X11/rgb.txt (http://lists.w3.org/Archives/Public/www-svg/2002Apr/0052.html)
[12] W3C TR CSS3 Color Module, SVG color keywords (http://www.w3.org/TR/css3-color/#svg-color)
[13] W3C TR SVG 1.0, recognized color keyword names (http://www.w3.org/TR/SVG/types.html#ColorKeywords)
[14] Browser Display Statistics (http://www.w3schools.com/browsers/browsers_display.asp)
[15] #660099 Color Information (http://www.colorhexa.com/660099)
[16] Death of the Websafe Color Palette? (http://www.physics.ohio-state.edu/~wilkins/color/websafecolors.html)
[17] "CSS 2.1 Specification: Syntax and basic data types: Colors" (http://www.w3.org/TR/CSS21/syndata.html#color-units). 2009-09-08. Retrieved 2009-12-21.
[18] User interface - System colors (http://www.w3.org/TR/CSS21/ui.html#system-colors)
[19] CSS3 Color Module - CSS2 System Colors (http://www.w3.org/TR/css3-color/#css-system)
[20] CSS3 Basic User Interface Module, W3C Candidate Recommendation 11 May 2004: System Appearance (http://www.w3.org/TR/2004/CR-css3-ui-20040511/#system)
[21] CSS Basic User Interface Module Level 3 (CSS3 UI), W3C Working Draft 17 January 2012: List of substantial changes (http://www.w3.org/TR/css3-ui/#changes-list), "System Appearance has been dropped, including appearance values & property, and system fonts / extension of the font property shorthand."
[22] If You Pick One Color, Pick Them All (http://www.w3.org/QA/Tips/color)

External links
CSS2.1 Color Specification
Web colors at the Open Directory Project



Web interoperability
Web interoperability means producing web pages that are viewable in standards-compatible web browsers, on various operating systems such as Windows, Macintosh and Linux, and on devices such as PCs, PDAs and mobile phones, based on the latest web standards.

The term originated with the Web Interoperability Pledge [1], a promise to adhere to current HTML Recommendations as promulgated by the World Wide Web Consortium (W3C). The WIP was not a W3C initiative, but was started and run by ZDNet AnchorDesk quite independently. The issue was known as cross-browser compatibility during the browser war between Internet Explorer and Netscape. Windows Internet Explorer was the dominant browser after that, but modern web browsers such as Mozilla Firefox, Opera and Safari have supported web standards. Because of Internet Explorer's backward compatibility, many web pages still rely on non-standard HTML tags and DOM-handling scripts, as well as platform-dependent techniques such as ActiveX. These are very harmful to Web accessibility and device independence.

Elements of Web interoperability

Structural and semantic markup with XHTML.
CSS-based layout, using layout properties such as position and float.
Separation of structure, presentation and behavior in web pages.
DOM scripting based on the W3C DOM standard and ECMAScript.

There have been various promotional activities, for example the Web Standards Project, Mozilla's Technology Evangelism [2] and the Web Standards Group [3]. There are also educational activities such as the Web Essentials conference [4].
Local Activities
Web Standards Korea [5]

[1] http://www.w3.org/Promotion/WIP/
[2] http://www.mozilla.org/projects/tech-evangelism/
[3] http://webstandardsgroup.org/
[4] http://we05.com
[5] http://webstandard.or.kr



Web modeling
Web modeling (also known as model-driven Web development) is a branch of Web engineering which addresses the specific issues related to the design and development of large-scale Web applications. In particular, it focuses on the design notations and visual languages that can be used for the realization of robust, well-structured, usable and maintainable Web applications. Designing a data-intensive Web site amounts to specifying its characteristics in terms of various orthogonal abstractions. The main orthogonal models involved in complex Web application design are: data structure, content composition, navigation paths, and presentation model.

In the early days of Web development [1], it was normal to build Web applications with little attention to the development process. In later years, Web design firms [2] had many issues managing their Web sites as the development process grew more complicated. Web development [1] tools have helped simplify data-intensive Web applications by using page generators. Microsoft's Active Server Pages and JavaSoft's JavaServer Pages have helped by generating content from user-programmed templates.

Several languages and notations have been devised for Web application modeling. Among them, we can cite: HDM - W2000, RMM, OOHDM, ARANEUS, STRUDEL, TIRAMISU, WebML, Hera, UML Web Application Extension, UML-based Web Engineering (UWE), ACE, WebArchitect, and OO-H.

One of the main discussion venues for this discipline is the Model-Driven Web Engineering Workshop (MDWE) [3], held yearly in conjunction with the International Conference on Web Engineering (ICWE) [4].

[1] http://sdtechdesigns.com/solutions/web-design/
[2] http://sdtechdesigns.com/company/about-our-company/
[3] http://mdwe2011.pst.ifi.lmu.de/
[4] http://icwe2011.webengineering.org/



Web template
A web template is a tool used to separate content from presentation in web design, and for mass-production of web documents. It is a basic component of a web template system. Web templates can be used to set up any type of website. In its simplest sense, a web template operates similarly to a form letter for use in setting up a website.
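The "form letter" analogy can be made concrete with Python's built-in string.Template, used here as a generic stand-in for a real web template system:

```python
from string import Template

# A minimal "web template": the page layout is fixed, and the generic
# placeholders are substituted with the user's own information, just as
# generic text in a purchased template is replaced by the site owner.
page = Template("""<html>
  <head><title>$title</title></head>
  <body><h1>$title</h1><p>$body</p></body>
</html>""")

html = page.substitute(title="My Photo Gallery",
                       body="Welcome to my gallery of photos.")
print(html)
```

The same template can be reused for any site simply by substituting different values, which is the essence of mass-producing web documents from one layout.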

Template uses
Web templates can be used by any individual or organization to set up their website. Once a template is purchased or downloaded, the user will replace all generic information included in the web template with their own personal, organizational or product information. Templates can be used to:

Display personal information or daily activities, as in a blog.
Sell products on-line.
Display information about a company or organization.
Display family history.
Display a gallery of photos.
Place music files such as MP3 files on-line for play through a web browser.
Place videos on-line for public viewing.
Set up a private login area on-line.

Effective separation
A common goal among experienced web developers is to develop and deploy applications that are flexible and easily maintainable. An important consideration in reaching this goal is the separation of business logic from presentation logic.[1] Developers use web template systems (with varying degrees of success) to maintain this separation.[1] One difficulty in evaluating this separation is the lack of well-defined formalisms to measure when and how well it is actually met.[1] There are, however, fairly standard heuristics that have been borrowed from the domain of software engineering. These include 'inheritance' (based on principles of object-oriented programming) and 'templating and generative programming' (consistent with the principles of MVC separation).[2] The precise difference between the various guidelines is subject to some debate, and some aspects of the different guidelines share a degree of similarity.[3]

Flexible presentation
One major rationale behind "effective separation" is the need for maximum flexibility in the code and resources dedicated to the presentation logic.[2] Client demands, changing customer preferences and desire to present a "fresh face" for pre-existing content often result in the need to dramatically modify the public appearance of web content while disrupting the underlying infrastructure as little as possible. The distinction between "presentation" (front end) and "business logic" (infrastructure) is usually an important one, because:

The presentation source code language may differ from other code assets.
The production process for the application may require the work to be done at separate times and locations.
Different workers have different skill sets, and presentation skills do not always coincide with skills for coding business logic.
Code assets are easier to maintain and more readable when disparate components are kept separate and loosely coupled.[2]



Not all potential users of web templates have the willingness and ability to hire developers to design a system for their needs. Additionally, some may wish to use the web but have limited or no technical proficiency. For these reasons, a number of developers and vendors have released web templates specifically for reuse by non-technical people. Although web template reusability is also important for highly skilled and technically experienced developers, it is especially critical to those who rely on simplicity and "ready-made" web solutions. Such "ready-made" web templates are sometimes free, and easily made by an individual domestically. However, specialized web templates are sometimes sold online. Although there are numerous commercial sites that offer web templates for a licensing fee, there are also free and "open-source" sources.

Notes and references

[1] Parr, Terence John (2004). Enforcing Strict Model-View Separation in Template Engines. Proceedings of the 13th International Conference on World Wide Web. ISBN 1-58113-844-X.
[2] Paragon Corporation (2003-07-19). "Separation of Business Logic from Presentation Logic in Web Applications" (http://www.paragoncorporation.com/ArticleDetail.aspx?ArticleID=21).
[3] MVC vs OOP (http://c2.com/cgi/wiki?MvcIsNotObjectOriented)


Web Analytics & Optimization

Web analytics
Web analytics is the measurement, collection, analysis and reporting of internet data for the purposes of understanding and optimizing web usage.[1] It is often done without the permission or knowledge of the user, in which case (particularly with third-party cookies, which can be shared between different web sites) it can be a breach of privacy. Web analytics is not just a tool for measuring web traffic but can be used as a tool for business and market research, and to assess and improve the effectiveness of a web site. Web analytics applications can also help companies measure the results of traditional print or broadcast advertising campaigns; for example, they help one to estimate how traffic to a website changes after the launch of a new advertising campaign. Web analytics provides information about the number of visitors to a website and the number of page views. It helps gauge traffic and popularity trends, which is useful for market research.

There are two categories of web analytics: off-site and on-site. Off-site web analytics refers to web measurement and analysis regardless of whether you own or maintain a website. It includes the measurement of a website's potential audience (opportunity), share of voice (visibility), and buzz (comments) happening on the Internet as a whole. On-site web analytics measures a visitor's behavior once on your website. This includes its drivers and conversions; for example, the degree to which different landing pages are associated with online purchases. On-site web analytics measures the performance of your website in a commercial context. This data is typically compared against key performance indicators, and used to improve a web site's or marketing campaign's audience response. Google Analytics is the most widely used on-site web analytics service, although new tools are emerging that provide additional layers of information, including heat maps and session replay.
Historically, web analytics has referred to on-site visitor measurement. However in recent years this has blurred, mainly because vendors are producing tools that span both categories.

On-site web analytics technologies

Many different vendors provide on-site web analytics software and services. There are two main technical ways of collecting the data. The first and older method, server log file analysis, reads the logfiles in which the web server records file requests by browsers. The second method, page tagging, uses JavaScript embedded in the site page code to make image requests to a third-party analytics-dedicated server, whenever a page is rendered by a web browser or, if desired, when a mouse click occurs. Both collect data that can be processed to produce web traffic reports. In addition, other data sources may be added to augment the web site behavior data described above. For example: e-mail open and click-through rates, direct mail campaign data, sales and lead history, or other data types as needed.



Web server logfile analysis

Web servers record some of their transactions in a logfile. It was soon realized that these logfiles could be read by a program to provide data on the popularity of the website. Thus arose web log analysis software. In the early 1990s, web site statistics consisted primarily of counting the number of client requests (or hits) made to the web server. This was a reasonable method initially, since each web site often consisted of a single HTML file. However, with the introduction of images in HTML, and web sites that spanned multiple HTML files, this count became less useful. The first true commercial Log Analyzer was released by IPRO in 1994.[2] Two units of measure were introduced in the mid 1990s to gauge more accurately the amount of human activity on web servers. These were page views and visits (or sessions). A page view was defined as a request made to the web server for a page, as opposed to a graphic, while a visit was defined as a sequence of requests from a uniquely identified client that expired after a certain amount of inactivity, usually 30 minutes. The page views and visits are still commonly displayed metrics, but are now considered rather rudimentary. The emergence of search engine spiders and robots in the late 1990s, along with web proxies and dynamically assigned IP addresses for large companies and ISPs, made it more difficult to identify unique human visitors to a website. Log analyzers responded by tracking visits by cookies, and by ignoring requests from known spiders. The extensive use of web caches also presented a problem for logfile analysis. If a person revisits a page, the second request will often be retrieved from the browser's cache, and so no request will be received by the web server. This means that the person's path through the site is lost. Caching can be defeated by configuring the web server, but this can result in degraded performance for the visitor and bigger load on the servers.
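A minimal sketch of what web log analysis software does: counting hits and page views from Common Log Format lines. The sample log lines, the regular expression, and the notion of what counts as a "page" are all simplifications of our own:

```python
import re

# Hypothetical Common Log Format lines: one page with one image,
# then a second page from a different client.
LOG = """\
1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /index.html HTTP/1.0" 200 2326
1.2.3.4 - - [10/Oct/2000:13:55:36 -0700] "GET /logo.gif HTTP/1.0" 200 512
5.6.7.8 - - [10/Oct/2000:13:57:01 -0700] "GET /about.html HTTP/1.0" 200 1024
"""

request_re = re.compile(r'"(?:GET|POST) (\S+) HTTP/[\d.]+"')

hits = 0
page_views = 0
for line in LOG.splitlines():
    m = request_re.search(line)
    if not m:
        continue
    hits += 1                                  # every file request is a hit
    if m.group(1).endswith(('.html', '/')):    # crude definition of a "page"
        page_views += 1                        # graphics etc. are excluded

print(hits, page_views)  # 3 hits, 2 page views
```

This illustrates why hit counts overstate popularity: the image request inflates the hit count but not the page view count.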

Page tagging
Concerns about the accuracy of logfile analysis in the presence of caching, and the desire to be able to perform web analytics as an outsourced service, led to the second data collection method, page tagging or 'Web bugs'. In the mid 1990s, Web counters were commonly seen; these were images included in a web page that showed the number of times the image had been requested, which was an estimate of the number of visits to that page. In the late 1990s this concept evolved to include a small invisible image instead of a visible one and, by using JavaScript, to pass along with the image request certain information about the page and the visitor. This information can then be processed remotely by a web analytics company, and extensive statistics generated. The web analytics service also manages the process of assigning a cookie to the user, which can uniquely identify them during their visit and in subsequent visits. Cookie acceptance rates vary significantly between web sites and may affect the quality of data collected and reported. Collecting web site data using a third-party data collection server (or even an in-house data collection server) requires an additional DNS look-up by the user's computer to determine the IP address of the collection server. On occasion, delays in completing successful or failed DNS look-ups may result in data not being collected. With the increasing popularity of Ajax-based solutions, an alternative to the use of an invisible image is to implement a call back to the server from the rendered page. In this case, when the page is rendered on the web browser, a piece of Ajax code would call back to the server and pass information about the client that can then be aggregated by a web analytics company. This is in some ways flawed by browser restrictions on the servers which can be contacted with XmlHttpRequest objects.
Also, this method can lead to slightly lower reported traffic levels, since the visitor may stop the page from loading in mid-response before the Ajax call is made.



Logfile analysis vs page tagging

Both logfile analysis programs and page tagging solutions are readily available to companies that wish to perform web analytics. In some cases, the same web analytics company will offer both approaches. The question then arises of which method a company should choose. There are advantages and disadvantages to each approach.[3]

Advantages of logfile analysis

The main advantages of logfile analysis over page tagging are as follows:

The web server normally already produces logfiles, so the raw data is already available. No changes to the website are required.
The data is on the company's own servers, and is in a standard, rather than a proprietary, format. This makes it easy for a company to switch programs later, use several different programs, and analyze historical data with a new program.
Logfiles contain information on visits from search engine spiders, which generally do not execute JavaScript on a page and are therefore not recorded by page tagging. Although these should not be reported as part of the human activity, this is useful information for search engine optimization.
Logfiles require no additional DNS lookups or TCP slow starts. Thus there are no external server calls which can slow page load speeds or result in uncounted page views.
The web server reliably records every transaction it makes, e.g. serving PDF documents and content generated by scripts, and does not rely on the visitors' browsers cooperating.

Advantages of page tagging

The main advantages of page tagging over logfile analysis are as follows:

Counting is activated by opening the page (given that the web client runs the tag scripts), not by requesting it from the server. If a page is cached, it will not be counted by the server. Cached pages can account for up to one-third of all pageviews, and not counting cached pages seriously skews many site metrics. It is for this reason that server-based log analysis is not considered suitable for analysis of human activity on websites.
Data is gathered via a component ("tag") in the page, usually written in JavaScript, though Java or, increasingly, Flash can also be used. Ajax can also be used in conjunction with a server-side scripting language (such as PHP) to manipulate the data and (usually) store it in a database, enabling complete control over how the data is represented.
The script may have access to additional information on the web client or on the user, not sent in the query, such as visitors' screen sizes and the price of the goods they purchased.
Page tagging can report on events which do not involve a request to the web server, such as interactions within Flash movies, partial form completion, and mouse events such as onClick, onMouseOver, onFocus and onBlur.
The page tagging service manages the process of assigning cookies to visitors; with logfile analysis, the server has to be configured to do this.
Page tagging is available to companies who do not have access to their own web servers.
Lately, page tagging has become a standard in web analytics.[4]

Economic factors

Logfile analysis is almost always performed in-house. Page tagging can be performed in-house, but it is more often provided as a third-party service. The economic difference between these two models can also be a consideration for a company deciding which to purchase.

Logfile analysis typically involves a one-off software purchase; however, some vendors are introducing maximum annual page views, with additional costs to process additional information. In addition to commercial offerings, several open-source logfile analysis tools are available free of charge.
For logfile analysis you have to store and archive your own data, which often grows very large quickly. Although the cost of hardware to do this is minimal, the overhead for an IT department can be considerable.
For logfile analysis you need to maintain the software, including updates and security patches.
Complex page tagging vendors charge a monthly fee based on volume, i.e. the number of pageviews collected per month.

Which solution is cheaper to implement depends on the amount of technical expertise within the company, the vendor chosen, the amount of activity seen on the web sites, the depth and type of information sought, and the number of distinct web sites needing statistics. Regardless of the vendor solution or data collection method employed, the cost of web visitor analysis and interpretation should also be included; that is, the cost of turning raw data into actionable information. This can come from the use of third-party consultants, the hiring of an experienced web analyst, or the training of a suitable in-house person. A cost-benefit analysis can then be performed; for example, what revenue increase or cost savings can be gained by analysing the web visitor data?


Hybrid methods
Some companies produce solutions that collect data through both logfiles and page tagging and can analyze both kinds. By using a hybrid method, they aim to produce more accurate statistics than either method on its own. An early hybrid solution was produced in 1998 by Rufus Evison.

Geolocation of visitors
With IP geolocation, it is possible to track visitors' locations. Using an IP geolocation database or API, visitors can be geolocated to city, region or country level.[5] IP Intelligence, or Internet Protocol (IP) Intelligence, is a technology that maps the Internet and catalogues IP addresses by parameters such as geographic location (country, region, state, city and postcode), connection type, Internet Service Provider (ISP), proxy information, and more. The first generation of IP Intelligence was referred to as geotargeting or geolocation technology. This information is used by businesses for online audience segmentation in applications such as online advertising, behavioral targeting, content localization (or website localization), digital rights management, personalization, online fraud detection, geographic rights management, localized search, enhanced analytics, global traffic management, and content distribution.



Click analytics
Click analytics is a special type of web analytics that gives special attention to clicks. Commonly, click analytics focuses on on-site analytics. An editor of a web site uses click analytics to determine the performance of his or her particular site, with regard to where the users of the site are clicking. Also, click analytics may happen in real time or not, depending on the type of information sought. Typically, front-page editors on high-traffic news media sites will want to monitor their pages in real time, to optimize the content. Editors, designers or other stakeholders may analyze clicks over a wider time frame to help them assess the performance of writers, design elements, advertisements, etc.

[Figure: Clickpath analysis, with referring pages on the left and arrows and rectangles differing in thickness and expanse to symbolize movement quantity.]

Data about clicks may be gathered in at least two ways. Ideally, a click is "logged" when it occurs, and this method requires some functionality that picks up relevant information when the event occurs. Alternatively, one may institute the assumption that a page view is a result of a click, and therefore log a simulated click that led to that page view.

Customer lifecycle analytics

Customer lifecycle analytics is a visitor-centric approach to measurement that falls under the umbrella of lifecycle marketing. Page views, clicks and other events (such as API calls, access to third-party services, etc.) are all tied to an individual visitor instead of being stored as separate data points. Customer lifecycle analytics attempts to connect all the data points into a marketing funnel that can offer insights into visitor behavior and website optimization.

Other methods
Other methods of data collection are sometimes used. Packet sniffing collects data by sniffing the network traffic passing between the web server and the outside world. Packet sniffing involves no changes to the web pages or web servers. Integrating web analytics into the web server software itself is also possible.[6] Both these methods claim to provide better real-time data than other methods.

On-site web analytics - definitions

There are no globally agreed definitions within web analytics, as the industry bodies have been trying to agree on definitions that are useful and definitive for some time. The main bodies who have had input in this area have been JICWEBS (The Joint Industry Committee for Web Standards in the UK and Ireland) [7], ABCe (Audit Bureau of Circulations electronic, UK and Europe) [8], The DAA (Digital Analytics Association) [9], formerly known as the WAA (Web Analytics Association, US), and to a lesser extent the IAB (Interactive Advertising Bureau). However, many terms are used in consistent ways from one major analytics tool to another, so the following list, based on those conventions, can be a useful starting point. Both the WAA and the ABCe provide more definitive lists for those who are declaring their statistics as using the metrics defined by either.

Hit - A request for a file from the web server. Available only in log analysis. The number of hits received by a website is frequently cited to assert its popularity, but this number is extremely misleading and dramatically overstates it. A single web page typically consists of multiple (often dozens of) discrete files, each of which is counted as a hit as the page is downloaded, so the hit count is really an arbitrary number more reflective of the complexity of individual pages than of the website's actual popularity. The total number of visits or page views provides a more realistic and accurate assessment of popularity.

Page view - A request for a file, or sometimes an event such as a mouse click, that is defined as a page in the setup of the web analytics tool; in page tagging, an occurrence of the script being run. In log analysis, a single page view may generate multiple hits, as all the resources required to view the page (images, .js and .css files) are also requested from the web server.

Visit / Session - A series of page requests or, in the case of tags, image requests from the same uniquely identified client. A visit is considered ended when no requests have been recorded for some number of elapsed minutes. A 30-minute limit ("time out") is used by many analytics tools but can, in some tools, be changed to another number of minutes. Analytics data collectors and analysis tools have no reliable way of knowing whether a visitor has looked at other sites between page views; events are counted as one visit as long as they (page views, clicks, whatever is being recorded) are no more than 30 minutes apart. Note that a visit can consist of one page view, or thousands.

First Visit / First Session - (also called 'Absolute Unique Visitor' in some tools) A visit from a uniquely identified client that has, in theory, made no previous visits. Since the only way of knowing whether the uniquely identified client has been to the site before is the presence of a persistent cookie received on a previous visit, the First Visit label is not reliable if the site's cookies have been deleted since the previous visit.

Visitor / Unique Visitor / Unique User - The uniquely identified client generating page views or hits within a defined time period (e.g. day, week or month). A uniquely identified client is usually a combination of a machine (one's desktop computer at work, for example) and a browser (Firefox on that machine). Identification is usually via a persistent cookie placed on the computer by the site's page code. An older method, used in log file analysis, is the unique combination of the computer's IP address and the User-Agent (browser) information provided to the web server by the browser. It is important to understand that the "visitor" is not the same as the human being sitting at the computer at the time of the visit, since an individual human can use different computers or, on the same computer, different browsers, and will be counted as a different visitor in each circumstance. Increasingly, but still somewhat rarely, visitors are uniquely identified by Flash LSOs (Local Shared Objects), which are less susceptible to privacy enforcement.

Repeat Visitor - A visitor that has made at least one previous visit. The period between the last and current visit is called visitor recency and is measured in days.

New Visitor - A visitor that has not made any previous visits. This definition creates a certain amount of confusion (see common confusions below), and is sometimes substituted with analysis of first visits.

Impression - The most common definition of "impression" is an instance of an advertisement appearing on a viewed page. Note that an advertisement can appear on a viewed page below the area actually displayed on the screen, so most measures of impressions do not guarantee that the advertisement was viewable.

Single Page Visit / Singleton - A visit in which only a single page is viewed (a 'bounce').

Bounce Rate - The percentage of visits that are single-page visits.

Exit Rate / % Exit - A statistic applied to an individual page, not a web site: the percentage of visits seeing a page where that page is the final page viewed in the visit.

Page Time Viewed / Page Visibility Time / Page View Duration - The time a single page (or a blog, ad banner, etc.) is on the screen, measured as the calculated difference between the time of the request for that page and the time of the next recorded request. If there is no next recorded request, the viewing time of that instance of that page is not included in reports.
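The 30-minute timeout rule for visits described above can be sketched as a simple sessionization pass over one visitor's timestamped requests. This is a minimal illustration, not any particular tool's algorithm; the function name and data shapes are invented for the example:

```python
# Group one visitor's page-request timestamps (in minutes) into visits,
# starting a new visit whenever the gap since the previous request
# exceeds the timeout (30 minutes by default, as most tools use).
def sessionize(timestamps, timeout=30):
    visits = []
    for t in sorted(timestamps):
        if visits and t - visits[-1][-1] <= timeout:
            visits[-1].append(t)   # gap within the timeout: same visit
        else:
            visits.append([t])     # gap too large: a new visit begins
    return visits

# Requests at minutes 0, 10 and 20, a four-hour pause, then one more page view:
visits = sessionize([0, 10, 20, 260, 270])
# -> two visits: [[0, 10, 20], [260, 270]]
```

Real tools apply the same idea per uniquely identified client, usually keyed by the visitor cookie rather than by name.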


Session Duration / Visit Duration - Average amount of time that visitors spend on the site each time they visit. This metric is complicated by the fact that analytics programs cannot measure the length of the final page view.[10]

Average Page View Duration - Average amount of time that visitors spend on an average page of the site.

Active Time / Engagement Time - Average amount of time that visitors spend actually interacting with content on a web page, based on mouse moves, clicks, hovers and scrolls. Unlike Session Duration and Page View Duration / Time on Page, this metric can accurately measure the length of engagement in the final page view, but it is not available in many analytics tools or data collection methods.

Average Page Depth / Page Views per Average Session - The approximate "size" of an average visit, calculated by dividing the total number of page views by the total number of visits.

Frequency / Session per Unique - Measures how often visitors come to a website in a given time period. It is calculated by dividing the total number of sessions (or visits) by the total number of unique visitors during a specified time period, such as a month or year. It is sometimes used interchangeably with the term "loyalty".

Click path - The chronological sequence of page views within a visit or session.

Click - "refers to a single instance of a user following a hyperlink from one page in a site to another".[11]

Site Overlay - A report technique in which statistics (clicks) or hot spots are superimposed, by physical location, on a visual snapshot of the web page.
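Several of the metrics above are simple ratios over the same visit data. A toy calculation (the variable names and data are invented for illustration) makes the arithmetic concrete:

```python
# Each visit is the list of pages viewed during it, keyed by visitor.
visits_by_visitor = {
    "alice": [["home", "pricing", "signup"], ["home"]],
    "bob":   [["home"]],
}

all_visits = [v for vs in visits_by_visitor.values() for v in vs]

# Bounce rate: share of visits that are single-page visits (2 of 3 here).
bounce_rate = sum(1 for v in all_visits if len(v) == 1) / len(all_visits)

# Average page depth: total page views divided by total visits (5 / 3).
page_depth = sum(len(v) for v in all_visits) / len(all_visits)

# Frequency: total visits divided by unique visitors (3 / 2 = 1.5).
frequency = len(all_visits) / len(visits_by_visitor)
```

The same ratios hold whatever the collection method, which is why these metrics are comparable across tools even when the underlying hit counts differ.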


Common sources of confusion in web analytics

The hotel problem
The hotel problem is generally the first problem encountered by a user of web analytics. The problem is that the unique visitors for each day in a month do not add up to the same total as the unique visitors for that month. This appears to an inexperienced user to be a problem in whatever analytics software they are using. In fact it is a simple property of the metric definitions. The way to picture the situation is by imagining a hotel. The hotel has two rooms (Room A and Room B).
         Day 1   Day 2   Day 3   Total
Room A   John    John    Mark    2 unique users
Room B   Mark    Jane    Jane    2 unique users
Total    2       2       2       ?

As the table shows, the hotel has two unique users each day over three days. The sum of the totals with respect to the days is therefore six. During the period each room has had two unique users. The sum of the totals with respect to the rooms is therefore four. Actually only three visitors have been in the hotel over this period. The problem is that a person who stays in a room for two nights will get counted twice if you count them once on each day, but is only counted once if you are looking at the total for the period. Any software for web analytics will sum these correctly for the chosen time period, thus leading to the problem when a user tries to compare the totals.
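The same arithmetic can be checked directly: summing per-day unique counts double-counts anyone who appears on more than one day, while a set union over the whole period counts each person once (guest names as in the table above):

```python
# Guests present on each day, as in the hotel example.
days = [
    {"John", "Mark"},   # day 1
    {"John", "Jane"},   # day 2
    {"Mark", "Jane"},   # day 3
]

# Summing the daily unique counts gives 2 + 2 + 2 = 6 ...
daily_sum = sum(len(d) for d in days)

# ... but the union over the period contains only John, Mark and Jane.
period_uniques = len(set().union(*days))
```

This is exactly why daily unique-visitor figures in any analytics tool will not sum to the monthly unique-visitor figure.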



New visitors + repeat visitors do not equal total visitors

Another common misconception in web analytics is that the sum of the new visitors and the repeat visitors ought to be the total number of visitors. Again this becomes clear if the visitors are viewed as individuals on a small scale, but it still causes a large number of complaints that analytics software cannot be working, owing to a failure to understand the metrics.

Here the culprit is the metric of a new visitor. There is really no such thing as a new visitor when you consider a web site from an ongoing perspective. If a visitor makes their first visit on a given day and then returns to the web site later the same day, they are both a new visitor and a repeat visitor for that day. So if we look at them as an individual, which are they? The answer has to be both, so the definition of the metric is at fault. A new visitor is not an individual; it is a facet of the web measurement. For this reason it is easier to conceptualize the same facet as a first visit (or first session). This resolves the conflict and removes the confusion: nobody expects the number of first visits plus the number of repeat visitors to give the total number of visitors. The first-visit metric has the same value as the new-visitor metric, but it is clearer that it will not add up in this fashion. On the day in question, our chosen individual made one first visit and one repeat visit. The number of first visits and the number of repeat visits do add up to the total number of visits for that day.
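The accounting identity described here (first visits plus repeat visits equal total visits, while "new visitors" plus "repeat visitors" over-count a returning individual) can be made concrete with a toy example (visitor IDs invented for illustration):

```python
# Chronological visits on one day, identified by visitor ID.
# A visitor's first-ever visit is a "first visit"; later ones are repeats.
visits = ["v1", "v1", "v2"]          # v1 visits twice, v2 once

seen = set()
first_visits = repeat_visits = 0
for visitor in visits:
    if visitor in seen:
        repeat_visits += 1
    else:
        first_visits += 1
        seen.add(visitor)

# first_visits + repeat_visits equals total visits (2 + 1 = 3), but calling
# v1 both a "new visitor" and a "repeat visitor" would yield 3 "visitors"
# when only 2 people were involved.
```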

Web analytics methods

Problems with cookies
Historically, vendors of page-tagging analytics solutions have used third-party cookies sent from the vendor's domain instead of the domain of the website being browsed. Third-party cookies can handle visitors who cross multiple unrelated domains within the company's site, since the cookie is always handled by the vendor's servers. However, third-party cookies in principle allow tracking an individual user across the sites of different companies, allowing the analytics vendor to collate the user's activity on sites where they provided personal information with their activity on other sites where they thought they were anonymous. Although web analytics companies deny doing this, other companies, such as those supplying banner ads, have done so. Privacy concerns about cookies have therefore led a noticeable minority of users to block or delete third-party cookies. In 2005, some reports showed that about 28% of Internet users blocked third-party cookies and 22% deleted them at least once a month.[12] Most vendors of page-tagging solutions have now moved to provide at least the option of using first-party cookies (cookies assigned from the client subdomain).

Another problem is cookie deletion. When web analytics depend on cookies to identify unique visitors, the statistics depend on a persistent cookie holding a unique visitor ID. When users delete cookies, they usually delete both first- and third-party cookies. If this is done between interactions with the site, the user will appear as a first-time visitor at their next interaction point. Without a persistent and unique visitor ID, conversions, click-stream analysis, and other metrics dependent on the activities of a unique visitor over time cannot be accurate.

Cookies are used because IP addresses are not always unique to users and may be shared by large groups or proxies. In some cases, the IP address is combined with the user agent in order to identify a visitor more accurately when cookies are not available. However, this only partially solves the problem, because users behind a proxy server often share the same user agent as well. Other methods of uniquely identifying a user are technically challenging, would limit the trackable audience, or would be considered suspicious. Cookies are the selected option because they reach the lowest common denominator without using technologies regarded as spyware.
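The IP-plus-user-agent fallback amounts to hashing the two strings into a single pseudonymous visitor key. A minimal sketch (the hash choice, truncation and function name are illustrative, not any vendor's actual scheme):

```python
import hashlib

def visitor_key(ip, user_agent):
    # Combine IP address and User-Agent into a stable pseudonymous ID.
    # Two users behind the same proxy with the same browser collapse to
    # one key, which is exactly the inaccuracy described in the text.
    raw = f"{ip}|{user_agent}".encode("utf-8")
    return hashlib.sha1(raw).hexdigest()[:16]

a = visitor_key("203.0.113.7", "Mozilla/5.0 (Windows NT 6.1)")
b = visitor_key("203.0.113.7", "Mozilla/5.0 (Windows NT 6.1)")  # same key as a
c = visitor_key("203.0.113.7", "Mozilla/5.0 (X11; Linux)")      # different key
```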



Secure analytics (metering) methods

All the methods described above (and some others not mentioned here, such as sampling) share the central problem of being vulnerable to manipulation, both inflation and deflation. This means these methods are imprecise and insecure in any reasonable model of security. The issue has been addressed in a number of papers,[13][14][15][16] but to date the solutions suggested in these papers remain theoretical, possibly owing to lack of interest from the engineering community, or because of the financial gain the current situation provides to the owners of large websites. For more details, consult the aforementioned papers.

[1] The Official WAA Definition of Web Analytics (http://www.webanalyticsassociation.org/?page=aboutus)
[2] Web Traffic Data Sources and Vendor Comparison (http://www.advanced-web-metrics.com/docs/web-data-sources.pdf) by Brian Clifton and Omega Digital Media Ltd
[3] Increasing Accuracy for Online Business Growth (http://www.advanced-web-metrics.com/blog/2008/02/16/accuracy-whitepaper/) - a web analytics accuracy whitepaper
[4] "Revisiting log file analysis versus page tagging": McGill University Web Analytics blog article (CMIS 530) Archive (http://web.archive.org/web/20110706165119/http://web.analyticsblog.ca/2010/02/revisiting-log-file-analysis-versus-page-tagging/)
[5] IPInfoDB (2009-07-10). "IP geolocation database" (http://ipinfodb.com/ip_database.php). IPInfoDB. Retrieved 2009-07-19.
[6] Web analytics integrated into web software itself (http://portal.acm.org/citation.cfm?id=1064677.1064679)
[7] http://www.jicwebs.org/
[8] http://www.abc.org.uk/
[9] http://www.digitalanalyticsassociation.org/default.asp?page=aboutus
[10] ClickTale Blog: What Google Analytics Can't Tell You, Part 1 (http://blog.clicktale.com/2009/10/14/what-google-analytics-cant-tell-you-part-1/)
[11] Clicks - Analytics Help (http://www.google.com/support/googleanalytics/bin/answer.py?hl=en&answer=32981)
[12] ClickZ report (http://www.clickz.com/showPage.html?page=3489636)
[13] Naor, M.; Pinkas, B. (1998). "Secure and efficient metering". Advances in Cryptology - EUROCRYPT '98. Lecture Notes in Computer Science 1403, p. 576. doi:10.1007/BFb0054155. ISBN 3-540-64518-7.
[14] Naor, M.; Pinkas, B. (1998). "Secure accounting and auditing on the Web". Computer Networks and ISDN Systems 30: 541. doi:10.1016/S0169-7552(98)00116-0.
[15] Franklin, M. K.; Malkhi, D. (1997). "Auditable metering with lightweight security". Financial Cryptography. Lecture Notes in Computer Science 1318, p. 151. doi:10.1007/3-540-63594-7_75. ISBN 978-3-540-63594-9.
[16] Johnson, R.; Staddon, J. (2007). "Deflation-secure web metering". International Journal of Information and Computer Security 1: 39. doi:10.1504/IJICS.2007.012244.

Clifton, Brian (2010). Advanced Web Metrics with Google Analytics, 2nd edition. Sybex (paperback).
Kaushik, Avinash (2009). Web Analytics 2.0 - The Art of Online Accountability and Science of Customer Centricity. Sybex, Wiley.
Mortensen, Dennis R. (2009). Yahoo! Web Analytics. Sybex.
Farris, P.; Bendle, N.T.; Pfeifer, P.E.; Reibstein, D.J. (2009). Key Marketing Metrics: The 50+ Metrics Every Manager Needs to Know. Prentice Hall, London.
Plaza, B. (2009). "Monitoring web traffic source effectiveness with Google Analytics: An experiment with time series". Aslib Proceedings, 61(5): 474-482.
Arikan, Akin (2008). Multichannel Marketing: Metrics and Methods for On and Offline Success. Sybex.
Tullis, Tom; Albert, Bill (2008). Measuring the User Experience: Collecting, Analyzing and Presenting Usability Metrics. Morgan Kaufmann, Elsevier, Burlington, MA.
Kaushik, Avinash (2007). Web Analytics: An Hour a Day. Sybex, Wiley.
Bradley, N. (2007). Marketing Research: Tools and Techniques. Oxford University Press, Oxford.
Burby, Jason; Atchison, Shane (2007). Actionable Web Analytics: Using Data to Make Smart Business Decisions.

Davis, J. (2006). Marketing Metrics: How to Create Accountable Marketing Plans That Really Work. John Wiley & Sons (Asia).
Peterson, Eric T. (2005). Web Site Measurement Hacks. O'Reilly ebook.
Peterson, Eric T. (2004). Web Analytics Demystified: A Marketer's Guide to Understanding How Your Web Site Affects Your Business. Celilo Group Media.
Lenskold, J. (2003). Marketing ROI: How to Plan, Measure and Optimise Strategies for Profit. London: McGraw-Hill Contemporary.
Sterne, J. (2002). Web Metrics: Proven Methods for Measuring Web Site Success. London: John Wiley & Sons.
Srinivasan, J. (2001). E commerce Metrics, Models and Examples. London: Prentice Hall.


External links
Technology enablers and business goals for web analytics initiatives
ABCe (Audit Bureau of Circulations electronic, UK and Europe)
JICWEBS (The Joint Industry Committee for Web Standards in the UK and Ireland)
Cancanit Website Analysis (online web analytics software)
Piwik - open source web analytics software

List of web analytics software

This is a list of web analytics software used to collect and display data about a website's visitors.

Self-hosted software
Free / Open source (FLOSS)
This is a comparison table of web analytics software released under a free software license.
Name                 Platform        Supported databases   Tracking method                              Latest stable release   License
Analog               C               Logfile-based         Web log files                                6.0                     GNU GPL
AWStats              Perl            Logfile-based         Web log files                                7.1                     GNU GPL
CrawlTrack           PHP             MySQL                 PHP pagetag                                  3.3.2                   GNU GPL
Open Web Analytics   PHP             MySQL                 JavaScript or PHP pagetag                    1.5.2                   GNU GPL
Piwik                PHP             MySQL                 JavaScript or PHP pagetag or Web log files                           GNU GPL
SnowPlow             Apache Hadoop   Apache Hive           JavaScript                                                           Apache License
W3Perl               Perl            Logfile-based         Web log files                                3.16                    GNU GPL




Proprietary

This is a comparison of proprietary web analytics software.

Deep Log Analyzer (Deep Software Inc.) - MS Access; web log files and cookies via JavaScript; latest stable release 5.0; free, or $199.95 per computer.
Sawmill (Flowerfire Inc) - Windows/Linux/BSD/POSIX; MS SQL, MySQL, Oracle Database, PostgreSQL or proprietary database; cookies via JavaScript and logs; mixed pricing, from $99 per profile.
Splunk (Splunk Inc.) - Windows/Linux/BSD/Solaris; proprietary database; web log files; negotiable pricing, with 500 MB per day free.
Tealeaf cx - MS SQL or proprietary database; network traffic monitor.
Unica NetInsight (IBM) - MS SQL, DB2, Oracle Database or Netezza; web log files and cookies (with or without JavaScript); latest stable release 8.6 (as of 2012-05-15); various pricing options.
Logscape (Liquidlabs) - Windows/Linux/BSD/Solaris; web log files; latest stable release 1.2; from $1,000 per server.
LogZilla (LogZilla, LLC) - Linux; MySQL; collects data via syslog-ng; latest stable release 3.1.122; mixed pricing, including a free version. LogZilla is 99% open source, using a single file for licensing.

Hosted / Software as a service

This is a comparison of hosted web analytics software offered as a service. Nearly all of the listed products collect data with cookies set via JavaScript page tags; exceptions are noted. Products include Analyzer (AT Internet), Apptegic, Bango Mobile Web Analytics (Bango plc; mobile ID and cookies), Chartbeat (Chartbeat Inc.), ClickTale, Clicky (Roxr Software Ltd), Coremetrics (IBM), Digital Analytix (comScore), Flurry Analytics (Flurry), Gauges (GitHub), Google Analytics (Google), GoSquared (Go Squared Ltd), HitsLink, Histats, InstaVista for Web Analytics (Anametrix), KISSmetrics, Logaholic, Mapmyuser (Mapmyuser, LLC), Mixpanel, Quantcast (Quantcast Corporation), SiteCatalyst (Omniture, Adobe Systems), TraceMyIP (TraceMyIP, LLC; cookies or cookie-less tracking via JavaScript/server), W3Counter (Awio Web Services LLC), Webtrekk Q3 (Webtrekk), Webtrends, Woopra (iFusion Labs LLC) and Yahoo! Web Analytics (Yahoo!). Pricing ranges from free (for example Google Analytics, Flurry Analytics, Histats, Quantcast and Yahoo! Web Analytics) through fixed monthly subscriptions to negotiable enterprise contracts (for example AT Internet, Coremetrics, comScore Digital Analytix and Adobe SiteCatalyst).

[1] https://github.com/snowplow/snowplow
[2] http://www.deep-software.com
[3] http://logscape.com
[4] http://www.apptegic.com/
[5] http://getclicky.com/
[6] http://parse.ly/
[7] http://get.gaug.es/
[8] https://www.gosquared.com/
[9] http://www.hitslink.com/
[10] http://www.histats.com/
[11] http://www.kissmetrics.com/
[12] http://www.logaholic.com/
[13] "Logaholic in cPanel" (http://blog.cpanel.net/preview_of_the_new_logoholic_website_analytics_feature/). Retrieved December 5, 2012.
[14] https://mixpanel.com/

External links
List of web analytics software at the Open Directory Project

Google Analytics


Operating system: Cross-platform (web-based application)
Type: Website statistics and analysis[1]

Google Analytics (GA) is a service offered by Google that generates detailed statistics about the visits to a website. The product is aimed at marketers, as opposed to the webmasters and technologists from whose ranks the web analytics industry originally grew. It is the most widely used website statistics service. The basic service is free of charge and a premium version is available for a fee.[2] GA can track visitors from all referrers, including search engines, display advertising, pay-per-click networks, e-mail marketing and digital collateral such as links within PDF documents.

Google acquired Urchin Software Corp. in April 2005.[3] Google's service was developed from Urchin on Demand. The system also incorporates ideas from Adaptive Path, whose product, Measure Map, was acquired and used in the redesign of Google Analytics in 2006.[4] Google continued to sell the standalone, installable Urchin WebAnalytics Software through a network of value-added resellers until it was discontinued on March 28, 2012.[5][6]

The Google-branded version was rolled out in November 2005 to anyone who wished to sign up. However, due to extremely high demand for the service, new sign-ups were suspended only a week later. As capacity was added to the system, Google began using a lottery-type invitation-code model. Prior to August 2006, Google sent out batches of invitation codes as server availability permitted; since mid-August 2006 the service has been fully available to all users, whether or not they use Google for advertising.

The latest version of the Google Analytics tracking code is known as the asynchronous tracking code,[7] which Google claims is significantly more sensitive and accurate, and is able to track even very short activities on the website. The previous version delayed page loading and so, for performance reasons, was generally placed just before the closing </body> HTML tag. The new code can be placed between the <head>...</head> HTML tags because, once triggered, it runs in parallel with page loading.

In April 2011, Google announced the availability of a new version of Google Analytics featuring multiple dashboards, more custom-report options and a new interface design.[8] This version was later updated with other features such as real-time analytics and goal flow charts.[9][10]



Integrated with AdWords, Google Analytics lets users review online campaigns by tracking landing-page quality and conversions (goals). Goals might include sales, lead generation, viewing a specific page, or downloading a particular file. GA's approach is to show high-level, dashboard-type data for the casual user, and more in-depth data further into the report set. GA analysis can identify poorly performing pages with techniques such as funnel visualization, and show where visitors came from (referrers), how long they stayed and their geographical position. It also provides more advanced features, including custom visitor segmentation.

Google Analytics e-commerce reporting can track sales activity and performance. The e-commerce reports show a site's transactions, revenue, and many other commerce-related metrics.

A user can have 50 site profiles. Each profile generally corresponds to one website. It is limited to sites with fewer than 5 million pageviews per month (roughly 2 pageviews per second), unless the site is linked to an AdWords campaign.[11] Google Analytics includes Google Website Optimizer, rebranded as Google Analytics Content Experiments.[12][13]

Google Analytics is implemented with "page tags". A page tag, in this case called the Google Analytics Tracking Code (GATC), is a snippet of JavaScript code that the website owner adds to every page of the website. The GATC runs in the client browser when the client browses the page (if JavaScript is enabled in the browser), collects visitor data and sends it to a Google data-collection server as part of a request for a web beacon. The GATC loads a larger JavaScript file from the Google web server and then sets variables with the user's account number. The larger file (currently known as ga.js) is typically 18 KB. The file does not usually have to be loaded, however, because of browser caching: assuming caching is enabled in the browser, ga.js is downloaded only once at the start of the visit. Furthermore, as all websites that implement GA with the ga.js code use the same master file from Google, a browser that has previously visited any other website running Google Analytics will already have the file cached on the machine.

In addition to transmitting information to a Google server, the GATC sets first-party cookies (if cookies are enabled in the browser) on each visitor's computer. These cookies store anonymous information such as whether the visitor has been to the site before (new or returning visitor), the timestamp of the current visit, and the referrer site or campaign that directed the visitor to the page (e.g. search engine, keywords, banner or email). If the visitor arrived at the site by clicking on a link tagged with Urchin Tracking Module (UTM) codes, the tag values are passed to the database too.[14]
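The asynchronous GATC of this era was a short JavaScript snippet placed between the <head> tags. The standard ga.js form of that snippet is reproduced below, with the account ID replaced by a placeholder:

```html
<script type="text/javascript">
  var _gaq = _gaq || [];
  _gaq.push(['_setAccount', 'UA-XXXXX-Y']);  // the site's GA account ID
  _gaq.push(['_trackPageview']);             // record this page view

  // Load ga.js asynchronously so it does not block page rendering.
  (function() {
    var ga = document.createElement('script');
    ga.type = 'text/javascript';
    ga.async = true;
    ga.src = ('https:' == document.location.protocol ?
              'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
    var s = document.getElementsByTagName('script')[0];
    s.parentNode.insertBefore(ga, s);
  })();
</script>
```

Commands are queued in the _gaq array before ga.js finishes loading, which is what allows the tag to sit in the <head> without delaying the page.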

In addition, the Google Analytics for Mobile Package allows GA to be applied to mobile websites. The Mobile Package contains server-side tracking code that uses PHP, JavaServer Pages, ASP.NET, or Perl as its server-side language.[15]

However, many ad-filtering programs and extensions (such as Firefox's Adblock and NoScript) can block the GATC. This prevents some traffic and users from being tracked, and leads to holes in the collected data. Privacy networks like Tor will also mask the user's actual location and present inaccurate geographical data. Some users do not have JavaScript-enabled/capable browsers or turn this feature off. However, these limitations are considered small, affecting only a small percentage of visits.[16]

The largest potential impact on data accuracy comes from users deleting or blocking Google Analytics cookies.[17] Without cookies being set, GA cannot collect data. Any individual web user can block or delete cookies, resulting in

the loss of those visits' data for GA users. Website owners can encourage users not to disable cookies, for example by making visitors more comfortable using the site through posting a privacy policy.

These limitations affect the majority of web analytics tools, which use page tags (usually JavaScript programs) embedded in web pages to collect visitor data, store it in cookies on the visitor's computer, and transmit it to a remote database by pretending to load a tiny graphic "beacon".

Another limitation of GA for large websites is its use of sampling in generating many of its reports. To reduce the load on its servers and to return queries relatively quickly, GA limits reports to 500,000 randomly sampled visits at the profile level for its calculations. While margins of error are indicated for the visits metric, they are not provided for any other metrics in the GA reports. For small segments of data, the margin of error can be very large.[18]
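The sampling caveat can be quantified with the standard margin of error for a sampled proportion, z * sqrt(p(1-p)/n). The sketch below uses that textbook formula with invented example numbers; it is not how GA itself reports uncertainty:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p estimated from n samples."""
    return z * math.sqrt(p * (1 - p) / n)

# A 2% conversion rate estimated from a full 500,000-visit sample:
big = margin_of_error(0.02, 500_000)   # well under 0.1 percentage points
# The same rate estimated from a small segment of 1,000 sampled visits:
small = margin_of_error(0.02, 1_000)   # close to +/- 0.9 percentage points
```

This is why small segments carved out of a sampled report can swing far more than the headline visit counts suggest.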


Performance concerns
There have been several online discussions about the impact of Google Analytics on site performance.[19][20][21] However, Google introduced asynchronous JavaScript code in December 2009 to reduce the risk of slowing the loading of pages tagged with the ga.js script.[22][23]

Privacy issues
Due to its ubiquity, Google Analytics raises some privacy concerns. Whenever someone visits a website that uses Google Analytics, if JavaScript is enabled in the browser, Google tracks that visit via the user's IP address in order to determine the user's approximate geographic location. (To meet German legal requirements, Google Analytics can anonymize the IP address.[24])

The opt-in Google Account privacy policy[25] is quite different from the Google privacy policies applied to Google AdWords, or the terms of service for users of Google Analytics, which forbid the storing of PII (Personally Identifiable Information).[26][27] If a website visitor uses a Google Account as ID when entering a comment or uploading to a Google property such as Blogger[28] or YouTube,[29] then Google receives sufficient information to identify the user and thus associate the details of the website visit with that user. Google has announced an updated privacy policy which will allow Google to specifically identify and track users of any website that uses a Google Account, if that user is also a user of any other Google product (Gmail, Picasa, YouTube, BlogSpot, etc.) to which the same privacy policy applies.[25][30][31] Much of this Google Account profile information is optional and private (viewable only by Google) by default, and the user may update or remove it.[32][33] But, as described above, it is against Google's privacy policies and the Google Analytics Terms of Service to store personally identifiable information without a user's consent.

Google has also released a browser plugin that turns off the sending of page-visit data to Google.[34][35] Since this plug-in is produced and distributed by Google itself, it has drawn much discussion and criticism. Furthermore, awareness that Google scripts track user behaviour has spawned the production of multiple, often open-source, browser plug-ins to reject tracking cookies.[36] These plug-ins offer the user a choice of whether to allow Google Analytics (for example) to track his or her activities. Partially because of new European privacy laws, most modern browsers also allow users to reject tracking cookies, though Flash cookies can be a separate problem.

It has been anecdotally reported that, behind proxy servers and multiple firewalls, errors can occur that change time stamps and register invalid searches.[37] Webmasters who seek to mitigate Google Analytics-specific privacy issues can employ a number of alternatives whose backends are hosted on their own machines. Until its discontinuation, an example of such a product was Urchin WebAnalytics Software from Google itself.
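The IP-anonymization option mentioned above works by discarding the last octet of an IPv4 address before it is stored, so only the surrounding network is retained. The effect can be sketched as follows (a simplified illustration with an invented function name, not Google's implementation):

```python
def anonymize_ipv4(ip):
    # Zero the final octet so the stored address identifies only the
    # /24 network, not the individual host.
    octets = ip.split(".")
    octets[-1] = "0"
    return ".".join(octets)

masked = anonymize_ipv4("203.0.113.77")   # "203.0.113.0"
```

Geolocation at city level still works reasonably well on the truncated address, which is why the option trades little analytical value for the privacy gain.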



In May 2011 it was ruled that EU websites must get user permission to store non-essential cookies on client computers, with website owners given one year to comply before legal action would be enforced. This meant that all EU websites had to stop collecting Google Analytics data without the consent of the end user.[38][39]

Support and training

Google offers free Google Analytics IQ Lessons,[40] a $50 Google Analytics certification test,[41] a free Help Center[42] FAQ and a Google Groups forum[43] for official Google Analytics product support. New product features are announced on the Google Analytics Blog.[44] Enterprise support is provided through Certified Partners.[45]

APIs for third-party application support

The Google Analytics API[46] is used by third parties to build custom applications[47] such as reporting tools. Many such applications exist. One was built to run on iOS (Apple) devices and is featured in Apple's app store.[48]

Google Analytics is the most widely used website statistics service,[49] currently in use on around 55% of the 10,000 most popular websites.[50] Another market-share analysis claims that Google Analytics is used on around 49.95% of the top 1,000,000 websites (as currently ranked by Alexa).[51] A now-defunct source reported that Google Analytics was used by 57% of the 10,000 most popular websites as ranked by Alexa Internet. In May 2008, Pingdom released a survey stating that 161 (or 32%) of the 500 biggest sites globally, according to their Alexa rank, were using Google Analytics.[52] Notable sites using Google Analytics include Twitter, MySpace and Dailymotion.

[1] http://analytics.google.com/analytics
[2] "Get the Power of Google Analytics: Now available in Standard or Premium, whatever your needs are Google Analytics can help." (http://www.google.com/analytics/premium/features.html). Retrieved April 8, 2012.
[3] "Our history in depth" (http://www.google.com/about/company/history/). Google. Retrieved 2012-07-16.
[4] Official Google Blog: Here comes Measure Map (http://googleblog.blogspot.com/2006/02/here-comes-measure-map.html)
[5] Muret, Paul (January 20, 2012). "The End of an Era for Urchin Software" (http://analytics.blogspot.com/2012/01/end-of-era-for-urchin-software.html). Google Analytics. Retrieved April 7, 2012.
[6] Muret, Paul. "The End of an Era for Urchin Software" (http://www.google.com/urchin/). Google Analytics. Retrieved April 7, 2012.
[7] "Asynchronous Tracking Code" (http://www.google.com/support/analytics/bin/answer.py?answer=161379).
[8] "The New Google Analytics Available to Everyone" (http://analytics.blogspot.com/2011/04/new-google-analytics-available-to.html).
[9] "Introducing Flow Visualization: visualizing visitor flow" (http://analytics.blogspot.com/2011/10/introducing-flow-visualization.html).
[10] "What's happening on your site right now?" (http://analytics.blogspot.com/2011/09/whats-happening-on-your-site-right-now.html).
[11] Google Analytics Help: Does Google Analytics have a pageview limit? (http://www.google.com/support/analytics/bin/answer.py?hl=en&answer=55476)
[12] "Website Optimizer" (http://www.google.com/websiteoptimizer/). Google. Retrieved 2012-07-20.
[13] Tzemah, Nir. "Helping to Create Better Websites: Introducing Content Experiments" (http://analytics.blogspot.com/2012/06/helping-to-create-better-websites.html). Google Analytics Blog. Retrieved 4 June 2012.
[14] "Google Analytics: UTM Link Tagging Explained" (http://www.intownwebdesign.com/google-analytics/google-analytics-utm-link-tagging-explained.html).
[15] "Google Analytics for Mobile package" (http://code.google.com/intl/en/mobile/analytics/docs/web/).
[16] EU and US JavaScript Disabled Index numbers + Web Analytics data collection impact (http://visualrevenue.com/blog/2007/08/eu-and-us-javascript-disabled-index.html)

[17] "Increasing Accuracy for Online Business Growth" (http://www.advanced-web-metrics.com/blog/2008/02/16/accuracy-whitepaper/). A web analytics accuracy whitepaper.
[18] "Segmentation Options in Google Analytics" (http://www.epikone.com/blog/2009/04/21/segmentation-options-in-google-analytics/).
[19] Does Google Analytics Slow down page loading? (http://groups.google.com/group/analytics-help-basics/browse_thread/thread/62997a00d5a50406?pli=1)
[20] Google Analytics Code is Slowing Down My Site (http://www.analyticsmarket.com/blog/tracking-code-slows-my-site)
[21] Is Google Analytics Slow or Not? (http://www.woopra.com/blog/2009/02/04/is-google-analytics-slow-or-not/)
[22] Google Analytics Launches Asynchronous Tracking (http://googlecode.blogspot.com/2009/12/google-analytics-launches-asynchronous.html)
[23] Making the Web Faster (http://analytics.blogspot.com/2010/04/making-web-faster.html)
[24] "Tracking Code: The _gat Global Object" (https://developers.google.com/analytics/devguides/collection/gajs/methods/gaJSApi_gat#_gat._anonymizeIp). Google. January 24, 2012. Retrieved June 27, 2012.
[25] "New Google Privacy Policy" (http://www.google.com/intl/en/policies/privacy/). Google. March 1, 2012. Retrieved June 27, 2012.
[26] "Google Advertising Privacy FAQ" (http://www.google.com/policies/privacy/ads/). Google. Retrieved June 27, 2012.
[27] Cutroni, Justin (June 26, 2007). "Understanding The Google Analytics Terms of Service" (http://cutroni.com/blog/2007/06/26/understanding-the-google-analytics-terms-of-service/). Retrieved June 27, 2012.
[28] http://www.blogger.com
[29] http://www.youtube.com
[30] "Google privacy changes 'in breach of EU law'" (http://www.bbc.co.uk/news/technology-17205754). BBC News. Retrieved June 27, 2012.
[31] "Policies by (Google) product" (http://support.google.com/accounts/bin/answer.py?hl=en&answer=147806). Google. Retrieved June 27, 2012.
[32] "Google Profile settings" (https://accounts.google.com/ServiceLogin?service=profiles&continue=https://profiles.google.com/me). Google. Retrieved June 27, 2012.
[33] "About your (Google) profile" (http://support.google.com/accounts/bin/answer.py?hl=en&answer=97706). Google. Retrieved June 27, 2012.
[34] Albanesius, Chloe (May 25, 2010). "Opt Out of Google Analytics Data Gathering With New Beta Tool" (http://www.pcmag.com/article2/0,2817,2364174,00.asp).
[35] "Greater choice and transparency for Google Analytics" (http://analytics.blogspot.jp/2010/05/greater-choice-and-transparency-for.html). Google. May 25, 2010.
[36] "The NoScript Firefox extension provides extra protection for Firefox, Flock, Seamonkey and other mozilla-based browsers" (http://noscript.net/).
[37] Greenberg, Andy (Dec 11, 2008). "The Virus Filters" (http://www.forbes.com/2008/12/11/virus-filter-avira-tech-security-cx_ag_1211virus.html). Forbes.
[38] "New EU cookie law (e-Privacy Directive)" (http://www.ico.gov.uk/for_organisations/privacy_and_electronic_communications/the_guide/cookies.aspx). UK Government: Information Commissioner's Office.
[39] "Thousands of websites in breach of new cookie law" (http://www.bbc.co.uk/news/technology-18206810). BBC News. May 26, 2012.
[40] Google Analytics IQ Lessons (http://www.google.com/intl/en/analytics/iq.html)
[41] Google Analytics certification test (http://google.starttest.com/)
[42] Google Analytics Help Center (http://support.google.com/googleanalytics/)
[43] Official Google Analytics product forum (http://productforums.google.com/forum/#!forum/analytics)
[44] Official Google Analytics Blog (http://analytics.blogspot.jp/)
[45] Google Analytics Certified Partners (http://www.google.com/analytics/partners/index.html)
[46] Google Analytics API (https://developers.google.com/analytics/devguides/)
[47] Google Analytics Applications (http://www.google.com/analytics/apps/results?q=google analytics api applications)
[48] "Analytics by Net Conversion" (http://itunes.apple.com/us/app/analytics-by-net-conversion/id421164239?mt=8).
[49] "Usage of traffic analysis tools for websites" (http://w3techs.com/technologies/overview/traffic_analysis/all). W3Techs. Retrieved 2009-12-10.
[50] "Google Biz Chief: Over 10M Websites Now Using Google Analytics" (http://techcrunch.com/2012/04/12/google-analytics-officially-at-10m). TechCrunch. Retrieved 2012-04-25.
[51] "Google Analytics Market Share" (http://metricmail.tumblr.com/post/904126172/google-analytics-market-share). MetricMail. Retrieved 2010-08-21.
[52] "Google Analytics dominates the top 500 websites" (http://royal.pingdom.com/2008/05/28/google-analytics-dominate-the-top-500-websites/). Pingdom. Retrieved 2012-07-17.




External links
Google Analytics Official Website
Google Analytics Official Blog
Plaza, B (2009). "Monitoring web traffic source effectiveness with Google Analytics: An experiment with time series". Aslib Proceedings (Emerald), 61(5): 474-482.
Plaza, B (2009). "Using Google Analytics for measuring inlinks effectiveness". MPRA Paper No. 19676.
Google Analytics client for Windows Phone 7
Google Analytics for WordPress plugin

Google Website Optimizer

Google Website Optimizer was a free website optimization tool that helped online marketers and webmasters increase visitor conversion rates and overall visitor satisfaction by continually testing different combinations of website content.[1] Google Website Optimizer could test any element that existed as HTML code on a page, including calls to action, fonts, headlines, point-of-action assurances, product copy, product images, product reviews, and forms. It could be used at multiple stages in the conversion funnel. On 1 June 2012, Google announced that Google Website Optimizer as a separate product would be retired as of 1 August 2012, and that some of its functionality would be integrated into Google Analytics as Google Analytics Content Experiments.[1][2]

Google Website Optimizer allowed webmasters to test an alternative version of an entire page, known as A/B testing, or to test multiple combinations of page elements such as headings, images, or body copy, known as multivariate testing. The tool was part of Google Analytics, though accessed through a different user interface, and it used the Google Analytics tracking scripts.

[1] "Website Optimizer" (http://www.google.com/websiteoptimizer/). Google. Retrieved 2010-12-14.
[2] Tzemah, Nir. "Helping to Create Better Websites: Introducing Content Experiments" (http://analytics.blogspot.com/2012/06/helping-to-create-better-websites.html). Google Analytics Blog. Retrieved 4 June 2012.

External links
Google Website Optimizer

Performance indicator


A performance indicator or key performance indicator (KPI) is industry jargon for a type of performance measurement.[1] KPIs are commonly used by an organization to evaluate its success or the success of a particular activity in which it is engaged. Sometimes success is defined in terms of making progress toward strategic goals,[2] but often success is simply the repeated achievement of some level of operational goal (for example, zero defects, 10/10 customer satisfaction, etc.). Accordingly, choosing the right KPIs relies upon a good understanding of what is important to the organization. 'What is important' often depends on the department measuring the performance - the KPIs useful to finance will be quite different from the KPIs assigned to sales, for example. Because of the need to develop a good understanding of what is important, performance indicator selection is often closely associated with the use of various techniques to assess the present state of the business and its key activities. These assessments often lead to the identification of potential improvements; as a consequence, performance indicators are routinely associated with 'performance improvement' initiatives. A very common way of choosing KPIs is to apply a management framework such as the balanced scorecard.

Categorization of indicators
Key performance indicators define a set of values to measure against. These raw sets of values, which are fed to systems that summarize the information, are called indicators. Indicators identifiable and marked as possible candidates for KPIs can be grouped into the following sub-categories:
- Quantitative indicators, which can be presented as a number.
- Practical indicators, which interface with existing company processes.
- Directional indicators, which specify whether an organization is getting better or not.
- Actionable indicators, which are sufficiently in an organization's control to effect change.
- Financial indicators, used in performance measurement and when looking at an operating index.

Key performance indicators, in practical terms and for strategic development, are objectives to be targeted that will add the most value to the business. These are also referred to as key success indicators.

Some important aspects

Key performance indicators (KPIs) are ways to periodically assess the performance of organizations, business units, and their divisions, departments and employees. Accordingly, KPIs are most commonly defined in a way that is understandable, meaningful, and measurable. They are rarely defined in such a way that their fulfillment would be hampered by factors the responsible organizations or individuals see as non-controllable; KPIs defined that way are usually ignored. In order to be evaluated, KPIs are linked to target values, so that the value of the measure can be assessed as meeting expectations or not.
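The link between a KPI and its target value can be sketched as a simple comparison. The KPI names, measured values, and targets below are invented for illustration; real systems would pull these from a reporting database.

```python
# Hypothetical sketch: evaluating KPIs against their target values.
# Each KPI records (measured value, target, whether higher is better).

def evaluate_kpi(value, target, higher_is_better=True):
    """Return True when the measured value meets its target."""
    return value >= target if higher_is_better else value <= target

kpis = {
    "customer_satisfaction": (9.1, 8.5, True),   # aim for at least 8.5/10
    "defect_rate_percent":   (0.4, 0.0, False),  # operational goal: zero defects
}

for name, (value, target, higher_better) in kpis.items():
    status = "meets target" if evaluate_kpi(value, target, higher_better) else "misses target"
    print(f"{name}: {value} vs target {target} -> {status}")
```

The direction flag matters: a satisfaction score should exceed its target, while a defect rate should stay at or below it.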



Identifying indicators of organization

Performance indicators differ from business drivers and aims (or goals). A school might consider the failure rate of its students as a key performance indicator which might help the school understand its position in the educational community, whereas a business might consider the percentage of income from returning customers as a potential KPI. The key stages in identifying KPIs are:
- Having a pre-defined business process (BP)
- Having requirements for the BPs
- Having a quantitative/qualitative measurement of the results and comparison with set goals
- Investigating variances and tweaking processes or resources to achieve short-term goals

A KPI can follow the SMART criteria. This means the measure has a Specific purpose for the business, it is Measurable to really get a value of the KPI, the defined norms have to be Achievable, the improvement of a KPI has to be Relevant to the success of the organization, and finally it must be Time phased, which means the value or outcomes are shown for a predefined and relevant period.

KPI examples
Some examples are:
1. New customers acquired
2. Demographic analysis of individuals (potential customers) applying to become customers, and the levels of approval, rejections, and pending numbers
3. Status of existing customers
4. Customer attrition
5. Turnover (i.e., revenue) generated by segments of the customer population
6. Outstanding balances held by segments of customers and terms of payment
7. Collection of bad debts within customer relationships
8. Profitability of customers by demographic segments and segmentation of customers by profitability

Many of these customer KPIs are developed and managed with customer relationship management software. Faster availability of data is a competitive issue for most organizations. For example, businesses which have higher operational/credit risk (involving for example credit cards or wealth management) may want weekly or even daily availability of KPI analysis, facilitated by appropriate IT systems and tools.

Overall equipment effectiveness is a set of broadly accepted non-financial metrics which reflect manufacturing success.
- Cycle time: the total time from the beginning to the end of your process, as defined by you and your customer. Cycle time includes process time, during which a unit is acted upon to bring it closer to an output, and delay time, during which a unit of work waits to take the next action.
- Cycle time ratio (CTR): CTR = standard cycle time / real cycle time
- Utilization
- Rejection rate



- Availability
- Mean time between failures
- Mean time to repair
- Unplanned availability
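Two of the manufacturing metrics above can be expressed as small formulas. The numbers in this sketch are invented; availability is computed with the common MTBF / (MTBF + MTTR) convention, which is one of several definitions in use.

```python
# Illustrative manufacturing-KPI calculations (invented figures).

def cycle_time_ratio(standard_cycle_time, real_cycle_time):
    """CTR = standard cycle time / real cycle time (1.0 means on pace)."""
    return standard_cycle_time / real_cycle_time

def availability(mtbf_hours, mttr_hours):
    """Fraction of time equipment is operational: MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

print(cycle_time_ratio(40.0, 50.0))  # 0.8: the real cycle runs 25% over standard
print(availability(98.0, 2.0))       # 0.98
```

A CTR below 1.0 signals that real cycle time exceeds the standard, pointing at delay time worth investigating.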

Supply Chain Management

Businesses can utilize KPIs to establish and monitor progress toward a variety of goals, including lean manufacturing objectives, minority business enterprise and diversity spending, environmental ("green") initiatives, cost avoidance programs and low-cost country sourcing targets. Any business, regardless of size, can better manage supplier performance with the help of robust KPI capabilities, which include:
- Automated entry and approval functions
- On-demand, real-time scorecard measures
- Rework on procured inventory
- Single data repository to eliminate inefficiencies and maintain consistency
- Advanced workflow approval process to ensure consistent procedures
- Flexible data-input modes and real-time graphical performance displays
- Customized cost savings documentation
- Simplified setup procedures to eliminate dependence upon IT resources

Main SCM KPIs will detail the following processes:
- Sales forecasts
- Inventory
- Procurement and suppliers
- Warehousing
- Transportation
- Reverse logistics

Suppliers can implement KPIs to gain an advantage over the competition. Suppliers have instant access to a user-friendly portal for submitting standardized cost savings templates. Suppliers and their customers exchange vital supply chain performance data while gaining visibility to the exact status of cost improvement projects and cost savings documentation.

The provincial government of Ontario, Canada, has been using KPIs since 1998 to measure the performance of higher education institutions in the province. All post-secondary schools collect and report performance data in five areas: graduate satisfaction, student satisfaction, employer satisfaction, employment rate, and graduation rate.[3]

Further performance indicators

- Duration of a stockout situation
- Customer order waiting time



In practice, overseeing key performance indicators can prove expensive or difficult for organizations. Some indicators, such as staff morale, may be impossible to quantify, so dubious KPIs may be adopted and then used only as a rough guide rather than as a precise benchmark. Another serious issue is that once a measure is created, it becomes difficult to adjust it to changing needs, because historical comparisons would be lost; measures are therefore kept even when their relevance is dubious, simply because the history exists. Comparisons between different organizations are often difficult, as they depend on specific in-house practices and policies. Key performance indicators can also lead to perverse incentives and unintended consequences when employees work to the specific measurements at the expense of the actual quality or value of their work.[4][5][6][7] For example, measuring the productivity of a software development team in terms of source lines of code encourages copy-and-paste code and over-engineered design, leading to bloated code bases that are particularly difficult to maintain, understand and modify.

[1] Carol Taylor Fitz-Gibbon (1990). "Performance indicators" (http://books.google.com/?id=uxK0MUHeiI4C). BERA Dialogues (2). ISBN 978-1-85359-092-4.
[2] Key Performance Indicators: What Are Key Performance Indicators or KPI (http://management.about.com/cs/generalmanagement/a/keyperfindic.htm)
[3] http://www.collegesontario.org/outcomes/key-performance-indicators/2011_kpi_results.pdf
[4] Robert D. Austin, Measuring and Managing Performance in Organizations (http://www.amazon.co.uk/Measuring-Managing-Performance-Organizations-Robert/dp/0932633366)
[5] "KPI: The critical element for organizational performance measurement" (http://www.kpistandard.com)
[6] http://www.joelonsoftware.com/news/20020715.html
[7] http://martinfowler.com/bliki/CannotMeasureProductivity.html

Further reading
David Parmenter, Key Performance Indicators. John Wiley & Sons 2007, ISBN 0-470-09588-1.

Session replay


Session replay is the ability to replay a visitor's journey on a web site, which can include mouse movements, clicks, and form entry. It can be used to study web site usability and customer behavior, as well as to handle customer service questions, because the full customer journey with all of its interactions can be replayed. There are generally two ways to capture and replay visitor sessions: tag-free server side and client side.

Tag-free server side

Tag-free, server-side solutions capture all website traffic and replay every visitor interaction, from every device, including all mobile users, from any location. Sessions are replayed step by step, providing the ability to search, locate and analyse aspects of a visitor's session, including clicks and form entry. Server-side solutions require software to be installed on premises.

Client side
There are many tag-based solutions that offer video replay of a visitor's session. These solutions can also show mouse movements and clicks. The underlying data for the videos is recorded by tagging pages.


Heat map
A heat map is a graphical representation of data where the individual values contained in a matrix are represented as colors. Fractal maps and tree maps both often use a similar system of color-coding to represent the values taken by a variable in a hierarchy. The term is also used to mean its thematic application as a choropleth map. The term "heatmap" was originally coined and trademarked by software designer Cormac Kinney in 1991, to describe a 2D display depicting real-time financial market information.[1]

Heat maps originated in 2D displays of the values in a data matrix. Larger values were represented by small dark gray or black squares (pixels) and smaller values by lighter squares. Sneath (1957) displayed the results of a cluster analysis by permuting the rows and the columns of a matrix to place similar values near each other according to the clustering. Jacques Bertin used a similar representation to display data that conformed to a Guttman scale. The idea of joining cluster trees to the rows and columns of the data matrix originated with Robert Ling in 1973; Ling used overstruck printer characters to represent different shades of gray, one character-width per pixel. Leland Wilkinson developed the first computer program in 1994 (SYSTAT) to produce cluster heat maps with high-resolution color graphics. The Eisen et al. display shown in the figure is a replication of the earlier SYSTAT design.
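The core idea described above — normalize a data matrix and map larger values to darker marks — can be sketched in a few lines. This is a minimal illustration using ASCII shades in place of pixels; the shade ramp and sample matrix are invented.

```python
# Minimal sketch of the classic monochrome heat map: normalize a data
# matrix and map larger values to darker shades (ASCII instead of pixels).

SHADES = " .:-=+*#%@"  # light (small values) through dark (large values)

def heat_map(matrix):
    flat = [v for row in matrix for v in row]
    lo, hi = min(flat), max(flat)
    span = hi - lo or 1  # avoid division by zero for a constant matrix
    rows = []
    for row in matrix:
        # scale each value into an index on the shade ramp
        rows.append("".join(
            SHADES[int((v - lo) / span * (len(SHADES) - 1))] for v in row))
    return "\n".join(rows)

print(heat_map([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]]))
```

A color heat map replaces the shade ramp with a color map (e.g. light yellow through dark red), but the value-to-index scaling is the same.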



There are different kinds of heat maps:
- Web heat maps have been used for displaying the areas of a Web page most frequently scanned by visitors. Web heat maps are often used alongside other forms of web analytics and session replay tools.
- Biology heat maps are typically used in molecular biology to represent the level of expression of many genes across a number of comparable samples (e.g. cells in different states, samples from different patients) as they are obtained from DNA microarrays.
- The tree map is a 2D hierarchical partitioning of data that visually resembles a heat map.
- A mosaic plot is a tiled heat map for representing a two-way or higher-way table of data. As with tree maps, the rectangular regions in a mosaic plot are hierarchically organized; the regions are rectangles rather than squares. Friendly (1994) surveys the history and usage of this graph.

[Figure: heat map generated from DNA microarray data, reflecting gene expression values in several conditions]

Software Implementations
Several heat map software implementations are listed here (the list is not complete):
- NeoVision Hypersystems, Inc., a software firm founded by Cormac Kinney and funded by Intel and Deutsche Bank, developed heatmaps depicting real-time financial data and calculations, which were licensed to over 50,000 users. NeoVision heatmaps became a feature on[2]
- R, a free software environment for statistical computing and graphics, contains several functions to trace heat maps.[3]
- Gnuplot, a universal and free command-line plotting program, can trace 2D and 3D heat maps.[4]
- The Google Docs spreadsheet application includes a Heat Map gadget, but only for country-wise data, not for general matrix data.
- Qlucore includes a heat map that is dynamically updated when filter parameters are changed.
- The ESPN Gamecast for soccer games uses heat maps to show where certain players have spent time on the field.
- GENE-E[5] is a matrix visualization and analysis platform designed to support visual data exploration.
- More heat map tools can be found by searching the List of bioinformatics companies.
- Microsoft Excel can generate heat maps using the Surface Chart. Though the default color range for Surface Charts in Excel is not conducive to heat maps, the colors can be edited to generate user-friendly and intuitive heat maps.

[1] "United States Patent and Trademark Office, registration #75263259" (http://tess2.uspto.gov/). 1993-09-01.
[2] "Forbes Magazine Article on NeoVision Heatmaps" (http://www.forbes.com/global/1999/0517/0210064a.html). 1999-05-17.
[3] http://www2.warwick.ac.uk/fac/sci/moac/currentstudents/peter_cock/r/heatmap/
[4] http://gnuplot.sourceforge.net/demo_4.4/heatmaps.html
[5] http://www.broadinstitute.org/cancer/software/GENE-E/

[Figure: a sample heat map created using a Surface Chart in Microsoft Excel]

Bertin, J. (1967). Sémiologie Graphique. Les diagrammes, les réseaux, les cartes. Gauthier-Villars.
Eisen, M.B., Spellman, P.T., Brown, P.O. & Botstein, D. (1998). "Cluster analysis and display of genome-wide expression patterns". Proc. Natl. Acad. Sci. USA 95 (25): 14863-14868. doi:10.1073/pnas.95.25.14863. PMC 24541. PMID 9843981.
Friendly, M. (1994). "Mosaic displays for multi-way contingency tables". Journal of the American Statistical Association (American Statistical Association) 89 (425): 190-200. doi:10.2307/2291215. JSTOR 2291215.
Ling, R.F. (1973). "A computer generated aid for cluster analysis". Communications of the ACM 16 (6): 355-361. doi:10.1145/362248.362263.
Sneath, P.H.A. (1957). "The application of computers to taxonomy". Journal of General Microbiology 17 (1): 201-226. PMID 13475686.
Wilkinson, L. (1994). Advanced Applications: Systat for DOS Version 6. SYSTAT Inc. ISBN 978-0-13-447285-0.


External links
The History of the Cluster Heat Map, Leland Wilkinson and Michael Friendly.
Heatmap Builder, a program for generating heatmaps developed at the Ashley Labs.
Matrix2png: web-based or command-line generation of heat maps.
A Map of the Market using a heatmap data visualization and London Stock Exchange data (FTSE 100 Index) from Panopticon Software.
NASDAQ 100 Heatmap: heatmap visualization of the NASDAQ 100 index.
heatmap.js: open-source JavaScript library for generating real-time web heatmaps.
heatcanvas: another open-source library for modern web browsers.
Tweets heatmap: shows a density heatmap of tweets with a keyword, distributed in time.

Click-through rate


Click-through rate (CTR) is a way of measuring the success of an online advertising campaign for a particular website as well as the effectiveness of an email campaign.

Online Advertising CTR

The click-through rate of an advertisement is defined as the number of clicks on an ad divided by the number of times the ad is shown (impressions), expressed as a percentage.[1][2][3][4][5] For example, if a banner ad is delivered 100 times (100 impressions) and receives one click, then the click-through rate for the advertisement would be 1%.
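The definition above is a single ratio, sketched here as a small function. The zero-impressions guard is an implementation choice, not part of the definition.

```python
# Click-through rate as defined above: clicks divided by impressions,
# expressed as a percentage.

def click_through_rate(clicks, impressions):
    """Return CTR as a percentage of impressions."""
    if impressions == 0:
        return 0.0  # no impressions served, so no meaningful rate
    return 100.0 * clicks / impressions

# The banner example from the text: 100 impressions, 1 click.
print(click_through_rate(1, 100))  # 1.0 (i.e. a 1% CTR)
```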

Click-through rates for banner ads have fallen over time. When banner ads first started to appear, it was not uncommon to have rates above five percent. They have fallen since then, currently averaging closer to 0.2 or 0.3 percent.[6] In most cases, a 2% click-through rate would be considered very successful, though the exact number is hotly debated and would vary depending on the situation. The average click-through rate of 3% in the 1990s declined to 0.1%-0.3% by 2011.[7] Since advertisers typically pay more for a high click-through rate, getting many click-throughs with few purchases is undesirable to advertisers.[6] Similarly, by selecting an appropriate advertising site with high affinity (e.g. a movie magazine for a movie advertisement), the same banner can achieve a substantially higher CTR. Though personalized ads, unusual formats, and more obtrusive ads typically result in higher click-through rates than standard banner ads, overly intrusive ads are often avoided by viewers.[7][8][9]

Email CTR
An email click-through rate is defined as the number of recipients who clicked one or more links in an email and landed on the sender's website, blog, or other desired destination. More simply, email click-through rate represents the number of clicks that an email generated.[10][11] Email click-through rate is expressed as a percentage and is calculated by dividing the number of click-throughs by the number of messages delivered.[12][13] Most email marketers use this metric, along with open rate, bounce rate and other metrics, to understand the effectiveness and success of their email campaigns.[14] In general there is no ideal click-through rate. The metric can vary based on the type of email sent, how frequently emails are sent, how the list of recipients is segmented, how relevant the content of the email is to the audience, and many other factors.[15] Even the time of day can affect click-through rate; Sunday, for example, appears to generate considerably higher click-through rates on average than the rest of the week.[16] Every year, studies and various types of research are conducted to track the overall effectiveness of click-through rates in email marketing.[17][18]
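The email metrics mentioned above can be computed together from a campaign's raw counts. This sketch takes "delivered" to mean sent minus bounced, which is one common convention; exact definitions vary between email service providers, and the campaign figures are invented.

```python
# Hedged sketch of common email campaign metrics, all as percentages.
# Assumes delivered = sent - bounced (conventions differ by provider).

def email_metrics(sent, bounced, opened, clicked):
    delivered = sent - bounced
    return {
        "bounce_rate": 100.0 * bounced / sent if sent else 0.0,
        "open_rate": 100.0 * opened / delivered if delivered else 0.0,
        "click_through_rate": 100.0 * clicked / delivered if delivered else 0.0,
    }

m = email_metrics(sent=10_000, bounced=500, opened=2_000, clicked=300)
print(m)  # bounce_rate 5.0, open_rate ~21.05, click_through_rate ~3.16
```

Dividing clicks by opens instead of deliveries gives the related click-to-open rate, which some marketers track as well.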



[1] Google AdWords Help: Clickthrough rate (CTR) (http://adwords.google.com/support/aw/bin/answer.py?hl=en&answer=107955&from=6305&rd=1)
[2] Yahoo Search Marketing Glossary (http://developer.searchmarketing.yahoo.com/docs/V7/gsg/glossary.php)
[3] IAB Click Measurement Guidelines (http://www.iab.net/media/file/click-measurement-guidelines2009.pdf)
[4] Google Analytics Help: What's the difference between clicks, visits, visitors, pageviews, and unique pageviews? (http://www.google.com/support/analytics/bin/answer.py?answer=57164)
[5] IAB Glossary of Interactive Advertising Terms (http://www.iab.net/media/file/GlossaryofInteractivAdvertisingTerms.pdf)
[6] Stern, Andrew (February 1, 2010). "8 ways to improve your click-through rate" (http://www.imediaconnection.com/content/25781.asp). iMedia Connection. Retrieved February 7, 2010.
[7] Li, Hairong; Leckenby, John D. (2004). "Internet Advertising Formats and Effectiveness" (http://champtec.googlepages.com/ad_format_print.pdf). Center for Interactive Advertising. Retrieved 26 February 2010.
[8] "How to Price and Place Your Ads" (http://blog.buysellads.com/2010/12/how-to-price-and-place-your-ads/). Retrieved 21 October 2011.
[9] US application 20,090,157,495 (http://worldwide.espacenet.com/textdoc?DB=EPODOC&IDX=US20,090,157,495)
[10] "Email Campaign Performance Metrics Definitions" (http://www.iab.net/guidelines/508676/508905/79176). Retrieved December 18, 2012.
[11] Kevin Gao. "Click Through Rates: Click Through Rates Numbers and Their Meaning" (http://emailmarketing.comm100.com/email-marketing-ebook/click-through-rates.aspx). Retrieved December 18, 2012.
[12] "The Basics of Email Metrics: Are Your Campaigns Working?" (http://www.idealware.org/articles/email_metrics.php). October 2008.
[13] John Arnold (April 2011). "Calculating the Click-through Rate for Your E-Mail Marketing Campaign from E-Mail Marketing for Dummies, 2nd Ed." (http://www.dummies.com/how-to/content/calculating-the-clickthrough-rate-for-your-email-m.html).
[14] "Email marketing metrics: Click through rate (CTR) relevant to email marketing measurement" (http://www.michaelleander.com/blog/2010/01/email-marketing-metrics-click-through-rate-ctr-relevant-to-email-marketing-measurement/). January 17, 2010.
[15] "Average Email Click-Through Rate" (http://bluesite.lyris.com/blog/85-Average-Email-Click-Through-Rate). Retrieved December 20, 2012.
[16] Pete Prestipino (July 21, 2011). "EMail Marketing Metrics 2011" (http://www.websitemagazine.com/content/blogs/posts/archive/2011/07/21/email-marketing-metrics-2011-mailermailer.aspx).
[17] Matt McGee (July 23, 2012). "E-mail Open Rates Declining, Click-Through Rates Rising [Study]" (http://marketingland.com/e-mail-open-rates-declining-click-through-rates-rising-study-17005).
[18] David Moth. "Email marketing stats: consumers open just 20% of messages" (http://econsultancy.com/us/blog/10404-email-marketing-stats-consumers-open-just-20-of-messages).

Further reading
Sherman, Lee and John Deighton (2001), "Banner advertising: Measuring effectiveness and optimizing placement", Journal of Interactive Marketing, Spring, Vol. 15, Iss. 2.
Ward A. Hanson and Kirthi Kalyanam (2007), Internet Marketing and eCommerce, Chapter 8, "Traffic Building", Thomson College Pub, Mason, Ohio.

Conversion rate


In internet marketing, the conversion rate is the proportion of visitors to a website who take action to go beyond a casual content view or website visit, as a result of subtle or direct requests from marketers, advertisers, and content creators.

Successful conversions are defined differently by individual marketers, advertisers, and content creators. To online retailers, for example, a successful conversion may be defined as the sale of a product to a consumer whose interest in the item was initially sparked by clicking a banner advertisement. To content creators, a successful conversion may refer to a membership registration, newsletter subscription, software download, or other activity.

For websites that seek to generate offline responses, for example telephone calls or foot traffic to a store, measuring conversion rates can be difficult because a phone call or personal visit is not automatically traced to its source, such as the Yellow Pages, website, or referral. Possible solutions include asking each caller or shopper how they heard about the business and using a toll-free number on the website that forwards to the existing line. For websites where the response occurs on the site itself, a conversion funnel can be set up in a site's analytics package to track user behavior.
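A conversion funnel like the one mentioned above can be summarized from per-step visitor counts. The step names and figures in this sketch are hypothetical; an analytics package would supply the real counts.

```python
# Illustrative conversion-funnel report. The step names and visitor
# counts are invented for the example.

funnel = [
    ("landing page", 10_000),
    ("product page", 4_000),
    ("checkout", 800),
    ("purchase", 200),
]

# Overall conversion rate: final conversions as a share of all visitors.
overall = 100.0 * funnel[-1][1] / funnel[0][1]
print(f"overall conversion rate: {overall:.1f}%")

# Step-by-step continuation rates reveal where visitors drop out.
for (step, count), (next_step, next_count) in zip(funnel, funnel[1:]):
    rate = 100.0 * next_count / count
    print(f"{step} -> {next_step}: {rate:.1f}% continue")
```

Here the sharpest drop is between the product page and checkout, which is where optimization effort would likely pay off first.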

Methods of increasing conversion rates in e-commerce

Among the many actions taken to attempt to increase the conversion rate, these are the most relevant:
- Generate user reviews of the product or service
- Clearly focus the website on a certain conversion goal (e.g. "increase sign-ups for the newsletter")
- Improve and focus the content of the website (which may include text, pictures and video) to target conversion
- Increase usability to reduce the barriers to conversion
- Improve the site navigation structure so that users can find and browse without thinking too much about where to click
- Improve credibility and trust by showing third-party trust logos and by good site design
- Use AIDA (attention, interest, desire, action) to move the user through the conversion funnel

External links
Definition of the conversion rate [1]
Facts about typical conversion rates and hints on how to increase them [2]
Berkeley study on conversion rates in spam [3]

[1] http://www.marketingterms.com/dictionary/conversion_rate/
[2] http://www.seochat.com/c/a/Website-Marketing-Help/Conversion-Rate-Optimization/
[3] http://www.icsi.berkeley.edu/pubs/networking/2008-ccs-spamalytics.pdf

Landing page


In online marketing a landing page, sometimes known as a "lead capture page" or a "lander", is a single web page that appears in response to clicking on a search-engine-optimized search result or an online advertisement. The landing page will usually display directed sales copy that is a logical extension of the advertisement, search result or link. Landing pages are often linked to from social media, email campaigns or search engine marketing campaigns in order to enhance the effectiveness of the advertisements. The general goal of a landing page is to convert site visitors into sales leads. By analyzing activity generated by the linked URL, marketers can use click-through rates and conversion rates to determine the success of an advertisement.[1]

Types of landing pages

There are two types of landing pages: reference and transactional.

Reference landing page

A reference landing page presents information that is relevant to the visitor. These can display text, images, dynamic compilations of relevant links, or other elements.

Transactional landing page

A transactional landing page seeks to persuade a visitor to complete a transaction such as filling out a form or interacting with advertisements or other objects on the landing page, with the goal being the immediate or eventual sale of a product or service. If information is to be captured, the page will usually withhold content until some minimal amount of visitor information is provided, typically an email address and perhaps a name and telephone number as well, enough to "capture the lead" and add the prospect to a mailing list. A visitor taking the desired action on a transactional landing page is referred to as a conversion.[2] The efficiency or quality of the landing page can be measured by its conversion rate, the percentage of visitors who complete the desired action.[3]

[1] Ash, Tim. Landing Page Optimization: The Definitive Guide to Testing and Tuning for Conversions. Wiley Publishing. ISBN 0-470-17462-5.
[2] "What is a landing page?" (http://www.bestseopluginforwordpress.com/what-is-a-landing-page/). Retrieved 8 October 2011.
[3] "What is a conversion rate?" (http://www.wordstream.com/conversion-rate). Retrieved 6 June 2012.

Landing page optimization



Landing page optimization (LPO) is one part of a broader Internet marketing process called conversion optimization, or conversion rate optimization (CRO), with the goal of improving the percentage of visitors to the website that become sales leads and customers. A landing page is a webpage that is displayed when a potential customer clicks an advertisement or a search engine result link. This webpage typically displays content that is a relevant extension of the advertisement or link. LPO aims to provide page content and appearance that makes the webpage more appealing to target audiences.

Bases for landing page optimization

There are three major types of LPO based on targeting:[1]
1. Associative content targeting (also called rule-based optimization or passive targeting). The page content is modified based on information obtained about the visitor's search criteria, geographic information of source traffic, or other known generic parameters that can be used for explicit non-research-based consumer segmentation.
2. Predictive content targeting (also called active targeting). The page content is adjusted by correlating any known information about the visitor (e.g., prior purchase behavior, personal demographic information, browsing patterns, etc.) to anticipate desired future actions based on predictive analytics.
3. Consumer-directed targeting (also called social targeting). The page content is created using the relevance of publicly available information through a mechanism based on reviews, ratings, tagging, referrals, etc.
There are two major types of LPO based on experimentation:
1. Closed-ended experimentation. Consumers are exposed to several variations of landing pages while their behavior is observed. At the conclusion of the experiment, an optimal page is selected based on the outcome of the experiment.
2. Open-ended experimentation. This approach is similar to closed-ended experimentation, except that the experimentation is ongoing, meaning that the landing page is adjusted dynamically as the experiment results change.

Experimentation-based landing page optimization

Experimentation-based LPO can be achieved using A/B testing, multivariate LPO, and total-experience testing. These methodologies are applicable to both closed- and open-ended experimentation.

A/B testing
A/B testing, or A/B split testing, is a method for testing two versions of a webpage: version "A" and version "B". The goal is to test multiple versions of webpages (e.g., home page, product page, FAQ) to determine which version is most appealing and effective. This method is also known as A/B/n split testing, the n denoting that more than two versions are being measured and compared. The data for A/B testing is usually measured via click-through or conversion rates.[2]
Testing can be conducted sequentially or in parallel. In sequential testing, often the easiest to implement, the various versions of the webpage are made available online for specified time periods. In parallel (split) testing, both versions are made available at once and the traffic is divided between the two. The results of sequential split testing can be skewed by the differing time periods and traffic patterns in which the different tests are run.
A/B testing has the following advantages:
- Inexpensive, because existing resources and tools are used.
- Simple, because no complex statistical analysis is required.

A/B testing has the following disadvantages:
- Difficult to control all external factors (e.g., campaigns, search traffic, press releases, seasonality) when using sequential testing.
- Very limited, in that reliable conclusions cannot be drawn for pages that contain multiple elements that vary in each version.
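A parallel split test ultimately comes down to comparing two observed conversion proportions. As an illustrative sketch (the traffic figures below are hypothetical), a two-proportion z-test is one common way to judge whether the difference between versions A and B is larger than chance alone would explain:

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """z-statistic comparing conversion rates of two page versions."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical split: 2,400 visitors per version
z = two_proportion_z(120, 2400, 90, 2400)
print(round(z, 2))  # 2.12
```

A z-statistic above roughly 1.96 corresponds to significance at the 5% level for a two-sided test, which is why "no complex statistical analysis" should be read as "only a simple one".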


Multivariate landing page optimization

Multivariate landing page optimization (MVLPO) accounts for multiple variations of visual elements (e.g., graphics, text) on a page. For example, a given page may have k choices for the title, m choices for the featured image or graphic, and n choices for the company logo. This example yields k × m × n landing page configurations. Significant improvements can be seen through testing different copy text, form layouts, landing page images and background colours. However, not all elements produce the same improvements in conversions, and by looking at the results from different tests, it is possible to identify the elements that consistently tend to produce the greatest increase in conversions.
The first application of an experimental design for MVLPO was performed by Moskowitz Jacobs Inc. in 1998 as a simulation/demonstration project for Lego. MVLPO did not become a mainstream approach until 2003 or 2004.
MVLPO has the following advantages:
- Provides a reliable, scientifically based approach for understanding customers' preferences and optimizing their experience.
- Has evolved into an easy-to-use approach in which little IT involvement is required. In many cases, a few lines of JavaScript allow remote vendor servers to control changes, collect data, and analyze the results.
- Provides a foundation for open-ended experimentation.
MVLPO has the following disadvantages:
- As with any quantitative consumer research, there is a danger of GIGO (garbage in, garbage out). Ideas sourced from known customer touchpoints or strategic business objectives are needed to obtain optimal results.
- Focuses on optimizing one page at a time. Website experiences for most sites involve multiple pages, which are typically complex. For an e-commerce website, it is typical for a successful purchase to involve between twelve and eighteen pages; for a support site, even more pages are often required.
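The combinatorial growth of k × m × n configurations is easy to make concrete. In this hypothetical sketch (element names are invented for illustration), three titles, two images and two logos already yield twelve distinct landing pages to test:

```python
from itertools import product

titles = ["Title A", "Title B", "Title C"]     # k = 3 choices
images = ["hero.jpg", "product.jpg"]           # m = 2 choices
logos = ["logo-blue.png", "logo-mono.png"]     # n = 2 choices

# Full factorial: every combination is a distinct page configuration
configurations = list(product(titles, images, logos))
print(len(configurations))  # k * m * n = 3 * 2 * 2 = 12
```

Each added element choice multiplies the count, which is why fractional designs become attractive as pages grow more complex.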

Total-experience testing
Total-experience testing, or experience testing, is a type of experiment-based testing in which the entire website experience of the visitor is examined using technical capabilities of the website platform (e.g., ATG, Blue Martini Software, etc.). Rather than creating multiple websites, total-experience testing uses the website platform to create several persistent experiences, and monitors which one is preferred by the customers. An advantage of total-experience testing is that it reflects the customer's total website experience, not just the experience with a single page. Two disadvantages are that total-experience testing requires a website platform that supports experience testing, and it takes longer to obtain results than A/B testing and MVLPO.



[1] Alex Gofman, Howard Moskowitz, and Tonis Mets. 2009. Integrating Science into Web Design: Consumer Driven Website Optimization. The Journal of Consumer Marketing, 26(4): 286-298. doi:10.1108/07363760910965882.
[2] Matthew Roche (2005-12-19). "Landing Page Testing Best Practices" (http://www.siteisdead.com/2005/12/landing_page_te_1.html). Site is Dead. Retrieved 2007-07-02.

A/B testing
In web development and marketing, A/B testing or split testing is an experimental approach to web design (especially user experience design) which aims to identify changes to web pages that increase or maximize an outcome of interest (e.g., the click-through rate for a banner advertisement). As the name implies, two versions (A and B) are compared, which are identical except for one variation that might impact a user's behavior. Version A might be the currently used version, while version B is modified in some respect. For instance, on an e-commerce website the purchase funnel is typically a good candidate for A/B testing, as even marginal improvements in drop-off rates can represent a significant gain in sales. Significant improvements can be seen through testing elements like copy text, layouts, images and colors.[1]
Multivariate testing or bucket testing is similar to A/B testing, but tests more than two different versions at the same time. While the approach is identical to a between-subjects design, which is commonly used in a variety of research traditions, A/B testing is seen as a significant change in philosophy and business strategy in Silicon Valley.[2][3][4] A/B testing as a philosophy of web development brings the field into line with a broader movement toward evidence-based practice.

An emailing campaign example

A company with a customer database of 2,000 people decides to create an email campaign with a discount code in order to generate sales through its website. It creates an email and then modifies the call to action (the part of the copy which encourages customers to do something; in the case of a sales campaign, to make a purchase). To 1,000 people it sends the email with the call to action stating "Offer ends this Saturday! Use code A1", and to the other 1,000 people it sends the email with the call to action stating "Limited time offer! Use code B1". All other elements of the email's copy and layout are identical.
The company then monitors which campaign has the higher success rate by analysing the use of the promotional codes. The email using code A1 has a 5% response rate (50 of the 1,000 people emailed used the code to buy a product), and the email using code B1 has a 3% response rate (30 of the recipients used the code to buy a product). The company therefore determines that, in this instance, the first call to action is more effective and will use it in future sales.
In the example above, the purpose of the test is to determine which is the more effective way to impel customers to make a purchase. If, however, the aim of the test were to see which email generates the higher click rate (that is, the number of people who actually click through to the website after receiving the email), then the results might have been different. More of the customers receiving code B1 may have accessed the website after receiving the email, but because the call to action did not state the end date of the promotion, there was less incentive for them to make an immediate purchase. If the purpose of the test was simply to see which email would bring more traffic to the website, then the email containing code B1 may have been more successful.
An A/B test should have a defined, measurable outcome, e.g. number of sales made, click-rate conversion, or number of people signing up or registering.[5]
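The figures in this example can be worked through directly. A minimal sketch using the numbers above (response rates and the relative lift of A1 over B1):

```python
# Figures taken from the campaign example above
sent = 1000
buyers_a1, buyers_b1 = 50, 30

rate_a1 = buyers_a1 / sent   # 0.05 -> 5% response rate
rate_b1 = buyers_b1 / sent   # 0.03 -> 3% response rate
lift = (rate_a1 - rate_b1) / rate_b1  # relative improvement of A1 over B1

print(f"A1: {rate_a1:.0%}, B1: {rate_b1:.0%}, lift: {lift:.0%}")
```

The roughly 67% relative lift is what the defined outcome (completed purchases) measures; a click-rate outcome would be computed the same way from click counts instead of purchase counts.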



Companies well-known for using A/B testing

Many companies use the "designed experiment" approach to making marketing decisions. It is an increasingly common practice as the tools and expertise in this area grow. Many A/B testing case studies show that the practice is becoming increasingly popular with small and medium-sized businesses as well.[6] While it is widely used behind the scenes to maximize profits, the practice occasionally makes it into the spotlight. Companies known to use A/B testing include:
- Amazon.com, which pioneered its use within the web e-commerce space[7]
- BBC[8]
- eBay
- Google[9]
- LogMeIn[10]
- Microsoft[11]
- Netflix[12]
- Playdom (Disney Interactive)[13]
- Zynga[14]

A/B testing tools

Many A/B testing tools are actively developed. Some are available under an open source license or free of charge:
- Google Analytics Content Experiments (formerly Google Website Optimizer) (server-side tagging required)
- Easy Website Optimizer [15]
Other solutions are commercially supported, generally offering a broader range of features:
- GlobalMaxer [16]
- Artisan App Testing [17]
- SiteSpect [18]
- Optimizely [19][20]
- Visual Website Optimizer [21]
- Convert [22]
- Unbounce [23]
- Monetate [24]
- Autonomy Optimost [25]
- Omniture Test & Target
- Personyze
- Plumb5 [26]

A detailed overview of such tools is available at WhichMVT [27].



[1] "Split Testing Guide for Online Stores" (http://www.webics.com.au/blog/google-adwords/split-testing-guide-for-online-retailers/). August 27, 2012. Retrieved 2012-08-28.
[2] http://www.wired.com/business/2012/04/ff_abtesting/
[3] http://www.wired.com/wiredenterprise/2012/05/test-everything/
[4] http://boingboing.net/2012/04/26/ab-testing-the-secret-engine.html
[5] Kohavi, R.; Longbotham, R.; Sommerfield, D.; Henne, R.M. (2009). "Controlled experiments on the web: survey and practical guide" (http://www.springerlink.com/content/r28m75k77u145115/). Data Mining and Knowledge Discovery (Berlin: Springer) 18 (1): 140-181. doi:10.1007/s10618-008-0114-1. ISSN 1384-5810.
[6] "A/B Split Testing | Multivariate Testing | Case Studies" (http://visualwebsiteoptimizer.com/case-studies.php). Visual Website Optimizer. Retrieved 2011-07-10.
[7] http://www.grokdotcom.com/2008/02/26/amazon-shopping-cart/
[8] "Web Developer: A/B Testing" (http://www.bbc.co.uk/blogs/webdeveloper/2010/01/ab-testing.shtml). BBC. 2010-01-12. Retrieved 2011-07-10.
[9] "Goodbye, Google" (http://stopdesign.com/archive/2009/03/20/goodbye-google.html). stopdesign. Retrieved 2011-07-10.
[10] "Several marketing flows are using A/B testing for Central and Pro" (http://secure.logmein.com/).
[11] "Experimentation at Microsoft" (http://exp-platform.com/expMicrosoft.aspx). 2009-09-09. Retrieved 2011-07-10.
[12] "The Netflix Tech Blog: 'More Like This' Building a network of similarity" (http://techblog.netflix.com/2011/04/more-like-this-building-network-of.html). 2011-04-18. Retrieved 2011-07-10.
[13] "The web's most visited city travel site wins big by optimizing conversion rates through automated multivariate testing" (http://www.sitespect.com/sitespect-vegas-case-study.shtml). SiteSpect. Retrieved 2010-02-08.
[14] "Brandon Smietana's answer to What is Zynga's core competency?" (http://www.quora.com/What-is-Zyngas-core-competency/answer/Brandon-Smietana). Quora. Retrieved 2011-07-10.
[15] http://www.easywebsiteoptimizer.com
[16] http://www.globalmaxer.com/
[17] http://useartisan.com
[18] http://sitespect.com
[19] http://optimizely.com
[20] http://www.wired.com/wiredenterprise/2012/05/test-everything/
[21] http://visualwebsiteoptimizer.com
[22] http://convert.com
[23] http://unbounce.com
[24] http://monetate.com
[25] http://promote.autonomy.com/promote/products/optimost.page
[26] http://www.plumb5.com
[27] http://www.whichmvt.com

Multivariate testing


In statistics, multivariate testing or multi-variable testing is a technique for testing hypotheses on complex multi-variable systems, especially used in testing market perceptions.[1]

In internet marketing
In internet marketing, multivariate testing is a process by which more than one component of a website may be tested in a live environment. It can be thought of in simple terms as numerous A/B tests performed on one page at the same time. A/B tests are usually performed to determine the better of two content variations; multivariate testing can theoretically test the effectiveness of limitless combinations. The only limits on the number of combinations and the number of variables in a multivariate test are the amount of time it will take to get a statistically valid sample of visitors, and computational power.

Multivariate testing is usually employed in order to ascertain which content or creative variation produces the best improvement in the defined goals of a website, whether that be user registrations or successful completion of a checkout process (that is, the conversion rate).[2] Dramatic increases can be seen through testing different copy text, form layouts and even landing page images and background colours. However, not all elements produce the same increase in conversions, and by looking at the results from different tests, it is possible to identify those elements that consistently tend to produce the greatest increase in conversions.[3]

Testing can be carried out on a dynamically generated website by setting up the server to display the different variations of content in equal proportions to incoming visitors. Statistics on how each visitor went on to behave after seeing the content under test must then be gathered and presented. Outsourced services can also be used to provide multivariate testing on websites with minor changes to page coding. These services insert their content into predefined areas of a site and monitor user behavior. In a nutshell, multivariate testing lets website visitors vote with their clicks for the content they prefer, the content that stands the best chance of leading them to a defined goal. The testing is transparent to the visitor, with all commercial solutions capable of ensuring that each visitor is shown the same content on every visit. Some websites benefit from constant 24/7 continuous optimization, as visitor response to creatives and layouts differs by time of day, day of week, or even season.

Multivariate testing is currently an area of high growth in internet marketing, as it helps website owners ensure that they are getting the most from the visitors arriving at their site. Areas such as search engine optimization and pay-per-click advertising bring visitors to a site and have been used extensively by many organisations, but multivariate testing allows internet marketers to ensure that visitors are shown the right offers, content and layout to convert them to a sale, registration or other desired action once they arrive at the website.

There are two principal approaches used to achieve multivariate testing on websites. The first is page tagging, a process in which the website creator inserts JavaScript into the site to inject content variants and monitor visitor response. Page tagging typically tracks what a visitor viewed on the website, how long that visitor remained on the site, and any click- or conversion-related actions performed. Page tagging is often done by a technical team rather than by the online marketer who designs the test and interprets the results in the light of usability analysis.[4] Later refinements of this method allow a single common tag to be deployed across all pages, reducing deployment time and removing the need for redeployment between tests. Companies known to employ a tag-based method of multivariate testing include Visual Website Optimizer, Monetate, TraceAd Analytics, Avenseo, Conversion Works, Adobe, Business Intelligence Group GmbH (B.I.G.), Amadesa, DIVOLUTION, Maxymiser, Webtrends Optimize, Conversion Voodoo, Google Website Optimizer (now defunct), Google Content Experiments, GlobalMaxer, Optimizely, Vertster and Autonomy Corporation.

The second principal approach does not require page tagging. By establishing a DNS proxy or hosting within a website's own datacenter, it is possible to intercept and process all web traffic to and from the site undergoing testing, insert variants, and monitor visitor response. In this case, all logic sits server-side rather than browser-side, and after initial DNS changes are made, no further technical involvement is required from the website's point of view. SiteSpect is known to employ this method of implementation. Multivariate testing can also be applied to email body content and mobile web pages.
In addition to testing the efficacy of various creative/content executions on a website, the principles of multivariate testing can be, and often are, used to test various offer combinations. Examples include testing various price points, purchase incentives, premiums, trial periods or other similar purchase incentives, both individually and in combination with each other. The value of this is that marketers (both traditional and online) can use multivariate testing principles online to quickly ascertain and predict the effectiveness of offers without going through the more traditional multivariate testing methods, which take significantly more time and money (focus groups, telephone surveys, etc.).
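Whichever approach delivers the variants, commercial tools ensure a returning visitor sees the same content on every visit. One common way to achieve that is deterministic hash-based bucketing; the following is an illustrative sketch, not the method of any particular vendor:

```python
import hashlib

def assign_variant(visitor_id, variants):
    """Deterministically bucket a visitor so the same person
    sees the same variant on every visit."""
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    return variants[int(digest, 16) % len(variants)]

variants = ["control", "variation-1", "variation-2"]
chosen = assign_variant("visitor-42", variants)
# The assignment is a pure function of the visitor id, so it is
# stable across visits without storing any per-visitor state.
assert chosen == assign_variant("visitor-42", variants)
```

Because the hash distributes visitor ids roughly uniformly, this also divides traffic approximately evenly across the variants, as the equal-proportion serving described above requires.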


Design of experiments
Statistical testing relies on the design of experiments. Several methods are in use for multivariate testing:
1. Discrete choice, and what has evolved into choice modeling, is the technique that won Daniel McFadden the Nobel Prize in Economics in 2000. Choice modeling models how people make tradeoffs in the context of a purchase decision. By systematically varying the attributes or content elements, one can quantify their impact on an outcome, such as a purchase decision. Most important are the interaction effects uncovered, which neither Taguchi methods nor optimal design solve for.[5]
2. Optimal design involves iterations and waves of testing. Optimal design allows marketers not only to test the maximum number of creative permutations in the shortest period of time, but also to take into account relationships, interactions, and constraints across content elements on a website. This allows one to find the optimal solution unencumbered by limitations.
3. Taguchi methods: with multiple variations of content in multiple locations on a website, a large number of combinations need to be statistically tested, and medium- or low-traffic websites can take some time to accumulate a large enough sample of visitors to decide which content gives the best performance. For example, if 3 different images are to be tested in 3 locations, there are 27 combinations to test. Taguchi methods (namely Taguchi orthogonal arrays) can be used in the design of experiments to reduce the number of combinations tested while still giving statistically valid results on individual content elements.[6] Taguchi uses fractional factorial designs.
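The 3-images-in-3-locations example can be sketched in code: the full factorial requires 27 runs, while a standard Taguchi L9 orthogonal array (in which every pair of factor levels appears together exactly once) covers three three-level factors in only 9 runs. The array below is the usual L9 restricted to three factors:

```python
from itertools import product

levels = [0, 1, 2]  # e.g., 3 candidate images for each of 3 page locations
full_factorial = list(product(levels, repeat=3))
print(len(full_factorial))  # 27 combinations to test exhaustively

# Taguchi L9 orthogonal array for three 3-level factors:
# each level of each factor appears 3 times, and every pair of
# factor levels appears together exactly once.
l9 = [
    (0, 0, 0), (0, 1, 1), (0, 2, 2),
    (1, 0, 1), (1, 1, 2), (1, 2, 0),
    (2, 0, 2), (2, 1, 0), (2, 2, 1),
]
print(len(l9))  # 9 runs instead of 27
```

The saving comes at a cost: a fractional design like L9 estimates main effects cleanly but cannot resolve all interaction effects, which is exactly the limitation noted under discrete choice above.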

[1] Josef A. Mazanec and Helmut Strasser (2000). A Nonparametric Approach to Perceptions-Based Market Segmentation: Foundations. Springer. ISBN 3-211-83473-7.
[2] "Experimentation & Testing: A Primer" (http://www.kaushik.net/avinash/2006/05/experimentation-and-testing-a-primer.html). Avinash Kaushik. 2006-05-22.
[3] "Conversion/Testing: 10 Factors to Test that Could Increase the Conversion Rate of your Landing Pages", by Sumantra Roy, 06/05/2007 (http://www.wilsonweb.com/conversion/sumantra-landing-pages.htm)
[4] "Web Analytics Demystified", "Web Analytics and Data Collection: The Page Tag", by Judah Phillips (http://judah.webanalyticsdemystified.com/2007/07/web-analytics-and-data-collection-the-page-tag.html)
[5] MarketingNPV, "3 Ways to Accelerate Your Learning Process" (http://www.marketingnpv.com/articles/features/3_Ways_to_Accelerate_Your_Learning_Process)
[6] "Scientific Web Site Optimization using AB Split Testing, Multi Variable Testing, and The Taguchi Method", by Matthew Roche, 07/26/2004 (http://www.webpronews.com/topnews/2004/07/26/scientific-web-site-optimization-using-ab-split-testing-multi-variable-testing-and-the-taguchi-method)

Multivariate landing page optimization



Multivariate landing page optimization (MVLPO) is a specific form of landing page optimization where multiple variations of visual elements (e.g., graphics, text) on a webpage are evaluated. For example, a given page may have k choices for the title, m choices for the featured image or graphic, and n choices for the company logo. This example yields k × m × n landing page configurations. The first application of an experimental design for MVLPO was performed by Moskowitz Jacobs Inc. in 1998 as a simulation/demonstration project for LEGO. MVLPO did not become a mainstream approach until 2003 or 2004. Multivariate landing page optimization can be executed in a live (production) environment, or through simulations and market research surveys.

Multivariate landing page optimization is based on experimental design (e.g., discrete choice, conjoint analysis, Taguchi methods, IDDEA, etc.), which tests a structured combination of webpage elements. Some vendors use a "full factorial" approach, which tests all possible combinations of elements. This approach requires a smaller sample size (typically many thousands) than traditional fractional Taguchi designs to achieve statistical significance. This quality is one reason that choice modeling won the Nobel Prize in 2000. Fractional designs typically used in simulation environments require the testing of small subsets of possible combinations, and have a higher margin of error.
Some critics of the approach question the possible interactions between the elements of the webpages, and the inability of most fractional designs to address this issue. To resolve the limitations of fractional designs, an advanced simulation method based on the Rule Developing Experimentation (RDE) paradigm was introduced.[1] RDE creates individual models for each respondent, discovers any and all synergies and suppressions among the elements,[2] uncovers attitudinal segmentation, and allows for databasing across tests and over time.[3]

Live environment execution

In live environment MVLPO execution, a special tool makes dynamic changes to a page so that visitors are directed to different executions of landing pages created according to an experimental design. The system keeps track of the visitors and their behavior, including their conversion rate, time spent on the page, and so on. Once sufficient data has accumulated, the system estimates the impact of individual components on the target measurement (e.g., conversion rate).
Live environment execution has the following advantages:
- Capable of testing the effect of variations as a real-life experience
- Generally transparent to visitors
- Relatively simple and inexpensive to execute
Live environment execution has the following disadvantages:
- High cost and increased complexity involved in modifying a production-level website
- Long period of time required to achieve statistically reliable data, due to variations in the amount of traffic that generates the data necessary for a decision
- Likely inappropriate for low-traffic, high-importance websites when the site administrators do not want to lose any potential customers



Simulation (survey) execution

In simulation (survey) MVLPO execution, the foundation consists of advanced market research techniques. In the research phase, respondents are directed to a survey that presents them with a set of experimentally designed combinations of a landing page. The respondents rate each version based on some factor (e.g., purchase intent). At the end of the research phase, regression analysis models are created either for individual pages or for the entire panel of pages. The outcome relates the presence or absence of page elements on the different landing page executions to the respondents' ratings. These results can be used to synthesize new landing pages as combinations of the top-scoring elements, optimized for subgroups or market segments, with or without interactions.[4]
Simulation execution has the following advantages:
- Faster and easier to prepare and execute, in many cases, than live environment execution
- Applicable to low-traffic websites
- Capable of producing more robust and rich data because of increased control over the page design
Simulation execution has the following disadvantages:
- Possible bias due to a simulated rather than a live environment
- The need to recruit, and optionally incentivize, respondents
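As a toy sketch of how element presence can be related to ratings, the simplest possible estimate of an element's effect is the difference in mean ratings with and without it. The data and the difference-of-means approach below are purely illustrative; real studies fit regression models as described above:

```python
# Hypothetical survey data: each row is (headline_shown, image_shown, rating)
responses = [
    (1, 1, 8), (1, 0, 6), (0, 1, 5), (0, 0, 3),
    (1, 1, 7), (1, 0, 5), (0, 1, 6), (0, 0, 4),
]

def element_effect(data, col):
    """Mean rating with the element present minus mean rating without it."""
    present = [row[-1] for row in data if row[col] == 1]
    absent = [row[-1] for row in data if row[col] == 0]
    return sum(present) / len(present) - sum(absent) / len(absent)

print(element_effect(responses, 0))  # headline main effect: 2.0
print(element_effect(responses, 1))  # image main effect: 2.0
```

With a balanced design such as this one, the difference of means equals the main-effect coefficient a dummy-variable regression would produce; with interactions or unbalanced data, a full regression is needed.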

[1] Howard R. Moskowitz; Alex Gofman (2007-04-11). Selling Blue Elephants: How to make great products that people want BEFORE they even know they want them. Wharton School Publishing. pp. 272. ISBN 0-13-613668-0.
[2] Alex Gofman. 2006. Emergent Scenarios, Synergies, And Suppressions Uncovered within Conjoint Analysis. Journal of Sensory Studies, 21(4): 373-414. doi:10.1111/j.1745-459X.2006.00072.x
[3] Alex Gofman (2007-09-21). "Improving the Stickiness of Your Website" (http://www.ftpress.com/articles/article.aspx?p=1015178). InformIT Network. Financial Times Press. Retrieved 2007-09-22.
[4] Alex Gofman, Howard Moskowitz, and Tonis Mets. 2009. Integrating Science into Web Design: Consumer Driven Website Optimization. The Journal of Consumer Marketing, 26(4): 286-298. doi:10.1108/07363760910965882.

Purchase funnel


The purchase or purchasing funnel is a consumer-focused marketing model which illustrates the theoretical customer journey towards the purchase of a product or service. In 1898, E. St. Elmo Lewis developed a model which mapped a theoretical customer journey from the moment a brand or product attracted consumer attention to the point of action or purchase.[1] St. Elmo Lewis's idea is often referred to as the AIDA model, an acronym which stands for Awareness, Interest, Desire, and Action. This staged process is summarized below:
- Awareness: the customer is aware of the existence of a product or service
- Interest: actively expressing an interest in a product group
- Desire: aspiring to a particular brand or product
- Action: taking the next step towards purchasing the chosen product

This early model has been evolved by marketing consultants and academics to cater for the modern customer and is now referred to in marketing as the purchase funnel. Many different consumer purchase models exist in marketing today, but it is generally accepted that the modern purchase funnel has more stages,[2] considers repurchase intent and takes into account new technologies and changes in consumer purchase behaviour.[3] The Purchase Funnel is also often referred to as the customer funnel, marketing funnel, or sales funnel. The concept of associating the funnel model with the AIDA concept was first proposed in Bond Salesmanship by William W. Townsend in 1924.[4] The purchase funnel concept is used in marketing to guide promotional campaigns targeting different stages of the customer journey, and also as a basis for customer relationship management (CRM) programmes.

[1] Barry, Thomas. 1987. The Development of the Hierarchy of Effects: An Historical Perspective. Current Issues and Research in Advertising, 251-295.
[2] A modern purchase funnel concept (2009) (http://www.marketing-made-simple.com/articles/purchase-funnel.htm)
[3] The consumer decision journey - McKinsey Quarterly (2009) (http://www.mckinseyquarterly.com/Media_Entertainment/Publishing/The_consumer_decision_journey_2373)
[4] "The salesman should visualize his whole problem of developing the sales steps as the forcing by compression of a broad and general concept of facts through a funnel which produces the specific and favorable consideration of one fact. The process is continually from the general to the specific, and the visualizing of the funnel has helped many salesmen to lead a customer from Attention to Interest, and beyond" (p. 109).

Customer lifecycle management



Customer Lifecycle Management, or CLM, is the measurement of multiple customer-related metrics, which, when analyzed over a period of time, indicate the performance of a business.[1] The overall scope of the CLM implementation process encompasses all domains or departments of an organization, and generally brings all sources of static and dynamic data, marketing processes, and value-added services to a unified decision-supporting platform through iterative phases[2] of customer acquisition, retention, cross- and up-selling, and lapsed-customer win-back.[3][4] Some detailed CLM models further break down these phases into acquisition, introduction to products, profiling of customers, growth of customer base, cultivation of loyalty among customers, and termination of customer relationship.[5] According to a DM Review magazine article by Claudia Imhoff et al., "The purpose of the customer life cycle is to define and communicate the stages through which a customer progresses when considering, purchasing and using products, and the associated business processes a company uses to move the customer through the customer life cycle."[6]

[1] What is it, and how important is it to your small business? (http://www.salesboom.com/whitepapers/what_is_clm_whitepaper_summary.html)
[2] http://www.ubivent.com/
[3] http://www.realmarket.com/required/rappdigital4.pdf
[4] Customer life-cycle focus (http://www.wantrealdata.com/resources/cust_life_cycle/index.html)
[5] The concept of customer lifecycle management (http://www.nokia.co.uk/NOKIA_COM_1/Operators/Business_drivers/Customer_Loyalty_&_Retention/concept_of_lifecycle_management_780x540.pdf)
[6] "Building the Customer-Centric Enterprise" (http://www.dmreview.com/issues/20001101/2813-1.html). DM Review Magazine. November 2002. Retrieved 2008-11-04.

External links
Customer Lifecycle Management (CLM) - What Is It, and How Important Is It to Your Small Business?

Customer lifetime value



In marketing, customer lifetime value (CLV), lifetime customer value (LCV), or user lifetime value (LTV) is a prediction of the net profit attributed to the entire future relationship with a customer. The prediction model can have varying levels of sophistication and accuracy, ranging from a crude heuristic to the use of complex predictive analytics techniques.

One of the first accounts of it is in the 1988 book Database Marketing, and includes detailed worked examples.[1][2]

Uses and Advantages

Customer lifetime value has intuitive appeal as a marketing concept, because in theory it represents exactly how much each customer is worth in monetary terms, and therefore exactly how much a marketing department should be willing to spend to acquire each customer, especially in direct response marketing. Lifetime value is typically used to judge the appropriateness of the costs of acquisition of a customer. For example, if a new customer costs $50 to acquire (COCA, or cost of customer acquisition), and their lifetime value is $60, then the customer is judged to be profitable, and acquisition of additional similar customers is acceptable. Additionally, CLV is used to calculate customer equity.
Advantages of CLV:
management of the customer relationship as an asset
monitoring the impact of management strategies and marketing investments on the value of customer assets
determination of the optimal level of investment in marketing and sales activities
encouraging marketers to focus on the long-term value of customers instead of investing resources in acquiring "cheap" customers with low total revenue value[3]
implementation of sensitivity analysis in order to determine the impact of spending extra money on each customer[4]
optimal allocation of limited resources for ongoing marketing activities in order to achieve a maximum return
a good basis for selecting customers and for decision making regarding customer-specific communication strategies
measurement of customer loyalty (proportion of purchase, probability of purchase and repurchase, purchase frequency and sequence, etc.)[5]

Misuses and Downsides

NPV vs Nominal Prediction
The most accurate CLV predictions are made using the net present value (NPV) of each future net profit source, so that the revenue to be received from the customer in the future is recognized at the future value of money. However, NPV calculations require additional sophistication, including maintenance of a discount rate, which leads most organizations to instead calculate CLV using the nominal (non-discounted) figures. Nominal CLV predictions are biased slightly high, scaling higher the farther into the future the revenues are expected from customers.



Net Profit vs Revenue

A common mistake is for a CLV prediction to calculate the total revenue or even gross margin associated with a customer. This can cause CLV to be a multiple of its actual value; it must instead be calculated as the full net profit expected from the customer.

Segment Inaccuracy
Opponents often cite the inaccuracy of a CLV prediction to argue they should not be used to drive significant business decisions. For example, major drivers to the value of a customer such as the nature of the relationship are often not available as appropriately structured data and thus not included in the formula.

Comparison with Intuition

Moreover, predictors such as specific demographics of a customer group may have an effect that is intuitively obvious to an experienced marketer but is often omitted from CLV predictions, causing inaccuracies in certain customer segments.

Effects on Business Practices

Its use as a marketing metric tends to place greater emphasis on customer service and long-term customer satisfaction, rather than on maximizing short-term sales.

Predictive Models
Simple Ecommerce Example
LTV = (average monthly revenue per customer × gross margin per customer) / monthly churn rate
For example: $100 average monthly spend × 25% margin / 5% monthly churn = $500 LTV
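The arithmetic above can be sketched as a small function (the function and argument names are illustrative, not from the source):

```python
def simple_ltv(avg_monthly_revenue, gross_margin, monthly_churn_rate):
    """Simple (undiscounted) customer lifetime value.

    Margin earned per customer per month, divided by the monthly
    churn rate (1 / churn = expected customer lifetime in months).
    """
    return (avg_monthly_revenue * gross_margin) / monthly_churn_rate

# The worked example from the text: $100 avg spend, 25% margin, 5% churn
ltv = simple_ltv(100.0, 0.25, 0.05)
print(ltv)  # 500.0
```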

A Retention Example
This approach proceeds in four steps:
1. forecasting of the remaining customer lifetime in years
2. forecasting of future revenues year-by-year, based on estimates of future products purchased and prices paid
3. estimation of the costs of delivering those products
4. calculation of the net present value of these future amounts[7]

Forecasting accuracy and difficulty in tracking customers over time may affect the CLV calculation process.
Inputs:
Churn rate - the percentage of customers who end their relationship with a company in a given period. One minus the churn rate is the retention rate. Most models can be written using either churn rate or retention rate. If the model uses only one churn rate, the assumption is that the churn rate is constant across the life of the customer relationship.
Discount rate - the cost of capital used to discount future revenue from a customer. Discounting is an advanced topic that is frequently ignored in customer lifetime value calculations. The current interest rate is sometimes used as a simple (but incorrect) proxy for the discount rate.
Contribution margin.

Retention cost - the amount of money a company has to spend in a given period to retain an existing customer. Retention costs include customer support, billing, promotional incentives, etc.
Period - the unit of time into which a customer relationship is divided for analysis. A year is the most commonly used period. Customer lifetime value is a multi-period calculation, usually stretching 3-7 years into the future. In practice, analysis beyond this point is viewed as too speculative to be reliable. The number of periods used in the calculation is sometimes referred to as the model horizon.
Model


$$\mathrm{CLV} = GC \cdot \sum_{i=1}^{n} \frac{r^{i}}{(1+d)^{i}} - M \cdot \sum_{i=1}^{n} \frac{r^{i-1}}{(1+d)^{i-0.5}}$$

where GC is the yearly gross contribution per customer, M is the (relevant) retention cost per customer per year (this formula assumes the retention activities are paid for each mid-year and they only affect those who were retained in the previous year), n is the horizon (in years), r is the yearly retention rate, and d is the yearly discount rate.
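The multi-period model described here can be sketched directly in code (a sketch; GC is the yearly gross contribution per customer, m the yearly retention cost, n the horizon in years, r the yearly retention rate, and d the yearly discount rate, with retention costs assumed paid mid-year):

```python
def clv(gc, m, n, r, d):
    """Multi-period customer lifetime value with retention costs.

    Sums discounted, survival-weighted contributions over n years and
    subtracts discounted retention costs paid at each mid-year.
    """
    revenue = sum(gc * r**i / (1 + d)**i for i in range(1, n + 1))
    retention = sum(m * r**(i - 1) / (1 + d)**(i - 0.5) for i in range(1, n + 1))
    return revenue - retention

# Hypothetical inputs: $100 contribution, $10 retention cost,
# 5-year horizon, 80% retention, 10% discount rate
value = clv(100.0, 10.0, 5, 0.8, 0.1)
```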

Simplified Models
It is often helpful to estimate customer lifetime value with a simple model to make initial assessments of customer segments and targeting. Possibly the simplest way to estimate CLV is to assume constant and long-lasting values for contribution margin, retention rate, and discount rates, as follows [9]:
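A standard closed form consistent with these assumptions (a reconstruction, not verbatim from the source: constant contribution margin m, constant retention rate r and discount rate d, infinite horizon) is obtained by summing the geometric series:

```latex
\mathrm{CLV} \;=\; m \sum_{i=1}^{\infty} \left( \frac{r}{1+d} \right)^{i}
\;=\; m \cdot \frac{r}{1 + d - r}
```

The factor r/(1+d-r) is often called the margin multiple; for example, with r = 0.8 and d = 0.1 it is 0.8/0.3 ≈ 2.67, so each customer is worth roughly 2.67 years' margin.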

[1] Shaw, R. & Stone, M. (1988) Database Marketing, Gower, London
[2] Shaw, R. & Stone, M. (1990) Database Marketing, Wiley, US edition
[3] Customer Lifetime Value (http://www.optimove.com/customer-lifetime-value.aspx)
[4] Gary Cokins (2009). Performance Management: Integrating Strategy Execution, Methodologies, Risk and Analytics. ISBN 978-0-470-44998-1. p. 177
[5] V. Kumar (2008). Customer Lifetime Value. ISBN 978-1-60198-156-1. p. 6
[6] http://www.quora.com/How-do-you-calculate-Customer-Lifetime-Value
[7] Lynette Ryals (2008). Managing Customers Profitably. ISBN 978-0-470-06063-6. p. 85
[8] Berger, P. D. and Nasr, N. I. (1998), Customer lifetime value: Marketing models and applications. Journal of Interactive Marketing, 12: 17-30. doi:10.1002/(SICI)1520-6653(199824)12:1<17::AID-DIR3>3.0.CO;2-K
[9] Adapted from "Customer Profitability and Lifetime Value," HBS Note 503-019

Predictive analytics


Predictive analytics encompasses a variety of techniques from statistics, modeling, machine learning, and data mining that analyze current and historical facts to make predictions about future events.[1][2] In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision making for candidate transactions. Predictive analytics is used in actuarial science,[3] marketing,[4] financial services,[5] insurance, telecommunications,[6] retail,[7] travel,[8] healthcare,[9] pharmaceuticals[10] and other fields.

One of the most well known applications is credit scoring,[1] which is used throughout financial services. Scoring models process a customer's credit history, loan application, customer data, etc., in order to rank-order individuals by their likelihood of making future credit payments on time. A well-known example is the FICO score.

Predictive analytics is an area of statistical analysis that deals with extracting information from data and using it to predict future trends and behavior patterns. The core of predictive analytics relies on capturing relationships between explanatory variables and the predicted variables from past occurrences, and exploiting it to predict future outcomes. It is important to note, however, that the accuracy and usability of results will depend greatly on the level of data analysis and the quality of assumptions.

Generally, the term predictive analytics is used to mean predictive modeling, "scoring" data with predictive models, and forecasting. However, people are increasingly using the term to refer to related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.

Predictive models
Predictive models analyze past performance to assess how likely a customer is to exhibit a specific behavior in the future in order to improve marketing effectiveness. This category also encompasses models that seek out subtle data patterns to answer questions about customer performance, such as fraud detection models. Predictive models often perform calculations during live transactions, for example, to evaluate the risk or opportunity of a given customer or transaction, in order to guide a decision. With advancement in computing speed, individual agent modeling systems can simulate human behavior or reaction to given stimuli or scenarios. The new term for animating data specifically linked to an individual in a simulated environment is avatar analytics.

Descriptive models
Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products. Descriptive models do not rank-order customers by their likelihood of taking a particular action the way predictive models do. Descriptive models can be used, for example, to categorize customers by their product preferences and life stage. Descriptive modeling tools can be utilized to develop further models that can simulate large number of individualized agents and make predictions.



Decision models
Decision models describe the relationship between all the elements of a decision - the known data (including results of predictive models), the decision, and the forecast results of the decision - in order to predict the results of decisions involving many variables. These models can be used in optimization, maximizing certain outcomes while minimizing others. Decision models are generally used to develop decision logic or a set of business rules that will produce the desired action for every customer or circumstance.

Although predictive analytics can be put to use in many applications, we outline a few examples where predictive analytics has shown positive impact in recent years.

Analytical customer relationship management (CRM)

Analytical customer relationship management is a frequent commercial application of predictive analysis. Methods of predictive analysis are applied to customer data to pursue CRM objectives, chief among them a holistic view of the customer no matter where the customer's information resides in the company or which department is involved. CRM uses predictive analysis in applications for marketing campaigns, sales, and customer services, to name a few. These tools are required in order for a company to posture and focus its efforts effectively across the breadth of its customer base. Companies must analyze and understand the products that are in demand or have the potential for high demand, predict customers' buying habits in order to promote relevant products at multiple touch points, and proactively identify and mitigate issues that have the potential to lose customers or reduce the company's ability to gain new ones.

Clinical decision support systems

Experts use predictive analysis in health care primarily to determine which patients are at risk of developing certain conditions, like diabetes, asthma, heart disease and other lifetime illnesses. Additionally, sophisticated clinical decision support systems incorporate predictive analytics to support medical decision making at the point of care. A working definition has been proposed by Robert Hayward of the Centre for Health Evidence: "Clinical Decision Support Systems link health observations with health knowledge to influence health choices by clinicians for improved health care."

Collection analytics
Every portfolio has a set of delinquent customers who do not make their payments on time. The financial institution has to undertake collection activities on these customers to recover the amounts due. A lot of collection resources are wasted on customers who are difficult or impossible to recover. Predictive analytics can help optimize the allocation of collection resources by identifying the most effective collection agencies, contact strategies, legal actions and other strategies for each customer, thus significantly increasing recovery while reducing collection costs.

Cross-sell
Corporate organizations often collect and maintain abundant data (e.g. customer records, sale transactions), and exploiting hidden relationships in the data can provide a competitive advantage. For an organization that offers multiple products, predictive analytics can help analyze customers' spending, usage and other behavior, leading to efficient cross sales, or selling additional products to current customers.[2] This directly leads to higher profitability per customer and stronger customer relationships.



Customer retention
With the number of competing services available, businesses need to focus their efforts on maintaining continuous consumer satisfaction, rewarding consumer loyalty and minimizing customer attrition. Businesses tend to respond to customer attrition reactively, acting only after the customer has initiated the process to terminate service. At this stage, the chance of changing the customer's decision is almost zero. Proper application of predictive analytics can lead to a more proactive retention strategy. By frequent examination of a customer's past service usage, service performance, spending and other behavior patterns, predictive models can determine the likelihood of a customer terminating service in the near future.[6] An intervention with lucrative offers can increase the chance of retaining the customer. Silent attrition, the behavior of a customer to slowly but steadily reduce usage, is another problem many companies face. Predictive analytics can also predict this behavior, so that the company can take proper actions to increase customer activity.

Direct marketing
When marketing consumer products and services, there is the challenge of keeping up with competing products and consumer behavior. Apart from identifying prospects, predictive analytics can also help to identify the most effective combination of product versions, marketing material, communication channels and timing that should be used to target a given consumer. The goal of predictive analytics is typically to lower the cost per order or cost per action.

Fraud detection
Fraud is a big problem for many businesses and can take various forms: inaccurate credit applications, fraudulent transactions (both offline and online), identity theft and false insurance claims. These problems plague firms of all sizes in many industries. Some examples of likely victims are credit card issuers, insurance companies,[11] retail merchants, manufacturers, business-to-business suppliers and even service providers. A predictive model can help weed out the "bads" and reduce a business's exposure to fraud. Predictive modeling can also be used to identify high-risk fraud candidates in business or the public sector. Nigrini developed a risk-scoring method to identify audit targets. He describes the use of this approach to detect fraud in the franchisee sales reports of an international fast-food chain. Each location is scored using 10 predictors. The 10 scores are then weighted to give one final overall risk score for each location. The same scoring approach was also used to identify high-risk check kiting accounts, potentially fraudulent travel agents, and questionable vendors. A reasonably complex model was used to identify fraudulent monthly reports submitted by divisional controllers.[12] The Internal Revenue Service (IRS) of the United States also uses predictive analytics to mine tax returns and identify tax fraud.[11] Recent advancements in technology have also introduced predictive behavior analysis for web fraud detection. This type of solution utilizes heuristics in order to study normal web user behavior and detect anomalies indicating fraud attempts.
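The weighted-score approach described above can be sketched as follows (a sketch only; the weights and predictor values are hypothetical and do not come from Nigrini's actual model):

```python
def risk_score(predictor_scores, weights):
    """Combine per-predictor risk scores (e.g. 10 scores per location,
    each scaled 0-1) into one weighted overall risk score."""
    assert len(predictor_scores) == len(weights)
    total_weight = sum(weights)
    return sum(s * w for s, w in zip(predictor_scores, weights)) / total_weight

# Hypothetical location scored on three predictors instead of ten;
# the first predictor is weighted three times as heavily
score = risk_score([0.9, 0.2, 0.5], [3.0, 1.0, 1.0])
```

Locations can then be ranked by `score` and the highest-scoring ones selected as audit targets.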

Portfolio, product or economy-level prediction

Often the focus of analysis is not the consumer but the product, portfolio, firm, industry or even the economy. For example, a retailer might be interested in predicting store-level demand for inventory management purposes. Or the Federal Reserve Board might be interested in predicting the unemployment rate for the next year. These types of problems can be addressed by predictive analytics using time series techniques (see below). They can also be addressed via machine learning approaches which transform the original time series into a feature vector space, where the learning algorithm finds patterns that have predictive power.[13][14]



Risk management
Risk management techniques are employed to predict, and benefit from, a future scenario. The Capital Asset Pricing Model (CAP-M) predicts the best portfolio to maximize return; Probabilistic Risk Assessment (PRA), when combined with mini-Delphi techniques and statistical approaches, yields accurate forecasts; and RiskAoA is a stand-alone predictive tool.[15] These are three examples of approaches that can extend from project to market, and from the near to the long term. Underwriting (see below) and other business approaches identify risk management as a predictive method.

Many businesses have to account for risk exposure due to their different services and determine the cost needed to cover the risk. For example, auto insurance providers need to accurately determine the amount of premium to charge to cover each automobile and driver. A financial company needs to assess a borrower's potential and ability to pay before granting a loan. For a health insurance provider, predictive analytics can analyze a few years of past medical claims data, as well as lab, pharmacy and other records where available, to predict how expensive an enrollee is likely to be in the future. Predictive analytics can help underwrite these quantities by predicting the chances of illness, default, bankruptcy, etc. Predictive analytics can streamline the process of customer acquisition by predicting the future risk behavior of a customer using application-level data.[3] Predictive analytics in the form of credit scores have reduced the amount of time it takes for loan approvals, especially in the mortgage market, where lending decisions are now made in a matter of hours rather than days or even weeks. Proper predictive analytics can lead to proper pricing decisions, which can help mitigate future risk of default.

Technology and Big Data influences on Predictive Analytics

Big Data is a collection of data sets that are so large and complex that they become awkward to work with using traditional database management tools. The volume, variety and velocity of Big Data have introduced challenges across the board for capture, storage, search, sharing, analysis, and visualization. Examples of big data sources include web logs, RFID and sensor data, social networks, Internet search indexing, call detail records, military surveillance, and complex data in the astronomic, biogeochemical, genomic, and atmospheric sciences. Thanks to technological advances in computer hardware (faster CPUs, cheaper memory, and MPP architectures) and new technologies such as Hadoop, MapReduce, and in-database and text analytics for processing Big Data, it is now feasible to collect, analyze, and mine massive amounts of structured and unstructured data for new insights.[11] Today, exploring Big Data and using predictive analytics is within reach of more organizations than ever before.

Statistical techniques
The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.

Regression Models
Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there is a wide variety of models that can be applied while performing predictive analytics. Some of them are briefly discussed below.

Linear regression model
The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters. These parameters are adjusted so that a measure of fit is optimized. Much of the effort in model fitting is focused on minimizing the size of the residual, as well as ensuring that it is randomly distributed with respect to the model predictions. The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied. Once the model has been estimated, we would like to know whether the predictor variables belong in the model, i.e. whether the estimate of each variable's contribution is reliable. To do this we can check the statistical significance of the model's coefficients, which can be measured using the t-statistic. This amounts to testing whether the coefficient is significantly different from zero. How well the model predicts the dependent variable based on the values of the independent variables can be assessed by using the R² statistic. It measures the predictive power of the model, i.e. the proportion of the total variation in the dependent variable that is explained (accounted for) by variation in the independent variables.
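An OLS fit and the R² statistic described above can be sketched with NumPy on synthetic data (the variable names and the true coefficients are illustrative):

```python
import numpy as np

# Synthetic data: y = 2 + 3x + noise (true intercept 2, true slope 3)
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
y = 2.0 + 3.0 * x + rng.normal(0, 0.5, size=100)

# Ordinary least squares: minimize the sum of squared residuals
X = np.column_stack([np.ones_like(x), x])        # design matrix with intercept
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta

# R²: proportion of the variation in y explained by the model
y_hat = X @ beta
r_squared = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - np.mean(y)) ** 2)
```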


Discrete choice models

Multivariate regression (above) is generally used when the response variable is continuous and has an unbounded range. Often the response variable may not be continuous but rather discrete. While mathematically it is feasible to apply multivariate regression to discrete ordered dependent variables, some of the assumptions behind the theory of multivariate linear regression no longer hold, and there are other techniques, such as discrete choice models, which are better suited for this type of analysis. If the dependent variable is discrete, some of those superior methods are logistic regression, multinomial logit and probit models. Logistic regression and probit models are used when the dependent variable is binary.
Logistic regression
In a classification setting, assigning outcome probabilities to observations can be achieved through the use of a logistic model, which is basically a method that transforms information about the binary dependent variable into an unbounded continuous variable and estimates a regular multivariate model (see Allison's Logistic Regression for more information on the theory of logistic regression). The Wald and likelihood-ratio tests are used to test the statistical significance of each coefficient b in the model (analogous to the t-tests used in OLS regression; see above). A test assessing the goodness-of-fit of a classification model is the "percentage correctly predicted."
Multinomial logistic regression
An extension of the binary logit model to cases where the dependent variable has more than two categories is the multinomial logit model. In such cases, collapsing the data into two categories might not make good sense or may lead to a loss in the richness of the data. The multinomial logit model is the appropriate technique in these cases, especially when the dependent variable categories are not ordered (for example, colors like red, blue and green). Some authors have extended multinomial regression to include feature selection/importance methods such as random multinomial logit.
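The binary logistic model described above can be sketched from scratch with NumPy, fit by simple gradient ascent on the log-likelihood, together with the "percentage correctly predicted" goodness-of-fit measure (a minimal sketch on synthetic data; no regularization, and the true coefficients are illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Binary logistic regression via gradient ascent on the
    log-likelihood; column 0 of X is assumed to be the intercept."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        w += lr * X.T @ (y - p) / len(y)   # gradient of the log-likelihood
    return w

# Synthetic binary data: P(y=1) rises with x (true coefficients -1, 2)
rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, size=500)
y = (rng.uniform(size=500) < sigmoid(-1.0 + 2.0 * x)).astype(float)
X = np.column_stack([np.ones_like(x), x])
w = fit_logistic(X, y)

# "Percentage correctly predicted" goodness-of-fit
accuracy = np.mean((sigmoid(X @ w) > 0.5) == (y == 1))
```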

Probit regression
Probit models offer an alternative to logistic regression for modeling categorical dependent variables. Even though the outcomes tend to be similar, the underlying distributions are different. Probit models are popular in social sciences like economics. A good way to understand the key difference between probit and logit models is to assume that there is a latent variable z. We do not observe z but instead observe y, which takes the value 0 or 1. In the logit model we assume that y follows a logistic distribution. In the probit model we assume that y follows a standard normal distribution. Note that in social sciences (e.g. economics), probit is often used to model situations where the observed variable y is continuous but takes values between 0 and 1.
Logit versus probit
The probit model has been around longer than the logit model. They behave similarly, except that the logistic distribution tends to be slightly flatter-tailed. One of the reasons the logit model was formulated was that the probit model was computationally difficult, due to the requirement of numerically calculating integrals; modern computing, however, has made this computation fairly simple. The coefficients obtained from the logit and probit models are fairly close, but the odds ratio is easier to interpret in the logit model. Practical reasons for choosing the probit model over the logistic model would be:
a strong belief that the underlying distribution is normal
the actual event is not a binary outcome (e.g., bankruptcy status) but a proportion (e.g., proportion of population at different debt levels)


Time series models

Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as autocorrelation, trend or seasonal variation) that should be accounted for. As a result, standard regression techniques cannot be applied to time series data, and methodology has been developed to decompose the trend, seasonal and cyclical components of the series. Modeling the dynamic path of a variable can improve forecasts, since the predictable component of the series can be projected into the future. Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive (AR) models and moving average (MA) models. The Box-Jenkins methodology (1976), developed by George Box and G. M. Jenkins, combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis. ARIMA (autoregressive integrated moving average) models, on the other hand, are used to describe non-stationary time series: such series have a pronounced trend and do not have a constant long-run mean or variance. Box and Jenkins suggest differencing a non-stationary time series to obtain a stationary series to which an ARMA model can be applied. Box and Jenkins proposed a three-stage methodology comprising model identification, estimation and validation. The identification stage involves determining whether the series is stationary and whether seasonality is present, by examining plots of the series and its autocorrelation and partial autocorrelation functions. In the estimation stage, models are estimated using non-linear time series or maximum likelihood estimation procedures. Finally, the validation stage involves diagnostic checking, such as plotting the residuals to detect outliers and assess the evidence of model fit.
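The simplest member of this family, an AR(1) model y_t = c + φ·y_{t-1} + e_t, can be estimated by least squares on the lagged series (a sketch on synthetic data; the true parameter values are illustrative):

```python
import numpy as np

# Simulate a stationary AR(1) series with c = 1.0 and phi = 0.8
rng = np.random.default_rng(2)
n, phi_true, c_true = 2000, 0.8, 1.0
y = np.zeros(n)
for t in range(1, n):
    y[t] = c_true + phi_true * y[t - 1] + rng.normal(0, 1.0)

# Estimation: regress y_t on a constant and its own lag y_{t-1}
X = np.column_stack([np.ones(n - 1), y[:-1]])
(c_hat, phi_hat), *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# One-step-ahead forecast from the last observation
forecast = c_hat + phi_hat * y[-1]
```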
In recent years time series models have become more sophisticated and attempt to model conditional heteroskedasticity, with models such as ARCH (autoregressive conditional heteroskedasticity) and GARCH (generalized autoregressive conditional heteroskedasticity) frequently used for financial time series. In addition, time series models are used to understand inter-relationships among economic variables represented by systems of equations, using VAR (vector autoregression) and structural VAR models.


Survival or duration analysis

Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences, such as economics, as well as in engineering (reliability and failure time analysis).

Censoring and non-normality, which are characteristic of survival data, generate difficulty when trying to analyze the data using conventional statistical models such as multiple linear regression. The normal distribution, being a symmetric distribution, takes positive as well as negative values, but duration by its very nature cannot be negative, so normality cannot be assumed when dealing with duration/survival data; the normality assumption of regression models is therefore violated. In survival analysis, censored observations arise whenever the dependent variable of interest represents the time to a terminal event and the duration of the study is limited in time. The assumption is that, had the data not been censored, they would be representative of the population of interest.

An important concept in survival analysis is the hazard rate, defined as the probability that the event will occur at time t conditional on surviving until time t. A related concept is the survival function, defined as the probability of surviving to time t. Most models try to model the hazard rate by choosing an underlying distribution depending on the shape of the hazard function. A distribution whose hazard function slopes upward is said to have positive duration dependence, a decreasing hazard shows negative duration dependence, and a constant hazard is a memoryless process, usually characterized by the exponential distribution. Some of the distributional choices in survival models are the F, gamma, Weibull, log-normal, inverse normal and exponential distributions, among others. All of these distributions are for a non-negative random variable.
Duration models can be parametric, non-parametric or semi-parametric. Two commonly used models are the Kaplan-Meier estimator (non-parametric) and the Cox proportional hazards model (semi-parametric).
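The Kaplan-Meier estimator handles right-censored observations by shrinking the risk set at each event time. A minimal sketch in plain Python (the function name and toy data are illustrative):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the survival function S(t).

    times  : observed durations
    events : 1 if the terminal event occurred, 0 if the observation was censored
    Returns a list of (t, S(t)) pairs at each distinct event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        deaths = sum(1 for tt, e in data if tt == t and e == 1)
        total = sum(1 for tt, _ in data if tt == t)
        if deaths:
            s *= (n_at_risk - deaths) / n_at_risk  # survival drops at event times only
            curve.append((t, s))
        n_at_risk -= total  # both events and censored cases leave the risk set
        i += total
    return curve

# Six subjects; events = 0 marks censored observations
km = kaplan_meier([2, 3, 3, 5, 8, 9], [1, 1, 0, 1, 0, 1])
print(km)
```

Censored observations lower the number at risk without lowering the survival estimate, which is exactly the mechanism that conventional regression cannot express.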

Classification and regression trees

Classification and regression trees (CART) is a non-parametric decision tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively. Decision trees are formed by a collection of rules based on variables in the modeling data set:

- Rules based on variable values are selected to get the best split, differentiating observations based on the dependent variable.
- Once a rule is selected and splits a node into two, the same process is applied to each child node (i.e. it is a recursive procedure).
- Splitting stops when CART detects that no further gain can be made, or some pre-set stopping rules are met. (Alternatively, the data are split as much as possible and the tree is later pruned.)

Each branch of the tree ends in a terminal node. Each observation falls into one and exactly one terminal node, and each terminal node is uniquely defined by a set of rules. A very popular method for predictive analytics is Leo Breiman's random forests, or derived versions of this technique such as random multinomial logit.
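The split-selection step described above can be sketched for a single numeric variable and binary labels, using the Gini impurity criterion that CART commonly minimizes (all names and data here are illustrative):

```python
def gini(labels):
    """Gini impurity of a set of binary class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    p1 = sum(labels) / n
    return 1.0 - p1 * p1 - (1.0 - p1) ** 2

def best_split(xs, ys):
    """Exhaustively search the thresholds on one variable and return the
    (weighted impurity, threshold) pair that best separates the labels."""
    best = (float("inf"), None)
    for t in sorted(set(xs)):
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        if score < best[0]:
            best = (score, t)
    return best

xs = [1, 2, 3, 10, 11, 12]
ys = [0, 0, 0, 1, 1, 1]
print(best_split(xs, ys))  # a threshold between the two clusters gives impurity 0
```

A full tree simply applies this search over every variable at every node, recursing on each child until a stopping rule is met.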



Multivariate adaptive regression splines

Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions. An important concept associated with regression splines is that of a knot: the point where one local regression model gives way to another, and thus the point of intersection between two splines. In MARS, basis functions are the tool used for generalizing the search for knots. Basis functions are a set of functions used to represent the information contained in one or more variables. The MARS model almost always creates its basis functions in pairs. The approach deliberately overfits the model and then prunes it back to reach the optimal model. The algorithm is computationally very intensive, and in practice an upper limit on the number of basis functions must be specified.
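The paired basis functions MARS creates at a knot are mirrored hinge functions, max(0, x - t) and max(0, t - x). A piecewise-linear model is then a weighted sum of such hinges plus an intercept; the knot and coefficients below are illustrative:

```python
def hinge_pair(knot):
    """The mirrored pair of basis functions MARS creates at a knot t:
    max(0, x - t) and max(0, t - x)."""
    return (lambda x: max(0.0, x - knot),
            lambda x: max(0.0, knot - x))

h_plus, h_minus = hinge_pair(5.0)

# A simple piecewise-linear model: intercept plus a weighted hinge pair,
# giving slope -(-0.5) = 0.5 left of the knot and 1.5 to its right.
def model(x):
    return 2.0 + 1.5 * h_plus(x) - 0.5 * h_minus(x)

print(model(3.0), model(5.0), model(8.0))
```

The knot is where the slope changes; MARS searches over candidate knots and prunes the resulting basis functions to balance fit against complexity.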

Machine learning techniques

Machine learning, a branch of artificial intelligence, was originally employed to develop techniques to enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields, including medical diagnostics, credit card fraud detection, face and speech recognition, and analysis of the stock market. In certain applications it is sufficient to directly predict the dependent variable without focusing on the underlying relationships between variables. In other cases, the underlying relationships can be very complex and the mathematical form of the dependencies unknown. For such cases, machine learning techniques emulate human cognition and learn from training examples to predict future events. A brief discussion of some of the methods commonly used for predictive analytics is provided below. A detailed study of machine learning can be found in Mitchell (1997).

Neural networks

Neural networks are sophisticated nonlinear modeling techniques that are able to model complex functions. They can be applied to problems of prediction, classification or control in a wide spectrum of fields such as finance, cognitive psychology/neuroscience, medicine, engineering, and physics. Neural networks are used when the exact nature of the relationship between inputs and output is not known. A key feature of neural networks is that they learn this relationship through training. There are three types of training used by different networks: supervised training, unsupervised training and reinforcement learning, with supervised being the most common. Some examples of neural network training techniques are backpropagation, quick propagation, conjugate gradient descent, projection operator, Delta-Bar-Delta, etc. Some common network architectures are multilayer perceptrons, Kohonen self-organizing networks, Hopfield networks, etc.
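As a toy illustration of supervised training, a single logistic neuron, the one-unit special case of backpropagation-style gradient descent, can learn a linearly separable function from examples (everything here, including the learning rate and epoch count, is illustrative):

```python
import math

def train_neuron(samples, epochs=4000, lr=0.5):
    """Train one logistic neuron by per-sample gradient descent."""
    w0 = w1 = w2 = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            z = w0 + w1 * x1 + w2 * x2
            out = 1.0 / (1.0 + math.exp(-z))  # sigmoid activation
            err = out - target                # cross-entropy gradient at the output
            w0 -= lr * err                    # update bias and weights
            w1 -= lr * err * x1
            w2 -= lr * err * x2
    return lambda x1, x2: 1.0 / (1.0 + math.exp(-(w0 + w1 * x1 + w2 * x2)))

# Learn the (linearly separable) OR function from four training examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
net = train_neuron(data)
print([round(net(a, b)) for (a, b), _ in data])
```

Multilayer networks repeat the same gradient computation layer by layer, which is what the backpropagation algorithm formalizes.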
Radial basis functions

A radial basis function (RBF) is a function which has built into it a distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data. Radial basis functions have been applied in the area of neural networks, where they are used as a replacement for the sigmoidal transfer function. Such networks have three layers: the input layer, the hidden layer with the RBF non-linearity, and a linear output layer. The most popular choice for the non-linearity is the Gaussian. RBF networks have the advantage of being less prone to getting trapped in local minima than feed-forward networks such as the multilayer perceptron.
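A smoothing sketch using a Gaussian radial basis function, in the normalized form where predictions are distance-weighted averages of the training values (a simplification of a trained RBF network; the names, width and data are illustrative):

```python
import math

def rbf_smooth(centers, values, width):
    """Gaussian RBF smoother: each prediction is an average of the training
    values, weighted by a Gaussian of the distance to each center."""
    def phi(r):
        return math.exp(-(r / width) ** 2)  # Gaussian radial basis function

    def predict(x):
        ws = [phi(abs(x - c)) for c in centers]
        return sum(w * v for w, v in zip(ws, values)) / sum(ws)

    return predict

f = rbf_smooth(centers=[0.0, 1.0, 2.0], values=[0.0, 1.0, 0.0], width=0.3)
print(round(f(1.0), 3), round(f(0.5), 3))
```

With a narrow width the smoother reproduces the training values near each center and blends them in between; a full RBF network instead learns the output-layer weights, typically by solving a linear system.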

Support vector machines

Support vector machines (SVM) are used to detect and exploit complex patterns in data by clustering, classifying and ranking the data. They are learning machines that are used to perform binary classifications and regression estimations. They commonly use kernel-based methods to apply linear classification techniques to non-linear classification problems. There are a number of types of SVM, such as linear, polynomial and sigmoid.

Naive Bayes

Naive Bayes, based on Bayes' conditional probability rule, is used for performing classification tasks. Naive Bayes assumes the predictors are statistically independent, which makes it an effective classification tool that is easy to interpret. It is best employed when faced with the curse of dimensionality, i.e. when the number of predictors is very high.

k-nearest neighbours

The k-nearest neighbour algorithm (kNN) belongs to the class of pattern recognition statistical methods. The method does not impose a priori any assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative values. A new sample is classified by calculating the distance to the nearest neighbouring training case; the sign of that point determines the classification of the sample. In the k-nearest neighbour classifier, the k nearest points are considered and the sign of the majority is used to classify the sample. The performance of the kNN algorithm is influenced by three main factors: (1) the distance measure used to locate the nearest neighbours; (2) the decision rule used to derive a classification from the k nearest neighbours; and (3) the number of neighbours used to classify the new sample.
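The decision rule just described (a distance measure plus a majority vote among the k nearest training cases) fits in a few lines; the distance choice, function name and data are illustrative:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a point by majority vote among its k nearest training cases.
    train is a list of ((x, y), label) pairs; distance is Euclidean."""
    by_dist = sorted(train, key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in by_dist[:k])
    return votes.most_common(1)[0][0]

train = [((0, 0), "neg"), ((0, 1), "neg"), ((1, 0), "neg"),
         ((5, 5), "pos"), ((5, 6), "pos"), ((6, 5), "pos")]
print(knn_classify(train, (5.2, 5.1)))
print(knn_classify(train, (0.3, 0.2)))
```

Swapping the distance function, the vote rule, or k changes exactly the three factors listed above, which makes the algorithm easy to tune.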
It can be proved that, unlike other methods, the kNN method is universally asymptotically convergent: as the size of the training set increases, if the observations are independent and identically distributed (i.i.d.), then regardless of the distribution from which the sample is drawn, the predicted class will converge to the class assignment that minimizes misclassification error. See Devroye et al. (1996).

Geospatial predictive modeling

Conceptually, geospatial predictive modeling is rooted in the principle that the occurrences of the events being modeled are limited in distribution. Occurrences of events are neither uniform nor random in distribution; there are spatial environment factors (infrastructure, sociocultural, topographic, etc.) that constrain and influence where events occur. Geospatial predictive modeling attempts to describe those constraints and influences by spatially correlating occurrences of historical geospatial locations with the environmental factors that represent those constraints and influences. Geospatial predictive modeling is a process for analyzing events through a geographic filter in order to make statements of likelihood for event occurrence or emergence.


Historically, using predictive analytics tools, as well as understanding the results they delivered, required advanced skills. However, modern predictive analytics tools are no longer restricted to IT specialists. As more organizations adopt predictive analytics into decision-making processes and integrate it into their operations, they are creating a shift in the market toward business users as the primary consumers of the information. Business users want tools they can use on their own. Vendors are responding by creating new software that removes the mathematical complexity, provides user-friendly graphic interfaces and/or builds in shortcuts that can, for example, recognize the kind of data available and suggest an appropriate predictive model.[16] Predictive analytics tools have become sophisticated enough to adequately present and dissect data problems, so that any data-savvy information worker can utilize them to analyze data and retrieve meaningful, useful results.[2] For example, modern tools present findings using simple charts, graphs, and scores that indicate the likelihood of possible outcomes.[17]

There are numerous tools available in the marketplace that help with the execution of predictive analytics. These range from those that require very little user sophistication to those designed for the expert practitioner. The difference between these tools is often in the level of customization and heavy data lifting allowed. Notable open source predictive analytic tools include KNIME, Orange, Python, R, RapidMiner and Weka.


Notable commercial predictive analytic tools include Angoss KnowledgeSTUDIO, Exacaster, IBM SPSS Statistics and IBM SPSS Modeler, KXEN Modeler, Mathematica, MATLAB, Oracle Data Mining (ODM), Pervasive, SAP, SAS and SAS Enterprise Miner, STATISTICA, and TIBCO.

In an attempt to provide a standard language for expressing predictive models, the Predictive Model Markup Language (PMML) has been proposed. This XML-based language provides a way for the different tools to define predictive models and to share them between PMML-compliant applications. PMML 4.0 was released in June 2009.

[1] Nyce, Charles (2007), Predictive Analytics White Paper (http://www.aicpcu.org/doc/predictivemodelingwhitepaper.pdf), American Institute for Chartered Property Casualty Underwriters/Insurance Institute of America, p. 1.
[2] Eckerson, Wayne (May 10, 2007), Extending the Value of Your Data Warehousing Investment (http://tdwi.org/articles/2007/05/10/predictive-analytics.aspx?sc_lang=en), The Data Warehouse Institute.
[3] Conz, Nathan (September 2, 2008), "Insurers Shift to Customer-focused Predictive Analytics Technologies" (http://www.insurancetech.com/business-intelligence/210600271), Insurance & Technology.
[4] Fletcher, Heather (March 2, 2011), "The 7 Best Uses for Predictive Analytics in Multichannel Marketing" (http://www.targetmarketingmag.com/article/7-best-uses-predictive-analytics-modeling-multichannel-marketing/1#), Target Marketing.
[5] Korn, Sue (April 21, 2011), "The Opportunity for Predictive Analytics in Finance" (http://www.hpcwire.com/hpcwire/2011-04-21/the_opportunity_for_predictive_analytics_in_finance.html), HPC Wire.
[6] Barkin, Eric (May 2011), "CRM + Predictive Analytics: Why It All Adds Up" (http://www.destinationcrm.com/Articles/Editorial/Magazine-Features/CRM---Predictive-Analytics-Why-It-All-Adds-Up-74700.aspx), Destination CRM.
[7] Das, Krantik; Vidyashankar, G.S. (July 1, 2006), "Competitive Advantage in Retail Through Analytics: Developing Insights, Creating Value" (http://www.information-management.com/infodirect/20060707/1057744-1.html), Information Management.
[8] McDonald, Michèle (September 2, 2010), "New Technology Taps Predictive Analytics to Target Travel Recommendations" (http://www.travelmarketreport.com/technology?articleID=4259&LP=1), Travel Market Report.
[9] Stevenson, Erin (December 16, 2011), "Tech Beat: Can you pronounce health care predictive analytics?" (http://www.times-standard.com/business/ci_19561141), Times-Standard.
[10] McKay, Lauren (August 2009), "The New Prescription for Pharma" (http://www.destinationcrm.com/articles/Web-Exclusives/Web-Only-Bonus-Articles/The-New-Prescription-for-Pharma-55774.aspx), Destination CRM.

[11] Schiff, Mike (March 6, 2012), BI Experts: Why Predictive Analytics Will Continue to Grow (http://tdwi.org/Articles/2012/03/06/Predictive-Analytics-Growth.aspx?Page=1), The Data Warehouse Institute.
[12] Nigrini, Mark (June 2011). Forensic Analytics: Methods and Techniques for Forensic Accounting Investigations (http://www.wiley.com/WileyCDA/WileyTitle/productCd-0470890460.html). Hoboken, NJ: John Wiley & Sons. ISBN 978-0-470-89046-2.
[13] Dhar, Vasant (April 2011). "Prediction in Financial Markets: The Case for Small Disjuncts" (http://dl.acm.org/citation.cfm?id=1961191). ACM Transactions on Intelligent Systems and Technologies 2 (3).
[14] Dhar, Vasant; Chou, Dashin; Provost, Foster (October 2000). "Discovering Interesting Patterns in Investment Decision Making with GLOWER: A Genetic Learning Algorithm Overlaid With Entropy Reduction" (http://dl.acm.org/citation.cfm?id=593502). Data Mining and Knowledge Discovery 4 (4).
[15] https://acc.dau.mil/CommunityBrowser.aspx?id=126070
[16] Halper, Fran (November 1, 2011), "The Top 5 Trends in Predictive Analytics" (http://www.information-management.com/issues/21_6/the-top-5-trends-in-redictive-an-alytics-10021460-1.html), Information Management.
[17] MacLennan, Jamie (May 1, 2012), 5 Myths about Predictive Analytics (http://tdwi.org/articles/2012/05/01/5-predictive-analytics-myths.aspx), The Data Warehouse Institute.


Agresti, Alan (2002). Categorical Data Analysis. Hoboken: John Wiley and Sons. ISBN 0-471-36093-7.
Coggeshall, Stephen; Davies, John; Jones, Roger; Schutzer, Daniel, "Intelligent Security Systems," in Freedman, Roy S.; Flein, Robert A.; Lederman, Jess, eds. (1995). Artificial Intelligence in the Capital Markets. Chicago: Irwin. ISBN 1-55738-811-3.
Devroye, L.; Györfi, L.; Lugosi, G. (1996). A Probabilistic Theory of Pattern Recognition. New York: Springer-Verlag.
Enders, Walter (2004). Applied Time Series Econometrics. Hoboken: John Wiley and Sons. ISBN 0-521-83919-X.
Greene, William (2000). Econometric Analysis. Prentice Hall. ISBN 0-13-013297-7.
Guidère, Mathieu; Howard, N.; Argamon, Sh. (2009). Rich Language Analysis for Counterterrorism. Berlin, London, New York: Springer-Verlag. ISBN 978-3-642-01140-5.
Mitchell, Tom (1997). Machine Learning. New York: McGraw-Hill. ISBN 0-07-042807-7.
Tukey, John (1977). Exploratory Data Analysis. New York: Addison-Wesley. ISBN 0-201-07616-0.


Consumer behaviour
Consumer behaviour is the study of individuals, groups, or organizations and the processes they use to select, secure, and dispose of products, services, experiences, or ideas to satisfy needs, and of the impacts that these processes have on the consumer and society.[1] It blends elements from psychology, sociology, social anthropology and economics. It attempts to understand the decision-making processes of buyers, both individually and in groups. It studies characteristics of individual consumers, such as demographics and behavioural variables, in an attempt to understand people's wants. It also tries to assess influences on the consumer from groups such as family, friends, reference groups, and society in general.

Customer behaviour study is based on consumer buying behaviour, with the customer playing the three distinct roles of user, payer and buyer. Research has shown that consumer behaviour is difficult to predict, even for experts in the field.[2] Relationship marketing is an influential asset for customer behaviour analysis, as it has a keen interest in the re-discovery of the true meaning of marketing through the re-affirmation of the importance of the customer or buyer. Greater importance is also placed on consumer retention, customer relationship management, personalisation, customisation and one-to-one marketing.

Social functions can be categorized into social choice and welfare functions. Each method of vote counting can be treated as a social function, but if Arrow's possibility theorem is applied to a social function, a social welfare function is obtained. Some specifications of social functions are decisiveness, neutrality, anonymity, monotonicity, unanimity, homogeneity, and weak and strong Pareto optimality. No social choice function meets all of these requirements on an ordinal scale simultaneously. The most important characteristic of a social function is identification of the interactive effect of alternatives and the creation of a logical relation with the ranks.
Marketing provides services in order to satisfy customers. With that in mind, the productive system is considered from its beginning at the production level, to the end of the cycle, the consumer (Kioumarsi et al., 2009).

Black box model

ENVIRONMENTAL FACTORS
  Marketing stimuli: Product, Price, Place, Promotion
  Environmental stimuli: Economic, Technological, Political, Cultural, Demographic, Natural
BUYER'S BLACK BOX
  Buyer characteristics: Attitudes, Motivation, Perceptions, Personality, Lifestyle, Knowledge
  Decision process: Problem recognition, Information search, Alternative evaluation, Purchase decision, Post-purchase behaviour
BUYER'S RESPONSE
  Product choice, Brand choice, Dealer choice, Purchase timing, Purchase amount

The black box model shows the interaction of stimuli, consumer characteristics, decision process and consumer responses.[3] A distinction can be made between interpersonal stimuli (between people) and intrapersonal stimuli (within people).[4] The black box model is related to the black box theory of behaviourism, where the focus is not on the processes inside the consumer, but on the relation between the stimuli and the consumer's response. The marketing stimuli are planned and processed by companies, whereas the environmental stimuli are given by social factors, based on the economic, political and cultural circumstances of a society. The buyer's black box contains the buyer characteristics and the decision process, which determine the buyer's response.

The black box model considers the buyer's response as the result of a conscious, rational decision process, in which it is assumed that the buyer has recognized the problem. However, in reality many decisions are not made in awareness of a determined problem by the consumer.


Information search
Once the consumer has recognised a problem, they search for information on products and services that can solve that problem. Belch and Belch (2007) explain that consumers undertake both an internal (memory) and an external search. Sources of information include personal sources, commercial sources, public sources and personal experience.

The relevant internal psychological process that is associated with information search is perception. Perception is defined as "the process by which an individual receives, selects, organises, and interprets information to create a meaningful picture of the world". Consumers' tendency to search for information on goods and services makes it possible for researchers to forecast the purchasing plans of consumers using brief descriptions of the products of interest.[5]

The selective perception process:
- Selective exposure: consumers select which promotional messages they will expose themselves to.
- Selective attention: consumers select which promotional messages they will pay attention to.
- Selective comprehension: consumers interpret messages in line with their beliefs, attitudes, motives and experiences.
- Selective retention: consumers remember messages that are more meaningful or important to them.

The implications of this process help develop an effective promotional strategy and select which sources of information are more effective for the brand.

Evaluation of alternatives
At this time the consumer compares the brands and products that are in their evoked set. The evoked set refers to the number of alternatives that are considered by consumers during the problem-solving process. Sometimes also known as the consideration set, this set tends to be small relative to the total number of options available. How can the marketing organisation increase the likelihood that its brand is part of the consumer's evoked set? Consumers evaluate alternatives in terms of the functional and psychological benefits that they offer. The marketing organisation needs to understand which benefits consumers are seeking and therefore which attributes are most important in terms of making a decision. It also needs to check other brands in the customer's consideration set to prepare the right plan for its own brand.

Purchase decision
Once the alternatives have been evaluated, the consumer is ready to make a purchase decision. Sometimes purchase intention does not result in an actual purchase. The marketing organisation must facilitate the consumer to act on their purchase intention, and it can use a variety of techniques to achieve this. The provision of credit or payment terms may encourage purchase, or a sales promotion such as the opportunity to receive a premium or enter a competition may provide an incentive to buy now. The relevant internal psychological process that is associated with the purchase decision is integration. Once integration is achieved, the organisation can influence purchase decisions much more easily.

There are five stages in the consumer buying process:[6]
1. Problem recognition: the identification of something the consumer needs.
2. Information search: the consumer searches internal knowledge or external sources for information on the product.
3. Evaluation of alternatives: considering whether a better or cheaper product is available.
4. Purchase decision: the choice to purchase the product.
5. Purchase: the actual purchase of the product.[6]
This is the complete process that a consumer will most likely go through, whether recognisably or not, when buying a product.


Postpurchase evaluation
The EKB (Engel, Kollat, Blackwell) model was further developed by Rice (1993), who suggested there should be a feedback loop. Foxall (2005) further stresses the importance of the post-purchase evaluation, which is key because of its influence on future purchase patterns.

Other influences
Consumer behaviour is influenced by internal conditions such as demographics, psychographics (lifestyle), personality, motivation, knowledge, attitudes, beliefs, and feelings. Psychological factors include an individual's motivation, perception, attitude and belief, while personal factors include income level, personality, age, occupation and lifestyle. Behaviour can also be affected by external influences, such as culture, sub-culture, locality, royalty, ethnicity, family, social class, past experience, reference groups, lifestyle and marketing mix factors.

[1] Kuester, Sabine (2012): MKT 301: Strategic Marketing & Marketing in Specific Industry Contexts, University of Mannheim, p. 110.
[2] J. Scott Armstrong (1991). "Prediction of Consumer Behavior by Experts and Novices" (http://marketing.wharton.upenn.edu/documents/research/Prediction of consumer behavior.pdf). Journal of Consumer Research (Journal of Consumer Research Inc.) 18: 251-256.
[3] Sandhusen, Richard L.: Marketing (2000). Cf. S. 218 (http://books.google.com/books?id=8qlKaIq0AccC&printsec=frontcover#PPA218,M1)
[4] Sandhusen, Richard L.: Marketing (2000). Cf. S. 219 (http://books.google.com/books?id=8qlKaIq0AccC&printsec=frontcover#PPA219,M1)
[5] J. Scott Armstrong and Terry Overton (1971). "Brief vs. Comprehensive Descriptions in Measuring Intentions to Purchase" (http://marketing.wharton.upenn.edu/ideas/pdf/armstrong2/brief.pdf). Journal of Marketing Research 5: 114-117.
[6] Khosla, Swati (2010). "Consumer psychology: The essence of Marketing" (http://web-l4.ebscohost.com.ezproxy-f.deakin.edu.au/ehost/detail?vid=5&hid=106&sid=4657a35a-29b0-4753-b833-46a39c374718@sessionmgr113&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ==#db=ehh&AN=60641974). International Journal of Educational Administration 2 (2): 220. Retrieved 2012-05-16.

Further reading
Blackwell, Miniard and Engel (2006). Consumer Behaviour (10th ed.). Thomson Learning.
Deaton, Angus; Muellbauer, John, Economics and Consumer Behavior (books?id=B81RYQsx2l0C&printsec=frontcover), Cambridge; New York: Cambridge University Press, 1980. ISBN 0-521-22850-6.
Foxall, G. (2005). Understanding Consumer Choice. Basingstoke: Palgrave Macmillan.
Howard, J.; Sheth, J.N. (1968), Theory of Buyer Behavior, J. Wiley & Sons, New York, NY.
Kardes, Frank R.; Cronley, Maria L.; Cline, Thomas W., Consumer Behavior (books?id=nwew7nJ6000C&printsec=frontcover), Mason, OH: South-Western, Cengage Learning, 2011. ISBN 978-0-538-74540-6.

Laermer, Richard; Simmons, Mark, Punk Marketing, New York: Harper Collins, 2007. ISBN 978-0-06-115110-1 (Review of the book by Marilyn Scrizzi, in Journal of Consumer Marketing 24(7), 2007).
Loudon, D.L. (1988), Consumer Behavior: Concepts and Applications, McGraw Hill, London.
McNair, B. (1958), Retail Development, Harper & Row, New York, NY.
Packard, Vance, The Hidden Persuaders, New York: D. McKay Co., 1957.
Schiffman, L.G. (1993), Consumer Behavior, Prentice Hall International, London.
Schwartz, Barry (2004), The Paradox of Choice: Why More Is Less, Ecco, New York.
Shell, Ellen Ruppel, Cheap: The High Cost of Discount Culture (books?id=-wDkR4Jt1FcC&printsec=frontcover), New York: Penguin Press, 2009. ISBN 978-1-59420-215-5.
Solomon, M.R. (1994), Consumer Behavior, Allyn & Bacon, London.


External links
The Society for Consumer Psychology

Consumer confusion
Consumer confusion is a state of mind that leads to consumers making imperfect purchasing decisions or lacking confidence in the correctness of their purchasing decisions.[1]

Confusion occurs when a consumer fails to correctly understand or interpret products and services.[2] This, in turn, leads to them making imperfect purchasing decisions. This concept is important to marketeers because consumer confusion may result in reduced sales, reduced satisfaction with products and difficulty communicating effectively with the consumer. It is a widely studied and broad subject which is a part of Consumer behaviour and Decision making.[3]

Choice overload
Choice overload (sometimes called overchoice in the context of confusion) occurs when the set of purchasing options becomes overwhelmingly large for a consumer. A good example is wine in the UK, where supermarkets may present over 1000 different products, leaving the consumer with a difficult choice process. Whilst large assortments do have some positive aspects (principally novelty and stimulation[4] and optimal solutions[5]), any assortment greater than around 12-14 products leads to confusion, and specifically to transferring the ownership of quality assurance to the consumer.[6] What this means in practice is reduced levels of satisfaction with purchases from large assortments, as a consumer may be left with doubt that they have succeeded in finding the "best" product. Choice overload is growing, with ever larger supermarkets and the internet being two of the main causes.[6]

Similarity

Similarity is where two or more products lack differentiating features, which prevents the consumer from easily distinguishing between them. Differentiating features could be any from the marketing mix, or anything else associated with the product, such as brand. Similarity of products has the negative effect of increasing the cognitive effort required to make a decision[7] and of reducing the perceived accuracy of the decision. Both of these reduce satisfaction with the decision and thereby satisfaction with the purchase.



Lack of information
A consumer may suffer from lack of information if the information doesn't exist, is unavailable to them at the required moment or is too complex for them to use in their decision making process.

Information overload
Too much information surrounding a product or service disturbs the consumer by forcing them to engage in a more complex and time-consuming purchasing process. This, and the fact that it is difficult to compare and evaluate the information when it is superfluous, leaves the consumer unsatisfied, insecure about which choice to make, and more prone to delay the decision-making, and thereby the actual purchase.[8]

Lack of consistency
When information provided on a product and/or service is not consistent with the consumer's previously held beliefs and convictions, ambiguity occurs in the understanding of the product.[8]

[1] Walsh, K (1999). "Marketing and Public Sector Management". European Journal of Marketing 28 (3): 63.
[2] Turnbull, P W (2000). "Customer Confusion: The Mobile Phone Market". Journal of Marketing Management 16 (1-3): 143-163.
[3] Soloman, M R. Consumer Behaviour: Buying, Having and Being. Prentice Hall, p. 7.
[4] Darden; Griffin (1994).
[5] Baumol; Ide (1956).
[6] Broniarczyk, S M (2008). Product Assortment and Consumer Psychology.
[7] Loken, M (1986).
[8] Walsh et al. (2007). "Consumer confusion proneness: Scale development, validation, and application". Journal of Marketing Management 23.


Special A: Human factors and ergonomics

Human factors and ergonomics
Human factors and ergonomics (HF&E) is a multidisciplinary field incorporating contributions from psychology, engineering, industrial design, graphic design, statistics, operations research and anthropometry. In essence it is the study of designing equipment and devices that fit the human body and its cognitive abilities. The two terms "human factors" and "ergonomics" are essentially synonymous.[1][2]

The International Ergonomics Association defines ergonomics or human factors as follows:[2] Ergonomics (or human factors) is the scientific discipline concerned with the understanding of interactions among humans and other elements of a system, and the profession that applies theory, principles, data and methods to design in order to optimize human well-being and overall system performance.

HF&E is employed to fulfill the goals of health and safety and productivity. It is relevant in the design of such things as safe furniture and easy-to-use interfaces to machines and equipment. Proper ergonomic design is necessary to prevent repetitive strain injuries and other musculoskeletal disorders, which can develop over time and can lead to long-term disability.

Human factors and ergonomics is concerned with the fit between the user, equipment and their environments. It takes account of the user's capabilities and limitations in seeking to ensure that tasks, functions, information and the environment suit each user. To assess the fit between a person and the technology used, human factors specialists or ergonomists consider the job (activity) being done and the demands on the user; the equipment used (its size, shape, and how appropriate it is for the task); and the information used (how it is presented, accessed, and changed).

Ergonomics draws on many disciplines in its study of humans and their environments, including anthropometry, biomechanics, mechanical engineering, industrial engineering, industrial design, information design, kinesiology, physiology and psychology.



The term ergonomics, from the Greek ἔργον (ergon), meaning "work", and νόμος (nomos), meaning "natural laws", first entered the modern lexicon when Wojciech Jastrzębowski used the word in his 1857 article Rys ergonomji czyli nauki o pracy, opartej na prawdach poczerpniętych z Nauki Przyrody (The Outline of Ergonomics, i.e. Science of Work, Based on the Truths Taken from the Natural Science).[3] The introduction of the term to the English lexicon is widely attributed to the British psychologist Hywel Murrell, at a 1949 meeting at the UK's Admiralty which led to the foundation of The Ergonomics Society. He used it to encompass the studies in which he had been engaged during and after World War II.[4]

The expression human factors is a North American term which has been adopted to emphasise the application of the same methods to non-work-related situations. A "human factor" is a physical or cognitive property of an individual, or a social behavior specific to humans, that may influence the functioning of technological systems. The terms "human factors" and "ergonomics" are essentially synonymous.[1]
Ergonomics: the science of designing user interaction with equipment and workplaces to fit the user.

History of the field

The foundations of the science of ergonomics appear to have been laid within the context of the culture of Ancient Greece. A good deal of evidence indicates that Greek civilization in the 5th century BC used ergonomic principles in the design of its tools, jobs, and workplaces. One outstanding example of this can be found in the description Hippocrates gave of how a surgeon's workplace should be designed and how the tools he uses should be arranged.[5] The archaeological record also shows that the early Egyptian dynasties made tools and household equipment that illustrated ergonomic principles. It is therefore questionable whether the claim by Marmaras, et al., regarding the origin of ergonomics can be justified.[6]

In the 19th century, Frederick Winslow Taylor pioneered the "scientific management" method, which proposed a way to find the optimum method of carrying out a given task. Taylor found that he could, for example, triple the amount of coal that workers were shoveling by incrementally reducing the size and weight of coal shovels until the fastest shoveling rate was reached.[7] Frank and Lillian Gilbreth expanded Taylor's methods in the early 1900s to develop the "time and motion study". They aimed to improve efficiency by eliminating unnecessary steps and actions. By applying this approach, the Gilbreths reduced the number of motions in bricklaying from 18 to 4.5, allowing bricklayers to increase their productivity from 120 to 350 bricks per hour.[7]

Prior to World War I, the focus of aviation psychology was on the aviator himself, but the war shifted the focus onto the aircraft, in particular the design of controls and displays and the effects of altitude and environmental factors on the pilot. The war saw the emergence of aeromedical research and the need for testing and measurement methods. Studies on driver behaviour also started gaining momentum during this period, as Henry Ford started providing millions of Americans with automobiles.
Another major development during this period was the performance of aeromedical research. By the end of WWI, two aeronautical labs had been established, one at Brooks Air Force Base, Texas, and the other at Wright Field outside of Dayton, Ohio. Many tests were conducted to determine which characteristics differentiated the successful pilots from the unsuccessful ones. During the early 1930s, Edwin Link developed the first flight simulator. The trend continued, and more sophisticated simulators and test equipment were developed. Another significant development was in the civilian sector, where the effects of illumination on worker productivity were examined. This led to the identification of the Hawthorne Effect, which suggested that motivational factors could significantly influence human performance.[7]

World War II marked the development of new and complex machines and weaponry, and these made new demands on operators' cognition. It was no longer possible to adopt the Tayloristic principle of matching individuals to preexisting jobs; now the design of equipment had to take into account human limitations and take advantage of human capabilities. The decision-making, attention, situational awareness and hand-eye coordination of the machine's operator became key to the success or failure of a task. Much research was conducted to determine the human capabilities and limitations that equipment design had to accommodate, and much of it picked up where the aeromedical research between the wars had left off. An example is the study done by Fitts and Jones (1947), who studied the most effective configuration of control knobs to be used in aircraft cockpits. Much of this research carried over to other equipment, with the aim of making the controls and displays easier for operators to use. The entry of the terms "human factors" and "ergonomics" into the modern lexicon dates from this period. It was observed that fully functional aircraft, flown by the best-trained pilots, still crashed. In 1943, Alphonse Chapanis, a lieutenant in the U.S. Army, showed that this so-called "pilot error" could be greatly reduced when more logical and differentiable controls replaced confusing designs in airplane cockpits.
After the war, the Army Air Force published 19 volumes summarizing what had been established from research during the war.[7]

In the decades since WWII, HF&E has continued to flourish and diversify. Work by Elias Porter and others within the RAND Corporation after WWII extended the conception of HF&E. "As the thinking progressed, a new concept developed - that it was possible to view an organization such as an air-defense, man-machine system as a single organism and that it was possible to study the behavior of such an organism. It was the climate for a breakthrough."[8]

In the initial 20 years after WWII, most activities were carried out by the "founding fathers": Alphonse Chapanis, Paul Fitts, and Small. The beginning of the Cold War led to a major expansion of defense-supported research laboratories, and many labs established during WWII started expanding. Most of the research following the war was military-sponsored, and large sums of money were granted to universities to conduct it. The scope of the research also broadened from individual pieces of equipment to entire workstations and systems. Concurrently, many opportunities started opening up in the civilian industry, and the focus shifted from research to participation through advice to engineers in the design of equipment.

After 1965, the period saw a maturation of the discipline. The field has expanded with the development of the computer and computer applications.[7] The Space Age created new human factors issues such as weightlessness and extreme g-forces; tolerance of the harsh environment of space and its effects on the mind and body were widely studied. The dawn of the Information Age has resulted in the related field of human-computer interaction (HCI). Likewise, the growing demand for and competition among consumer goods and electronics has resulted in more companies including human factors in product design.


HF&E Organizations
Formed in 1946 in the UK, the oldest professional body for human factors specialists and ergonomists is The Institute of Ergonomics and Human Factors, formerly known as The Ergonomics Society. The Human Factors and Ergonomics Society (HFES) was founded in 1957. The Society's mission is to promote the discovery and exchange of knowledge concerning the characteristics of human beings that are applicable to the design of systems and devices of all kinds.

The International Ergonomics Association (IEA) is a federation of ergonomics and human factors societies from around the world. The mission of the IEA is to elaborate and advance ergonomics science and practice, and to improve the quality of life by expanding its scope of application and contribution to society. As of September 2008, the International Ergonomics Association has 46 federated societies and 2 affiliated societies.


Related organizations
The Institute of Occupational Medicine (IOM) was founded by the coal industry in 1969. From the outset the IOM employed ergonomics staff to apply ergonomics principles to the design of mining machinery and environments. To this day, the IOM continues ergonomics activities, especially in the fields of musculoskeletal disorders, heat stress, and the ergonomics of personal protective equipment (PPE). As for many in occupational ergonomics, the demands and requirements of an ageing UK workforce are a growing concern and interest to IOM ergonomists.

The International Society of Automotive Engineers (SAE) is a professional organization for mobility engineering professionals in the aerospace, automotive, and commercial vehicle industries. The Society is a standards development organization for the engineering of powered vehicles of all kinds, including cars, trucks, boats, and aircraft. The Society of Automotive Engineers has established a number of standards used in the automotive industry and elsewhere, and it encourages the design of vehicles in accordance with established human factors principles. It is one of the most influential organizations with respect to ergonomics work in automotive design, and it regularly holds conferences which address topics spanning all aspects of human factors and ergonomics.

Specializations within this field include visual ergonomics, cognitive ergonomics, usability, human-computer interaction, and user experience engineering. New terms are being generated all the time. For instance, "user trial engineer" may refer to a human factors professional who specialises in user trials. Although the names change, human factors professionals apply an understanding of human factors to the design of equipment, systems and working methods in order to improve comfort, health, safety and productivity. According to the International Ergonomics Association, within the discipline of ergonomics there exist domains of specialization:

Physical ergonomics is concerned with human anatomy, and some of the anthropometric, physiological and biomechanical characteristics as they relate to physical activity.[2]

Cognitive ergonomics is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system. (Relevant topics include mental workload, decision-making, skilled performance, human-computer interaction, human reliability, work stress and training as these may relate to human-system and human-computer interaction design.)[2]

Organizational ergonomics is concerned with the optimization of socio-technical systems, including their organizational structures, policies, and processes. (Relevant topics include communication, crew resource management, work design, design of working times, teamwork, participatory design, community ergonomics, cooperative work, new work programs, virtual organizations, telework, and quality management.)[2]

Environmental ergonomics is concerned with human interaction with the environment. The physical environment is characterized by climate, temperature, pressure, vibration, and light.[9]

There are more than twenty technical subgroups within the Human Factors and Ergonomics Society (HFES),[10] which indicates the range of applications for ergonomics.
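To illustrate the kind of anthropometric calculation that physical ergonomics relies on, the sketch below estimates a 5th-to-95th-percentile design range from a stature sample. The sample data are invented for illustration, and the normal-approximation percentiles are a common convention rather than anything prescribed by this article.

```python
import statistics

# Hypothetical stature sample (mm) for a target user population.
statures = [1620, 1655, 1683, 1701, 1725, 1748, 1760, 1772, 1795, 1831]

mean = statistics.mean(statures)
sd = statistics.stdev(statures)

# A common design convention is to accommodate the 5th-95th percentile
# of the population; assuming normality, z = +/-1.645 brackets that range.
p5 = mean - 1.645 * sd
p95 = mean + 1.645 * sd
print(f"adjustability range: {p5:.0f} mm to {p95:.0f} mm")
```

Equipment adjustable over roughly this range would, under the normality assumption, accommodate about 90% of the sampled population; users outside it need separate consideration.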



Human factors issues arise in simple systems and consumer products as well. Some examples include cellular telephones and other handheld devices that continue to shrink yet grow more complex (a phenomenon referred to as "creeping featurism"), millions of VCRs blinking "12:00" across the world because very few people can figure out how to program them, or alarm clocks that allow sleepy users to inadvertently turn off the alarm when they mean to hit 'snooze'. User-centered design (UCD), also known as the systems approach or the usability engineering life cycle, aims to improve the fit between user and system.

Ergonomic principles have been widely used in the design of both consumer and industrial products. Past examples include screwdriver handles made with serrations to improve finger grip, and the use of soft thermoplastic elastomers to increase friction between the skin of the hand and the handle surface. HF&E continues to be successfully applied in the fields of aerospace, aging, health care, IT, product design, transportation, training, nuclear and virtual environments, among others.

Physical ergonomics is important in the medical field, particularly to those diagnosed with physiological ailments or disorders such as arthritis (both chronic and temporary) or carpal tunnel syndrome. Pressure that is insignificant or imperceptible to those unaffected by these disorders may be very painful, or render a device unusable, for those who are. Many ergonomically designed products are also used or recommended to treat or prevent such disorders, and to treat pressure-related chronic pain.

One of the most prevalent types of work-related injuries is musculoskeletal disorders. Work-related musculoskeletal disorders (WRMDs) result in persistent pain, loss of functional capacity and work disability, but their initial diagnosis is difficult because it is mainly based on complaints of pain and other symptoms.[11] Every year 1.8 million U.S. workers experience WRMDs, and nearly 600,000 of the injuries are serious enough to cause workers to miss work.[12] Certain jobs or work conditions cause a higher rate of worker complaints of undue strain, localized fatigue, discomfort, or pain that does not go away after overnight rest. These types of jobs are often those involving activities such as repetitive and forceful exertions; frequent, heavy, or overhead lifts; awkward work positions; or use of vibrating equipment.[13] The Occupational Safety and Health Administration (OSHA) has found substantial evidence that ergonomics programs can cut workers' compensation costs, increase productivity and decrease employee turnover.[14] It is therefore important to gather data to identify jobs or work conditions that are most problematic, using sources such as injury and illness logs, medical records, and job analyses.[13]

The emerging field of human factors in highway safety uses human factors principles to understand the actions and capabilities of road users - car and truck drivers, pedestrians, bicyclists, etc. - and to use this knowledge to design roads and streets that reduce traffic collisions. Driver error is listed as a contributing factor in 44% of fatal collisions in the United States, so a topic of particular interest is how road users gather and process information about the road and its environment, and how to assist them in making appropriate decisions.[15]

Human factors practitioners come from a variety of backgrounds, though predominantly they are psychologists (from the various subfields of engineering psychology, cognitive psychology, perceptual psychology, applied psychology and experimental psychology) and physiologists. Designers (industrial, interaction, and graphic), anthropologists, technical communication scholars and computer scientists also contribute. Typically, an ergonomist will have an undergraduate degree in psychology, engineering, design or health sciences, and usually a master's or doctoral degree in a related discipline. Though some practitioners enter the field of human factors from other disciplines, both M.S. and Ph.D. degrees in human factors engineering are available from several universities worldwide. The Human Factors Research Group (HFRG) at the University of Nottingham provides human factors courses at both MSc and PhD level, including a distance learning course in Applied Ergonomics.[16] Other universities offering postgraduate courses in human factors in the UK include Loughborough University, Cranfield University and the University of Oxford.



Until recently, methods used to evaluate human factors and ergonomics ranged from simple questionnaires to more complex and expensive usability labs.[17] Some of the more common HF&E methods are listed below:

Ethnographic analysis: Using methods derived from ethnography, this process focuses on observing the uses of technology in a practical environment. It is a qualitative and observational method that focuses on "real-world" experience and pressures, and the usage of technology or environments in the workplace. The process is best used early in the design process.[18]

Focus groups are another form of qualitative research, in which one individual facilitates discussion and elicits opinions about the technology or process under investigation. This can be on a one-to-one interview basis or in a group session. Focus groups can be used to gain a large quantity of deep qualitative data,[19] though due to the small sample size they can be subject to a higher degree of individual bias.[20] They can be used at any point in the design process, as they are largely dependent on the exact questions to be pursued and the structure of the group, and they can be extremely costly.

Iterative design: Also known as prototyping, the iterative design process seeks to involve users at several stages of design in order to correct problems as they emerge. As prototypes emerge from the design process, they are subjected to other forms of analysis as outlined in this article, and the results are then incorporated into the new design. Trends amongst users are analyzed, and products redesigned. This can become a costly process, and needs to begin as soon as possible in the design process, before designs become too concrete.[18]

Meta-analysis: A supplementary technique used to examine a wide body of existing data or literature in order to derive trends or form hypotheses to aid design decisions. As part of a literature survey, a meta-analysis can be performed in order to discern a collective trend from individual variables.[20]

Subjects-in-tandem: Two subjects are asked to work concurrently on a series of tasks while vocalizing their analytical observations. This is observed by the researcher, and can be used to discover usability difficulties. The process is usually recorded.

Surveys and questionnaires: A commonly used technique outside of human factors as well, surveys and questionnaires have the advantage that they can be administered to a large group of people for relatively low cost, enabling the researcher to gain a large amount of data. The validity of the data obtained is, however, always in question, as the questions must be written and interpreted correctly and are, by definition, subjective. Those who actually respond are in effect self-selecting, widening the gap between the sample and the population further.[20]

Task analysis: A process with roots in activity theory, task analysis is a way of systematically describing human interaction with a system or process to understand how to match the demands of the system or process to human capabilities. The complexity of this process is generally proportional to the complexity of the task being analyzed, and so it can vary in cost and time involvement. It is a qualitative and observational process, best used early in the design process.[20]

Think aloud protocol: Also known as "concurrent verbal protocol", this is the process of asking a user to execute a series of tasks or use technology while continuously verbalizing their thoughts, so that a researcher can gain insight into the user's analytical process. It can be useful for finding design flaws that do not affect task performance but may have a negative cognitive effect on the user. It is also useful for drawing on experts in order to better understand procedural knowledge of the task in question. It is less expensive than focus groups, but tends to be more specific and subjective.[21]

User analysis: This process is based around designing for the attributes of the intended user or operator, establishing the characteristics that define them and creating a persona for the user. Best done at the outset of the design process, a user analysis will attempt to predict the most common users and the characteristics they would be assumed to have in common. This can be problematic if the design concept does not match the actual user, or if the identified characteristics are too vague to support clear design decisions. The process is, however, usually quite inexpensive, and commonly used.[20]

"Wizard of Oz": This is a comparatively uncommon technique, but it has seen some use in mobile devices. Based upon the Wizard of Oz experiment, it involves an operator who remotely controls the operation of a device in order to imitate the response of an actual computer program. It has the advantage of producing a highly changeable set of reactions, but can be quite costly and difficult to undertake.

Methods analysis is the process of studying the tasks a worker completes using a step-by-step investigation. Each task is broken down into smaller steps until each motion the worker performs is described. Doing so shows exactly where repetitive or straining tasks occur.

Time studies determine the time required for a worker to complete each task. They are often used to analyze cyclical jobs, and are considered event-based studies because time measurements are triggered by the occurrence of predetermined events.[22]

Work sampling is a method in which the job is sampled at random intervals to determine the proportion of total time spent on a particular task.[22] It provides insight into how often workers are performing tasks which might cause strain on their bodies.

Predetermined time systems are methods for analyzing the time spent by workers on a particular task. One of the most widely used predetermined time systems is Methods-Time Measurement (MTM). Other common work measurement systems include MODAPTS and MOST.

Cognitive walkthrough: This is a usability inspection method in which the evaluators apply a user perspective to task scenarios to identify design problems. As applied to macroergonomics, evaluators are able to analyze the usability of work system designs to identify how well a work system is organized and how well the workflow is integrated.[23]

Kansei method: This method transforms consumers' responses to new products into design specifications. As applied to macroergonomics, it can translate employees' responses to changes to a work system into design specifications.[23]

High Integration of Technology, Organization, and People (HITOP): This is a manual, step-by-step procedure for applying technological change to the workplace. It allows managers to be more aware of the human and organizational aspects of their technology plans, allowing them to efficiently integrate technology in these contexts.[23]

Top Modeler: This model helps manufacturing companies identify the organizational changes needed when new technologies are being considered for their process.[23]

Computer-Integrated Manufacturing, Organization, and People System Design (CIMOP): This model allows for evaluating computer-integrated manufacturing, organization, and people system design based on knowledge of the system.[23]

Anthropotechnology: This method considers analysis and design modification of systems for the efficient transfer of technology from one culture to another.[23]

Systems Analysis Tool (SAT): This is a method for conducting systematic trade-off evaluations of work-system intervention alternatives.[23]

Macroergonomic Analysis of Structure (MAS): This method analyzes the structure of work systems according to their compatibility with unique sociotechnical aspects.[23]

Macroergonomic Analysis and Design (MEAD): This method assesses work-system processes using a ten-step process.[23]

Virtual Manufacturing and Response Surface Methodology (VMRSM): This method uses computerized tools and statistical analysis for workstation design.[24]
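The work-sampling method described above reduces to estimating a proportion from random spot checks. The sketch below is a minimal illustration of that calculation; the function name and the normal-approximation confidence interval are our own choices for the example, not drawn from the sources cited here.

```python
import math

def estimate_time_fraction(observations):
    """Estimate the fraction of total work time spent on a task from
    work-sampling spot checks (1 = task observed, 0 = not observed),
    with a normal-approximation 95% confidence interval."""
    n = len(observations)
    p = sum(observations) / n
    half_width = 1.96 * math.sqrt(p * (1 - p) / n)
    return p, (max(0.0, p - half_width), min(1.0, p + half_width))

# 400 random spot checks; the worker was performing the lifting task in 100.
p, (low, high) = estimate_time_fraction([1] * 100 + [0] * 300)
print(f"estimated fraction: {p:.2f}, 95% CI: ({low:.3f}, {high:.3f})")
```

With these numbers the estimate is 0.25 with an interval of roughly (0.208, 0.292): the job spends about a quarter of its time on the sampled task, and more spot checks would tighten the interval.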




Weaknesses of HF&E Methods

Problems in how usability measures are employed include the fact that measures of learning and retention of how to use an interface are rarely employed, and that some studies treat measures of how users interact with interfaces as synonymous with quality-in-use, despite an unclear relation.[25]

Although field methods can be extremely useful because they are conducted in the users' natural environment, they have some major limitations to consider:
1. They usually take more time and resources than other methods.
2. They require very high effort in planning, recruiting, and executing compared with other methods.
3. They involve much longer study periods and therefore require much goodwill among the participants.
4. Because the studies are longitudinal in nature, attrition can become a problem.[26]

[1] "Ergonomics" (http://www.medicine.manchester.ac.uk/oeh/undergraduate/onlineresources/ergonomics/). The University of Manchester, Centre for Occupational and Environmental Health. Retrieved May 18, 2012.
[2] International Ergonomics Association. What is Ergonomics (http://iea.cc/01_what/What is Ergonomics.html). Website. Retrieved 6 December 2010.
[3] Wojciech Jastrzębowski (http://www.fees-network.org/what-is-ergonomics/)
[4] Hywel Murrell (http://www.ergonomics.org.uk/awards/hywel-murrell)
[5] Marmaras, N., Poulakakis, G. and Papakostopoulos, V. (1999). "Ergonomic design in ancient Greece". Applied Ergonomics, 30 (4), pp. 361-368 (http://simor.ntua.gr/ergou/people/CV-MarmarasNicolas.htm). Retrieved 2012-04-06.
[6] IG Okorji, 2009.
[7] Meister, David. The History of Human Factors and Ergonomics.
[8] Porter, Elias H. (1964). Manpower Development: The System Training Concept. New York: Harper and Row, p. xiii.
[9] "Home Page of Environmental Ergonomics Society" (http://www.environmental-ergonomics.org/). Retrieved 2012-04-06.
[10] "Technical Groups page at HFES Web site" (http://www.hfes.org/web/TechnicalGroups/technical.html). Retrieved 2012-04-06.
[11] Isabel A P Walsh; Jorge Oishi; Helenice J C Gil Coury (February 2008). "Clinical and functional aspects of work-related musculoskeletal disorders among active workers". Programa de Pós-graduação em Fisioterapia, Universidade Federal de São Carlos, São Carlos, SP, Brasil. Rev. Saúde Pública vol. 42 no. 1, São Paulo.
[12] Charles N. Jeffress (October 27, 2000). "BEACON Biodynamics and Ergonomics Symposium". University of Connecticut, Farmington, Conn.
[13] "Workplace Ergonomics: NIOSH Provides Steps to Minimize Musculoskeletal Disorders" (http://www.buildings.com/articles/detail.aspx?contentID=1563). 2003. Retrieved 2008-04-23.
[14] Charles N. Jeffress (October 27, 2000). BEACON Biodynamics and Ergonomics Symposium. University of Connecticut, Farmington, Conn.
[15] John L. Campbell, Monica G. Lichty, et al. (2012). National Cooperative Highway Research Project Report 600: Human Factors Guidelines for Road Systems (Second Edition). Washington, D.C.: Transportation Research Board.
[16] Human Factors Research Group (HFRG) at the University of Nottingham (http://www.nottingham.ac.uk/engineering-rg/manufacturing/humanfactors/index.aspx). These courses are accredited by the Ergonomics Society; see this link (http://www.nottingham.ac.uk/engineering-rg/manufacturing/humanfactors/teaching.aspx).
[17] Stanton, N.; Salmon, P.; Walker, G.; Baber, C.; Jenkins, D. (2005). Human Factors Methods: A Practical Guide for Engineering and Design. Aldershot, Hampshire: Ashgate Publishing Limited. ISBN 0-7546-4661-0.
[18] Carrol, J.M. (1997). "Human-Computer Interaction: Psychology as a Science of Design". Annu. Rev. Psychol., 48, 61-83.
[19] (http://www.nedarc.org/nedarc/media/pdf/surveyMethods_2006.pdf)
[20] Wickens, C.D.; Lee, J.D.; Liu, Y.; Gorden Becker, S.E. (1997). An Introduction to Human Factors Engineering, 2nd Edition. Prentice Hall. ISBN 0-321-01229-1.
[21] Kuusela, H.; Paul, P. (2000). "A comparison of concurrent and retrospective verbal protocol analysis". The American Journal of Psychology, 113, 387-404.
[22] Thomas J. Armstrong (2007). Measurement and Design of Work.
[23] Brookhuis, K.; Hedge, A.; Hendrick, H.; Salas, E.; Stanton, N. (2005). Handbook of Human Factors and Ergonomics Models. Florida: CRC Press.
[24] Ben-Gal et al. (2002). "The Ergonomic Design of Workstation Using Rapid Prototyping and Response Surface Methodology". IIE Transactions on Design and Manufacturing, 34(4), 375-391. Available at: http://www.eng.tau.ac.il/~bengal/Ergonomics_Paper.pdf
[25] Hornbaek, K. (2006). "Current Practice in Measuring Usability: Challenges to Usability Studies and Research". International Journal of Human-Computer Studies.


[26] Dumas, J. S.; Salzman, M.C. (2006). Reviews of Human Factors and Ergonomics. 2. Human Factors and Ergonomics Society.


Further reading
Books Meister, D. (1999). The History of Human Factors and Ergonomics. Mahwah, N.J.: Lawrence Erlbaum Associates. ISBN0-8058-2769-2. Oviatt, S. L.; Cohen, P. R. (2000, March). "Multimodal systems that process what comes naturally". Communications of the ACM (New York: ACM Press) 43 (3): 4553. doi:10.1145/330534.330538. Sarter, N. B.; Cohen, P. R. (2002). "Multimodal information presentation in support of human-automation communication and coordination". Advances in Human Performance and Cognitive Engineering Research (Netherlands: JAI) 2: 1336. doi:10.1016/S1479-3601(02)02004-0. Wickens, C.D.; Lee J.D.; Liu Y.; Gorden Becker S.E. (1997). An Introduction to Human Factors Engineering, 2nd Edition. Prentice Hall. ISBN0-321-01229-1. Wickens, C. D.; Sandy, D. L.; Vidulich, M. (1983). "Compatibility and resource competition between modalities of input, central processing, and output". Human Factors (Santa Monica, CA, United States: Human Factors and Ergonomics Society) 25 (2): 227248. ISSN00187208. PMID6862451. Wu, S. (2011). Warranty claims analysis considering human factors (doi:10.1016/j.ress.2010.07.010), Reliability Engineering and System Safety, Volume 96, No. 11, 2011, 131-138. 
Jan Dul and Bernard Weedmaster, Ergonomics for Beginners. A classic introduction on ergonomics. Original title: Vademecum Ergonomie (Dutch), published and updated since the 1960s.
Stephen Pheasant, Bodyspace. A classic exploration of ergonomics.
Zamprotta, Luigi, La qualité comme philosophie de la production. Interaction avec l'ergonomie et perspectives futures, thèse de Maîtrise ès Sciences Appliquées - Informatique, Institut d'Études Supérieures L'Avenir, Bruxelles, année universitaire 1992-93, TIU Press, Independence, Missouri (USA), 1994, ISBN 0-89697-452-9.
Kim Vicente, The Human Factor. Full of examples and statistics illustrating the gap between existing technology and the human mind, with suggestions to narrow it.
Donald Norman, The Design of Everyday Things. An entertaining user-centered critique of nearly every gadget out there (at the time it was published).
Liu, Y. (2007). IOE 333 course pack. Industrial and Operations Engineering 333 (Introduction to Ergonomics), University of Michigan, Ann Arbor, MI. Winter 2007.
Wilson & Corlett, Evaluation of Human Work. A practical ergonomics methodology. Warning: very technical and not a suitable 'intro' to ergonomics.
Wickens and Hollands (2000). Engineering Psychology and Human Performance. Discusses memory, attention, decision making, stress and human error, among other topics.
Alvin R. Tilley & Henry Dreyfuss Associates (1993, 2002), The Measure of Man & Woman: Human Factors in Design. A human factors design manual.
Valerie J. Gawron (2000), Human Performance Measures Handbook. Lawrence Erlbaum Associates. A useful summary of human performance measures.
Peter Opsvik (2009), Re-Thinking Sitting. Interesting insights on the history of the chair and how we sit, from an ergonomic pioneer.
Thomas J. Armstrong (2008), Chapter 10: Allowances, Localized Fatigue, Musculoskeletal Disorders, and Biomechanics (not yet published).
Rooney et al. (2008), Computer Ergonomics & Work Related Upper Limb Disorder Prevention: Making the Business Case for Pro-active Ergonomics.
Peer-reviewed journals (numbers in brackets are the ISI impact factor, followed by the date):

Behaviour & Information Technology (0.915, 2008)
Ergonomics (0.747, 2001-2003)
Applied Ergonomics (0.738, 2001-2003)
Human Factors (1.373, 2010)
International Journal of Industrial Ergonomics (0.395, 2001-2003)
Human Factors and Ergonomics in Manufacturing (0.311, 2001-2003)
Travail Humain (0.260, 2001-2003)
Theoretical Issues in Ergonomics Science (-)
International Journal of Human Factors and Ergonomics (-)
International Journal of Occupational Safety and Ergonomics (-)


External links
National Center for Human Factors Engineering in Healthcare
Directory of Design Support Methods
Engineering Data Compendium of Human Perception and Performance
Index of Non-Government Standards on Human Engineering
Index of Government Standards on Human Engineering
Human Factors Engineering resources
MANPRINT
Human Factors in aviation
Usability Engineering and E-Health
NIOSH Topic Page on Ergonomics and Musculoskeletal Disorders
Office Ergonomics Information, from the European Agency for Safety and Health at Work
Human Factors Standards & Handbooks, from the University of Maryland Department of Mechanical Engineering
Human Factors and Ergonomics Resources

Iterative design


Iterative design
Iterative design is a design methodology based on a cyclic process of prototyping, testing, analyzing, and refining a product or process. Based on the results of testing the most recent iteration of a design, changes and refinements are made. This process is intended to ultimately improve the quality and functionality of a design. In iterative design, interaction with the designed system is used as a form of research to inform and evolve the project, as successive versions, or iterations, of the design are implemented.

Iterative design process

The iterative design process may be applied throughout the new product development process. However, changes are easiest and least expensive to implement in the earliest stages of development. The first step in the iterative design process is to develop a prototype. The prototype should be evaluated by a focus group, or a group not associated with the product, in order to obtain unbiased opinions. Information from the focus group should be synthesized and incorporated into the next iteration of the design. The process should be repeated until user issues have been reduced to an acceptable level.

Application: Human computer interfaces

Iterative design is commonly used in the development of human-computer interfaces. This allows designers to identify any usability issues that may arise in the user interface before it is put into wide use. Even the best usability experts cannot design a perfect user interface in a single attempt, so a usability engineering lifecycle should be built around the concept of iteration.[1] The typical steps of iterative design in user interfaces are as follows:
1. Complete an initial interface design
2. Present the design to several test users
3. Note any problems encountered by the test users
4. Refine the interface to account for/fix the problems
5. Repeat steps 2-4 until user interface problems are resolved
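The steps above amount to a simple loop: test, collect problems, refine, repeat until the remaining issues are acceptable. A minimal sketch in Python; all names (`evaluate_with_users`, `refine`) are hypothetical placeholders for activities the design team performs, not a real API:

```python
def iterative_design(initial_design, evaluate_with_users, refine,
                     acceptable=0, max_iterations=10):
    """Iterate a design until user testing finds few enough problems.

    evaluate_with_users(design) -> list of observed usability problems
    refine(design, problems)    -> improved design
    Both callbacks stand in for real design-team activities.
    """
    design = initial_design
    for _ in range(max_iterations):
        problems = evaluate_with_users(design)   # steps 2-3: test, note problems
        if len(problems) <= acceptable:          # stop when issues are acceptable
            return design, problems
        design = refine(design, problems)        # step 4: fix what testing revealed
    return design, problems                      # iteration budget exhausted
```

The `max_iterations` guard reflects the practical reality that testing budgets are finite; in a real project the stopping rule would be set by the usability goals, not a fixed count.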

Iterative design in user interfaces can be implemented in many ways. One common method of using iterative design in computer software is software testing. While this includes testing the product for functionality outside of the user interface, important feedback on the interface can be gained from subject testing of early versions of a program. This allows software companies to release a better-quality product to the public, and reduces the need for product modification after release. Iterative design of online (website) interfaces is a more continuous process, since modifying a website after it has been released to users is far more viable than in packaged software. Websites often use their visitors as test subjects for interface design, making modifications based on recommendations from visitors to their sites.

Iterative design use

Iterative design is a way of confronting the reality of unpredictable user needs and behaviors that can lead to sweeping and fundamental changes in a design. User testing will often show that even carefully evaluated ideas prove inadequate when confronted with real users. It is therefore important that the flexibility of the iterative design's implementation approach extends as far into the system as possible. Designers must further recognize that user testing results may suggest radical change, and be prepared to completely abandon old ideas in favor of new ones better suited to user needs. Iterative design applies in many fields, from making knives to rockets. As an example, consider the design of an electronic circuit that must perform a certain task and ultimately fit in a small space on a circuit board. It is useful to split this into two smaller and simpler tasks: the functionality task, and the space and weight task. A breadboard is a useful way of implementing the electronic circuit on an interim basis, without having to worry about space and weight. Once the circuit works, improvements or incremental changes may be applied to the breadboard to increase or improve functionality over the original design. When the design is finalized, one can set about designing a proper circuit board meeting the space and weight criteria. Compacting the circuit on the circuit board requires that the wires and components be juggled around without changing their electrical characteristics. This juggling follows simpler rules than the design of the circuit itself, and is often automated. As far as possible, off-the-shelf components are used, but where necessary for space or performance reasons, custom-made components may be developed. Several instances of iterative design are as follows:
Wiki - A wiki is a natural repository for iterative design. The 'Page History' facility allows tracking back to prior versions. Modifications are mostly incremental, and leave substantial parts of the text unchanged.
Common law - The principle of legal precedent builds on past experience. This makes law a form of iterative design in which there should be a clear audit trail of the development of legal thought.
Evolution - There is a parallel between iterative design and the theory of natural selection. Both involve a trial-and-error process in which the most suitable design advances to the next generation, while less suitable designs perish by the wayside. Subsequent versions of a product should likewise get progressively better as its producers learn what works and what doesn't, in a process of refinement and continuous improvement.


When properly applied, iterative design helps ensure a product or process is the best solution possible. When applied early in the development stage, significant cost savings are possible.[2] Other benefits of iterative design include:
1. Serious misunderstandings are made evident early in the lifecycle, when it's possible to react to them.
2. It enables and encourages user feedback, so as to elicit the system's real requirements.
3. The development team is forced to focus on the issues most critical to the project, and team members are shielded from issues that distract them from the project's real risks.
4. Continuous, iterative testing enables an objective assessment of the project's status.
5. Inconsistencies among requirements, designs, and implementations are detected early.
6. The workload of the team, especially the testing team, is spread more evenly throughout the lifecycle.
7. The team can leverage lessons learned, and therefore continuously improve the process.
8. Stakeholders in the project can be given concrete evidence of the project's status throughout the lifecycle.

Marshmallow Challenge
The Marshmallow Challenge is an instructive design challenge. The task is to construct the highest possible free-standing structure with a marshmallow on top. The structure must be completed within 18 minutes using only 20 sticks of spaghetti, one yard of tape, and one yard of string.[3][4] Observation and studies of participants show that kindergartners regularly build higher structures than groups of business school graduates. This is explained by the tendency of children to immediately stick the marshmallow on top of a simple structure, test the prototype, and continue to improve upon it, whereas business school students tend to spend time vying for power and planning, and finally produce a structure to which the marshmallow is added.[5] The challenge was invented by Peter Skillman of Palm, Inc. and popularized by Tom Wujec of Autodesk.[6][7][8][9][10]



[1] Nielsen, J. (1993). "Iterative User Interface Design". IEEE Computer 26 (11): 32-41.
[2] Mantei, Marilyn; Teorey, Toby (April 1988). "Cost/benefit analysis for incorporating human factors in the software lifecycle". Communications of the ACM 31 (4): 428-439.
[3] "The Marshmallow Challenge" (http://www.marshmallowchallenge.com/Welcome.html). The Marshmallow Challenge. Retrieved 2010-08-10.
[4] "The Marshmallow Challenge" (http://www.bpwrap.com/2010/04/the-marshmallow-challenge/). CA: BPWrap. 2010-04-22. Retrieved 2010-08-10.
[5] Jerz, Dennis G. (2010-05-10). "The Marshmallow Challenge - Jerz's Literacy Weblog" (http://jerz.setonhill.edu/weblog/2010/05/the_marshmallow_challenge/). Retrieved 2010-08-10.
[6] Cameron, Chris (2010-04-23). "Marshmallows and Spaghetti: How Kindergartners Think Like Lean Startups" (http://www.readwriteweb.com/start/2010/04/marshmallows-and-spaghetti-how-kindergartners-think-like-lean-startups.php). Retrieved 2010-08-10.
[7] http://engineeringrevision.com/302/the-marshmallow-challenge/
[8] http://www.selfishprogramming.com/2010/04/28/the-marshmallow-challenge/
[9] http://www.ideasforideas.com/content/marshmallow-challenge
[10] http://www.ucalgary.ca/science/node/1578

Boehm, Barry W. (May 1988). "A Spiral Model of Software Development and Enhancement". Computer (IEEE): 61-72.
Gould, J. D. and Lewis, C. (1985). "Designing for Usability: Key Principles and What Designers Think". Communications of the ACM 28 (3): 300-311.
Kruchten, Philippe. The Rational Unified Process: An Introduction. Addison Wesley Longman, 1999.
Kruchten, P. "From Waterfall to Iterative Development - A Challenging Transition for Project Managers". The Rational Edge, 2000. Retrieved from RationalEdge/dec00/FromWaterfalltoIterativeDevelopmentDec00.pdf.

External links
Iterative User Interface Design
Association for Computing Machinery
Marshmallow Challenge official website
TED video on the Marshmallow Challenge
Classroom images of the Marshmallow Challenge

User analysis


User analysis
User analysis is the process of identifying the potential users of a system and their attributes. It helps ensure that the resulting system will be user-friendly.

Work sampling
Work sampling is a statistical technique for determining the proportion of time spent by workers in various defined categories of activity (e.g. setting up a machine, assembling two parts, idle, etc.).[1] It is an important technique because it permits quick analysis, recognition, and enhancement of job responsibilities, tasks, performance competencies, and organizational work flows. Other names for it are 'activity sampling', 'occurrence sampling', and 'ratio delay study'.[2] In a work sampling study, a large number of observations are made of the workers over an extended period of time. For statistical accuracy, the observations must be taken at random times during the period of study, and the period must be representative of the types of activities performed by the subjects. One important use of the work sampling technique is the determination of the standard time for a manual manufacturing task. Similar techniques for calculating standard time are time study, standard data, and predetermined motion time systems.

Characteristics of work sampling study

A work sampling study has some general characteristics related to the work conditions. One is that sufficient time must be available to perform the study: a work sampling study usually requires a substantial period (several weeks or more) to complete. Another characteristic is multiple workers: work sampling is commonly used to study the activities of multiple workers rather than a single worker. The third characteristic is long cycle time: the jobs covered in the study have relatively long cycle times. The last condition is non-repetitive work cycles: the work is not highly repetitive, and the jobs consist of various tasks rather than a single repetitive task. However, it must be possible to classify the work activities into a distinct number of categories.

Steps in conducting a work sampling study

There are several recommended steps when preparing a work sampling study:[1]
1. Define the manufacturing tasks for which the standard time is to be determined.
2. Define the task elements. These are the broken-down steps of the task that will be observed during the study. Since a worker is going to be observed, additional categories will likely be included as well, such as "idle", "waiting for work", and "absent".
3. Design the study. This includes designing the forms that will be used to record the observations, determining how many observations will be required, deciding on the number of days or shifts to be included in the study, scheduling the observations, and finally determining the number of observers needed.
4. Identify the observers who will do the sampling.
5. Start the study. All those who are affected by the study should be informed about it.
6. Make random visits to the plant and collect the observations.
7. After completing the study, analyze and present the results. This is done by preparing a report that summarizes and analyzes all data, and making recommendations when required.
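Step 6 requires visits at random times, since a fixed schedule would bias the sample toward whatever happens at those times. A small sketch of generating such a random observation schedule; the shift boundaries and number of visits are invented for illustration:

```python
import random

def random_observation_times(n, shift_start_min, shift_end_min, seed=None):
    """Draw n observation times uniformly at random within one shift.

    Times are expressed in minutes from midnight; sorting the draws
    gives the observer a chronological route plan for the day.
    """
    rng = random.Random(seed)  # seeded for a reproducible schedule
    times = sorted(rng.uniform(shift_start_min, shift_end_min) for _ in range(n))
    return [f"{int(t) // 60:02d}:{int(t) % 60:02d}" for t in times]

# e.g. 8 random visits during a hypothetical 08:00-16:00 shift
schedule = random_observation_times(8, 8 * 60, 16 * 60, seed=42)
```

In practice each day of the study would get a fresh schedule, and the times would be kept from the workers so behavior is not adjusted in anticipation of a visit.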



Determining the number of observations needed in work sampling

After the work elements are defined, the number of observations needed for the desired accuracy at the desired confidence level must be determined. The formula used in this method is the standard error of a proportion:

    σp = sqrt( p(1 − p) / n )

where σp is the standard error of the proportion, p is the percentage of idle time, 1 − p is the percentage of working time, and n is the number of observations.
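Solving the standard-error relation for n gives the required sample size n = z²·p(1 − p)/e², where z is the standard normal value for the chosen confidence level and e is the acceptable absolute error in the estimated proportion. A short sketch (the example numbers are illustrative, not from the text):

```python
import math

# Two-sided z-values for common confidence levels
Z = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}

def observations_needed(p, absolute_error, confidence=0.95):
    """Observations needed so the estimated proportion is within
    +/- absolute_error at the given confidence level:
        n = z^2 * p * (1 - p) / e^2
    p is the anticipated proportion (e.g. idle fraction from a pilot study).
    """
    z = Z[confidence]
    return math.ceil(z ** 2 * p * (1 - p) / absolute_error ** 2)

# e.g. idle time expected around 25%, wanted within +/- 5 percentage
# points at 95% confidence
n = observations_needed(0.25, 0.05)   # -> 289 observations
```

Note that p(1 − p) is largest at p = 0.5, so using p = 0.5 when no pilot estimate exists gives a conservative (worst-case) sample size.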

Additional applications of work sampling

Work sampling was initially developed for determining time allocation among workers' tasks in manufacturing environments.[3] However, the technique has also been applied more broadly to examine work in a number of different environments, such as healthcare[4] and construction.[5] More recently, in the academic fields of organizational psychology and organizational behaviour, the basic technique has been developed into a detailed job analysis method for examining a range of different research questions.[6]

[1] Groover, M. P. Work Systems and the Methods, Measurement, and Management of Work. Pearson Education International, 2007. ISBN 978-0-13-140650-6.
[2] Sheth, V. Industrial Engineering Methods and Practices. Penram International Publishing, 2000. ISBN 81-87972-18-1.
[3] Tsai, W.-H. (1996). "A technical note on using work sampling to estimate the effort on activities under activity-based costing". International Journal of Production Economics 43 (1): 11-16. http://dx.doi.org/10.1016/0925-5273(95)00189-1
[4] Ampt, A., Westbrook, J., Creswick, N., & Mallock, N. (2007). "A comparison of self-reported and observational work sampling techniques for measuring time in nursing tasks". Journal of Health Services Research & Policy 12: 18-24. http://dx.doi.org/10.1258/135581907779497576
[5] Buchholz, B., Paquet, V., Punnett, L., Lee, D., & Moir, S. (1996). "PATH: A work sampling-based approach to ergonomic job analysis for construction and other non-repetitive work". Applied Ergonomics 27 (3): 177-187. http://dx.doi.org/10.1016/0003-6870(95)00078-X
[6] Robinson, M. A. (2010). "Work sampling: Methodological advances and new applications". Human Factors and Ergonomics in Manufacturing & Service Industries 20 (1): 42-60. http://dx.doi.org/10.1002/hfm.20186

External links
Work sampling method

Kansei engineering


Kansei engineering
Kansei Engineering (Japanese: kansei kougaku; emotional or affective engineering) aims at the development or improvement of products and services by translating customers' psychological feelings and needs into a product's design domain (i.e. its parameters). It was founded by Mitsuo Nagamachi, Ph.D. (Professor Emeritus of Hiroshima University, former Dean of Hiroshima International University, and CEO of International Kansei Design). Kansei Engineering parametrically links customers' emotional responses (physical and psychological) to a product or service with its properties and characteristics. In consequence, products can be designed to bring forward the intended feeling. It has been adopted as one of the topics for professional development by the Royal Statistical Society.

The design of products on today's markets has become increasingly complex: products contain more functions and have to meet more demands, e.g. for user-friendliness, manufacturability and ecological soundness. Shortened product life cycles are likely to increase development costs, which makes errors in estimating market trends very expensive. Companies therefore perform benchmarking studies that compare competitors at the strategy, process, marketing and product levels. They also need a reliable instrument that can predict a product's reception on the market before the development costs become too critical. However, success in a certain market segment requires knowledge not only about competitors and their products' performance, but also about the impressions the products make on the customer. The latter requirement becomes more important the more mature the products and companies are: where products seem technically equal, the customer purchases based on more subjective terms such as manufacturer image, brand image, reputation, design, and impression. A large number of manufacturers have started development activities to consider such subjective properties so that the product expresses the company image. This demand has triggered a new research field dealing with the collection of customers' hidden subjective needs and their translation into concrete products. Research is done foremost in Asia, namely Japan and Korea. In Europe a network has been forged under the 6th EU Framework Programme; this network refers to the new research field as emotional design or affective engineering.

History of (Kansei) Affective Engineering

Nowadays, people want products that are functional at a physical level, usable at a psychological level, and attractive at a subjective, emotional level. Affective engineering is the study of the interactions between the customer and the product at that third level. It focuses on the relationships between the physical traits of a product and its affective influence on the user. Thanks to this field of research, it is possible to gain knowledge on how to design more attractive products and satisfy customers. Kansei engineering is one of the major areas of ergonomics (human factors engineering). The idea of integrating affective values into artifacts is not new at all: already in the 18th century, philosophers such as Baumgarten and Kant established the field of aesthetics. In addition to purely practical values, artifacts have always had an affective component; examples include jewellery found in excavations from the Stone Age, and the art of the Renaissance. In the middle of the 20th century, the idea of aesthetics was deployed in scientific contexts: Charles E. Osgood developed his Semantic Differential method, in which he quantified people's perceptions of artifacts [4]. Some years later, in 1960, Professors Shigeru Mizuno and Yoji Akao developed an engineering approach to connect people's needs to product properties; this method was called Quality Function Deployment (QFD). Another method, the Kano model, was developed in the field of quality in the early 1980s by Professor Noriaki Kano of Tokyo University. Kano's model is used to establish the importance of individual product features for customer satisfaction, and hence creates the optimal requirements for process-oriented product development activities. A pure marketing technique is conjoint analysis, which estimates the relative importance of a product's attributes by analyzing the consumer's overall judgment of a product or service. A more artistic method, Semantic Description of Environments, is mainly a tool for examining how a single person or a group of persons experiences a certain (architectural) environment. Although all of these methods are concerned with subjective impact, none of them can sufficiently translate this impact into design parameters. This can, however, be accomplished by Kansei Engineering. Kansei Engineering (KE) has been used as a tool for affective engineering. It was developed in the early 1970s in Japan and is now widely spread among Japanese companies. In the middle of the 1990s the method spread to the United States, but cultural differences may have prevented it from realizing its full potential there.


Kansei Engineering Procedure

As mentioned above, Kansei Engineering can be considered a methodology within the research field of affective engineering. Some researchers have defined the content of the methodology. Shimizu et al. state that Kansei Engineering is used as a tool for product development, and that the basic principles behind it are the identification of product properties and the correlation between those properties and the design characteristics. According to Nagasawa, one of the forerunners of Kansei Engineering, there are three focal points in the method:
How to accurately understand consumer Kansei
How to reflect and translate Kansei understanding into product design
How to create a system and organization for Kansei-oriented design
The following figure shows how Kansei Engineering works in principle. Figure 1: Kansei Engineering System (KES).

A Model of Kansei Engineering Methodology

In Japanese publications, different types of Kansei Engineering are identified and applied in various contexts. Schütte examined the different types and developed a general model covering the contents of Kansei Engineering.

Choice of Domain
The domain in this context describes the overall idea behind an assembly of products, i.e. the product type in general. Choosing the domain includes the definition of the intended target group and user type, market niche and type, and group of the product in question. Choosing and defining the domain covers existing products, concepts and as yet unknown design solutions. From this, a domain description is formulated, serving as the basis for further evaluation. Schütte describes the necessary processes in detail in a couple of publications.

Span the Semantic Space
The expression "semantic space" was introduced by Osgood et al., who posited that every artifact can be described in a certain vector space defined by semantic expressions (words). This is done by collecting a large number of words that describe the domain. Suitable sources are pertinent literature, commercials, manuals, specification lists, experts, etc. The number of words gathered varies with the product, typically between 100 and 1000 words. In a second step, the words are grouped using manual methods (e.g. affinity diagrams; compare Bergman and Klefsjö, 1994) or mathematical methods (e.g. factor and/or cluster analysis; compare Ishihara et al., 1998). Finally, a few representative words are selected to span the semantic space. These words are called Kansei words or Kansei Engineering words.

Span the Space of Properties
The next step is to span the space of product properties, which is similar to the semantic space. The space of product properties collects products representing the domain, identifies key features, and selects product properties for further evaluation. The collection of products representing the domain draws on different sources such as existing products, customer suggestions, possible technical solutions, design concepts, etc. The key features are found using specification lists for the products in question. To select properties for further evaluation, a Pareto diagram (compare Bergman and Klefsjö, 1994) can assist the decision between important and less important features.

Synthesis
In the synthesis step, the semantic space and the space of properties are linked together, as displayed in Figure 3. Compared to other methods in affective engineering, Kansei Engineering is the only method that can establish and quantify connections between abstract feelings and technical specifications. For every Kansei word, a number of product properties are found that affect that Kansei word. The research into constructing these links has been a core part of Nagamachi's work on Kansei Engineering in recent years. Nowadays, a number of different tools are available. Some of the most common are:
Category Identification
Regression Analysis / Quantification Theory Type I
Rough Sets Theory
Genetic Algorithms
Fuzzy Sets Theory
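Quantification Theory Type I, one of the listed synthesis tools, is in essence linear regression on dummy-coded (categorical) product properties: each coefficient quantifies how much a design property contributes to the rating of a Kansei word. A toy sketch with invented data (two hypothetical binary properties and one Kansei word; not taken from any real study):

```python
import numpy as np

# Four products described by two binary design properties.
# Columns: intercept, "rounded form" (0/1), "matte finish" (0/1)
X = np.array([[1, 0, 0],
              [1, 1, 0],
              [1, 0, 1],
              [1, 1, 1]], dtype=float)

# Mean user ratings of the Kansei word "elegant" on a 5-point scale
# (fabricated so the example has an exact solution)
y = np.array([2.0, 3.5, 2.5, 4.0])

# Least-squares fit: the coefficients link the semantic space (ratings)
# to the space of properties, which is the core idea behind QT1
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
baseline, rounded_effect, matte_effect = coef
```

With this data the fit recovers a baseline of 2.0, a +1.5 contribution from the rounded form, and a +0.5 contribution from the matte finish, suggesting that a rounded, matte product would rate most "elegant". Real Kansei studies use many more products, properties and respondents, and often the other listed tools (rough sets, genetic algorithms, fuzzy sets) when the relationships are not linear.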


Model Building and Test of Validity
After the stages above, the final step of validation remains. This is done in order to check whether the prediction model is reliable and realistic. In case of prediction model failure, it is necessary to update the space of properties and the semantic space, and consequently refine the model. The process of refinement is difficult due to the shortage of methods, which shows the need for new tools to be integrated; the existing tools can partially be found in the previously mentioned synthesis methods.

Software Tools for Kansei Engineering
Kansei Engineering has always been a statistically and mathematically advanced methodology. Most types require good expert knowledge and a reasonable amount of experience to carry out studies sufficiently, and this has been the major obstacle to widespread application of Kansei Engineering. In order to facilitate application, some software packages have been developed in recent years, most of them in Japan. Two different types of software packages are available: user consoles, and data collection and analysis tools. User consoles are software programs that calculate and propose a product design based on the user's subjective preferences (Kanseis). However, such software requires a database that quantifies the connections between Kanseis and combinations of product attributes. For building such databases, data collection and analysis tools can be used. There are many more tools used in companies and universities which may not be available to the public.

Kansei Engineering software

As described above, Kansei data collection and analysis is often complex and involves statistical analysis. Depending on which synthesis method is used, different computer software is applied. Kansei Engineering Software (KESo), developed at Linköping University in Sweden, uses QT1 for linear analysis; it generates online questionnaires for the collection of Kansei raw data. Another software package (Kn6) was developed at the Technical University of Valencia in Spain. Both packages improve the collection and evaluation of Kansei data, so that even users with no specialist competence in advanced statistics can use Kansei Engineering.



Akao, Y. (1990). History of Quality Function Deployment in Japan. International Academy for Quality Books Series, Vol. 3. Hansa Publisher.
Baumgarten, A. G. (1961). Aesthetica. Hildesheim: Georg Olms Verlagsbuchhandlung.
ENGAGE (2005). European Project on Engineering Emotional Design: Report of the State of the Art, Round 1. Valencia.
Green, E. P. and Rao, V. (1971). "Conjoint Measurement for Quantifying Judgemental Data". Journal of Marketing Research: 61-68.
Grimsaeth, Kjetil (2005). Kansei Engineering: Linking Emotions and Product Features. Norwegian University of Science and Technology.
Hirata, Ricardo; Nagamachi, Mitsuo; Ishihara, Shigekazu (2004). "Satisfying Emotional Needs of the Beer Consumer through Kansei Engineering (Case Study with Hiroshima International University Students)". 7th International QMOD Conference, Linköping University and ITESM, Monterrey, NL, México, pp. 219-227.
Hirata, Ricardo; Nagamachi, Mitsuo; Ishihara, Shigekazu; Nishino, Tatsuo (2008). "Translation of Customer Kansei and Emotional Needs into Products". 2nd International Conference on Applied Human Factors and Ergonomics (AHFEI), Las Vegas, USA.
Imamura, K., et al. (1997). "An Application of Virtual Kansei Engineering to Kitchen Design". In M. Nagamachi (ed.), Kansei Engineering 1. Kure: Kaibundo Publishing Co., Ltd., pp. 63-68.
Kano, N.; Seraku, N.; Takahashi, F. (1984). "Attractive Quality and Must-Be Quality". Quality, pp. 39-44.
Kant, I. (2004). Kritik av det rena förnuftet. Stockholm: Thales.
Küller, R. (1975). Semantisk Miljö Beskrivning (SMB). Stockholm: Psykologiförlaget AB Liber Tryck.
Matsubara, Y. and Nagamachi, M. (1996). "Kansei Virtual Reality Technology and Evaluation on Kitchen Design". In R. J. Koubek and W. Karwowski (eds.), Manufacturing Agility and Hybrid Automation - 1. Louisville, Kentucky, USA: IEA Press, pp. 81-84.
Mori, N. (2002). "Rough set approach to product design solution for the purposed 'Kansei'". The Science of Design: Bulletin of the Japanese Society of Kansei Engineering 48 (9): 85-94.
Nagamachi, M. (1989). Kansei Engineering. Tokyo: Kaibundo Publishing Co. Ltd.
Nagamachi, Mitsuo. Kansei Kogaku no Ohanashi (Introduction to Kansei Engineering). Japan Standards Association (in Japanese).
Nagamachi, Mitsuo (1995). "Kansei Engineering: A New Ergonomic Consumer-Oriented Technology for Product Development". International Journal of Industrial Ergonomics 15: 3-11.
Nagamachi, Mitsuo (1999). "Kansei Engineering: A New Consumer-Oriented Technology for Product Development". In W. Karwowski & W. S. Morris (eds.), The Occupational Ergonomics Handbook, pp. 1835-1848. CRC Press LLC, USA.
Nagamachi, Mitsuo (2004). "Kansei Engineering". In N. Stanton, A. Hedge, et al. (eds.), Handbook of Human Factors and Ergonomics Methods, pp. 83-1 to 83-5. CRC Press LLC, USA.
Nagamachi, Mitsuo (ed.); Nishino, T.; et al. (2005). Shohin Kaihatsu to Kansei (Product Development and Kansei). Kaibundo, Japan (in Japanese).
Nagamachi, Mitsuo (2007). "Perspectives and New Trends of Kansei/Affective Engineering". 1st European Conference on Affective Design and Kansei Engineering & 10th QMOD Conference, Linköping University and Lund University, Helsingborg, Sweden.
Nagamachi, Mitsuo (2011). Kansei/Affective Engineering. CRC Press.
Nishino, T. (2001). Exercises on Kansei Engineering. Hiroshima International University.
Nishino, T., et al. (1999). "Internet Kansei Engineering System with Basic Kansei Database and Genetic Algorithm". In TQM and Human Factors. Linköping, Sweden: Centre for Studies of Humans, Technology and Organization.

Kansei engineering Osgood, C.E., G.J. Suci, and P.H. Tannenbaum, The measurement of meaning. 1957, Illinois: University of Illinois Press. 346. Schtte, S., et al., Concepts, methods and tools in Kansei Engineering. Theoretical Issues in Ergonomics Science, 2004. 5: p.214-232 Schtte, R., Developing an Expert Program software for Kansei Engineering, in Institute of Technology, Linkping University. 2006, Linkping University: Linkping. Shimizu, Y., et al., On-demand production system of apparel on basis of Kansei engineering. International Journal of Clothing Science and Technology, 2004. 16(1/2): p.32-42. Shimizu, Y. and T. Jindo, A fuzzy logic analysis method for evaluating human sensitivities. International Journal of Industrial Ergonomics, 1995. 15: p.39-47.


External links
European Kansei Engineering group (http://www.kansei.eu/)
Ph.D. thesis on Kansei Engineering (http://liu.diva-portal.org/smash/record.jsf?searchId=1&pid=diva2:20839)
The Japan Society of Kansei Engineering (http://www.jske.org)
International Conference on Kansei Engineering & Intelligent Systems (KEIS) (http://www.Kanseiengineering.org)
QFD Institute (http://www.qfdi.org/lifestyle_qfd_and_kanseiengineering_miata.htm)

Systems analysis


Systems analysis
Systems analysis is the study of sets of interacting entities, including computer systems analysis. The field is closely related to requirements analysis and operations research. It is also "an explicit formal inquiry carried out to help someone (referred to as the decision maker) identify a better course of action and make a better decision than he might otherwise have made."[1]

The terms analysis and synthesis come from Greek, where they mean respectively "to take apart" and "to put together". These terms are used in scientific disciplines from mathematics and logic to economics and psychology to denote similar investigative procedures. Analysis is the procedure by which we break down an intellectual or substantial whole into parts. Synthesis is the procedure by which we combine separate elements or components in order to form a coherent whole.[2] Systems analysis researchers apply methodology to the analysis of the systems involved to form an overall picture. Systems analysis is used in every field where something is being developed. Analysis can also be defined as a set of components that perform an organic function together.

Information technology
The development of a computer-based information system includes a systems analysis phase, which produces or enhances the data model, itself a precursor to creating or enhancing a database (see Christopher J. Date, "An Introduction to Database Systems"). There are a number of different approaches to systems analysis. When a computer-based information system is developed, systems analysis (according to the Waterfall model) would constitute the following steps:
The development of a feasibility study: determining whether a project is economically, socially, technologically and organizationally feasible.
Conducting fact-finding measures designed to ascertain the requirements of the system's end-users. These typically span interviews, questionnaires, or visual observations of work on the existing system.
Gauging how the end-users would operate the system (in terms of general experience in using computer hardware or software), what the system would be used for, and so on.
Another view outlines a phased approach to the process. This approach breaks systems analysis into five phases:
Scope definition
Problem analysis
Requirements analysis
Logical design
Decision analysis

Use cases are a widely used systems analysis modeling tool for identifying and expressing the functional requirements of a system. Each use case is a business scenario or event to which the system must provide a defined response. Use cases evolved out of object-oriented analysis; however, their use as a modeling tool has become common in many other methodologies for systems analysis and design.



Practitioners of systems analysis are often called upon to dissect systems that have grown haphazardly to determine the current components of the system. This was shown during the year 2000 re-engineering effort, as business and manufacturing processes were examined as part of the Y2K automation upgrades.[3] Jobs that utilize systems analysis include systems analyst, business analyst, manufacturing engineer, enterprise architect, etc. While practitioners of systems analysis can be called upon to create new systems, they often modify, expand or document existing systems (processes, procedures and methods).
In a system, a set of components interact with each other to accomplish some specific purpose. Systems are all around us. Our body is itself a system. A business is also a system. People, money, machines, markets and materials are the components of a business system that work together to achieve the common goal of the organization.

References
[1] Systems Analysis (http://web.archive.org/web/20070822025602/http://pespmc1.vub.ac.be/ASC/SYSTEM_ANALY.html)
[2] Tom Ritchey, Analysis and .
[3] Géza Husi: Mechatronics Control Systems

External links
Software Requirement Analysis using UML (software-requirement-analysis-using-uml), article by Dhiraj Shetty.
Introduction to Social Macrodynamics (blang=en&list=Found)
A useful set of guides and a case study about the practical application of business and systems analysis methods
A comprehensive description of the discipline of systems analysis from Simmons College, Boston, MA, USA



Meta-analysis

In statistics, a meta-analysis refers to methods focused on contrasting and combining results from different studies, in the hope of identifying patterns among study results, sources of disagreement among those results, or other interesting relationships that may come to light in the context of multiple studies.[1] In its simplest form, this is done by identifying a common measure of effect size, of which a weighted average might be the output of the meta-analysis. The weighting might be related to sample sizes within the individual studies. More generally, there are other differences between the studies that need to be allowed for, but the general aim of a meta-analysis is to estimate the true effect size more powerfully than the less precise effect size derived from a single study under a given single set of assumptions and conditions. Meta-analyses are often, but not always, important components of a systematic review procedure. For instance, a meta-analysis may be conducted on several clinical trials of a medical treatment, in an effort to obtain a better understanding of how well the treatment works. Here it is convenient to follow the terminology used by the Cochrane Collaboration,[2] and use "meta-analysis" to refer to statistical methods of combining evidence, leaving other aspects of 'research synthesis' or 'evidence synthesis', such as combining information from qualitative studies, for the more general context of systematic reviews.

While the historical roots of meta-analysis may be traced back to 17th-century studies of astronomy, a paper published in 1904 by the statistician Karl Pearson in the British Medical Journal, which collated data from several studies of typhoid inoculation, is seen as the first time a meta-analytic approach was used to aggregate the outcomes of multiple clinical studies.[3][4] The first meta-analysis of all conceptually identical experiments concerning a particular research issue, conducted by independent researchers, has been identified as the 1940 book-length publication Extra-sensory perception after sixty years, authored by Duke University psychologists J. G. Pratt, J. B. Rhine, and associates.[5] This encompassed a review of 145 reports on ESP experiments published from 1882 to 1939, and included an estimate of the influence of unpublished papers on the overall effect (the file-drawer problem).
Although meta-analysis is widely used in epidemiology and evidence-based medicine today, a meta-analysis of a medical treatment was not published until 1955. In the 1970s, more sophisticated analytical techniques were introduced in educational research, starting with the work of Gene V. Glass, Frank L. Schmidt and John E. Hunter. The term "meta-analysis" was coined by Gene V. Glass,[6] who was the first modern statistician to formalize the use of the term. He states: "my major interest currently is in what we have come to call ...the meta-analysis of research. The term is a bit grand, but it is precise and apt ... Meta-analysis refers to the analysis of analyses". Although this led to him being widely recognized as the modern founder of the method, the methodology behind what he termed "meta-analysis" predates his work by several decades.[7][8] The statistical theory surrounding meta-analysis was greatly advanced by the work of Nambury S. Raju, Larry V. Hedges, Harris Cooper, Ingram Olkin, John E. Hunter, Jacob Cohen, Thomas C.
Chalmers, Robert Rosenthal and Frank L. Schmidt.



Advantages of meta-analysis
The advantages of meta-analysis (e.g. over classical literature reviews, simple overall means of effect sizes, etc.) are that it:
Shows whether the results are more varied than what is expected from the sample diversity,
Allows derivation and statistical testing of overall factors and effect-size parameters in related studies,
Is a generalization to the population of studies,
Is able to control for between-study variation,
Includes moderators to explain variation,
Has higher statistical power to detect an effect than individual studies,
Deals with information overload: the high number of articles published each year,
Combines several studies and will therefore be less influenced by local biases than single studies will be, and
Makes it possible to show whether a publication bias exists.

A meta-analysis of several small studies does not predict the results of a single large study, especially in a field like medicine where results are truly unpredictable.[9] Some have argued that a weakness of the method is that sources of bias are not controlled by the method. A good meta-analysis of badly designed studies will still result in bad statistics, according to Robert Slavin.[10] Slavin has argued that only methodologically sound studies should be included in a meta-analysis, a practice he calls 'best evidence synthesis'. Other meta-analysts would include weaker studies, and add a study-level predictor variable that reflects the methodological quality of the studies to examine the effect of study quality on the effect size.[11] However, Glass and colleagues argued that the better approach preserves variance in the study sample, casting as wide a net as possible, and that methodological selection criteria introduce unwanted subjectivity, defeating the purpose of the approach.[12]

Publication bias: the file drawer problem

Another potential pitfall is the reliance on the available corpus of published studies, which may create exaggerated outcomes due to publication bias, as it is far harder to publish studies showing negative results. For any given research area, one cannot know how many studies have been conducted but never reported, their results filed away.[13] This file drawer problem results in a distribution of effect sizes that is biased, skewed or completely cut off, creating a serious base rate fallacy in which the significance of the published studies is overestimated. For example, if there were fifty tests and only ten obtained significant results, then the real outcome is only 20% as significant as it appears, because the other 80% were either not submitted for publishing or thrown out by publishers as uninteresting. This should be seriously considered when interpreting the outcomes of a meta-analysis.[13][14]

A funnel plot expected without the file drawer problem

This can be visualized with a funnel plot, which is a scatter plot of sample size and effect size. If no publication bias is present, one would expect no relation between sample size and effect size.[15] A negative relation between sample size and effect size would imply that studies that found significant effects were more likely to be published and/or to be submitted for publication.

A funnel plot expected with the file drawer problem

There are several procedures available that attempt to correct for the file drawer problem once it is identified, such as guessing at the cut-off part of the distribution of study effects. Methods for detecting publication bias have been controversial, as they typically have low power for detection of bias, but also may create false positives under some circumstances.[16] For instance, small study effects, wherein methodological differences between smaller and larger studies exist, may cause differences in effect sizes between studies that resemble publication bias. However, small study effects may be just as problematic for the interpretation of meta-analyses, and the imperative is on meta-analytic authors to investigate potential sources of bias. A Tandem Method for analyzing publication bias has been suggested for cutting down false positive error problems; it suggests that 25% of meta-analyses in the psychological sciences may have publication bias.[17] However, low power problems likely remain at issue, and estimations of publication bias may remain lower than the true amount. Most discussions of publication bias focus on journal practices in which publication rates of statistically significant findings are higher than for non-significant findings. However, questionable researcher practices, such as reworking statistical models until significance is achieved, may also promote a bias toward statistically significant findings,[18] allowing researchers to confirm their own beliefs.[19] Unlike journal practices, questionable researcher practices are not necessarily sample-size dependent, and are thus unlikely to show up on a funnel plot and likely to go undetected by most publication bias detection methods currently in use.
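The funnel-plot logic lends itself to a quick numerical check: if there is no publication bias, effect size and sample size should be roughly uncorrelated across studies. A minimal sketch with invented study data (real analyses use formal asymmetry tests such as those surveyed in [16]):

```python
import math

# Hypothetical per-study data: (effect size, sample size).
# Small studies here report larger effects, the asymmetric
# pattern a funnel plot would reveal.
studies = [(0.80, 20), (0.65, 35), (0.40, 80), (0.35, 150), (0.30, 400)]

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

effects = [e for e, _ in studies]
sizes = [n for _, n in studies]
r = pearson_r(effects, sizes)
print(round(r, 2))
```

A strongly negative correlation is the numeric counterpart of funnel-plot asymmetry; it is only suggestive, since, as noted above, small study effects can produce the same pattern without any publication bias.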
Other weaknesses are:
Simpson's paradox (two smaller studies may point in one direction, and the combination study in the opposite direction);
the coding of an effect is subjective;
the decision to include or reject a particular study is subjective;[20]
there are two different ways to measure effect: correlation or standardized mean difference;
the interpretation of effect size is purely arbitrary;
it has not been determined whether the statistically most accurate method for combining results is the fixed, random or quality effect model; and,
for medicine, the underlying risk in each studied group is of significant importance, and there is no universally agreed-upon way to weight the risk.

Dangers of agenda-driven bias

The most severe weakness and abuse of meta-analysis often occurs when the person or persons doing the meta-analysis have an economic, social, or political agenda, such as the passage or defeat of legislation. People with these types of agendas have a high likelihood of abusing meta-analysis due to personal bias. For example, researchers favorable to the author's agenda are likely to have their studies cherry-picked, while those not favorable will be ignored or labeled as "not credible". In addition, the favored authors may themselves be biased or paid to produce results that support their overall political, social, or economic goals, for example by selecting small favorable data sets and not incorporating larger unfavorable data sets. The influence of such biases on the results of a meta-analysis is possible because the methodology of meta-analysis is highly malleable.[20]
A 2011 study done to disclose possible conflicts of interest in underlying research studies used for medical meta-analyses reviewed 29 meta-analyses and found that conflicts of interest in the studies underlying the meta-analyses were rarely disclosed. The 29 meta-analyses included 11 from general medicine journals, 15 from specialty medicine journals, and three from the Cochrane Database of Systematic Reviews. The 29 meta-analyses reviewed an aggregate of 509 randomized controlled trials (RCTs). Of these, 318 RCTs reported funding sources, with 219 (69%) industry funded. Of the 509 RCTs, 132 reported author conflict of interest disclosures, with 91 studies (69%) disclosing industry financial ties with one or more authors. The information was, however, seldom reflected in the meta-analyses. Only two (7%) reported RCT funding sources, and none reported RCT author-industry ties. The authors concluded that without acknowledgment of COI due to industry funding or author-industry financial ties from RCTs included in meta-analyses, readers' understanding and appraisal of the evidence from the meta-analysis may be compromised.[21]


Steps in a meta-analysis
1. Formulation of the problem
2. Search of literature
3. Selection of studies ('incorporation criteria'):
Based on quality criteria, e.g. the requirement of randomization and blinding in a clinical trial
Selection of specific studies on a well-specified subject, e.g. the treatment of breast cancer
Deciding whether unpublished studies are included, to avoid publication bias (the file drawer problem)
4. Decide which dependent variables or summary measures are allowed. For instance:
Differences (discrete data)
Means (continuous data)
Hedges' g is a popular summary measure for continuous data that is standardized in order to eliminate scale differences, but it incorporates an index of variation between groups:
$g = \frac{\bar{x}_t - \bar{x}_c}{s^*}$
in which $\bar{x}_t$ is the treatment mean, $\bar{x}_c$ is the control mean, and $s^*$ the pooled standard deviation.
5. Model selection (see next section)
For reporting guidelines, see the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.[22]
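The Hedges' g summary measure described above can be computed directly. A minimal Python sketch (the means, standard deviations and group sizes are invented; the small-sample correction factor J is included, as is conventional for g):

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Hedges' g: standardized mean difference using the pooled
    standard deviation, with the small-sample correction factor J."""
    s_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                         / (n_t + n_c - 2))
    d = (mean_t - mean_c) / s_pooled          # uncorrected effect size
    j = 1 - 3 / (4 * (n_t + n_c) - 9)         # bias correction factor
    return j * d

# Hypothetical study: treatment mean 105, control mean 100,
# both arms with SD 10 and 30 participants each.
print(round(hedges_g(105, 100, 10, 10, 30, 30), 3))
```

With equal SDs the pooled standard deviation is 10, so the uncorrected difference is 0.5 and the correction shrinks it slightly.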

Meta-analysis models
1. Fixed effect model
The fixed effect model provides a weighted average of the study estimates, the weights being the inverse of the variance of each study estimate. Thus larger studies get larger weights than smaller studies, and if the studies within the meta-analysis are dominated by a very large study, it receives essentially all the weight and smaller studies are ignored.[23] This is not so bad if study effect sizes differ only by sampling error, but once heterogeneity is present it must be accounted for by the model, and one of the other models below should be used.
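The inverse variance weighting just described reduces to a few lines. A sketch with invented effect sizes and within-study variances:

```python
# Fixed effect (inverse variance) pooling: each study contributes
# with weight 1 / variance, so larger (lower-variance) studies dominate.
# The effect sizes and variances below are made-up illustration data.
effects = [0.50, 0.30, 0.40]
variances = [0.10, 0.02, 0.05]

weights = [1 / v for v in variances]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_var = 1 / sum(weights)   # variance of the pooled estimate
print(round(pooled, 3), round(pooled_var, 4))
```

Here the middle study carries weight 50 against 10 and 20 for the others, so the pooled estimate (0.35) sits closest to its effect size of 0.30.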

2. Random effects model

A common model used to synthesize heterogeneous research is the random effects model of meta-analysis. This is simply the weighted average of the effect sizes of a group of studies. The weight applied in this process of weighted averaging with a random effects meta-analysis is achieved in two steps:[24]
1. Inverse variance weighting
2. Un-weighting of this inverse variance weighting by applying a random effects variance component (REVC), which is simply derived from the extent of variability of the effect sizes of the underlying studies.
This means that the greater the variability in effect sizes (otherwise known as heterogeneity), the greater the un-weighting, and this can reach a point where the random effects meta-analysis result becomes simply the un-weighted average effect size across the studies. At the other extreme, when all effect sizes are similar (or variability does not exceed sampling error), no REVC is applied and the random effects meta-analysis defaults to simply a fixed effect meta-analysis (only inverse variance weighting).

The extent of this reversal is solely dependent on two factors:[25]
1. Heterogeneity of precision
2. Heterogeneity of effect size
Since there is no reason to automatically assume that a larger variability in study sizes or effect sizes indicates a faulty larger study or more reliable smaller studies, the re-distribution of weights under this model bears no relationship to what these studies have to offer. Indeed, there is no reason why the results of a meta-analysis should be associated with this method of reversal of the inverse variance weighting process of the included studies. As such, the changes in weight introduced by this model (to each study) result in a pooled estimate that can have no possible interpretation and, thus, bears no relationship with what the studies actually have to offer.[25] To compound the problem further, some statisticians[26] are proposing that we take an estimate that has no meaning and compute a prediction interval around it. This is akin to taking a random guess at the effectiveness of a therapy and, under the false belief that it is meaningful, trying to expand on its interpretation. Unfortunately, there is no statistical manipulation that can replace common sense. While heterogeneity might be due to underlying true differences in study effects, it is more than likely that such differences are brought about by systematic error. The best we can do in terms of addressing heterogeneity is to look at the list of studies and attempt to un-weight (from inverse variance) based on differences in evidence of bias, rather than on effect size or precision, which are consequences of these failures.
The most widely used method to estimate and account for heterogeneity is the DerSimonian-Laird (DL) approach.[27] More recently, the iterative and computationally intensive restricted maximum likelihood (REML) approach emerged and is catching up. However, a comparison between these two (and more) models demonstrated that there is little to gain and DL is quite adequate in most scenarios.[28]
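The two-step random effects computation, using the DerSimonian-Laird (DL) estimate of the between-study variance, can be sketched as follows (study data invented; a full analysis would also report standard errors and confidence intervals):

```python
def dersimonian_laird(effects, variances):
    """Two-step random effects pooling: (1) fixed-effect inverse
    variance weighting, (2) re-weight with the DL estimate of the
    between-study variance tau^2 (the REVC)."""
    k = len(effects)
    w = [1 / v for v in variances]
    fixed = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q measures observed heterogeneity around the fixed estimate.
    q = sum(wi * (e - fixed) ** 2 for wi, e in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)        # truncated at zero
    # Step 2: add tau^2 to every study variance and re-pool.
    w_star = [1 / (v + tau2) for v in variances]
    pooled = sum(wi * e for wi, e in zip(w_star, effects)) / sum(w_star)
    return pooled, tau2

# Invented data: one precise study (variance 0.01) and two imprecise ones.
effects = [0.80, 0.10, 0.45]
variances = [0.01, 0.04, 0.09]
pooled, tau2 = dersimonian_laird(effects, variances)
print(pooled, tau2)
```

Because tau^2 is added to every study's variance, the precise first study loses relative weight: the random effects estimate (about 0.47) sits much closer to the unweighted mean (0.45) than the fixed effect estimate (about 0.64), illustrating the "un-weighting" described above.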


3. Quality effects model

Some researchers[29] introduce a new approach to adjustment for inter-study variability by incorporating a relevant component (quality) that differs between studies, in addition to the weight based on the intra-study differences that is used in any fixed effects meta-analysis model. The strength of the quality effects meta-analysis is that it allows available methodological evidence to be used over subjective random probability, and thereby helps to close the damaging gap which has opened up between methodology and statistics in clinical research. To do this, a correction for the quality adjusted weight of the i-th study, called $\tau_i$, is introduced.[30] This is a composite based on the quality of the other studies, excluding the study under consideration, and is utilized to re-distribute quality adjusted weights based on the quality adjusted weights of the other studies. In other words, if study i is of good quality and the other studies are of poor quality, a proportion of their quality adjusted weights is mathematically redistributed to study i, giving it more weight towards the overall effect size. As studies increase in quality, re-distribution becomes progressively less and ceases when all studies are of perfect quality. This model thus replaces the untenable interpretations that abound in the literature, and software is available to explore the method further.[31]

Meta-regression is a tool used in meta-analysis to examine the impact of moderator variables on study effect size using regression-based techniques. Meta-regression is more effective at this task than are standard regression techniques.
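As a sketch of what meta-regression does, the following fits a weighted least-squares line of effect size on a single moderator, weighting each study by inverse variance (all numbers invented; real meta-regression, e.g. mixed-effects meta-regression, also models residual heterogeneity):

```python
# Meta-regression sketch: regress study effect sizes on a moderator
# (say, mean participant age per study), with inverse variance weights.
moderator = [20, 30, 40, 50]          # moderator value per study
effects   = [0.20, 0.35, 0.50, 0.65]  # observed effect sizes
variances = [0.04, 0.02, 0.02, 0.04]

w = [1 / v for v in variances]
sw = sum(w)
mx = sum(wi * x for wi, x in zip(w, moderator)) / sw   # weighted means
my = sum(wi * y for wi, y in zip(w, effects)) / sw
slope = (sum(wi * (x - mx) * (y - my)
             for wi, x, y in zip(w, moderator, effects))
         / sum(wi * (x - mx) ** 2 for wi, x in zip(w, moderator)))
intercept = my - slope * mx
print(slope, intercept)
```

The slope estimates how much the effect size changes per unit of the moderator; in a real analysis its standard error would be tested to decide whether the moderator explains between-study variation.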

Applications in modern science

Modern statistical meta-analysis does more than just combine the effect sizes of a set of studies. It can test if the outcomes of studies show more variation than the variation that is expected because of sampling different research participants. If that is the case, study characteristics such as measurement instrument used, population sampled, or aspects of the studies' design are coded. These characteristics are then used as predictor variables to analyze the

excess variation in the effect sizes. Some methodological weaknesses in studies can be corrected statistically. For example, it is possible to correct effect sizes or correlations for the downward bias due to measurement error or restriction on score ranges.
Meta-analysis can be done with single-subject designs as well as group research designs. This is important because much of the research on low-incidence populations has been done with single-subject research designs. Considerable dispute exists over the most appropriate meta-analytic technique for single-subject research.[32]
Meta-analysis leads to a shift of emphasis from single studies to multiple studies. It emphasizes the practical importance of the effect size instead of the statistical significance of individual studies. This shift in thinking has been termed "meta-analytic thinking". The results of a meta-analysis are often shown in a forest plot.
Results from studies are combined using different approaches. One approach frequently used in meta-analysis in health care research is termed the 'inverse variance method'. The average effect size across all studies is computed as a weighted mean, whereby the weights are equal to the inverse variance of each study's effect estimator. Larger studies and studies with less random variation are given greater weight than smaller studies. Other common approaches include the Mantel-Haenszel method[33] and the Peto method.
A recent approach to studying the influence that weighting schemes can have on results has been proposed through the construct of gravity, which is a special case of combinatorial meta-analysis.
Signed differential mapping is a statistical technique for meta-analyzing studies on differences in brain activity or structure which use neuroimaging techniques such as fMRI, VBM or PET.
Different high-throughput techniques such as microarrays have been used to understand gene expression.
MicroRNA expression profiles have been used to identify differentially expressed microRNAs in particular cell or tissue type or disease conditions or to check the effect of a treatment. A meta-analysis of such expression profiles was performed to derive novel conclusions and to validate the known findings.[34]


References
[1] Greenland S, O'Rourke K: Meta-Analysis. Page 652 in Modern Epidemiology, 3rd ed. Edited by Rothman KJ, Greenland S, Lash T. Lippincott Williams and Wilkins; 2008.
[2] Glossary at Cochrane Collaboration (http://www.cochrane.org/glossary/)
[3] Nordmann, AJ; Kasenda, B; Briel, M (2012). "Meta-analyses: what they can and cannot do" (http://www.smw.ch/content/smw-2012-13518/). Swiss Medical Weekly 142: w13518. doi:10.4414/smw.2012.13518. PMID 22407741.
[4] O'Rourke, Keith (2007-12-01). "An historical perspective on meta-analysis: dealing quantitatively with varying study results". J R Soc Med 100 (12): 579–582. doi:10.1258/jrsm.100.12.579. PMC 2121629. PMID 18065712.
[5] Bösch, H. (2004). Reanalyzing a meta-analysis on extra-sensory perception dating from 1940, the first comprehensive meta-analysis in the history of science. In S. Schmidt (Ed.), Proceedings of the 47th Annual Convention of the Parapsychological Association, University of Vienna, (pp. 1–13).
[6] Glass G. V (1976). "Primary, secondary, and meta-analysis of research". Educational Researcher 5 (10): 3–8. doi:10.3102/0013189X005010003.
[7] Cochran WG. Problems Arising in the Analysis of a Series of Similar Experiments. Journal of the Royal Statistical Society, 4:102–118, 1937.
[8] Cochran WG and Carroll SP. A Sampling Investigation of the Efficiency of Weighting Inversely as the Estimated Variance. Biometrics 9:447–459, 1953.
[9] Lelorier, J.; Grégoire, G.; Benhaddad, A.; Lapierre, J.; Derderian, F. (1997). "Discrepancies between Meta-Analyses and Subsequent Large Randomized, Controlled Trials". New England Journal of Medicine 337 (8): 536–542. doi:10.1056/NEJM199708213370806. PMID 9262498.
[10] Slavin, R. E. (1986). "Best-Evidence Synthesis: An Alternative to Meta-Analytic and Traditional Reviews". Educational Researcher 15 (9): 5–9. doi:10.3102/0013189X015009005.
[11] Hunter, Schmidt, & Jackson (1982). Meta-analysis: Cumulating research findings across studies. Beverly Hills, California: Sage.
[12] Glass, McGaw, & Smith (1981). Meta-analysis in social research. Beverly Hills, CA: Sage.
[13] Rosenthal, Robert (1979). "The "File Drawer Problem" and the Tolerance for Null Results". Psychological Bulletin 86 (3): 638–641. doi:10.1037/0033-2909.86.3.638.
[14] Hunter, John E; Schmidt, Frank L (1990). Methods of Meta-Analysis: Correcting Error and Bias in Research Findings. Newbury Park, California; London; New Delhi: SAGE Publications.
[15] Light & Pillemer (1984). Summing up: The science of reviewing research. Cambridge, MA: Harvard University Press.
[16] Ioannidis, J., & Trikalinos, T. (2007). "The appropriateness of asymmetry tests for publication bias in meta-analyses: a large survey" (http://www.cmaj.ca/content/176/8/1091.full). Canadian Medical Association Journal 176 (8): 638–641. doi:10.1503/cmaj.060410.
[17] Ferguson, C., & Brannick, M. (2012). "Publication bias in psychological science: Prevalence, methods for identifying and controlling, and implications for the use of meta-analyses" (http://www.tamiu.edu/~cferguson/PubBias.pdf). Psychological Methods 17 (1): 120–128. doi:10.1037/a0024445.
[18] Simmons, J., Nelson, L., & Simonsohn, U. (2011). "False-Positive Psychology: Undisclosed Flexibility in Data Collection and Analysis Allows Presenting Anything as Significant" (http://people.psych.cornell.edu/~jec7/pcd pubs/simmonsetal11.pdf). Psychological Science 22 (11): 1359–1366. doi:10.1177/0956797611417632.
[19] LeBel, E. & Peters, K. (2011). "Fearing the future of empirical psychology: Bem's (2011) evidence of psi as a case study of deficiencies in modal research practice" (http://publish.uwo.ca/~elebel/documents/l&p(2011,rgp).pdf). Review of General Psychology 15 (4): 371–379. doi:10.1037/a0025172.
[20] Stegenga, J. (2011). "Is meta-analysis the platinum standard?" (http://www.sciencedirect.com/science/article/pii/S1369848611000665). Studies in History and Philosophy of Biological and Biomedical Sciences 42 (4): 497–507. doi:10.1016/j.shpsc.2011.07.003.
[21] "How Well Do Meta-Analyses Disclose Conflicts of Interests in Underlying Research Studies | The Cochrane Collaboration" (http://www.cochrane.org/news/blog/how-well-do-meta-analyses-disclose-conflicts-interests-underlying-research-studies). Retrieved 2012-01-13.
[22] "The PRISMA statement" (http://www.prisma-statement.org/). 2012-02-02. Retrieved 2012-02-02.
[23] Helfenstein U. Data and models determine treatment proposals: an illustration from meta-analysis. Postgrad Med J. 2002 Mar;78(917):131–4.
[24] Senn S. Trying to be precise about vagueness. Stat Med 2007; 26:1417–30.
[25] Al Khalaf MM, Thalib L, Doi SA. "Combining heterogenous studies using the random-effects model is a mistake and leads to inconclusive meta-analyses" (http://dl.dropbox.com/u/85192141/2011-khalaf.pdf). Journal of Clinical Epidemiology 2011; 64:119–23.
[26] Riley RD, Higgins JP, Deeks JJ. (2011). "Interpretation of random effects meta-analyses". British Medical Journal Feb 10;342:d549. doi:10.1136/bmj.d549.
[27] DerSimonian R, Laird N. (1986). "Meta-analysis in clinical trials". Controlled Clinical Trials, 7, 177–188. doi:10.1016/0197-2456(86)90046-2.
[28] Kontopantelis E, Reeves D. Performance of statistical methods for meta-analysis when true study effects are non-normally distributed: A simulation study. Statistical Methods in Medical Research. 2010 Dec. doi:10.1177/0962280210392008.
[29] Doi SA, Barendregt JJ, Mozurkewich EL. Meta-analysis of heterogeneous clinical trials: an empirical example. Contemp Clin Trials. 2011 Mar;32(2):288–98.
[30] Doi SA, Thalib L. A quality-effects model for meta-analysis. Epidemiology. 2008 Jan;19(1):94–100.
[31] MetaXL software page (http://www.epigear.com/)
[32] Van den Noortgate, W. & Onghena, P. (2007). Aggregating Single-Case Results. The Behavior Analyst Today, 8(2), 196–209. BAO (http://www.baojournal.com)
[33] Mantel, N.; Haenszel, W. (1959). "Statistical aspects of the analysis of data from the retrospective analysis of disease". Journal of the National Cancer Institute 22 (4): 719–748. PMID 13655060.
[34] Bargaje, R; Hariharan, M; Scaria, V; Pillai, B (2010). "Consensus miRNA expression profiles derived from interplatform normalization of microarray data". RNA 16 (1): 16–25. doi:10.1261/rna.1688110. PMC 2802026. PMID 19948767.


Cooper, H. & Hedges, L.V. (1994). The Handbook of Research Synthesis. New York: Russell Sage. Cornell, J. E. & Mulrow, C. D. (1999). Meta-analysis. In: H. J. Adr & G. J. Mellenbergh (Eds). Research Methodology in the social, behavioral and life sciences (pp.285323). London: Sage. Norman S.-L. T. (1999). "Tutorial in Biostatistics. Meta-Analysis: Formulating, Evaluating, Combining, and Reporting". Statistics in Medicine 18 (3): 321359. doi:10.1002/(SICI)1097-0258(19990215)18:3<321::AID-SIM28>3.0.CO;2-P. PMID10070677. Sutton, A.J., Jones, D.R., Abrams, K.R., Sheldon, T.A., & Song, F. (2000). Methods for Meta-analysis in Medical Research. London: John Wiley. ISBN 0-471-49066-0 Higgins JPT, Green S (editors). Cochrane Handbook for Systematic Reviews of Interventions Version 5.0.1 [updated September 2008]. The Cochrane Collaboration, 2008. Available from





Special B
Eye tracking
Eye tracking is the process of measuring either the point of gaze (where one is looking) or the motion of an eye relative to the head. An eye tracker is a device for measuring eye positions and eye movement. Eye trackers are used in research on the visual system, in psychology, in cognitive linguistics and in product design. There are a number of methods for measuring eye movement. The most popular variant uses video images from which the eye position is extracted. Other methods use search coils or are based on the electrooculogram.

History

In the 1800s, studies of eye movement were made using direct observations. In 1879 in Paris, Louis Émile Javal observed that reading does not involve a smooth sweeping of the eyes along the text, as previously assumed, but a series of short stops (called fixations) and quick saccades.[1] This observation raised important questions about reading, which were explored during the 1900s: On which words do the eyes stop? For how long? When do they regress back to already seen words? Edmund Huey[2] built an early eye tracker, using a sort of contact lens with a hole for the pupil. The lens was connected to an aluminum pointer that moved in response to the movement of the eye. Huey studied and quantified regressions (only a small proportion of saccades are regressions), and he showed that some words in a sentence are not fixated. The first non-intrusive eye trackers were built by Guy Thomas Buswell in Chicago, using beams of light that were reflected on the eye and then recorded on film. Buswell made systematic studies into reading[3] and picture viewing.[4] In the 1950s, Alfred L. Yarbus[5] did important eye tracking research, and his 1967 book is often quoted. He showed that the task given to a subject has a very large influence on the subject's eye movement. He also wrote about the relation between fixations and interest: "All the records ... show conclusively that the character of the eye movement is either completely independent of or only very slightly dependent on the material of the picture and how it was made, provided that it is flat or nearly flat."[6] The cyclical pattern in the examination of pictures "is dependent not only on what is shown on the picture, but also on the problem facing the observer and the information that he hopes to gain from the picture."[7]

"Records of eye movements show that the observer's attention is usually held only by certain elements of the picture.... Eye movement reflects the human thought processes; so the observer's thought may be followed to some extent from records of eye movement (the thought accompanying the examination of the particular object). It is easy to determine from these records which elements attract the observer's eye (and, consequently, his thought), in what order, and how often."[6]
[Figure caption: This study by Yarbus (1967) is often referred to as evidence of how the task given to a person influences his or her eye movement.]

"The observer's attention is frequently drawn to elements which do not give important information but which, in his opinion, may do so. Often an observer will focus his attention on elements that are unusual in the particular circumstances, unfamiliar, incomprehensible, and so on."[8]

"... when changing its points of fixation, the observer's eye repeatedly returns to the same elements of the picture. Additional time spent on perception is not used to examine the secondary elements, but to reexamine the most important elements."[9]

In the 1970s, eye tracking research expanded rapidly, particularly reading research. A good overview of the research in this period is given by Rayner.[13] In 1980, Just and Carpenter[14] formulated the influential Strong eye-mind hypothesis: "there is no appreciable lag between what is fixated and what is processed". If this hypothesis is correct, then when a subject looks at a word or object, he or she also thinks about it (processes it cognitively) for exactly as long as the recorded fixation. The hypothesis is often taken for granted by beginning eye tracker researchers. However, gaze-contingent techniques offer an interesting option in order to disentangle overt and covert attention, to differentiate what is fixated from what is processed.

[Figure caption: This study by Hunziker (1970)[10] on eye tracking in problem solving used simple 8mm film to track eye movement by filming the subject through a glass plate on which the visual problem was displayed.[11][12]]

During the 1980s, the eye-mind hypothesis was often questioned in light of covert attention,[15][16] the attention to something that one is not looking at, which people often do. If covert attention is common during eye tracking recordings, the resulting scan path and fixation patterns would often show not where our attention has been, but only where the eye has been looking, and so eye tracking would not indicate cognitive processing. The 1980s also saw the birth of using eye tracking to answer questions related to human-computer interaction. Specifically, researchers investigated how users search for commands in computer menus.[17] Additionally, computers allowed researchers to use eye-tracking results in real time, primarily to help disabled users.[18] More recently, there has been growth in using eye tracking to study how users interact with different computer interfaces. Specific questions researchers ask are related to how easy different interfaces are for users.[19] The results of eye tracking research can lead to changes in the design of the interface. Yet another recent area of research focuses on Web development. This can include how users react to drop-down menus or where they focus their attention on a website so the developer knows where to place an advertisement.[20] According to Hoffman,[21] the current consensus is that visual attention is always slightly (100 to 250 ms) ahead of the eye. But as soon as attention moves to a new position, the eyes will want to follow.[22] We still cannot infer specific cognitive processes directly from a fixation on a particular object in a scene.[23] For instance, a fixation on a face in a picture may indicate recognition, liking, dislike, puzzlement, etc. Therefore, eye tracking is often coupled with other methodologies, such as introspective verbal protocols.


Tracker types
Eye trackers measure rotations of the eye in one of several ways, but principally they fall into three categories.

One type uses an attachment to the eye, such as a special contact lens with an embedded mirror or magnetic field sensor, and the movement of the attachment is measured with the assumption that it does not slip significantly as the eye rotates. Measurements with tight-fitting contact lenses have provided extremely sensitive recordings of eye movement, and magnetic search coils are the method of choice for researchers studying the dynamics and underlying physiology of eye movement.

The second broad category uses some non-contact, optical method for measuring eye motion. Light, typically infrared, is reflected from the eye and sensed by a video camera or some other specially designed optical sensor. The information is then analyzed to extract eye rotation from changes in reflections. Video-based eye trackers typically use the corneal reflection (the first Purkinje image) and the center of the pupil as features to track over time. A more sensitive type of eye tracker, the dual-Purkinje eye tracker,[24] uses reflections from the front of the cornea (first Purkinje image) and the back of the lens (fourth Purkinje image) as features to track. A still more sensitive method of tracking is to image features from inside the eye, such as the retinal blood vessels, and follow these features as the eye rotates. Optical methods, particularly those based on video recording, are widely used for gaze tracking and are favored for being non-invasive and inexpensive.

The third category uses electric potentials measured with electrodes placed around the eyes. The eyes are the origin of a steady electric potential field, which can also be detected in total darkness and when the eyes are closed. It can be modelled as being generated by a dipole with its positive pole at the cornea and its negative pole at the retina.
The electric signal that can be derived using two pairs of contact electrodes placed on the skin around one eye is called the electrooculogram (EOG). If the eyes move from the centre position towards the periphery, the retina approaches one electrode while the cornea approaches the opposing one. This change in the orientation of the dipole, and consequently of the electric potential field, results in a change in the measured EOG signal. Inversely, by analysing these changes, eye movement can be tracked. Due to the discretisation given by the common electrode setup, two separate movement components, a horizontal and a vertical, can be identified. A third EOG component is the radial EOG channel,[25] which is the average of the EOG channels referenced to some posterior scalp electrode. This radial EOG channel is sensitive to the saccadic spike potentials stemming from the extra-ocular muscles at the onset of saccades, and allows reliable detection of even miniature saccades.[26] Potential drifts and variable relations between the EOG signal amplitude and saccade size make it challenging to use EOG for measuring slow eye movement and detecting gaze direction. EOG is, however, a very robust technique for measuring saccadic eye movement associated with gaze shifts and for detecting blinks. In contrast to video-based eye trackers, EOG allows recording of eye movements even with the eyes closed, and can thus be used in sleep research. It is a very lightweight approach that, unlike current video-based eye trackers, requires only very low computational power, works under different lighting conditions and can be implemented as an embedded, self-contained wearable system.[27] It is thus the method of choice for measuring eye movement in mobile daily-life situations and REM phases during sleep. The major disadvantage of EOG is its relatively poor gaze-direction accuracy compared with a video tracker. That is, it is difficult to determine with good accuracy exactly where a subject is looking using EOG, though the time of eye movements can be determined.
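Because EOG captures the timing of saccades so robustly, a common processing step is velocity-threshold saccade detection. The sketch below illustrates the idea on a single horizontal EOG channel; the linear volts-to-degrees gain and the 30 deg/s threshold are illustrative assumptions, not values from the text, and real pipelines calibrate both per subject.

```python
import numpy as np

def detect_saccades(eog_uv, fs, gain_deg_per_uv=0.05, vel_thresh=30.0):
    """Velocity-threshold saccade detection on one horizontal EOG channel.

    eog_uv: 1-D array of EOG samples in microvolts
    fs: sampling rate in Hz
    gain_deg_per_uv: assumed linear volts-to-degrees calibration
    vel_thresh: angular velocity threshold in deg/s
    Returns a list of (start_index, end_index) sample ranges.
    """
    angle = np.asarray(eog_uv, dtype=float) * gain_deg_per_uv  # degrees
    velocity = np.abs(np.gradient(angle)) * fs                 # deg/s
    above = velocity > vel_thresh
    saccades, start = [], None
    for i, flag in enumerate(above):
        if flag and start is None:
            start = i                        # saccade onset
        elif not flag and start is not None:
            saccades.append((start, i - 1))  # saccade offset
            start = None
    if start is not None:
        saccades.append((start, len(above) - 1))
    return saccades

# Synthetic trace: fixation, one rapid ~10-degree shift, fixation again.
fs = 250
trace = np.concatenate([np.zeros(100), np.linspace(0, 200, 10), np.full(100, 200.0)])
print(detect_saccades(trace, fs))  # one (start, end) pair around the shift
```

Drift, the main EOG weakness noted above, is why the sketch thresholds velocity rather than position: a slow baseline drift produces low velocities and is ignored, while a saccade is not.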


Technologies and techniques

The most widely used current designs are video-based eye trackers. A camera focuses on one or both eyes and records their movement as the viewer looks at some kind of stimulus. Most modern eye trackers use the center of the pupil and infrared / near-infrared non-collimated light to create corneal reflections (CR). The vector between the pupil center and the corneal reflections can be used to compute the point of regard on a surface, or the gaze direction. A simple calibration procedure of the individual is usually needed before using the eye tracker.[28]

Two general types of eye tracking techniques are used: bright pupil and dark pupil. Their difference is based on the location of the illumination source with respect to the optics. If the illumination is coaxial with the optical path, then the eye acts as a retroreflector as the light reflects off the retina, creating a bright pupil effect similar to red eye. If the illumination source is offset from the optical path, then the pupil appears dark because the retroreflection from the retina is directed away from the camera. Bright pupil tracking creates greater iris/pupil contrast, allowing for more robust eye tracking with all iris pigmentations, and greatly reduces interference caused by eyelashes and other obscuring features. It also allows for tracking in lighting conditions ranging from total darkness to very bright. But bright pupil techniques are not effective for tracking outdoors, as extraneous IR sources interfere with monitoring.

Eye tracking setups vary greatly; some are head-mounted, some require the head to be stable (for example, with a chin rest), and some function remotely and automatically track the head during motion. Most use a sampling rate of at least 30 Hz. Although 50/60 Hz is most common, today many video-based eye trackers run at 240, 350 or even 1000/1250 Hz, which is needed in order to capture the detail of the very rapid eye movement during reading, or during studies of neurology.
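The mapping from the pupil-corneal-reflection vector to a point of regard is what the calibration procedure estimates. As an illustration only (not any vendor's actual algorithm), the sketch below fits a simple affine map from a handful of calibration fixations using least squares; the target layout, screen size and noise-free data are hypothetical.

```python
import numpy as np

def fit_calibration(vectors, screen_pts):
    """Least-squares fit of an affine map from pupil-CR vectors to screen.

    vectors, screen_pts: (N, 2) arrays gathered while the subject fixates
    N known calibration targets. Returns a (3, 2) coefficient matrix A
    such that [vx, vy, 1] @ A approximates the screen point (x, y).
    """
    V = np.column_stack([vectors, np.ones(len(vectors))])
    A, *_ = np.linalg.lstsq(V, screen_pts, rcond=None)
    return A

def map_gaze(A, vector):
    """Map one pupil-CR vector to an estimated point of regard."""
    vx, vy = vector
    return np.array([vx, vy, 1.0]) @ A

# Toy calibration: generate 5 targets from a known affine relation.
true_A = np.array([[100.0, 0.0], [0.0, 80.0], [512.0, 384.0]])
targets = np.array([[-1, -1], [1, -1], [0, 0], [-1, 1], [1, 1]], dtype=float)
screen = np.column_stack([targets, np.ones(5)]) @ true_A
A = fit_calibration(targets, screen)
print(map_gaze(A, (0.5, 0.5)))  # maps a new vector to screen coordinates
```

Commercial systems typically use higher-order polynomial or model-based mappings and more calibration points, but the principle, recording the vector at known screen locations and inverting the relation, is the same.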
Eye movement is typically divided into fixations and saccades, when the eye gaze pauses in a certain position, and when it moves to another position, respectively. The resulting series of fixations and saccades is called a scanpath. Most information from the eye is made available during a fixation, but not during a saccade. The central one or two degrees of the visual angle (the fovea) provide the bulk of visual information; the input from larger eccentricities (the periphery) is less informative. Hence, the locations of fixations along a scanpath show what information loci on the stimulus were processed during an eye tracking session. On average, fixations last for around 200 ms during the reading of linguistic text, and 350 ms during the viewing of a scene. Preparing a saccade towards a new goal takes around 200 ms. Scanpaths are useful for analyzing cognitive intent, interest, and salience. Other biological factors (some as simple as gender) may affect the scanpath as well. Eye tracking in HCI typically investigates the scanpath for usability purposes, or as a method of input in gaze-contingent displays, also known as gaze-based interfaces.
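A widely used way to segment raw gaze samples into the fixations and saccades described above is the dispersion-threshold (I-DT) algorithm. The sketch below is generic, not any particular tracker's software; the 1.0-unit dispersion threshold and 100 ms minimum duration are assumed values.

```python
def idt_fixations(points, times, disp_thresh=1.0, min_dur=0.1):
    """Dispersion-threshold (I-DT) fixation detection sketch.

    points: list of (x, y) gaze samples; times: matching timestamps (s).
    A fixation is a maximal run of samples whose dispersion,
    (max_x - min_x) + (max_y - min_y), stays under disp_thresh and whose
    duration reaches min_dur. Returns (start_idx, end_idx) pairs.
    """
    fixations, i = [], 0
    n = len(points)
    while i < n:
        j = i
        # grow the window while the samples stay spatially compact
        while j + 1 < n:
            xs = [p[0] for p in points[i:j + 2]]
            ys = [p[1] for p in points[i:j + 2]]
            if (max(xs) - min(xs)) + (max(ys) - min(ys)) > disp_thresh:
                break
            j += 1
        if times[j] - times[i] >= min_dur:
            fixations.append((i, j))
            i = j + 1
        else:
            i += 1
    return fixations

# 100 Hz toy data: 300 ms at one location, then 300 ms at another.
pts = [(0.0, 0.0)] * 30 + [(10.0, 10.0)] * 30
ts = [k * 0.01 for k in range(60)]
print(idt_fixations(pts, ts))  # two fixations: [(0, 29), (30, 59)]
```

The samples falling between detected fixations are then treated as saccades, and the alternating sequence forms the scanpath discussed above.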


Data presentation
To allow interpretation of the data recorded by the various types of eye trackers, various software exists that animates or visually represents the data, so that the visual behavior of one or more users can be graphically summarized. The following are the most commonly used:

Animated representations of a point on the interface: This method is used when the visual behavior is examined individually, indicating where the user focused his or her gaze at each moment, complemented with a small path that indicates the previous saccade movements, as seen in the image.

Static representations of the saccade path: This is fairly similar to the one described above, with the difference that it is a static method. A higher level of expertise than with the animated ones is required to interpret it.

Heat maps: An alternative static representation, mainly used for the aggregated analysis of the visual exploration patterns in a group of users. In these representations, the hot zones, or zones with higher density, designate where the users focused their gaze with higher frequency.

Blind zones maps: A simplified version of the heat maps in which the zones visually less attended by the users are displayed clearly, allowing for an easier understanding of the most relevant information; that is to say, we are informed about which zones were not seen by the users.

The four methods described above are extremely useful and easy to understand in a later analysis. With them we can easily show the client, with a single image, that users don't explore the interface in as orderly a way as is commonly believed.
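The heat map representation described above can be sketched as a duration-weighted accumulation of Gaussian bumps over the stimulus area. The grid size, sigma and fixation format below are illustrative assumptions, not a specification of any particular tool.

```python
import numpy as np

def heat_map(fixations, width, height, sigma=30.0):
    """Duration-weighted fixation density grid (a simple heat map).

    fixations: iterable of (x, y, duration) tuples in pixel coordinates;
    each fixation adds a Gaussian bump weighted by its duration.
    """
    yy, xx = np.mgrid[0:height, 0:width]
    grid = np.zeros((height, width))
    for x, y, dur in fixations:
        grid += dur * np.exp(-((xx - x) ** 2 + (yy - y) ** 2) / (2 * sigma ** 2))
    return grid

fixations = [(100, 80, 0.3), (300, 200, 0.6)]   # (x, y, seconds)
hm = heat_map(fixations, width=400, height=300)
peak = np.unravel_index(hm.argmax(), hm.shape)  # (row, col) of hottest zone
print(peak)  # the longer fixation dominates the map
```

A blind-zones map, in this sketch, would simply be the cells of the same grid whose accumulated value falls below some small threshold.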

Eye tracking vs. gaze tracking

Eye trackers necessarily measure the rotation of the eye with respect to the measuring system. If the measuring system is head mounted, as with EOG, then eye-in-head angles are measured. If the measuring system is table mounted, as with scleral search coils or table mounted camera (remote) systems, then gaze angles are measured. In many applications, the head position is fixed using a bite bar, a forehead support or something similar, so that eye position and gaze are the same. In other cases, the head is free to move, and head movement is measured with systems such as magnetic or video based head trackers. For head-mounted trackers, head position and direction are added to eye-in-head direction to determine gaze direction. For table-mounted systems, such as search coils, head direction is subtracted from gaze direction to determine eye-in-head position.
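The bookkeeping described above reduces to simple angle arithmetic. This one-dimensional sketch (real systems compose full 3-D rotations, and all numbers here are made up) shows the two conversions:

```python
def gaze_from_head_mounted(head_deg, eye_in_head_deg):
    # head-mounted tracker: add head direction to the eye-in-head angle
    return head_deg + eye_in_head_deg

def eye_in_head_from_remote(gaze_deg, head_deg):
    # table-mounted (remote) tracker: subtract head direction from gaze
    return gaze_deg - head_deg

# Head turned 20 degrees right, eye rotated 5 degrees right in the head:
print(gaze_from_head_mounted(20.0, 5.0))    # 25.0 (gaze in space)
print(eye_in_head_from_remote(25.0, 20.0))  # 5.0 (eye in head)
```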

Eye tracking in practice

A great deal of research has gone into studies of the mechanisms and dynamics of eye rotation, but the goal of eye tracking is most often to estimate gaze direction. Users may be interested in what features of an image draw the eye, for example. It is important to realize that the eye tracker does not provide absolute gaze direction, but rather can only measure changes in gaze direction. In order to know precisely what a subject is looking at, some calibration procedure is required in which the subject looks at a point or series of points, while the eye tracker records the value that corresponds to each gaze position. (Even those techniques that track features of the retina cannot provide exact gaze direction, because there is no specific anatomical feature that marks the exact point where the visual axis meets the retina, if indeed there is such a single, stable point.) An accurate and reliable calibration is essential for obtaining valid and repeatable eye movement data, and this can be a significant challenge for non-verbal subjects or those who have unstable gaze. Each method of eye tracking has advantages and disadvantages, and the choice of an eye tracking system depends on considerations of cost and application. There are offline methods and online procedures like AttentionTracking. There is a trade-off between cost and sensitivity, with the most sensitive systems costing many tens of thousands of dollars and requiring considerable expertise to operate properly. Advances in computer and video technology have led to the development of relatively low-cost systems that are useful for many applications and fairly easy to use. Interpretation of the results still requires some level of expertise, however, because a misaligned or poorly calibrated system can produce wildly erroneous data.


Eye tracking while driving a car in a difficult situation

The eye movements of two groups of drivers were filmed with a special head camera by a team of the Swiss Federal Institute of Technology: novice and experienced drivers had their eye movements recorded while approaching a bend of a narrow road. The series of images has been condensed from the original film frames[29] to show two eye fixations per image for better comprehension. Each of these stills corresponds to approximately 0.5 seconds in real time. Comparison of the top images shows that the experienced driver checks the curve and even has Fixation No. 9 left over to look aside, while the novice driver needs to check the road and estimate his distance to the parked car. In the middle images, the experienced driver is fully concentrating on the location where an oncoming car could be seen. The novice driver concentrates his view on the parked car. In the bottom image the novice is busy estimating the distance between the left wall and the parked car, while the experienced driver can use his peripheral vision for that and still concentrates his view on the dangerous point of the curve: if a car appears there, he has to give way, i.e. stop to the right instead of passing the parked car.[30]

Eye tracking of younger and elderly people in walking

Elderly subjects depend more on foveal vision than younger subjects during walking. Their walking speed is decreased by a limited visual field, probably caused by a deteriorated peripheral vision. Younger subjects make use of both their central and peripheral vision while walking. Their peripheral vision allows faster control over the process of walking.[31]

Choosing an eye tracker

One difficulty in evaluating an eye tracking system is that the eye is never still, and it can be difficult to distinguish the tiny, but rapid and somewhat chaotic movement associated with fixation from noise sources in the eye tracking mechanism itself. One useful evaluation technique is to record from the two eyes simultaneously and compare the vertical rotation records. The two eyes of a normal subject are very tightly coordinated and vertical gaze directions typically agree to within +/- 2 minutes of arc (RMS of vertical position difference) during steady fixation. A properly functioning and sensitive eye tracking system will show this level of agreement between the two eyes, and any differences much larger than this can usually be attributed to measurement error.
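The binocular-agreement check described above can be sketched as computing the RMS of the vertical position difference between the two eyes, converted to minutes of arc. The noise level in the synthetic example is an assumption; with real recordings the two input arrays would be the simultaneous vertical traces of the left and right eye.

```python
import numpy as np

def vertical_agreement_rms_arcmin(left_v_deg, right_v_deg):
    """RMS of the vertical position difference between the two eyes,
    in minutes of arc, after removing any constant offset."""
    diff = np.asarray(left_v_deg) - np.asarray(right_v_deg)
    diff = diff - diff.mean()
    return float(np.sqrt(np.mean(diff ** 2)) * 60.0)

# Synthetic steady fixation with 0.01 deg of independent noise per eye.
rng = np.random.default_rng(0)
left = rng.normal(0.0, 0.01, 1000)
right = rng.normal(0.0, 0.01, 1000)
print(vertical_agreement_rms_arcmin(left, right))  # well under 2 arcmin
```

A result much larger than the roughly 2 arcmin agreement cited above would, per the text, point to measurement error in the tracker rather than genuine eye movement.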

Applications


A wide variety of disciplines use eye tracking techniques, including cognitive science, psychology (notably psycholinguistics and the visual world paradigm), human-computer interaction (HCI), marketing research and medical research (neurological diagnosis). Specific applications include the tracking of eye movement in language reading, music reading, human activity recognition, the perception of advertising, and the playing of sport.[32] Uses include:

Cognitive Studies
Medical Research
Laser refractive surgery
Human Factors
Computer Usability
Translation Process Research
Vehicle Simulators
In-vehicle Research
Training Simulators
Fatigue Detection
Virtual Reality
Adult Research
Infant Research
Adolescent Research
Geriatric Research
Primate Research
Sports Training
fMRI / MEG / EEG
Commercial eye tracking (web usability, advertising, marketing, automotive, etc.)
Finding good clues
Communication systems for the disabled
Improved image and video communications
Product development
Employee training
Computer Science: Activity Recognition[33][34][35]

Commercial applications
In recent years, the increased sophistication and accessibility of eye tracking technologies have generated a great deal of interest in the commercial sector. Applications include web usability, advertising, sponsorship, package design and automotive engineering. In general, commercial eye tracking studies function by presenting a target stimulus to a sample of consumers while an eye tracker is used to record the activity of the eye. Examples of target stimuli may include websites, television programs, sporting events, films, commercials, magazines, newspapers, packages, shelf displays, consumer systems (ATMs, checkout systems, kiosks), and software. The resulting data can be statistically analyzed and graphically rendered to provide evidence of specific visual patterns. By examining fixations, saccades, pupil dilation, blinks and a variety of other behaviors, researchers can determine a great deal about the effectiveness of a given medium or product. While some companies complete this type of research internally, there are many private companies that offer eye tracking services and analysis.

The most prominent field of commercial eye tracking research is web usability. While traditional usability techniques are often quite powerful in providing information on clicking and scrolling patterns, eye tracking offers the ability to analyze user interaction between the clicks and how much time a user spends between clicks. This provides valuable insight into which features are the most eye-catching, which features cause confusion and which ones are ignored altogether. Specifically, eye tracking can be used to assess search efficiency, branding, online advertisements, navigation usability, overall design and many other site components. Analyses may target a prototype or competitor site in addition to the main client site.

Eye tracking is commonly used in a variety of different advertising media. Commercials, print ads, online ads and sponsored programs are all conducive to analysis with current eye tracking technology. Analyses focus on visibility of a target product or logo in the context of a magazine, newspaper, website, or televised event. This allows researchers to assess in great detail how often a sample of consumers fixates on the target logo, product or ad. In this way, an advertiser can quantify the success of a given campaign in terms of actual visual attention.

Eye tracking provides package designers with the opportunity to examine the visual behavior of a consumer while interacting with a target package. This may be used to analyze distinctiveness, attractiveness and the tendency of the package to be chosen for purchase. Eye tracking is often utilized while the target product is in the prototype stage. Prototypes are tested against each other and against competitors to examine which specific elements are associated with high visibility and appeal.

One of the most promising applications of eye tracking research is in the field of automotive design. Research is currently underway to integrate eye tracking cameras into automobiles. The goal of this endeavor is to provide the vehicle with the capacity to assess in real time the visual behavior of the driver. The National Highway Traffic Safety Administration (NHTSA) estimates that drowsiness is the primary causal factor in 100,000 police-reported accidents per year. Another NHTSA study suggests that 80% of collisions occur within three seconds of a distraction.
By equipping automobiles with the ability to monitor drowsiness, inattention and cognitive engagement, driving safety could be dramatically enhanced. Lexus claims to have equipped its LS 460 with the first driver monitor system in 2006, providing a warning if the driver takes his or her eyes off the road.[36] Since 2005, eye tracking has been used in communication systems for disabled persons, allowing the user to speak, send e-mail, browse the Internet and perform other such activities using only their eyes.[37] Eye control works even when the user has involuntary movement as a result of cerebral palsy or other disabilities, and for those who wear glasses or have other physical interference that would limit the effectiveness of older eye control systems. Eye tracking has also seen limited use in autofocus still camera equipment, where users can focus on a subject simply by looking at it through the viewfinder.
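One measure widely used in drowsiness-monitoring research is PERCLOS: the proportion of time, over an observation window, that the eyes are mostly (conventionally at least 80%) closed. A minimal sketch of the computation, assuming a per-frame eye-openness signal in [0, 1] as produced by some upstream eyelid tracker; the signal format and the 20% openness cutoff are illustrative assumptions, not taken from any particular production system.

```python
# Illustrative PERCLOS computation for a drowsiness monitor.
# `openness` is an assumed per-frame signal: 1.0 = eye fully open,
# 0.0 = fully closed. A frame counts as "closed" when openness is
# 20% or less, i.e. the eye is at least 80% closed.

def perclos(openness, closed_threshold=0.2):
    """Return the fraction of frames spent in the closed state."""
    if not openness:
        return 0.0
    closed = sum(1 for v in openness if v <= closed_threshold)
    return closed / len(openness)
```

A monitor would evaluate this over a sliding window (often on the order of a minute) and raise a warning when the value exceeds some alertness threshold.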


[1] Reported in Huey 1908/1968.
[2] Huey, Edmund. The Psychology and Pedagogy of Reading (Reprint). MIT Press, 1968 (originally published 1908).
[3] Buswell (1922, 1937)
[4] Buswell (1935)
[5] Yarbus 1967
[6] Yarbus 1967, p. 190
[7] Yarbus 1967, p. 194
[8] Yarbus 1967, p. 191
[9] Yarbus 1967, p. 193
[10] Hunziker, H. W. (1970). Visuelle Informationsaufnahme und Intelligenz: Eine Untersuchung über die Augenfixationen beim Problemlösen. Schweizerische Zeitschrift für Psychologie und ihre Anwendungen, 29, Nr. 1/2 (English abstract: http://www.learning-systems.ch/multimedia/forsch1e.htm)
[11] http://www.learning-systems.ch/multimedia/eye%20movements%20problem%20solving.swf
[12] http://www.learning-systems.ch/multimedia/forsch1e.htm
[13] Rayner (1978)
[14] Just and Carpenter (1980)
[15] Posner (1980)
[16] Wright & Ward (2008)
[17] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&rep=rep1&type=pdf
[18] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&rep=rep1&type=pdf; http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.4048&rep=rep1&type=pdf

[19] http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.100.445&rep=rep1&type=pdf; http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.17.4048&rep=rep1&type=pdf; http://delivery.acm.org/10.1145/510000/507082/p51-goldberg.pdf?ip=129.2.169.18&CFID=43533044&CFTOKEN=20459728&__acm__=1316470102_c2364e52a2ef97072f959144162018c2
[20] http://www.mmi-interaktiv.de/uploads/media/MMI-Interaktiv0303_SchiesslDudaThoelkeFischer.pdf
[21] Hoffman 1998
[22] Deubel and Schneider 1996 (http://www.sciencedirect.com/science?_ob=ArticleURL&_udi=B6T0W-3VXNHBP-10&_user=952938&_coverDate=06/30/1996&_rdoc=1&_fmt=&_orig=search&_sort=d&view=c&_acct=C000049220&_version=1&_urlVersion=0&_userid=952938&md5=4f7fbf4f015fde59aa9a39b30154e7f3)
[23] Holsanova 2007
[24] Crane, H.D.; Steele, C.M. (1985). "Generation-V dual-Purkinje-image eyetracker". Applied Optics 24 (4): 527–537. doi:10.1364/AO.24.000527.
[25] Elbert, T.; Lutzenberger, W.; Rockstroh, B.; Birbaumer, N. (1985). Removal of ocular artifacts from the EEG: A biophysical approach to the EOG. Electroencephalogr Clin Neurophysiol 60, 455–463.
[26] Keren, A.S.; Yuval-Greenberg, S.; Deouell, L.Y. (2010). Saccadic spike potentials in gamma-band EEG: Characterization, detection and suppression. Neuroimage 49, 2248–2263.
[27] Bulling, A.; Roggen, D.; Tröster, G. (2009). "Wearable EOG goggles: Seamless sensing and context-awareness in everyday environments". Journal of Ambient Intelligence and Smart Environments (JAISE) 1 (2): 157–171. (http://dx.doi.org/10.3233/AIS-2009-0020)
[28] Witzner Hansen, Dan; Qiang Ji (March 2010). "In the Eye of the Beholder: A Survey of Models for Eyes and Gaze" (http://dl.acm.org/citation.cfm?id=1729561). IEEE Trans. Pattern Anal. Mach. Intell. 32 (3): 478–500.
[29] Cohen, A. S. (1983). Informationsaufnahme beim Befahren von Kurven. Psychologie für die Praxis 2/83, Bulletin der Schweizerischen Stiftung für Angewandte Psychologie.
[30] Pictures from: Hans-Werner Hunziker (2006). Im Auge des Lesers: foveale und periphere Wahrnehmung: vom Buchstabieren zur Lesefreude [In the eye of the reader: foveal and peripheral perception, from letter recognition to the joy of reading]. Transmedia Stäubli Verlag, Zürich. ISBN 978-3-7266-0068-6.
[31] Itoh, N.; Fukuda, T. (2002). Comparative study of eye movement in extent of central and peripheral vision and use by young and elderly walkers. Percept Mot Skills 94 (3 Pt 2): 1283–91.
[32] See, e.g., newspaper reading studies (http://www.sol.lu.se/humlab/research/humlabResearch.html?fileName=et_sv.html&language=EN).
[33] Bulling, A. et al.: Robust Recognition of Reading Activity in Transit Using Wearable Electrooculography (http://dx.doi.org/10.1007/978-3-540-79576-6_2). Proc. of the 6th International Conference on Pervasive Computing (Pervasive 2008), pp. 19–37, Sydney, Australia, May 2008.
[34] Bulling, A. et al.: Eye Movement Analysis for Activity Recognition (http://dx.doi.org/10.1145/1620545.1620552). Proc. of the 11th International Conference on Ubiquitous Computing (UbiComp 2009), pp. 41–50, Orlando, United States, September 2009.
[35] Bulling, A. et al.: Eye Movement Analysis for Activity Recognition Using Electrooculography (http://doi.ieeecomputersociety.org/10.1109/TPAMI.2010.86). IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI).
[36] "LS460 achieves a world-first in preventative safety" (http://www.newcarnet.co.uk/Lexus_news.html?id=5787). 2006-08-30. Retrieved 2007-04-08.
[37] Michelle Cometa (February 23, 2009). "Student learns to control computer with a blink of an eye" (http://www.rit.edu/news/story.php?id=46626). Rochester Institute of Technology. Retrieved August 20, 2011.


Adler, F.H. & Fliegelman (1934). Influence of fixation on the visual acuity. Arch. Ophthalmology 12, 475.
Buswell, G.T. (1922). Fundamental reading habits: A study of their development. Chicago, IL: University of Chicago Press.
Buswell, G.T. (1935). How People Look at Pictures. Chicago: University of Chicago Press.
Buswell, G.T. (1937). How adults read. Chicago, IL: University of Chicago Press.
Carpenter, Roger H.S. (1988). Movements of the Eyes (2nd ed.). Pion Ltd, London. ISBN 0-85086-109-8.
Cornsweet, T.N.; Crane, H.D. (1973). Accurate two-dimensional eye tracker using first and fourth Purkinje images. J Opt Soc Am 63, 921–8.
Cornsweet, T.N. (1958). New technique for the measurement of small eye movements. JOSA 48, 808–811.
Deubel, H. & Schneider, W.X. (1996). Saccade target selection and object recognition: Evidence for a common attentional mechanism. Vision Research 36, 1827–1837.

Duchowski, A.T. (2002). "A Breadth-First Survey of Eye Tracking Applications". Behavior Research Methods, Instruments, & Computers (BRMIC) 34 (4), November 2002, pp. 455–470.
Eizenman, M.; Hallett, P.E.; Frecker, R.C. (1985). Power spectra for ocular drift and tremor. Vision Res