A Conceptual Framework for Designing ‘Smart Applications’ for the Emerging Ubiquitous Micromedia Environment.
Martin Lindner, Research Studio eLearning Environments
Abstract: Institutionalized R&D dealing with ‘smart ICT’ has a severe problem, since the complete ecosystem is rapidly changing with the converging Web 2.0: How can it stay one step ahead, considering the incredibly fast ‘wild’ innovation cycle? Where are the longer-term needs that can be addressed by R&D in longer-term projects? This paper suggests that one such need may be supporting the change management required to close the gap between the rapid evolution of web-based and mobile micromedia and the rather static patterns of behavior of mainstream users and organisations. The paper attempts a systematic description of the new media environment, building on a web-based discourse that has itself been too fast and distributed for peer-reviewed papers. It sketches out a number of concepts, partly theoretical, partly phenomenological, that may contribute to a more complete understanding of the emerging digital ‘micromedia environments’ for which useful ‘smart ICT’ has to be designed. A fundamental change from software development and ‘usability’ approaches towards a holistic approach of “User Experience Design” is diagnosed. Some hints for the future design of microinformation and microlearning applications are derived.
1 Introduction: Smart Technologies, Smart Environments
1.1 Smart Environments: Processing not data but meanings
It is the mission of the ARC Research Studios Austria (RSA) to undertake cutting-edge, market-related research and development in the field of smart Information and Communication Technologies. Lately, the “Web 2.0” (O’Reilly 2005)1 wave of innovation has been having a profound impact on the design of applications in that field, especially where these are part of the daily digital media environment of human users. This impact can be characterized as a change from the traditional development of software ‘tools’ to the design of processes and experiences. The notion of ‘Smart ICT’ is quite vague, of course. Three basic layers or dimensions of ‘smart environments’ may be distinguished, depending on the semiotic level at which interactions take place: (a) applications producing ‘intelligent behavior’ without
conscious human interaction, like ‘smart clothing’ or ‘smart washing machines’; (b) the world of ‘pervasive computing’ as envisioned by Weiser (1991, 1994), mainly trying to create possibilities for more casual, but still conscious human interactions beyond the keyboard/mouse/screen scenario; (c) a level augmenting the world through additional digital semiotic layers, where interactions are de facto based on written language and graphical sign systems (audio input/output plays a much less important role).2 For present purposes, the term is used in the last sense: for applications and services acting on a ‘secondary level’ of information processing, dealing not just with data, but with complex meanings. Those applications can represent ‘smart environments’ in themselves or, although restricted to very few functions, act as a modular, integral part of a bigger networked ‘smart environment’ in which again, as a whole, meanings are processed. This is the case with many lightweight applications (‘widgets’) that contribute to the overall experience of the “Web 2.0”. It was clear from the start that the desktop interface was not capable of really exploiting the world-building possibilities of digital media. “The World is Not a Desktop”, said Weiser (1994). While he set up his project to re-build the physical world into a multiple computing interface beyond the restrictions of the screen, another visionary called for a “lifestreams” interface (Gelernter 2000). The Web 2.0, the term used here to include the phone-based Mobile Web 2.0 (Jaokar/Fish 2006), is a world made of signs – in the first place, written-signs-on-screens. It is in line with the visions of Weiser and Gelernter, but in an odd way: (relatively) low-tech, messy, emergent, driven not by macro-concepts, but by the unpredictable uses of people. As a second world, it is downright cultural, not aiming at an artificial ‘naturalness’ created/augmented by technology.
So essentially the convergent Web 2.0 is a digital media environment, made from symbols, information, and communications. It is semantic, but not in the sense of a consistent, machine-readable Semantic Web. (For a vision of a future semantic “Web 3.0” that builds on the microcontent-based Web 2.0, but involves back-end “machine-facilitated understanding of information”, see Spivack 2007.3) In the web-based digital environment that has evolved over the last five years or so, smart technologies are not located at the back-end, as in Pervasive Computing and in the Semantic Web. They are front-end, working at the human-computer interface, not themselves creating meanings from data, but rather augmenting and further processing given meanings: by filtering, re-structuring, annotating, syndicating, aggregating, and displaying them in new forms and ways. At the same time, Web 2.0 applications are more and more designed specifically to prompt users to supply and interconnect these meanings (‘user-generated content’) in a form appropriate to enable further processing and networking.
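The front-end operations just named – filtering, aggregating, re-structuring – can be illustrated with a minimal sketch. All feed names, fields, and items below are invented for illustration; a real Web 2.0 service would of course work on live feeds in formats such as RSS or Atom.

```python
from datetime import datetime

# Hypothetical microcontent items, as a lightweight aggregator might
# receive them from two different feeds (all data invented).
feed_a = [
    {"title": "Ambient findability", "tags": ["design"], "date": "2007-01-12"},
    {"title": "Folksonomy notes", "tags": ["tagging"], "date": "2007-02-03"},
]
feed_b = [
    {"title": "Microformats primer", "tags": ["design", "formats"], "date": "2007-01-20"},
]

def aggregate(feeds, tag):
    """Merge several feeds, keep only items carrying `tag`, newest first."""
    items = [item for feed in feeds for item in feed if tag in item["tags"]]
    return sorted(items,
                  key=lambda i: datetime.strptime(i["date"], "%Y-%m-%d"),
                  reverse=True)

for item in aggregate([feed_a, feed_b], "design"):
    print(item["date"], item["title"])
```

The point of the sketch is that the service never creates meaning itself: it only filters and re-orders meanings (titles, tags, dates) that human authors have supplied.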
2 In a way this is the digital version of “Literary Technology”, which in Western Civilization is “continuously surrounding us at many scales” (Weiser 1994). 3 According to Spivack: “Web 3.0, a phrase coined by John Markoff of the New York Times in 2006, refers to a supposed third generation of Internet-based services that collectively comprise what might be called 'the intelligent Web'—such as those using semantic web, microformats, natural language search, data-mining, machine learning, recommendation agents, and artificial intelligence technologies—which emphasize machine-facilitated understanding of information in order to provide a more productive and intuitive user experience."
All these characteristics might not look too spectacular in themselves, but taken together, they change the whole scene of ICT and digital media. ‘Media’, as opposed to ‘mediums’, is an immersive space. In such a context, software cannot be developed anymore to act as an isolated ‘tool’ or ‘engine’ handling a special task. It has to be designed as an integral part of the ‘Digital Lifestyle’, respectively the ‘Digital Workstyle’ (the boundary is blurring anyway).
1.2 R&D in a Web 2.0 Environment
In the new ecosystem, adaptation and mutation happen within highly accelerated innovation cycles. This poses an important problem for R&D, as the traditional process of developing and building innovative software products in highly organized long-term projects is less and less viable. As Tim O’Reilly (2005) famously noted: “Users must be treated as co-developers, in a reflection of open source development practices (even if the software in question is unlikely to be released under an open source license.) The open source dictum, ‘release early and release often’ in fact has morphed into an even more radical position, ‘the perpetual beta,’ in which the product is developed in the open, with new features slipstreamed in on a monthly, weekly, or even daily basis.” In typical academic or semi-academic R&D, it surely takes too much time to develop a complex software application. When such a product is ready (take for example a new e-learning platform) it tends to be obsolete already, as the digital environment has changed dramatically in the meantime. Even worse, these changes cannot be anticipated. Although there is surely much room for improvement there, this is only in part a matter of better project planning or project management, or even a turn of philosophy toward “Rapid Prototyping” or “Iterative Reframing”. At least when it comes to ICT that has to integrate into the emerging web-based ecosystem, this is a structural problem. Even while being at the forefront of funded R&D, the Research Studios will have a very hard time keeping pace with the overheated innovation cycles out there in the ‘wild wild Web’, based on Open Source cultures and the creativity of small teams unhindered by organizational overheads.
Therefore, in the field of e-learning, the strategy of the ARC Research Studio eLearning Environments should be to concentrate on a sort of Change Management to bridge a dramatically increasing gap between the Web and the restricted and inflexible digital worlds of mainstream users. People and organizations are becoming increasingly out-of-cycle with the newly emerging web-based ‘smart environments’. But in order to survive in a competitive global environment, fast adaptation is crucial. To facilitate this change, a special sort of ‘smart technologies’ is needed. Here may be a mission for R&D: to learn from the proliferation of new web applications and experiences, to analyze them, and to translate them into coherent concepts acting as intermediaries between the organizational culture and the new digital media culture. For successfully developing such adaptive solutions, it is most important to understand the new relevance of the design of processes and interfaces that came with the “Web 2.0”. This paper is an attempt to muster some ideas, concepts and metaphors (as ‘tools to think with’) that have surfaced in the highly fragmented and distributed Web 2.0 design discourse, and to build them into a (still provisional) conceptual framework for further discussion.
2 User Experience Design for Microcontent-based Environments
2.1 ‘Stop developing software – start designing experiences.’
“Working in a field of constant change, information technology designers habitually deal with evolving practices, fluid conventions, and unpredictable uses.” Brown/Duguid (1994) formulated this caveat over a decade ago, but in the permanent flux that is ICT, the Web 2.04 is not just marking one innovation step among many others. It is signalling a real paradigm change, comparable only with the evolutionary step towards the personal “microcomputer” (around 1975), with the change from the DOS PC interface to the desktop-and-windows metaphor (around 1985), or with the moment when the PC (plus some occasional mail traffic) became also, and for some time separately, a browser-driven web-station (around 1995). All these are primarily cultural evolutions and not so much a matter of technological innovation in the narrow sense. The windows/desktop/mouse combination of the “microcomputer” surely was smart technology, but even in 1978 it was not really ‘hi-tech’. Neither was the GUI of the Macintosh, or the combination of HTTP, HTML and the first Netscape browser that formed “the Web”. ICT innovation in the field of human-computer interaction, as opposed to deep down in the machine, cannot easily be modelled after the “Higher Faster Farther” innovation processes of the 20th century that applied to aeroplanes or mainframe computers. It seems to move instead towards the more lightweight, more distributed, more networked, more dynamic, more feedback-driven. It is as much a matter of design as of technological innovation. And this relates not only to the design of new-generation devices, but also to the design of software, the most prominent examples for this being Apple (iMac, iPod/iTunes, iPhone) and Google (with its permanent adaptation and optimization of Web 2.0 concepts and technologies).
This is what O’Reilly (2005) meant when characterizing the “Web 2.0 as a platform”, being based on cross-platform “lightweight applications” and iterative “lightweight programming” of web services. This poses a fundamental challenge to the development of new ‘smart’ ICT applications. Formerly one started with the core functionality of the software before, in a second step, wrapping it up in some ‘usable interface’. This may still work for traditional software tools like AutoCAD or Photoshop. But in the open and dynamic digital media environment of today, the core of successful applications will have to be built by User Experience Design, with programming coming second: “Stop designing products. Start designing experiences!”, claims leading Web 2.0 design guru Peter Merholz,5 again making the obligatory reference to iPod/iTunes. The catchphrase “Processes are not programs” makes a similar point. But professional software projects, not only in R&D, seem to be far off this mark yet, as Buxton (2007, p. 73) notes: “[…] one of the most significant reasons for the failure of organizations to develop new software products in-house is the absence of anything that a design professional would recognize as an explicit design process.”
4 The term “Web 2.0“ is used throughout this paper to include also the “Mobile Web 2.0” (Jaokar/Fish 2006), mainly as a descriptive term for the changes in the Web following the boom of blogs, tagging, feeds, and user-driven social and semantic software in general. For a definition that stood the test of time, a thorough reading of O’Reilly (2005) is still worthwhile. 5 Peter Merholz (www.peterme.com) is an early blogger who founded his own renowned company, Adaptive Path.
2.2 Subject Positions and Software Postures
Like socio-cultural discourses, different technologies and different media incorporate specific “subject positions” (Foucault 1982), which limit the field of possibilities for experiences and activities and, mostly non-consciously, superimpose a specific frame of mind and pattern of thinking onto each individual who is using a technology or interacting with certain media. This goes for software, digital devices and digital media. The look & feel of the interface, the possibilities for interaction and experiences, the structure and the flow as well as the semantics of the contents – all this defines something like a ‘place’ an individual has to step into and accommodate to. The overall subject position for a concrete digital media usage scenario can itself be described as the effect of an overlapping of partial subject positions. These are created by particular pieces of software (like MS Word), by the wider ‘software environment’ they are embedded in (the OS), and by the functionality and interface of a specific device (like a desktop PC, a laptop, a PDA, a smartphone, a gamebox …). And each of these subpositions, as well as the general one, is a complex effect of specific technological, cultural and social factors incorporated in the design of the media and in the patterns of media use. In a PC/desktop context, the common concept of the “user” is a quite limited subject position, mostly related to what the designers Cooper/Reimann (2003) call “sovereign posture” programs (ibid., 103 ff.): best used full-screen; monopolizing the user’s attention for long periods of time; offering a large set of related functions and features; users tend to keep them up and running continuously; dominating a user’s workflow as his primary tool, even if other programs are used for support tasks.
This is opposed to supplementing “transient posture” programs, that “come and go, presenting a single, high-relief function” (ibid., 106), and “daemonic posture” programs that are completely hidden in the background (ibid., 111). Popular examples of sovereign posture programs are specialized desktop applications like Excel, PowerPoint, Photoshop or AutoCAD, as well as “Web Access” applications built to mimic a main desktop app (e.g. MS Project). At the level of Web-based information access, the equivalent is the “visiting” of “portal sites”, like for example looking up books at the Library of Congress website (http://catalog.loc.gov). Now, as the tool paradigm of the desktop terminal is finally giving way to the ‘information space’ paradigm of the Web, the familiar system of sovereign, transient, and daemonic program postures is changing too. A piece of software is not so much defining its own posture anymore, but acting as a part of a wider ecosystem. In the perspective of the users, they are not interacting with sovereign applications, not even with the
browser, which rapidly becomes a sort of environment itself, but with a meta-software called “the Web”.
2.3 ‘Point of Presence’
“I don’t just ‘use’ the Internet, so why am I a user?” The complaint came from Robert Scoble, famous blogger and then Web 2.0 evangelist at Microsoft.6 Having the full range of modular Web 2.0 applications at command simultaneously, the “Web subject” finally feels like a digital being, living in the center of an immersive digital lifeworld. When the concepts of “usability” and “human-centered design” were introduced into the world of IT by Jakob Nielsen and Don Norman, over a decade ago, this was an improvement compared with former program development done by software engineers. But still the side of the “user” had mainly been restricted to functionality, limited tasks and measurable outcomes, while the “human” factors (“needs”, “satisfaction”) remained vague. In the digital media environment, the limited concepts of the “software user” and “usability” are rapidly losing relevance, as they apply to a world of tools, not to the world of digital media. Designing for the new subject position is not the same as “human-centered design”. It is just not about taking the human factors better into account, ergonomically and psychologically. Instead, it has to be based on an analysis of the systematic position that a specific media environment and a specific piece of software are carving out for individuals to step into. This characteristic general subject position of Web 2.0 can be characterized, more or less metaphorically, as a “Point of Presence” (PoP). In telecommunications, a PoP is the physical or virtual place where a connection is made available to a user who is dialing up into the network via the local access line. Idehen (2006) later used the term to define “Web 2.0”: A phase in the evolution of web usage patterns that emphasizes Web Services based interaction between ‘Web Users’ and ‘Points of Web Presence’ [exposed APIs] over traditional ‘Web Users’ and ‘Web Sites’ based interaction.
Basically, a transition from visual site interaction to presence-based interaction. But this notion of “Points of Web Presence” can also be turned around. Human minds are part of the Web 2.0 system, which relies largely on “user-generated content” and the profiles of user interactions to create its specific dynamics of circulation and personalization. From this perspective, the Web service is the “user”, and the mind of a human individual is the entry point to a mental and semantic network that the service needs to connect to in order to create value. As soon as the individual is connecting to the Web, a new instance of the ‘Point of Presence’ is created for both sides. This also means that the Web subject, as soon as it connects to the Web, has the characteristic experience of a fresh new start into an open space of possibilities. Exaggerating for the purposes of illustration, the biographical and the professional identity seem in a way to be erased. At the beginning there is always just presence, the
6 Robert Scoble, I don’t just “use” the Internet, so why am I a user? Posted in the blog ‘Scobleizer’ (11/10/2005). http://scobleizer.com/2005/11/10/i-dont-use-the-internet-so-why-am-i-a-user/, accessed 08/01/2007.
‘blank page of the mind’, which is still best symbolized by the minimalistic Web 2.0 design of the Google Search start page. The famous old Microsoft Internet tagline “Where do you want to go today?” also contains the question “Who do you want to become today?”, and the implied answers are “anywhere” and “anybody”. In a way, in each new Web session a persona is being built from scratch, not consciously, but as a by-effect of a flow of clicks that is building up an individual story.7 Although one still usually has the PoP experience when sitting at some sort of desk, it is related to the life-form of ‘digital nomads’. The PC is not a desktop, it is a laptop, being closer to McLuhan’s bodily “extension of man”. Mobility from that perspective is only secondarily a topographic experience: the possibility, both medial and mental, to be anybody anywhere anytime opens up the possibility to use all the “non-spaces” and times-in-between typical for supermodern life (Augé 1994). As a consequence of these nearly anthropological changes, Web 2.0 applications have to be designed for the new “Point of Presence” subject position from the beginning. There is no fundamental difference here between applications designed for entertainment or work, for private or professional contexts. It is just the built-in structural position of Web 2.0-as-media, as opposed to the structural position of the user of a Personal Desktop Computer.8 Especially, it is not to be mistaken for the ‘cockpit’ position of tool software, like e.g. the Photoshop interface. A cockpit position is user-centered, but does not create the full/void position of the Web 2.0 subject. A personalized MySomething website is a sort of cockpit too, despite being adapted to an individual profile.9 The PoP subject position is something like an anthropological background for the new Digital Lifestyle, helping to understand some concrete concepts and consequences for design that will be sketched out in the following paragraphs.
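Idehen’s contrast between visiting “web sites” and interacting with “Points of Web Presence” (exposed APIs) can be sketched minimally. The resource, field names and payloads below are all invented for illustration; they stand in for whatever a real service would expose.

```python
import json

# A minimal sketch of the contrast between a 'web site' and a
# 'Point of Web Presence' (an exposed API). All data is invented.
entry = {
    "title": "Hello, presence",
    "body": "A meme-sized microchunk.",
    "permalink": "http://example.org/2007/1",
}

def render_site(e):
    """Web-site style: one fixed, human-readable page per visit."""
    return f"<html><body><h1>{e['title']}</h1><p>{e['body']}</p></body></html>"

def expose_api(e):
    """PoP style: machine-readable microcontent that any other service
    (or user agent) can aggregate, filter, and remix."""
    return json.dumps(e)

page = render_site(entry)   # consumed by one visitor, in one layout
data = expose_api(entry)    # re-usable by an open-ended number of services
```

The same microchunk thus exists in two modes: as a page addressed to a visiting human, and as a payload addressed to other points of presence in the network.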
To put it in headlines: Web 2.0 applications will have to be designed
– for micromedia technologies, devices and circulation systems optimized for transmitting and processing content in the form of meme-sized microcontent chunks;
– for dramatically different patterns of attention, developing along with the simultaneous use of digital information sources and applications;
– for new, complex structures of the digital environment, enhanced by peripheries and background, allowing for different kinds of focus and ‘peripheral view’;
– in the form of interfaces for modular lightweight micro-applications that combine perceived simplicity, flow-quality and gesture-driven directness with the enriching seamfulness that abstract ‘sign layers’ add to the lifeworld.
7 This is also a structural reason for the breakdown of the borders between private and professional use of the Web, as well as between the respectable subject and the seeker of kicks who is always just a click away from porn, poker or strange YouTube videos. 8 It is interesting to note that here the world of the laptop and the world of the mobile phone are already converging, even though the devices have not yet merged into the convergent “ubiquitous media” scenario that is to be expected for the near future. 9 The interesting borderline case would be the social software site Facebook, since it was opened up for all sorts of Web 2.0 plug-ins in 2007.
3 Micromedia and Microcontent
The new digital media environments are micromedia environments. ‘Media’ is here to be understood in a threefold sense: (a) as media technologies: technological systems for (mass) sign transmission and circulation; (b) as media content: content formatted for transmission and circulation; (c) as media space: the immersive environment created by media in the sense of (a) and (b). There seem to be several independent tendencies towards micromedia and microcontent. As McLuhan (2001) noted, electric media (telegraphy, telephony) changed the press from the start, replacing the long written essays of printed journals with the highly fragmented “mosaic” of news and ads. Over the long range, a similar shift from standalone, elaborate pieces towards a temporal ‘mosaic’ of small units (clips, news, pop songs …) can be observed in radio and TV. Since digital technology transformed the “media” in the mid-90s, two parallel trends toward an ever more rapid circulation of ever smaller pieces of content have shown up: First, with small-sized handheld devices, especially phones-with-screens, we have seen a remarkable comeback of media technologies that are characterized by “[relatively] low resolution, low fidelity, and slow speeds” (Manovich 2000), as opposed to broadband and multimedia technologies that aim toward ‘more’ (“more resolution, better color, better visual fidelity, more bandwidth, more immersion”). And Manovich prophesied with respect to networked cell phones that “minimalist media or micro-media” would “not only successfully compete with macro-media but may even overtake it in popularity” (ibid.). This has to do with the different subject position these personal media open up for individuals. Second, even on the larger screens of the PC, content now tends to become microcontent.
After Google had shredded the macro-content of the document-/page-based Web 1.0 into ‘small pieces loosely joined’ (Weinberger 2002), the Web 2.0 now consists mainly of clouds of small content clips. They are ‘small’ with respect to the space they need on the screen and in the system’s memory, and with respect to the time their processing consumes within the computational system (download time) and the human mind (attention span). Avant-gardists of the Web 2.0 noted this tendency as early as 2002, leading to the still valuable definition by Dash (2003): “Microcontent is information published in short form, with its length dictated by the constraint of a single main topic and by the physical and technical limitations of the software and devices that we use to view digital content today. We've discovered in the last few years that navigating the web in meme-sized chunks is the natural idiom of the Internet.” Dash’s further definition can be systematically boiled down to three points which apply to the level of machines as well as to the level of human users. (See Lindner (2006b) for more details and references.) Microcontent is
– self-contained: the smallest unit that can stand for itself in computational, mental and socio-cultural contexts;
– individually addressable, for computers (through ‘permalinks’) and humans (through their rhetorical ‘meme’ quality);
– appropriately formatted for easy consumption and further re-use (like e.g. ‘microformats’ and the convention of ‘blog posts’, both oscillating between cultural form and machine-readable format).
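These three criteria can be restated, tentatively, as a data structure. The class and field names below are illustrative assumptions only, not an established schema such as the actual ‘microformats’ conventions just mentioned.

```python
import json
from dataclasses import dataclass, asdict

# Dash's three criteria mapped onto object fields (names illustrative).
@dataclass
class Microcontent:
    permalink: str          # individually addressable, for machines and humans
    title: str              # the rhetorical, meme-sized 'handle'
    body: str               # self-contained: must stand on its own
    format: str = "hentry"  # appropriately formatted for consumption and re-use

post = Microcontent(
    permalink="http://example.org/2007/01/meme-sized",
    title="Navigating in meme-sized chunks",
    body="A single main topic, short enough for one unit of attention.",
)

# 'Appropriate formatting' also means easy machine re-use, e.g. as JSON:
payload = json.dumps(asdict(post))
```

The serialization step is the crucial one: it is what lets a chunk circulate beyond the page on which it was first published.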
This matches well with the definition the Web economist Umair Haque has given for “micromedia”. He concentrates on the radical impact of new microcontent-based technologies, applications and services on traditional mass media, especially the news and the music industry. According to him, in the context of an emerging highly dynamic, open and fragmented digital “attention economy”, “micromedia” (plural) are digital atomized media “that can be consumed in unbundled microchunks and aggregated and reconstructed in hyperefficient ways” (Haque 2005). This paradigm change is transforming the whole ecosystem of content production, reception and circulation. Whether we like it or not (and there are reasons for both), micromedia and microcontent will not go away. This poses two main challenges: How can human ‘microinformation experiences’ become integrated into the complex context which is formed by the usage of one device (or the simultaneous usage of multiple devices), by the workflow, by the personal flow of tasks (be it professional or private), and finally by the socio-cultural context of the media being used? And what forms of guidance and steering could possibly be designed to feel natural for a micromedia user? To meet these challenges, new patterns of attention and focus will have to be fully understood and considered.
4 Attention, Focus, and the Workplace 2.0
4.1 Continuous Partial Attention
“Continuous partial attention is a post-multitasking adaptive behaviour. Being connected makes us feel alive. ADD [Attention Deficit Disorder] is a dysfunctional variant of continuous partial attention. Continuous partial attention isn't motivated by productivity, it's motivated by being connected.” Thus Linda Stone (2005) lately defined the influential term she herself had coined as early as 1998, while a researcher for Microsoft. She was one of the first to propose a positive perspective on a phenomenon that is still mostly discussed in alarmed accounts of the dangers for productivity posed by “multitasking”, “distraction”, and “life interrupted”.10 Like Bryant (2006) and Brown/Duguid (1999), Stone seems to assume that the experience of “Information Overload” typically occurs where outdated organizational structures and psychological patterns are confronted with a new environment consisting of differently structured (micro-)information. From this perspective, the solution to “information overload” is not less digital information, but more information (David Weinberger11) – but structured and presented in different ways, in the form of microcontent and ‘meta-content’ (Bryant 2006).
10 For a first account of empirical research by the professors David Levy and Gloria Mark see for example Seven (2004).
There is a growing need for (nearly) simultaneous attention to multiple sources of dynamic information – something that formerly was expected only from high-level managers. (So in a way, the inflationary use of the word “manager” in definitions of more ordinary jobs has some reason after all …) The ongoing, quite extensive discussion of attention in the context of digital media cannot be summarized here. In any case, the implicit concept of a ‘subject of attention’ is closely related to ‘Point of Presence’ and ‘subject position’. As a subject of attention, the ‘real person’ appears to be double-natured: as a mind, and as a user. On one hand, a cognitive system with certain limitations for dealing with media-induced information abundance. On the other hand, from the Web 2.0 perspective, the ‘user’ as a source of attention is being used: a kind of human agent employed for the circulation and semantic enrichment of “selfish memes”.12 In a digital (micro-)media usage scenario, attention problems especially occur where the subject constituting itself at the “Point of Presence” (a) is confronted with more traditional roles and concepts of knowledge and information work, and/or (b) is not supported by well-designed adaptive applications and interfaces. Unfortunately, it seems that empirical research has not yet brought up a convincing concept for the basic “unit of attention” (Cavanagh/Alvarez 2005), which could be used as something like the ‘currency’ in the attention economy. But it may well turn out in the end that this approach is too reductionist anyway13 for modeling a complex phenomenon occurring at the intersection of cognition, computing, and the media, with additional socio-cultural undercurrents. Still, the neuropathologists Sohlberg/Mateer (1989) have proposed a quite useful typology for characterizing the attention environment of a typical Information Worker (and in the future, every ‘Web subject’ will have to be one in some respect):
– Focused attention: the ability to respond discretely to specific visual, auditory or tactile stimuli.
– Selective attention: the capacity to maintain a behavioral or cognitive set in the face of distracting or competing stimuli.
– Sustained attention: the ability to maintain a consistent behavioral response during continuous and repetitive activity.
– Alternating attention: the capacity for mental flexibility that allows individuals to shift their focus of attention and move between tasks having different cognitive requirements.
– Divided attention: the ability to respond simultaneously to multiple tasks or multiple task demands.
In the workflow of project-based teams, as well as in the Web 2.0 media environment in which this workflow increasingly has to take place, divided and alternating attention cease to be exceptions and tend to become normal behavior, leading to “attention stress”.
“The cure to information overload is more information: The power of tags shows that the way to manage information overload is more information.” David Weinberger, entry in “Joho the Blog”, 05/24/2005. URL: http://www.hyperorg.com/blogger/mtarchive/004037.html
12 The concept of “memes” was introduced, playfully and provocatively, by Richard Dawkins in his book “The Selfish Gene” (1976). It has since been quite popular as a suggestive metaphor for phenomena of semantic emergence in the Web.
13 Actually, it seems that Scientology has developed its own theory of “units of attention” …
The challenge for micromedia design is then to help manage sustained and alternating attention, both within a certain software environment (like the Windows OS, the smartphone OS, or even a tabbed browser) and in the wider scenario of media usage (like the office or the backpack of the mobile worker), including different media and different devices with their spatial and socio-cultural relations.

4.2 Focus
Four levels may be distinguished for modeling the abstract “Continuous Partial Attention” environment that is unfolding around a given digital Point of Presence. The tendency is that digital media are modeling a richly structured environment, across applications, platforms and devices, that is comparable to a multidimensional real-world scenario (say: the office). The different levels correspond to different grades of focus:

Main focus: There is still one object and one application in the main focus at a time: usually some kind of text. What has changed is the character and granularity of the object. When attention and focus are increasingly divided and alternating, the main focus becomes restricted to objects that can be grasped within one ‘unit of attention’. These units can be longer or shorter, depending on the ‘flow’ of the situation (see below), but typically former macro-content becomes fragmented as much by the new attention patterns as it has already been shredded by Google search results or blog coverage. So the greater ‘object’ in main focus is losing its clear boundary too, more and more resembling a bundle of ‘small pieces loosely joined’. It is distinguished from the other microcontent around it by stronger ‘gravitational forces’, which are effects of personal interest, inherent semantics, and design.

Semi-focus: In a macro-content environment, the semi-focus is reserved for supplementary applications and contents, similar to a dictionary used alongside a book, or an example looked up while following a main thread of argumentation. In the Web 2.0 context, this semi-focused content becomes much more important. It is almost as if the horizon were widening: ‘main focus’ is partly replaced by some new kind of ‘semi-focus’ which brings forward new horizontal and (in the widest sense) visual patterns, while certainly losing vertical ‘depth’ and linear argumentation.
Both at the rhetorical and at the graphic level, microcontent is typically designed for semi-focused attention, for being ‘caught at one glance’.

Peripheral focus: Beyond the sphere of objects being ‘semi-focused’, either simultaneously or alternatingly, there is an even wider sphere for ‘glancing sideways’ from the corner of the eye. The function of peripheral structures is to embed and contextualize the focused contents, but also to ensure quick reactions if necessary: “[W]e keep the top level item in focus and scan the periphery in case something more important emerges.” (Stone 2005). Applications designed for peripheral focus are, e.g., dynamic alert boxes signaling newly incoming e-mails.

Casual focus: A variation of the peripheral focus, which can at any time change into semi-focus, is a more playful casual focus (Tams 2006). This is typical for the media effect of the PC-based Web and mobile phones alike: Because there is always more information ‘out there’, the subject feels provoked to permanently explore this space of opportunities. In fact, the Web 2.0 has even been characterized, among many other things, as the “Casual Web”. Everybody who has done creative work knows that experiencing this space of possibilities is quite important for high-level productivity. It is filled with gaming if appropriate information (photos, gossip, jokes, blog posts, news clips …) is not available. Again, the challenge is to design new applications and interfaces in a way that keeps this space open while making it an integral part of the whole system of productivity. Typical for the new environment is the blurring of borders not only between working and learning, and work and private life, but also between work and play. Well-designed ‘micro-attention’ applications have to take this into account.

Background (non-focus): Generally, every part of the environment with a lesser degree of attention/focus is relatively ‘background’, remaining in latency. But there is also some sort of permanent background which is normally never focused, yet permanently felt, stabilizing or destabilizing the whole situation. Most important here is probably the general feel of ‘openness’ or ‘closedness’ communicated by digital media. Either one is acting in a (potentially) open space (the Web) or in the ‘walled garden’ of an application. Both can make sense in different contexts and for different people. (For many, it is reassuring to use Microsoft Office tools that communicate the stable feeling of a closed system in which ‘every aspect has already been thought of’, in opposition to the “Wild Wild Web”, where everybody is self-responsible and unexpected things might occur at any corner, anytime.)

4.3 Periphery
The evolution of digital media towards micromedia can be described as a change from highly restricted areas of focus (macrocontent objects, sovereign-posture programs) to a kind of environment where semi-focused and peripheral attention play a much more important role. In 1994, John Seely Brown published an important essay on border and periphery as the main challenge for information design at the dawn of the World Wide Web. According to him, the world seen through the computer screen lacks the “peripheral vision” needed to provide the rich context that alone makes information really useful and meaningful for a human user (Brown 1994). This is exactly what the Web 2.0, including new intranets and the converging Mobile Web, seems to be addressing. Digital media are not so much creating a workplace, or a new communication medium, but a lifeworld in itself. They provide social context, via social software of all kinds, as well as semiotic context that fills the gap between main focus and the non-conscious background. Actually, a large part of Continuous Partial Attention is invested in information that makes the subject feel alive, that is, taking part in a sphere of vital circulation: “Continuous partial attention is motivated by a desire not to miss opportunities. We want to ensure our place as a live node on the network, we feel alive when we're connected.” (Stone 2005) This desire, not productivity in the narrow sense14, is the real reason why managers get addicted to mobile e-mail clients like the Blackberry, or why teenagers look into their mobile phone screens in every idle moment. Concepts of ICT that ignore the playfulness of digital media will probably fail in the future.
14 Probably ‘productivity’ should be understood in a very wide sense, including all activities that build structures of some kind over time, be they professional, hobby-related or just private. Interestingly, this is the approach of David Allen’s bestseller “Getting Things Done”, the bible of self-management in a digital media age. In some passages it reads like a theory of microcontent.
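The levels of focus distinguished above (main focus, semi-focus, peripheral, casual, background) can be illustrated by a small routing sketch. All names, fields and thresholds here are hypothetical assumptions made for illustration; no real micromedia client is implied:

```python
from dataclasses import dataclass
from enum import Enum, auto

class Focus(Enum):
    MAIN = auto()        # the one object currently being worked on
    SEMI = auto()        # supplementary content, 'caught at one glance'
    PERIPHERAL = auto()  # alerts and context at the corner of the eye
    CASUAL = auto()      # the playful space of opportunities
    BACKGROUND = auto()  # latent; felt rather than seen

@dataclass
class Microcontent:
    title: str
    relevance: float     # 0..1, the 'gravitational force' toward the user
    is_alert: bool = False
    is_play: bool = False

def route(item: Microcontent) -> Focus:
    """Assign an incoming item to a focus level. The thresholds are
    illustrative assumptions, not empirically derived values."""
    if item.is_alert:
        return Focus.PERIPHERAL
    if item.is_play:
        return Focus.CASUAL
    if item.relevance > 0.8:
        return Focus.MAIN
    if item.relevance > 0.4:
        return Focus.SEMI
    return Focus.BACKGROUND
```

The point of the sketch is only that a well-designed micromedia application would treat the focus level of an item as an explicit design decision rather than an accident of screen layout.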
4.4 Beyond Push/Pull: The Come-to-me Web

The mobile phone has been described as a casual “background device”, making it easy “to pop into the foreground for a brief moment before simply falling into the background once more” (Schick 2005). Applications truly geared to the mobile lifestyle need to take advantage of this background status, says Nokia usability researcher Charlie Schick. The PC, itself by nature a “foreground device” designed for main focus, has begun to model periphery and casualness within the screen: in the form of additional layers and items at the level of the desktop interface (e.g. widgets, e-mail alerts …), at the level of RSS-driven aggregation pages (e.g. iGoogle, Pageflakes, SuprGlu …) and at the level of the browser (tabs, Firefox plug-ins). Designing for these kinds of environments means designing for semi-focused or peripheral attention, and for the intuitive slipping of content from background to foreground and back again. This is what the Research Studio eLearning Environments is attempting with its concept of “Integrated Microlearning” and the “Knowledge Pulse” application.15 The ambient character of these concepts differs from the old idea of “pull vs. push”, which belongs to the first generations of the Web. Back then, there had also been a discussion about dealing with an abundance of information and media that just couldn’t be handled anymore by people searching for some content, as in a library or a catalogue, and then clicking on a link to call it up on their screen (“pull”). Somehow the content had to come to the user/consumer (“push”). In 1997, a big essay in Wired magazine (Kelly/Wolf 1997) proclaimed the end of the “pull” Web and the advent of digital “push media” that are “always on, mobile, customizable”.
An application that was something like a hybrid of an ambient screensaver and the media push of TV had been developed by PointCast, a much-hyped multi-million-dollar start-up that spectacularly collapsed in 1999. This old pull/push opposition belonged to a time when static content was shown on a single-focus screen. It is different from the emerging dimension of the “Come-to-me Web” (Vander Wal 2006), which is a function of the micromedia environment of the Web 2.0, starting with the evolution of weblogs and feeds (since 1999, but gaining real momentum four years later). The Come-to-me Web goes beyond the pull metaphors of ‘wayfinding’ and ‘library search’ and beyond the push metaphor of ‘watching TV’. It is based on attraction and association: “Today's usage is truly focused on the person and how they set their personal information workflow for digital information. The focus is slightly different. Push and pull focused on technology, today the focus is on person and technology is just the conduit, which could (and should) fade into the background.” Because the Web is transforming into a dense network of socio-semantic associations, I experience my activities less as pull/push, but as ‘attracting’ content which has already ‘been there’ in the background, like an “Info Cloud” following me across devices, platforms, and applications (Vander Wal 2003). This is in fact neither push nor pull. It is neither “going to get information”, as in a library, nor is it a message obtrusively “pushed at me”, like a pop-up window or a promotional e-mail. It is much more casual and ambient, more of an extension of my
attention horizon.16 Successful ‘smart applications’ will have to fit into a digital ecosystem that is modeled after information experiences in the ‘Real World’. Interacting with the Web 2.0 (or the Web 2.91 …) will be like walking down the street17 or sitting in a café, floating in information spaces with ever-changing levels of focus, from background and periphery to semi-focus and main focus and back again. This experience can happen in the world outside, as in real cafés, using the mobile phone to access the Web as an additional semantic layer, or just as well in interiors meant for focus and work, which are becoming much more world-like through the new digital media of the Web 2.0. In such an ambient foreground/background environment, a real “push” would be experienced as something very obtrusive: like someone on the street getting in your way and demanding your exclusive attention. This is not just a question of technology, though. To accept the push, the user has to understand the pushed content, be it information, advertising or education, as an external representation of her own needs and desires. But again, in reality this is very rarely the case.
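The difference between push, pull and come-to-me can be made concrete with a toy aggregator sketch. Everything here (class names, the naive overlap score) is a hypothetical illustration of the Come-to-me idea, not a description of any actual Info Cloud implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Item:
    source: str
    tags: set

@dataclass
class InfoCloud:
    """A toy 'come-to-me' aggregator: items are neither searched for
    (pull) nor forced on the user (push); they accumulate quietly in
    the background and are ranked by attraction to the person's
    declared interests."""
    interests: set
    _items: list = field(default_factory=list)

    def arrive(self, item: Item) -> None:
        # Content simply 'is there' in the background; nothing is
        # pushed into the foreground at arrival time.
        self._items.append(item)

    def surface(self, n: int = 3) -> list:
        # What drifts toward the foreground is what overlaps most
        # with the person's interests (a deliberately naive score).
        ranked = sorted(self._items,
                        key=lambda i: len(i.tags & self.interests),
                        reverse=True)
        return ranked[:n]
```

The design choice the sketch makes visible: the user never issues a query (pull), and the system never interrupts (push); content is only ever attracted into semi-focus when the user glances at the surface.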
5 Interface and Interaction Design
User-experience design cannot be limited to the graphical user interface itself, but includes interaction design and information architecture as well. De facto, there are several levels of interfaces clustering around the user: (a) the interface of the device-plus-OS (e.g. a Windows Vista laptop, a Nokia Series 60 smartphone); (b) the interfaces of desktop applications (programs in the old sense, like MS Word); (c) the interfaces of new ‘widget’ clients that tend to present only one type of microcontent, which they get as a Web Service (e.g. a weather widget, or a microblogging client like Twitteroo interacting with a Web service); (d) the browser itself, since it has come to be more than an application for browsing pages, but a sort of operating system for the new webtop (like Firefox with its different plug-ins). Evidently, in such a multi-interface environment there are special problems for interface design that cannot be covered here in detail, but two important issues are ‘the flow’ and ‘the seams’. Interface design has to consider different devices and applications in a way that allows multitasking in space and in time. In space, when different windows (browser tabs, widgets) are open simultaneously. In time, because in a multitasking/microtasking situation, with continuous partial attention, it is important to enable interruptions without breaking the flow. This calls for a concept of flow that extends beyond traditional interaction design, which concentrates on single-focus, sovereign-posture user scenarios.
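What “enabling interruptions without breaking the flow” could mean at the software level can be sketched as a minimal state-snapshot mechanism. The class and field names are illustrative assumptions; a real client would persist far richer context:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class TaskState:
    """Minimal snapshot of an interrupted microtask."""
    task_id: str
    scroll_pos: int
    draft_text: str

class FlowKeeper:
    """Sketch of interruption support: on interruption the task state
    is serialized; on resumption the user returns exactly where she
    left off, so alternating attention does not break the flow."""
    def __init__(self):
        self._store = {}

    def interrupt(self, state: TaskState) -> None:
        # Serializing (rather than keeping live objects) also lets the
        # snapshot travel across devices and sessions.
        self._store[state.task_id] = json.dumps(asdict(state))

    def resume(self, task_id: str) -> TaskState:
        return TaskState(**json.loads(self._store[task_id]))
```

The point is architectural rather than technical: interruption becomes a first-class event the application is designed around, not an error case.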
16 Approaches to advertising that adapt to the new subject position are Google AdSense and, of course, the famous Amazon recommendations (“People who have bought this book have also bought …”).
17 James Corbett, Actually we're all edge cases... and promiscuous. Entry in weblog “EirePreneur. Doing microbusiness in Ireland”, 02/02/2006. URL: http://eirepreneur.blogs.com/eirepreneur/2006/02/actually_were_a.html
The main concern of interaction design has always been “not breaking the flow”. Cooper (2003) uses the term “flow” 61 times in the seminal work “About Face 2.0: The Essentials of Interaction Design”, once explicitly giving credit to Csikszentmihalyi, the psychologist who introduced the influential concept in 1975 to describe a kind of experience he had noticed while observing people engaged in sports and creative work. According to Csikszentmihalyi (1990), a “flow experience” shows the following characteristics:
the feeling of gliding effortlessly from one instant to the next;
no separation between self and environment, stimulus and reaction, past, present and future;
rewarding in itself;
high satisfaction or even a gentle sense of euphoria;
neither active nor passive;
the feeling of being in charge (intuitively, not in the way of ‘manipulating objects’).
In the field of HCI, five dimensions of “flow” can be identified. They all have to be taken into account when designing for the new digital media environment. Cooper (2003) uses the term in a threefold sense:
for the usability flow, that is, the programmed “normal flow of system activities and interactions”, both visual and logical;
for the user’s workflow, “both within a task and between related tasks”;
for a specific design aiming at ‘software becoming transparent’, making possible media flow experiences in the narrower sense of Csikszentmihalyi – a main feature for this is gesture-driven ‘embodied interaction’ (Dourish 2001).
Especially for the first and the third dimension of flow, “perceived simplicity” (Skogen 2005) is crucial. Famous examples are the designs of Apple and Google, as opposed to Microsoft and (formerly) Yahoo. Here simplicity is not primarily realized on the logical level of a ‘usable’ interaction structure (though that remains important, of course), but on an aesthetic level. A fourth dimension of “flow” not mentioned by Cooper is also important for HCI: the implicit flow structure of the type of media environment, which is different on the office desktop, on the “always-on Web PC”, on the PDA, or on the mobile phone. And finally there is a fifth “flow” concept that may be even more relevant for micromedia design. Introduced by Williams (1974), it was used to describe a then new experience of watching TV brought on by commercial stations, the introduction of the remote control, and cable TV. The users were ‘put in the center’, creating their own personal flow on top of the programs. From that time on, the TV screen was not experienced as a ‘window’ or a ‘stage’ anymore, but took on the look & feel of “monitoring” in a control room, with all kinds of digital information inserts and frequent changes of viewpoint. While the habit of following the “programmed flow” can be compared to the usability flow of desktop software and the Web 1.0, the new user-centered, much more anarchic flow points forward to the Point of Presence and the dynamic microcontent cloud of the Web 2.0.

5.2 Seamfulness
The aim of flow-oriented interface design is to make software become transparent. This is related to Mark Weiser’s ideal of “seamlessness”, where the ideal interface would be one that isn’t even experienced as an interface anymore. For the user, the dimension of the human and the dimension of the machines would become one. But a world without seams is also a world with less meaning. In fact, professionals who are immersed in a Web 2.0 working environment have already started to use three browsers at once, one each for certain tasks or certain contexts, in addition to having opened multiple tabs within each browser. This is a rather crude reintroduction of interfaces into a situation where technology had already created a unified space. A similar effect is created by the multiple devices of the current Digital Lifestyle: laptop, mobile phone, MP3 player, digital camera, maybe a Blackberry. From the perspective of media experience, it is quite doubtful that there will ever be ‘one ring to rule them all’. Multiple devices add structure and meaning, and enable subtle changes of subject position for the users. The same goes for the multiple small applications that together form the Web 2.0 environment of today. It seems that precisely because digital media are so powerful in overcoming restrictions of time and space, they call for re-introducing structure in some way or another. Well-designed applications would, for example, reinforce the basic structure of the micromedia environment – the field of main focus, semi-focus, peripheral view and background. Each of these levels, and each of the objects embedded there, has its own interfaces. Chalmers/Galani (2004) would call this “designing for seamfulness”. They also offer an additional explanation for the problems with seamlessness: an interactive media system that is too complex and too perfect may fail because it gives the users no possibility to “participate, adapt and appropriate”, while another system might succeed not despite, but exactly because it is made up of “inexpensive, easily manipulated, visible” pieces.
They draw the conclusion that one should deliberately design for ‘appropriation’, aiming for systems that are robust, flexible, simple, manipulable and overt: “By overt, we mean the underlying mechanisms of such systems are made visible, as a precondition for the other requirements that provide a basis for appropriation. Such visibility is seamful, rather than seamless. This overt visibility should probably also be reducible to peripheral awareness […]” This is related to the importance of an experience of openness, which is, at least in the field of consumer media, not brought about by a ‘perfect’ subliminal technological infrastructure, but by a seamful, imperfect one. (For some more remarks on ‘designing for openness’, see Lindner (2006b).)
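A tiny sketch may make the seamfulness idea tangible: instead of hiding infrastructure state behind a ‘perfect’ facade, the system exposes it as a glanceable, peripheral cue. The class, fields and thresholds below are illustrative assumptions, not taken from Chalmers/Galani:

```python
from dataclasses import dataclass

@dataclass
class Seam:
    """A deliberately visible 'seam': the system reports the state of
    an underlying mechanism so users can adapt and appropriate it,
    rather than pretending the mechanism does not exist."""
    name: str
    quality: float  # 0..1, e.g. sync freshness or connection strength

    def peripheral_hint(self) -> str:
        # Reduce the overt mechanism to a short, glanceable cue for
        # peripheral awareness, not an intrusive dialog box.
        if self.quality > 0.8:
            return f"{self.name}: ok"
        if self.quality > 0.4:
            return f"{self.name}: degraded"
        return f"{self.name}: offline"
```

The design point is the quoted requirement above: the mechanism is overt, but its visibility is reducible to peripheral awareness.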
The aim of this paper was to sketch out a conceptual framework for a further systematic discussion of the design of ‘smart applications’ within a more and more ubiquitous micromedia and “monomedia” environment, in the sense of Matthew Chalmers (2001): “We have found it useful to consider the many media, technologies and spaces as one design medium, because each person’s experience depends on them all. People’s activity continually combines and cuts across different media, interweaving those media and building up the patterns of association and use that make meaning.” Point of Presence, Continuous Partial Attention, different levels of focus, peripheral view, background/foreground, Come-to-me Web, Info Cloud, flow, perceived simplicity, openness, appropriation, seamfulness … By introducing a set of concepts collected from very different sources, this environment has been shown to have an inner logic of its own, which must be understood when designing new applications, and even more when outlining an R&D strategy for the next 3-4 years. The starting point for all these considerations has been the Knowledge Pulse, the microlearning application developed by the ARC Studio eLearning Environments, and possible future interrelations with the R&D work in the other Research Studios. Some hints for the design of microlearning applications have been given in Lindner (2006), including a chart comparing similar products from the perspective of micromedia design. In a way, this is a piece of software positioned at the boundary between Web 1.0 and Web 2.0. It can be conceptualized either as a push-based learning machine or as an integral part of a future Come-to-me Web. The problem here seems to be that the concept of ‘push-learning’ doesn’t fit well into the new microcontent-based media environment and the subject positions going with it. But at the same time, the gap between the rapid evolution of the Micro-Web and the mental and practical adoption by mainstream users is widening.
This leads to a certain dilemma: old structures of learning and interaction do not work anymore, while new structures haven’t been built up yet. This dilemma is typical, and not limited to the field of e-learning. It seems less and less possible to meet the main challenges of ‘smart applications development’ at the level of one single application. With microcontent-based digital media, an evolving ecosystem has to be considered that calls for new concepts. Contents and attention flows are no longer limited to one application, to one software environment (like the PC desktop or the Web), or to one platform (the PC-centered Web or the Mobile Web). The problems posed by the ‘digital climate change’ towards microcontent and micromedia have to be addressed in a wider context. Processes are not programs; design is becoming as important as software development. The specific structure of the Research Studios Austria opens up the possibility to take a holistic perspective on the new phenomena, combining perspectives from different fields. The chance for R&D in a highly innovative and dynamic field is to understand the deeper structures and to follow longer-term strategies, while adapting as quickly as possible to the permanent innovation of the digital ecosystem.
“Stop designing products. Start designing integrated experiences!”
References

Abowd, G. D., Mynatt, E. D. (2005), Designing for the Human Experience in Smart Environments. In: Cook, D. J., Das, S. K. (eds.), Smart Environments: Technology, Protocols and Applications, pp. 151-174. New York, NY: Wiley & Sons.
Augé, M. (1995), Non-Places: Introduction to an Anthropology of Supermodernity. London / New York, NY: Verso.
Brown, J. S., Duguid, P. (1994), Borderline Issues: Social and Material Aspects of Design. Human-Computer Interaction, Vol. 9, No. 1, pp. 3-36. URL: http://www.johnseelybrown.com/Borderline_Issues.pdf
Brown, J. S., Duguid, P. (1999), The Social Life of Information. Boston, MA: Harvard Business School Press.
Brown, J. S., Hagel III, J. (2005), From Push to Pull: The Next Frontier of Innovation. McKinsey Quarterly, Online Edition, 2005, No. 3, pp. 82-91. URL: http://www.mckinseyquarterly.com/article_abstract_visitor.aspx?ar=1642&L2=21&L3=37&srid=9&gp=1#
Bryant, L. (2006), Humanising the Enterprise through Ambient Social Knowledge. Talk at the O'Reilly Emerging Technology Conference (San Diego, CA, March 6-9, 2006). URL: http://www.headshift.com/archives/002895.cfm
Buxton, B. (2007), Sketching User Experiences: Getting the Design Right and the Right Design. New York, NY: Morgan Kaufmann.
Cavanagh, P., Alvarez, G. A. (2005), Tracking Multiple Targets With Multifocal Attention. Trends in Cognitive Sciences, Vol. 9, No. 7, July 2005, pp. 349-354.
Chalmers, M. (2001), Paths and Contextually Specific Recommendations. In: Proc. DELOS/NSF Workshop on Personalisation and Recommender Systems in Digital Libraries (Dublin, June 2001). URL: http://www.dcs.gla.ac.uk/~matthew/papers/delos.html
Chalmers, M., Galani, A. (2004), Seamful Interweaving: Heterogeneity in the Theory and Design of Interactive Systems. In: Proc. ACM DIS 2004, pp. 243-252. URL: http://www.dcs.gla.ac.uk/~matthew/papers/DIS2004v3.pdf
Cooper, A., Reimann, R. (2003), About Face 2.0: The Essentials of Interaction Design. New York, NY: Wiley & Sons.
Csikszentmihalyi, M. (1990), Flow: The Psychology of Optimal Experience. New York, NY: Harper and Row.
Dash, A. (2002), Introducing the Microcontent Client. URL: http://www.anildash.com/magazine/2002/11/introducing_the.html
Haque, U. (2005), The New Economics of Media: Micromedia, Connected Consumption, and the Snowball Effect. URL: http://www.bubblegeneration.com/resources/mediaeconomics.ppt
Idehen, K. (2006), Web 2.0's Open Data Access Conundrum (Update). Entry in weblog “Kingsley Idehen's Blog Data Space”, 09/05/2006. URL: http://www.openlinksw.com/blog/~kidehen/index.vspx?page=&id=1034, accessed 08/01/2007.
Jaokar, A., Fish, T. (2006), Mobile Web 2.0. London: Futuretext.
Kelly, K., Wolf, G. (1997), Push! Kiss Your Browser Goodbye: The Radical Future of Media Beyond the Web. Wired, No. 5.03, 1997. URL: http://www.wired.com/wired/archive/5.03/ff_push.html
Leene, A. (2006), The MicroWeb: Using MicroContent in Theory and Practice. In: Hug, T., Lindner, M., Bruck, P. A. (eds.), Proceedings of the Microlearning 2006 Conference (Innsbruck, June 8-9, 2006). Innsbruck: Innsbruck University Press.
Lindner, M. (2006), Human-centered Design for ‘Casual’ Information and Learning in Micromedia Environments. In: Holzinger, A. et al. (eds.), M3 – Interdisciplinary Aspects on Digital Media & Education. Proceedings of the 2nd Symposium of the WG HCI&UE of the Austrian Computer Society (ACS), pp. 52-60.
Manovich, L. (2000), Beyond Broadband: Macromedia and Micro-media. In: Lovink, G. (ed.), net.congestion reader. Amsterdam: De Balie. URL: http://www.manovich.net/docs/Mass.cro_micro.doc (digital version)
McLuhan, M. (1964), Understanding Media: The Extensions of Man. New York, NY: McGraw-Hill.
Schick, C. (2005), What Are the True Qualities of Mobility? Entry in weblog “Lifeblog”, 09/06/2005. URL: http://cognections.typepad.com/lifeblog/2005/09/what_are_the_tr.html
Seven, R. (2004), Life Interrupted. Pacific Northwest: The Seattle Times Magazine, 11/28/2004. URL: http://seattletimes.nwsource.com/pacificnw/2004/1128/cover.html
Skogen, M. G. R. (2005), Simplicity in Complicated User-Interface Applications. Paper for Nordcode05, 4th Nordcode Seminar and Workshop “Common Denominators” (Trondheim, May 18-20, 2005). URL: http://www.ivt.ntnu.no/ipd/nordcode05/papers05/nc05-Skogen.pdf
Sohlberg, M. M., Mateer, C. A. (1989), Introduction to Cognitive Rehabilitation: Theory and Practice. New York, NY: Guilford Press.
Stone, L. (2006), Keynote at the ETech 2006 conference. Transcription by Nat Torkington, entry in weblog “O'Reilly Radar”, 03/12/2006. URL: http://radar.oreilly.com/archives/2006/03/etech_linda_stone_1.html
Tams, J. (2006), Introduction to Casuality. Presentation at Casuality Europe '06 – A Conference for Casual Game Developers, Publishers and Distributors (Amsterdam, February 7-9, 2006). URL: http://europe.casuality.org/preso_2006/jessica_tams_Casuality_Europe_2006.ppt
Vander Wal, T. (2003), Welcome to the Personal Info Cloud. Entry in weblog “Personal Infocloud”. URL: http://www.personalinfocloud.com/2003/10/welcome_to_the_.html
Vander Wal, T. (2006), The Come To Me Web. Entry in weblog “Personal Infocloud”. URL: http://www.personalinfocloud.com/2006/01/the_come_to_me_.html
Weinberger, D. (2002), Small Pieces, Loosely Joined: A Unified Theory of the Web. Cambridge, MA: Perseus Publishing.
Williams, E. (2005), Ten Rules for Web Startups. Entry in weblog “Evhead”, 11/27/2005. URL: http://www.evhead.com/2005/11/ten-rules-for-web-startups.asp
Williams, R. (1974), Television: Technology and Cultural Form. New York, NY: Schocken, 1975.