Web 2.0 – A Strategic Analysis
Team: Chris Gerrard, Hamid Hadi Sichani, Max Pruger, Gia Wood, Pradyot Rai
Web history…
Writing in The Atlantic Monthly in 1945, Vannevar Bush described the need for new ways to manage the explosion of knowledge that was even then threatening to overwhelm the ability of professionals to keep pace with developments in their areas of interest. He described the necessity of providing access to large bodies of separate pieces of information and proposed a mechanism for connecting the pieces to one another into "trails" that could be followed to track the information associations.
Web history…
In 1989 Tim Berners-Lee "proposed that a global hypertext space be created in which any network-accessible information could be referred to by a single 'Universal Document Identifier' (UDI)." In 1990 he wrote the first hypertext server and an editor named "WorldWideWeb". A simple browser for hypertext documents written by Nicola Pellow was also released, creating what would become the World Wide Web (WWW).
This would have been remarkable enough in and of itself, but the real breakthrough was the publication of the specifications of the enabling technologies – UDI (since changed to URL), HTML, and HTTP – on the Web server, allowing everyone who wanted to take part in the new way of sharing information to jump in and create their own Web servers, pages, sites, and browsers, leveraging the strength of the idea. The spread of Web pages and sites was and is unparalleled in its breadth, and with the emergence of page-based browsers Vannevar Bush's concept of vast bodies of interconnected information available to everyone was well under way. This was Web 1.0.
philosophy…
"The dream behind the Web is of a common information space in which we communicate by sharing information. Its universality is essential: the fact that a hypertext link can point to anything, be it personal, local or global, be it draft or highly polished. There was a second part of the dream, too, dependent on the Web being so generally used that it became a realistic mirror (or in fact the primary embodiment) of the ways in which we work and play and socialize."
Web 2.0
Web 2.0 is not "about" technology; rather, it builds upon the Web 1.0 foundation by incorporating old and new technologies, techniques, and concepts of information creation, distribution, and access into a much richer information universe where the value of information can be tapped in ways previously unknown, and in many ways unanticipated. This paper examines the elements that make up Web 2.0, considers their benefits, and proposes ways in which the modern information organization can further its strategic goals by embracing the Web 2.0 principles and practices.
The foundation technologies of Web 2.0 are the same as those of Web 1.0. The emergence of Web 2.0 has taken place through the coupling of technology with a progression of innovation in the use of the technology for novel purposes. As an example, the creation of Web forms in 1993[1] enabled the RESTful approach to information systems architecture by providing the ability to send parameterized information to the Web server in a request; Ward Cunningham took advantage of this ability to create the first Wiki in 1995, so that a community could truly collaborate on their body of common knowledge.
Web 2.0…
The heart of Web 2.0 is the concept that information should be easy to create, easy to connect together in meaningful ways, easy to locate when it is relevant or interesting, and easy to combine into novel forms that reveal unintentional connectedness and thus permit the creation of "new" knowledge. These principles are realized in a variety of ways; flexibility is a hallmark of Web 2.0.
creating information
Information in Web 2.0 can be created in a wide variety of ways, and contained in an even wider variety of formats, ranging from the straightforward HTML page to complex proprietary-format documents to dynamic data retrieved from databases. In Web 2.0 the concept of information is "it's all the same" – information is information, as long as it is accessible through the mechanisms of the Web, including the newer mechanisms described below.
connecting information
Vannevar Bush articulated the concept of "associative indexing", the process of tying two things together so that any item may be caused to select another. Tim Berners-Lee provided the means to link information together via embedded links declared in HTML. Web 2.0 has embraced and extended the concept of connectedness by incorporating external mechanisms for connecting together previously disconnected items. These range from passive and internal to highly dynamic and organic. The following sections describe the mechanisms at work in each context.
accessing information
The value of information exists only in its usage. There are multiple ways in which the information held within Web 2.0 can be found and retrieved. Classic web browsing or surfing is the concept of following Vannevar Bush's trails, laid down by a pioneer who has been through the information before you and left markers to follow. There are other ways, from the purely human to the purely automated; each following section describes the relevant information access mechanisms.
These mechanisms begin with the ability to incorporate information gathering through the use of forms, following the RESTful architectural patterns; continue through the emergence of Wikis and their true collaboration model of information development, to Weblogs/Blogs, coupled with RSS and email for disseminating information to interested parties, and to social networks and their impact on the collaborative development of meaning; and extend through to Web Services and their SOA cousin, providing a larger-scale, more formal approach to component-based application development.
REST
REST (Representational State Transfer) is an architectural style for distributed hypermedia systems like the World Wide Web. The term originated in a 2000 doctoral dissertation about the web written by Roy Fielding, one of the principal authors of the HTTP protocol specification, and has quickly passed into widespread use in the networking community.
While REST originally referred to a collection of architectural principles (described below), people now often use the term in a looser sense to
describe any simple web-based interface that uses XML and HTTP without the extra abstractions of
MEP-based approaches like the web services SOAP protocol. Strictly speaking, it is possible (though not common) to design web service systems in
accordance with Fielding's REST architectural style, and it is possible to design simple XML+HTTP interfaces in accordance with the RPC style, so
these two different uses of REST cause some confusion in technical discussions.
Systems that follow Fielding's REST principles are often referred to as RESTful; REST's most zealous advocates call themselves RESTafarians.
REST – Principles
REST's proponents argue that the web has enjoyed the scalability and growth that it has as a direct result of a few key design principles (even though many web applications are not RESTful). A set of well-defined operations that apply to all pieces of information (called resources):
HTTP itself defines a small set of operations, the most important of which are GET, POST, PUT, and DELETE. People often compare these with the
CRUD operations required for data persistence, though POST does not fit cleanly into the comparison.
A universal syntax for resource-identification: in a RESTful system, every resource is uniquely addressable
through the resource's URI.
The use of hypermedia both for application information and application state-transitions: representations in a REST system are typically HTML or
XML files that contain both information and links to other resources; as a result, it is often possible to navigate from one REST resource to many
others, simply by following links, without requiring the use of registries or other additional infrastructure.
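The uniform interface described above can be sketched in a few lines. This is a hedged illustration, not a real server: an in-memory dictionary stands in for resource state, and the resource URI used is made up.

```python
# Minimal sketch of REST's uniform interface: every resource is
# identified by a URI, and the same small set of operations applies
# uniformly to all of them (compare with CRUD).
store = {}  # URI -> representation (stand-in for server-side resource state)

def put(uri, representation):
    """Create or replace the resource at `uri` (idempotent)."""
    store[uri] = representation

def get(uri):
    """Retrieve a representation of the resource, or None if absent."""
    return store.get(uri)

def delete(uri):
    """Remove the resource (idempotent)."""
    store.pop(uri, None)

# Hypothetical resource, for illustration only.
put("/articles/web-2.0", {"title": "Web 2.0", "body": "..."})
print(get("/articles/web-2.0")["title"])
delete("/articles/web-2.0")
print(get("/articles/web-2.0"))
```

Note how no operation is specific to one resource type; the URI alone selects what is acted upon, which is the property that lets generic infrastructure (caches, proxies, crawlers) work across the whole web.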
Wikis – Communal Information Development
A wiki is a website that multiple authors can edit as easily as typing plain text. Wiki, the name and concept, are the brainchild of Ward Cunningham, who launched the first wiki on March 25, 1995 to support a community of like-minded people in accessing and dynamically changing documents on-line. His site was a "quick" way to complete a project, and he chose the name "based on the Hawaiian term wiki, meaning 'quick,' 'fast,' or 'to hasten'". Interestingly, the source for this quote – Wikipedia – is the largest and most highly regarded online encyclopedia, and is itself a wiki.
One of the authors (Gerrard) began reading it in 1996, and contributing to it in 1997. The great advantage of this approach is that it lowers the barrier to contribution to almost nothing: anyone who can type can take part.
Wikis…
Wikis make it easy to contribute different kinds of content. Entering information into a wiki is just typing; what you type is what goes in. Beyond that, the one deliberate convention is CamelCase – capital letters begin each individual word in a combined string of words. When the wiki sees a CamelCase word it recognizes that it refers to a separate page, creates the page if it doesn't exist, and provides an HTML link to it wherever the CamelCase word appears. Other markup elements such as bolding and italicizing text and inserting bulleted lists are accomplished through similarly simple markups; e.g. text written "*the ocean doesn't want me today*" gets displayed as an emphasized "the ocean doesn't want me today".
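The CamelCase convention is simple enough that its core can be sketched in a few lines. This is a hedged illustration of how a wiki engine might do it, not the code of any real engine; the page names and URL scheme are invented.

```python
import re

# A CamelCase word: two or more capitalized word parts run together,
# e.g. FrontPage. Single capitalized words like "See" do not match.
CAMEL = re.compile(r"\b(?:[A-Z][a-z]+){2,}\b")

def render(text, existing_pages):
    """Replace each CamelCase word with a link to its wiki page.

    A real engine would create missing pages on first visit; here we
    merely flag unknown pages with a trailing "?" marker.
    """
    def link(match):
        name = match.group(0)
        marker = "" if name in existing_pages else "?"
        return f'<a href="/wiki/{name}">{name}</a>{marker}'
    return CAMEL.sub(link, text)

print(render("See FrontPage and NewIdea.", {"FrontPage"}))
```

The whole markup language of early wikis was little more than a handful of substitutions like this one, which is precisely what made contribution so easy.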
Wikis…
Wikis can be public, as already seen, or private. Acquiring a private wiki is easy; the software that runs a wiki – the wiki engine – is available in a bewildering array of choices. Wiki engines are written in every major programming language (and many minor ones), have a vast array of features, and are available as both open source and commercial offerings. Wikis are also available as hosted services, which are suitable for exploratory and private use, but are less attractive in a business environment where the information in the wiki may constitute an intellectual asset. Many corporations implement public wikis as customer support tools, and private wikis as part of their internal knowledge management systems.
Linking between pages within the wiki is available through the CamelCase convention, written by the authors themselves. Notification of changes to the content can be sent to interested parties through email notifications and RSS feeds.
Blogs
Blogs consist primarily of periodic articles, normally, but not always, in reverse chronological order. They permit readers to comment on the content. Blogs evolved as a means to publish straight to the web, and made it easy to publish content and then to link comments, additions and afterthoughts to the original content.
Blogs…
Blogs are commonly followed through feeds. In addition, special blog search engines, such as Google BlogSearch, Technorati, and Feedster[1], make it easy to locate blog content and track feedback. Many entrepreneurs are using blogs to create a personal relationship with their target audience, giving consumers a sense of ownership in product design, marketing, and eventual launch.
Large corporations are monitoring product blogs in order to respond quickly to customer concerns. A classic example of an averted disaster is Apple with their iPod Nano. Soon after the initial release of the Nano, blogs appeared complaining about a screen glitch and lack of durability[2]. Before the issue escalated into a full-blown disaster, Apple announced they would replace all defective units. Ironically, instead of this debacle hurting Apple's reputation, they were commended for their quick response. Corporations are also using blogs to provide users with expert advice. As a pre-emptive move, Macromedia launched blogs before the release of their Flash, Dreamweaver, Fireworks and ColdFusion applications to rave reviews. Marketing departments are also creating special blogs, called BusiBlogs, to advertise and promote their products and services.
RSS
RSS, originally Rich Site Summary, now commonly called Really Simple Syndication, is a way of packaging changes to Web content in a standard XML format and making it available through a well-known mechanism, termed "feeds", so that interested parties can keep up with the new information. The feed consists of XML that encapsulates the new information, and can be accessed via feed readers – programs that contact the web site and retrieve the feed. RSS began as a way for people to see what was new on the sites that interested them. Feed readers, originally standalone applications, became commonplace in email programs and web browsers. In Web 2.0 RSS feeds are being used to gather information for use in mashups, as in http://almaer.com/blog/archives/000931.html, describing the retrieval of specially tagged photographs from Flickr and combining those photographs with Google maps to produce custom maps with pictures of locations attached.
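What a feed reader does with the XML it retrieves can be sketched with the standard library alone. The feed below is a hand-made sample in RSS 2.0 shape, not fetched from any real site.

```python
import xml.etree.ElementTree as ET

# A tiny hand-made RSS 2.0 document: a channel with two items.
feed = """<rss version="2.0"><channel>
  <title>Example Blog</title>
  <item><title>First post</title><link>http://example.com/1</link></item>
  <item><title>Second post</title><link>http://example.com/2</link></item>
</channel></rss>"""

# A feed reader parses the XML and lists what is new on the site.
channel = ET.fromstring(feed).find("channel")
print(channel.findtext("title"))
for item in channel.findall("item"):
    print(item.findtext("title"), "->", item.findtext("link"))
```

Because the format is standard, the same few lines work against any site's feed, which is what made aggregation and, later, mashups practical.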
Tagging…
Tagging refers to the practice of attaching key words to pieces of web content. The only restriction is that the content be reachable via a URI. Seemingly innocuous, the act of tagging, when practiced by large numbers of people, has proven to be enormously powerful. When two people tag different items with the same key word they are forming an association between the items, and the association is a link that can be followed by a tag-aware system. When multiple people use the same key word the effect is to create a category of meaning that includes all the tagged items as members of the category. A category of meaning created through the growth of tags has been termed a folksonomy, itself a "portmanteau of the words folk (or folks) and taxonomy; the term folksonomy has been attributed to Thomas Vander Wal."[1]
Tagging sites like del.icio.us (http://del.icio.us/) have proven to be extremely popular; one reason is that they provide the social function of allowing people to share their tags and discover what others have tagged with the same key words. From a corporate perspective, the emergence of folksonomies has been proposed as a very effective way of organizing information according to the concepts and meanings of the people intimately engaged with it; creating environments that support the effective growth and use of tagging and folksonomies can provide real value in leveraging an organization's information assets.
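The association that tagging creates is easy to make concrete. The sketch below is illustrative only; the tags and URIs are invented, and a real tagging system would of course be far richer.

```python
from collections import defaultdict

# tag -> set of tagged URIs: the inverted index a tag-aware system keeps.
tags = defaultdict(set)

def tag(uri, *keywords):
    """Record that someone tagged `uri` with each keyword."""
    for kw in keywords:
        tags[kw].add(uri)

def related(uri):
    """Items sharing at least one tag with `uri` - the folksonomy link."""
    return {other
            for kw, uris in tags.items() if uri in uris
            for other in uris if other != uri}

# Two people tag different items with the same key word "ocean":
tag("http://example.com/surfing", "ocean", "sport")
tag("http://example.com/tides", "ocean")
print(related("http://example.com/surfing"))
```

Neither tagger coordinated with the other, yet the shared key word now links the two items; scaled up to thousands of users, those links are the emergent categories the text calls a folksonomy.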
Finding Information – Searching and Browsing
Finding information when it is relevant and useful is essential to realizing the value of the information. "Findability will eventually be recognized as a central and defining challenge in the development of web sites, intranets, knowledge management systems and online communities."[1] In today's knowledge economy, learning and finding are powered by all manner of links between and among people and documents. These links may be explicit, as is the case with HTML hyperlinks, or implicit, as in similarly tagged items in a folksonomy.
There are three basic ways to find information in Web 2.0: surfing, browsing and searching. Surfing is the process of following hyperlinks through HTML documents and is the fundamental mechanism; it enables the others. Surfing has its roots in Vannevar Bush's trails[2] and Tim Berners-Lee's work; it will not be discussed further in this paper. Browsing allows one to see the contents of the system, indexed by subject or topic. Searching allows one to see a custom-generated list of resources that match a query.
searching…
Computerized searching for information has been around for a very long time. In the early web, searching was very limited. Early search engines relied upon classic searching techniques such as text indexing, in-document proximity analysis, and limited word stemming. As the web grew exponentially, these techniques, even coupled with the taxonomic approach exemplified by Yahoo, became unwieldy and unreliable in locating "the best", or even highly associated or relevant, results. Google's PageRank mechanism of weighting a web resource (page) using the number of links to it from other locations provided an external measure of the importance of the page's information.
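The idea behind PageRank can be sketched with a few lines of power iteration: a page's weight comes from the weights of the pages linking to it. This is a simplified illustration, not Google's implementation, and the three-page link graph is invented.

```python
# Toy link graph: page -> pages it links to (invented for illustration).
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
d = 0.85                                  # conventional damping factor
rank = {p: 1 / len(links) for p in links} # start with uniform weights

# Power iteration: repeatedly pass each page's weight along its outlinks.
for _ in range(50):
    new = {}
    for p in links:
        incoming = sum(rank[q] / len(links[q])
                       for q in links if p in links[q])
        new[p] = (1 - d) / len(links) + d * incoming
    rank = new

for p, r in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(p, round(r, 3))
```

Page "c" ends up weighted highest: it is linked from both "a" and "b", so it accumulates weight from two sources, which is exactly the "importance from incoming links" measure the text describes.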
Searching in Web 2.0 builds upon Web 1.0, and promises to provide real benefits in the usefulness of information, particularly within organizational boundaries. With the advent of low-cost search appliances from companies like Google, it is now possible to locate virtually all information accessible via the organizational network. The Web 2.0 search challenges involve incorporating the new information relationships that are emerging. Wikis and blogs rely on hyperlinks and commenting to extend information; while these are not fundamentally different from Web 1.0, their volume and granularity pose interesting problems. The greater challenge, and far greater potential payoff, is in using the emerging associative connections to reveal higher-level relationships between and among information assets. For example, a corporate folksonomy contains the information structure and relationships meaningful to the people who grow it; the ability to employ this meta-information to find and reveal relevant information will provide real strategic benefits.
browsing…
Browsing has in the past normally been the province of taxonomic systems that have preset information structures connected via efficient indexes of well-identified characteristics of the information, or some combination of the two. Most classic knowledge management systems are oriented around this approach. In Web 2.0 the concept is of structures that "grow in place" as people interact with the information and provide their individual associations – subject or topic, usually in the form of tags. We begin to see here the power of the folksonomy as a browsable structure that reflects the understanding of the people who built it.
Novel Uses of Information – Mashups and Aggregation
Web Services
The business of the modern world has benefited from protocols that allow heterogeneous computer systems to interoperate efficiently. These technologies are collectively referred to as web services. XML describes content during interactions in distributed systems and allows implementation and technology details to be hidden. The new technology has led to widespread use of web services.
The move to Service Oriented Architecture (SOA) did not commence until 2000. SOA represents a bigger picture of what we can do with web services. It is an approach to building distributed systems that deliver application functionality as services to end-user applications.
principles…
Web services are built on the concept of underlying software components offering services through an interface. It is a big leap from component architecture because it further extends the separation of services from their implementations. The notion of service as an integral part of component thinking, and the introduction of CORBA, are both preludes to service-oriented architecture. Web Services is the programmatic interface that follows the principles of separation of services and independence of platforms:
•A logical business structure for use by internal and external clients regardless of implementation technologies;
•Design and quality-of-service characteristics that enable use or reuse, abstraction and conformance with service level agreements.
The key principles are:
•Standards-based protocols – separation of provider and consumer, enabling automatic discovery and usage, enabled by SOA;
•Functional standardization – use (reuse) of a service, not reuse by copying of code/implementation;
•Abstraction – the service is abstracted away from the implementation, enabling technology and application independence;
•Formalization of relationship – a formal contract between endpoints places obligations on provider and consumer;
•Relevance – functionality presented at a granularity recognized by the user as a meaningful service.
technology…
As with web services, there are two key roles in an SOA architecture: service requestor and service provider. The requestor application invokes the services offered by provider applications by sending request messages, and processes the response messages sent by the provider. Some providers can also be requestors: they aggregate responses from other providers to construct composite responses. Certain SOA technologies such as UDDI and WS-Trust also use a service broker as an intermediary for brokered trust agreements, service location, etc.
The W3C SOAP 1.2 standard defines the use of XML-formatted messages for communication between a service requestor and a service provider. The request message (XML) is put in a SOAP envelope (also XML) and sent to the provider; the provider sends its response back in the same format. SOAP is the best way to support invocation in an SOA environment involving heterogeneous systems: it is platform neutral and vendor neutral. However, SOA does not always require SOAP. A company could build an SOA using Java, for example, as long as all entities are written in the same language, but this is not workable in a scenario where multiple partners and heterogeneous platforms are in the picture.
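The envelope-and-body shape that SOAP 1.2 defines can be sketched directly. The operation name and its namespace below are hypothetical, invented for illustration; only the envelope structure and the SOAP 1.2 namespace come from the standard.

```python
import xml.etree.ElementTree as ET

# The SOAP 1.2 envelope namespace, defined by the W3C standard.
SOAP_NS = "http://www.w3.org/2003/05/soap-envelope"

# The request payload travels inside an Envelope/Body wrapper.
env = ET.Element(f"{{{SOAP_NS}}}Envelope")
body = ET.SubElement(env, f"{{{SOAP_NS}}}Body")

# Hypothetical operation in a made-up service namespace.
op = ET.SubElement(body, "{http://example.com/stock}GetQuote")
ET.SubElement(op, "symbol").text = "ACME"

print(ET.tostring(env, encoding="unicode"))
```

Because both the wrapper and the payload are plain XML, any platform that can produce and parse XML can take part in the exchange, which is the platform and vendor neutrality the text describes.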
service description…
The Web Services Description Language (WSDL) specifies the XML language for defining the contract between service provider and requestor in terms of messages. WSDL contains the following content:
•Request message format
•Response message format
•Where to send messages
WSDL is based on XML and is therefore machine-readable. Developers can use this protocol to automate service discovery and invocation. For example, a Java proxy object can be generated to invoke any Web service from its WSDL description, regardless of how the service is implemented, whether in Java, C#, or any other language. In fact, WSDL does not specify implementation details such as programming language.
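Because WSDL is machine-readable XML, tooling can discover a service's operations automatically. Below is a toy, hand-made WSDL 1.1 fragment (the service and operation names are invented) and the few lines needed to list its operations; real proxy generators do essentially this, plus code generation.

```python
import xml.etree.ElementTree as ET

# The WSDL 1.1 namespace; elements of the contract live under it.
WSDL_NS = "http://schemas.xmlsoap.org/wsdl/"

# Hand-made fragment of a hypothetical stock-quote service contract.
wsdl = f"""<definitions xmlns="{WSDL_NS}">
  <portType name="StockQuotePortType">
    <operation name="GetQuote"/>
    <operation name="GetHistory"/>
  </portType>
</definitions>"""

# A tool can enumerate the service's operations without any
# knowledge of how the service is implemented.
root = ET.fromstring(wsdl)
for op in root.iter(f"{{{WSDL_NS}}}operation"):
    print(op.get("name"))
```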
service discovery…
Universal Description, Discovery and Integration (UDDI) is a mechanism for finding a required service. UDDI implements a service registry which acts as a broker between the provider and requestor. SOA does not require UDDI; however, it can be a wise choice if SOAP is the protocol being implemented, since UDDI is built on SOAP.
benefits…
An SOA that properly reflects the real world creates convergence of the business and IT perspectives and promotes greater efficiency, adaptability, and cost control in business relationships and structures.
Services create looser coupling between business models and technologies, reducing the dependency on specific technologies or products.
Separating functionality into independent services facilitates shared business and technical services that enable consistency across the enterprise along with local variation.
The self-describing nature of run-time services enables automation of business rules and technical functions, reducing human intervention and promoting straight-through processing.
SOA services can be extensively re-used, and the approach encourages asset repurposing.