
• The synthesis of organization, labeling, search, and navigation systems within digital, physical, and cross-channel ecosystems.
• The art and science of shaping information products and experiences to support
usability, findability, and understanding.
• An emerging discipline and community of practice focused on bringing principles of
design and architecture to the digital landscape.
Then, they jokingly ask, “Were you expecting a single definition? Something short
and sweet? A few words that succinctly capture the essence and expanse of the field of
information architecture? Keep dreaming!”
There is still disagreement about what is part of IA. Andrew Dillon and Don Turnbull
describe some differences in points of view:
In the absence of formal definition, a line of division has been drawn between two
competing views of the field, known generally as the Big IA vs. Little IA perspectives. Big
IA is . . . the process of designing and building information resources that are useful, usable,
and acceptable. From this perspective IA must cover user experience and even organizational
acceptance of the resource. On the other hand, Little IA refers to . . . a far more constrained
activity that deals with information organization and maintenance, but does not involve itself
in analyzing the user response or the graphical design of the information space. Big IA tends
to be seen as top-down, conceiving the full product and its human or organizational impact;
Little IA is viewed as more bottom-up, addressing the metadata and controlled vocabulary
aspects of information organization, without dealing directly with, and certainly never
evaluating formally, the user experience of the resulting space.
Some information architects reject the notion that information architecture is merely a new approach to the information organization that has long been practiced in libraries, archives, and museums. But the parallels are striking. Librarians have long understood the
necessity of organizing information resources in ways that will aid users in gaining access to
them as needed. There does appear to be some agreement, however, on the desire to design
complete and helpful “information ecologies.” To do this, the information architect must
• create online spaces with users’ needs, behaviors, and limitations in mind;
• understand the specific context of the site (e.g., the mission, goals, strategies, etc.,
that are unique to the institution creating the space); and
• organize the online information (e.g., the stuff that users are looking for) logically
and clearly to provide easy access to that information.

The process includes designing usable interfaces and navigation systems (e.g.,
sitemaps, taxonomies, menus) in addition to creating a pleasing overall graphic design.
Rosenfeld, Morville, and Arango state, “many people think of website navigation structures
when they think of information architecture, and this view isn’t entirely off: navigation
menus and their ilk are certainly within the remit of what information architecture produces.
It’s just that you can’t get there without having explored the more abstract territory first.” In
short, the field is dedicated to making information findable and understandable by creating
unique and logical information structures in online settings.
Rosenfeld, Morville, and Arango identify the following stages that the process of
information architecture must go through: research, strategy, design, implementation, and
administration.
• Research includes a review of background materials; gaining an understanding of
the goals and business context; examining the existing information architecture, content, and
intended audiences; and finally conducting studies necessary to explore the situation. In this
stage, an understanding of the content must be developed. This includes gathering
information about: ownership; types, formats, and structures of the content; existing
metadata; and the amount or volume of content.
• Strategy arises from contextual understanding developed in the first phase and
defines the top levels of the site’s organization and navigation structures, while also
considering document types and the metadata schema. For example, one must develop ideas
about how users will access the site’s information (e.g., alphabetic, chronological, topical, or
task-oriented means).
• Design involves creating detailed blueprints, metadata schemas, and the like, to be
used by graphic designers, programmers, content authors, and the production team. In this
stage, content categories, browsing menus, controlled vocabularies, search functions, and
label systems are created.
• Implementation is where designs are used in the building, testing, and launching of
the site; organizing and tagging documents, troubleshooting, and developing documentation
occur in this phase.
• Administration involves the continuous evaluation and improvement of the site’s
information architecture.
The strategy and design stages, in particular, are the ones that require a thorough
understanding of the theoretical underpinnings of information organization and the system
design that will allow display of results in a logical and usable fashion.

Indexing and Abstracting

Indexing and abstracting are two approaches to distilling information content into an
abbreviated, but comprehensive, representation of an information resource. Indexing has a
long and shifting tradition in terms of what it is, who has done it, why it is done, and how it is
done. The history of abstracting is less volatile, and has evolved in the twentieth and twenty-first centuries into specific formats with targeted audiences.
Indexing
Indexing is the process by which the content of an information resource is analyzed,
and the aboutness of that item is determined and expressed in a concise manner. Indexing is
also concerned with describing the information resource in such a way that users are aware of
the basic attributes of a document, such as author, title, length, and the location of the
content. Indexing typically concerns textual items only, although image indexing is a
growing area of practice. There are three basic types of indexing: back-of-the-book indexing,
database indexing, and web indexing.
In traditional back-of-the-book indexing, the index is a list of terms or phrases
arranged alphabetically with locator references that make it possible for the user to retrieve
the desired content. Language of the indexing terms is typically derived from language of the
text—thus the kind of indexing done in this context is referred to as derived indexing. A good
book index will also include second-level entries (i.e., subheadings), variant entries (i.e.,
multiple entry points), and cross-references. Book indexing is primarily done by freelance
specialists who contract with publishers, although some publishers maintain an in-house
indexing staff, or it may be accomplished by the authors themselves.
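As a rough illustration, a back-of-the-book index can be modeled as alphabetically arranged entries carrying locators, second-level subheadings, and cross-references. The terms, page numbers, and rendering conventions below are invented for this sketch:

```python
# Hypothetical model of a back-of-the-book index: main headings with page
# locators, second-level subheadings, and "see also" cross-references.
index = {
    "cataloging": {
        "locators": [12, 45],
        "subheadings": {"descriptive": [13], "subject": [14, 47]},
        "see_also": ["classification"],
    },
    "classification": {"locators": [30, 31], "subheadings": {}, "see_also": []},
    "indexing": {"locators": [88], "subheadings": {}, "see_also": ["cataloging"]},
}

def render(index):
    """Render the index alphabetically, one line per entry or sub-entry."""
    lines = []
    for term in sorted(index):
        entry = index[term]
        lines.append(f"{term}, {', '.join(map(str, entry['locators']))}")
        for sub in sorted(entry["subheadings"]):
            lines.append(f"  {sub}, {', '.join(map(str, entry['subheadings'][sub]))}")
        for ref in entry["see_also"]:
            lines.append(f"  see also {ref}")
    return "\n".join(lines)

print(render(index))
```

The nested structure mirrors the second-level entries and multiple entry points described above; a real index would of course be far larger and editorially curated.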
In database indexing (sometimes referred to as journal or periodical indexing), each
database item is represented by a set of descriptor terms and, in some instances, a
classification code. Database indexing normally uses a controlled vocabulary or thesaurus
from which the indexer selects and assigns the appropriate terms. The scope and number of
descriptor terms assigned to an item is determined by the editorial policies of the publisher of
the given database, and in-house or specially trained indexers usually perform the indexing.
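The descriptor-assignment step can be sketched in a few lines, assuming a tiny invented vocabulary in place of a real thesaurus (the variant-to-preferred-term mappings here are illustrative only):

```python
# Hypothetical controlled vocabulary: variant terms map to the preferred
# descriptor an indexer would assign -- a tiny stand-in for a real thesaurus.
thesaurus = {
    "cars": "automobiles",
    "automobiles": "automobiles",
    "felines": "cats",
    "cats": "cats",
}

def assign_descriptors(candidate_terms, thesaurus):
    """Return preferred descriptors for an item's candidate terms; terms
    not in the vocabulary are set aside for indexer review."""
    descriptors, unmatched = set(), []
    for term in candidate_terms:
        if term in thesaurus:
            descriptors.add(thesaurus[term])
        else:
            unmatched.append(term)
    return sorted(descriptors), unmatched

print(assign_descriptors(["cars", "felines", "zeppelins"], thesaurus))
# → (['automobiles', 'cats'], ['zeppelins'])
```

In practice the human indexer makes the final call, guided by the database's editorial policies; the mapping merely enforces the controlled vocabulary.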
Web indexing (or Internet indexing) is a type of indexing still very much in
development, in terms of both its jargon and its actual practice. Currently, web indexing falls
into two basic categories: (1) back-of-the-book style indexing—often referred to as A–Z
Indexing—which uses encoded index links within the website, and (2) search engine
indexing—more accurately described as the automatic indexing of websites. In search engine indexing, software scans websites, maintains an index of the words found and where they were found, and future searches on those words are answered from the saved index. Because of the varieties of web indexing, who does web indexing is a question
that applies primarily to the A–Z style of web indexing. Freelance contractors, often book
indexers who have expanded their repertoire of services, create A–Z web indexes. Different
types of indexes are described more fully in Chapter 3.
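The search-engine side of this can be sketched as a small inverted index that records which documents (and word positions) each term occurs in, so that later queries consult the saved index rather than rescanning the pages. The page names and texts are invented for the example:

```python
# Minimal sketch of search engine indexing: an inverted index mapping each
# word to the documents and positions where it appears.
def build_inverted_index(pages):
    index = {}
    for url, text in pages.items():
        for pos, word in enumerate(text.lower().split()):
            index.setdefault(word, {}).setdefault(url, []).append(pos)
    return index

def search(index, query):
    """Return URLs containing every word of the query."""
    postings = [set(index.get(w.lower(), {})) for w in query.split()]
    return sorted(set.intersection(*postings)) if postings else []

pages = {
    "a.html": "information architecture shapes navigation",
    "b.html": "navigation menus aid findability",
}
index = build_inverted_index(pages)
print(search(index, "navigation"))        # → ['a.html', 'b.html']
print(search(index, "navigation menus"))  # → ['b.html']
```

Real search engines add ranking, stemming, and crawling at scale, but the core data structure is this word-to-locations mapping.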
A number of software tools have been created to generate indexes. They use a variety
of techniques, the efficacy of which depends upon variables such as cost, time constraints,
type and size of files to be indexed, and individual preferences. The American Society for
Indexing (ASI) lists several types of tools used for indexing. These include:
• Standalone or dedicated tools, which allow the creation of back-of-the-book indexes
from page-numbered galleys;
• Embedded indexing tools, which allow insertion of index entries as invisible text in
electronic files;
• Tagging and keywording tools, which allow indexing codes (instead of invisible
text) to be embedded in electronic text and allow creation of hard-coded jumps, similar to
web links; these rely on the words used by the author of the text, not indexers’ concepts;
• Automated indexing software, which accompanies most word-processing software
and builds concordances or word lists directly from texts using the language of the authors
(again, not true indexes that include key concepts that may not use the author’s words);
• Free-text and weighted-text searching tools, which allow the assignment of values to
words and phrases.
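The difference between a concordance and a true index can be seen in a few lines of code: a concordance builder uses only the author's own words, exactly as the caution above notes. The stopword list and sample pages here are assumptions for the sketch:

```python
# Sketch of what "automated indexing" software produces: a concordance
# (word list with page locators) built purely from the author's language,
# with no indexer-supplied concepts.
from collections import defaultdict

STOPWORDS = {"the", "a", "of", "and", "in", "is"}  # illustrative only

def concordance(pages):
    """pages: list of page texts; returns word -> sorted page numbers."""
    occurrences = defaultdict(set)
    for page_no, text in enumerate(pages, start=1):
        for word in text.lower().split():
            word = word.strip(".,;:")
            if word and word not in STOPWORDS:
                occurrences[word].add(page_no)
    return {w: sorted(p) for w, p in sorted(occurrences.items())}

pages = ["Indexing is the analysis of content.", "Content analysis aids retrieval."]
print(concordance(pages))
```

A concept that the author never names in so many words simply cannot appear in such a list, which is why the tools above are not substitutes for a human indexer.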
Abstracting
Abstracting is a process that consists of analyzing the content of an information
resource and then writing a succinct summary or synopsis of that work. Typically, abstracting
is done for an academic publication or a professional journal. The length, style, and amount
of detail in an abstract may vary depending on its intended audience. Generally, an abstract is
not a review of the work, nor does it evaluate or interpret the work that is being abstracted,
although critical abstracts do include some evaluative text. Although it contains key words
and concepts found in the larger document, the abstract is an original text rather than an
excerpted passage. There are several types of abstracts:
• Indicative: descriptive of the content, but without providing results or outcomes
(also can be referred to as a descriptive abstract);
• Informative: summative with the results or outcomes emphasized;
• Critical: condensed critical review (this type is fairly uncommon);
• Structured: non-narrative in format; it includes specific factors, such as objectives,
methods, results, etc.;
• Modular: includes five discrete sections—citation, annotation, indicative abstract,
informative abstract, and critical abstract.
Technically, an abstract is the summary text; in practice, a formal abstract consists of
the title and citation of the abstracted work and the summary text.
Abstracting is done by both authors and specially trained information professionals.
Scholarly journals often require that an abstract accompany the articles that authors submit
for publication. The rubrics provided by journal publishers can be inconsistent or vague, and
the quality of published abstracts can suffer as a consequence. Abstracts are also written by
professionals, who are either in-house or are contracted by the publishers. Editorial policies
are employed to guide abstractors in this instance; policies are not all the same but are
designed in response to specific audience needs.
Abstracts have a number of uses in information organization and retrieval. Users
needing to stay abreast of a field or given topic can do so by reviewing abstracts published in
that area. Beginning researchers or researchers seeking to master a given area of literature
find that reviewing abstracts instead of full texts saves time. Abstracts aid in the decision of
which articles need to be read in full versus which can be skimmed or skipped altogether. In a
related fashion, because it is sometimes the practice to publish English-language abstracts for
non-English-language articles, the user can decide from reading the abstract if it is cost- and
time-effective to have a translation made of a full article. Librarians and other information
professionals find that the use of abstracts assists in the speed and utility of patron literature
searches. Database indexers, who typically index using only the title and abstract of a text,
require that the abstract be well written and accurate.
Records Management
Records management is the terminology applied to the control and disposition of
records created in offices and other administrative settings. It has its roots in the office filing
systems that developed throughout the twentieth century. These systems have been highly
affected by developments in technology—typewriters, photocopiers, and computers (starting
with sorters and collators). The use of computers in this context has sometimes been referred
to as data administration. Records management systems have a strong relationship with
archives, as that is where an organization’s records may be deposited when their active
operating life has passed and they have become inactive records.
As was true in other parts of our society, records management originally involved the
keeping, filing, and maintaining of paper records. It was a simpler time but also a frustrating
time, because usually only one copy of a record was filed in only one place. The file labels of
one records manager were not necessarily logical to the next. As information began being
entered and stored in electronic files, access points (the file labels) became invisible. This
was not an immediate problem as long as the people who developed the electronic files
documented what was contained in them. The situation became more complicated when
powerful personal computers began to allow persons to store and file their own information
on their desktops. A problem of continuity developed when these personal files were
abandoned.
For many years various operations were automated, each with its own system. For
example, payroll, general ledger, accounts payable, inventories, and other such systems were
automated separately. During the 1990s integration of these systems took place with the
result that the systems had many redundant data fields with little documentation of their
content. These fields seemed to be meant to contain the same information, but what was
actually there was often different (e.g., name given in full in the payroll file, but middle name
shortened to an initial in the faculty file).
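The redundancy problem just described can be made concrete with a small sketch: the same person's name is stored differently in separately automated systems, so naive field comparison reports spurious mismatches. The field names, identifiers, and normalization rule are assumptions for the example:

```python
# Illustrative sketch of redundant, inconsistently populated name fields
# across separately automated systems (payroll vs. faculty).
def normalize_name(name):
    """Reduce 'First Middle Last' to 'First M. Last' for comparison."""
    parts = name.split()
    if len(parts) == 3:
        parts[1] = parts[1][0] + "."
    return " ".join(parts)

payroll = {"e1001": "Mary Ellen Smith"}
faculty = {"e1001": "Mary E. Smith"}

for emp_id in payroll:
    raw_match = payroll[emp_id] == faculty.get(emp_id)
    normalized_match = normalize_name(payroll[emp_id]) == faculty.get(emp_id)
    print(emp_id, raw_match, normalized_match)  # → e1001 False True
```

The deeper fix, of course, is documenting field content and agreeing on a single authoritative source, rather than reconciling after the fact.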
In 2001 the International Organization for Standardization (ISO) published a standard
for records management. It defines records management as the “field of management
responsible for the efficient and systematic control of the creation, receipt, maintenance, use
and disposition of records.” Further, in describing records systems characteristics, the
standard states:
A records system should
a) routinely capture all records within the scope of the business activities it covers,
b) organize the records in a way that reflects the business processes of the records’
creator,
c) protect the records from unauthorized alteration or disposition,
d) routinely function as the primary source of information about actions that are
documented in the records, and
e) provide ready access to all relevant records and related metadata.
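Characteristic (c), protecting records from unauthorized alteration, can be sketched with a content hash recorded at capture time; the record identifiers and store layout are invented for the example:

```python
# Minimal sketch of detecting record alteration after capture, using a
# SHA-256 digest stored alongside the record content.
import hashlib

store = {}  # record_id -> (content, sha256 digest at capture)

def capture(record_id, content):
    """Capture a record and fix its digest at the moment of capture."""
    store[record_id] = (content, hashlib.sha256(content.encode()).hexdigest())

def is_intact(record_id):
    """True if the stored content still matches its capture-time digest."""
    content, digest = store[record_id]
    return hashlib.sha256(content.encode()).hexdigest() == digest

capture("inv-001", "Invoice 001: $500, approved 2024-05-01")
print(is_intact("inv-001"))  # → True
```

A production records system would pair this with access controls and audit logs; hashing alone detects alteration but does not prevent it.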

A number of commercially available records management systems have now been developed that track and store records, provide security and auditing functions,
have content management and user identity modules, and more. Records management
systems are a growing industry as corporate settings (as well as college and university,
governmental, and other more traditional institutional settings) engage in records
management activities and seek technological solutions to long-standing data management
problems.
Records managers have dealt with the information explosion by using principles of
information organization. The units that need to be organized in the administrative
environment are such things as directories, files, programs, and, at another level, such things
as field values. Organization can be by system (e.g., payroll, budget) or by type of record
(e.g., personal names, registration records). Records managers must keep track of information
that crosses system boundaries (e.g., personal names cross boundaries when the same names
are entered into several different files). There must be methods for handling concepts that
have the same names but different purposes (e.g., the concept of part time can have different
definitions in a university depending upon whether one is talking about payroll, faculty,
graduate students, or undergraduate students).
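The "part time" example above can be sketched as one concept name carrying context-dependent definitions; the thresholds here are invented for illustration:

```python
# Hypothetical sketch: the same concept name ("part time") resolved by
# different rules depending on the system context.
PART_TIME_RULES = {
    "payroll": lambda hours_per_week: hours_per_week < 30,
    "faculty": lambda teaching_load: teaching_load < 0.75,
    "graduate": lambda credit_hours: credit_hours < 9,
}

print(PART_TIME_RULES["payroll"](20))   # → True
print(PART_TIME_RULES["graduate"](12))  # → False
```

Making each definition explicit per context, rather than assuming one shared meaning, is exactly the documentation discipline records managers apply.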

Personal Information Management

Personal information management is defined as “the practice and the study of the
activities a person performs in order to acquire or create, store, organize, maintain, retrieve,
use, and distribute the information needed to complete tasks and fulfill various roles and
responsibilities.” Because personal information management is directly connected to an
individual’s everyday life, there has been growing interest in the development of tools and
devices that facilitate personal information management, as well as more investigation of
how people manage their personal information.
People organize their personal information in various formats—including paper
documents, emails, photos, music albums, recipes, and different types of digital files—in
their personal spaces such as offices and homes. When organizing personal information,
people have different habits and processes. For instance, some people have neatly organized
offices while others’ offices are messy with piles of paper documents. In the case of
organizing digital files, there have been ongoing debates on the necessity of organizing them
into folders, particularly since people can now simply search for them in personal devices.
However, many people report that they still organize digital files into folders; the act of
organizing files has more functions than just finding specific items. These functions include
reminding people of tasks and helping them further understand the relationships among
information items. In addition, a search function can be less useful when there are a number
of files with similar names or the exact key words cannot be recalled.
There are many factors that influence organization decisions, such as where and how
to organize items. Primary factors include use/purpose of the information item, format of the
information item, and topic of the information item. Organizing personal information can
be challenging because it involves various decisions that need to be made based on the future
use of, need for, interest in, and value of the information, all of which can be hard to predict
(and can change easily). Today, two issues also make personal information organization even
more challenging. These are information overload, which is when a person receives more
information than can be processed, and information fragmentation, which is having
information items scattered across multiple personal devices and tools in different formats.
However, regardless of the format, effectively organizing personal information items facilitates finding and using information efficiently, which can increase an individual’s productivity.
Knowledge Management
Everyone has heard the phrase “Knowledge is power.” Originally, the phrase applied
to individuals and implied that persons who increased their knowledge would be able to
increase their power in society. During the 1980s it came to be understood that the same thing
applied to organizations. At that time, there was much downsizing of organizations in order
to reduce overhead and increase profits. In the process, it became obvious that the
organizations lost important knowledge as employees left and took their accumulated years of
knowledge with them. In the same period there was much technological development that
was seen at first as a way to save costs by replacing human workers. Again, though, the
knowledge held and applied by humans was not all replaced by machines. For an
organization to survive, knowledge must be brought to bear on the challenges it faces. Management of that knowledge increases its power.
The idea of passing on knowledge gained in a work setting has existed for centuries.
Apprentices learned various trades by working alongside masters. Children often followed
parents into family businesses. More recently, there have been people known as mentors.
Also, a person leaving a job is often asked to train the replacement person before leaving.
Knowledge management (KM) is the process of capturing, developing, sharing, and
using organizational information to make good, well-informed decisions. This concept came
into being as an attempt to capture employees’ knowledge with advanced technology so that
the knowledge could be stored and shared easily. As people became overwhelmed with the
increased availability of information through rapid technological developments, knowledge
management took on the additional role of coping with the explosion of information. In the
KM context, the process comprises three major components: people, processes, and
technology.
Managing knowledge requires a definition of knowledge, a concept that has been
discussed by philosophers for years without complete resolution. It has been characterized in
several ways—for example, as residing in people’s minds rather than in any stored form; as
being a combination of information, context, and experience; as being that which represents
shared experience among groups and communities; or, as a high value form of information
that is applied to decisions and actions. R. D. Stacy makes the following observation:
Knowledge is not a “thing,” or a system, but an ephemeral, active process of relating.
If one takes this view then no one, let alone a corporation, can own knowledge. Knowledge
itself cannot be stored, nor can intellectual capital be measured, and certainly neither of them
can be managed.
However, Rosenfeld, Morville, and Arango posit, “Knowledge managers develop
tools, processes, and incentives to encourage people to share” what they know.
Dave Snowden notes that knowledge management started in 1995 with the
popularization of ideas about tacit knowledge versus explicit knowledge put forward by
Ikujiro Nonaka and Hirotaka Takeuchi. Nonaka and Takeuchi postulated that tacit knowledge
is hidden, residing in the human mind, and cannot be easily represented via electronics; but it
can be made explicit to the degree necessary to accomplish a specific innovation. They
described a spiral process of sharing tacit knowledge with others through socializing,
followed by listeners internalizing the knowledge, and then new knowledge being created, in
turn, to be shared. Snowden says that it does not follow that all knowledge in people’s minds
could or should be made explicit. Often, the knowledge that can be made explicit is just the
tip of the iceberg. However, early knowledge management programs “attempted to
disembody all knowledge from its possessors to make it an organizational asset.” Software
programs were created and are being used for this purpose. For example, Knowledge Base
Software from Novo Solutions claims that it can provide a training tool for new employees,
centralize and retain employee knowledge, and create and update categorized and searchable
knowledge management articles, among other things. Collaboration Solutions software
(formerly known as Lotus) from IBM claims to create, organize, share, and manage business
content in order to provide the right information effectively and efficiently to those who need
it, and to “gather and exchange information through professional networks and build
communities of experts to help execute tasks faster.”
Naresh Agarwal and Md. Anwarul Islam provide a list of technology and non-
technology tools and mechanisms for implementing different phases of the knowledge
management cycle (e.g., knowledge capture and creation, knowledge sharing and transfer,
and knowledge application and use). They put forth a number of considerations about the
place of technology in knowledge management:
• A single set of tools cannot be mandated because every organization and its
employees will need to decide for themselves which tools and technologies they find easy to
use and useful to their current needs.
• Technology tools keep changing, so there cannot be a permanent set of
recommendations that will hold true over time. What will remain consistent is the need for
knowledge creation, sharing, and use in organizations.
• An organization needs to factor in the cost of adopting any particular set of tools or
technology (i.e., buying/licensing and the cost of maintaining).
• Technology is not the most important component in KM implementations.
Knowledge management is about people, not about tools and technology. Technology is
needed to support people’s needs, and not the other way around.
M. C. Vasudevan, Murali Mohan, and Amit Kapoor observe that knowledge
management in an enterprise often involves
• identifying, selecting, and cataloging information resources that are pertinent to the
enterprise’s needs;
• identifying information flow patterns among individuals and among groups (e.g.,
finding out who asks what questions, learning what information is obtained from which
sources, determining what types of information are not easily available or accessible); and
• designing and developing user-friendly systems for accessing the enterprise’s
knowledge base.
Core issues of concern to people in the information organization business are those of
describing, classifying, and retrieving what has been stored. In the context of knowledge
management, this means that the organization’s knowledge must be sorted out, labeled (i.e.,
described), and categorized into different subjects or groups (i.e., a taxonomy) if it is to be
retrieved when needed. In their study of knowledge management in consulting firms, Ling-
Ling Lai and Arlene G. Taylor found that the firms required creation of a knowledge piece at
the end of each project. Further, each organization had a template with attributes and facets
appropriate to describing both tacit and explicit knowledge gained during the course of a
consultation. The researchers observed that the actions of capturing tacit knowledge and then
describing it are much like the process of organizing information in libraries: “Essentially,
descriptive cataloging and subject cataloging (in LIS terminology) are achieved when
consultants work on describing a knowledge piece by completing a standardized template
with a number of attributes, and further when they use facets to categorize the knowledge
piece. Whether it is called facet analysis, tagging, or providing metadata, the core meaning of
cataloging and classification exists in consulting firms as well as in libraries.”
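A knowledge piece of the kind the consulting-firm study describes can be sketched as a standardized template of attributes plus controlled facet lists; the attribute names, facet categories, and values below are illustrative assumptions:

```python
# Hypothetical "knowledge piece" template: descriptive attributes (akin to
# descriptive cataloging) plus controlled facets (akin to subject cataloging).
TEMPLATE_ATTRIBUTES = ["title", "client_sector", "author", "date"]
FACETS = {
    "activity": {"strategy", "implementation"},
    "industry": {"healthcare", "finance", "retail"},
}

def validate_knowledge_piece(piece):
    """Return attributes left unfilled and facet values outside the
    controlled lists."""
    missing = [a for a in TEMPLATE_ATTRIBUTES if not piece.get(a)]
    bad_facets = [(f, v) for f, v in piece.get("facets", {}).items()
                  if v not in FACETS.get(f, set())]
    return missing, bad_facets

piece = {"title": "CRM rollout lessons", "client_sector": "retail",
         "author": "consultant A", "date": "2024-05",
         "facets": {"activity": "implementation", "industry": "retail"}}
print(validate_knowledge_piece(piece))  # → ([], [])
```

Completing the template supplies the description, and choosing facet values from controlled lists supplies the categorization, which is the parallel to library cataloging that the researchers draw.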
CONCLUSION
This chapter discusses basic needs to organize, defines information organization, and
presents an overview of a number of different kinds of organizing contexts and environments.
While there are differences among these environments, there are also many points of
convergence. All of the contexts and environments are interested in describing resources for
retrieval and posterity purposes, providing access to resources and helping users to select
what is most appropriate for their needs, helping users to understand and explore the
information they encounter, analyzing content and describing it consistently, using categories
in beneficial ways, and so on.
The following chapters discuss in more detail the processes that have been developed
for information organization, those that are being worked on, and the issues that affect their
implementation. But first, in the next chapter, a historical look at the development of
organizing processes through a number of centuries serves to give us a perspective on where
we have been, where we are now, and how far we might go.
