The Reference Librarian, 50:234–247, 2009
Copyright © Taylor & Francis Group, LLC
ISSN: 0276-3877 print/1541-1117 online
DOI: 10.1080/02763870902961969

Optimal Results: What Libraries Need to Know About Google
and Search Engine Optimization

KAY CAHILL and RENEE CHALUT

Vancouver Public Library, Vancouver, British Columbia, Canada

Search engine optimization, or the practice of designing a web site
so that it rises to the top of the results page when users search for
particular keywords or phrases, has become so prevalent on the
modern web that it has a significant influence on Google search
results. This article examines the techniques used by search engine
optimization practitioners, the difference between “white hat”
and “black hat” optimization tactics, and why it is important for
library staff to understand these techniques and their impact on
search engine results pages. It also looks at ways that library staff
can help their users develop awareness of the factors that influence
search results and how to better assess the quality and relevance of
results listings.

KEYWORDS Google, search results, search engine results page, search engine optimization

INTRODUCTION

Internet marketing is big business. When your storefront is potentially the
screen of any computer connected to the Internet and the street on which
your shop is located is a network of fibre optic cables spanning the globe,
the number of consumers who could encounter your product is one that
companies could only dream of in previous, less networked times. The
challenge for businesses trying to sell their products and services on
the Internet is how to draw attention to their offering in a street that never
runs out of room for their competitors. From banner ads to pop-ups to
sponsored search, there are enormous payoffs for companies that can make
their product stand out from the crowd.

Address correspondence to Kay Cahill, Vancouver Public Library, 350 West Georgia St.,
Vancouver, British Columbia, Canada V6B 6B1. E-mail: kay.cahill@vpl.ca

Search engine optimization (SEO) is currently one of the key aspects of
successfully marketing products and services on the modern web. SEO
techniques are not in themselves new, but over time they have been refined
and reworked until they have come to have significant influence on the
pages most heavily relied on by Internet users seeking information—the
results of their Google searches.
This is significant for libraries because as we move increasingly toward
a model where our users expect to be able to search independently, our
role shifts more toward becoming the guide who provides the critical infor-
mation literacy instruction that enables them to perform those searches
effectively and evaluate the authority and relevance of the results they
receive in return. If SEO techniques are leading to changes in the order of
search results and the quantity of what amounts to targeted advertising
within those results, our users need to be aware of this and it’s our respon-
sibility, through the courses we teach and the advice that we give, to help
create that awareness. The more we can help library users become aware of
the limitations and benefits of search engines and teach them to recognize
authoritative sources versus self-styled experts and product placement, the
more effectively they will be able to construct their searches and dissemi-
nate the information they find.
This article seeks to explore both the evolution and effects of SEO
techniques, including the efforts of search engines to reduce the effects of
“black hat,” or illegitimate, SEO on search results, and the role of the library
in helping users understand what influences search results and how to
better assess their relevance and accuracy.

A BRIEF HISTORY OF SPONSORED SEARCH

In 2000, Google launched AdWords, a departure from the standard revenue
models for search engines at the time, which were heavily
reliant on individual web sites purchasing banner advertising space or
paying for inclusion in the search engine’s database to ensure more regular
spider visits and a subsequent bump up the search results lists.
Banner advertising was simplistic, an online version of traditional print
advertisements seen in newspapers and magazines; in addition to its limited
capacity for revenue generation, the increasingly garish and intrusive
designs of advertisers desperate for click-throughs eventually ended up
grating on users’ nerves and putting off as many people as they attracted.
Paid inclusion generated steadier revenue but not enough to fund an
individual search engine’s operations, especially given that users were
understandably wary of a system that theoretically placed web sites higher
on results lists because they had paid to be there rather than because of
their relevance to the search terms (Hedger 2004).
Then Google came on the scene, a young upstart determined to
change the face of Internet searching. Google eschewed the lure of both
banner advertising and paid inclusion in favor of AdWords, or keyword-
based, advertising. Google was not the first search engine to use this type of
sponsored search; it was pre-dated by both Overture in 1998 and BeFirst in
1999 (Fain and Pedersen 2006), but AdWords quickly overtook its competi-
tors and by 2008 Google was estimated to be in control of 69% of the online
advertising market (Baker 2008).
The premise behind AdWords and similar sponsored search products
is simple. Keywords are used to match brief advertisements to searches
entered by Google users. In addition to the main list of search results, the
advertisements retrieved are displayed under a separate “Sponsored Links”
header. The keyword matching is intended to ensure that only relevant
advertisements are shown, increasing the likelihood of the searcher clicking
through to the advertiser’s web site.
Revenue from AdWords derives from two sources: the number of click-
throughs on the advertisements themselves (the standard pay-per-click
model) and the results of keyword auctions. Advertisers bid on what they
consider to be the most relevant search words to increase their chances of
successful sales; for example, an online winter sports retailer might bid on
keywords including “skis,” “snowboard,” and “snowshoes.” The AdSense
program also offers a pay-per-click revenue-sharing scheme to third-party
web sites that choose to host sponsored AdWords advertisements on
their sites.
Google AdWords ads are part and parcel of Google searching, but they
are also clearly separated from the main search results list. Although the
color scheme is the same, the placement on the page and the small grey
“Sponsored Links” header are a clear sign to all but the most novice
searcher that these are not a part of the main search results. Basic Internet
instruction courses offered by libraries almost always include an advisory
note to help new users differentiate between sponsored results and regular
results.
Clearly there is a huge market for AdWords and enough people are
clicking through and following up with purchases to keep both Google
itself and its millions of third-party advertisers interested and subsidized.
However, AdWords remains first and foremost an obvious marketing ploy.
Even the name makes it clear that, regardless of the connotations of the
public label “sponsored,” they are advertisements.
What’s wrong with obvious advertising? First and foremost, the Internet
is pretty tired of it. Users who lived through the era of endless pop-ups,
animated gifs, and garish banner ads have grown to loathe web advertise-
ments, especially the obvious ones. Witness the enormous furor that erupted
on LiveJournal early in 2008 when Sup, the site’s Russian owners, announced
that they were removing basic accounts and forcing users to choose
between paid accounts and free “sponsored” accounts that covered users’
blogs in third party advertisements. Sixty-eight pages (5,000 individual com-
ments) of furious user responses and a 1-day “content strike,” where many
of LiveJournal’s most popular bloggers boycotted the site entirely, rapidly
ensued, forcing Sup to retract the cancellation of the basic accounts (Gladkova
2008).
As a result of this growing antipathy toward traditional web advertising,
businesses are turning to other solutions. YouTube, for example, is proving
a popular venue for ad agencies who carefully construct video advertising
campaigns to look like typical amateur videos, often to the disgust of site
users when the truth emerges (Sidebar 2008).
The holy grail for many companies is one simple thing: achieving as
prominent a place as possible not in the sponsored links section but in the
main results list. Whether a company has products to sell or is simply trying
to achieve a wider audience or more page hits, the obvious way to achieve
its goal is to appear in the place where most of the searcher’s attention is
focused. This is where SEO comes in. SEO is about creating a web page that
will appear as high as possible on the search engine results page (SERP) for
a particular keyword search.

PAGERANK AND THE SEARCH ENGINE RESULTS PAGE

At the heart of Google's search results (and Google itself; BackRub, the Google
prototype, evolved from a research project by founders Larry Page and
Sergey Brin during their days at Stanford University) lies PageRank. PageRank
is based on a concept familiar to every library school student: citation analysis.
On the web, this means that the unique PageRank algorithm essentially uses
the popularity of a page to define its place in the search engine results list
for a particular search. A hyperlink to Page A from Page B counts as one
vote for Page A on the PageRank scale.
A simple concept, but the execution is somewhat more sophisticated
than may first appear. PageRank not only measures the number of times an
individual page is linked to, but also weights the relative importance of the
pages that are linking to it on a scale of 1 to 10 to provide a level of authority
to the individual vote. Therefore, a link from the Harvard University web
site is accorded significantly more importance by PageRank than a link from
Dave’s Blog.
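To make the voting metaphor concrete, the following sketch implements a deliberately simplified PageRank-style calculation in Python. It is an illustrative toy under assumed parameters (the damping factor, iteration count, and example link graph are invented for demonstration), not Google's actual algorithm.

```python
# A minimal, illustrative PageRank-style power iteration (not Google's real code).
# The damping factor, iteration count, and toy link graph are assumptions.

def page_rank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links out to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}                  # start everyone equal
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            targets = outlinks or pages                           # dangling pages share evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share                         # each link is a weighted "vote"
        rank = new_rank
    return rank

# Toy graph echoing the Harvard-versus-Dave's-Blog example above.
graph = {
    "harvard.edu": ["daves_blog"],
    "daves_blog": ["harvard.edu"],
    "news_site": ["harvard.edu"],
    "forum": ["harvard.edu", "news_site"],
}
print(sorted(page_rank(graph).items(), key=lambda kv: -kv[1]))
```

Running the sketch shows harvard.edu accumulating the most weight, and daves_blog ranking above the other pages purely because the single high-weight site links to it.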
As soon as Google released PageRank, webmasters and web site owners
set to work figuring out how to make it work to their advantage. Manipulating
the ranking system of Google and other search engines is known as SEO.
For SEO practitioners, the key lies in manipulating these rankings so that their sites rise
up within the SERP listings. Their goal is not just to be on page one of the
results, but to achieve one of the top five or ten spots that are most likely to
be clicked on by the information seeker who entered the original search.
SEO practices are divided into two camps: white hat and black hat. The
intentions of the two camps are similar, as are many of the practices. The
former are (or represent) legitimate businesses or organizations with infor-
mation to present or products to sell the web searcher. Our aforementioned
winter sports retailer wants to make its site findable by someone who wants
to buy a snowboard, so it will employ certain accepted tools and techniques
to optimize accessibility to potential snowboard buyers.
Black hat consultants or programmers and site owners want you to
look at their web sites too. Site owners may have a legitimate product to
sell or something interesting to show you, but often it is not what you were
looking for. Black hat SEO consultants do not actually care whether any-
body buys anything in particular; they get paid by the click, so their goal is
purely to get more visitors to a site regardless of what users do there. As a
result, they are more than happy to resort to rather devious and sleazy tricks
to send the user to sites that are not relevant to their search purely in the
interest of increasing traffic to those sites. Our poor snowboarder is simply
trying to find a new board, but the black hatters lure him or her into a site
advertising time shares, online casinos, or fake Rolex watches. White hat SEO
sites potentially want you to buy a product, but they do aim for a certain
relevance when pushing their sites to users.
People generally associate the term “spam” with unwanted e-mails that
clog their inboxes, but spam also refers to black hat SEO techniques that
mislead web searchers for commercial purposes, and to the sites that use them.
Google defines search engine spam as practices that attempt to deceive the
search engine and ultimately the searcher by manipulating search engine
results to push irrelevant pages to the top of the rankings. Accepted SEO
practices, which maintain a degree of relevance to the commercial page push,
are not considered search engine spam; black hat techniques, however, are.
Both legitimate and illicit SEO techniques are mainly based around two
types of manipulation: page content and links. The earliest tricks of the SEO
trade involved modification of content to improve a page’s chance of
matching a searcher’s keyword inquiries. Google’s PageRank was devel-
oped as a direct response to the early practice of keyword stuffing, which
led to porn sites (which rapidly became very good at this) regularly appear-
ing at the top of search results regardless of what keywords the searcher
entered in the search engine. The web designers simply populated the
meta keywords field in the HTML code with keywords that they anticipated users
were likely to search, such as “president” or “united states,” often repeating
the keywords ad infinitum.
Keyword stuffing still exists today, but search engines have smartened
up and now look for repeated keywords in code as an indication of
malfeasant design. Black hat SEO practitioners need to be trickier to fool
Google’s algorithms. Black hat coders may try to hide keywords in the page
itself by cloaking (i.e., making them the same color as the background and
therefore invisible to the user). Another black hat practice is to create a
doorway page for a web site, which is filled with keywords that look good
to the search engine. The users, however, are redirected to another page
that is often irrelevant to their search and full of spam ads. Doorway pages
are stringently hunted down by Google, and at one point led to the removal
of the BMW site from Google’s index.
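As a rough illustration of the kind of repetition check described above, the Python sketch below counts how often each term appears in a page's meta keywords field and flags implausible repetition. The threshold is an assumption chosen for demonstration; the heuristics search engines actually apply are far more elaborate and are not published.

```python
# Illustrative check for keyword stuffing in a meta keywords field.
# The repetition threshold is an assumed value, not a documented rule.
from collections import Counter

def flag_stuffed_keywords(meta_keywords, max_repeats=3):
    terms = [t.strip().lower() for t in meta_keywords.split(",") if t.strip()]
    counts = Counter(terms)
    return {term: n for term, n in counts.items() if n > max_repeats}

meta = "president, united states, president, president, president, united states, president"
print(flag_stuffed_keywords(meta))   # {'president': 5}
```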
Black hatters have now turned to other sources of keywords: third
party web sites. This practice is known as “site scraping.” Bots scrape out
original content from authentic web sites or blogs and repost the content on
the illegitimate site for search spiders to crawl. The content itself may
consist of accurate information, but it is information that has been stolen
from its owner, taken out of context, and used to lure searchers into spam
sites under false pretences. In addition to misleading the searcher and
annoying search engine coders, site scraping breaks copyright laws.
White hat search engine optimization techniques also employ keyword
analysis and manipulation. Google Analytics and other search marketing
services release (often for a price) keyword query data. Site designers, SEO
companies, keyword analysts, and other involved parties study the ever-
changing choices of keywords and keyword strings used by web searchers
and expend considerable time and energy choosing just the right terms and
phrases to include in their web site’s content. They have also learned how
to place those keywords on a webpage in ways that will enhance their like-
lihood of being crawled by search spiders (e.g., by putting them in the
page’s title field). Legitimate companies use keyword density tools to maxi-
mize the appearance of relevant words while avoiding the black hat practice
of keyword stuffing. The main difference here is that white hat keyword
tools consider the intentions of the user and aim to achieve relevance;
again, consider the snowboard buyer and snowboard seller seeking each
other on the web versus an online poker site luring in the unsuspecting
snowboarder.
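As a simple sketch of what such a density tool measures, the Python snippet below computes how often a target keyword appears relative to the total word count of a page's visible text. The warning threshold is an assumed rule of thumb for illustration only, not a figure published by any search engine or SEO vendor.

```python
# Toy keyword density calculation; the 3% warning threshold is an assumption.
import re

def keyword_density(page_text, keyword):
    words = re.findall(r"[a-z0-9']+", page_text.lower())
    hits = sum(1 for word in words if word == keyword.lower())
    return hits / len(words) if words else 0.0

text = "Snowboard sale! Buy a snowboard today from our snowboard experts."
density = keyword_density(text, "snowboard")
print(f"Density: {density:.1%}")
if density > 0.03:
    print("High density: risks reading as keyword stuffing rather than relevance.")
```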
White hat SEO practices have even led to improvement in Google results
in some cases. For instance, in the past a search for prescription drug names
would bring results that were mainly companies that produced or marketed
the drug. Many of these sites would have varied domain names and avoid the
direct look of a commercial site to hide corporate affiliations and commercial
motivation and make the searcher think the information about the drug was
authoritative and impartial. A winter 2009 search for the keyword “Prozac”
returned results with actual authoritative medical sites receiving high page
rankings. RxList and MedicineNet, both respected information sources, must
obviously be using white hat SEO techniques to push themselves up past the
sites wanting to sell you the drug or just get you to click in.
An SEO technique that is more of a direct response to Google's PageRank
is link manipulation, which is also known as “spamdexing” or “link
spam.” Links become a commodity as companies are able to pay for links to
their own sites from “link farms,” illicit web sites that are created especially
for the purpose of pushing up the site’s PageRank through link-ins. Often
these sites are blogs, fraudulently set up by black hatters. The bots will then
continuously ping blog update services, making it appear that the blog has
been updated. This attracts search engine spiders, pushing up a page's
rank by illicit means. Link farms and blog pinging also suck up bandwidth
and server capacity.
Another common type of link spam is comment spam. Black hatters
will design a bot that creates a false web identity, which then jumps around
the web to blogs and other web sites that permit comments. These bots will
drop links back to the spam sites, again driving up their PageRank. Com-
ment spam degrades the value of comments on web sites and is extremely
annoying to the owners of the site/blog, who find themselves cleaning out
the commenting equivalent of fake Rolex ads from their site.
Referrer spam is a black hat SEO technique whereby an innocent web
site is accessed by the bad guys (usually through a script or bot) so that
their URL will show up in the target site’s referrer logs, which are often left
public. Spiders crawl these logs, and will push up the black hat referrer’s
PageRank as well as (just as in the case of comment spam) eating up the
target’s server resources.
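One possible defence, sketched below in Python under assumed log and blocklist formats, is to filter known spam referrers out of any statistics page before it is published, so that the spammers' URLs never appear where spiders can crawl them. The log layout and blocklist entries are invented for illustration.

```python
# Illustrative scrub of a referrer log before publishing public statistics.
# The log format (one referring URL per line) and the blocklist are assumptions.
from urllib.parse import urlparse

BLOCKLIST = {"cheap-watches.example", "casino-spam.example"}

def clean_referrers(log_lines):
    kept = []
    for line in log_lines:
        host = urlparse(line.strip()).netloc.lower()
        if host and host not in BLOCKLIST:
            kept.append(line.strip())
    return kept

log = [
    "http://legitimate-blog.example/review",
    "http://cheap-watches.example/buy-now",
]
print(clean_referrers(log))   # only the legitimate referrer remains
```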
White hat SEO also employs techniques related to link placement but,
as with keyword manipulation, in a more honest way. Web site owners are
encouraged to submit their sites to legitimate directories such as Yahoo;
these directories carry proportionately more weight when PageRank is
calculated. Google also receives submissions directly from site owners to
include in its index. A notable difference between black and white hat SEO
is that black hat practitioners are, for the most part, only interested in click traffic. They
are aware that their tricks are illicit and try their best to sneak past the
protective measures implemented by search engines to get those clicks as
fast as they can.
Black hat SEO designers and the companies they represent are gener-
ally not interested in customer trust or a long-term client base. A legitimate
site works on the long-term, long-tail theory: the owner is (or ought to be)
patient enough to wait until other sites link into his or her site and confident
enough in the quality and utility of the site that this will eventually happen.
Google’s PageRank does in fact give some weight to a page’s longevity
(which has led to a back alley market in old domain names).
Ranking manipulation does not always happen for financial gain. The
concept of “Google bombing” drew public attention a few years ago when
it became widely known that searching the keywords “miserable failure”
brought U.S. President George Bush’s web site right to the top of Google’s
results. Part prank, part social commentary, the perpetrators of this and
other Google bombs liberally peppered their own and other web sites and
blogs with links to the White House site using “miserable failure” as anchor
text.
For quite a long time, Google’s response to the bombing was to do
nothing, citing its desire to refrain from manipulating what it saw as the
“objectivity” of its PageRank system (Mayer 2005). However, after many
well publicized bombs, Google seemed to realize that PageRank was too
easily manipulated to be entirely objective and had received numerous
complaints from users who thought the company endorsed the opinion that
Bush or Jimmy Carter or others were “Miserable Failures.” In January 2007,
Google announced it was tweaking its search algorithms to fight Google
bombs (Cutts 2007).

GOOGLE STRIKES BACK

First and foremost, Google is a money-making enterprise, and a lucrative
one. It achieved its current dominance largely because web users started
depending on PageRank to bring them the results they wanted while simul-
taneously guarding them from the garbage that they often had to wade
through when scanning the SERPs from other search engines. Obvious
manipulations, such as attention-grabbing Google Bombs, would under-
mine confidence in the product. Google’s punishment for too blatant
attempts to subvert PageRank is simple and severe: banishment. At a more
granular level, the company works constantly to increase and tweak the
“signals” used to assign weighting on the SERP. A 2007 New York Times arti-
cle describes lead Google engineer Amit Singhal’s office whiteboard, cov-
ered in user complaints and new algorithms that attempt to provide
accurate, untainted results (Hansell 2007).
These increasingly complex signals are intended both to prevent the
deliberate manipulation of SERPs and to ensure that Google maintains its
ability to return useful, relevant results for searches featuring the most
scattered handful of keywords. At the current time, there are known to be
more than 200 of these factors, or “signals,” working behind the scenes to
determine the relevancy of each page that appears on an individual SERP.
“PageRank is but one signal. Some signals are on Web pages—like words,
links, images, and so on. Some are drawn from the history of how pages
have changed over time. Some signals are data patterns uncovered in the
trillions of searches that Google has handled over the years” (Hansell 2007).
Like the original PageRank algorithm, most of these changes remain closely
guarded trade secrets.
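Conceptually, combining many signals into a single relevancy score can be pictured as a weighted sum, as in the deliberately simplified Python sketch below. The signal names and weights here are invented purely for illustration; Google's actual signals and their weighting are, as noted, closely guarded.

```python
# A deliberately simplified picture of blending ranking "signals" into one score.
# Signal names and weights are invented for illustration; the real ones are secret.

def relevancy_score(signals, weights):
    """Both arguments are dicts keyed by signal name, with values in [0, 1]."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

weights = {"link_authority": 0.4, "keyword_match": 0.3, "freshness": 0.2, "user_history": 0.1}
page = {"link_authority": 0.8, "keyword_match": 0.9, "freshness": 0.2, "user_history": 0.5}

print(round(relevancy_score(page, weights), 2))   # 0.68
```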
In 2005, Google released a tool that enabled web site owners and
designers to fight link spam themselves. This is an HTML attribute called
“nofollow,” which tells Google's spiders not to attribute weight to certain
links on the site or to any links on a whole site. To a flurry of criticism by
bloggers, Wikipedia began using nofollow in 2007 (Baker 2007). The effec-
tiveness of the nofollow attribute as a “link condom” (and the fairness of its
application) has been a subject of much debate (Boggs 2008), but the attribute
has served to reduce some black hat SEO practices such as comment spam.
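In HTML terms, nofollow is simply a rel value on a link (for example, <a href="http://example.com" rel="nofollow">). As a hedged sketch of how a blogging platform might apply it automatically to user-submitted comments, the Python snippet below rewrites anchor tags so they carry the attribute; the regex-based approach and function name are assumptions for demonstration, and real platforms rely on proper HTML sanitizers rather than this simplification.

```python
# Minimal sketch: add rel="nofollow" to links in user-submitted comment HTML so
# that spam links pass no ranking weight. Illustrative only; production systems
# should use a real HTML sanitizer instead of a regular expression.
import re

def nofollow_links(comment_html):
    def add_attr(match):
        tag = match.group(0)
        if "rel=" in tag.lower():
            return tag                          # leave existing rel attributes alone
        return tag[:-1] + ' rel="nofollow">'
    return re.sub(r"<a\s[^>]*>", add_attr, comment_html, flags=re.IGNORECASE)

comment = 'Great post! Cheap watches at <a href="http://spam.example">this site</a>.'
print(nofollow_links(comment))
# Great post! Cheap watches at <a href="http://spam.example" rel="nofollow">this site</a>.
```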
Google also relies on its users to inform it of problem searches and sites.
The “dissatisfied” link at the bottom of a results page provides a way for
disgruntled searchers to report black hat practitioners. “Preferred users” get
further protection against rank manipulation. Google will remember you and
your searches and return results ranked with some weight toward your search
history (a practice that Google is using increasingly, but which has drawn
heavy criticism from a privacy standpoint). Users signed in to their Google
accounts also are able to rate or reject any site that appears in a search result.
All of these (and many other) signals play a part in defining the final
order in which pages appear on the SERP. Google is understandably cagey
about exactly what these signals are, given that they lie at the heart of the
reputation for relevant results that has made it not just the best-known
and most-used search engine on the Internet, but one of the largest and
most influential companies in the world.
However, SEO operators and web site owners are just as determined to
keep manipulating SERP rankings to their own ends. Even in the face of 200
signals, vigilant Google employees, searcher contributions, and threat of
banishment, spam or joke sites still float their way to the top.
New algorithms introduced by Google in January 2007 defused
the original Google Bomb that resulted in the official web site of former
President George Bush appearing at the top of the SERP for the search
terms “miserable failure” (Sullivan 2007). However, the site reappeared in
the results when the search was repeated in January 2009.
Exceeding projections even in a time of economic downturn, the
search engine marketing industry is expected to continue growing, reaching
$25.2 billion in 2011, up from $11.5 billion in 2007 (Meiners 2008). Unscrupulous
businesses and SEO black hatters continue to want their share of this.

INFORMATION SEEKING BEHAVIORS

Although more in-depth research needs to occur to analyze exactly how
extensively SEO practices are influencing the final list order on SERPs, the
spread of SEO marketing and the seriousness with which search engines are
treating black hat SEO practitioners provide clear evidence that this is an
issue of which information seekers should be aware.
Moreover, studies of user behavior give rise to additional concerns over
SERP manipulation. Google has been boss of the search engine scene for
most of the decade since its launch (the verb “to google” first appeared in
the New York Times in 2001), and for most people (not just casual searchers
but even many trained information professionals) it remains not just the first
port of call but often the only port of call when looking for information.
As of December 2008, according to Internet marketing company Hitwise,
Google was the search engine of choice for over 72% of U.S. searches.
Google has almost made the search process too easy: it has trained
information seekers to expect useful, relevant results with almost every
single search, regardless of the vagaries of the keywords selected. The
smarter Google’s 200 signals get at weeding out irrelevancies, correcting
misspellings, and accurately identifying when a user is looking for Turkey
(the country) as opposed to turkey (the bird), the more people trust and
rely on that first page of results returned by their search.
The result of this is that searchers have gotten lazy and are getting
lazier. Jansen and Spink’s (2006) study into user search practices found that
73% of search engine users do not look at any search results beyond the
first page. A recent iProspect (2006) survey revealed that an increasing
number of search engine users are unwilling to look past the first three
pages of results. The impact of this can be clearly seen with a sample search
for McDonald’s; all results in the first three pages are produced by some
branch of the corporation.
These are significant and somewhat frightening statistics. They mean that
three-quarters of searchers never venture beyond that first page, that top ten
listing targeted so aggressively by myriad black and white hatted SEO prac-
titioners. More interesting still would be a similar study into the search
behavior of reference personnel—we who think of ourselves as search pro-
fessionals—how many search engines do we use and how many page
results will we look through before we give up?

THE ROLE OF THE LIBRARY

The take-home message for libraries is that the end users of search engines,
our patrons, are not fully aware of all the factors that contribute to the SERP
that they see in front of them following a Google search, and that one of the
keys to helping them fully understand and exploit the capacity of search
engines is changing their own entrenched information seeking behaviors.
One of the ways to pass knowledge and tools on to our users is to increase
our own knowledge and use the tools ourselves. As search engines become
more important to reference work on both sides of the desk, a sound pro-
fessional familiarity with their workings is essential. Including web sites and
blogs such as Search Engine Watch and Search Engine Land in your list of
professional reading is a good way to keep up to date with new developments
in the search engine world.
Reference staff also need to restrain themselves from the temptation to
rely too much on a single search engine. A case in point is chat reference.
As the demand for the service grows (QuestionPoint [2009], a major pro-
vider of chat reference, received 532,000 questions in 2008, up from 450,000
in 2007), pressure on librarians to retrieve and deliver answers as rapidly as
possible increases. As anyone who has ever staffed a virtual reference desk
knows, impatient users wanting immediate results often do not allow the
luxury of searching multiple sources—if you take too much time, you can
lose them. The constant pressure to return sites within mere seconds can
easily lead to habits of using a single search engine and not taking that
teachable moment to explain the workings of search engines and why it
may not be a good idea to restrict yourself to a single one.
The casual searcher doesn’t need to know how the PageRank algorithm
works or what those 200 various signals that go into calculating the rele-
vancy ranking of a particular search result are, but a better understanding of
the basic mechanics of how search engines generate results is an important
weapon for any user to carry into the search arena. The more critical the
eye searchers bring to the SERP, the better they are able to sift out the most
relevant, authoritative information from the list. The more sophisticated and
widespread SEO techniques become, the greater the danger that the end
user is not viewing the information most relevant to his or her search, but
rather is viewing intelligently targeted advertising or simply aggressive
marketing of an irrelevant product by black hatters.
In the modern information environment, we are already seeing the
evolution of libraries from gatekeepers of information to guides and tutors
on how to find it, how to use it, and how to better understand its relevance
and its limitations. Teaching our users to be aware of SEO and its influence
on search results is another important aspect of the librarian’s role as infor-
mation literacy advocate and instructor.
Librarians can build the techniques needed to develop this critical eye
into the basic courses that they teach to help their patrons develop effective
Internet skills. They can also teach people to use the tools that Google has
created to help weed out the spam sites pushed by the black hatters: the
“dissatisfied” link and the “promote” and “remove” buttons. As part of this
information literacy instruction and skill-building, librarians can explore
the abundance of tools and resources that exist on the web that can help
provide a broader, more unbiased view of the products that the SEO
merchants work so hard to push to the top of the SERP.
Vertical search engines such as Google’s Product Search (formerly
known as Froogle: <http://www.google.com/products>) provide an excellent
starting point for the evaluation of search results when looking to purchase
commercial products. Product Search offers options for the searcher to sort
results by relevance, price, product ranking, or retailer ranking, in addition
to providing access to a wealth of customer reviews and ratings. This allows
the searcher to sort, resort, compare, and contrast the products on offer,
rather than simply being presented with a list whose order is dictated by the
marketing prowess of the company. Moreover, the inclusion of reviews and
ratings allows the searcher to access the collective experience of all the
users who have previously bought the product or interacted with the ven-
dor. This has been taken a step further by sites like Get Satisfaction (<http://
www.getsatisfaction.com>) and Restaurantica (<http://www.restaurantica.com>),
which are organization- and community-driven ranking sites. Although user
ratings and reviews are not necessarily objective, they provide immediate
feedback on commercial products and services from the people who have
actually used them, which is invaluable for the searcher trying to filter out
the best product from the array on offer.
Libraries also play a role in teaching users to look beyond not just the
first page of their Google results, but beyond Google itself. There is a
wealth of information on the deep web locked behind barriers, such as reg-
istration and authentication, that users who never look beyond Google are
missing. Introducing users to subject-specialized resources, full-text article
databases, and vertical search products gives them access to a new range of
information and authority. For most library staff, these things are second
nature; we use them without even thinking about it to verify, double-check,
and compare and contrast the information we are finding. What is so critical
in the modern information environment, when more data than one person
could process in his or her lifetime is just a mouse-click away, is sharing
these skills with our patrons to help them see their own search results
through that all-important critical eye.

CONCLUSION

Without a more extensive study into the impact of SEO practices on search
engine results, it is hard to say how detrimental SEO actually is to SERP list-
ings. Moreover, not all SEO has a malign influence on search results. The
influence of SEO may be self-correcting to some degree; as SEO practices
become more widespread and better understood, increasing numbers of
web sites will simply incorporate even the more sophisticated optimization
techniques as standard, eliminating some of the advantage held by those
web sites that currently use the most effective SEO practices. The search
engines themselves are strongly committed to the elimination of black hat
SEO wherever possible, although fighting this particular battle to some
extent mirrors the two steps forward, one step back campaigns against e-mail
spam, spyware, and various other scourges of the digital age.
Some have speculated that the PageRank system and SEO tactics, both
white and black hat, have led to a World Wide Web where the information
users end up looking at (those vital top ten results) is overwhelmingly
skewed toward the white, English-speaking mainstream. As Segev (2008)
wrote, “From the top down view, it is the operation of corporations and
organizations that own ‘authority sites,’ which provide, organize and
customize online information, and their interactions with governments
and states. Together these dominant actors shape our information society,
and thus their strategies and tactics require investigations” (para. 3). A
search for “war” and “Iraq,” for example, brought back a list of results that
was, on those all-important first three pages, composed almost exclusively
of web sites from Western countries, whereas a search for “AIDS” contained
only a single non-North American site.
Both on the reference desk and in the information literacy programs
we offer, libraries play a vital role in helping users learn to be realistic about
the searches they perform and the results they get back in return. Over the
decades and centuries, libraries have striven to provide access to a spectrum
of ideas and perspectives. By restricting ourselves and our users to the top
search engines, we restrict that access and end up with those first pages of
ranked sites that have frequently been pushed up there by the deliberate
effort of both black and white SEO techniques. The ease of keyword
searching too often breeds a sense that whatever results come back are
good enough, but by learning to refine searches effectively and develop a
critical eye when looking at the SERP, it’s possible to do much better.

REFERENCES

Baker, Loren. 2007. Nofollow Wikipedia to PageRank zero campaign. Search
Engine Journal. http://www.searchenginejournal.com/nofollow-wikipedia-to-
pagerank-zero-campaign/4304/ (accessed January 28, 2009).
Baker, Loren. 2008. Google now controls 69% of online advertising market. Search
Engine Journal. http://www.searchenginejournal.com/google-now-controls-
69-of-online-advertising-market/6632/ (accessed January 5, 2009).
Boggs, Chris. 2008. The great Nofollow link debate of 2008. Search Engine Watch.
http://searchenginewatch.com/3628452. (accessed January 28, 2009).
Cutts, Matt. 2007. A few words about Google bombing. The official Google
Webmaster Central Blog. http://googlewebmastercentral.blogspot.com/2007/
01/quick-word-about-googlebombs.html (accessed January 28, 2009).
Fain, Daniel, and Jan Pedersen. 2006. Sponsored search: A brief history. http://
www.business.ualberta.ca/kasdemir/ssa2/fain_pedersen.PDF (accessed January 5,
2009).
Gladkova, Svetlana. 2008. Livejournal is forced to bring basic accounts back. Profy.
http://profy.com/2008/07/17/livejournal-forced-return-basic-accounts/ (accessed
January 5, 2009).
Hansell, Saul. 2007. Google keeps tweaking its search engine. New York Times. http://
www.nytimes.com/2007/06/03/business/yourmoney/03google.html?pagewanted=1
(accessed January 5, 2009).
Hedger, Jim. 2004. Paid inclusion going the way of the dodo. Zeromillion.com. http://
www.zeromillion.com/webmarketing/web-site-submission-google.html (accessed
January 5, 2009).
iProspect. 2006. iProspect search engine user behavior study. www.iprospect.com/
premiumPDFs/WhitePaper_2006_SearchEngineUserBehavior/ (accessed January
27, 2009).
Jansen, B., and A. Spink. 2006. How we are searching the world wide web. Informa-
tion Processing and Management 42: 248–263.
Meiners, Janet. 2008. Search engine marketing projected to reach $25.2 billion in 2011.
Marketing Pilgrim. http://www.marketingpilgrim.com/2008/03/search-engine-
marketing-to-projected-to-reach-252-billion-in-2011.html (accessed January 28,
2009).
Mayer, Marissa. 2005. Googlebombing failure. The official Google Blog. http://
googleblog.blogspot.com/2005/09/googlebombing-failure.html (accessed January 28,
2009).
QuestionPoint. 2009. QuestionPoint statistical reports. http://www.questionpoint.org
(accessed January 30, 2009).
Segev, Elad. 2008. Search engines and power: A politics of online (mis-)informa-
tion. Webology 5, no. 2. http://www.webology.ir/2008/v5n2/a54.html (accessed
January 30, 2009).
Sullivan, Danny. 2007. Google kills Bush’s miserable failure search & other Google
bombs. Search Engine Land. http://searchengineland.com/google-kills-bushs-
miserable-failure-search-other-google-bombs-10363 (accessed January 29, 2009).
Wii Fit girlfriend viral video is actually an ad. 2008. Sidebar: The Gamespot Newsblog.
http://www.gamespot.com/pages/news/show_blog_entry.php?topic_id=
26429442&part=rss&subj=6192391 (accessed January 5, 2009).
