• Branding
• Increasing & Maintaining Search Rankings
• Increasing the Number of Visitors through Organic Search Traffic
• Increasing E-commerce Sales
• Customer Service
Developing an SEO Plan Prior to Site Development
• SEO professionals must maintain an ongoing research process for analyzing
how the search landscape is changing, because search engines continuously
evolve to improve their services.
• Organizations should take many factors into account when
pursuing an SEO strategy, including:
• What the organization is trying to promote
• Target market
• Brand
• Website structure
• Current site content
• Ease with which the content and site structure can be modified
• Any immediately available content
• Competitive landscape
Understanding Your Audience and Finding Your Niche
• Search engines work through three stages: Crawling, Indexing, and Ranking.
1. Crawl: Discover new and updated content on the Web by following links
from page to page.
2. Index: Store and organize the content found during the crawling
process. Once a page is in the index, it is in the running to be displayed
as a result for relevant queries.
3. Rank: Order the indexed results on the search engine results page
(SERP), from the most relevant content to the least.
• What is search engine crawling?
Crawling is the discovery process in which search engines send out
a team of robots (known as crawlers or spiders) to find new and
updated content. Content can vary — it could be a webpage, an
image, a video, a PDF, etc. — but regardless of the format, content
is discovered by links.
• Alternatively, crawling is the process of fetching all the web pages
linked to a website.
• This task is performed by software called a crawler. Crawlers, also
known as spiders or bots, visit websites and send the information they
collect back to their parent search engine.
What is a Crawler?
• Crawlers are search engines' automated robots that can
reach the many billions of interconnected documents on the
WWW.
• A crawler is a program that systematically browses the World
Wide Web in order to create an index of data.
• A web crawler (also known as a web spider) is a program
that browses the World Wide Web in a methodical,
automated manner. A web crawler is one type of bot. Web
crawlers keep a copy of all the visited pages for later
processing.
In other words….
• The goal is to discover all the public pages on the World Wide Web and
then present the ones that best match the user's search query. The first
step in this process is crawling the Web.
• The search engines start with a set of high-quality seed sites, and then
visit the links on each page of those sites to discover other web
pages.
• The search engine then loads those other pages and analyzes their
content as well.
• This process repeats over and over until crawling is complete. It is
enormously complex, as the Web is a large and complex place.
• Search engines do not attempt to crawl the entire Web every
day.
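The crawl loop just described can be sketched in a few lines of Python. This is a toy illustration under simplifying assumptions (a small seed list, a single process, no robots.txt handling or politeness delays, all of which real crawlers require):

from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse
from urllib.request import urlopen


class LinkExtractor(HTMLParser):
    """Collect the href targets of anchor tags on a page."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)


def crawl(seeds, max_pages=50):
    """Breadth-first crawl starting from a list of seed URLs."""
    frontier = deque(seeds)   # URLs discovered but not yet fetched
    seen = set(seeds)         # never queue the same URL twice
    pages = {}                # url -> raw HTML (the "copy of visited pages")
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except (OSError, ValueError):
            continue          # skip unreachable or malformed URLs
        pages[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for href in parser.links:
            link = urljoin(url, href)   # resolve relative links
            if urlparse(link).scheme in ("http", "https") and link not in seen:
                seen.add(link)
                frontier.append(link)
    return pages

For example, crawl(["https://example.com/"]) would fetch that page, queue every link found on it, and continue until 50 pages have been stored.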
Indexing
• Search engines process and store information they find in an
index, a huge database of all the content they’ve discovered and
deem good enough to serve up to searchers.
• Indexing is the process of creating an index of all the fetched web
pages and keeping them in a huge database from which they can
later be retrieved.
• An index is another name for the database used by a search
engine. It contains information on all the websites the search
engine was able to find. If a website is not in a search engine's
index, users will not be able to find it using that search engine.
Search engines regularly update their indexes.
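A minimal sketch of such a database is the classic inverted index, which maps each term to the set of pages containing it. The function below assumes a pages dict like the one returned by the crawler sketch above (URL to raw text); real indexes also store term positions and metadata, and are heavily compressed and sharded:

import re
from collections import defaultdict


def build_index(pages):
    """Map each term to the set of URLs whose content contains it."""
    index = defaultdict(set)
    for url, text in pages.items():
        for term in re.findall(r"[a-z0-9]+", text.lower()):
            index[term].add(url)
    return index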
Difference between Indexing and Crawling
• Crawling is the discovery and fetching of web pages; indexing is the
storing and organizing of the fetched content so that it can later be
retrieved for relevant queries.
• Boolean searches
Boolean searches use Boolean terms such as AND, OR, and NOT. This
type of logic is used to expand or restrict which documents are
returned in a search.
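Boolean logic maps directly onto set operations over an inverted index like the one sketched above. Below is a hypothetical evaluator for single-operator queries, not any engine's actual implementation:

def boolean_search(index, query):
    """Evaluate a single-operator query such as 'cats AND dogs'."""
    left, op, right = query.lower().split()
    a = index.get(left, set())
    b = index.get(right, set())
    if op == "and":
        return a & b   # AND restricts: pages containing both terms
    if op == "or":
        return a | b   # OR expands: pages containing either term
    if op == "not":
        return a - b   # NOT restricts: pages with left but not right
    raise ValueError("unsupported operator: " + op)

For example, with index = {"cats": {"a.html", "b.html"}, "dogs": {"b.html"}}, the query "cats AND dogs" returns {"b.html"}, while "cats OR dogs" returns both pages.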
Term weighting
Term weighting refers to the importance of a particular
search term to the query. The idea is to weight particular
terms more heavily than others to produce superior search
results. For example, the word the in a query will receive
very little weight in selecting the results because it appears
in nearly all English-language documents. There is nothing
unique about it, and it does not help in document selection.
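The text does not name a specific weighting scheme; TF-IDF is the classic example and shows exactly why a word like "the" receives almost no weight:

import math
import re
from collections import Counter


def tokenize(text):
    return re.findall(r"[a-z0-9]+", text.lower())


def tf_idf(term, document, corpus):
    """Weight of `term` in `document`, relative to the whole `corpus`."""
    term = term.lower()
    tokens = tokenize(document)
    tf = Counter(tokens)[term] / max(1, len(tokens))   # term frequency
    # A term that appears in every document (like "the") gets
    # idf = log(1) = 0, so it contributes nothing to selection.
    df = sum(1 for doc in corpus if term in tokenize(doc)) or 1
    return tf * math.log(len(corpus) / df)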
Measuring Content Quality and User Engagement
• The search engines also attempt to measure the quality and
uniqueness of a website’s content. One method they may use for doing
this is evaluating the document itself.
Spelling and grammatical errors
If a web page has lots of spelling and grammatical errors, that can be
taken as a sign that little editorial effort was put into that page.
Reading level
The search engines can also analyze the reading level of the document.
One popular formula for doing this is the Flesch-Kincaid Grade Level
Readability Formula, which considers factors such as the average word
length and the number of words per sentence to determine the level
of education needed to understand the text.
Imagine a scenario where the products being sold on a page are
children’s toys, but the reading level calculated suggests that a grade
level of a senior in college is required to read the page. This could be
another indicator of poor editorial effort.
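The Flesch-Kincaid Grade Level formula itself is public: 0.39 x (words per sentence) + 11.8 x (syllables per word) - 15.59. Below is a rough sketch using a naive vowel-group syllable counter (real implementations use better syllable heuristics):

import re


def count_syllables(word):
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text):
    """Approximate the U.S. grade level needed to understand the text."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59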
Actual user interaction
For example, if a large number of the users who visit the web page
after clicking on a search result immediately return to the search
engine and click on the next result, that would be a strong
indicator of poor quality.
Google has access to a large number of data sources that it
can use to measure how visitors interact with your website.
Some of those sources include:
Interaction with web search results
If a user clicks through on a SERP listing and comes to your site,
clicks the Back button, and then clicks on another result in the
same set of search results, that could be seen as a negative
ranking signal.
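This quick-return pattern is often called pogo-sticking. As a toy illustration only (the record format is hypothetical; Google's actual signals are not public), it could be approximated from a user's time-ordered click log like this:

from datetime import timedelta


def quick_return_rate(clicks, window=timedelta(seconds=30)):
    """clicks: time-ordered (query_id, url, timestamp) tuples for one user.

    A click counts as a "quick return" if the same user clicks another
    result for the same query within `window`, i.e. they bounced back
    to the results page almost immediately.
    """
    quick = sum(
        1
        for prev, nxt in zip(clicks, clicks[1:])
        if prev[0] == nxt[0] and nxt[2] - prev[2] <= window
    )
    return quick / max(1, len(clicks))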
Google Analytics (website statistics service)
Google Analytics provides a set of tools for collecting detailed data
about the activity on a website. It provides data on points such as the
following.
Bounce rate
The share of visitors who view only one page of the website before
leaving.
Time on site
The average time users spend on the site, measured from the loading
of the first page to the loading of the last page.
Page views per visitor
The average number of pages viewed per visitor on your site.
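As a toy illustration of how these three metrics are computed (hypothetical session records, not the actual Google Analytics data model):

sessions = [
    {"pages": 1, "seconds_on_site": 0},     # a bounce: one page, then left
    {"pages": 4, "seconds_on_site": 210},
    {"pages": 2, "seconds_on_site": 95},
]

bounce_rate = sum(s["pages"] == 1 for s in sessions) / len(sessions)
avg_time_on_site = sum(s["seconds_on_site"] for s in sessions) / len(sessions)
pages_per_visitor = sum(s["pages"] for s in sessions) / len(sessions)

print(f"Bounce rate: {bounce_rate:.0%}")              # Bounce rate: 33%
print(f"Avg time on site: {avg_time_on_site:.0f}s")   # Avg time on site: 102s
print(f"Pages per visitor: {pages_per_visitor:.1f}")  # Pages per visitor: 2.3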
Google Toolbar
The Google toolbar is a downloadable browser toolbar for
the Internet Explorer and Firefox browsers.
Google +1 Button (similar to Facebook LIKE button)
In April 2011, Google began public testing of a new feature,
the +1 button. This enables users to “vote” for a page, either
directly in the search results or on the page itself, thereby
identifying their favorite websites for a particular search
query.
Chrome Blocklist Extension
In February 2011, Google released the Chrome Blocklist. This
provides users of the Chrome browser a way to indicate
search results they don’t like.
Goo.gl
In September 2010, Google released its own URL shortener.
This tool allows Google to see what content is being shared
and what content is being clicked on. (Discontinued as of
April 18, 2018.)
Link analysis