How Search Engines Work
Search Engine Optimization (SEO) has evolved considerably over the years. Tactics that
focused only on keyword optimization have become obsolete, replaced by modern strategies
that focus on user experience. Still, search engines are not human beings, so we also have
to understand how search engines view pages. That understanding is the basis for current
tactics, and there is no denying that SEO remains an incredibly valuable and cost-efficient
business strategy for all industries.
Search engines are text-driven: they look at the text on a website to get an idea of what
it is about. Technology may advance rapidly, but search engines have not evolved to
appreciate the beauty or look and feel of a website, or to enjoy the pictures and sounds in
Flash movies. Each search engine also works differently from the others, and it is quite a
task to "conquer" all of them in search results, but careful and intelligent optimization
can really work wonders.
To deliver results, a search engine goes through a whole set of processes, beginning with
crawling. It has to perform several activities in sequence to return results that match the
human searcher's query as closely as possible.
• Crawling - a piece of software called a "spider" or "robot" follows links from page to
page and records everything it finds. It is virtually impossible for the crawler to visit
every single web page, and some pages end up not being crawled for months. That is why it
is essential to visualize what a crawler can see and put only that on the website, rather
than relying on Flash movies, password-protected pages, and other content the spider cannot
read. If content is not viewable to the spider, there is no chance it will be indexed,
processed, or retrieved. Using a Spider Simulator, it is easy to check whether website
content is viewable to a spider; a minimal sketch of this "spider's-eye view" follows the list.
• Indexing - after a page is crawled, its contents are indexed and stored in a giant
database. Indexing essentially means assigning each page the keywords and descriptions that
identify it, so that it can later be matched against the search terms a user enters. It
would be humanly impossible to index and process such extensive volumes of information, but
search engines handle the task easily, especially when keywords are optimized so that pages
are identified and classified correctly, which leads to higher search engine rankings.
• Processing - this is the step in which the engine compares the search string against the
indexed pages in its database to pull out the information the user needs.
• Calculating Relevancy - this is the step that follows. The relevancy of indexed content
to the search terms is calculated by algorithms that assign different weights to factors
such as keyword density, links, and metatags. This is why different search engines deliver
different results for the same search string. All search engines change their algorithms
periodically, so it is essential to devote ongoing time and effort to SEO so that pages
adapt to these changes and rank consistently.
• Retrieving - this is simply the display of the retrieved results in the user's browser;
the list can be very long, but the most relevant sites are ranked at the top, followed by
sites that score lower on the relevancy factors. A toy end-to-end sketch of the indexing,
processing, relevancy, and retrieving steps also follows this list.
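To make the crawling step concrete, here is a minimal sketch of the "spider's-eye view"
mentioned above: it fetches a page, keeps only the text a text-driven engine can actually
read (skipping scripts and styles), and collects the links a spider would follow next. The
URL is a placeholder, and the parsing is deliberately simple; real crawlers also honor
robots.txt, handle errors, and schedule revisits.

```python
# A minimal "spider simulator": plain text plus outgoing links, as a
# text-driven search engine would see them.
from html.parser import HTMLParser
from urllib.request import urlopen

class SpiderView(HTMLParser):
    def __init__(self):
        super().__init__()
        self.text = []
        self.links = []
        self._skip = False  # inside <script>/<style>, which a spider ignores

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
        elif tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)  # links the spider would follow

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip and data.strip():
            self.text.append(data.strip())

# Placeholder URL; substitute any page you want to inspect.
html = urlopen("https://example.com").read().decode("utf-8", errors="replace")
spider = SpiderView()
spider.feed(html)
print("Text the engine can read:", " ".join(spider.text)[:200])
print("Links it would crawl next:", spider.links)
```

Anything that does not appear in this output (text locked inside Flash, images, or
password-protected pages) is invisible to the spider, which is the point made above.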
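The remaining four steps can also be sketched in a few lines. The toy pipeline below builds
an inverted index from a handful of made-up pages, processes a query against it, calculates
relevancy with a single illustrative factor (keyword frequency), and retrieves the results
most-relevant-first. The page contents and the scoring are assumptions for illustration;
real engines weigh hundreds of signals, including links and metatags.

```python
from collections import defaultdict

# Illustrative stand-ins for crawled pages.
pages = {
    "page1.html": "fresh organic coffee beans roasted daily",
    "page2.html": "coffee machines and coffee grinders on sale",
    "page3.html": "tea and herbal infusions for every taste",
}

# Indexing: map each word to the set of pages that contain it.
index = defaultdict(set)
for url, text in pages.items():
    for word in text.lower().split():
        index[word].add(url)

def search(query):
    # Processing: compare the search string against the index.
    words = query.lower().split()
    matches = set().union(*(index.get(w, set()) for w in words))

    # Calculating relevancy: here, just how often the query words occur.
    def score(url):
        body = pages[url].lower().split()
        return sum(body.count(w) for w in words)

    # Retrieving: return the matches, most relevant first.
    return sorted(matches, key=score, reverse=True)

print(search("coffee grinders"))  # ['page2.html', 'page1.html']
```

Swapping in a different scoring function changes the ranking, which mirrors why different
engines, with different algorithms and weights, return different results for the same query.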
To learn more about Manual Search Engine Submission, please check my gig on Fiverr.