Web Crawler - Search Engine Robots - Search Engine Spiders

To perform search engine optimization (SEO) on a web page, you first need to understand how web crawlers (also known as search engine robots or search engine spiders) work.

What is a Web Crawler?

(Also known as search engine robots or search engine spiders)

A web crawler (also known as a web spider) is a program that browses the World Wide Web in a methodical, automated manner. A web crawler is one type of bot. Web crawlers not only keep a copy of all the visited pages for later processing - for example, by a search engine - but also index these pages to make searching faster and more precise.
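The indexing step the quote mentions can be sketched as an inverted index: once the crawler has kept copies of pages, each word is mapped to the set of pages containing it, so a search only has to look up words rather than scan every page. This is a minimal illustrative sketch, not any particular search engine's implementation; the function name and page data are made up for the example.

```python
import re
from collections import defaultdict

def build_index(pages):
    """Build a simple inverted index from crawled pages.

    pages: dict mapping URL -> page text (the crawler's stored copies).
    Returns a dict mapping each lowercase word -> set of URLs containing it.
    """
    index = defaultdict(set)
    for url, text in pages.items():
        # Split the page text into lowercase alphanumeric words.
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)
    return index
```

A search engine can then answer a one-word query by returning `index[word]` directly, which is why indexed pages "make searching faster".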

Source: From Wikipedia, the free encyclopedia

How Web Crawlers Work

Web crawlers (search engine robots or search engine spiders) use a process called "crawling the web", or web crawling. They typically start with heavily trafficked web servers and the most popular web pages.

[Diagram: the web crawling process used by a web crawler]

The web crawler sets out from the search engine's base computer system looking for websites to index.

The web crawler collects information about the website and its links.

When the web crawler returns home the information is indexed by the search engine.
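The three steps above (set out from a base system, collect pages and their links, then index the results) can be sketched as a breadth-first crawl. This is a hedged, self-contained sketch: the fetching step is passed in as a callable so the example needs no network access, and all names and limits here are illustrative, not part of any real search engine.

```python
from collections import deque
from html.parser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href value of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(seed_urls, fetch, max_pages=10):
    """Breadth-first crawl: visit the seed URLs first, then the pages
    they link to, keeping a copy of each page for later indexing.

    fetch: any callable that takes a URL and returns its HTML.
    Returns a dict mapping URL -> stored HTML.
    """
    frontier = deque(seed_urls)   # pages waiting to be visited
    visited = set()
    pages = {}                    # the crawler's copies, ready to index
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        html = fetch(url)
        pages[url] = html
        collector = LinkCollector()
        collector.feed(html)
        frontier.extend(collector.links)   # collect the page's links
    return pages
```

A real crawler would fetch pages over HTTP (for example with `urllib.request`), honour each site's robots.txt, and throttle its requests; those details are omitted to keep the sketch short.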

Web Crawler Related Articles

If you found this web page a useful resource for your own website, please link as follows:

HTML Basic Tutor - www.htmlbasictutor.ca/

Search engine optimization (SEO) and how web crawlers (search engine robots or search engine spiders) work.
URL: