Web Crawler - Search Engine Robots - Search Engine Spiders
To perform search engine optimization (SEO) on a web page, you first need to understand how web crawlers (also called search engine robots or search engine spiders) work.
What is a Web Crawler?
(Also known as search engine robots or search engine spiders)
A web crawler (also known as a web spider) is a program that browses the World Wide Web in a methodical, automated manner. A web crawler is one type of bot. Web crawlers not only keep a copy of every visited page for later processing - for example, by a search engine - but also index those pages so searches can be narrowed down quickly.
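The "methodical, automated" browsing described above is, at its core, a loop over a queue of URLs with a visited set. Here is a minimal sketch of that loop; the `fetch_page` function is a stand-in assumption (a real crawler would make an HTTP request there), and the URLs are made up for illustration:

```python
from collections import deque

def crawl(seed_urls, fetch_page, max_pages=100):
    """Breadth-first crawl from a set of seed URLs.

    fetch_page(url) is assumed to return (page_text, list_of_links);
    in a real crawler it would perform an HTTP request and parse HTML.
    """
    visited = set()
    pages = {}                       # keep a copy of each page for later processing
    frontier = deque(seed_urls)
    while frontier and len(pages) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue                 # never fetch the same page twice
        visited.add(url)
        text, links = fetch_page(url)
        pages[url] = text
        for link in links:
            if link not in visited:
                frontier.append(link)
    return pages

# A tiny in-memory "web" standing in for real HTTP fetches:
fake_web = {
    "http://a.example": ("page A", ["http://b.example"]),
    "http://b.example": ("page B", ["http://a.example"]),
}
pages = crawl(["http://a.example"], lambda u: fake_web.get(u, ("", [])))
```

Following links breadth-first like this is why popular, well-linked pages tend to be discovered early.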
How Web Crawlers Work
Web crawlers (search engine robots or search engine spiders) use a process called "crawling the web" or web crawling. They typically start with heavily trafficked web servers and the most popular web pages.
The web crawler sets out from the search engine's base computer system looking for websites to index.
The web crawler collects information about the website and its links:
- the website url
- the web page title
- the meta tag information
- the web page content
- the links on the page, and where they go to
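The items in the list above can all be pulled out of a page's HTML. A small sketch using Python's standard-library `html.parser` shows one way to collect the title, meta tags, and links (the sample HTML is invented for illustration):

```python
from html.parser import HTMLParser

class PageInfoParser(HTMLParser):
    """Collects the title, named meta tags, and links from one HTML page."""

    def __init__(self):
        super().__init__()
        self.title = ""
        self.meta = {}        # meta tag name -> content
        self.links = []       # href values of <a> tags
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

parser = PageInfoParser()
parser.feed('<html><head><title>Home</title>'
            '<meta name="description" content="A demo page"></head>'
            '<body><a href="/about">About</a> some page content</body></html>')
```

After `feed()`, `parser.title`, `parser.meta`, and `parser.links` hold the collected data; the page content and the link targets ("where they go to") are what the crawler carries back for indexing.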
When the web crawler returns home, the collected information is indexed by the search engine.
Web Crawler Related Articles
- How Search Engines Read Web Pages To properly optimize a web page for the search engines you need to understand how the search engines read web pages.
- Search Engine Submissions Once the website has been submitted to a search engine there are a few stages to the process of actually getting indexed.
- Web Page Content Search Engines See This series of articles focuses on the content of the web page that the search engines see.