Thursday, September 18, 2008
A search engine robot’s activity is called spidering, because its path through the web resembles the movement of a many-legged spider. The spider’s job is to visit a web page, read its contents, follow the links to other pages on that site, and bring back the information. From one page it travels to several others, and this proliferation follows many parallel and nested paths simultaneously. Spiders revisit a site at some interval, perhaps a month to a few months, and re-index its pages, so any changes you have made to your pages are eventually reflected in the index. The spiders visit your web pages and create their listings automatically. An important aspect to study is what factors promote a “deep crawl” – the depth to which a spider will go into your website from the page it first visited. Listing (submitting or registering) your site with a search engine is a step that can accelerate and increase the chances of that engine spidering your pages.
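The crawling process described above — visit a page, collect its links, and continue outward to a bounded depth — can be sketched as a breadth-first traversal. This is only an illustration, not how any particular search engine is implemented: the `SITE` map below is a hypothetical in-memory stand-in for real HTTP fetches, and `max_depth` plays the role of the “deep crawl” limit.

```python
from collections import deque

# Hypothetical site: each page maps to the links found on it.
SITE = {
    "/": ["/about", "/products"],
    "/about": ["/", "/team"],
    "/products": ["/products/widget"],
    "/products/widget": [],
    "/team": [],
}

def spider(site, start, max_depth):
    """Breadth-first crawl from `start`, going at most `max_depth`
    links deep from the first page visited."""
    seen = {start}
    queue = deque([(start, 0)])
    crawled = []
    while queue:
        page, depth = queue.popleft()
        crawled.append(page)        # "read the contents" / index the page
        if depth == max_depth:
            continue                # the deep-crawl limit: go no further
        for link in site.get(page, []):
            if link not in seen:    # avoid re-visiting pages already indexed
                seen.add(link)
                queue.append((link, depth + 1))
    return crawled

# A shallow crawl reaches only the start page and its direct links;
# a deeper crawl also reaches /team and /products/widget.
print(spider(SITE, "/", 1))
print(spider(SITE, "/", 2))
```

Note how the depth limit changes what gets indexed: pages buried more than `max_depth` links from the entry page are never seen, which is why factors that encourage a deeper crawl matter for getting a whole site listed.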