Abstract

Auto-Explore the Web – Web Crawler

Soumick Chatterjee, Asoke Nath

The World Wide Web is an ever-growing public library with hundreds of millions of books and no central management system. Finding a piece of information without a proper directory is like finding a needle in a haystack. Search engines solve this problem by indexing a portion of the content available on the internet. To accomplish this job, search engines use an automated program known as a web crawler. The most vital job of the web is information retrieval, and doing it efficiently. A web crawler helps accomplish this by supporting search indexing or by helping build archives. A web crawler automatically visits all available links, and the pages it finds are then indexed. The use of web crawlers is not limited to search engines: they can also be used for web scraping, spam filtering, identifying unauthorized use of copyrighted content, identifying illegal and harmful web activities, etc. Web crawlers face various challenges when crawling deep-web content, multimedia content, etc. Various crawling techniques and various web crawlers are discussed in this paper.
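The crawling process described above (visit a page, extract its links, index the page once, and queue unvisited links) can be sketched as a breadth-first traversal. This is a minimal illustrative sketch, not the method of any particular crawler from the paper; the in-memory `SITE` dictionary is a hypothetical stand-in for real HTTP fetches so the logic is self-contained.

```python
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

class LinkExtractor(HTMLParser):
    """Collects href targets from <a> tags in an HTML page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def crawl(start_url, fetch, max_pages=100):
    """Breadth-first crawl: each page is fetched and indexed at most once;
    newly discovered links are appended to the frontier queue."""
    frontier = deque([start_url])
    visited = set()
    index = {}                      # url -> page content (the "index")
    while frontier and len(index) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        html = fetch(url)
        if html is None:            # unreachable page: skip it
            continue
        index[url] = html
        parser = LinkExtractor()
        parser.feed(html)
        for link in parser.links:
            absolute = urljoin(url, link)   # resolve relative links
            if absolute not in visited:
                frontier.append(absolute)
    return index

# Hypothetical three-page "web site" standing in for network fetches.
SITE = {
    "http://example.com/":  '<a href="/a">A</a> <a href="/b">B</a>',
    "http://example.com/a": '<a href="/">home</a>',
    "http://example.com/b": '<a href="/a">A</a>',
}

index = crawl("http://example.com/", SITE.get)
print(sorted(index))  # all three pages, each visited exactly once
```

A production crawler would add politeness delays, `robots.txt` handling, and duplicate-content detection, but the visited-set plus frontier-queue structure above is the core of the technique.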