{"title":"URL ordering based performance evaluation of Web crawler","authors":"Mohd Shoaib, A. Maurya","doi":"10.1109/ICAETR.2014.7012962","DOIUrl":null,"url":null,"abstract":"There are billions of Web pages on World Wide Web which can be accessed via internet. All of us rely on usage of internet for source of information. This source of information is available on web in various forms such as Websites, databases, images, sound, videos and many more. The search results given by search engine are classified on basis of many techniques such as keyword matches, link analysis, or many other techniques. Search engines provide information gathered from their own indexed databases. These indexed databases contain downloaded information from web pages. Whenever a query is provided by user, the information is fetched from these indexed pages. The Web Crawler is used to download and store web pages. Web crawler of these search engines is expert in crawling various Web pages to gather huge source of information. Web Crawler is developed which orders URLs on the basis of their content similarity to a query and structural similarity. Results are provided over five parameters: Top URLs, Precision, Content, Structural and Total Similarity for a keyword.","PeriodicalId":196504,"journal":{"name":"2014 International Conference on Advances in Engineering & Technology Research (ICAETR - 2014)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2014-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"7","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2014 International Conference on Advances in Engineering & Technology Research (ICAETR - 2014)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ICAETR.2014.7012962","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Cited by: 7
Abstract
There are billions of Web pages on the World Wide Web that can be accessed via the Internet, and most of us rely on it as a source of information. This information is available on the Web in many forms, such as websites, databases, images, audio and video. Search engines rank their results using techniques such as keyword matching and link analysis, and they answer queries from their own indexed databases, which hold information downloaded from Web pages. Whenever a user submits a query, the relevant information is fetched from these indexed pages. The Web crawler is the component that downloads and stores Web pages, crawling large numbers of pages to gather this information. In this paper a Web crawler is developed that orders URLs on the basis of their content similarity to a query and their structural similarity. Results are reported over five parameters: Top URLs, Precision, Content Similarity, Structural Similarity and Total Similarity for a keyword.
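To illustrate the kind of URL ordering the abstract describes, the sketch below shows a priority-queue crawler frontier ranked by a weighted combination of content similarity and structural similarity. It is a minimal illustration, not the authors' implementation: the in-memory page set `PAGES`, the weight `alpha`, the cosine term-frequency measure used for content similarity, and the link-overlap stand-in used for structural similarity are all assumptions introduced here for demonstration.

```python
import heapq
import math
import re
from collections import Counter

# Hypothetical in-memory "web": url -> (page text, outgoing links).
# The URLs, texts, and scoring weights are illustrative assumptions only.
PAGES = {
    "http://example.org/a": ("web crawler orders urls by similarity",
                             ["http://example.org/b", "http://example.org/c"]),
    "http://example.org/b": ("search engine indexes downloaded web pages",
                             ["http://example.org/c"]),
    "http://example.org/c": ("cooking recipes and travel photos", []),
}

def content_similarity(query, text):
    """Cosine similarity between term-frequency vectors of query and page text."""
    q = Counter(re.findall(r"\w+", query.lower()))
    t = Counter(re.findall(r"\w+", text.lower()))
    dot = sum(q[w] * t[w] for w in q)
    norm = (math.sqrt(sum(v * v for v in q.values()))
            * math.sqrt(sum(v * v for v in t.values())))
    return dot / norm if norm else 0.0

def structural_similarity(url, seen_links):
    """Fraction of already-crawled pages that link to this URL
    (a simple stand-in for the paper's structural-similarity measure)."""
    if not seen_links:
        return 0.0
    return sum(1 for links in seen_links if url in links) / len(seen_links)

def crawl(seed, query, alpha=0.6, limit=10):
    """Crawl PAGES, always expanding the URL with the highest total similarity."""
    seen_links = []                 # link sets of pages crawled so far
    visited = set()
    frontier = [(-1.0, seed)]       # max-heap via negated scores; neutral seed score
    order = []
    while frontier and len(order) < limit:
        neg_score, url = heapq.heappop(frontier)
        if url in visited or url not in PAGES:
            continue
        visited.add(url)
        text, links = PAGES[url]
        order.append((url, -neg_score))
        seen_links.append(set(links))
        for link in links:
            if link in visited:
                continue
            # Total similarity = weighted content + structural similarity (assumed form).
            total = (alpha * content_similarity(query, PAGES.get(link, ("", []))[0])
                     + (1 - alpha) * structural_similarity(link, seen_links))
            heapq.heappush(frontier, (-total, link))
    return order

if __name__ == "__main__":
    for url, score in crawl("http://example.org/a", "web crawler similarity"):
        print(f"{score:.3f}  {url}")
```

Run against the toy page set, the frontier pops the URL whose combined score is highest at each step, which is the essence of ordering URLs by content and structural similarity rather than by simple breadth-first discovery order.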