Reproducible Web Corpora

Johannes Kiesel, Florian Kneist, Milad Alshomary, Benno Stein, Matthias Hagen, Martin Potthast
Journal of Data and Information Quality (JDIQ), Volume 24, Issue 1, pp. 1–25
DOI: 10.1145/3239574
Published: 2018-10-29 (Journal Article)
Citations: 9

Abstract

The evolution of web pages from static HTML pages toward dynamic pieces of software has rendered archiving them increasingly difficult. Nevertheless, an accurate, reproducible web archive is a necessity to ensure the reproducibility of web-based research. Archiving web pages reproducibly, however, is currently not part of best practices for web corpus construction. As a result, and despite the ongoing efforts of other stakeholders to archive the web, tools for the construction of reproducible web corpora are insufficient or ill-suited. This article presents a new tool tailored to this purpose. It relies on emulating user interactions with a web page while recording all network traffic. The customizable user interactions can be replayed on demand, while requests sent by the archived page are served with the recorded responses. The tool facilitates reproducible user studies, user simulations, and evaluations of algorithms that rely on extracting data from web pages. To evaluate our tool, we conduct the first systematic assessment of reproduction quality for rendered web pages. Using our tool, we create a corpus of 10,000 web pages carefully sampled from the Common Crawl and manually annotated with regard to reproduction quality via crowdsourcing. Based on this data, we test three approaches to automatic reproduction-quality assessment. An off-the-shelf neural network, trained on visual differences between the web page during archiving and reproduction, matches the manual assessments best. This automatic assessment of reproduction quality allows for immediate bug fixing during archiving and continuous development of our tool as the web continues to evolve.
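The record-and-replay idea described in the abstract can be sketched minimally as follows: during archiving, every request observed on the live web is stored with its response; during reproduction, the same request is answered from the archive instead of the network. The class and method names below are illustrative assumptions, not the paper's actual tool, which additionally records the emulated user interactions themselves.

```python
# Hypothetical sketch of record/replay web archiving. Requests are keyed by
# (method, URL); real archivers also account for headers, POST bodies, and
# non-deterministic responses.

class WebArchive:
    """Maps (method, url) request keys to recorded responses."""

    def __init__(self):
        self._responses = {}

    def record(self, method, url, status, body):
        # Archiving phase: remember the response observed on the live web.
        self._responses[(method.upper(), url)] = (status, body)

    def replay(self, method, url):
        # Reproduction phase: serve the recorded response, or a 404 if the
        # page issues a request that was never seen while archiving.
        return self._responses.get((method.upper(), url), (404, b""))


archive = WebArchive()
archive.record("GET", "https://example.com/app.js", 200, b"console.log('hi')")
status, body = archive.replay("GET", "https://example.com/app.js")       # recorded hit
missing_status, _ = archive.replay("GET", "https://example.com/other.js")  # never archived
```

Serving recorded responses this way is what makes an archived dynamic page deterministic: no matter when it is replayed, it sees exactly the network the archiver saw.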
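The visual signal used for automatic reproduction-quality assessment can likewise be sketched: compare a screenshot taken while archiving with one taken during reproduction. The paper trains an off-the-shelf neural network on such visual differences; the pure-Python pixel-ratio score below is only an assumed stand-in to illustrate the input signal, not the paper's method.

```python
# Hypothetical proxy for reproduction quality: the fraction of pixel
# positions that differ between two same-sized screenshots, each given as a
# flat list of (r, g, b) tuples. 0.0 means pixel-identical reproduction.

def pixel_difference_ratio(archived, reproduced):
    if len(archived) != len(reproduced):
        raise ValueError("screenshots must have the same dimensions")
    if not archived:
        return 0.0
    changed = sum(1 for a, b in zip(archived, reproduced) if a != b)
    return changed / len(archived)


identical = [(255, 255, 255)] * 4
altered = [(255, 255, 255)] * 3 + [(0, 0, 0)]
print(pixel_difference_ratio(identical, identical))  # 0.0
print(pixel_difference_ratio(identical, altered))    # 0.25
```

A learned model outperforms such a raw pixel ratio because it can ignore benign differences (animations, rotating ads) while flagging genuinely broken reproductions.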