Empirical examination of a collaborative web application

Christopher Stewart, Matthew Leventi, Kai Shen
{"title":"Empirical examination of a collaborative web application","authors":"Christopher Stewart, Matthew Leventi, Kai Shen","doi":"10.1109/IISWC.2008.4636094","DOIUrl":null,"url":null,"abstract":"Online instructional applications, social networking sites, Wiki-based Web sites, and other emerging Web applications that rely on end users for the generation of web content are increasingly popular. However, these collaborative Web applications are still absent from the benchmark suites commonly used in the evaluation of online systems. This paper argues that collaborative Web applications are unlike traditional online benchmarks, and therefore warrant a new class of benchmarks. Specifically, request behaviors in collaborative Web applications are determined by contributions from end users, which leads to qualitatively more diverse server-side resource requirements and execution patterns compared to traditional online benchmarks. Our arguments stem from an empirical examination of WeBWorK-a widely-used collaborative Web application that allows teachers to post math or physics problems for their students to solve online. Compared to traditional online benchmarks (like TPC-C, SPECweb, and RUBiS), WeBWorK requests are harder to cluster according to their resource consumption, and they follow less regular patterns. Further, we demonstrate that the use of a WeBWorK-style benchmark would probably have led to different results in some recent research studies concerning request classification from event chains and type-based resource usage prediction.","PeriodicalId":447179,"journal":{"name":"2008 IEEE International Symposium on Workload Characterization","volume":"220 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2008-09-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"24","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2008 IEEE International Symposium on Workload Characterization","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/IISWC.2008.4636094","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Online instructional applications, social networking sites, Wiki-based Web sites, and other emerging Web applications that rely on end users for the generation of Web content are increasingly popular. However, these collaborative Web applications are still absent from the benchmark suites commonly used in the evaluation of online systems. This paper argues that collaborative Web applications are unlike traditional online benchmarks, and therefore warrant a new class of benchmarks. Specifically, request behaviors in collaborative Web applications are determined by contributions from end users, which leads to qualitatively more diverse server-side resource requirements and execution patterns compared to traditional online benchmarks. Our arguments stem from an empirical examination of WeBWorK, a widely used collaborative Web application that allows teachers to post math or physics problems for their students to solve online. Compared to traditional online benchmarks (like TPC-C, SPECweb, and RUBiS), WeBWorK requests are harder to cluster according to their resource consumption, and they follow less regular patterns. Further, we demonstrate that the use of a WeBWorK-style benchmark would probably have led to different results in some recent research studies concerning request classification from event chains and type-based resource usage prediction.
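To make the abstract's clustering claim concrete, here is a minimal, hypothetical sketch (not the paper's code or data): it contrasts a synthetic "type-aligned" request mix, where a few request types dominate resource behavior, against a heavy-tailed, user-content-driven mix, and measures how much variance k-means leaves unexplained after clustering per-request resource vectors. The feature choice (CPU time, database queries, bytes sent), the synthetic distributions, and the use of k-means are all illustrative assumptions, not the authors' methodology.

```python
# Illustrative sketch (assumptions, not the paper's method): cluster
# per-request resource-consumption vectors and compare how well a
# type-aligned workload vs. a user-driven one separates into clusters.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic per-request features: [CPU seconds, DB queries, bytes sent].
# A TPC-C-like mix: two tight, well-separated request types.
tpcc_like = np.vstack([
    rng.normal(loc=[0.01, 2, 4e3],  scale=[0.001, 0.2, 200], size=(100, 3)),
    rng.normal(loc=[0.05, 10, 2e4], scale=[0.005, 1.0, 1e3], size=(100, 3)),
])
# A WeBWorK-like mix: heavy-tailed costs driven by user-authored content.
webwork_like = rng.lognormal(mean=[-3.5, 1.5, 9.0], sigma=1.2, size=(200, 3))

for name, X in [("TPC-C-like", tpcc_like), ("WeBWorK-like", webwork_like)]:
    km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
    # inertia_ is the within-cluster sum of squares; dividing by the total
    # spread gives the fraction of variance the clustering fails to explain.
    spread = ((X - X.mean(axis=0)) ** 2).sum()
    print(f"{name}: unexplained variance after clustering = "
          f"{km.inertia_ / spread:.2f}")
```

Under these assumed distributions, the type-aligned mix clusters cleanly while the heavy-tailed mix leaves a larger unexplained residue, which is the qualitative distinction the paper draws between traditional benchmarks and collaborative workloads.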