Verifying Extended Entity Relationship Diagrams with Open Tasks

M. Sabou, Klemens Käsznar, Markus Zlabinger, S. Biffl, D. Winkler
{"title":"用打开的任务验证扩展实体关系图","authors":"M. Sabou, Klemens Käsznar, Markus Zlabinger, S. Biffl, D. Winkler","doi":"10.1609/hcomp.v8i1.7471","DOIUrl":null,"url":null,"abstract":"The verification of Extended Entity Relationship (EER) diagrams and other conceptual models that capture the design of information systems is crucial to ensure reliable systems. To scale up verification processes to larger groups of experts, Human Computation techniques were used focusing primarily on closed tasks, which constrain the number and variety of reported defects in favor of easy aggregation of derived judgements. To address this limitation of closed tasks, in this paper, we investigate EER verification (as instance of a broader family of model verification problems) with open tasks to extend the range of collected results. We also address the challenge of aggregating results of open tasks by proposing a follow-up HC task for defect validation. We evaluate our approach for HC-based EER Verification with open tasks in a set of experiments conducted with junior developers and show that (1) open tasks allow collecting a variety of insights that go beyond a manually built gold standard while still leading to good performance (F1=60%) and (2) HC-based validation can be reliably used for validating the results of open tasks (F1=84% compared to expert validation).","PeriodicalId":87339,"journal":{"name":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Verifying Extended Entity Relationship Diagrams with Open Tasks\",\"authors\":\"M. Sabou, Klemens Käsznar, Markus Zlabinger, S. Biffl, D. Winkler\",\"doi\":\"10.1609/hcomp.v8i1.7471\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The verification of Extended Entity Relationship (EER) diagrams and other conceptual models that capture the design of information systems is crucial to ensure reliable systems. To scale up verification processes to larger groups of experts, Human Computation techniques were used focusing primarily on closed tasks, which constrain the number and variety of reported defects in favor of easy aggregation of derived judgements. To address this limitation of closed tasks, in this paper, we investigate EER verification (as instance of a broader family of model verification problems) with open tasks to extend the range of collected results. We also address the challenge of aggregating results of open tasks by proposing a follow-up HC task for defect validation. We evaluate our approach for HC-based EER Verification with open tasks in a set of experiments conducted with junior developers and show that (1) open tasks allow collecting a variety of insights that go beyond a manually built gold standard while still leading to good performance (F1=60%) and (2) HC-based validation can be reliably used for validating the results of open tasks (F1=84% compared to expert validation).\",\"PeriodicalId\":87339,\"journal\":{\"name\":\"Proceedings of the ... 
AAAI Conference on Human Computation and Crowdsourcing\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2020-10-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1609/hcomp.v8i1.7471\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the ... AAAI Conference on Human Computation and Crowdsourcing","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1609/hcomp.v8i1.7471","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 0

Abstract

The verification of Extended Entity Relationship (EER) diagrams and other conceptual models that capture the design of information systems is crucial to ensure reliable systems. To scale up verification processes to larger groups of experts, Human Computation techniques were used focusing primarily on closed tasks, which constrain the number and variety of reported defects in favor of easy aggregation of derived judgements. To address this limitation of closed tasks, in this paper, we investigate EER verification (as instance of a broader family of model verification problems) with open tasks to extend the range of collected results. We also address the challenge of aggregating results of open tasks by proposing a follow-up HC task for defect validation. We evaluate our approach for HC-based EER Verification with open tasks in a set of experiments conducted with junior developers and show that (1) open tasks allow collecting a variety of insights that go beyond a manually built gold standard while still leading to good performance (F1=60%) and (2) HC-based validation can be reliably used for validating the results of open tasks (F1=84% compared to expert validation).
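The abstract above outlines a two-step Human Computation (HC) pipeline: open tasks collect free-text defect reports on an EER diagram, and a follow-up HC task validates those reports before the result is scored against a manually built gold standard (the reported F1=60% and F1=84%). The paper does not include its aggregation code; the sketch below is only a hypothetical illustration of how such a pipeline could be wired up in Python. The names (DefectReport, validate_by_majority, f1_score), the majority-vote rule, and the min_votes threshold are assumptions for illustration, not the authors' implementation.

from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class DefectReport:
    # A free-text defect reported by a contributor in an open verification task
    # (hypothetical data layout, not the paper's schema).
    model_element: str   # e.g. "relationship: Customer-places-Order"
    description: str     # e.g. "cardinality constraint is missing"


def validate_by_majority(votes: dict[DefectReport, list[bool]],
                         min_votes: int = 3) -> set[DefectReport]:
    # Follow-up validation task (assumed majority vote): keep a candidate
    # defect only if enough validators judged it and most confirmed it.
    accepted = set()
    for report, judgements in votes.items():
        counts = Counter(judgements)
        if len(judgements) >= min_votes and counts[True] > counts[False]:
            accepted.add(report)
    return accepted


def f1_score(predicted: set[DefectReport], gold: set[DefectReport]) -> float:
    # F1 of the validated defect set against a manually built gold standard.
    if not predicted or not gold:
        return 0.0
    tp = len(predicted & gold)
    precision = tp / len(predicted)
    recall = tp / len(gold)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)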