Towards a More Structured Peer Review Process with Empirical Standards

Arham Arshad, Taher Ahmed Ghaleb, P. Ralph
{"title":"Towards a More Structured Peer Review Process with Empirical Standards","authors":"Arham Arshad, Taher Ahmed Ghaleb, P. Ralph","doi":"10.1145/3463274.3463359","DOIUrl":null,"url":null,"abstract":"Context. Empirical research consistently demonstrates that that scholarly peer review is ineffective, unreliable, and prejudiced. In principle, the solution is to move from contemporary, unstructured, essay-like reviewing to more structured, checklist-like reviewing. The Task Force created models—called “empirical standards”—of the software engineering community’s expectations for different popular methodologies. Objective. This paper presents a tool for facilitating more structured reviewing by generating review checklists from the empirical standards. Design. A tool that generates pre-submission and review forms using the empirical standards for software engineering research was designed and implemented. The pre-submission and review forms can be used by authors and reviewers, respectively, to determine whether a manuscript meets the software engineering community’s expectations for the particular kind of research conducted. Evaluation. The proposed tool can be empirically evaluated using lab or field randomized experiments as well as qualitative research. Huge, impractical studies involving splitting a conference program committee are not necessary to establish the effectiveness of the standards, checklists and structured review. Conclusions. The checklist generator enables more structured peer reviews, which in turn should improve review quality, reliability, thoroughness, and readability. 
Empirical research is needed to assess the effectiveness of the tool and the standards.","PeriodicalId":328024,"journal":{"name":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","volume":"23 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-06-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"1","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 25th International Conference on Evaluation and Assessment in Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3463274.3463359","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 1

Abstract

Context. Empirical research consistently demonstrates that scholarly peer review is ineffective, unreliable, and prejudiced. In principle, the solution is to move from contemporary, unstructured, essay-like reviewing to more structured, checklist-like reviewing. The Task Force created models—called “empirical standards”—of the software engineering community’s expectations for different popular methodologies. Objective. This paper presents a tool for facilitating more structured reviewing by generating review checklists from the empirical standards. Design. A tool that generates pre-submission and review forms using the empirical standards for software engineering research was designed and implemented. The pre-submission and review forms can be used by authors and reviewers, respectively, to determine whether a manuscript meets the software engineering community’s expectations for the particular kind of research conducted. Evaluation. The proposed tool can be empirically evaluated using lab or field randomized experiments as well as qualitative research. Huge, impractical studies involving splitting a conference program committee are not necessary to establish the effectiveness of the standards, checklists, and structured review. Conclusions. The checklist generator enables more structured peer reviews, which in turn should improve review quality, reliability, thoroughness, and readability. Empirical research is needed to assess the effectiveness of the tool and the standards.
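To make the Design step concrete, the following is a minimal illustrative sketch (not the authors' implementation) of generating a review form from an empirical-standard file. It assumes the standard is a Markdown document whose attributes appear as checkbox lines beginning with "- [ ]"; the function names, the sample standard text, and the plain-text form layout are all assumptions for illustration.

```python
# Sketch: turn an empirical-standard Markdown file into a review checklist.
# Assumption: checklist attributes are written as "- [ ]" checkbox lines.
import re

def extract_checklist(markdown_text: str) -> list[str]:
    """Collect the text of every "- [ ]" checkbox line in the standard."""
    items = []
    for line in markdown_text.splitlines():
        match = re.match(r"\s*-\s*\[ \]\s*(.+)", line)
        if match:
            items.append(match.group(1).strip())
    return items

def render_review_form(title: str, items: list[str]) -> str:
    """Render a numbered plain-text form a reviewer can fill in."""
    header = f"Review checklist: {title}"
    lines = [header, "=" * len(header)]
    for i, item in enumerate(items, start=1):
        lines.append(f"{i}. [ ] {item}")
    return "\n".join(lines)

# Hypothetical fragment of a standard, in the checkbox style used above.
standard = """\
## Essential Attributes
- [ ] states a purpose, problem, objective, or research question
- [ ] describes the methodology
- [ ] discusses validity threats
"""

form = render_review_form("Questionnaire Surveys", extract_checklist(standard))
print(form)
```

The same extraction could feed a pre-submission form for authors; only the rendering step would differ.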