Mobile app review analysis for crowdsourcing of software requirements: a mapping study of automated and semi-automated tools.

IF 3.5 · CAS Region 4 (Computer Science) · JCR Q2 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) · PeerJ Computer Science · Pub Date: 2024-11-05 · eCollection Date: 2024-01-01 · DOI: 10.7717/peerj-cs.2401
Rhodes Massenon, Ishaya Gambo, Roseline Oluwaseun Ogundokun, Ezekiel Adebayo Ogundepo, Sweta Srivastava, Saurabh Agarwal, Wooguil Pak
{"title":"Mobile app review analysis for crowdsourcing of software requirements: a mapping study of automated and semi-automated tools.","authors":"Rhodes Massenon, Ishaya Gambo, Roseline Oluwaseun Ogundokun, Ezekiel Adebayo Ogundepo, Sweta Srivastava, Saurabh Agarwal, Wooguil Pak","doi":"10.7717/peerj-cs.2401","DOIUrl":null,"url":null,"abstract":"<p><p>Mobile app reviews are valuable for gaining user feedback on features, usability, and areas for improvement. Analyzing these reviews manually is difficult due to volume and structure, leading to the need for automated techniques. This mapping study categorizes existing approaches for automated and semi-automated tools by analyzing 180 primary studies. Techniques include topic modeling, collocation finding, association rule-based, aspect-based sentiment analysis, frequency-based, word vector-based, and hybrid approaches. The study compares various tools for analyzing mobile app reviews based on performance, scalability, and user-friendliness. Tools like KEFE, MERIT, DIVER, SAFER, SIRA, T-FEX, RE-BERT, and AOBTM outperformed baseline tools like IDEA and SAFE in identifying emerging issues and extracting relevant information. The study also discusses limitations such as manual intervention, linguistic complexities, scalability issues, and interpretability challenges in incorporating user feedback. Overall, this mapping study outlines the current state of feature extraction from app reviews, suggesting future research and innovation opportunities for extracting software requirements from mobile app reviews, thereby improving mobile app development.</p>","PeriodicalId":54224,"journal":{"name":"PeerJ Computer Science","volume":"10 ","pages":"e2401"},"PeriodicalIF":3.5000,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11623114/pdf/","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"PeerJ Computer Science","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.7717/peerj-cs.2401","RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"2024/1/1 0:00:00","PubModel":"eCollection","JCR":"Q2","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
Citations: 0

Abstract

Mobile app reviews are a valuable source of user feedback on features, usability, and areas for improvement. Analyzing these reviews manually is difficult because of their volume and unstructured nature, creating a need for automated techniques. This mapping study categorizes existing automated and semi-automated approaches by analyzing 180 primary studies. The techniques include topic modeling, collocation finding, association rule-based, aspect-based sentiment analysis, frequency-based, word vector-based, and hybrid approaches. The study compares tools for analyzing mobile app reviews in terms of performance, scalability, and user-friendliness. Tools such as KEFE, MERIT, DIVER, SAFER, SIRA, T-FEX, RE-BERT, and AOBTM outperformed baseline tools such as IDEA and SAFE in identifying emerging issues and extracting relevant information. The study also discusses limitations, including the need for manual intervention, linguistic complexity, scalability issues, and interpretability challenges when incorporating user feedback. Overall, this mapping study outlines the current state of feature extraction from app reviews and identifies research and innovation opportunities for extracting software requirements from mobile app reviews, thereby improving mobile app development.
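To make one of the technique categories named above concrete, the following is a minimal, hypothetical sketch of topic modeling applied to a handful of invented review strings. It is not taken from the surveyed tools; the review texts, the choice of two topics, and the scikit-learn LDA configuration are all illustrative assumptions.

# Illustrative sketch only: topic modeling over a few made-up app reviews.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

reviews = [
    "App crashes every time I open the camera feature",
    "Love the new dark mode, please add scheduling",
    "Login keeps failing after the latest update",
    "Battery drains fast when notifications are enabled",
]

# Bag-of-words representation of the review texts.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(reviews)

# Fit a small LDA model; n_components=2 is an arbitrary illustrative choice.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(X)

# Print the top words per topic as rough candidate "feature" themes.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-5:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")

Running this prints the top terms per topic; in the tools surveyed by the study, such word clusters would typically be combined with further filtering, sentiment analysis, or mapping steps before being treated as candidate features or requirements.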

Source Journal
PeerJ Computer Science
Subject area: Computer Science (General Computer Science)
CiteScore: 6.10
Self-citation rate: 5.30%
Articles published per year: 332
Review time: 10 weeks
Journal description: PeerJ Computer Science is an open access journal covering all subject areas in computer science, with the backing of a prestigious advisory board and more than 300 academic editors.