From Data Analysis to Human Input: Navigating the Complexity of Software Evaluation and Assessment

Sigrid Eldh
{"title":"From Data Analysis to Human Input: Navigating the Complexity of Software Evaluation and Assessment","authors":"Sigrid Eldh","doi":"10.1145/3593434.3596439","DOIUrl":null,"url":null,"abstract":"It is the time of trust and transformation in software. We want explainable AI to assist us in dialogue, write our programs, test our software, and improve how we communicate. It is the time of digitalization, but we must ask ourselves - on what data in what format, when do we collect it, and what is the source? Does “data” make sense? Every action can be automated, should eventually be automated, and as such should be traceable and explainable. The transformation of software – and how we can now train, and feedback in a fast way, enable us to not only utilize existing technologies, but also aids us in faster embracing new technologies. This transformation is much to slow even if things change at a lightning speed. Change is the only thing we can be sure will happen. Evaluating and assessing quality of software sounds easy but is only as good as you design it to be. We, often simplify the problem so we can move forward, but it is the complications that is the real issue – our context, our combination of tools, languages, hardware, history, and way of working. We simply need the labeling, the meta-data, the context – and this data in a form with “many” perspectives to draw the more “accurate” scientific picture. Having a multi-facetted perspective is important when analyzing complex contexts. In software, listening skills and asking the right questions to the right people is often invaluable to complement blunt data. On the other side - much information is probably missing as you are too easily getting “only” what you asked for. So, we cannot judge what we cannot observe – and analyzing this data, is another issue all together. 
We need to know what is right – because if we cannot trust the source – or double check the outcome, how would we know it is not just a “fake” data? What does the outlier really mean? Is it a sign of a new trend is it the first time we capture this odd event? Therefore, it is easy to lose perspective in a fast-changing world. Despite drowning in tools, we still miss a lot of them. The threshold of using a tool is high, as we cannot trust them, and we cannot be sure that the data these tools collect does represent what we want to investigate. Therefore, the role of the scientist is more important than ever. Trusting the scientific process, utilizing multiple methods, and combining them is the receipt! Another goal is doing our best to select topics and collaborators – as building better software (quality) for humanity. It starts with you and me. I hope I will in this context be able to touch upon areas like security, testing, automation, AI/ML, ethics and “human in the loop”, analysis, tools, and technical debt, with a focus on evaluations and assessments.","PeriodicalId":178596,"journal":{"name":"Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering","volume":"34 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-06-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Proceedings of the 27th International Conference on Evaluation and Assessment in Software Engineering","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1145/3593434.3596439","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

It is a time of trust and transformation in software. We want explainable AI to assist us in dialogue, write our programs, test our software, and improve how we communicate. It is the time of digitalization, but we must ask ourselves: on what data, in what format, when do we collect it, and what is its source? Does the "data" make sense? Every action that can be automated should eventually be automated, and as such should be traceable and explainable. The transformation of software, and the fast feedback loops with which we can now train and retrain, enables us not only to utilize existing technologies but also to embrace new ones faster. Yet this transformation is much too slow, even if things change at lightning speed. Change is the only thing we can be sure will happen.

Evaluating and assessing software quality sounds easy, but an evaluation is only as good as you design it to be. We often simplify the problem so we can move forward, but the complications are the real issue: our context, our combination of tools, languages, hardware, history, and ways of working. We need the labeling, the meta-data, the context, and this data in a form that offers many perspectives, so we can draw a more accurate scientific picture. A multi-faceted perspective is important when analyzing complex contexts. In software, listening skills and asking the right questions of the right people are often invaluable complements to blunt data. On the other side, much information is probably missing, because you too easily get only what you asked for. So we cannot judge what we cannot observe, and analyzing such data is another issue altogether.

We need to know what is right, because if we cannot trust the source, or double-check the outcome, how would we know it is not just "fake" data? What does the outlier really mean? Is it a sign of a new trend, or is it the first time we capture this odd event? It is easy to lose perspective in a fast-changing world. Despite drowning in tools, we still miss many of them. The threshold for using a tool is high, as we cannot trust them, and we cannot be sure that the data these tools collect actually represents what we want to investigate. Therefore, the role of the scientist is more important than ever. Trusting the scientific process, utilizing multiple methods, and combining them is the recipe. Another goal is doing our best to select topics and collaborators, so as to build better software (quality) for humanity. It starts with you and me. I hope that I will, in this context, be able to touch upon areas like security, testing, automation, AI/ML, ethics and the "human in the loop", analysis, tools, and technical debt, with a focus on evaluation and assessment.
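The abstract's point about outliers and about combining multiple methods rather than trusting any single one can be illustrated with a minimal sketch (illustrative only, not from the keynote; the data and function names are invented for the example): two independent outlier checks, a z-score test and Tukey's interquartile-range fences, applied to the same measurements, where a point is treated as a genuine anomaly only when both methods agree.

```python
# Illustrative sketch of method triangulation: trust an "odd event" only
# when two independent outlier checks both flag it.
import statistics

def zscore_outliers(data, threshold=2.5):
    """Indices whose absolute z-score exceeds the threshold."""
    mean = statistics.fmean(data)
    stdev = statistics.stdev(data)
    return {i for i, x in enumerate(data) if abs(x - mean) / stdev > threshold}

def iqr_outliers(data, k=1.5):
    """Indices outside the Tukey fences (Q1 - k*IQR, Q3 + k*IQR)."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return {i for i, x in enumerate(data) if x < lo or x > hi}

def agreed_outliers(data):
    """Only points that both independent methods flag."""
    return zscore_outliers(data) & iqr_outliers(data)

# Hypothetical build-time measurements (seconds) with one odd event.
times = [12.1, 11.8, 12.3, 12.0, 11.9, 12.2, 30.5, 12.1, 11.7, 12.0]
print(sorted(agreed_outliers(times)))  # → [6]
```

Whether index 6 marks a new trend or a one-off capture is exactly the question the abstract raises; the triangulation only tells us the point is worth a human's attention.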