Exploring the means to measure explainability: Metrics, heuristics and questionnaires
Hannah Deters, Jakob Droste, Martin Obaidi, Kurt Schneider
Information and Software Technology, Volume 181, Article 107682 (published 2025-02-08)
DOI: 10.1016/j.infsof.2025.107682 · JCR Q2 (Computer Science, Information Systems) · Impact Factor 3.8
URL: https://www.sciencedirect.com/science/article/pii/S0950584925000217
Citations: 0
Abstract
Context:
As the complexity of modern software steadily grows, these systems become increasingly difficult for their stakeholders to understand. At the same time, opaque, artificially intelligent systems are permeating a growing number of safety-critical areas, such as medicine and finance. As a result, explainability is gaining importance as a software quality aspect and non-functional requirement.
Objective:
Contemporary research has mainly focused on making artificial intelligence and its decision-making processes more understandable. However, explainability has also gained traction in recent requirements engineering research. This work aims to contribute to that body of research by providing a quality model for explainability as a software quality aspect. Quality models provide means and measures to specify and evaluate quality requirements.
Method:
To design a user-centered quality model for explainability, we conducted a literature review.
Results:
We identified ten fundamental aspects of explainability. Furthermore, we aggregated criteria and metrics to measure them, as well as alternative means of evaluation in the form of heuristics and questionnaires.
Conclusion:
Our quality model and the related means of evaluation enable software engineers to develop and validate explainable systems in accordance with their explainability goals and intentions. This is achieved by offering views of the fundamental aspects of explainability and the related development goals from different angles. In this way, we provide a foundation that improves the management and verification of explainability requirements.
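The Results describe a layered structure: fundamental aspects of explainability, measured through criteria and metrics, with heuristics and questionnaires as alternative means of evaluation. Purely as an illustration of how such a quality model could be represented, here is a minimal sketch in Python; all class names, field names, and the example content are our own assumptions, not the paper's terminology.

```python
# Hypothetical sketch of a quality-model structure for explainability.
# Names and example content are illustrative assumptions, not taken
# from the paper.
from dataclasses import dataclass, field


@dataclass
class Metric:
    name: str
    description: str


@dataclass
class Criterion:
    name: str
    metrics: list[Metric] = field(default_factory=list)


@dataclass
class Aspect:
    """One fundamental aspect of explainability in the quality model."""
    name: str
    criteria: list[Criterion] = field(default_factory=list)
    # Alternative means of evaluation:
    heuristics: list[str] = field(default_factory=list)           # expert-based
    questionnaire_items: list[str] = field(default_factory=list)  # user-based


# Illustrative instance: an "understandability" aspect measured via a
# comprehension metric, plus one heuristic and one questionnaire item.
understandability = Aspect(
    name="understandability",
    criteria=[
        Criterion(
            name="comprehension",
            metrics=[Metric(
                "task accuracy",
                "share of correctly answered comprehension questions",
            )],
        )
    ],
    heuristics=["Explanations avoid unexplained technical jargon."],
    questionnaire_items=["I understood why the system behaved as it did."],
)

print(f"{understandability.name}: {len(understandability.criteria)} criterion")
```

Such a structure makes it easy to trace each development goal from an abstract aspect down to the concrete metric, heuristic, or questionnaire item used to verify it.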
About the journal:
Information and Software Technology is the international archival journal focusing on research and experience that contributes to the improvement of software development practices. The journal's scope includes methods and techniques to better engineer software and manage its development. Articles submitted for review should have a clear component of software engineering or address ways to improve the engineering and management of software development. Areas covered by the journal include:
• Software management, quality and metrics
• Software processes
• Software architecture, modelling, specification, design and programming
• Functional and non-functional software requirements
• Software testing and verification & validation
• Empirical studies of all aspects of engineering and managing software development
Short Communications is a new section dedicated to short papers addressing new ideas, controversial opinions, "Negative" results and much more. Read the Guide for authors for more information.
The journal encourages and welcomes submissions of systematic literature studies (reviews and maps) within its scope. Information and Software Technology is the premier outlet for systematic literature studies in software engineering.