Metrics to Estimate Model Comprehension Quality: Insights from a Systematic Literature Review

Jordan Hermann, B. Tenbergen, Marian Daun
{"title":"Metrics to Estimate Model Comprehension Quality: Insights from a Systematic Literature Review","authors":"Jordan Hermann, B. Tenbergen, Marian Daun","doi":"10.7250/csimq.2022-31.01","DOIUrl":null,"url":null,"abstract":"Conceptual models are an effective and unparalleled means to communicate complicated information with a broad variety of stakeholders in a short period of time. However, in practice, conceptual models often vary in clarity, employed features, communicated content, and overall quality. This potentially impacts model comprehension to a point where models are factually useless. To counter this, guidelines to create “good” conceptual models have been suggested. However, these guidelines are often abstract, hard to operationalize in different modeling languages, partly overlap, or even contradict one another. In addition, no comparative study of proposed guidelines exists so far. This issue is exacerbated as no established metrics to measure or estimate model comprehension for a given conceptual model exist. In this article, we present the results of a literature survey investigating 109 publications in the field and discuss metrics to measure model comprehension, their quantification, and their empirical substantiation. Results show that albeit several concrete quantifiable metrics and guidelines have been proposed, concrete evaluative recommendations are largely missing. Moreover, some suggested guidelines are contradictory, and few metrics exist that allow instantiating common frameworks for model quality in a specific way.","PeriodicalId":416219,"journal":{"name":"Complex Syst. Informatics Model. Q.","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex Syst. Informatics Model. 
Q.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7250/csimq.2022-31.01","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Conceptual models are an effective and unparalleled means to communicate complicated information with a broad variety of stakeholders in a short period of time. However, in practice, conceptual models often vary in clarity, employed features, communicated content, and overall quality. This potentially impacts model comprehension to a point where models are effectively useless. To counter this, guidelines to create “good” conceptual models have been suggested. However, these guidelines are often abstract, hard to operationalize in different modeling languages, partly overlapping, or even contradictory. In addition, no comparative study of the proposed guidelines exists so far. This issue is exacerbated by the absence of established metrics to measure or estimate model comprehension for a given conceptual model. In this article, we present the results of a literature survey investigating 109 publications in the field and discuss metrics to measure model comprehension, their quantification, and their empirical substantiation. Results show that although several concrete quantifiable metrics and guidelines have been proposed, concrete evaluative recommendations are largely missing. Moreover, some suggested guidelines are contradictory, and few metrics exist that allow instantiating common frameworks for model quality in a specific way.