{"title":"Metrics to Estimate Model Comprehension Quality: Insights from a Systematic Literature Review","authors":"Jordan Hermann, B. Tenbergen, Marian Daun","doi":"10.7250/csimq.2022-31.01","DOIUrl":null,"url":null,"abstract":"Conceptual models are an effective and unparalleled means to communicate complicated information with a broad variety of stakeholders in a short period of time. However, in practice, conceptual models often vary in clarity, employed features, communicated content, and overall quality. This potentially impacts model comprehension to a point where models are factually useless. To counter this, guidelines to create “good” conceptual models have been suggested. However, these guidelines are often abstract, hard to operationalize in different modeling languages, partly overlap, or even contradict one another. In addition, no comparative study of proposed guidelines exists so far. This issue is exacerbated as no established metrics to measure or estimate model comprehension for a given conceptual model exist. In this article, we present the results of a literature survey investigating 109 publications in the field and discuss metrics to measure model comprehension, their quantification, and their empirical substantiation. Results show that albeit several concrete quantifiable metrics and guidelines have been proposed, concrete evaluative recommendations are largely missing. Moreover, some suggested guidelines are contradictory, and few metrics exist that allow instantiating common frameworks for model quality in a specific way.","PeriodicalId":416219,"journal":{"name":"Complex Syst. Informatics Model. Q.","volume":"27 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-07-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Complex Syst. Informatics Model. Q.","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.7250/csimq.2022-31.01","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Conceptual models are an effective and unparalleled means to communicate complicated information to a broad variety of stakeholders in a short period of time. In practice, however, conceptual models often vary in clarity, employed features, communicated content, and overall quality. This can impair model comprehension to the point where models become effectively useless. To counter this, guidelines for creating "good" conceptual models have been suggested. However, these guidelines are often abstract, hard to operationalize across different modeling languages, partly overlapping, or even contradictory. In addition, no comparative study of the proposed guidelines exists so far. The issue is exacerbated by the lack of established metrics to measure or estimate model comprehension for a given conceptual model. In this article, we present the results of a literature survey of 109 publications in the field and discuss metrics for measuring model comprehension, their quantification, and their empirical substantiation. The results show that although several concrete, quantifiable metrics and guidelines have been proposed, concrete evaluative recommendations are largely missing. Moreover, some suggested guidelines contradict one another, and few metrics exist that allow instantiating common frameworks for model quality in a specific way.