Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning demonstrated by energy-efficient building design
{"title":"Explainable AI for engineering design: A unified approach of systems engineering and component-based deep learning demonstrated by energy-efficient building design","authors":"Philipp Geyer , Manav Mahan Singh , Xia Chen","doi":"10.1016/j.aei.2024.102843","DOIUrl":null,"url":null,"abstract":"<div><div>Data-driven models created by machine learning (ML) have gained importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artifacts with better performance and sustainability. However, limited generalization and the black-box nature of these models lead to limited explainability and reusability. To overcome this situation, we developed a component-based approach to create partial component models by ML. This component-based approach aligns deep learning with systems engineering (SE). The key contribution of the component-based method is that activations at interfaces between the components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates interpretable information for explainability of predictions. The large range of possible configurations in composing components allows the examination of novel unseen design cases outside training data. The matching of parameter ranges of components using similar probability distributions produces reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to SE methods and domain knowledge. We examine the performance of the approach in the field of energy-efficient building design: First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data. Especially for representative designs that are different in structure, we observed a much higher accuracy (<em>R</em><sup>2</sup> = 0.94) compared to conventional monolithic methods (<em>R</em><sup>2</sup> <em>=</em> 0.71). Second, we illustrate explainability by demonstrating how sensitivity information from SE and an interpretable model based on rules from low-depth decision trees serve engineering design. Third, we evaluate explainability using qualitative and quantitative methods that demonstrate the matching of preliminary knowledge and data-driven derived strategies and show correctness of activations at component interfaces compared to white-box simulation results (envelope components: <em>R</em><sup>2</sup> = 0.92..0.99; zones: <em>R</em><sup>2</sup> = 0.78..0.93).</div></div>","PeriodicalId":50941,"journal":{"name":"Advanced Engineering Informatics","volume":"62 ","pages":"Article 102843"},"PeriodicalIF":8.0000,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Advanced Engineering Informatics","FirstCategoryId":"5","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1474034624004919","RegionNum":1,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE","Score":null,"Total":0}
引用次数: 0
Abstract
Data-driven models created by machine learning (ML) have gained importance in all fields of design and engineering. They have high potential to assist decision-makers in creating novel artifacts with better performance and sustainability. However, the limited generalization and black-box nature of these models lead to limited explainability and reusability. To overcome these limitations, we developed a component-based approach that uses ML to create partial component models. This component-based approach aligns deep learning with systems engineering (SE). Its key contribution is that the activations at the interfaces between components are interpretable engineering quantities. In this way, the hierarchical component system forms a deep neural network (DNN) that a priori integrates interpretable information for the explainability of predictions. The large range of possible configurations when composing components allows the examination of novel, unseen design cases beyond the training data. Matching the parameter ranges of components by means of similar probability distributions produces reusable, well-generalizing, and trustworthy models. The approach adapts the model structure to SE methods and domain knowledge. We examine the performance of the approach in the field of energy-efficient building design. First, we observed better generalization of the component-based method by analyzing prediction accuracy outside the training data; in particular, for representative designs that differ in structure, we observed much higher accuracy (R² = 0.94) than with conventional monolithic methods (R² = 0.71). Second, we illustrate explainability by demonstrating how sensitivity information from SE and an interpretable model based on rules from low-depth decision trees serve engineering design. Third, we evaluate explainability using qualitative and quantitative methods that demonstrate the match between prior knowledge and data-driven strategies and show the correctness of activations at component interfaces compared with white-box simulation results (envelope components: R² = 0.92–0.99; zones: R² = 0.78–0.93).
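The abstract gives no implementation details, but the core architectural idea, a DNN composed of component sub-models whose interface activations are interpretable engineering quantities, can be illustrated with a minimal PyTorch sketch. The component granularity (wall/window envelope elements feeding a thermal zone), the interface quantity (heat transfer through an element), and all parameter names below are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class EnvelopeComponent(nn.Module):
    """Small MLP for one envelope element (e.g., a wall or window).
    Its scalar output is meant as an interpretable engineering quantity
    (assumed here: heat transfer through the element), not a latent feature,
    so it can be checked against white-box simulation results."""
    def __init__(self, n_params: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_params, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_params), e.g., area, U-value, orientation (assumed inputs)
        return self.net(x)

class ZoneComponent(nn.Module):
    """Combines per-element heat flows with zone-level parameters to predict
    zone energy demand; the composition mirrors the system hierarchy."""
    def __init__(self, n_zone_params: int, hidden: int = 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_zone_params + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, envelope_flows: list, zone_params: torch.Tensor) -> torch.Tensor:
        # The summed per-element flows are themselves an interpretable
        # interface activation between the envelope and zone levels.
        total_flow = torch.stack(envelope_flows, dim=-1).sum(dim=-1)  # (batch, 1)
        return self.net(torch.cat([total_flow, zone_params], dim=-1))

# Composing trained components yields one end-to-end DNN; a new design is
# examined simply by recomposing components in a different configuration.
wall, window = EnvelopeComponent(n_params=3), EnvelopeComponent(n_params=3)
zone = ZoneComponent(n_zone_params=2)
x_wall, x_win = torch.rand(8, 3), torch.rand(8, 3)
x_zone = torch.rand(8, 2)
demand = zone([wall(x_wall), window(x_win)], x_zone)  # (8, 1)
```

The design point this sketch tries to capture is that, unlike a monolithic surrogate, every intermediate activation has an engineering meaning, which is what enables the interface-level validation against simulation reported in the abstract.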
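The abstract also refers to an interpretable model based on rules from low-depth decision trees. A generic way to obtain such rules is to fit a shallow tree as a surrogate to a model's predictions over sampled designs; the sketch below shows this pattern with synthetic stand-in data and hypothetical feature names, and is not the paper's specific procedure.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor, export_text

# Stand-in data: in practice, X would be sampled design parameters and
# y the corresponding predictions of the component-based DNN.
rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 4))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.3 * X[:, 2] + 0.05 * rng.normal(size=500)

# Limiting the depth keeps the extracted rule set small enough for
# designers to read as if-then design guidance.
surrogate = DecisionTreeRegressor(max_depth=3).fit(X, y)
print(export_text(
    surrogate,
    feature_names=["u_wall", "window_ratio", "infiltration", "orientation"],
))
```

Each root-to-leaf path of the printed tree is one rule (e.g., a threshold on `u_wall` combined with one on `window_ratio`), which is the kind of low-depth rule structure the abstract describes as serving engineering design.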
Journal overview
Advanced Engineering Informatics is an international journal that solicits research papers with an emphasis on 'knowledge' and 'engineering applications'. The journal seeks original papers that report progress in applying methods of engineering informatics. These papers should have engineering relevance and help provide a scientific base for more reliable, spontaneous, and creative engineering decision-making. Additionally, papers should demonstrate the science of supporting knowledge-intensive engineering tasks and validate the generality, power, and scalability of new methods through rigorous evaluation, preferably both qualitative and quantitative. Abstracting and indexing for Advanced Engineering Informatics include Science Citation Index Expanded, Scopus and INSPEC.