MoDALAS: addressing assurance for learning-enabled autonomous systems in the face of uncertainty
Michael Austin Langford, Kenneth H. Chan, Jonathon Emil Fleck, Philip K. McKinley, Betty H. C. Cheng
Journal: Software and Systems Modeling, pp. 1-21
DOI: 10.1007/s10270-023-01090-9
Published: 2023-03-18
Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10024308/pdf/
Citations: 0
Abstract
Increasingly, safety-critical systems include artificial intelligence and machine learning components (i.e., learning-enabled components (LECs)). However, when behavior is learned in a training environment that fails to fully capture real-world phenomena, the response of an LEC to untrained phenomena is uncertain and therefore cannot be assured as safe. Automated methods are needed for self-assessment and adaptation to decide when learned behavior can be trusted. This work introduces a model-driven approach to manage self-adaptation of a learning-enabled system (LES) to account for run-time contexts for which the learned behavior of LECs cannot be trusted. The resulting framework enables an LES to monitor and evaluate goal models at run time to determine whether or not LECs can be expected to meet functional objectives and enables system adaptation accordingly. Using this framework enables stakeholders to have more confidence that LECs are used only in contexts comparable to those validated at design time.
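The adaptation logic described above can be pictured as a run-time gate: the system evaluates its goal model against the current context and only relies on the LEC when that context resembles what was validated at design time. The following is a minimal, hypothetical sketch of such a gate; the names (`Goal`, `select_behavior`, `familiarity`), the threshold value, and the behavior modes are illustrative assumptions, not the MoDALAS implementation.

```python
# Hypothetical sketch (names and thresholds are illustrative, not taken from
# the MoDALAS framework): a run-time monitor that gates use of a
# learning-enabled component (LEC) based on whether the current context
# resembles contexts validated at design time.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Goal:
    """A leaf goal from a goal model, with a run-time satisfaction check."""
    name: str
    is_satisfied: Callable[[dict], bool]  # evaluated against current context

def select_behavior(context: dict, goals: list[Goal],
                    familiarity: float, threshold: float = 0.8) -> str:
    """Return the behavior mode the system should adapt to.

    familiarity: a [0, 1] score of how similar the current context is to the
    LEC's training/validation data (assumed to come from a separate monitor).
    """
    if familiarity < threshold:
        # Context is unlike anything validated at design time:
        # do not trust the LEC; fall back to a conservative controller.
        return "fail-safe"
    if all(g.is_satisfied(context) for g in goals):
        return "use-LEC"
    # Goals violated even though the context is familiar: adapt (e.g., degrade).
    return "degraded"

# Example: a goal requiring adequate detector confidence.
goals = [Goal("obstacle-detection", lambda c: c["detector_confidence"] > 0.9)]
print(select_behavior({"detector_confidence": 0.95}, goals, familiarity=0.95))
print(select_behavior({"detector_confidence": 0.95}, goals, familiarity=0.3))
```

The first call prints `use-LEC` (familiar context, goal satisfied); the second prints `fail-safe` (unfamiliar context, so the learned behavior is not trusted regardless of goal status).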
About the journal
We invite authors to submit papers that discuss and analyze research challenges and experiences pertaining to software and system modeling languages, techniques, tools, and practices. The journal publishes on a wide range of software and systems modeling concerns; the following topic areas are of special interest:
Domain-specific models and modeling standards;
Model-based testing techniques;
Model-based simulation techniques;
Formal syntax and semantics of modeling languages such as the UML;
Rigorous model-based analysis;
Model composition, refinement, and transformation;
Software language engineering;
Modeling languages in science and engineering;
Language adaptation and composition;
Metamodeling techniques;
Measuring the quality of models and languages;
Ontological approaches to model engineering;
Generating test and code artifacts from models;
Model synthesis;
Methodology;
Model development tool environments;
Modeling cyber-physical systems;
Data-intensive modeling;
Derivation of explicit models from data;
Case studies and experience reports with significant modeling lessons learned;
Comparative analyses of modeling languages and techniques;
Scientific assessment of modeling practices.