{"title":"AI Assurance for the Public – Trust but Verify, Continuously","authors":"P. Laplante, Rick Kuhn","doi":"10.1109/STC55697.2022.00032","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) systems are increasingly seen in many public facing applications such as self-driving land vehicles, autonomous aircraft, medical systems and financial systems. AI systems should equal or surpass human performance, but given the consequences of failure or erroneous or unfair decisions in these systems, how do we assure the public that these systems work as intended and will not cause harm? For example, that an autonomous vehicle does not crash or that intelligent credit scoring system is not biased, even after passing substantial acceptance testing prior to release. In this paper we discuss AI trust and assurance and related concepts, that is, assured autonomy, particularly for critical systems. Then we discuss how to establish trust through AI assurance activities throughout the system development lifecycle. Finally, we introduce a “trust but verify continuously” approach to AI assurance, which describes assured autonomy activities in a model based systems development context and includes postdelivery activities for continuous assurance.","PeriodicalId":170123,"journal":{"name":"2022 IEEE 29th Annual Software Technology Conference (STC)","volume":"60 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2022-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2022 IEEE 29th Annual Software Technology Conference (STC)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/STC55697.2022.00032","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Abstract
Artificial intelligence (AI) systems increasingly appear in public-facing applications such as self-driving land vehicles, autonomous aircraft, medical systems, and financial systems. AI systems should equal or surpass human performance, but given the consequences of failure or of erroneous or unfair decisions in these systems, how do we assure the public that they work as intended and will not cause harm? For example, how do we assure that an autonomous vehicle does not crash, or that an intelligent credit-scoring system is not biased, even after the system has passed substantial acceptance testing prior to release? In this paper we discuss AI trust and assurance and the related concept of assured autonomy, particularly for critical systems. We then discuss how to establish trust through AI assurance activities throughout the system development lifecycle. Finally, we introduce a “trust but verify continuously” approach to AI assurance, which describes assured-autonomy activities in a model-based systems development context and includes post-delivery activities for continuous assurance.
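To make the post-delivery, continuous-verification idea concrete, the sketch below shows one way a deployed AI decision component could be wrapped in a runtime assurance monitor that gates each decision and records violations for later review. This is a minimal illustration under stated assumptions, not the authors' implementation: the class name, the confidence gate, the drift check, and the toy scoring model are all hypothetical.

```python
# Minimal sketch of a "trust but verify, continuously" runtime monitor.
# Assumptions (not from the paper): a per-decision confidence score is
# available, and input drift is summarized by a sliding-window mean.

from dataclasses import dataclass, field
from statistics import fmean
from typing import Callable, List, Optional

@dataclass
class AssuranceMonitor:
    """Wraps a deployed model and verifies each decision at runtime."""
    model: Callable[[float], float]
    confidence_floor: float              # reject decisions below this confidence
    train_mean: float                    # summary statistic of training inputs
    drift_bound: float                   # allowed deviation of live input mean
    recent_inputs: List[float] = field(default_factory=list)
    violations: List[str] = field(default_factory=list)

    def predict(self, x: float, confidence: float) -> Optional[float]:
        """Return the model's output only if the runtime checks pass."""
        self.recent_inputs.append(x)
        # Check 1: per-decision confidence gate.
        if confidence < self.confidence_floor:
            self.violations.append(f"low confidence {confidence:.2f} on input {x}")
            return None  # defer to a human operator or a safe fallback
        # Check 2: distribution drift over a sliding window of live inputs.
        window = self.recent_inputs[-100:]
        if abs(fmean(window) - self.train_mean) > self.drift_bound:
            self.violations.append(f"input drift (window mean {fmean(window):.2f})")
            return None
        return self.model(x)

# Usage: a toy credit-scoring model, guarded after delivery.
monitor = AssuranceMonitor(model=lambda x: 300 + 5 * x,
                           confidence_floor=0.8,
                           train_mean=50.0,
                           drift_bound=20.0)
print(monitor.predict(55.0, confidence=0.95))   # passes both checks -> 575.0
print(monitor.predict(55.0, confidence=0.40))   # blocked: low confidence -> None
print(monitor.violations)                        # log for continuous-assurance review
```

The design choice here mirrors the paper's framing: acceptance testing before release establishes initial trust, while the monitor's violation log supplies the evidence stream for verifying that trust continuously after delivery.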