An overview of key trustworthiness attributes and KPIs for trusted ML-based systems engineering

Juliette Mattioli, Henri Sohier, Agnès Delaborde, Kahina Amokrane-Ferka, Afef Awadid, Zakaria Chihani, Souhaiel Khalfaoui, Gabriel Pedroza

AI and Ethics, vol. 4, no. 1, pp. 15–25. Published 2024-01-08. DOI: 10.1007/s43681-023-00394-2 (https://link.springer.com/article/10.1007/s43681-023-00394-2)
Once deployed, the adoption of machine learning (ML) depends on its ability to actually deliver the expected service safely and to meet user expectations in terms of quality and continuity of service. For instance, users expect that the technology will not do something it is not supposed to do, e.g., perform actions without informing them. Thus, the use of Artificial Intelligence (AI) in safety-critical systems such as avionics, mobility, defense, and healthcare requires proving their trustworthiness throughout the overall lifecycle (from design to deployment). Based on surveys of quality measures, characteristics, and sub-characteristics of AI systems, the Confiance.ai program (www.confiance.ai) aims to identify the relevant trustworthiness attributes and their associated key performance indicators (KPIs), or the associated methods for assessing the induced level of trust.