Situativität, Funktionalität und Vertrauen: Ergebnisse einer szenariobasierten Interviewstudie zur Erklärbarkeit von KI in der Medizin
(Situativity, Functionality, and Trust: Results of a Scenario-Based Interview Study on the Explainability of AI in Medicine)

M. Marquardt, P. Graf, Eva Jansen, S. Hillmann, Jan-Niklas Voigt-Antons

TATuP - Zeitschrift für Technikfolgenabschätzung in Theorie und Praxis 33(1), published 2024-03-15. DOI: https://doi.org/10.14512/tatup.33.1.41
A central requirement for the use of artificial intelligence (AI) in medicine is its explainability, i.e., the provision of addressee-oriented information about how the system functions. This raises the question of how socially adequate explainability can be designed. To identify evaluation factors, we interviewed healthcare stakeholders about two scenarios: diagnostics and documentation. The scenarios vary the influence that an AI system has on decision-making through the interaction design and the amount of data processed. We present key evaluation factors for explainability at the interactional and procedural levels. Explainability must not situationally disrupt the doctor-patient conversation or call the professional role into question. At the same time, explainability functionally legitimizes an AI system as a second opinion and is central to building trust. A virtual embodiment of the AI system is advantageous for language-based explanations.