Explained Artificial Intelligence Helps to Integrate Artificial and Human Intelligence Into Medical Diagnostic Systems: Analytical Review of Publications
{"title":"Explained Artificial Intelligence Helps to Integrate Artificial and Human Intelligence Into Medical Diagnostic Systems: Analytical Review of Publications","authors":"M. Farkhadov, Aleksander Eliseev, N. Petukhova","doi":"10.1109/AICT50176.2020.9368576","DOIUrl":null,"url":null,"abstract":"Artificial intelligence-based medical systems can by now diagnose various disorders highly accurately. However, we should stress that despite encouraging and ever improving results, people still distrust such systems. We review relevant publications over the past five years, to identify the main causes of such mistrust and ways to overcome it. Our study showes that the main reasons to distrust these systems are opaque models, blackbox algorithms, and potentially unrepresentful training samples. We demonstrate that explainable artificial intelligence, aimed to create more user-friendly and understandable systems, has become a noticeable new topic in theoretical research and practical development. Another notable trend is to develop approaches to build hybrid systems, where artificial and human intelligence interact according to the teamwork model.","PeriodicalId":136491,"journal":{"name":"2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT)","volume":"30 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2020-10-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2020 IEEE 14th International Conference on Application of Information and Communication Technologies (AICT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/AICT50176.2020.9368576","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Artificial intelligence-based medical systems can now diagnose various disorders with high accuracy. However, despite these encouraging and steadily improving results, people still distrust such systems. We review relevant publications from the past five years to identify the main causes of this mistrust and ways to overcome it. Our study shows that the main reasons to distrust these systems are opaque models, black-box algorithms, and potentially unrepresentative training samples. We demonstrate that explainable artificial intelligence, aimed at creating more user-friendly and understandable systems, has become a noticeable new topic in theoretical research and practical development. Another notable trend is the development of approaches to building hybrid systems in which artificial and human intelligence interact according to a teamwork model.
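To make the explainability theme concrete, below is a minimal illustrative sketch, not taken from the paper, of one widely used technique: permutation feature importance. It scores how much a trained black-box classifier's accuracy drops when each input feature is randomly shuffled, attaching a human-readable importance score to every clinical feature. The dataset and model here are assumptions chosen purely for demonstration.

```python
# Illustrative sketch: permutation feature importance as one example of
# explainable AI for a black-box diagnostic model. Not the paper's method.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Public diagnostic dataset, used here only for illustration.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An opaque ("black-box") model of the kind the review says clinicians distrust.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# larger drops mean the model relies more heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential clinical features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

A ranked list like this is one simple way a diagnostic system can expose its reasoning to a physician, which is the kind of transparency the reviewed literature identifies as a prerequisite for trust.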