Use of the URREF towards Information Fusion Accountability Evaluation

E. Blasch, J. Pieter de Villiers, Gregor Pavin, A. Jousselme, P. Costa, Kathryn B. Laskey, J. Ziegler

2021 IEEE 24th International Conference on Information Fusion (FUSION), November 2021. DOI: 10.23919/fusion49465.2021.9626847
The EXCITE (EXplainability Capability Information Testing Evaluation) approach assesses information fusion interpretability, explainability, and accountability for uncertainty analysis. Across the many data and information fusion techniques, there is a need to understand an information fusion system's capability for the intended application. While many data fusion approaches support uncertainty reduction from measured data, other factors also contribute, such as data source credibility, knowledge completeness, multiresolution, and problem alignment. To align the data fusion approach with the user's intended action, a representation of the uncertainty is needed. The paper highlights an approach that leverages recent research efforts in interpretability as methods of data handling in the Uncertainty Representation and Reasoning Evaluation Framework (URREF), while also proposing explainability and accountability as representation criteria. Accountability is closely associated with the selected decision and the outcome, which has these four attributes: amount of data towards the result, distance score of decision selection, accuracy/credibility/timeliness of results, and risk analysis. The risk analysis includes: verifiability, observability, liability, transparency, attributability, and remediability. Results are demonstrated on a notional example.
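The four accountability attributes and six risk-analysis criteria named above can be pictured as a simple scoring structure. The sketch below is purely illustrative: the paper does not publish a scoring formula, so the equal weighting, the 0-to-1 normalization, and all names (`AccountabilityAssessment`, `RISK_CRITERIA`, etc.) are assumptions, not the authors' method.

```python
from dataclasses import dataclass, field

# The six risk-analysis criteria listed in the abstract.
RISK_CRITERIA = ["verifiability", "observability", "liability",
                 "transparency", "attributability", "remediability"]

@dataclass
class AccountabilityAssessment:
    """Illustrative container for the four accountability attributes.

    All scores are assumed normalized to [0, 1]; the paper does not
    specify a scale or an aggregation rule.
    """
    data_amount: float        # amount of data supporting the result
    decision_distance: float  # distance score of the decision selection
    result_quality: float     # combined accuracy/credibility/timeliness
    risk_scores: dict = field(default_factory=dict)  # per-criterion risk scores

    def risk_score(self) -> float:
        # Average over the six criteria; a missing criterion counts as 0.
        return sum(self.risk_scores.get(c, 0.0)
                   for c in RISK_CRITERIA) / len(RISK_CRITERIA)

    def overall(self) -> float:
        # Equal-weight mean of the four attributes (an assumption,
        # not a formula from the paper).
        return (self.data_amount + self.decision_distance +
                self.result_quality + self.risk_score()) / 4.0
```

A system scoring perfectly on every attribute and criterion would yield `overall() == 1.0`; in practice the per-attribute breakdown, not the aggregate, is what supports accountability review.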