RACER: Fast and Accurate Time Series Clustering with Random Convolutional Kernels and Ensemble Methods
Haowen Zhang, Juan Li, Qing Yao
IEEE Internet of Things Journal (IF 10.6), Pub Date: 2026-02-09, DOI: 10.1109/jiot.2026.3662758
Quantifying Model Uncertainty with AutoML and Rashomon Partial Dependence Profiles: Enabling Trustworthy and Human-centered XAI
Mustafa Cavus, Jan N. van Rijn, Przemysław Biecek
Information Systems Frontiers (IF 5.9), Pub Date: 2026-02-09, DOI: 10.1007/s10796-026-10698-3
Trustworthiness of AI systems is a core objective of Human-Centered Explainable AI and relies, among other things, on the explainability and understandability of model outcomes. While automated machine learning tools automate model training, they often generate not only a single “best” model but also a set of near-equivalent alternatives, known as the Rashomon set. This set provides a unique opportunity for human-centered explainability: by exposing the variability among similarly performing models, we can offer users richer and more informative explanations. In this paper, we introduce Rashomon partial dependence profiles, a model-agnostic technique that aggregates feature effect estimates across the Rashomon set. Unlike traditional explanations derived from a single model, Rashomon partial dependence profiles explicitly quantify uncertainty and visualize variability, supporting user trust and helping users understand model behavior well enough to make informed decisions. Additionally, under high-noise conditions, Rashomon partial dependence profiles recover ground-truth feature relationships more accurately than a single-model partial dependence profile. Experiments on synthetic and real-world datasets demonstrate that Rashomon partial dependence profiles reduce average deviation from the ground truth by up to 38%, and that their confidence intervals reliably capture true feature effects. These results highlight how leveraging the Rashomon set can enhance technical rigor while centering explanations on user trust and understanding, in line with Human-Centered Explainable AI principles.
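The core construction lends itself to a short sketch. The following is a minimal, illustrative Python version of the idea, not the authors' implementation: the hand-rolled `pdp` helper, the toy data, and the small pool of retrained models standing in for an AutoML-generated Rashomon set are all assumptions made for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor, GradientBoostingRegressor

def pdp(model, X, feature, grid):
    """Partial dependence of `model` on one feature: for each grid value,
    force that column to the value and average the model's predictions."""
    profile = []
    for v in grid:
        Xv = X.copy()
        Xv[:, feature] = v
        profile.append(model.predict(Xv).mean())
    return np.array(profile)

# Toy data: the target depends nonlinearly on feature 0, plus noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = np.sin(2 * X[:, 0]) + 0.5 * rng.normal(size=500)

# Illustrative stand-in for a Rashomon set: several similarly performing
# models. (A real Rashomon set would be filtered by a loss tolerance
# around the best model, e.g. from an AutoML search.)
models = [
    RandomForestRegressor(n_estimators=100, random_state=s).fit(X, y)
    for s in range(3)
] + [GradientBoostingRegressor(random_state=0).fit(X, y)]

# One partial dependence profile per model, on a shared grid.
grid = np.linspace(X[:, 0].min(), X[:, 0].max(), 50)
profiles = np.stack([pdp(m, X, 0, grid) for m in models])

# Rashomon PDP: aggregate the per-model profiles into a mean effect
# plus a variability band across the set.
mean_effect = profiles.mean(axis=0)
lo, hi = np.percentile(profiles, [5, 95], axis=0)
```

In the paper's setting the band would be a confidence interval over the near-equivalent models in the Rashomon set; here a 5th-95th percentile band across the small pool plays that role, and the spread between `lo` and `hi` is what exposes model-to-model disagreement about the feature effect.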
{"title":"Quantifying Model Uncertainty with AutoML and Rashomon Partial Dependence Profiles: Enabling Trustworthy and Human-centered XAI","authors":"Mustafa Cavus, Jan N. van Rijn, Przemysław Biecek","doi":"10.1007/s10796-026-10698-3","DOIUrl":"https://doi.org/10.1007/s10796-026-10698-3","url":null,"abstract":"Trustworthiness of AI systems is a core objective of Human-Centered Explainable AI, and relies, among other things, on explainability and understandability of the outcome. While automated machine learning tools automate model training, they often generate not only a single “best” model but also a set of near-equivalent alternatives, known as the Rashomon set. This set provides a unique opportunity for human-centered explainability: by exposing variability among similarly performing models, we can offer users richer and more informative explanations. In this paper, we introduce <jats:italic>Rashomon partial dependence profiles</jats:italic> , a model-agnostic technique that aggregates feature effect estimates across the Rashomon set. Unlike traditional explanations derived from a single model, Rashomon partial dependence profiles explicitly quantify uncertainty and visualize variability, further enabling user trust and understanding model behavior to make informed decisions. Additionally, under high-noise conditions, the Rashomon partial dependence profiles more accurately recover ground-truth feature relationships than a single-model partial dependence profile. Experiments on synthetic and real-world datasets demonstrate that Rashomon partial dependence profiles reduce average deviation from the ground truth by up to 38%, and their confidence intervals reliably capture true feature effects. These results highlight how leveraging the Rashomon set can enhance technical rigor while centering explanations on user trust and understanding aligned with Human-centered explainable AI principles.","PeriodicalId":13610,"journal":{"name":"Information Systems Frontiers","volume":"45 1","pages":""},"PeriodicalIF":5.9,"publicationDate":"2026-02-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"146146038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditions for boundedness of under-tuned super-twisting sliding mode control loops
Pietro A. Refosco, Christopher Edwards, Dimitrios Papageorgiou
IEEE Transactions on Automatic Control (IF 6.8), Pub Date: 2026-02-09, DOI: 10.1109/tac.2026.3663109
A decision support framework for estimating the impact of covariate shift in machine learning systems
Matthijs Meire, Steven Hoornaert, Arno De Caigny, Kristof Coussement
Decision Support Systems (IF 7.5), Pub Date: 2026-02-08, DOI: 10.1016/j.dss.2026.114632