
Latest publications from International Journal on Artificial Intelligence Tools

Recommender System Based on Unsupervised Clustering and Supervised Deep Learning
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-05-17 | DOI: 10.1142/s0218213024500167
Dhiraj Khurana, D. Sahni, Yogesh Kumar
{"title":"Recommender System Based on Unsupervised Clustering and Supervised Deep Learning","authors":"Dhiraj Khurana, D. Sahni, Yogesh Kumar","doi":"10.1142/s0218213024500167","DOIUrl":"https://doi.org/10.1142/s0218213024500167","url":null,"abstract":"","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-05-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140962317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Towards a Hybrid Approach Combining Deep Learning and Case-Based Reasoning for Phishing Email Detection
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-05-10 | DOI: 10.1142/s0218213024500155
Mohamed Abdelkarim Remmide, Fatima Boumahdi, Narhimène Boustia
{"title":"Towards a Hybrid Approach Combining Deep Learning and Case-Based Reasoning for Phishing Email Detection","authors":"Mohamed Abdelkarim Remmide, Fatima Boumahdi, Narhimène Boustia","doi":"10.1142/s0218213024500155","DOIUrl":"https://doi.org/10.1142/s0218213024500155","url":null,"abstract":"","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-05-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140992941","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing and Addressing Model Trustworthiness Trade-offs in Trauma Triage
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-25 | DOI: 10.1142/s0218213024600078
Douglas Talbert, Katherine L. Phillips, Katherine E. Brown, Steve Talbert
Trauma triage occurs in suboptimal environments for making consequential decisions. Published triage studies demonstrate the extremes of the complexity/accuracy trade-off, either studying simple models with poor accuracy or very complex models with accuracies nearing published goals. Using a Level I Trauma Center’s registry cases (n = 50 644), this study describes, uses, and derives observations from a methodology to more thoroughly examine this trade-off. This or similar methods can provide the insight needed for practitioners to balance understandability with accuracy. Additionally, this study incorporates an evaluation of group-based fairness into this trade-off analysis to provide an additional dimension of insight into model selection. Lastly, this paper proposes and analyzes a multi-model approach to mitigating trust-related trade-offs. The experiments allow us to draw several conclusions regarding the machine learning models in the domain of trauma triage and demonstrate the value of our trade-off analysis to provide insight into choices regarding model complexity, model accuracy, and model fairness.
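The abstract describes examining the complexity/accuracy trade-off with group-based fairness added as a further dimension, but does not spell out the computation. The sketch below is a minimal illustration of that kind of analysis, not the authors' methodology: it compares a simple and a more complex classifier on accuracy and a demographic-parity-style gap, using synthetic data and placeholder models chosen purely for illustration.

```python
# Illustrative sketch only: a simple vs. complex model compared on accuracy and a
# group-based fairness gap. The data, models, and metric are placeholders, not the
# paper's registry data or methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 6))                # stand-in clinical features
group = rng.integers(0, 2, size=n)         # protected attribute (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * group + rng.normal(size=n) > 0).astype(int)

X_tr, X_te, g_tr, g_te, y_tr, y_te = train_test_split(X, group, y, random_state=0)

def fairness_gap(y_pred, g):
    """Demographic-parity-style gap: |P(pred=1 | g=0) - P(pred=1 | g=1)|."""
    return abs(y_pred[g == 0].mean() - y_pred[g == 1].mean())

for name, model in [("simple (logistic)", LogisticRegression(max_iter=1000)),
                    ("complex (random forest)", RandomForestClassifier(n_estimators=200, random_state=0))]:
    model.fit(X_tr, y_tr)
    pred = model.predict(X_te)
    print(f"{name}: accuracy={(pred == y_te).mean():.3f}, "
          f"fairness gap={fairness_gap(pred, g_te):.3f}")
```

Tabulating such accuracy/fairness pairs across a spectrum of model complexities is one way to make the trade-off the abstract discusses concrete for a practitioner.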
Citations: 0
Reliable Estimation of Causal Effects Using Predictive Models
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-25 | DOI: 10.1142/s0218213024600066
Mahdi Hadj Ali, Yann Le Biannic, Pierre-Henri Wuillemin
In recent years, machine learning algorithms have been widely adopted across many fields due to their efficiency and versatility. However, the complexity of predictive models has led to a lack of interpretability in automatic decision-making. Recent works have improved general interpretability by estimating the contributions of input features to the predictions of a pre-trained model. Drawing on these improvements, practitioners seek to gain causal insights into the underlying data-generating mechanisms. To this end, works have attempted to integrate causal knowledge into interpretability, as non-causal techniques can lead to paradoxical explanations. In this paper, we argue that each question about a causal effect requires its own reasoning and that relying on an initial predictive model trained on an arbitrary set of variables may result in quantification problems when estimating all possible effects. As an alternative, we advocate for a query-driven methodology that addresses each causal question separately. Assuming that the causal structure relating the variables is known, we propose to employ the tools of causal inference to quantify a particular effect as a formula involving observable probabilities. We then derive conditions on the selection of variables to train a predictive model that is tailored for the causal question of interest. Finally, we identify suitable eXplainable AI (XAI) techniques to estimate causal effects from the model predictions. Furthermore, we introduce a novel method for estimating direct effects through intervention on causal mechanisms.
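The abstract's notion of quantifying a particular effect "as a formula involving observable probabilities" can be illustrated with the classic backdoor adjustment, P(Y=1 | do(X=x)) = Σ_z P(Y=1 | X=x, Z=z) · P(Z=z). The sketch below is only a generic example under an assumed causal graph in which Z confounds X and Y; the paper's query-driven pipeline and XAI-based estimation are not reproduced here.

```python
# Minimal sketch of a causal effect expressed through observable probabilities,
# using the backdoor adjustment on synthetic data. The graph (Z -> X, Z -> Y,
# X -> Y), the data, and the discrete confounder are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
Z = rng.integers(0, 2, size=n)                 # observed confounder
X = rng.binomial(1, 0.2 + 0.6 * Z)             # treatment depends on Z
Y = rng.binomial(1, 0.1 + 0.3 * X + 0.4 * Z)   # outcome depends on X and Z

def backdoor_effect(x_val):
    """Estimate P(Y=1 | do(X=x_val)) by adjusting for Z."""
    total = 0.0
    for z_val in (0, 1):
        p_z = (Z == z_val).mean()
        mask = (X == x_val) & (Z == z_val)
        total += Y[mask].mean() * p_z
    return total

naive = Y[X == 1].mean() - Y[X == 0].mean()          # confounded contrast
adjusted = backdoor_effect(1) - backdoor_effect(0)   # should approach 0.3
print(f"naive difference: {naive:.3f}, backdoor-adjusted effect: {adjusted:.3f}")
```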
Citations: 0
Fairness for Deep Learning Predictions Using Bias Parity Score Based Loss Function Regularization
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-25 | DOI: 10.1142/s0218213024600030
Bhanu Jain, Manfred Huber, R. Elmasri
Rising acceptance of machine learning-driven decision support systems underscores the need for ensuring fairness for all stakeholders. This work proposes a novel approach to increase a Neural Network model’s fairness during the training phase. We offer a framework to create a family of diverse fairness-enhancing regularization components that can be used in tandem with the widely accepted binary cross-entropy-based accuracy loss. We use Bias Parity Score (BPS), a metric that quantifies model bias with a single value, to build loss functions pertaining to different statistical measures — even for those that may not be developed yet. We analyze the behavior and impact of the newly minted regularization components on bias. We explore their impact in the realm of recidivism and census-based adult income prediction. The results illustrate that apt fairness loss functions can mitigate bias without forsaking accuracy even for imbalanced datasets.
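As a rough illustration of pairing an accuracy loss with a fairness regularizer, the sketch below adds a differentiable group-gap penalty to binary cross-entropy. The penalty is only a demographic-parity-style stand-in: the paper's Bias Parity Score is defined over configurable statistical measures, so the exact penalty form and the lam weight here are assumptions, not the authors' formulation.

```python
# Minimal sketch, assuming a demographic-parity-style surrogate for the fairness
# regularizer; not the paper's BPS definition.
import torch
import torch.nn.functional as F

def group_gap_penalty(probs, group):
    """Absolute gap in mean predicted positive rate between two groups."""
    return (probs[group == 0].mean() - probs[group == 1].mean()).abs()

def fairness_regularized_loss(logits, targets, group, lam=1.0):
    probs = torch.sigmoid(logits)
    bce = F.binary_cross_entropy(probs, targets)   # standard accuracy loss
    return bce + lam * group_gap_penalty(probs, group)

# Toy usage: one gradient step on a linear model with synthetic data.
torch.manual_seed(0)
X = torch.randn(256, 8)
y = (X[:, 0] > 0).float()
group = (torch.rand(256) > 0.5).long()
model = torch.nn.Linear(8, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

loss = fairness_regularized_loss(model(X).squeeze(-1), y, group, lam=0.5)
opt.zero_grad()
loss.backward()
opt.step()
print(f"combined loss: {loss.item():.4f}")
```

Because the penalty is computed from the same predicted probabilities as the accuracy loss, it back-propagates through the network alongside the cross-entropy term, which is the mechanism the abstract describes at a high level.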
Citations: 0
Effects of Explanation Types on User Satisfaction and Performance in Human-agent Teams
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-25 | DOI: 10.1142/s0218213024600042
Bryan Lavender, Sami Abuhaimed, Sandip Sen
Automated agents, with rapidly increasing capabilities and ease of deployment, will assume more key and decisive roles in our societies. We will encounter and work together with such agents in diverse domains and even in peer roles. To be trusted and for seamless coordination, these agents would be expected and required to explain their decision making, behaviors, and recommendations. We are interested in developing mechanisms that can be used by human-agent teams to maximally leverage relative strengths of human and automated reasoners. We are interested in ad hoc teams in which team members start to collaborate, often to respond to emergencies or short-term opportunities, without significant prior knowledge about each other. In this study, we use virtual ad hoc teams, consisting of a human and an agent, collaborating over a few episodes where each episode requires them to complete a set of tasks chosen from available task types. Team members are initially unaware of the capabilities of their partners for the available task types, and the agent task allocator must adapt the allocation process to maximize team performance. It is important in collaborative teams of humans and agents to establish user confidence and satisfaction, as well as to produce effective team performance. Explanations can increase user trust in agent team members and in team decisions. The focus of this paper is on analyzing how explanations of task allocation decisions can influence both user performance and the human workers’ perspective, including factors such as motivation and satisfaction. We evaluate different types of explanation, such as positive, strength-based explanations and negative, weakness-based explanations, to understand (a) how satisfaction and performance are improved when explanations are presented, and (b) how factors such as confidence, understandability, motivation, and explanatory power correlate with satisfaction and performance. We run experiments on the CHATboard platform that allows virtual collaboration over multiple episodes of task assignments, with MTurk workers. We present our analysis of the results and conclusions related to our research hypotheses.
Citations: 0
On Bounding the Behavior of Neurons
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-25 | DOI: 10.1142/s0218213024600029
Richard Borowski, Arthur Choi
A neuron with binary inputs and a binary output represents a Boolean function. Our goal is to extract this Boolean function into a tractable representation that will facilitate the explanation and formal verification of a neuron’s behavior. Unfortunately, extracting a neuron’s Boolean function is in general an NP-hard problem. However, it was recently shown that prime implicants of this Boolean function can be enumerated efficiently, with only polynomial time delay. Building on this result, we first propose a best-first search algorithm that is able to incrementally tighten the inner and outer bounds of a neuron’s Boolean function. Second, we show that these bounds correspond to truncated prime-implicant covers of the Boolean function. Next, we show how these bounds can be propagated in an elementary class of neural networks. Finally, we provide case studies that highlight our ability to bound the behavior of neurons.
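For intuition on the premise that a binary-input, binary-output neuron is a Boolean function, the sketch below extracts that function by brute force for a tiny threshold neuron and tests whether a partial assignment is an implicant. The exhaustive enumeration is exponential in the number of inputs; the paper's polynomial-delay prime-implicant enumeration and best-first bounding algorithm are not implemented here, and the weights are hypothetical.

```python
# Illustrative sketch only: truth-table extraction and an implicant check for a
# tiny binary threshold neuron. Not the paper's enumeration algorithm.
from itertools import product

weights = [2.0, -1.5, 1.0]   # hypothetical trained weights
bias = -0.5                  # hypothetical bias

def neuron(x):
    """Binary threshold neuron: output 1 iff w.x + b >= 0."""
    return int(sum(w * xi for w, xi in zip(weights, x)) + bias >= 0)

# The neuron's Boolean function as a truth table over all 2^n binary inputs.
truth_table = {x: neuron(x) for x in product((0, 1), repeat=len(weights))}

def is_implicant(partial):
    """partial maps input index -> fixed value; True if every completion outputs 1."""
    return all(out == 1
               for x, out in truth_table.items()
               if all(x[i] == v for i, v in partial.items()))

print(truth_table)
print("x0=1, x1=0 is an implicant:", is_implicant({0: 1, 1: 0}))
```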
Citations: 0
Predictive Policing: A Fairness-aware Approach
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-25 | DOI: 10.1142/s0218213024600054
Ava Downey, Sheikh Rabiul Islam, Md Kamruzzman Sarker
As Artificial Intelligence (AI) systems become increasingly embedded in our daily lives, it is of utmost importance to ensure that they are both fair and reliable. Regrettably, this is not always the case for predictive policing systems, as evidence shows biases based on age, race, and sex, leading to wrongful identifications of individuals as potential criminals. Given the existing criticism of the system’s unjust treatment of minority groups, it becomes essential to address and mitigate this concerning trend. This study delved into the infusion of domain knowledge in the predictive policing system, aiming to minimize prevailing fairness issues. The experimental results indicate a considerable increase in fairness across all metrics for all protected classes, thus fostering greater trust in the predictive policing system by reducing the unfair treatment of individuals.
Citations: 0
Advances in Explainable, Fair, and Trustworthy AI
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-22 | DOI: 10.1142/s0218213024030015
Sheikh Rabiul Islam, Ingrid Russell, William Eberle, Douglas Talbert, Md Golam Moula Mehedi Hasan
This special issue encapsulates the multifaceted landscape of contemporary challenges and innovations in Artificial Intelligence (AI) and Machine Learning (ML), with a particular focus on issues related to explainability, fairness, and trustworthiness. The exploration begins with the computational intricacies of understanding and explaining the behavior of binary neurons within neural networks. Simultaneously, ethical dimensions in AI are scrutinized, emphasizing the nuanced considerations required in defining autonomous ethical agents. The pursuit of fairness is exemplified through frameworks and methodologies in machine learning, addressing biases and promoting trust, particularly in predictive policing systems. Human-agent interaction dynamics are elucidated, revealing the nuanced relationship between task allocation, performance, and user satisfaction. The imperative of interpretability in complex predictive models is highlighted, emphasizing a query-driven methodology. Lastly, in the context of trauma triage, the study underscores the delicate trade-off between model accuracy and practitioner-friendly interpretability, introducing innovative strategies to address biases and trust-related metrics.
Citations: 0
Winners of Nikolaos Bourbakis Award for 2023
IF 1.1 | CAS Zone 4, Computer Science | Q3 Computer Science | Pub Date: 2024-04-22 | DOI: 10.1142/s0218213024820013
{"title":"Winners of Nikolaos Bourbakis Award for 2023","authors":"","doi":"10.1142/s0218213024820013","DOIUrl":"https://doi.org/10.1142/s0218213024820013","url":null,"abstract":"","PeriodicalId":50280,"journal":{"name":"International Journal on Artificial Intelligence Tools","volume":null,"pages":null},"PeriodicalIF":1.1,"publicationDate":"2024-04-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140675055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0