
Latest publications from AI Magazine

Fairness amidst non-IID graph data: A literature review
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-01-28 | DOI: 10.1002/aaai.12212
Wenbin Zhang, Shuigeng Zhou, Toby Walsh, Jeremy C. Weiss

The growing importance of understanding and addressing algorithmic bias in artificial intelligence (AI) has led to a surge in research on AI fairness, which often assumes that the underlying data are independent and identically distributed (IID). However, real-world data frequently exist in non-IID graph structures that capture connections among individual units. To effectively mitigate bias in AI systems, it is essential to bridge the gap between traditional fairness literature, designed for IID data, and the prevalence of non-IID graph data. This survey reviews recent advancements in fairness amidst non-IID graph data, including the newly introduced fair graph generation and the commonly studied fair graph classification. In addition, available datasets and evaluation metrics for future research are identified, the limitations of existing work are highlighted, and promising future directions are proposed.
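
To make the notion of fairness on graph data concrete, the sketch below computes a standard group-fairness measure (statistical parity difference) over node-level predictions on a synthetic graph. It is a generic illustration in Python, not code from the survey; the graph, sensitive attribute, and predictions are all made up.

```python
# Minimal, illustrative sketch: statistical parity difference for node predictions
# on a synthetic graph. Nothing here comes from the surveyed methods.
import networkx as nx
import numpy as np

rng = np.random.default_rng(0)

# Synthetic social graph; each node gets a binary sensitive attribute and a
# (pretend) model prediction for a positive outcome, deliberately skewed by group.
G = nx.erdos_renyi_graph(n=200, p=0.05, seed=0)
sensitive = {v: int(rng.integers(0, 2)) for v in G.nodes}
prediction = {v: bool(rng.random() < 0.4 + 0.2 * sensitive[v]) for v in G.nodes}

def statistical_parity_difference(pred, group):
    """P(Y_hat = 1 | group = 1) - P(Y_hat = 1 | group = 0)."""
    g0 = [pred[v] for v in pred if group[v] == 0]
    g1 = [pred[v] for v in pred if group[v] == 1]
    return float(np.mean(g1) - np.mean(g0))

print("Statistical parity difference:", statistical_parity_difference(prediction, sensitive))
```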

{"title":"Fairness amidst non-IID graph data: A literature review","authors":"Wenbin Zhang,&nbsp;Shuigeng Zhou,&nbsp;Toby Walsh,&nbsp;Jeremy C. Weiss","doi":"10.1002/aaai.12212","DOIUrl":"https://doi.org/10.1002/aaai.12212","url":null,"abstract":"<p>The growing importance of understanding and addressing algorithmic bias in artificial intelligence (AI) has led to a surge in research on AI fairness, which often assumes that the underlying data are independent and identically distributed (IID). However, real-world data frequently exist in non-IID graph structures that capture connections among individual units. To effectively mitigate bias in AI systems, it is essential to bridge the gap between traditional fairness literature, designed for IID data, and the prevalence of non-IID graph data. This survey reviews recent advancements in fairness amidst non-IID graph data, including the newly introduced fair graph generation and the commonly studied fair graph classification. In addition, available datasets and evaluation metrics for future research are identified, the limitations of existing work are highlighted, and promising future directions are proposed.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12212","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Beyond scaleup: Knowledge-aware parsimony learning from deep networks
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-01-28 | DOI: 10.1002/aaai.12211
Quanming Yao, Yongqi Zhang, Yaqing Wang, Nan Yin, James Kwok, Qiang Yang

The brute-force scaleup of training datasets, learnable parameters, and computation power has become a prevalent strategy for developing more robust learning models. However, due to bottlenecks in data, computation, and trust, the sustainability of this strategy is a serious concern. In this paper, we attempt to address this issue in a parsimonious manner (i.e., achieving greater potential with simpler models). The key is to drive models with domain-specific knowledge, such as symbols, logic, and formulas, instead of relying purely on scaleup. This approach allows us to build a framework that uses such knowledge as “building blocks” to achieve parsimony in model design, training, and interpretation. Empirical results show that our methods surpass those that simply follow the scaling law. We also demonstrate our framework in AI for science, specifically on the problem of drug-drug interaction prediction. We hope this research fosters more diverse technical roadmaps in the era of foundation models.

{"title":"Beyond scaleup: Knowledge-aware parsimony learning from deep networks","authors":"Quanming Yao,&nbsp;Yongqi Zhang,&nbsp;Yaqing Wang,&nbsp;Nan Yin,&nbsp;James Kwok,&nbsp;Qiang Yang","doi":"10.1002/aaai.12211","DOIUrl":"https://doi.org/10.1002/aaai.12211","url":null,"abstract":"<p>The brute-force scaleup of training datasets, learnable parameters and computation power, has become a prevalent strategy for developing more robust learning models. However, due to bottlenecks in data, computation, and trust, the sustainability of this strategy is a serious concern. In this paper, we attempt to address this issue in a parsimonious manner (i.e., achieving greater potential with simpler models). The key is to drive models using domain-specific knowledge, such as symbols, logic, and formulas, instead of purely relying on scaleup. This approach allows us to build a framework that uses this knowledge as “building blocks” to achieve parsimony in model design, training, and interpretation. Empirical results show that our methods surpass those that typically follow the scaling law. We also demonstrate our framework in AI for science, specifically in the problem of drug-drug interaction prediction. We hope our research can foster more diverse technical roadmaps in the era of foundation models.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12211","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143119937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The role and significance of state-building as ensuring national security in the context of artificial intelligence development
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12207
Vitaliy Gumenyuk, Anatolii Nikitin, Oleksandr Bondar, Iaroslav Zhydovtsev, Hanna Yermakova

Artificial intelligence (AI) has emerged as a major technology and represents a fundamental, revolutionary innovation of our time with the potential to significantly change the global landscape. As AI develops further, state-building plays a central role in ensuring national security: countries are tasked with developing legal frameworks to supervise the development and application of AI, and governments should commit resources to AI research and development to secure access to cutting-edge technology. Civil society and public engagement also exert a critical influence on the formation of AI policy. Civic organizations can raise public awareness of AI's risks and opportunities, help ensure transparency and accountability in government action, and advocate for responsible AI policies, while public participation helps governments understand citizens' aspirations regarding AI. This study explores the role and importance of state-building in ensuring national security in the context of AI development and examines how civil society and public participation can effectively shape AI policy. The topic offers diverse research and analytical opportunities for a deeper understanding of the interactions and mutual influences between statehood and artificial intelligence in ensuring national security. The study examines the opportunities and threats that AI poses to national security and considers strategies that countries can adopt to ensure security in this area. Based on the findings, recommendations are made for governments and civil society to improve the effectiveness of public participation in formulating AI policies.

{"title":"The role and significance of state-building as ensuring national security in the context of artificial intelligence development","authors":"Vitaliy Gumenyuk,&nbsp;Anatolii Nikitin,&nbsp;Oleksandr Bondar,&nbsp;Iaroslav Zhydovtsev,&nbsp;Hanna Yermakova","doi":"10.1002/aaai.12207","DOIUrl":"https://doi.org/10.1002/aaai.12207","url":null,"abstract":"<p>Artificial intelligence (AI) has emerged as a major technology and represents a fundamental and revolutionary innovation of our time that has the potential to significantly change the global scenario. In the context of further development of artificial intelligence, state establishment plays a central role in ensuring national security. Countries are tasked with developing legal frameworks for the development and application of AI. Additionally, governments should commit resources to AI research and development to ensure access to cutting-edge technology. As AI continues to evolve, nation-building remains crucial for the protection of national security. Countries must shoulder the responsibility of establishing legal structures to supervise the progression and implementation of artificial intelligence. Investing in AI research and development is essential to secure access to cutting-edge technology. Gracious society and open engagement apply critical impact on forming AI approaches. Civic organizations can contribute to expanding open mindfulness of the related dangers and openings of AI, guaranteeing straightforwardness and responsibility in legislative activities, and pushing for the creation of capable AI approaches. Open interest can help governments in comprehending the yearnings of citizens with respect to AI approaches. This study explores the role and importance of nation-building in ensuring national security in the context of the development of artificial intelligence. It also examines how civil society and public participation can effectively shape AI policy. The topic offers diverse research and analytical opportunities that enable a deeper understanding of the interactions and mutual influences between statehood and artificial intelligence in the context of ensuring national security. It examines the potential and threats that artificial intelligence poses to national security and considers strategies that countries can adopt to ensure security in this area. Based on the research findings, recommendations and suggestions are made for governments and civil society to improve the effectiveness of public participation in formulating AI policies.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12207","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113763","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A survey of security and privacy issues of machine unlearning
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12209
Aobo Chen, Yangyi Li, Chenxu Zhao, Mengdi Huai

Machine unlearning is a cutting-edge technology that embodies the privacy legal principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch and has gained significant attention in the field of artificial intelligence in recent years. However, the development of machine unlearning research is associated with inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we present the first comprehensive survey of security and privacy issues associated with machine unlearning, organized as a systematic classification across different levels and criteria. Specifically, we begin by investigating unlearning-based security attacks, where adversaries exploit vulnerabilities in the unlearning process to compromise the security of ML models. We then conduct a thorough examination of privacy risks associated with the adoption of machine unlearning. Additionally, we explore existing countermeasures and mitigation strategies designed to protect models from malicious unlearning-based attacks targeting both security and privacy. Further, we provide a detailed comparison between machine unlearning-based security and privacy attacks and traditional malicious attacks. Finally, we discuss promising future research directions for the security and privacy issues posed by machine unlearning, offering insights into potential solutions and advancements in this evolving field.
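
As a point of reference for the threats discussed above, the sketch below shows the naive "exact unlearning" baseline: retraining from scratch on the retained data after a deletion request, and measuring how much predictions shift on the forgotten points (one signal an adversary might probe). This is a generic illustration with a toy dataset and scikit-learn, not any method from the survey.

```python
# Exact-unlearning baseline sketch: retrain on retained data, then compare the
# original and unlearned models on the forgotten examples. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

original = LogisticRegression(max_iter=1000).fit(X, y)

# A "right to be forgotten" request arrives for these training examples.
forget_idx = np.arange(50)
retain = np.ones(len(X), dtype=bool)
retain[forget_idx] = False

unlearned = LogisticRegression(max_iter=1000).fit(X[retain], y[retain])

# Prediction shift on the forgotten points: the kind of observable difference that
# unlearning-based privacy attacks discussed in the survey try to exploit.
shift = np.abs(original.predict_proba(X[forget_idx])[:, 1]
               - unlearned.predict_proba(X[forget_idx])[:, 1])
print("Mean prediction shift on forgotten points:", shift.mean())
```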

{"title":"A survey of security and privacy issues of machine unlearning","authors":"Aobo Chen,&nbsp;Yangyi Li,&nbsp;Chenxu Zhao,&nbsp;Mengdi Huai","doi":"10.1002/aaai.12209","DOIUrl":"https://doi.org/10.1002/aaai.12209","url":null,"abstract":"<p>Machine unlearning is a cutting-edge technology that embodies the privacy legal principle of the right to be forgotten within the realm of machine learning (ML). It aims to remove specific data or knowledge from trained models without retraining from scratch and has gained significant attention in the field of artificial intelligence in recent years. However, the development of machine unlearning research is associated with inherent vulnerabilities and threats, posing significant challenges for researchers and practitioners. In this article, we provide the first comprehensive survey of security and privacy issues associated with machine unlearning by providing a systematic classification across different levels and criteria. Specifically, we begin by investigating unlearning-based security attacks, where adversaries exploit vulnerabilities in the unlearning process to compromise the security of machine learning (ML) models. We then conduct a thorough examination of privacy risks associated with the adoption of machine unlearning. Additionally, we explore existing countermeasures and mitigation strategies designed to protect models from malicious unlearning-based attacks targeting both security and privacy. Further, we provide a detailed comparison between machine unlearning-based security and privacy attacks and traditional malicious attacks. Finally, we discuss promising future research directions for security and privacy issues posed by machine unlearning, offering insights into potential solutions and advancements in this evolving field.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12209","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113761","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Geometric Machine Learning
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-01-10 | DOI: 10.1002/aaai.12210
Melanie Weber

A cornerstone of machine learning is the identification and exploitation of structure in high-dimensional data. While classical approaches assume that data lies in a high-dimensional Euclidean space, geometric machine learning methods are designed for non-Euclidean data, including graphs, strings, and matrices, or data characterized by symmetries inherent in the underlying system. In this article, we review geometric approaches for uncovering and leveraging structure in data and how an understanding of data geometry can lead to the development of more effective machine learning algorithms with provable guarantees.
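
As a small concrete example of why geometry matters, the sketch below compares the Euclidean distance between two points with their geodesic distance in the Poincaré disk model of hyperbolic space, a non-Euclidean geometry often used for hierarchical data. It is an illustrative calculation only, not code from the article.

```python
# Illustrative only: Euclidean vs. hyperbolic (Poincare-disk) distance between two
# points inside the unit disk. Near the boundary, hyperbolic distances grow rapidly.
import numpy as np

def poincare_distance(u, v):
    """Geodesic distance in the Poincare disk (requires ||u|| < 1 and ||v|| < 1)."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    sq = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return float(np.arccosh(1.0 + 2.0 * sq / denom))

a, b = np.array([0.1, 0.0]), np.array([0.9, 0.0])
print("Euclidean distance :", float(np.linalg.norm(a - b)))   # 0.8
print("Hyperbolic distance:", poincare_distance(a, b))        # considerably larger
```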

{"title":"Geometric Machine Learning","authors":"Melanie Weber","doi":"10.1002/aaai.12210","DOIUrl":"https://doi.org/10.1002/aaai.12210","url":null,"abstract":"<p>A cornerstone of machine learning is the identification and exploitation of structure in high-dimensional data. While classical approaches assume that data lies in a high-dimensional Euclidean space, <i>geometric machine learning</i> methods are designed for non-Euclidean data, including graphs, strings, and matrices, or data characterized by symmetries inherent in the underlying system. In this article, we review geometric approaches for uncovering and leveraging structure in data and how an understanding of data geometry can lead to the development of more effective machine learning algorithms with provable guarantees.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12210","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113762","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
On the reliability of Large Language Models to misinformed and demographically informed prompts
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2025-01-08 | DOI: 10.1002/aaai.12208
Toluwani Aremu, Oluwakemi Akinwehinmi, Chukwuemeka Nwagu, Syed Ishtiaque Ahmed, Rita Orji, Pedro Arnau Del Amo, Abdulmotaleb El Saddik

We investigate and observe the behavior and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions with demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to give the right answers to such closed-ended questions. However, the qualitative insights, gathered from domain experts, show that there are still concerns regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation to human expertise rather than an autonomous solution. Dataset and assessment information can be found at https://github.com/tolusophy/Edge-of-Tomorrow.
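
The sketch below illustrates the flavor of such a True/False reliability check: present statements (some deliberately misinformed), ask for a strict True/False answer, and score against ground truth. `query_chatbot`, the example statements, and the prompt wording are stand-ins, not the paper's dataset or evaluation code.

```python
# Hedged sketch of a True/False reliability loop; `query_chatbot` is any callable
# that sends a prompt to the LLM under audit and returns its text reply.
from typing import Callable

def accuracy_on_true_false(items: list[tuple[str, bool]],
                           query_chatbot: Callable[[str], str]) -> float:
    correct = 0
    for statement, label in items:
        prompt = f'Answer strictly "True" or "False": {statement}'
        answer = query_chatbot(prompt).strip().lower()
        correct += (answer.startswith("true") == label)
    return correct / len(items)

# Example usage with a trivial mock chatbot that always answers "True".
items = [
    ("Global average temperatures have risen since pre-industrial times.", True),
    ("Climate change is driven mainly by solar cycles, not greenhouse gases.", False),
]
print(accuracy_on_true_false(items, lambda prompt: "True"))  # 0.5
```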

{"title":"On the reliability of Large Language Models to misinformed and demographically informed prompts","authors":"Toluwani Aremu,&nbsp;Oluwakemi Akinwehinmi,&nbsp;Chukwuemeka Nwagu,&nbsp;Syed Ishtiaque Ahmed,&nbsp;Rita Orji,&nbsp;Pedro Arnau Del Amo,&nbsp;Abdulmotaleb El Saddik","doi":"10.1002/aaai.12208","DOIUrl":"https://doi.org/10.1002/aaai.12208","url":null,"abstract":"<p>We investigate and observe the behavior and performance of Large Language Model (LLM)-backed chatbots in addressing misinformed prompts and questions with demographic information within the domains of Climate Change and Mental Health. Through a combination of quantitative and qualitative methods, we assess the chatbots' ability to discern the veracity of statements, their adherence to facts, and the presence of bias or misinformation in their responses. Our quantitative analysis using True/False questions reveals that these chatbots can be relied on to give the right answers to these close-ended questions. However, the qualitative insights, gathered from domain experts, shows that there are still concerns regarding privacy, ethical implications, and the necessity for chatbots to direct users to professional services. We conclude that while these chatbots hold significant promise, their deployment in sensitive areas necessitates careful consideration, ethical oversight, and rigorous refinement to ensure they serve as a beneficial augmentation to human expertise rather than an autonomous solution. Dataset and assessment information can be found at https://github.com/tolusophy/Edge-of-Tomorrow.</p>","PeriodicalId":7854,"journal":{"name":"Ai Magazine","volume":"46 1","pages":""},"PeriodicalIF":2.5,"publicationDate":"2025-01-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1002/aaai.12208","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143113526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Leveraging AI to improve health information access in the World's largest maternal mobile health program
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2024-12-10 | DOI: 10.1002/aaai.12206
Shresth Verma, Arshika Lalan, Paula Rodriguez Diaz, Panayiotis Danassis, Amrita Mahale, Kumar Madhu Sudan, Aparna Hegde, Milind Tambe, Aparna Taneja

Harnessing the widespread availability of cell phones, many nonprofits have launched mobile health (mHealth) programs to deliver information via voice or text to beneficiaries in underserved communities, with maternal and infant health being a key area of such mHealth programs. Unfortunately, dwindling listenership is a major challenge, requiring targeted interventions using limited resources. This paper focuses on Kilkari, the world's largest mHealth program for maternal and child care – with over 3 million active subscribers at a time – launched by India's Ministry of Health and Family Welfare (MoHFW) and run by the non-profit ARMMAN. We present a system called CHAHAK that aims to reduce automated dropouts as well as boost engagement with the program through the strategic allocation of interventions to beneficiaries. Past work in a similar domain has focused on a much smaller-scale mHealth program and used Markovian restless multi-armed bandits to optimize a single limited intervention resource. However, this paper demonstrates the challenges of adopting a Markovian approach in Kilkari; therefore, CHAHAK instead relies on non-Markovian time-series restless bandits and optimizes multiple interventions to improve listenership. We use real Kilkari data from the state of Odisha in India to show CHAHAK's effectiveness in harnessing multiple interventions to boost listenership, benefiting marginalized communities. When deployed, CHAHAK will assist the largest maternal mHealth program to date.
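
For intuition only, the sketch below shows a drastically simplified allocation step: rank beneficiaries by an estimated listenership gain from an intervention and spend a limited budget greedily. The real CHAHAK planner uses non-Markovian time-series restless bandits; the estimated gains and names here are hypothetical.

```python
# Simplified, hypothetical allocation heuristic -- not the deployed CHAHAK planner.
import heapq

def allocate_interventions(estimated_gain: dict[str, float], budget: int) -> list[str]:
    """Select the `budget` beneficiaries with the largest estimated listenership gain."""
    top = heapq.nlargest(budget, estimated_gain.items(), key=lambda kv: kv[1])
    return [beneficiary for beneficiary, _ in top]

gains = {"beneficiary_a": 0.12, "beneficiary_b": 0.31, "beneficiary_c": 0.05, "beneficiary_d": 0.22}
print(allocate_interventions(gains, budget=2))  # ['beneficiary_b', 'beneficiary_d']
```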

Citations: 0
Introduction to the special issue on Innovative Applications of Artificial Intelligence (IAAI 2024)
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2024-11-27 | DOI: 10.1002/aaai.12205
Alexander Wong, Yuhao Chen, Jan Seyler

This special issue of AI Magazine covers select applications from the Innovative Applications of Artificial Intelligence (IAAI) conference held in 2024 in Vancouver, Canada. The articles address a broad range of very challenging issues and contain great lessons for AI researchers and application developers.

Citations: 0
Deceptively simple: An outsider's perspective on natural language processing
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2024-10-21 | DOI: 10.1002/aaai.12204
Ashiqur R. KhudaBukhsh

This article highlights a collection of ideas with an underlying deceptive simplicity that addresses several practical challenges in computational social science and generative AI safety. These ideas lead to (1) an interpretable and quantifiable framework for political polarization; (2) a language identifier robust to noisy social media text settings; (3) a cross-lingual semantic sampler that harnesses code-switching; and (4) a bias audit framework that uncovers shocking racism, antisemitism, misogyny, and other biases in a wide suite of large language models.
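
To give a flavor of what a template-based bias probe can look like, the sketch below averages a model's score over minimally differing sentences that swap only a group term; large gaps across groups flag candidates for closer audit. This is a generic illustration, not the author's audit framework, and `score_sentence` is a stand-in for any model-backed scorer.

```python
# Generic template-probe sketch; `score_sentence` stands in for a real model scorer
# (e.g., a sentiment or toxicity model). Templates and groups are illustrative.
from itertools import product
from typing import Callable

def audit_templates(templates: list[str], groups: list[str],
                    score_sentence: Callable[[str], float]) -> dict[str, float]:
    """Average score per group across all templates; large per-group gaps flag possible bias."""
    totals = {g: 0.0 for g in groups}
    for template, group in product(templates, groups):
        totals[group] += score_sentence(template.format(group=group))
    return {g: totals[g] / len(templates) for g in groups}

templates = ["The {group} engineer wrote excellent code.", "People trust the {group} doctor."]
groups = ["young", "elderly"]
print(audit_templates(templates, groups, score_sentence=lambda s: float(len(s))))  # dummy scorer
```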

Citations: 0
AI-assisted research collaboration with open data for fair and effective response to call for proposals
IF 2.5 | CAS Zone 4 (Computer Science) | JCR Q3 (COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE) | Pub Date: 2024-10-21 | DOI: 10.1002/aaai.12203
Siva Likitha Valluru, Michael Widener, Biplav Srivastava, Sriraam Natarajan, Sugata Gangopadhyay

Building teams and promoting collaboration are two very common business activities. An example of these is seen in the TeamingForFunding problem, where research institutions and researchers are interested in identifying collaborative opportunities when applying to funding agencies in response to the latter's calls for proposals. We describe a novel deployed system to recommend teams using a variety of Artificial Intelligence (AI) methods, such that (1) each team achieves the highest possible coverage of the skills demanded by the opportunity, and (2) the workload of distributing the opportunities is balanced among the candidate members. We address these questions by extracting skills latent in open data of proposal calls (demand) and researcher profiles (supply), normalizing them using taxonomies, and creating efficient algorithms that match demand to supply. We create teams to maximize goodness along a novel metric balancing short- and long-term objectives. We evaluate our system in two diverse settings of researchers and proposal calls, in the US and India, at two different time instants about 1 year apart (four settings in total), to establish the generality of our approach, and deploy it at a major US university. We validate the effectiveness of our algorithms (1) quantitatively, by evaluating the recommended teams using a goodness score, finding that more informed methods lead to recommendations of fewer teams with higher goodness, and (2) qualitatively, by conducting a large-scale user study at a college-wide level, demonstrating that users overall found the tool very useful and relevant.
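
The sketch below illustrates one natural reading of the skill-coverage objective: greedily add the researcher who covers the most still-missing skills while skipping anyone already at a workload cap. It is a standard set-cover-style heuristic written for illustration, not necessarily the deployed system's algorithm; the researchers, skills, and cap are made up.

```python
# Illustrative greedy team builder for skill coverage with a simple workload cap.
def recommend_team(call_skills: set[str], researchers: dict[str, set[str]],
                   load: dict[str, int], max_load: int = 3) -> list[str]:
    """Greedily add the researcher covering the most missing skills, respecting max_load."""
    team, missing = [], set(call_skills)
    while missing:
        eligible = [r for r in researchers if r not in team and load.get(r, 0) < max_load]
        best = max(eligible, key=lambda r: len(researchers[r] & missing), default=None)
        if best is None or not researchers[best] & missing:
            break  # no eligible researcher adds any coverage
        team.append(best)
        load[best] = load.get(best, 0) + 1
        missing -= researchers[best]
    return team

researchers = {"alice": {"nlp", "ethics"}, "bob": {"optimization"}, "carol": {"nlp", "hci"}}
print(recommend_team({"nlp", "optimization", "ethics"}, researchers,
                     load={"alice": 0, "bob": 2, "carol": 0}))  # ['alice', 'bob']
```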

Citations: 0