Efficient and robust sequential decision making algorithms
Pan Xu
Sequential decision-making involves making informed decisions based on continuous interactions with a complex environment. This process is ubiquitous in various applications, including recommendation systems and clinical treatment design. My research has concentrated on addressing two pivotal challenges in sequential decision-making: (1) How can we design algorithms that efficiently learn the optimal decision strategy with minimal interactions and limited sample data? (2) How can we ensure robustness in decision-making algorithms when faced with distributional shifts due to environmental changes and the sim-to-real gap? This paper summarizes and expands upon the talk I presented at the AAAI 2024 New Faculty Highlights program, detailing how my research aims to tackle these challenges.
AI Magazine, 45(3), 376–385. https://doi.org/10.1002/aaai.12186 (published 22 September 2024)
AI fairness in practice: Paradigm, challenges, and prospects
Wenbin Zhang
Understanding and correcting algorithmic bias in artificial intelligence (AI) has become increasingly important, leading to a surge in research on AI fairness within both the AI community and broader society. Traditionally, this research operates within the constrained supervised learning paradigm, assuming the presence of class labels, independent and identically distributed (IID) data, and batch-based learning necessitating the simultaneous availability of all training data. However, in practice, class labels may be absent due to censoring, data is often represented using non-IID graph structures that capture connections among individual units, and data can arrive and evolve over time. These prevalent real-world data representations limit the applicability of existing fairness literature, which typically addresses fairness in static and tabular supervised learning settings. This paper reviews recent advances in AI fairness aimed at bridging these gaps for practical deployment in real-world scenarios. It also highlights current limitations and envisions opportunities with significant potential for real applications.
AI Magazine, 45(3), 386–395. https://doi.org/10.1002/aaai.12189 (published 22 September 2024)
Towards smooth mobile robot deployments in dynamic human environments
Christoforos Mavrogiannis
Recently, there has been great interest in deploying autonomous mobile robots in airports, malls, and hospitals to complete a range of tasks such as delivery, cleaning, and patrolling. The rich context of these environments gives rise to highly unstructured motion that is challenging for robots to anticipate and adapt to. This results in uncomfortable and unsafe human–robot encounters, poor robot performance, and even catastrophic failures that hinder robot acceptance. Such observations have motivated my work on social robot navigation, the problem of enabling robots to navigate in human environments while accounting for human safety and comfort. In this article, I highlight prior work on expanding the classical autonomy stack with mathematical models and algorithms designed to contribute towards smoother mobile robot deployments in complex environments.
AI Magazine, 45(3), 419–428. https://doi.org/10.1002/aaai.12192 (published 22 September 2024)
Toward the confident deployment of real-world reinforcement learning agents
Josiah P. Hanna
Intelligent learning agents must be able to learn from experience so as to accomplish tasks that require more ability than could be initially programmed. Reinforcement learning (RL) has emerged as a potentially powerful class of solution methods for creating agents that learn from trial-and-error interaction with the world. Despite many prominent success stories, a number of challenges still stand in the way of using RL in real-world problems. As part of the AAAI New Faculty Highlights program, in this article, I describe the work that my group is doing at the University of Wisconsin–Madison with the intent to remove barriers to the use of RL in practice. Specifically, I describe recent work that aims to give practitioners confidence in learned behaviors, methods to increase the data efficiency of RL, and work on "challenge" domains that stress RL algorithms beyond current testbeds.
AI Magazine, 45(3), 396–403. https://doi.org/10.1002/aaai.12190 (published 22 September 2024)
Towards robust visual understanding: A paradigm shift in computer vision from recognition to reasoning
Tejas Gokhale
Models that learn from data are widely and rapidly being deployed today for real-world use, but they suffer from unforeseen failures that limit their reliability. These failures often have several causes, such as distribution shift; adversarial attacks; calibration errors; scarcity of data and/or ground-truth labels; noisy, corrupted, or partial data; and limitations of evaluation metrics. But failures also occur because many modern AI tasks require reasoning beyond pattern matching, and such reasoning abilities are difficult to formulate as data-based input–output function fitting. The reliability problem has become increasingly important under the new paradigm of semantic "multimodal" learning. In this article, I discuss findings from our work to provide avenues for the development of robust and reliable computer vision systems, particularly by leveraging the interactions between vision and language. This article expands upon the invited talk at AAAI 2024 and covers three thematic areas: robustness of visual recognition systems, open-domain reliability for visual reasoning, and challenges and opportunities associated with generative models in vision.
AI Magazine, 45(3), 429–435. https://doi.org/10.1002/aaai.12194 (published 22 September 2024)
Better environments for better AI
Sarah Keren
Most AI research focuses exclusively on the AI agent itself, that is, given some input, what are the improvements to the agent's reasoning that will yield the best possible output? In my research, I take a novel approach to increasing the capabilities of AI agents via the use of AI to design the environments in which they are intended to act. My methods identify the inherent capabilities and limitations of AI agents and find the best way to modify their environment in order to maximize performance. With this agenda in mind, I describe here several research projects that vary in their objective, in the AI methodologies that are applied for finding optimal designs, and in the real-world applications to which they correspond. I also discuss how the different projects fit within my overarching objective of using AI to promote effective multi-agent collaboration and to enhance the way robots and machines interact with humans.
AI Magazine, 45(3), 369–375. https://doi.org/10.1002/aaai.12187 (published 7 August 2024)
Combating misinformation in the age of LLMs: Opportunities and challenges
Canyu Chen, Kai Shu
Misinformation such as fake news and rumors is a serious threat to information ecosystems and public trust. The emergence of large language models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in the fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emerging question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. This raises another important question: how can we combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. Then we discuss current efforts and present an outlook for each of these two fundamental questions. The goal of this survey paper is to facilitate progress in utilizing LLMs for fighting misinformation and to call for interdisciplinary efforts from different stakeholders for combating LLM-generated misinformation.
AI Magazine, 45(3), 354–368. https://doi.org/10.1002/aaai.12188 (published 1 August 2024)
Food information engineering
Azanzi Jiomekong, Allard Oelen, Sören Auer, Anna-Lena Lorenz, Lars Vogt
Food information engineering relies on statistical and AI techniques (e.g., symbolic, connectionist, and neurosymbolic AI) for collecting, storing, processing, and diffusing food information, and for putting it in a form exploitable by humans and machines. Food information is collected both manually and automatically. Once collected, it is organized using tabular data representation schemas or symbolic, connectionist, or neurosymbolic AI techniques. Once processed and stored, food information is diffused to different stakeholders in appropriate formats. Although neurosymbolic AI has shown promising results in many domains, we found that this approach is rarely used in food information engineering. This paper aims to serve as a reference for food information engineering researchers. Unlike existing reviews on the subject, we cover all aspects of food information engineering and link the paper to online resources built using the Open Research Knowledge Graph. These resources comprise templates, comparison tables of research contributions, and smart reviews. All these resources are organized in the "Food Information Engineering" observatory and will be continually updated with new research contributions.
AI Magazine, 45(3), 338–353. https://doi.org/10.1002/aaai.12185 (published 31 July 2024)
XAI is in trouble
Rosina O. Weber, Adam J. Johs, Prateek Goel, João Marques-Silva
Researchers focusing on how artificial intelligence (AI) methods explain their decisions often discuss controversies and limitations. Some even assert that most publications offer little to no valuable contributions. In this article, we substantiate the claim that explainable AI (XAI) is in trouble by describing and illustrating four problems: disagreements on the scope of XAI; the lack of definitional cohesion, precision, and adoption; issues with the motivations for XAI research; and limited and inconsistent evaluations. As we delve into their potential underlying sources, our analysis finds that these problems seem to originate from AI researchers succumbing to the pitfalls of interdisciplinarity or from insufficient scientific rigor. In analyzing these potential factors, we discuss the literature, at times coming across unexplored research questions. Hoping to alleviate the existing problems, we make recommendations on precautions against the challenges of interdisciplinarity and propose directions in support of scientific rigor.
AI Magazine, 45(3), 300–316. https://doi.org/10.1002/aaai.12184 (published 29 July 2024)
Implementation of the EU AI Act calls for interdisciplinary governance
Huixin Zhong
The European Union Parliament passed the EU AI Act in 2024, an important milestone toward the world's first comprehensive AI law formally taking effect. Although this is a significant achievement, the real work begins with putting these rules into action, a journey filled with challenges and opportunities. This perspective article reviews recent interdisciplinary research aimed at facilitating the implementation of the EU AI Act's provisions on prohibited AI practices. It also explores the future efforts needed to effectively enforce the ban on those practices across the EU market, along with the challenges such enforcement entails. Addressing these tasks and challenges calls for the establishment of an interdisciplinary governance framework. Such a framework may contain a workflow that identifies the necessary expertise and coordinates experts' collaboration at different stages of AI governance. It also involves developing and implementing a set of compliance and ethical safeguards to ensure effective management and supervision of AI practices.
AI Magazine, 45(3), 333–337. https://doi.org/10.1002/aaai.12183 (published 19 July 2024)