Effective knowledge representation and utilization for sustainable collaborative learning across heterogeneous systems
Trong Nghia Hoang. AI Magazine 45(3): 404-410, 2024. https://doi.org/10.1002/aaai.12193

The increasingly decentralized and private nature of data in our digital society has motivated the development of collaborative intelligent systems that enable knowledge aggregation among data owners. However, collaborative learning has only been investigated in simple settings. For example, clients are often assumed to train solution models de novo, disregarding all prior expertise. The learned model is typically represented in task-specific forms that are not generalizable to unseen, emerging scenarios. Finally, a universal model representation is enforced among collaborators, ignoring their local compute constraints or input representations. These limitations hamper the practicality of prior collaborative systems in learning scenarios with limited task data that demand constant knowledge adaptation and transfer across information silos, tasks, and learning models, as well as the utilization of prior solution expertise. Furthermore, prior collaborative learning frameworks are not sustainable on a macro scale, where participants desire a fair allocation of benefits (e.g., access to the combined model) based on their costs of participation (e.g., the overhead of model sharing and training synchronization, or the risk of information breaches). This necessitates a new perspective on collaborative learning in which the server not only aggregates but also conducts valuation of each participant's contribution, and distributes aggregated information to individuals commensurate with their contributions. To substantiate this vision, we propose a new research agenda on developing effective and sustainable collaborative learning frameworks across heterogeneous systems, featuring three novel computational capabilities for knowledge organization: model expression, comprehension, and valuation.
Fair and optimal prediction via post-processing
Han Zhao. AI Magazine 45(3): 411-418, 2024. https://doi.org/10.1002/aaai.12191

With the development of machine learning algorithms and the increasing computational resources available, artificial intelligence has achieved great success in many application domains. However, the success of machine learning has also raised concerns about the fairness of the learned models. For instance, the learned models can perpetuate and even exacerbate the potential bias and discrimination in the training data. This issue has become a major obstacle to the deployment of machine learning systems in high-stakes domains, for example, criminal judgment, medical testing, online advertising, and hiring processes. To mitigate the potential bias exhibited by machine learning models, fairness criteria can be integrated into the training process to ensure fair treatment across all demographics, but this often comes at the expense of model performance. Understanding such tradeoffs, therefore, is crucial to the design of optimal and fair algorithms. My research focuses on characterizing the inherent tradeoff between fairness and accuracy in machine learning, and on developing algorithms that can achieve both fairness and optimality. In this article, I will discuss our recent work on designing post-processing algorithms for fair classification, which can be applied to a wide range of fairness criteria, including statistical parity, equal opportunity, and equalized odds, under both attribute-aware and attribute-blind settings, and is particularly suited to large-scale foundation models where retraining is expensive or even infeasible. I will also discuss the connections between our work and other related research on trustworthy machine learning, including the connections between algorithmic fairness and differential privacy, as well as adversarial robustness.
Efficient and robust sequential decision making algorithms
Pan Xu. AI Magazine 45(3): 376-385, 2024. https://doi.org/10.1002/aaai.12186

Sequential decision-making involves making informed decisions based on continuous interactions with a complex environment. This process is ubiquitous in various applications, including recommendation systems and clinical treatment design. My research has concentrated on addressing two pivotal challenges in sequential decision-making: (1) How can we design algorithms that efficiently learn the optimal decision strategy with minimal interactions and limited sample data? (2) How can we ensure robustness in decision-making algorithms when faced with distributional shifts due to environmental changes and the sim-to-real gap? This paper summarizes and expands upon the talk I presented at the AAAI 2024 New Faculty Highlights program, detailing how my research aims to tackle these challenges.
AI fairness in practice: Paradigm, challenges, and prospects
Wenbin Zhang. AI Magazine 45(3): 386-395, 2024. https://doi.org/10.1002/aaai.12189

Understanding and correcting algorithmic bias in artificial intelligence (AI) has become increasingly important, leading to a surge in research on AI fairness within both the AI community and broader society. Traditionally, this research operates within the constrained supervised learning paradigm, assuming the presence of class labels, independent and identically distributed (IID) data, and batch-based learning necessitating the simultaneous availability of all training data. However, in practice, class labels may be absent due to censoring, data is often represented using non-IID graph structures that capture connections among individual units, and data can arrive and evolve over time. These prevalent real-world data representations limit the applicability of existing fairness literature, which typically addresses fairness in static and tabular supervised learning settings. This paper reviews recent advances in AI fairness aimed at bridging these gaps for practical deployment in real-world scenarios. Additionally, it envisions future opportunities by highlighting the limitations of existing work and its significant potential for real applications.
Towards smooth mobile robot deployments in dynamic human environments
Christoforos Mavrogiannis. AI Magazine 45(3): 419-428, 2024. https://doi.org/10.1002/aaai.12192

Recently, there has been great interest in deploying autonomous mobile robots in airports, malls, and hospitals to complete a range of tasks such as delivery, cleaning, and patrolling. The rich context of these environments gives rise to highly unstructured motion that is challenging for robots to anticipate and adapt to. This results in uncomfortable and unsafe human–robot encounters, poor robot performance, and even catastrophic failures that hinder robot acceptance. Such observations have motivated my work on social robot navigation, the problem of enabling robots to navigate in human environments while accounting for human safety and comfort. In this article, I highlight prior work on expanding the classical autonomy stack with mathematical models and algorithms designed to contribute towards smoother mobile robot deployments in complex environments.
Toward the confident deployment of real-world reinforcement learning agents
Josiah P. Hanna. AI Magazine 45(3): 396-403, 2024. https://doi.org/10.1002/aaai.12190

Intelligent learning agents must be able to learn from experience so as to accomplish tasks that require more ability than could be initially programmed. Reinforcement learning (RL) has emerged as a potentially powerful class of solution methods for creating agents that learn from trial-and-error interaction with the world. Despite many prominent success stories, a number of challenges often stand in the way of applying RL to real-world problems. As part of the AAAI New Faculty Highlights program, in this article, I will describe the work that my group is doing at the University of Wisconsin—Madison with the intent of removing barriers to the use of RL in practice. Specifically, I will describe recent work that aims to give practitioners confidence in learned behaviors, methods to increase the data efficiency of RL, and work on "challenge" domains that stress RL algorithms beyond current testbeds.
Towards robust visual understanding: A paradigm shift in computer vision from recognition to reasoning
Tejas Gokhale. AI Magazine 45(3): 429-435, 2024. https://doi.org/10.1002/aaai.12194

Models that learn from data are being widely and rapidly deployed today for real-world use, but they suffer from unforeseen failures that limit their reliability. These failures often have several causes, such as distribution shift; adversarial attacks; calibration errors; scarcity of data and/or ground-truth labels; noisy, corrupted, or partial data; and limitations of evaluation metrics. But many failures also occur because modern AI tasks often require reasoning beyond pattern matching, and such reasoning abilities are difficult to formulate as data-based input–output function fitting. The reliability problem has become increasingly important under the new paradigm of semantic "multimodal" learning. In this article, I will discuss findings from our work that provide avenues for the development of robust and reliable computer vision systems, particularly by leveraging the interactions between vision and language. This article expands upon the invited talk at AAAI 2024 and covers three thematic areas: robustness of visual recognition systems, open-domain reliability for visual reasoning, and challenges and opportunities associated with generative models in vision.
Better environments for better AI
Sarah Keren. AI Magazine 45(3): 369-375, 2024. https://doi.org/10.1002/aaai.12187

Most AI research focuses exclusively on the AI agent itself, that is, given some input, what are the improvements to the agent's reasoning that will yield the best possible output? In my research, I take a novel approach to increasing the capabilities of AI agents via the use of AI to design the environments in which they are intended to act. My methods identify the inherent capabilities and limitations of AI agents and find the best way to modify their environment in order to maximize performance. With this agenda in mind, I describe here several research projects that vary in their objective, in the AI methodologies that are applied for finding optimal designs, and in the real-world applications to which they correspond. I also discuss how the different projects fit within my overarching objective of using AI to promote effective multi-agent collaboration and to enhance the way robots and machines interact with humans.
Combating misinformation in the age of LLMs: Opportunities and challenges
Canyu Chen, Kai Shu. AI Magazine 45(3): 354-368, 2024. https://doi.org/10.1002/aaai.12188

Misinformation, such as fake news and rumors, is a serious threat to information ecosystems and public trust. The emergence of large language models (LLMs) has great potential to reshape the landscape of combating misinformation. Generally, LLMs can be a double-edged sword in this fight. On the one hand, LLMs bring promising opportunities for combating misinformation due to their profound world knowledge and strong reasoning abilities. Thus, one emerging question is: can we utilize LLMs to combat misinformation? On the other hand, the critical challenge is that LLMs can be easily leveraged to generate deceptive misinformation at scale. This raises another important question: how can we combat LLM-generated misinformation? In this paper, we first systematically review the history of combating misinformation before the advent of LLMs. We then illustrate current efforts and present an outlook for each of these two fundamental questions. The goal of this survey is to facilitate progress in utilizing LLMs to fight misinformation and to call for interdisciplinary efforts from different stakeholders to combat LLM-generated misinformation.
Food information engineering
Azanzi Jiomekong, Allard Oelen, Sören Auer, Anna-Lena Lorenz, Lars Vogt. AI Magazine 45(3): 338-353, 2024. https://doi.org/10.1002/aaai.12185

Food information engineering relies on statistical and AI techniques (e.g., symbolic, connectionist, and neurosymbolic AI) for collecting, storing, processing, and diffusing food information, and for putting it in a form exploitable by humans and machines. Food information is collected both manually and automatically. Once collected, it is organized using tabular data representation schemas or symbolic, connectionist, or neurosymbolic AI techniques. Once processed and stored, it is diffused to different stakeholders in appropriate formats. Although neurosymbolic AI has shown promising results in many domains, we found that this approach is rarely used in food information engineering. This paper aims to serve as a reference for food information engineering researchers. Unlike existing reviews on the subject, we cover all aspects of food information engineering and link the paper to online resources built using the Open Research Knowledge Graph. These resources comprise templates, comparison tables of research contributions, and smart reviews. They are organized in the "Food Information Engineering" observatory and will be continually updated with new research contributions.