Title: The Challenges for GenAI in Social and Individual Well-Being
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31236
Takashi Kido, K. Takadama
At the AAAI Spring Symposium 2024, we explore the important challenges facing Generative Artificial Intelligence (GenAI) concerning both social structures and individual welfare. Our discussion revolves around two perspectives.

Individual Impact of GenAI on Well-being: This perspective focuses on the design of AI systems with keen consideration for individual well-being. It seeks to understand how digital experiences influence emotions and the quality of life at a personal level. By examining the effects of AI technologies on individuals, we aim to tailor solutions to enhance personal welfare and fulfillment.

Social Impact of GenAI on Well-being: Here, emphasis shifts to the broader societal implications of GenAI. We strive for decisions and implementations that foster fairness and benefit all members of society. This perspective acknowledges the interconnectedness of individuals within social structures and seeks to ensure that GenAI advancements positively contribute to collective well-being.

In this paper, we provide an overview of the motivations driving our exploration, elucidate key terms essential for understanding the discourse, outline the primary areas of focus of our symposium, and pose research inquiries that will guide our discussions. Through this comprehensive approach, we aim to address the multifaceted challenges and opportunities presented by GenAI in promoting both social and individual well-being.
{"title":"The Challenges for GenAI in Social and Individual Well-Being","authors":"Takashi Kido, K. Takadama","doi":"10.1609/aaaiss.v3i1.31236","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31236","url":null,"abstract":"At the AAAI Spring Symposium 2024, we explore the important challenges facing Generative Artificial Intelligence (GenAI) concerning both social structures and individual welfare. Our discussion revolves around two perspectives.\u0000\u0000Individual Impact of GenAI on Well-being: This perspective focuses on the design of AI systems with keen consideration for individual well-being. It seeks to understand how digital experiences influence emotions and the quality of life at a personal level. By examining the effects of AI technologies on individuals, we aim to tailor solutions to enhance personal welfare and fulfillment.\u0000\u0000Social Impact of GenAI on Well-being: Here, emphasis shifts to the broader societal implications of GenAI. We strive for decisions and implementations that foster fairness and benefit all members of society. This perspective acknowledges the interconnectedness of individuals within social structures and seeks to ensure that GenAI advancements positively contribute to collective well-being.\u0000\u0000In this paper, we provide an overview of the motivations driving our exploration, elucidate key terms essential for understanding the discourse, outline the primary areas of focus of our symposium, and pose research inquiries that will guide our discussions. Through this comprehensive approach, we aim to address the multifaceted challenges and opportunities presented by GenAI in promoting both social and individual well-being.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"100 3","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141122554","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Towards Robust Multi-Agent Reinforcement Learning
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31222
Aritra Mitra
Stochastic gradient descent (SGD) is at the heart of large-scale distributed machine learning paradigms such as federated learning (FL). In these applications, the task of training high-dimensional weight vectors is distributed among several workers that exchange information over networks of limited bandwidth. While parallelization at such an immense scale helps to reduce the computational burden, it creates several other challenges: delays, asynchrony, and most importantly, a significant communication bottleneck. The popularity and success of SGD can be attributed in no small part to the fact that it is extremely robust to such deviations from ideal operating conditions. Inspired by these findings, we ask: Are common reinforcement learning (RL) algorithms also robust to similarly structured perturbations? Perhaps surprisingly, despite the recent surge of interest in multi-agent/federated RL, almost nothing is known about the above question. This paper collects some of our recent results in filling this void.
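To make the communication bottleneck concrete, here is a minimal sketch, not the paper's algorithm, of distributed SGD in which each worker compresses its local gradient by top-k sparsification before a server averages the messages; the objectives, step size, and compression rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative, not the paper's method) of distributed SGD
# under a communication constraint: workers send top-k sparsified gradients.
import numpy as np

def top_k_sparsify(g, k):
    """Keep only the k largest-magnitude entries of the gradient."""
    out = np.zeros_like(g)
    idx = np.argsort(np.abs(g))[-k:]
    out[idx] = g[idx]
    return out

def distributed_sgd(grad_fns, w0, lr=0.1, k=2, steps=100):
    """grad_fns: one local gradient oracle per worker."""
    w = w0.copy()
    for _ in range(steps):
        # each worker computes and compresses its local gradient
        msgs = [top_k_sparsify(g(w), k) for g in grad_fns]
        # the server averages the compressed messages and updates the model
        w -= lr * np.mean(msgs, axis=0)
    return w

# toy quadratics f_i(w) = 0.5 * ||w - c_i||^2 for two workers
c = [np.array([1.0, -2.0, 0.5, 3.0]), np.array([-1.0, 2.0, 1.5, 1.0])]
grads = [lambda w, ci=ci: w - ci for ci in c]
print(distributed_sgd(grads, np.zeros(4)))  # drifts toward the average of c; compression leaves a residual error
```

Delays and asynchrony could be modeled in the same loop by letting the server average stale messages computed at earlier iterates.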
{"title":"Towards Robust Multi-Agent Reinforcement Learning","authors":"Aritra Mitra","doi":"10.1609/aaaiss.v3i1.31222","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31222","url":null,"abstract":"Stochastic gradient descent (SGD) is at the heart of large-scale distributed machine learning paradigms such as federated learning (FL). In these applications, the task of training high-dimensional weight vectors is distributed among several workers that exchange information over networks of limited bandwidth. While parallelization at such an immense scale helps to reduce the computational burden, it creates several other challenges: delays, asynchrony, and most importantly, a significant communication bottleneck. The popularity and success of SGD can be attributed in no small part to the fact that it is extremely robust to such deviations from ideal operating conditions. Inspired by these findings, we ask: Are common reinforcement learning (RL)\u0000algorithms also robust to similarly structured perturbations? Perhaps surprisingly, despite the recent surge of interest in multi-agent/federated RL, almost nothing is known about the above question. This paper collects some of our recent results in filling this void.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"94 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141123106","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Fair Machine Guidance to Enhance Fair Decision Making
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31255
Mingzhe Yang
Human judgment is often subject to bias, leading to unfair decisions. This is particularly problematic when assessments have significant consequences, underscoring the importance of guiding humans towards fairness. Although recent advancements in AI have facilitated decision support, it is not always feasible to employ AI assistance in real-world scenarios. Therefore, this study focuses on developing and evaluating a method to guide humans in making fair judgments. Our experimental results confirmed that our approach effectively promotes fairness in human decision-making.
{"title":"Fair Machine Guidance to Enhance Fair Decision Making","authors":"Mingzhe Yang","doi":"10.1609/aaaiss.v3i1.31255","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31255","url":null,"abstract":"Human judgment is often subject to bias, leading to unfair decisions. This is particularly problematic when assessments have significant consequences, underscoring the importance of guiding humans towards fairness. Although recent advancements in AI have facilitated decision support, it is not always feasible to employ AI assistance in real-world scenarios. Therefore, this study focuses on developing and evaluating a method to guide humans in making fair judgments. Our experimental results confirmed that our approach effectively promotes fairness in human decision-making.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"12 4","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119365","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Collect and Connect Data Leaves to Feature Concepts: Interactive Graph Generation Toward Wellbeing
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31241
Yukio Ohsawa, Tomohide Maekawa, Hiroki Yamaguchi, Hiro Yoshida, Kaira Sekiguchi
Feature concepts and data leaves have been invented to foster thoughts for creating social and physical well-being through the use of datasets. The idea, simply put, is to attach selected and collected Data Leaves, which are summaries of event flows to be discovered from the corresponding datasets, to a target Feature Concept representing expected scenarios of individual and societal well-being. A graph of existing or expected datasets, attached in the form of Data Leaves to a Feature Concept, was generated semi-automatically. Rather than relying on sheer automated generative AI, our work addresses the process of combining generative artificial and natural intelligence to create the basis for collecting and connecting useful data.
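As a purely schematic illustration of this idea, the sketch below attaches Data Leaves, here reduced to one-line summaries, to a target Feature Concept node in a graph; the concept, dataset names, and summaries are invented, and networkx is only one convenient representation.

```python
# Schematic sketch (invented example) of a Feature Concept with attached
# Data Leaves: each leaf is a short summary of the event flows in a dataset.
import networkx as nx

g = nx.Graph()
g.add_node("FC: resilient local mobility", kind="feature_concept")

data_leaves = {
    "bus ridership log": "daily boarding counts per route",
    "step-count survey": "self-reported walking activity by district",
}
for name, summary in data_leaves.items():
    g.add_node(name, kind="data_leaf", summary=summary)
    g.add_edge("FC: resilient local mobility", name)

# list the leaves collected and connected under the feature concept
for leaf in g.neighbors("FC: resilient local mobility"):
    print(leaf, "->", g.nodes[leaf]["summary"])
```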
{"title":"Collect and Connect Data Leaves to Feature Concepts: Interactive Graph Generation Toward Wellbeing","authors":"Yukio Ohsawa, Tomohide Maekawa, Hiroki Yamaguchi, Hiro Yoshida, Kaira Sekiguchi","doi":"10.1609/aaaiss.v3i1.31241","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31241","url":null,"abstract":"Feature concepts and data leaves have been invented to foster thoughts for creating social and physical well-being through the use of datasets. The idea, simply put, is to at-tach selected and collected Data Leaves that are summaries of event flows to be discovered from corresponding datasets, on the target Feature Concept representing the expected scenarios of well-being individuals and well-being society. A graph of existing or expected datasets, attached in the form of Data Leaves on a Feature Concept, was generated semi-automatically. Rather than sheer auto-mated generative AI, our work addresses the process of generative artificial and natural intelligence to create the basis for collecting and connecting useful data.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"24 6","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119421","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Toward Risk Frameworks for Autonomous Systems that Take Societal Safety-related Benefits into Account
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31172
Ellen J. Bass, Steven Weber
Current risk frameworks, such as probabilistic risk analysis methodologies, do not take societal safety-related benefits into account. To inform human-AI collaborative system development, this manuscript highlights the need for updated risk frameworks and offers suggestions for relevant considerations.
{"title":"Toward Risk Frameworks for Autonomous Systems that Take Societal Safety-related Benefits into Account","authors":"Ellen J. Bass, Steven Weber","doi":"10.1609/aaaiss.v3i1.31172","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31172","url":null,"abstract":"Current risk frameworks such as probabilistic risk analy-sis methodologies do not take societal safety-related benefits into account. To inform human-AI collaborative system development, this manuscript highlights the need for updated risk frameworks and suggestions for relevant considerations.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"12 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141121166","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Exploring the Gap: The Challenge of Achieving Human-like Generalization for Concept-based Translation Instruction Using Large Language Models
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31283
Ming Qian, Chuiqing Kong
Our study uses concept-description instructions and few-shot learning examples to examine how effectively a large language model (GPT-4) generates Chinese-to-English translations that embody the related translation concepts. We found that human language experts possess stronger abductive reasoning skills than GPT-4. Whereas a human expert can rely on a more intuitive understanding, guiding a large language model effectively requires humans to apply abductive reasoning to craft more detailed instructions and infuse additional logic into the example prompts. This makes the prompt engineering process more complicated and less human-like. Emphasizing domain-specific abductive reasoning stands out as a crucial aspect of human-like learning that AI/ML systems based on large language models should aim to replicate.
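The sketch below shows, with invented examples and a hypothetical "domestication" concept description, how a concept instruction and few-shot exemplars might be assembled into a single prompt; it is not the authors' actual prompt and stops short of calling GPT-4.

```python
# Hypothetical prompt assembly for concept-based ZH->EN translation.
# The concept description and examples are invented for illustration.
CONCEPT = (
    "Concept: domestication -- favor natural, idiomatic English phrasing "
    "over word-for-word renderings of the Chinese source."
)

FEW_SHOT = [
    ("他吃了闭门羹。", "He was turned away at the door."),
    ("这件事八字还没一撇。", "Nothing about this is settled yet."),
]

def build_prompt(source_sentence: str) -> str:
    lines = [CONCEPT, "", "Examples:"]
    for zh, en in FEW_SHOT:
        lines.append(f"ZH: {zh}\nEN: {en}")
    lines.append(f"\nTranslate following the concept above.\nZH: {source_sentence}\nEN:")
    return "\n".join(lines)

# the resulting string would then be sent to an LLM such as GPT-4
print(build_prompt("这个方案还在酝酿之中。"))
```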
{"title":"Exploring the Gap: The Challenge of Achieving Human-like Generalization for Concept-based Translation Instruction Using Large Language Models","authors":"Ming Qian, Chuiqing Kong","doi":"10.1609/aaaiss.v3i1.31283","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31283","url":null,"abstract":"Our study utilizes concept description instructions and few-shot learning examples to examine the effectiveness of a large language model (GPT-4) in generating Chinese-to-English translations that embody related translation concepts. We discovered that human language experts possess superior abductive reasoning skills compared to GPT-4. Therefore, it is crucial for humans to employ abductive reasoning to craft more detailed instructions and infuse additional logic into exemplary prompts, a step essential for guiding a large language model effectively, in contrast to the more intuitive understanding a human expert might have. This approach would make the prompt engineering process more complicated and less human-like. Emphasizing domain-specific abductive reasoning stands out as a crucial aspect of human-like learning that AI/ML systems based on large language models should aim to replicate.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"61 19","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141121724","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Retrieval-Augmented Generation and LLM Agents for Biomimicry Design Solutions
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31210
Christopher Toukmaji, Allison Tee
We present BIDARA, a Bio-Inspired Design And Research Assistant, to address the complexity of biomimicry -- the practice of designing modern-day engineering solutions inspired by biological phenomena. Large Language Models (LLMs) have been shown to act as capable general-purpose task solvers, but they often hallucinate and fail in regimes that require domain-specific and up-to-date knowledge. We integrate Retrieval-Augmented Generation (RAG) and Reasoning-and-Action agents to help LLMs avoid hallucination and draw on up-to-date knowledge when generating biomimetic design solutions. We find that incorporating RAG increases the feasibility of the design solutions in both prompting and agent settings, and we use these findings to guide our ongoing work. To the best of our knowledge, this is the first work that integrates and evaluates Retrieval-Augmented Generation within LLM-generated biomimetic design solutions.
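As a rough illustration of the retrieval step in such a pipeline, not BIDARA itself, the sketch below ranks a few invented biology snippets against a design question with TF-IDF similarity and prepends the top hits to the prompt that would be handed to the LLM or agent; the corpus, query, and ranking choice are all stand-in assumptions.

```python
# Illustrative retrieval step of a RAG pipeline (not BIDARA): rank biology
# snippets against a design query and build a grounded prompt from the top hits.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = [
    "Shark skin denticles reduce drag by disrupting turbulent flow.",
    "Termite mounds maintain temperature through passive ventilation shafts.",
    "Lotus leaves shed water via micro- and nano-scale surface structures.",
]

def retrieve(query, k=2):
    vec = TfidfVectorizer().fit(corpus + [query])
    sims = cosine_similarity(vec.transform([query]), vec.transform(corpus))[0]
    return [corpus[i] for i in sims.argsort()[::-1][:k]]

query = "How can a building facade stay cool without air conditioning?"
context = "\n".join(retrieve(query))
prompt = f"Context:\n{context}\n\nDesign question: {query}\nProposed bio-inspired solution:"
print(prompt)  # this prompt would then be passed to the LLM / ReAct-style agent
```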
{"title":"Retrieval-Augmented Generation and LLM Agents for Biomimicry Design Solutions","authors":"Christopher Toukmaji, Allison Tee","doi":"10.1609/aaaiss.v3i1.31210","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31210","url":null,"abstract":"We present BIDARA, a Bio-Inspired Design And Research Assistant, to address the complexity of biomimicry -- the practice of designing modern-day engineering solutions inspired by biological phenomena. Large Language Models (LLMs) have been shown to act as sufficient general-purpose task solvers, but they often hallucinate and fail in regimes that require domain-specific and up-to-date knowledge. We integrate Retrieval-Augmented Generation (RAG) and Reasoning-and-Action agents to aid LLMs in avoiding hallucination and utilizing updated knowledge during generation of biomimetic design solutions. We find that incorporating RAG increases the feasibility of the design solutions in both prompting and agent settings, and we use these findings to guide our ongoing work. To the extent of our knowledge, this is the first work that integrates and evaluates Retrieval-Augmented Generation within LLM-generated biomimetic design solutions.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"46 7","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141121888","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Human-AI Interaction in the Age of Large Language Models
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31183
Diyi Yang
Large language models (LLMs) have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI interaction using LLMs. The first explores how LLMs transform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second looks at social skill learning via LLMs, supporting therapists and learners with LLM-empowered feedback and deliberative practices. These two works demonstrate how human-AI collaboration via LLMs can empower individuals and foster positive change. We conclude by discussing how LLMs enable collaborative intelligence by redefining the interactions between humans and AI systems.
{"title":"Human-AI Interaction in the Age of Large Language Models","authors":"Diyi Yang","doi":"10.1609/aaaiss.v3i1.31183","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31183","url":null,"abstract":"Large language models (LLMs) have revolutionized the way humans interact with AI systems, transforming a wide range of fields and disciplines. In this talk, I share two distinct approaches to empowering human-AI interaction using LLMs. The first one explores how LLMstransform computational social science, and how human-AI collaboration can reduce costs and improve the efficiency of social science research. The second part looks at social skill learning via LLMs by empowering therapists and learners with LLM-empowered feedback and deliberative practices. These two works demonstrate how human-AI collaboration via LLMs can empower individuals and foster positive change. We conclude by discussing how LLMs enable collaborative intelligence by redefining the interactions between humans and AI systems.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"23 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120059","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Causal Event Graph-Guided Language-based Spatiotemporal Question Answering
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31204
Kaushik Roy, Alessandro Oltramari, Yuxin Zi, Chathurangi Shyalika, Vignesh Narayanan, Amit Sheth
Large Language Models have excelled at encoding and leveraging language patterns in large text-based corpora for various tasks, including spatiotemporal event-based question answering (QA). However, because they encode a text-based projection of the world, they have also been shown to lack a full-bodied understanding of such events, e.g., a sense of intuitive physics and of cause-and-effect relationships among events. In this work, we propose using causal event graphs (CEGs) to enhance language understanding of spatiotemporal events in language models, using a novel approach that also provides proofs that the model has captured the CEGs. A CEG consists of events denoted by nodes, and edges that denote cause-and-effect relationships among the events. We evaluate our approach on benchmark spatiotemporal QA tasks and show effective performance, both quantitative and qualitative, over state-of-the-art baseline methods.
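A CEG of this kind maps directly onto a directed graph; the sketch below, with invented events that are not drawn from the paper's benchmarks, stores cause-effect links as directed edges and reads off an event's causal ancestors.

```python
# Illustrative causal event graph (CEG): nodes are events, a directed edge
# u -> v records that u causes v. Events are invented for this sketch.
import networkx as nx

ceg = nx.DiGraph()
ceg.add_edges_from([
    ("heavy rainfall", "river level rises"),
    ("river level rises", "road flooding"),
    ("road flooding", "bus route delayed"),
])

def causes_of(event):
    """All events with a directed path into `event` (its causal ancestors)."""
    return sorted(nx.ancestors(ceg, event))

print(causes_of("bus route delayed"))
# ['heavy rainfall', 'river level rises', 'road flooding']
```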
{"title":"Causal Event Graph-Guided Language-based Spatiotemporal Question Answering","authors":"Kaushik Roy, Alessandro Oltramari, Yuxin Zi, Chathurangi Shyalika, Vignesh Narayanan, Amit Sheth","doi":"10.1609/aaaiss.v3i1.31204","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31204","url":null,"abstract":"Large Language Models have excelled at encoding and leveraging language patterns in large text-based corpora for various tasks, including spatiotemporal event-based question answering (QA). However, due to encoding a text-based projection of the world, they have also been shown to lack a full bodied understanding of such events, e.g., a sense of intuitive physics, and cause-and-effect relationships among events. In this work, we propose using causal event graphs (CEGs) to enhance language understanding of spatiotemporal events in language models, using a novel approach that also provides proofs for the model’s capture of the CEGs. A CEG consists of events denoted by nodes, and edges that denote cause and effect relationships among the events. We perform experimentation and evaluation of our approach for benchmark spatiotemporal QA tasks and show effective performance, both quantitative and qualitative, over state-of-the-art baseline methods.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"77 8","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120967","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Title: Credit Assignment: Challenges and Opportunities in Developing Human-like Learning Agents
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31180
Thuy Ngoc Nguyen, Chase McDonald, Cleotilde Gonzalez
Temporal credit assignment is the process of distributing delayed outcomes to each action in a sequence, which is essential for learning to adapt and make decisions in dynamic environments. While computational methods in reinforcement learning, such as temporal difference (TD), have shown success in tackling this issue, it remains unclear whether these mechanisms accurately reflect how humans handle feedback delays. Furthermore, cognitive science research has not fully explored the credit assignment problem in humans and cognitive models. Our study uses a cognitive model based on Instance-Based Learning Theory (IBLT) to investigate various credit assignment mechanisms, including equal credit, exponential credit, and TD credit, using the IBL decision mechanism in a goal-seeking navigation task with feedback delays and varying levels of decision complexity. We compare the performance and process measures of the different models with human decision-making in two experiments. Our findings indicate that the human learning process cannot be fully explained by any of the mechanisms. We also observe that decision complexity affects human behavior but not model behavior. By examining the similarities and differences between human and model behavior, we summarize the challenges and opportunities for developing learning agents that emulate human decisions in dynamic environments.
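For readers unfamiliar with the three mechanisms named above, the sketch below gives a generic illustration, not the paper's IBL model, of how a single delayed outcome R can be spread over a T-step episode by equal credit, exponentially discounted credit, and a one-pass TD(0) update; the discount factor, learning rate, and toy episode are assumptions.

```python
# Generic sketch of three credit-assignment rules for one delayed outcome R.
import numpy as np

def equal_credit(R, T):
    """Every step in the episode receives the same share of the outcome."""
    return np.full(T, R / T)

def exponential_credit(R, T, gamma=0.9):
    """Steps closer to the outcome receive exponentially more credit."""
    w = gamma ** np.arange(T - 1, -1, -1)
    return R * w / w.sum()

def td0_update(values, states, R, alpha=0.1, gamma=0.9):
    """One pass of TD(0); the reward is zero until the final delayed outcome."""
    rewards = [0.0] * (len(states) - 1) + [R]
    for t in range(len(states)):
        nxt = gamma * values[states[t + 1]] if t + 1 < len(states) else 0.0
        values[states[t]] += alpha * (rewards[t] + nxt - values[states[t]])
    return values

print(equal_credit(1.0, 4))        # [0.25 0.25 0.25 0.25]
print(exponential_credit(1.0, 4))  # weights increase toward the final step
print(td0_update({s: 0.0 for s in "ABCD"}, list("ABCD"), 1.0))  # only the last state's value moves after one pass
```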
{"title":"Credit Assignment: Challenges and Opportunities in Developing Human-like Learning Agents","authors":"Thuy Ngoc Nguyen, Chase McDonald, Cleotilde Gonzalez","doi":"10.1609/aaaiss.v3i1.31180","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31180","url":null,"abstract":"Temporal credit assignment is the process of distributing delayed outcomes to each action in a sequence, which is essential for learning to adapt and make decisions in dynamic environments. While computational methods in reinforcement learning, such as temporal difference (TD), have shown success in tackling this issue, it remains unclear whether these mechanisms accurately reflect how humans handle feedback delays. Furthermore, cognitive science research has not fully explored the credit assignment problem in humans and cognitive models. Our study uses a cognitive model based on Instance-Based Learning Theory (IBLT) to investigate various credit assignment mechanisms, including equal credit, exponential credit, and TD credit, using the IBL decision mechanism in a goal-seeking navigation task with feedback delays and varying levels of decision complexity. We compare the performance and process measures of the different models with human decision-making in two experiments. Our findings indicate that the human learning process cannot be fully explained by any of the mechanisms. We also observe that decision complexity affects human behavior but not model behavior. By examining the similarities and differences between human and model behavior, we summarize the challenges and opportunities for developing learning agents that emulate human decisions in dynamic environments.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"12 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141121165","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}