Responsible Integration of Large Language Models (LLMs) in Navy Operational Plan Generation
Simon Kapiamba, H. Fouad, Ira S. Moskowitz
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31179 | Proceedings of the AAAI Symposium Series

This paper outlines an approach for assessing and quantifying the risks associated with integrating Large Language Models (LLMs) in generating naval operational plans. It aims to explore the potential benefits and challenges of LLMs in this context and to suggest a methodology for a comprehensive risk assessment framework.
Learning Fast and Slow: A Redux of Levels of Learning in General Autonomous Intelligent Agents
Shiwali Mohan, John E. Laird
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31279 | Proceedings of the AAAI Symposium Series

Autonomous intelligent agents, including humans, operate in complex, dynamic environments that necessitate continuous learning. We revisit our thesis that learning in human-like agents can be categorized into two levels: Level 1 (L1) comprises innate and automatic learning mechanisms, while Level 2 (L2) comprises deliberate strategies controlled by the agent. Our thesis draws from our experience in building artificial agents with complex learning behaviors, such as interactive task learning and open-world learning.
Accounting for Human Engagement Behavior to Enhance AI-Assisted Decision Making
Ming Yin
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31184 | Proceedings of the AAAI Symposium Series

Artificial intelligence (AI) technologies have been increasingly integrated into human workflows. For example, the use of AI-based decision aids in human decision-making processes has given rise to a new paradigm of AI-assisted decision making: the AI-based decision aid provides a decision recommendation, while the human makes the final decision. The increasing prevalence of human-AI collaborative decision making highlights the need to understand how humans engage with AI-based decision aids in these processes, and how to promote the effectiveness of the human-AI team. In this talk, I'll discuss a few examples illustrating that when AI is used to assist humans, whether an individual decision maker or a group of decision makers, people's engagement with the AI assistance is largely governed by their heuristics and biases rather than by careful deliberation over the respective strengths and limitations of the AI and of themselves. I'll then describe how to enhance AI-assisted decision making by accounting for human engagement behavior in the design of AI-based decision aids. For example, AI recommendations can be presented to decision makers in a way that promotes appropriate trust and reliance on AI by leveraging or mitigating human biases, informed by an analysis of human competence in decision making. Alternatively, AI-assisted decision making can be improved by developing AI models that anticipate and adapt to the engagement behavior of human decision makers.
AI for Social Good Education at Hispanic Serving Institutions
Yu Chen, Gabriel Granco, Yunfei Hou, Heather Macias, Frank A. Gomez
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31259 | Proceedings of the AAAI Symposium Series

This project aims to broaden AI education by developing and studying the efficacy of innovative learning practices and resources for AI education for social good. We have developed three AI learning modules for students to: 1) identify social issues that align with the SDGs in their community (e.g., poverty, hunger, quality education); 2) learn AI through hands-on labs and business applications; and 3) create AI-powered solutions in teams to address the social issues they have identified. Student teams are expected to situate AI learning in their communities and contribute to them. Students then use the modules to engage in an interdisciplinary approach, facilitating AI learning for social good in information sciences and technology, geography, and computer science at three CSU HSIs (San Jose State University, Cal Poly Pomona, and CSU San Bernardino). Finally, we aim to evaluate the efficacy and impact of the proposed AI teaching methods and activities in terms of learning outcomes, student experience, student engagement, and equity.
Framework for Federated Learning and Edge Deployment of Real-Time Reinforcement Learning Decision Engine on Software Defined Radio
Jithin Jagannath
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31218 | Proceedings of the AAAI Symposium Series

Machine learning promises to meet the dynamic resource allocation requirements of Next Generation (NextG) wireless networks, including 6G and tactical networks. Recently, we have seen the impact machine learning can make on various aspects of wireless networks. Yet in most cases the progress has been limited to simulations and/or relies on large processing units to run the decision engines, as opposed to deploying them on the radio at the edge. While relying on simulations for rapid and efficient training of deep reinforcement learning (DRL) may be necessary, it is key to mitigate the sim-to-real gap while improving generalization capability. To address these challenges, we developed the Marconi-Rosenblatt Framework for Intelligent Networks (MR-iNet Gym), an open-source architecture designed to accelerate the deployment of novel DRL solutions for NextG wireless networks. To demonstrate its impact, we tackled the problem of distributed frequency and power allocation while emphasizing the generalization capability of the DRL decision engine. The end-to-end solution was implemented on a GPU-embedded software-defined radio and validated through over-the-air evaluation. To the best of our knowledge, these were the first instances establishing the feasibility of deploying DRL for optimized distributed resource allocation on the next generation of GPU-embedded radios.
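The MR-iNet Gym code itself is not reproduced in the abstract. As a rough sketch of the kind of decision engine it targets, here is a minimal stateless Q-learning loop for joint channel/power selection; the two-channel toy reward (interference levels and power penalty) is an invented stand-in, not the paper's actual environment:

```python
import random

# Hypothetical toy setting: pick a (channel, power) pair each step.
# Channel 0 is assumed busy (high interference), channel 1 is assumed clean;
# higher transmit power overcomes interference but incurs a penalty.
N_CHANNELS, N_POWERS = 2, 3

def reward(channel, power):
    # Toy reward: throughput proxy minus a power-consumption penalty.
    interference = 0.8 if channel == 0 else 0.1
    success = 1.0 if power / (N_POWERS - 1) >= interference else 0.0
    return success - 0.2 * power / (N_POWERS - 1)

def train(episodes=5000, eps=0.1, lr=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0] * N_POWERS for _ in range(N_CHANNELS)]
    for _ in range(episodes):
        if rng.random() < eps:  # epsilon-greedy exploration
            c, p = rng.randrange(N_CHANNELS), rng.randrange(N_POWERS)
        else:                   # greedy exploitation over the Q-table
            c, p = max(((ch, pw) for ch in range(N_CHANNELS)
                        for pw in range(N_POWERS)),
                       key=lambda a: q[a[0]][a[1]])
        # Stateless (bandit-style) Q update toward the observed reward.
        q[c][p] += lr * (reward(c, p) - q[c][p])
    return q
```

Under this toy reward, the learned policy settles on the clean channel at medium power, which is the highest-reward pair; a distributed deployment would run one such learner per radio.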
Perception-Dominant Control Types for Human/Machine Systems
Ted Goranson
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31177 | Proceedings of the AAAI Symposium Series

We explore a novel approach to complex domain modelling by emphasising primitives based on perception. The usual approach focuses either on actors or on cognition associated with tokens that convey information. In related research, we have examined using effects and/or outcomes as primitives, and influences as the generator of those outcomes via categoric functors. That approach (influences, effects) has advantages: it leverages what is known and supports the expanded logics we use, where we want to anticipate and engineer possible futures. But it has weaknesses when placed in a dynamic human-machine system where what is perceived or assumed matters more than what is known. The work reported here builds on previous advances in type specification and reasoning to ‘move the primitives forward’, toward situation encounter and away from situation understanding. The goal is shared human-machine systems where:
• reaction times are shorter than the traditional ingestion/comprehension/response loop can support;
• situations are too complex or dynamic for current comprehension by any means;
• there is simply insufficient knowledge about governing situations for the comprehension model to support action; and/or
• the many machine/human and system/system interfaces are incapable of conveying the needed insights; that is, the communication channels choke the information or influence flows.
While the approach is motivated by these unfriendly conditions, we expect significant benefits. We will explore these, but engineer toward a federated decision paradigm in which decisions by local humans, machines, or syntheses are not whole-situation-aware, but collectively ‘swarm’ locally across the larger system to be more effective, ‘wiser’ than a conventional paradigm may produce.
The proposed implementation strategy is to extend an existing ‘playbooks as code’ project whose goal is to advise on local action by modelling and gaming complex system dynamics. A sponsoring context is ‘grey zone’ competition that avoids armed conflict but can segue to a mixed-system course-of-action advisory. The general context is the costly ‘blue swan’ risk in large commercial and government enterprises. The method will focus on patterns and relationships in synthetic categories used to model type transitions within topological models of system influence. One may say this is applied intuitionistic type theory, following mechanisms generally described by synthetic differential geometry. In this context, the motivating supposition of this study is that information-carrying influence channels are best modelled in our challenging domain as perceived types rather than understood types.
Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs
Orfeas Menis Mastromichalakis, Edmund Dervakos, A. Chortaras, G. Stamou
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31200 | Proceedings of the AAAI Symposium Series

The use of symbolic knowledge representation and reasoning to address the lack of transparency of machine learning classifiers is a research area that has lately gained considerable traction. In this work, we use knowledge graphs as the underlying framework providing the terminology for representing explanations of a machine learning classifier's operation. This escapes the constraint of expressing explanations in terms of raw-data features, offering a promising solution to the problem of making explanations understandable. In particular, given a description of the classifier's application domain in the form of a knowledge graph, we introduce a novel theoretical framework for representing explanations of its operation as query-based rules expressed in the terminology of the knowledge graph. This allows opaque black-box classifiers to be explained using terminology and information that is independent of the classifier's features and domain of application, leading to more understandable explanations while also allowing different levels of explanation tailored to the end user.
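The query-based rules described above can be pictured with a toy example. Everything below — the miniature graph, the `depicts`/`subClassOf` predicates, and the fidelity measure — is invented for illustration and is not the authors' actual framework:

```python
# Hypothetical miniature knowledge graph: (subject, predicate, object) triples.
kg = {
    ("img1", "depicts", "Dog"), ("img2", "depicts", "Cat"),
    ("img3", "depicts", "Dog"), ("Dog", "subClassOf", "Animal"),
    ("Cat", "subClassOf", "Animal"),
}

# Black-box classifier outputs, keyed by instance id (also invented).
predictions = {"img1": "pet", "img2": "pet", "img3": "pet"}

def matches(item, predicate, obj):
    """Does `item` reach `obj` via `predicate`, allowing one subClassOf hop?"""
    direct = {o for s, p, o in kg if s == item and p == predicate}
    return obj in direct or any((d, "subClassOf", obj) in kg for d in direct)

def rule_fidelity(predicate, obj, label):
    """Fraction of instances satisfying the rule body that receive `label`:
    one way a query-based rule in KG terminology can explain a classifier."""
    covered = [i for i in predictions if matches(i, predicate, obj)]
    if not covered:
        return 0.0
    return sum(predictions[i] == label for i in covered) / len(covered)
```

Here the rule "depicts some Animal → pet" has fidelity 1.0, so the classifier's behavior is explained in graph terminology (Animal) rather than in raw pixel features.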
Can LLMs Answer Investment Banking Questions? Using Domain-Tuned Functions to Improve LLM Performance on Knowledge-Intensive Analytical Tasks
Nicholas Harvel, F. B. Haiek, Anupriya Ankolekar, David James Brunner
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31191 | Proceedings of the AAAI Symposium Series

Large Language Models (LLMs) can increase the productivity of general-purpose knowledge work, but accuracy is a concern, especially in professional settings requiring domain-specific knowledge and reasoning. To evaluate the suitability of LLMs for such work, we developed a benchmark of 16 analytical tasks representative of the investment banking industry. We evaluated LLM performance without special prompting, with relevant information provided in the prompt, and as part of a system giving the LLM access to domain-tuned functions for information retrieval and planning. Without access to functions, state-of-the-art LLMs performed poorly, completing two or fewer tasks correctly. Access to appropriate domain-tuned functions yielded dramatically better results, although performance was highly sensitive to the design of the functions and the structure of the information they returned. The most effective designs yielded correct answers on 12 out of 16 tasks. Our results suggest that domain-specific functions and information structures, by empowering LLMs with relevant domain knowledge and enabling them to reason in domain-appropriate ways, may be a powerful means of adapting LLMs for use in demanding professional settings.
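As a sketch of the function-access pattern such a system evaluates, the stub below shows a harness dispatching a model-chosen call to a domain-tuned retrieval function and building an answer from its structured return. The function name, its return schema, and the hard-coded `fake_llm` are all hypothetical, not the paper's actual design:

```python
def lookup_deal_multiples(sector: str) -> dict:
    # Hypothetical domain-tuned retrieval function with a structured return;
    # the figures are made up for illustration.
    data = {"tech": {"median_ev_ebitda": 14.2, "n_deals": 37}}
    return data.get(sector, {})

TOOLS = {"lookup_deal_multiples": lookup_deal_multiples}

def fake_llm(prompt: str) -> dict:
    # A real system would let the model choose the call and its arguments;
    # this stub hard-codes one to keep the sketch self-contained.
    return {"call": "lookup_deal_multiples", "args": {"sector": "tech"}}

def answer(question: str) -> str:
    step = fake_llm(question)                      # model proposes a call
    result = TOOLS[step["call"]](**step["args"])   # harness executes it
    # A second model pass would normally phrase the final answer from `result`.
    return (f"Median EV/EBITDA: {result['median_ev_ebitda']}x "
            f"over {result['n_deals']} deals")
```

The paper's reported sensitivity to "the structure of the information they returned" corresponds here to the shape of the dict `lookup_deal_multiples` hands back to the model.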
Ethical Considerations of Generative AI: A Survey Exploring the Role of Decision Makers in the Loop
Yohn Jairo Parra Bautista, Carlos Theran, Richard A. Aló
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31243 | Proceedings of the AAAI Symposium Series

We explore the prescient concerns that Norbert Wiener voiced in 1960 about the potential of machines to learn and devise strategies that could not be anticipated, drawing parallels to Goethe's fable "The Sorcerer's Apprentice". Progress in artificial intelligence (AI) has brought these worries back to the forefront, as shown by a survey AI Impacts conducted in 2022 with more than 700 machine learning researchers, which found a five percent probability that advanced AI might cause "extremely adverse" outcomes, including the possibility of human extinction. Importantly, the introduction of OpenAI's ChatGPT, powered by GPT-4, has led to a surge in entrepreneurial activity, highlighting the ease of use of large language models (LLMs). AI's potential for adverse outcomes, such as military control and unregulated AI races, is explored alongside concerns about AI's role in governance, healthcare, media portrayal, and surpassing human intelligence. Given their transformative impact on content creation, the prominence of generative AI tools such as ChatGPT is noted. The societal assessment of AI has grown increasingly intricate and pressing in tandem with the rapid evolution of the technology, necessitating a thorough examination of its potential impact across these domains. This assessment is crucial for addressing ethical concerns related to bias, data misuse, technical limitations, and transparency gaps, and for integrating ethical and legal principles throughout AI algorithm lifecycles to ensure alignment with societal well-being. The urgency is further underscored by the need for healthcare workforce upskilling and for ethical considerations in the era of AI-assisted medicine, emphasizing the importance of integrating societal well-being into the development and deployment of AI technologies. Our study examines the ethical quandaries and obstacles presented when developing methods to evaluate and predict the broader societal impacts of AI on decision-making processes involving the generation of images, videos, and textual content.
Personalized Image Generation Through Swiping
Yuto Nakashima
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31238 | Proceedings of the AAAI Symposium Series

Generating preferred images from GANs is a challenging task due to the high-dimensional nature of the latent space. In this study, we propose a novel approach that uses simple user-swipe interactions to generate images preferred by users. To effectively explore the latent space with only swipe interactions, we apply principal component analysis to the latent space of StyleGAN, creating meaningful subspaces. Additionally, we use a multi-armed bandit algorithm to decide which dimensions to explore, focusing on the user's preferences. Our experiments show that our method is more efficient in generating preferred images than the baseline.
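A minimal sketch of the pipeline this abstract describes, assuming a generic 64-dimensional Gaussian latent space in place of StyleGAN's actual latent space and an epsilon-greedy bandit (the abstract does not specify which bandit algorithm is used, and no generator is loaded here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for StyleGAN latents: PCA over sampled codes yields candidate
# editing directions (the real method applies PCA to StyleGAN's latent space).
latents = rng.normal(size=(1000, 64))
centered = latents - latents.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
directions = vt[:8]              # top principal directions = bandit arms

counts = np.zeros(8)
values = np.zeros(8)             # running mean swipe reward per direction

def choose_arm(eps=0.2):
    # Epsilon-greedy: usually exploit the best-rated direction, sometimes explore.
    if rng.random() < eps:
        return int(rng.integers(8))
    return int(np.argmax(values))

def update(arm, swiped_right):
    # Swipe right = +1 (user liked the edit), swipe left = -1.
    counts[arm] += 1
    r = 1.0 if swiped_right else -1.0
    values[arm] += (r - values[arm]) / counts[arm]  # incremental mean

z = rng.normal(size=64)                  # current latent code
arm = choose_arm()
z_edited = z + 0.5 * directions[arm]     # latent the generator would render next
```

Each swipe updates one arm's value, so over a session the bandit concentrates edits on the principal directions the user responds to.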