Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31267
AI Literacy for Hispanic-Serving Institution (HSI) Students
Neelu Sinha, Rama Madhavarao, Robert Freeman, Irene Oujo, Janet Boyd
Degree completion rates for Hispanic students lag far behind their white non-Hispanic peers. To close this gap and accelerate degree completion for Hispanic students at Hispanic-Serving Institutions (HSIs), we offer a pedagogical framework to incorporate AI Literacy into existing programs and encourage faculty-mentored undergraduate research initiatives to solve real-world problems using AI. Using a holistic perspective that includes experience, perception, cognition, and behavior, we describe the ideal process of learning based on a four-step cycle of experiencing, reflecting, thinking, and acting. Additionally, we emphasize the role of social interaction and community in developing mental abilities and examine how cognitive development is influenced by cultural and social factors. Tailoring the content to be culturally relevant, accessible, and engaging for our Hispanic students, and employing project-based learning, we offer hands-on activities based on social justice, inclusion, and equity to incorporate AI Literacy. Furthermore, combining the pedagogical framework with faculty-mentored undergraduate research (which has been shown to have numerous benefits) will enable our Hispanic students to develop competencies to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool anywhere, preparing them for the future and encouraging them to use AI ethically.
{"title":"AI Literacy for Hispanic-Serving Institution (HSI) Students","authors":"Neelu Sinha, Rama Madhavarao, Robert Freeman, Irene Oujo, Janet Boyd","doi":"10.1609/aaaiss.v3i1.31267","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31267","url":null,"abstract":"Degree completion rates for Hispanic students lag far be-hind their white non-Hispanic peers. To close this gap and accelerate degree completion for Hispanic students at Hispanic-Serving Institutions (HSIs), we offer a pedagogical framework to incorporate AI Literacy into existing programs and encourage faculty-mentored undergraduate research initiatives to solve real-world problems using AI. Using a holistic perspective that includes experience, perception, cognition, and behavior, we describe the ideal process of learning based on a four-step cycle of experience, reflecting, thinking, and acting. Additionally, we emphasize the role of social interaction and community in developing mental abilities and understand how cognitive development is influenced by cultural and social factors. Tailoring the content to be culturally relevant, accessible, and engaging to our Hispanic students, and employing projects-based learning, we offer hands-on activities based on social justice, inclusion, and equity to incorporate AI Literacy. Furthermore, combining the pedagogical framework along with faculty-mentored undergraduate research (the significance of which has been shown to have numerous benefits) will enable our Hispanic students develop competencies to critically evaluate AI technologies, communicate and collaborate effectively with AI, and use AI as a tool anywhere; preparing them for the future and encouraging them to use AI ethically.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"16 10","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120337","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31214
Advancing Federated Learning by Addressing Data and System Heterogeneity
Yiran Chen
In the emerging field of federated learning (FL), the challenge of heterogeneity, both in data and systems, presents significant obstacles to efficient and effective model training. This talk focuses on the latest advancements and solutions addressing these challenges. The first part of the talk delves into data heterogeneity, a core issue in FL, where data distributions across different clients vary widely and affect FL convergence. We introduce the FedCor framework, which addresses this by modeling loss correlations between clients with a Gaussian Process and reducing the expected global loss. We also uncover external covariate shift in FL, showing that normalization layers are crucial and that layer normalization is particularly effective. Additionally, class imbalance in FL degrades performance; our proposed Federated Class-balanced Sampling (Fed-CBS) mechanism reduces this imbalance while employing homomorphic encryption for privacy preservation. The second part of the talk shifts focus to system heterogeneity, an equally critical challenge in FL. System heterogeneity involves the varying computational capabilities, network speeds, and other resource-related constraints of participating devices in FL. To address this, we introduce FedSEA, a semi-asynchronous FL framework that addresses accuracy drops by balancing aggregation frequency and predicting the arrival of local updates. Additionally, we discuss FedRepre, a framework specifically designed to enhance FL in real-world environments by addressing challenges including unbalanced local dataset distributions, uneven computational capabilities, and fluctuating network speeds. By introducing a client selection mechanism and a specialized server architecture, FedRepre notably improves the efficiency, scalability, and performance of FL systems. Our talk aims to provide a comprehensive overview of the current research and advancements in tackling both data and system heterogeneity in federated learning. We hope to highlight the path forward for FL, underlining its potential in diverse real-world applications while maintaining data privacy and optimizing resource usage.
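For illustration, the Python sketch below captures the intuition behind class-balanced client selection: greedily pick the clients whose pooled label histogram is closest to uniform. It is only a toy rendering of the idea, not the Fed-CBS mechanism itself; the actual method operates on encrypted per-class counts via homomorphic encryption and may use a different imbalance measure, and all names here are illustrative.

```python
import numpy as np

def counts_to_vector(counts, num_classes):
    # Turn a {class_id: sample_count} dict into a dense count vector.
    v = np.zeros(num_classes)
    for c, n in counts.items():
        v[c] = n
    return v

def greedy_class_balanced_selection(client_label_counts, num_classes, k):
    # Greedily add the client that brings the pooled label histogram
    # closest (in L2 distance) to the uniform distribution.
    uniform = np.full(num_classes, 1.0 / num_classes)
    vectors = [counts_to_vector(c, num_classes) for c in client_label_counts]
    remaining = set(range(len(vectors)))
    selected, pooled = [], np.zeros(num_classes)
    for _ in range(min(k, len(vectors))):
        def imbalance(i):
            merged = pooled + vectors[i]
            total = merged.sum()
            if total == 0:  # client with no samples adds nothing useful
                return float("inf")
            return np.linalg.norm(merged / total - uniform)
        best = min(remaining, key=imbalance)
        selected.append(best)
        pooled += vectors[best]
        remaining.remove(best)
    return selected

if __name__ == "__main__":
    # Three toy clients with skewed label counts over 3 classes.
    clients = [{0: 90, 1: 5}, {1: 80, 2: 10}, {0: 10, 2: 70}]
    print(greedy_class_balanced_selection(clients, num_classes=3, k=2))
```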
{"title":"Advancing Federated Learning by Addressing Data and System Heterogeneity","authors":"Yiran Chen","doi":"10.1609/aaaiss.v3i1.31214","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31214","url":null,"abstract":"In the emerging field of federated learning (FL), the challenge of heterogeneity, both in data and systems, presents significant obstacles to efficient and effective model training. This talk focuses on the latest advancements and solutions addressing these challenges.\u0000\u0000The first part of the talk delves into data heterogeneity, a core issue in FL, where data distributions across different clients vary widely and affect FL convergence. We will introduce the FedCor framework addressing this by modeling loss correlations between clients using Gaussian Process and reducing expected global loss. External covariate shift in FL is uncovered, demonstrating that normalization layers are crucial, and layer normalization proves effective. Additionally, class imbalance in FL degrades performance, but our proposed Federated Class-balanced Sampling (Fed-CBS) mechanism reduces this imbalance by employing homomorphic encryption for privacy preservation.\u0000\u0000The second part of the talk shifts focus to system heterogeneity, an equally critical challenge in FL. System heterogeneity involves the varying computational capabilities, network speeds, and other resource-related constraints of participating devices in FL. To address this, we introduce FedSEA, which is a semi-asynchronous FL framework that addresses accuracy drops by balancing aggregation frequency and predicting local update arrival. Additionally, we discuss FedRepre, a framework specifically designed to enhance FL in real-world environments by addressing challenges including unbalanced local dataset distributions, uneven computational capabilities, and fluctuating network speeds. By introducing a client selection mechanism and a specialized server architecture, FedRepre notably improves the efficiency, scalability, and performance of FL systems.\u0000\u0000Our talk aims to provide a comprehensive overview of the current research and advancements in tackling both data and system heterogeneity in federated learning. We hope to highlight the path forward for FL, underlining its potential in diverse real-world applications while maintaining data privacy and optimizing resource usage.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"23 15","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31215
Operational Environments at the Extreme Tactical Edge
Mark J. Gerken
You can’t get more “on the tactical edge” than in space. No other operational domain suffers from the combination of distance from the operator, harsh environments, unreachable assets with aging hardware, and incredibly long communication delays the way space systems do. Developing and deploying AI solutions in satellites and probes is far more difficult than deploying similar AI on Earth. This talk explores some of the considerations involved in deploying AI and machine learning (ML) in the space domain.
{"title":"Operational Environments at the Extreme Tactical Edge","authors":"Mark J. Gerken","doi":"10.1609/aaaiss.v3i1.31215","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31215","url":null,"abstract":"You can’t get more “on the tactical edge” than in space. No\u0000other operational domain suffers from the combinations of\u0000distance from the operator, harsh environments, unreachable\u0000assets with aging hardware, and increadably long communications\u0000as space systems. The complexity of developing and\u0000deploying AI solutions in satellites and probes is far more\u0000difficult than deploying similar AI on Earth. This talk explores\u0000some of the considerations involved in deploying AI\u0000and machine learning (ML) in the space domain.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"11 11","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31285
Algorithmic Decision-Making in Difficult Scenarios
Christopher B. Rauch, Ursula Addison, Michael Floyd, Prateek Goel, Justin Karneeb, Ray Kulhanek, O. Larue, David Ménager, Mallika Mainali, Matthew Molineaux, Adam Pease, Anik Sen, Jt Turner, Rosina Weber
We present an approach to algorithmic decision-making that emulates key facets of human decision-making, particularly in scenarios marked by expert disagreement and ambiguity. Our system employs a case-based reasoning framework, integrating learned experiences, contextual factors, probabilistic reasoning, domain-specific knowledge, and the personal traits of decision-makers. A primary aim of the system is to articulate algorithmic decision-making as a human-comprehensible reasoning process, complete with justifications for selected actions.
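As a rough illustration of the case-based reasoning pattern described above, the following Python sketch retrieves the most similar stored case and surfaces its action together with a recorded justification. The data structures, feature names, and similarity measure are hypothetical; the actual system integrates far richer context, probabilistic reasoning, domain knowledge, and decision-maker traits.

```python
from dataclasses import dataclass

@dataclass
class Case:
    features: dict   # situation descriptors, e.g. {"casualties": 2, "ambiguity": 0.7}
    action: str      # decision taken in the past case
    rationale: str   # human-readable justification recorded with the case

def similarity(a, b):
    # Simple similarity over shared numeric features (illustrative only).
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    return sum(1.0 / (1.0 + abs(a[k] - b[k])) for k in shared) / len(shared)

def recommend(case_base, query):
    # Retrieve the most similar stored case and return its action plus a
    # justification, echoing the goal of human-comprehensible reasoning.
    best = max(case_base, key=lambda c: similarity(c.features, query))
    score = similarity(best.features, query)
    return best.action, f"Matches a prior case (similarity {score:.2f}): {best.rationale}"

if __name__ == "__main__":
    base = [
        Case({"casualties": 2, "ambiguity": 0.7}, "evacuate",
             "Prior experts prioritized evacuation under high ambiguity."),
        Case({"casualties": 0, "ambiguity": 0.2}, "hold",
             "Low-risk situations were handled by holding position."),
    ]
    print(recommend(base, {"casualties": 1, "ambiguity": 0.6}))
```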
{"title":"Algorithmic Decision-Making in Difficult Scenarios","authors":"Christopher B. Rauch, Ursula Addison, Michael Floyd, Prateek Goel, Justin Karneeb, Ray Kulhanek, O. Larue, David Ménager, Mallika Mainali, Matthew Molineaux, Adam Pease, Anik Sen, Jt Turner, Rosina Weber","doi":"10.1609/aaaiss.v3i1.31285","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31285","url":null,"abstract":"We present an approach to algorithmic decision-making that emulates key facets of human decision-making, particularly in scenarios marked by expert disagreement and ambiguity. Our system employs a case-based reasoning framework, integrating learned experiences, contextual factors, probabilistic reasoning, domain-specific knowledge, and the personal traits of decision-makers. A primary aim of the system is to articulate algorithmic decision-making as a human-comprehensible reasoning process, complete with justifications for selected actions.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"21 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119450","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31185
Personalised Course Recommender: Linking Learning Objectives and Career Goals through Competencies
Nils Beutling, Maja Spahic-Bogdanovic
This paper presents a Knowledge-Based Recommender System (KBRS) that aims to align course recommendations with students' career goals in the field of information systems. The developed KBRS uses the European Skills, Competences, Qualifications and Occupations (ESCO) ontology, course descriptions, and a Large Language Model (LLM) such as ChatGPT 3.5 to bridge course content with the skills required for specific careers in information systems. No reference is made to students' previous behavior. The system links course content to the skills required for different careers, adapts to students' changing interests, and provides clear reasoning for the courses it proposes. An LLM is used to extract learning objectives from course descriptions and to map the competencies they promote. The system evaluates a course's relevance based on the number of job-related skills supported by its learning objectives, and each recommendation is accompanied by information that facilitates decision-making. The paper describes the system's development, methodology, and evaluation, and highlights its flexibility, user orientation, and adaptability. It also discusses the challenges that arose during the development and evaluation of the system.
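To make the scoring step concrete, here is a minimal Python sketch that ranks courses by how many career-relevant skills their (already extracted) learning objectives cover. The skill labels and course names are invented for illustration; the real system maps objectives to the ESCO ontology with an LLM and presents additional decision-support information alongside the ranking.

```python
def course_relevance(course_skills, career_skills):
    # Score a course by how many of the career's required skills its
    # (LLM-extracted) learning objectives are mapped to.
    covered = set(course_skills) & set(career_skills)
    return len(covered), sorted(covered)

def recommend_courses(courses, career_skills, top_n=3):
    # courses: mapping of course name -> set of ESCO-style skill labels.
    ranked = sorted(
        ((name, *course_relevance(skills, career_skills)) for name, skills in courses.items()),
        key=lambda item: item[1],
        reverse=True,
    )
    return ranked[:top_n]

if __name__ == "__main__":
    career = {"data analysis", "database design", "process modelling"}
    courses = {
        "Business Intelligence": {"data analysis", "database design"},
        "Enterprise Architecture": {"process modelling"},
        "Academic Writing": {"report writing"},
    }
    for name, score, covered in recommend_courses(courses, career):
        print(f"{name}: {score} matching skills {covered}")
```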
{"title":"Personalised Course Recommender: Linking Learning Objectives and Career Goals through Competencies","authors":"Nils Beutling, Maja Spahic-Bogdanovic","doi":"10.1609/aaaiss.v3i1.31185","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31185","url":null,"abstract":"This paper presents a Knowledge-Based Recommender System (KBRS) that aims to align course recommendations with students' career goals in the field of information systems. The developed KBRS uses the European Skills, Competences, qualifications, and Occupations (ESCO) ontology, course descriptions, and a Large Language Model (LLM) such as ChatGPT 3.5 to bridge course content with the skills required for specific careers in information systems. In this context, no reference is made to the previous behavior of students. The system links course content to the skills required for different careers, adapts to students' changing interests, and provides clear reasoning for the courses proposed. An LLM is used to extract learning objectives from course descriptions and to map the promoted competency. The system evaluates the degree of relevance of courses based on the number of job-related skills supported by the learning objectives. This recommendation is supported by information that facilitates decision-making. The paper describes the system's development, methodology and evaluation and highlights its flexibility, user orientation and adaptability. It also discusses the challenges that arose during the development and evaluation of the system.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"4 2","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119622","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31286
Turtle-like Geometry Learning: How Humans and Machines Differ in Learning Turtle Geometry
Sina Rismanchian, Shayan Doroudi, Yasaman Razeghi
While object recognition is one of the prevalent affordances of humans' perceptual systems, even human infants can prioritize a place system, which is used when navigating, over the object recognition system. This ability, combined with active learning strategies, can make humans fast learners of Turtle Geometry, a notion introduced about four decades ago. We contrast humans' performance and learning strategies with those of large visual language models (LVLMs) and show that LVLMs fall short of humans in solving Turtle Geometry tasks. We outline characteristics of human-like learning in the domain of Turtle Geometry that are fundamentally unparalleled in state-of-the-art deep neural networks and can inform future research directions in the field of artificial intelligence.
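For readers unfamiliar with Turtle Geometry, a minimal headless interpreter gives a flavor of the kind of procedural geometry involved. The Python sketch below is generic and is not one of the tasks studied in the paper; the command names and program are illustrative only.

```python
import math

def run_turtle(program):
    # Interpret a tiny turtle program: ("forward", distance) moves the turtle,
    # ("left", degrees) turns it. Returns the sequence of visited positions.
    x, y, heading = 0.0, 0.0, 0.0
    path = [(x, y)]
    for op, arg in program:
        if op == "forward":
            x += arg * math.cos(math.radians(heading))
            y += arg * math.sin(math.radians(heading))
            path.append((round(x, 6), round(y, 6)))
        elif op == "left":
            heading = (heading + arg) % 360
    return path

if __name__ == "__main__":
    square = [("forward", 10), ("left", 90)] * 4
    print(run_turtle(square))  # traces the corners of a square and returns to the origin
```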
{"title":"Turtle-like Geometry Learning: How Humans and Machines Differ in Learning Turtle Geometry","authors":"Sina Rismanchian, Shayan Doroudi, Yasaman Razeghi","doi":"10.1609/aaaiss.v3i1.31286","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31286","url":null,"abstract":"While object recognition is one of the prevalent affordances of humans' perceptual systems, even human infants can prioritize a place system over the object recognition system, that is used when navigating. This ability, combined with active learning strategies can make humans fast learners of Turtle Geometry, a notion introduced about four decades ago. We contrast humans' performances and learning strategies with large visual language models (LVLMs) and as we show, LVLMs fall short of humans in solving Turtle Geometry tasks. We outline different characteristics of human-like learning in the domain of Turtle Geometry that are fundamentally unparalleled in state-of-the-art deep neural networks and can inform future research directions in the field of artificial intelligence.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"17 21","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141119669","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31252
AI-Assisted Talk: A Narrative Review on the New Social and Conversational Landscape
Kevin Vo
In this ongoing narrative review, I summarize the existing body of literature on the role of artificial intelligence in mediating human communication, focusing on how it is currently transforming our communication patterns. Moreover, this review uniquely contributes by critically analyzing potential future shifts in these patterns, particularly in light of the advancing capabilities of artificial intelligence. Special emphasis is placed on the implications of emerging generative AI technologies, projecting how they might redefine the landscape of human interaction.
{"title":"AI-Assisted Talk: A Narrative Review on the New Social and Conversational Landscape","authors":"Kevin Vo","doi":"10.1609/aaaiss.v3i1.31252","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31252","url":null,"abstract":"In this ongoing narrative review, I summarize the existing body of literature on the role of artificial intelligence in mediating human communication, focusing on how it is currently transforming our communication patterns. Moreover, this re-view uniquely contributes by critically analyzing potential future shifts in these patterns, particularly in light of the advancing capabilities of artificial intelligence. Special emphasis is placed on the implications of emerging generative AI technologies, projecting how they might redefine the landscape of human interaction.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"1 9","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120319","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31269
Designing Inclusive AI Certifications
Kathleen Timmerman, Judy Goldsmith, Brent Harrison, Zongming Fei
For decades, the route to familiarity in AI was through technical studies such as computer science. Yet AI has infiltrated many areas of our society. Many fields are rightfully now demanding at least a passing familiarity with machine learning: understanding the standard architectures, knowledge on how to use them, and addressing common concerns. A few such fields look at the standard ethical issues such as fairness, accountability, and transparency. Very few fields situate AI technologies in sociotechnical system analysis, nor give a rigorous foundation in ethical analysis applied to the design, development, and use of the technologies. We have proposed an undergraduate certificate in AI that gives equal weight to social and ethical issues and to technical matters of AI system design and use, aimed at students outside of the traditional AI-related disciplines. By including social and ethical issues in our AI certificate requirements, we expect to attract a broader population of students. By creating an accessible AI certification, we create an opportunity for individuals from diverse experiences to contribute to the discussion of what AI is, what its impact is, and where it should go in the future.
{"title":"Designing Inclusive AI Certifications","authors":"Kathleen Timmerman, Judy Goldsmith, Brent Harrison, Zongming Fei","doi":"10.1609/aaaiss.v3i1.31269","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31269","url":null,"abstract":"For decades, the route to familiarity in AI was through technical studies such as computer science. Yet AI has infiltrated many areas of our society. Many fields are rightfully now demanding at least a passing familiarity with machine learning: understanding the standard architectures, knowledge on how to use them, and addressing common concerns. A few such fields look at the standard ethical issues such as fairness, accountability, and transparency. Very few fields situate AI technologies in sociotechnical system analysis, nor give a rigorous foundation in ethical analysis applied to the design, development, and use of the technologies. We have proposed an undergraduate certificate in AI that gives equal weight to social and ethical issues and to technical matters of AI system design and use, aimed at students outside of the traditional AI-related disciplines. By including social and ethical issues in our AI certificate requirements, we expect to attract a broader population of students. By creating an accessible AI certification, we create an opportunity for individuals from diverse experiences to contribute to the discussion of what AI is, what its impact is, and where it should go in the future.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"80 12","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141123154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31195
ASMR: Aggregated Semantic Matching Retrieval Unleashing Commonsense Ability of LLM through Open-Ended Question Answering
Pei-Ying Lin, Erick Chandra, Jane Yung-jen Hsu
Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and commonsense. Whether Large Language Models (LLMs) have commonsense reasoning ability remains a topic of debate among researchers and experts. When confronted with multiple-choice commonsense reasoning tasks, humans typically rely on their prior knowledge and commonsense to formulate a preliminary answer in mind. Subsequently, they compare this preliminary answer to the provided choices and select the most likely choice as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic how humans solve multiple-choice commonsense reasoning tasks, we leverage LLMs to first generate preliminary answers to an open-ended version of the question, which aids in retrieving the relevant answer from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmark datasets, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance, with a peak of +15.3% accuracy improvement over the previous SOTA on the SIQA dataset.
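A minimal sketch of this generate-then-match idea appears below. The LLM call is stubbed with a canned answer and semantic matching is reduced to token-overlap cosine similarity; ASMR itself relies on an actual LLM and stronger aggregated semantic matching, so every name and value here is illustrative only.

```python
from collections import Counter
import math

def generate_preliminary_answer(question):
    # Placeholder for an open-ended LLM call; a real system would query an LLM here.
    canned = {"Where would you put uncooked pasta?": "in a cupboard or pantry in the kitchen"}
    return canned.get(question, "")

def cosine(a, b):
    # Cosine similarity between two bag-of-words vectors (stands in for semantic matching).
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[t] * vb[t] for t in va)
    na = math.sqrt(sum(v * v for v in va.values()))
    nb = math.sqrt(sum(v * v for v in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer_multiple_choice(question, choices):
    # 1) draft a free-form answer, 2) match it against the choices, 3) return the closest choice.
    draft = generate_preliminary_answer(question)
    scored = [(cosine(draft, c), c) for c in choices]
    return max(scored)[1]

if __name__ == "__main__":
    q = "Where would you put uncooked pasta?"
    print(answer_multiple_choice(q, ["refrigerator", "pantry in the kitchen", "garage"]))
```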
{"title":"ASMR: Aggregated Semantic Matching Retrieval Unleashing Commonsense Ability of LLM through Open-Ended Question Answering","authors":"Pei-Ying Lin, Erick Chandra, Jane Yung-jen Hsu","doi":"10.1609/aaaiss.v3i1.31195","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31195","url":null,"abstract":"Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and commonsense. Whether Large Language Models (LLMs) have commonsense reasoning ability remains a topic of debate among researchers and experts. When confronted with multiple-choice commonsense reasoning tasks, humans typically rely on their prior knowledge and commonsense to formulate a preliminary answer in mind. Subsequently, they compare this preliminary answer to the provided choices, and select the most likely choice as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic the process of humans solving commonsense reasoning tasks with multiple choices, we leverage the capabilities of LLMs to first generate the preliminary possible answers through open-ended question which aids in enhancing the process of retrieving relevant answers to the question from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmark datasets, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance with a peak of +15.3% accuracy improvement over the previous SOTA on SIQA dataset.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"30 13","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141118926","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-05-20 | DOI: 10.1609/aaaiss.v3i1.31179
Responsible Integration of Large Language Models (LLMs) in Navy Operational Plan Generation
Simon Kapiamba, H. Fouad, Ira S. Moskowitz
This paper outlines an approach for assessing and quantifying the risks associated with integrating Large Language Models (LLMs) in generating naval operational plans. It aims to explore the potential benefits and challenges of LLMs in this context and to suggest a methodology for a comprehensive risk assessment framework.
{"title":"Responsible Integration of Large Language Models (LLMs) in Navy Operational Plan Generation","authors":"Simon Kapiamba, H. Fouad, Ira S. Moskowitz","doi":"10.1609/aaaiss.v3i1.31179","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31179","url":null,"abstract":"This paper outlines an approach for assessing and quantifying\u0000the risks associated with integrating Large Language Models\u0000(LLMs) in generating naval operational plans. It aims to explore\u0000the potential benefits and challenges of LLMs in this\u0000context and to suggest a methodology for a comprehensive\u0000risk assessment framework.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":"18 16","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}