
Proceedings of the AAAI Symposium Series: Latest Publications

Domain-specific Embeddings for Question-Answering Systems: FAQs for Health Coaching
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31197
Andreas Martin, Charuta Pande, Sandro Schwander, A. Ajuwon, Christoph Pimmer
FAQs are widely used to respond to users’ knowledge needs within knowledge domains. While LLMs might be a promising way to address user questions, they are still prone to hallucinations, i.e., inaccurate or wrong responses, which can lead to serious problems, including, but not limited to, ethical issues. As part of a healthcare coaching chatbot for young Nigerian HIV clients, meeting their information needs through FAQs is one of the main coaching requirements. In this paper, we explore whether domain knowledge in HIV FAQs can be represented as text embeddings to retrieve similar questions matching user queries, thus improving the chatbot’s understanding and users’ satisfaction. Specifically, we describe our approach to developing an FAQ chatbot for the HIV domain. We used a predefined FAQ question-answer knowledge base in English and Pidgin, co-created by HIV clients and experts from Nigeria and Switzerland. The results of the post-engagement survey show that the chatbot mostly understood users’ questions and could identify relevant matching questions and retrieve an appropriate response.
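The retrieval step the abstract describes can be sketched as embedding both the FAQ questions and the user query, then returning the answer of the closest question. A minimal sketch follows; the toy bag-of-words `embed` function stands in for a real sentence-embedding model (the paper does not specify one), and the vocabulary and FAQ entries are invented for illustration.

```python
import math

# Invented vocabulary; a real system would use a learned embedding model.
VOCAB = ["hiv", "test", "medicine", "side", "effects", "window", "period"]

def embed(text):
    # Map text to a fixed-length vector of vocabulary term counts.
    tokens = text.lower().split()
    return [tokens.count(w) for w in VOCAB]

def cosine(a, b):
    # Cosine similarity between two vectors; 0.0 if either is all zeros.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical FAQ knowledge base: question -> answer.
FAQ = {
    "what is the window period for an hiv test": "Answer about the window period...",
    "what side effects does the medicine have": "Answer about side effects...",
}

def answer(query):
    # Retrieve the answer of the FAQ question most similar to the query.
    best = max(FAQ, key=lambda q: cosine(embed(q), embed(query)))
    return FAQ[best]
```

In a production system the embeddings would be precomputed once for the knowledge base and only the query embedded at request time.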
Citations: 0
Modes of Tracking Mal-Info in Social Media with AI/ML Tools to Help Mitigate Harmful GenAI for Improved Societal Well Being
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31247
Andy Skumanich, Han Kyul Kim
A rapidly developing threat to societal well-being is misinformation spread widely on social media. Even more concerning is "mal-info" (malicious information), which is amplified on certain social networks. Now there is an additional dimension to that threat: the use of generative AI to deliberately augment mis-info and mal-info. This paper highlights some of the "fringe" social media channels that have a high level of mal-info as characterized by our AI/ML algorithms. We discuss various channels and focus on one in particular, "GAB", as representative of the potential negative impacts. We outline some current mal-info as an example, capture its elements, and observe trends over time. We provide a set of AI/ML modes that can characterize the mal-info and allow for capture, tracking, and potentially for responding or for mitigation. We highlight the concern about malicious agents using GenAI for deliberate mal-info messaging specifically intended to disrupt societal well-being. We suggest the characterizations presented as a methodology for initiating a more deliberate and quantitative approach to addressing these harmful aspects of social media. The article highlights the potential for mal-info, including disinfo, cyberbullying, and hate speech, to disrupt segments of society. The amplification of mal-info can result in serious real-world consequences such as mass shootings. Despite attempts to introduce moderation on major platforms like Facebook and, to some extent, X/Twitter, there are growing social networks such as Gab, Gettr, and Bitchute that offer completely unmoderated spaces. This paper presents an introduction to these platforms and the initial results of a semi-quantitative analysis of Gab's posts, examining several characterization modes using text analysis.
The paper emphasizes the developing, dangerous use of generative AI algorithms by Gab and other fringe platforms, highlighting the risks to societal well-being. This article aims to lay the foundation for capturing, monitoring, and mitigating these risks.
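One simple "characterization mode" for text analysis of posts is lexicon-based scoring: assign weights to terms associated with mal-info and flag posts that exceed a threshold. The sketch below is purely illustrative; the lexicon, weights, and threshold are invented, and the paper's actual AI/ML models are not specified here.

```python
# Invented term weights standing in for a learned or curated lexicon.
MAL_INFO_LEXICON = {"hoax": 2, "traitor": 3, "fake": 1, "enemy": 2}

def score_post(text):
    # Sum the lexicon weights of all tokens appearing in the post.
    tokens = text.lower().split()
    return sum(MAL_INFO_LEXICON.get(t, 0) for t in tokens)

def flag_posts(posts, threshold=3):
    # Return posts whose score meets the threshold, e.g. for tracking over time.
    return [p for p in posts if score_post(p) >= threshold]
```

Running `flag_posts` over successive time windows of a channel's posts would give the kind of trend-over-time signal the abstract mentions.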
Citations: 0
You Can Have Your Cake and Eat It Too: Ensuring Practical Robustness and Privacy in Federated Learning
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31225
Nojan Sheybani, F. Koushanfar
Inherently, federated learning (FL) robustness is very challenging to guarantee, especially when trying to maintain privacy. Compared to standard ML settings, FL's open training process allows malicious clients to easily go under the radar. Alongside this, malicious clients can easily collude to attack the training process continuously and without detection. FL models also remain susceptible to attacks on standard ML training procedures. This massive attack surface makes balancing the tradeoff between utility, practicality, robustness, and privacy extremely challenging. While defenses against these attacks have been proposed using popular privacy-preserving primitives, such as fully homomorphic encryption, they often struggle with an all-important question present in all privacy-preserving systems: how much utility and practicality am I willing to give up to ensure privacy and robustness? In this work, we discuss a practical approach towards secure and robust FL and the challenges facing this field of emerging research.
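One lighter-weight alternative to fully homomorphic encryption for hiding individual client updates is pairwise additive masking, a building block of secure-aggregation protocols: each pair of clients agrees on a shared random mask that one adds and the other subtracts, so the masks cancel in the aggregate while each individual update stays hidden. The sketch below is a simplification for illustration; real protocols additionally handle client dropout, key agreement, and arithmetic over finite fields.

```python
import random

def mask_updates(updates, seed=0):
    # updates: one vector (list of floats) per client.
    # Each pair (i, j) shares a mask; i adds it, j subtracts it,
    # so masks cancel in the sum but hide each individual vector.
    rng = random.Random(seed)  # stands in for pairwise-agreed randomness
    n, dim = len(updates), len(updates[0])
    masked = [list(u) for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            mask = [rng.random() for _ in range(dim)]
            for d in range(dim):
                masked[i][d] += mask[d]
                masked[j][d] -= mask[d]
    return masked

def aggregate(masked):
    # Average the masked vectors; the pairwise masks cancel out.
    n, dim = len(masked), len(masked[0])
    return [sum(u[d] for u in masked) / n for d in range(dim)]
```

The aggregator only ever sees masked vectors, yet recovers the correct average, which is exactly the utility/privacy tradeoff the abstract discusses.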
Citations: 0
Is Federated Learning Still Alive in the Foundation Model Era?
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31213
Nathalie Baracaldo
Federated learning (FL) has arisen as an alternative to collecting large amounts of data in a central place to train a machine learning (ML) model. FL is privacy-friendly, allowing multiple parties to collaboratively train an ML model without exchanging or transmitting their training data. For this purpose, an aggregator iteratively coordinates the training process among parties, and parties simply share with the aggregator model updates, which contain information pertinent to the model such as neural network weights. Besides privacy, generalization has been another key driver for FL: parties who do not have enough data to train a well-performing model by themselves can now engage in FL to obtain an ML model suitable for their tasks. Products and real applications in the industry and consumer space have demonstrated the power of this learning paradigm.

Recently, foundation models have taken the AI community by storm, promising to solve the shortage of labeled data. A foundation model is a powerful model that can be recycled for a variety of use cases by applying techniques such as zero-shot learning and full or parameter-efficient fine-tuning. The premise is that the amount of data required to fine-tune a foundation model for a new task is much smaller than what is needed to fully train a traditional model from scratch. This is because a good foundation model has already learned relevant general representations, and thus adapting it to a new task requires only a minimal number of additional samples. This raises the question: is FL still alive in the era of foundation models?

In this talk, I will address this question. I will present some use cases where FL is very much alive. In these use cases, finding a foundation model with a desired representation is difficult if not impossible. With this pragmatic point of view, I hope to shed some light on a real use case where disparate private data is available in isolation at different parties and where labels may be located at a single party that has no other information, making it impossible for a single party to train a model on its own. Furthermore, in some vertically-partitioned scenarios, cleaning data is not an option due to privacy-related reasons, and it is not clear how to apply foundation models. Finally, I will also go over a few other requirements that are often overlooked, such as unlearning of data and its implications for the lifecycle management of FL and systems based on foundation models.
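The aggregation step described above, where parties share weight updates and the aggregator combines them, is commonly realized as federated averaging (FedAvg): the aggregator averages the parties' weights, weighted by each party's local dataset size. A minimal sketch, with invented toy numbers:

```python
def fedavg(updates):
    # updates: list of (weights, num_samples) pairs, one per party.
    # Each party's weight vector counts proportionally to its data size.
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[d] * n for w, n in updates) / total for d in range(dim)]
```

For example, a party holding 30 samples pulls the global model three times as strongly toward its local weights as a party holding 10 samples.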
Citations: 0
Generative AI Applications in Helping Children with Speech Language Issues
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31244
Helen Qin
This paper reports how generative AI can help children with specific language impairment (SLI) by means of an AI-assisted tool that supports children facing challenges in English phonological development, especially children in the United States for whom English is a second language. Children from bilingual families often experience challenges in developing proficiency in English pronunciation and communication, which was exacerbated by remote learning during the pandemic and led to learning loss. School-aged children with speech problems require timely intervention, because children with language disorders find it difficult to communicate with others, leading to social isolation and academic difficulties. The needed intervention is often delayed due to the high cost of speech services and the shortage of Speech and Language Pathologists (SLPs). Individuals with a history of SLI have an increased risk of unemployment. An AI-assisted Phonological Development (AI-PD) tool was prototyped, aiming to alleviate these challenges by assisting caregivers in evaluating children's phonological development, assisting SLPs in lesson preparation, and mitigating the severe shortage of SLPs.
Citations: 0
A Human-Centric Approach towards Equity and Inclusion in AI Education
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31264
Swati Mehrotra, Neelu Sinha
Artificial Intelligence (AI) has become pervasive in modern life, with AI generative tools driving further transformation. However, a notable issue persists: the underrepresentation of women and individuals from ethnic and racial minorities in the tech industry. Despite generally positive attitudes toward technology among young students, this enthusiasm often does not extend to aspirations for careers in the field. To address this disparity, many schools in the United States now offer computer science and AI courses at the high school level. Nevertheless, students from underrepresented groups often feel disconnected from these subjects, leading to low enrollment rates. Research underscores that students' career aspirations solidify between the ages of 10 and 14, highlighting the importance of engaging them with computer science and computing skills during this formative period. Leveraging the Bourdieusian concept of social capital, this paper proposes educational interventions tailored for elementary schools. By nurturing students' technical social capital, these interventions aim to foster an inclusive ecosystem from an early age, when aspirations are taking shape. Ultimately, the goal is to enhance the accessibility of computer science education and related skills, empowering young students from underrepresented groups to pursue higher studies and careers in computer science and AI fields.
Citations: 0
LLMs in Automated Essay Evaluation: A Case Study
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31193
Milan Kostic, Hans Friedrich Witschel, Knut Hinkelmann, Maja Spahic-Bogdanovic
This study delves into the application of large language models (LLMs), such as ChatGPT-4, for the automated evaluation of student essays, with a focus on a case study conducted at the Swiss Institute of Business Administration. It explores the effectiveness of LLMs in assessing German-language student transfer assignments, and contrasts their performance with traditional evaluations by human lecturers. The primary findings highlight the challenges faced by LLMs in terms of accurately grading complex texts according to predefined categories and providing detailed feedback. This research illuminates the gap between the capabilities of LLMs and the nuanced requirements of student essay evaluation. The conclusion emphasizes the necessity for ongoing research and development in the area of LLM technology to improve the accuracy, reliability, and consistency of automated essay assessments in educational contexts.
Citations: 0
The Impacts of Text-to-Image Generative AI on Creative Professionals According to Prospective Generative AI Researchers: Insights from Japan
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31256
Sharon Chee Yin Ho, Arisa Ema, Tanja Tajmel
The growing interest in Japan in implementing text-to-image (T2I) generative artificial intelligence (GenAI) technologies in creative workflows has raised concern over the ethical and social implications these technologies will have for creative professionals. Our pilot study is the first to discuss what social and ethical oversights may emerge regarding such issues among prospective Japanese researchers: computer science (CS) graduate students studying in Japan. Given that these students are the primary demographic hired to work at the research and development (R&D) labs at the forefront of such innovations in Japan, any social and ethical oversight on such issues may leave them ill-equipped as future knowledge experts who will play a pivotal role in helping shape Japan's policies regarding image-generating AI technologies.
{"title":"The Impacts of Text-to-Image Generative AI on Creative Professionals According to Prospective Generative AI Researchers: Insights from Japan","authors":"Sharon Chee Yin Ho, Arisa Ema, Tanja Tajmel","doi":"10.1609/aaaiss.v3i1.31256","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31256","url":null,"abstract":"The growing interest in Japan to implement text-to-image (T2I) generative artificial intelligence (GenAI) technologies in creative workflows has raised concern over what ethical and social implications these technologies will have on creative professionals. Our pilot study is the first to discuss what social and ethical oversights may emerge regarding such issues from prospective Japanese researchers – computer science (CS) graduate students studying in Japan. Given that these students are the primary demographic hired to work at research and development (R&D) labs at the forefront of such innovations in Japan, any social and ethical oversight on such issues may unequip them as future knowledge experts who will play a pivotal role in helping shape Japan’s policies regarding image generating AI technologies.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141120500","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Social Smarts with Tech Sparks: Harnessing LLMs for Youth Socioemotional Growth
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31253
Kevin Vo
This study proposal combines the transformative potential of GPT-4 with an innovative approach to learning social and emotional skills, offering a novel conversational aid designed to enhance adolescents' social competence, and ultimately combat social disconnection in the digital era.
{"title":"Social Smarts with Tech Sparks: Harnessing LLMs for Youth Socioemotional Growth","authors":"Kevin Vo","doi":"10.1609/aaaiss.v3i1.31253","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31253","url":null,"abstract":"This study proposal combines the transformative potential of GPT-4 with an innovative approach to learning social and emotional skills, offering a novel conversational aid designed to enhance adolescents' social competence, and ultimately combat social disconnection in the digital era.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141122829","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A Model of Cognizing Supporting the Origination of Cognizing in Nature
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31282
Edward M. Pogossian
Our model of cognizing is rooted in Jean Piaget's developmental psychology, follows researchers who model cognizing via solvers of combinatorial games, and enriches object-oriented representations of realities with input classifiers and relationships in English, while remaining consistent with inquiries into the origination of cognizing in nature. We introduce the basics of the model, provide arguments for its adequacy, and then present arguments supporting the origination of cognizing.
{"title":"A Model of Cognizing Supporting the Origination of Cognizing in Nature","authors":"Edward M. Pogossian","doi":"10.1609/aaaiss.v3i1.31282","DOIUrl":"https://doi.org/10.1609/aaaiss.v3i1.31282","url":null,"abstract":"Our model of cognizing roots in developmental psychology by Jean Piaget, follows researchers in modeling cognizing by solvers of combinatorial games, enriches object–oriented representatives of realities by input classifiers and relationships in English, while tends to be consistent with questioning the origination of cognizing in nature. \u0000Let us introduce the basics of the model, provide arguments for its adequacy, followed by those supporting the origination of cognizing.","PeriodicalId":516827,"journal":{"name":"Proceedings of the AAAI Symposium Series","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-05-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141118979","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0