
Proceedings of the AAAI Symposium Series: Latest Publications

ASMR: Aggregated Semantic Matching Retrieval Unleashing Commonsense Ability of LLM through Open-Ended Question Answering
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31195
Pei-Ying Lin, Erick Chandra, Jane Yung-jen Hsu
Commonsense reasoning refers to the ability to make inferences, draw conclusions, and understand the world based on general knowledge and commonsense. Whether Large Language Models (LLMs) have commonsense reasoning ability remains a topic of debate among researchers and experts. When confronted with multiple-choice commonsense reasoning tasks, humans typically rely on their prior knowledge and commonsense to formulate a preliminary answer in mind. Subsequently, they compare this preliminary answer to the provided choices and select the most likely choice as the final answer. We introduce Aggregated Semantic Matching Retrieval (ASMR) as a solution for multiple-choice commonsense reasoning tasks. To mimic the process humans use to solve multiple-choice commonsense reasoning tasks, we leverage the capabilities of LLMs to first generate preliminary possible answers through open-ended questioning, which aids the process of retrieving the answer most relevant to the question from the given choices. Our experiments demonstrate the effectiveness of ASMR on popular commonsense reasoning benchmark datasets, including CSQA, SIQA, and ARC (Easy and Challenge). ASMR achieves state-of-the-art (SOTA) performance, with a peak improvement of +15.3% accuracy over the previous SOTA on the SIQA dataset.
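To make the generate-then-match idea concrete, here is a minimal sketch of aggregated semantic matching, assuming hypothetical embed() and generate_open_answers() placeholders stand in for the embedding model and the LLM prompted without the answer choices; this is an illustration of the general pattern, not the authors' implementation.

```python
# Hypothetical sketch of aggregated semantic matching over multiple-choice options.
# embed() and generate_open_answers() are invented placeholders, not the paper's code.
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: return a unit-norm embedding vector for `text`."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # deterministic stand-in
    v = rng.normal(size=384)
    return v / np.linalg.norm(v)

def generate_open_answers(question: str, n: int = 3) -> list[str]:
    """Placeholder: an LLM would produce n free-form preliminary answers here."""
    return [f"preliminary answer {i} to: {question}" for i in range(n)]

def asmr_select(question: str, choices: list[str]) -> str:
    """Score each choice by its aggregated similarity to the open-ended answers."""
    answer_vecs = [embed(a) for a in generate_open_answers(question)]
    scores = []
    for choice in choices:
        c = embed(choice)
        # aggregate semantic matches: mean cosine similarity over all preliminary answers
        scores.append(np.mean([float(c @ a) for a in answer_vecs]))
    return choices[int(np.argmax(scores))]

print(asmr_select("Where would you store milk?", ["garage", "refrigerator", "bookshelf"]))
```

With real embeddings and LLM generations plugged in, the aggregation step is what lets several noisy preliminary answers jointly point to the correct choice.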
Citations: 0
Learning Fast and Slow: A Redux of Levels of Learning in General Autonomous Intelligent Agents
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31279
Shiwali Mohan, John E. Laird
Autonomous intelligent agents, including humans, operate in a complex, dynamic environment that necessitates continuous learning. We revisit our thesis proposing that learning in human-like agents can be categorized into two levels: Level 1 (L1) involves innate and automatic learning mechanisms, while Level 2 (L2) comprises deliberate strategies controlled by the agent. Our thesis draws from our experiences in building artificial agents with complex learning behaviors, such as interactive task learning and open-world learning.
Citations: 0
Responsible Integration of Large Language Models (LLMs) in Navy Operational Plan Generation
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31179
Simon Kapiamba, H. Fouad, Ira S. Moskowitz
This paper outlines an approach for assessing and quantifying the risks associated with integrating Large Language Models (LLMs) in generating naval operational plans. It aims to explore the potential benefits and challenges of LLMs in this context and to suggest a methodology for a comprehensive risk assessment framework.
Citations: 0
Framework for Federated Learning and Edge Deployment of Real-Time Reinforcement Learning Decision Engine on Software Defined Radio
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31218
Jithin Jagannath
Machine learning promises to empower the dynamic resource allocation requirements of Next Generation (NextG) wireless networks, including 6G and tactical networks. Recently, we have seen the impact machine learning can make on various aspects of wireless networks. Yet, in most cases, the progress has been limited to simulations and/or relies on large processing units to run the decision engines, as opposed to deploying them on the radio at the edge. While relying on simulations for rapid and efficient training of deep reinforcement learning (DRL) may be necessary, it is key to mitigate the sim-to-real gap while trying to improve generalization capability. To mitigate these challenges, we developed the Marconi-Rosenblatt Framework for Intelligent Networks (MR-iNet Gym), an open-source architecture designed to accelerate the deployment of novel DRL for NextG wireless networks. To demonstrate its impact, we tackled the problem of distributed frequency and power allocation while emphasizing the generalization capability of the DRL decision engine. The end-to-end solution was implemented on a GPU-embedded software-defined radio and validated using over-the-air evaluation. To the best of our knowledge, these were the first instances establishing the feasibility of deploying DRL for optimized distributed resource allocation on the next generation of GPU-embedded radios.
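As a rough illustration of the frequency-and-power allocation decision problem, the sketch below trains a stateless, tabular epsilon-greedy learner over a toy (channel, power level) action space; the interference model and reward are invented, and this greatly simplified stand-in is not the MR-iNet Gym architecture or its DRL agents.

```python
# Toy stand-in for a channel/power-selection decision engine, trained with a
# bandit-style tabular update rather than deep RL; only the action space
# (frequency channel, power level) mirrors the problem described above.
import numpy as np

N_CHANNELS, N_POWERS = 4, 3
rng = np.random.default_rng(0)
interference = rng.uniform(0.1, 1.0, size=N_CHANNELS)  # hypothetical per-channel interference

def reward(channel: int, power: int) -> float:
    """Higher throughput on clean channels, minus a cost for transmit power."""
    snr = (power + 1) / interference[channel]
    return float(np.log2(1.0 + snr)) - 0.3 * power

q = np.zeros((N_CHANNELS, N_POWERS))
alpha, epsilon = 0.1, 0.1
for step in range(5000):
    if rng.random() < epsilon:                      # epsilon-greedy exploration
        c, p = rng.integers(N_CHANNELS), rng.integers(N_POWERS)
    else:
        c, p = np.unravel_index(np.argmax(q), q.shape)
    q[c, p] += alpha * (reward(c, p) - q[c, p])     # stateless value update

best = np.unravel_index(np.argmax(q), q.shape)
print(f"learned allocation: channel={best[0]}, power level={best[1]}")
```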
Citations: 0
Can LLMs Answer Investment Banking Questions? Using Domain-Tuned Functions to Improve LLM Performance on Knowledge-Intensive Analytical Tasks
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31191
Nicholas Harvel, F. B. Haiek, Anupriya Ankolekar, David James Brunner
Large Language Models (LLMs) can increase the productivity of general-purpose knowledge work, but accuracy is a concern, especially in professional settings requiring domain-specific knowledge and reasoning. To evaluate the suitability of LLMs for such work, we developed a benchmark of 16 analytical tasks representative of the investment banking industry. We evaluated LLM performance without special prompting, with relevant information provided in the prompt, and as part of a system giving the LLM access to domain-tuned functions for information retrieval and planning. Without access to functions, state-of-the-art LLMs performed poorly, completing two or fewer tasks correctly. Access to appropriate domain-tuned functions yielded dramatically better results, although performance was highly sensitive to the design of the functions and the structure of the information they returned. The most effective designs yielded correct answers on 12 out of 16 tasks. Our results suggest that domain-specific functions and information structures, by empowering LLMs with relevant domain knowledge and enabling them to reason in domain-appropriate ways, may be a powerful means of adapting LLMs for use in demanding professional settings.
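A minimal sketch of the "LLM with access to domain-tuned functions" pattern follows, assuming invented placeholder data and function names (get_metric, ev_to_ebitda, call_llm); the paper's actual functions, prompts, and tool-invocation mechanism are not described at this level of detail.

```python
# Hypothetical sketch of exposing domain-tuned functions to an LLM-based assistant.
# All names, data values, and call_llm() are invented placeholders, not the authors' system.
FINANCIALS = {"ACME": {"revenue": 1200.0, "ebitda": 310.0, "net_debt": 450.0}}

def get_metric(ticker: str, metric: str) -> float:
    """Domain-tuned retrieval: return one financial metric for one company."""
    return FINANCIALS[ticker][metric]

def ev_to_ebitda(ticker: str, equity_value: float) -> float:
    """Domain-tuned analytic: enterprise value / EBITDA, assuming EV = equity + net debt."""
    ev = equity_value + get_metric(ticker, "net_debt")
    return ev / get_metric(ticker, "ebitda")

def call_llm(prompt: str, tools: dict) -> str:
    """Placeholder: a real system would let the model choose and invoke a tool here."""
    result = tools["ev_to_ebitda"]("ACME", equity_value=2000.0)
    return f"ACME trades at roughly {result:.1f}x EV/EBITDA."

tools = {"get_metric": get_metric, "ev_to_ebitda": ev_to_ebitda}
print(call_llm("What multiple does ACME trade at?", tools))
```

The point of the pattern is that the functions, not the model, encode domain definitions (here, how enterprise value is computed), so the model only has to decide which function to call and how to present the result.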
Citations: 0
Exploiting Machine Learning Bias: Predicting Medical Denials
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31181
Stephen Russell, Fabio Montes Suros, Ashwin Kumar
For a large healthcare system, even ignoring the costs of managing the patient encounter denial process (staffing, contracts, etc.), total denial-related amounts can exceed $1B annually in gross charges. Being able to predict a denial before it occurs has the potential for tremendous savings. Using machine learning to predict denials could enable denial-preventing interventions. However, the challenge of data imbalance makes creating a single generalized model difficult. We employ two biased models in a hybrid voting scheme to achieve results that exceed the state of the art and allow for incremental predictions as the encounter progresses. The model had the added benefit of monitoring the human-driven denial process that affects the underlying distribution, on which the models' bias is based.
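To illustrate the hybrid-voting idea under class imbalance, here is a small sketch on synthetic data: two deliberately biased classifiers (one weighted toward the rare "denied" class, one toward the majority class) are combined by averaging their probabilities. The data, models, and threshold are assumptions for illustration only, not the authors' pipeline.

```python
# Minimal sketch: hybrid vote of two class-weight-biased classifiers on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic encounters with ~5% positive (denied) outcomes.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Model A is biased toward catching denials; model B toward the majority class.
model_a = LogisticRegression(class_weight={0: 1, 1: 10}, max_iter=1000).fit(X_tr, y_tr)
model_b = LogisticRegression(class_weight={0: 1, 1: 1}, max_iter=1000).fit(X_tr, y_tr)

# Hybrid vote: average the two denial probabilities, then threshold.
p = 0.5 * (model_a.predict_proba(X_te)[:, 1] + model_b.predict_proba(X_te)[:, 1])
print(classification_report(y_te, (p >= 0.5).astype(int), digits=3))
```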
Citations: 0
AI for Social Good Education at Hispanic Serving Institutions
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31259
Yu Chen, Gabriel Granco, Yunfei Hou, Heather Macias, Frank A. Gomez
This project aims to broaden AI education by developing and studying the efficacy of innovative learning practices and resources for AI education for social good. We have developed three AI learning modules for students to: 1) identify social issues that align with the SDGs in their community (e.g., poverty, hunger, quality education); 2) learn AI through hands-on labs and business applications; and 3) create AI-powered solutions in teams to address social issues they have identified. Student teams are expected to situate AI learning in their communities and contribute to their communities. Students then use the modules to engage in an interdisciplinary approach, facilitating AI learning for social good in informational sciences and technology, geography, and computer science at three CSU HSIs (San Jose State University, Cal Poly Pomona and CSU San Bernardino). Finally, we aim to evaluate the efficacy and impact of the proposed AI teaching methods and activities in terms of learning outcomes, student experience, student engagement, and equity.
Citations: 0
Modeling Human-Like Acquisition of Language and Concepts
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31275
Peter Lindes, Steven Jones
Humans acquire language and related concepts in a trajectory over a lifetime. Concepts for simple interaction with the world are learned before language. Later, words are learned to name these concepts along with structures needed to represent larger meanings. Eventually, language advances to where it can drive the learning of new concepts. Throughout this trajectory a language processing capability uses architectural mechanisms to process language using the knowledge already acquired. We assume that this growing body of knowledge is made up of small units of form-meaning mapping that can be composed in many ways, suggesting that these units are learned incrementally from experience. In prior work we have built a system to comprehend human language within an autonomous robot using knowledge in such units developed by hand. Here we propose a research program to develop the ability of an artificial agent to acquire this knowledge incrementally and autonomously from its experience in a similar trajectory. We then propose a strategy for evaluating this human-like learning system using a large benchmark created as a tool for training deep learning systems. We expect that our human-like learning system will produce better task performance from training on only a small subset of this benchmark.
Citations: 0
Personalized Image Generation Through Swiping
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31238
Yuto Nakashima
Generating preferred images from GANs is a challenging task due to the high-dimensional nature of the latent space. In this study, we propose a novel approach that uses simple swipe interactions to generate images matching a user's preferences. To effectively explore the latent space with only swipe interactions, we apply principal component analysis to the latent space of StyleGAN, creating meaningful subspaces. Additionally, we use a multi-armed bandit algorithm to decide which dimensions to explore, focusing on the user's preferences. Our experiments show that our method is more efficient at generating preferred images than the baseline.
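The sketch below shows one way the swipe loop described above could be structured: principal directions of a latent space act as arms of an epsilon-greedy bandit, and each swipe updates the value of the explored direction. The latent samples, PCA step, step size, and simulated "user" are all synthetic assumptions, not the paper's StyleGAN setup.

```python
# Hypothetical swipe-to-explore loop: bandit over principal directions of a latent space.
import numpy as np

rng = np.random.default_rng(0)
latents = rng.normal(size=(2000, 64))                 # stand-in for StyleGAN latent vectors
# principal components of the latent space = candidate editing directions
_, _, vt = np.linalg.svd(latents - latents.mean(0), full_matrices=False)
directions = vt[:8]                                   # keep the top 8 directions (bandit arms)

values = np.zeros(len(directions))                    # estimated usefulness per direction
counts = np.zeros(len(directions))
z = rng.normal(size=64)                               # current latent code

def user_swipe(candidate: np.ndarray) -> float:
    """Placeholder for a real swipe: +1 if a hidden preference improves, else -1."""
    target = np.ones(64)
    return 1.0 if float(candidate @ target) > float(z @ target) else -1.0

for step in range(200):
    if rng.random() < 0.2:                            # epsilon-greedy choice of direction
        arm = int(rng.integers(len(directions)))
    else:
        arm = int(np.argmax(values))
    candidate = z + 0.5 * directions[arm]
    feedback = user_swipe(candidate)
    counts[arm] += 1
    values[arm] += (feedback - values[arm]) / counts[arm]
    if feedback > 0:
        z = candidate                                 # keep edits the user liked
print("most useful direction:", int(np.argmax(values)))
```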
Citations: 0
Rule-Based Explanations of Machine Learning Classifiers Using Knowledge Graphs
Pub Date : 2024-05-20 DOI: 10.1609/aaaiss.v3i1.31200
Orfeas Menis Mastromichalakis, Edmund Dervakos, A. Chortaras, G. Stamou
The use of symbolic knowledge representation and reasoning as a way to resolve the lack of transparency of machine learning classifiers is a research area that has lately gained a lot of traction. In this work, we use knowledge graphs as the underlying framework providing the terminology for representing explanations of a machine learning classifier's operation. This escapes the constraint of expressing explanations in terms of raw-data features and offers a promising solution to the problem of making explanations understandable. In particular, given a description of the classifier's application domain in the form of a knowledge graph, we introduce a novel theoretical framework for representing explanations of its operation as query-based rules expressed in the terminology of the knowledge graph. This allows opaque black-box classifiers to be explained using terminology and information that is independent of the classifier's features and its domain of application, leading to more understandable explanations while also allowing different levels of explanation to be created for the final end-user.
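As a toy illustration of a query-based rule stated in knowledge-graph terminology rather than raw features, the sketch below checks whether items satisfying the rule "depicts something that is an Animal and is located in a place that is part of a Farm" receive the classifier's 'farm scene' label. The graph, rule, and classifier outputs are invented examples, not the paper's formalism.

```python
# Toy knowledge graph as a set of (subject, predicate, object) triples.
triples = {
    ("img1", "depicts", "Cow"), ("Cow", "subClassOf", "Animal"),
    ("img1", "locatedIn", "Barn"), ("Barn", "isPartOf", "Farm"),
    ("img2", "depicts", "Car"),
}
classifier_output = {"img1": "farm scene", "img2": "street scene"}

def satisfies_rule(item: str) -> bool:
    """Does `item` depict an Animal and sit in a location that is part of a Farm?"""
    depicted = {o for (s, p, o) in triples if s == item and p == "depicts"}
    animal = any((d, "subClassOf", "Animal") in triples for d in depicted)
    places = {o for (s, p, o) in triples if s == item and p == "locatedIn"}
    on_farm = any((pl, "isPartOf", "Farm") in triples for pl in places)
    return animal and on_farm

# The rule "explains" the classifier when rule coverage lines up with the predicted label.
for item, label in classifier_output.items():
    print(item, label, "covered by rule:", satisfies_rule(item))
```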
Citations: 0