
Latest publications in Cognitive Systems Research

A machine-readable metadata model for intelligent data analysis
IF 2.4 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-08-10 · DOI: 10.1016/j.cogsys.2025.101388
Xavier Angerri, Joan Vázquez, Karina Gibert
Metadata, or data about data, is essential in Knowledge Discovery from Data (KDD) and Artificial Intelligence (AI) processes, providing details about the meanings and technical aspects of dataset variables. Historically, research has focused on software to store and manage metadata, mainly describing data structure and formats for analyst understanding. However, little has been done to analyze the metadata required for automating advanced KDD processes. Traditionally, metadata creation has been a manual process, relying on analysts to gather information from stakeholders. This paper introduces the GeMeDaFi methodology, enabling stakeholders to automatically generate machine-readable metadata files, facilitating the automatic management of metadata. GeMeDaFi is a key component of the AM4IDA methodology, which guides intelligent data analysis using automatically generated metadata. This process spans from data preprocessing to result interpretation, including modeling. The metadata file, based on the MdM formal model, incorporates semantic information from stakeholders and the dataset’s structure, supporting automated intelligent preprocessing and analysis. The proposal also enhances the INSESS methodology for intelligent data analysis and has been applied in four real-world scenarios. The primary contributions are the significant reduction in time and errors in creating metadata files, accelerating the preprocessing phase, and enabling automation in the analytical step of KDD processes.
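To make the idea of a machine-readable metadata file concrete, here is a minimal Python sketch; the field names and the `preprocessing_hints` helper are illustrative assumptions, not the actual MdM/GeMeDaFi schema:

```python
import json

# Hypothetical machine-readable metadata in the spirit of GeMeDaFi/MdM;
# field names are invented for illustration, not the paper's schema.
metadata = {
    "dataset": "patients.csv",
    "variables": {
        "age": {"type": "numeric", "unit": "years", "role": "input",
                "valid_range": [0, 120], "missing_code": -1},
        "diagnosis": {"type": "categorical", "role": "target",
                      "levels": ["healthy", "mild", "severe"]},
    },
}

def preprocessing_hints(meta):
    """Derive automatic preprocessing actions from the metadata alone."""
    hints = []
    for name, spec in meta["variables"].items():
        if spec["type"] == "numeric" and "missing_code" in spec:
            hints.append(f"recode {spec['missing_code']} as NA in {name}")
        if spec["type"] == "categorical":
            hints.append(f"one-hot encode {name} ({len(spec['levels'])} levels)")
    return hints

print(json.dumps(metadata, indent=2))
for h in preprocessing_hints(metadata):
    print("-", h)
```

The point of the sketch is that once semantics (roles, units, missing codes, category levels) are machine-readable, preprocessing decisions can be derived automatically instead of being gathered manually from stakeholders.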
Citations: 0
A novel DE/VS hybrid algorithm for enhanced optimization in numerical and engineering problems
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-17 · DOI: 10.1016/j.cogsys.2025.101376
Yiğit Çağatay Kuyu
Effectively balancing exploration and exploitation is crucial for metaheuristic algorithms to achieve high-quality solutions in complex search spaces. The proposed DE/VS hybrid algorithm combines the strengths of differential evolution (DE) and vortex search (VS) to enhance global optimization performance. DE provides robust exploration but struggles with exploitation, while VS excels in exploitation but lacks exploration, often leading to premature convergence. The DE/VS framework introduces a hierarchical subpopulation structure and dynamic population size adjustment, ensuring a balanced trade-off between exploration and exploitation. This adaptive mechanism enhances convergence efficiency and prevents stagnation. Experimental evaluations across benchmark functions and engineering problems confirm that DE/VS consistently outperforms traditional methods. Statistical analysis further validates its superiority, demonstrating its effectiveness in solving complex optimization problems.
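As a rough illustration of the exploration/exploitation split described above, the toy sketch below combines a DE/rand/1 step with a vortex-search-style Gaussian contraction around the incumbent best; it deliberately omits the paper's hierarchical subpopulations and dynamic population resizing:

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def de_vs_sketch(f, dim=5, pop=20, iters=200, seed=0):
    """Toy hybrid: DE-style mutation/crossover explores, then a
    vortex-search-style Gaussian contraction around the best exploits.
    Illustrative only; not the paper's DE/VS framework."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-5, 5, (pop, dim))
    fit = np.array([f(x) for x in X])
    for t in range(iters):
        best = X[fit.argmin()]
        # DE/rand/1 exploration step (F = 0.5, CR = 0.9)
        for i in range(pop):
            a, b, c = X[rng.choice(pop, 3, replace=False)]
            trial = np.where(rng.random(dim) < 0.9, a + 0.5 * (b - c), X[i])
            ft = f(trial)
            if ft < fit[i]:
                X[i], fit[i] = trial, ft
        # VS-style exploitation: sample around the best with a shrinking radius
        radius = 5.0 * (1 - t / iters) + 1e-3
        for v in best + rng.normal(0, radius, (pop, dim)):
            fv = f(v)
            j = fit.argmax()          # replace the current worst if improved
            if fv < fit[j]:
                X[j], fit[j] = v, fv
    return fit.min()

print(de_vs_sketch(sphere))  # converges toward 0 on the sphere function
```

The shrinking radius is a crude stand-in for the adaptive balance the paper attributes to its dynamic population-size mechanism.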
Citations: 0
Selective visual memory replay with self-evaluation in cognitive robots based on global workspace framework
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-15 · DOI: 10.1016/j.cogsys.2025.101377
Wenjie Huang , Antonio Chella , Angelo Cangelosi
Learning capability for artificial systems is a well-studied topic, with various schemes enabling a system to develop knowledge continually. Methods based on memory replay are commonly adopted in the literature. This work presents a consciousness-based model integrated with a continual learning scheme for class-incremental learning in visual recognition. Drawing on psychological evidence, we suggest a reciprocal relation between memory maintenance and the learning activity of the system: the memory capability serves the continual learning problem, and in return a knowledge self-evaluation mechanism is proposed that lets the robot identify the important learning data during interactions, alleviating the memory constraint without degrading the distribution representation of abnormal data. The implemented robotic agent autonomously puts more effort into learning novel knowledge without human intervention. A cognitive architecture for the robotic agent based on the Global Workspace Theory is presented, with which the agent can automatically associate information from different modalities. Memory consolidation is implemented to run in parallel to the memory formation process. The work is validated in a class-incremental object recognition experiment on a robotic agent. The results show that the agent automatically balances the memory distribution for learning and maintains a relatively small set of samples during learning.
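The idea of self-evaluated selective replay can be sketched as a bounded buffer that retains the samples the agent judges most informative (here, highest prediction error); this is a generic illustration, not the paper's mechanism:

```python
import heapq
import random

class SelectiveReplayBuffer:
    """Sketch of self-evaluated selective replay: under a fixed memory
    budget, keep the samples with the highest importance score (e.g.
    prediction error). Illustrative only."""
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []   # min-heap of (importance, insertion_id, sample)
        self._n = 0

    def add(self, sample, importance):
        self._n += 1
        item = (importance, self._n, sample)
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif importance > self._heap[0][0]:
            heapq.heapreplace(self._heap, item)  # evict the least important

    def sample(self, k):
        kept = [s for _, _, s in self._heap]
        return random.sample(kept, min(k, len(kept)))

buf = SelectiveReplayBuffer(capacity=3)
for i, err in enumerate([0.1, 0.9, 0.5, 0.05, 0.8]):
    buf.add(f"image_{i}", importance=err)
print(sorted(buf.sample(3)))  # prints ['image_1', 'image_2', 'image_4']
```

A real system would recompute importance as the model improves; the fixed scores here stand in for the robot's online self-evaluation.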
Citations: 0
Towards naturalized phenomenology: Dynamics of space–time clouds and power law of working memory
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-07 · DOI: 10.1016/j.cogsys.2025.101374
Ihor Lubashevsky , Vasily Lubashevskiy
In this paper, we address the challenge of naturalizing phenomenology by uniting the first-person and third-person perspectives as complementary components in describing human perception. Our approach builds on the concept of space–time clouds (Lubashevsky and Plavinska, Physics of the Human Temporality: Complex Present, Springer, 2021) and introduces a novel formalism of cloud functions to model preconscious information processing in large-scale neural networks. The space–time clouds mathematically represent mental images of physical objects as they are perceived from the first-person perspective, while the cloud functions describe their preconscious representations within the same mathematical framework. The preconscious representations inherit all properties of space–time clouds, except their temporal extent; they are determined completely at each moment in time. The dynamics of cloud functions, governed by brain network activity, is described within a mathematical framework rooted in theories of physical systems, which relies on neural correlates of consciousness and the integrity of mental images. Modality-specific information processing is thought to be responsible for the emergence of high-level preattentive representations. By way of example, we reproduce the properties of the power law of working memory using the developed formalism applied to the recognition of a single scalar physical property. The corresponding governing equation reduces to the Schrödinger equation in imaginary time combined with the Lotka–Volterra model in a Hilbert space.
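For readers unfamiliar with the two ingredients named in the last sentence, their generic textbook forms are shown below; the symbols ($\psi$, $\hat{H}$, $\tau$, $x_i$, $r_i$, $a_{ij}$) are standard, and the paper's actual coupled governing equation is not reproduced here:

```latex
% Imaginary-time Schrödinger equation (generic form, \tau = it):
\frac{\partial \psi(\tau)}{\partial \tau} = -\hat{H}\,\psi(\tau)

% Generalized Lotka--Volterra dynamics (generic form):
\frac{dx_i}{dt} = x_i \Big( r_i + \sum_j a_{ij}\, x_j \Big)
```

The abstract's claim is that the cloud-function dynamics for a single scalar property reduces to a combination of these two structures in a Hilbert space.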
Citations: 0
Towards knowledge autonomy in the Companion cognitive architecture
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-05 · DOI: 10.1016/j.cogsys.2025.101378
Constantine Nakos, Kenneth D. Forbus
One of the fundamental aspects of cognitive architectures is their ability to encode and manipulate knowledge. Without a consistent, well-designed, and scalable knowledge management scheme, an architecture will be unable to move past toy problems and tackle the broader problems of cognition. Moreover, it will not be able to reach a state of knowledge autonomy, in which the architecture has the tools it needs to acquire and maintain knowledge on its own. In this paper, we document some of the challenges we have faced in developing the knowledge stack for the Companion cognitive architecture and discuss the tools, representations, and practices we have developed to overcome them. We also lay out a series of next steps that will allow Companions to play a greater role in managing their own knowledge, an important part of knowledge autonomy. It is our hope that these observations will prove useful to other cognitive architecture developers facing similar challenges.
Citations: 0
The construction and implementation direction of personalized learning model based on multimodal data fusion in the context of intelligent education
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-07-03 · DOI: 10.1016/j.cogsys.2025.101379
Xingle Ji, Lu Sun, Kun Huang
The rapid development of artificial intelligence (AI) technologies, represented by computer vision, natural language processing, and speech recognition, has brought new opportunities for the advancement of personalized learning within intelligent education. This article utilizes intelligent collection devices such as cameras, electroencephalographs (EEG), eye trackers, smart bracelets, and data gloves to comprehensively collect and analyze data on learners’ voices, videos, texts, breathing, heartbeats, EEG signals, and eye movements. A multimodal dataset for learners is constructed across four dimensions: behavioral representation, physiological information, human–computer interaction, and learning context. By employing natural language processing, speech recognition, computer vision, and physiological information recognition technologies, we extract and analyze the multimodal datasets. This process mines the hidden personalized information of learners, enabling data-driven, real-time, quantified evaluation of their learning states. This study constructs a personalized learning model based on multimodal data fusion within the field of intelligent education by examining the current research landscape, data types, and relevant fusion strategies of this technology. It aims to provide personalized services tailored to the needs of each learner.
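As a minimal illustration of combining such modalities, a decision-level (late) fusion step might look as follows; the modality names, probabilities, and reliability weights are invented for the example, and the article does not commit to this particular fusion strategy:

```python
import numpy as np

def late_fusion(modality_scores, weights=None):
    """Minimal sketch of decision-level (late) fusion: average per-modality
    class probabilities, optionally weighted by modality reliability."""
    names = list(modality_scores)
    W = (np.ones(len(names)) if weights is None
         else np.array([weights[n] for n in names]))
    W = W / W.sum()                                   # normalize weights
    stacked = np.stack([modality_scores[n] for n in names])
    return W @ stacked                                # weighted average

# Hypothetical per-modality estimates of [P(disengaged), P(engaged)]
scores = {
    "eeg":   np.array([0.2, 0.8]),
    "video": np.array([0.4, 0.6]),
    "voice": np.array([0.3, 0.7]),
}
fused = late_fusion(scores, weights={"eeg": 2.0, "video": 1.0, "voice": 1.0})
print(fused)  # prints [0.275 0.725]
```

Feature-level (early) fusion, concatenating raw feature vectors before a single classifier, is the usual alternative; the choice depends on how synchronized and reliable the individual sensors are.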
Citations: 0
Robot manipulation in everyday activities with the CRAM 2.0 cognitive architecture and generalized action plans
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-06-21 · DOI: 10.1016/j.cogsys.2025.101375
Michael Beetz , Gayane Kazhoyan , David Vernon
The CRAM 2.0 robot cognitive architecture provides a framework for knowledge-based instantiation of robot manipulation design patterns for everyday activities. These design patterns take the form of generalized action plans, which are transformed by CRAM 2.0 into parameterized low-level motion plans, using knowledge and reasoning with a contextual model to identify the motion parameter values that will successfully perform the actions required to accomplish the task. In this way, CRAM 2.0 performs implicit-to-explicit manipulation, mapping an under-specified high-level goal to the specific low-level motions required to accomplish the goal. We demonstrate the ability of a CRAM-controlled robot to carry out everyday activities in a kitchen environment.
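The implicit-to-explicit mapping can be sketched as completing an under-specified goal with motion parameters looked up from a contextual knowledge base; the names and values below are purely illustrative and do not reflect CRAM's actual API:

```python
# Hypothetical contextual knowledge base: object- and target-specific
# motion parameters (invented values, not CRAM's representation).
CONTEXT_KB = {
    ("cereal_box", "grasp_pose"): "top_grasp",
    ("cereal_box", "tilt_angle_deg"): 100,
    ("bowl", "pour_height_m"): 0.25,
}

def resolve_plan(goal):
    """Complete an under-specified goal ('pour the cereal into the bowl')
    into explicit motion parameters via knowledge-base lookups."""
    obj, target = goal["object"], goal["target"]
    return {
        "action": goal["action"],
        "grasp_pose": CONTEXT_KB[(obj, "grasp_pose")],
        "tilt_angle_deg": CONTEXT_KB[(obj, "tilt_angle_deg")],
        "pour_height_m": CONTEXT_KB[(target, "pour_height_m")],
    }

plan = resolve_plan({"action": "pour", "object": "cereal_box", "target": "bowl"})
print(plan)
```

The same generalized "pour" plan would yield different explicit parameters for a milk carton or a mug, which is the point of separating the design pattern from its knowledge-based instantiation.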
Citations: 0
Efficient and scalable masked word prediction using concept formation
IF 2.1 · CAS Tier 3 (Psychology) · Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE · Pub Date: 2025-06-04 · DOI: 10.1016/j.cogsys.2025.101371
Xin Lian , Zekun Wang , Christopher J. MacLellan
This paper introduces Cobweb/4L, a novel approach for efficient language model learning that supports masked word prediction. The approach builds on Cobweb, an incremental system that learns a hierarchy of probabilistic concepts. Each concept stores the frequencies of words that appear in instances tagged with the concept label. The system utilizes an attribute-value representation to encode words and their context into instances. Cobweb/4L uses an information-theoretic variant of category utility as well as a new performance mechanism that leverages multiple concepts to generate predictions. We demonstrate that its new performance mechanism substantially outperforms prior Cobweb performance mechanisms that use only a single node to generate predictions. Further, we demonstrate that Cobweb/4L outperforms transformer-based language models in a low-data setting by learning more rapidly and achieving better final performance. Lastly, we show that Cobweb/4L, which is hyperparameter-free, is robust across varying scales of training data and does not require any manual tuning. This is in contrast to Word2Vec, which performs best with a varying number of hidden nodes that depend on the total amount of training data; this means its hyperparameters must be manually tuned for different amounts of training data. We conclude by discussing future directions for Cobweb/4L.
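A toy version of multi-concept prediction might blend word distributions from several concepts along a root-to-leaf path; real Cobweb/4L builds its hierarchy incrementally using an information-theoretic category utility, which this hand-built sketch omits:

```python
from collections import Counter

class Concept:
    """Toy probabilistic concept: stores frequencies of words appearing in
    instances under it. A sketch of the idea only, not Cobweb/4L itself."""
    def __init__(self, counts):
        self.counts = Counter(counts)

def predict_masked(path, weights):
    """Blend word distributions from several concepts on a root-to-leaf path
    (mimicking multi-concept prediction), with illustrative per-level weights."""
    total = Counter()
    for concept, w in zip(path, weights):
        n = sum(concept.counts.values())
        for word, c in concept.counts.items():
            total[word] += w * c / n      # weighted relative frequency
    return total.most_common(1)[0][0]

# Hypothetical hierarchy: a broad root concept and a more specific leaf
root = Concept({"coffee": 10, "tea": 9, "water": 6})
leaf = Concept({"coffee": 8, "tea": 2})
print(predict_masked([root, leaf], weights=[0.3, 0.7]))  # prints coffee
```

Combining several nodes rather than reading off a single leaf is, in spirit, what the paper's new performance mechanism adds over prior single-node Cobweb prediction.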
Cited by: 0
Foxtsage vs. Adam: Revolution or evolution in optimization?
IF 2.1 CAS Tier 3 (Psychology) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-31 DOI: 10.1016/j.cogsys.2025.101373
Sirwan Abdolwahed Aula, Tarik Ahmed Rashid
Optimisation techniques are crucial in neural network training, influencing predictive performance, convergence efficiency, and computational feasibility. Traditional optimisers such as Adam offer adaptive learning rates but suffer from unstable convergence and hyperparameter sensitivity, whereas SGD provides stability but lacks adaptiveness. We propose Foxtsage, a novel hybrid optimisation algorithm that integrates FOX-TSA (for global search and exploration) with SGD (for fine-tuned local exploitation) to address these limitations. The proposed Foxtsage optimiser is benchmarked against the widely used Adam optimiser on several standard datasets, including MNIST, IMDB, and CIFAR-10. Performance is evaluated on training loss, accuracy, precision, recall, F1-score, and computational time. The study further examines computational complexity and the trade-off between optimisation performance and efficiency. Experimental findings show that Foxtsage achieves a 42.03% reduction in mean loss (Foxtsage: 9.508, Adam: 16.402) and a 42.19% decrease in loss standard deviation (Foxtsage: 20.86, Adam: 36.085), indicating greater consistency and robustness in optimisation. Modest improvements are also observed in accuracy (0.78%), precision (0.91%), recall (1.02%), and F1-score (0.89%), suggesting better generalisation. However, these gains come at a significant computational cost: mean runtime increases by 330.87% (Foxtsage: 39.541 s, Adam: 9.177 s), raising concerns about feasibility in time-sensitive applications. By combining FOX-TSA's global search power with SGD's adaptive stability, Foxtsage offers a promising alternative for neural network training, though its added computational overhead presents a critical trade-off. Future work will focus on reducing computational complexity, improving scalability, and exploring applicability to real-world deep-learning tasks.
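The hybrid pattern the abstract describes — a population-based global search for exploration, handing off to gradient descent for local exploitation — can be sketched on a toy objective. This is a minimal illustration under stated assumptions, not the authors' Foxtsage: plain random sampling stands in for FOX-TSA, and `hybrid_optimize`, its parameters, and the quadratic loss are all hypothetical.

```python
import random

# Minimal sketch of the hybrid global-then-local pattern (a stand-in for
# FOX-TSA + SGD, not the paper's implementation). Toy objective:
# f(x) = (x - 3)^2, with known minimum at x = 3.
def loss(x):
    return (x - 3.0) ** 2

def grad(x):
    return 2.0 * (x - 3.0)

def hybrid_optimize(pop_size=20, sgd_steps=100, lr=0.1, seed=0):
    rng = random.Random(seed)
    # Global phase: sample candidates widely, keep the best (exploration).
    population = [rng.uniform(-10.0, 10.0) for _ in range(pop_size)]
    x = min(population, key=loss)
    # Local phase: gradient steps refine the best candidate (exploitation).
    for _ in range(sgd_steps):
        x -= lr * grad(x)
    return x

print(round(hybrid_optimize(), 4))  # converges to the minimum at x = 3
```

The runtime trade-off reported above falls directly out of this structure: the global phase multiplies the number of loss evaluations by the population size, which is why the hybrid costs several times more wall-clock time than a single-trajectory optimiser like Adam.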
{"title":"Foxtsage vs. Adam: Revolution or evolution in optimization?","authors":"Sirwan Abdolwahed Aula ,&nbsp;Tarik Ahmed Rashid","doi":"10.1016/j.cogsys.2025.101373","DOIUrl":"10.1016/j.cogsys.2025.101373","url":null,"abstract":"<div><div>Optimisation techniques are crucial in neural network training, influencing predictive performance, convergence efficiency, and computational feasibility. Traditional Optimisers such as Adam offer adaptive learning rates but struggle with convergence stability and hyperparameter sensitivity, whereas SGD provides stability but lacks adaptiveness. We propose Foxtsage, a novel hybrid optimisation algorithm that integrates the FOX-TSA (for global search and exploration) with SGD (for fine-tuned local exploitation) to address these limitations. The proposed Foxtsage Optimiser is benchmarked against the widely used Adam Optimiser across multiple standard datasets, including MNIST, IMDB, and CIFAR-10. Performance is evaluated based on training loss, accuracy, precision, recall, F1-score, and computational time. The study further explores computational complexity and the trade-off between optimisation performance and efficiency. Experimental findings demonstrate that Foxtsage achieves a 42.03% reduction in mean loss (Foxtsage: 9.508, Adam: 16.402) and a 42.19% decrease in loss standard deviation (Foxtsage: 20.86, Adam: 36.085), indicating greater consistency and robustness in optimisation. Additionally, modest improvements are observed in accuracy (0.78%), precision (0.91%), recall (1.02%), and F1-score (0.89%), showcasing better generalization capability. However, these gains come at a significant computational cost, with a 330.87% increase in time mean (Foxtsage: 39.541 sec, Adam: 9.177 sec), raising concerns about practical feasibility in time-sensitive applications. By effectively combining FOX-TSA’s global search power with SGD’s adaptive stability, Foxtsage provides a promising alternative for neural network training. 
While it enhances performance and robustness, its increased computational overhead presents a critical trade-off. Future work will focus on reducing computational complexity, improving scalability, and exploring its applicability in real-world deep-learning tasks.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"92 ","pages":"Article 101373"},"PeriodicalIF":2.1,"publicationDate":"2025-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144263094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0
An integrative model of self-efficacy within a computational cognitive architecture
IF 2.1 CAS Tier 3 (Psychology) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2025-05-27 DOI: 10.1016/j.cogsys.2025.101372
Ron Sun, Sergei Bugrov, David Dai
Effects of self-efficacy on effort and performance have been found to be complex and multi-faceted: seemingly inconsistent empirical findings and theories exist, and controversies abound. Using a computational cognitive architecture, we show that these disparate empirical results can potentially be synthesized. Analysis and simulation within the architecture account for various empirical phenomena of self-efficacy, demonstrating that their interpretations can be unified mechanistically. We attribute effort allocation to maximized utility and trace utility back to essential human motives, thus hypothesizing a mechanistic/computational (not merely conceptual) basis of effort allocation and performance. Within this model, various effects of self-efficacy are qualitatively captured.
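The claim that effort allocation maximizes utility can be made concrete with a toy model. This sketch is purely illustrative and not the paper's architecture: the functional forms (efficacy scaling a diminishing-returns success probability, linear effort cost) and the names `expected_utility` and `best_effort` are assumptions introduced here.

```python
import math

# Illustrative toy (not the paper's model): effort is chosen to maximize
# expected utility, where perceived self-efficacy scales the believed
# probability that effort pays off, and effort itself carries a cost.
def expected_utility(effort, efficacy, reward=10.0, cost=2.0):
    p_success = efficacy * (1.0 - math.exp(-effort))  # diminishing returns
    return p_success * reward - cost * effort

def best_effort(efficacy):
    grid = [i / 100.0 for i in range(0, 501)]  # search effort in [0, 5]
    return max(grid, key=lambda e: expected_utility(e, efficacy))

# Under this toy model, higher self-efficacy leads to more effort allocated.
print(best_effort(0.3), best_effort(0.9))
```

Even this simple form reproduces one qualitative pattern discussed in the self-efficacy literature: when perceived efficacy is low enough that expected payoff cannot cover the cost, the utility-maximizing effort drops toward zero.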
{"title":"An integrative model of self-efficacy within a computational cognitive architecture","authors":"Ron Sun ,&nbsp;Sergei Bugrov ,&nbsp;David Dai","doi":"10.1016/j.cogsys.2025.101372","DOIUrl":"10.1016/j.cogsys.2025.101372","url":null,"abstract":"<div><div>Effects of self-efficacy on effort and performance have been found to be complex and multi-faceted. Seemingly inconsistent empirical findings and theories exist, and controversies abound. Using a computational cognitive architecture, we show that different empirical results may potentially be synthesized. Analysis and simulation within the computational cognitive architecture account for various empirical phenomena of self-efficacy, demonstrating that their interpretations can be unified mechanistically. We attribute effort allocation to utility that is maximized and trace utility back to essential human motives, thus hypothesizing a mechanistic/computational (not just conceptual) basis of effort allocation and performance. Within this model, various effects of self-efficacy are qualitatively captured.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"92 ","pages":"Article 101372"},"PeriodicalIF":2.1,"publicationDate":"2025-05-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144481243","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited by: 0