The use of constraint models in symbolic AI has increased significantly over the last decades because of their capability to certify the existence of solutions as well as their optimality. In the latter case, approaches based on the Maximum and Minimum Satisfiability problems, MaxSAT and MinSAT, have been shown to provide state-of-the-art performance on many computationally challenging problems of social interest, including scheduling, timetabling and resource allocation. Research on new approaches to MaxSAT and MinSAT continues to produce cutting-edge advances. In this work, we push in this direction by contributing new tableau-based calculi for solving the MaxSAT and MinSAT problems of regular propositional logic, referred to as the Regular MaxSAT and Regular MinSAT problems, respectively. For these problems, we also consider the two extensions of highest practical interest, namely attaching weights to clauses and distinguishing between hard (mandatory) and soft (desirable) constraints. Hence, our methods handle any subclass of the most general variants: Weighted Partial Regular MaxSAT and Weighted Partial Regular MinSAT. We provide a detailed description of the methods and prove that the proposed calculi are sound and complete.
{"title":"Complete tableau calculi for Regular MaxSAT and Regular MinSAT","authors":"Jordi Coll , Chu-Min Li , Felip Manyà , Elifnaz Yangin","doi":"10.1016/j.cogsys.2024.101319","DOIUrl":"10.1016/j.cogsys.2024.101319","url":null,"abstract":"<div><div>The use of constraint models in symbolic AI has significantly increased during the last decades for their capability of certifying the existence of solutions as well as their optimality. In the latter case, approaches based on the Maximum and Minimum Satisfiability problems, or MaxSAT and MinSAT, have shown to provide state-of-the-art performances in solving many computationally challenging problems of social interest, including scheduling, timetabling and resource allocation. Indeed, the research on new approaches to MaxSAT and MinSAT is a trend still providing cutting-edge advances. In this work, we push in this direction by contributing new tableaux-based calculi for solving the MaxSAT and MinSAT problems of regular propositional logic, referred to as Regular MaxSAT and Regular MinSAT problems, respectively. For these problems, we consider as well the two extensions of the highest practical interest, namely the inclusion of weights to clauses, and the distinction between hard (mandatory) and soft (desirable) constraints. Hence, our methods handle any subclass of the most general variants: Weighted Partial Regular MaxSAT and Weighted Partial Regular MinSAT. We provide a detailed description of the methods and prove that the proposed calculi are sound and complete.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"90 ","pages":"Article 101319"},"PeriodicalIF":2.1,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096915","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-01-07. DOI: 10.1016/j.cogsys.2025.101322
Carlos Zárate, Félix Ramos, Alan Christian López Fraga
Cognitive architectures represent an alternative path in the quest to develop general-purpose artificial intelligence, for which the cognitive sciences are studied. In this work we focus on modeling affective processing, an important component for enabling basic emotional capabilities. This component was developed with the aim of generating affective responses in the presence of stimuli, responses needed to feed a basic emotion model already proposed within our research group. For the proposal we used a layer-based model with Deterministic Finite Automata (DFAs) to process stimuli over time, which work as structures to store and represent stimulus–response associations. This approach provides an independent component, in contrast to the proposals commonly seen in the state of the art, where this process is often embedded in the computation of feelings and emotions. The model was tested on responses to sound consonance, showing that it is capable of providing and reinforcing responses for specific stimulus features. The results obtained show that the model is capable of making associations between the encoded stimuli and the expected responses, taking advantage of the fact that it does not need to be trained to identify stimulus patterns but only to learn to respond to them.
{"title":"Computational model for affective processing based on Cognitive Sciences: An approach using deterministic finite automata’s and temporal heterogeneity","authors":"Carlos Zárate, Félix Ramos, Alan Christian López Fraga","doi":"10.1016/j.cogsys.2025.101322","DOIUrl":"10.1016/j.cogsys.2025.101322","url":null,"abstract":"<div><div>Cognitive architectures represent an alternative in the quest to develop general purpose artificial intelligence, for which cognitive sciences are studied. In this work we will focus on modeling affective processing, an important component to enable basic emotional capabilities. This component was developed with the aim of generating affective responses in the presence of stimuli, necessary to feed a basic emotion model already proposed within our research group. For the proposal we used a layer-based model with Deterministic Finite Automata’s (DFA) to process stimuli along the time, which works as structures to store and represent stimulus–response associations. This approach provides an independent component, contrary to the proposals commonly seen in the state of the art, where this process is often embedded in the feelings and emotions calculations. This model was tested to respond to the sounds consonance, showing that is capable to provide and reinforce responses for specific stimuli features. The results obtained show that the model is capable of making associations between the encoded stimuli and the expected responses, taking advantage of the fact that it is not necessary to be trained to identify stimulus patterns but only to learn to respond to them.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"90 ","pages":"Article 101322"},"PeriodicalIF":2.1,"publicationDate":"2025-01-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143096914","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-12-25. DOI: 10.1016/j.cogsys.2024.101315
Labiba Aziz, Jan Treur
Prenatal maternal stress (PNMS) has significant implications for infant temperament, primarily through alterations in the hypothalamic–pituitary–adrenal (HPA) axis and epigenetic mechanisms. This study explores the effects of PNMS on infant stress reactivity using a fifth-order adaptive dynamical system model. The model integrates genetic, epigenetic, and environmental factors, focusing on the downregulation of 11β-HSD-2, an enzyme responsible for converting active cortisol to its inactive form, and its subsequent influence on fetal cortisol exposure. The article also employs network-oriented modeling to represent epigenetic changes and their impact on infant temperament development, emphasizing the HPA axis' role in stress regulation. Simulation experiments compare PNMS scenarios, illustrating the long-term developmental consequences for temperament. This research highlights the importance of maternal well-being during pregnancy in shaping infant stress responses and provides insights into the developmental origins of health and disease.
{"title":"Higher-order adaptive dynamical system modelling of epigenetic mechanisms in infant temperament shaped by prenatal maternal stress","authors":"Labiba Aziz , Jan Treur","doi":"10.1016/j.cogsys.2024.101315","DOIUrl":"10.1016/j.cogsys.2024.101315","url":null,"abstract":"<div><div>Prenatal maternal stress (PNMS) has significant implications for infant temperament, primarily through alterations in the hypothalamic–pituitary–adrenal (HPA) axis and epigenetic mechanisms. This study explores the effects of PNMS on infant stress reactivity using a fifth-order adaptive dynamical system model. The model integrates genetic, epigenetic, and environmental factors, focusing on the downregulation of 11β-HSD-2, an enzyme responsible for converting active cortisol to its inactive form, and its subsequent influence on fetal cortisol exposure. The article also employs network-oriented modeling to represent epigenetic changes and their impact on infant temperament development, emphasizing the HPA axis’ role in stress regulation. Simulation experiments compare scenarios with PNMS, illustrating the long-term developmental consequences on temperament. This research highlights the importance of maternal well-being during pregnancy in shaping infant stress responses and provides insights into the developmental origins of health and disease.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"90 ","pages":"Article 101315"},"PeriodicalIF":2.1,"publicationDate":"2024-12-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143097275","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-30. DOI: 10.1016/j.cogsys.2024.101302
Zhiqiang Wu, Dongshu Wang, Lei Liu
Behavioral decision-making of mobile robots in unknown environments is a crucial research topic in robotics. Inspired by the working mechanisms of different brain regions in mammals, this paper designs a new hybrid model that integrates the functions of the cerebellum and the basal ganglia by simulating the memory replay of the hippocampus, so as to realize autonomous behavioral decision-making of a robot in unknown environments. A reinforcement learning module based on the Actor-Critic framework and a developmental network module are used to simulate the functions of the basal ganglia and the cerebellum, respectively. Considering the different functions of D1 and D2 dopamine receptors in the basal ganglia, an Actor network module with separate learning of positive and negative rewards is designed for the basal ganglia to realize efficient exploration of the environment by the agent. According to the characteristics of biological memory, a physiological memory priority index is designed for hippocampal memory replay, which improves the offline learning efficiency of the cerebellum. The integrated model enables dynamic switching between decisions made by the cerebellum and the basal ganglia based on the agent's cognitive level with respect to the environment. Finally, the effectiveness of the proposed model is verified through experiments on agent navigation in both simulated and real environments, as well as through performance comparisons with other learning algorithms.
{"title":"Integrated model of cerebellal supervised learning and basal ganglia’s reinforcement learning for mobile robot behavioral decision-making","authors":"Zhiqiang Wu , Dongshu Wang , Lei Liu","doi":"10.1016/j.cogsys.2024.101302","DOIUrl":"10.1016/j.cogsys.2024.101302","url":null,"abstract":"<div><div>Behavioral decision-making in unknown environments of mobile robots is a crucial research topic in robotics. Inspired by the working mechanism of different brain regions in mammals, this paper designed a new hybrid model integrating the functions of cerebellum and basal ganglia by simulating the memory replay of hippocampus, so as to realize the autonomous behavioral decision-making of robot in unknown environments. A reinforcement learning module based on Actor-Critic framework and a developmental network module are used to simulate the functions of the basal ganglia and cerebellum, respectively. Considering the different functions of D1 and D2 dopamine receptors in basal ganglia, an Actor network module with separate learning of positive and negative rewards is designed for the basal ganglia to realize efficient exploration of the environments by the agent. According to the characteristics of biological memory, a physiological memory priority index is designed for hippocampus memory replay, which improves the offline learning efficiency of cerebellum. The integrated model enables dynamic switching between decisions made by cerebellum and basal ganglia based on the agent’s cognitive level with respect to the environment. Finally, the effectiveness of the proposed model is verified through experiments on agent navigation in both simulation and real environments, as well as through performance comparison experiments with other learning algorithms.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101302"},"PeriodicalIF":2.1,"publicationDate":"2024-10-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593526","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28. DOI: 10.1016/j.cogsys.2024.101304
Kyrtin Atreides, David J. Kelley
We examine preliminary results from the first automated system to detect the 188 cognitive biases included in the 2016 Cognitive Bias Codex, applied to both human- and AI-generated text and compared to a human performance baseline. The human baseline was constructed from the collective intelligence of a small but diverse group of volunteers who independently submitted their detected cognitive biases for each sample in the task used for the first phase. This baseline was used as an approximation of the ground truth on this task, for lack of any prior established and relevant benchmark. Results showed the system's performance to be above that of the average human, but below that of the top-performing human and the collective, with greater performance on a subset of 18 of the 24 categories in the codex. This version of the system was also applied to analyzing responses to 150 open-ended questions put to each of the top 5 performing closed- and open-source Large Language Models as of the time of testing. Results from this second phase showed measurably higher rates of cognitive bias detection across roughly half of all categories than those observed when analyzing human-generated text. The level of model contamination was also considered for the two types of contamination observed, where the models gave canned responses. Levels of cognitive bias detected in each model were compared both to one another and to data from the first phase.
{"title":"Cognitive biases in natural language: Automatically detecting, differentiating, and measuring bias in text","authors":"Kyrtin Atreides, David J. Kelley","doi":"10.1016/j.cogsys.2024.101304","DOIUrl":"10.1016/j.cogsys.2024.101304","url":null,"abstract":"<div><div>We examine preliminary results from the first automated system to detect the 188 cognitive biases included in the 2016 Cognitive Bias Codex, as applied to both human and AI-generated text, and compared to a human baseline of performance. The human baseline was constructed from the collective intelligence of a small but diverse group of volunteers independently submitting their detected cognitive biases for each sample in the task used for the first phase. This baseline was used as an approximation of the ground truth on this task, for lack of any prior established and relevant benchmark. Results showed the system’s performance to be above that of the average human, but below that of the top-performing human and the collective, with greater performance on a subset of 18 out of the 24 categories in the codex. This version of the system was also applied to analyzing responses to 150 open-ended questions put to each of the top 5 performing closed and open-source Large Language Models, as of the time of testing. Results from this second phase showed measurably higher rates of cognitive bias detection across roughly half of all categories than those observed when analyzing human-generated text. The level of model contamination was also considered for two types of contamination observed, where the models gave canned responses. Levels of cognitive bias detected in each model were compared both to one another and to data from the first phase.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101304"},"PeriodicalIF":2.1,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142593527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The discovery of various cognitive biases and social illusions indicates that people routinely hold misbeliefs. Focusing on the illusion of control (IOC), this article argues that when time and cognitive resources are limited and information is imperfect, misbeliefs can be generated naturally in a normal belief formation system, and these misbeliefs might help people adapt better to the environment. In this study, we present a computational model, the informativeness-weighting model (IWM), describing how beliefs are revised by observed evidence. To be precise, IOC is the result of distinct types of evidence being endowed with different weights according to their informativeness in a belief revision process. To evaluate the model, we also designed two behavioral experiments to compare people's sense of control with that predicted by the model. In both experiments, our model outperformed two alternative models in predicting and explaining the misestimation of people's perceived control. Thus, we suggest that our model reflects an adaptive strategy for information processing, which helps to explain why misbeliefs, like IOC, are prevalent in human cognition.
{"title":"A new perspective on Misbeliefs: A computational model for perceived control","authors":"Haokui Xu , Bohao Shi , Yiming Zhu , Jifan Zhou , Mowei Shen","doi":"10.1016/j.cogsys.2024.101305","DOIUrl":"10.1016/j.cogsys.2024.101305","url":null,"abstract":"<div><div>The discovery of various cognitive biases and social illusions indicates that people routinely have misbeliefs. Focusing on the illusion of control (IOC), this article argues that when time and cognitive resources are limited, and information is imperfect, misbeliefs can be generated naturally in a normal belief formation system, and these misbeliefs might help people adapt better to the environment.<!--> <!-->In this study, we present a computational model—the informativeness-weighting model (IWM)—describing how beliefs are revised by observed evidence. To be precise, IOC is the result of distinct types of evidence being endowed with different weights according to its informativeness in a belief revision process. To evaluate the model, we also designed two behavioral experiments to compare people’s sense of control with that predicted by the model.<!--> <!-->In both experiments, our model outperformed two alternative models in predicting and explaining the misestimation of people’s perceived control. Thus, we suggest that our model reflects an adaptive strategy for information processing, which helps to explain why misbeliefs, like IOC, are prevalent in human cognition.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101305"},"PeriodicalIF":2.1,"publicationDate":"2024-10-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-18. DOI: 10.1016/j.cogsys.2024.101297
Inga Ibs, Claire Ott, Frank Jäkel, Constantin A. Rothkopf
Many complex decision-making scenarios encountered in the real world, including energy systems and infrastructure planning, can be formulated as constrained optimization problems. Solutions for these problems are often obtained using white-box solvers based on linear program representations. Even though these algorithms are well understood and the optimality of the solution is guaranteed, explanations for the solutions are still necessary to build trust and ensure the implementation of policies. Solution algorithms represent the problem in a high-dimensional abstract space, which does not translate well into intuitive explanations for lay people. Here, we report three studies in which we pose constrained optimization problems to participants in the form of a computer game. In the game, called Furniture Factory, participants manage a company that produces furniture. In two qualitative studies, we first elicit representations and heuristics with concurrent explanations and validate their use in post-hoc explanations. We analyze the complexity of the explanations given by participants to gain a deeper understanding of how complex cognitively adequate explanations should be. Based on insights from the analysis of the two qualitative studies, we formalize strategies that in combination can act as descriptors for participants' behavior and optimal solutions. We match the strategies to decisions in a large behavioral dataset (>150 participants) gathered in a third study, and compare the complexity of strategy combinations to the complexity featured in participants' explanations. Based on the analyses from these three studies, we discuss how these insights can inform the automatic generation of cognitively adequate explanations in future AI systems.
{"title":"From human explanations to explainable AI: Insights from constrained optimization","authors":"Inga Ibs , Claire Ott , Frank Jäkel, Constantin A. Rothkopf","doi":"10.1016/j.cogsys.2024.101297","DOIUrl":"10.1016/j.cogsys.2024.101297","url":null,"abstract":"<div><div>Many complex decision-making scenarios encountered in the real-world, including energy systems and infrastructure planning, can be formulated as constrained optimization problems. Solutions for these problems are often obtained using white-box solvers based on linear program representations. Even though these algorithms are well understood and the optimality of the solution is guaranteed, explanations for the solutions are still necessary to build trust and ensure the implementation of policies. Solution algorithms represent the problem in a high-dimensional abstract space, which does not translate well to intuitive explanations for lay people. Here, we report three studies in which we pose constrained optimization problems in the form of a computer game to participants. In the game, called Furniture Factory, participants manage a company that produces furniture. In two qualitative studies, we first elicit representations and heuristics with concurrent explanations and validate their use in post-hoc explanations. We analyze the complexity of the explanations given by participants to gain a deeper understanding of how complex cognitively adequate explanations should be. Based on insights from the analysis of the two qualitative studies, we formalize strategies that in combination can act as descriptors for participants’ behavior and optimal solutions. We match the strategies to decisions in a large behavioral dataset (<span><math><mrow><mo>></mo><mn>150</mn></mrow></math></span> participants) gathered in a third study, and compare the complexity of strategy combinations to the complexity featured in participants’ explanations. Based on the analyses from these three studies, we discuss how these insights can inform the automatic generation of cognitively adequate explanations in future AI systems.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101297"},"PeriodicalIF":2.1,"publicationDate":"2024-10-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142572379","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The development of brain-morphic software holds significant promise for creating artificial general intelligence that exhibits high affinity and interpretability for humans, and it also offers substantial benefits for medical applications. To facilitate this, creating Brain Reference Architecture (BRA) data, which serves as a design specification for brain-morphic software, is imperative. BRA-driven development, which utilizes Brain Information Flow (BIF) diagrams based on mesoscale brain anatomy and Hypothetical Component Diagrams (HCD) for the corresponding computational functionalities, has been proposed to address this need. This methodology formalizes the identification of possible functional structures by leveraging existing, albeit insufficient, neuroscientific knowledge. However, applying this methodology across the entire brain, thereby creating a Whole Brain Reference Architecture (WBRA), represents a significant research and development challenge due to its scale and complexity. Technology roadmaps have been introduced as a strategic tool to guide discussion, management, and distribution of resources within such expansive research and development activities. These roadmaps proposed a manual, anatomically based approach to incrementally construct BIF and HCD, thereby systematically expanding brain-organ coverage toward achieving a complete WBRA. Large Language Model (LLM) technologies have introduced a paradigm shift, substantially automating the BRA-driven development process. This is largely because the BRA data are structured around the brain's anatomy and described in natural language, which aligns well with the capabilities of LLMs for supporting and automating the construction and verification processes. In this paper, we propose a novel technology roadmap to largely automate the creation of the WBRA, leveraging neuroscientific insights. This roadmap includes 12 activities for automating BIF construction, notably extracting anatomical structures from scholarly articles. Furthermore, it details 11 activities aimed at enhancing the integration of HCDs into the WBRA, focusing on automating checks for functional consistency. This roadmap aims to establish a cost-effective and efficient design process for the WBRA, ensuring the availability of brain-morphic software design specifications that are continually validated against the latest neuroscientific knowledge.
{"title":"Technology roadmap toward the completion of whole-brain architecture with BRA-driven development","authors":"Hiroshi Yamakawa , Yoshimasa Tawatsuji , Yuta Ashihara , Ayako Fukawa , Naoya Arakawa , Koichi Takahashi , Yutaka Matsuo","doi":"10.1016/j.cogsys.2024.101300","DOIUrl":"10.1016/j.cogsys.2024.101300","url":null,"abstract":"<div><div>The development of brain-morphic software holds significant promise for creating artificial general intelligence that exhibits high affinity and interpretability for humans and also offers substantial benefits for medical applications. To facilitate this, creating Brain Reference Architecture (BRA) data, serving as a design specification for brain-morphic software is imperative. BRA-driven development, which utilizes Brain Information Flow (BIF) diagrams based on mesoscale brain anatomy and Hypothetical Component Diagrams (HCD) for corresponding computational functionalities, has been proposed to address this need. This methodology formalizes identifying possible functional structures by leveraging existing, albeit insufficient, neuroscientific knowledge. However, applying this methodology across the entire brain, thereby creating a Whole Brain Reference Architecture (WBRA), represents a significant research and development challenge due to its scale and complexity. Technology roadmaps have been introduced as a strategic tool to guide discussion, management, and distribution of resources within such expansive research and development activities. These roadmaps proposed a manual, anatomically based approach to incrementally construct BIF and HCD, thereby systematically expanding brain organ coverage toward achieving a complete WBRA. Large Language Model (LLM) technologies have introduced a paradigm shift, substantially automating the BRA-driven development process. This is largely due to the BRA data being structured around the brain’s anatomy and described in natural language, which aligns well with the capabilities of LLMs for supporting and automating the construction and verification processes. In this paper, we propose a novel technology roadmap to largely automate the creation of WBRA, leveraging neuroscientific insights. This roadmap includes 12 activities for automating BIF construction, notably extracting anatomical structures from scholarly articles. Furthermore, it details 11 activities aimed at enhancing the integration of Hypothetical Component Diagrams (HCD) into the WBRA, focusing on automating checks for functional consistency. This roadmap aims to establish a cost-effective and efficient design process for WBRA, ensuring the availability of brain-morphic software design specifications that are continually validated against the latest neuroscientific knowledge.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101300"},"PeriodicalIF":2.1,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142560689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-17. DOI: 10.1016/j.cogsys.2024.101303
Olga Chernavskaya
The idea of reproducing personality by means of digital (neural network) technologies ("digital immortality"), together with the concept of the Digital Twin (DT), still attracts great attention. Recent advances in the DT industry permit us to expect the production of a perfect "mirror" DT in the near future. We argue that "immortality in the memory of other people" could be approached quite closely by creating an analogue of a personal DT through simulating the personality. For this purpose, it is necessary to compose the "constructive portrait" of a chosen person (by extracting the key features and traits of the personality) and try to reproduce it by means of a chosen model. We are developing an original model, the Natural Constructive Cognitive Architecture (NCCA), that inherently provides an interpretation of logical and intuitive thinking, the subconscious, etc. This model should be adjusted to the specific set of knowledge inherent in a particular person (books, films, photographs, etc.), with an emphasis on personal lexicons (verbal, emotional, behavioral). NCCA contains a large set of free model parameters, which enables us to reproduce a wide range of personality features, from thinking style to temperament. It is shown that popular Generative Pre-trained Transformers (GPTs) have much in common with NCCA and could be adapted and used as an analog DT of a specific person. We argue that the proposed program would provide the possibility to create an analog of a DT, which could give an impression (at least, an illusion) of communication with the desired specific person.
{"title":"To the problem of digital immortality","authors":"Olga Chernavskaya","doi":"10.1016/j.cogsys.2024.101303","DOIUrl":"10.1016/j.cogsys.2024.101303","url":null,"abstract":"<div><div>The idea of reproducing personality by means of digital (neural network) technologies (“digital immortality”), together with the concept of Digital Twin (DT), still attracts great attention. Recent advances in the DT industry permit to expect the production of perfect “mirror” DT in the near future. We argue that the “immortality in the memory of other people” could be approached quite closely due to creating an <em>analogue</em> of personal DT by simulating the personality. For this purpose, it is necessary to compose the “constructive portrait” of a chosen person (by extracting the key features and traits of personality) and try to reproduce it by means of a chosen model. We are developing an original model Natural Constructive Cognitive Architecture (NCCA) that inherently provides the interpretation of logical and intuitive thinking, subconscious, etc. This model should be adjusted to specific set of knowledge inherent in a particular person (books, films, photographs, etc.), with an emphasis on personal <em>lexicons</em> (verbal, emotion, behavioral). NCCA contains a large set of free model parameters, which enables us to reproduce a wide range of personality features, from thinking style to temperament. It is shown that popular Generative Pre-trained Transformers (GPTs) have much in common with NCCA and could be adapted and used as an analog of DT of a specific person. We argue that the proposed program would provide the possibility to create an analog of DT, which could give an impression (at least, an illusion) of communication with the desired specific person.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101303"},"PeriodicalIF":2.1,"publicationDate":"2024-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142526111","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In large urban areas, enhancing the personal care and quality of life of elderly individuals poses a critical societal challenge. As the population ages and the number of people requiring assistance grows, so does the demand for home care services. This will inevitably put tremendous pressure on a system that has historically struggled to provide high-quality assistance with limited resources, all while managing urgent, unforeseen additional demands. This scenario can be framed as a resource allocation problem, wherein caregivers must be efficiently matched with services based on availability, qualifications, and schedules. Given its scale and complexity, traditional computational approaches have struggled to address this problem effectively, leaving it largely unresolved. Currently, many European cities emphasize geographical and emotional proximity, offering a model for home care services based on reduced social urban sectors. This new paradigm provides opportunities for tackling the resource allocation problem while promoting desirable pairings between caregivers and elderly people. This paper presents a MaxSAT-based solution in this context. Our approach efficiently allocates services across various configurations, maximizing the similarity and consistency of caregiver-user pairings while minimizing costs. Moreover, we show that our method solves the resource allocation problem in a reasonable amount of time. Consequently, we can either provide an optimal allocation or highlight the limits of the available resources relative to the service demand.
{"title":"Optimizing resource allocation in home care services using MaxSAT","authors":"Irene Unceta , Bernat Salbanya , Jordi Coll , Mateu Villaret , Jordi Nin","doi":"10.1016/j.cogsys.2024.101291","DOIUrl":"10.1016/j.cogsys.2024.101291","url":null,"abstract":"<div><div>In large urban areas, enhancing the personal care and quality of life for elderly individuals poses a critical societal challenge. As the population ages and the amount of people requiring assistance grows, so does the demand for home care services. This will inevitably put tremendous pressure on a system that has historically struggled to provide high-quality assistance with limited resources, all while managing urgent, unforeseen additional demands. This scenario can be framed as a resource allocation problem, wherein caregivers must be efficiently matched with services based on availability, qualifications, and schedules. Given its scale and complexity, traditional computational approaches have struggled to address this problem effectively, leaving it largely unresolved. Currently, many European cities emphasize geographical and emotional proximity, offering a model for home care services based on reduced social urban sectors. This new paradigm provides opportunities for tackling the resource allocation problem while promoting desirable pairings between caregivers and elderly people. This paper presents a MaxSAT-based solution in this context. Our approach efficiently allocates services across various configurations, maximizing caregiver-user pairings’ similarity and consistency while minimizing costs. Moreover, we show that our method solves the resource allocation problem in a reasonable amount of time. Consequently, we can either provide an optimal allocation or highlight the limits of the available resources relative to the service demand.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"88 ","pages":"Article 101291"},"PeriodicalIF":2.1,"publicationDate":"2024-10-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142525957","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}