Pub Date: 2026-01-16, DOI: 10.1016/j.cogsys.2026.101435
Jose Luis Vilchez Tornero
Purpose: There is a need to understand how the perception of, attention to, and reasoning about traffic signs influence driving behavior. The more we know about drivers' cognitive processing of signs, the better we can account for their response times to those signs and the decisions they make. In previous work, we have shown that poorly designed signs provoke counterproductive effects on movement. Design/methodology/approach: In the present study, regulatory traffic signs in Ecuador are classified using three criteria: their representativity, their univocity, and the number of errors participants make when responding to them. Findings: With these criteria, we can detect which traffic signs need to be redesigned. Research limitations/implications: The consequences of traffic accidents are serious enough for this study to be taken seriously. In this sense, research must also take a step toward real-driving contexts in order to reach more ecologically valid conclusions. Practical implications: This work contributes to the improvement of traffic safety. Originality/value: I develop a new methodology to classify traffic signs from a cognitive-science point of view.
{"title":"Representativity and univocity of traffic signs and their effect on trajectory movement in a driving-simulation task: regulatory signs","authors":"Jose Luis Vilchez Tornero","doi":"10.1016/j.cogsys.2026.101435","DOIUrl":"10.1016/j.cogsys.2026.101435","url":null,"abstract":"<div><div>Purpose: There is a need to understand how the perception of, attention to and reason with traffic signs influence on driving behavior. The more we know about drivers‘ cognitive processing of them, the better for their response time to those signs and for the decision they take. In previous works, we have shown that the signs that are not-well designed provoke counterproductive effects on movement. Design/methodology/approach: In the present study, regulatory traffic signs in Ecuador are classified by using the criteria of their representativity, their univocity and the numbers of errors participants make when responding to them. Findings: With these criteria, we can detect which traffic signs need to be redesigned. Research limitations/implications: The consequences of traffic accidents are enough important to take this study seriously. In this sense, research must also take a step forward to real-driving contexts in order to reach more ecological conclusions. Practical implications:</div><div>This work contributes to the improvement of traffic safety. Originality/value: I develop a new methodology to classify traffic signs from a cognitive Science point of view.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"96 ","pages":"Article 101435"},"PeriodicalIF":2.4,"publicationDate":"2026-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145996322","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-13, DOI: 10.1016/j.cogsys.2025.101430
Rahma Lakhdim, Jan Treur, Peter H.M.P. Roelofsma
Blockchain networks face evolving security risks that require rapid and consistent responses from employees. This study presents an AI Coach that mirrors human reasoning through stages of context detection, world modeling, belief updating, preparation, execution, and feedback. In doing so, the AI Coach provides cognitive support. The architecture is defined by six types of matrices that include state connectivity, connectivity weights, combination functions, combination function parameters, speed factors, and initial values. In simulations of anomalous transactions, smart contract breaches, consensus delays, and unauthorized access, the AI Coach effectively prioritized critical events and guided response actions, demonstrating its ability to support more structured and efficient security workflows. These results underscore the effectiveness of the AI Coach in improving reliability and responsiveness in blockchain security monitoring.
{"title":"Optimising blockchain security: Computational analysis of adaptive AI coaching","authors":"Rahma Lakhdim , Jan Treur , Peter H.M.P. Roelofsma","doi":"10.1016/j.cogsys.2025.101430","DOIUrl":"10.1016/j.cogsys.2025.101430","url":null,"abstract":"<div><div>Blockchain networks face evolving security risks that require rapid and consistent responses from employees. This study presents an AI Coach that mirrors human reasoning through stages of context detection, world modeling, belief updating, preparation, execution, and feedback. In doing so, the AI Coach provides cognitive support. The architecture is defined by six types of matrices that include state connectivity, connectivity weights, combination functions, combination function parameters, speed factors, and initial values. In simulations of anomalous transactions, smart contract breaches, consensus delays, and unauthorized access, the AI Coach effectively prioritized critical events and guided response actions, demonstrating its ability to support more structured and efficient security workflows. These results underscore the effectiveness of the AI Coach in improving reliability and responsiveness in blockchain security monitoring.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"95 ","pages":"Article 101430"},"PeriodicalIF":2.4,"publicationDate":"2025-12-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791263","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-08, DOI: 10.1016/j.cogsys.2025.101433
Ron Sun
This article examines the discourse on rationality and intelligence in machines (i.e., in AI systems). It delves into a specific computational approach for addressing rationality and intelligence — the development of a computational cognitive architecture that aims to capture the human mind to the greatest extent possible. The article discusses various forms of human rationality, different ideas about human intelligence, conceptions of human activities, roles of human motivation, and so on, all examined in relation to the cognitive architecture, thus linking machines to humans. Through examples, the article argues that recent computational models (AI systems in a generalized sense) are more sophisticated than critics of AI have often assumed: they are well equipped to overcome many of the criticisms leveled against the AI of the past.
{"title":"Rethinking rationality and intelligence: Humans versus machines","authors":"Ron Sun","doi":"10.1016/j.cogsys.2025.101433","DOIUrl":"10.1016/j.cogsys.2025.101433","url":null,"abstract":"<div><div>This article examines the discourse on rationality and intelligence in machines (i.e., in AI systems). It delves into a specific computational approach for addressing rationality and intelligence — the development of a computational cognitive architecture that aims to capture the human mind to the greatest extent possible. The article discusses various forms of human rationality, different ideas about human intelligence, conceptions of human activities, roles of human motivation, and so on, all examined in relation to the cognitive architecture, thus linking machines to humans. Through examples, the article argues that recent computational models (AI systems in a generalized sense) are more sophisticated than what critics of AI often assumed: They are well equipped to overcome many of the criticisms leveled against AI of the past.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"95 ","pages":"Article 101433"},"PeriodicalIF":2.4,"publicationDate":"2025-12-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145791264","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-04, DOI: 10.1016/j.cogsys.2025.101423
Alisha Huber, Jovana Vukmirović, Reza Haydarlou, Jan Treur
A fifth-order adaptive dynamical network model is introduced to examine the role of epigenetics in the development of schizoaffective disorder. The model focuses on the symptom of impaired reality testing and examines the impacts of aberrant salience and cortical disinhibition. Schizoaffective disorder is characterised by symptoms of both schizophrenia and a mood disorder. The model demonstrates the impact that trauma has on the increased expression of DNA-methyltransferase 1, resulting in the hypermethylation of the GAD1 and GAD2 genes and increased MeCP2 binding to promoter regions. The hypermethylation of GAD1 and GAD2 leads to decreased synthesis of GABA, with downstream effects on the dysregulation of glutamate and dopamine. Furthermore, the epigenetic effects of clozapine and valproate are explored in later simulations.
{"title":"Epigenetic Influences in Aberrant Salience and Reality Testing in Schizoaffective Disorder: A Multi-Level Adaptive Network Modelling Approach","authors":"Alisha Huber , Jovana Vukmirović , Reza Haydarlou , Jan Treur","doi":"10.1016/j.cogsys.2025.101423","DOIUrl":"10.1016/j.cogsys.2025.101423","url":null,"abstract":"<div><div>A fifth-order adaptive dynamical network model is introduced to examine the role of epigenetics in the development of schizoaffective disorder. The model’s focus is on the symptom of impaired reality testing and examines the impacts of aberrant salience and cortical disinhibition. Schizoaffective disorder is characterised through symptoms from schizophrenia and a mood disorder. The model demonstrates the impact that trauma has on the increased expression of DNA-methyltransferase 1, resulting in the hypermethylation of the GAD1 and GAD2 genes, and increased MeCP2 binding on promoter regions. The hypermethylation of GAD1 and GAD2 leads to decreased synthesis of GABA, with downstream effects on the dysregulation of glutamate and dopamine. Furthermore, the epigenetic effects of clozapine and valproate are explored in later simulations.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"95 ","pages":"Article 101423"},"PeriodicalIF":2.4,"publicationDate":"2025-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145738518","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01, DOI: 10.1016/j.cogsys.2025.101419
Hendrik Buschmeier, Heike M. Buhl, Friederike Kern, Angela Grimminger, Helen Beierling, Josephine Fisher, André Groß, Ilona Horwath, Nils Klowait, Stefan Lazarov, Michael Lenke, Vivien Lohmer, Katharina Rohlfing, Ingrid Scharlau, Amit Singh, Lutz Terfloth, Anna-Lisa Vollmer, Yu Wang, Annedore Wilmes, Britta Wrede
Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’ on the part of the explainee. However, what it means to ‘understand’ is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding for XAI-explanations and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, philosophy and psychology, a definition of understanding and its forms, assessment, and dynamics during the process of giving everyday explanations are explored. Two types of understanding are considered as possible outcomes of explanations, namely enabledness, ‘knowing how’ to do or decide something, and comprehension, ‘knowing that’ – both in different degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain agency. In this process, the increase of comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.
{"title":"Forms of understanding for XAI-Explanations","authors":"Hendrik Buschmeier , Heike M. Buhl , Friederike Kern , Angela Grimminger , Helen Beierling , Josephine Fisher , André Groß , Ilona Horwath , Nils Klowait , Stefan Lazarov , Michael Lenke , Vivien Lohmer , Katharina Rohlfing , Ingrid Scharlau , Amit Singh , Lutz Terfloth , Anna-Lisa Vollmer , Yu Wang , Annedore Wilmes , Britta Wrede","doi":"10.1016/j.cogsys.2025.101419","DOIUrl":"10.1016/j.cogsys.2025.101419","url":null,"abstract":"<div><div>Explainability has become an important topic in computer science and artificial intelligence, leading to a subfield called Explainable Artificial Intelligence (XAI). The goal of providing or seeking explanations is to achieve (better) ‘understanding’ on the part of the explainee. However, what it means to ‘understand’ is still not clearly defined, and the concept itself is rarely the subject of scientific investigation. This conceptual article aims to present a model of forms of understanding for XAI-explanations and beyond. From an interdisciplinary perspective bringing together computer science, linguistics, sociology, philosophy and psychology, a definition of understanding and its forms, assessment, and dynamics during the process of giving everyday explanations are explored. Two types of understanding are considered as possible outcomes of explanations, namely <em>enabledness</em>, ‘knowing how’ to do or decide something, and <em>comprehension</em>, ‘knowing that’ – both in different degrees (from shallow to deep). Explanations regularly start with shallow understanding in a specific domain and can lead to deep comprehension and enabledness of the explanandum, which we see as a prerequisite for human users to gain <em>agency</em>. In this process, the increase of comprehension and enabledness are highly interdependent. Against the background of this systematization, special challenges of understanding in XAI are discussed.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"94 ","pages":"Article 101419"},"PeriodicalIF":2.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145684583","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-12-01, DOI: 10.1016/j.cogsys.2025.101421
Kexin Zhao, Jamie C. Macbeth
The capabilities of large language models (LLMs) have rarely been assessed against those of classical, symbolic AI systems for natural language generation and natural language understanding. This paper assesses the understanding and reasoning capabilities of a large language model by probing it with SHRDLU, a rule-based, symbolic natural language understanding system in which a human user issues commands to a robot that grasps and moves objects in a virtual “blocks world” environment. We perform a study in which we prompt an LLM with SHRDLU human-robot interaction dialogs and simple questions about the locations of objects at the conclusion of each dialog. In these tests of GPT-4’s understanding of spatial and containment relationships and its ability to reason about complex scenarios involving object manipulation, we find that GPT-4 performs well on basic tasks but struggles with complex spatial relationships and object tracking, with an accuracy as low as 16% in certain conditions with longer dialogs. Although GPT-4, a state-of-the-art LLM, appears to be no match for SHRDLU, one of the earliest natural language understanding systems, this study is an important initial step towards future systems that may achieve the best of both the neural and symbolic worlds.
{"title":"Probing the reasoning abilities of LLMs in blocks world","authors":"Kexin Zhao, Jamie C. Macbeth","doi":"10.1016/j.cogsys.2025.101421","DOIUrl":"10.1016/j.cogsys.2025.101421","url":null,"abstract":"<div><div>The capabilities of large language models (LLMs) have rarely been assessed against those of classical, symbolic AI systems for natural language generation and natural language understanding. This paper assesses the understanding and reasoning capabilities of a large language model by probing it with SHRDLU, a rule-based, symbolic natural language understanding system that features a human user issuing commands to a robot which grasps and moves objects in a virtual “blocks world” environment. We perform a study in which we prompt an LLM with SHRDLU human-robot interaction dialogs and simple questions about the locations of objects at the conclusion of the dialog. In these tests of GPT-4’s understanding of spatial and containment relationships and its ability to reason about complex scenarios involving object manipulation, we find that GPT-4 performs well with basic tasks but struggles with complex spatial relationships and object tracking, with an accuracy as low as 16 % in certain conditions with longer dialogs. Although GPT-4, a state of the art LLM, appears to be no match for SHRDLU, one of the earliest natural language understanding systems, this study is an important initial step towards future systems which may achieve the best of both neural and symbolic worlds.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"94 ","pages":"Article 101421"},"PeriodicalIF":2.4,"publicationDate":"2025-12-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-30, DOI: 10.1016/j.cogsys.2025.101422
Leonardo L. Rossi, Letícia Berto, Paula P. Costa, Ricardo Gudwin, Esther Colombini, Alexandre Simões
Reinforcement learning (RL) methods inspired by cognitive architectures are crucial for empowering autonomous agents to tackle complex, dynamic tasks. This study evaluates two RL-based drive optimization strategies – 1-LDO and 2-LDO – within the framework of cognitive architectures for autonomous robots. 1-LDO integrates both motivational drives into a single learning model, whereas 2-LDO separates them into distinct models, allowing for modular learning. Grounded in Hull’s Drive Theory, we explore early versus late selection mechanisms to optimize drive reduction through RL, particularly in agents driven by curiosity and survival imperatives. Through reward and stress analyses, we demonstrate that Deep Q-Network (DQN) agents outperform traditional Q-Learning approaches in fine-grained environments, with the 2-LDO configuration showing marked advantages due to its modular design. In contrast, in coarser environments, 2-LDO combined with Q-Learning achieves superior efficiency, offering faster drive regulation at reduced computational cost. These results suggest that early selection mechanisms, aligned with Hull’s theoretical principles, may provide the most effective strategy for optimizing drive-based behaviors in autonomous agents.
{"title":"Dual or unified: optimizing drive-based reinforcement learning for cognitive autonomous robots","authors":"Leonardo L. Rossi , Letícia Berto , Paula P. Costa , Ricardo Gudwin , Esther Colombini , Alexandre Simões","doi":"10.1016/j.cogsys.2025.101422","DOIUrl":"10.1016/j.cogsys.2025.101422","url":null,"abstract":"<div><div>Reinforcement learning (RL) methods inspired by cognitive architectures are crucial for empowering autonomous agents to tackle complex, dynamic tasks. This study evaluates two RL-based drive optimization strategies – 1-LDO and 2-LDO – within the framework of cognitive architectures for autonomous robots. 1-LDO integrates both motivational drives into a single learning model, whereas 2-LDO separates them into distinct models, allowing for modular learning. Grounded in Hull’s Drive Theory, we explore early versus late selection mechanisms to optimize drive reduction through RL, particularly in agents driven by curiosity and survival imperatives. Through reward and stress analyses, we demonstrate that Deep Q-Network (DQN) agents outperform traditional Q-Learning approaches in fine-grained environments, with the 2-LDO configuration showing marked advantages due to its modular design. In contrast, in coarser environments, 2-LDO combined with Q-Learning achieves superior efficiency, offering faster drive regulation at reduced computational cost. These results suggest that early selection mechanisms, aligned with Hull’s theoretical principles, may provide the most effective strategy for optimizing drive-based behaviors in autonomous agents.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"95 ","pages":"Article 101422"},"PeriodicalIF":2.4,"publicationDate":"2025-11-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145685990","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-25, DOI: 10.1016/j.cogsys.2025.101420
Laura K. Bartlett, Noman Javed, Dmitry Bennett, Peter C.R. Lane, Fernand Gobet
The cueing task is a robust experimental paradigm for investigating attention. A centrally presented valid cue, correctly indicating the location of an upcoming target stimulus, leads to quicker responses than an invalid cue. A feature of this paradigm is that increasing the delay between a peripheral cue and a target reverses this effect, where responses become slower for a valid cue, a phenomenon termed inhibition of return (IOR). Using GEMS, a system that utilises genetic programming techniques, we generated potential strategies underlying the facilitation and IOR effects in the cueing paradigm. Models were generated for three experiments differing in their experimental designs, all with good fit to behavioural data. Our approach helps address current issues in the field of attention regarding how it is defined and what mechanisms underlie it. Additional benefits and limitations of this method are discussed.
{"title":"Generating models of attentional cueing and inhibition of return with genetic programming","authors":"Laura K. Bartlett , Noman Javed , Dmitry Bennett , Peter C.R. Lane , Fernand Gobet","doi":"10.1016/j.cogsys.2025.101420","DOIUrl":"10.1016/j.cogsys.2025.101420","url":null,"abstract":"<div><div>The cueing task is a robust experimental paradigm for investigating attention. A centrally presented valid cue, correctly indicating the location of an upcoming target stimulus, leads to quicker responses than an invalid cue. A feature of this paradigm is that increasing the delay between a peripheral cue and a target reverses this effect, where responses become slower for a valid cue, a phenomenon termed inhibition of return (IOR). Using GEMS, a system that utilises genetic programming techniques, we generated potential strategies underlying the facilitation and IOR effects in the cueing paradigm. Models were generated for three experiments differing in their experimental designs, all with good fit to behavioural data. Our approach helps address current issues in the field of attention regarding how it is defined and what mechanisms underlie it. Additional benefits and limitations of this method are discussed.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"95 ","pages":"Article 101420"},"PeriodicalIF":2.4,"publicationDate":"2025-11-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145665708","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-04, DOI: 10.1016/j.cogsys.2025.101416
Melisa-Maria Damian, Roy M. Treur, Sophie C.F. Hendrikse, Jan Treur
Social Anxiety Disorder (SAD) is characterized by an excessive fear of negative evaluation that drives avoidance behaviors and a persistently negative view of the self. To assist remote exposure therapy through the creation of personalized content, this paper develops a second-order adaptive network model of SAD. The model comprises nineteen literature-based states covering not only possible causes and threat appraisal but also physiological arousal, fear and action regulation, safety and avoidance behaviors, and post-event processing, all connected by weighted links. These weights can be made adaptive by the self-modeling principle for networks and reflect the neural influences on such behaviors (e.g., an amygdala spike, a vmPFC brake, insula activation). Besides the weights, the learning speeds are also adaptive and regulated by certain factors (e.g., the BNST sustaining anxiety, dopamine relief leading to habituation, dACC conflict monitoring). Through simulations, two SAD cases were observed: the brief success and eventual failure of ventromedial prefrontal cortex (vmPFC) regulation in ameliorating fear, and the effect of conducting safety behaviors leading to anxiety reduction. A scenario was then translated from these simulations into scripts that provided the foundation for an AI-generated role-play video. The result illustrates the modeled emotions, behaviors, and coping strategies. This work demonstrates an adaptable, research-driven framework for generating personalized remote exposure content.
{"title":"Creating AI-generated role-playing videos from causal network model simulations of social anxiety disorder for virtual therapeutic contexts","authors":"Melisa-Maria Damian , Roy M. Treur , Sophie C.F. Hendrikse , Jan Treur","doi":"10.1016/j.cogsys.2025.101416","DOIUrl":"10.1016/j.cogsys.2025.101416","url":null,"abstract":"<div><div>Social Anxiety Disorder (SAD) is characterized by an excessive fear of negative evaluation that influences avoidance behaviors and a constant negative view of self. In order to assist in remote exposure therapy through creation of personalized content, this paper develops a second-order adaptive network model of SAD. We built a second-order adaptive network with nineteen literature-related states cover not only possible causes, threat appraisal, but also physiological arousal, fear/action regulation, safety and avoidance behaviors, and post-event processing, all connected by weighted links. These weights can be made adaptive by the self-modeling principle for networks and reflect the neural influences on such behaviors (e.g. amygdala spike, vmPFC brake, insula activation). Besides weights, the learning speeds are also adaptive and regulated by certain factors (e.g. BNST sustaining anxiety, dopamine relief leading to habituation, dACC conflict monitoring). Through simulations, two SAD cases were observed: the brief success and failure of ventromedial prefrontal cortex (vmPFC) regulation in ameliorating fear and the result of conducting safety behaviors leading to anxiety reduction. A scenario was then translated from these simulations into scripts that provided the foundation for an AI-generated role-play video. The result illustrates the modeled emotions, behaviors and coping strategies. This work demonstrates an adaptable, research-driven framework for generating susceptible remote exposures.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"94 ","pages":"Article 101416"},"PeriodicalIF":2.4,"publicationDate":"2025-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519874","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2025-11-02, DOI: 10.1016/j.cogsys.2025.101417
Michael Winter, Janine Grimmer, Manfred Reichert, Rüdiger Pryss
Business Process Model and Notation (BPMN) 2.0 is applied to create process models for documentation, communication, and collaboration. These models are usually presented in black-and-white. However, the literature indicates that individuals can process colored information more efficiently. Therefore, this paper presents an empirical study in which different colorizations of BPMN process models (i.e., black-and-white, partially colorized, colorized, and disfluent) were evaluated for their effects on cognitive load, processing time, and comprehension performance. The results showed that colorization influenced the intrinsic and germane cognitive load. Colorization did not significantly affect processing time or comprehension performance. However, disfluent process models resulted in a higher extraneous cognitive load and a lower ease of understanding; contrary to Disfluency Theory, disfluency does not foster the comprehension of such models. In addition, the benefits predicted by Disfluency Theory apply only in small part to readers with prior expertise in working with process models. The insights particularly highlight the application of partially colorized process models. Altogether, implications for research and practice, as well as directions for future work, are discussed in this paper.
{"title":"From monochrome to color - Exploring the effects of different colorizations on process model comprehension","authors":"Michael Winter , Janine Grimmer , Manfred Reichert , Rüdiger Pryss","doi":"10.1016/j.cogsys.2025.101417","DOIUrl":"10.1016/j.cogsys.2025.101417","url":null,"abstract":"<div><div>Business Process Model and Notation (BPMN) 2.0 is applied to create process models for documentation, communication, and collaboration. Usually, these models are often presented in a black-and-white colorization. However, the literature states that individuals can process colored information more efficiently. Therefore, this paper presents an empirical study, in which different colorizations (i.e., black-and-white, partially colorized, colorized, and disfluent) in BPMN process models and their effects on the cognitive load, processing time, and comprehension performance were evaluated. The results showed that colorization influenced the intrinsic and germane cognitive load. Further, colorization did not significantly affect processing time and comprehension performance. However, disfluent process models resulted in a higher extraneous cognitive load and lower ease of understanding. Contrary to the Disfluency Theory, it does not foster the comprehension of such models. In addition, Disfluency Theory exerts only a fraction of the benefits on readers with prior expertise in working with process models. The insights highlight especially the application of partially colorized process models. Altogether, implications for research and practice, as well as directions for future work, are discussed in this paper.</div></div>","PeriodicalId":55242,"journal":{"name":"Cognitive Systems Research","volume":"94 ","pages":"Article 101417"},"PeriodicalIF":2.4,"publicationDate":"2025-11-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145519875","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}