Pub Date: 2025-11-26 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1631206
Aisha Gul, Liam Turner, Carolina Fuentes
Introduction: The rise of the global ageing population creates a growing need for assistance, and Socially Assistive Robots (SARs) have the potential to support independence for older adults. However, for older adults to benefit from robots that assist in daily life, it is important to better understand the role of trust in SARs.
Method: We present a Systematic Literature Review (SLR) aiming to identify the models, methods, and research settings used for measuring trust in SARs with older adults as the target population, and to analyse the factors currently used in trust assessment.
Result: Our results reveal that previous studies were mostly conducted in lab settings and used subjective self-report measures such as questionnaires, interviews, and surveys to measure older adults' trust in SARs. Moreover, many of these studies focus on healthy older adults without age-related disabilities. We also examine different human-robot trust models that influence trust, and we discuss the lack of standardisation in measuring older people's trust in SARs.
Discussion: To address the standardisation gap, we developed a conceptual framework, Subjective Objective Trust Assessment HRI (SOTA-HRI), that incorporates subjective and objective measures to comprehensively evaluate trust in human-robot interactions. By combining these dimensions, our proposed framework provides a foundation for future research to design tailored interventions, enhance interaction quality, and ensure reliable trust assessment methods in this domain. Finally, we highlight key areas for future research, such as considering demographic sensitivity in trust-building strategies and exploring under-examined contextual factors such as predictability and dependability.
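The abstract does not specify how SOTA-HRI's two dimensions are combined numerically. As a minimal sketch, assuming both measures are normalised to [0, 1] (e.g. a rescaled questionnaire mean and a compliance rate), a weighted blend could look like the following; the function name and the weighting scheme are our own illustrative assumptions, not part of the paper:

```python
def composite_trust(subjective, objective, w_subj=0.5):
    """Blend a normalised subjective trust score (e.g. a questionnaire
    mean rescaled to [0, 1]) with a normalised objective score (e.g. the
    rate of compliance with the robot's suggestions) into one estimate."""
    if not 0.0 <= w_subj <= 1.0:
        raise ValueError("w_subj must lie in [0, 1]")
    return w_subj * subjective + (1.0 - w_subj) * objective
```

For example, equal weighting of a subjective score of 0.8 and an objective score of 0.6 yields a composite of 0.7.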
Title: Conventions and research challenges in considering trust with socially assistive robots for older adults.
Frontiers in Robotics and AI, vol. 12, article 1631206. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12690213/pdf/
Pub Date: 2025-11-21 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1740881
Eleni Kelasidi, Michael Triantafyllou, Sveinung Johan Ohrem
Title: Editorial: Autonomous robotic systems in aquaculture: research challenges and industry needs.
Frontiers in Robotics and AI, vol. 12, article 1740881. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12678286/pdf/
Pub Date: 2025-11-21 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1731356
Shude He, Shi-Lu Dai, Chengzhi Yuan, Haotian Shi
Title: Editorial: Advancements in neural learning control for enhanced multi-robot coordination.
Frontiers in Robotics and AI, vol. 12, article 1731356. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12678322/pdf/
Deployment of robots into hazardous environments typically involves a "human-robot teaming" (HRT) paradigm, in which a human supervisor interacts with a remotely operating robot inside the hazardous zone. Situational awareness (SA) is vital for enabling HRT, supporting navigation, planning, and decision-making. In this paper, we explore issues of higher-level "semantic" information and understanding in SA. In semi-autonomous or variable-autonomy paradigms, different types of semantic information may be important, in different ways, for both the human operator and an autonomous agent controlling the robot. We propose a generalizable framework for acquiring and combining multiple modalities of semantic-level SA during remote deployments of mobile robots. We demonstrate the framework with an example application of search and rescue (SAR) in disaster-response robotics. We propose a set of "environment semantic indicators" that can reflect a variety of different types of semantic information, such as indicators of risk or signs of human activity (SHA), as the robot encounters different scenes. Based on these indicators, we propose a metric, "Situational Semantic Richness" (SSR), which combines the individual semantic indicators into a single summary of the overall situation. The SSR indicates whether an information-rich, complex situation has been encountered, which may require advanced reasoning by robots and humans and, hence, the attention of the expert human operator. The framework is tested on a Jackal robot in a mock-up disaster-response environment. Experimental results demonstrate that the proposed semantic indicators are sensitive to changes in different modalities of semantic information across scenes, and the SSR metric reflects the overall semantic changes in the situations encountered.
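The abstract describes SSR as a combination of multiple semantic indicators but does not give its formula. As a hedged illustration, treating each indicator as a score in [0, 1] and aggregating with a weighted mean could look like this (the function signature and the choice of a weighted mean are our assumptions, not the paper's definition):

```python
def situational_semantic_richness(indicators, weights=None):
    """Aggregate per-modality semantic indicator scores (assumed to be
    normalised to [0, 1], e.g. a risk indicator or a signs-of-human-
    activity indicator) into one scene-level score via a weighted mean."""
    if weights is None:
        weights = [1.0] * len(indicators)
    return sum(w * s for w, s in zip(weights, indicators)) / sum(weights)
```

A high aggregate score would flag an information-rich scene that merits the expert operator's attention.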
Title: A framework for semantics-based situational awareness during mobile robot deployments.
Authors: Tianshu Ruan, Aniketh Ramesh, Hao Wang, Alix Johnstone-Morfoisse, Gokcenur Altindal, Paul Norman, Grigoris Nikolaou, Rustam Stolkin, Manolis Chiou
Pub Date: 2025-11-19 | DOI: 10.3389/frobt.2025.1694123. Frontiers in Robotics and AI, vol. 12, article 1694123. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12672245/pdf/
Pub Date: 2025-11-18 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1656642
Abdelaali Mahrouk
Background: Brain-Machine Interfaces (BMIs) increasingly mediate human interaction with assistive systems, yet they remain sensitive to internal cognitive divergence. Subtle shifts in user intention, caused by fatigue, overload, or schema conflict, may affect system reliability. While decoding accuracy has improved, most systems still lack mechanisms to communicate internal uncertainty or reasoning dynamics in real time.
Objective: We present NECAP-Interaction, a neuro-symbolic architecture that explores the potential of symbolic feedback to support real-time human-AI alignment. The framework aims to improve neuroergonomic transparency by integrating symbolic trace generation into the BMI control pipeline.
Methods: All evaluations were conducted using high-fidelity synthetic agents across three simulation tasks (motor control, visual attention, cognitive inhibition). NECAP-Interaction generates symbolic descriptors of epistemic shifts, supporting co-adaptive human-system communication. We report trace clarity, response latency, and symbolic coverage using structured replay analysis and interpretability metrics.
Results: NECAP-Interaction anticipated behavioral divergence up to 2.3 ± 0.4 s before error onset and maintained over 90% symbolic trace interpretability across uncertainty tiers. In simulated overlays, symbolic feedback improved user comprehension of system states and reduced latency to trust collapse compared to baseline architectures (CNN, RNN).
Conclusion: Cognitive interpretability is not merely a technical concern; it is a design priority. By embedding symbolic introspection into BMI workflows, NECAP-Interaction supports user transparency and co-regulated interaction in cognitively demanding contexts. These findings contribute to the development of human-centered neurotechnologies where explainability is experienced in real time.
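The abstract does not detail how the symbolic descriptors are generated. As one speculative sketch of the general idea of surfacing decoder uncertainty as coarse real-time symbols (the labels, thresholds, and function name are entirely hypothetical, not drawn from the paper):

```python
def symbolic_trace(uncertainty, threshold=0.7):
    """Map a scalar decoder-uncertainty estimate in [0, 1] to a coarse
    symbolic descriptor that an interface could surface to the user."""
    if uncertainty >= threshold:
        return "EPISTEMIC_SHIFT"        # hypothetical label
    if uncertainty >= threshold / 2:
        return "ELEVATED_UNCERTAINTY"   # hypothetical label
    return "STABLE"
```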
Title: Symbolic feedback for transparent fault anticipation in neuroergonomic brain-machine interfaces.
Frontiers in Robotics and AI, vol. 12, article 1656642. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12668942/pdf/
Pub Date: 2025-11-17 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1702360
Alberto Neri, Veronica Penza, Nazim Haouchine, Leonardo S Mattos
Objective: Registering a preoperative 3D model of an organ with its actual anatomy as viewed in intraoperative video is a fundamental challenge in computer-assisted surgery, especially for surgical augmented reality. To address this, we present a benchmark of state-of-the-art deep learning point-cloud registration methods, offering a transparent evaluation of their generalizability to surgical scenarios and establishing a robust guideline for developing advanced non-rigid algorithms.
Methods: We systematically evaluate traditional and deep learning point cloud registration approaches (GMM-based, correspondence-based, correspondence-free, matching-based, and liver-specific) on two surgical datasets: a deformed IRCAD liver set and the DePoll dataset. We also propose our complete-to-partial point cloud registration framework, which leverages keypoint extraction, overlap estimation, and a Transformer-based architecture, achieving competitive registration results.
Results: Experimental evaluations on the deformed IRCAD tests reveal that most deep learning methods achieve good registration performance, with TRE < 10 mm, MAE(R) < 4, and MAE(t) < 5 mm. On DePoll, however, performance drops dramatically due to the large deformations.
Conclusion: Deep-learning rigid registration methods remain reliable under small deformations and varying partiality but lose accuracy when faced with severe non-rigid changes. To overcome this, future work should focus on building non-rigid registration architectures that preserve the strengths of self- and cross-attention and overlap modules while enhancing correspondence estimation to handle large deformations in laparoscopic surgery.
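Target registration error (TRE) is a standard metric in this setting: the mean Euclidean distance between target points mapped by the estimated transform and by the ground truth. A minimal NumPy sketch for the rigid case (the function name is ours; the benchmark's exact evaluation code may differ):

```python
import numpy as np

def target_registration_error(points, R_est, t_est, R_gt, t_gt):
    """Mean Euclidean distance between target points mapped by the
    estimated rigid transform (R_est, t_est) and by the ground-truth
    transform (R_gt, t_gt); lower is better."""
    mapped_est = points @ R_est.T + t_est
    mapped_gt = points @ R_gt.T + t_gt
    return float(np.mean(np.linalg.norm(mapped_est - mapped_gt, axis=1)))
```

A pure translation error of 1 mm in the estimate, for instance, gives a TRE of exactly 1 mm regardless of the target points chosen.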
Title: Benchmarking complete-to-partial point cloud registration techniques for laparoscopic surgery.
Frontiers in Robotics and AI, vol. 12, article 1702360. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12666070/pdf/
Pub Date: 2025-11-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1567211
Mohammad Al Homsi, Maja Trumić, Adriano Fagiolini, Giansalvo Cirrincione
Recent advances in artificial intelligence (AI) have attracted significant attention due to AI's ability to solve complex problems and the rapid development of learning algorithms and computational power. Among the many AI techniques, transformers stand out for their flexible architectures and high computational capacity. Unlike traditional neural networks, transformers use mechanisms such as self-attention with positional encoding, which enable them to effectively capture long-range dependencies in sequential and spatial data. This paper presents a comparison of various deep Q-learning algorithms and proposes two original techniques that integrate self-attention into deep Q-learning. The first technique combines structured self-attention with deep Q-learning, and the second uses multi-head attention with deep Q-learning. These methods are compared with different types of deep Q-learning and other temporal-difference techniques on uncertain tasks, such as throwing objects to unknown targets. The performance of these algorithms is evaluated in a simplified environment, where the task involves throwing a ball using a robotic arm manipulator. This setup provides a controlled scenario for analyzing the algorithms' efficiency and effectiveness in solving dynamic control problems. Additional constraints are introduced to evaluate performance under more complex conditions, such as a joint lock or the presence of obstacles like a wall near the robot or the target. The output of the algorithm includes the correct joint configurations and trajectories for throwing to unknown target positions. Multi-head attention enhanced the robot's ability to prioritize and interact with critical environmental features. The paper also includes a comparison of temporal difference algorithms to address constraints on the robot's joints. These algorithms are capable of finding solutions within the limitations of existing hardware, enabling robots to interact intelligently and autonomously with their environment.
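The attention mechanism the paper builds on can be sketched in isolation. Below is a minimal scaled dot-product self-attention over a sequence of state features, with queries, keys, and values taken as the features themselves, i.e. without the learned projection matrices that the paper's networks would include:

```python
import numpy as np

def self_attention(x):
    """Minimal scaled dot-product self-attention over a (seq_len, d)
    array of state features; queries, keys, and values are x itself."""
    d = x.shape[-1]
    scores = (x @ x.T) / np.sqrt(d)
    scores -= scores.max(axis=-1, keepdims=True)  # stabilise the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ x
```

In a deep Q-network, the attended features would feed the head that predicts a Q-value for each candidate action.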
Title: Comparative analysis of deep Q-learning algorithms for object throwing using a robot manipulator.
Frontiers in Robotics and AI, vol. 12, article 1567211. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660075/pdf/
Pub Date: 2025-11-14 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1725162
Casey Kennington, Koji Inoue, Randy Gomez, Peter Ford-Dominey
Title: Editorial: Dialogue with robots: constructive approaches for understanding communication.
Frontiers in Robotics and AI, vol. 12, article 1725162. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12660074/pdf/
Pub Date: 2025-11-13 | eCollection Date: 2025-01-01 | DOI: 10.3389/frobt.2025.1607190
Junya Nakanishi, Jun Baba, Yuichiro Yoshikawa, Hiroko Kamide, Hiroshi Ishiguro
This paper discusses the functional advantages of the Selection-Broadcast Cycle structure proposed by Global Workspace Theory (GWT), a theory inspired by human consciousness, focusing in particular on its applicability to artificial intelligence and robotics in dynamic, real-time scenarios. While previous studies often examined the Selection and Broadcast processes independently, this research emphasizes their combined cyclic structure and the resulting benefits for real-time cognitive systems. Specifically, the paper identifies three primary benefits: Dynamic Thinking Adaptation, Experience-Based Adaptation, and Immediate Real-Time Adaptation. This work highlights GWT's potential as a cognitive architecture suitable for sophisticated decision-making and adaptive performance in unsupervised, dynamic environments, and suggests new directions for developing and implementing robust, general-purpose AI and robotics systems capable of managing complex, real-world tasks.
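The Selection-Broadcast Cycle can be summarised as: competing processes propose content, the most salient proposal is selected, and the winner is broadcast to all modules. A toy sketch of one cycle follows; the data shapes and the salience field are our own assumptions, not the paper's formalism:

```python
def selection_broadcast_step(proposals, modules):
    """One Selection-Broadcast cycle: select the proposal with the
    highest salience, then broadcast its content to every registered
    module so each can adapt its local state."""
    winner = max(proposals, key=lambda p: p["salience"])
    for module in modules:
        module(winner["content"])
    return winner["content"]
```

Repeating this step in a loop gives the cyclic structure the paper argues underlies real-time adaptation.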
Title: Hypothesis on the functional advantages of the selection-broadcast cycle structure: global workspace theory and dealing with a real-time world.
Frontiers in Robotics and AI, vol. 12, article 1607190. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12657164/pdf/