Pub Date: 2024-11-05 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1449721
Yiqi Li, Yelin Jiang, Koh Hosoda
In the study of bipedal robots driven by McKibben-type pneumatic artificial muscles (PAMs), it is essential to investigate whether the intrinsic properties of the PAM contribute to stable robot motion, and whether this contribution arises from the interaction between the robot's mechanical structure and the PAM. In previous research, a PAM-driven bipedal musculoskeletal robot was designed on the principles of the spring-loaded inverted pendulum (SLIP) model, featuring low leg inertia and mass concentrated near the hip joint. However, only the design principles of that robot were based on the SLIP model; no controller was specifically designed from the model. To address this issue, this study develops a PAM controller that is itself based on the SLIP model and tailored to the characteristics of the developed robot. The model-based controller regulates the ankle flexion PAM to adjust the direction of the ground reaction force during walking. The results indicate that the proposed controller effectively directs the leg's ground reaction force toward the center of mass during walking.
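The abstract does not give the control law itself; the sketch below is only a generic illustration of the idea of steering the ground reaction force toward the foot-to-CoM line by modulating ankle PAM pressure. All function names, gains, and pressure values are hypothetical and not taken from the paper.

```python
import numpy as np

def ankle_pam_pressure(com, foot, grf, p_nominal=0.30, k_p=0.8,
                       p_min=0.0, p_max=0.6):
    """Steer the measured ground reaction force (GRF) toward the line from
    the foot contact point to the center of mass (CoM) by modulating the
    ankle flexion PAM pressure (all gains and pressures are placeholders)."""
    leg_dir = np.asarray(com, float) - np.asarray(foot, float)
    leg_dir /= np.linalg.norm(leg_dir)                    # desired GRF direction
    grf_dir = np.asarray(grf, float) / (np.linalg.norm(grf) + 1e-9)

    # Signed angle (sagittal plane) between measured GRF and the foot-to-CoM line.
    cross = grf_dir[0] * leg_dir[1] - grf_dir[1] * leg_dir[0]
    err = np.arctan2(cross, np.dot(grf_dir, leg_dir))

    # Proportional pressure correction; a real controller would also need
    # gait-phase detection, valve dynamics, and PAM hysteresis compensation.
    return float(np.clip(p_nominal + k_p * err, p_min, p_max))

# GRF tilted slightly off the foot-to-CoM line -> adjust the flexion pressure.
print(ankle_pam_pressure(com=[0.05, 0.90], foot=[0.0, 0.0], grf=[-30.0, 520.0]))
```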
{"title":"Controller design and experimental validation of walking for a musculoskeletal bipedal lower limb robot based on the spring-loaded inverted pendulum model.","authors":"Yiqi Li, Yelin Jiang, Koh Hosoda","doi":"10.3389/frobt.2024.1449721","DOIUrl":"10.3389/frobt.2024.1449721","url":null,"abstract":"<p><p>In the study of PAM (McKibben-type pneumatic artificial muscle)-driven bipedal robots, it is essential to investigate whether the intrinsic properties of the PAM contribute to achieving stable robot motion. Furthermore, it is crucial to determine if this contribution can be achieved through the interaction between the robot's mechanical structure and the PAM. In previous research, a PAM-driven bipedal musculoskeletal robot was designed based on the principles of the spring-loaded inverted pendulum (SLIP) model. The robot features low leg inertia and concentrated mass near the hip joint. However, it is important to note that for this robot, only the design principles were based on the SLIP model, and no specialized controller was specifically designed based on the model. To address this issue, based on the characteristics of the developed robot, a PAM controller designed also based on the SLIP model is developed in this study. This model-based controller regulates ankle flexion PAM to adjust the direction of the ground reaction force during robot walking motion. The results indicate that the proposed controller effectively directs the leg ground reaction force towards the center of mass during walking.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1449721"},"PeriodicalIF":2.9,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11574207/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142676596","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-05 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1278983
Alan F T Winfield
The use of evolutionary robotic systems to model aspects of evolutionary biology is well-established. Yet, few studies have asked the question, "What kind of model is an evolutionary robotic system?" This paper seeks to address that question in several ways. First, it is addressed by applying a structured model description developed for physical robot models of animal sensorimotor systems, then by outlining the strengths and limitations of evolutionary robotics for modelling evolutionary biology, and, finally, by considering the deepest questions in evolution and which of them might feasibly be modelled by evolutionary robotics. The paper concludes that although evolutionary robotics faces serious limitations in exploring deeper questions in evolutionary biology, its bottom-up approach to modelling populations of evolving phenotypes and their embodied interactions holds significant value for both testing and generating hypotheses.
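Although the paper is conceptual, the bottom-up modelling approach it examines can be made concrete with a minimal evolutionary loop. The sketch below is a generic evolutionary-robotics skeleton under assumed choices (real-valued genomes, Gaussian mutation, truncation selection, and a stand-in fitness function); it is not drawn from the paper.

```python
import random

def evolve(fitness, pop_size=20, generations=50, genome_len=8, mut_sigma=0.1):
    """Minimal evolutionary-robotics loop: a population of real-valued
    controller genomes is evaluated in (simulated) embodied trials and
    evolved by mutation and truncation selection."""
    pop = [[random.uniform(-1, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=fitness, reverse=True)
        parents = ranked[: pop_size // 2]                    # keep the best half
        children = [[g + random.gauss(0, mut_sigma) for g in random.choice(parents)]
                    for _ in range(pop_size - len(parents))]  # mutated offspring
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in "embodied" evaluation: in a real study this would run the genome
# as a robot controller in simulation or hardware and measure its behaviour.
best = evolve(fitness=lambda genome: -sum(g * g for g in genome))
print(best)
```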
{"title":"Evolutionary robotics as a modelling tool in evolutionary biology.","authors":"Alan F T Winfield","doi":"10.3389/frobt.2024.1278983","DOIUrl":"10.3389/frobt.2024.1278983","url":null,"abstract":"<p><p>The use of evolutionary robotic systems to model aspects of evolutionary biology is well-established. Yet, few studies have asked the question, \"What kind of model is an evolutionary robotic system?\" This paper seeks to address that question in several ways. First, it is addressed by applying a structured model description developed for physical robot models of animal sensorimotor systems, then by outlining the strengths and limitations of evolutionary robotics for modelling evolutionary biology, and, finally, by considering the deepest questions in evolution and which of them might feasibly be modelled by evolutionary robotics. The paper concludes that although evolutionary robotics faces serious limitations in exploring deeper questions in evolutionary biology, its bottom-up approach to modelling populations of evolving phenotypes and their embodied interactions holds significant value for both testing and generating hypotheses.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1278983"},"PeriodicalIF":2.9,"publicationDate":"2024-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11575461/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142676511","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-11-04 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1470163
Wael Taie, Khaled ElGeneidy, Ali Al-Yacoub, Ronglei Sun
Collaborative robots (cobots) are increasingly integrated into dynamic Industry 4.0 manufacturing environments that require frequent system reconfiguration due to changes in cobot paths and payloads. This necessitates fast methods for identifying payload inertial parameters to compensate the cobot controller and ensure precise and safe operation. Our prior work used an Incremental Ensemble Model (IEM) to identify payload parameters, eliminating the need for an excitation path and thus removing the separate identification step; however, that approach suffers from catastrophic forgetting. This paper introduces a novel incremental ensemble learning method that addresses catastrophic forgetting by adding a new weak learner to the ensemble model for each new training bag. It also proposes a classification model that helps the ensemble identify which weak learner provides the most accurate estimate for new input data. The proposed method incrementally updates the identification model while the cobot navigates any task path, maintaining the accuracy of earlier weak learners even after updating with new data. Validation on the Franka Emika cobot showcases the model's superior accuracy and adaptability, effectively eliminating catastrophic forgetting.
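The abstract describes the method only at a high level; the sketch below illustrates, under assumed design choices, the general per-bag idea: each training bag adds a frozen weak learner, and a gating classifier routes new inputs to the learner expected to be most accurate. The ridge regressors, k-NN gate, and the assumption that a bag's own learner is its best expert are placeholders, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.neighbors import KNeighborsClassifier

class IncrementalEnsemble:
    """Per-bag incremental ensemble sketch: every new training bag gets its
    own weak regressor (old learners are never overwritten, which avoids
    catastrophic forgetting), and a gating classifier picks which learner to
    trust for a new input."""

    def __init__(self):
        self.learners = []                      # one weak learner per bag
        self.gate_X, self.gate_y = [], []       # accumulated gating data
        self.gate = None

    def add_bag(self, X, y):
        self.learners.append(Ridge(alpha=1.0).fit(X, y))
        # Assume the learner fitted on this bag is the best expert for inputs
        # drawn from this bag's operating region.
        self.gate_X.append(X)
        self.gate_y.append(np.full(len(X), len(self.learners) - 1))
        self.gate = KNeighborsClassifier(n_neighbors=3).fit(
            np.vstack(self.gate_X), np.concatenate(self.gate_y))

    def predict(self, X):
        idx = self.gate.predict(X)              # which learner handles each sample
        return np.array([self.learners[i].predict(x[None, :])[0]
                         for i, x in zip(idx, X)])

# Toy usage: two "bags" from different payload regimes.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(0, 1, (50, 3)), rng.normal(5, 1, (50, 3))
ens = IncrementalEnsemble()
ens.add_bag(X1, X1 @ [1.0, 2.0, 3.0])
ens.add_bag(X2, X2 @ [-1.0, 0.5, 2.0])
print(ens.predict(X2[:3]))
```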
{"title":"Addressing catastrophic forgetting in payload parameter identification using incremental ensemble learning.","authors":"Wael Taie, Khaled ElGeneidy, Ali Al-Yacoub, Ronglei Sun","doi":"10.3389/frobt.2024.1470163","DOIUrl":"10.3389/frobt.2024.1470163","url":null,"abstract":"<p><p>Collaborative robots (cobots) are increasingly integrated into Industry 4.0 dynamic manufacturing environments that require frequent system reconfiguration due to changes in cobot paths and payloads. This necessitates fast methods for identifying payload inertial parameters to compensate the cobot controller and ensure precise and safe operation. Our prior work used Incremental Ensemble Model (IEM) to identify payload parameters, eliminating the need for an excitation path and thus removing the separate identification step. However, this approach suffers from catastrophic forgetting. This paper introduces a novel incremental ensemble learning method that addresses the problem of catastrophic forgetting by adding a new weak learner to the ensemble model for each new training bag. Moreover, it proposes a new classification model that assists the ensemble model in identifying which weak learner provides the most accurate estimation for new input data. The proposed method incrementally updates the identification model while the cobot navigates any task path, maintaining accuracy on old weak learner even after updating with new data. Validation performed on the Franka Emika cobot showcases the model's superior accuracy and adaptability, effectively eliminating the problem of catastrophic forgetting.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1470163"},"PeriodicalIF":2.9,"publicationDate":"2024-11-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11570578/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142669382","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1407095
Yuto Ushijima, Satoru Satake, Takayuki Kanda
It is extremely challenging for security guard robots to independently stop human line-cutting behavior. We propose addressing this issue with humorous phrases. First, we created a dataset and built a humor effectiveness predictor: using a simulator, we replicated 13,000 line-cutting situations and collected 500 humorous phrases through crowdsourcing, then evaluated each phrase's effectiveness in the different situations, again through crowdsourcing. Using machine learning on this dataset, we constructed a humor effectiveness predictor. In preparing this machine learning, we found that accounting for both the situation and the discomfort caused by the phrase is crucial for predicting humor effectiveness. Next, we built a system that uses this predictor to select the best humorous phrase for a given line-cutting situation. We then conducted a video experiment comparing the humorous phrases selected by the proposed system with typical non-humorous phrases. The results revealed that the phrases selected by the proposed system were more effective in discouraging line-cutting behavior than typical non-humorous phrases.
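As a rough illustration of how such a predictor could drive phrase selection, the sketch below scores each candidate phrase for the current situation with a learned effectiveness model and returns the highest-scoring one. The random-forest model, feature encodings, and synthetic ratings are stand-ins, not the paper's dataset or predictor.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def select_phrase(situation, phrases, predictor, featurize):
    """Pick the candidate phrase with the highest predicted humor
    effectiveness for the current line-cutting situation. `predictor` and
    `featurize` stand in for a learned effectiveness model and a phrase
    feature encoding; both are placeholders."""
    scores = [predictor.predict([np.concatenate([situation, featurize(p)])])[0]
              for p in phrases]
    return phrases[int(np.argmax(scores))]

# Toy stand-ins: random situation/phrase features and a model fitted on
# synthetic crowdsourced-style ratings.
rng = np.random.default_rng(1)
featurize = lambda phrase: rng.normal(size=4)          # placeholder phrase embedding
X, y = rng.normal(size=(200, 8)), rng.uniform(0, 1, 200)
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
print(select_phrase(rng.normal(size=4), ["phrase A", "phrase B", "phrase C"],
                    model, featurize))
```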
{"title":"Predicting humor effectiveness of robots for human line cutting.","authors":"Yuto Ushijima, Satoru Satake, Takayuki Kanda","doi":"10.3389/frobt.2024.1407095","DOIUrl":"https://doi.org/10.3389/frobt.2024.1407095","url":null,"abstract":"<p><p>It is extremely challenging for security guard robots to independently stop human line-cutting behavior. We propose addressing this issue by using humorous phrases. First, we created a dataset and built a humor effectiveness predictor. Using a simulator, we replicated 13,000 situations of line-cutting behavior and collected 500 humorous phrases through crowdsourcing. Combining these simulators and phrases, we evaluated each phrase's effectiveness in different situations through crowdsourcing. Using machine learning with this dataset, we constructed a humor effectiveness predictor. In the process of preparing this machine learning, we discovered that considering the situation and the discomfort caused by the phrase is crucial for predicting the effectiveness of humor. Next, we constructed a system to select the best humorous phrase for the line-cutting behavior using this predictor. We then conducted a video experiment in which we compared the humorous phrases selected using this proposed system with typical non-humorous phrases. The results revealed that humorous phrases selected by the proposed system were more effective in discouraging line-cutting behavior than typical non-humorous phrases.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1407095"},"PeriodicalIF":2.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11554535/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-29 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1462558
Giacinto Barresi, Ana Lúcia Faria, Marta Matamala-Gomez, Edward Grant, Philippe S Archambault, Giampaolo Brichetto, Thomas Platz
{"title":"Editorial: Human-centered solutions and synergies across robotic and digital systems for rehabilitation.","authors":"Giacinto Barresi, Ana Lúcia Faria, Marta Matamala-Gomez, Edward Grant, Philippe S Archambault, Giampaolo Brichetto, Thomas Platz","doi":"10.3389/frobt.2024.1462558","DOIUrl":"https://doi.org/10.3389/frobt.2024.1462558","url":null,"abstract":"","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1462558"},"PeriodicalIF":2.9,"publicationDate":"2024-10-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11554525/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630023","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-28 | DOI: 10.3389/frobt.2024.1430740
Nikos Piperigkos, Alexandros Gkillas, Gerasimos Arvanitis, Stavros Nousias, Aris Lalos, Apostolos Fournaris, Panagiotis Radoglou-Grammatikis, Panagiotis Sarigiannidis, Konstantinos Moustakas
Cyber-physical systems (CPSs) are evolving from individual systems into collectives of systems that collaborate to achieve highly complex goals, realizing a cyber-physical system of systems (CPSoS) approach. These are heterogeneous systems comprising various autonomous CPSs, each with unique performance capabilities, priorities, and goals. In practice, significant challenges in the applicability and usability of CPSoSs remain to be addressed. The decentralization of CPSoSs assigns tasks to individual CPSs within the system of systems; all CPSs should harmoniously pursue system-level achievements and collaborate to make system-of-systems-level decisions and implement the CPSoS functionality. The automotive domain is transitioning to the system-of-systems approach, aiming to provide emergent functionalities such as traffic management, collaborative car fleet management, and large-scale automotive adaptation to the physical environment, thereby providing significant environmental benefits and societal impact. Similarly, large infrastructure domains are evolving into global, highly integrated cyber-physical systems of systems covering all parts of the value chain. This survey provides a comprehensive review of current best practices in connected cyber-physical systems and investigates a dual-layer architecture comprising perception and behavioral components. The perception layer covers object detection, cooperative scene analysis, cooperative localization and path planning, and human-centric perception. The behavioral layer focuses on human-in-the-loop (HITL)-centric decision making and control, in which the output of the perception layer assists the human operator in making decisions while the operator's state is monitored. Finally, an extended overview of digital twin (DT) paradigms is provided to simulate, realize, and optimize large-scale CPSoS ecosystems.
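As a toy illustration of the dual-layer structure described above (not an implementation from the survey), the sketch below shows perception-layer outputs assisting a human-in-the-loop decision step that is gated by a monitored operator state; all names and thresholds are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PerceptionOutput:
    """Illustrative products of the perception layer (names are assumptions)."""
    detected_objects: List[str]
    ego_position: Tuple[float, float]          # cooperative localization estimate
    planned_path: List[Tuple[float, float]]

def hitl_decision(perc: PerceptionOutput, operator_attention: float) -> str:
    """Behavioral-layer sketch: perception output assists the human operator,
    while the operator's monitored state gates how the system proceeds."""
    if operator_attention < 0.3:               # operator distracted -> be conservative
        return "slow down and request operator confirmation"
    if "pedestrian" in perc.detected_objects:
        return "suggest yielding; operator confirms the maneuver"
    return "follow planned path under operator supervision"

print(hitl_decision(PerceptionOutput(["pedestrian"], (12.0, 4.5), [(13.0, 5.0), (14.0, 6.0)]),
                    operator_attention=0.8))
```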
{"title":"Distributed intelligence in industrial and automotive cyber-physical systems: a review.","authors":"Nikos Piperigkos, Alexandros Gkillas, Gerasimos Arvanitis, Stavros Nousias, Aris Lalos, Apostolos Fournaris, Panagiotis Radoglou-Grammatikis, Panagiotis Sarigiannidis, Konstantinos Moustakas","doi":"10.3389/frobt.2024.1430740","DOIUrl":"https://doi.org/10.3389/frobt.2024.1430740","url":null,"abstract":"<p><p>Cyber-physical systems (CPSs) are evolving from individual systems to collectives of systems that collaborate to achieve highly complex goals, realizing a cyber-physical system of systems (CPSoSs) approach. They are heterogeneous systems comprising various autonomous CPSs, each with unique performance capabilities, priorities, and pursued goals. In practice, there are significant challenges in the applicability and usability of CPSoSs that need to be addressed. The decentralization of CPSoSs assigns tasks to individual CPSs within the system of systems. All CPSs should harmonically pursue system-based achievements and collaborate to make system-of-system-based decisions and implement the CPSoS functionality. The automotive domain is transitioning to the system of systems approach, aiming to provide a series of emergent functionalities like traffic management, collaborative car fleet management, or large-scale automotive adaptation to the physical environment, thus providing significant environmental benefits and achieving significant societal impact. Similarly, large infrastructure domains are evolving into global, highly integrated cyber-physical systems of systems, covering all parts of the value chain. This survey provides a comprehensive review of current best practices in connected cyber-physical systems and investigates a dual-layer architecture entailing perception and behavioral components. The presented perception layer entails object detection, cooperative scene analysis, cooperative localization and path planning, and human-centric perception. The behavioral layer focuses on human-in-the-loop (HITL)-centric decision making and control, where the output of the perception layer assists the human operator in making decisions while monitoring the operator's state. Finally, an extended overview of digital twin (DT) paradigms is provided so as to simulate, realize, and optimize large-scale CPSoS ecosystems.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1430740"},"PeriodicalIF":2.9,"publicationDate":"2024-10-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11551047/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630020","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-25 | eCollection Date: 2024-01-01 | DOI: 10.3389/frobt.2024.1472051
Mads Bering Christiansen, Ahmad Rafsanjani, Jonas Jørgensen
Artificial intelligence (AI) has rapidly become a widespread design aid through the recent proliferation of generative AI tools. In this work, we use generative AI to explore soft robotics designs, specifically Soft Biomorphism, an aesthetic design paradigm that emphasizes the inherent biomorphic qualities of soft robots and leverages them as affordances for interaction with humans. The work comprises two experiments aimed at uncovering how generative AI can articulate and expand the design space of soft biomorphic robotics using text-to-image (TTI) and image-to-image (ITI) generation techniques. Through TTI generation, Experiment 1 uncovered alternative interpretations of soft biomorphism, emphasizing the novel incorporation of materials such as fur, which adds a new dimension to the material aesthetics of soft robotics. In Experiment 2, TTI and ITI generation were combined, and a category of hybrid techno-organic robot designs was discovered that combines rigid and pliable materials. The work thus demonstrates in practice specific ways in which AI image generation can contribute to expanding the design space of soft robotics.
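The paper does not state which generation tools were used; as one possible way to reproduce a TTI-then-ITI workflow, the sketch below uses Hugging Face diffusers Stable Diffusion pipelines. The model identifier, prompts, and strength value are placeholders, and a CUDA GPU is assumed.

```python
import torch
from diffusers import StableDiffusionPipeline, StableDiffusionImg2ImgPipeline

MODEL_ID = "runwayml/stable-diffusion-v1-5"   # placeholder checkpoint, not from the paper

# Text-to-image: explore the "soft biomorphic" design space from a prompt.
tti = StableDiffusionPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16).to("cuda")
concept = tti("a soft biomorphic robot with furry, organic surfaces, studio photo").images[0]

# Image-to-image: refine the generated concept while keeping its composition.
iti = StableDiffusionImg2ImgPipeline.from_pretrained(MODEL_ID, torch_dtype=torch.float16).to("cuda")
hybrid = iti(prompt="hybrid techno-organic robot, rigid shell with pliable silicone limbs",
             image=concept, strength=0.6).images[0]
hybrid.save("soft_biomorphic_concept.png")
```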
{"title":"Nature redux: interrogating biomorphism and soft robot aesthetics through generative AI.","authors":"Mads Bering Christiansen, Ahmad Rafsanjani, Jonas Jørgensen","doi":"10.3389/frobt.2024.1472051","DOIUrl":"https://doi.org/10.3389/frobt.2024.1472051","url":null,"abstract":"<p><p>Artificial Intelligence (AI) has rapidly become a widespread design aid through the recent proliferation of generative AI tools. In this work we use generative AI to explore soft robotics designs, specifically Soft Biomorphism, an aesthetic design paradigm emphasizing the inherent biomorphic qualities of soft robots to leverage them as affordances for interactions with humans. The work comprises two experiments aimed at uncovering how generative AI can articulate and expand the design space of soft biomorphic robotics using text-to-image (TTI) and image-to-image (ITI) generation techniques. Through TTI generation, Experiment 1 uncovered alternative interpretations of soft biomorphism, emphasizing the novel incorporation of, e.g., fur, which adds a new dimension to the material aesthetics of soft robotics. In Experiment 2, TTI and ITI generation were combined and a category of hybrid techno-organic robot designs discovered, which combined rigid and pliable materials. The work thus demonstrates in practice the specific ways in which AI image generation can contribute towards expanding the design space of soft robotics.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1472051"},"PeriodicalIF":2.9,"publicationDate":"2024-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11543949/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142630031","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-10-22 | DOI: 10.3389/frobt.2024.1455582
Hiroaki Toyama, Hiroaki Kawamoto, Yoshiyuki Sankai
A robot hand-arm that can perform various tasks together with the unaffected arm could ease the daily lives of patients with a single upper-limb dysfunction. Because the patient's other arm functions normally, smooth interaction between robot and patient is desirable. If the robot can move in response to the user's intentions and cooperate with the unaffected arm, even without detailed operation, it can effectively assist with daily tasks. This study aims to propose and develop a cybernic robot hand-arm with the following features: 1) input of user intention via bioelectrical signals from the paralyzed arm, the unaffected arm's motion, and voice; 2) autonomous control of support movements; 3) a control system that integrates voluntary and autonomous control by combining 1) and 2), allowing smooth work support in cooperation with the unaffected arm and reflecting intention as a part of the body; and 4) a learning function to provide work support across various tasks in daily use. We confirmed the feasibility and usefulness of the proposed system through a pilot study involving three patients. The system learned to support new tasks by working with the user through an operating function that does not require the involvement of the unaffected arm. The system divides the support actions into movement phases and learns the phase-shift conditions from sensor information about the user's intention. After learning, the system autonomously performs the learned support actions through voluntary phase shifts based on the user's intention, conveyed via bioelectrical signals, the unaffected arm's motion, and voice, enabling smooth collaborative movement with the unaffected arm. Experiments with the patients demonstrated that the system could learn and provide smooth work support in cooperation with the unaffected arm to successfully complete tasks they find difficult. A questionnaire further confirmed, subjectively, that cooperative work according to the user's intention was achieved and that the work time was within a feasible range for daily life. Furthermore, participants who used bioelectrical signals from their paralyzed arm perceived the system as part of their body. We thus confirmed the feasibility and usefulness of various cooperative task supports using the proposed method.
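As a rough sketch of the phase-based structure described above (not the authors' controller), the code below models a support action as an ordered list of movement phases, each with a learned shift condition evaluated on the sensed user intention (bioelectrical signal, unaffected-arm motion, voice). All phase names, conditions, and the example task are hypothetical.

```python
import itertools
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class TaskPhase:
    """One movement phase of a learned support action (illustrative)."""
    name: str
    motion: Callable[[], None]                 # robot motion executed in this phase
    shift_condition: Callable[[Dict], bool]    # learned from sensed user intention

def run_support_task(phases: List[TaskPhase], read_intention: Callable[[], Dict]):
    """Execute the phases in order; advance only when the sensed user intention
    satisfies the learned phase-shift condition (a voluntary phase shift)."""
    for phase in phases:
        phase.motion()
        while not phase.shift_condition(read_intention()):
            pass                               # wait for the user's intention signal

# Hypothetical example: hold a jar while the user opens the lid, then release.
phases = [
    TaskPhase("reach_and_grasp", lambda: print("robot grasps jar"),
              lambda s: s["bioelectric"] > 0.5),          # user tenses paralyzed arm
    TaskPhase("hold_steady", lambda: print("robot holds jar"),
              lambda s: s["voice"] == "done"),            # spoken cue to finish
    TaskPhase("release", lambda: print("robot releases jar"),
              lambda s: True),
]

# Fake sensor stream so the sketch runs end to end.
fake_inputs = itertools.cycle([{"bioelectric": 0.8, "voice": "done"}])
run_support_task(phases, read_intention=lambda: next(fake_inputs))
```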
{"title":"Cybernic robot hand-arm that realizes cooperative work as a new hand-arm for people with a single upper-limb dysfunction.","authors":"Hiroaki Toyama, Hiroaki Kawamoto, Yoshiyuki Sankai","doi":"10.3389/frobt.2024.1455582","DOIUrl":"10.3389/frobt.2024.1455582","url":null,"abstract":"<p><p>A robot hand-arm that can perform various tasks with the unaffected arm could ease the daily lives of patients with a single upper-limb dysfunction. A smooth interaction between robot and patient is desirable since their other arm functions normally. If the robot can move in response to the user's intentions and cooperate with the unaffected arm, even without detailed operation, it can effectively assist with daily tasks. This study aims to propose and develop a cybernic robot hand-arm with the following features: 1) input of user intention via bioelectrical signals from the paralyzed arm, the unaffected arm's motion, and voice; 2) autonomous control of support movements; 3) a control system that integrates voluntary and autonomous control by combining 1) and 2) to thus allow smooth work support in cooperation with the unaffected arm, reflecting intention as a part of the body; and 4) a learning function to provide work support across various tasks in daily use. We confirmed the feasibility and usefulness of the proposed system through a pilot study involving three patients. The system learned to support new tasks by working with the user through an operating function that does not require the involvement of the unaffected arm. The system divides the support actions into movement phases and learns the phase-shift conditions from the sensor information about the user's intention. After learning, the system autonomously performs learned support actions through voluntary phase shifts based on input about the user's intention via bioelectrical signals, the unaffected arm's motion, and by voice, enabling smooth collaborative movement with the unaffected arm. Experiments with patients demonstrated that the system could learn and provide smooth work support in cooperation with the unaffected arm to successfully complete tasks they find difficult. Additionally, the questionnaire subjectively confirmed that cooperative work according to the user's intention was achieved and that work time was within a feasible range for daily life. Furthermore, it was observed that participants who used bioelectrical signals from their paralyzed arm perceived the system as part of their body. We thus confirmed the feasibility and usefulness of various cooperative task supports using the proposed method.</p>","PeriodicalId":47597,"journal":{"name":"Frontiers in Robotics and AI","volume":"11 ","pages":"1455582"},"PeriodicalIF":2.9,"publicationDate":"2024-10-22","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11535860/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142584643","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}