Voice recognition and command technology for industrial robot applications is a relatively new field in intelligent manufacturing. It offers a number of advantages over other ways of communicating with robots, since it requires fewer specialized skills to operate the robot workstation. In addition, voice commands can help reduce the number of industrial injuries caused by contact with machinery, potentially saving operators' lives in emergency situations where external assistance is not immediately available. This study presents the design of a Cartesian robot workstation equipped with a voice recognition system driven by audio commands and a vision perception system. The vision perception system uses a RealSense depth camera to capture the coordinates of the workpieces, which are processed by an SSD algorithm. The voice recognition system is built on an algorithm that combines LSTM and HMM, and it performs well in terms of both efficiency and accuracy when controlling normal operation as well as the emergency stop of our robot grasping workstation.
{"title":"Emergency Stop System of Computer Vision Workstation Based on GMM-HMM and LSTM","authors":"Muhuan Wu, Fangrui Guo, Junwei Wu, Yuliang Xiao, Mingyu Jin, Quan Zhang","doi":"10.1109/ICARA56516.2023.10125926","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125926","url":null,"abstract":"Voice recognition and command technology for applications with industrial robots is a relatively new field in the intelligent manufacturing industry. It offers a number of advantages over other methods of communication with robots, as it requires fewer specialized skills to manipulate the robot workstation. Additionally, using voice commands can help reduce the number of industrial injuries caused by contact with machinery, thus potentially save operators' lives in emergency situations where external assistance is not immediately available. This study presents a design of a Cartesian robot workstation which is equipped with a voice recognition system controlled by audio commands, as well as a vision perception system. The vision perception system uses the Real Sense depth camera that captures information about the coordinates of the work pieces, which is processed by SSD algorithm. The voice recognition system has been developed with an algorithm which combines both LSTM and HMM, and it has good performance in term of both efficiency and accuracy in controlling normal operation as well as emergency stop for our robot grasping workstation.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"29 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130793816","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125938
Xinchao Song, Nikolas Lamb, Sean Banerjee, N. Banerjee
Though several approaches exist to automatically generate repair parts for fractured objects, there has been little prior work on automatically assembling the generated parts. Assembling repair parts to fractured objects is challenging because of the complex high-frequency geometry at the fractured region, which limits the effectiveness of traditional controllers. We present a reinforcement learning approach that combines visual and tactile information to automatically assemble repair parts to fractured objects. Our approach overcomes the limitations of existing assembly approaches that require objects to have a specific structure, require training on a large dataset to generalize to new objects, or require the assembled state to be easily identifiable, as in peg-in-hole assembly. We propose two visual metrics that estimate the assembly state in 3 degrees of freedom. Tactile information allows our approach to assemble objects under occlusion, as occurs when the objects are nearly assembled. Our approach can assemble objects with complex interfaces without placing requirements on object structure.
{"title":"Reinforcement-Learning Based Robotic Assembly of Fractured Objects Using Visual and Tactile Information","authors":"Xinchao Song, Nikolas Lamb, Sean Banerjee, N. Banerjee","doi":"10.1109/ICARA56516.2023.10125938","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125938","url":null,"abstract":"Though several approaches exist to automatically generate repair parts for fractured objects, there has been little prior work on the automatic assembly of generated repair parts. Assembly of repair parts to fractured objects is a challenging problem due to the complex high-frequency geometry at the fractured region, which limits the effectiveness of traditional controllers. We present an approach using reinforcement learning that combines visual and tactile information to automatically assemble repair parts to fractured objects. Our approach overcomes the limitations of existing assembly approaches that require objects to have a specific structure, that require training on a large dataset to generalize to new objects, or that require the assembled state to be easily identifiable, such as for peg-in-hole assembly. We propose two visual metrics that provide estimation of assembly state with 3 degrees of freedom. Tactile information allows our approach to assemble objects under occlusion, as occurs when the objects are nearly assembled. Our approach is able to assemble objects with complex interfaces without placing requirements on object structure.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128135545","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125712
Yassine Habib, P. Papadakis, C. L. Barz, Antoine Fagette, Tiago Gonçalves, Cédric Buche
Simultaneous Localization and Mapping (SLAM) research has reached a level of maturity that enables systems to autonomously build an accurate sparse map of the environment while localizing themselves in that map. At the same time, deep learning has recently brought great improvements in Monocular Depth Prediction (MDP). Applications such as autonomous drone navigation and obstacle avoidance require dense structure information and cannot rely solely on a sparse SLAM representation. We propose to densify a state-of-the-art SLAM algorithm using deep learning-based dense MDP at keyframe rate. Towards this goal, we describe scale recovery from SLAM landmarks by minimizing a depth error metric, combined with multi-view depth refinement using a volumetric approach. We conclude with experiments that attest to the added value of our approach in terms of depth estimation.
{"title":"Densifying SLAM for UAV Navigation by Fusion of Monocular Depth Prediction","authors":"Yassine Habib, P. Papadakis, C. L. Barz, Antoine Fagette, Tiago Gonçalves, Cédric Buche","doi":"10.1109/ICARA56516.2023.10125712","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125712","url":null,"abstract":"Simultaneous Localization and Mapping (SLAM) research has reached a level of maturity enabling systems to build autonomously an accurate sparse map of the environment while localizing themselves in that map. At the same time, the use of deep learning has recently brought great improvements in Monocular Depth Prediction (MDP). Some applications such as autonomous drone navigation and obstacle avoidance require dense structure information and cannot only rely on sparse SLAM representation. We propose to densify a state-of-the-art SLAM algorithm using deep learning-based dense MDP at keyframe rate. Towards this goal, we describe a scale recovery from SLAM landmarks by minimizing a depth error metric combined with a multi-view depth refinement using a volumetric approach. We conclude with experiments that attest the added value of our approach in terms of depth estimation.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131716468","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125913
Kyoshiro Itakura
In this paper, a nonlinear model predictive controller that accounts for vehicle jerk dynamics is proposed to improve passenger ride comfort. Since the vehicle model used in the prediction phase requires high-accuracy dynamics to handle jerk motion, approximated wheel-load-transfer dynamics are introduced. In addition, to gain control over not only jerk but also acceleration, velocity, and position, an expanded state-space model that includes all of these states is developed, which improves its utility as an autonomous-vehicle controller. Numerical simulations of a cornering scenario validate that jerk and the other vehicle states can be constrained simultaneously through individual torque distribution by the electric powertrain. Furthermore, the principle of the torque distribution optimized by the proposed method is analyzed.
{"title":"Vehicle Motion Control with Jerk Constraint by Nonlinear Model Predictive Control","authors":"Kyoshiro Itakura","doi":"10.1109/ICARA56516.2023.10125913","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125913","url":null,"abstract":"In this paper, a nonlinear model predictive control considering vehicle jerk dynamics is proposed for improving ride comfort of passengers. Since the vehicle model in prediction phase requires high accuracy dynamics in order to handle the jerk motion, the approximated wheel load transfer dynamics is introduced. Also, to obtain the control capability for not only jerk but also acceleration, velocity and position, the expanded state space model including these dimensions into the one has been developed. It improves the utility as autonomous vehicle controller. By numerical simulation in assuming cornering driving scene, the effectiveness that jerk and other vehicle states enable to constraint simultaneously by individual torque distribution by electric power train is validated. Further, the principle of the optimized torque distribution by proposed method is analyzed.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117044402","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125759
Tomoaki Shimizu, Kosuke Taneda, Ayumu Goto, Tomoya Hattori, Toyokazu Kobayashi, Ryota Takamido, Jun Ota
The aim of this study is to develop an algorithm for task assignment and motion planning to a destination that considers the acceleration and deceleration patterns of automated guided vehicles (AGVs). We propose a task-assignment and motion-planning algorithm for AGV kinematics that uses multiple search trees, building on the conflict-based search with optimal task assignment (CBS-TA) of Hönig et al. By devising the options for AGV actions and the settings for standby actions, the algorithm performs optimal motion planning for each AGV while accounting for its dynamic characteristics.
{"title":"Offline Task Assignment and Motion Planning Algorithm Considering Agent's Dynamics","authors":"Tomoaki Shimizu, Kosuke Taneda, Ayumu Goto, Tomoya Hattori, Toyokazu Kobayashi, Ryota Takamido, Jun Ota","doi":"10.1109/ICARA56516.2023.10125759","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125759","url":null,"abstract":"The aim of this study is to develop an algorithm for task assignment and movement planning to a destination by considering the acceleration and deceleration patterns of automated guided vehicles (AGVs). In this study, we propose an algorithm for task assignment and motion planning for AGV kinematics with multiple search trees based on the conflict-based search with optimal task assignment (CBS-TA) of Hönig et al. By devising the options for AGV actions and settings for standby actions, the algorithm can perform optimal motion planning for each AGV, considering its dynamic characteristics.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116619940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125669
H. Sung, C. Hsiao, C. Lee, Chin-Yu Wang
Nerve plexus damage in the upper limb disables patients and has a drastic impact both mentally and physically, so rehabilitation treatment is indispensable. This study focuses on developing a 3-axis rehabilitation robot, covering the geometric design of the body structure, mathematical modeling of the multi-axis mechanism, multi-body dynamic analysis, selection of the mechatronic systems, and the assembly and physical testing of the rehabilitation robot. Through the preceding design, analysis, and simulation in software, the specification and performance can be ensured in advance. The realized rehabilitation robot is developed quickly, followed by a professional evaluation of the rehabilitation movements. Thus, the most suitable rehabilitation movement for each patient can be achieved, enhancing the effectiveness of the rehabilitation treatment.
{"title":"Design and Development of a Prototype Upper-limb Rehabilitation Robot Based on Multi-body Dynamics Analysis","authors":"H. Sung, C. Hsiao, C. Lee, Chin-Yu Wang","doi":"10.1109/ICARA56516.2023.10125669","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125669","url":null,"abstract":"The nerve plexus damage of the upper limb provokes the disability of patients and makes a drastic impact both mentally and physically. Therefore, rehabilitation treatments for patients are indispensable. This study focuses on developing a 3-axis rehabilitation robot with targets including the geometric design of body structure, mathematical modeling of the multi-axis mechanism, the multi-body dynamic analysis, the selection of mechatronic systems, and the assembling and solid test of the rehabilitation robot. Through the precedent design, analysis, and simulation using software, the specification and performance can be ensured in advance. The realized rehabilitation robot is quickly developed with a further professional evaluation of the rehabilitating movements. Thus, the most suitable rehabilitating movement for patients can be achieved to enhance the effectiveness of the rehabilitation treatment.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"249 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132659066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/icara56516.2023.10125910
{"title":"2023 9th International Conference on Automation, Robotics and Applications","authors":"","doi":"10.1109/icara56516.2023.10125910","DOIUrl":"https://doi.org/10.1109/icara56516.2023.10125910","url":null,"abstract":"","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132055709","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125952
Yongjiang Huang, Xixiang Liu
A Doppler velocity log (DVL) can provide velocity information for a strap-down inertial navigation system (SINS) and is widely used in navigation for autonomous underwater vehicles (AUVs). However, in deep-sea scenarios the DVL works in water-tracking mode, so its output is the velocity relative to the ocean current. The ocean-current velocity introduces systematic errors into the SINS/DVL integrated navigation system, degrading positioning accuracy. Estimating and compensating for the current velocity is therefore essential for high-precision navigation and an important topic in oceanographic research. In the small ocean area where an AUV operates, the current velocity remains stable over short periods and can be treated as constant. This paper proposes an optimization-based alignment (OBA) method for estimating the ocean-current velocity based on a GNSS (global navigation satellite system)/SINS/DVL integrated system. Simulation results show that the proposed method estimates the current velocity accurately and, unlike the traditional KF method, does not depend on the initial value.
{"title":"The Estimation of Ocean-current Velocity for AUV Using Optimization-Based Alignment","authors":"Yongjiang Huang, Xixiang Liu","doi":"10.1109/ICARA56516.2023.10125952","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125952","url":null,"abstract":"Doppler velocity log (DVL) can provide velocity information for the strap-down inertial navigation system (SINS), and it is widely used in the field of navigation for autonomous underwater vehicles (AUVs). However, in deep-sea scenarios, the DVL's output is the velocity relative to the ocean currents since it works in the water-tracking mode. The ocean-current velocity brings systematic errors to the SINS/DVL integrated navigation system, which leads to the degradation of navigation positioning accuracy. Therefore, estimating and compensating for the current velocity is an essential requirement for high-precision navigation performance and an important topic for oceanographic research. In the small ocean area where AUV works, the current velocity remains stable for a short time, and it can be considered a constant value. This paper proposes an optimization-based alignment (OBA) method for estimating the ocean-current velocity based on the GNSS (global navigation satellite system)/SINS/DVL integrated system. The simulation results show that the proposed method can estimate the current velocity accurately and does not depend on the initial value compared with the traditional KF method.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"65 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133426198","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125953
Xiaotian Zhang, Hiroki Goto, Shouhei Shirafuji, Keiji Okuhara, Noritaka Takamura, Naoya Kagawa, Hiroyasu Baba, Jun Ota
A dedicated device or laser tracker can calibrate robot joint offsets accurately, but both require a cumbersome setup for daily calibration. Offset calibration using a hand-eye camera and a marker is more convenient; however, calibration accuracy is limited by the accuracy of pose estimation with a hand-eye camera and a marker. This paper proposes a method for optimizing the camera poses used to capture the marker during joint-offset calibration with a hand-eye camera, in order to increase the calibration accuracy. We propose an index that evaluates how pose-estimation error from the camera images affects the offset calibration. Offset calibration using marker measurements taken at the poses obtained by maximizing the proposed index achieved higher accuracy than other approaches.
{"title":"Measurement Pose Optimization for Joint Offset Calibration with a Hand-Eye Camera","authors":"Xiaotian Zhang, Hiroki Goto, Shouhei Shirafuji, Keiji Okuhara, Noritaka Takamura, Naoya Kagawa, Hiroyasu Baba, Jun Ota","doi":"10.1109/ICARA56516.2023.10125953","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125953","url":null,"abstract":"A specified device or laser tracker accurately calibrates robot joints' offsets, but they require bothering setup for daily calibration. The offset calibration using a hand-eye camera and a marker makes it more convenient. However, the calibration accuracy reduces according to the limitation in pose estimation accuracy using a hand-eye camera and a marker. This paper proposes a method for optimizing the camera poses to capture the marker in joint offset calibration using a hand-eye camera to increase the calibration accuracy. We proposed an index to evaluate the effect of the error in the pose estimation based on the camera images on the offset calibration. The offset calibration with the marker measurement at the poses, which obtained the optimization to maximize the proposed index, realized higher accuracy than other approaches.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"1141 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116410393","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-10 | DOI: 10.1109/ICARA56516.2023.10125943
K. Omer, A. Monteriù
Autonomous mobile robots are becoming a common sight in smart homes and industries, where they are used for tasks such as cleaning and material handling. In some cases it is necessary to restrict their movement to certain areas or rooms during specific times or for specific tasks, in order to ensure safety and prevent interference. Traditional methods for creating virtual walls and borders to accomplish this are often limited in terms of ease of use, universality, and remote accessibility. To overcome these issues, this work presents a method for creating virtual walls, doors, and borders using the Robot Operating System (ROS) and point cloud data. The method can be used to restrict the movement of autonomous mobile robots in human-centered environments, including smart homes and warehouses, and can be activated and deactivated remotely. It is a flexible and cost-effective solution that does not require specialized sensors and can be used with single- or multi-robot systems. The effectiveness of our method has been demonstrated through testing in simulations based on reconstructions of real environments.
{"title":"An Effective Method for Creating Virtual Doors and Borders to Prevent Autonomous Mobile Robots from Entering Restricted Areas","authors":"K. Omer, A. Monteriù","doi":"10.1109/ICARA56516.2023.10125943","DOIUrl":"https://doi.org/10.1109/ICARA56516.2023.10125943","url":null,"abstract":"Autonomous mobile robotics are becoming a common sight in smart homes and industries, where they are used for tasks such as cleaning and material handling. In some cases, it is necessary to restrict the movement of these robots to certain areas or rooms during specific times or for specific tasks, in order to ensure safety and prevent interference. Traditional methods for creating virtual walls and borders to accomplish this often have limitations in terms of ease of use, universality, and remote accessibility. In order to overcome these issues, this work presents a method for creating virtual walls, doors, and borders using the Robotic Operating System (ROS) and point cloud data. This method can be used to restrict the movement of autonomous mobile robots in human-centered environments, including smart homes and warehouses, and can be activated and deactivated remotely. It is a flexible and cost-effective solution that does not require specialized sensors and can be used with single or multi-robot systems. The effectiveness of our method has been demonstrated through testing in simulation based on the reconstruction of real environments.","PeriodicalId":443572,"journal":{"name":"2023 9th International Conference on Automation, Robotics and Applications (ICARA)","volume":"135 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-02-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"117220139","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}