Pub Date: 2023-11-03. DOI: 10.1142/s0129065723500685. International Journal of Neural Systems.
A few-shot transfer learning approach for motion intention decoding from electroencephalographic signals
Nadia Mammone, Cosimo Ieracitano, Rossella Spataro, Christoph Guger, Woosang Cho, Francesco Carlo Morabito
Pub Date: 2023-11-01 (Epub 2023-09-30). DOI: 10.1142/S0129065723500582. International Journal of Neural Systems, article 2350058.
Human Gait Activity Recognition Using Multimodal Sensors
Diego Teran-Pineda, Karl Thurnhofer-Hemsi, Enrique Domínguez
Human activity recognition is an application of machine learning that aims to identify activities from raw data gathered by different sensors. In medicine, human gait is commonly analyzed by doctors to detect abnormalities and determine possible treatments for the patient. Monitoring the patient's activity is paramount in evaluating the treatment's evolution. This type of classification is still not precise enough, which may lead to unfavorable reactions and responses. To improve human activity classification based on accelerometer data, a novel methodology is proposed that reduces the complexity of extracting features from multimodal sensors. A sliding-window technique is used to demarcate the first dominant spectral amplitude, decreasing dimensionality and improving feature extraction. In this work, we compared several state-of-the-art machine learning classifiers evaluated on the HuGaDB dataset and validated on our own dataset. Several configurations to reduce features and training time were analyzed using multimodal sensors: all-axis spectrum, single-axis spectrum, and sensor reduction.
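The sliding-window dominant-amplitude idea above can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the window length, step, Hann taper, and the synthetic accelerometer signal are all assumptions made for the example.

```python
import numpy as np

def dominant_spectral_amplitude(signal, window_size, step):
    """Slide a window over a 1D accelerometer trace and keep, per window,
    the frequency bin with the largest FFT magnitude — an illustrative
    stand-in for the paper's first-dominant-amplitude feature."""
    features = []
    for start in range(0, len(signal) - window_size + 1, step):
        window = signal[start:start + window_size]
        spectrum = np.abs(np.fft.rfft(window * np.hanning(window_size)))
        spectrum[0] = 0.0  # drop the DC component before picking the peak
        peak_bin = int(np.argmax(spectrum))
        features.append((peak_bin, spectrum[peak_bin]))
    return np.array(features)

# Synthetic "gait" oscillation: 10 Hz sine sampled at 100 Hz for 4 s
fs = 100
t = np.arange(0, 4, 1 / fs)
acc = np.sin(2 * np.pi * 10 * t)
feats = dominant_spectral_amplitude(acc, window_size=fs, step=fs // 2)
# With a 1 s window the bin index equals the frequency in Hz: bin 10
print(feats[:, 0])
```

Each window thus collapses to a single (frequency, amplitude) pair, which is the dimensionality reduction the abstract describes before the features reach a classifier.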
Pub Date: 2023-11-01 (Epub 2023-10-04). DOI: 10.1142/S0129065723500594. International Journal of Neural Systems, article 2350059.
An Integrated Neurorobotics Model of the Cerebellar-Basal Ganglia Circuitry
Jhielson M Pimentel, Renan C Moioli, Mariana F P De Araujo, Patricia A Vargas
This work presents a neurorobotics model of the brain that integrates the cerebellum and the basal ganglia to coordinate movements in a humanoid robot. This cerebellar-basal ganglia circuitry is well known for its relevance to motor control in most mammals. Other computational models have been designed for similar applications in robotics; however, most of them completely ignore the interplay between neurons of the basal ganglia and the cerebellum. Recent neuroscience findings indicate that neurons from both regions communicate not only at the level of the cerebral cortex but also at the subcortical level. In this work, we built an integrated neurorobotics model to assess the capacity of the network to predict and adjust the motion of a robot's hands in real time. Our model was capable of performing different movements in a humanoid robot while respecting the robot's sensorimotor loop and the biophysical features of the neuronal circuitry. The experiments were executed both in simulation and in the real world. We believe that the proposed neurorobotics model can be an important tool for new studies of the brain and a reference toward new robot motor controllers.
Pub Date: 2023-11-01 (Epub 2023-09-29). DOI: 10.1142/S0129065723500570. International Journal of Neural Systems, article 2350057.
Unsupervised Domain Adaptive Dose Prediction via Cross-Attention Transformer and Target-Specific Knowledge Preservation
Jiaqi Cui, Jianghong Xiao, Yun Hou, Xi Wu, Jiliu Zhou, Xingchen Peng, Yan Wang
Radiotherapy is one of the leading treatments for cancer. To accelerate its clinical implementation, various deep learning-based methods have been developed for automatic dose prediction. However, the effectiveness of these methods heavily relies on the availability of a substantial amount of labeled data, i.e. the dose distribution maps, which cost dosimetrists considerable time and effort to acquire. For low-incidence cancers such as cervical cancer, it is often a luxury to collect enough labeled data to train a well-performing deep learning (DL) model. To mitigate this problem, we resort to an unsupervised domain adaptation (UDA) strategy to achieve accurate dose prediction for cervical cancer (target domain) by leveraging well-labeled, high-incidence rectal cancer data (source domain). Specifically, we introduce a cross-attention mechanism to learn domain-invariant features and develop a cross-attention transformer-based encoder to align the two cancer domains. Meanwhile, to preserve target-specific knowledge, we employ multiple domain classifiers to force the network to extract more discriminative target features. In addition, we employ two independent convolutional neural network (CNN) decoders to compensate for the lack of spatial inductive bias in the pure transformer and to generate accurate dose maps for both domains. Furthermore, to enhance performance, two additional losses, a knowledge distillation loss (KDL) and a domain classification loss (DCL), are incorporated to transfer domain-invariant features while preserving domain-specific information. Experimental results on a rectal cancer dataset and a cervical cancer dataset demonstrate that our method achieves the best quantitative results, with [Formula: see text], [Formula: see text], and HI of 1.446, 1.231, and 0.082, respectively, and outperforms other methods in qualitative assessment.
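The cross-domain alignment step can be illustrated with plain scaled dot-product cross-attention, where target-domain tokens query source-domain tokens. This is a single-head numpy sketch of the mechanism only; the paper's encoder is a full multi-head transformer, and the token counts, feature dimension, and random features below are assumptions for the example.

```python
import numpy as np

def cross_attention(target, source, d_k):
    """Scaled dot-product cross-attention: target-domain tokens (queries)
    attend over source-domain tokens (keys and values), pulling the target
    representation toward source-domain structure."""
    scores = target @ source.T / np.sqrt(d_k)      # (n_target, n_source)
    scores -= scores.max(axis=1, keepdims=True)    # softmax numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # rows sum to 1
    return weights @ source                        # (n_target, d_k)

rng = np.random.default_rng(0)
d = 8
cervical = rng.normal(size=(5, d))   # hypothetical target-domain features
rectal = rng.normal(size=(12, d))    # hypothetical source-domain features
aligned = cross_attention(cervical, rectal, d)
print(aligned.shape)  # (5, 8)
```

Each aligned target token is a convex combination of source tokens, which is why attention weights of this form can encourage domain-invariant features; the paper adds domain classifiers and CNN decoders on top of this core operation.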
Pub Date: 2023-10-27. DOI: 10.1142/s0129065724500047. International Journal of Neural Systems.
Variable projection support vector machines and some applications using adaptive Hermite expansions
Tamas Dozsa, Federico Deuschle, Bram Cornelis, Peter Kovacs
We introduce an extension of the classical support vector machine (SVM) classification algorithm with adaptive orthogonal transformations. The proposed transformations are realized through so-called variable projection operators. This approach allows the classifier to learn an informative representation of the data during the training process. Furthermore, choosing the underlying adaptive transformations correctly allows for learning interpretable parameters. Since the gradients of the proposed transformations with respect to the learnable parameters are known, we focus on training the primal form of the modified SVM objectives using a stochastic subgradient method. We also consider the possibility of using Mercer kernels with the proposed algorithms. We construct a case study using linear combinations of adaptive Hermite functions in which the proposed classification scheme outperforms the classical support vector machine approach. The proposed variable projection support vector machines provide a lightweight alternative to deep learning methods that incorporate automatic feature extraction.
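The Hermite-expansion side of this idea can be sketched as follows. The block builds the orthonormal Hermite functions by the standard three-term recurrence and projects a signal onto them; the `shift`/`scale` arguments stand in for the adaptive variable-projection parameters, which are held fixed here for brevity. The grid, the number of basis functions, and the toy signals are assumptions for the example.

```python
import numpy as np

def hermite_features(x, n_funcs, shift=0.0, scale=1.0):
    """Evaluate the first n_funcs orthonormal Hermite functions on a
    (shifted, dilated) grid; shift/scale play the role of the learnable
    variable-projection parameters."""
    t = (x - shift) / scale
    H = np.zeros((n_funcs, len(t)))
    H[0] = np.pi ** -0.25 * np.exp(-t ** 2 / 2)
    if n_funcs > 1:
        H[1] = np.sqrt(2.0) * t * H[0]
    for n in range(2, n_funcs):  # three-term recurrence for orthonormal Hermite functions
        H[n] = np.sqrt(2.0 / n) * t * H[n - 1] - np.sqrt((n - 1) / n) * H[n - 2]
    return H

def project(signal, basis):
    """Expansion coefficients of the sampled signal (up to the grid-spacing factor)."""
    return basis @ signal

# Symmetry check: an even signal has no odd-order coefficients and vice versa
x = np.linspace(-4, 4, 200)
basis = hermite_features(x, 4)
even = np.exp(-x ** 2)        # even signal -> projects onto even Hermite functions
odd = x * np.exp(-x ** 2)     # odd signal  -> projects onto odd Hermite functions
c_even, c_odd = project(even, basis), project(odd, basis)
print(c_even, c_odd)
```

In the paper's scheme the resulting coefficient vectors are the features fed to the SVM, and the subgradient method updates both the SVM weights and the transformation parameters; here only the fixed-parameter projection is shown.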
Pub Date: 2023-10-27. DOI: 10.1142/s0129065724500011. International Journal of Neural Systems.
Neonatal White Matter Damage Analysis using DTI Super-resolution and Multi-modality Image Registration
Yi Wang, Yuan Zhang, Chi Ma, Rui Wang, Zhe Guo, Yu Shen, Miaomiao Wang, Hongying Meng
Punctate white matter damage (PWMD) is a common neonatal brain disease that can easily cause neurological disorders and strongly affect quality of life in terms of neuromotor and cognitive performance. In particular, at the neonatal stage, the best treatment window can easily be missed because PWMD is difficult to diagnose with existing methods. PWMD lesions are relatively conspicuous on T1-weighted magnetic resonance imaging (T1 MRI), appearing as semi-oval, clustered, or linear high signals. Diffusion tensor magnetic resonance imaging (DT-MRI, referred to as DTI) is a noninvasive technique that can be used to study brain microstructure in vivo and provide information on movement- and cognition-related nerve fiber tracts. Therefore, a new method is proposed that combines T1 MRI with DTI for better neonatal PWMD analysis, based on DTI super-resolution and multi-modality image registration. First, after preprocessing, neonatal DTI super-resolution was performed with a cubic B-spline interpolation algorithm in the Log-Euclidean space, to raise the DTIs' resolution to match the T1 MRIs and to facilitate nerve fiber tractography. Second, the symmetric diffeomorphic registration algorithm and the inverse b0 image were selected for multi-modality registration of DTI and T1 MRI. Finally, the 3D lesion models were combined with fiber tractography results to analyze and predict the degree to which PWMD lesions affect fiber tracts. Extensive experiments demonstrated the effectiveness and superior performance of the proposed method. This streamlined technique can play an essential auxiliary role in diagnosing and treating neonatal PWMD.
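The Log-Euclidean interpolation that underlies the super-resolution step can be shown pointwise for two diffusion tensors: take the matrix logarithm, blend linearly, and map back with the matrix exponential. This is a sketch of the principle only; the paper's method applies B-spline-weighted blending of log-tensors over a 3D grid, and the two diagonal tensors below are hypothetical.

```python
import numpy as np

def logm_spd(T):
    """Matrix logarithm of a symmetric positive-definite tensor via eigendecomposition."""
    w, V = np.linalg.eigh(T)
    return V @ np.diag(np.log(w)) @ V.T

def expm_sym(L):
    """Matrix exponential of a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(L)
    return V @ np.diag(np.exp(w)) @ V.T

def log_euclidean_interp(T0, T1, alpha):
    """Interpolate two diffusion tensors in Log-Euclidean space, which keeps
    the result positive-definite and avoids the eigenvalue swelling of
    naive linear averaging."""
    return expm_sym((1 - alpha) * logm_spd(T0) + alpha * logm_spd(T1))

T0 = np.diag([3.0, 1.0, 1.0])   # hypothetical tensor: diffusion strongest along x
T1 = np.diag([1.0, 3.0, 1.0])   # hypothetical tensor: diffusion strongest along y
mid = log_euclidean_interp(T0, T1, 0.5)
# Midpoint eigenvalues are geometric means: sqrt(3) ~ 1.732 on the blended axes
print(np.round(np.diag(mid), 3))
```

Replacing the scalar weights `(1 - alpha, alpha)` with cubic B-spline weights over a neighborhood of voxels gives the upsampling scheme the abstract describes.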
Pub Date: 2023-10-01. DOI: 10.1142/S0129065723500533. International Journal of Neural Systems, volume 33, issue 10, article 2350053.
Effect of Action Units, Viewpoint and Immersion on Emotion Recognition Using Dynamic Virtual Faces
Miguel A Vicente-Querol, Antonio Fernández-Caballero, Pascual González, Luz M González-Gualda, Patricia Fernández-Sotos, José P Molina, Arturo S García
Facial affect recognition is a critical skill in human interactions that is often impaired in psychiatric disorders. To address this challenge, tests have been developed to measure and train this skill. Recently, virtual human (VH) and virtual reality (VR) technologies have emerged as novel tools for this purpose. This study investigates the unique contributions of different factors in the communication and perception of emotions conveyed by VHs. Specifically, it examines the effects of the use of action units (AUs) in virtual faces, the positioning of the VH (frontal or mid-profile), and the level of immersion in the VR environment (desktop screen versus immersive VR). Thirty-six healthy subjects participated in each condition. Dynamic virtual faces (DVFs), VHs with facial animations, were used to represent the six basic emotions and the neutral expression. The results highlight the important role of the accurate implementation of AUs in virtual faces for emotion recognition. Furthermore, frontal views outperform mid-profile views in both test conditions, while immersive VR shows a slight improvement in emotion recognition. This study provides novel insights into the influence of these factors on emotion perception and advances the understanding and application of these technologies for effective facial emotion recognition training.
Pub Date: 2023-09-29. DOI: 10.1142/s0129065723500661. International Journal of Neural Systems.
Self-supervised EEG representation learning with contrastive predictive coding for post-stroke
Fangzhou Xu, Yihao Yan, Jianqun Zhu, Xinyi Chen, Licai Gao, Yanbing Liu, Weiyou Shi, Yitai Lou, Wei Wang, Jiancai Leng, Yang Zhang
Stroke patients are prone to fatigue during EEG acquisition, and experiments place high demands on subjects' cognition and physical capabilities, so learning effective feature representations is very important. Deep learning networks have been widely used in motor imagery (MI)-based brain-computer interfaces (BCIs). This paper proposes a contrastive predictive coding (CPC) framework based on the modified S-transform (MST) to generate MST-CPC feature representations. MST is used to acquire temporal-frequency features to improve decoding performance for MI task recognition. EEG2Image is used to convert multi-channel one-dimensional EEG into two-dimensional EEG topography. High-level feature representations are generated by CPC, which consists of an encoder and an autoregressive model. Finally, the effectiveness of the generated features is verified with the k-means clustering algorithm. The model generates features efficiently and with a good clustering effect. In the classification performance evaluation, the average classification accuracy on MI tasks is 89% over 40 subjects. The proposed method obtains effective feature representations and improves the performance of MI-BCI systems. Comparing several self-supervised methods on the public dataset shows that the MST-CPC model achieves the highest average accuracy. This work advances the combination of self-supervised learning and image-based processing of EEG signals, and can help provide effective rehabilitation training for stroke patients to promote motor function recovery.
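The contrastive objective at the heart of CPC can be sketched with the InfoNCE loss: score a predicted representation against a set of candidate windows and maximize the softmax probability of the true one. This numpy sketch shows the loss only; the paper's encoder and autoregressive model are learned networks over MST images, and the random vectors below are stand-ins for their outputs.

```python
import numpy as np

def info_nce(pred, candidates, positive_idx):
    """InfoNCE objective used in contrastive predictive coding: the loss is
    the negative log-softmax score of the positive candidate, so minimizing
    it pushes the prediction toward the true future window and away from
    the distractors."""
    scores = candidates @ pred                          # dot-product similarities
    scores -= scores.max()                              # numerical stability
    log_probs = scores - np.log(np.exp(scores).sum())   # log-softmax over candidates
    return -log_probs[positive_idx]

rng = np.random.default_rng(1)
pred = rng.normal(size=16)                    # context-model prediction (hypothetical)
negatives = rng.normal(size=(7, 16))          # distractor windows
positive = pred + 0.1 * rng.normal(size=16)   # true future window, near the prediction
candidates = np.vstack([positive, negatives])
loss = info_nce(pred, candidates, positive_idx=0)
print(loss)  # small when the positive clearly outscores the negatives
```

After such pretraining, the encoder's representations are what the abstract evaluates with k-means clustering and downstream MI classification.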