Ying Li, Jian Chen, Zhihai Su, Jinjin Hai, Ruoxi Qin, Kai Qiao, Hai Lu, Binghai Yan
Lumbar spine diseases are affecting increasingly younger patients, and with the aging of the population, clinicians face growing pressure in detecting them. An AI-based diagnosis system for lumbar spine diseases using magnetic resonance imaging (MRI) has therefore become a sustainable solution for early diagnosis. However, a large body of work has shown that neural networks are fragile on unseen data distributions. This paper therefore proposes an adversarial training-based robust diagnosis method for lumbar disc herniation to address the fragility of deep models under specific small perturbations. By enhancing the model's robustness to such perturbations through adversarial training, the deep network can correctly classify perturbed lumbar spine MRI data. The model uses ResNet50; adversarial examples containing adversarial perturbations are generated during training, the network is trained jointly on normal and adversarial examples, and Mixup augmentation is applied to further enhance robustness from the data-augmentation perspective. Verified through 5-fold cross-validation, the method significantly improves robustness under adversarial perturbations (average recognition accuracy increased from 50.14% to 71.07%) while maintaining high recognition accuracy on normal samples (ours/baseline: 89.14%/89.05%).
{"title":"Adversarial training-based robust diagnosis method for lumbar disc herniation","authors":"Ying Li, Jian Chen, Zhihai Su, Jinjin Hai, Ruoxi Qin, Kai Qiao, Hai Lu, Binghai Yan","doi":"10.1117/12.3001430","DOIUrl":"https://doi.org/10.1117/12.3001430","url":null,"abstract":"Currently, lumbar spine diseases are becoming increasingly young, and with the aging of the population, clinical doctors are facing increasing pressure in detecting lumbar spine diseases. Therefore, an AI-based diagnosis system for lumbar spine diseases using nuclear magnetic images (MRI) has become a sustainable solution for early diagnosis. However, a large amount of work has shown the fragility of neural networks in unseen data distributions. Therefore, this paper proposes an adversarial training-based robust diagnosis method for lumbar disc herniation to address the fragility issue of deep models under specific small perturbations. By enhancing the robustness of the model to specific perturbations through adversarial training, the deep network can correctly classify lumbar spine MRI data with perturbations. The deep network model uses ResNet50, with adversarial examples containing adversarial perturbations added during training, followed by joint training of normal and adversarial examples, and Mixup augmentation from the perspective of data augmentation to further enhance the model's robustness. Through 5-fold cross-validation training, this method was verified to significantly improve the robustness of the model under adversarial perturbations (average recognition accuracy increased from 50.14% to 71.07%), while maintaining high recognition accuracy for normal samples (our method/baseline: 89.14%/89.05%).","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128781851","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At present, object detection algorithms based on artificial intelligence play an increasingly important role in computer vision and in practical applications such as autonomous driving, urban monitoring, national defense, the military, and medical assistance. Unlike visible-light imaging, infrared imaging uses detectors to measure the difference in infrared radiation between an object and its background, overcoming low light intensity and enabling infrared object detection in low-light scenes. This paper reviews both traditional infrared object detection algorithms for low-light backgrounds and deep learning-based infrared object detection algorithms, compares representative classical algorithms, and analyzes model characteristics in the context of practical application scenarios. Finally, the difficulties and challenges currently facing infrared object detection are described, and future research directions are discussed.
{"title":"Review of infrared object detection algorithms for low-light background","authors":"Jianguo Wei, Y. Qu, Yanbin Ma","doi":"10.1117/12.3001327","DOIUrl":"https://doi.org/10.1117/12.3001327","url":null,"abstract":"At present, object detection algorithm using artificial intelligence technology plays an increasingly important role in the field of computer vision, and plays an extremely important role in such practical application scenarios as automatic driving, urban monitoring, national defense, military and medical assistance. Different from visible light imaging, infrared imaging technology uses detectors to measure the infrared radiation difference between the object itself and the background, overcoming the difficulty of low light intensity and realizing infrared object detection in the low-light scene. In this paper, the traditional infrared object detection algorithm for low light background and infrared object detection algorithm based on deep learning are reviewed, and the current representative classical algorithms are compared, and the characteristics of the model combined with the actual application scenarios are analyzed. Finally, the difficulties and challenges that the current infrared object detection task facing are described, and the research direction of infrared object detection is prospected.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127695262","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jialin Chen, Chunmei Ma, Y. Li, Shuaikun Fan, Rui Shi, Xi-ping Yan
Accurate semantic segmentation of retinal images is very important for intelligent diagnosis of eye diseases. However, the large number of tiny blood vessels and their uneven distribution in the retina pose many challenges for segmentation algorithms. In this paper, we propose a Hybrid Attention Fusion U-Net model (HAU-Net) for retinal blood vessel image segmentation. Specifically, we use the U-Net as the backbone network and introduce bridge attention to improve the efficiency of vessel feature extraction. In addition, we introduce channel attention and spatial attention modules at the bottom of the network to obtain coarse-to-fine feature representations of retinal vessel images, thereby improving segmentation accuracy. To verify the model's performance, we conducted extensive experiments on the DRIVE and CHASE_DB1 datasets; accuracy reaches 97.03% and 97.72%, respectively, outperforming CAR-UNet and MC-UNet.
{"title":"HAU-Net: hybrid attention U-NET for retinal blood vessels image segmentation","authors":"Jialin Chen, Chunmei Ma, Y. Li, Shuaikun Fan, Rui Shi, Xi-ping Yan","doi":"10.1117/12.3000792","DOIUrl":"https://doi.org/10.1117/12.3000792","url":null,"abstract":"Accurate semantic segmentation of retinal images is very important for intelligent diagnosis of eye diseases. However, the large number of tiny blood vessels and the uneven distribution of blood vessels in the retina pose many challenges to the segmentation algorithm. In this paper, we propose a Hybrid Attention Fusion U-Net model (HAU-Net) for segmentation of retinal blood vessel images. Specifically, we use the U-NET network as the backbone network, and bridge attention is introduced into the network to improve the efficiency of vessel feature extraction. In addition, we introduce channel attention and spatial attention modules at the bottom of the network, to obtain coarse-to-fine feature representation of retinal vessel images, so as to improve the accuracy of vascular image segmentation. In order to verify the model's performance, we conducted extensive experiments on DRIVE and CHASE_DB1 datasets, and the accuracy reach 97.03% and 97.72%, respectively, which are better than CAR-UNet and MC-UNet.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127736316","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
At present, image processing in remote sensing relies heavily on human prior knowledge and requires substantial manual effort for annotation. This paper therefore studies and applies a semantic segmentation method for remote sensing images based on a convolutional neural network. The data are standardized by subtracting the mean and dividing by the standard deviation, split into subsets, and augmented to further enrich the training data, after which the convolutional neural network is built and trained. Each level of the U-Net consists of three convolutional layers, and features are extracted and fused by pooling or up-sampling. The final layer classifies all previously extracted features into two classes to realize semantic segmentation of the image. Experimental results show that the method achieves an F1 score of 84.31%, recall of 89.59%, and precision of 79.62%. Introducing U-Net improves the semantic segmentation accuracy of remote sensing images over a traditional fully convolutional network: its stronger inter-layer connections, combined with up-sampling and convolution, allow features to be fully extracted and accurate segmentation to be achieved with fewer training samples.
{"title":"Semantic segmentation of remote sensing image based on U-NET","authors":"Li Yao, Simeng Jia, Ziqing Dai","doi":"10.1117/12.3001440","DOIUrl":"https://doi.org/10.1117/12.3001440","url":null,"abstract":"At present, the image processing of remote sensing technology mainly depends on the transcendental ability of human beings, and it needs to spend a lot of artificial resources to mark. Therefore, this paper proposes a research and application of semantic segmentation method for remote sensing images based on convolutional neural network. Normalize the data, subtract the mean value and divide it by the standard deviation to standardize, divide the data, introduce data enhancement to further enhance the training data, and create a convolutional neural network and a training network. Each layer of U-NET is composed of three layers of convolution, and features are extracted and integrated by pooling or up-sampling. At the last layer, all the previously extracted features are classified into two categories to realize the semantic segmentation of the image. The experimental results show that the F1 score, Recall score and Precision score of this method are 84.31%, 89.59% and 79.62%, respectively. By introducing U-NET, the semantic segmentation accuracy of remote sensing images is improved. Compared with the traditional full convolution neural network, U-NET has been improved. Through the stronger connection between layers, plus up-sampling and down-convolution, features can be fully extracted and accurate segmentation can be achieved with fewer training samples.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"284 ","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120870535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the poor noise immunity of video phase-based vibration measurement algorithms, this paper proposes an algorithm based on enhanced fast empirical mode decomposition and Hilbert phase-based motion estimation (EFEMD-HPME). EFEMD decomposes the multicomponent video image into single-component images, and the local phase information of each single-component image is then extracted through the Hilbert transform, which offers better noise immunity than band-pass-filter spectral decomposition. Experiments show that, compared with HPME, the proposed algorithm improves the signal-to-noise ratio by about 30% with a relative error of less than 0.5%, which is of great significance for improving the robustness of video vibration measurement in general measurement environments.
{"title":"Research on video vibration measurement based on fast two-dimensional empirical mode decomposition and Hilbert transform","authors":"Honglei Du, Z. Zhong","doi":"10.1117/12.3001022","DOIUrl":"https://doi.org/10.1117/12.3001022","url":null,"abstract":"Against the poor noise immunity of the vibration measurement algorithm based on video phase, this paper proposes an algorithm based on enhanced fast empirical mode decomposition and Hilbert phase-based motion estimation (EFEMD-HPME). EFEMD decomposes the multicomponent video image into a single-component image, and then the local phase information of single component image is extracted through Hilbert transform, which has superior noise immunity compared with the spectral decomposition technique of the band-pass filter. Experiments show that the algorithm proposed in this article has a signal-to-noise ratio improvement of about 30% and a relative error of less than 0.5% compared with the HPME, which is of great significance for improving the robustness of video vibration measurement in general measurement environments.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130809249","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
To address the insufficient accuracy and limited practical applicability of current Chinese image description methods, this paper proposes an evaluation method based on semantic constraints in the target domain. Unlike previous research, this method acts on the output stage of the model: after extracting the key semantics of the target application domain, the output is constrained either by the macroscopic semantic space of that domain or by introducing external semantic information from other visual tasks. Experiments show that the proposed method effectively improves the semantic coherence between the model's output descriptions and the input images in the target domain, and facilitates the practical application of image description in specific domains.
{"title":"Chinese image description evaluation method based on target domain semantic constraints","authors":"Zhenhao Wang, Wenyi Sun, Zhengsong Wang, Le Yang","doi":"10.1117/12.3000808","DOIUrl":"https://doi.org/10.1117/12.3000808","url":null,"abstract":"To address the problems of insufficient accuracy and difficulty of application in the current Chinese image description field, this paper proposes an evaluation method based on semantic constraints in the target domain. Unlike previous research, this method acts on the output stage of the model, and based on the extraction of key semantics in the target application domain, it is constrained by the macroscopic semantic space of that domain or by introducing external semantic information from other visual tasks. The experiments show that the proposed method effectively improves the semantic coherence between the model output description sentences and the input images in the target domain, and is helpful for the practical application of image description in specific domains.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"68 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114357573","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In the context of the mobile Internet, time series analysis has become an important way to capture data characteristics such as periodicity and correlation, and building a time series analysis model is an effective means of capturing data features despite the irregularity, nonlinearity, and weak feature relationships commonly found in sequences. In this paper, we use a convolutional neural network to extract latent features from the sequence and combine it with a long short-term memory (LSTM) network to analyze the temporal features in the data. Meanwhile, because of the gate structure of the LSTM, some noise in the data is introduced into the model during training, causing overfitting; an encode-decode reconstruction network structure is used to remove this noise and improve the accuracy of the model. Using CBS stock data as an example, we compare the model with existing algorithms and further demonstrate its higher accuracy on datasets from different domains.
{"title":"CNN-LSTM-VAE based time series trend prediction","authors":"Wei Li, Hui Gao, Zeqi Qin","doi":"10.1117/12.3000935","DOIUrl":"https://doi.org/10.1117/12.3000935","url":null,"abstract":"In the context of mobile Internet, time series analysis has become an important way to capture the characteristics of data such as periodicity and correlation. Establishing a temporal sequence analysis model as an effective means to capture data features, for the problems of irregularity, nonlinearity, and inconspicuous feature relationships that commonly occur in sequences. In this paper, we use convolutional neural network to extract the potential features in the sequence, and combine the long and short term memory network to analyze the temporal features in the data; meanwhile, due to the \"gate\" structure of the long and short term memory network, some noise in the data is introduced into the model for training, resulting in the overfitting problem. -The decode-reconstruction network structure is used to remove this noise and improve the accuracy of the model. In this paper, we use the stock data of CBS as an example and compare it with the existing algorithm model, based on which we demonstrate the higher accuracy of this algorithm with different domain data sets.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"123 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116815155","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Images, as significant carriers of memories and information, are highly valued. Restoring an image requires noise reduction to eliminate noise introduced by camera equipment and other factors. Traditional denoising techniques such as the wavelet transform have long been used for image restoration, and in recent years the introduction of convolutional neural networks has accelerated progress in denoising research. Researchers have developed many classic models using U-shaped networks and other techniques, often adopting multi-scale approaches to obtain multiple feature maps and enhance the network with these features. Our work enhances a denoising network by introducing large convolutions, small convolutions, and Fast Fourier convolutions to capture feature information at different scales. Additionally, we use an SE block to introduce an attention mechanism into the network. Experimental results show that our network achieves outstanding performance.
{"title":"A multi-scale branch convolutional neural network for denoising","authors":"Chunyu Wang, Xuesong Su","doi":"10.1117/12.3000863","DOIUrl":"https://doi.org/10.1117/12.3000863","url":null,"abstract":"Images, being significant carriers of memories and information, are valued by people. To restore images, it is necessary to perform noise reduction processing to eliminate noise generated by camera equipment and other factors. Traditional denoising technology such as wavelet transform is used to help engineer restore a image. And in recent years, the introduction of convolutional neural networks has accelerated the progress of noise reduction research. Many classic models have been developed by researchers using U-shaped networks and other techniques. Researchers often use multi-scale approaches to obtain multiple feature maps and enhance their network with these features. Our work enhanced denoising network by introducing large convolutions, small convolutions, and Fast Fourier convolutions to capture feature information at different scales. Additionally, we used an SE block to introduce attention mechanisms into the network. As evidenced by experimental results, our network achieved outstanding performance.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129871689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yichen Wang, Hu Sheng, Jing Li, Jiating Li, Jinxing Liu
With the rapid development of multimedia technology and the Internet, the security of multimedia data faces more and more threats. Digital watermarking can effectively address this problem, and compressed sensing theory, a theoretical foundation of the new generation of information processing, is widely used in digital watermarking to protect multimedia data against attacks. Based on compressed sensing theory and combined with the discrete wavelet transform (DWT) and singular value decomposition (SVD), this research realizes watermark embedding and extraction in the observation domain after sparsifying the image, and tests the watermarked image against different attacks such as salt-and-pepper noise, Gaussian noise, and filtering. The experimental data show that the algorithm's PSNR and NC values meet expectations and that it offers good robustness, transparency, and extraction performance under attack.
{"title":"Design of digital watermarking algorithm based on compression sensing","authors":"Yichen Wang, Hu Sheng, Jing Li, Jiating Li, Jinxing Liu","doi":"10.1117/12.3002802","DOIUrl":"https://doi.org/10.1117/12.3002802","url":null,"abstract":"With the rapid development of multimedia technology and the Internet, the security of multimedia data is facing more and more threats. Digital watermarking technology can effectively solve this problem, and compressed sensing theory is widely used in the field of digital watermarking. As the theoretical basis of the new generation of information processing, this theory can effectively solve the problem of multimedia data being attacked. Based on compressed sensing theory, combined with discrete wavelet transform and SVD decomposition, this research realized the embedding and extraction of watermark in the observation domain after image thinning, and tested different forms of watermark attacks such as salt pepper noise, Gaussian noise and filtering on the watermarked image. The experimental data shows that the PSNR value and NC of the algorithm are in line with expectations, and it has good robustness, transparency, and extraction resistance.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125537366","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the advent of the digital age, bill recognition has become necessary work for many enterprises, institutions, and individuals. This paper proposes a ticket recognition platform based on deep learning that can automatically recognize various types of tickets and process them quickly and accurately. The platform uses deep learning models such as convolutional neural networks, recurrent neural networks, and long short-term memory networks, combined with image processing and natural language processing techniques, to effectively solve the difficult problems in ticket recognition. Experimental results show that the platform achieves high accuracy and speed in ticket recognition and can meet the needs of practical applications.
{"title":"Research on ticket recognition platform based on deep learning algorithm","authors":"Gaode Cheng","doi":"10.1117/12.3003840","DOIUrl":"https://doi.org/10.1117/12.3003840","url":null,"abstract":"With the advent of digital age, bill recognition has become a necessary work for many enterprises, institutions and individuals. This paper proposes a ticket recognition platform based on deep learning algorithm, which can automatically recognize various types of tickets and realize fast and accurate processing. The platform uses deep learning algorithms such as convolutional neural network, cyclic neural network and long and short time memory network and combines image processing technology and natural language processing technology to effectively solve the difficult problems in ticket recognition. The experimental results show that the platform has high performance in ticket recognition accuracy and speed and can meet the needs of practical applications.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127835107","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}