Title: Unsupervised Similarity Based Convolutions for Handwritten Digit Classification
Authors: Tuğba Erkoç, M. T. Eskil
Published in: 2022 30th Signal Processing and Communications Applications Conference (SIU)
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864689

The success of Convolutional Neural Networks (CNNs) depends on the effective training of their filters. To achieve good classification results, filters must be carefully initialized, trained and fine-tuned. We propose an unsupervised method that discovers filters from a given dataset in a single epoch, without the need to specify the number-of-filters hyperparameter for the convolutional layers. The proposed method gradually builds the convolutional layers through a discovery routine that extracts a number of features adequate to represent the complexity of the input domain. Because the discovered filters represent patterns in the domain, they require neither an initialization scheme nor backpropagation-based fine-tuning. Our method achieves 99.03% accuracy on the MNIST dataset without any data augmentation.
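The discovery routine itself is not spelled out in the abstract, but its core idea — keep a patch as a new filter only when it is not already well represented by the filters found so far — can be sketched as follows. The function name, the cosine-similarity criterion and the threshold value are illustrative assumptions, not the authors' exact algorithm:

```python
import numpy as np

def discover_filters(images, patch_size=5, sim_threshold=0.9, stride=5):
    """Sketch of similarity-based filter discovery: scan image patches and
    keep a patch as a new filter only if it is not too similar to any filter
    kept so far. (Hypothetical simplification of the paper's routine.)"""
    filters = []
    for img in images:
        h, w = img.shape
        for i in range(0, h - patch_size + 1, stride):
            for j in range(0, w - patch_size + 1, stride):
                patch = img[i:i + patch_size, j:j + patch_size].astype(float)
                norm = np.linalg.norm(patch)
                if norm < 1e-8:            # skip flat/empty patches
                    continue
                patch = (patch / norm).ravel()
                # cosine similarity against every filter discovered so far
                if all(abs(np.dot(patch, f)) < sim_threshold for f in filters):
                    filters.append(patch)
    return np.array(filters).reshape(-1, patch_size, patch_size)
```

Because a patch is promoted to a filter only when no sufficiently similar filter exists, the filter count emerges from the data in a single pass rather than being fixed in advance.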
Title: Attention Modeling with Temporal Shift in Sign Language Recognition
Authors: Ahmet Faruk Celimli, Ogulcan Özdemir, L. Akarun
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864987

Sign languages are visual languages expressed with multiple cues, including facial expressions and upper-body and hand gestures. These visual cues can be used together or at different instants to convey a message. To recognize sign languages, it is crucial to model what, where and when to attend. In this study, we developed a model that uses different visual cues simultaneously by combining Temporal Shift Modules (TSMs) with attention modeling. Our experiments were conducted on the BosphorusSign22k dataset. Our system achieved 92.46% recognition accuracy, an improvement of approximately 14 points over the 78.85% accuracy of the baseline study.
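The Temporal Shift Module underlying this approach is simple to state: a fraction of the feature channels is shifted one step forward in time and another fraction one step backward, letting a 2D network exchange information across frames at no extra computational cost. A minimal NumPy sketch of the shift operation (the 1/8 channel fraction is the commonly used default for TSMs, not a value taken from this paper):

```python
import numpy as np

def temporal_shift(x, shift_div=8):
    """Temporal shift: move 1/shift_div of the channels one step forward in
    time and another 1/shift_div one step backward; leave the rest in place.
    x: array of shape (T, C) - T time steps, C channel features."""
    t, c = x.shape
    fold = c // shift_div
    out = np.zeros_like(x)
    out[1:, :fold] = x[:-1, :fold]                   # shift forward in time
    out[:-1, fold:2 * fold] = x[1:, fold:2 * fold]   # shift backward in time
    out[:, 2 * fold:] = x[:, 2 * fold:]              # remaining channels untouched
    return out
```

Vacated positions at the temporal boundaries are zero-filled, matching the usual TSM convention.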
Title: Classification of Egyptian Fruit Bat Calls with Deep Learning Methods
Authors: Dogukan Mesci, Anil Koluacik, B. Yılmaz, Melih Sen, E. Masazade, V. Beskardes
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864713

Bats are of great importance for the survival of all living beings and for biodiversity. This study aims to classify the collective calls of the Egyptian fruit bat, whose northernmost distribution is in Turkey, using the deep learning methods CNN and LSTM together with MFCC (Mel Frequency Cepstral Coefficient) features. Classifying species-specific calls makes it possible to observe the habitat preference, social relations, foraging, reproduction, mobility and migration of the species. The classification results obtained in this study show significant improvements over the previous study, especially in distinguishing certain calls.
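The MFCC features used here follow a standard pipeline: frame the signal, window it, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. A self-contained NumPy sketch of that pipeline (parameter values are common illustrative defaults, not the settings used in the paper):

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=256, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: frame -> Hann window -> power spectrum ->
    triangular mel filterbank -> log -> DCT-II. Simplified (no pre-emphasis,
    no liftering)."""
    # frame the signal with a Hann window
    frames = []
    for start in range(0, len(signal) - n_fft + 1, hop):
        frames.append(signal[start:start + n_fft] * np.hanning(n_fft))
    frames = np.array(frames)
    # power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # triangular mel filterbank spanning 0 .. sr/2
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    logmel = np.log(power @ fbank.T + 1e-10)
    # DCT-II over the mel bands yields the cepstral coefficients
    n = np.arange(n_mels)
    basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2.0 * n_mels)))
    return logmel @ basis.T
```

The resulting (frames × coefficients) matrix is the kind of feature map a CNN or LSTM classifier would consume.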
Title: Hyperspectral Anomaly Detection with Multivariate Skewed t Background Model
Authors: K. Kayabol, Ensar Burak Aytekin, Sertaç Arisoy, E. Kuruoğlu
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864954

In this paper, an autoencoder-based multivariate skewed t-distribution model is proposed for hyperspectral anomaly detection. The reconstruction error between the hyperspectral images reconstructed by the autoencoder and the original images is computed and modeled with a multivariate skewed t-distribution. The parameters of the distribution are estimated with the variational Bayes approach, and a distribution-based rule is derived for anomaly detection. The experimental results show that the proposed method outperforms the RX, LRASR and DAEAD anomaly detection methods.
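The detection rule follows a common pattern: fit a background distribution to the autoencoder's reconstruction errors and flag pixels whose errors are improbable under it. The sketch below substitutes a plain multivariate Gaussian with maximum-likelihood estimates for the paper's skewed t-distribution and variational Bayes estimation, and scores pixels by Mahalanobis distance — a simplification for illustration only:

```python
import numpy as np

def anomaly_scores(errors):
    """Background-model anomaly scoring sketch: fit a multivariate Gaussian
    (a stand-in for the paper's skewed t-distribution) to reconstruction
    errors and return each sample's squared Mahalanobis distance; large
    values indicate anomalies. errors: (N, D) reconstruction-error vectors."""
    mu = errors.mean(axis=0)
    cov = np.cov(errors, rowvar=False) + 1e-6 * np.eye(errors.shape[1])
    inv = np.linalg.inv(cov)
    diff = errors - mu
    # per-row quadratic form diff^T * inv(cov) * diff
    return np.einsum('nd,de,ne->n', diff, inv, diff)
```

A detection threshold on these scores (e.g. a chi-squared quantile under the Gaussian assumption) then plays the role of the paper's distribution-based rule.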
Title: Çok Dilli Sesten Metne Çeviri Modelinin İnce Ayar Yapılarak Türkçe Dilindeki Başarısının Arttırılması (Increasing Performance in Turkish by Fine-tuning a Multilingual Speech-to-Text Model)
Authors: Ö. Mercan, Umut Özdil, Sükrü Ozan
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864728

This study aims to automatically transcribe phone calls between customers and a company's customer representatives. The dataset was created from audio files taken from open-source platforms and from company personnel reading short texts of various content. In addition to the labelled data, approximately 28,000 unlabelled recordings were labelled, yielding a total of 37,534 audio samples for training the speech-to-text model. The Wav2Vec2-XLSR-53 model, pre-trained on 53 languages, was fine-tuned on our Turkish dataset. The fine-tuned model produces successful speech-to-text results on data not used for training or validation. The model is shared as open source on HuggingFace so that it can be used and tested on similar speech-to-text problems.
Title: Investigation of Appropriate Classification Method for EOG Based Human Computer Interface
Authors: Muna Layth Abdulateef Al-Zubaidi, Selim Aras
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864953

Real feelings and mood changes can be read from our eyes because, of all human communication signals, the eyes provide the most revealing and accurate information. Since the eyes play such an important role in communication, a human-computer interface can be controlled through voluntary eye movements. In this study, appropriate feature and classification methods were investigated for using electrooculography (EOG) signals obtained from seven different voluntary eye movements in a human-computer interface. System accuracy was increased by using the sequential forward feature selection method to determine the combination of features that gives the best result. The developed method reached 93.9% accuracy on the seven-class dataset. The results show that a human-computer interface can be controlled with high accuracy using voluntary eye movements, and they also encourage the development of a real-time working model.
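Sequential forward feature selection is a greedy loop: starting from an empty set, repeatedly add the single feature whose inclusion most improves the score. A compact sketch follows; the `centroid_score` criterion is a toy stand-in for the paper's classifier accuracy, not its actual scoring function:

```python
import numpy as np

def sfs(X, y, score_fn, k):
    """Sequential forward feature selection: greedily add, one at a time,
    the feature whose inclusion maximizes score_fn(X[:, selected], y)."""
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best_f, best_s = None, -np.inf
        for f in remaining:
            s = score_fn(X[:, selected + [f]], y)
            if s > best_s:
                best_f, best_s = f, s
        selected.append(best_f)
        remaining.remove(best_f)
    return selected

def centroid_score(Xs, y):
    """Toy scoring function: nearest-class-centroid training accuracy."""
    classes = np.unique(y)
    cents = np.array([Xs[y == c].mean(axis=0) for c in classes])
    d = ((Xs[:, None, :] - cents[None, :, :]) ** 2).sum(-1)
    return (classes[d.argmin(1)] == y).mean()
```

In practice the score function would be cross-validated classifier accuracy, and `k` (or a stopping rule) bounds the size of the selected feature combination.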
Title: Face Frontalization for Image Set Based Face Recognition
Authors: Golara Ghorban Dordinejad, Hakan Cevikalp
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864911

Image set based face recognition has recently become a popular topic, as it performs better than single-image face recognition. However, preprocessing is needed to remove the effects of adverse conditions within a set, such as differences in pose angle, illumination and expression. One of the most effective preprocessing steps for improving the recognition rate is face frontalization, defined as artificially synthesizing a frontal view of a face from an image taken at a different pose angle; this process has been observed to increase face recognition performance. In this paper, image set based face recognition was performed by applying face frontalization to all images in the sets. First, the faces in the IJBA database were frontalized using the Rotate-and-Render hybrid frontalization method, which is based on a three-dimensional model and a Generative Adversarial Network. Then, a discriminative convex classifier was used for set based recognition. In the face recognition experiments, comparing the frontalized IJBA database with its non-frontalized version showed that the frontalized face images increase recognition accuracy.
Title: Classification of Breast Cancer Histopathological Images with Deep Transfer Learning Methods
Authors: Cemal Efe Tezcan, Berk Kiras, G. Bilgin
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864846

Detecting cancerous cells in histopathological images with high accuracy is very important: the more sensitively cancerous cells are detected, the greater the chance of an accurate and early diagnosis, which is a crucial preliminary step in treatment. In this study, classification performance was comparatively analyzed by applying various methods to four breast tissue classes (benign, normal, carcinoma in situ and invasive carcinoma). Using the BACH and Bioimaging datasets, the regions of interest are first extracted with several image processing methods (pyramid mean shift, line detection, spreading). After obtaining images of different sizes, classification performance is examined with the VGG16, DenseNet121, ResNet50, MobileNetV2 and InceptionResNetV2 CNN deep transfer learning methods.
Title: Coplanar-Waveguide Fed Microstrip Dual-Band Bandstop Filters with Inductively Coupled Dual-Mode Ring Resonators
Authors: E. G. Sahin
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864825

This paper presents a novel microstrip dual-band bandstop filter design using inductively coupled, coplanar-waveguide-fed (CPW) dual-mode resonators. Two nested dual-mode square-loop resonators with perturbation elements on the microstrip layer produce the first and second bands. In the proposed design method, inductively coupled CPW rectangular rings excite the degenerate modes of the resonators to obtain the dual-band bandstop filtering structure. Two perturbation elements control the reflection zeros of the first and second bands, so two different filter responses are obtained by repositioning the reflection zeros. Charge distributions at the reflection zeros and poles of the structure are investigated for each stopband to exhibit the mode characteristics. Two filters were simulated with a full-wave EM simulator, then implemented and measured. Although production losses are high as a result of the CPW-fed structure, the measurement results are in good agreement with the simulations.
Title: Color Image Enhancement Using A New Anisotropic Metric
Authors: Haydar Kiliç, S. Ceyhan
Pub Date: 2022-05-15 | DOI: 10.1109/SIU55565.2022.9864950

In this study, a new anisotropic metric for color images is defined and the filtering results on a noisy image are examined. Unlike previous (Riemannian) approaches, the metric used for filtering is of Finsler type, and the mathematical derivation is carried through to the construction of the filter. The scale parameter beta and the step size dt were tried on different images, and the parameter values giving the best results for the new metric were determined. The new filter was compared with several known filters, and in these comparisons it provided the best image enhancement.
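For context, the classical anisotropic filter that Finsler-type metrics generalize is Perona-Malik diffusion, where an edge-stopping conduction function slows smoothing across strong gradients while `beta` sets the edge scale and `dt` the step size. The grayscale sketch below illustrates that baseline only; it is not the paper's Finsler-metric filter, and the `beta`/`dt` values here are illustrative:

```python
import numpy as np

def perona_malik(img, n_iter=20, dt=0.15, beta=10.0):
    """Classical anisotropic (Perona-Malik) diffusion on a 2D grayscale
    image: diffuse toward the four neighbours, attenuated by an
    edge-stopping conduction function g so that strong edges are preserved.
    dt must stay below 0.25 for stability of the explicit scheme."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / beta) ** 2)   # edge-stopping conduction
    for _ in range(n_iter):
        # differences toward the four neighbours (periodic boundary via roll)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        u += dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u
```

For color images the same scheme is applied per channel or, as in metric-based formulations, with a conduction term coupling the channels through the chosen image metric.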