J. Futrelle, Jeff Gaynor, J. Plutchak, J. Myers, R. McGrath, P. Bajcsy, Jason Kastner, Kailash Kotwani, J. Lee, Luigi Marini, R. Kooper, T. McLaren, Yong Liu
The Tupelo semantic content management middleware implements Knowledge Spaces that enable scientists to locate, use, link, annotate, and discuss data and metadata as they work with existing applications in distributed environments. Tupelo combines commonly used Semantic Web technologies for metadata management, content management technologies for data management, and workflow technologies for management of computation, and can interoperate with other tools through a variety of standard interfaces and a client and desktop API. Tupelo's primary function is to facilitate interoperability, providing a Knowledge Space "view" of distributed, heterogeneous resources such as institutional repositories, relational databases, and semantic web stores. Knowledge Spaces have driven recent work creating e-Science cyberenvironments to serve distributed, active scientific communities. Tupelo-based components deployed in desktop applications, on portals, and in AJAX applications interoperate to allow researchers to develop, coordinate, and share datasets, documents, and computational models, while preserving process documentation and other contextual information needed to produce a complete and coherent research record suitable for distribution and archiving.
"Semantic middleware for e-science knowledge spaces," Concurr. Comput. Pract. Exp. (2009-11-30). doi: 10.1145/1657120.1657124.
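As a toy illustration of the metadata-annotation idea (not Tupelo's actual API, which is not shown here), RDF-style subject-predicate-object triples can link a dataset to its annotations; all identifiers below are invented:

```python
# Minimal in-memory triple store illustrating RDF-style annotation of a
# dataset, in the spirit of a semantic content repository. This is a toy
# sketch, not Tupelo's API; every name below is invented for illustration.

class TripleStore:
    def __init__(self):
        self.triples = set()  # (subject, predicate, object) tuples

    def add(self, s, p, o):
        self.triples.add((s, p, o))

    def query(self, s=None, p=None, o=None):
        """Return all triples matching the given pattern (None = wildcard)."""
        return [t for t in self.triples
                if (s is None or t[0] == s)
                and (p is None or t[1] == p)
                and (o is None or t[2] == o)]

store = TripleStore()
store.add("urn:dataset/42", "dc:title", "Flood simulation output")
store.add("urn:dataset/42", "dc:creator", "J. Doe")
store.add("urn:dataset/42", "ex:annotatedBy", "urn:person/7")

# Find every statement attached to the dataset.
annotations = store.query(s="urn:dataset/42")
```

Because data, metadata, and annotations share one triple model, the same wildcard query mechanism serves linking, annotation, and discussion alike.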
Aspect-based sentiment analysis has gained wide popularity because it supports text extraction, classification, and ranking of the overall sentiment of each extracted feature. However, aspect-based feature extraction techniques often acquire multiple aspects that refer to the same feature, which creates the need for aspect-based text classification. Since most existing techniques focus on monolingual aspect-based sentiment analysis, we develop a multilingual aspect-based text classification approach for Indian languages. We perform multilingual aspect-based text classification on morphologically rich and complex languages such as Hindi, Tamil, Malayalam, Bengali, Urdu, Telugu, and Sinhalese. To achieve this objective, this article presents a bidirectional long short-term memory network with optimized rectified linear unit layers (reLU-BiLSTM). The parameters of the reLU-BiLSTM architecture are optimized using the local search-based five-element cycle optimization (LSFECO) algorithm. Initially, the proposed model preprocesses the multilingual review texts using techniques such as tokenization, special-character removal, and text normalization. Discrete and categorical features from the different languages are extracted by applying the bidirectional encoder representations from transformers (BERT) model, which processes the sentences in the text layer by layer. The context and word embeddings (aspects) present in the text are identified using approaches such as word mover's distance, continuous bag-of-words (CBOW), and cosine similarity. The LSFECO-optimized reLU-BiLSTM architecture classifies the aspects present in the embedded document into their corresponding classes (flowers, plants, animals, sports, politics, etc.). The efficiency of the proposed methodology is evaluated on text obtained from sources such as semantic relations from Wikipedia, the Habeas Corpus (HC) Corpora, Sentiment Lexicons for 81 Languages, the IIT Bombay English-Hindi Parallel Corpus, and the Indic Languages Multilingual Parallel Corpus. Compared to conventional techniques, the proposed methodology performs better in terms of entropy, coverage, purity, processing time, accuracy, F1-score, recall, and precision.
"Local search five-element cycle optimized reLU-BiLSTM for multilingual aspect-based text classification," K. S. Kumar and C. Sulochana, Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.7374.
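One of the similarity measures the abstract names, cosine similarity, can be used to detect aspects that refer to the same feature; a minimal bag-of-words sketch (the example phrases are invented):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine similarity between bag-of-words vectors of two strings."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Two review aspects describing the same feature score high ...
sim_same = cosine_similarity("battery life is long", "battery life is short")
# ... while unrelated aspects score low.
sim_diff = cosine_similarity("battery life is long", "screen colours look vivid")
```

In practice the vectors would come from CBOW or BERT embeddings rather than raw word counts, but the similarity computation is the same.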
In today's world, cloud computing is an emerging service, and it has proved to be a profit-oriented business model. Cloud computing grew drastically during the last decade because of its easy access to services. As the number of users increases dynamically, load balancing is required to handle the users' load. Load balancing algorithms minimize the data center processing time and increase throughput, so cloud service providers need effective dynamic strategies. In this work, the proposed Efficient Throttled load balancing algorithm improves performance in the data center environment. Our experimental results indicate that the Efficient Throttled load balancing algorithm improves the average overall response time by 6.47% and the average data center processing time by 20.74% compared to the Throttled load balancing algorithm.
"Efficient Throttled load balancing algorithm to improve the response time and processing time in data center," B. RamanaReddy and M. Indiramma, Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.7208.
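The Throttled baseline that the paper improves on keeps a VM availability table and sends each request to the first free VM; a minimal sketch of that baseline behaviour (VM names and the one-request-per-VM capacity are illustrative assumptions):

```python
class ThrottledBalancer:
    """Baseline throttled load balancing sketch: each VM serves at most
    one request at a time; a request goes to the first available VM."""

    def __init__(self, vm_ids):
        self.available = {vm: True for vm in vm_ids}  # VM availability table

    def allocate(self):
        for vm, free in self.available.items():
            if free:
                self.available[vm] = False  # mark busy
                return vm
        return None  # all VMs busy: request must wait

    def release(self, vm):
        self.available[vm] = True  # VM finished its request

lb = ThrottledBalancer(["vm-0", "vm-1"])
first = lb.allocate()   # "vm-0"
second = lb.allocate()  # "vm-1"
third = lb.allocate()   # None: both busy
lb.release("vm-0")
fourth = lb.allocate()  # "vm-0" again
```

The Efficient variant in the paper refines how the table is scanned and maintained to cut response and processing time; that refinement is not reproduced here.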
Xinhui Zhao, Zehui Wu, Xiaobin Song, Qingxian Wang
Software-defined networking (SDN) has three defining features: separation of control and forwarding, unified configuration management, and dynamic programmability. These have greatly improved the flexibility of network deployment, the dynamics of network management, and the efficiency of network transmission. However, SDN's security problems remain serious. This paper proposes a new security defense method based on a coloring distribution model, addressing a shortcoming of current research: it does not change the weak security, determinism, static nature, and homogeneity of SDN. Motivated by the idea of moving target defense, our method abstracts the network topology of SDN using coloring theory and realizes diversified deployment of controllers and switches, improving the security of the network itself without changing the structure of SDN. Simulation results show that our method can prevent denial-of-service (DoS) attacks against controllers and switches and, at the same time, effectively block worm propagation via switches, one of the most serious threats to smart cities.
"Secure analysis on entire software-defined network using coloring distribution model," Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.5541.
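The coloring idea can be sketched with greedy graph coloring: adjacent switches receive different "colors" (for example, different software variants), so one exploit cannot hop directly between neighbors. The tiny topology below is invented for illustration and is not the paper's distribution model itself:

```python
def greedy_coloring(adjacency):
    """Assign each node the smallest color not used by an already-colored
    neighbor, so no two adjacent nodes share a color (variant)."""
    colors = {}
    for node in adjacency:
        used = {colors[n] for n in adjacency[node] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# A small switch topology forming a triangle: s1 -- s2 -- s3 -- s1.
topology = {
    "s1": ["s2", "s3"],
    "s2": ["s1", "s3"],
    "s3": ["s1", "s2"],
}
variants = greedy_coloring(topology)
# Every pair of directly linked switches runs a different variant.
```

Greedy coloring is a standard heuristic; the paper's contribution lies in how the color distribution is chosen and deployed across controllers and switches.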
Internet of Things (IoT) botnet attacks are considered an important risk to information security. This work focuses on detecting botnet attacks targeting various IoT devices. Feature generation and classification are the two major processes considered for attack detection. A generative adversarial network (GAN), consisting of a generator and a discriminator, is applied for feature generation. An effective generator network is introduced by adding convolution layers with batch normalization and the rectified linear unit activation function. The proposed system also introduces a novel network, the data perception network, with a scale-fused architecture. The data perception network determines the generator's efficiency in generating fake data similar to the original data, and it is also used to estimate the loss function by analyzing data at different scales. The major strength of this network is therefore that the synthesized data it validates are highly reliable. An efficient architecture called the scale fused bidirectional long short term memory attention model (SFBAM) is applied for classification. The proposed model is evaluated on the IoT-23 dataset and differentiates between benign and malicious data in IoT attacks. Compared to existing models, it provides effective results, improving accuracy and reducing loss.
"Effective Internet of Things botnet classification by data upsampling using generative adversarial network and scale fused bidirectional long short term memory attention model," K. Geetha and H. BrahmanandaS., Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.7380.
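A full GAN is too large to sketch here, but the underlying idea of upsampling a minority (attack) class with synthetic samples can be illustrated with simple noise perturbation, which a trained generator would replace with learned synthesis; the feature vectors and noise level are invented:

```python
import random

def upsample(minority, target_size, noise=0.05, seed=0):
    """Grow a minority class to target_size by adding jittered copies of
    existing samples. This jitter is only a stand-in for illustration;
    a trained GAN generator would synthesize samples instead."""
    rng = random.Random(seed)
    synthetic = list(minority)
    while len(synthetic) < target_size:
        base = rng.choice(minority)
        synthetic.append([x + rng.gauss(0.0, noise) for x in base])
    return synthetic

# Two attack-traffic feature vectors upsampled to match a larger benign class.
attack = [[0.9, 0.1, 0.4], [0.8, 0.2, 0.5]]
balanced = upsample(attack, target_size=6)
```

Balancing the classes before training the classifier is what lets the SFBAM stage learn minority attack patterns without being swamped by benign traffic.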
Diganta Kumar Pathak, S. Kalita, D. K. Bhattacharya
Hyperspectral sensors generate huge datasets that convey an abundance of information but pose many challenges for analysis and interpretation. Deep networks such as VGG16 and VGG19 are difficult to apply directly to hyperspectral image (HSI) classification because their large number of layers demands substantial system resources. This article suggests a novel framework with fewer layers for hyperspectral image classification (HSIC) that takes into account the spectral-spatial context sensitivity of HSI and focuses on enhancing the discriminating capability of HSIC. The model uses the available spectral features as well as the spatial contexts of HSI and consecutively learns the distinctive features. A small training set is used to optimize the network parameters, while the overfitting problem is alleviated using the validation set. Regularization is performed with a batch normalization (BN) layer after each convolution layer. The cost of the model is measured in terms of training and testing time on the same platform and is compared with some ensemble learning methods, SVM, and three other recent state-of-the-art methods. Experimental results establish that the proposed model performs very well on the three benchmark datasets: Indian Pines, Salinas, and University of Pavia, which mostly contain land cover of agriculture, forest, soil, and rural and urban areas.
"Spectral spatial joint feature based convolution neural network for hyperspectral image classification," Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.6547.
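Spectral-spatial inputs of this kind are typically built by taking a small spatial patch around each pixel while keeping the full spectral depth; a sketch (the cube dimensions and patch size are illustrative, not the paper's settings):

```python
def extract_patch(cube, row, col, size):
    """Extract a size x size spatial neighborhood around (row, col),
    keeping all spectral bands. `cube` is indexed [row][col][band]."""
    half = size // 2
    return [
        [cube[r][c] for c in range(col - half, col + half + 1)]
        for r in range(row - half, row + half + 1)
    ]

# A tiny 5x5 scene with 3 spectral bands per pixel.
bands = 3
cube = [[[r * 10 + c + b / 10 for b in range(bands)] for c in range(5)]
        for r in range(5)]
patch = extract_patch(cube, row=2, col=2, size=3)
# patch is 3x3 spatially; each entry is still a full 3-band spectrum.
```

Feeding the network such patches instead of lone pixels is what gives it the spatial context alongside the spectral signature.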
The main concept of this article is intelligent rainfall prediction using a combination of deep learning models. The dataset is gathered from a standard publicly available dataset concerning the Tamil Nadu state. The collected data is given to feature extraction, in which features such as minimum value, maximum value, mean, median, standard deviation, kurtosis, entropy, skewness, variance, and zero crossings are extracted. The extracted features are then applied to optimal feature formation, in which an optimized convolutional neural network (O-CNN) is employed for the final feature formation. Here, the activation function, the number of pooling layers, and the number of hidden neurons are tuned with the intention of minimizing the correlation between the selected features. Once the optimal features with low correlation are selected, an adaptive long short-term memory (A-LSTM) network is adopted as the prediction model. The enhancement concentrates on minimizing the error function through optimization of the hidden neurons of the A-LSTM. The improvement of both deep learning models, O-CNN and A-LSTM, is performed by the improved sunflower optimization (I-SFO) algorithm. The research results reveal performance superior to existing techniques, offering novel thinking in the rainfall prediction area with an optimal rate of prediction.
"Automated rain fall prediction enabled by optimized convolutional neural network-based feature formation with adaptive long short-term memory framework," K. Ananthajothi, T. Karthick, and M. Amanullah, Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.6868.
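The hand-crafted features listed in the abstract are mostly standard descriptive statistics; a sketch of several of them over a toy rainfall-anomaly series (the series is invented, and skewness/kurtosis are omitted for brevity):

```python
import math
import statistics
from collections import Counter

def extract_features(series):
    """Compute a few of the descriptive features named in the abstract."""
    # Zero crossings: sign changes between consecutive samples.
    zero_cross = sum(1 for a, b in zip(series, series[1:]) if a * b < 0)
    # Shannon entropy over the empirical distribution of values.
    counts = Counter(series)
    n = len(series)
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return {
        "min": min(series), "max": max(series),
        "mean": statistics.mean(series),
        "median": statistics.median(series),
        "std": statistics.pstdev(series),
        "variance": statistics.pvariance(series),
        "zero_cross": zero_cross, "entropy": entropy,
    }

features = extract_features([1.0, -2.0, 3.0, -2.0])
```

In the paper's pipeline this feature vector is then passed to the O-CNN stage for correlation-minimizing feature formation.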
Energy efficiency and delay are two important optimization issues in mobile ad hoc networks (MANETs), where nodes move randomly in any direction with limited battery life, resulting in occasional changes of network topology. In this article, a hybrid fruit fly optimization algorithm and whale optimization algorithm (FOA-WOA) is proposed for energy-efficient, delay-aware cluster head (CH) selection. The major objective of the proposed method is to solve the problems of energy efficiency and delay and to develop a clustering mechanism. The performance of the hybrid FOA-WOA is evaluated based on packet delivery ratio (PDR), delay, energy consumption, and throughput. Moreover, the proposed method is compared with two existing algorithms, ant colony optimization (ACO) and genetic algorithm (GA). The experimental results show that the proposed method performs 11.6% better than ACO and 1.8% better than GA on packet delivery ratio, 57.6% better than ACO and 27.3% better than GA on delay, and 15.3% better than ACO and 36.4% better than GA on energy consumption.
"Energy efficient and delay aware clustering in mobile adhoc network: A hybrid fruit fly optimization algorithm and whale optimization algorithm approach," Saminathan Karunakaran and T. Renukadevi, Concurr. Comput. Pract. Exp. doi: 10.1002/cpe.6867.
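Cluster head selection of this kind typically scores each candidate by a weighted fitness over residual energy (higher is better) and delay (lower is better); a sketch with invented weights and node data, independent of the FOA-WOA search that the paper uses to drive the selection:

```python
def select_cluster_head(nodes, w_energy=0.6, w_delay=0.4):
    """Pick the node maximizing a weighted fitness of normalized residual
    energy and delay. The weights here are illustrative assumptions, not
    values from the paper."""
    max_energy = max(n["energy"] for n in nodes)
    max_delay = max(n["delay"] for n in nodes)

    def fitness(n):
        # Reward high residual energy, penalize high delay.
        return (w_energy * n["energy"] / max_energy
                + w_delay * (1 - n["delay"] / max_delay))

    return max(nodes, key=fitness)

nodes = [
    {"id": "A", "energy": 0.9, "delay": 12.0},  # full battery, slow links
    {"id": "B", "energy": 0.5, "delay": 4.0},   # fast links, drained battery
    {"id": "C", "energy": 0.8, "delay": 6.0},   # good on both criteria
]
head = select_cluster_head(nodes)  # "C" balances both objectives best
```

A metaheuristic such as FOA-WOA would search over the weighting and cluster assignments rather than evaluate a single fixed formula as done here.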