TCP (Transmission Control Protocol) is the most widely used transport protocol in wired and wireless networks. It provides many services (reliability, end-to-end delivery, ...) to applications running over the Internet, but to manage traffic carrying huge quantities of data, TCP must have robust congestion control mechanisms. Many researchers agree that, despite the existence of several congestion control algorithms, TCP still suffers from disappointing performance for both short and long flows. The networking community therefore continues to search for a mechanism that ensures fair and efficient bandwidth allocation, and prior work on this subject has produced several congestion control mechanisms. In this paper, we discuss, identify, analyze and compare the behavior of some congestion control mechanisms in congested wireless mesh networks, in order to identify their advantages and respective limits. For the simulations, we used the well-known network simulator ns-2. Simulation results show that TCP Tahoe, TCP Reno, TCP New Reno and SACK are loss-based and beneficial for latency-sensitive flows, while TCP Vegas, which is delay-based, is recommended for applications that cannot tolerate information loss but suffers from fairness problems when sharing a bottleneck with competing flows.
"Comparative analysis of TCP congestion control mechanisms", by Kaoutar Bazi and B. Nassereddine. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387832
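To make the loss-based behavior described above concrete, here is a minimal sketch of the Reno-style additive-increase/multiplicative-decrease window rule. It is a simplification, not the full RFC 5681 state machine (no fast retransmit/timeout distinction), and the event trace and initial ssthresh are hypothetical:

```python
def reno_window(events, ssthresh=16.0):
    """Simplified TCP Reno congestion window trace (in MSS units).

    events: sequence of 'ack' (one window's worth acknowledged) or 'loss'.
    This sketches only the loss-based AIMD rule."""
    cwnd, trace = 1.0, []
    for ev in events:
        if ev == 'ack':
            if cwnd < ssthresh:
                cwnd += 1.0          # slow start: exponential growth
            else:
                cwnd += 1.0 / cwnd   # congestion avoidance: additive increase
        else:
            ssthresh = max(cwnd / 2.0, 2.0)
            cwnd = ssthresh          # multiplicative decrease on loss
        trace.append(cwnd)
    return trace
```

Delay-based variants such as Vegas instead adjust `cwnd` from the gap between expected and actual throughput before losses occur, which is why they react earlier but compete poorly against loss-based flows.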
The characteristics of multimedia content impose additional requirements such as high bandwidth demand, higher energy consumption and greater processing capability. Routing multimedia data in a resource-constrained network is a challenge in Wireless Multimedia Sensor Networks (WMSNs), and multipath routing is a relevant technique for transmitting multimedia data in wireless sensor networks while satisfying multimedia QoS requirements. This paper proposes an improved version of the AGEM protocol based on a triangle link quality metric (TLQM-AGEM), which finds multiple node-disjoint paths. The routing scheme selects the forwarding node based on distance, the triangle link quality metric and remaining energy. Simulation results indicate that the protocol optimizes overall performance and improves network lifetime compared with the state-of-the-art GEAMS and AGEM schemes.
"AGEM-based Multipath Routing Protocol using Triangle Link Quality for Wireless Multimedia Sensor networks", by Asma Chikh and M. Lehsaini. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387855
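The forwarding rule described in the abstract (distance, triangle link quality, remaining energy) can be sketched as a weighted score. The weights, the field names, and the assumption that each criterion is pre-normalized to [0, 1] are illustrative, not the paper's actual metric:

```python
def forwarding_score(neighbor, weights=(0.4, 0.4, 0.2)):
    # Combine the three criteria named in the abstract into one score.
    w_dist, w_link, w_energy = weights
    return (w_dist * neighbor["progress"]          # geographic progress toward sink
            + w_link * neighbor["link_quality"]    # triangle link quality metric
            + w_energy * neighbor["energy"])       # remaining energy

def select_forwarder(neighbors):
    # Greedy choice: forward the packet to the best-scoring neighbor.
    return max(neighbors, key=forwarding_score)
```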
The development of digital technologies nowadays assists people by suggesting opinions, choices, preferences and feelings. These opinions are useful for companies, which analyze them to understand their potential users and personalize for their needs. However, the information must be extracted before further analysis, so sentiment analysis is used to extract opinions and transform them into meaningful data. During analysis, a feature selection method is required to select a subset of relevant features from which to construct a predictive model. Feature selection must satisfy several conditions: the selected feature subset must be small, relevant for a high-dimensional dataset in the presence of noise, and free of redundant features. However, some feature selection methods are unable to fulfill all of these conditions. In this research, 40 papers were collected, classified and reviewed. We discuss feature selection methods in sentiment analysis according to their level of analysis and compare the methods, based on accuracy and CPU performance, to identify their limitations and advantages. Finally, we suggest the best method for feature selection as a benchmark. The findings of this research show that hybrid methods obtain the best accuracy and CPU performance compared to the other methods.
"Feature Selection Methods in Sentiment Analysis: A Review", by Nurilhami Izzatie Khairi, A. Mohamed and N. Yusof. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387840
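As a minimal illustration of the filter-style feature selection step this kind of review compares, terms can be ranked by how unevenly they occur across sentiment classes. This is a generic class-ratio filter sketched for illustration, not any specific method surveyed in the paper:

```python
from collections import Counter

def rank_features(docs, labels, k):
    """Rank terms by |P(term | positive) - P(term | negative)| and keep top k.

    docs: list of token lists; labels: 1 (positive) or 0 (negative)."""
    pos = Counter(t for d, l in zip(docs, labels) if l == 1 for t in set(d))
    neg = Counter(t for d, l in zip(docs, labels) if l == 0 for t in set(d))
    n_pos = sum(1 for l in labels if l == 1)
    n_neg = len(labels) - n_pos
    vocab = set(pos) | set(neg)
    # Terms spread evenly across classes score ~0 and are dropped.
    score = {t: abs(pos[t] / n_pos - neg[t] / n_neg) for t in vocab}
    return sorted(vocab, key=lambda t: -score[t])[:k]
```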
Digital image watermarking is a way to achieve information security. In this paper, we present an algorithm for digital image watermarking based on a 2-level Discrete Wavelet Transform (DWT), the Discrete Cosine Transform (DCT) and QR decomposition. The watermark embedding process is performed on specific blocks of the host image, selected according to their entropy values. The experimental results show that this algorithm has good imperceptibility and is robust against different attacks.
"An Algorithm for Digital Image Watermarking using 2-Level DWT, DCT and QR Decomposition based on Optimal Blocks Selection", by M. Zairi, T. Boujiha and Ouelli Abdelhaq. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387863
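The entropy-guided block selection step can be sketched as follows. The block size, the Shannon-entropy ranking and the top-k rule are assumptions for illustration, not the paper's exact procedure:

```python
import math

def block_entropy(block):
    # Shannon entropy (in bits) of the pixel values inside one block.
    hist = {}
    for row in block:
        for p in row:
            hist[p] = hist.get(p, 0) + 1
    n = sum(hist.values())
    return -sum((c / n) * math.log2(c / n) for c in hist.values())

def select_blocks(image, bs, k):
    """Return the top-k (row, col) block origins ranked by entropy.

    High-entropy (textured) blocks are the usual embedding targets,
    since changes there are less perceptible."""
    h, w = len(image), len(image[0])
    scored = []
    for r in range(0, h - bs + 1, bs):
        for c in range(0, w - bs + 1, bs):
            blk = [row[c:c + bs] for row in image[r:r + bs]]
            scored.append((block_entropy(blk), (r, c)))
    scored.sort(reverse=True)
    return [pos for _, pos in scored[:k]]
```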
Over the last few years, much work on earthquake prediction has used different techniques and precursors in order to warn of earthquake damage and save human lives. Many of these works have failed to predict earthquakes sufficiently well, because of the complexity and the unpredictable nature of the task. In this work, we therefore use a powerful deep learning technique, long short-term memory (LSTM), an algorithm that captures complex relationships in time-series data. The work applies this method in two case studies: the first learns all the datasets in one model, while the second learns the correlations on two groups split by magnitude range. The results show that learning the decomposed datasets gives better predictions, since it exploits the nature of each type of seismic event.
"LSTM-based Models for Earthquake Prediction", by Asmae Berhich, Fatima-Zahra Belouadha and M. Kabbaj. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387865
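The two experimental setups, one model over the whole catalog versus separate models per magnitude group, rest on two simple data-preparation steps that can be sketched as follows. The magnitude threshold and the event representation are assumptions; the paper does not fix them here:

```python
def make_windows(series, lookback):
    # Turn a 1-D magnitude series into (input window, next value) pairs,
    # the supervised form an LSTM-style sequence model trains on.
    return [(series[i:i + lookback], series[i + lookback])
            for i in range(len(series) - lookback)]

def split_by_magnitude(events, threshold):
    """Split a catalog into the two magnitude groups trained separately.

    events: (time, magnitude) pairs; returns (low-group, high-group)
    magnitude series preserving event order."""
    low = [m for _, m in events if m < threshold]
    high = [m for _, m in events if m >= threshold]
    return low, high
```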
One of the challenges in many industrial activities is to analyze products and investigate their dimensional properties, such as deformations. The digital speckle pattern interferometry technique offers several solutions for measuring a wide range of parameters (deformations, displacements, ...) with high accuracy. Generally, these parameters are correlated with the phase of the noisy reflected intensity images (also called correlation fringe patterns or correlograms) of the tested products, so accessing them requires a good estimation of the phase information. To extract this phase, we propose a ridgelet-transform-based algorithm for fringe demodulation, applied after the fringes have been denoised by a new variant of the total variation denoising method. These algorithms provide an automatic estimation of the phase with high accuracy. Because of these advantages, the method is particularly suitable for real-time analysis of dynamic events, even in perturbative environments.
"Speckled correlation fringes denoising and demodulation using directional total variation and ridgelet transform", by Mustapha Bahich and Mohammed Bailich. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387864
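As background for the denoising stage, a generic 1-D total variation denoiser can be sketched by subgradient descent on the usual TV-regularized objective. This is for illustration only: it does not reproduce the paper's new directional TV variant, which operates on 2-D fringe images:

```python
def total_variation(x):
    # Discrete 1-D total variation: sum of absolute jumps between neighbors.
    return sum(abs(b - a) for a, b in zip(x, x[1:]))

def tv_denoise(y, lam=0.5, step=0.1, iters=200):
    """Subgradient descent on 0.5 * ||x - y||^2 + lam * TV(x).

    Smooths the signal while preserving edges better than plain averaging."""
    x = list(y)
    for _ in range(iters):
        g = [xi - yi for xi, yi in zip(x, y)]          # data-fidelity gradient
        for i in range(len(x) - 1):
            s = (x[i + 1] > x[i]) - (x[i + 1] < x[i])  # sign of the jump
            g[i] -= lam * s                            # TV subgradient terms
            g[i + 1] += lam * s
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x
```

Note that each TV subgradient pair cancels in the sum, so the update conserves the signal's total mass while flattening spurious jumps.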
A. Nabou, M. Laanaoui, M. Ouzzif, Mohammed-Alamine El Houssaini
A Mobile Ad hoc Network (MANET) can be considered a simple network that uses wireless communication between different wireless devices, called nodes. MANETs face many operational challenges due to the absence of a fixed infrastructure and to autoconfiguration. The Optimized Link State Routing protocol (OLSR) is a proactive MANET routing protocol intended for high network density. However, it can be affected by congestion, which decreases its performance by losing packets, increasing the delay to receive packets and reducing the protocol's throughput. In this paper, we propose a new method that relies on a normality test from statistics to detect congestion in the OLSR protocol, without any modification of the algorithm and without any additional control messages. To detect congestion in OLSR, we use the Shapiro-Wilk (W) test to analyze the throughput results in two scenarios that lead to congestion.
"Normality Test to Detect the Congestion in MANET by Using OLSR Protocol", by A. Nabou, M. Laanaoui, M. Ouzzif and Mohammed-Alamine El Houssaini. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387836
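The detection idea can be sketched with SciPy's implementation of the Shapiro-Wilk test. The 0.05 significance threshold and the synthetic throughput traces below are assumptions for illustration, not values taken from the paper:

```python
import random

from scipy import stats  # provides the Shapiro-Wilk (W) test


def congested(throughput_samples, alpha=0.05):
    """Flag congestion when the throughput trace stops looking Gaussian.

    Applies the Shapiro-Wilk test and treats rejection of normality as a
    congestion signal; returns (flag, W statistic)."""
    w_stat, p_value = stats.shapiro(throughput_samples)
    return p_value < alpha, w_stat


rng = random.Random(42)
smooth = [rng.gauss(100.0, 5.0) for _ in range(100)]      # stable link
bursty = [rng.expovariate(1 / 50.0) for _ in range(100)]  # heavily skewed, congested link
```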
Technological evolution and the immensity of the data produced and circulated within companies make these data the real capital of the companies, to the detriment of the customers. Erroneous data damage relationships with customers, so a company must address this problem and identify the quality projects on which it should concentrate its effort. In this article, we present an approach based on qualitative and quantitative analysis to help decision-makers target data according to the impact and complexity of process improvement. The qualitative study is a survey, and the quantitative study learns from the survey data to predict the completeness of the data.
"Contribution of Artificial Neural Network in Predicting Completeness Through the Impact and Complexity of its Improvement", by Jaouad Maqboul and Bouchaib Bounabat. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387850
Automatic classification of medical images, especially tissue images, is an important task in computer-aided diagnosis (CAD) systems. Deep learning methods such as convolutional networks (ConvNets) outperform other state-of-the-art methods in image classification tasks. This article describes accurate and efficient algorithms for this challenging problem and presents different convolutional neural networks for classifying tissue images. The first model performs feature extraction and classification with a simple CNN; the second uses a CNN as a feature extractor, removing the classification layers and using the activations of the last fully connected layer to train a Random Forest; and the last uses transfer learning (fine-tuning) with the pre-trained CNN "DenseNet201". Finally, we evaluate our models using three metrics: accuracy, precision and F1 score.
"An efficient Algorithm for medical image classification using Deep Convolutional Network: Case of Cancer Pathology", by Dahdouh Yousra, A. Boudhir and M. Ahmed. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387896
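The three evaluation metrics named in the abstract can be computed from confusion counts. A minimal sketch for the binary case:

```python
def accuracy_precision_f1(y_true, y_pred, positive=1):
    """Accuracy, precision and F1 score for a binary classifier's predictions.

    These are the three metrics used to compare the models; returned as a
    (accuracy, precision, f1) tuple."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    acc = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0  # harmonic mean
    return acc, prec, f1
```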
Free codes are investigated. It is shown that every free cyclic code has some important properties, and that the minimal distance of every code depends on its associated direct component.
"Some Properties of Free Cyclic Codes over Finite Chain Rings", by M. Sabiri, Youssef Bensalih and A. Elbour. Proceedings of the 3rd International Conference on Networking, Information Systems & Security, 2020-03-31. DOI: 10.1145/3386723.3387856