The deployment of biosensors is increasing with advances in bioelectronics. Owing to the challenging operating conditions of biosensors, present applications can capture only certain types of signals. Detection also depends on the type of biotransducer used for signal generation, so the signals generated by biosensors cannot be considered error free. This paper reviews existing research on biosensor validation and finds that no computational framework performs validation efficiently, as the majority of techniques use either a clinical or an experimental approach, which limits validation of biosignal performance. Therefore, this paper presents a novel computational framework that uses an enhanced version of the auto-associative neural network and significantly improves the validation performance of biosensors compared with conventional optimization techniques.
{"title":"Optimized Performance Validation of Biosensors with High Fault Tolerance","authors":"Subhas A. Meti, V. Sangam","doi":"10.1109/IACC.2017.0076","DOIUrl":"https://doi.org/10.1109/IACC.2017.0076","url":null,"abstract":"The deployment of biosensors is increasing with advancement of bio-electronics. Owing to challenging state of working of biosensors, the present applications of biosensors are capable of capturing only certain types of signals till date. This detection also depends on the type of the bio transducers used for signal generation and therefore, the signals generated from biosensors cannot be considered to be error free. This paper has reviewed some of the existing research contributions towards biosensor validation to find that there are no computational framework that efficiently performs validation as majority of the technique uses either clinical approach or experimental approach, which limits the validation of bio signal performance. Therefore, this paper presents a novel computational framework that uses enhanced version of auto-associative neural network and significantly optimizes the validation performance of biosensors as compared to other conventional optimization techniques.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"20 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128378555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
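Although the abstract does not specify the enhanced auto-associative neural network, the validation principle can be sketched with a minimal linear auto-associative model (PCA reconstruction): a reading is accepted when it is well reconstructed from the correlation structure learned on fault-free signals. The function names and the threshold below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fit_autoassociative(X, k=1):
    """Fit a linear auto-associative map (PCA reconstruction) to
    fault-free training signals. X: (n_samples, n_sensors)."""
    mu = X.mean(axis=0)
    # Top-k principal directions capture the normal correlation structure.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def reconstruction_error(x, mu, V):
    """Distance between a reading and its auto-associative reconstruction."""
    proj = (x - mu) @ V.T @ V + mu
    return float(np.linalg.norm(x - proj))

def is_valid(x, mu, V, threshold=0.5):
    """Accept a sensor reading if its reconstruction error is small."""
    return reconstruction_error(x, mu, V) < threshold
```

In practice a nonlinear five-layer AANN would replace the linear projection, but the accept/reject logic against a reconstruction-error threshold is the same.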
The heterogeneity and complexity of distributed computing increase rapidly as high-speed processors become widely available. In a modern computing environment, resources are dynamic, heterogeneous, geographically spread across different computational domains, and connected through high-speed communication links of differing capacity. In a large distributed environment, a modular program can be viewed as a set of loosely coupled interacting modules/tasks (since all modules/tasks are considered simultaneously and independently executable) and represented by the task interaction graph (TIG) model. Parallel execution of these interacting modules/tasks is highly preferred to reduce the overall completion time of a program. During parallel execution, however, the communication overhead due to message passing may increase the cost of parallel execution. Parallel execution is chosen if and only if the parallel execution cost together with the communication overhead is less than the serial execution cost, so resources must be allocated such that the advantage of parallel execution is maintained. In this paper, for any task and resource graph, we propose a heuristic-based approach to find an optimal number of tasks that can be executed in parallel on a set of resources.
{"title":"A Heuristic-Based Resource Allocation Approach for Parallel Execution of Interacting Tasks","authors":"Uddalok Sen, M. Sarkar, N. Mukherjee","doi":"10.1109/IACC.2017.0158","DOIUrl":"https://doi.org/10.1109/IACC.2017.0158","url":null,"abstract":"Heterogeneity and complexity of distributed computing increases rapidly as high speed processors are widely available. In modern computing environment, resources are dynamic, heterogeneous, geographically spread over different computational domains and connected through different capacity of high speed communication links. In a large distributed environment a modular program can be considered as a set of loosely coupled interacting modules/tasks (since all the modules/tasks are considered as simultaneously and independently executable) and represented by task interaction graph (TIG) model. Parallel execution of these interacting modules/tasks is highly preferred to reduce the overall completion time of a program. During parallel execution of tasks, the communication overhead due to message passing may increase the cost of parallel execution. Parallel execution of tasks is chosen if and only if parallel execution cost together with communication overhead is less than serial execution cost. So, resources are to be allocated such that advantage of parallel execution is maintained. 
In this paper, for any task and resource graph, we propose a heuristics based approach to find out an optimal number of tasks that can be executed in parallel on a set of resources where they can be executed.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"99 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128383711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
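The serial-versus-parallel decision rule quoted above can be made concrete with a deliberately simplified cost model (one task per resource, additive communication overhead on TIG edges); the paper's heuristic operates on arbitrary task and resource graphs, which this sketch does not attempt.

```python
def serial_cost(tasks):
    """Total completion time if all tasks run on one resource.
    tasks: {task_name: compute_cost}."""
    return sum(tasks.values())

def parallel_cost(tasks, comm):
    """Makespan if every task gets its own resource: the longest task
    plus the message-passing overhead on each TIG edge.
    comm: {(task_a, task_b): message_cost}."""
    return max(tasks.values()) + sum(comm.values())

def choose_parallel(tasks, comm):
    """Parallel execution is chosen iff its cost, including the
    communication overhead, is less than the serial execution cost."""
    return parallel_cost(tasks, comm) < serial_cost(tasks)
```

With tasks of cost 4, 3, 5 and light messaging, parallel execution wins; as message costs grow past the point where communication dominates, the rule falls back to serial execution.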
Satellite data concisely convey information about the positions, sizes, and interrelationships of objects. Satellite images lose information owing to limited sensor acquisition capability and atmospheric effects. It is very difficult to extract useful information at the intensity level with low SNR; non-wavelet segmentation schemes lose high-frequency content, so the resulting texture is blurred, and several preprocessing steps are applied to make the textural image clear before segmentation. Because the DWT lacks directionality and gives unsatisfactory results, we apply advanced image-processing techniques to improve texture-based features of multispectral satellite images, finding the discrepancy between the distributions of observed and normal regions using higher-order statistical (HOS) measures such as skewness and kurtosis. The shape of the intensity-level distribution is examined with the Histogram of Oriented Gradients (HOG). To improve visualization quality, we examine features based on edges, lines, and their gradients using the curvelet transform and HOG, and the intensity distribution using higher-order statistics (HOS).
{"title":"Higher Order Statistics for Multispectral Satellite Data","authors":"T. V. Krishnamoorthy, G. Reddy","doi":"10.1109/IACC.2017.0056","DOIUrl":"https://doi.org/10.1109/IACC.2017.0056","url":null,"abstract":"Satellite Data concisely convey information about positions, sizes and interrelationships between objects. The satellite image losses information due to lack of Acquisition capability of sensor and atmosphere's effect. It is very difficult to extract useful information at intensity level with low SNR, non wavelet segmented schemes losing high frequency contact with results texture is blurred several preprocesses are applied to make textual image clear and segmentation. Unsatisfied results due with lack of directionality with DWT, Here we can implement advance image processing technique for improving texture based features to multispectral satellite image, find discrepancy distribution of observed and normal region using Higher order statistical methods(HOS) like skewness, Kurtosis. The shape of the distribution of intensity levels are examined by HOG. For improving the visualization quality we examine features based on edges, lines and their gradients using Curvelet and Histogram of oriented Gradient (HOG), intensity distribution using Higher order Statistics (HOS).","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125866715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
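The higher-order statistics used to characterise the intensity distribution are the standard third and fourth standardized moments; a minimal sketch (population definitions, so a Gaussian has kurtosis 3):

```python
import numpy as np

def skewness(x):
    """Third standardized moment: asymmetry of the intensity distribution."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

def kurtosis(x):
    """Fourth standardized moment: tail weight (3.0 for a Gaussian)."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 4).mean() / s ** 4)
```

Applied per region of a multispectral band, a large gap between the moments of an observed region and those of a reference region flags a discrepancy in the intensity distribution.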
J. Panda, Akshay Uppal, Akhil S. Nair, Bhavesh Agrawal
This paper presents a new optimized DWT-SVD-based watermarking technique using a genetic algorithm (GA). The singular-value component of the original image is modified by adding the singular-value component of the watermark image scaled by a suitable factor. This scaling factor is optimized by the GA using PSNR values as the fitness criterion, in order to achieve high robustness without compromising the transparency of the watermark. Further application-based analysis uses noise correlation as the fitness function to test for better robustness.
{"title":"Genetic Algorithm Based Optimized Color Image Watermarking Technique Using SVD and DWT","authors":"J. Panda, Akshay Uppal, Akhil S. Nair, Bhavesh Agrawal","doi":"10.1109/IACC.2017.0124","DOIUrl":"https://doi.org/10.1109/IACC.2017.0124","url":null,"abstract":"This paper presents a new optimized DWT-SVD based watermarking technique using Genetic Algorithm. The singular value component of the original Image is modified by adding the singular component of the watermark image along with a suitable scaling factor. This scaling factor is optimized by GA using the PSNR values as the fitness criteria in order to achieve high values or robustness without compromising the transparency of the watermark. Further application based analysis is done by using the Noise Correlation as a fitness function to test for better results in robustness.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127294910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
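The singular-value modification step is the standard SVD-domain embedding rule S' = S + αS_w; a hedged sketch with PSNR as the fitness measure is below. The GA search over α and the DWT stage are omitted: α is supplied directly, and the function names are illustrative.

```python
import numpy as np

def embed(host, watermark, alpha):
    """Add the watermark's singular values, scaled by alpha,
    to the host image's singular values and reconstruct."""
    U, S, Vt = np.linalg.svd(host)
    Sw = np.linalg.svd(watermark, compute_uv=False)
    return U @ np.diag(S + alpha * Sw) @ Vt

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio, the GA's fitness measure."""
    mse = np.mean((np.asarray(a) - np.asarray(b)) ** 2)
    return float(10 * np.log10(peak ** 2 / mse)) if mse else float("inf")
```

A GA would evaluate candidate α values with `psnr` (transparency) and a robustness measure such as noise correlation, trading one against the other; larger α is more robust but lowers PSNR.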
Local Binary Pattern (LBP) is one of the most successful texture analysis methods. However, LBP lacks noise robustness and rotation invariance. This paper proposes a novel noise-insensitive texture descriptor, the Adjacent Evaluation Local Ternary Count (AELTC), for rotation-invariant texture classification. Unlike LBP, AELTC uses an adjacent evaluation window to change the thresholding scheme. It is extended to the Adjacent Evaluation Completed Local Ternary Count (AECLTC), with three operators, to further improve texture classification performance. In the performance evaluation, experiments are conducted on the Outex and CUReT databases using seven existing LBP variants and the proposed AECLTC. The results demonstrate the superiority of AECLTC over the other LBP variants.
{"title":"Adjacent Evaluation of Completed Local Ternary Count for Texture Classification","authors":"Ch. Sudha Sree, M.V.P. Chandra Sekhara Rao","doi":"10.1109/IACC.2017.0144","DOIUrl":"https://doi.org/10.1109/IACC.2017.0144","url":null,"abstract":"Local Binary Pattern (LBP) is one of the successful texture analysis methods. However, LBP suffers from noise robustness and rotation invariance. This paper proposes a novel noise insensitive texture descriptor, Adjacent Evaluation Local Ternary Count (AELTC) for rotation invariant texture classification. Unlike LBP, AELTC uses an adjacent evaluation window to change the threshold scheme. It is enhanced to Adjacent Evaluation Completed Local Ternary Count (AECLTC) with three operators to improve the performance of texture classification. During the performance evaluation, various experiments are conducted on Outex and CUReT databases using seven existing LBP variants and with proposed AECLTC. The results demonstrated the superiority of AECLTC when compared to other LBP variants.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"134 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127362215","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
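The abstract does not define AELTC/AECLTC in full, so as a reference point here is the classic 3×3 LBP operator that these descriptors modify; the adjacent evaluation window replaces the plain centre-pixel threshold used below.

```python
import numpy as np

def lbp_code(patch):
    """Classic 3x3 LBP: threshold the 8 neighbours at the centre value
    and read them as an 8-bit code (clockwise from top-left)."""
    c = patch[1, 1]
    order = [(0, 0), (0, 1), (0, 2), (1, 2),
             (2, 2), (2, 1), (2, 0), (1, 0)]
    bits = [1 if patch[r, col] >= c else 0 for r, col in order]
    return sum(b << i for i, b in enumerate(bits))
```

Because each bit flips as soon as a neighbour crosses the centre value, small noise on the centre pixel can change the whole code — the weakness the adjacent-evaluation threshold and ternary counting are designed to address.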
Nowadays, speech recognition has become a prominent and challenging research domain because of its wide usage. The factors affecting speech recognition include vocalization, pitch, tone, noise, pronunciation, frequency, finding where a phoneme starts and stops, loudness, speed, and accent. Research is ongoing to enhance the efficacy of speech recognition, which requires efficient models, algorithms, and programming frameworks to analyze large amounts of real-time data. These algorithms and programming paradigms have to learn on their own to fit the model to massively evolving real-time data. Developments in parallel computing platforms open four major possibilities for speech recognition systems: improving recognition accuracy, increasing recognition throughput, reducing recognition latency, and reducing the recognition training period.
{"title":"Continuous Automatic Speech Recognition System Using MapReduce Framework","authors":"M. Vikram, N. Reddy, K. Madhavi","doi":"10.1109/IACC.2017.0031","DOIUrl":"https://doi.org/10.1109/IACC.2017.0031","url":null,"abstract":"Now-a-days, Speech Recognition had become a prominent and challenging research domain because of its vast usage. The factors affecting Speech Recognition are Vocalization, Pitch, Tone, Noise, Pronunciation, Frequency, finding where the phoneme starts and stops, Loudness, Speed, Accent and so on. Research is going on to enhance the efficacy of Speech Recognition. Speech Recognition requires efficient models, algorithms and programming frameworks to analyze large amount of real-time data. These algorithms and programming paradigms have to learn knowledge on their own to fit in to the model for massively evolving data in real-time. The developments in parallel computing platforms opens four major possibilities for Speech Recognition systems: improving recognition accuracy, increasing recognition throughput, reducing recognition latency and reducing the recognition training period.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123035698","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In comparative genomics, genome rearrangement evolution is an important effort, and genome conversion via different sorting processes is the central problem in this field. Transforming one sequence into another and finding an optimal solution is a useful tool for analyzing real evolutionary scenarios, but it is much better to find all possible solutions. To obtain a more accurate result, multiple solutions should be considered, since there are many different optimal sorting sequences. Reversal and translocation are the two common genome sorting operations observed in the development of mammalian species. The genome sorting problem using reversal and translocation is to find the shortest sequence of operations that transforms a source genome A into a target genome B. Currently the question is resolved by reducing it to sorting by reversals and sorting by translocations separately; here we apply both sorting operations together at the same time. In this paper we present an algorithm that explicitly treats the two sorting operations as distinct, and that finds the various solutions rather than a single one — a better choice both theoretically and in practice. With a single solution we cannot decide whether it is the best one; with several solutions we can choose the best among them. We also present an example showing that this approach improves on the previous one.
{"title":"Stack Solution for Finding Optimal One","authors":"P. Kumar, G. Sahoo","doi":"10.1109/IACC.2017.0159","DOIUrl":"https://doi.org/10.1109/IACC.2017.0159","url":null,"abstract":"In comparative genomics, genome rearrangement evolution is an important effort. Genome conversion is the major problem in this field using different sorting process. Transforming one sequence into another and finding an optimal solution is a useful tool for analyzing real evolutionary scenario but it will be much better if we find all possible solution for that. In order to obtain more accurate result, some solution should be taken into consideration as there is large number of different optimal sorting sequence. Reversal and translocation are the two common genome sorting process used in development of mammalian species. The problem of genome sorting using reversal and translocation is to find the shortest sequence that transforms any source genome A into some target genome B. Currently the question is resolved by lessening of sorting by reversal and sorting by translocation problem separately, but here we are applying both the sorting process together at the same time. By this paper we present an algorithm for the two sorting process that explicitly treats them as two distinct operations, along with that finding the various solutions which is a better hypothetical and real-world solution than just finding a solo one. If we have single solution for any problem then we cannot decide whether this solution is the perfect one or not but if we have more solution indeed we can find the best one among them and say this is the perfect solution. 
We also present an example which proves that this solution is more prominent than previous one.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129132479","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
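The emphasis on recovering all optimal sorting sequences, not just one, can be illustrated on toy inputs with an exhaustive breadth-first search over reversals. This sketch omits translocations and signed genomes, and the brute force is only feasible for very short permutations; it is not the paper's algorithm.

```python
def reverse(perm, i, j):
    """Reversal rho(i, j): flip the segment perm[i..j] of a tuple."""
    return perm[:i] + tuple(reversed(perm[i:j + 1])) + perm[j + 1:]

def all_optimal_sortings(source, target):
    """Level-by-level BFS over reversals: return every shortest sequence
    of reversals transforming source into target, so that the best of
    several optimal solutions can be chosen."""
    source, target = tuple(source), tuple(target)
    if source == target:
        return [[]]
    n = len(source)
    frontier = {source: [[]]}   # permutation -> all shortest paths to it
    seen = {source}
    while frontier:
        nxt = {}
        for perm, paths in frontier.items():
            for i in range(n):
                for j in range(i + 1, n):
                    child = reverse(perm, i, j)
                    if child in seen:       # reached in fewer steps already
                        continue
                    nxt.setdefault(child, []).extend(
                        p + [(i, j)] for p in paths)
        if target in nxt:
            return nxt[target]
        seen.update(nxt)
        frontier = nxt
    return []
```

For source (3, 1, 2) and target (1, 2, 3) there are several distinct two-reversal solutions, which is exactly the situation where choosing among all optima matters.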
Emerging sensor-based electronic gadgets seek high levels of energy conservation by adopting extreme low-power techniques in combination with traditional ones. In this study the authors examine memory units with data-retention capability in the energy-delay space for an emerging application, namely transiently powered systems, at three levels of power and performance optimization. The study presents a novel dual-edge-triggered flip-flop (DETRFF) with a retention latch that is suitable for ultra-low-power applications with dynamic voltage switching between super- and sub-threshold levels. The DETRFF designs are simulated in 45 nm NCSU CMOS technology using Cadence. The proposed design excels in the EDP and leakage-energy metrics compared with existing DETFF designs.
{"title":"Novel Ultra Low Power Dual Edge Triggered Retention Flip-Flop for Transiently Powered Systems","authors":"Madhavi Dasari, R. Nikhil, A. Chavan","doi":"10.1109/IACC.2017.0109","DOIUrl":"https://doi.org/10.1109/IACC.2017.0109","url":null,"abstract":"Emerging sensor based electronic gadgets desire to seek high levels of energy conservation by adopting extreme low power techniques in combination with traditional techniques. In this study the authors examine memory units with data retention capability in the Energy-Delay space for an emerging application namely Transiently Powered System for three levels of power and performance optimization. The study presents a novel Dual Edge Triggered Flip-Flop (DETRFF) with retention latch that is suitable for ultra low power application with dynamic voltage switch between super and sub threshold levels. The DETRFF designs are simulated in 45nm NCSU CMOS technology using Cadence. The proposed design excels in the EDP and Leakage Energy metrics as compared to the existing DETFF designs.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115932470","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Savitha, Rajiv R. Chetwani, Y. R. Bhanumathy, M. Ravindra
The ISRO Satellite Centre of the Indian Space Research Organisation develops satellites for a variety of scientific applications, such as communication, navigation, and earth observation. These satellites consist of highly complex, intensive systems that carry out advanced mission functions; hence software is a critical constituent of mission success. For some geostationary missions the onboard software is already finalized, and changes for a new spacecraft are minimal. This model-based system for software change analysis for embedded systems deals with managing changes to existing software items and reconfiguring them in any part of the development life cycle. The model helps in handling change management tasks such as maintaining a component library, predicting the impacts of changes in reused modules, and analyzing the behavior of combinations of reused modules. The use of reusable software modules has augmented the development of embedded software for the GSAT series of satellites. The model mainly reduces the time spent in each phase of the software life cycle when changes must be implemented within a short span of time. This paper describes how complete traceability can be established between specifications and design requirements, model elements, and their realization.
{"title":"Model Based System for Software Change Analysis for Embedded Systems on Spacecraft","authors":"A. Savitha, Rajiv R. Chetwani, Y. R. Bhanumathy, M. Ravindra","doi":"10.1109/IACC.2017.0093","DOIUrl":"https://doi.org/10.1109/IACC.2017.0093","url":null,"abstract":"ISRO Satellite Centre of the Indian Space Research Organization develops satellites for variety of scientific applications like communication, navigation, earth observation and many more. These satellites consist of very complex intensive systems which carry out advanced mission functions. Hence software plays a critical constituent for mission success. Some of the geostationary missions onboard software is finalized, changes are minimal for the new spacecraft. This model based system for software change analysis for embedded systems deals with managing changes to existing software items and re configuring in any part of development life cycle. This model helps in handing change management, such as maintenance of a component library, predicting the impacts of changes in reused modules, analyzing the behavior of the combination of reused modules. The use of reusable software modules has augmented the development of embedded software for GSAT series of satellites. This model mainly reduces the time during each phase of software life cycle when the changes are to be implemented within short span of time. 
This paper describes how complete traceability can be established between specifications and design requirements, model elements and their realization.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"73 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116323627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Video quality assessment aims to compute a formal measure of perceived video degradation when video is passed through a transmission/processing system. Most existing video quality measures extend image quality measures by applying them to each frame and later combining the per-frame quality values to get the quality of the entire video. When combining the frame quality values, a simple average or, in very few metrics, a weighted average has traditionally been used. In this work, the saliency of a frame is used to compute the weight assigned to each frame when obtaining the quality value of a video. The goal of every objective quality metric is to correlate as closely as possible with perceived quality, and the objective of saliency is parallel to this, as saliency values should match human perception. Hence we experiment with using saliency to obtain the final video quality. The idea is demonstrated using a number of state-of-the-art quality metrics on benchmark datasets.
{"title":"Saliency Based Assessment of Videos from Frame-Wise Quality Measures","authors":"B. Roja, B. Sandhya","doi":"10.1109/IACC.2017.0135","DOIUrl":"https://doi.org/10.1109/IACC.2017.0135","url":null,"abstract":"Video quality assessment aims to compute the formalmeasure of perceived video degradation when video is passedthrough a video transmission/processing system. Most of theexisting video quality measures extend Image Quality Measuresby applying them on each frame and later combining the qualityvalues of each frame to get the quality of the entire video. Whencombining the quality values of frames, a simple average or invery few metrics, weighted average has been traditionally used. In this work, saliency of a frame has been used to compute theweight required for each frame to obtain the quality value ofvideo. The goal of every objective quality metric is to correlateas closely as possible to the perceived quality, and the objectiveof saliency is parallel to this as the saliency values should matchthe human perception. Hence we have experimented by usingsaliency to get the final video quality. The idea is demonstratedby using a number of state of art quality metrics on some of thebenchmark datasets.","PeriodicalId":248433,"journal":{"name":"2017 IEEE 7th International Advance Computing Conference (IACC)","volume":"86 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126137078","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
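The saliency-weighted combination of per-frame scores reduces to a weighted average; a minimal sketch, with the function name assumed and any frame-level quality metric (PSNR, SSIM, etc.) supplying the scores:

```python
def video_quality(frame_scores, saliency):
    """Combine per-frame quality scores into one video score, weighting
    each frame by its saliency instead of taking a plain average."""
    total = sum(saliency)
    return sum(q * w for q, w in zip(frame_scores, saliency)) / total
```

With uniform saliency this degenerates to the traditional simple average; otherwise salient frames pull the video score toward their own quality, mirroring where viewers actually look.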