A hybrid approach for detection of Type-1 software clones
Pratiksha Gautam, H. Saini
2017 4th International Conference on Signal Processing, Computing and Control (ISPCC)
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269689

Over the past few years, several software clone detection techniques have been developed. Software clones are the consequence of copy/paste activity in software development; they arise at different levels of abstraction and may have different origins in a software system. This paper presents an efficient approach for the detection of Type-1 software clones. The proposed approach detects Type-1 software clones with high precision, recall, portability, and scalability. The Type-1 clones are generated using a mutation-operator-based editing taxonomy.
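The paper does not reproduce its detection algorithm here, but the core Type-1 notion (fragments identical except for whitespace, layout, and comments) can be sketched; the normalize-then-hash pipeline below is an illustrative assumption, not the authors' exact method:

```python
import hashlib
import re

def normalize(code):
    """Remove comments and normalize whitespace: Type-1 clones are code
    fragments identical except for whitespace, layout, and comments."""
    lines = []
    for line in code.splitlines():
        line = re.sub(r"//.*", "", line)          # drop C-style line comments
        line = re.sub(r"\s+", " ", line).strip()  # collapse whitespace
        if line:
            lines.append(line)
    return "\n".join(lines)

def is_type1_clone(frag_a, frag_b):
    """Two fragments are Type-1 clones if their normalized texts match;
    hashing the normalized form lets large systems be compared cheaply."""
    digest = lambda s: hashlib.sha1(s.encode()).hexdigest()
    return digest(normalize(frag_a)) == digest(normalize(frag_b))

a = "int total = 0;   // accumulator\nreturn total;"
b = "int total = 0;\n    return total;"
print(is_type1_clone(a, b))  # True: differences are layout and comments only
```

Fingerprinting normalized fragments this way is a common baseline; measured precision and recall then depend on the normalization and fragment granularity chosen.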
Fault localization in software testing using soft computing approaches
P. Singh, Sheely Garg, Mandeep Kaur, M. Bajwa, Y. Kumar
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269753

Testing is among the most important and critical tasks in the software development life cycle. Whenever a test execution fails, its test scripts are analyzed so that the point where the fault occurred can be located and the expected result achieved. Locating faults in software is called fault localization. Manual fault localization can be cumbersome, so an automated technique that requires no human intervention has long been in demand. This paper gives a brief overview of important fault localization techniques that use soft computing. Based on the surveyed work, it appears that machine learning techniques may produce better results while also reducing time. The prime objective of this paper is to identify fault localization techniques that combine soft computing approaches to minimize time and space complexity, so that better results may be achieved in terms of usability and effectiveness.
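As background to the automated techniques surveyed above, the classic spectrum-based Tarantula metric (a commonly used baseline, not one of the paper's soft-computing methods) ranks statements by how strongly their execution correlates with failing tests:

```python
def tarantula(failed_cov, passed_cov, total_failed, total_passed):
    """Tarantula suspiciousness of one statement: the fraction of failing
    tests that execute it, normalized against the passing-test fraction.
    Values near 1.0 mark statements executed mostly by failing tests."""
    f = failed_cov / total_failed if total_failed else 0.0
    p = passed_cov / total_passed if total_passed else 0.0
    return f / (f + p) if (f + p) else 0.0

# Statement executed by both failing tests but only 1 of 4 passing tests:
print(tarantula(2, 1, 2, 4))  # 0.8 -> highly suspicious
```

Soft-computing approaches typically replace or refine such hand-crafted formulas with learned models over the same coverage data.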
Low-cost color space based image compression algorithm for capsule endoscopy
Nithin Varma Malathkar, S. Soni
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269688

An efficient compression algorithm for capsule endoscopy is described in this paper. It uses a simplified YUV color space (sYUV) developed by taking the unique properties of endoscopy images into consideration. The algorithm is built on RGB-to-sYUV color conversion, differential pulse code modulation (DPCM), and a Golomb-Rice encoder. The DPCM stage needs no extra buffer memory to store a row of the image, and the Golomb-Rice (G-R) code is simple and easily implemented in hardware. The algorithm is lossless and gives a compression ratio (CR) of 68.1%. It outperforms standard lossless algorithms in complexity and compression ratio for capsule endoscopy applications.
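The two coding stages named above can be sketched as follows; the left-neighbor predictor and the Rice parameter k are illustrative choices, not the paper's exact configuration:

```python
def zigzag(d):
    """Map a signed DPCM residual to a non-negative integer
    (0, -1, 1, -2, ... -> 0, 1, 2, 3, ...) so Rice coding applies."""
    return (d << 1) if d >= 0 else -(d << 1) - 1

def rice_encode(n, k):
    """Golomb-Rice code: unary quotient (q ones plus a '0') followed by
    the k low-order bits of n. Trivial to implement in hardware."""
    q, r = n >> k, n & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, f"0{k}b")
    return bits

def dpcm_rice_row(row, k=2):
    """Encode one image row: predict each sample from its left neighbor,
    so no extra row buffer is needed, then Rice-code the residuals."""
    out, prev = [], 0
    for sample in row:
        out.append(rice_encode(zigzag(sample - prev), k))
        prev = sample
    return "".join(out)

print(dpcm_rice_row([10, 11, 11, 9]))
```

Because the predictor uses only the previous sample in the same row, the encoder state is a single register, which matches the paper's claim of needing no row buffer.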
Digital 3D barcode image as a container for data hiding using steganography
Rama Rani, Gaurav Deep
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269698

Steganography is a technique of concealing private information in a cover medium in such a way that it becomes impossible for a third party to discover that confidential information is contained in the cover. With the inception of new technologies, barcodes have become one of the most popular mechanisms for protecting sensitive information. 3D barcodes accommodate higher data rates by using color as a third dimension. They serve as a reliable medium for hiding data because they use no error-correction levels, which makes it very difficult to alter the encoded information. This paper introduces the concept of data hiding in barcodes that use color as the third dimension. The process is classified into different categories and performance is evaluated using various statistical parameters.
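The paper's embedding scheme is not reproduced here; as a hypothetical illustration of "color as the third dimension", the sketch below packs three payload bits into each barcode module, one bit per RGB channel:

```python
def bits_to_modules(bits):
    """Pack 3 payload bits into one color module, one bit per RGB channel
    (hypothetical scheme: '1' -> saturated channel, '0' -> empty channel).
    The payload is zero-padded to a multiple of 3 bits."""
    bits = bits + "0" * (-len(bits) % 3)
    return [tuple(255 if b == "1" else 0 for b in bits[i:i + 3])
            for i in range(0, len(bits), 3)]

def modules_to_bits(modules):
    """Recover the bit stream by thresholding each channel at mid-gray,
    tolerating moderate color distortion in the scanned image."""
    return "".join("1" if c >= 128 else "0" for m in modules for c in m)

payload = "1011001"
modules = bits_to_modules(payload)
print(modules_to_bits(modules)[:len(payload)])  # original payload back
```

A real 3D barcode would quantize each channel into more than two levels for higher capacity; the binary mapping here only illustrates the round trip.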
Salient infrared target and visible image fusion based on morphological segmentation
Pawanjot Kaur, Harbinder Singh, Vinay Kumar
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269648

Image fusion integrates the salient information from different images of the same scene into a single image, which is valuable for human visualization, computer vision, and other image-processing tasks. In this paper, a single-resolution weighted-average image fusion approach based on morphological operations is proposed. To select salient infrared target details from infrared imagery and spatial detail from visible imagery, morphological operations are applied to the input images to compute weight maps. With the proposed method, spatial information is largely preserved and infrared targets can be easily visualized in the resulting fused images. Experimental results demonstrate the validity of morphological operations for weighted-average fusion of infrared and visible images.
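A one-dimensional sketch of the weighted-average idea, using the morphological gradient (dilation minus erosion) as the saliency weight; the paper operates on 2D images with its own structuring elements, so this is a simplified assumption:

```python
def dilate(sig, w=1):
    """Grayscale dilation with a flat window of half-width w."""
    return [max(sig[max(0, i - w):i + w + 1]) for i in range(len(sig))]

def erode(sig, w=1):
    """Grayscale erosion with a flat window of half-width w."""
    return [min(sig[max(0, i - w):i + w + 1]) for i in range(len(sig))]

def fuse(ir, vis):
    """Weighted-average fusion: the morphological gradient is large near
    salient structure, so each source dominates where it carries detail."""
    w_ir = [d - e for d, e in zip(dilate(ir), erode(ir))]
    w_vis = [d - e for d, e in zip(dilate(vis), erode(vis))]
    fused = []
    for a, b, wa, wb in zip(ir, vis, w_ir, w_vis):
        s = wa + wb
        fused.append((wa * a + wb * b) / s if s else (a + b) / 2)
    return fused

# A hot infrared target against a flat visible background wins the weight
# at its location, so it survives into the fused signal.
print(fuse([0, 0, 10, 0, 0], [5, 5, 5, 5, 5]))
```

The per-sample normalization keeps the fused output inside the dynamic range of the inputs, which is why weighted averaging preserves spatial detail without halo artifacts.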
Comparative analysis of color and texture features in content based image retrieval
J. Kaur
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269748

Content-based image retrieval (CBIR) extracts from a large dataset the set of images most relevant to a query image. CBIR is used in many important areas such as education, defense, biomedicine, and crime prevention. In CBIR, images are indexed according to content, i.e., color, texture, and shape features derived from the images. Many features and algorithms can be used to improve retrieval accuracy and to reduce retrieval time. In this paper, we compare different algorithms for extracting color and texture features of an image and retrieving the relevant images. We measure the similarity between two images using different distance measures. The performance of each method is evaluated individually in terms of average precision.
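The basic CBIR loop (extract a feature vector, compare with a distance measure, rank the database) can be sketched with a coarse color histogram and Euclidean distance; these are one representative choice among the features and distances the paper compares:

```python
from math import sqrt

def color_histogram(pixels, bins=4):
    """Quantize 0-255 intensities into `bins` buckets and normalize, so
    images of different sizes yield comparable feature vectors."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return [c / len(pixels) for c in hist]

def euclidean(h1, h2):
    """One of several possible distance measures between feature vectors."""
    return sqrt(sum((a - b) ** 2 for a, b in zip(h1, h2)))

def retrieve(query, database):
    """Rank database images by feature distance to the query (smaller =
    more similar); average precision is then computed over this ranking."""
    q = color_histogram(query)
    return sorted(database, key=lambda img: euclidean(q, color_histogram(img)))

dark, bright = [10, 20, 5, 0], [250, 240, 230, 255]
print(retrieve([0, 0, 0, 0], [bright, dark]))  # dark image ranked first
```

Swapping `euclidean` for another metric (e.g. chi-square or histogram intersection) changes only the `key` function, which is what makes the comparative evaluation straightforward.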
Non-invasive EEG-metric based stress detection
Gaurav, R. Anand, Vinod Kumar
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269733

Psychological stress is a vital parameter related to an individual's health and cognitive performance, and it may affect emotions and professional efficiency. A regular stress profile can be used as neurofeedback for clinical or personal assessment. This paper describes a method to detect mental stress level from physiological parameters. An electroencephalogram (EEG)-based binary stress classifier is developed and validated against the probabilistic stress profile of a differential stress inventory questionnaire. A non-invasive 9-channel EEG is used to record physiological signals, and EEG-metric-based cognitive state and workload outputs are generated for 41 healthy volunteers (37 males and 4 females, age 24±5 years). All subjects performed three simple tasks: closing the eyes, focusing on a red dot at the center of a dark screen, and focusing on a white screen. Central tendencies (mean, median, and mode) of the EEG metrics (sleep onset, distraction, low engagement, high engagement, and cognitive state) are extracted as features. Each subject's class, low stress or high stress, is obtained from the probabilistic stress profile of the differential stress inventory and used as the training output. A multilayer-perceptron-based binary support vector machine classifier was trained under supervision to detect the stress class one subject at a time: 40 subjects' samples were used for training and the held-out 41st subject's stress class was determined by the trained classifier, rotating the held-out subject through all 41. Out of the 41 subjects, the stress level of 30 was correctly identified.
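The evaluation protocol described above is leave-one-out cross-validation. The sketch below shows that protocol with a nearest-centroid stand-in classifier (the paper uses an MLP-based SVM, which is not reproduced here):

```python
def nearest_centroid(train_x, train_y, x):
    """Stand-in classifier (the paper uses an MLP-based SVM): predict the
    class whose feature centroid is closest to x."""
    centroids = {}
    for label in set(train_y):
        pts = [p for p, l in zip(train_x, train_y) if l == label]
        centroids[label] = [sum(c) / len(pts) for c in zip(*pts)]
    sq = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    return min(centroids, key=lambda l: sq(centroids[l], x))

def leave_one_out_accuracy(xs, ys, classify=nearest_centroid):
    """The paper's protocol: train on 40 subjects, classify the held-out
    41st, and rotate the held-out subject through all 41."""
    correct = sum(
        classify(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:], xs[i]) == ys[i]
        for i in range(len(xs)))
    return correct / len(xs)
```

On the paper's data this procedure yields 30/41 correct; the stand-in classifier here only illustrates the rotation of the held-out subject.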
A comparative study of edge detectors in digital image processing
Ashutosh Sharma, Mohd Dilshad Ansari, Rajiv Kumar
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269683

Edge detection is one of the most fundamental and commonly used operations in image processing and pattern recognition. Edges form the outline of an object and thus reduce the size of the data without losing useful information. An edge is the boundary between an object and the background, and it indicates the boundary between overlapping objects. Edge detection reduces the amount of data to be processed by removing unnecessary features. Knowing the positions of these boundaries is critical in image enhancement, recognition, restoration, and compression. The edges of an image are among its most important attributes and provide valuable information for human image perception. This work spans digital image processing and telecommunication engineering, both very wide fields. In this paper a comparison of different edge detectors is made; the results, based on mean square error and peak signal-to-noise ratio, show that an intuitionistic fuzzy edge detector outperforms the existing edge detectors.
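The two quality metrics used in the comparison are standard and can be stated directly; the flattened pixel lists below stand in for full 2D edge maps:

```python
from math import log10

def mse(ref, out):
    """Mean square error between reference and detector-output pixels."""
    return sum((a - b) ** 2 for a, b in zip(ref, out)) / len(ref)

def psnr(ref, out, peak=255):
    """Peak signal-to-noise ratio in dB: 10*log10(peak^2 / MSE). Higher
    means the detector output is closer to the reference edge map."""
    e = mse(ref, out)
    return float("inf") if e == 0 else 10 * log10(peak ** 2 / e)

# Reference binary edge map vs. a detector output with one soft pixel:
print(psnr([255, 0, 255, 0], [255, 0, 250, 0]))
```

Ranking detectors by PSNR against a common reference is what allows the paper's head-to-head comparison; a lower MSE always corresponds to a higher PSNR for the same peak value.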
Morphological based moving object detection with background subtraction method
Rudrika Kalsotra, Sakshi Arora
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269694

Moving object detection is an active research area in video processing and computer vision, forming the base of many video analytics applications. The typical impediments to detecting moving objects, including dynamic scenes, sudden illumination variations, complex backgrounds, shadows, bootstrapping, video noise, and camouflage, receive the attention of researchers around the globe. This study proposes a morphology-based approach to moving object detection. Morphological operations are combined with a background subtraction technique and thresholding in the experiments. Furthermore, this paper outlines methods of moving object detection and summarizes recent research trends in this direction. The goal of this research is to explore the effects of morphological operations on the detection of moving objects. The preliminary results indicate that the proposed approach can generate accurate and complete moving objects while keeping the details required for meaningful object detection intact.
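The combination of background subtraction, thresholding, and morphology described above can be sketched in one dimension (real frames are 2D, and the threshold value here is an illustrative assumption):

```python
def background_subtract(frame, background, thresh=25):
    """Threshold the absolute frame-background difference into a binary
    foreground mask (1 = moving pixel)."""
    return [1 if abs(f - b) > thresh else 0 for f, b in zip(frame, background)]

def opening(mask):
    """Morphological opening (erosion then dilation, 3-sample window):
    removes isolated noise detections but preserves connected objects."""
    erode = lambda m: [min(m[max(0, i - 1):i + 2]) for i in range(len(m))]
    dilate = lambda m: [max(m[max(0, i - 1):i + 2]) for i in range(len(m))]
    return dilate(erode(mask))

background = [50, 50, 50, 50, 50, 50, 50, 50, 50]
frame      = [50, 50, 99, 50, 50, 99, 99, 99, 50]  # noise pixel + object
print(opening(background_subtract(frame, background)))
# [0, 0, 0, 0, 0, 1, 1, 1, 0]: the lone noisy pixel is removed,
# the 3-pixel moving object survives intact
```

This is why the paper pairs the two steps: subtraction alone is noisy, while the opening cleans the mask without eroding the complete object region away.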
Computational model predictions of level dependent changes in vowel identification with addition of rate-place cue
P. Misra, A. Chintanpalli
Pub Date: 2017-09-01 | DOI: 10.1109/ISPCC.2017.8269751

A signal processing model based on the temporal cues of auditory-nerve fibers had previously been developed to understand level-dependent changes in vowel identification scores. In this study, the rate-place cues of auditory-nerve fibers were added to that existing temporal model of vowel identification. The model comprises the human version of the auditory-nerve model, with rate-place cues added, together with a neural network that identifies vowels. Model predictions of vowel identification across levels using only temporal cues are compared with predictions using both temporal and rate-place cues of auditory-nerve fibers. This paper also analyses vowel identification scores from the perspective of the auditory-nerve fibers corresponding to the first and second formants (F1 and F2), in addition to the entire population of auditory-nerve fibers. The model predictions revealed that the representation of the second formant (F2) improved with the added rate-place cues, especially at low-to-mid levels, which could be associated with the lower acoustic energy of F2. Thus, this paper possibly explains the role of rate-place cues in vowel identification scores across levels.