Immune System Modeling and Analysis using CARMA
Daniela Paula Petrinca, E. Todoran
2018 IEEE 14th International Conference on Intelligent Computer Communication and Processing (ICCP)
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516596
CARMA (Collective Adaptive Resource-Sharing Markovian Agents) is a recently developed stochastic process algebra that provides constructs for specifying the behavior of collective adaptive systems. Using CARMA together with membrane computing patterns, we develop and analyze a model of the immune system's response to virus attacks. By varying the rates of virus propagation, replication and destruction in our CARMA model, we investigate formal conditions for a successful immune response.
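The paper's CARMA specification is not reproduced in this listing. As a rough illustration of how the three rates interact, here is a minimal birth-death (Gillespie-style) sketch in Python; the rate values, the clearance criterion, and the single-population model are all hypothetical simplifications, not the authors' model.

```python
import random

def simulate(v0, replicate, destroy, t_max=100.0, rng=None):
    """Gillespie-style simulation of a virus population.

    Each virus replicates at rate `replicate` and is destroyed by the
    immune response at rate `destroy` (per-virus exponential clocks).
    Returns True if the infection is cleared (population reaches 0)
    before `t_max`. Rates and structure are illustrative only.
    """
    rng = rng or random.Random(0)
    v, t = v0, 0.0
    while 0 < v and t < t_max:
        total = (replicate + destroy) * v      # total event rate
        t += rng.expovariate(total)            # time to next event
        if rng.random() < replicate / (replicate + destroy):
            v += 1                             # replication event
        else:
            v -= 1                             # destruction event
    return v == 0

# When destruction outpaces replication, clearance dominates.
cleared = sum(simulate(10, 0.5, 1.5, rng=random.Random(s)) for s in range(200))
print(cleared / 200)
```

With destruction rate above replication rate the chain drifts toward extinction, which mirrors the kind of rate condition the paper investigates formally.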
Experimental Analysis of Emotion Classification Techniques
Tiberius Dumitriu, Corina Cimpanu, F. Ungureanu, V. Manta
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516647
Existing work in Human-Computer Interaction (HCI) aims at a more natural interplay between the actors involved. Automatic and reliable estimation of affective states, in particular from physiological signals, has received much attention lately. From the standpoint of physiological measures, emotion assessment benefits from pure, unaltered signals, in contrast to facial or vocal measures, which can be simulated. In this paper, several classification approaches based on physiological measures are analyzed in different scenarios for assessing the affective state. The analysis is performed on data acquired from Eye-Tracker (ET) sensors, as well as Heart Rate (HR) and Electro-Dermal Activity (EDA) measurements, in experiments based on visual stimuli. To this end, AdaBoost (AB), K-Nearest Neighbors (KNN), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) classifiers are compared, using entropy indices as primary features.
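The paper uses entropy indices as primary features. As a minimal sketch of one such index, here is a histogram-based Shannon entropy of a 1-D signal in pure Python; the bin count and estimator are assumptions for illustration, not the paper's feature definitions.

```python
import math
from collections import Counter

def shannon_entropy(signal, bins=8):
    """Histogram-based Shannon entropy of a 1-D signal, in bits.

    A simple stand-in for an entropy feature: discretize the signal
    into `bins` amplitude bins and compute -sum(p * log2(p)).
    """
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0            # guard against a flat signal
    counts = Counter(min(int((x - lo) / width), bins - 1) for x in signal)
    n = len(signal)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

# A constant signal carries no information; a varying one carries more.
flat = shannon_entropy([1.0] * 64)
varied = shannon_entropy([math.sin(0.3 * i) for i in range(64)])
print(flat, varied)
```

Such a scalar per signal (HR, EDA, gaze) is the kind of compact feature that can then be fed to AB, KNN, LDA or SVM classifiers.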
Environment Perception Architecture using Images and 3D Data
Horatiu Florea, R. Varga, S. Nedevschi
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516581
This paper discusses the architecture of an environment perception system for autonomous vehicles. The modules of the system are described briefly, and we focus on the architectural changes that enable: decoupling of data acquisition from data processing; synchronous data processing; parallel computation on the GPU and multiple CPU cores; efficient data passing using pointers; and an adaptive architecture capable of working with different numbers of sensors. The experimental results compare execution times before and after the proposed optimizations. We achieve a 10 Hz frame rate for an object detection system working with 4 cameras and 4 LIDAR point clouds.
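The decoupling of acquisition from processing, with data passed by reference rather than copied, can be sketched as a producer-consumer pipeline. The sensor names, counts and frame contents below are placeholders, not the paper's actual modules or C++ implementation.

```python
import queue
import threading

# Acquisition threads put references to frames on a bounded queue instead
# of copying data; a processing thread consumes them synchronously.
frames = queue.Queue(maxsize=8)

def acquire(sensor_id, n_frames):
    """Simulated sensor: publishes frame references as they arrive."""
    for i in range(n_frames):
        frames.put((sensor_id, {"seq": i}))    # pass a reference, no copy

def process(expected, results):
    """Processing stage: consumes frames from all sensors in one place."""
    for _ in range(expected):
        sensor_id, frame = frames.get()
        results.append((sensor_id, frame["seq"]))
        frames.task_done()

results = []
producers = [threading.Thread(target=acquire, args=(s, 5)) for s in range(4)]
consumer = threading.Thread(target=process, args=(20, results))
for t in producers:
    t.start()
consumer.start()
for t in producers:
    t.join()
consumer.join()
print(len(results))   # frames processed across 4 simulated sensors
```

Adding or removing a sensor only changes the number of producer threads, which is the adaptivity property the architecture aims for.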
Adapting the SoundThimble Movement Sonification System for Young Motion-impaired Users
Grigore Burloiu
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516435
SoundThimble is an interactive sound installation that uses motion capture and machine learning to establish relationships between human movement and virtual objects in 3D space. This paper documents the strategy for adapting the system for interaction with children with physical and cognitive impairments. Starting from a specific child subject, we show how the hardware, software and interaction design can be modified, with a view to generalising to a wider range of young disabled users. The project's ultimate goal is threefold: inclusion, entertainment, and rehabilitation.
A Fast Ransac Based Approach for Computing the Orientation of Obstacles in Traffic Scenes
F. Oniga, S. Nedevschi
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516642
A low-complexity approach for computing the orientation of 3D obstacles detected from lidar data is proposed in this paper. The method takes as input obstacles represented as cuboids without orientation (aligned with the reference frame). Each cuboid contains a cluster of obstacle locations (discrete grid cells). First, for each obstacle, the boundaries visible to the perception system are selected. A model consisting of two perpendicular lines is fitted to the set of boundary cells, one line for each presumed visible side. The dominant line is computed with a RANSAC approach. Then the second line is searched for, under a perpendicularity constraint relative to the dominant line. The existence of the second line is used to validate the orientation. Finally, additional criteria are proposed to select the best orientation, based on the free area of the cuboid (in top view) that is visible to the perception system.
Benchmarking and User Types in Virtual Desktop Infrastructures
Christina Sigl, A. Berl
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516634
Many different benchmarks and user types are available for evaluating virtual desktop infrastructure solutions. Neither benchmarks nor user types are directly comparable, because they are not uniformly defined. Therefore, the existing benchmarks and user types are analyzed. Benchmarks are discussed with regard to their intended purpose and structured into the proposed scheme. Existing user types are analyzed with regard to their properties, workloads and complexity, and a classification scheme is proposed. The adaptation of both proposed schemes is discussed for the workstation-based virtual desktop infrastructure approach.
Cross Documents Concept Augmentation
M. Vasiu, L. Marghescu, Ioana Barbantan, R. Potolea
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516582
This paper proposes a strategy for exploring and integrating related information extracted from unstructured documents with different degrees of confidence, standardization and representation. The strategy was instantiated in the medical domain and designed for the English language. Its goal is to augment the therapeutic information in patient leaflets with information extracted from clinical records. The approach proved sound, as the information from the clinical records aligns with the information in the standardized sources. It confirmed the assumption that drug repositioning can be derived from clinical records, thus augmenting existing medical knowledge. The metrics reported for the concept extraction strategy (95.14% precision, 83.3% recall for patient leaflets; 94.07% precision, 87.27% recall for EHRs) further support the good performance of the entity correlation approach. The degree of correlation between the information extracted from the two data sources, reported as matches, is 85%.
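The reported figures follow the standard precision/recall definitions over extracted versus reference concepts. The concept sets below are invented examples; the paper's actual matching criteria are not reproduced here.

```python
def precision_recall(extracted, gold):
    """Standard precision/recall over sets of extracted concepts.

    precision = |extracted AND gold| / |extracted|
    recall    = |extracted AND gold| / |gold|
    """
    tp = len(extracted & gold)
    precision = tp / len(extracted) if extracted else 0.0
    recall = tp / len(gold) if gold else 0.0
    return precision, recall

# Hypothetical concepts: 2 of 3 extractions are correct, 2 of 3 gold found.
p, r = precision_recall({"aspirin", "fever", "rash"},
                        {"aspirin", "fever", "cough"})
print(p, r)
```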
NEARBY Platform for Detecting Asteroids in Astronomical Images Using Cloud-based Containerized Applications
V. Bâcu, A. Sabou, T. Stefanut, D. Gorgan, Ovidiu Vaduvescu
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516578
Continuous monitoring and surveying of nearby space to detect Near Earth Objects (NEOs) and Near Earth Asteroids (NEAs) is essential because of the threat such objects pose to the future of our planet. More computational resources and advanced algorithms are needed to keep pace with the exponential growth in digital camera performance and to process, in near real time, the data coming from large surveys. This paper presents a software platform called NEARBY that supports automated detection of moving sources (asteroids) among the stars in astronomical images. The detection procedure is based on classic "blink" detection; the system then supports visual analysis techniques to validate the moving sources, assisted by static and dynamic presentations.
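The core idea of "blink" detection is that a source appearing at different positions in two aligned exposures of the same field is a moving-object candidate. The toy intensity grids and threshold below are illustrative; a real pipeline such as NEARBY first aligns and photometrically matches the frames.

```python
def moving_sources(frame_a, frame_b, threshold=50):
    """Flag pixels that are bright in only one of two aligned frames.

    Frames are lists of rows of intensities. Returns (x, y) positions
    where the intensity difference exceeds `threshold`.
    """
    moved = []
    for y, (row_a, row_b) in enumerate(zip(frame_a, frame_b)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs(a - b) > threshold:         # present in one frame only
                moved.append((x, y))
    return moved

blank = [[0] * 5 for _ in range(5)]
frame_a = [row[:] for row in blank]
frame_b = [row[:] for row in blank]
frame_a[2][1] = 200                            # candidate at (1, 2)
frame_b[2][3] = 200                            # moved to (3, 2)
frame_a[0][0] = frame_b[0][0] = 255            # a star: static, ignored
found = moving_sources(frame_a, frame_b)
print(found)
```

The static star cancels out in the difference, while both positions of the moving source are flagged for the visual validation step.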
Towards Improving Location Identification by Deep Learning on Images
R. R. Slavescu, L. Szakacs
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516641
When relying on GPS systems for navigation inside cities, localization errors may arise, especially when passing crossroads or in areas with poor signal due to tall buildings. To address this, we investigated a new navigation method based on identifying location through deep learning. We trained two Convolutional Neural Networks on street images, then used them for location recognition. The first network identifies the street, while the second identifies the segment of the street we are on. We obtained 99.70% accuracy for street recognition and 96.02% for segment recognition. The results show that, at a proof-of-concept level, Convolutional Neural Networks can accurately identify a location from images, which could complement GPS localization systems.
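The two-network design amounts to hierarchical classification: one classifier picks the street, and a street-specific classifier picks the segment. The dictionary-backed stubs and feature names below stand in for the paper's two CNNs and are purely hypothetical.

```python
def locate(image_features, street_clf, segment_clfs):
    """Two-stage localization: route the image to a per-street model."""
    street = street_clf(image_features)        # stage 1: which street?
    segment = segment_clfs[street](image_features)  # stage 2: which segment?
    return street, segment

# Stub classifiers with invented decision rules, replacing the CNNs.
street_clf = lambda f: "Main St" if f["width"] > 10 else "Side St"
segment_clfs = {
    "Main St": lambda f: 1 if f["trees"] else 2,
    "Side St": lambda f: 1,
}
loc = locate({"width": 14, "trees": True}, street_clf, segment_clfs)
print(loc)
```

Splitting the problem this way keeps each model's label space small, which is one plausible reason the reported per-stage accuracies are high.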
Atomic invariants verification and deadlock detection at compile-time
Ionut Tamas, I. Salomie, Marcel Antal
Pub Date: 2018-09-01 | DOI: 10.1109/ICCP.2018.8516602
Modern systems must take full advantage of the underlying hardware in order to achieve higher throughput and lower latency. A common way of maximizing hardware usage is multithreading. Multithreaded techniques, however, are hard to reason about and can yield hard-to-detect bugs, such as deadlocks, livelocks or race conditions, arising from unwanted interleavings of threads during the system's execution. Atomic locks are a standard mechanism for alleviating such issues: by specifying which regions of code must execute atomically, the shared memory remains in a consistent state regardless of thread interleavings, and execution can be analyzed as a simple serial execution that is easy to reason about, yielding increased programmer productivity and system efficiency. Our paper proposes a system that allows users to easily verify whether a C# codebase correctly implements the way shared memory (fields or properties) is modified, and that can detect race conditions or deadlocks for the specified shared memory. The main goal is to improve developer productivity and the system codebase by specifying the atomicity constraints as unit or integration tests. We present the overall architecture of the system, showing how atomic invariants are checked and deadlocks are identified, as well as the integration with an existing codebase. We also describe how the system proves correctness in checking these invariants. We have verified our system against multithreaded C# codebases; it successfully checks the atomicity invariants and deadlock cases, outputting the concrete scenarios in which they can occur. We also provide a way to decrease the risk of concurrency-bug regressions and to improve code quality, showing that our system achieves its goals: increased developer productivity, correct deadlock detection, atomic invariant checking and concurrency-bug mitigation.