Automatic Detection and Reading of Dangerous Goods Plates
P. Roth, Martin Köstinger, Paul Wohlhart, H. Bischof, J. Birchbauer
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.28
In this paper, we present an efficient solution for automatic detection and reading of dangerous goods plates on trucks and trains. According to the ADR agreement, dangerous goods transports are marked with an orange plate carrying the hazard class and the identification number of the hazardous substance. Since high-resolution images (often of low quality) have to be processed under real-world conditions, an efficient and robust system is required. In particular, we propose a multi-stage system consisting of an acquisition step, a saliency region detector (to reduce the run-time), a plate detector, and a robust recognition step based on Optical Character Recognition (OCR). To demonstrate the system, we show qualitative and quantitative localization and recognition results on two challenging data sets. Building on proven robust and efficient methods, we achieve excellent detection and classification results under hard environmental conditions at low run-time.
Trajectory Based Activity Discovery
Guido Pusiol, F. Brémond, M. Thonnat
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.15
This paper proposes a framework to discover activities in an unsupervised manner and to add semantics with minimal supervision. The framework takes basic trajectory information as input and goes up to video interpretation. The work reduces the gap between low-level information and semantic interpretation by building an intermediate layer composed of Primitive Events. The proposed representation for primitive events aims at capturing small meaningful motions over the scene, with the advantage of being learnt in an unsupervised manner. We propose the discovery of activities using these Primitive Events as the main descriptors. The activity discovery is performed using only real tracking data. Semantics are added to the discovered activities, and the recognition of activities (e.g., "Cooking", "Eating") can then be performed automatically on new datasets. Finally, we validate the descriptors by discovering and recognizing activities in a home-care application dataset.
A Local Directional Pattern Variance (LDPv) Based Face Descriptor for Human Facial Expression Recognition
M. H. Kabir, T. Jabid, O. Chae
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.9
Automatic facial expression recognition is a challenging problem in computer vision and has gained significant importance in applications of human-computer interaction. This paper presents a new appearance-based feature descriptor, the Local Directional Pattern Variance (LDPv), to represent facial components for human expression recognition. In contrast with LDP, the proposed LDPv introduces the local variance of the directional responses to encode contrast information within the descriptor. The LDPv representation thus characterizes both the spatial structure and the contrast information of each micro-pattern. Template matching and a Support Vector Machine (SVM) classifier are used to classify the LDPv feature vectors of different prototypic expression images. Experimental results on the Cohn-Kanade database show that the LDPv descriptor yields an improved recognition rate compared to existing appearance-based feature descriptors such as the Gabor wavelet and the Local Binary Pattern (LBP).
Performance Evaluation of a People Tracking System on PETS2009 Database
Donatello Conte, P. Foggia, G. Percannella, M. Vento
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.87
In this paper, a system for autonomous video surveillance in relatively unconstrained environments is described. The system consists of two principal phases: object detection and object tracking. An adaptive background subtraction, together with a set of corrective algorithms, is used to cope with variable lighting, dynamic and articulated scenes, and similar difficulties. The tracking algorithm is based on a matrix representation of the problem and is used to handle splitting and occlusion problems. When the tracking algorithm fails to follow the actual object trajectories, an appearance-based module is used to restore object identities. An experimental evaluation, carried out on the PETS2009 tracking dataset, shows promising results.
A Framework for an Event Driven Video Surveillance System
Declan F. Kieran, Weiqi Yan
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.57
In this paper we present an event-driven surveillance system. The purpose of this system is to enable thorough exploration of surveillance events. The system uses a client-server web architecture, as this provides scalability for further development of the system infrastructure. The system is designed to be accessed by surveillance operators, who can review and comment on events generated by our event-detection processing modules. The presentation interface is based around a cross between Gmail and YouTube, as we believe these interfaces to be intuitive for ordinary computer operators. Our motivation is to fully utilize the events archived in our database and to further refine the relevant events. We do not just focus on event detection, but are working towards the optimization of event detection. To the best of our knowledge, this system provides a novel approach to the technological surveillance paradigm.
Learning of Scene-Specific Object Detectors by Classifier Co-Grids
Sabine Sternig, P. Roth, H. Bischof
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.10
Recently, classifier grids have been shown to be a considerable alternative to sliding-window approaches for object detection from static cameras. The main drawback of such methods is that they are biased by the initial model: the classifiers can be adapted to changing environmental conditions, but due to conservative updates no new object-specific information is acquired. Thus, the goal of this work is to increase the recall of scene-specific classifiers while preserving their accuracy and speed. In particular, we introduce a co-training strategy for classifier grids using a robust on-line learner, so the robustness is preserved while the recall can be increased. The co-training strategy robustly provides negative as well as positive updates. In addition, the number of negative updates can be drastically reduced, which further speeds up the system. These benefits are demonstrated experimentally on different publicly available surveillance benchmark data sets.
Counting People in Crowded Environments by Fusion of Shape and Motion Information
Michael Pätzold, Rubén Heras Evangelio, T. Sikora
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.92
Knowing the number of people in a crowded scene is of great interest in surveillance. In the past, this problem has been tackled mostly in an indirect, statistical way. This paper presents a direct, counting-by-detection method based on fusing spatial information obtained from an adapted Histogram of Oriented Gradients (HOG) algorithm with temporal information obtained by exploiting the distinctive motion characteristics of different human body parts. For that purpose, this paper defines a measure for the uniformity of motion. Furthermore, the system performance is enhanced by validating the resulting human hypotheses through tracking and by applying coherent motion detection. The approach is illustrated with an experimental evaluation.
Group Level Activity Recognition in Crowded Environments across Multiple Cameras
Ming-Ching Chang, N. Krahnstoever, Ser-Nam Lim, Ting Yu
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.65
Environments such as schools, public parks, prisons, and others that contain a large number of people are typically characterized by frequent and complex social interactions. In order to identify activities and behaviors in such environments, it is necessary to understand the interactions that take place at a group level. To this end, this paper addresses the problem of detecting and predicting suspicious, and in particular aggressive, behaviors between groups of individuals such as gangs in prison yards. The work builds on a mature multi-camera multi-target person tracking system that operates in real-time and has the ability to handle crowded conditions. We consider two approaches for grouping individuals: (i) agglomerative clustering, favored by the computer vision community, and (ii) decisive clustering based on the concept of modularity, which is favored by the social network analysis community. We show the utility of such grouping analysis towards the detection of group activities of interest. The presented algorithm is integrated with a system operating in real-time to successfully detect highly realistic aggressive behaviors enacted by correctional officers in a simulated prison environment. We present results from these enactments that demonstrate the efficacy of our approach.
Background Subtraction under Sudden Illumination Changes
L. Vosters, Caifeng Shan, T. Gritti
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.72
Robust background subtraction under sudden illumination changes is a challenging problem. In this paper, we propose an approach to address this issue, which combines the Eigenbackground algorithm with a statistical illumination model. The first algorithm is used to give a rough reconstruction of the input frame, while the second one improves the foreground segmentation. We introduce an online spatial likelihood model by detecting reliable background and foreground pixels. Experimental results illustrate that our approach achieves consistently higher accuracy compared to several state-of-the-art algorithms.
Thirteen Hard Cases in Visual Tracking
D. M. Chu, A. Smeulders
2010 7th IEEE International Conference on Advanced Video and Signal Based Surveillance. DOI: https://doi.org/10.1109/AVSS.2010.85
Visual tracking is a fundamental task in computer vision. However, there has so far been no systematic way of analyzing visual trackers. In this paper, we propose a method that can help researchers determine the strengths and weaknesses of any visual tracker. To this end, we consider visual tracking as an isolated problem and decompose it into fundamental and independent subproblems, each designed to correspond to a different tracking circumstance. By evaluating a visual tracker on a specific subproblem, we can determine how good it is with respect to that dimension. In total, our decomposition yields thirteen subproblems. We demonstrate the use of the proposed method by analyzing the working conditions of two state-of-the-art trackers.