Title: Passive sensor based dynamic object association with particle filtering
Authors: S. Cho, Jinseok Lee, Sangjin Hong
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425311
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
This paper develops and evaluates the threshold-based algorithm proposed in [S.H. Cho, J. Lee, and S. Hong, "Passive Sensor Based Dynamic Object Association Method in Wireless Sensor Network," Proceedings of MWSCAS07 and NEWCAS07, Aug. 2007] for dynamic data association in wireless sensor networks. Each sensor node incorporates an RFID reader and an acoustic sensor, whose signals are fused for tracking and associating multiple objects. The RFID tag is used for object identification, and the acoustic sensor for estimating object movement. To improve data association, we apply particle filtering to predict object positions; the augmented algorithm resolves more association cases, even when objects overlap. Simulation results are compared with those of the original algorithm alone, and association performance under single-node and multiple-node coverage is evaluated as a function of the sampling time.
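The abstract names particle filtering for predicting object positions but gives no detail. As an illustrative sketch only (not the authors' implementation), assuming a constant-velocity motion model and a range-only acoustic measurement from a node at the origin, the predict/weight/resample cycle might look like:

```python
import math
import random

def predict(particles, dt, noise=0.1):
    # Constant-velocity motion model; each particle is (x, y, vx, vy).
    return [(x + vx * dt + random.gauss(0, noise),
             y + vy * dt + random.gauss(0, noise),
             vx + random.gauss(0, noise),
             vy + random.gauss(0, noise))
            for (x, y, vx, vy) in particles]

def weight(particles, z, sigma=0.5):
    # Gaussian likelihood of an acoustic range measurement z taken
    # from a sensor node at the origin; weights are normalized.
    w = [math.exp(-((math.hypot(x, y) - z) ** 2) / (2 * sigma ** 2))
         for (x, y, _, _) in particles]
    s = sum(w) or 1.0
    return [wi / s for wi in w]

def resample(particles, weights):
    # Multinomial resampling proportional to the weights.
    idx = random.choices(range(len(particles)), weights, k=len(particles))
    return [particles[i] for i in idx]

def estimate(particles, weights):
    # Weighted-mean position estimate.
    return (sum(w * p[0] for w, p in zip(weights, particles)),
            sum(w * p[1] for w, p in zip(weights, particles)))
```

The sensor model (`sigma`, range-only measurement) is an assumption made for the sketch; the paper fuses RFID identity with the acoustic estimate on top of such a predictor.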
Title: Image enhancement in multi-resolution multi-sensor fusion
Authors: J. Jang, Yong Sun Kim, J. Ra
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425325
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
In multi-sensor image fusion, multi-resolution approaches have become popular because they preserve detailed information well. Among them, the gradient-based multi-resolution (GBMR) algorithm is known to reduce ringing artifacts near edges more effectively than the discrete wavelet transform (DWT)-based algorithm. However, since the GBMR algorithm does not consider the diagonal direction, ringing-artifact reduction at diagonal edges is unsatisfactory. In this paper, we generalize the GBMR algorithm by adopting the wavelet structure. The proposed algorithm thereby improves the fusion process in the high-frequency sub-bands so as to preserve the details of the input images, while fusing the low-frequency sub-band with regard to the overall contrast of the output image. To evaluate the proposed algorithm, we compare it with the DWT-based and GBMR algorithms. Experimental results clearly demonstrate that the proposed algorithm effectively reduces ringing artifacts at edges of all orientations and greatly enhances overall contrast while minimizing the loss of visual information.
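The paper's generalized GBMR transform is not reproduced here. As a minimal stand-in, a one-level 1-D Haar decomposition with the textbook fusion rules (average the low band for overall contrast, keep the larger-magnitude coefficient in the high band for detail) illustrates the kind of sub-band fusion the abstract describes:

```python
def haar1d(x):
    # One-level 1-D Haar analysis: low (approximation) and high (detail) bands.
    lo = [(x[2 * i] + x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    hi = [(x[2 * i] - x[2 * i + 1]) / 2 for i in range(len(x) // 2)]
    return lo, hi

def ihaar1d(lo, hi):
    # Exact inverse of haar1d.
    out = []
    for a, d in zip(lo, hi):
        out += [a + d, a - d]
    return out

def fuse(x, y):
    # Generic sub-band fusion: average the low band (overall contrast),
    # take the max-magnitude coefficient in the high band (detail).
    lx, hx = haar1d(x)
    ly, hy = haar1d(y)
    lo = [(a + b) / 2 for a, b in zip(lx, ly)]
    hi = [a if abs(a) >= abs(b) else b for a, b in zip(hx, hy)]
    return ihaar1d(lo, hi)
```

The Haar basis and the two fusion rules are assumptions for illustration; the paper's contribution is precisely a better-behaved transform and fusion rule than this baseline.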
Title: A 2D+3D face identification system for surveillance applications
Authors: F. Tsalakanidou, S. Malassiotis, M. Strintzis
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425309
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
This paper presents a novel surveillance system integrating 2D and 3D facial data, based on a low-cost sensor capable of real-time acquisition of 3D images and associated color images of a scene. Depth data are used for robust face detection, localization, and 3D pose estimation, as well as for compensating pose and illumination variations of facial images prior to classification. The proposed system was tested under an open-set identification scenario for surveillance of people passing through a relatively constrained area. Experimental results demonstrate the accuracy and robustness of the system under a variety of conditions usually encountered in surveillance applications.
Title: Verbal aggression detection in complex social environments
Authors: P. V. Hengel, T. Andringa
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425279
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
The paper presents a knowledge-based system designed to detect evidence of aggression by means of audio analysis. Detection is based on the way sounds are analyzed, and attract attention, in the human auditory system. The performance achieved is comparable to human performance in complex social environments. The SIgard system has been deployed in a number of real-life situations and was tested extensively in the inner city of Groningen. Experienced police observers annotated roughly 1400 recordings containing various degrees of shouting, which were used for optimization. All essential events, along with a small number of non-essential aggressive events, were detected. The system produces only a few false alarms (non-shouts) per microphone per year while missing no incidents, making it the first successful detection system for a non-trivial target in an unconstrained environment.
Title: Human body gesture recognition using adapted auxiliary particle filtering
Authors: A. Oikonomopoulos, M. Pantic
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425351
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
In this paper we propose a tracking scheme specifically tailored to tracking human body parts in cluttered scenes. We model the background and human skin using Gaussian mixture models and combine these estimates to localize the features to be tracked. We further use these estimates to determine which pixels belong to the background and which to the subject's skin, and we incorporate this information into the observation model of the tracking scheme. To handle self-occlusion (i.e., when one body part occludes another), we incorporate the direction of the observed motion into the propagation model. We demonstrate that the proposed method outperforms conventional Condensation and auxiliary particle filtering when the hands and the head are the tracked features. For gesture recognition, we use a variant of the longest common subsequence (LCSS) algorithm to obtain a distance measure between the acquired trajectories, and we use this measure to define new kernels for a relevance vector machine (RVM) classifier. We present results on real image sequences from a small database of people performing 15 aerobic exercises.
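The LCSS variant used for the trajectory kernels is not spelled out in the abstract. A standard real-valued LCSS, sketched here for 1-D trajectories (real trajectories are 2-D; the matching threshold `eps` is an assumed parameter), counts samples that match within a tolerance and turns the count into a normalized dissimilarity:

```python
def lcss(a, b, eps=0.5):
    # Longest common subsequence for real-valued sequences: two samples
    # "match" when they differ by at most eps; classic O(m*n) DP.
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if abs(a[i - 1] - b[j - 1]) <= eps:
                dp[i][j] = dp[i - 1][j - 1] + 1
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]

def lcss_distance(a, b, eps=0.5):
    # Normalized dissimilarity in [0, 1]; 0 for identical trajectories.
    return 1.0 - lcss(a, b, eps) / min(len(a), len(b))
```

A distance of this form can then be plugged into a kernel (e.g. exp(-d)) for the RVM stage, though the paper's exact kernel construction is not given here.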
Title: Efficient side information encoding for text hardcopy documents
Authors: P. Borges, E. Izquierdo, J. Mayer
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425370
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
This paper proposes a new coding method that significantly increases the signal-to-watermark ratio in document watermarking algorithms. One approach to text document watermarking is to treat text characters as a data structure with several modifiable features, such as size, shape, position, and luminance. In existing algorithms, these features are modified sequentially according to the bit values to be embedded. In contrast, the solution proposed here embeds information through positional coding: the information is carried by the positions of the modified characters rather than by a bit embedded in each character. The coding is based on combinatorial analysis and can embed more bits than the usual methods under a given distortion constraint. An analysis showing the superior performance of positional coding for this type of application is presented, and experiments validate both the analysis and the applicability of the method.
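The combinatorial idea can be made concrete: choosing which k of n characters to modify conveys log2(C(n, k)) bits, versus k bits when each modified character carries one bit. A small sketch using the standard combinatorial number system (the paper's exact encoder is not reproduced; function names are illustrative):

```python
import math

def positional_capacity(n, k):
    # Bits conveyed by the choice of which k of n characters to modify.
    return math.log2(math.comb(n, k))

def index_to_positions(n, k, index):
    # Combinatorial number system: map a message index in
    # [0, C(n, k)) to a unique k-subset of positions {0, ..., n-1}.
    positions = []
    for pos in range(n):
        if k == 0:
            break
        c = math.comb(n - pos - 1, k - 1)  # subsets that include pos
        if index < c:
            positions.append(pos)
            k -= 1
        else:
            index -= c
    return positions
```

For example, with n = 16 characters and k = 4 modifications, positional coding carries log2(C(16, 4)) ≈ 10.8 bits where the sequential scheme carries 4, which is the flavor of gain the paper analyzes under a distortion constraint.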
Title: A fast algorithm for adaptive background model construction using Parzen density estimation
Authors: T. Tanaka, Atsushi Shimada, Daisaku Arita, R. Taniguchi
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425366
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
Non-parametric representation of the pixel intensity distribution is quite effective for constructing a proper background model and detecting foreground objects accurately. From the viewpoint of practical application, however, the computation cost of the distribution estimation should be reduced. In this paper, we present fast estimation of the probability density function (PDF) of pixel values using Parzen density estimation, together with foreground object detection based on the estimated PDF. The PDF is computed by partially updating the PDF estimated at the previous frame, which greatly reduces the computation cost of the estimation. The background model thus adapts quickly to changes in the scene, so foreground objects can be detected robustly. Several experiments show the effectiveness of our approach.
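The "partial update" idea can be sketched as a recursive Parzen estimate over the discrete intensity axis: instead of re-summing kernels over all past samples, blend the previous PDF with the kernel of the newest sample. The kernel bandwidth, learning rate, and threshold below are assumed values for illustration, not the paper's:

```python
import math

BINS = 256    # discrete intensity levels
SIGMA = 4.0   # Parzen kernel bandwidth (assumed)
ALPHA = 0.05  # learning rate of the recursive update (assumed)

def kernel(center):
    # Gaussian Parzen kernel sampled on the intensity axis.
    norm = 1.0 / (SIGMA * math.sqrt(2 * math.pi))
    return [norm * math.exp(-((b - center) ** 2) / (2 * SIGMA ** 2))
            for b in range(BINS)]

def update(pdf, x):
    # Recursive (partial) Parzen update: O(BINS) per frame rather than
    # re-estimating from the full sample history.
    k = kernel(x)
    return [(1 - ALPHA) * p + ALPHA * ki for p, ki in zip(pdf, k)]

def is_foreground(pdf, x, thresh=1e-3):
    # A pixel whose value is unlikely under the background PDF is foreground.
    return pdf[x] < thresh
```

Because old observations decay geometrically with `ALPHA`, the model adapts to scene changes at a rate set by that single parameter.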
Title: Improved one-class SVM classifier for sounds classification
Authors: A. Rabaoui, M. Davy, S. Rossignol, Z. Lachiri, N. Ellouze
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425296
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
This paper applies optimized one-class support vector machines (1-SVMs) as a discriminative framework for a specific audio classification problem. First, since an SVM classifier with a Gaussian RBF kernel is sensitive to the kernel width, the width is scaled in a distribution-dependent way to avoid both under-fitting and over-fitting. Moreover, an advanced dissimilarity measure is introduced. We illustrate the performance of these methods on an audio database containing environmental sounds of interest for surveillance and security applications. Experiments on a multi-class problem show that, by choosing the SVM parameters appropriately, we can efficiently address a sound classification problem characterized by complex real-world datasets.
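The paper's specific width-scaling rule and the full 1-SVM optimization are not reproduced here. As a stand-in sketch: the widely used median-pairwise-distance heuristic gives a distribution-dependent RBF width, and a kernel-mean score (which a 1-SVM decision function resembles when most points become support vectors) gives a simple in-class/out-of-class test:

```python
import math

def median_pairwise(xs):
    # Distribution-dependent width: median of all pairwise distances.
    # A common heuristic; the paper's own scaling rule may differ.
    d = sorted(abs(a - b) for i, a in enumerate(xs) for b in xs[i + 1:])
    return d[len(d) // 2]

def rbf(a, b, sigma):
    # Gaussian RBF kernel on scalars (real audio features are vectors).
    return math.exp(-((a - b) ** 2) / (2 * sigma ** 2))

def score(x, train, sigma):
    # Kernel-mean score: high inside the training distribution, low outside.
    return sum(rbf(x, t, sigma) for t in train) / len(train)

def is_inlier(x, train, sigma, thresh=0.2):
    return score(x, train, sigma) >= thresh
```

A too-small `sigma` makes every test point an outlier (over-fitting), a too-large one accepts everything (under-fitting); tying `sigma` to the data spread is what keeps the score discriminative.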
Title: Detection of abnormal behaviors using a mixture of Von Mises distributions
Authors: S. Calderara, R. Cucchiara, A. Prati
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425300
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
This paper proposes the use of a mixture of von Mises distributions to detect abnormal behaviors of moving people. The mixture is built from an unsupervised training set using a k-medoids clustering algorithm based on the Bhattacharyya distance between distributions. The extracted medoids serve as the modes of the multi-modal mixture, whose weights are the priors of the corresponding medoids. Given the mixture model, a new trajectory is verified against it by treating each of its constituent directions as independent. Experiments on a real scenario composed of multiple, partially overlapping cameras are reported.
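The von Mises density and the independence assumption over trajectory directions translate directly into code. A minimal sketch (mixture parameters and the abnormality threshold are assumed values, not the paper's; the Bessel function is computed by series since the standard library lacks it):

```python
import math

def bessel_i0(x, terms=20):
    # Series expansion of the modified Bessel function I0(x).
    return sum((x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def vonmises_pdf(theta, mu, kappa):
    # Von Mises density on the circle: mean direction mu, concentration kappa.
    return math.exp(kappa * math.cos(theta - mu)) / (2 * math.pi * bessel_i0(kappa))

def mixture_pdf(theta, components):
    # components: list of (weight, mu, kappa); weights are the medoid priors.
    return sum(w * vonmises_pdf(theta, mu, kappa)
               for w, mu, kappa in components)

def is_abnormal(directions, components, thresh=0.05):
    # Treat each direction along the trajectory as independent, as in the
    # paper, and flag the trajectory when its likelihood is too low.
    likelihood = 1.0
    for th in directions:
        likelihood *= mixture_pdf(th, components)
    return likelihood < thresh ** len(directions)
```

A per-direction threshold raised to the trajectory length keeps the test comparable across trajectories of different lengths; a log-likelihood average would do the same more stably.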
Title: A profile of MPEG-7 for visual surveillance
Authors: J. Annesley, A. Colombo, J. Orwell, S. Velastín
Pub Date: 2007-09-05 | DOI: 10.1109/AVSS.2007.4425358
Venue: 2007 IEEE Conference on Advanced Video and Signal Based Surveillance
This paper builds on previous work to propose a metadata standard for video surveillance, with the aim of promoting interoperability. The starting point is the set of requirements under consideration for a Multimedia Application Format, covering the description of the surveillance system and of the activity in the scene; appropriate descriptions of the relation between camera and scene are also considered. To improve interoperability between systems and between components of a system, two types of restriction are proposed: first, a restricted subset of the MPEG-7 elements applicable to the surveillance domain; second, the use of MPEG-7 tools to include domain-specific taxonomies that restrict the names of elements used in semantic descriptions. Both proposals are incorporated into examples that demonstrate the use of the standard.