Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485290
H. Teimoori, H. Pota, M. Garratt, M. K. Samal
This paper presents a hierarchical inner-outer loop-based scheme for flight control of a small unmanned helicopter in the presence of input time-delay. The controller is designed based on a two-time-scale separation architecture which includes a fast inner loop and a slow outer loop. The inner-loop (attitude controller) employs an inverse optimal control strategy, which circumvents the tedious task of numerically solving an online Hamilton-Jacobi-Bellman (HJB) equation to obtain the optimal controller. The designed controller is optimal with respect to a meaningful objective function which considers penalties for control input, angular position and angular velocity errors. The outer loop (position) controller uses the backstepping technique to control the position and keep the helicopter on track. Finally, computer simulations are conducted to validate the theoretical results and illustrate the tracking performance of the proposed control method.
Title: Helicopter flight control using inverse optimal control and backstepping
Published in: 2012 12th International Conference on Control Automation Robotics & Vision (ICARCV)
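The inner-outer loop idea above can be sketched numerically. The toy below is a generic illustration only: simple PD/P laws on a planar model where tilt produces lateral acceleration, not the paper's inverse optimal or backstepping controllers, and all gains are invented.

```python
import numpy as np

# Two-time-scale cascade sketch (illustrative gains and model): a fast PD
# attitude loop tracks a tilt command issued by a slow position loop.
g = 9.81
dt, T = 0.002, 15.0
x = vx = th = om = 0.0      # position, velocity, attitude, attitude rate
x_ref = 1.0                 # desired position

for _ in range(int(T / dt)):
    # Slow outer loop: position error -> commanded tilt angle (saturated).
    th_cmd = np.clip((-1.0 * (x - x_ref) - 2.0 * vx) / g, -0.3, 0.3)
    # Fast inner loop: PD attitude control toward the commanded tilt.
    u = -100.0 * (th - th_cmd) - 20.0 * om
    om += u * dt
    th += om * dt
    vx += g * th * dt       # tilt accelerates the position
    x += vx * dt

print(round(x, 3))          # settles at the reference position
```

The factor-of-ten separation between the inner poles (around -10) and outer poles (around -1) is what makes the cascade behave like two independent loops.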
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485232
D. Moratuwage, B. Vo, Danwei W. Wang, Han Wang
In this paper we present a novel solution to the Multi-Vehicle SLAM (MVSLAM) problem by extending the random finite set (RFS) based SLAM filter framework using two recently developed multi-sensor information fusion approaches. Our solution is based on modelling the measurements and the landmark map as RFSs and on factorizing the MVSLAM posterior into a product of the joint vehicle-trajectories posterior and the landmark-map posterior conditioned on the vehicle trajectories. The joint vehicle-trajectories posterior is propagated using a particle filter, while the landmark-map posterior conditioned on the vehicle trajectories is propagated using a Gaussian Mixture (GM) implementation of the probability hypothesis density (PHD) filter.
Title: Extending Bayesian RFS SLAM to multi-vehicle SLAM
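The GM-PHD machinery the abstract builds on can be illustrated with a toy 1-D measurement update (the standard GM-PHD correction step, not the paper's multi-vehicle extension; the detection probability, clutter level, and noise values are invented):

```python
import math

def gm_phd_update(components, measurements, p_d=0.9, clutter=0.01, r=0.04):
    """One 1-D GM-PHD measurement update (toy sketch).

    components: list of (weight, mean, var) of the predicted intensity.
    clutter: assumed uniform clutter intensity kappa(z).
    """
    def gauss(z, m, v):
        return math.exp(-0.5 * (z - m) ** 2 / v) / math.sqrt(2 * math.pi * v)

    # Missed-detection components keep their means, scaled by (1 - p_d).
    updated = [(w * (1 - p_d), m, v) for (w, m, v) in components]
    for z in measurements:
        likes = [p_d * w * gauss(z, m, v + r) for (w, m, v) in components]
        norm = clutter + sum(likes)
        for (w, m, v), lk in zip(components, likes):
            k = v / (v + r)                  # Kalman gain for this component
            updated.append((lk / norm, m + k * (z - m), (1 - k) * v))
    return updated

comps = gm_phd_update([(1.0, 0.0, 0.25)], [0.1])
est_targets = sum(w for (w, m, v) in comps)  # expected number of targets
```

The sum of the Gaussian weights is the expected target count, which is what makes the PHD a convenient map representation: landmarks never need explicit data association.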
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485418
R. Chi, Tao Su, S. Jin
In this paper, an iterative learning control approach is developed for a class of uncertain nonlinear discrete-time systems, based on identification of the controlled system. First, a linearized model of the nonlinear system is proposed. Then, using the identification method, we present an indirect iterative learning control scheme for the controlled system. Analysis shows that the scheme guarantees system convergence under certain conditions.
Title: An identification based indirect iterative learning control via data-driven approach
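The basic iterative-learning mechanism can be shown in a few lines. This is the classic P-type ILC update on an assumed first-order plant, a generic illustration rather than the paper's identification-based indirect scheme:

```python
import numpy as np

# P-type ILC sketch: the input of trial k+1 is the input of trial k plus a
# learning gain times the trial-k tracking error (same task every trial).
a, b = 0.8, 1.0                    # assumed plant x(t+1) = a*x(t) + b*u(t)
L = 0.5                            # learning gain; |1 - L*b| < 1 gives convergence
T = 20                             # trial length
y_ref = np.sin(np.linspace(0, np.pi, T + 1))[1:]   # desired output trajectory

u = np.zeros(T)
for k in range(200):               # repeat the finite-duration task
    x = np.zeros(T + 1)
    for t in range(T):
        x[t + 1] = a * x[t] + b * u[t]
    e = y_ref - x[1:]              # tracking error on this trial
    u = u + L * e                  # learning update u_{k+1}(t) = u_k(t) + L*e_k(t+1)
```

Over the iterations the error contracts geometrically; an indirect scheme like the paper's would additionally re-identify `a` and `b` from data before choosing `L`.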
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485135
Weisheng Chen, Shaoyong Hua, Wenlong Ren, Wenbo Hu
This paper considers the problem of cooperative adaptive identification for a class of nonlinear systems via neural networks. The proposed adaptive laws for the neural network weights are distributed, and interconnection topologies are established among the identification models so that they can share their data online. It is proved that if the interconnection topologies are undirected and connected, then all adaptive laws of the neural network weights for the same system function converge to a small neighborhood around their optimal values over a union of sets consisting of system trajectories. Thus, the learned system model has better generalization capability. A simulation example is provided to verify the effectiveness and advantages of the proposed algorithms.
Title: Neural-network-based cooperative adaptive identification of nonlinear systems
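The flavor of such distributed adaptive laws can be sketched with a linear-in-features toy (the gains, features, graph, and normalized-LMS form are all illustrative assumptions, not the paper's laws): each node sees only part of the input space, yet weight sharing over a connected graph drives all estimates to the common truth.

```python
import numpy as np

# Cooperative adaptive identification sketch: three nodes estimate the same
# model y = w . phi(x) from different input regions, mixing weights with
# their graph neighbors (a consensus term) at every step.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
phi = lambda x: np.array([1.0, x])              # feature vector
neighbors = {0: [1], 1: [0, 2], 2: [1]}         # undirected path graph
ranges = [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]   # each node's local data region

W = np.zeros((3, 2))                            # one weight row per node
mu, kc = 0.5, 0.1                               # learning and consensus gains
for _ in range(20000):
    W_new = W.copy()
    for i in range(3):
        f = phi(rng.uniform(*ranges[i]))
        e = w_true @ f - W[i] @ f               # local prediction error
        cons = sum(W[j] - W[i] for j in neighbors[i])
        W_new[i] = W[i] + mu * e * f / (1.0 + f @ f) + kc * cons
    W = W_new
```

The consensus term is what lets the union of the nodes' trajectories act as the excitation condition, rather than each node's data alone.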
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485168
Shaolong Shu, Wenhao Zong
This paper discusses the recoverability of faulty discrete event systems. A faulty discrete event system includes both healthy dynamics and faulty dynamics. In faulty modes, the performance of the system often degrades, so recovery actions must be executed to restore it. Recovery actions may renew or repair the faulty device, or re-configure the healthy devices. In general, recovery actions cannot be fired at all times, but only at certain times; they are modeled as a mapping from the state set of the faulty mode to a binary set, in which 1 means a recovery action can be fired and 0 means it cannot. Two types of recoverability are then defined: recoverability and weak recoverability. Recoverability means there are periodically recurring chances to perform recovery actions; weak recoverability means there is at least one chance to perform recovery actions once the system has a fault. Using observer techniques, criteria are proposed to check both types of recoverability.
Title: Recoverability of faulty discrete event systems
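The binary recovery map and the two notions can be made concrete on a toy faulty-mode automaton (the automaton and the reachability-based checks below are an informal illustration of the definitions, not the paper's observer-based criteria):

```python
from collections import deque

# Faulty-mode automaton (invented example) and the binary recovery map:
# recovery[s] == 1 means a recovery action can be fired in state s.
transitions = {"f0": ["f1"], "f1": ["f2", "f3"], "f2": ["f0"], "f3": []}
recovery = {"f0": 0, "f1": 0, "f2": 1, "f3": 0}

def reachable(start):
    seen, queue = {start}, deque([start])
    while queue:
        s = queue.popleft()
        for n in transitions.get(s, []):
            if n not in seen:
                seen.add(n)
                queue.append(n)
    return seen

# Weak recoverability: at least one reachable state allows recovery.
weakly_recoverable = any(recovery[s] for s in reachable("f0"))

# Recoverability (recurring chances): from every reachable state, a
# recovery-enabled state remains reachable.
recoverable = all(any(recovery[t] for t in reachable(s))
                  for s in reachable("f0"))
print(weakly_recoverable, recoverable)   # prints: True False
```

Here the dead-end state `f3` breaks recoverability (once reached, no recovery is ever possible again), while the system is still weakly recoverable through `f2`.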
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485156
Junjie Yan, Zhiwei Zhang, Zhen Lei, Dong Yi, S. Li
Liveness detection is an indispensable guarantee for reliable face recognition and has recently received enormous attention. In this paper we propose three scenic clues, namely non-rigid motion, face-background consistency and the image banding effect, to conduct accurate and efficient face liveness detection. The non-rigid motion clue captures facial motions that a genuine face can exhibit, such as blinking; a low-rank matrix decomposition based image alignment approach is designed to extract this non-rigid motion. The face-background consistency clue assumes that the motion of face and background is highly consistent for fake facial photos but weakly consistent for genuine faces; this consistency serves as an efficient liveness clue, which is explored by a GMM-based motion detection method. The image banding effect reflects the imaging quality defects introduced in fake face reproduction, which can be detected by wavelet decomposition. By fusing these three clues, we thoroughly explore sufficient clues for liveness detection. The proposed face liveness detection method achieves 100% accuracy on the Idiap print-attack database and the best performance on a self-collected face anti-spoofing database.
Title: Face liveness detection by exploring multiple scenic clues
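A late-fusion step of the three clue scores could look like the sketch below. The weights and threshold are invented for illustration; the abstract does not specify the actual fusion rule:

```python
# Toy score-level fusion of the three liveness clues (weighted sum with an
# assumed threshold; each score is in [0, 1], higher = more genuine-looking).
def fuse_liveness(motion_score, consistency_score, banding_score,
                  weights=(0.4, 0.3, 0.3), threshold=0.5):
    s = sum(w * c for w, c in zip(weights, (motion_score,
                                            consistency_score,
                                            banding_score)))
    return s >= threshold, s

genuine, score = fuse_liveness(0.9, 0.8, 0.7)
```

In practice such weights would be tuned on a validation split of an anti-spoofing database.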
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485308
Sascha Schrader, Markus Dambek, Adrian Block, Stefan Brending, D. Nakath, Falko Schmid, J. V. D. Ven
In this paper we introduce a way of tracking people in an indoor environment across multiple cameras with overlapping as well as non-overlapping fields of view. To do so, we use our distribution model called SpARTA and an extended Tracking-Learning-Detection algorithm. A big advantage in comparison to other systems is that each camera node learns the tracked person and builds a database of positive and negative examples in real time. With these datasets we are able to distinguish different people across different nodes. The learned data is shared across nodes, so that they improve each other while tracking. In the main part we present an experimental validation of the system. Finally, we will show that distribution of tracking data improves tracking across multiple nodes considerably with regard to partial occlusion of the tracked object.
Title: A distributed online learning tracking algorithm
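The exemplar-sharing idea can be sketched with the TLD-style nearest-neighbour classifier, whose confidence is the relative similarity s⁺/(s⁺ + s⁻). The feature representation, similarity measure, and data below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# TLD-style nearest-neighbour appearance model with exemplar sharing:
# each camera node keeps positive/negative patch descriptors, and "sharing"
# simply pools the exemplar lists across nodes.
def relative_similarity(patch, positives, negatives):
    cos = lambda a, b: float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    s_pos = max(cos(patch, p) for p in positives)
    s_neg = max(cos(patch, n) for n in negatives)
    return s_pos / (s_pos + s_neg)   # > 0.5 -> classified as the target

node_a_pos = [np.array([1.0, 0.1, 0.0])]     # exemplars learned at node A
node_b_pos = [np.array([0.9, 0.2, 0.1])]     # exemplars learned at node B
negatives = [np.array([0.0, 1.0, 1.0])]
shared_pos = node_a_pos + node_b_pos         # nodes exchange learned examples
query = np.array([0.95, 0.15, 0.05])
conf = relative_similarity(query, shared_pos, negatives)
```

Pooling exemplars is what lets a node that has only seen the person partially occluded benefit from another node's unoccluded views.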
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485332
Huiling Xu, Zhiping Lin, A. Makur
This paper is concerned with the problem of robust unbiased H∞ filtering for uncertain two-dimensional (2-D) systems described by the Fornasini-Marchesini local state-space second model. The parameter uncertainties are assumed to be norm-bounded in both the state and measurement equations. The concept of robust unbiased filtering is first introduced into uncertain 2-D systems. A necessary and sufficient condition for the existence of robust unbiased 2-D H∞ filters is derived based on the rank condition of the given system matrices. A method is then proposed for the design of robust unbiased H∞ filters for uncertain 2-D systems using a linear matrix inequality (LMI) technique. The main advantage of the proposed method is that it can be applied to unstable uncertain 2-D systems while existing robust 2-D H∞ filtering approaches only work for robust stable uncertain 2-D systems. An illustrative example is also provided and comparison with existing results is made.
Title: Robust unbiased H∞ filtering for uncertain two-dimensional systems
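For readers unfamiliar with the model class, the Fornasini-Marchesini second local state-space model referred to above is commonly written as follows (common notation; the paper's exact noise and output channels may differ):

```latex
\begin{aligned}
x(i{+}1,\, j{+}1) &= A_1\, x(i,\, j{+}1) + A_2\, x(i{+}1,\, j)
                     + B_1\, w(i,\, j{+}1) + B_2\, w(i{+}1,\, j),\\
y(i,\, j) &= C\, x(i,\, j) + D\, w(i,\, j),
\end{aligned}
```

where $x$ is the local state indexed by two independent variables, $w$ the noise input, and $y$ the measurement; the H∞ filter bounds the energy gain from $w$ to the filtering error.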
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485315
C. Antonya
The gaze point and gaze line, measured with an eye tracking device, can be used in various interaction interfaces, such as mobile robot programming in an immersive virtual environment. Path generation for the robot should not require tedious eye gestures; rather, intent should be detected from the context. The obtained trajectory, whose quality depends on the precision of the estimated gaze point, could allow physically disabled people to steer a wheelchair with their eyes. The goal of this study is to assess the accuracy of gaze point computation based on eye tracking in an immersive virtual environment. The point in space where the gaze directions of the left and right eyes converge provides a measure of the distance to the gazed object. This distance is needed whenever the user wants to indicate a point in space, or when two or more selectable objects are placed one behind the other. In this work, several experiments were conducted to assess the accuracy of convergence point detection in space.
Title: Accuracy of gaze point estimation in immersive 3D interaction interface based on eye tracking
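Computing the convergence point of two gaze lines is the standard closest-point-between-two-3D-lines problem. A minimal sketch (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def gaze_convergence_point(p_l, d_l, p_r, d_r):
    """Midpoint of the shortest segment between the two gaze lines.

    p_l, p_r: eye positions; d_l, d_r: gaze direction vectors.
    """
    d_l = d_l / np.linalg.norm(d_l)
    d_r = d_r / np.linalg.norm(d_r)
    w = p_l - p_r
    a, b, c = d_l @ d_l, d_l @ d_r, d_r @ d_r
    d, e = d_l @ w, d_r @ w
    denom = a * c - b * b            # ~0 when the gaze lines are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    return 0.5 * ((p_l + s * d_l) + (p_r + t * d_r))

# Example: eyes 6 cm apart, both looking at a point 1 m ahead.
p_l, p_r = np.array([-0.03, 0.0, 0.0]), np.array([0.03, 0.0, 0.0])
target = np.array([0.0, 0.0, 1.0])
point = gaze_convergence_point(p_l, target - p_l, p_r, target - p_r)
```

With noisy measured directions the two lines are skew, which is why the midpoint of the shortest connecting segment, rather than an exact intersection, is used; the distance to the gazed object is then the norm of `point` minus the midpoint between the eyes.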
Pub Date: 2012-12-01 · DOI: 10.1109/ICARCV.2012.6485296
S. R. U. N. Jafri, Zhao Li, A. A. Chandio, R. Chellali
This paper presents a multi-robot simultaneous localization and mapping (SLAM) framework for a team of robots with unknown initial poses. The proposed solution uses feature-based Rao-Blackwellised particle filter (RBPF) SLAM for each robot, which operates in an unknown environment equipped only with a 2D range sensor and a communication module. To represent the environment in compact form, line and corner features (or point features) are used. By sharing and comparing the distinct feature-based maps of each robot, a global map with known poses is formed without any physical meeting among the robots. This approach is easily applicable to distributed or centralized robotic systems, with simple data handling and reduced computational cost.
Title: Laser only feature based multi robot SLAM
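The Rao-Blackwellised structure underlying each robot's filter can be shown in 1-D: particles sample the robot trajectory, while each particle carries an analytic (Kalman) estimate of the landmark conditioned on that trajectory. Everything here (noise levels, motion, a single landmark) is an invented toy, not the paper's feature-based implementation:

```python
import numpy as np

# FastSLAM-style RBPF sketch in 1-D: per-particle pose samples plus a
# per-particle Kalman filter over one landmark position.
rng = np.random.default_rng(1)
true_pose, true_lm = 0.0, 5.0
q, r = 0.01, 0.1                  # motion and range-measurement noise variances
N = 200
poses = np.zeros(N)               # particle poses (initial pose known)
lm_mu = np.full(N, 4.0)           # per-particle landmark mean (rough prior)
lm_var = np.full(N, 4.0)          # per-particle landmark variance

for _ in range(50):
    true_pose += 0.1
    z = true_lm - true_pose + rng.normal(0.0, np.sqrt(r))
    # Sample trajectories (the particle part of the factorization).
    poses += 0.1 + rng.normal(0.0, np.sqrt(q), N)
    # Per-particle landmark Kalman update (the analytic part).
    innov = z - (lm_mu - poses)
    s = lm_var + r
    gain = lm_var / s
    w = np.exp(-0.5 * innov ** 2 / s) / np.sqrt(2 * np.pi * s)
    lm_mu += gain * innov
    lm_var *= (1.0 - gain)
    # Resample particles in proportion to measurement likelihood.
    w /= w.sum()
    idx = rng.choice(N, N, p=w)
    poses, lm_mu, lm_var = poses[idx], lm_mu[idx], lm_var[idx]

est_lm = lm_mu.mean()
```

Because the map is conditionally independent given the trajectory, the landmark dimension never enters the particle state, which is what keeps RBPF SLAM tractable as the map grows.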