Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505189
S. Shanker, S. Mahmud
U.S. federal and state governments invest millions of dollars in enforcing parking control in metropolitan areas. Tasks such as parking enforcement, towing of illegally parked vehicles, and maintenance require many man-hours and resources. Parking control and revenue systems in the metro area depend essentially on devices such as coin- or token-based parking meters, which require exact change and are therefore cumbersome. A patrol officer is also required to monitor these spaces constantly. This creates the need for a more efficient and redundant system. In this paper, we propose an architecture for an automated parking meter and driver assistance system connected to a centralized traffic control authority responsible for parking enforcement and toll collection. This system would provide a more efficient and redundant way of enforcing parking control and collecting tolls, and would also assist drivers in metropolitan areas in searching for an available parking space. The proposed architecture enables automated parking toll collection.
Title: An intelligent architecture for metropolitan area parking control and toll collection
IEEE Proceedings. Intelligent Vehicles Symposium, 2005.
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505192
M. Kuwahara, S. Tanaka, M. Kano, M. Furukawa, K. Honda, K. Maruoka, T. Yamamoto, T. Shiraishi, H. Hanabusa, N. Webster
This study develops a microscopic traffic simulation model (KAKUMO), which is built into a mixed reality experiment system for an interactive traffic environment. We are developing the mixed reality experiment system to examine human factors such as driving behavior and travel choice behavior, which must be understood in order to design and evaluate various ITS systems. The system is characterized by an environment in which a driver, vehicles, and infrastructure dynamically interact with one another. In this paper, we introduce the specification and performance of the microsimulation model built into the prototype mixed reality system.
Title: An enhanced traffic simulation system for interactive traffic environment
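A microscopic simulator like the one described advances each vehicle with a car-following rule. As a generic illustration of one update step (not KAKUMO's actual model), here is the acceleration rule of the Intelligent Driver Model; all parameter values are illustrative:

```python
import math

def idm_accel(v, v_lead, gap, v0=30.0, T=1.5, a=1.0, b=2.0, s0=2.0):
    """One Intelligent Driver Model step: acceleration of a follower
    at speed v (m/s), behind a leader at v_lead, separated by gap (m).
    v0: desired speed, T: time headway, a/b: accel/decel limits,
    s0: minimum standstill gap. Values are illustrative defaults."""
    # desired dynamic gap: standstill gap + headway + braking interaction
    s_star = s0 + v * T + v * (v - v_lead) / (2 * math.sqrt(a * b))
    return a * (1 - (v / v0) ** 4 - (s_star / gap) ** 2)
```

A free vehicle far from any leader accelerates at nearly its comfortable limit, while a vehicle already at its desired speed holds steady or brakes slightly as the gap closes.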
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505115
I. Cabani, G. Toulminet, A. Bensrhair
In this article, we present a first approach to the design of a vision system dedicated to the detection of vehicles in reduced visibility conditions. It is based on a self-adaptive stereo vision extractor of 3D edges of obstacles and on color-based detection of vehicle lights. The detection of vehicle lights uses the L*a*b* color space. The vision system detects three kinds of vehicle lights: rear lights and rear brake lights; flashing and warning lights; reverse lights and headlights.
Title: Color-based detection of vehicle lights
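A sketch of the kind of color test implied above: convert sRGB to L*a*b* (standard CIE formulas, D65 white point) and flag pixels whose strongly positive a* channel suggests a red rear or brake light. The thresholds are assumptions for illustration, not the paper's values.

```python
def srgb_to_lab(r, g, b):
    """Convert 8-bit sRGB to CIE L*a*b* (D65 reference white)."""
    def lin(c):  # undo sRGB gamma
        c /= 255.0
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear RGB -> XYZ (sRGB primaries, D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    xn, yn, zn = 0.95047, 1.0, 1.08883  # D65 white point
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x / xn), f(y / yn), f(z / zn)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def looks_like_rear_light(rgb, a_min=40.0, l_min=30.0):
    # hypothetical thresholds: red lamps sit far out on the +a* axis
    L, a, b = srgb_to_lab(*rgb)
    return a >= a_min and L >= l_min
```

For example, saturated red passes the test while a neutral gray does not, since gray maps to a* near zero.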
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505095
A. Watanabe, M. Nishida
In this paper, a lane detection algorithm for a steering assistance system (SAS) is introduced. Our aim is to develop a lane detection sensor that is robust enough to be applied to the SAS without the need for application-specific hardware. For this purpose, we simplify the process that groups detected edge points into lines by using a low-resolution image of the road surface. Additionally, we use a method that selects the correct lane boundary lines from multiple candidates by pattern matching in scenes where the lane boundaries are complex. Our algorithm has been implemented on simple hardware consisting of a CMOS imager and two microprocessors.
Title: Lane detection for a steering assistance system
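The edge-grouping step reduces detected edge points to candidate lines. A minimal stand-in for fitting one candidate is a least-squares line fit over a cluster of edge points; the clustering and candidate-selection logic the paper describes is omitted, and the data here are synthetic.

```python
def fit_line(points):
    """Least-squares fit y = m*x + b over (x, y) edge points.
    Returns the slope m and intercept b."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    m = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - m * sx) / n
    return m, b
```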
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505195
R. Schweiger, Heiko Neumann, Werner Ritter
In this contribution we present a sensor data fusion concept utilizing particle filters. The investigation aims at developing a robust and easily extensible approach capable of combining the information of different sensors. We exploit the particle filter's characteristics and introduce weighting functions that are multiplied into the particle weights during the measurement update stage of the particle filter implementation. The concept is demonstrated in a vehicle detection system that combines symmetry detection, tail lamp detection, and radar measurements in night vision applications.
Title: Multiple-cue data fusion with particle filters for vehicle detection in night view automotive applications
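The fusion step described above can be sketched as follows: each cue supplies a weighting function, and all of them are multiplied into the particle weights during the measurement update, then renormalized. The Gaussian cue models and 1-D particle states below are placeholders for the paper's symmetry, tail-lamp, and radar detectors.

```python
import math

def gaussian_cue(center, sigma):
    # placeholder likelihood model for one sensor cue
    return lambda x: math.exp(-0.5 * ((x - center) / sigma) ** 2) + 1e-9

cues = [
    gaussian_cue(10.0, 4.0),  # stands in for symmetry detection
    gaussian_cue(10.5, 2.0),  # stands in for tail-lamp detection
    gaussian_cue(9.5, 1.5),   # stands in for radar range
]

def measurement_update(particles, weights, cues):
    """Multiply every cue's weighting function into each particle
    weight, then renormalize to a proper distribution."""
    new_w = [w * math.prod(c(x) for c in cues)
             for x, w in zip(particles, weights)]
    total = sum(new_w)
    return [w / total for w in new_w]

particles = [0.0, 5.0, 10.0, 15.0]  # 1-D stand-in for vehicle hypotheses
posterior = measurement_update(particles, [0.25] * 4, cues)
```

Because the cues multiply, a particle must be plausible under every sensor at once to keep a high weight, which is what makes the fusion robust to single-cue false alarms.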
This paper proposes a combination of vision perception and fuzzy decision making for developing an intelligent vehicle collision-avoidance system (IVCAS). In IVCAS, a CCD camera installed on the following vehicle captures images of leading vehicles and road information. The features of the leading vehicles and the lane boundary are recognized by a vision perception method derived from our previous work on histogram-based color difference fuzzy c-means (HCDFCM), a robust and fast algorithm for detecting object boundaries. In this paper, we combine a coordinate mapping relationship (CMR) with HCDFCM to provide robust vision perception of the necessary information, such as the relative velocity and relative distance between the leading and following vehicles and the absolute velocity of the following vehicle. The collision-avoidance strategy is based on this vision perception and is implemented by a fuzzy decision-making mechanism. The necessary information is integrated into a degree of exceeding safe distance (DESD) to estimate the possibility of collision, and a safety coefficient (SC) is defined to indicate the degree of safety. Basing the fuzzy rules on DESD and SC reduces their number and improves the efficiency of decision making. In addition to robust image processing, abundant information is derived from the image features recognized by the proposed algorithm, and the fuzzy decision-making mechanism abstracts compact, useful data from it. The main advantage of IVCAS is therefore that it uses fewer fuzzy rules than other systems while achieving more effective vehicle collision avoidance.
Title: The study on intelligent vehicle collision-avoidance system with vision perception and fuzzy decision making
Tsung-Ying Sun, Shang-Jeng Tsai, Jiun-Yuan Tseng, Yen-Chang Tseng
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505087
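As a rough illustration of a DESD-style indicator (the formulas and constants below are assumptions, not the paper's definitions), one can measure how far the actual gap falls short of a safe following distance and key a coarse rule table on that single quantity:

```python
def safe_distance(v_follow, reaction_time=1.0, decel=6.0):
    """Distance covered during driver reaction plus braking distance.
    Speeds in m/s, distances in m; constants are illustrative."""
    return v_follow * reaction_time + v_follow ** 2 / (2 * decel)

def desd(gap, v_follow):
    """Degree of exceeding safe distance: 0 = safe, approaching 1 = critical."""
    d_safe = safe_distance(v_follow)
    return max(0.0, (d_safe - gap) / d_safe)

def fuzzy_action(d):
    # a coarse three-rule decision table keyed on DESD alone
    if d < 0.2:
        return "maintain"
    if d < 0.6:
        return "release_throttle"
    return "brake"
```

Collapsing many raw measurements into one or two indicators like this is what lets the rule base stay small, which is the efficiency argument the abstract makes.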
Variable illumination is a difficult problem for path mark detection in high-speed vision-based vehicles, and an effective real-time method is vitally important to solve it. In this paper, a new grayscale image automatic thresholding algorithm, derived from the Otsu automatic thresholding algorithm, is presented to segment path marks from images under different illumination conditions; the new method aims at enhancing real-time performance. The experimental results show that the improved algorithm dramatically reduces the operating time of image segmentation while ensuring the quality of the final segmentation.
Title: An improved Otsu image segmentation algorithm for path mark detection under variable illumination
Li-sheng Jin, Tian Lei, Rong-ben Wang, Guo Lie, Jiang-wei Chu
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505209
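For background, the classic Otsu method that the paper speeds up selects the gray level maximizing the between-class variance of the image histogram. A minimal sketch of the standard (unimproved) algorithm:

```python
def otsu_threshold(hist):
    """Classic Otsu thresholding over a 256-bin grayscale histogram:
    return the level t that maximizes between-class variance. This is
    the baseline algorithm, not the paper's accelerated variant."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    sum_b = 0.0            # cumulative intensity mass of background class
    w_b = 0                # cumulative pixel count of background class
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b              # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a bimodal histogram the returned threshold falls between the two modes, separating path marks from the road surface.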
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505101
N. Hautière, R. Labayrade, D. Aubert
An atmospheric visibility measurement system capable of quantifying the common operating range of onboard exteroceptive sensors is a key component in the creation of driving assistance systems. This information can be used to adapt sensor operation and processing, or to alert the driver that the onboard assistance system is momentarily inoperative. Moreover, a system capable of detecting fog or estimating visibility distances constitutes a driving assistance function in itself. In this paper, we present a framework for measuring different visibility distances that we have previously defined (meteorological visibility, obstacle visibility, and mobilized visibility) using onboard CCD cameras. The methods for estimating these visibility distances are detailed: the first is based on a physical model of light diffusion by the atmosphere, while the other two are based on the "v-disparity" approach and local contrast computation. The methods are evaluated on video sequences recorded in sunny and foggy weather.
Title: Detection of visibility conditions through use of onboard cameras
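Physical models of light diffusion in fog are commonly grounded in Koschmieder's law, which ties the atmospheric extinction coefficient to the meteorological visibility distance at the conventional 5% contrast threshold. A minimal sketch of that relation (the sample extinction value is illustrative, and whether the paper uses exactly this formulation is an assumption):

```python
import math

CONTRAST_THRESHOLD = 0.05  # CIE convention for meteorological visibility

def meteorological_visibility(k):
    """Koschmieder's law: V_met = -ln(eps) / k, approximately 3 / k,
    where k is the atmospheric extinction coefficient (1/m)."""
    return -math.log(CONTRAST_THRESHOLD) / k
```

For instance, an extinction coefficient of 0.03 per meter corresponds to a meteorological visibility of roughly 100 m, i.e. moderately dense fog.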
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505196
E. Binelli, A. Broggi, A. Fascioli, S. Ghidoni, P. Grisleri, Thorsten Dr. Graf, M. Meinecke
This paper describes a modular tracking system designed to improve the performance of a pedestrian detector. The tracking system consists of two modules, a labeler and a predictor. The former associates a tracking identifier with each pedestrian, keeping a memory of its past history; this is achieved by merging the detector and predictor outputs with data about the vehicle's motion. The predictor, basically a Kalman filter, estimates each pedestrian's new position from its previous movements. Its output helps the labeler improve the match between the pedestrians detected in the new frame and those observed in previous frames (feedback). If a pedestrian is occluded by an obstacle for a short while, the system continues tracking its movement using the motion parameters, and it can reassign the same tracking ID when the occlusion ends shortly afterwards. This behavior helps correct the temporary mis-recognitions that occur when the detector fails. The system has been tested using a quantitative performance evaluation tool, giving promising results.
Title: A modular tracking system for far infrared pedestrian recognition
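The predictor's role can be sketched with a minimal 1-D constant-velocity Kalman filter: measurements refine the position and velocity estimates, and during an occlusion the filter simply keeps predicting from the estimated motion. The matrices are hand-unrolled for the two-state case; the noise values and dimensionality are illustrative, not the paper's configuration.

```python
class CVKalman1D:
    """Constant-velocity Kalman filter on a single image coordinate."""

    def __init__(self, pos, vel=0.0, q=0.5, r=2.0):
        self.x = [pos, vel]                  # state: position, velocity
        self.P = [[10.0, 0.0], [0.0, 10.0]]  # state covariance
        self.q, self.r = q, r                # process / measurement noise

    def predict(self, dt=1.0):
        # x <- F x and P <- F P F^T + Q with F = [[1, dt], [0, 1]]
        p, v = self.x
        self.x = [p + v * dt, v]
        P = self.P
        p00 = P[0][0] + dt * (P[0][1] + P[1][0]) + dt * dt * P[1][1] + self.q
        p01 = P[0][1] + dt * P[1][1]
        p10 = P[1][0] + dt * P[1][1]
        p11 = P[1][1] + self.q
        self.P = [[p00, p01], [p10, p11]]
        return self.x[0]

    def update(self, z):
        # measurement model H = [1, 0]: only position is observed
        s = self.P[0][0] + self.r
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s
        y = z - self.x[0]
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        p00, p01 = self.P[0][0], self.P[0][1]
        p10, p11 = self.P[1][0], self.P[1][1]
        self.P = [[(1 - k0) * p00, (1 - k0) * p01],
                  [p10 - k1 * p00, p11 - k1 * p01]]
```

After a few predict/update cycles on a pedestrian moving at constant speed, calling `predict` alone coasts the track forward, which is exactly the occlusion-bridging behavior described above.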
Pub Date: 2005-06-06 | DOI: 10.1109/IVS.2005.1505117
D. Baehring, S. Simon, W. Niehsen, C. Stiller
Image processing is widely considered an essential part of future driver assistance systems. This paper presents a motion-based vision approach to the initial detection of static and moving objects observed by a monocular camera attached to a moving observer. The underlying principle is the parallax flow induced by every non-planar static or moving object in a 3D scene, which is determined from optical flow measurements. Initial object hypotheses are created in regions containing significant parallax flow, where significance is determined automatically from a planar parallax decomposition. Furthermore, we propose separating the detected image motion into three hypothesis classes, namely coplanar, static, and moving regions. To achieve a high degree of robustness and accuracy in real traffic situations, some key processing steps are supported by data from inertial sensors rigidly attached to our vehicle. The proposed method serves as a visual short-range surveillance module providing instantaneous object candidates to a driver assistance system. Our experiments and simulations confirm the feasibility and robustness of the detection method even in complex urban environments.
Title: Detection of close cut-in and overtaking vehicles for driver assistance based on planar parallax
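The planar-parallax test can be sketched as follows: image motion consistent with the road-plane homography is labeled coplanar, while a significant residual (the parallax flow) flags an object candidate. The homography and threshold below are made up for illustration and do not come from the paper.

```python
import math

def apply_h(H, pt):
    """Map an image point through a 3x3 homography (row-major lists)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def classify_flow(H, p_prev, p_curr, tau=1.5):
    """Compare observed motion with the motion the road-plane homography
    predicts; a large residual is parallax flow from a non-planar point."""
    px, py = apply_h(H, p_prev)
    residual = math.hypot(p_curr[0] - px, p_curr[1] - py)
    return "coplanar" if residual < tau else "parallax"
```

Points whose tracked motion matches the plane-induced warp stay in the coplanar class; the remaining parallax regions are then split into static and moving hypotheses by the further tests the paper describes.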