Augmenting 3D urban environment using mobile devices
Yi Wu, M. E. Choubassi, I. Kozintsev
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092396
We describe an augmented reality prototype for exploring a 3D urban environment on mobile devices. Our system utilizes the location and orientation sensors on the mobile platform as well as computer vision techniques to register the live view of the device with the 3D urban data. In particular, the system recognizes the buildings in the live video, tracks the camera pose, and augments the video with relevant information about the buildings in the correct perspective. The 3D urban data consist of 3D point clouds and corresponding geo-tagged RGB images of the urban environment. We also discuss the processing steps to make such 3D data scalable and usable by our system.
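As a rough illustration of how sensor-derived pose can register geo-referenced 3D data with the live view, the sketch below projects a 3D point (expressed in a local metric frame around the device) into the camera image with a pinhole model. All names and values here (intrinsics, rotation, the example point) are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: project a geo-referenced 3D point into the camera image using
# a pose prior derived from the device's location and orientation sensors.
import numpy as np

def project_point(p_world, R_world_to_cam, cam_pos_world, fx, fy, cx, cy):
    """Return pixel coordinates of a 3D point, or None if it lies behind the camera."""
    p_cam = R_world_to_cam @ (p_world - cam_pos_world)  # world -> camera frame
    if p_cam[2] <= 0:                                    # behind the camera
        return None
    u = fx * p_cam[0] / p_cam[2] + cx                    # pinhole projection
    v = fy * p_cam[1] / p_cam[2] + cy
    return np.array([u, v])

# Example: a point 30 m in front of the camera, slightly right of and below the optical axis.
point = np.array([2.0, -1.0, 30.0])
print(project_point(point, np.eye(3), np.zeros(3), fx=1000.0, fy=1000.0, cx=640.0, cy=360.0))
```

In a full system such a sensor-based projection would only serve as an initial estimate, which the vision-based building recognition and pose tracking then refine.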
{"title":"Augmenting 3D urban environment using mobile devices","authors":"Yi Wu, M. E. Choubassi, I. Kozintsev","doi":"10.1109/ISMAR.2011.6092396","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092396","url":null,"abstract":"We describe an augmented reality prototype for exploring a 3D urban environment on mobile devices. Our system utilizes the location and orientation sensors on the mobile platform as well as computer vision techniques to register the live view of the device with the 3D urban data. In particular, the system recognizes the buildings in the live video, tracks the camera pose, and augments the video with relevant information about the buildings in the correct perspective. The 3D urban data consist of 3D point clouds and corresponding geo-tagged RGB images of the urban environment. We also discuss the processing steps to make such 3D data scalable and usable by our system.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124742619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indoor positioning and navigation for mobile AR
C. Perey
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR-AMH.2011.6093646
Researchers and developers of mobile AR platforms need a common platform for developing experiences regardless of the user's surroundings. To expand the use of AR both indoors and outdoors, with and without computer vision techniques, the breadth of options available for positioning users and points of interest needs to grow. Separately, experts in indoor positioning and navigation are generally not as familiar with AR use scenarios as they are with other domains. Together, positioning and navigation experts and mobile AR experts will discuss:
- Which indoor positioning and navigation systems are best suited for mobile AR?
- What studies are under way, or still need to be conducted, to advance this field?
{"title":"Indoor positioning and navigation for mobile AR","authors":"C. Perey","doi":"10.1109/ISMAR-AMH.2011.6093646","DOIUrl":"https://doi.org/10.1109/ISMAR-AMH.2011.6093646","url":null,"abstract":"The researchers and developers of mobile AR platforms need to use a common platform for developing experiences regardless of the surroundings of the user. In order to expand the use of AR both indoor and outdoor with and without computer vision techniques, the breadth of options available for positioning users and points of interest needs to expand. Separately, the experts in indoor positioning and navigation are generally not as familiar with AR use scenarios as they are with other domains. Together, positioning and navigation experts, and mobile AR experts, will discuss: — What are the indoor positioning and navigation systems best suited for mobile AR? — What studies are underway or need to be conducted in order to advance this field?","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"113935836","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Information-theoretic database building and querying for mobile augmented reality applications
P. Baheti, A. Swaminathan, Murali Chari, S. Diaz, Slawek Grzechnik
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092369
Recently, there has been tremendous interest in mobile Augmented Reality (AR), with applications including navigation, social networking, gaming, and education. Current-generation mobile phones are equipped with a camera, GPS, and other sensors such as a magnetic compass, accelerometer, and gyroscope, in addition to ever-increasing computing and graphics capabilities and memory storage. Mobile AR applications process the output of one or more of these sensors to augment the real-world view with useful information. This paper focuses on the camera output and describes the building blocks of a vision-based AR system. We present information-theoretic techniques for building and maintaining an image (feature) database from reference images, and for querying captured input images against this database. Performance results on standard image sets demonstrate superior recognition performance even with dramatic reductions in feature database size.
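To make the database-size versus recognition trade-off concrete, here is a minimal sketch of pruning a reference feature database so that only its most distinctive descriptors are kept. The distance-based score is a generic heuristic standing in for the paper's actual information-theoretic criterion, which is not reproduced here; all names and values are illustrative.

```python
# Generic sketch: shrink a descriptor database by keeping the most distinctive entries.
import numpy as np

def prune_database(descriptors, image_ids, keep_fraction=0.25):
    """Keep the most distinctive descriptors; returns indices into the input arrays."""
    descriptors = np.asarray(descriptors, dtype=np.float32)
    image_ids = np.asarray(image_ids)
    scores = np.zeros(len(descriptors))
    for i, d in enumerate(descriptors):
        other = image_ids != image_ids[i]                  # compare only against other reference images
        dists = np.linalg.norm(descriptors[other] - d, axis=1)
        scores[i] = dists.min() if len(dists) else 0.0     # far from everything else = distinctive
    n_keep = max(1, int(keep_fraction * len(descriptors)))
    return np.argsort(-scores)[:n_keep]                    # indices of the highest-scoring descriptors

# Example: six random 32-D descriptors from three reference images, keep the top quarter.
rng = np.random.default_rng(0)
print(prune_database(rng.normal(size=(6, 32)), [0, 0, 1, 1, 2, 2]))
```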
{"title":"Information-theoretic database building and querying for mobile augmented reality applications","authors":"P. Baheti, A. Swaminathan, Murali Chari, S. Diaz, Slawek Grzechnik","doi":"10.1109/ISMAR.2011.6092369","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092369","url":null,"abstract":"Recently, there has been tremendous interest in the area of mobile Augmented Reality (AR) with applications including navigation, social networking, gaming and education. Current generation mobile phones are equipped with camera, GPS and other sensors, e.g., magnetic compass, accelerometer, gyro in addition to having ever increasing computing/graphics capabilities and memory storage. Mobile AR applications process the output of one or more sensors to augment the real world view with useful information. This paper's focus is on the camera sensor output, and describes the building blocks for a vision-based AR system. We present information-theoretic techniques to build and maintain an image (feature) database based on reference images, and for querying the captured input images against this database. Performance results using standard image sets are provided demonstrating superior recognition performance even with dramatic reductions in feature database size.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"51 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127115780","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracking-by-synthesis using point features and pyramidal blurring
Gilles Simon
Pub Date: 2011-10-26 | DOI: 10.1109/ismar.2011.6092373
Tracking-by-synthesis is a promising method for markerless vision-based camera tracking, particularly suitable for Augmented Reality applications. In particular, it is drift-free, viewpoint invariant, and easy to combine with physical sensors such as GPS and inertial sensors. While edge features have been used successfully within the tracking-by-synthesis framework, point features have, to our knowledge, never been used. We believe this is because real-time corner detectors are generally weakly repeatable between a camera image and a rendered texture. In this paper, we compare the repeatability of the commonly used FAST, Harris, and SURF interest point detectors across view synthesis. We show that adding depth blur to the rendered texture can drastically improve the repeatability of the FAST and Harris corner detectors (by up to 100% in our experiments), which can be very helpful, e.g., for making tracking-by-synthesis run on mobile phones. We propose a method for simulating depth blur on the rendered images using a pre-calibrated depth response curve. To meet the performance requirements, a pyramidal approach is used based on the well-known MIP mapping technique. We also propose an original method for calibrating the depth response curve, which is suitable for any kind of focus lens and comes for free in terms of programming effort once the tracking-by-synthesis algorithm has been implemented.
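The following sketch illustrates the pyramidal idea under an assumed linear depth-response curve: each pixel depth maps to a blur radius, and the blur radius maps to a (possibly fractional) MIP/pyramid level. The functions and constants are hypothetical stand-ins, not the paper's calibrated curve.

```python
# Illustrative sketch of pyramidal depth blur: depth -> blur radius -> pyramid (MIP) level.
import math

def blur_radius(depth_m, focus_m=2.0, slope_px_per_m=1.5):
    """Assumed depth-response curve: blur grows with distance from the focal plane."""
    return slope_px_per_m * abs(depth_m - focus_m)

def pyramid_level(depth_m, max_level=4):
    """Each pyramid level halves resolution, roughly doubling the effective blur."""
    r = blur_radius(depth_m)
    level = 0.0 if r < 1.0 else math.log2(r) + 1.0
    return min(max_level, level)

for depth in (2.0, 3.0, 6.0, 20.0):
    print(depth, "m ->", round(pyramid_level(depth), 2))
```

A fractional level would then be realized by blending two adjacent pyramid levels, in the spirit of trilinear MIP mapping.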
{"title":"Tracking-by-synthesis using point features and pyramidal blurring","authors":"Gilles Simon","doi":"10.1109/ismar.2011.6092373","DOIUrl":"https://doi.org/10.1109/ismar.2011.6092373","url":null,"abstract":"Tracking-by-synthesis is a promising method for markerless vision-based camera tracking, particularly suitable for Augmented Reality applications. In particular, it is drift-free, viewpoint invariant and easy-to-combine with physical sensors such as GPS and inertial sensors. While edge features have been used succesfully within the tracking-by-synthesis framework, point features have, to our knowledge, still never been used. We believe that this is due to the fact that real-time corner detectors are generally weakly repeatable between a camera image and a rendered texture. In this paper, we compare the repeatability of commonly used FAST, Harris and SURF interest point detectors across view synthesis. We show that adding depth blur to the rendered texture can drastically improve the repeatability of FAST and Harris corner detectors (up to 100% in our experiments), which can be very helpful, e.g., to make tracking-by-synthesis running on mobile phones. We propose a method for simulating depth blur on the rendered images using a pre-calibrated depth response curve. In order to fulfil the performance requirements, a pyramidal approach is used based on the well-known MIP mapping technique. We also propose an original method for calibrating the depth response curve, which is suitable for any kind of focus lenses and comes for free in terms of programming effort, once the tracking-by-synthesis algorithm has been implemented.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"16 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130171694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Creating hybrid user interfaces with a 2D multi-touch tabletop and a 3D see-through head-worn display
N. Dedual, Ohan Oda, Steven K. Feiner
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092391
How can multiple different display and interaction devices be used together to create an effective augmented reality environment? We explore the design of several prototype hybrid user interfaces that combine a 2D multi-touch tabletop display with a 3D head-tracked video-see-through display. We describe a simple modeling application and an urban visualization tool in which the information presented on the head-worn display supplements the information displayed on the tabletop, using a variety of approaches to track the head-worn display relative to the tabletop. In all cases, our goal is to allow users who can see only the tabletop to interact effectively with users wearing head-worn displays.
{"title":"Creating hybrid user interfaces with a 2D multi-touch tabletop and a 3D see-through head-worn display","authors":"N. Dedual, Ohan Oda, Steven K. Feiner","doi":"10.1109/ISMAR.2011.6092391","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092391","url":null,"abstract":"How can multiple different display and interaction devices be used together to create an effective augmented reality environment? We explore the design of several prototype hybrid user interfaces that combine a 2D multi-touch tabletop display with a 3D head-tracked video-see-through display. We describe a simple modeling application and an urban visualization tool in which the information presented on the head-worn display supplements the information displayed on the tabletop, using a variety of approaches to track the head-worn display relative to the tabletop. In all cases, our goal is to allow users who can see only the tabletop to interact effectively with users wearing head-worn displays.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134195749","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building your vision with Qualcomm's Mobile Augmented Reality (AR) platform: AR on mobile devices
Daniel Wagner, I. Barakonyi, Istvan Siklossy, Jay Wright, R. Ashok, S. Diaz, B. MacIntyre, D. Schmalstieg
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092355
Tutorial outline:
- Academic/Research Overview of Mobile Augmented Reality
- Introduction to the Qualcomm AR Platform: Features and Usage Scenarios
- Developing Mobile AR Applications using Qualcomm's Unity Extension
- Introduction to Qualcomm's QCAR SDK Native API
- Cross-Platform Development with the Native QCAR SDK
{"title":"Building your vision with Qualcomm's Mobile Augmented Reality (AR) platform: AR on mobile devices","authors":"Daniel Wagner, I. Barakonyi, Istvan Siklossy, Jay Wright, R. Ashok, S. Diaz, B. MacIntyre, D. Schmalstieg","doi":"10.1109/ISMAR.2011.6092355","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092355","url":null,"abstract":"Academic/Research Overview of Mobile Augmented Reality Introduction to the Qualcomm AR Platform — Features and Usage Scenarios Developing Mobile AR Applications using Qualcomm's Unity Extension Introduction to Qualcomm's QCAR SDK Native API Cross Platform Development with the Native QCAR SDK","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133033134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Light factorization for mixed-frequency shadows in augmented reality
D. Nowrouzezahrai, S. Geiger, Kenny Mitchell, R. Sumner, Wojciech Jarosz, M. Gross
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092384
Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radiometric consistency. We focus on the latter problem and present an approach that captures and factorizes external lighting in a manner that allows realistic relighting of both animated and static virtual objects. Our factorization facilitates a combination of hard and soft shadows, at high performance, in a manner consistent with the surrounding scene lighting.
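A generic way to write such a mixed-frequency factorization, shown only as an illustrative formulation rather than the paper's exact model, splits the captured environment light into a low-order spherical-harmonic part (driving soft shadows) and a small set of extracted directional lights (driving hard shadows):

```latex
% Illustrative mixed-frequency factorization of captured environment lighting.
L(\omega) \;\approx\;
\underbrace{\sum_{l=0}^{n}\sum_{m=-l}^{l} c_{lm}\, Y_{lm}(\omega)}_{\text{low-frequency term (soft shadows)}}
\;+\;
\underbrace{\sum_{k=1}^{K} I_k\, \delta(\omega - \omega_k)}_{\text{high-frequency lights (hard shadows)}}
```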
{"title":"Light factorization for mixed-frequency shadows in augmented reality","authors":"D. Nowrouzezahrai, S. Geiger, Kenny Mitchell, R. Sumner, Wojciech Jarosz, M. Gross","doi":"10.1109/ISMAR.2011.6092384","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092384","url":null,"abstract":"Integrating animated virtual objects with their surroundings for high-quality augmented reality requires both geometric and radio-metric consistency. We focus on the latter of these problems and present an approach that captures and factorizes external lighting in a manner that allows for realistic relighting of both animated and static virtual objects. Our factorization facilitates a combination of hard and soft shadows, with high-performance, in a manner that is consistent with the surrounding scene lighting.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114143907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RGB-D camera-based parallel tracking and meshing
S. Lieberknecht, Andrea Huber, Slobodan Ilic, Selim Benhimane
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092380
Compared to standard color cameras, RGB-D cameras additionally provide the depth of imaged pixels, which results in a dense colored 3D point cloud representing the environment from a certain viewpoint. We present a real-time tracking method that estimates the motion of a consumer RGB-D camera with respect to an unknown environment while at the same time reconstructing this environment as a dense textured mesh. Unlike parallel tracking and mapping with a standard color or grayscale camera, tracking with an RGB-D camera allows correctly scaled camera motion estimation. Therefore, there is no need to measure the environment with any additional tool or to equip it with objects of known size. Tracking can be started directly and does not require any preliminary known and/or constrained camera motion. The colored point clouds obtained from every RGB-D image are used to create textured meshes representing the environment from a certain camera view, and the camera motion estimated in real time is used to correctly align these meshes over time and combine them into a dense reconstruction of the environment. We quantitatively evaluated the proposed method using real image sequences of a challenging scenario and their corresponding ground-truth motion obtained with a mechanical measurement arm. We also compared it to a commonly used state-of-the-art method that uses only the color information. We show the superiority of the proposed tracking in terms of accuracy, robustness, and usability. We also demonstrate its use in several Augmented Reality scenarios in which the tracking allows reliable camera motion estimation and the meshing increases the realism of the augmentations by correctly handling their occlusions.
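The metric scale comes directly from the depth channel: with known pinhole intrinsics, each depth pixel back-projects to a 3D point in meters, so motion estimated against these points carries the correct scale. A minimal sketch, with illustrative intrinsics rather than values from the paper:

```python
# Sketch: back-project a metric depth image into a 3D point cloud in camera coordinates.
import numpy as np

def backproject(depth_m, fx, fy, cx, cy):
    """Convert an HxW depth image (meters) into an (H*W, 3) point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx          # inverse pinhole model, per pixel
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1).reshape(-1, 3)

# Example: a tiny 2x2 depth image with every pixel 1.5 m away.
cloud = backproject(np.full((2, 2), 1.5), fx=525.0, fy=525.0, cx=0.5, cy=0.5)
print(cloud)
```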
{"title":"RGB-D camera-based parallel tracking and meshing","authors":"S. Lieberknecht, Andrea Huber, Slobodan Ilic, Selim Benhimane","doi":"10.1109/ISMAR.2011.6092380","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092380","url":null,"abstract":"Compared to standard color cameras, RGB-D cameras are designed to additionally provide the depth of imaged pixels which in turn results in a dense colored 3D point cloud representing the environment from a certain viewpoint. We present a real-time tracking method that performs motion estimation of a consumer RGB-D camera with respect to an unknown environment while at the same time reconstructing this environment as a dense textured mesh. Unlike parallel tracking and mapping performed with a standard color or grey scale camera, tracking with an RGB-D camera allows a correctly scaled camera motion estimation. Therefore, there is no need for measuring the environment by any additional tool or equipping the environment by placing objects in it with known sizes. The tracking can be directly started and does not require any preliminary known and/or constrained camera motion. The colored point clouds obtained from every RGB-D image are used to create textured meshes representing the environment from a certain camera view and the real-time estimated camera motion is used to correctly align these meshes over time in order to combine them into a dense reconstruction of the environment. We quantitatively evaluated the proposed method using real image sequences of a challenging scenario and their corresponding ground truth motion obtained with a mechanical measurement arm. We also compared it to a commonly used state-of-the-art method where only the color information is used. We show the superiority of the proposed tracking in terms of accuracy, robustness and usability. We also demonstrate its usage in several Augmented Reality scenarios where the tracking allows a reliable camera motion estimation and the meshing increases the realism of the augmentations by correctly handling their occlusions.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129972024","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Toward augmenting everything: Detecting and tracking geometrical features on planar objects
Hideaki Uchiyama, É. Marchand
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092366
This paper presents an approach for detecting and tracking various types of planar objects with geometrical features. We combine traditional keypoint detectors with Locally Likely Arrangement Hashing (LLAH) [21] for geometrical-feature-based keypoint matching. Because the stability of keypoint extraction affects the accuracy of keypoint matching, we base keypoint selection on keypoint response and the distance between keypoints. To provide robustness to scale changes, we build a non-uniform image pyramid according to the keypoint distribution at each scale. In the experiments, we evaluate the applicability of traditional keypoint detectors with LLAH for detection. We also compare our approach with SURF and finally demonstrate that it is possible to detect and track different types of textures, including colorful pictures, binary fiducial markers, and handwriting.
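The selection criteria mentioned above can be sketched as a greedy filter that prefers strong detector responses while enforcing a minimum spacing between selected keypoints; the parameter values below are illustrative assumptions, not the paper's settings.

```python
# Sketch: keypoint selection by response strength with a minimum inter-point distance.
import numpy as np

def select_keypoints(points, responses, min_dist=10.0, max_points=500):
    """points: (N,2) pixel coordinates; responses: (N,) detector responses."""
    points = np.asarray(points, dtype=float)
    order = np.argsort(-np.asarray(responses))          # strongest responses first
    selected = []
    for i in order:
        if all(np.linalg.norm(points[i] - points[j]) >= min_dist for j in selected):
            selected.append(i)
            if len(selected) == max_points:
                break
    return selected

pts = [[10.0, 10.0], [12.0, 11.0], [60.0, 40.0], [200.0, 90.0]]
print(select_keypoints(pts, [0.9, 0.8, 0.7, 0.95], min_dist=15.0))  # drops the crowded second point
```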
{"title":"Toward augmenting everything: Detecting and tracking geometrical features on planar objects","authors":"Hideaki Uchiyama, É. Marchand","doi":"10.1109/ISMAR.2011.6092366","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092366","url":null,"abstract":"This paper presents an approach for detecting and tracking various types of planar objects with geometrical features. We combine traditional keypoint detectors with Locally Likely Arrangement Hashing (LLAH) [21] for geometrical feature based keypoint matching. Because the stability of keypoint extraction affects the accuracy of the keypoint matching, we set the criteria of keypoint selection on keypoint response and the distance between keypoints. In order to produce robustness to scale changes, we build a non-uniform image pyramid according to keypoint distribution at each scale. In the experiments, we evaluate the applicability of traditional keypoint detectors with LLAH for the detection. We also compare our approach with SURF and finally demonstrate that it is possible to detect and track different types of textures including colorful pictures, binary fiducial markers and handwritings.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"84 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126409862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A user study on the Snap-To-Feature interaction method
Gun A. Lee, M. Billinghurst
Pub Date: 2011-10-26 | DOI: 10.1109/ISMAR.2011.6092398
Recent advances in mobile computing and augmented reality (AR) technology have led to the popularization of mobile AR applications. Touch screen input is common on mobile devices and is widely used in mobile AR applications. However, due to unsteady camera view movement, it can be hard to carry out precise interactions in handheld AR environments for tasks such as tracing physical objects. In this research, we investigate a Snap-To-Feature interaction method that helps users perform more accurate touch screen interactions by attracting user input points to image features in the AR scene. A user experiment is performed using the method to trace a physical object, which is typical when modeling real objects within the AR scene. The results show that the Snap-To-Feature method makes a significant difference in the accuracy of touch-screen-based AR interaction.
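The core idea can be sketched in a few lines: pull the raw touch position to the nearest detected image feature if one lies within a snap radius, otherwise keep the touch as-is. The radius value is an illustrative assumption, not a value from the study.

```python
# Sketch of the snapping idea behind Snap-To-Feature.
import numpy as np

def snap_to_feature(touch_xy, feature_xy, snap_radius_px=20.0):
    """touch_xy: (2,) touch position; feature_xy: (N,2) detected feature positions."""
    touch = np.asarray(touch_xy, dtype=float)
    features = np.asarray(feature_xy, dtype=float)
    if len(features) == 0:
        return touch                                     # nothing to snap to
    dists = np.linalg.norm(features - touch, axis=1)
    nearest = np.argmin(dists)
    return features[nearest] if dists[nearest] <= snap_radius_px else touch

features = [[101.0, 48.0], [300.0, 220.0]]
print(snap_to_feature([95.0, 52.0], features))           # snaps to (101, 48), ~7 px away
```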
{"title":"A user study on the Snap-To-Feature interaction method","authors":"Gun A. Lee, M. Billinghurst","doi":"10.1109/ISMAR.2011.6092398","DOIUrl":"https://doi.org/10.1109/ISMAR.2011.6092398","url":null,"abstract":"Recent advances in mobile computing and augmented reality (AR) technology have lead to popularization of mobile AR applications. Touch screen input is common in mobile devices, and also widely used in mobile AR applications. However, due to unsteady camera view movement, it can be hard to carry out precise interactions in handheld AR environments, for tasks such as tracing physical objects. In this research, we investigate a Snap-To-Feature interaction method that helps users to perform more accurate touch screen interactions by attracting user input points to image features in the AR scene. A user experiment is performed using the method to trace a physical object, which is typical for modeling real objects within the AR scene. The results shows that the Snap-To-Feature method makes a significant difference in the accuracy of touch screen based AR interaction.","PeriodicalId":298757,"journal":{"name":"2011 10th IEEE International Symposium on Mixed and Augmented Reality","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2011-10-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126667073","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}