3DV 2020 Organizing Committee
Pub Date: 2020-11-01  DOI: 10.1109/3dv50981.2020.00007
{"title":"3DV 2020 Organizing Committee","authors":"","doi":"10.1109/3dv50981.2020.00007","DOIUrl":"https://doi.org/10.1109/3dv50981.2020.00007","url":null,"abstract":"","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130606209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cycle-Consistent Generative Rendering for 2D-3D Modality Translation
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00033
Tristan Aumentado-Armstrong, Alex Levinshtein, Stavros Tsogkas, K. Derpanis, A. Jepson
For humans, visual understanding is inherently generative: given a 3D shape, we can postulate how it would look in the world; given a 2D image, we can infer the 3D structure that likely gave rise to it. We can thus translate between the 2D visual and 3D structural modalities of a given object. In the context of computer vision, this corresponds to a learnable module that serves two purposes: (i) generate a realistic rendering of a 3D object (shape-to-image translation) and (ii) infer a realistic 3D shape from an image (image-to-shape translation). In this paper, we learn such a module while being conscious of the difficulties in obtaining large paired 2D-3D datasets. By leveraging generative domain translation methods, we are able to define a learning algorithm that requires only weak supervision, with unpaired data. The resulting model is not only able to perform 3D shape, pose, and texture inference from 2D images, but can also generate novel textured 3D shapes and renders, similar to a graphics pipeline. More specifically, our method (i) infers an explicit 3D mesh representation, (ii) utilizes example shapes to regularize inference, (iii) requires only an image mask (no keypoints or camera extrinsics), and (iv) has generative capabilities. While prior work explores subsets of these properties, their combination is novel. We demonstrate the utility of our learned representation, as well as its performance on image generation and unpaired 3D shape inference tasks.
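The unpaired training signal rests on cycle consistency between the two translation directions. The sketch below illustrates that idea only, with tiny made-up MLPs standing in for the renderer and the shape-inference network; it is not the paper's architecture or loss set.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Stand-in networks: a "renderer" mapping a shape code to an image code and
# an "inference" network mapping it back. Real versions would output pixels
# and meshes; plain MLPs keep the sketch self-contained.
render_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
infer_net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 16))

def cycle_losses(shape_code, image):
    # shape -> image -> shape cycle
    shape_back = infer_net(render_net(shape_code))
    loss_shape = F.l1_loss(shape_back, shape_code)
    # image -> shape -> image cycle
    image_back = render_net(infer_net(image))
    loss_image = F.l1_loss(image_back, image)
    return loss_shape + loss_image

loss = cycle_losses(torch.randn(8, 16), torch.randn(8, 32))
loss.backward()  # gradients flow through both unpaired cycles
```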
{"title":"Cycle-Consistent Generative Rendering for 2D-3D Modality Translation","authors":"Tristan Aumentado-Armstrong, Alex Levinshtein, Stavros Tsogkas, K. Derpanis, A. Jepson","doi":"10.1109/3DV50981.2020.00033","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00033","url":null,"abstract":"For humans, visual understanding is inherently generative: given a 3D shape, we can postulate how it would look in the world; given a 2D image, we can infer the 3D structure that likely gave rise to it. We can thus translate between the 2D visual and 3D structural modalities of a given object. In the context of computer vision, this corresponds to a learnable module that serves two purposes: (i) generate a realistic rendering of a 3D object (shape-toimage translation) and (ii) infer a realistic 3D shape from an image (image-to-shape translation). In this paper, we learn such a module while being conscious of the difficulties in obtaining large paired 2D-3D datasets. By leveraging generative domain translation methods, we are able to define a learning algorithm that requires only weak supervision, with unpaired data. The resulting model is not only able to perform 3D shape, pose, and texture inference from 2D images, but can also generate novel textured 3D shapes and renders, similar to a graphics pipeline. More specifically, our method (i) infers an explicit 3D mesh representation, (ii) utilizes example shapes to regularize inference, (iii) requires only an image mask (no keypoints or camera extrinsics), and (iv) has generative capabilities. While prior work explores subsets of these properties, their combination is novel. We demonstrate the utility of our learned representation, as well as its performance on image generation and unpaired 3D shape inference tasks.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125116934","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Motion Annotation Programs: A Scalable Approach to Annotating Kinematic Articulations in Large 3D Shape Collections
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00071
Xianghao Xu, David Charatan, Sonia Raychaudhuri, Hanxiao Jiang, Mae Heitmann, Vladimir G. Kim, S. Chaudhuri, M. Savva, Angel X. Chang, Daniel Ritchie
3D models of real-world objects are essential for many applications, including the creation of virtual environments for AI training. To mimic real-world objects in these applications, objects must be annotated with their kinematic mobilities. Annotating kinematic motions is time-consuming, and it is not well-suited to typical crowdsourcing workflows due to the significant domain expertise required. In this paper, we present a system that helps individual expert users rapidly annotate kinematic motions in large 3D shape collections. The organizing concept of our system is motion annotation programs: simple, re-usable procedural rules that generate motion for a given input shape. Our interactive system allows users to author these rules and quickly apply them to collections of functionally-related objects. Using our system, an expert annotated over 1000 joints in under 3 hours. In a user study, participants with no prior experience with our system were able to annotate motions 1.5x faster than with a baseline manual annotation tool.
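As a rough illustration of what such a procedural rule can look like, the sketch below hard-codes one hypothetical rule (a hinge on the left edge of a door-like part) that maps a part's bounding box to a joint specification. The rule, field names, and data layout are invented for this example, not taken from the paper's system.

```python
import numpy as np

def door_hinge_rule(bbox_center, bbox_halfextents):
    """Hypothetical annotation program: revolute joint on the part's left edge."""
    axis = np.array([0.0, 0.0, 1.0])                              # rotate about world z
    origin = bbox_center - np.array([bbox_halfextents[0], 0.0, 0.0])
    return {"type": "revolute", "axis": axis, "origin": origin,
            "range_deg": (0.0, 90.0)}

# Applying the same rule across a collection of functionally-related parts.
joint = door_hinge_rule(np.array([0.5, 0.0, 1.0]), np.array([0.4, 0.05, 1.0]))
print(joint)
```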
{"title":"Motion Annotation Programs: A Scalable Approach to Annotating Kinematic Articulations in Large 3D Shape Collections","authors":"Xianghao Xu, David Charatan, Sonia Raychaudhuri, Hanxiao Jiang, Mae Heitmann, Vladimir G. Kim, S. Chaudhuri, M. Savva, Angel X. Chang, Daniel Ritchie","doi":"10.1109/3DV50981.2020.00071","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00071","url":null,"abstract":"3D models of real-world objects are essential for many applications, including the creation of virtual environments for AI training. To mimic real-world objects in these applications, objects must be annotated with their kinematic mobilities. Annotating kinematic motions is time-consuming, and it is not well-suited to typical crowdsourcing workflows due to the significant domain expertise required. In this paper, we present a system that helps individual expert users rapidly annotate kinematic motions in large 3D shape collections. The organizing concept of our system is motion annotation programs: simple, re-usable procedural rules that generate motion for a given input shape. Our interactive system allows users to author these rules and quickly apply them to collections of functionally-related objects. Using our system, an expert annotated over 1000 joints in under 3 hours. In a user study, participants with no prior experience with our system were able to annotate motions 1.5x faster than with a baseline manual annotation tool.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"240 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114663381","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HyperSLAM: A Generic and Modular Approach to Sensor Fusion and Simultaneous Localization And Mapping in Continuous-Time
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00108
David Hug, M. Chli
Within recent years, Continuous-Time Simultaneous Localization And Mapping (CTSLAM) formalisms have become subject to increased attention from the scientific community due to their vast potential in facilitating motion-corrected feature reprojection and direct unsynchronized multi-rate sensor fusion. They also hold the promise of yielding better estimates in traditional sensor setups (e.g. visual, inertial) when compared to conventional discrete-time approaches. Related works mostly rely on cubic, $C^{2}$-continuous, uniform cumulative B-Splines to exemplify and demonstrate the benefits inherent to continuous-time representations. However, as this type of spline gives rise to continuous trajectories by blending uniformly distributed $\mathbb{SE}_{3}$ transformations in time, it is prone to under- or overparametrize underlying motions with varying volatility and prohibits dynamic trajectory refinement or sparsification by design. In light of this, we propose employing a more generalized and efficient non-uniform split interpolation method in $\mathbb{R}\times\mathbb{SU}_{2}\times\mathbb{R}^{3}$ and commence with the development of ‘HyperSLAM’, a generic and modular CTSLAM framework. The efficacy of our approach is exemplified in proof-of-concept simulations based on a visual, monocular setup.
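For intuition about the cumulative-spline machinery this line of work builds on, here is a minimal numpy sketch of uniform cumulative cubic B-spline interpolation applied to the translational component only; the paper's contribution is the non-uniform generalization over R x SU(2) x R^3, which this sketch does not implement.

```python
import numpy as np

# Cumulative basis matrix for uniform cubic B-splines (the form used in
# continuous-time trajectory representations).
C = (1.0 / 6.0) * np.array([[6.0, 0.0,  0.0,  0.0],
                            [5.0, 3.0, -3.0,  1.0],
                            [1.0, 3.0,  3.0, -2.0],
                            [0.0, 0.0,  0.0,  1.0]])

def eval_translation(ctrl_pts, i, u):
    """Interpolate between control points ctrl_pts[i..i+3] at u in [0, 1)."""
    b = C @ np.array([1.0, u, u * u, u ** 3])      # cumulative basis weights
    p = ctrl_pts[i].copy()
    for j in range(1, 4):                          # blend relative increments
        p += b[j] * (ctrl_pts[i + j] - ctrl_pts[i + j - 1])
    return p

ctrl = np.array([[0., 0., 0.], [1., 0., 0.], [2., 1., 0.], [3., 3., 0.], [4., 6., 0.]])
print(eval_translation(ctrl, 0, 0.5))
```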
{"title":"HyperSLAM: A Generic and Modular Approach to Sensor Fusion and Simultaneous Localization And Mapping in Continuous-Time","authors":"David Hug, M. Chli","doi":"10.1109/3DV50981.2020.00108","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00108","url":null,"abstract":"Within recent years, Continuous-Time Simultaneous Localization And Mapping (CTSLAM) formalisms have become subject to increased attention from the scientific community due to their vast potential in facilitating motion corrected feature reprojection and direct unsynchronized multi-rate sensor fusion. They also hold the promise of yielding better estimates in traditional sensor setups (e.g. visual, inertial) when compared to conventional discrete-time approaches. Related works mostly rely on cubic, $C^{2}-$continuous, uniform cumulative B-Splines to exemplify and demonstrate the benefits inherent to continuous-time representations. However, as this type of splines gives rise to continuous trajectories by blending uniformly distributed $mathbb{SE}_{3}$ transformations in time, it is prone to under- or overparametrize underlying motions with varying volatility and prohibits dynamic trajectory refinement or sparsification by design. In light of this, we propose employing a more generalized and efficient non-uniform split interpolation method in $mathbb{R}times mathbb{SU}_{2}times mathbb{R}^{3}$ and commence with development of ‘HyperSLAM’, a generic and modular CTSLAM framework. The efficacy of our approach is exemplified in proof-of-concept simulations based on a visual, monocular setup.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"22 14_suppl 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128204516","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Precomputed Radiance Transfer for Reflectance and Lighting Estimation
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00125
D. Thul, Vagia Tsiminaki, L. Ladicky, M. Pollefeys
Decomposing scenes into reflectance and lighting is an important task for applications such as relighting, image matching or content creation. Advanced light transport effects like occlusion and indirect lighting are often ignored, leading to subpar decompositions in which the albedo needs to compensate for insufficiencies in the estimated shading. We show how to account for these advanced lighting effects by utilizing precomputed radiance transfer to estimate reflectance and lighting. Given the geometry of an object and one or multiple images, our method reconstructs the object’s surface reflectance properties—such as its albedo and glossiness—as well as a colored lighting environment map. Evaluation on synthetic and real data shows that incorporation of indirect light leads to qualitatively and quantitatively improved results.
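The appeal of precomputed radiance transfer in this setting is that, for a diffuse surface with known transfer vectors, observed intensities are linear in the lighting coefficients, so lighting estimation reduces to least squares. The toy below demonstrates only that linearity on synthetic numbers; the dimensions and variable names are invented, and it ignores glossiness and the paper's actual optimization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_points, n_basis = 200, 9                  # e.g. 9 = 3rd-order SH lighting
T = rng.random((n_points, n_basis))         # precomputed transfer vectors
albedo = rng.uniform(0.2, 0.9, n_points)    # per-point diffuse albedo
L_true = rng.normal(size=n_basis)           # unknown lighting coefficients

obs = albedo * (T @ L_true)                 # "observed" image intensities

# Recover lighting: solve (albedo[:, None] * T) @ L = obs in least squares.
A = albedo[:, None] * T
L_est, *_ = np.linalg.lstsq(A, obs, rcond=None)
print(np.allclose(L_est, L_true))           # True in this noiseless toy
```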
{"title":"Precomputed Radiance Transfer for Reflectance and Lighting Estimation","authors":"D. Thul, Vagia Tsiminaki, L. Ladicky, M. Pollefeys","doi":"10.1109/3DV50981.2020.00125","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00125","url":null,"abstract":"Decomposing scenes into reflectance and lighting is an important task for applications such as relighting, image matching or content creation. Advanced light transport effects like occlusion and indirect lighting are often ignored, leading to subpar decompositions in which the albedo needs to compensate for insufficiencies in the estimated shading. We show how to account for these advanced lighting effects by utilizing precomputed radiance transfer to estimate reflectance and lighting. Given the geometry of an object and one or multiple images, our method reconstructs the object’s surface reflectance properties—such as its albedo and glossiness—as well as a colored lighting environment map. Evaluation on synthetic and real data shows that incorporation of indirect light leads to qualitatively and quantitatively improved results.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124108510","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
High-Dynamic-Range Lighting Estimation From Face Portraits
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00045
Alejandro Sztrajman, A. Neophytou, T. Weyrich, Eric Sommerlade
We present a CNN-based method for outdoor high-dynamic-range (HDR) environment map prediction from low-dynamic-range (LDR) portrait images. Our method relies on two different CNN architectures, one for light encoding and another for face-to-light prediction. Outdoor lighting is characterised by an extremely high dynamic range, and thus our encoding splits the environment map data between low- and high-intensity components and encodes them using tailored representations. The combination of both network architectures constitutes an end-to-end method for accurate HDR light prediction from faces at real-time rates, inaccessible to previous methods, which focused on low-dynamic-range lighting or relied on non-linear optimisation schemes. We train our networks using both real and synthetic images, we compare our light encoding with other methods for light representation, and we analyse our results for light prediction on real images. We show that our predicted HDR environment maps can be used as accurate illumination sources for scene renderings, with potential applications in 3D object insertion for augmented reality.
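A deliberately simplified way to picture the low/high-intensity split of an HDR environment map is a threshold decomposition like the one below; the paper's encoding uses tailored learned representations for each component, which this sketch does not attempt, and the threshold here is an arbitrary choice.

```python
import numpy as np

def split_hdr(env, threshold=1.0):
    """Split an HDR map into a bounded part and a sparse high-intensity residual."""
    low = np.clip(env, 0.0, threshold)       # bounded, LDR-like component
    high = np.maximum(env - threshold, 0.0)  # high-intensity residual (e.g. the sun)
    return low, high

env = np.array([[0.3, 0.8], [50.0, 1.2]])    # tiny fake HDR map
low, high = split_hdr(env)
assert np.allclose(low + high, env)          # the split is exactly invertible
```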
{"title":"High-Dynamic-Range Lighting Estimation From Face Portraits","authors":"Alejandro Sztrajman, A. Neophytou, T. Weyrich, Eric Sommerlade","doi":"10.1109/3DV50981.2020.00045","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00045","url":null,"abstract":"We present a CNN-based method for outdoor highdynamic-range (HDR) environment map prediction from low-dynamic-range (LDR) portrait images. Our method relies on two different CNN architectures, one for light encoding and another for face-to-light prediction. Outdoor lighting is characterised by an extremely high dynamic range, and thus our encoding splits the environment map data between low and high-intensity components, and encodes them using tailored representations. The combination of both network architectures constitutes an end-to-end method for accurate HDR light prediction from faces at real-time rates, inaccessible for previous methods which focused on low dynamic range lighting or relied on non-linear optimisation schemes. We train our networks using both real and synthetic images, we compare our light encoding with other methods for light representation, and we analyse our results for light prediction on real images. We show that our predicted HDR environment maps can be used as accurate illumination sources for scene renderings, with potential applications in 3D object insertion for augmented reality.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132508346","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Semantic Deep Face Models
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00044
P. Chandran, D. Bradley, M. Gross, T. Beeler
Face models built from 3D face databases are often used in computer vision and graphics tasks such as face reconstruction, replacement, tracking and manipulation. For such tasks, commonly used multi-linear morphable models, which provide semantic control over facial identity and expression, often lack quality and expressivity due to their linear nature. Deep neural networks offer the possibility of non-linear face modeling, where so far most research has focused on generating realistic facial images with less focus on 3D geometry, and methods that do produce geometry have little or no notion of semantic control, thereby limiting their artistic applicability. We present a method for nonlinear 3D face modeling using neural architectures that provides intuitive semantic control over both identity and expression by disentangling these dimensions from each other, essentially combining the benefits of both multi-linear face models and nonlinear deep face networks. The result is a powerful, semantically controllable, nonlinear, parametric face model. We demonstrate the value of our semantic deep face model with applications of 3D face synthesis, facial performance transfer, performance editing, and 2D landmark-based performance retargeting.
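To make the idea of separate, semantically meaningful codes concrete, here is a hedged sketch of a decoder that consumes an identity code and an expression code and emits mesh vertices; the layer sizes, the plain concatenation, and the MLP itself are placeholder choices, not the paper's network.

```python
import torch
import torch.nn as nn

class FaceDecoder(nn.Module):
    """Toy decoder with disentangled identity and expression inputs."""
    def __init__(self, id_dim=64, expr_dim=32, n_vertices=5023):
        super().__init__()
        self.n_vertices = n_vertices
        self.net = nn.Sequential(
            nn.Linear(id_dim + expr_dim, 256), nn.ReLU(),
            nn.Linear(256, n_vertices * 3))

    def forward(self, z_id, z_expr):
        z = torch.cat([z_id, z_expr], dim=-1)          # separate semantic codes
        return self.net(z).view(-1, self.n_vertices, 3)

dec = FaceDecoder()
verts = dec(torch.randn(2, 64), torch.randn(2, 32))
print(verts.shape)  # torch.Size([2, 5023, 3]); swap z_expr to retarget expression
```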
{"title":"Semantic Deep Face Models","authors":"P. Chandran, D. Bradley, M. Gross, T. Beeler","doi":"10.1109/3DV50981.2020.00044","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00044","url":null,"abstract":"Face models built from 3D face databases are often used in computer vision and graphics tasks such as face reconstruction, replacement, tracking and manipulation. For such tasks, commonly used multi-linear morphable models, which provide semantic control over facial identity and expression, often lack quality and expressivity due to their linear nature. Deep neural networks offer the possibility of non-linear face modeling, where so far most research has focused on generating realistic facial images with less focus on 3D geometry, and methods that do produce geometry have little or no notion of semantic control, thereby limiting their artistic applicability. We present a method for nonlinear 3D face modeling using neural architectures that provides intuitive semantic control over both identity and expression by disentangling these dimensions from each other, essentially combining the benefits of both multi-linear face models and nonlinear deep face networks. The result is a powerful, semantically controllable, nonlinear, parametric face model. We demonstrate the value of our semantic deep face model with applications of 3D face synthesis, facial performance transfer, performance editing, and 2D landmark-based performance retargeting.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130461317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Learning Distribution Independent Latent Representation for 3D Face Disentanglement
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00095
Zihui Zhang, Cuican Yu, Huibin Li, Jian Sun, Feng Liu
Learning disentangled 3D face shape representations is beneficial to face attribute transfer, generation, and recognition, among other tasks. In this paper, we propose a novel distribution-independence-based method to learn to decompose 3D face shapes. Specifically, we design a variational auto-encoder with a Graph Convolutional Network (GCN), namely Mesh-Encoder, to model the distributions of identity and expression representations via variational inference. To disentangle facial expression and identity, we eliminate the correlation between the two distributions and enforce them to be independent through adversarial training. Extensive experiments show that the proposed approach achieves state-of-the-art results in 3D face shape decomposition and expression transfer. Though focusing on disentanglement, our method also achieves reconstruction accuracies comparable to state-of-the-art 3D face reconstruction methods.
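The paper removes the dependence between the identity and expression distributions with adversarial training; as a much simpler stand-in for the same goal, the sketch below penalizes the batch cross-covariance between the two codes. This only suppresses linear correlation and is not the adversarial objective used in the paper.

```python
import torch

def cross_covariance_penalty(z_id, z_expr):
    """Squared Frobenius norm of the batch cross-covariance between two codes."""
    z_id = z_id - z_id.mean(dim=0, keepdim=True)
    z_expr = z_expr - z_expr.mean(dim=0, keepdim=True)
    cov = z_id.t() @ z_expr / (z_id.shape[0] - 1)   # (id_dim, expr_dim)
    return (cov ** 2).sum()

# Added to the VAE loss, this pushes identity and expression codes apart
# in the linear-correlation sense.
penalty = cross_covariance_penalty(torch.randn(16, 64), torch.randn(16, 32))
print(float(penalty))
```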
{"title":"Learning Distribution Independent Latent Representation for 3D Face Disentanglement","authors":"Zihui Zhang, Cuican Yu, Huibin Li, Jian Sun, Feng Liu","doi":"10.1109/3DV50981.2020.00095","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00095","url":null,"abstract":"Learning disentangled 3D face shape representation is beneficial to face attribute transfer, generation and recognition, etc. In this paper, we propose a novel distribution independence-based method to learn to decompose 3D face shapes. Specifically, we design a variational auto-encoder with Graph Convolutional Network (GCN), namely Mesh-Encoder, to model the distributions of identity and expression representations via variational inference. To disentangle facial expression and identity, we eliminate correlation of the two distributions, and enforce them to be independent by adversarial training. Extensive experiments show that the proposed approach can achieve state-of-the-art results in 3D face shape decomposition and expression transfer. Though focusing on disentanglement, our method also achieves the reconstruction accuracies comparable to the state-of-the-art 3D face reconstruction methods.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127661047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3DV 2020 Program Committee
Pub Date: 2020-11-01  DOI: 10.1109/3dv50981.2020.00008
{"title":"3DV 2020 Program Committee","authors":"","doi":"10.1109/3dv50981.2020.00008","DOIUrl":"https://doi.org/10.1109/3dv50981.2020.00008","url":null,"abstract":"","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131377788","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Fast Discontinuity-Aware Subpixel Correspondence in Structured Light
Pub Date: 2020-11-01  DOI: 10.1109/3DV50981.2020.00121
Nicolas Hurtubise, S. Roy
Structured-light-based 3D scanning presents various challenges. While robustness to indirect illumination has been the subject of recent research, little has been said about discontinuities. This paper proposes a new discontinuity-aware algorithm for estimating structured light correspondences with subpixel accuracy. The algorithm is not only robust to common structured light problems, such as indirect lighting effects, but also identifies discontinuities explicitly. This results in a significant reduction of reconstruction artifacts at object borders, an omnipresent problem of structured light methods, especially those relying on direct decoding. Our method is faster than previously proposed robust subpixel methods, has been tested on synthetic as well as real data, and shows a significant improvement in measurements at discontinuities when compared with other state-of-the-art methods.
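For background on where subpixel accuracy typically comes from in structured light (this is generic practice, not the discontinuity-aware algorithm proposed here): project a pattern and its inverse, and locate stripe boundaries where their difference image crosses zero, interpolating linearly between neighbouring pixels.

```python
import numpy as np

def subpixel_zero_crossings(diff_row):
    """Return subpixel x-positions where a row of (pattern - inverse) changes sign."""
    xs = []
    for x in range(len(diff_row) - 1):
        a, b = diff_row[x], diff_row[x + 1]
        if a == 0.0:
            xs.append(float(x))
        elif a * b < 0.0:                  # sign change between x and x+1
            xs.append(x + a / (a - b))     # linear interpolation of the crossing
    return xs

row = np.array([0.8, 0.5, -0.3, -0.7, 0.2])   # pattern minus inverse, one scanline
print(subpixel_zero_crossings(row))           # [1.625, 3.777...]
```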
{"title":"Fast Discontinuity-Aware Subpixel Correspondence in Structured Light","authors":"Nicolas Hurtubise, S. Roy","doi":"10.1109/3DV50981.2020.00121","DOIUrl":"https://doi.org/10.1109/3DV50981.2020.00121","url":null,"abstract":"Structured light-based 3D scanning presents various challenges. While robustness to indirect illumination has been the subject of recent research, little has been said about discontinuities. This paper proposes a new discontinuity-aware algorithm for estimating structured light correspondences with subpixel accuracy. The algorithm is not only robust to common structured light problems, such as indirect lighting effects, but also identifies discontinuities explicitly. This results in a significant reduction of reconstruction artifacts at objects borders, an omnipresent problem of structured light methods, especially those relying on direct decoding. Our method is faster than previously proposed robust subpixel methods, has been tested on synthetic as well as real data and shows a significant improvement on measurement at discontinuities when compared with other state-of-the-art methods.","PeriodicalId":293399,"journal":{"name":"2020 International Conference on 3D Vision (3DV)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-11-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129176845","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}