The 2020 CHCCS/SCDHM Achievement Award from the Canadian Human-Computer Communications Society is presented to Dr. Ravin Balakrishnan. This award recognizes his significant and varied contributions in the areas of Human-Computer Interaction (HCI), Information and Communications Technology for Development, and Interactive Computer Graphics. Ravin’s work has had a tremendous impact on real-world applications. His research includes early innovations in areas such as 3D user interfaces, large display input, multitouch gestures, freehand input, and pen-based computing, which have informed and inspired techniques and technologies that are now commonplace in commercial products. What follows is a conversation between Ravin Balakrishnan and Prof. Tovi Grossman (University of Toronto) that took place in April 2020.
{"title":"A conversation with CHCCS 2020 achievement award winner Ravin Balakrishnan","authors":"R. Balakrishnan","doi":"10.20380/GI2020.01","DOIUrl":"https://doi.org/10.20380/GI2020.01","url":null,"abstract":"The 2020 CHCCS/SCDHM Achievement Award from the Canadian Human-Computer Communications Society is presented to Dr. Ravin Balakrishnan. This award recognizes his significant and varied contributions in the areas of Human Computer Interaction (HCI), Information and Communications Technology for Development, and Interactive Computer Graphics. Ravin’s work has had a tremendous impact on real-world applications. His research includes early innovations in areas such as 3D user interfaces, large display input, multitouch gestures, freehand input, and pen-based computing, which has informed and inspired techniques and technologies that are now commonplace in commercial products. a conversation between Ravin Balakrishnan and Prof. Tovi Grossman (University of Toronto) that took place in April, 2020.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"23 1","pages":"1-2"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83705785","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Colorization is the complex task of selecting a combination of colors and arriving at an appropriate spatial arrangement of those colors in an image. In this paper, we propose a novel approach for automatic colorization of graphic arts such as graphic patterns, infographics, and cartoons. Our approach uses an artist’s colored graphics as a reference to color a template image. We also propose a retrieval system for selecting a relevant reference image for a given template from a dataset of reference images colored by different artists. Finally, we formulate colorization as an optimal graph-matching problem over color groups in the reference and template images. We demonstrate results on a variety of coloring tasks and evaluate our model through multiple perceptual studies. The studies show that participants significantly prefer the results generated by our model over those of other automatic colorization methods.
{"title":"ColorArt: Suggesting Colorizations For Graphic Arts Using Optimal Color-Graph Matching","authors":"Murtuza Bohra, Vineet Gandhi","doi":"10.20380/GI2020.11","DOIUrl":"https://doi.org/10.20380/GI2020.11","url":null,"abstract":"Colorization is a complex task of selecting a combination of colors and arriving at an appropriate spatial arrangement of the colors in an image. In this paper, we propose a novel approach for automatic colorization of graphic arts like graphic patterns, info-graphics and cartoons. Our approach uses the artist’s colored graphics as a reference to color a template image. We also propose a retrieval system for selecting a relevant reference image corresponding to the given template from a dataset of reference images colored by different artists. Finally, we formulate the problem of colorization as a optimal graph matching problem over color groups in the reference and the template image. We demonstrate results on a variety of coloring tasks and evaluate our model through multiple perceptual studies. The studies show that the results generated through our model are significantly preferred by the participants over other automatic colorization methods.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"90 1","pages":"95-102"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75018607","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Daniella Briotto Faustino, Sara Nabil, A. Girouard
People living with vision impairment can be vulnerable to attackers when entering passwords on their smartphones, as their technology is more 'observable'. While researchers have proposed tangible interactions such as bend input as an alternative authentication method, little work has evaluated this method with people with vision impairment. This paper extends previous work by presenting our user study of bend passwords with 16 participants who live with varying levels of vision impairment or blindness. Each participant created their own passwords using both PIN codes and BendyPass, a combination of bend gestures performed on a flexible device. We explored whether BendyPass does indeed offer advantages over PINs and evaluated the usability of both. Our findings show bend passwords have learnability and memorability potential as a tactile authentication method for people with vision impairment, and could be faster to enter than PINs. However, BendyPass still has limitations relating to security and usability.
{"title":"Bend or PIN: Studying Bend Password Authentication with People with Vision Impairment","authors":"Daniella Briotto Faustino, Sara Nabil, A. Girouard","doi":"10.20380/GI2020.19","DOIUrl":"https://doi.org/10.20380/GI2020.19","url":null,"abstract":"People living with vision impairment can be vulnerable to attackers when entering passwords on their smartphones, as their technology is more 'observable'. While researchers have proposed tangible interactions such as bend input as an alternative authentication method, limited work have evaluated this method with people with vision impairment. This paper extends previous work by presenting our user study of bend passwords with 16 participants who live with varying levels of vision impairment or blindness. Each participant created their own passwords using both PIN codes and BendyPass, a combination of bend gestures performed on a flexible device. We explored whether BendyPass does indeed offer greater opportunity over PINs and evaluated the usability of both. Our findings show bend passwords have learnability and memorability potential as a tactile authentication method for people with vision impairment, and could be faster to enter than PINs. However, BendyPass still has limitations relating to security and usability.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"36 1","pages":"183-191"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74498434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Subhajit Das, Dylan Cashman, Remco Chang, A. Endert
Recent visual analytics systems make use of multiple machine learning models to better fit the data, as opposed to traditional systems built around a single, pre-defined model. However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models. Further, the variety of model algorithms and associated hyperparameters creates a vast model space to sample models from, making it complex to navigate that space to find the right model for the data and the task. In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space. By translating user interactions into inferences, Gaggle simplifies working with multiple models, automatically finding the best model from the high-dimensional model space to support various user tasks. Through a qualitative user study, we show how our approach helps users find the best model for a classification and ranking task. The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and automated model selection.
{"title":"Gaggle: Visual Analytics for Model Space Navigation","authors":"Subhajit Das, Dylan Cashman, Remco Chang, A. Endert","doi":"10.20380/GI2020.15","DOIUrl":"https://doi.org/10.20380/GI2020.15","url":null,"abstract":"Recent visual analytics systems make use of multiple machine learning models to better fit the data as opposed to traditional single, pre-defined model systems. However, while multi-model visual analytic systems can be effective, their added complexity poses usability concerns, as users are required to interact with the parameters of multiple models. Further, the advent of various model algorithms and associated hyperparameters creates an exhaustive model space to sample models from. This poses complexity to navigate this model space to find the right model for the data and the task. In this paper, we present Gaggle, a multi-model visual analytic system that enables users to interactively navigate the model space. Further translating user interactions into inferences, Gaggle simplifies working with multiple models by automatically finding the best model from the high-dimensional model space to support various user tasks. Through a qualitative user study, we show how our approach helps users to find a best model for a classification and ranking task. The study results confirm that Gaggle is intuitive and easy to use, supporting interactive model space navigation and auPaste the appropriate copyright statement here. ACM now supports three different copyright statements: • ACM copyright: ACM holds the copyright on the work. This is the historical approach. • License: The author(s) retain copyright, but ACM receives an exclusive publication license. • Open Access: The author(s) wish to pay for the work to be open access. The additional fee must be paid to ACM. This text field is large enough to hold the appropriate release statement assuming it is","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"137-147"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83178091","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present AnimationPak, a technique to create animated packings by arranging animated two-dimensional elements inside a static container. We represent animated elements in a three-dimensional spacetime domain, and view the animated packing problem as a three-dimensional packing in that domain. Every element is represented as a discretized spacetime mesh. In a physical simulation, meshes grow and repel each other, consuming the negative space in the container. The final animation frames are cross sections of the three-dimensional packing at a sequence of time values. The simulation trades off between the evenness of the negative space in the container, the temporal coherence of the animation, and the deformations of the elements. Elements can be guided around the container and the entire animation can be closed into a loop.
{"title":"AnimationPak: Packing Elements with Scripted Animations","authors":"Reza Adhitya Saputra, C. Kaplan, P. Asente","doi":"10.20380/GI2020.39","DOIUrl":"https://doi.org/10.20380/GI2020.39","url":null,"abstract":"We present AnimationPak, a technique to create animated packings by arranging animated two-dimensional elements inside a static container. We represent animated elements in a three-dimensional spacetime domain, and view the animated packing problem as a three-dimensional packing in that domain. Every element is represented as a discretized spacetime mesh. In a physical simulation, meshes grow and repel each other, consuming the negative space in the container. The final animation frames are cross sections of the three-dimensional packing at a sequence of time values. The simulation trades off between the evenness of the negative space in the container, the temporal coherence of the animation, and the deformations of the elements. Elements can be guided around the container and the entire animation can be closed into a loop.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"19 1","pages":"393-403"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77157356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Donya Ghafourzadeh, Srinivasan Ramachandran, Martin de Lasa, T. Popa, Eric Paquette
In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: (i) inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; (ii) segmentation with a low-distortion region-growing process based on flattening the segmented parts; (iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and (iv) aggregation of the mappings from segments to update the surface mapping. In addition, we propose a new approach to deform the mesh in order to meet constraints (in our case, the landmark alignment of phase (iii)). We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion. Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other low-distortion deformation methods. The approach is general, and we tested it by improving the mappings from different existing surface mapping methods. We also tested its effectiveness by editing the mappings for a variety of 3D objects.
{"title":"Local Editing of Cross-Surface Mappings with Iterative Least Squares Conformal Maps","authors":"Donya Ghafourzadeh, Srinivasan Ramachandran, Martin de Lasa, T. Popa, Eric Paquette","doi":"10.20380/GI2020.20","DOIUrl":"https://doi.org/10.20380/GI2020.20","url":null,"abstract":"In this paper, we propose a novel approach to improve a given surface mapping through local refinement. The approach receives an established mapping between two surfaces and follows four phases: (i) inspection of the mapping and creation of a sparse set of landmarks in mismatching regions; (ii) segmentation with a low-distortion region-growing process based on flattening the segmented parts; (iii) optimization of the deformation of segmented parts to align the landmarks in the planar parameterization domain; and (iv) aggregation of the mappings from segments to update the surface mapping. In addition, we propose a new approach to deform the mesh in order to meet constraints (in our case, the landmark alignment of phase (iii)). We incrementally adjust the cotangent weights for the constraints and apply the deformation in a fashion that guarantees that the deformed mesh will be free of flipped faces and will have low conformal distortion. Our new deformation approach, Iterative Least Squares Conformal Mapping (ILSCM), outperforms other low-distortion deformation methods. The approach is general, and we tested it by improving the mappings from different existing surface mapping methods. We also tested its effectiveness by editing the mappings for a variety of 3D objects.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"192-205"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79992574","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
2D scalar data fields are often represented as heatmaps because color can help viewers perceive structure without having to interpret individual digits. Although heatmaps and color mapping have received much research attention, there are alternative representations that have been generally overlooked and might overcome heatmap problems. For example, color perception is subject to context-based perceptual bias and high error, which can be addressed through representations that use digits to enable more accurate value reading. We designed a series of three experiments that compare five techniques: a regular table of digits (Digits), a state-of-the-art heatmap (Color), a heatmap with an interactive tooltip showing the value under the cursor (Tooltip), a heatmap with digits overlaid on it (DigitsColor), and FatFonts. Data from the three experiments, which test locating values, finding extrema, and clustering tasks, show that overlaying digits on color (DigitsColor) offers a substantial increase in accuracy (between 10 and 60 percentage points of improvement over the plain heatmap (Color), depending on the task) at the cost of extra time when locating extrema or forming clusters, but no extra time when locating values. The interactive tooltip offered a poor speed-accuracy tradeoff, but participants preferred it to the plain heatmap (Color) or digits-only (Digits) representations. We conclude that hybrid color-digit representations of scalar data fields could be highly beneficial for uses where spatial resolution and speed are not the main concern.
{"title":"The Effect of Visual and Interactive Representations on Human Performance and Preference with Scalar Data Fields","authors":"Han L. Han, Miguel A. Nacenta","doi":"10.20380/GI2020.23","DOIUrl":"https://doi.org/10.20380/GI2020.23","url":null,"abstract":"2D scalar data fields are often represented as heatmaps because color can help viewers perceive structure without having to interpret individual digits. Although heatmaps and color mapping have received much research attention, there are alternative representations that have been generally overlooked and might overcome heatmap problems. For example, color perception is subject to context-based perceptual bias and high error, which can be addressed through representations that use digits to enable more accurate value reading. We designed a series of three experiments that compare five techniques: a regular table of digits (Digits), a state-of-the-art heatmap (Color), a heatmap with an interactive tooltip showing the value under the cursor (Tooltip), a heatmap with the digits overlapped over it (DigitsColor), and FatFonts. Data analysis from the three experiments, which test locating values, finding extrema, and clustering tasks, show that overlapping digits on color (DigitsColor) offers a substantial increase in accuracy (between 10 and 60 percent points of improvement over the plain heatmap (Color), depending on the task) at the cost of extra time when locating extrema or forming clusters, but none when locating values. The interactive tooltip offered a poor speed-accuracy tradeoff, but participants preferred it to the plain heatmap (color) or digits-only (Digits) representations. We conclude that hybrid color-digit representations of scalar data fields could be highly beneficial for uses where spatial resolution and speed are not the main concern.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"45 1","pages":"225-235"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79714098","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation.
{"title":"StarHopper: A Touch Interface for Remote Object-Centric Drone Navigation","authors":"Jiannan Li, Ravin Balakrishnan, Tovi Grossman","doi":"10.20380/GI2020.32","DOIUrl":"https://doi.org/10.20380/GI2020.32","url":null,"abstract":"Camera drones, a rapidly emerging technology, offer people the ability to remotely inspect an environment with a high degree of mobility and agility. However, manual remote piloting of a drone is prone to errors. In contrast, autopilot systems can require a significant degree of environmental knowledge and are not necessarily designed to support flexible visual inspections. Inspired by camera manipulation techniques in interactive graphics, we designed StarHopper, a novel touch screen interface for efficient object-centric camera drone navigation","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"1 1","pages":"317-326"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74898110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
D. Schott, Benjamin Hatscher, F. Joeres, Mareike Gabele, Steffi Hußlein, C. Hansen
Complex bi-manual tasks often benefit from supporting visual information and guidance. Controlling the system that provides this information is a secondary task that forces the user to perform concurrent multitasking, which in turn may affect performance on the main task. Interactions based on natural behavior are a promising solution to this challenge. We investigated the performance of these interactions in a hands-free image manipulation task carried out during a primary manual task in an upright stance. Essential tasks were extracted from an example clinical workflow and turned into an abstract simulation to gain general insights into how different interaction techniques impact the user’s performance and workload. The interaction techniques we compared were full-body movements, facial expression, gesture, and speech input. We found that leaning as an interaction technique enables significantly faster image manipulation at lower subjective workloads than facial expression. Our results pave the way towards efficient, natural, hands-free interaction in a challenging multitasking environment.
{"title":"Lean-Interaction: passive image manipulation in concurrent multitasking","authors":"D. Schott, Benjamin Hatscher, F. Joeres, Mareike Gabele, Steffi Hußlein, C. Hansen","doi":"10.20380/GI2020.40","DOIUrl":"https://doi.org/10.20380/GI2020.40","url":null,"abstract":"Complex bi-manual tasks often benefit from supporting visual information and guidance. Controlling the system that provides this information is a secondary task that forces the user to perform concurrent multitasking, which in turn may affect the main task performance. Interactions based on natural behavior are a promising solution to this challenge. We investigated the performance of these interactions in a handsfree image manipulation task during a primary manual task with an upright stance. Essential tasks were extracted from the example of clinical workflow and turned into an abstract simulation to gain general insights into how different interaction techniques impact the user’s performance and workload. The interaction techniques we compared were full-body movements, facial expression, gesture and speech input. We found that leaning as an interaction technique facilitates significantly faster image manipulation at lower subjective workloads than facial expression. Our results pave the way towards efficient, natural, hands-free interaction in a challenging multitasking environment.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"14 1","pages":"404-412"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73972830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents a system for immersive visualization of non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the new generation of GPUs based on NVIDIA’s Turing architecture to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.
{"title":"Immersive Visualization of the Classical Non-Euclidean Spaces using Real-Time Ray Tracing in VR","authors":"L. Velho, V. Silva, Tiago Novello","doi":"10.20380/GI2020.42","DOIUrl":"https://doi.org/10.20380/GI2020.42","url":null,"abstract":"This paper presents a system for immersive visualization of Non-Euclidean spaces using real-time ray tracing. It exploits the capabilities of the new generation of GPU’s based on the NVIDIA’s Turing architecture in order to develop new methods for intuitive exploration of landscapes featuring non-trivial geometry and topology in virtual reality.","PeriodicalId":93493,"journal":{"name":"Proceedings. Graphics Interface (Conference)","volume":"21 1","pages":"423-430"},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80826441","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}