Frederic Kerber, Michael Mauderer, A. Krüger
This paper presents a model allowing inferences of perceivable screen content in relation to position and orientation of mobile or wearable devices with respect to their user. The model is based on findings from vision science and allows prediction of a value of effective resolution that can be perceived by a user. It considers distance and angle between the device and the eyes of the observer as well as the resulting retinal eccentricity when the device is not directly focused but observed in the periphery. To validate our model, we conducted a study with 12 participants. Based on our results, we outline implications for the design of mobile applications that are able to adapt themselves to facilitate information throughput and usability.
{"title":"Modeling Perceived Screen Resolution Based on Position and Orientation of Wrist-Worn Devices","authors":"Frederic Kerber, Michael Mauderer, A. Krüger","doi":"10.1145/3173574.3174184","DOIUrl":"https://doi.org/10.1145/3173574.3174184","url":null,"abstract":"This paper presents a model allowing inferences of perceivable screen content in relation to position and orientation of mobile or wearable devices with respect to their user. The model is based on findings from vision science and allows prediction of a value of effective resolution that can be perceived by a user. It considers distance and angle between the device and the eyes of the observer as well as the resulting retinal eccentricity when the device is not directly focused but observed in the periphery. To validate our model, we conducted a study with 12 participants. Based on our results, we outline implications for the design of mobile applications that are able to adapt themselves to facilitate information throughput and usability.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"41 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81758266","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
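The kind of acuity model the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's fitted model: the function names and the constants (roughly 30 cycles/degree foveal acuity, E2 = 2° for the acuity falloff) are our assumptions, drawn from common vision-science conventions.

```python
import math

# Illustrative constants (assumptions, not the paper's fitted values).
FOVEAL_ACUITY_CPD = 30.0  # resolvable spatial frequency at the fovea, cycles/degree
E2_DEG = 2.0              # eccentricity at which acuity halves (E2 formulation)

def visual_angle_deg(size_m: float, distance_m: float) -> float:
    """Visual angle subtended by an object of a given size at a given distance."""
    return math.degrees(2.0 * math.atan(size_m / (2.0 * distance_m)))

def resolvable_pixels(width_m: float, distance_m: float,
                      eccentricity_deg: float, tilt_deg: float = 0.0) -> float:
    """Upper bound on horizontal pixels a viewer can resolve across a screen,
    given viewing distance, retinal eccentricity, and screen tilt."""
    # Tilting the screen shrinks its projected width toward the eye.
    projected_m = width_m * math.cos(math.radians(tilt_deg))
    angle_deg = visual_angle_deg(projected_m, distance_m)
    # Acuity falls off with eccentricity: f(E) = f0 * E2 / (E2 + E).
    acuity_cpd = FOVEAL_ACUITY_CPD * E2_DEG / (E2_DEG + eccentricity_deg)
    # Nyquist: two pixels per resolvable cycle.
    return 2.0 * acuity_cpd * angle_deg
```

For a 4 cm watch face at 30 cm, the model predicts far fewer resolvable pixels at 20° eccentricity than under direct fixation, matching the intuition the abstract appeals to.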
A. Kariryaa, Isaac L. Johnson, Johannes Schöning, Brent J. Hecht
Many applications of geotagged content are predicated on the concept of localness (e.g., local restaurant recommendation, mining social media for local perspectives on an issue). However, definitions of who is a "local" in a given area are typically informal and ad-hoc and, as a result, approaches for localness assessment that have been used in the past have not been formally validated. In this paper, we begin the process of addressing these gaps in the literature. Specifically, we (1) formalize definitions of "local" using themes identified in a 30-paper literature review, (2) develop the first ground truth localness dataset consisting of 132 Twitter users and 58,945 place-tagged tweets, and (3) use this dataset to evaluate existing localness assessment approaches. Our results provide important methodological guidance to the large body of research and practice that depends on the concept of localness and suggest means by which localness assessment can be improved.
{"title":"Defining and Predicting the Localness of Volunteered Geographic Information using Ground Truth Data","authors":"A. Kariryaa, Isaac L. Johnson, Johannes Schöning, Brent J. Hecht","doi":"10.1145/3173574.3173839","DOIUrl":"https://doi.org/10.1145/3173574.3173839","url":null,"abstract":"Many applications of geotagged content are predicated on the concept of localness (e.g., local restaurant recommendation, mining social media for local perspectives on an issue). However, definitions of who is a \"local\" in a given area are typically informal and ad-hoc and, as a result, approaches for localness assessment that have been used in the past have not been formally validated. In this paper, we begin the process of addressing these gaps in the literature. Specifically, we (1) formalize definitions of \"local\" using themes identified in a 30-paper literature review, (2) develop the first ground truth localness dataset consisting of 132 Twitter users and 58,945 place-tagged tweets, and (3) use this dataset to evaluate existing localness assessment approaches. Our results provide important methodological guidance to the large body of research and practice that depends on the concept of localness and suggest means by which localness assessment can be improved.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81863524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
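One family of localness assessment approaches the paper evaluates can be illustrated with the classic "plurality" heuristic from prior volunteered geographic information work: a user is considered local to the region holding the largest share of their geotagged posts. A hedged sketch, with function names and the threshold as our assumptions:

```python
from collections import Counter

def plurality_home_region(post_regions, min_share=0.5):
    """Return the region containing the plurality of a user's geotagged posts,
    or None if no region reaches `min_share` of all posts."""
    if not post_regions:
        return None
    counts = Counter(post_regions)
    region, n = counts.most_common(1)[0]
    return region if n / len(post_regions) >= min_share else None

def is_local(post_regions, region, min_share=0.5):
    """Binary localness judgment for a user with respect to one region."""
    return plurality_home_region(post_regions, min_share) == region
```

Ground-truth datasets like the one the authors build are exactly what is needed to check whether such heuristics agree with users' self-reported localness.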
Joon Hyub Lee, Sang-Gyun An, Yongkwan Kim, Seok-Hyung Bae
In augmented and virtual reality (AR and VR), there may be many 3D planar windows with 2D texts, images, and videos on them. However, managing the position, orientation, and scale of such a window in an immersive 3D workspace can be difficult. Projective Windows strategically uses the absolute and apparent sizes of the window at various stages of the interaction to enable the grabbing, moving, scaling, and releasing of the window in one continuous hand gesture. With it, the user can quickly and intuitively manage and interact with windows in space without any controller hardware or dedicated widget. Through an evaluation, we demonstrate that our technique is performant and preferable, and that projective geometry plays an important role in the design of spatial user interfaces.
{"title":"Projective Windows: Bringing Windows in Space to the Fingertip","authors":"Joon Hyub Lee, Sang-Gyun An, Yongkwan Kim, Seok-Hyung Bae","doi":"10.1145/3173574.3173792","DOIUrl":"https://doi.org/10.1145/3173574.3173792","url":null,"abstract":"In augmented and virtual reality (AR and VR), there may be many 3D planar windows with 2D texts, images, and videos on them. However, managing the position, orientation, and scale of such a window in an immersive 3D workspace can be difficult. Projective Windows strategically uses the absolute and apparent sizes of the window at various stages of the interaction to enable the grabbing, moving, scaling, and releasing of the window in one continuous hand gesture. With it, the user can quickly and intuitively manage and interact with windows in space without any controller hardware or dedicated widget. Through an evaluation, we demonstrate that our technique is performant and preferable, and that projective geometry plays an important role in the design of spatial user interfaces.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82348442","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
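The projective-geometry idea behind the technique can be sketched in a few lines (names are ours, not the paper's API): when a grabbed window is pushed away from the eye, its absolute size is rescaled in proportion to distance so that its apparent (angular) size stays constant.

```python
import math

def apparent_width_deg(width: float, dist: float) -> float:
    """Angular size of a window of given width seen from a given distance."""
    return math.degrees(2.0 * math.atan(width / (2.0 * dist)))

def rescale_for_constant_apparent_size(width: float,
                                       old_dist: float,
                                       new_dist: float) -> float:
    """Absolute width the window needs at new_dist to subtend the same
    visual angle it had at old_dist (scale proportional to distance)."""
    return width * (new_dist / old_dist)
```

Because scale tracks distance linearly, one continuous push/pull gesture changes the window's absolute size while leaving what the user sees unchanged, which is the basis for the one-gesture grab, move, scale, and release interaction.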
Dominic DiFranzo, S. Taylor, F. Kazerooni, Olivia D. Wherry, Natalya N. Bazarova
Although bystander intervention can mitigate the negative effects of cyberbullying, few bystanders ever attempt to intervene. In this study, we explored the effects of interface design on bystander intervention using a simulated custom-made social media platform. Participants took part in a three-day, in-situ experiment, in which they were exposed to several cyberbullying incidents. Depending on the experimental condition, they received different information about the audience size and viewing notifications intended to increase a sense of personal responsibility in bystanders. Results indicated that bystanders were more likely to intervene indirectly than directly, and information about the audience size and viewership increased the likelihood of flagging cyberbullying posts through serial mediation of public surveillance, accountability, and personal responsibility. The study has implications for understanding the bystander effect in cyberbullying, and how to develop design solutions to encourage bystander intervention in social media.
{"title":"Upstanding by Design: Bystander Intervention in Cyberbullying","authors":"Dominic DiFranzo, S. Taylor, F. Kazerooni, Olivia D. Wherry, Natalya N. Bazarova","doi":"10.1145/3173574.3173785","DOIUrl":"https://doi.org/10.1145/3173574.3173785","url":null,"abstract":"Although bystander intervention can mitigate the negative effects of cyberbullying, few bystanders ever attempt to intervene. In this study, we explored the effects of interface design on bystander intervention using a simulated custom-made social media platform. Participants took part in a three-day, in-situ experiment, in which they were exposed to several cyberbullying incidents. Depending on the experimental condition, they received different information about the audience size and viewing notifications intended to increase a sense of personal responsibility in bystanders. Results indicated that bystanders were more likely to intervene indirectly than directly, and information about the audience size and viewership increased the likelihood of flagging cyberbullying posts through serial mediation of public surveillance, accountability, and personal responsibility. The study has implications for understanding bystander effect in cyberbullying, and how to develop design solutions to encourage bystander intervention in social media.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"86 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76218885","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Ion, Róbert Kovács, Oliver S. Schneider, Pedro Lopes, Patrick Baudisch
We present metamaterial textures---3D printed surface geometries that can perform a controlled transition between two or more textures. Metamaterial textures are integrated into 3D printed objects and allow designing how the object interacts with the environment and the user's tactile sense. Inspired by foldable paper sheets ("origami") and surface wrinkling, our 3D printed metamaterial textures consist of a grid of cells that fold when compressed by an external global force. Unlike origami, however, metamaterial textures offer full control over the transformation, such as in-between states and sequence of actuation. This allows for integrating multiple textures and makes them useful, e.g., for exploring parameters in the rapid prototyping of textures. Metamaterial textures are also robust enough to allow the resulting objects to be grasped, pushed, or stood on. This allows us to make objects, such as a shoe sole that transforms from flat to treaded, a textured door handle that provides tactile feedback to visually impaired users, and a configurable bicycle grip. We present an editor that assists users in creating metamaterial textures interactively by arranging cells, applying forces, and previewing their deformation.
{"title":"Metamaterial Textures","authors":"A. Ion, Róbert Kovács, Oliver S. Schneider, Pedro Lopes, Patrick Baudisch","doi":"10.1145/3173574.3173910","DOIUrl":"https://doi.org/10.1145/3173574.3173910","url":null,"abstract":"We present metamaterial textures---3D printed surface geometries that can perform a controlled transition between two or more textures. Metamaterial textures are integrated into 3D printed objects and allow designing how the object interacts with the environment and the user's tactile sense. Inspired by foldable paper sheets (\"origami\") and surface wrinkling, our 3D printed metamaterial textures consist of a grid of cells that fold when compressed by an external global force. Unlike origami, however, metamaterial textures offer full control over the transformation, such as in-between states and sequence of actuation. This allows for integrating multiple textures and makes them useful, e.g., for exploring parameters in the rapid prototyping of textures. Metamaterial textures are also robust enough to allow the resulting objects to be grasped, pushed, or stood on. This allows us to make objects, such as a shoe sole that transforms from flat to treaded, a textured door handle that provides tactile feedback to visually impaired users, and a configurable bicycle grip. We present an editor that assists users in creating metamaterial textures interactively by arranging cells, applying forces, and previewing their deformation.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"13 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87539707","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yubo Kou, B. Nardi
The Internet plays an important role in the formation of political opinions by supporting citizens in discovering diverse political information and opinions. However, the echo chamber effect has become of increasing concern, referring to the tendency for people to encounter opinions and information similar to their own online. It remains poorly understood how ordinary citizens use the Internet in the formation of political opinions. To answer this question, we conducted an interview study with 32 Chinese citizens. We found that participants used complex strategies to coordinate personal networks and technologies in specific ways to better understand political events. To analyze this phenomenon, we draw on Bødker and Andersen's model of complex mediation which describes how multiple mediators including people and artifacts work together to mediate an activity. We discuss how complex mediation supported participants in informing their political opinions. We derive design implications for supporting people to form political opinions.
{"title":"Complex Mediation in the Formation of Political Opinions","authors":"Yubo Kou, B. Nardi","doi":"10.1145/3173574.3174210","DOIUrl":"https://doi.org/10.1145/3173574.3174210","url":null,"abstract":"The Internet plays an important role in the formation of political opinions by supporting citizens in discovering diverse political information and opinions. However, the echo chamber effect has become of increasing concern, referring to the tendency for people to encounter opinions and information similar to their own online. It remains poorly understood how ordinary citizens use the Internet in the formation of political opinions. To answer this question, we conducted an interview study with 32 Chinese citizens. We found that participants used complex strategies to coordinate personal networks and technologies in specific ways to better understand political events. To analyze this phenomenon, we draw on Bødker and Andersen's model of complex mediation which describes how multiple mediators including people and artifacts work together to mediate an activity. We discuss how complex mediation supported participants in informing their political opinions. We derive design implications for supporting people to form political opinions.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88337084","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
André Dahlinger, Felix Wortmann, Benjamin Ryder, Bernhard Gahr
About 17% of worldwide CO2 emissions can be ascribed to road transportation. Using information systems (IS)-enabled feedback has been shown to be very effective in promoting a less fuel-consuming driving style. Today, in-car IS that provide feedback on driving behavior are in the midst of a fundamental change. Increasing digitalization of in-car IS enables virtually any kind of feedback. Still, we see a gap in the empirical evidence on how to leverage this potential, raising questions on future HCI-based feedback design. To address this knowledge gap, we designed an eco-driving feedback IS and, building upon construal level theory, hypothesize that abstract feedback is more effective in reducing fuel consumption than concrete feedback. Deployed in a large field experiment with 56 participants covering over 297,000 km, we provide the first empirical evidence that supports this hypothesis. Despite its limitations, this research may have general implications for the design of real-time feedback.
{"title":"The Impact of Abstract vs. Concrete Feedback Design on Behavior Insights from a Large Eco-Driving Field Experiment","authors":"André Dahlinger, Felix Wortmann, Benjamin Ryder, Bernhard Gahr","doi":"10.1145/3173574.3173953","DOIUrl":"https://doi.org/10.1145/3173574.3173953","url":null,"abstract":"About 17% of the worldwide CO2-emissions can be ascribed to road transportation. Using information systems (IS)-enabled feedback has shown to be very efficient in promoting a less fuel-consuming driving style. Today, in-car IS that provide feedback on driving behavior are in the midst of a fundamental change. Increasing digitalization of in-car IS enables virtually any kind of feedback. Still, we see a gap in the empirical evidence on how to leverage this potential, raising questions on future HCI-based feedback design. To address this knowledge gap, we designed an eco-driving feedback IS and, building upon construal level theory, hypothesize that abstract feedback is more effective in reducing fuel consumption than concrete feedback. Deployed in a large field experiment with 56 participants covering over 297,000km, we provide first empirical evidence that supports this hypothesis. Despite its limitations, this research may have general implications for the design of real-time feedback.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"259 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82959338","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Birk, R. Mandryk
Digital self-improvement programs (e.g., interventions, training programs, self-help apps) are widely accessible, but cannot employ the same degree of external regulation as programs delivered in controlled environments. As a result, they suffer from high attrition -- even the best programs won't work if people don't use them. We propose that volitional engagement -- facilitated through avatar customization -- can help combat attrition. We asked 250 participants to engage daily for 3 weeks in a one-minute breathing exercise for anxiety reduction, using either a generic avatar or one that they customized. Customizing an avatar resulted in significantly less attrition and more sustained engagement as measured through login counts. The problem of attrition affects self-improvement programs across a range of domains; we provide a subtle, versatile, and broadly applicable solution.
{"title":"Combating Attrition in Digital Self-Improvement Programs using Avatar Customization","authors":"M. Birk, R. Mandryk","doi":"10.1145/3173574.3174234","DOIUrl":"https://doi.org/10.1145/3173574.3174234","url":null,"abstract":"Digital self-improvement programs (e.g., interventions, training programs, self-help apps) are widely accessible, but can not employ the same degree of external regulation as programs delivered in controlled environments. As a result, they suffer from high attrition -- even the best programs won't work if people don't use them. We propose that volitional engagement -- facilitated through avatar customization -- can help combat attrition. We asked 250 participants to engage daily for 3 weeks in a one-minute breathing exercise for anxiety reduction, using either a generic avatar or one that they customized. Customizing an avatar resulted in significantly less attrition and more sustained engagement as measured through login counts. The problem of attrition affects self-improvement programs across a range of do-mains; we provide a subtle, versatile, and broadly-applicable solution.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86781581","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Aske Mottelson, Jarrod Knibbe, K. Hornbæk
We introduce the concept of Veritaps: a communication layer to help users identify truths and lies in mobile input. Existing lie detection research typically uses features not suitable for the breadth of mobile interaction. We explore the feasibility of detecting lies across all mobile touch interaction using sensor data from commodity smartphones. We report on three studies in which we collect discrete, truth-labelled mobile input using swipes and taps. The studies demonstrate the potential of using mobile interaction as a truth estimator by employing features such as touch pressure and the inter-tap details of number entry, for example. In our final study, we report an F1-score of .98 for classifying truths and .57 for lies. Finally, we sketch three potential future scenarios of using lie detection in mobile applications: as a security measure during online log-in, a trust layer during online sale negotiations, and a tool for exploring self-deception.
{"title":"Veritaps: Truth Estimation from Mobile Interaction","authors":"Aske Mottelson, Jarrod Knibbe, K. Hornbæk","doi":"10.1145/3173574.3174135","DOIUrl":"https://doi.org/10.1145/3173574.3174135","url":null,"abstract":"We introduce the concept of Veritaps: a communication layer to help users identify truths and lies in mobile input. Existing lie detection research typically uses features not suitable for the breadth of mobile interaction. We explore the feasibility of detecting lies across all mobile touch interaction using sensor data from commodity smartphones. We report on three studies in which we collect discrete, truth-labelled mobile input using swipes and taps. The studies demonstrate the potential of using mobile interaction as a truth estimator by employing features such as touch pressure and the inter-tap details of number entry, for example. In our final study, we report an F1-score of .98 for classifying truths and .57 for lies. Finally we sketch three potential future scenarios of using lie detection in mobile applications; as a security measure during online log-in, a trust layer during online sale negotiations, and a tool for exploring self-deception.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"19 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90153730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
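The feature classes the abstract names — touch pressure and inter-tap timing during number entry — can be extracted as below. This is a hedged toy example, not the authors' pipeline: the function name and feature set are our assumptions, and a real system would feed such features to a trained classifier rather than inspect them directly.

```python
def tap_features(taps):
    """Summarize one entry sequence.
    taps: list of (timestamp_seconds, pressure) tuples, in order."""
    gaps = [b[0] - a[0] for a, b in zip(taps, taps[1:])]  # inter-tap intervals
    mean_gap = sum(gaps) / len(gaps)
    gap_var = sum((g - mean_gap) ** 2 for g in gaps) / len(gaps)
    mean_pressure = sum(p for _, p in taps) / len(taps)
    return {
        "mean_gap": mean_gap,          # average hesitation between taps
        "gap_var": gap_var,            # rhythm irregularity
        "mean_pressure": mean_pressure # average touch pressure
    }
```

Feature vectors of this shape, labelled with ground-truth truthfulness as in the paper's three studies, are what a truth/lie classifier would be trained on.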
Alex Fridman, B. Reimer, Bruce Mehler, W. Freeman
Cognitive load has been shown, over hundreds of validated studies, to be an important variable for understanding human performance. However, establishing practical, non-contact approaches for automated estimation of cognitive load under real-world conditions is far from a solved problem. Toward the goal of designing such a system, we propose two novel vision-based methods for cognitive load estimation, and evaluate them on a large-scale dataset collected under real-world driving conditions. Cognitive load is defined by which of 3 levels of a validated reference task the observed subject was performing. On this 3-class problem, our best proposed method of using 3D convolutional neural networks achieves 86.1% accuracy at predicting task-induced cognitive load in a sample of 92 subjects from video alone. This work uses the driving context as a training and evaluation dataset, but the trained network is not constrained to the driving environment as it requires no calibration and makes no assumptions about the subject's visual appearance, activity, head pose, scale, and perspective.
{"title":"Cognitive Load Estimation in the Wild","authors":"Alex Fridman, B. Reimer, Bruce Mehler, W. Freeman","doi":"10.1145/3173574.3174226","DOIUrl":"https://doi.org/10.1145/3173574.3174226","url":null,"abstract":"Cognitive load has been shown, over hundreds of validated studies, to be an important variable for understanding human performance. However, establishing practical, non-contact approaches for automated estimation of cognitive load under real-world conditions is far from a solved problem. Toward the goal of designing such a system, we propose two novel vision-based methods for cognitive load estimation, and evaluate them on a large-scale dataset collected under real-world driving conditions. Cognitive load is defined by which of 3 levels of a validated reference task the observed subject was performing. On this 3-class problem, our best proposed method of using 3D convolutional neural networks achieves 86.1% accuracy at predicting task-induced cognitive load in a sample of 92 subjects from video alone. This work uses the driving context as a training and evaluation dataset, but the trained network is not constrained to the driving environment as it requires no calibration and makes no assumptions about the subject's visual appearance, activity, head pose, scale, and perspective.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"29 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88855475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
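The core operation behind the 3D CNN approach is convolution over space and time, so motion cues in the video (e.g., eye and head movement) are captured directly rather than frame by frame. A minimal sketch of a single valid-mode 3D convolution — illustrative only; the paper's architecture, layer sizes, and names are not reproduced here:

```python
import numpy as np

def conv3d_valid(clip, kernel):
    """Valid-mode 3D convolution (no flipping, i.e. cross-correlation,
    as in CNN layers) of a (frames, height, width) clip with a
    (kf, kh, kw) kernel."""
    f, h, w = clip.shape
    kf, kh, kw = kernel.shape
    out = np.empty((f - kf + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):          # slide over time
        for j in range(out.shape[1]):      # slide over rows
            for k in range(out.shape[2]):  # slide over columns
                out[i, j, k] = np.sum(clip[i:i+kf, j:j+kh, k:k+kw] * kernel)
    return out
```

A deep-learning framework would stack many such kernels with nonlinearities and pooling; the loop form above just makes the space-time windowing explicit.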