Many people would find the Web easier to use if content were a little bigger, even those who already find the Web possible to use now. This paper introduces the idea of opportunistic accessibility improvement, in which improvements intended to make a web page easier to access, such as magnification, are automatically applied to the extent that they can be without causing negative side effects. We explore this idea with oppaccess.js, an easily deployed system for magnifying web pages that iteratively increases magnification until it notices negative side effects, such as horizontal scrolling or overlapping text. We validate this approach by magnifying existing web pages 1.6x on average without introducing negative side effects. We believe this concept applies generally across a wide range of accessibility improvements designed to help people with diverse abilities.
{"title":"Making the web easier to see with opportunistic accessibility improvement","authors":"Jeffrey P. Bigham","doi":"10.1145/2642918.2647357","DOIUrl":"https://doi.org/10.1145/2642918.2647357","url":null,"abstract":"Many people would find the Web easier to use if content was a little bigger, even those who already find the Web possible to use now. This paper introduces the idea of opportunistic accessibility improvement in which improvements intended to make a web page easier to access, such as magnification, are automatically applied to the extent that they can be without causing negative side effects. We explore this idea with oppaccess.js, an easily-deployed system for magnifying web pages that iteratively increases magnification until it notices negative side effects, such as horizontal scrolling or overlapping text. We validate this approach by magnifying existing web pages 1.6x on average without introducing negative side effects. We believe this concept applies generally across a wide range of accessibility improvements designed to help people with diverse abilities.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"65 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84474493","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We contribute a novel method for detecting finger taps on the different sides of a smartphone, using the built-in motion sensors of the device. In particular, we discuss new features and algorithms that infer side taps by probabilistically combining estimates of the tap location and the hand posture, i.e., which hand is holding the device. Based on a dataset collected from 9 participants, our method achieved 97.3% precision and 98.4% recall in detecting tap events against ambient motion. For detecting single-tap locations, our method outperformed an approach that uses inferred hand postures deterministically by 3% and an approach that does not use hand posture inference by 17%. For inferring the locations of two consecutive side taps from the same direction, our method outperformed the two baseline approaches by 6% and 17%, respectively. We discuss our insights into designing the detection algorithm and the implications for side-tap-based interaction.
{"title":"Detecting tapping motion on the side of mobile devices by probabilistically combining hand postures","authors":"William McGrath, Yang Li","doi":"10.1145/2642918.2647363","DOIUrl":"https://doi.org/10.1145/2642918.2647363","url":null,"abstract":"We contribute a novel method for detecting finger taps on the different sides of a smartphone, using the built-in motion sensors of the device. In particular, we discuss new features and algorithms that infer side taps by probabilistically combining estimates of tap location and the hand pose--the hand holding the device. Based on a dataset collected from 9 participants, our method achieved 97.3% precision and 98.4% recall on tap event detection against ambient motion. For detecting single-tap locations, our method outperformed an approach that uses inferred hand postures deterministically by 3% and an approach that does not use hand posture inference by 17%. For inferring the location of two consecutive side taps from the same direction, our method outperformed the two baseline approaches by 6% and 17% respectively. We discuss our insights into designing the detection algorithm and the implication on side tap-based interaction behaviors.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"14 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82412953","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A tag system is proposed that offers a practical approach to ubiquitous computing. It provides small, low-power tags that are easy to distribute; it will not require a special reader device in the future, enabling use anytime, anywhere; and it has a wide reading range in both angle and distance, which extends the design space of tag-based applications. Each tag consists of a liquid crystal (LC) element and a retroreflector, and it sends its ID by switching the LC. A depth-sensing camera that emits infrared (IR) light is used as the tag reader; we assume such cameras will become part of users' everyday devices, such as smartphones. Experiments were conducted to confirm the system's potential, and a regular IR camera was also tested for comparison. The results show that the tag system has a wide readable range in terms of both distance (up to 8 m) and viewing angle. Several applications were also developed to explore the design space. Finally, limitations of the current setup and possible improvements are discussed.
{"title":"Tag system with low-powered tag and depth sensing camera","authors":"H. Manabe, Wataru Yamada, H. Inamura","doi":"10.1145/2642918.2647404","DOIUrl":"https://doi.org/10.1145/2642918.2647404","url":null,"abstract":"A tag system is proposed that offers a practical approach to ubiquitous computing. It provides small and low-power tags that are easy to distribute; does not need a special device to read the tags (in the future), thus enabling their use anytime, anywhere; and has a wide reading range in angle and distance that extends the design space of tag-based applications. The tag consists of a kind of liquid crystal (LC) and a retroreflector, and it sends its ID by switching the LC. A depth sensing camera that emits infrared (IR) is used as the tag reader; we assume that it will be part of the user's everyday devices, such as a smartphone. Experiments were conducted to confirm its potential, and a regular IR camera was also tested for comparison. The results show that the tag system has a wide readable range in terms of both distance (up to 8m) and viewing angle offset. Several applications were also developed to explore the design space. Finally, limitations of the current setup and possible improvements are discussed.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"32 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73325076","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With an unprecedented number of learners watching educational videos on online platforms such as MOOCs and YouTube, there is an opportunity to incorporate data generated from their interactions into the design of novel video interaction techniques. Interaction data has the potential not only to help instructors improve their videos, but also to enrich the learning experience of the people watching them. This paper explores the design space of data-driven interaction techniques for educational video navigation. We introduce a set of techniques that augment existing video interface widgets, including: a 2D video timeline with an embedded visualization of collective navigation traces; dynamic and non-linear timeline scrubbing; data-enhanced transcript search and keyword summaries; automatic display of relevant still frames next to the video; and a visual summary representing points of high learner activity. To evaluate the feasibility of the techniques, we ran a laboratory user study with simulated learning tasks. Participants rated watching lecture videos with interaction data as efficient and useful for completing the tasks. However, we found no significant differences in task performance, suggesting that interaction data may not always align with moment-by-moment information needs during the tasks.
{"title":"Data-driven interaction techniques for improving navigation of educational videos","authors":"Juho Kim, Philip J. Guo, Carrie J. Cai, Shang-Wen Li, Krzysztof Z Gajos, Rob Miller","doi":"10.1145/2642918.2647389","DOIUrl":"https://doi.org/10.1145/2642918.2647389","url":null,"abstract":"With an unprecedented scale of learners watching educational videos on online platforms such as MOOCs and YouTube, there is an opportunity to incorporate data generated from their interactions into the design of novel video interaction techniques. Interaction data has the potential to help not only instructors to improve their videos, but also to enrich the learning experience of educational video watchers. This paper explores the design space of data-driven interaction techniques for educational video navigation. We introduce a set of techniques that augment existing video interface widgets, including: a 2D video timeline with an embedded visualization of collective navigation traces; dynamic and non-linear timeline scrubbing; data-enhanced transcript search and keyword summary; automatic display of relevant still frames next to the video; and a visual summary representing points with high learner activity. To evaluate the feasibility of the techniques, we ran a laboratory user study with simulated learning tasks. Participants rated watching lecture videos with interaction data to be efficient and useful in completing the tasks. However, no significant differences were found in task performance, suggesting that interaction data may not always align with moment-by-moment information needs during the tasks.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"81 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82046809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection-mapping experiences that dynamically adapt content to any room. Users can touch, shoot, stomp, dodge, and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The units auto-calibrate, self-localize, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally, we showcase four experience prototypes that demonstrate the novel interactive experiences RoomAlive makes possible and discuss the design challenges of adapting any game to any room.
{"title":"RoomAlive: magical experiences enabled by scalable, adaptive projector-camera units","authors":"Brett R. Jones, Rajinder Sodhi, Michael Murdock, Ravish Mehra, Hrvoje Benko, Andrew D. Wilson, E. Ofek, B. MacIntyre, N. Raghuvanshi, Lior Shapira","doi":"10.1145/2642918.2647383","DOIUrl":"https://doi.org/10.1145/2642918.2647383","url":null,"abstract":"RoomAlive is a proof-of-concept prototype that transforms any room into an immersive, augmented entertainment experience. Our system enables new interactive projection mapping experiences that dynamically adapts content to any room. Users can touch, shoot, stomp, dodge and steer projected content that seamlessly co-exists with their existing physical environment. The basic building blocks of RoomAlive are projector-depth camera units, which can be combined through a scalable, distributed framework. The projector-depth camera units are individually auto-calibrating, self-localizing, and create a unified model of the room with no user intervention. We investigate the design space of gaming experiences that are possible with RoomAlive and explore methods for dynamically mapping content based on room layout and user position. Finally we showcase four experience prototypes that demonstrate the novel interactive experiences that are possible with RoomAlive and discuss the design challenges of adapting any game to any room.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"63 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85570474","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present LightRing, a wearable sensor in a ring form factor that senses the 2D location of a fingertip on any surface, independent of orientation or material. The device consists of an infrared proximity sensor for measuring finger flexion and a 1-axis gyroscope for measuring finger rotation. Notably, LightRing tracks subtle fingertip movements from the finger base without requiring instrumentation of other body parts or the environment. This keeps normal hand function intact and allows for a socially acceptable appearance. We evaluate LightRing in a 2D pointing experiment in two scenarios: on a desk while sitting and on the leg while standing. Our results indicate that the device has the potential to enable a variety of rich mobile input scenarios.
{"title":"LightRing: always-available 2D input on any surface","authors":"W. Kienzle, K. Hinckley","doi":"10.1145/2642918.2647376","DOIUrl":"https://doi.org/10.1145/2642918.2647376","url":null,"abstract":"We present LightRing, a wearable sensor in a ring form factor that senses the 2d location of a fingertip on any surface, independent of orientation or material. The device consists of an infrared proximity sensor for measuring finger flexion and a 1-axis gyroscope for measuring finger rotation. Notably, LightRing tracks subtle fingertip movements from the finger base without requiring instrumentation of other body parts or the environment. This keeps the normal hand function intact and allows for a socially acceptable appearance. We evaluate LightRing in a 2d pointing experiment in two scenarios: on a desk while sitting down, and on the leg while standing. Our results indicate that the device has potential to enable a variety of rich mobile input scenarios.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"10 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78435060","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Parameter tweaking is one of the fundamental tasks in editing visual digital content, such as correcting photo color or controlling blendshape facial expressions. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. We present a new technique for analyzing such high-dimensional parameter spaces to obtain a distribution of human preference. Our method uses crowdsourcing to gather pairwise comparisons between various parameter sets. As a result of this analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two interfaces for exploration: Smart Suggestion, which suggests preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We created four applications with different design parameter spaces and found that the system could facilitate the user's design exploration.
{"title":"Crowd-powered parameter analysis for visual design exploration","authors":"Yuki Koyama, Daisuke Sakamoto, T. Igarashi","doi":"10.1145/2642918.2647386","DOIUrl":"https://doi.org/10.1145/2642918.2647386","url":null,"abstract":"Parameter tweaking is one of the fundamental tasks in the editing of visual digital contents, such as correcting photo color or executing blendshape facial expression control. A problem with parameter tweaking is that it often requires much time and effort to explore a high-dimensional parameter space. We present a new technique to analyze such high-dimensional parameter space to obtain a distribution of human preference. Our method uses crowdsourcing to gather pairwise comparisons between various parameter sets. As a result of analysis, the user obtains a goodness function that computes the goodness value of a given parameter set. This goodness function enables two interfaces for exploration: Smart Suggestion, which provides suggestions of preferable parameter sets, and VisOpt Slider, which interactively visualizes the distribution of goodness values on sliders and gently optimizes slider values while the user is editing. We created four applications with different design parameter spaces. As a result, the system could facilitate the user's design exploration.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"17 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"86929533","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
We present "Fast Multi-Touch" (FMT), an extremely high frame rate and low-latency multi-touch sensor based on a novel projected capacitive architecture that employs simultaneous orthogonal signals. The sensor has a frame rate of 4000 Hz and a touch-to-data output latency of only 40 microseconds, providing unprecedented responsiveness. FMT is demonstrated with a high-speed DLP projector yielding a touch-to-light latency of 110 microseconds.
{"title":"High rate, low-latency multi-touch sensing with simultaneous orthogonal multiplexing","authors":"D. Leigh, C. Forlines, Ricardo Jota, Steven Sanders, Daniel J. Wigdor","doi":"10.1145/2642918.2647353","DOIUrl":"https://doi.org/10.1145/2642918.2647353","url":null,"abstract":"We present \"Fast Multi-Touch\" (FMT), an extremely high frame rate and low-latency multi-touch sensor based on a novel projected capacitive architecture that employs simultaneous orthogonal signals. The sensor has a frame rate of 4000 Hz and a touch-to-data output latency of only 40 microseconds, providing unprecedented responsiveness. FMT is demonstrated with a high-speed DLP projector yielding a touch-to-light latency of 110 microseconds.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"26 11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82698551","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Building on recent work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical-world accessibility, we present the first 'smart' system, Tohme, which combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme detects curb ramps about as well as a manual labeling approach alone (F-measure: 84% vs. 86% baseline) at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning-based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem, along with steps forward.
{"title":"Tohme: detecting curb ramps in google street view using crowdsourcing, computer vision, and machine learning","authors":"Kotaro Hara, J. Sun, Robert Moore, D. Jacobs, Jon E. Froehlich","doi":"10.1145/2642918.2647403","DOIUrl":"https://doi.org/10.1145/2642918.2647403","url":null,"abstract":"Building on recent prior work that combines Google Street View (GSV) and crowdsourcing to remotely collect information on physical world accessibility, we present the first 'smart' system, Tohme, that combines machine learning, computer vision (CV), and custom crowd interfaces to find curb ramps remotely in GSV scenes. Tohme consists of two workflows, a human labeling pipeline and a CV pipeline with human verification, which are scheduled dynamically based on predicted performance. Using 1,086 GSV scenes (street intersections) from four North American cities and data from 403 crowd workers, we show that Tohme performs similarly in detecting curb ramps compared to a manual labeling approach alone (F- measure: 84% vs. 86% baseline) but at a 13% reduction in time cost. Our work contributes the first CV-based curb ramp detection system, a custom machine-learning based workflow controller, a validation of GSV as a viable curb ramp data source, and a detailed examination of why curb ramp detection is a hard problem along with steps forward.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89124134","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
PrintScreen is an enabling technology for digital fabrication of customized flexible displays using thin-film electroluminescence (TFEL). It enables inexpensive and rapid fabrication of highly customized displays in low volume, in a simple lab environment, a print shop, or even at home. We show how to print ultra-thin (120 µm) segmented and passive-matrix displays in greyscale or multi-color on a variety of deformable and rigid substrate materials, including PET film, office paper, leather, metal, stone, and wood. The displays can have custom, unconventional 2D shapes and can be bent, rolled, and folded to create 3D shapes. We contribute a systematic overview of graphical display primitives for customized displays and show how to integrate them with static print and printed electronics. Furthermore, we contribute a sensing framework that leverages the display itself for touch sensing. To demonstrate the wide applicability of PrintScreen, we present application examples from ubiquitous, mobile, and wearable computing.
{"title":"PrintScreen: fabricating highly customizable thin-film touch-displays","authors":"Simon Olberding, Michael Wessely, Jürgen Steimle","doi":"10.1145/2642918.2647413","DOIUrl":"https://doi.org/10.1145/2642918.2647413","url":null,"abstract":"PrintScreen is an enabling technology for digital fabrication of customized flexible displays using thin-film electroluminescence (TFEL). It enables inexpensive and rapid fabrication of highly customized displays in low volume, in a simple lab environment, print shop or even at home. We show how to print ultra-thin (120 µm) segmented and passive matrix displays in greyscale or multi-color on a variety of deformable and rigid substrate materials, including PET film, office paper, leather, metal, stone, and wood. The displays can have custom, unconventional 2D shapes and can be bent, rolled and folded to create 3D shapes. We contribute a systematic overview of graphical display primitives for customized displays and show how to integrate them with static print and printed electronics. Furthermore, we contribute a sensing framework, which leverages the display itself for touch sensing. To demonstrate the wide applicability of PrintScreen, we present application examples from ubiquitous, mobile and wearable computing.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"113 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77277066","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}