C. Zeidler, C. Lutteroth, W. Stuerzlinger, Gerald Weber
Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are among the most powerful. However, they are also more complex, and their layouts are prone to problems such as over-constrained specifications and widget overlap. This poses challenges for GUI builder tools, which ideally should address these issues automatically. We present a new GUI builder, the Auckland Layout Editor (ALE), that addresses these challenges by enabling GUI designers to specify constraint-based layouts using simple, mouse-based operations. We give a detailed description of ALE's edit operations, which do not require direct constraint editing. ALE guarantees that all edit operations lead to sound specifications, ensuring solvable and non-overlapping layouts. To achieve that, we present a new algorithm that automatically generates the constraints necessary to keep a layout non-overlapping. Furthermore, we discuss how our innovations can be combined with manual constraint editing in a sound way. Finally, to aid designers in creating layouts with good resize behavior, we propose a novel automatic layout preview. It displays the layout at its minimum size and at an enlarged size, which makes potential resize issues directly visible. All these features permit GUI developers to focus more on the overall UI design.
{"title":"The auckland layout editor: an improved GUI layout specification process","authors":"C. Zeidler, C. Lutteroth, W. Stuerzlinger, Gerald Weber","doi":"10.1145/2501988.2502007","DOIUrl":"https://doi.org/10.1145/2501988.2502007","url":null,"abstract":"Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are among the most powerful. However, they are also more complex and their layouts are prone to problems such as over-constrained specifications and widget overlap. This poses challenges for GUI builder tools, which ideally should address these issues automatically. We present a new GUI builderthe Auckland Layout Editor (ALE)that addresses these challenges by enabling GUI designers to specify constraint-based layouts using simple, mouse-based operations. We give a detailed description of ALE's edit operations, which do not require direct constraint editing. ALE guarantees that all edit operations lead to sound specifications, ensuring solvable and non-overlapping layouts. To achieve that, we present a new algorithm that automatically generates the constraints necessary to keep a layout non-overlapping. Furthermore, we discuss how our innovations can be combined with manual constraint editing in a sound way. Finally, to aid designers in creating layouts with good resize behavior, we propose a novel automatic layout preview. This displays the layout at its minimum and in an enlarged size, which allows visualizing potential resize issues directly. All these features permit GUI developers to focus more on the overall UI design.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130919527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
3D printers enable designers and makers to rapidly produce physical models of future products. Today, these physical prototypes are mostly passive. Our research goal is to enable users to turn models produced on commodity 3D printers into interactive objects with a minimum of required assembly or instrumentation. We present Sauron, an embedded machine vision-based system for sensing human input on physical controls like buttons, sliders, and joysticks. With Sauron, designers attach a single camera with an integrated ring light to a printed prototype. This camera observes the interior portions of input components to determine their state. In many prototypes, input components may be occluded or outside the viewing frustum of a single camera. We introduce algorithms that generate internal geometry and calculate mirror placements to redirect input motion into the visible camera area. To investigate the space of designs that can be built with Sauron, along with its limitations, we built prototype devices, evaluated the suitability of existing models for vision sensing, and performed an informal study with three CAD users. While our approach imposes some constraints on device design, results suggest that it is expressive and accessible enough to enable constructing a useful variety of devices.
{"title":"Sauron: embedded single-camera sensing of printed physical user interfaces","authors":"Valkyrie Savage, Colin Chang, Bjoern Hartmann","doi":"10.1145/2501988.2501992","DOIUrl":"https://doi.org/10.1145/2501988.2501992","url":null,"abstract":"3D printers enable designers and makers to rapidly produce physical models of future products. Today these physical prototypes are mostly passive. Our research goal is to enable users to turn models produced on commodity 3D printers into interactive objects with a minimum of required assembly or instrumentation. We present Sauron, an embedded machine vision-based system for sensing human input on physical controls like buttons, sliders, and joysticks. With Sauron, designers attach a single camera with integrated ring light to a printed prototype. This camera observes the interior portions of input components to determine their state. In many prototypes, input components may be occluded or outside the viewing frustum of a single camera. We introduce algorithms that generate internal geometry and calculate mirror placements to redirect input motion into the visible camera area. To investigate the space of designs that can be built with Sauron along with its limitations, we built prototype devices, evaluated the suitability of existing models for vision sensing, and performed an informal study with three CAD users. While our approach imposes some constraints on device design, results suggest that it is expressive and accessible enough to enable constructing a useful variety of devices.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"23 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131207077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mirage is an effective non-contact technique for inferring the amount and type of body motion, gestures, and activity. The approach passively measures the static electric field of the environment through a sense electrode and leverages the distortion of that field caused by the presence of a human body. The Mirage sensor uses simple analog circuitry and supports ultra-low-power operation. It requires no instrumentation of the user and can be configured as an environmental, mobile, or peripheral-attached sensor. We report on a series of experiments with 10 participants showing robust activity and gesture recognition, as well as promising results for robust location classification and multiple-user differentiation. To further illustrate the utility of our approach, we demonstrate real-time interactive applications, including activity monitoring and two games that allow users to interact with a computer using body motion and gestures.
{"title":"Mirage: exploring interaction modalities using off-body static electric field sensing","authors":"Adiyan Mujibiya, J. Rekimoto","doi":"10.1145/2501988.2502031","DOIUrl":"https://doi.org/10.1145/2501988.2502031","url":null,"abstract":"Mirage proposes an effective non body contact technique to infer the amount and type of body motion, gesture, and activity. This approach involves passive measurement of static electric field of the environment flowing through sense electrode. This sensing method leverages electric field distortion by the presence of an intruder (e.g. human body). Mirage sensor has simple analog circuitry and supports ultra-low power operation. It requires no instrumentation to the user, and can be configured as environmental, mobile, and peripheral-attached sensor. We report on a series of experiments with 10 participants showing robust activity and gesture recognition, as well as promising results for robust location classification and multiple user differentiation. To further illustrate the utility of our approach, we demonstrate real-time interactive applications including activity monitoring, and two games which allow the users to interact with a computer using body motion and gestures.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121457771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Kuat Yessenov, Shubham Tulsiani, A. Menon, Rob Miller, Sumit Gulwani, B. Lampson, A. Kalai
Text processing, tedious and error-prone even for programmers, remains one of the most alluring targets of Programming by Example. An examination of real-world text processing tasks found on help forums reveals that many such tasks, beyond simple string manipulation, involve latent hierarchical structures. We present STEPS, a programming system for processing structured and semi-structured text by example. STEPS users create and manipulate hierarchical structure by example. In a between-subjects user study with fourteen computer scientists, STEPS compared favorably to traditional programming.
{"title":"A colorful approach to text processing by example","authors":"Kuat Yessenov, Shubham Tulsiani, A. Menon, Rob Miller, Sumit Gulwani, B. Lampson, A. Kalai","doi":"10.1145/2501988.2502040","DOIUrl":"https://doi.org/10.1145/2501988.2502040","url":null,"abstract":"Text processing, tedious and error-prone even for programmers, remains one of the most alluring targets of Programming by Example. An examination of real-world text processing tasks found on help forums reveals that many such tasks, beyond simple string manipulation, involve latent hierarchical structures. We present STEPS, a programming system for processing structured and semi-structured text by example. STEPS users create and manipulate hierarchical structure by example. In a between-subject user study on fourteen computer scientists, STEPS compares favorably to traditional programming.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114998544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brian A. Smith, Qi Yin, Steven K. Feiner, S. Nayar
Eye contact plays a crucial role in our everyday social interactions. The ability of a device to reliably detect when a person is looking at it can lead to powerful human-object interfaces. Today, most gaze-based interactive systems rely on gaze tracking technology. Unfortunately, current gaze tracking techniques require active infrared illumination or calibration, or are sensitive to distance and pose. In this work, we propose a different solution: a passive, appearance-based approach for sensing eye contact in an image. By focusing on gaze *locking* rather than gaze tracking, we exploit the special appearance of direct eye gaze, achieving a Matthews correlation coefficient (MCC) of over 0.83 at long distances (up to 18 m) and large pose variations (up to ±30° of head yaw rotation) using a very basic classifier and without calibration. To train our detector, we also created a large publicly available gaze data set: 5,880 images of 56 people over varying gaze directions and head poses. We demonstrate how our method facilitates human-object interaction, user analytics, image filtering, and gaze-triggered photography.
{"title":"Gaze locking: passive eye contact detection for human-object interaction","authors":"Brian A. Smith, Qi Yin, Steven K. Feiner, S. Nayar","doi":"10.1145/2501988.2501994","DOIUrl":"https://doi.org/10.1145/2501988.2501994","url":null,"abstract":"Eye contact plays a crucial role in our everyday social interactions. The ability of a device to reliably detect when a person is looking at it can lead to powerful human-object interfaces. Today, most gaze-based interactive systems rely on gaze tracking technology. Unfortunately, current gaze tracking techniques require active infrared illumination, calibration, or are sensitive to distance and pose. In this work, we propose a different solution-a passive, appearance-based approach for sensing eye contact in an image. By focusing on gaze *locking* rather than gaze tracking, we exploit the special appearance of direct eye gaze, achieving a Matthews correlation coefficient (MCC) of over 0.83 at long distances (up to 18 m) and large pose variations (up to ±30° of head yaw rotation) using a very basic classifier and without calibration. To train our detector, we also created a large publicly available gaze data set: 5,880 images of 56 people over varying gaze directions and head poses. We demonstrate how our method facilitates human-object interaction, user analytics, image filtering, and gaze-triggered photography.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115508985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Steve Rubin, Floraine Berthouzoz, G. Mysore, Wilmot Li, Maneesh Agrawala
Audio stories are an engaging form of communication that combines speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at a much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews, and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.
{"title":"Content-based tools for editing audio stories","authors":"Steve Rubin, Floraine Berthouzoz, G. Mysore, Wilmot Li, Maneesh Agrawala","doi":"10.1145/2501988.2501993","DOIUrl":"https://doi.org/10.1145/2501988.2501993","url":null,"abstract":"Audio stories are an engaging form of communication that combine speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129762618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
{"title":"Session details: Visualization & video","authors":"G. Fitzmaurice","doi":"10.1145/3254701","DOIUrl":"https://doi.org/10.1145/3254701","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128974524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper, we present a novel acoustic touch sensing technique called Touch & Activate. It recognizes a rich set of touch interactions, including grasping, on existing objects by attaching only a vibration speaker and a piezoelectric microphone, paired as a sensor. This provides an easy hardware configuration for prototyping interactive objects with touch input capability. We conducted a controlled experiment to measure the accuracy of our technique and the trade-off between accuracy and the number of training rounds. Per-user recognition accuracies were 99.6% for five touch gestures on a plastic toy (a simple example) and 86.3% for six hand postures (a more complex example). Walk-up-user recognition accuracies for the two applications were 97.8% and 71.2%, respectively. Since our experiment showed promising accuracy for recognizing touch gestures and hand postures, Touch & Activate should be feasible for prototyping interactive objects with touch input capability.
{"title":"Touch & activate: adding interactivity to existing objects using active acoustic sensing","authors":"Makoto Ono, B. Shizuki, J. Tanaka","doi":"10.1145/2501988.2501989","DOIUrl":"https://doi.org/10.1145/2501988.2501989","url":null,"abstract":"In this paper, we present a novel acoustic touch sensing technique called Touch & Activate. It recognizes a rich context of touches including grasp on existing objects by attaching only a vibration speaker and a piezo-electric microphone paired as a sensor. It provides easy hardware configuration for prototyping interactive objects that have touch input capability. We conducted a controlled experiment to measure the accuracy and trade-off between the accuracy and number of training rounds for our technique. From its results, per-user recognition accuracies with five touch gestures for a plastic toy as a simple example and six hand postures for the posture recognition as a complex example were 99.6% and 86.3%, respectively. Walk up user recognition accuracies for the two applications were 97.8% and 71.2%, respectively. Since the results of our experiment showed a promising accuracy for the recognition of touch gestures and hand postures, Touch & Activate should be feasible for prototype interactive objects that have touch input capability.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129264680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In construction work, taking notes about real-world objects such as walls, pipes, and cables is an important task. Capturing small pieces of information about such objects ad hoc, on site, can be challenging when no specialized technology is available. Handwritten or hand-drawn notes on paper are good for textual information like measurements, whereas images are better at capturing the physical state of objects. Without a proper combination of the two, however, the benefit is limited. In this paper we present an interaction system for taking ad hoc notes on real-world objects using a combination of a smartphone and a laser pointer as the input device. Our interface enables the user to annotate objects directly by drawing on them and to store these annotations for later review. The user's deictic gestures are then replayed on a stitched image of the scene. The user's voice input is captured and analyzed to integrate additional information. The user can mark positions and attach hand-taken measurements by pointing at objects and speaking the corresponding voice commands.
{"title":"Capturing on site laser annotations with smartphones to document construction work","authors":"J. Schweitzer, R. Dörner","doi":"10.1145/2501988.2502010","DOIUrl":"https://doi.org/10.1145/2501988.2502010","url":null,"abstract":"In the process of construction work, taking notes of real world objects like walls, pipes, cables and others is an important task. The ad hoc capturing of small information pieces on such objects on site can be challenging when there is no specialized technology available. Handwritten or hand drawn notes on paper are good for textual information like measurements whereas images are better to capture the physical state of objects. Without a proper combination however the benefit is limited. In this paper we present an interaction system for taking ad hoc notes on real world objects by using a combination of a smartphone and a laserpointer as input device. Our interface enables the user to directly annotate objects by drawing on them and to store these annotations for later reviewing. The deictic gestures of the user are then replayed on a stitched image of the scene. The users voice input is captured and analyzed to integrate additional Information. The user can mark positions and place hand taken measurements by pointing on the objects and speaking the corresponding voice commands.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122706095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Masa Ogata, Yuta Sugiura, Yasutoshi Makino, M. Inami, M. Imai
We present a sensing technology and input method that uses skin deformation, estimated through a thin band-type device attached to the body whose appearance seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that lets users know which part of the skin they are touching in order to issue commands. Having found an acceptable area before beginning the input operation, the user can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands that senses three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that it measures the quantitative tangential force applied to the skin of, for example, the forearm or fingers. An infrared (IR) reflective sensor is used, since its durability and low cost make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal forces applied to the skin to be sensed. Input commands are learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.
{"title":"SenSkin: adapting skin as a soft interface","authors":"Masa Ogata, Yuta Sugiura, Yasutoshi Makino, M. Inami, M. Imai","doi":"10.1145/2501988.2502039","DOIUrl":"https://doi.org/10.1145/2501988.2502039","url":null,"abstract":"We present a sensing technology and input method that uses skin deformation estimated through a thin band-type device attached to the human body, the appearance of which seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that enables users to know which part of the skin they are touching in order to issue commands. The user, having found an acceptable area before beginning the input operation, can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands to sense three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that quantitative tangential force applied to the skin, such as that of the forearm or fingers, is measured. An infrared (IR) reflective sensor is used since its durability and inexpensiveness make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal force applied to the skin dimension to be sensed. The input command is learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126741733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}