Through the combining glass
D. M. Plasencia, Florent Berthaut, Abhijit Karnik, S. Subramanian
UIST 2014. DOI: 10.1145/2642918.2647351

Reflective optical combiners such as beam splitters and two-way mirrors are used in AR to overlay digital content on users' hands or bodies. Augmentations are usually unidirectional: either virtual content is reflected onto the user's body (Situated Augmented Reality), or the user's reflection is augmented with digital content (AR mirrors). But many other novel possibilities remain unexplored. For example, users' hands, reflected inside a museum AR cabinet, can allow visitors to interact with the artifacts exhibited, and projecting on users' hands as their reflections cut through the objects can reveal the objects' internals. Augmentations from both sides are blended by the combiner, so they are seen consistently by any number of users, regardless of their location or even the side of the combiner through which they look. This paper explores the potential of optical combiners to merge the spaces in front of and behind them. We present this design space, identify novel augmentation and interaction opportunities, and explore them through three prototypes.
{"title":"Through the combining glass","authors":"D. M. Plasencia, Florent Berthaut, Abhijit Karnik, S. Subramanian","doi":"10.1145/2642918.2647351","DOIUrl":"https://doi.org/10.1145/2642918.2647351","url":null,"abstract":"Reflective optical combiners like beam splitters and two way mirrors are used in AR to overlap digital contents on the users' hands or bodies. Augmentations are usually unidirectional, either reflecting virtual contents on the user's body (Situated Augmented Reality) or augmenting user's reflections with digital contents (AR mirrors). But many other novel possibilities remain unexplored. For example, users' hands, reflected inside a museum AR cabinet, can allow visitors to interact with the artifacts exhibited. Projecting on the user's hands as their reflection cuts through the objects can be used to reveal objects' internals. Augmentations from both sides are blended by the combiner, so they are consistently seen by any number of users, independently of their location or, even, the side of the combiner through which they are looking. This paper explores the potential of optical combiners to merge the space in front and behind them. We present this design space, identify novel augmentations/interaction opportunities and explore the design space using three prototypes.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"1 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"91321899","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
InterState: a language and environment for expressing interface behavior
Stephen Oney, B. Myers, Joel Brandt
UIST 2014. DOI: 10.1145/2642918.2647358

InterState is a new programming language and environment that addresses the challenges of writing and reusing user interface code. InterState represents interactive behaviors clearly and concisely using a combination of novel forms of state machines and constraints. It also introduces new language features that allow programmers to easily modularize and reuse behaviors. InterState uses a new visual notation that allows programmers to better understand and navigate their code. InterState also includes a live editor that immediately updates the running application in response to changes in the editor and vice versa to help programmers understand the state of their program. Finally, InterState can interface with code and widgets written in other languages, for example to create a user interface in InterState that communicates with a database. We evaluated the understandability of InterState's programming primitives in a comparative laboratory study. We found that participants were twice as fast at understanding and modifying GUI components when they were implemented with InterState than when they were implemented in a conventional textual event-callback style. We evaluated InterState's scalability with a series of benchmarks and example applications and found that it can scale to implement complex behaviors involving thousands of objects and constraints.
{"title":"InterState: a language and environment for expressing interface behavior","authors":"Stephen Oney, B. Myers, Joel Brandt","doi":"10.1145/2642918.2647358","DOIUrl":"https://doi.org/10.1145/2642918.2647358","url":null,"abstract":"InterState is a new programming language and environment that addresses the challenges of writing and reusing user interface code. InterState represents interactive behaviors clearly and concisely using a combination of novel forms of state machines and constraints. It also introduces new language features that allow programmers to easily modularize and reuse behaviors. InterState uses a new visual notation that allows programmers to better understand and navigate their code. InterState also includes a live editor that immediately updates the running application in response to changes in the editor and vice versa to help programmers understand the state of their program. Finally, InterState can interface with code and widgets written in other languages, for example to create a user interface in InterState that communicates with a database. We evaluated the understandability of InterState's programming primitives in a comparative laboratory study. We found that participants were twice as fast at understanding and modifying GUI components when they were implemented with InterState than when they were implemented in a conventional textual event-callback style. We evaluated InterState's scalability with a series of benchmarks and example applications and found that it can scale to implement complex behaviors involving thousands of objects and constraints.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"30 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90361332","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GaussStones: shielded magnetic tangibles for multi-token interactions on portable displays
Rong-Hao Liang, Han-Chih Kuo, Liwei Chan, De-Nian Yang, Bing-Yu Chen
UIST 2014. DOI: 10.1145/2642918.2647384

This work presents GaussStones, a system of shielded magnetic tangibles designed to support multi-token interactions on portable displays. Unlike prior work on sensing magnetic tangibles on portable displays, the proposed tangible design applies magnetic shielding using an inexpensive galvanized steel case, which eliminates interference between magnetic tangibles. An analog Hall-sensor grid can recognize the identity of each shielded magnetic unit, since each unit generates a magnetic field with a specific intensity distribution and/or polarity. Combining multiple units as a knob further allows for resolving additional identities and their orientations. These features improve support for applications involving multiple tokens, making prevalent portable displays generic platforms for tangible interaction design.
{"title":"GaussStones: shielded magnetic tangibles for multi-token interactions on portable displays","authors":"Rong-Hao Liang, Han-Chih Kuo, Liwei Chan, De-Nian Yang, Bing-Yu Chen","doi":"10.1145/2642918.2647384","DOIUrl":"https://doi.org/10.1145/2642918.2647384","url":null,"abstract":"This work presents GaussStones, a system of shielded magnetic tangibles design for supporting multi-token interactions on portable displays. Unlike prior works in sensing magnetic tangibles on portable displays, the proposed tangible design applies magnetic shielding by using an inexpensive galvanized steel case, which eliminates interference between magnetic tangibles. An analog Hall-sensor grid can recognize the identity of each shielded magnetic unit since each unit generates a magnetic field with a specific intensity distribution and/or polarization. Combining multiple units as a knob further allows for resolving additional identities and their orientations. Enabling these features improves support for applications involving multiple tokens. Thus, using prevalent portable displays provides generic platforms for tangible interaction design.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"142 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77868705","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Zero-latency tapping: using hover information to predict touch locations and eliminate touchdown latency
Haijun Xia, Ricardo Jota, Benjamin McCanny, Zhe Yu, C. Forlines, Karan Singh, Daniel J. Wigdor
UIST 2014. DOI: 10.1145/2642918.2647348

We propose a method of reducing the perceived latency of touch input by employing a model that predicts touch events before the finger reaches the touch surface. A corpus of 3D finger movement data was collected and used to develop a model that offers predictions of three granularities at different phases of the movement: the initial direction, the final touch location, and the time of touchdown. The model is validated for target distances >= 25.5 cm and demonstrates a mean accuracy of 1.05 cm at 128 ms before the user touches the screen. A preference study of different levels of latency reveals a strong preference for touchdown feedback with imperceptible latency. We also propose a form of 'soft' feedback, as well as other uses of this prediction to improve performance.
{"title":"Zero-latency tapping: using hover information to predict touch locations and eliminate touchdown latency","authors":"Haijun Xia, Ricardo Jota, Benjamin McCanny, Zhe Yu, C. Forlines, Karan Singh, Daniel J. Wigdor","doi":"10.1145/2642918.2647348","DOIUrl":"https://doi.org/10.1145/2642918.2647348","url":null,"abstract":"A method of reducing the perceived latency of touch input by employing a model to predict touch events before the finger reaches the touch surface is proposed. A corpus of 3D finger movement data was collected, and used to develop a model capable of three granularities at different phases of movement: initial direction, final touch location, time of touchdown. The model is validated for target distances >= 25.5cm, and demonstrated to have a mean accuracy of 1.05cm 128ms before the user touches the screen. Preference study of different levels of latency reveals a strong preference for unperceived latency touchdown feedback. A form of 'soft' feedback, as well as other uses for this prediction to improve performance, is proposed.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"44 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76283344","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designer's augmented reality toolkit, ten years later: implications for new media authoring tools
Maribeth Gandy Coleman, B. MacIntyre
UIST 2014. DOI: 10.1145/2642918.2647369

The Designer's Augmented Reality Toolkit (DART) was an augmented reality (AR) and mixed reality (MR) authoring tool targeted at new media designers. It was released in 2003 and was heavily used by a diverse population of creators for the next several years [28]. Ten years later, we approached a group of users to collect reflections on their use of DART, the artifacts they produced, their subsequent AR/MR authoring, their thoughts on the challenges of AR/MR authoring in general, and the state of modern tools. In this paper we present the findings from in-depth interviews with these DART developers and other AR experts. Their reflections provide insights into how to successfully engage non-technologists with new media, the challenges such authors face, the unique requirements of new media authoring, and how modern tools still fail to meet these authors' needs, highlighting where additional research is needed.
{"title":"Designer's augmented reality toolkit, ten years later: implications for new media authoring tools","authors":"Maribeth Gandy Coleman, B. MacIntyre","doi":"10.1145/2642918.2647369","DOIUrl":"https://doi.org/10.1145/2642918.2647369","url":null,"abstract":"The Designer's Augmented Reality Toolkit (DART) was an augmented (AR) and mixed reality (MR) authoring tool targeted at new media designers. It was released in 2003 and was heavily used by a diverse population of creators for the next several years [28]. Ten years later, we approached a group of users to collect reflections on their use of DART, the artifacts they produced, their subsequent AR/MR authoring, their thoughts on the challenges of AR/MR authoring in general, and the state of modern tools. In this paper we present the findings from in-depth interviews with these DART developers and other AR experts. Their reflections provide insights on how to successfully engage non-technologists with new media and the challenges they face during authoring, the unique requirements of new media authoring, and how modern tools are still not meeting the needs of this type of author, highlighting where additional research is needed.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"39 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73195559","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Microtask programming: building software with a crowd
Thomas D. Latoza, W. B. Towne, C. Adriano, A. Hoek
UIST 2014. DOI: 10.1145/2642918.2647349

Microtask crowdsourcing organizes complex work into workflows, decomposing large tasks into small, relatively independent microtasks. Applied to software development, this model might increase participation in open source software development by lowering the barriers to contribution, and dramatically decrease time to market by increasing the parallelism in development work. To explore this idea, we have developed an approach to decomposing programming work into microtasks. Work is coordinated by tracking changes to a graph of artifacts, generating appropriate microtasks, and propagating change notifications to artifacts with dependencies. We have implemented our approach in CrowdCode, a cloud IDE for crowd development. To evaluate the feasibility of microtask programming, we performed a small study and found that a small crowd of 12 workers was able to successfully write 480 lines of code and 61 unit tests in 14.25 person-hours.
{"title":"Microtask programming: building software with a crowd","authors":"Thomas D. Latoza, W. B. Towne, C. Adriano, A. Hoek","doi":"10.1145/2642918.2647349","DOIUrl":"https://doi.org/10.1145/2642918.2647349","url":null,"abstract":"Microtask crowdsourcing organizes complex work into workflows, decomposing large tasks into small, relatively independent microtasks. Applied to software development, this model might increase participation in open source software development by lowering the barriers to contribu-tion and dramatically decrease time to market by increasing the parallelism in development work. To explore this idea, we have developed an approach to decomposing programming work into microtasks. Work is coordinated through tracking changes to a graph of artifacts, generating appropriate microtasks and propagating change notifications to artifacts with dependencies. We have implemented our approach in CrowdCode, a cloud IDE for crowd development. To evaluate the feasibility of microtask programming, we performed a small study and found that a small crowd of 12 workers was able to successfully write 480 lines of code and 61 unit tests in 14.25 person-hours of time.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"74298657","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Tracs: transparency-control for see-through displays
David Lindlbauer, Toru Aoki, Robert Walter, Yuji Uema, Anita Höchtl, M. Haller, M. Inami, Jörg Müller
UIST 2014. DOI: 10.1145/2642918.2647350

We present Tracs, a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration between spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but they introduce new issues of personal privacy, screen-content privacy, and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays with a transparency-control layer, a backlight layer, and a polarization-adjustment layer in between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control transparency globally or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work quickly and easily, and gives them full control of the transparent regions on their display.
{"title":"Tracs: transparency-control for see-through displays","authors":"David Lindlbauer, Toru Aoki, Robert Walter, Yuji Uema, Anita Höchtl, M. Haller, M. Inami, Jörg Müller","doi":"10.1145/2642918.2647350","DOIUrl":"https://doi.org/10.1145/2642918.2647350","url":null,"abstract":"We present Tracs, a dual-sided see-through display system with controllable transparency. Traditional displays are a constant visual and communication barrier, hindering fast and efficient collaboration of spatially close or facing co-workers. Transparent displays could potentially remove these barriers, but introduce new issues of personal privacy, screen content privacy and visual interference. We therefore propose a solution with controllable transparency to overcome these problems. Tracs consists of two see-through displays, with a transparency-control layer, a backlight layer and a polarization adjustment layer in-between. The transparency-control layer is built as a grid of individually addressable transparency-controlled patches, allowing users to control the transparency overall or just locally. Additionally, the locally switchable backlight layer improves the contrast of LCD screen content. Tracs allows users to switch between personal and collaborative work fast and easily and gives them full control of transparent regions on their display.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84711144","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Swipeboard: a text entry technique for ultra-small interfaces that supports novice to expert transitions
Xiang 'Anthony' Chen, Tovi Grossman, G. Fitzmaurice
UIST 2014. DOI: 10.1145/2642918.2647354

Ultra-small smart devices, such as smart watches, have become increasingly popular in recent years. Most of these devices rely on touch as the primary input modality, which makes tasks such as text entry increasingly difficult as the devices continue to shrink. In the sole pursuit of entry speed, the ultimate solution is a shorthand technique (e.g., Morse code) that sequences tokens of input (e.g., key, tap, swipe) into unique representations of each character. However, learning such techniques is hard, as it often relies on rote memorization. Our technique, Swipeboard, leverages users' spatial memory of the QWERTY keyboard to help them learn, and eventually master, a shorthand, eyes-free text entry method designed for ultra-small interfaces. Characters are entered with two swipes: the first specifies the region where the character is located, and the second specifies the character within that region. Our study, conducted on a reduced word set, showed that with less than two hours of training, Swipeboard users achieved 19.58 words per minute (WPM), 15% faster than an existing baseline technique.
{"title":"Swipeboard: a text entry technique for ultra-small interfaces that supports novice to expert transitions","authors":"Xiang 'Anthony' Chen, Tovi Grossman, G. Fitzmaurice","doi":"10.1145/2642918.2647354","DOIUrl":"https://doi.org/10.1145/2642918.2647354","url":null,"abstract":"Ultra-small smart devices, such as smart watches, have become increasingly popular in recent years. Most of these devices rely on touch as the primary input modality, which makes tasks such as text entry increasingly difficult as the devices continue to shrink. In the sole pursuit of entry speed, the ultimate solution is a shorthand technique (e.g., Morse code) that sequences tokens of input (e.g., key, tap, swipe) into unique representations of each character. However, learning such techniques is hard, as it often resorts to rote memory. Our technique, Swipeboard, leverages our spatial memory of a QWERTY keyboard to learn, and eventually master a shorthand, eyes-free text entry method designed for ultra-small interfaces. Characters are entered with two swipes; the first swipe specifies the region where the character is located, and the second swipe specifies the character within that region. Our study showed that with less than two hours' training, Tested on a reduced word set, Swipeboard users achieved 19.58 words per minute (WPM), 15% faster than an existing baseline technique.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"33 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80721007","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
ShrinkyCircuits: sketching, shrinking, and formgiving for electronic circuits
Joanne Lo, E. Paulos
UIST 2014. DOI: 10.1145/2642918.2647421

In this paper we describe the development of ShrinkyCircuits, a novel electronic prototyping technique that captures the flexibility of sketching and leverages the properties of a common everyday plastic polymer to enable low-cost, miniature, planar and curved, multi-layer circuit designs in minutes. ShrinkyCircuits take advantage of inexpensive prestressed polymer film that shrinks to its original size when exposed to heat. This enables improved electrical characteristics through sintering of the conductive electrical layer, partial self-assembly of the circuit and components, and mechanically robust custom shapes, including curves and non-planar form factors. We demonstrate the range and adaptability of ShrinkyCircuits designs, from simple hand-drawn circuits with through-hole components to complex multilayer printed circuit boards (PCBs) with curved and irregularly shaped electronic layouts and surface-mount components. Our approach enables users to create highly customized circuit boards with dense circuit layouts while avoiding messy chemical etching, expensive board-milling machines, or the delays of outside PCB production houses.
{"title":"ShrinkyCircuits: sketching, shrinking, and formgiving for electronic circuits","authors":"Joanne Lo, E. Paulos","doi":"10.1145/2642918.2647421","DOIUrl":"https://doi.org/10.1145/2642918.2647421","url":null,"abstract":"In this paper we describe the development of ShrinkyCircuits, a novel electronic prototyping technique that captures the flexibility of sketching and leverages properties of a common everyday plastic polymer to enable low-cost, miniature, planar, and curved, multi-layer circuit designs in minutes. ShrinkyCircuits take advantage of inexpensive prestressed polymer film that shrinks to its original size when exposed to heat. This enables improved electrical characteristics though sintering of the conductive electrical layer, partial self-assembly of the circuit and components, and mechanically robust custom shapes Including curves and non-planar form factors. We demonstrate the range and adaptability of ShrinkyCircuits designs from simple hand drawn circuits with through-hole components to complex multilayer, printed circuit boards (PCB), with curved and irregular shaped electronic layouts and surface mount components. Our approach enables users to create extremely customized circuit boards with dense circuit layouts while avoiding messy chemical etching, expensive board milling machines, or time consuming delays in using outside PCB production houses.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"60 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78817862","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Sensory triptych: here, near, out there
E. Paulos, Chris Myers, Rundong Tian, Paxton Paulos
UIST 2014. DOI: 10.1145/2642918.2647410

Sensory Triptych is a set of exploratory, interactive sensors designed for children that invite "new ways of seeing" our world from the perspectives of the here (the earth, air, and water around us), the near (things just out of sight), and the out there (orbiting satellites and space junk), using familiar and novel interfaces, affordances, and narratives. We present a series of novel physical design prototypes that reframe sensing technologies for children, fostering early adoption of technology for exploring, understanding, communicating, sharing, and changing our world. Finally, we discuss how such designs expand the potential opportunities and landscapes for future interactive systems and experiences within the UIST community.
{"title":"Sensory triptych: here, near, out there","authors":"E. Paulos, Chris Myers, Rundong Tian, Paxton Paulos","doi":"10.1145/2642918.2647410","DOIUrl":"https://doi.org/10.1145/2642918.2647410","url":null,"abstract":"Sensory Triptych is a set of exploratory, interactive sensors designed for children that invite \"new ways of seeing\" our world from the perspective of the here (the earth, air, and water around us), near (things just out of sight), and out there (orbiting satellites and space junk) using familiar and novel interfaces, affordances, and narratives. We present a series of novel physical design prototypes that reframe sensing technologies for children that foster an early adoption of technology usage for exploring, understanding, communicating, sharing, and changing our world. Finally, we discuss how such designs expand the potential opportunities and landscapes for our future interactive systems and experiences within the UIST community.","PeriodicalId":20543,"journal":{"name":"Proceedings of the 27th annual ACM symposium on User interface software and technology","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90108320","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}