ActiTouch
Yang Zhang, W. Kienzle, Yanjun Ma, Shiu S. Ng, Hrvoje Benko, Chris Harrison
DOI: https://doi.org/10.1145/3332165.3347869
Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free-space interactions. In response, we developed ActiTouch, a new electrical method that enables precise on-skin touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. Our system requires no cumbersome instrumentation of the fingers or hands, requiring only a single wristband (e.g., smartwatch) and sensors integrated into an AR/VR headset. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin.
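The division of labor described here — computer vision supplies *where* the fingertip is, the electrical channel supplies *when* contact occurs — can be sketched roughly as below. This is a minimal illustration only: the threshold, field names, and frame layout are assumptions, not ActiTouch's actual sensing pipeline.

```python
# Minimal sketch of the sensor-fusion idea: vision gives fingertip position,
# the RF channel gives a touch/no-touch signal. All thresholds and field
# names are illustrative assumptions, not the paper's real pipeline.

def segment_touches(frames, rf_threshold=0.6):
    """frames: iterable of dicts with 'fingertip_xy' (from CV tracking)
    and 'rf_amplitude' (received signal strength at the wristband)."""
    touches, current = [], None
    for t, f in enumerate(frames):
        touching = f["rf_amplitude"] >= rf_threshold  # skin contact closes the RF path
        if touching and current is None:
            current = {"start": t, "path": [f["fingertip_xy"]]}
        elif touching:
            current["path"].append(f["fingertip_xy"])
        elif current is not None:
            current["end"] = t
            touches.append(current)
            current = None
    return touches

if __name__ == "__main__":
    demo = [{"fingertip_xy": (i, 0), "rf_amplitude": 0.8 if 3 <= i <= 6 else 0.1}
            for i in range(10)]
    print(segment_touches(demo))  # one touch, with contact during frames 3-6
```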
{"title":"ActiTouch","authors":"Yang Zhang, W. Kienzle, Yanjun Ma, Shiu S. Ng, Hrvoje Benko, Chris Harrison","doi":"10.1145/3332165.3347869","DOIUrl":"https://doi.org/10.1145/3332165.3347869","url":null,"abstract":"Contemporary AR/VR systems use in-air gestures or handheld controllers for interactivity. This overlooks the skin as a convenient surface for tactile, touch-driven interactions, which are generally more accurate and comfortable than free space interactions. In response, we developed ActiTouch, a new electrical method that enables precise on-skin touch segmentation by using the body as an RF waveguide. We combine this method with computer vision, enabling a system with both high tracking precision and robust touch detection. Our system requires no cumbersome instrumentation of the fingers or hands, requiring only a single wristband (e.g., smartwatch) and sensors integrated into an AR/VR headset. We quantify the accuracy of our approach through a user study and demonstrate how it can enable touchscreen-like interactions on the skin. Author","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"75 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122589746","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
FaceWidgets: Exploring Tangible Interaction on Face with Head-Mounted Displays
Wen-Jie Tseng, Li-Yang Wang, Liwei Chan
DOI: https://doi.org/10.1145/3332165.3347946
We present FaceWidgets, a device integrated with the backside of a head-mounted display (HMD) that enables tangible interactions using physical controls. To allow for near range-to-eye interactions, our first study suggested displaying the virtual widgets at 20 cm from the eye positions, which is 9 cm from the HMD backside. We propose two novel interactions, widget canvas and palm-facing gesture, that can help users avoid double vision and allow them to access the interface as needed. Our second study showed that displaying a hand reference improved the performance of FaceWidgets interactions. We developed two applications of FaceWidgets, a fixed-layout 360 video player and a contextual input for smart home control. Finally, we compared four hand visualizations across the two applications in an exploratory study. Participants considered the transparent hand the most suitable and responded positively to our system.
{"title":"FaceWidgets: Exploring Tangible Interaction on Face with Head-Mounted Displays","authors":"Wen-Jie Tseng, Li-Yang Wang, Liwei Chan","doi":"10.1145/3332165.3347946","DOIUrl":"https://doi.org/10.1145/3332165.3347946","url":null,"abstract":"We present FaceWidgets, a device integrated with the backside of a head-mounted display (HMD) that enables tangible interactions using physical controls. To allow for near range-to-eye interactions, our first study suggested displaying the virtual widgets at 20 cm from the eye positions, which is 9 cm from the HMD backside. We propose two novel interactions, widget canvas and palm-facing gesture, that can help users avoid double vision and allow them to access the interface as needed. Our second study showed that displaying a hand reference improved performance of face widgets interactions. We developed two applications of FaceWidgets, a fixed-layout 360 video player and a contextual input for smart home control. Finally, we compared four hand visualizations against the two applications in an exploratory study. Participants considered the transparent hand as the most suitable and responded positively to our system.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"17 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124768069","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality
Yuanzhi Cao, Tianyi Wang, Xun Qian, P. S. Rao, M. Wadhawan, Ke Huo, K. Ramani
DOI: https://doi.org/10.1145/3332165.3347902
We present GhostAR, a time-space editor for authoring and acting out Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR) for spatially editing actions and programming robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes the user's authoring as a demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model that takes real-time captured motion as input, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize in-situ authoring and rapid iteration of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.
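To make the DTW-based mapping concrete, here is a textbook dynamic time warping sketch: compute an alignment cost between the live motion and each authored demonstration, then pick the best match. Treating motions as lists of feature vectors, and the action names used below, are illustrative assumptions rather than GhostAR's implementation.

```python
# Textbook DTW distance plus a nearest-demonstration lookup. This is a generic
# sketch of the alignment primitive, not GhostAR's actual collaboration model.
import math

def dtw_distance(seq_a, seq_b):
    n, m = len(seq_a), len(seq_b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = math.dist(seq_a[i - 1], seq_b[j - 1])  # local distance between frames
            cost[i][j] = d + min(cost[i - 1][j],        # insertion
                                 cost[i][j - 1],        # deletion
                                 cost[i - 1][j - 1])    # match
    return cost[n][m]

def match_authored_action(live_motion, authored_actions):
    """Return the name of the authored demonstration that best aligns with the live motion."""
    return min(authored_actions, key=lambda name: dtw_distance(live_motion, authored_actions[name]))

if __name__ == "__main__":
    authored = {"wave": [(0, 0), (1, 1), (0, 2)], "push": [(0, 0), (2, 0), (4, 0)]}
    print(match_authored_action([(0, 0), (1.9, 0.1), (3.8, 0)], authored))  # -> "push"
```

The matched action would then index into the previously authored robot response; that lookup is omitted here.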
{"title":"GhostAR: A Time-space Editor for Embodied Authoring of Human-Robot Collaborative Task with Augmented Reality","authors":"Yuanzhi Cao, Tianyi Wang, Xun Qian, P. S. Rao, M. Wadhawan, Ke Huo, K. Ramani","doi":"10.1145/3332165.3347902","DOIUrl":"https://doi.org/10.1145/3332165.3347902","url":null,"abstract":"We present GhostAR, a time-space editor for authoring and acting Human-Robot-Collaborative (HRC) tasks in-situ. Our system adopts an embodied authoring approach in Augmented Reality (AR), for spatially editing the actions and programming the robots through demonstrative role-playing. We propose a novel HRC workflow that externalizes user's authoring as demonstrative and editable AR ghost, allowing for spatially situated visual referencing, realistic animated simulation, and collaborative action guidance. We develop a dynamic time warping (DTW) based collaboration model which takes the real-time captured motion as inputs, maps it to the previously authored human actions, and outputs the corresponding robot actions to achieve adaptive collaboration. We emphasize an in-situ authoring and rapid iterations of joint plans without an offline training process. Further, we demonstrate and evaluate the effectiveness of our workflow through HRC use cases and a three-session user study.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133934280","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Supporting Elder Connectedness through Cognitively Sustainable Design Interactions with the Memory Music Box
Rébecca Kleinberger, Alexandra Rieger, Janelle Sands, Janet M. Baker
DOI: https://doi.org/10.1145/3332165.3347877
Isolation is one of the largest contributors to lack of wellbeing, increased anxiety, and loneliness in older adults. In collaboration with elders in living facilities, we designed the Memory Music Box, a low-threshold platform to increase connectedness. The HCI community has contributed notable research in support of elders through monitoring, tracking, and memory augmentation. Despite advances in the Information and Communication Technologies (ICT) field in providing new opportunities for connection, challenges in accessibility widen the gap between elders and their loved ones. We approach this challenge by embedding innovative applications in a familiar form factor, performing design evaluations with our key target group to incorporate multi-iteration learnings. These findings culminate in a novel design that helps elders cross technology and communication barriers. Based on these findings, we discuss how future inclusive technologies for older adults can balance ease of use, subtlety, and elements of Cognitively Sustainable Design.
{"title":"Supporting Elder Connectedness through Cognitively Sustainable Design Interactions with the Memory Music Box","authors":"Rébecca Kleinberger, Alexandra Rieger, Janelle Sands, Janet M. Baker","doi":"10.1145/3332165.3347877","DOIUrl":"https://doi.org/10.1145/3332165.3347877","url":null,"abstract":"Isolation is one of the largest contributors to a lack of wellbeing, increased anxiety and loneliness in older adults. In collaboration with elders in living facilities, we designed the Memory Music Box; a low-threshold platform to increase connectedness. The HCI community has contributed notable research in support of elders through monitoring, tracking and memory augmentation. Despite the Information and Communication Technologies field (ICT) advances in providing new opportunities for connection, challenges in accessibility increase the gap between elders and their loved ones. We approach this challenge by embedding a familiar form factor with innovative applications, performing design evaluations with our key target group to incorporate multi-iteration learnings. These findings culminate in a novel design that facilitates elders in crossing technology and communication barriers. Based on these findings, we discuss how future inclusive technologies for the older adults' can balance ease of use, subtlety and elements of Cognitively Sustainable Design.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"31 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134422240","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Virtual Conferences
C. Lopes
DOI: https://doi.org/10.1145/3332165.3348236
For the past 40 years, research communities have embraced a culture that relies heavily on physical meetings of people from around the world: we present our most important work at conferences, we meet our peers at conferences, and we even make life-long friends at conferences. At the same time, a broad scientific consensus has emerged warning that human emissions of greenhouse gases are warming the earth. For many of us, travel to conferences may be a substantial or even dominant part of our individual contribution to climate change. A single round-trip flight from Paris to New Orleans emits the equivalent of about 2.5 tons of carbon dioxide (CO2e) per passenger, which is a significant fraction of the total yearly emissions for an average resident of the US or Europe. Moreover, these emissions have no near-term technological fix, since jet fuel is difficult to replace with renewable energy sources. In this talk, I first want to raise awareness of the conundrum we are in by relying so heavily on air travel for our work. I will present some possible solutions, ranging from small, incremental changes to radical ones. The talk focuses on one of the radical alternatives: virtual conferences. The technology for them is almost here and, for some time, I have been part of one community that organizes an annual conference in a virtual environment. Virtual conferences present many interesting challenges, some of them technological in nature, others that go beyond technology. Creating truly immersive conference experiences that make us feel "there" requires attention to personal and social experiences at physical conferences. Those experiences need to be recreated from the ground up in virtual spaces. But in that process, they can also be rethought to become experiences not possible in real life.
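The ~2.5 t CO2e figure is consistent with a simple back-of-envelope calculation. The distance and the per-passenger-kilometre emission factor below are rough assumptions; published factors vary considerably with aircraft type, seating class, and the radiative-forcing uplift applied.

```python
# Back-of-envelope check of the ~2.5 t CO2e round-trip figure.
# Both numbers are approximate assumptions, not values from the talk.
one_way_km = 7_800            # rough Paris -> New Orleans great-circle distance
factor_kg_per_pkm = 0.16      # assumed long-haul economy CO2e factor, incl. uplift
round_trip_kg = 2 * one_way_km * factor_kg_per_pkm
print(f"~{round_trip_kg / 1000:.1f} t CO2e per passenger")  # ~2.5 t
```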
{"title":"Virtual Conferences","authors":"C. Lopes","doi":"10.1145/3332165.3348236","DOIUrl":"https://doi.org/10.1145/3332165.3348236","url":null,"abstract":"For the past 40 years, research communities have embraced a culture that relies heavily on physical meetings of people from around the world: we present our most important work in conferences, we meet our peers in conferences, and we even make life-long friends in conferences. Also at the same time, a broad scientific consensus has emerged that warns that human emissions of greenhouse gases are warming the earth. For many of us, travel to conferences may be a substantial or even dominant part of our individual contribution to climate change. A single round-trip flight from Paris to New Orleans emits the equivalent of about 2.5 tons of carbon dioxide (CO2e) per passenger, which is a significant fraction of the total yearly emissions for an average resident of the US or Europe. Moreover, these emissions have no near-term technological fix, since jet fuel is difficult to replace with renewable energy sources. In this talk, I want to first raise awareness of the conundrum we are in by relying so heavily in air travel for our work. I will present some of the possible solutions that go from adopting small, incremental changes to radical ones. The talk focuses one of the radical alternatives: virtual conferences. The technology for them is almost here and, for some time, I have been part of one community that organizes an annual conference in a virtual environment. Virtual conferences present many interesting challenges, some of them technological in nature, others that go beyond technology. Creating truly immersive conference experiences that make us feel \"there\" requires attention to personal and social experiences at physical conferences. Those experiences need to be recreated from the ground up in virtual spaces. But in that process, they can also be rethought to become experiences not possible in real life.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134479156","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LeviProps
Rafael Morales, A. Marzo, Sriram Subramanian, Diego Martínez
DOI: https://doi.org/10.1145/3332165.3347882
LeviProps are tangible structures used to create interactive mid-air experiences. They are composed of an acoustically transparent, lightweight piece of fabric and attached beads that act as levitated anchors. This combination enables real-time 6 Degrees-of-Freedom control of levitated structures which are larger and more diverse than those possible with previous acoustic manipulation techniques. LeviProps can be used as free-form interactive elements and as projection surfaces. We developed an authoring tool to support the creation of LeviProps. Our tool considers the outline of the prop and the user's constraints to compute the optimum locations for the anchors (i.e., maximizing trapping forces), increasing prop stability and maximum size. The tool produces a final LeviProp design which can be fabricated following a simple procedure. This paper explains and evaluates our approach and showcases example applications, such as interactive storytelling, games and mid-air displays.
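The anchor-placement step can be pictured as a constrained search: choose a few bead positions on the fabric that score best under a trapping-force objective while respecting spacing constraints. The sketch below is a greatly simplified stand-in — the scoring function is a made-up proxy, whereas the authoring tool optimizes actual acoustic trapping forces for the given prop outline.

```python
# Simplified stand-in for anchor placement: pick k positions inside the prop
# outline that maximize a placeholder "trap strength" score while keeping the
# beads apart. The score is an invented proxy, not an acoustic force model.
from itertools import combinations
import math

def trap_strength(p, centre=(0.5, 0.5)):
    # Placeholder assumption: traps are strongest near the levitator's centre.
    return 1.0 - math.dist(p, centre)

def place_anchors(candidates, k, min_spacing=0.2):
    best, best_score = None, -math.inf
    for combo in combinations(candidates, k):
        if any(math.dist(a, b) < min_spacing for a, b in combinations(combo, 2)):
            continue  # respect a minimum spacing between beads
        score = sum(trap_strength(p) for p in combo)
        if score > best_score:
            best, best_score = combo, score
    return best

if __name__ == "__main__":
    grid = [(x / 4, y / 4) for x in range(5) for y in range(5)]  # candidate points on the fabric
    print(place_anchors(grid, k=3))
```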
{"title":"LeviProps","authors":"Rafael Morales, A. Marzo, Sriram Subramanian, Diego Martínez","doi":"10.1145/3332165.3347882","DOIUrl":"https://doi.org/10.1145/3332165.3347882","url":null,"abstract":"LeviProps are tangible structures used to create interactive mid-air experiences. They are composed of an acoustically- transparent lightweight piece of fabric and attached beads that act as levitated anchors. This combination enables real- time 6 Degrees-of-Freedom control of levitated structures which are larger and more diverse than those possible with previous acoustic manipulation techniques. LeviProps can be used as free-form interactive elements and as projection surfaces. We developed an authoring tool to support the creation of LeviProps. Our tool considers the outline of the prop and the user constraints to compute the optimum locations for the anchors (i.e. maximizing trapping forces), increasing prop stability and maximum size. The tool produces a final LeviProp design which can be fabricated following a simple procedure. This paper explains and evaluates our approach and showcases example applications, such as interactive storytelling, games and mid-air displays.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127841591","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Context-Aware Online Adaptation of Mixed Reality Interfaces
David Lindlbauer, A. Feit, Otmar Hilliges
DOI: https://doi.org/10.1145/3332165.3347945
We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization, which can be solved efficiently in real time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary-task interactions by 36%.
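A toy version of the "which applications, at what level of detail" decision can be written as a small combinatorial optimization: pick one detail level per application to maximize utility without exceeding a cognitive-load budget. The utilities, load costs, and budget values below are invented for illustration, and the paper's rule-based placement component is omitted entirely.

```python
# Toy combinatorial adaptation: one level of detail per app (0=hidden,
# 1=minimal, 2=full), maximizing utility under a cognitive-load budget.
# All numbers are invented; this is not the paper's actual formulation.
from itertools import product

APPS = {
    #  app        (utility, load) per level of detail: hidden, minimal, full
    "navigation": [(0, 0.0), (4, 0.2), (6, 0.5)],
    "messaging":  [(0, 0.0), (2, 0.2), (5, 0.6)],
    "music":      [(0, 0.0), (1, 0.1), (2, 0.3)],
}

def adapt(load_budget):
    names = list(APPS)
    best, best_utility = None, -1
    for levels in product(range(3), repeat=len(names)):  # exhaustive over all assignments
        utility = sum(APPS[n][lvl][0] for n, lvl in zip(names, levels))
        load = sum(APPS[n][lvl][1] for n, lvl in zip(names, levels))
        if load <= load_budget and utility > best_utility:
            best, best_utility = dict(zip(names, levels)), utility
    return best

if __name__ == "__main__":
    print(adapt(load_budget=0.5))  # tighter budget when cognitive load is high
    print(adapt(load_budget=1.0))  # more headroom when the user is idle
```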
{"title":"Context-Aware Online Adaptation of Mixed Reality Interfaces","authors":"David Lindlbauer, A. Feit, Otmar Hilliges","doi":"10.1145/3332165.3347945","DOIUrl":"https://doi.org/10.1145/3332165.3347945","url":null,"abstract":"We present an optimization-based approach for Mixed Reality (MR) systems to automatically control when and where applications are shown, and how much information they display. Currently, content creators design applications, and users then manually adjust which applications are visible and how much information they show. This choice has to be adjusted every time users switch context, i.e., whenever they switch their task or environment. Since context switches happen many times a day, we believe that MR interfaces require automation to alleviate this problem. We propose a real-time approach to automate this process based on users' current cognitive load and knowledge about their task and environment. Our system adapts which applications are displayed, how much information they show, and where they are placed. We formulate this problem as a mix of rule-based decision making and combinatorial optimization which can be solved efficiently in real-time. We present a set of proof-of-concept applications showing that our approach is applicable in a wide range of scenarios. Finally, we show in a dual-task evaluation that our approach decreased secondary tasks interactions by 36%.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"45 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124554949","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight
Sebastian Marwecki, Andrew D. Wilson, E. Ofek, Mar González-Franco, Christian Holz
DOI: https://doi.org/10.1145/3332165.3347919
Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view: (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low-fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and found that while gaze data indeed supports obfuscating changes inside the field of view, changes are rendered unnoticeable when gaze is used in combination with common masking techniques.
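The core gating idea — only commit a staged change while the target sits well outside the user's foveal region, or while some other masking event covers it — can be sketched as a simple angular test. The eccentricity threshold, vector format, and masking flag below are illustrative assumptions, not Mise-Unseen's attention model.

```python
# Sketch of gaze-gated change injection: apply the change only when the target
# is far from the gaze direction or a masking event is available. The 20-degree
# threshold and the boolean masking flag are assumptions for illustration.
import math

def angle_deg(v1, v2):
    dot = sum(a * b for a, b in zip(v1, v2))
    norm = math.sqrt(sum(a * a for a in v1)) * math.sqrt(sum(b * b for b in v2))
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def can_apply_change(gaze_dir, dir_to_target, eccentricity_deg=20.0, masking_event=False):
    """gaze_dir / dir_to_target: 3D direction vectors in head space."""
    return masking_event or angle_deg(gaze_dir, dir_to_target) > eccentricity_deg

if __name__ == "__main__":
    print(can_apply_change((0, 0, 1), (0, 0, 1)))     # user is looking at it -> False
    print(can_apply_change((0, 0, 1), (1, 0, 0.2)))   # target far in the periphery -> True
```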
{"title":"Mise-Unseen: Using Eye Tracking to Hide Virtual Reality Scene Changes in Plain Sight","authors":"Sebastian Marwecki, Andrew D. Wilson, E. Ofek, Mar González-Franco, Christian Holz","doi":"10.1145/3332165.3347919","DOIUrl":"https://doi.org/10.1145/3332165.3347919","url":null,"abstract":"Creating or arranging objects at runtime is needed in many virtual reality applications, but such changes are noticed when they occur inside the user's field of view. We present Mise-Unseen, a software system that applies such scene changes covertly inside the user's field of view. Mise-Unseen leverages gaze tracking to create models of user attention, intention, and spatial memory to determine if and when to inject a change. We present seven applications of Mise-Unseen to unnoticeably modify the scene within view (i) to hide that task difficulty is adapted to the user, (ii) to adapt the experience to the user's preferences, (iii) to time the use of low fidelity effects, (iv) to detect user choice for passive haptics even when lacking physical props, (v) to sustain physical locomotion despite a lack of physical space, (vi) to reduce motion sickness during virtual locomotion, and (vii) to verify user understanding during story progression. We evaluated Mise-Unseen and our applications in a user study with 15 participants and find that while gaze data indeed supports obfuscating changes inside the field of view, a change is rendered unnoticeably by using gaze in combination with common masking techniques.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123313809","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Turn-by-Wire: Computationally Mediated Physical Fabrication
Rundong Tian, V. Saran, Mareike Kritzler, F. Michahelles, E. Paulos
DOI: https://doi.org/10.1145/3332165.3347918
Advances in digital fabrication have simultaneously created new capabilities and reinforced outdated workflows that constrain how, and by whom, these fabrication tools are used. In this paper, we investigate how a new class of hybrid-controlled machines can collaborate with novice and expert users alike to yield a more lucid making experience. We demonstrate these ideas through our system, Turn-by-Wire. By combining the capabilities of a traditional lathe with haptic input controllers that modulate both position and force, we detail a series of novel interaction metaphors that invite a more fluid making process spanning digital, model-centric computer control and embodied, adaptive human control. We evaluate our system through a user study and discuss how these concepts generalize to other fabrication tools.
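One way to read "spanning computer control and human control" is as a per-tick blend: the commanded tool position mixes a model-defined target with the operator's handwheel input, while the haptic controller renders a restoring force toward the model path. This is only an interpretation; the blend weight, stiffness, and spring model below are assumptions, not Turn-by-Wire's actual controller.

```python
# Hypothetical blend of model-driven and operator-driven control for one axis.
# Gains and the linear-spring feedback are illustrative assumptions.
def mediated_step(model_target, handwheel_pos, blend=0.5, stiffness=2.0):
    """Return (commanded_position, feedback_force) for one control tick."""
    commanded = blend * model_target + (1.0 - blend) * handwheel_pos
    feedback_force = stiffness * (model_target - handwheel_pos)  # pulls the operator toward the model
    return commanded, feedback_force

if __name__ == "__main__":
    # Operator drifts away from the model-defined depth of cut; force nudges them back.
    print(mediated_step(model_target=10.0, handwheel_pos=12.5))
```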
{"title":"Turn-by-Wire: Computationally Mediated Physical Fabrication","authors":"Rundong Tian, V. Saran, Mareike Kritzler, F. Michahelles, E. Paulos","doi":"10.1145/3332165.3347918","DOIUrl":"https://doi.org/10.1145/3332165.3347918","url":null,"abstract":"Advances in digital fabrication have simultaneously created new capabilities while reinforcing outdated workflows that constrain how, and by whom, these fabrication tools are used. In this paper, we investigate how a new class of hybrid-controlled machines can collaborate with novice and expert users alike to yield a more lucid making experience. We demonstrate these ideas through our system, Turn-by-Wire. By combining the capabilities of a traditional lathe with haptic input controllers that modulate both position and force, we detail a series of novel interaction metaphors that invite a more fluid making process spanning digital, model-centric, computer control, and embodied, adaptive, human control. We evaluate our system through a user study and discuss how these concepts generalize to other fabrication tools.","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"95 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124226133","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
X-Droid: A Quick and Easy Android Prototyping Framework with a Single-App Illusion
Donghwi Kim, Sooyoung Park, Jihoon Ko, Steven Y. Ko, Sung-ju Lee
DOI: https://doi.org/10.1145/3332165.3347890
We present X-Droid, a framework that provides Android app developers with the ability to quickly and easily produce functional prototypes. Our work is motivated by the need for such an ability and the lack of tools that provide it. Developers want to produce a functional prototype rapidly to test out potential features in real-life situations. However, current prototyping tools for mobile apps are limited to creating non-functional UI mockups that do not demonstrate actual features. With X-Droid, developers can create a new app that imports various kinds of functionality provided by other existing Android apps. In doing so, developers do not need to understand how other Android apps are implemented or have access to their source code. X-Droid provides a developer tool that enables developers to use the UIs of other Android apps and import desired functions into their prototypes. X-Droid also provides a run-time system that executes other apps' functionality in the background on off-the-shelf Android devices for seamless integration. Our evaluation shows that with the help of X-Droid, a developer imported a function from an existing Android app into a new prototype with only 51 lines of Java code, while the function itself requires 10,334 lines of Java code to implement (i.e., a roughly 200× reduction).
{"title":"X-Droid: A Quick and Easy Android Prototyping Framework with a Single-App Illusion","authors":"Donghwi Kim, Sooyoung Park, Jihoon Ko, Steven Y. Ko, Sung-ju Lee","doi":"10.1145/3332165.3347890","DOIUrl":"https://doi.org/10.1145/3332165.3347890","url":null,"abstract":"We present X-Droid, a framework that provides Android app developers an ability to quickly and easily produce functional prototypes. Our work is motivated by the need for such ability and the lack of tools that provide it. Developers want to produce a functional prototype rapidly to test out potential features in real-life situations. However, current prototyping tools for mobile apps are limited to creating non-functional UI mockups that do not demonstrate actual features. With X-Droid, developers can create a new app that imports various kinds of functionality provided by other existing Android apps. In doing so, developers do not need to understand how other Android apps are implemented or need access to their source code. X-Droid provides a developer tool that enables developers to use the UIs of other Android apps and import desired functions into their prototypes. X-Droid also provides a run-time system that executes other apps' functionality in the background on off-the-shelf Android devices for seamless integration. Our evaluation shows that with the help of X-Droid, a developer imported a function from an existing Android app into a new prototype with only 51 lines of Java code, while the function itself requires 10,334 lines of Java code to implement (i.e., 200× improvement).","PeriodicalId":431403,"journal":{"name":"Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2019-10-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130953536","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}