Prototyping devices with kinetic mechanisms, such as automata and robots, has become common in physical computing projects. However, mechanism design in the early concept-exploration phase is challenging due to the dynamic and unpredictable characteristics of mechanisms. We present Mechanism Perfboard, an augmented reality environment that supports linkage mechanism design and fabrication. It supports the concretization of ideas by generating the initial desired linkage mechanism from a real-world movement. Projecting the simulated movement within the environment enables iterative tests and modifications at real scale. Augmented information and accompanying tangible parts help users fabricate mechanisms. Through a user study with 10 participants, we found that Mechanism Perfboard helped participants achieve their desired movements. The augmented environment enabled intuitive modification and fabrication with an understanding of mechanical movement. Based on the tool development and the user study, we discuss implications for mechanism prototyping with augmented reality and computational support.
{"title":"Mechanism Perfboard: An Augmented Reality Environment for Linkage Mechanism Design and Fabrication","authors":"Yunwoo Jeong, Han-Jong Kim, T. Nam","doi":"10.1145/3173574.3173985","DOIUrl":"https://doi.org/10.1145/3173574.3173985","url":null,"abstract":"Prototyping devices with kinetic mechanisms, such as automata and robots, has become common in physical computing projects. However, mechanism design in the early-concept exploration phase is challenging, due to the dynamic and unpredictable characteristics of mechanisms. We present Mechanism Perfboard, an augmented reality environment that supports linkage mechanism design and fabrication. It supports the concretization of ideas by generating the initial desired linkage mechanism from a real world movement. The projection of simulated movement within the environment enables iterative tests and modifications in real scale. Augmented information and accompanying tangible parts help users to fabricate mechanisms. Through a user study with 10 participants, we found that Mechanism Perfboard helped the participant to achieve their desired movement. The augmented environment enabled intuitive modification and fabrication with an understanding of mechanical movement. Based on the tool development and the user study, we discuss implications for mechanism prototyping with augmented reality and computational support.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"109 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"80720797","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Martez E. Mott, J. E, Cynthia L. Bennett, Edward Cutrell, M. Morris
We present the results of an exploration to understand the accessibility of smartphone photography for people with motor impairments. We surveyed forty-six people and interviewed twelve people about capturing, editing, and sharing photographs on smartphones. We found that people with motor impairments encounter many challenges with smartphone photography, resulting in users capturing fewer photographs than they would like. Participants described various strategies they used to overcome challenges in order to capture a quality photograph. We also found that photograph quality plays a large role in deciding which photographs users share and how often they share, with most participants rating their photographs as average or poor quality compared to photos shared on their social networks. Additionally, we created design probes of two novel photography interfaces and received feedback from our interview participants about their usefulness and functionality. Based on our findings, we propose design recommendations for how to improve the accessibility of mobile photoware for people with motor impairments.
{"title":"Understanding the Accessibility of Smartphone Photography for People with Motor Impairments","authors":"Martez E. Mott, J. E, Cynthia L. Bennett, Edward Cutrell, M. Morris","doi":"10.1145/3173574.3174094","DOIUrl":"https://doi.org/10.1145/3173574.3174094","url":null,"abstract":"We present the results of an exploration to understand the accessibility of smartphone photography for people with motor impairments. We surveyed forty-six people and interviewed twelve people about capturing, editing, and sharing photographs on smartphones. We found that people with motor impairments encounter many challenges with smartphone photography, resulting in users capturing fewer photographs than they would like. Participants described various strategies they used to overcome challenges in order to capture a quality photograph. We also found that photograph quality plays a large role in deciding which photographs users share and how often they share, with most participants rating their photographs as average or poor quality compared to photos shared on their social networks. Additionally, we created design probes of two novel photography interfaces and received feedback from our interview participants about their usefulness and functionality. Based on our findings, we propose design recommendations for how to improve the accessibility of mobile photoware for people with motor impairments.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"43 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83680321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Touchscreen mobile devices can afford rich interaction behaviors but they are complex to model. Scrollable two-dimensional grids are a common user interface on mobile devices that allow users to access a large number of items on a small screen by direct touch. By analyzing touch input and eye gaze of users during grid interaction, we reveal how multiple performance components come into play in such a task, including navigation, visual search and pointing. These findings inspired us to design a novel predictive model that combines these components for modeling grid tasks. We realized these model components by employing both traditional analytical methods and data-driven machine learning approaches. In addition to showing high accuracy achieved by our model in predicting human performance on a test dataset, we demonstrate how such a model can lead to a significant reduction in interaction time when used in a predictive user interface.
{"title":"Analysis and Modeling of Grid Performance on Touchscreen Mobile Devices","authors":"Ken Pfeuffer, Yang Li","doi":"10.1145/3173574.3173862","DOIUrl":"https://doi.org/10.1145/3173574.3173862","url":null,"abstract":"Touchscreen mobile devices can afford rich interaction behaviors but they are complex to model. Scrollable two-dimensional grids are a common user interface on mobile devices that allow users to access a large number of items on a small screen by direct touch. By analyzing touch input and eye gaze of users during grid interaction, we reveal how multiple performance components come into play in such a task, including navigation, visual search and pointing. These findings inspired us to design a novel predictive model that combines these components for modeling grid tasks. We realized these model components by employing both traditional analytical methods and data-driven machine learning approaches. In addition to showing high accuracy achieved by our model in predicting human performance on a test dataset, we demonstrate how such a model can lead to a significant reduction in interaction time when used in a predictive user interface.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"53 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83804540","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
With the increasing popularity of consumer-grade 3D printing, many people are creating, and even more are using, objects shared on sites such as Thingiverse. However, our formative study of 962 Thingiverse models shows a lack of reuse of models, perhaps due to the advanced skills needed for 3D modeling. An end-user programming perspective on 3D modeling is needed. Our framework (PARTs) empowers amateur modelers to graphically specify design intent through geometry. PARTs includes a GUI, a scripting API, and an exemplar library of assertions, which test design expectations, and integrators, which act on intent to create geometry. PARTs lets modelers integrate advanced, model-specific functionality into designs so that they can be reused and extended without programming. In two workshops, we show that PARTs helps to create 3D printable models and to modify existing models more easily than with a standard tool.
{"title":"Greater than the Sum of its PARTs: Expressing and Reusing Design Intent in 3D Models","authors":"M. Hofmann, G. Hann, S. Hudson, Jennifer Mankoff","doi":"10.1145/3173574.3173875","DOIUrl":"https://doi.org/10.1145/3173574.3173875","url":null,"abstract":"With the increasing popularity of consumer-grade 3D printing, many people are creating, and even more using, objects shared on sites such as Thingiverse. However, our formative study of 962 Thingiverse models shows a lack of re-use of models, perhaps due to the advanced skills needed for 3D modeling. An end user program perspective on 3D modeling is needed. Our framework (PARTs) empowers amateur modelers to graphically specify design intent through geometry. PARTs includes a GUI, scripting API and exemplar library of assertions which test design expectations and integrators which act on intent to create geometry. PARTs lets modelers integrate advanced, model specific functionality into designs, so that they can be re-used and extended, without programming. In two workshops, we show that PARTs helps to create 3D printable models, and modify existing models more easily than with a standard tool.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84764279","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Brittany M. Garcia, Sharon Lynn Chu Yew Yee, Beth Nam, Colin Banigan
Relatively little research exists on the use of smartwatches to support learning. This paper presents an approach that uses commodity smartwatches as a tool for situated reflection in elementary school science. The approach was embodied in a smartwatch app called ScienceStories that allows students to voice-record reflections about science concepts anytime, anywhere. We conducted a study with 18 fifth-grade children to investigate, first, the effects of ScienceStories on students' science self-efficacy and, second, the effects of different motivational structures (gamification, narrative-based, hybrid) designed into the smartwatch app on students' quality and quantity of use. Quantitative results showed that ScienceStories increased science self-efficacy, especially when paired with a motivational structure. The gamified version saw the highest quantity of use, while the narrative-based version performed worst. Qualitative findings described how students' recordings related to science topics and were contextualized. We discuss how our findings contribute to an understanding of how to design smartwatch apps for educational purposes.
{"title":"Wearables for Learning: Examining the Smartwatch as a Tool for Situated Science Reflection","authors":"Brittany M. Garcia, Sharon Lynn Chu Yew Yee, Beth Nam, Colin Banigan","doi":"10.1145/3173574.3173830","DOIUrl":"https://doi.org/10.1145/3173574.3173830","url":null,"abstract":"Relatively little research exists on the use of smartwatches to support learning. This paper presents an approach for commodity smartwatches as a tool for situated reflection in elementary school science. The approach was embodied in a smartwatch app called ScienceStories that allows students to voice record reflections about science concepts anytime, anywhere. We conducted a study with 18 fifth-grade children to investigate first, the effects of ScienceStories on students' science self-efficacy, and second the effects of different motivational structures (gamification, narrative-based, hybrid) designed into the smartwatch app on students' quality and quantity of use. Quantitative results showed ScienceStories increased science self-efficacy especially with a motivational structure. The gamified version had the highest quantity of use, while narrative performance performed worst. Qualitative findings described how students' recordings related to science topics and were contextualized. We discuss how our findings contribute to understanding of how to design smartwatch apps for educational purposes.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"65 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89362167","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Mark Diaz, Isaac L. Johnson, Amanda Lazar, Anne Marie Piper, D. Gergle
Computational approaches to text analysis are useful in understanding aspects of online interaction, such as opinions and subjectivity in text. Yet, recent studies have identified various forms of bias in language-based models, raising concerns about the risk of propagating social biases against certain groups based on sociodemographic factors (e.g., gender, race, geography). In this study, we contribute a systematic examination of the application of language models to study discourse on aging. We analyze the treatment of age-related terms across 15 sentiment analysis models and 10 widely-used GloVe word embeddings and attempt to alleviate bias through a method of processing model training data. Our results demonstrate that significant age bias is encoded in the outputs of many sentiment analysis algorithms and word embeddings. We discuss the models' characteristics in relation to output bias and how these models might be best incorporated into research.
{"title":"Addressing Age-Related Bias in Sentiment Analysis","authors":"Mark Diaz, Isaac L. Johnson, Amanda Lazar, Anne Marie Piper, D. Gergle","doi":"10.1145/3173574.3173986","DOIUrl":"https://doi.org/10.1145/3173574.3173986","url":null,"abstract":"Computational approaches to text analysis are useful in understanding aspects of online interaction, such as opinions and subjectivity in text. Yet, recent studies have identified various forms of bias in language-based models, raising concerns about the risk of propagating social biases against certain groups based on sociodemographic factors (e.g., gender, race, geography). In this study, we contribute a systematic examination of the application of language models to study discourse on aging. We analyze the treatment of age-related terms across 15 sentiment analysis models and 10 widely-used GloVe word embeddings and attempt to alleviate bias through a method of processing model training data. Our results demonstrate that significant age bias is encoded in the outputs of many sentiment analysis algorithms and word embeddings. We discuss the models' characteristics in relation to output bias and how these models might be best incorporated into research.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"89 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87386367","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Primal Wijesekera, Joel Reardon, Irwin Reyes, Lynn Tsai, Jung-Wei Chen, Nathaniel Good, D. Wagner, K. Beznosov, Serge Egelman
Modern mobile operating systems implement an ask-on-first-use policy to regulate applications' access to private user data: the user is prompted to allow or deny access to a sensitive resource the first time an app attempts to use it. Prior research shows that this model may not adequately capture user privacy preferences because subsequent requests may occur under varying contexts. To address this shortcoming, we implemented a novel privacy management system in Android, in which we use contextual signals to build a classifier that predicts user privacy preferences under various scenarios. We performed a 37-person field study to evaluate this new permission model under normal device usage. From our exit interviews and collection of over 5 million data points from participants, we show that this new permission model reduces the error rate by 75% (i.e., fewer privacy violations), while preserving usability. We offer guidelines for how platforms can better support user privacy decision making.
{"title":"Contextualizing Privacy Decisions for Better Prediction (and Protection)","authors":"Primal Wijesekera, Joel Reardon, Irwin Reyes, Lynn Tsai, Jung-Wei Chen, Nathaniel Good, D. Wagner, K. Beznosov, Serge Egelman","doi":"10.1145/3173574.3173842","DOIUrl":"https://doi.org/10.1145/3173574.3173842","url":null,"abstract":"Modern mobile operating systems implement an ask-on-first-use policy to regulate applications' access to private user data: the user is prompted to allow or deny access to a sensitive resource the first time an app attempts to use it. Prior research shows that this model may not adequately capture user privacy preferences because subsequent requests may occur under varying contexts. To address this shortcoming, we implemented a novel privacy management system in Android, in which we use contextual signals to build a classifier that predicts user privacy preferences under various scenarios. We performed a 37-person field study to evaluate this new permission model under normal device usage. From our exit interviews and collection of over 5 million data points from participants, we show that this new permission model reduces the error rate by 75% (i.e., fewer privacy violations), while preserving usability. We offer guidelines for how platforms can better support user privacy decision making.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"52 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85179768","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Mariakakis, Sayna Parsi, Shwetak N. Patel, J. Wobbrock
Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving. However, not everyone has access to such specialized hardware. We present drunk user interfaces: smartphone user interfaces that measure how alcohol affects a person's motor coordination and cognition using performance metrics and sensor data. We examine five drunk user interfaces and combine them to form the "DUI app". DUI uses machine learning models trained on human performance metrics and sensor data to estimate a person's blood alcohol level (BAL). We evaluated DUI on 14 individuals in a week-long longitudinal study wherein each participant used DUI at various BALs. We found that with a global model that accounts for user-specific learning, DUI can estimate a person's BAL with an absolute mean error of 0.005% ± 0.007% and a Pearson's correlation coefficient of 0.96 with breathalyzer measurements.
{"title":"Drunk User Interfaces: Determining Blood Alcohol Level through Everyday Smartphone Tasks","authors":"A. Mariakakis, Sayna Parsi, Shwetak N. Patel, J. Wobbrock","doi":"10.1145/3173574.3173808","DOIUrl":"https://doi.org/10.1145/3173574.3173808","url":null,"abstract":"Breathalyzers, the standard quantitative method for assessing inebriation, are primarily owned by law enforcement and used only after a potentially inebriated individual is caught driving. However, not everyone has access to such specialized hardware. We present drunk user interfaces: smartphone user interfaces that measure how alcohol affects a person's motor coordination and cognition using performance metrics and sensor data. We examine five drunk user interfaces and combine them to form the \"DUI app\". DUI uses machine learning models trained on human performance metrics and sensor data to estimate a person's blood alcohol level (BAL). We evaluated DUI on 14 individuals in a week-long longitudinal study wherein each participant used DUI at various BALs. We found that with a global model that accounts for user-specific learning, DUI can estimate a person's BAL with an absolute mean error of 0.005% ± 0.007% and a Pearson's correlation coefficient of 0.96 with breathalyzer measurements.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"46 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85181284","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Yi-Hao Peng, Ming-Wei Hsu, Paul Taele, Ting-Yu Lin, P. Lai, L. Hsu, Tzu-chuan Chen, Te-Yen Wu, Yu-An Chen, Hsien-Hui Tang, Mike Y. Chen
Deaf and hard-of-hearing (DHH) individuals encounter difficulties when engaged in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers who may be out of view. We interviewed and co-designed with eight DHH participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers. We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants. Our study results showed that participants significantly preferred speech bubble visualizations over traditional captions. These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head-mounted display. From our evaluations, we further demonstrated that DHH participants preferred our prototype over traditional captions for group conversations.
{"title":"SpeechBubbles: Enhancing Captioning Experiences for Deaf and Hard-of-Hearing People in Group Conversations","authors":"Yi-Hao Peng, Ming-Wei Hsu, Paul Taele, Ting-Yu Lin, P. Lai, L. Hsu, Tzu-chuan Chen, Te-Yen Wu, Yu-An Chen, Hsien-Hui Tang, Mike Y. Chen","doi":"10.1145/3173574.3173867","DOIUrl":"https://doi.org/10.1145/3173574.3173867","url":null,"abstract":"Deaf and hard-of-hearing (DHH) individuals encounter difficulties when engaged in group conversations with hearing individuals, due to factors such as simultaneous utterances from multiple speakers and speakers whom may be potentially out of view. We interviewed and co-designed with eight DHH participants to address the following challenges: 1) associating utterances with speakers, 2) ordering utterances from different speakers, 3) displaying optimal content length, and 4) visualizing utterances from out-of-view speakers. We evaluated multiple designs for each of the four challenges through a user study with twelve DHH participants. Our study results showed that participants significantly preferred speechbubble visualizations over traditional captions. These design preferences guided our development of SpeechBubbles, a real-time speech recognition interface prototype on an augmented reality head-mounted display. From our evaluations, we further demonstrated that DHH participants preferred our prototype over traditional captions for group conversations.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85354601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A. Siu, Eric J. Gonzalez, Shenli Yuan, Jason B. Ginsberg, Sean Follmer
We explore interactions enabled by 2D spatial manipulation and self-actuation of a tabletop shape display. To explore these interactions, we developed shapeShift, a compact, high-resolution (7 mm pitch), mobile tabletop shape display. shapeShift can be mounted on passive rollers, allowing for bimanual interaction in which the user can freely manipulate the system while it renders spatially relevant content. shapeShift can also be mounted on an omnidirectional robot to provide both vertical and lateral kinesthetic feedback, display moving objects, or act as an encountered-type haptic device for VR. We present a study on haptic search tasks comparing spatial manipulation of a shape display for egocentric exploration of a map versus exploration using a fixed display and a touch pad. Results show a 30% decrease in navigation path lengths, a 24% decrease in task time, a 15% decrease in mental demand, and a 29% decrease in frustration in favor of egocentric navigation.
{"title":"shapeShift: 2D Spatial Manipulation and Self-Actuation of Tabletop Shape Displays for Tangible and Haptic Interaction","authors":"A. Siu, Eric J. Gonzalez, Shenli Yuan, Jason B. Ginsberg, Sean Follmer","doi":"10.1145/3173574.3173865","DOIUrl":"https://doi.org/10.1145/3173574.3173865","url":null,"abstract":"We explore interactions enabled by 2D spatial manipulation and self-actuation of a tabletop shape display. To explore these interactions, we developed shapeShift, a compact, high-resolution (7 mm pitch), mobile tabletop shape display. shapeShift can be mounted on passive rollers allowing for bimanual interaction where the user can freely manipulate the system while it renders spatially relevant content. shapeShift can also be mounted on an omnidirectional-robot to provide both vertical and lateral kinesthetic feedback, display moving objects, or act as an encountered-type haptic device for VR. We present a study on haptic search tasks comparing spatial manipulation of a shape display for egocentric exploration of a map versus exploration using a fixed display and a touch pad. Results show a 30% decrease in navigation path lengths, 24% decrease in task time, 15% decrease in mental demand and 29% decrease in frustration in favor of egocentric navigation.","PeriodicalId":20512,"journal":{"name":"Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems","volume":"253 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2018-04-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90760158","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}