Multi-modal location-aware system for paratrooper team coordination
Danielle Cummings, Manoj Prasad, G. Lucchese, C. Aikens, T. Hammond
Navigation and assembly are critical tasks for Soldiers in battlefield situations. Paratroopers, in particular, must be able to parachute into a battlefield and then locate and assemble their equipment as quickly and quietly as possible. Current assembly methods rely on bulky, antiquated equipment that inhibits the speed and effectiveness of such operations. To address this, we have created a multi-modal mobile navigation system that uses ruggedized beacons to mark assembly points and smartphones to assist in navigating to those points while minimizing cognitive load and maximizing situational awareness. To achieve this, we implemented a novel beacon-receiver protocol that allows an unlimited number of receivers to listen to the encrypted beaconing message using only ad-hoc Wi-Fi technology. The system was evaluated by U.S. Army Paratroopers and proved quick to learn and efficient at moving Soldiers to navigation waypoints. Beyond military operations, this system could be applied to any task that requires assembling and coordinating many individuals or teams, such as emergency evacuations, wildfire fighting, or locating airdropped humanitarian aid.
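The abstract leaves the beacon-receiver protocol unspecified. As a minimal sketch of the property it claims (receivers that only listen, so their number is unbounded), the following hypothetical C++ program receives an encrypted beacon payload over connectionless UDP broadcast on an ad-hoc Wi-Fi network. The port, packet handling, and the use of UDP are illustrative assumptions, not the authors' design.

```cpp
// Hypothetical sketch: a passive receiver for an encrypted UDP broadcast
// beacon on an ad-hoc Wi-Fi network. Broadcast is connectionless, so any
// number of receivers can listen without the beacon tracking them -- the
// property the abstract attributes to its protocol.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <cstdio>

int main() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) { perror("socket"); return 1; }

    // Bind to the (assumed) beacon port on all interfaces.
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(49152);  // hypothetical beacon port
    if (bind(sock, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) < 0) {
        perror("bind");
        return 1;
    }

    char packet[1500];
    for (;;) {
        // recvfrom never transmits, so the receiver stays silent ("quiet")
        // and the beacon never learns how many listeners exist.
        ssize_t n = recvfrom(sock, packet, sizeof(packet), 0, nullptr, nullptr);
        if (n <= 0) continue;
        // Decrypting with a pre-shared key and handing the waypoint to the
        // navigation UI are stubbed out here.
        printf("received %zd-byte beacon frame\n", n);
    }
}
```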
{"title":"Multi-modal location-aware system for paratrooper team coordination","authors":"Danielle Cummings, Manoj Prasad, G. Lucchese, C. Aikens, T. Hammond","doi":"10.1145/2468356.2468779","DOIUrl":"https://doi.org/10.1145/2468356.2468779","url":null,"abstract":"Navigation and assembly are critical tasks for Soldiers in battlefield situations. Paratroopers, in particular, must be able to parachute into a battlefield and locate and assemble their equipment as quickly and quietly as possible. Current assembly methods rely on bulky and antiquated equipment that inhibit the speed and effectiveness of such operations. To address this we have created a multi-modal mobile navigation system that uses ruggedized to mark assembly points and smartphones to assist in navigating to these points while minimizing cognitive load and maximizing situational awareness. To achieve this task, we implemented a novel beacon receiver protocol that allows an infinite number of receivers to listen to the encrypted beaconing message using only ad-hoc Wi-Fi technologies. The system was evaluated by U.S. Army Paratroopers and proved quick to learn and efficient at moving Soldiers to navigation waypoints. Beyond military operations, this system could be applied to any task that requires the assembly and coordination of many individuals or teams, such as emergency evacuations, fighting wildfires or locating airdropped humanitarian aid.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130658457","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
UnoJoy!: a library for rapid video game prototyping using arduino
Alan D. Chatham, W. Walmink, F. Mueller
UnoJoy! is a free, open-source library for the Arduino Uno platform that allows users to rapidly prototype system-native video game controllers. Using standard Arduino code, users map inputs to controller button presses and then run a program that overwrites the Arduino's firmware, allowing the board to register as a native game controller on Windows, OS X, and PlayStation 3. Focusing on ease of use, the library lets researchers and interaction designers quickly experiment with novel interaction methods using high-quality commercial video games. In our practice, we have used it to add exertion-based controls to existing games and to explore how different controllers can affect the social experience of video games. We hope this tool helps other researchers and designers deepen our understanding of game interaction mechanics by making controller design simple.
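To illustrate the workflow, here is a minimal controller sketch modeled on the UnoJoy! examples: read buttons on digital pins and expose them as controller buttons. The identifiers (UnoJoy.h, setupUnoJoy(), getBlankDataForController(), setControllerData(), dataForController_t and its fields) follow the library's published example code as we recall it; check them against the release you use.

```cpp
// Minimal UnoJoy!-style controller sketch (Arduino C++), modeled on the
// library's published example; identifier names may differ in the
// current release.
#include "UnoJoy.h"

void setup() {
  // Buttons on pins 2-3, wired to ground, using internal pull-ups.
  pinMode(2, INPUT_PULLUP);
  pinMode(3, INPUT_PULLUP);
  setupUnoJoy();  // initialize the link to the USB controller firmware
}

void loop() {
  // Start from a blank, zeroed controller state each frame.
  dataForController_t controllerData = getBlankDataForController();

  // Pins read LOW when pressed, so invert the readings.
  controllerData.crossOn = !digitalRead(2);
  controllerData.squareOn = !digitalRead(3);

  // Map a potentiometer (10-bit analogRead) to an 8-bit stick axis.
  controllerData.leftStickX = analogRead(A0) >> 2;

  // Hand the state to UnoJoy!, which reports it over USB.
  setControllerData(controllerData);
}
```

After uploading a sketch like this, the user runs the firmware-flipping program the abstract mentions, after which the same board enumerates as a native game controller.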
{"title":"UnoJoy!: a library for rapid video game prototyping using arduino","authors":"Alan D. Chatham, W. Walmink, F. Mueller","doi":"10.1145/2468356.2479512","DOIUrl":"https://doi.org/10.1145/2468356.2479512","url":null,"abstract":"UnoJoy! is a free, open-source library for the Arduino Uno platform allowing users to rapidly prototype system-native video game controllers. Using standard Arduino code, users assign inputs to button presses, and then the user can run a program to overwrite the Arduino firmware, allowing the Arduino to register as a native game controller for Windows, OSX, and Playstation 3. Focusing on ease of use, the library allows researchers and interaction designers to quickly experiment with novel interaction methods while using high-quality commercial videogames. In our practice, we have used it to add exertion-based controls to existing games and to explore how different controllers can affect the social experience of video games. We hope this tool can help other researchers and designers deepen our understanding of game interaction mechanics by making controller design simple.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124202584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Posture training with real-time visual feedback
Brett Taylor, M. Birk, R. Mandryk, Z. Ivkovic
Our posture affects us in a number of surprising ways, influencing how we handle stress and how confident we feel. Yet it is difficult for people to maintain good posture. We present a non-invasive posture training system using an Xbox Kinect sensor that provides real-time visual feedback at two levels of fidelity.
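The abstract does not describe how posture is measured; one common Kinect-based approach, assumed here purely for illustration and not necessarily the authors' method, is to score the angle between the spine-to-head vector and vertical from the tracked skeleton.

```cpp
// Hypothetical posture score from Kinect-style skeleton joints.
// Joint positions are assumed to be in meters, camera space
// (x right, y up, z away from the sensor), as the Kinect reports them.
#include <cmath>
#include <cstdio>

struct Joint { float x, y, z; };

// Angle (degrees) between the spine->head vector and vertical.
// 0 = upright; larger values = slouching forward or sideways.
float slouchAngle(const Joint& spineBase, const Joint& head) {
    float dx = head.x - spineBase.x;
    float dy = head.y - spineBase.y;
    float dz = head.z - spineBase.z;
    float len = std::sqrt(dx * dx + dy * dy + dz * dz);
    if (len == 0.0f) return 0.0f;
    return std::acos(dy / len) * 180.0f / 3.14159265f;
}

int main() {
    // Example: head 8 cm in front of the spine base, 60 cm above it.
    Joint spineBase{0.0f, 0.0f, 2.0f};
    Joint head{0.0f, 0.6f, 1.92f};
    float angle = slouchAngle(spineBase, head);
    // Two levels of feedback fidelity, loosely mirroring the abstract:
    // a coarse good/bad indicator plus the continuous angle itself.
    printf("slouch angle: %.1f deg -> %s\n", angle,
           angle < 15.0f ? "good posture" : "slouching");
}
```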
{"title":"Posture training with real-time visual feedback","authors":"Brett Taylor, M. Birk, R. Mandryk, Z. Ivkovic","doi":"10.1145/2468356.2479629","DOIUrl":"https://doi.org/10.1145/2468356.2479629","url":null,"abstract":"Our posture affects us in a number of surprising and unexpected ways, by influencing how we handle stress and how confident we feel. But it is difficult for people to main good posture. We present a non-invasive posture training system using an Xbox Kinect sensor. We provide real-time visual feedback at two levels of fidelity.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123521811","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dynamic duo: phone-tablet interaction on tabletops
T. Piazza, Shengdong Zhao, Gonzalo A. Ramos, A. Yantaç, M. Fjeld
As an increasing number of users carry smartphones and tablets simultaneously, there is an opportunity to use these two form factors in a more complementary way. Our work explores this by a) defining the design space of distributed input and output solutions that rely on and benefit from phone-tablet combinations working together physically and digitally, and b) revealing the idiosyncrasies of each particular device combination via interactive prototypes. Our research provides actionable insight into this emerging area by defining a design space, suggesting a mobile framework, and implementing prototypical applications in areas such as distributed information display, distributed control, and combinations of these. For each area, we show example techniques and demonstrate an application that combines several of them.
{"title":"Dynamic duo: phone-tablet interaction on tabletops","authors":"T. Piazza, Shengdong Zhao, Gonzalo A. Ramos, A. Yantaç, M. Fjeld","doi":"10.1145/2468356.2479520","DOIUrl":"https://doi.org/10.1145/2468356.2479520","url":null,"abstract":"As an increasing number of users carry smartphones and tablets simultaneously, there is an opportunity to leverage the use of these two form factors in a more complementary way. Our work aims to explore this by a) defining the design space of distributed input and output solutions that rely on and benefit from phone- tablet combinations working together physically and digitally; and b) reveal the idiosyncrasies of each particular device combination via interactive prototypes. Our research provides actionable insight in this emerging area by defining a design space, suggesting a mobile framework, and implementing prototypical applications in such areas as distributed information display, distributed control, and combinations of these. For each of these, we show a few example techniques and demonstrate an application combining more techniques.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123532959","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
VISO: a shared, formal knowledge base as a foundation for semi-automatic infovis systems
Jan Polowinski, M. Voigt
Interactive visual analytics systems can help solve the problem of identifying relevant information in the growing amount of data. To guide the user through visualization tasks, these semi-automatic systems need to store and use knowledge of this interdisciplinary domain. Unfortunately, visualization knowledge stored in one system cannot easily be reused in another due to a lack of shared formal models. To approach this problem, we introduce a visualization ontology (VISO) that formally models visualization-specific concepts and facts. Furthermore, we give first examples of the ontology's use within two systems and highlight how the community can get involved in extending and improving it.
{"title":"VISO: a shared, formal knowledge base as a foundation for semi-automatic infovis systems","authors":"Jan Polowinski, M. Voigt","doi":"10.1145/2468356.2468677","DOIUrl":"https://doi.org/10.1145/2468356.2468677","url":null,"abstract":"Interactive visual analytic systems can help to solve the problem of identifying relevant information in the growing amount of data. For guiding the user through visualization tasks, these semi-automatic systems need to store and use knowledge of this interdisciplinary domain. Unfortunately, visualisation knowledge stored in one system cannot easily be reused in another due to a lack of shared formal models. In order to approach this problem, we introduce a visualization ontology (VISO) that formally models visualization-specific concepts and facts. Furthermore, we give first examples of the ontology's use within two systems and highlight how the community can get involved in extending and improving it.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"61 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114078863","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
TouchShield: a virtual control for stable grip of a smartphone using the thumb
Jonggi Hong, Geehyuk Lee
People commonly manipulate their smartphones with the thumb, but often with an unstable grip in which the phone lies on the fingers while the thumb hovers over the touch screen. To offer a secure and stable grip, we designed a virtual control called TouchShield, which provides a place where the thumb can pin the phone down. In a user study, we confirmed that this control does not interfere with existing touch-screen operations and found indications that TouchShield can provide a more stable grip. An incidental function of TouchShield is that it offers shortcuts to frequently used commands via the thumb, a function that was also shown to be effective in the user study.
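A minimal sketch of how such a virtual control could be hit-tested; the shield geometry and event handling below are illustrative assumptions, not the paper's implementation.

```cpp
// Hypothetical hit-testing for a TouchShield-like control: touches inside
// the shield region anchor the grip and are not forwarded to the app.
// All geometry and behavior here are assumptions for illustration.
#include <cstdio>

struct Rect { float x, y, w, h; };
struct Touch { float x, y; };

bool inside(const Rect& r, const Touch& t) {
    return t.x >= r.x && t.x <= r.x + r.w &&
           t.y >= r.y && t.y <= r.y + r.h;
}

int main() {
    // Shield near the lower-right corner of a 1080x1920 screen,
    // where a right thumb naturally rests.
    Rect shield{880.0f, 1620.0f, 200.0f, 300.0f};
    Touch thumb{960.0f, 1750.0f};
    if (inside(shield, thumb)) {
        // Consume the touch: the thumb is pinning the phone, not tapping.
        // A flick starting here could instead trigger a shortcut command.
        printf("touch absorbed by shield (grip anchor)\n");
    } else {
        printf("touch forwarded to the application\n");
    }
}
```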
{"title":"TouchShield: a virtual control for stable grip of a smartphone using the thumb","authors":"Jonggi Hong, Geehyuk Lee","doi":"10.1145/2468356.2468589","DOIUrl":"https://doi.org/10.1145/2468356.2468589","url":null,"abstract":"People commonly manipulate their smartphones using the thumb, but this is often done with an unstable grip in which the phone lays on their fingers, while the thumb hovers over the touch screen. In order to offer a secure and stable grip, we designed a virtual control called TouchShield, which provides place in which the thumb can pin the phone down in order to provide a stable grip. In a user study, we confirmed that this form of control does not interfere with existing touch screen operations, and the possibility that TouchShield can make more stable grip. An incidental function of TouchShield is that it provides shortcuts to frequently used commands via the thumb, a function that was also shown to be effective in the user study.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114083645","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The future of personal video communication: moving beyond talking heads to shared experiences
Erick Oduor, Carman Neustaedter, Gina Venolia, Tejinder K. Judge
Personal video communication systems such as Skype and FaceTime are becoming common tools for family and friends to communicate and interact over distance. Yet many are designed only to support conversation, with a focus on displaying 'talking heads'. In this workshop, we want to discuss the opportunities and challenges in moving beyond this design paradigm to one where personal video communication systems can be used to share everyday experiences. By this we mean systems that might support shared dinners, shared television watching, or even remote participation in events such as weddings, parties, or graduations. The list could go on: the future of personal video communication is ripe for exploration and discussion.
{"title":"The future of personal video communication: moving beyond talking heads to shared experiences","authors":"Erick Oduor, Carman Neustaedter, Gina Venolia, Tejinder K. Judge","doi":"10.1145/2468356.2479658","DOIUrl":"https://doi.org/10.1145/2468356.2479658","url":null,"abstract":"Personal video communication systems such as Skype or FaceTime are starting to become a common tool used by family and friends to communicate and interact over distance. Yet many are designed to only support conversation with a focus on display 'talking heads'. In this workshop, we want to discuss the opportunities and challenges in moving beyond this design paradigm to one where personal video communication systems can be used to share everyday experiences. By this we are referring to systems that might support shared dinners, shared television watching, or even remote participation in events such as weddings, parties, or graduations. This list could go on and on as the future of personal video communications is ripe for explorations and discussions.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114778029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Smarter objects: using AR technology to program physical objects and their interactions
Valentin Heun, Shunichi Kasahara, P. Maes
The Smarter Objects system explores a new method for interacting with everyday objects. The system associates a virtual object with every physical object to support an easy means of modifying the interface and behavior of that physical object, as well as its interactions with other "smarter objects". As a user points a smartphone or tablet at a physical object, an augmented reality (AR) application recognizes the object and offers an intuitive graphical interface for programming the object's behavior and its interactions with other objects. Once reprogrammed, the Smarter Object can then be operated through a simple tangible interface such as knobs or buttons. Smarter Objects thus combine the adaptability of digital objects with the simple tangible interface of a physical object. We have implemented several Smarter Objects and usage scenarios that demonstrate the potential of this approach.
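One way to picture the physical-virtual pairing the abstract describes is as a lookup from recognized object IDs to editable virtual counterparts. This is a hypothetical data model; none of the names below come from the paper.

```cpp
// Hypothetical data model for Smarter-Objects-style pairing: each recognized
// physical object maps to a virtual object holding its programmable behavior
// and its links to other objects. All structure here is assumed purely for
// illustration; the abstract does not specify the authors' actual model.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct VirtualObject {
    std::string behavior;               // e.g. "knob adjusts radio volume"
    std::vector<std::string> linkedTo;  // other objects it interacts with
};

int main() {
    std::map<std::string, VirtualObject> registry;
    registry["radio-knob-01"] = {"adjust volume", {"speaker-07"}};

    // The AR app recognizes an object and looks up its virtual counterpart,
    // letting the user edit 'behavior' and 'linkedTo' through a GUI.
    const auto& vo = registry.at("radio-knob-01");
    printf("behavior: %s, linked objects: %zu\n",
           vo.behavior.c_str(), vo.linkedTo.size());
}
```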
{"title":"Smarter objects: using AR technology to program physical objects and their interactions","authors":"Valentin Heun, Shunichi Kasahara, P. Maes","doi":"10.1145/2468356.2468528","DOIUrl":"https://doi.org/10.1145/2468356.2468528","url":null,"abstract":"The Smarter Objects system explores a new method for interaction with everyday objects. The system associates a virtual object with every physical object to support an easy means of modifying the interface and the behavior of that physical object as well as its interactions with other \"smarter objects\". As a user points a smart phone or tablet at a physical object, an augmented reality (AR) application recognizes the object and offers an intuitive graphical interface to program the object's behavior and interactions with other objects. Once reprogrammed, the Smarter Object can then be operated with a simple tangible interface (such as knobs, buttons, etc). As such Smarter Objects combine the adaptability of digital objects with the simple tangible interface of a physical object. We have implemented several Smarter Objects and usage scenarios demonstrating the potential of this approach.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"77 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127627204","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Enhancing visuospatial attention performance with brain-computer interfaces
R. Trachel, T. Brochier, Maureen Clerc
Visuospatial attention is often investigated through features related to the head or the gaze during Human-Computer Interaction (HCI). However, the focus of attention can be dissociated from overt responses such as eye movements, making it impossible to detect from behavioral data alone. Electroencephalography (EEG) can, however, provide valuable information about covert aspects of spatial attention. We therefore propose an innovative approach toward developing a Brain-Computer Interface (BCI) to enhance human reaction speed and accuracy. This poster presents an offline evaluation of the approach based on physiological data recorded in a visuospatial attention experiment. Finally, we discuss a future interface that could enhance HCI by displaying visual information at the focus of attention.
{"title":"Enhancing visuospatial attention performance with brain-computer interfaces","authors":"R. Trachel, T. Brochier, Maureen Clerc","doi":"10.1145/2468356.2468579","DOIUrl":"https://doi.org/10.1145/2468356.2468579","url":null,"abstract":"Visuospatial attention is often investigated with features related to the head or the gaze during Human-Computer Interaction (HCI). However the focus of attention can be dissociated from overt responses such as eye movements, and impossible to detect from behavioral data. Actually, Electroencephalography (EEG) can also provide valuable information about covert aspects of spatial attention. Therefore we propose a innovative approach in view of developping a Brain-Computer Interface (BCI) to enhance human reaction speed and accuracy. This poster presents an offline evaluation of the approach based on physiological data recorded in a visuospatial attention experiment. Finally we discuss about the future interface that could enhance HCI by displaying visual information at the focus of attention.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127636104","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
HCI with sports
F. Mueller, R. A. Khot, Alan D. Chatham, S. Pijnappel, Cagdas Toprak, Joe Marshall
Recent advances in cheap sensor technology have made technological support for sports and physical exercise increasingly commonplace, as is evident from the growing popularity of heart rate monitors and GPS sports watches. This rise of technology to support sports activities raises many interaction issues, such as how to interact with these devices while moving and physically exerting oneself. This special interest group brings together industry practitioners and researchers interested in designing and understanding human-computer interaction where the human is physically active, engaging in exertion activities. Fitting with the theme, this special interest group will be "run" while running: participants will be invited to jog together, during which we will discuss technology interaction specific to being physically active, whilst being physically active ourselves.
{"title":"HCI with sports","authors":"F. Mueller, R. A. Khot, Alan D. Chatham, S. Pijnappel, Cagdas Toprak, Joe Marshall","doi":"10.1145/2468356.2468817","DOIUrl":"https://doi.org/10.1145/2468356.2468817","url":null,"abstract":"Recent advances in cheap sensor technology has made technology support for sports and physical exercise increasingly commonplace, which is evident from the growing popularity of heart rate monitors and GPS sports watches. This rise of technology to support sports activities raises many interaction issues, such as how to interact with these devices while moving and physically exerting. This special interest group brings together industry practitioners and researchers who are interested in designing and understanding human-computer interaction where the human is being physically active, engaging in exertion activities. Fitting with the theme, this special interest group will be \"run\" while running: participants will be invited to a jog together during which we will discuss technology interaction that is specific to being physically active whilst being physically active ourselves.","PeriodicalId":228717,"journal":{"name":"CHI '13 Extended Abstracts on Human Factors in Computing Systems","volume":"25 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-04-27","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126453658","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}