
Latest publications from CHI '13 Extended Abstracts on Human Factors in Computing Systems

Multi-modal location-aware system for paratrooper team coordination
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2468779
Danielle Cummings, Manoj Prasad, G. Lucchese, C. Aikens, T. Hammond
Navigation and assembly are critical tasks for Soldiers in battlefield situations. Paratroopers, in particular, must be able to parachute into a battlefield and locate and assemble their equipment as quickly and quietly as possible. Current assembly methods rely on bulky and antiquated equipment that inhibits the speed and effectiveness of such operations. To address this, we have created a multi-modal mobile navigation system that uses ruggedized beacons to mark assembly points and smartphones to assist in navigating to these points while minimizing cognitive load and maximizing situational awareness. To achieve this, we implemented a novel beacon receiver protocol that allows an unlimited number of receivers to listen to the encrypted beaconing message using only ad-hoc Wi-Fi technologies. The system was evaluated by U.S. Army Paratroopers and proved quick to learn and efficient at moving Soldiers to navigation waypoints. Beyond military operations, this system could be applied to any task that requires the assembly and coordination of many individuals or teams, such as emergency evacuations, fighting wildfires, or locating airdropped humanitarian aid.
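The abstract does not describe the protocol itself; the sketch below is purely illustrative (not the authors' implementation) and shows one way a beacon could broadcast an already-encrypted message over UDP on an ad-hoc Wi-Fi link so that any number of receivers can listen passively. The port number, payload, and message handling are hypothetical.

```cpp
// Illustrative sketch only (not the authors' protocol): a beacon broadcasts an
// already-encrypted message over UDP so that any number of receivers on the same
// ad-hoc Wi-Fi network can listen passively. Port and payload are hypothetical.
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdio>
#include <string>

static const int kBeaconPort = 4950;  // hypothetical port

// Send one beacon message per second to the link-local broadcast address.
// `ciphertext` stands in for the encrypted assembly-point announcement.
void run_beacon(const std::string& ciphertext) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    int yes = 1;
    setsockopt(sock, SOL_SOCKET, SO_BROADCAST, &yes, sizeof(yes));

    sockaddr_in dest{};
    dest.sin_family = AF_INET;
    dest.sin_port = htons(kBeaconPort);
    dest.sin_addr.s_addr = inet_addr("255.255.255.255");

    for (;;) {
        sendto(sock, ciphertext.data(), ciphertext.size(), 0,
               reinterpret_cast<const sockaddr*>(&dest), sizeof(dest));
        sleep(1);
    }
}

// A receiver only binds and listens; nothing limits how many receivers join.
void run_receiver() {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(kBeaconPort);
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    bind(sock, reinterpret_cast<const sockaddr*>(&addr), sizeof(addr));

    char buf[1500];
    for (;;) {
        ssize_t n = recvfrom(sock, buf, sizeof(buf), 0, nullptr, nullptr);
        if (n > 0)
            std::printf("received %zd-byte beacon; decrypt and update waypoint UI\n", n);
    }
}

int main(int argc, char** argv) {
    if (argc > 1 && std::string(argv[1]) == "beacon")
        run_beacon("ENCRYPTED-WAYPOINT-MESSAGE");
    else
        run_receiver();
    return 0;
}
```

Because the sender only transmits to the broadcast address and receivers merely bind and listen, nothing in the exchange limits how many receivers can join, which is consistent with the abstract's claim of supporting an unlimited number of listeners.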
Citations: 2
UnoJoy!: a library for rapid video game prototyping using arduino
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2479512
Alan D. Chatham, W. Walmink, F. Mueller
UnoJoy! is a free, open-source library for the Arduino Uno platform allowing users to rapidly prototype system-native video game controllers. Using standard Arduino code, users assign inputs to button presses, and then the user can run a program to overwrite the Arduino firmware, allowing the Arduino to register as a native game controller for Windows, OSX, and Playstation 3. Focusing on ease of use, the library allows researchers and interaction designers to quickly experiment with novel interaction methods while using high-quality commercial videogames. In our practice, we have used it to add exertion-based controls to existing games and to explore how different controllers can affect the social experience of video games. We hope this tool can help other researchers and designers deepen our understanding of game interaction mechanics by making controller design simple.
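As a rough illustration of the workflow described above, a minimal sketch in the style of the UnoJoy examples might look like the following; the pin wiring is arbitrary, and the function and field names (setupUnoJoy, getBlankDataForController, setControllerData, crossOn) are recalled from the library's documentation and should be verified against its current headers.

```cpp
// Minimal UnoJoy-style Arduino sketch (illustrative; the wiring is arbitrary and
// the function/field names should be checked against the UnoJoy headers).
#include "UnoJoy.h"

const int kJumpButtonPin = 2;             // hypothetical: pushbutton wired to ground

void setup() {
  pinMode(kJumpButtonPin, INPUT_PULLUP);  // pressed reads LOW
  setupUnoJoy();                          // start UnoJoy's controller-data protocol
}

void loop() {
  dataForController_t controllerData = getBlankDataForController();
  // Map the physical input onto a controller button press.
  controllerData.crossOn = (digitalRead(kJumpButtonPin) == LOW);
  setControllerData(controllerData);      // hand the state to UnoJoy
}
```

After uploading a sketch like this, the separate firmware-overwriting step mentioned in the abstract is what makes the board enumerate as a native game controller on the host.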
Citations: 6
Posture training with real-time visual feedback
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2479629
Brett Taylor, M. Birk, R. Mandryk, Z. Ivkovic
Our posture affects us in a number of surprising and unexpected ways by influencing how we handle stress and how confident we feel. But it is difficult for people to maintain good posture. We present a non-invasive posture training system using an Xbox Kinect sensor. We provide real-time visual feedback at two levels of fidelity.
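The abstract does not say how posture is measured; as one hypothetical illustration (not the authors' method), a simple posture metric can be derived from two skeleton joints of the kind the Kinect reports, such as the lean angle of the torso:

```cpp
// Hypothetical posture metric (not from the paper): the lean angle of the torso,
// computed from two skeleton joints of the kind the Kinect reports.
#include <cmath>
#include <cstdio>

struct Joint { double x, y, z; };         // metres; y points up

// Angle in degrees between the hip->shoulder-centre vector and the vertical axis.
double torsoLeanDegrees(const Joint& hip, const Joint& shoulderCentre) {
    const double kRadToDeg = 180.0 / 3.14159265358979323846;
    double vx = shoulderCentre.x - hip.x;
    double vy = shoulderCentre.y - hip.y;
    double vz = shoulderCentre.z - hip.z;
    double len = std::sqrt(vx * vx + vy * vy + vz * vz);
    return std::acos(vy / len) * kRadToDeg;   // dot product with (0, 1, 0)
}

int main() {
    Joint hip{0.00, 0.90, 2.50}, shoulder{0.05, 1.40, 2.35};
    double lean = torsoLeanDegrees(hip, shoulder);
    // A feedback loop could change an on-screen indicator once a threshold is crossed.
    std::printf("torso lean: %.1f deg -> %s\n", lean, lean > 15.0 ? "slouching" : "ok");
    return 0;
}
```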
Citations: 12
Dynamic duo: phone-tablet interaction on tabletops
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2479520
T. Piazza, Shengdong Zhao, Gonzalo A. Ramos, A. Yantaç, M. Fjeld
As an increasing number of users carry smartphones and tablets simultaneously, there is an opportunity to leverage these two form factors in a more complementary way. Our work aims to explore this by a) defining the design space of distributed input and output solutions that rely on and benefit from phone-tablet combinations working together physically and digitally; and b) revealing the idiosyncrasies of each particular device combination via interactive prototypes. Our research provides actionable insight in this emerging area by defining a design space, suggesting a mobile framework, and implementing prototypical applications in such areas as distributed information display, distributed control, and combinations of these. For each of these, we show a few example techniques and demonstrate an application combining more techniques.
Citations: 7
VISO: a shared, formal knowledge base as a foundation for semi-automatic infovis systems
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2468677
Jan Polowinski, M. Voigt
Interactive visual analytic systems can help to solve the problem of identifying relevant information in the growing amount of data. For guiding the user through visualization tasks, these semi-automatic systems need to store and use knowledge of this interdisciplinary domain. Unfortunately, visualization knowledge stored in one system cannot easily be reused in another due to a lack of shared formal models. To address this problem, we introduce a visualization ontology (VISO) that formally models visualization-specific concepts and facts. Furthermore, we give first examples of the ontology's use within two systems and highlight how the community can get involved in extending and improving it.
Citations: 19
TouchShield: a virtual control for stable grip of a smartphone using the thumb
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2468589
Jonggi Hong, Geehyuk Lee
People commonly manipulate their smartphones using the thumb, but this is often done with an unstable grip in which the phone rests on the fingers while the thumb hovers over the touch screen. In order to offer a secure and stable grip, we designed a virtual control called TouchShield, which provides a place where the thumb can pin the phone down to provide a stable grip. In a user study, we confirmed that this form of control does not interfere with existing touch screen operations and that TouchShield can enable a more stable grip. An incidental function of TouchShield is that it provides shortcuts to frequently used commands via the thumb, a function that was also shown to be effective in the user study.
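The abstract gives no implementation details; the sketch below is a hypothetical illustration (not the authors' code) of the core idea: a screen region that consumes touches so a resting thumb does not trigger the application, with shortcut slots laid out inside the region.

```cpp
// Illustrative only (not the authors' implementation): a screen region that
// consumes thumb contact so resting the thumb does not trigger the application,
// with hypothetical shortcut slots laid out inside the region.
#include <cstdio>

struct Rect { float x, y, w, h; };

bool contains(const Rect& r, float px, float py) {
    return px >= r.x && px <= r.x + r.w && py >= r.y && py <= r.y + r.h;
}

// Returns true if the touch was consumed by the virtual control.
bool handleTouch(const Rect& shield, float px, float py) {
    if (!contains(shield, px, py)) return false;          // pass through to the app
    int slot = static_cast<int>((px - shield.x) / (shield.w / 3.0f));
    std::printf("shield consumed touch; shortcut slot %d\n", slot);
    return true;
}

int main() {
    Rect shield{0.0f, 800.0f, 300.0f, 160.0f};            // near a bottom corner, in pixels
    handleTouch(shield, 120.0f, 880.0f);                  // consumed: thumb anchor area
    handleTouch(shield, 400.0f, 400.0f);                  // ignored: normal app touch
    return 0;
}
```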
Citations: 9
The future of personal video communication: moving beyond talking heads to shared experiences
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2479658
Erick Oduor, Carman Neustaedter, Gina Venolia, Tejinder K. Judge
Personal video communication systems such as Skype or FaceTime are becoming a common tool for family and friends to communicate and interact over distance. Yet many are designed to support only conversation, with a focus on displaying 'talking heads'. In this workshop, we want to discuss the opportunities and challenges in moving beyond this design paradigm to one where personal video communication systems can be used to share everyday experiences. By this we are referring to systems that might support shared dinners, shared television watching, or even remote participation in events such as weddings, parties, or graduations. The list could go on, as the future of personal video communication is ripe for exploration and discussion.
Citations: 14
Smarter objects: using AR technology to program physical objects and their interactions
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2468528
Valentin Heun, Shunichi Kasahara, P. Maes
The Smarter Objects system explores a new method for interaction with everyday objects. The system associates a virtual object with every physical object to support an easy means of modifying the interface and the behavior of that physical object as well as its interactions with other "smarter objects". As a user points a smart phone or tablet at a physical object, an augmented reality (AR) application recognizes the object and offers an intuitive graphical interface to program the object's behavior and interactions with other objects. Once reprogrammed, the Smarter Object can then be operated with a simple tangible interface (such as knobs, buttons, etc). As such Smarter Objects combine the adaptability of digital objects with the simple tangible interface of a physical object. We have implemented several Smarter Objects and usage scenarios demonstrating the potential of this approach.
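As a hypothetical illustration of the association the abstract describes (not the authors' system), each recognized physical object could be backed by an editable record that stores its programmed behavior and its links to other smarter objects:

```cpp
// Illustrative data model only (not the authors' system): each recognised physical
// object is backed by a virtual record holding its editable behaviour and its links
// to other smarter objects.
#include <cstdio>
#include <map>
#include <string>
#include <vector>

struct VirtualObject {
    std::string behaviour;               // e.g. "map rotation to volume"
    std::vector<std::string> linkedTo;   // ids of other smarter objects
};

int main() {
    // Keyed by whatever id the AR recogniser assigns to the physical object.
    std::map<std::string, VirtualObject> registry;
    registry["radio-knob"] = {"map rotation to volume", {"kitchen-speaker"}};

    // "Reprogramming" through the AR interface amounts to editing the record;
    // the tangible knob keeps working, now with the new meaning.
    registry["radio-knob"].behaviour = "map rotation to playlist position";

    for (const auto& [id, obj] : registry)
        std::printf("%s: %s (linked to %zu object(s))\n",
                    id.c_str(), obj.behaviour.c_str(), obj.linkedTo.size());
    return 0;
}
```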
Citations: 85
Enhancing visuospatial attention performance with brain-computer interfaces
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2468579
R. Trachel, T. Brochier, Maureen Clerc
Visuospatial attention is often investigated through features related to the head or the gaze during Human-Computer Interaction (HCI). However, the focus of attention can be dissociated from overt responses such as eye movements, making it impossible to detect from behavioral data alone. Electroencephalography (EEG), in contrast, can provide valuable information about covert aspects of spatial attention. We therefore propose an innovative approach toward developing a Brain-Computer Interface (BCI) to enhance human reaction speed and accuracy. This poster presents an offline evaluation of the approach based on physiological data recorded in a visuospatial attention experiment. Finally, we discuss a future interface that could enhance HCI by displaying visual information at the focus of attention.
Citations: 8
HCI with sports
Pub Date : 2013-04-27 DOI: 10.1145/2468356.2468817
F. Mueller, R. A. Khot, Alan D. Chatham, S. Pijnappel, Cagdas Toprak, Joe Marshall
Recent advances in cheap sensor technology have made technology support for sports and physical exercise increasingly commonplace, which is evident from the growing popularity of heart rate monitors and GPS sports watches. This rise of technology to support sports activities raises many interaction issues, such as how to interact with these devices while moving and physically exerting. This special interest group brings together industry practitioners and researchers who are interested in designing and understanding human-computer interaction where the human is physically active, engaging in exertion activities. Fitting with the theme, this special interest group will be "run" while running: participants will be invited to jog together, during which we will discuss technology interaction that is specific to being physically active whilst being physically active ourselves.
Citations: 19