We are very excited to welcome you to the 28th Annual ACM Symposium on User Interface Software and Technology (UIST), held from November 8-11th 2015, in Charlotte, North Carolina, USA. UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from diverse areas including graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, fabrication, wearable computing and CSCW. UIST 2015 received 297 technical paper submissions. After a thorough review process, the 39-member program committee accepted 70 papers (23.6%). Each anonymous submission that entered the full review process was first reviewed by three external reviewers, and a meta-review was provided by a program committee member. If, after these four reviews, the submission was deemed to pass a rebuttal threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. A second member of the program committee was then asked to examine the paper, rebuttal, and reviews, and to provide their own meta-review. The program committee met in person in Berkeley, California, USA on June 25th and 26th, 2015, to select which papers to invite for the program. Submissions were accepted only after the authors provided a final revision addressing the committee's comments. In addition to papers, our program includes two papers from the ACM Transactions on Computer-Human Interaction journal (TOCHI), as well as 22 posters, 45 demonstrations, and 8 student presentations in the eleventh annual Doctoral Symposium. Our program also features the seventh annual Student Innovation Contest. Teams from all over the world will compete in this year's contest, which focuses on blurring the lines between art and engineering and creating tools for robotic storytelling. UIST 2015 will feature two keynote presentations. The opening keynote will be given by Ramesh Raskar (MIT Media Lab) on extreme computational imaging. Blaise Aguera Y Arcas from Google will deliver the closing keynote on machine intelligence. We welcome you to Charlotte, a city full of southern hospitality. We hope that you will find the technical program interesting and thought-provoking. We also hope that UIST 2015 will provide you with enjoyable opportunities to engage with fellow researchers from both industry and academia, from institutions around the world.
{"title":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","authors":"C. Latulipe, Bjoern Hartmann, Tovi Grossman","doi":"10.1145/2807442","DOIUrl":"https://doi.org/10.1145/2807442","url":null,"abstract":"We are very excited to welcome you to the 28th Annual ACM Symposium on User Interface Software and Technology (UIST), held from November 8-11th 2015, in Charlotte, North Carolina, USA. \u0000 \u0000UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from diverse areas including graphical & web user interfaces, tangible & ubiquitous computing, virtual & augmented reality, multimedia, new input & output devices, fabrication, wearable computing and CSCW. \u0000 \u0000UIST 2015 received 297 technical paper submissions. After a thorough review process, the 39-member program committee accepted 70 papers (23.6%). Each anonymous submission that entered the full review process was first reviewed by three external reviewers, and a meta-review was provided by a program committee member. If, after these four reviews, the submission was deemed to pass a rebuttal threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns. A second member of the program committee was then asked to examine the paper, rebuttal, and reviews, and to provide their own meta-review. The program committee met in person in Berkeley, California, USA on June 25th and 26th, 2015, to select which papers to invite for the program. Submissions were accepted only after the authors provided a final revision addressing the committee's comments. \u0000 \u0000In addition to papers, our program includes two papers from the ACM Transactions on Computer-Human Interaction journal (TOCHI), as well as 22 posters, 45 demonstrations, and 8 student presentations in the eleventh annual Doctoral Symposium. Our program also features the seventh annual Student Innovation Contest. Teams from all over the world will compete in this year's contest, which focuses on blurring the lines between art and engineering and creating tools for robotic storytelling. UIST 2015 will feature two keynote presentations. The opening keynote will be given by Ramesh Raskar (MIT Media Lab) on extreme computational imaging. Blaise Aguera Y Arcas from Google will deliver the closing keynote on machine intelligence. \u0000 \u0000We welcome you to Charlotte, a city full of southern hospitality. We hope that you will find the technical program interesting and thought-provoking. 
We also hope that UIST 2015 will provide you with enjoyable opportunities to engage with fellow researchers from both industry and academia, from institutions around the world.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129876330","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents FlexiBend, an easily installable shape-sensing strip that enables interactivity of multi-part, deformable fabrications. The flexible sensor strip is composed of a dense linear array of strain gauges and therefore has shape-sensing capability. After installation, FlexiBend can simultaneously sense user inputs in different parts of a fabrication, or even capture the geometry of a deformable fabrication.
{"title":"FlexiBend: Enabling Interactivity of Multi-Part, Deformable Fabrications Using Single Shape-Sensing Strip","authors":"Chin-yu Chien, Rong-Hao Liang, Long-Fei Lin, Liwei Chan, Bing-Yu Chen","doi":"10.1145/2807442.2807456","DOIUrl":"https://doi.org/10.1145/2807442.2807456","url":null,"abstract":"This paper presents FlexiBend, an easily installable shape-sensing strip that enables interactivity of multi-part, deformable fabrications. The flexible sensor strip is composed of a dense linear array of strain gauges, therefore it has shape sensing capability. After installation, FlexiBend can simultaneously sense user inputs in different parts of a fabrication or even capture the geometry of a deformable fabrication.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128594048","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior. The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.
{"title":"Explaining Visual Changes in Web Interfaces","authors":"Brian Burg, Amy J. Ko, Michael D. Ernst","doi":"10.1145/2807442.2807473","DOIUrl":"https://doi.org/10.1145/2807442.2807473","url":null,"abstract":"Web developers often want to repurpose interactive behaviors from third-party web pages, but struggle to locate the specific source code that implements the behavior. This task is challenging because developers must find and connect all of the non-local interactions between event-based JavaScript code, declarative CSS styles, and web page content that combine to express the behavior. The Scry tool embodies a new approach to locating the code that implements interactive behaviors. A developer selects a page element; whenever the element changes, Scry captures the rendering engine's inputs (DOM, CSS) and outputs (screenshot) for the element. For any two captured element states, Scry can compute how the states differ and which lines of JavaScript code were responsible. Using Scry, a developer can locate an interactive behavior's implementation by picking two output states; Scry indicates the JavaScript code directly responsible for their differences.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"58 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124568610","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ohan Oda, Carmine Elvezio, Mengu Sukan, Steven K. Feiner, B. Tversky
In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.
{"title":"Virtual Replicas for Remote Assistance in Virtual and Augmented Reality","authors":"Ohan Oda, Carmine Elvezio, Mengu Sukan, Steven K. Feiner, B. Tversky","doi":"10.1145/2807442.2807497","DOIUrl":"https://doi.org/10.1145/2807442.2807497","url":null,"abstract":"In many complex tasks, a remote subject-matter expert may need to assist a local user to guide actions on objects in the local user's environment. However, effective spatial referencing and action demonstration in a remote physical environment can be challenging. We introduce two approaches that use Virtual Reality (VR) or Augmented Reality (AR) for the remote expert, and AR for the local user, each wearing a stereo head-worn display. Both approaches allow the expert to create and manipulate virtual replicas of physical objects in the local environment to refer to parts of those physical objects and to indicate actions on them. This can be especially useful for parts that are occluded or difficult to access. In one approach, the expert points in 3D to portions of virtual replicas to annotate them. In another approach, the expert demonstrates actions in 3D by manipulating virtual replicas, supported by constraints and annotations. We performed a user study of a 6DOF alignment task, a key operation in many physical task domains, comparing both approaches to an approach in which the expert uses a 2D tablet-based drawing system similar to ones developed for prior work on remote assistance. The study showed the 3D demonstration approach to be faster than the others. In addition, the 3D pointing approach was faster than the 2D tablet in the case of a highly trained expert.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"42 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120964154","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Printem film, a novel method for the fabrication of Printed Circuit Boards (PCBs) for small-batch/prototyping use, is presented. Printem film enables a standard office inkjet or laser printer, using standard inks, to produce a PCB: the user prints a negative of the PCB onto the film, exposes it to UV or sunlight, and then tears away the unneeded portion of the film, leaving behind a copper PCB. PCBs produced with Printem film are as conductive as PCBs created using standard industrial methods. Herein, the composition of Printem film is described, and the advantages of various materials are discussed. Sample applications are also described, each of which demonstrates a unique advantage of Printem film over current prototyping methods: conductivity, flexibility, the ability to be cut with a pair of scissors, and the ability to be mounted to a rigid backplane.
{"title":"Printem: Instant Printed Circuit Boards with Standard Office Printers & Inks","authors":"Perumal Varun Chadalavada, Daniel J. Wigdor","doi":"10.1145/2807442.2807511","DOIUrl":"https://doi.org/10.1145/2807442.2807511","url":null,"abstract":"Printem film, a novel method for the fabrication of Printed Circuit Boards (PCBs) for small batch/prototyping use, is presented. Printem film enables a standard office inkjet or laser printer, using standard inks, to produce a PCB: the user prints a negative of the PCB onto the film, exposes it to UV or sunlight, and then tears-away the unneeded portion of the film, leaving-behind a copper PCB. PCBs produced with Printem film are as conductive as PCBs created using standard industrial methods. Herein, the composition of Printem film is described, and advantages of various materials discussed. Sample applications are also described, each of which demonstrates some unique advantage of Printem film over current prototyping methods: conductivity, flexibility, the ability to be cut with a pair of scissors, and the ability to be mounted to a rigid backplane. NOTE: publication of full-text held until November 9, 2015.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"97 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116273601","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
M. Annett, Tovi Grossman, Daniel J. Wigdor, G. Fitzmaurice
In this work, we explore moveables, i.e., interactive papercraft that harness user interaction to generate visual effects. First, we present a survey of children's books that captured the state of the art of moveables. The results of this survey were synthesized into a moveable taxonomy and informed MoveableMaker, a new tool to assist users in designing, generating, and assembling moveable papercraft. MoveableMaker supports the creation and customization of a number of moveable effects and employs moveable-specific features including animated tooltips, automatic instruction generation, constraint-based rendering, techniques to reduce material waste, and so on. To understand how MoveableMaker encourages creativity and enhances the workflow when creating moveables, a series of exploratory workshops were conducted. The results of these explorations, including the content participants created and their impressions, are discussed, along with avenues for future research involving moveables.
{"title":"MoveableMaker: Facilitating the Design, Generation, and Assembly of Moveable Papercraft","authors":"M. Annett, Tovi Grossman, Daniel J. Wigdor, G. Fitzmaurice","doi":"10.1145/2807442.2807483","DOIUrl":"https://doi.org/10.1145/2807442.2807483","url":null,"abstract":"In this work, we explore moveables, i.e., interactive papercraft that harness user interaction to generate visual effects. First, we present a survey of children's books that captured the state of the art of moveables. The results of this survey were synthesized into a moveable taxonomy and informed MoveableMaker, a new tool to assist users in designing, generating, and assembling moveable papercraft. MoveableMaker supports the creation and customization of a number of moveable effects and employs moveable-specific features including animated tooltips, automatic instruction generation, constraint-based rendering, techniques to reduce material waste, and so on. To understand how MoveableMaker encourages creativity and enhances the workflow when creating moveables, a series of exploratory workshops were conducted. The results of these explorations, including the content participants created and their impressions, are discussed, along with avenues for future research involving moveables.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"205 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116497006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Dongwook Yoon, K. Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, M. Pahud, M. Gavriliu
The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.
{"title":"Sensing Tablet Grasp + Micro-mobility for Active Reading","authors":"Dongwook Yoon, K. Hinckley, Hrvoje Benko, François Guimbretière, Pourang Irani, M. Pahud, M. Gavriliu","doi":"10.1145/2807442.2807510","DOIUrl":"https://doi.org/10.1145/2807442.2807510","url":null,"abstract":"The orientation and repositioning of physical artefacts (such as paper documents) to afford shared viewing of content, or to steer the attention of others to specific details, is known as micro-mobility. But the role of grasp in micro-mobility has rarely been considered, much less sensed by devices. We therefore employ capacitive grip sensing and inertial motion to explore the design space of combined grasp + micro-mobility by considering three classes of technique in the context of active reading. Single user, single device techniques support grip-influenced behaviors such as bookmarking a page with a finger, but combine this with physical embodiment to allow flipping back to a previous location. Multiple user, single device techniques, such as passing a tablet to another user or working side-by-side on a single device, add fresh nuances of expression to co-located collaboration. And single user, multiple device techniques afford facile cross-referencing of content across devices. Founded on observations of grasp and micro-mobility, these techniques open up new possibilities for both individual and collaborative interaction with electronic documents.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133501974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Head-mounted eye tracking has significant potential for gaze-based applications such as life logging, mental health monitoring, or the quantified self. A neglected challenge for the long-term recordings required by these applications is that drift in the initial person-specific eye tracker calibration, for example caused by physical activity, can severely impact gaze estimation accuracy and thus system performance and user experience. We first analyse calibration drift on a new dataset of natural gaze data recorded using synchronised video-based and Electrooculography-based eye trackers of 20 users performing everyday activities in a mobile setting. Based on this analysis we present a method to automatically self-calibrate head-mounted eye trackers based on a computational model of bottom-up visual saliency. Through evaluations on the dataset we show that our method 1) is effective in reducing calibration drift in calibrated eye trackers and 2) given sufficient data, can achieve gaze estimation accuracy competitive with that of a calibrated eye tracker, without any manual calibration.
{"title":"Self-Calibrating Head-Mounted Eye Trackers Using Egocentric Visual Saliency","authors":"Yusuke Sugano, A. Bulling","doi":"10.1145/2807442.2807445","DOIUrl":"https://doi.org/10.1145/2807442.2807445","url":null,"abstract":"Head-mounted eye tracking has significant potential for gaze-based applications such as life logging, mental health monitoring, or the quantified self. A neglected challenge for the long-term recordings required by these applications is that drift in the initial person-specific eye tracker calibration, for example caused by physical activity, can severely impact gaze estimation accuracy and thus system performance and user experience. We first analyse calibration drift on a new dataset of natural gaze data recorded using synchronised video-based and Electrooculography-based eye trackers of 20 users performing everyday activities in a mobile setting. Based on this analysis we present a method to automatically self-calibrate head-mounted eye trackers based on a computational model of bottom-up visual saliency. Through evaluations on the dataset we show that our method 1) is effective in reducing calibration drift in calibrated eye trackers and 2) given sufficient data, can achieve gaze estimation accuracy competitive with that of a calibrated eye tracker, without any manual calibration.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"64 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121894775","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Functional near-infrared spectroscopy (fNIRS) holds increasing potential for Brain-Computer Interfaces (BCI) due to its portability, ease of application, robustness to movement artifacts, and relatively low cost. The use of fNIRS to support the development of affective BCI has received comparatively less attention, despite the role played by the prefrontal cortex in affective control, and the appropriateness of fNIRS to measure prefrontal activity. We present an active, fNIRS-based neurofeedback (NF) interface, which uses differential changes in oxygenation between the left and right sides of the dorsolateral prefrontal cortex to operationalize BCI input. The system is activated by users generating a state of anger, which has been previously linked to increased left prefrontal asymmetry. We have incorporated this NF interface into an experimental platform adapted from a virtual 3D narrative, in which users can express anger at a virtual character perceived as evil, causing the character to disappear progressively. Eleven subjects used the system and were able to successfully perform NF despite minimal training. Extensive analysis confirms that success was associated with the intent to express anger. This has positive implications for the design of affective BCI based on prefrontal asymmetry.
{"title":"Anger-based BCI Using fNIRS Neurofeedback","authors":"Gabor Aranyi, Fred Charles, M. Cavazza","doi":"10.1145/2807442.2807447","DOIUrl":"https://doi.org/10.1145/2807442.2807447","url":null,"abstract":"Functional near-infrared spectroscopy (fNIRS) holds increasing potential for Brain-Computer Interfaces (BCI) due to its portability, ease of application, robustness to movement artifacts, and relatively low cost. The use of fNIRS to support the development of affective BCI has received comparatively less attention, despite the role played by the prefrontal cortex in affective control, and the appropriateness of fNIRS to measure prefrontal activity. We present an active, fNIRS-based neurofeedback (NF) interface, which uses differential changes in oxygenation between the left and right sides of the dorsolateral prefrontal cortex to operationalize BCI input. The system is activated by users generating a state of anger, which has been previously linked to increased left prefrontal asymmetry. We have incorporated this NF interface into an experimental platform adapted from a virtual 3D narrative, in which users can express anger at a virtual character perceived as evil, causing the character to disappear progressively. Eleven subjects used the system and were able to successfully perform NF despite minimal training. Extensive analysis confirms that success was associated with the intent to express anger. This has positive implications for the design of affective BCI based on prefrontal asymmetry.","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125405003","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Harshit Agrawal, Udayan Umapathi, Róbert Kovács, Johannes Frohnhofen, Hsiang-Ting Chen, Stefanie Müller, Patrick Baudisch
Physical sketching of 3D wireframe models, using a hand-held plastic extruder, allows users to explore the design space of 3D models efficiently. Unfortunately, the scale of these devices limits users' design explorations to small-scale objects. We present protopiper, a computer-aided, hand-held fabrication device that allows users to sketch room-sized objects at actual scale. The key idea behind protopiper is that it forms adhesive tape into tubes as its main building material, rather than extruded plastic or photopolymer lines. Since the resulting tubes are hollow, they offer an excellent strength-to-weight ratio and thus scale well to large structures. Since the tape is pre-coated with adhesive, tubes can be connected quickly, unlike extruded plastic, which would require heating and cooling in the kilowatt range. We demonstrate protopiper's use through several demo objects, ranging from more constructive objects, such as furniture, to more decorative objects, such as statues. In our exploratory user study, 16 participants created objects based on their own ideas. They rated the device as "useful for creative exploration" and "fun to use," and noted that its ability to sketch at actual scale helped them judge fit.
{"title":"Protopiper: Physically Sketching Room-Sized Objects at Actual Scale","authors":"Harshit Agrawal, Udayan Umapathi, Róbert Kovács, Johannes Frohnhofen, Hsiang-Ting Chen, Stefanie Müller, Patrick Baudisch","doi":"10.1145/2807442.2807505","DOIUrl":"https://doi.org/10.1145/2807442.2807505","url":null,"abstract":"Physical sketching of 3D wireframe models, using a hand-held plastic extruder, allows users to explore the design space of 3D models efficiently. Unfortunately, the scale of these devices limits users' design explorations to small-scale objects. We present protopiper, a computer aided, hand-held fabrication device, that allows users to sketch room-sized objects at actual scale. The key idea behind protopiper is that it forms adhesive tape into tubes as its main building material, rather than extruded plastic or photopolymer lines. Since the resulting tubes are hollow they offer excellent strength-to-weight ratio, thus scale well to large structures. Since the tape is pre-coated with adhesive it allows connecting tubes quickly, unlike extruded plastic that would require heating and cooling in the kilowatt range. We demonstrate protopiper's use through several demo objects, ranging from more constructive objects, such as furniture, to more decorative objects, such as statues. In our exploratory user study, 16 participants created objects based on their own ideas. They rated the device as being \"useful for creative exploration\", \"its ability to sketch at actual scale helped judge fit\", and \"fun to use.\"","PeriodicalId":103668,"journal":{"name":"Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2015-11-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126091910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}