
Adjunct Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology: Latest Publications

A Match Made in Heaven: Streaming Real-time Imagery from a Lightfield Camera to a Lightfield Display
Alex N. Hornstein, Evan Moore, Kai-han Chang
We demonstrate a novel design for a live lightfield camera that captures a scene and shows the captured lightfield in real time on a lightfield display. The simple, distributed design of the camera allows for low-cost construction of an array of 2D cameras that captures high-quality, artifact-free imagery of the most challenging subjects. This camera takes advantage of the natural duality between outside-in lightfield cameras and inside-out lightfield displays, letting us render complex lightfield imagery with a minimum of processing power.
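The duality the abstract describes can be made concrete with a toy sketch: in an outside-in camera array paired with an inside-out display, each 2D camera's frame can be routed directly to the display view with the matching index, with no intermediate view synthesis. The function and frame names below are illustrative, not the authors' implementation.

```python
# Toy sketch of camera-to-display view routing (illustrative names):
# each camera in the outside-in array feeds exactly one view of the
# inside-out display, so per-frame processing is a dictionary lookup.

def route_views(camera_frames, num_display_views):
    """Map each captured frame to the display view with the same
    angular index; frames without a matching view are dropped."""
    display_views = {}
    for cam_index, frame in camera_frames.items():
        if cam_index < num_display_views:
            display_views[cam_index] = frame  # direct 1:1 routing
    return display_views

frames = {0: "frame_cam0", 1: "frame_cam1", 2: "frame_cam2"}
views = route_views(frames, num_display_views=3)
```

Because rendering reduces to routing, the per-frame processing cost stays minimal regardless of scene complexity, which is the property the abstract emphasizes.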
Citations: 0
A New Approach to Studying Sleep in Autonomous Vehicles: Simulating the Waking Situation
Won Kim, Seungjun Kim
In this paper, we present a novel methodology for simulating the physical and cognitive demands that individuals experience when waking from sleep. Better understanding this scenario has significant implications for research in Autonomous Vehicles (AV), where prior research has shown that many drivers would like to sleep while the vehicle is in operation. Our experiment setup replicates the waking situation in two ways: (1) subjects wear a sleep shade (physical demand) for 3 sessions (5 min, 8 min, and 11 min) in randomly assigned order, after which (2) they view a screen (cognitive demand) that fades from blurry to clear over a 10 s timeframe. We compared subjects' in-study experiences to the physical and cognitive conditions they experience when waking in real life. Our experiment setup was rated highly for effectiveness and appropriateness in replicating the waking situation. The findings will inform scenario design in future AV studies and can be adopted in other fields as well.
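The two-part protocol above can be sketched in a few lines. The linear fade profile is an assumption (the abstract specifies only a blurry-to-clear fade over 10 s), and all names are illustrative, not the authors' code.

```python
import random

SESSION_LENGTHS_MIN = [5, 8, 11]   # sleep-shade sessions from the abstract
FADE_DURATION_S = 10.0             # blurry-to-clear fade on "waking"

def session_order(seed=None):
    """Randomly assign the order of the three sleep-shade sessions."""
    order = SESSION_LENGTHS_MIN[:]
    random.Random(seed).shuffle(order)
    return order

def blur_amount(t):
    """Blur level in [0, 1] at t seconds into the fade:
    1.0 = fully blurred, 0.0 = fully clear (linear ramp, an assumption)."""
    return max(0.0, 1.0 - t / FADE_DURATION_S)
```

A session runner would draw an order once per participant, then drive the screen's blur filter with `blur_amount` while logging responses.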
Citations: 2
Estimating Focused Object using Smooth Pursuit Eye Movements and Interest Points in the Real World
Yuto Tamura, K. Takemura
User calibration is a significant problem in eye-based interaction. To overcome it, several solutions, such as calibration-free methods and implicit user calibration, have been proposed. Pursuits-based interaction is another such solution, which has been studied for public screens and virtual reality. It has been applied to selection in graphical user interfaces (GUIs) because the movements in a GUI can be designed in advance. Smooth pursuit eye movements (smooth pursuits) also occur when a user looks at objects in physical space; we therefore propose a method that identifies the focused object by using smooth pursuits in the real world. We attempted to determine the focused objects without prior information under several conditions using the pursuits-based approach, and we confirmed the feasibility and limitations of the proposed method through experimental evaluations.
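The core idea behind pursuits-based matching is to correlate the gaze trajectory with each candidate object's trajectory over a time window and pick the best match. A minimal 1-D sketch using Pearson correlation follows; real systems typically correlate the horizontal and vertical components separately, and the threshold value here is an assumption, not one from the paper.

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = (sum((x - mx) ** 2 for x in xs)
           * sum((y - my) ** 2 for y in ys)) ** 0.5
    return cov / var if var else 0.0

def focused_object(gaze, object_tracks, threshold=0.8):
    """Return the key of the interest-point track whose trajectory
    correlates best with the gaze trajectory, or None if no track
    exceeds the correlation threshold."""
    best_key, best_r = None, threshold
    for key, track in object_tracks.items():
        r = pearson(gaze, track)
        if r > best_r:
            best_key, best_r = key, r
    return best_key
```

With no user calibration, only the relative motion of gaze and object matters, which is what makes the approach calibration-free.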
Citations: 4
Towards Universal Evaluation of Image Annotation Interfaces
Andrew M. Vernier, Jean Y. Song, Edward Sun, A. Kench, Walter S. Lasecki
To guide the design of interactive image annotation systems that generalize to new domains and applications, we need ways to evaluate the capabilities of new annotation tools across a range of different image, content, and task domains. In this work, we introduce Corsica, a test harness for image annotation tools that uses calibration images to evaluate a tool's capabilities on general image properties and task requirements. Corsica comprises sets of three key components: 1) synthesized images with visual elements that are not domain-specific, 2) target microtasks that connect the visual elements and tools for evaluation, and 3) ground truth data for each microtask and visual element pair. By introducing a specification for calibration images and microtasks, we aim to create an evolving repository that allows the community to propose new evaluation challenges. Our work aims to facilitate the robust verification of image annotation tools and techniques.
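The three-component structure can be sketched as a data layout plus a tiny scorer. The image names, microtask labels, and scoring rule below are hypothetical illustrations, not Corsica's actual specification.

```python
# Hypothetical calibration set: each case pairs a synthesized,
# domain-agnostic image with one microtask and its ground truth.
CALIBRATION_SET = [
    {
        "image": "grid_of_circles.png",
        "microtask": "count_circles",
        "ground_truth": 12,
    },
    {
        "image": "overlapping_rects.png",
        "microtask": "outline_largest_rect",
        "ground_truth": [(10, 10), (90, 10), (90, 60), (10, 60)],
    },
]

def score_tool(tool_outputs):
    """Fraction of microtasks where a tool's output matches ground truth.
    tool_outputs maps (image, microtask) pairs to the tool's answer."""
    correct = sum(
        1 for case in CALIBRATION_SET
        if tool_outputs.get((case["image"], case["microtask"]))
        == case["ground_truth"]
    )
    return correct / len(CALIBRATION_SET)
```

A shared specification like this is what would let the community add new calibration cases without changing the harness itself.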
Citations: 1
Extending AR Interaction through 3D Printed Tangible Interfaces in an Urban Planning Context
Marla Narazani, Chloe Eghtebas, G. Klinker, Sarah L. Jenney, Michael Mühlhaus, F. Petzold
Embedding conductive material into 3D printed objects enables non-interactive objects to become tangible without the need to attach additional components. We present a novel use for such touch-sensitive objects in an augmented reality (AR) setting and explore the use of gestures for enabling different types of interaction with digital and physical content. In our demonstration, the setting is an urban planning scenario. The multi-material 3D printed buildings consist of thin layers of white plastic filament and a conductive wireframe that enables touch gestures. Attendees can interact either with the physical model or with the mobile AR interface to select, add, or delete buildings.
Citations: 6
Pre-screen: Assisting Material Screening in Early-stage of Video Editing
Qian Zhu, Shuai Ma, Cuixia Ma
Video editing is a difficult task for both professional and amateur editors. One of the biggest reasons is that screening useful clips from raw material in the early stage of editing is extremely time-consuming and laborious. To better understand the difficulties users currently face in editing tasks, we first conducted a pilot study involving a survey and interviews with 20 participants. Based on the results, we then designed Pre-screen, a novel tool that provides users with both global-view and detailed-view video analysis as well as material screening features based on intelligent video processing, analysis, and visualization methods. A user study shows that Pre-screen not only effectively helps users screen and arrange raw video material, saving far more time than a widely used video editing tool in video editing tasks, but also inspires and satisfies users.
Citations: 0
Voice Input Interface Failures and Frustration: Developer and User Perspectives
Shiyoh Goetsu, T. Sakai
We identify different types of failures in a voice user interface application, from both developer and user perspectives. Our preliminary experiment suggests that user-perceived Pattern Match Failure may have a strong negative effect on user frustration; based on this result, we conduct power analysis to obtain more conclusive results in a future experiment.
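As a reminder of what such a power analysis involves, here is a minimal sketch of a per-group sample-size calculation for a two-sample comparison under the normal approximation, at a two-sided alpha of .05 and 80% power. The abstract does not specify the test or parameters the authors used, so these values are assumptions.

```python
import math

def sample_size_per_group(effect_size, z_alpha=1.96, z_beta=0.8416):
    """Per-group n for a two-sample comparison via the normal
    approximation: n = 2 * ((z_alpha + z_beta) / d)^2, where d is
    Cohen's d, z_alpha is the critical value for a two-sided .05
    test, and z_beta corresponds to 80% power."""
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
```

For a medium effect (d = 0.5) this gives 63 participants per group; a dedicated routine such as statsmodels' TTestIndPower yields slightly larger values because it uses the t distribution rather than the normal approximation.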
Citations: 8
Lucid Fabrication
Rundong Tian
Advances in digital fabrication have created new capabilities and simultaneously reinforced outdated workflows. In my thesis work, I primarily explore alternative workflows for digital fabrication that introduce new capabilities and interactions. Methodologically, I build fabrication systems spanning mechanical design, electronics, and software in order to examine these ideas in specific detail. In this paper, I introduce related work and frame it within the historical context of digital fabrication, and discuss my previous and ongoing work.
Citations: 0
Advancing Accessible 3D Design for the Blind and Visually-Impaired via Tactile Shape Displays
A. Siu
Affordable rapid 3D printing technologies have become key enablers of the Maker Movement by giving individuals the ability to create physical finished products. However, existing computer-aided design (CAD) tools that allow authoring and editing of 3D models are mostly visually reliant and limit access to people with blindness and visual impairment (BVI). In this paper I address three areas of research that I will conduct as part of my PhD dissertation towards bridging a gap between blind and sighted makers. My dissertation aims to create an accessible 3D design and printing workflow for BVI people through the use of 2.5D tactile displays, and to rigorously understand how BVI people use the workflow in the context of perception, interaction, and learning.
Citations: 4
Demo of AuraRing: Precise Electromagnetic Finger Tracking
Farshid Salemi Parizi, Eric Whitmire, Alvin Cao, Tianke Li, Shwetak N. Patel
We present AuraRing, a wearable electromagnetic tracking system for fine-grained finger movement. The hardware consists of a ring with an embedded electromagnetic transmitter coil and a wristband with multiple sensor coils. By measuring the magnetic fields at different points around the wrist, AuraRing estimates the five degree-of-freedom pose of the finger. AuraRing is trained only on simulated data and requires no runtime supervised training, ensuring user and session independence. AuraRing has a resolution of 0.1 mm and a dynamic accuracy of 4.4 mm, as measured through a user evaluation with optical ground truth. The ring is completely self-contained and consumes just 2.3 mW of power.
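AuraRing's pipeline recovers a 5-DoF pose from multiple coil measurements, which is an inverse problem over a magnetic field model. As a deliberately simplified illustration of that idea, the 1-D sketch below fits a ring position to sensor readings under a dipole-like falloff; the sensor layout, units, and grid-search solver are all assumptions for the sketch, not AuraRing's actual method.

```python
SENSOR_POSITIONS = [0.0, 10.0, 20.0, 30.0]  # mm along the wristband (illustrative)
MOMENT = 1000.0                              # transmitter strength (arbitrary units)

def field_at(sensor_x, ring_x):
    """Dipole-like field magnitude: falls off with the cube of distance.
    A simplified 1-D stand-in for the full 5-DoF field model."""
    r = abs(ring_x - sensor_x) + 1.0   # +1 mm offset avoids the singularity
    return MOMENT / r ** 3

def estimate_ring_position(readings, lo=0.0, hi=30.0, step=0.05):
    """Grid-search the ring position that best explains the readings
    in the least-squares sense."""
    best_x, best_err = lo, float("inf")
    x = lo
    while x <= hi:
        err = sum((field_at(s, x) - b) ** 2
                  for s, b in zip(SENSOR_POSITIONS, readings))
        if err < best_err:
            best_x, best_err = x, err
        x += step
    return best_x

true_x = 12.5
readings = [field_at(s, true_x) for s in SENSOR_POSITIONS]
```

The real system replaces the toy field model with a calibrated electromagnetic model and solves for all five degrees of freedom, but the measure-then-invert structure is the same.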
Citations: 0