
Proceedings of the 26th annual ACM symposium on User interface software and technology: latest publications

Session details: Mobile
Nicolai Marquardt
{"title":"Session details: Mobile","authors":"Nicolai Marquardt","doi":"10.1145/3254700","DOIUrl":"https://doi.org/10.1145/3254700","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"153 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133192737","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Human-computer interaction for hybrid carving
Amit Zoran, Roy Shilkrot, J. Paradiso
In this paper we explore human-computer interaction for carving, building upon our previous work with the FreeD digital sculpting device. We contribute a new tool design (FreeD V2), with a novel set of interaction techniques for the fabrication of static models: personalized tool paths, manual overriding, and physical merging of virtual models. We also present techniques for fabricating dynamic models, which may be altered directly or parametrically during fabrication. We demonstrate a semi-autonomous operation and evaluate the performance of the tool. We end by discussing synergistic cooperation between human and machine to ensure accuracy while preserving the expressiveness of manual practice.
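The abstract gives no implementation details; as a rough illustration of what "manual overriding" of a protective tool path could look like, the Python sketch below arbitrates between retracting the spindle and honoring the user's override. Everything here (ToolState, the safety margin, the command names) is a hypothetical stand-in, not the FreeD V2 controller.
```python
# Minimal sketch (not the authors' implementation): one way a hybrid carving
# controller might arbitrate between protecting the virtual model and letting
# the user keep cutting. All names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class ToolState:
    depth_into_model_mm: float   # how far the tip has penetrated the virtual model
    override_pressed: bool       # user explicitly asks to keep cutting

SAFETY_MARGIN_MM = 0.5           # assumed tolerance before the tool retracts on its own

def spindle_command(state: ToolState) -> str:
    """Return 'cut', 'retract', or 'idle' for the current control tick."""
    if state.depth_into_model_mm <= 0.0:
        return "idle"            # tool is outside the virtual model
    if state.override_pressed:
        return "cut"             # manual override wins: expressive carving
    if state.depth_into_model_mm > SAFETY_MARGIN_MM:
        return "retract"         # protect the model: pull the bit back
    return "cut"

print(spindle_command(ToolState(depth_into_model_mm=0.8, override_pressed=False)))  # -> retract
```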
{"title":"Human-computer interaction for hybrid carving","authors":"Amit Zoran, Roy Shilkrot, J. Paradiso","doi":"10.1145/2501988.2502023","DOIUrl":"https://doi.org/10.1145/2501988.2502023","url":null,"abstract":"In this paper we explore human-computer interaction for carving, building upon our previous work with the FreeD digital sculpting device. We contribute a new tool design (FreeD V2), with a novel set of interaction techniques for the fabrication of static models: personalized tool paths, manual overriding, and physical merging of virtual models. We also present techniques for fabricating dynamic models, which may be altered directly or parametrically during fabrication. We demonstrate a semi-autonomous operation and evaluate the performance of the tool. We end by discussing synergistic cooperation between human and machine to ensure accuracy while preserving the expressiveness of manual practice.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"9 11","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134447038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 52
Chorus: a crowd-powered conversational assistant
Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, A. Kulkarni, James F. Allen, Jeffrey P. Bigham
Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.
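As a rough illustration of the propose-and-vote loop described above, the sketch below accepts a candidate reply once enough distinct workers endorse it. The thresholds and data shapes are assumptions for illustration, not Chorus's actual incentive mechanism.
```python
# Minimal sketch, assuming a propose-and-vote loop like the one the abstract
# describes; thresholds and data structures here are illustrative only.
from collections import Counter

AGREEMENT_THRESHOLD = 0.4   # assumed: fraction of active workers that must endorse a reply
MIN_VOTES = 3               # assumed: never accept a reply on fewer votes than this

def accept_response(votes: dict[str, list[str]], n_workers: int) -> str | None:
    """votes maps a candidate response -> worker ids who endorsed it.
    Return the response to show the end user, or None to keep waiting."""
    counts = Counter({resp: len(set(workers)) for resp, workers in votes.items()})
    if not counts:
        return None
    best, n = counts.most_common(1)[0]
    if n >= MIN_VOTES and n / max(n_workers, 1) >= AGREEMENT_THRESHOLD:
        return best
    return None

# Example: 5 workers online, 3 distinct workers endorse the same candidate.
votes = {"It opens at 9am.": ["w1", "w2", "w5"], "Try their website.": ["w3"]}
print(accept_response(votes, n_workers=5))   # -> "It opens at 9am."
```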
{"title":"Chorus: a crowd-powered conversational assistant","authors":"Walter S. Lasecki, Rachel Wesley, Jeffrey Nichols, A. Kulkarni, James F. Allen, Jeffrey P. Bigham","doi":"10.1145/2501988.2502057","DOIUrl":"https://doi.org/10.1145/2501988.2502057","url":null,"abstract":"Despite decades of research attempting to establish conversational interaction between humans and computers, the capabilities of automated conversational systems are still limited. In this paper, we introduce Chorus, a crowd-powered conversational assistant. When using Chorus, end users converse continuously with what appears to be a single conversational partner. Behind the scenes, Chorus leverages multiple crowd workers to propose and vote on responses. A shared memory space helps the dynamic crowd workforce maintain consistency, and a game-theoretic incentive mechanism helps to balance their efforts between proposing and voting. Studies with 12 end users and 100 crowd workers demonstrate that Chorus can provide accurate, topical responses, answering nearly 93% of user queries appropriately, and staying on-topic in over 95% of responses. We also observed that Chorus has advantages over pairing an end user with a single crowd worker and end users completing their own tasks in terms of speed, quality, and breadth of assistance. Chorus demonstrates a new future in which conversational assistants are made usable in the real world by combining human and machine intelligence, and may enable a useful new way of interacting with the crowds powering other systems.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123764823","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 174
Crowd-scale interactive formal reasoning and analytics
Ethan Fast, Colleen Lee, A. Aiken, Michael S. Bernstein, D. Koller, Eric Smith
Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure which leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding students take advantage of its incremental feedback and instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.
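The proof cache idea lends itself to a short sketch: memoize the verdict for each (premises, conclusion, rule) step so that only the first student to submit a given step pays for the formal check. The key layout and the toy modus ponens checker below are assumptions for illustration, not DeduceIt's implementation.
```python
# Minimal sketch of a "proof cache" in the spirit the abstract describes:
# cache the result of checking a single derivation step so later students who
# submit the same step get instant feedback.
from typing import Callable

ProofStep = tuple[frozenset[str], str, str]    # (premises, conclusion, rule name)

class ProofCache:
    def __init__(self, check_step: Callable[[ProofStep], bool]):
        self._check_step = check_step          # expensive formal check
        self._cache: dict[ProofStep, bool] = {}

    def is_valid(self, step: ProofStep) -> bool:
        if step not in self._cache:            # first student to submit this step pays
            self._cache[step] = self._check_step(step)
        return self._cache[step]               # everyone else gets the cached verdict

# Toy checker: only modus ponens over string formulas, purely for demonstration.
def toy_checker(step: ProofStep) -> bool:
    premises, conclusion, rule = step
    return rule == "modus_ponens" and any(
        p == f"{q} -> {conclusion}" for p in premises for q in premises if q != p
    )

cache = ProofCache(toy_checker)
step = (frozenset({"A", "A -> B"}), "B", "modus_ponens")
print(cache.is_valid(step), cache.is_valid(step))   # second call is a cache hit
```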
{"title":"Crowd-scale interactive formal reasoning and analytics","authors":"Ethan Fast, Colleen Lee, A. Aiken, Michael S. Bernstein, D. Koller, Eric Smith","doi":"10.1145/2501988.2502028","DOIUrl":"https://doi.org/10.1145/2501988.2502028","url":null,"abstract":"Large online courses often assign problems that are easy to grade because they have a fixed set of solutions (such as multiple choice), but grading and guiding students is more difficult in problem domains that have an unbounded number of correct answers. One such domain is derivations: sequences of logical steps commonly used in assignments for technical, mathematical and scientific subjects. We present DeduceIt, a system for creating, grading, and analyzing derivation assignments in any formal domain. DeduceIt supports assignments in any logical formalism, provides students with incremental feedback, and aggregates student paths through each proof to produce instructor analytics. DeduceIt benefits from checking thousands of derivations on the web: it introduces a proof cache, a novel data structure which leverages a crowd of students to decrease the cost of checking derivations and providing real-time, constructive feedback. We evaluate DeduceIt with 990 students in an online compilers course, finding students take advantage of its incremental feedback and instructors benefit from its structured insights into course topics. Our work suggests that automated reasoning can extend online assignments and large-scale education to many new domains.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"67 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134253014","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 19
Paper generators: harvesting energy from touching, rubbing and sliding
M. E. Karagozler, I. Poupyrev, G. Fedder, Yuri Suzuki
We present a new energy harvesting technology that generates electrical energy from a user's interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user's gestures such as tapping, touching, rubbing and sliding to generate electrical energy. The harvested energy is then used to actuate LEDs, e-paper displays and various other devices to create novel interactive applications, such as enhancing books and other printed media with interactivity.
{"title":"Paper generators: harvesting energy from touching, rubbing and sliding","authors":"M. E. Karagozler, I. Poupyrev, G. Fedder, Yuri Suzuki","doi":"10.1145/2501988.2502054","DOIUrl":"https://doi.org/10.1145/2501988.2502054","url":null,"abstract":"We present a new energy harvesting technology that generates electrical energy from a user's interactions with paper-like materials. The energy harvesters are flexible, light, and inexpensive, and they utilize a user's gestures such as tapping, touching, rubbing and sliding to generate electrical energy. The harvested energy is then used to actuate LEDs, e-paper displays and various other devices to create novel interactive applications, such as enhancing books and other printed media with interactivity.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"120987860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 39
Panopticon: a parallel video overview system
D. Jackson, James Nicholson, Gerrit Stoeckigt, Rebecca Wrobel, Anja Thieme, P. Olivier
Panopticon is a video surrogate system that displays multiple sub-sequences in parallel to present a rapid overview of the entire sequence to the user. A novel, precisely animated arrangement slides thumbnails to provide a consistent spatiotemporal layout while allowing any sub-sequence of the original video to be watched without interruption. Furthermore, this output can be generated offline as a highly efficient repeated animation loop, making it suitable for resource-constrained environments, such as web-based interaction. Two versions of Panopticon were evaluated using three different types of video footage with the aim of determining the usability of the proposed system. Results demonstrated an advantage over another surrogate with surveillance footage in terms of search times and this advantage was further improved with Panopticon 2. Eye tracking data suggests that Panopticon's advantage stems from the animated timeline that users heavily rely on.
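As a rough sketch of the core layout arithmetic, the code below partitions a video evenly across a grid of looping thumbnails and computes which timestamp each tile shows at a given wall-clock time. Panopticon's sliding animation is more sophisticated; the even partition here is an assumption for illustration.
```python
# Minimal sketch, assuming a simple even partition of the video across a grid
# of looping thumbnails; Panopticon's actual animated arrangement is not shown.

def tile_offsets(video_seconds: float, rows: int, cols: int) -> list[list[float]]:
    """Start time (in seconds) of the sub-sequence shown by each grid tile."""
    n = rows * cols
    segment = video_seconds / n
    return [[(r * cols + c) * segment for c in range(cols)] for r in range(rows)]

def frame_to_show(tile_start: float, segment: float, t: float) -> float:
    """Timestamp a tile displays at wall-clock time t, looping its own segment."""
    return tile_start + (t % segment)

offsets = tile_offsets(video_seconds=3600, rows=4, cols=6)   # 1 h of footage, 24 tiles
print(offsets[0][:3])   # [0.0, 150.0, 300.0] -> each tile covers a 150 s slice
```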
{"title":"Panopticon: a parallel video overview system","authors":"D. Jackson, James Nicholson, Gerrit Stoeckigt, Rebecca Wrobel, Anja Thieme, P. Olivier","doi":"10.1145/2501988.2502038","DOIUrl":"https://doi.org/10.1145/2501988.2502038","url":null,"abstract":"Panopticon is a video surrogate system that displays multiple sub-sequences in parallel to present a rapid overview of the entire sequence to the user. A novel, precisely animated arrangement slides thumbnails to provide a consistent spatiotemporal layout while allowing any sub-sequence of the original video to be watched without interruption. Furthermore, this output can be generated offline as a highly efficient repeated animation loop, making it suitable for resource-constrained environments, such as web-based interaction. Two versions of Panopticon were evaluated using three different types of video footage with the aim of determining the usability of the proposed system. Results demonstrated an advantage over another surrogate with surveillance footage in terms of search times and this advantage was further improved with Panopticon 2. Eye tracking data suggests that Panopticon's advantage stems from the animated timeline that users heavily rely on.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"118 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124210729","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
CrowdLearner: rapidly creating mobile recognizers using crowdsourcing
Shahriyar Amini, Y. Li
Mobile applications can offer improved user experience through the use of novel modalities and user context. However, these new input dimensions often require recognition-based techniques, with which mobile app developers or designers may not be familiar. Furthermore, the recruiting, data collection and labeling, necessary for using these techniques, are usually time-consuming and expensive. We present CrowdLearner, a framework based on crowdsourcing to automatically generate recognizers using mobile sensor input such as accelerometer or touchscreen readings. CrowdLearner allows a developer to easily create a recognition task, distribute it to the crowd, and monitor its progress as more data becomes available. We deployed CrowdLearner to a crowd of 72 mobile users over a period of 2.5 weeks. We evaluated the system by experimenting with 6 recognition tasks concerning motion gestures, touchscreen gestures, and activity recognition. The experimental results indicated that CrowdLearner enables a developer to quickly acquire a usable recognizer for their specific application by spending a moderate amount of money, often less than $10, in a short period of time, often in the order of 2 hours. Our exploration also revealed challenges and provided insights into the design of future crowdsourcing systems for machine learning tasks.
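As an illustration of the kind of pipeline the abstract describes, the sketch below accepts crowd-labeled accelerometer windows and keeps a nearest-centroid recognizer up to date as examples arrive. The features and the classifier are placeholder choices, not CrowdLearner's actual learning backend.
```python
# Minimal sketch, assuming a CrowdLearner-style flow: crowd workers submit
# labeled sensor windows and a simple recognizer is retrained incrementally.
import math

def features(window: list[tuple[float, float, float]]) -> list[float]:
    """Very simple features: mean magnitude and per-axis means of one window."""
    mags = [math.sqrt(x * x + y * y + z * z) for x, y, z in window]
    n = len(window)
    return [sum(mags) / n] + [sum(axis) / n for axis in zip(*window)]

class NearestCentroidRecognizer:
    def __init__(self):
        self._stats: dict[str, tuple[int, list[float]]] = {}  # label -> (count, feature sums)

    def add_example(self, window, label: str) -> None:
        f = features(window)
        count, sums = self._stats.get(label, (0, [0.0] * len(f)))
        self._stats[label] = (count + 1, [s + v for s, v in zip(sums, f)])

    def predict(self, window) -> str:
        f = features(window)
        def dist(label: str) -> float:
            count, sums = self._stats[label]
            centroid = [s / count for s in sums]
            return sum((a - b) ** 2 for a, b in zip(f, centroid))
        return min(self._stats, key=dist)

rec = NearestCentroidRecognizer()
rec.add_example([(0.0, 0.0, 9.8)] * 50, "still")    # crowd-labeled "still" window
rec.add_example([(3.0, 1.0, 9.8)] * 50, "shake")    # crowd-labeled "shake" window
print(rec.predict([(2.8, 0.9, 9.7)] * 50))          # -> "shake"
```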
{"title":"CrowdLearner: rapidly creating mobile recognizers using crowdsourcing","authors":"Shahriyar Amini, Y. Li","doi":"10.1145/2501988.2502029","DOIUrl":"https://doi.org/10.1145/2501988.2502029","url":null,"abstract":"Mobile applications can offer improved user experience through the use of novel modalities and user context. However, these new input dimensions often require recognition-based techniques, with which mobile app developers or designers may not be familiar. Furthermore, the recruiting, data collection and labeling, necessary for using these techniques, are usually time-consuming and expensive. We present CrowdLearner, a framework based on crowdsourcing to automatically generate recognizers using mobile sensor input such as accelerometer or touchscreen readings. CrowdLearner allows a developer to easily create a recognition task, distribute it to the crowd, and monitor its progress as more data becomes available. We deployed CrowdLearner to a crowd of 72 mobile users over a period of 2.5 weeks. We evaluated the system by experimenting with 6 recognition tasks concerning motion gestures, touchscreen gestures, and activity recognition. The experimental results indicated that CrowdLearner enables a developer to quickly acquire a usable recognizer for their specific application by spending a moderate amount of money, often less than $10, in a short period of time, often in the order of 2 hours. Our exploration also revealed challenges and provided insights into the design of future crowdsourcing systems for machine learning tasks.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"69 5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122688539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 27
PAPILLON: designing curved display surfaces with printed optics
Eric Brockmeyer, I. Poupyrev, S. Hudson
We present a technology for designing curved display surfaces that can both display information and sense two dimensions of human touch. It is based on 3D printed optics, where the surface of the display is constructed as a bundle of printed light pipes, that direct images from an arbitrary planar image source to the surface of the display. This effectively decouples the display surface and image source, allowing to iterate the design of displays without requiring changes to the complex electronics and optics of the device. In addition, the same optical elements also direct light from the surface of the display back to the image sensor allowing for touch input and proximity detection of a hand relative to the display surface. The resulting technology is effective in designing compact, efficient displays of a small size; this has been applied in the design of interactive animated eyes.
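Because each printed light pipe connects one source pixel to one point on the curved surface, rendering reduces to a per-pipe lookup table. The sketch below shows that remapping with a made-up four-pipe calibration; PAPILLON's real calibration and sensing path are not part of this illustration.
```python
# Minimal sketch of the decoupling the abstract describes: a lookup table maps
# planar source pixels to surface samples. The table values are placeholders.

# pipe_map[i] = index of the planar source pixel feeding surface sample i
pipe_map = [2, 0, 3, 1]                  # assumed calibration for a four-pipe toy display

def render_on_surface(source_pixels: list[int]) -> list[int]:
    """Reorder source pixels so each ends up at its pipe's surface location."""
    return [source_pixels[src] for src in pipe_map]

# Sensing works the same way in reverse: light entering the surface end of pipe i
# arrives back at source-plane location pipe_map[i] on the image sensor.
print(render_on_surface([10, 20, 30, 40]))   # -> [30, 10, 40, 20]
```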
{"title":"PAPILLON: designing curved display surfaces with printed optics","authors":"Eric Brockmeyer, I. Poupyrev, S. Hudson","doi":"10.1145/2501988.2502027","DOIUrl":"https://doi.org/10.1145/2501988.2502027","url":null,"abstract":"We present a technology for designing curved display surfaces that can both display information and sense two dimensions of human touch. It is based on 3D printed optics, where the surface of the display is constructed as a bundle of printed light pipes, that direct images from an arbitrary planar image source to the surface of the display. This effectively decouples the display surface and image source, allowing to iterate the design of displays without requiring changes to the complex electronics and optics of the device. In addition, the same optical elements also direct light from the surface of the display back to the image sensor allowing for touch input and proximity detection of a hand relative to the display surface. The resulting technology is effective in designing compact, efficient displays of a small size; this has been applied in the design of interactive animated eyes.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"46 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116448123","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 74
GIST: a gestural interface for remote nonvisual spatial perception
V. Khambadkar, Eelke Folmer
Spatial perception is a challenging task for people who are blind due to the limited functionality and sensing range of hands. We present GIST, a wearable gestural interface that offers spatial perception functionality through the novel appropriation of the user's hands into versatile sensing rods. Using a wearable depth-sensing camera, GIST analyzes the visible physical space and allows blind users to access spatial information about this space using different hand gestures. By allowing blind users to directly explore the physical space using gestures, GIST allows for the closest mapping between augmented and physical reality, which facilitates spatial interaction. A user study with eight blind users evaluates GIST in its ability to help perform everyday tasks that rely on spatial perception, such as grabbing an object or interacting with a person. Results of our study may help develop new gesture based assistive applications.
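As a rough illustration of mapping hand gestures to spatial queries over a depth frame, the sketch below dispatches two hypothetical gestures to simple distance readouts. Gesture names, feedback strings, and the depth-frame format are assumptions, not GIST's implementation.
```python
# Minimal sketch: route a detected gesture plus the hand's pixel location to a
# spatial query over a depth frame (a 2D list of distances in metres).

def depth_at(depth_frame: list[list[float]], x: int, y: int) -> float:
    """Depth in metres at pixel (x, y)."""
    return depth_frame[y][x]

def handle_gesture(gesture: str, hand_px: tuple[int, int], depth_frame) -> str:
    x, y = hand_px
    if gesture == "point":
        return f"Object roughly {depth_at(depth_frame, x, y):.1f} m ahead of your hand"
    if gesture == "sweep":
        row = depth_frame[y]
        return f"Nearest surface along the sweep is {min(row):.1f} m away"
    return "Gesture not recognized"

frame = [[2.0, 1.5, 3.0], [2.5, 0.8, 2.2]]       # toy depth frame: 2 rows, 3 columns
print(handle_gesture("point", (1, 1), frame))     # -> about 0.8 m ahead
```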
{"title":"GIST: a gestural interface for remote nonvisual spatial perception","authors":"V. Khambadkar, Eelke Folmer","doi":"10.1145/2501988.2502047","DOIUrl":"https://doi.org/10.1145/2501988.2502047","url":null,"abstract":"Spatial perception is a challenging task for people who are blind due to the limited functionality and sensing range of hands. We present GIST, a wearable gestural interface that offers spatial perception functionality through the novel appropriation of the user's hands into versatile sensing rods. Using a wearable depth-sensing camera, GIST analyzes the visible physical space and allows blind users to access spatial information about this space using different hand gestures. By allowing blind users to directly explore the physical space using gestures, GIST allows for the closest mapping between augmented and physical reality, which facilitates spatial interaction. A user study with eight blind users evaluates GIST in its ability to help perform everyday tasks that rely on spatial perception, such as grabbing an object or interacting with a person. Results of our study may help develop new gesture based assistive applications.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"477 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131875270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 35
dePENd: augmented handwriting system using ferromagnetism of a ballpoint pen
Junichi Yamaoka, Y. Kakehi
This paper presents dePENd, a novel interactive system that assists in sketching using regular pens and paper. Our system utilizes the ferromagnetic feature of the metal tip of a regular ballpoint pen. The computer controlling the X and Y positions of the magnet under the surface of the table provides entirely new drawing experiences. By controlling the movements of a pen and presenting haptic guides, the system allows a user to easily draw diagrams and pictures consisting of lines and circles, which are difficult to create by free-hand drawing. Moreover, the system also allows users to freely edit and arrange prescribed pictures. This is expected to reduce the resistance to drawing and promote users' creativity. In addition, we propose a communication tool using two dePENd systems that is expected to enhance the drawing skills of users. The functions of this system enable users to utilize interactive applications such as copying and redrawing drafted pictures or scaling the pictures using a digital pen. Furthermore, we implement the system and evaluate its technical features. In this paper, we describe the details of the design and implementations of the device, along with applications, technical evaluations, and future prospects.
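A minimal sketch of how an XY magnet stage might be fed waypoints to pull the pen tip along a circle, one of the shapes the abstract says is hard to draw freehand. The waypoint generator and the move_magnet_to callback are hypothetical; dePENd's actual motion control and hardware interface are not shown.
```python
# Minimal sketch, assuming the XY magnet stage under the table is driven
# through a list of target positions; all names here are placeholders.
import math

def circle_waypoints(cx: float, cy: float, r: float, steps: int = 180) -> list[tuple[float, float]]:
    """XY targets (in mm) that pull the pen tip along a circle of radius r."""
    return [
        (cx + r * math.cos(2 * math.pi * i / steps),
         cy + r * math.sin(2 * math.pi * i / steps))
        for i in range(steps + 1)        # the extra point closes the circle
    ]

def drive(waypoints, move_magnet_to) -> None:
    """Feed each target to a hypothetical stage-controller callback."""
    for x, y in waypoints:
        move_magnet_to(x, y)

# Example: draw a 30 mm radius circle centred at (100 mm, 100 mm).
drive(circle_waypoints(100.0, 100.0, 30.0), move_magnet_to=lambda x, y: None)
```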
{"title":"dePENd: augmented handwriting system using ferromagnetism of a ballpoint pen","authors":"Junichi Yamaoka, Y. Kakehi","doi":"10.1145/2501988.2502017","DOIUrl":"https://doi.org/10.1145/2501988.2502017","url":null,"abstract":"This paper presents dePENd, a novel interactive system that assists in sketching using regular pens and paper. Our system utilizes the ferromagnetic feature of the metal tip of a regular ballpoint pen. The computer controlling the X and Y positions of the magnet under the surface of the table provides entirely new drawing experiences. By controlling the movements of a pen and presenting haptic guides, the system allows a user to easily draw diagrams and pictures consisting of lines and circles, which are difficult to create by free-hand drawing. Moreover, the system also allows users to freely edit and arrange prescribed pictures. This is expected to reduce the resistance to drawing and promote users' creativity. In addition, we propose a communication tool using two dePENd systems that is expected to enhance the drawing skills of users. The functions of this system enable users to utilize interactive applications such as copying and redrawing drafted pictures or scaling the pictures using a digital pen. Furthermore, we implement the system and evaluate its technical features. In this paper, we describe the details of the design and implementations of the device, along with applications, technical evaluations, and future prospects.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127556982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 59