
Proceedings of the 26th annual ACM symposium on User interface software and technology: Latest Publications

The Auckland Layout Editor: an improved GUI layout specification process
C. Zeidler, C. Lutteroth, W. Stuerzlinger, Gerald Weber
Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are among the most powerful. However, they are also more complex and their layouts are prone to problems such as over-constrained specifications and widget overlap. This poses challenges for GUI builder tools, which ideally should address these issues automatically. We present a new GUI builder, the Auckland Layout Editor (ALE), that addresses these challenges by enabling GUI designers to specify constraint-based layouts using simple, mouse-based operations. We give a detailed description of ALE's edit operations, which do not require direct constraint editing. ALE guarantees that all edit operations lead to sound specifications, ensuring solvable and non-overlapping layouts. To achieve that, we present a new algorithm that automatically generates the constraints necessary to keep a layout non-overlapping. Furthermore, we discuss how our innovations can be combined with manual constraint editing in a sound way. Finally, to aid designers in creating layouts with good resize behavior, we propose a novel automatic layout preview. This displays the layout at its minimum size and at an enlarged size, which allows visualizing potential resize issues directly. All these features permit GUI developers to focus more on the overall UI design.
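The core technical claim is the automatic generation of non-overlap constraints. The following is a minimal, hypothetical sketch of that idea (not ALE's published algorithm): for each pair of widgets, keep one linear separation constraint, chosen along the direction in which the current layout already leaves the largest gap.

```python
# A hypothetical sketch (not ALE's published algorithm): for each pair
# of widgets, keep one linear separation constraint, chosen along the
# direction where the current layout already leaves the largest gap.
from dataclasses import dataclass
from itertools import combinations

@dataclass
class Widget:
    name: str
    left: float
    top: float
    right: float
    bottom: float

def non_overlap_constraints(widgets):
    constraints = []
    for a, b in combinations(widgets, 2):
        # Signed gap for each of the four candidate separation directions;
        # a positive gap means the constraint already holds.
        gaps = {
            f"{a.name}.right <= {b.name}.left": b.left - a.right,
            f"{b.name}.right <= {a.name}.left": a.left - b.right,
            f"{a.name}.bottom <= {b.name}.top": b.top - a.bottom,
            f"{b.name}.bottom <= {a.name}.top": a.top - b.bottom,
        }
        # Keep the most-satisfied direction so the solver can preserve it.
        constraints.append(max(gaps, key=gaps.get))
    return constraints

print(non_overlap_constraints([
    Widget("ok", 10, 10, 60, 30),
    Widget("cancel", 70, 10, 130, 30),
]))  # -> ['ok.right <= cancel.left']
```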
{"title":"The auckland layout editor: an improved GUI layout specification process","authors":"C. Zeidler, C. Lutteroth, W. Stuerzlinger, Gerald Weber","doi":"10.1145/2501988.2502007","DOIUrl":"https://doi.org/10.1145/2501988.2502007","url":null,"abstract":"Layout managers are used to control the placement of widgets in graphical user interfaces (GUIs). Constraint-based layout managers are among the most powerful. However, they are also more complex and their layouts are prone to problems such as over-constrained specifications and widget overlap. This poses challenges for GUI builder tools, which ideally should address these issues automatically. We present a new GUI builderthe Auckland Layout Editor (ALE)that addresses these challenges by enabling GUI designers to specify constraint-based layouts using simple, mouse-based operations. We give a detailed description of ALE's edit operations, which do not require direct constraint editing. ALE guarantees that all edit operations lead to sound specifications, ensuring solvable and non-overlapping layouts. To achieve that, we present a new algorithm that automatically generates the constraints necessary to keep a layout non-overlapping. Furthermore, we discuss how our innovations can be combined with manual constraint editing in a sound way. Finally, to aid designers in creating layouts with good resize behavior, we propose a novel automatic layout preview. This displays the layout at its minimum and in an enlarged size, which allows visualizing potential resize issues directly. All these features permit GUI developers to focus more on the overall UI design.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130919527","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 13
Sauron: embedded single-camera sensing of printed physical user interfaces
Valkyrie Savage, Colin Chang, Bjoern Hartmann
3D printers enable designers and makers to rapidly produce physical models of future products. Today these physical prototypes are mostly passive. Our research goal is to enable users to turn models produced on commodity 3D printers into interactive objects with a minimum of required assembly or instrumentation. We present Sauron, an embedded machine vision-based system for sensing human input on physical controls like buttons, sliders, and joysticks. With Sauron, designers attach a single camera with an integrated ring light to a printed prototype. This camera observes the interior portions of input components to determine their state. In many prototypes, input components may be occluded or outside the viewing frustum of a single camera. We introduce algorithms that generate internal geometry and calculate mirror placements to redirect input motion into the visible camera area. To investigate the space of designs that can be built with Sauron along with its limitations, we built prototype devices, evaluated the suitability of existing models for vision sensing, and performed an informal study with three CAD users. While our approach imposes some constraints on device design, results suggest that it is expressive and accessible enough to enable constructing a useful variety of devices.
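A minimal, hypothetical sketch of the kind of vision check this implies: sample a fixed region of the frame that sees one button's interior and report "pressed" when its mean brightness crosses a calibrated threshold. The ROI, threshold, and camera index below are made-up values, not Sauron's actual sensing code.

```python
# A hypothetical sketch of the kind of check implied above: sample a
# fixed region of the frame that sees one button's interior and report
# "pressed" when its mean brightness crosses a threshold. The ROI,
# threshold, and camera index are made-up calibration values.
import cv2

ROI = (120, 80, 40, 40)        # x, y, w, h of the button interior (assumed)
PRESSED_THRESHOLD = 140.0      # mean-brightness cutoff (assumed)

def button_pressed(frame) -> bool:
    x, y, w, h = ROI
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return float(gray[y:y + h, x:x + w].mean()) > PRESSED_THRESHOLD

cap = cv2.VideoCapture(0)      # the single embedded camera (assumed index)
for _ in range(100):           # poll a handful of frames
    ok, frame = cap.read()
    if not ok:
        break
    print("pressed" if button_pressed(frame) else "released")
cap.release()
```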
{"title":"Sauron: embedded single-camera sensing of printed physical user interfaces","authors":"Valkyrie Savage, Colin Chang, Bjoern Hartmann","doi":"10.1145/2501988.2501992","DOIUrl":"https://doi.org/10.1145/2501988.2501992","url":null,"abstract":"3D printers enable designers and makers to rapidly produce physical models of future products. Today these physical prototypes are mostly passive. Our research goal is to enable users to turn models produced on commodity 3D printers into interactive objects with a minimum of required assembly or instrumentation. We present Sauron, an embedded machine vision-based system for sensing human input on physical controls like buttons, sliders, and joysticks. With Sauron, designers attach a single camera with integrated ring light to a printed prototype. This camera observes the interior portions of input components to determine their state. In many prototypes, input components may be occluded or outside the viewing frustum of a single camera. We introduce algorithms that generate internal geometry and calculate mirror placements to redirect input motion into the visible camera area. To investigate the space of designs that can be built with Sauron along with its limitations, we built prototype devices, evaluated the suitability of existing models for vision sensing, and performed an informal study with three CAD users. While our approach imposes some constraints on device design, results suggest that it is expressive and accessible enough to enable constructing a useful variety of devices.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"23 6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131207077","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 128
Mirage: exploring interaction modalities using off-body static electric field sensing
Adiyan Mujibiya, J. Rekimoto
Mirage proposes an effective non-body-contact technique to infer the amount and type of body motion, gesture, and activity. This approach involves passive measurement of the static electric field of the environment flowing through a sense electrode. This sensing method leverages the distortion of the electric field caused by the presence of an intruder (e.g., a human body). The Mirage sensor has simple analog circuitry and supports ultra-low power operation. It requires no instrumentation on the user, and can be configured as an environmental, mobile, or peripheral-attached sensor. We report on a series of experiments with 10 participants showing robust activity and gesture recognition, as well as promising results for robust location classification and multiple-user differentiation. To further illustrate the utility of our approach, we demonstrate real-time interactive applications including activity monitoring, and two games which allow the users to interact with a computer using body motion and gestures.
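As a rough illustration of how such a one-dimensional electric-field signal could drive recognition, here is a minimal sketch that extracts simple waveform features from a sample window and classifies them with an SVM. The features, classifier choice, and synthetic data are all assumptions, not Mirage's published pipeline.

```python
# A minimal sketch under stated assumptions: the sensor yields a 1-D
# stream of electric-field samples, and simple waveform features over a
# window are enough to separate coarse activities. Features, classifier,
# and data are illustrative, not Mirage's published pipeline.
import numpy as np
from sklearn.svm import SVC

def features(window: np.ndarray) -> np.ndarray:
    centered = window - window.mean()
    return np.array([
        window.std(),                                    # distortion energy
        np.abs(np.diff(window)).mean(),                  # mean slope
        float((np.diff(np.sign(centered)) != 0).sum()),  # zero crossings
    ])

rng = np.random.default_rng(0)
# Fake training windows for two activities (stand-ins for real captures).
walk = [rng.normal(0.0, 1.0, 256) for _ in range(20)]
still = [rng.normal(0.0, 0.1, 256) for _ in range(20)]
X = np.array([features(w) for w in walk + still])
y = ["walk"] * 20 + ["still"] * 20
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict([features(rng.normal(0.0, 0.9, 256))]))  # likely ['walk']
```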
{"title":"Mirage: exploring interaction modalities using off-body static electric field sensing","authors":"Adiyan Mujibiya, J. Rekimoto","doi":"10.1145/2501988.2502031","DOIUrl":"https://doi.org/10.1145/2501988.2502031","url":null,"abstract":"Mirage proposes an effective non body contact technique to infer the amount and type of body motion, gesture, and activity. This approach involves passive measurement of static electric field of the environment flowing through sense electrode. This sensing method leverages electric field distortion by the presence of an intruder (e.g. human body). Mirage sensor has simple analog circuitry and supports ultra-low power operation. It requires no instrumentation to the user, and can be configured as environmental, mobile, and peripheral-attached sensor. We report on a series of experiments with 10 participants showing robust activity and gesture recognition, as well as promising results for robust location classification and multiple user differentiation. To further illustrate the utility of our approach, we demonstrate real-time interactive applications including activity monitoring, and two games which allow the users to interact with a computer using body motion and gestures.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"299 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121457771","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 27
A colorful approach to text processing by example
Kuat Yessenov, Shubham Tulsiani, A. Menon, Rob Miller, Sumit Gulwani, B. Lampson, A. Kalai
Text processing, tedious and error-prone even for programmers, remains one of the most alluring targets of Programming by Example. An examination of real-world text processing tasks found on help forums reveals that many such tasks, beyond simple string manipulation, involve latent hierarchical structures. We present STEPS, a programming system for processing structured and semi-structured text by example. STEPS users create and manipulate hierarchical structure by example. In a between-subjects user study with fourteen computer scientists, STEPS compares favorably to traditional programming.
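To make the notion of latent hierarchy concrete, here is a small hand-written illustration (not STEPS itself, which infers such structure from user examples): a flat contact list really has two levels, records and fields, and making that hierarchy explicit is what turns a fragile string hack into a structured edit.

```python
# A hand-written illustration of latent hierarchy (not STEPS, which
# infers such structure from user examples): a flat contact list has two
# levels, records and fields, and making them explicit turns a fragile
# string hack into a structured edit.
import re

RAW = """\
Smith, Alice <alice@example.com>
Jones, Bob <bob@example.com>
"""

RECORD = re.compile(r"(?P<last>\w+), (?P<first>\w+) <(?P<email>[^>]+)>")

records = [m.groupdict() for m in RECORD.finditer(RAW)]
# With the hierarchy explicit, "reformat every record" is one loop.
for r in records:
    print(f'{r["first"]} {r["last"]} <{r["email"]}>')
```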
{"title":"A colorful approach to text processing by example","authors":"Kuat Yessenov, Shubham Tulsiani, A. Menon, Rob Miller, Sumit Gulwani, B. Lampson, A. Kalai","doi":"10.1145/2501988.2502040","DOIUrl":"https://doi.org/10.1145/2501988.2502040","url":null,"abstract":"Text processing, tedious and error-prone even for programmers, remains one of the most alluring targets of Programming by Example. An examination of real-world text processing tasks found on help forums reveals that many such tasks, beyond simple string manipulation, involve latent hierarchical structures. We present STEPS, a programming system for processing structured and semi-structured text by example. STEPS users create and manipulate hierarchical structure by example. In a between-subject user study on fourteen computer scientists, STEPS compares favorably to traditional programming.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"24 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114998544","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 50
Gaze locking: passive eye contact detection for human-object interaction
Brian A. Smith, Qi Yin, Steven K. Feiner, S. Nayar
Eye contact plays a crucial role in our everyday social interactions. The ability of a device to reliably detect when a person is looking at it can lead to powerful human-object interfaces. Today, most gaze-based interactive systems rely on gaze tracking technology. Unfortunately, current gaze tracking techniques require active infrared illumination, calibration, or are sensitive to distance and pose. In this work, we propose a different solution: a passive, appearance-based approach for sensing eye contact in an image. By focusing on gaze *locking* rather than gaze tracking, we exploit the special appearance of direct eye gaze, achieving a Matthews correlation coefficient (MCC) of over 0.83 at long distances (up to 18 m) and large pose variations (up to ±30° of head yaw rotation) using a very basic classifier and without calibration. To train our detector, we also created a large publicly available gaze data set: 5,880 images of 56 people over varying gaze directions and head poses. We demonstrate how our method facilitates human-object interaction, user analytics, image filtering, and gaze-triggered photography.
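The abstract reports accuracy as a Matthews correlation coefficient. For reference, MCC is computed from the binary confusion matrix as (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)); the sketch below evaluates it on made-up counts, not the paper's data.

```python
# Reference computation of the Matthews correlation coefficient from a
# binary confusion matrix; the counts below are made up, not the paper's.
import math

def mcc(tp: int, tn: int, fp: int, fn: int) -> float:
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(round(mcc(tp=90, tn=92, fp=8, fn=10), 3))  # 0.82 on these fake counts
```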
{"title":"Gaze locking: passive eye contact detection for human-object interaction","authors":"Brian A. Smith, Qi Yin, Steven K. Feiner, S. Nayar","doi":"10.1145/2501988.2501994","DOIUrl":"https://doi.org/10.1145/2501988.2501994","url":null,"abstract":"Eye contact plays a crucial role in our everyday social interactions. The ability of a device to reliably detect when a person is looking at it can lead to powerful human-object interfaces. Today, most gaze-based interactive systems rely on gaze tracking technology. Unfortunately, current gaze tracking techniques require active infrared illumination, calibration, or are sensitive to distance and pose. In this work, we propose a different solution-a passive, appearance-based approach for sensing eye contact in an image. By focusing on gaze *locking* rather than gaze tracking, we exploit the special appearance of direct eye gaze, achieving a Matthews correlation coefficient (MCC) of over 0.83 at long distances (up to 18 m) and large pose variations (up to ±30° of head yaw rotation) using a very basic classifier and without calibration. To train our detector, we also created a large publicly available gaze data set: 5,880 images of 56 people over varying gaze directions and head poses. We demonstrate how our method facilitates human-object interaction, user analytics, image filtering, and gaze-triggered photography.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"44 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115508985","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 293
Content-based tools for editing audio stories
Steve Rubin, Floraine Berthouzoz, G. Mysore, Wilmot Li, Maneesh Agrawala
Audio stories are an engaging form of communication that combine speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at a much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.
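A key enabler for transcript-based editing is a word-level alignment between transcript and audio. Assuming such an alignment is available, the sketch below shows how deleting words from the transcript maps to the audio segments to keep; it is an illustration of the idea, not the authors' implementation.

```python
# A minimal sketch of transcript-driven cutting, assuming a word-level
# alignment (word, start_sec, end_sec) is available: deleting words from
# the transcript yields the audio segments to keep for splicing.
transcript = [                    # illustrative timings, not real data
    ("the", 0.0, 0.2), ("um", 0.2, 0.6),
    ("story", 0.6, 1.1), ("begins", 1.1, 1.6),
]

def keep_segments(words, deleted):
    segments, current = [], None
    for word, start, end in words:
        if word in deleted:
            if current is not None:
                segments.append(current)
                current = None
        else:
            current = (current[0], end) if current else (start, end)
    if current:
        segments.append(current)
    return segments

print(keep_segments(transcript, deleted={"um"}))  # [(0.0, 0.2), (0.6, 1.6)]
```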
{"title":"Content-based tools for editing audio stories","authors":"Steve Rubin, Floraine Berthouzoz, G. Mysore, Wilmot Li, Maneesh Agrawala","doi":"10.1145/2501988.2501993","DOIUrl":"https://doi.org/10.1145/2501988.2501993","url":null,"abstract":"Audio stories are an engaging form of communication that combine speech and music into compelling narratives. Existing audio editing tools force story producers to manipulate speech and music tracks via tedious, low-level waveform editing. In contrast, we present a set of tools that analyze the audio content of the speech and music and thereby allow producers to work at much higher level. Our tools address several challenges in creating audio stories, including (1) navigating and editing speech, (2) selecting appropriate music for the score, and (3) editing the music to complement the speech. Key features include a transcript-based speech editing tool that automatically propagates edits in the transcript text to the corresponding speech track; a music browser that supports searching based on emotion, tempo, key, or timbral similarity to other songs; and music retargeting tools that make it easy to combine sections of music with the speech. We have used our tools to create audio stories from a variety of raw speech sources, including scripted narratives, interviews and political speeches. Informal feedback from first-time users suggests that our tools are easy to learn and greatly facilitate the process of editing raw footage into a final story.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129762618","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 77
Session details: Visualization & video
G. Fitzmaurice
{"title":"Session details: Visualization & video","authors":"G. Fitzmaurice","doi":"10.1145/3254701","DOIUrl":"https://doi.org/10.1145/3254701","url":null,"abstract":"","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128974524","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 0
Touch & activate: adding interactivity to existing objects using active acoustic sensing 触摸和激活:使用主动声学传感为现有对象添加交互性
Makoto Ono, B. Shizuki, J. Tanaka
In this paper, we present a novel acoustic touch sensing technique called Touch & Activate. It recognizes a rich set of touches, including grasping, on existing objects by attaching only a vibration speaker and a piezo-electric microphone paired as a sensor. It provides an easy hardware configuration for prototyping interactive objects that have touch input capability. We conducted a controlled experiment to measure the accuracy of our technique and the trade-off between accuracy and the number of training rounds. Per-user recognition accuracy was 99.6% for five touch gestures on a plastic toy (a simple example) and 86.3% for six hand postures (a complex example). Walk-up user recognition accuracies for the two applications were 97.8% and 71.2%, respectively. Since our experiment showed promising accuracy for recognizing touch gestures and hand postures, Touch & Activate should be feasible for prototyping interactive objects that have touch input capability.
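Active acoustic sensing of this kind typically works by playing a known signal through the attached speaker and treating the recorded frequency response as a touch-dependent signature. The sketch below is a hypothetical minimal version: it reduces a recorded response to a normalized magnitude spectrum and matches it to per-gesture templates by nearest neighbor; the signals and the matching scheme are assumptions, not necessarily the paper's classifier.

```python
# A hypothetical minimal version of active acoustic sensing: play a known
# sweep through the speaker, record it via the piezo, and treat the
# normalized magnitude spectrum as a touch-dependent signature matched to
# per-gesture templates by nearest neighbor. Signals here are synthetic,
# and the matching scheme is an assumption, not the paper's classifier.
import numpy as np

def signature(response: np.ndarray, bins: int = 64) -> np.ndarray:
    spectrum = np.abs(np.fft.rfft(response))[:bins]
    return spectrum / (np.linalg.norm(spectrum) + 1e-9)

def classify(response: np.ndarray, templates: dict) -> str:
    sig = signature(response)
    return min(templates, key=lambda g: np.linalg.norm(sig - templates[g]))

rng = np.random.default_rng(1)
# Templates would be recorded once per gesture during training.
templates = {g: signature(rng.normal(size=2048)) for g in ("grasp", "tap", "none")}
print(classify(rng.normal(size=2048), templates))
```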
{"title":"Touch & activate: adding interactivity to existing objects using active acoustic sensing","authors":"Makoto Ono, B. Shizuki, J. Tanaka","doi":"10.1145/2501988.2501989","DOIUrl":"https://doi.org/10.1145/2501988.2501989","url":null,"abstract":"In this paper, we present a novel acoustic touch sensing technique called Touch & Activate. It recognizes a rich context of touches including grasp on existing objects by attaching only a vibration speaker and a piezo-electric microphone paired as a sensor. It provides easy hardware configuration for prototyping interactive objects that have touch input capability. We conducted a controlled experiment to measure the accuracy and trade-off between the accuracy and number of training rounds for our technique. From its results, per-user recognition accuracies with five touch gestures for a plastic toy as a simple example and six hand postures for the posture recognition as a complex example were 99.6% and 86.3%, respectively. Walk up user recognition accuracies for the two applications were 97.8% and 71.2%, respectively. Since the results of our experiment showed a promising accuracy for the recognition of touch gestures and hand postures, Touch & Activate should be feasible for prototype interactive objects that have touch input capability.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"37 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129264680","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 156
Capturing on site laser annotations with smartphones to document construction work
J. Schweitzer, R. Dörner
In construction work, taking notes on real-world objects such as walls, pipes, and cables is an important task. Capturing small pieces of information about such objects ad hoc on site can be challenging when no specialized technology is available. Handwritten or hand-drawn notes on paper are good for textual information like measurements, whereas images better capture the physical state of objects. Without a proper combination, however, the benefit is limited. In this paper we present an interaction system for taking ad hoc notes on real-world objects by using a combination of a smartphone and a laser pointer as the input device. Our interface enables the user to directly annotate objects by drawing on them and to store these annotations for later review. The deictic gestures of the user are then replayed on a stitched image of the scene. The user's voice input is captured and analyzed to integrate additional information. The user can mark positions and place hand-taken measurements by pointing at the objects and speaking the corresponding voice commands.
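One building block such a system needs is locating the laser dot in each camera frame. The sketch below is a hypothetical first pass, not the authors' method: a bright red laser spot tends to saturate the sensor, so the brightest point of the blurred red channel is a usable estimate; the brightness cutoff and camera index are assumptions.

```python
# A hypothetical first pass at one building block, locating the laser dot
# in a frame: a red laser spot tends to saturate the sensor, so the
# brightest point of the blurred red channel is a usable estimate. The
# brightness cutoff and camera index are assumptions.
import cv2

def find_laser_dot(frame):
    red = cv2.GaussianBlur(frame[:, :, 2], (9, 9), 0)  # BGR: channel 2 is red
    _, max_val, _, max_loc = cv2.minMaxLoc(red)
    return max_loc if max_val > 230 else None          # (x, y) or no dot

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    print(find_laser_dot(frame))
cap.release()
```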
{"title":"Capturing on site laser annotations with smartphones to document construction work","authors":"J. Schweitzer, R. Dörner","doi":"10.1145/2501988.2502010","DOIUrl":"https://doi.org/10.1145/2501988.2502010","url":null,"abstract":"In the process of construction work, taking notes of real world objects like walls, pipes, cables and others is an important task. The ad hoc capturing of small information pieces on such objects on site can be challenging when there is no specialized technology available. Handwritten or hand drawn notes on paper are good for textual information like measurements whereas images are better to capture the physical state of objects. Without a proper combination however the benefit is limited. In this paper we present an interaction system for taking ad hoc notes on real world objects by using a combination of a smartphone and a laserpointer as input device. Our interface enables the user to directly annotate objects by drawing on them and to store these annotations for later reviewing. The deictic gestures of the user are then replayed on a stitched image of the scene. The users voice input is captured and analyzed to integrate additional Information. The user can mark positions and place hand taken measurements by pointing on the objects and speaking the corresponding voice commands.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122706095","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 1
SenSkin: adapting skin as a soft interface
Masa Ogata, Yuta Sugiura, Yasutoshi Makino, M. Inami, M. Imai
We present a sensing technology and input method that uses skin deformation estimated through a thin band-type device attached to the human body, the appearance of which seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that enables users to know which part of the skin they are touching in order to issue commands. The user, having found an acceptable area before beginning the input operation, can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands to sense three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that it measures the quantitative tangential force applied to the skin, such as that of the forearm or fingers. An infrared (IR) reflective sensor is used since its durability and inexpensiveness make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal forces applied to the skin to be sensed. The input commands are learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.
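The abstract states that commands are recognized with an SVM over the armbands' IR sensor readings. The sketch below shows that pipeline shape on synthetic data; the channel count, feature layout, and command labels are assumptions.

```python
# The abstract says commands are recognized with an SVM over IR sensor
# readings; this sketch shows that pipeline shape on synthetic data. The
# channel count, feature layout, and command labels are assumptions.
import numpy as np
from sklearn.svm import SVC

N_SENSORS = 8                         # IR channels across both bands (assumed)
rng = np.random.default_rng(2)

def fake_reading(offset: float) -> np.ndarray:
    # Stand-in for one sampled vector of IR reflectance values.
    return rng.normal(offset, 0.3, N_SENSORS)

X = np.array([fake_reading(0.0) for _ in range(30)] +
             [fake_reading(1.0) for _ in range(30)])
y = ["pull"] * 30 + ["press"] * 30
clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([fake_reading(0.9)]))  # expect ['press']
```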
{"title":"SenSkin: adapting skin as a soft interface","authors":"Masa Ogata, Yuta Sugiura, Yasutoshi Makino, M. Inami, M. Imai","doi":"10.1145/2501988.2502039","DOIUrl":"https://doi.org/10.1145/2501988.2502039","url":null,"abstract":"We present a sensing technology and input method that uses skin deformation estimated through a thin band-type device attached to the human body, the appearance of which seems socially acceptable in daily life. An input interface usually requires feedback. SenSkin provides tactile feedback that enables users to know which part of the skin they are touching in order to issue commands. The user, having found an acceptable area before beginning the input operation, can continue to input commands without receiving explicit feedback. We developed an experimental device with two armbands to sense three-dimensional pressure applied to the skin. Sensing tangential force on uncovered skin without haptic obstacles has not previously been achieved. SenSkin is also novel in that quantitative tangential force applied to the skin, such as that of the forearm or fingers, is measured. An infrared (IR) reflective sensor is used since its durability and inexpensiveness make it suitable for everyday human sensing purposes. The multiple sensors located on the two armbands allow the tangential and normal force applied to the skin dimension to be sensed. The input command is learned and recognized using a Support Vector Machine (SVM). Finally, we show an application in which this input method is implemented.","PeriodicalId":294436,"journal":{"name":"Proceedings of the 26th annual ACM symposium on User interface software and technology","volume":"18 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2013-10-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126741733","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
引用次数: 86