
Adjunct Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology — Latest Publications

AmbientLetter: Letter Presentation Method for Discreet Notification of Unknown Spelling when Handwriting
Xaver Tomihiro Toyozaki, Keita Watanabe
We propose a technique to support writing activity in a confidential manner with a pen-based device. Autocorrect and predictive conversion do not work when writing by hand, and looking up unknown spellings is sometimes embarrassing. Therefore, we propose AmbientLetter, which seamlessly and discreetly presents a forgotten spelling to the user in scenarios where handwriting is necessary. In this work, we describe the system structure and the technique used to conceal the user's acquisition of the information.
DOI: https://doi.org/10.1145/3266037.3266093 · Published 2018-10-11
Citations: 1
Head Pose Classification by using Body-Conducted Sound
Ryo Kamoshida, K. Takemura
Vibrations generated by human activity have been used for recognizing human behavior and developing user interfaces; however, it is difficult to estimate static poses that do not generate a vibration. This can be solved using active acoustic sensing, but that method is poorly suited to the head, where emitted vibrations can interfere with hearing. Therefore, we propose a method for estimating head poses using body-conducted sound that is naturally and regularly generated in the human body. A support vector classifier recognizes the vertical and horizontal directions of the head, and we confirmed the feasibility of the proposed method through experiments.
DOI: https://doi.org/10.1145/3266037.3266094 · Published 2018-10-11
Citations: 1
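The classification step the abstract describes — features extracted from body-conducted sound fed to a classifier — can be sketched as follows. This is a minimal illustrative stand-in: the band-energy features and the nearest-centroid rule are assumptions made for brevity, not the paper's actual feature set or its support vector machine.

```python
import math

def band_energies(samples, n_bands=4):
    """Split a mono body-conducted sound frame into equal chunks and
    compute RMS energy per chunk -- a crude stand-in for the spectral
    features a real pipeline would use."""
    n = len(samples) // n_bands
    feats = []
    for b in range(n_bands):
        chunk = samples[b * n:(b + 1) * n]
        feats.append(math.sqrt(sum(x * x for x in chunk) / len(chunk)))
    return feats

def nearest_centroid(feats, centroids):
    """Classify a feature vector by Euclidean distance to per-class
    centroids (a simplified substitute for the paper's SVM).
    `centroids` maps a head-pose label to a reference feature vector."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(centroids, key=lambda label: dist(feats, centroids[label]))
```

In a real system the centroids (or SVM support vectors) would be learned from labeled recordings of each head direction.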
Perceptual Switch for Gaze Selection
Jooyeon Lee, Jong-Seok Lee
One of the main drawbacks of fixation-based gaze interfaces is that, without dwell durations or unnatural eye movements, they are unable to distinguish top-down attention (selection, a gaze with a purpose) from stimulus-driven bottom-up attention (navigation, a stare without any intention). We found that using a bistable image, the Necker cube, as a button user interface (UI) helps remedy this limitation. When users switch between the two rivaling percepts of the Necker cube at will, unique eye movements are triggered, and these characteristics can be used to indicate a button press or selection action. In this paper, we (1) introduce the cognitive phenomenon called "percept switch" for gaze interaction, and (2) propose the "perceptual switch", a Necker cube user interface (UI) that uses the percept switch as the indication of a selection. Our preliminary experiment confirms that the perceptual switch can distinguish voluntary gaze selection from random navigation, and we discuss how the visual elements of the Necker cube, such as size and biased visual cues, could be adjusted for optimal use by individual users.
DOI: https://doi.org/10.1145/3266037.3266107 · Published 2018-10-11
Citations: 0
Trans-scale Playground: An Immersive Visual Telexistence System for Human Adaptation
Satoshi Hashizume, Akira Ishii, Kenta Suzuki, Kazuki Takazawa, Yoichi Ochiai
In this paper, we present a novel telexistence system and design methods for telexistence studies that explore spatial-scale deconstruction. Studies on the experience of dwarf-sized or giant-sized telepresence have been conducted over a period of many years. In this study, we discuss the scale of movements, image transformation, the technical components of telepresence robots, and user experiences of telexistence-based spatial transformations. We implemented two types of telepresence robots with an omnidirectional stereo camera setup for a spatial trans-scale experience: wheeled robots and quadcopters. These telepresence robots provide users with a trans-scale experience at distances ranging from 15 cm to 30 m. We conducted user studies for different camera positions on the robots and for different image transformation methods.
DOI: https://doi.org/10.1145/3266037.3266103 · Published 2018-10-11
Citations: 2
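The idea of a trans-scale camera setup can be illustrated with a simple parameter mapping: to give the user a dwarf- or giant-sized viewpoint, camera height and stereo baseline are scaled together so that depth cues stay proportionate. The function below is a hypothetical sketch; the default eye height and interpupillary distance are assumed values, not taken from the paper.

```python
def trans_scale_params(scale, eye_height_m=1.6, ipd_m=0.064):
    """Compute camera mounting parameters for a telepresence robot that
    gives the user a `scale`-times-size experience (e.g. 0.1 for a
    dwarf-sized viewpoint, 10.0 for a giant-sized one). Scaling the
    stereo baseline with the camera height keeps perceived depth
    consistent with the altered body scale."""
    return {
        "camera_height_m": eye_height_m * scale,
        "stereo_baseline_m": ipd_m * scale,
    }
```

For example, a 0.1-scale (dwarf-sized) setup would mount the stereo pair 16 cm above the ground with a 6.4 mm baseline.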
A Demonstration of VRSpinning: Exploring the Design Space of a 1D Rotation Platform to Increase the Perception of Self-Motion in VR
Thomas Dreja, Michael Rietzler, Teresa Hirzle, Jan Gugenheimer, Julian Frommel, E. Rukzio
In this demonstration we introduce VRSpinning, a seated locomotion approach based on stimulating the user's vestibular system with a rotational impulse to induce the perception of linear self-motion. Currently, most approaches for locomotion in VR either use concepts like teleportation for traveling longer distances or present a virtual motion that creates a visual-vestibular conflict, which is assumed to cause simulator sickness. With our platform we evaluated two designs, wiggle and impulse, that use the rotation of a motorized swivel chair to alleviate this conflict. Our evaluation showed that impulse, which uses short rotation bursts matched to the visual acceleration, can significantly reduce simulator sickness and increase the perception of self-motion compared to no physical motion.
DOI: https://doi.org/10.1145/3266037.3271645 · Published 2018-10-11
Citations: 0
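The impulse design — short chair-rotation bursts timed to onsets of visual acceleration — can be sketched as a simple mapping from per-frame virtual linear acceleration to chair angular velocity: the chair rotates only while the visual acceleration is strong, so the vestibular cue coincides with the visual onset. The gain and threshold below are illustrative assumptions, not values from the paper.

```python
def impulse_bursts(visual_accels, gain=5.0, threshold=1.0):
    """Map per-frame virtual linear acceleration (m/s^2) to chair
    rotation commands (deg/s). Frames whose acceleration magnitude
    stays below `threshold` produce no rotation, yielding short bursts
    aligned with visually perceived speed changes."""
    return [gain * a if abs(a) > threshold else 0.0 for a in visual_accels]
```

A real controller would also ramp each burst smoothly and respect the chair motor's torque limits.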
Designing Interactive Behaviours Beyond the Desktop
David Ledo
As interactions move beyond the desktop, interactive behaviours (the effects of actions as they happen, or once they happen) are becoming increasingly complex. This complexity is due to the variety of forms that objects might take, the different inputs and sensors capturing information, and the ability to create nuanced responses to those inputs. Current interaction design tools do not support much of this rich behaviour authoring. In my work I create prototyping tools that examine ways in which designers can author interactive behaviours. Thus far, I have created two prototyping tools, Pineal and Astral, which examine how to create physical forms based on a smart object's behaviour and how to reuse existing desktop infrastructures to author different kinds of interactive behaviour. I also contribute conceptual elements, such as how to create smart objects using mobile devices, their sensors, and outputs instead of custom electronic circuits, as well as evaluation strategies for HCI toolkit research, which directly inform my approach to evaluating my tools.
DOI: https://doi.org/10.1145/3266037.3266132 · Published 2018-10-11
Citations: 0
DynamicSlide: Reference-based Interaction Techniques for Slide-based Lecture Videos
Hyeungshik Jung, Hijung Valentina Shin, Juho Kim
Presentation slides play an important role in online lecture videos. Slides convey the main points of the lecture visually, while the instructor's narration adds detailed verbal explanations to each item in the slide. We call the link between a slide item and the corresponding part of the narration a reference. To assess the feasibility of reference-based interaction techniques for watching videos, we introduce DynamicSlide, a video processing system that automatically extracts references from slide-based lecture videos, together with a video player. The system incorporates a set of reference-based techniques: emphasizing the slide item currently being explained, enabling item-based navigation, and enabling item-based note-taking. Our pipeline correctly finds 79% of the references in a set of five videos containing 141 references. Results from a user study suggest that DynamicSlide's features improve the learner's video browsing and navigation experience.
DOI: https://doi.org/10.1145/3266037.3266089 · Published 2018-10-11
Citations: 8
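Reference extraction — linking each slide item to the narration segment that explains it — can be approximated by simple text similarity. The sketch below uses Jaccard token overlap between a slide item and transcript segments; this is a hypothetical stand-in for DynamicSlide's actual pipeline, which the paper does not reduce to this rule.

```python
def tokenize(text):
    """Lowercase, strip trailing punctuation, and drop very short words."""
    return {w.strip(".,").lower() for w in text.split() if len(w) > 2}

def link_references(slide_items, narration_segments):
    """For each slide item, pick the narration segment with the highest
    Jaccard token overlap, producing item -> segment 'reference' links."""
    links = {}
    for item in slide_items:
        item_tokens = tokenize(item)
        def score(seg):
            seg_tokens = tokenize(seg)
            union = item_tokens | seg_tokens
            return len(item_tokens & seg_tokens) / len(union) if union else 0.0
        links[item] = max(narration_segments, key=score)
    return links
```

With timestamps attached to each transcript segment, these links directly support the item-based navigation and highlighting the abstract describes.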
Crowd-AI Systems for Non-Visual Information Access in the Real World
Anhong Guo
The world is full of information, interfaces, and environments that are inaccessible to blind people. When navigating indoors, blind people are often unaware of key visual information such as posters, signs, and exit doors. When accessing specific interfaces, blind people cannot do so independently without at least first learning the layout and labeling it with sighted assistance. My work investigates interactive systems that integrate computer vision, on-demand crowdsourcing, and wearables to amplify the abilities of blind people, offering solutions for real-time environment and interface navigation. My work gives blind people more options for accessing information and increases their freedom in navigating the world.
DOI: https://doi.org/10.1145/3266037.3266133 · Published 2018-10-11
Citations: 2
Aalto Interface Metrics (AIM): A Service and Codebase for Computational GUI Evaluation
Antti Oulasvirta, Samuli De Pascale, Janin Koch, T. Langerak, Jussi P. P. Jokinen, Kashyap Todi, Markku Laine, Manoj Kristhombuge, Yuxi Zhu, Aliaksei Miniukovich, G. Palmas, T. Weinkauf
Aalto Interface Metrics (AIM) pools several empirically validated models and metrics of user perception and attention into an easy-to-use online service for the evaluation of graphical user interface (GUI) designs. Users input a GUI design via URL, and select from a list of 17 different metrics covering aspects ranging from visual clutter to visual learnability. AIM presents detailed breakdowns, visualizations, and statistical comparisons, enabling designers and practitioners to detect shortcomings and possible improvements. The web service and code repository are available at interfacemetrics.aalto.fi.
DOI: https://doi.org/10.1145/3266037.3266087 · Published 2018-10-11
Citations: 34
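As one example of the kind of metric such a service computes, a toy clutter measure can count the fraction of high-gradient pixels in a screenshot. This edge-density sketch is only illustrative: AIM's 17 metrics are the empirically validated models described in the abstract, not this simplification, and the 0.2 threshold is an arbitrary assumption.

```python
def edge_density(gray, threshold=0.2):
    """Fraction of pixels whose horizontal or vertical intensity
    difference to a neighbor exceeds `threshold` -- a toy stand-in for
    edge-congestion-style visual clutter metrics. `gray` is a 2D list
    of floats in [0, 1]; higher return values mean a busier layout."""
    rows, cols = len(gray), len(gray[0])
    edges = 0
    for r in range(rows):
        for c in range(cols):
            gx = abs(gray[r][c] - gray[r][c - 1]) if c > 0 else 0.0
            gy = abs(gray[r][c] - gray[r - 1][c]) if r > 0 else 0.0
            if max(gx, gy) > threshold:
                edges += 1
    return edges / (rows * cols)
```

A uniform screenshot scores 0.0, while a dense checkerboard-like layout approaches 1.0, matching the intuition that more edges signal more clutter.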
Designing Inherent Interactions on Wearable Devices
Teng Han
Wearable devices are becoming important computing devices for personal users and have shown promising applications in multiple domains. However, designing interactions on smartwears remains challenging, as their miniature form factors limit both input and output space. My thesis research proposes a new paradigm of Inherent Interaction on smartwears: seeking interaction opportunities in users' daily activities. This helps bridge the gap between novel smartwear interactions and the real-life experiences shared among users. This report introduces the concept of Inherent Interaction along with my previous and current explorations in this category.
DOI: https://doi.org/10.1145/3266037.3266130 · Published 2018-10-11
Citations: 0