
Proceedings of the ACM Symposium on User Interface Software and Technology: Latest Publications

Enabling Auto-Correction on Soft Braille Keyboard.
Dan Zhang, Yan Ma, Glenn Dausch, William H Seiple, David Xianfeng Gu, I V Ramakrishnan, Xiaojun Bi

A soft Braille keyboard is a graphical representation of the Braille writing system on smartphones. It provides an essential text input method for visually impaired individuals, but accuracy and efficiency remain significant challenges. We present an intelligent Braille keyboard with auto-correction capability that uses optimal transportation theory to estimate the distances between touch input and Braille patterns and combines these distances with a language model estimate of word probability. The proposed system was evaluated through both simulations and user studies. In a touch interaction simulation on an Android phone and an iPhone, our intelligent Braille keyboard demonstrated superior error-correction performance compared to the Android Braille keyboard with proofreading suggestions and the iPhone Braille keyboard with spelling suggestions: under high typing noise, it reduced the error rate from 55.81% on Android and 57.13% on iPhone to 19.80%. Furthermore, in a user study with 12 legally blind participants, the intelligent Braille keyboard reduced word error rate (WER) by 59.5% (from 42.53% to 17.28%) with a slight drop of 0.74 words per minute (WPM), compared to a conventional Braille keyboard without auto-correction. These findings suggest that our approach has the potential to greatly improve the typing experience for Braille users on touchscreen devices.
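
The decoding approach the abstract outlines is a Bayesian combination of a spatial likelihood and a language prior. Below is a minimal sketch of that idea, not the authors' implementation: for equal-size point sets with uniform weights, the optimal-transport distance between touch points and a Braille dot template reduces to a minimum-cost assignment, and the resulting cost is combined with a unigram prior. The dot coordinates, candidate table, `spatial_weight`, and prior are illustrative assumptions, and the sketch scores single Braille cells rather than whole words as the paper's decoder does.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical 6-dot Braille cell geometry in normalized screen coordinates
# (dots 1-3 in the left column, 4-6 in the right column).
DOT_POSITIONS = {
    1: (0.25, 0.2), 2: (0.25, 0.5), 3: (0.25, 0.8),
    4: (0.75, 0.2), 5: (0.75, 0.5), 6: (0.75, 0.8),
}

def transport_cost(touches, dots):
    """Optimal-transport distance between equal-size uniform point sets.

    With uniform weights and equal cardinality, optimal transport reduces
    to a minimum-cost assignment, solved exactly by the Hungarian method.
    """
    touches, dots = np.asarray(touches, float), np.asarray(dots, float)
    cost = np.linalg.norm(touches[:, None, :] - dots[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

def score_candidates(touches, candidates, prior, spatial_weight=5.0):
    """Rank candidate characters by spatial likelihood x language prior."""
    scores = {}
    for char, dot_ids in candidates.items():
        if len(dot_ids) != len(touches):
            continue  # a full decoder would also model missed/extra touches
        dots = [DOT_POSITIONS[d] for d in dot_ids]
        log_spatial = -spatial_weight * transport_cost(touches, dots)
        scores[char] = log_spatial + np.log(prior.get(char, 1e-6))
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

For example, `score_candidates([(0.26, 0.22), (0.24, 0.52)], {'a': [1], 'b': [1, 2]}, {'a': 0.6, 'b': 0.4})` ranks 'b' first, since 'a', with a single dot, fails the cardinality check.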

{"title":"Enabling Auto-Correction on Soft Braille Keyboard.","authors":"Dan Zhang, Yan Ma, Glenn Dausch, William H Seiple, David Xianfeng Gu, I V Ramakrishnan, Xiaojun Bi","doi":"10.1145/3746059.3747699","DOIUrl":"10.1145/3746059.3747699","url":null,"abstract":"<p><p>A soft Braille keyboard is a graphical representation of the Braille writing system on smartphones. It provides an essential text input method for visually impaired individuals, but accuracy and efficiency remain significant challenges. We present an intelligent Braille keyboard with auto-correction ability, which uses optimal transportation theory to estimate the distances between touch input and Braille patterns, and combines it with a language model to estimate the probability of entering words. The proposed system was evaluated through both simulations and user studies. In a touch interaction simulation on an Android phone and an iPhone, our intelligent Braille keyboard demonstrated superior error correction performance compared to the Android Braille keyboard with proofreading suggestions and the iPhone Braille keyboard with spelling suggestions. It reduced the error rate from 55.81% on Android and 57.13% on iPhone to 19.80% under high typing noise. Furthermore, in a user study of 12 participants who are legally blind, the intelligent Braille keyboard reduced word error rate (WER) by 59.5% (42.53% to 17.28%) with a slight drop of 0.74 words per minute (WPM), compared to a conventional Braille keyboard without auto-correction. These findings suggest that our approach has the potential to greatly improve the typing experience for Braille users on touchscreen devices.</p>","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"2025 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2025-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12723526/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145829114","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision.
Jaewook Lee, Andrew D Tjahjadi, Jiho Kim, Junpu Yu, Minji Park, Jiawen Zhang, Jon E Froehlich, Yapeng Tian, Yuhang Zhao

Cooking is a central activity of daily living, supporting independence as well as mental and physical health. However, prior work has highlighted key barriers for people with low vision (LV) to cook, particularly around safely interacting with tools, such as sharp knives or hot pans. Drawing on recent advancements in computer vision (CV), we present CookAR, a head-mounted AR system with real-time object affordance augmentations to support safe and efficient interactions with kitchen tools. To design and implement CookAR, we collected and annotated the first egocentric dataset of kitchen tool affordances, fine-tuned an affordance segmentation model, and developed an AR system with a stereo camera to generate visual augmentations. To validate CookAR, we conducted a technical evaluation of our fine-tuned model as well as a qualitative lab study with 10 LV participants to identify suitable augmentation designs. Our technical evaluation demonstrates that our model outperforms the baseline on our tool affordance dataset, while our user study indicates a preference for affordance augmentations over traditional whole-object augmentations.
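
As a rough illustration of the augmentation step, not CookAR's actual rendering pipeline, the sketch below alpha-blends per-class affordance masks from a segmentation model onto a video frame; the class ids and colors are assumptions for illustration.

```python
import numpy as np

# Class ids and colors are illustrative assumptions; CookAR's actual label
# set comes from its egocentric kitchen-tool affordance dataset.
GRASPABLE, HAZARDOUS = 1, 2
OVERLAY_COLORS = {GRASPABLE: (0, 255, 0), HAZARDOUS: (255, 0, 0)}

def overlay_affordances(frame, mask, alpha=0.45):
    """Alpha-blend per-class affordance regions onto an RGB frame.

    frame: (H, W, 3) uint8 image; mask: (H, W) integer class map produced
    by an affordance segmentation model.
    """
    out = frame.astype(np.float32)
    for cls, color in OVERLAY_COLORS.items():
        region = mask == cls
        out[region] = (1 - alpha) * out[region] + alpha * np.asarray(color, np.float32)
    return out.astype(np.uint8)
```

In a full pipeline the mask would come from the fine-tuned affordance segmentation model running on each stereo-camera frame.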

{"title":"CookAR: Affordance Augmentations in Wearable AR to Support Kitchen Tool Interactions for People with Low Vision.","authors":"Jaewook Lee, Andrew D Tjahjadi, Jiho Kim, Junpu Yu, Minji Park, Jiawen Zhang, Jon E Froehlich, Yapeng Tian, Yuhang Zhao","doi":"10.1145/3654777.3676449","DOIUrl":"10.1145/3654777.3676449","url":null,"abstract":"<p><p>Cooking is a central activity of daily living, supporting independence as well as mental and physical health. However, prior work has highlighted key barriers for people with low vision (LV) to cook, particularly around safely interacting with tools, such as sharp knives or hot pans. Drawing on recent advancements in computer vision (CV), we present <i>CookAR</i>, a head-mounted AR system with real-time object affordance augmentations to support safe and efficient interactions with kitchen tools. To design and implement CookAR, we collected and annotated the first egocentric dataset of kitchen tool affordances, fine-tuned an affordance segmentation model, and developed an AR system with a stereo camera to generate visual augmentations. To validate CookAR, we conducted a technical evaluation of our fine-tuned model as well as a qualitative lab study with 10 LV participants for suitable augmentation design. Our technical evaluation demonstrates that our model outperforms the baseline on our tool affordance dataset, while our user study indicates a preference for affordance augmentations over the traditional whole object augmentations.</p>","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12279023/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"144683778","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Accessible Gesture Typing on Smartphones for People with Low Vision.
Dan Zhang, William H Seiple, Zhi Li, I V Ramakrishnan, Vikas Ashok, Xiaojun Bi

While gesture typing is widely adopted on touchscreen keyboards, its support for low vision users is limited. We have designed and implemented two keyboard prototypes, layout-magnified and key-magnified keyboards, to enable gesture typing for people with low vision. Both keyboards facilitate uninterrupted access to all keys while the screen magnifier is active, allowing people with low vision to input text with one continuous stroke. Furthermore, we have created a kinematics-based decoding algorithm to accommodate the typing behavior of people with low vision. This algorithm can decode the gesture input even if the gesture trace deviates from a pre-defined word template, and the starting position of the gesture is far from the starting letter of the target word. Our user study showed that the key-magnified keyboard achieved 5.28 words per minute, 27.5% faster than a conventional gesture typing keyboard with voice feedback.
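
The paper's kinematics-based decoder is described only at a high level here, so the sketch below shows a generic shape-matching baseline in its place: the gesture trace is resampled to a fixed number of points and centroid-normalized before being compared to a word template, which makes the match insensitive to where on the keyboard the gesture starts, one of the deviations the abstract says the decoder must tolerate. Function names and the resampling resolution are illustrative.

```python
import numpy as np

def resample(points, n=32):
    """Resample a 2D polyline to n points spaced evenly along its arc length."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])
    t = np.linspace(0.0, cum[-1], n)
    return np.column_stack([np.interp(t, cum, pts[:, i]) for i in range(2)])

def shape_distance(trace, template, n=32):
    """Mean pointwise distance between centroid-normalized resampled paths.

    Subtracting each path's centroid makes the match tolerant of where on
    the keyboard the gesture starts.
    """
    a, b = resample(trace, n), resample(template, n)
    a -= a.mean(axis=0)
    b -= b.mean(axis=0)
    return float(np.linalg.norm(a - b, axis=1).mean())
```

A decoder built on this would compute `shape_distance(trace, template)` against each vocabulary word's key-center polyline and return the nearest candidates.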

{"title":"Accessible Gesture Typing on Smartphones for People with Low Vision.","authors":"Dan Zhang, William H Seiple, Zhi Li, I V Ramakrishnan, Vikas Ashok, Xiaojun Bi","doi":"10.1145/3654777.3676447","DOIUrl":"10.1145/3654777.3676447","url":null,"abstract":"<p><p>While gesture typing is widely adopted on touchscreen keyboards, its support for low vision users is limited. We have designed and implemented two keyboard prototypes, layout-magnified and key-magnified keyboards, to enable gesture typing for people with low vision. Both keyboards facilitate uninterrupted access to all keys while the screen magnifier is active, allowing people with low vision to input text with one continuous stroke. Furthermore, we have created a kinematics-based decoding algorithm to accommodate the typing behavior of people with low vision. This algorithm can decode the gesture input even if the gesture trace deviates from a pre-defined word template, and the starting position of the gesture is far from the starting letter of the target word. Our user study showed that the key-magnified keyboard achieved 5.28 words per minute, 27.5% faster than a conventional gesture typing keyboard with voice feedback.</p>","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"2024 ","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11707649/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142960116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Voice and Touch Based Error-tolerant Multimodal Text Editing and Correction for Smartphones.
Maozheng Zhao, Wenzhe Cui, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi

Editing operations such as cut, copy, paste, and correcting errors in typed text are often tedious and challenging to perform on smartphones. In this paper, we present VT, a voice and touch-based multi-modal text editing and correction method for smartphones. To edit text with VT, the user glides over a text fragment with a finger and dictates a command, such as "bold" to change the format of the fragment, or the user can tap inside a text area and speak a command such as "highlight this paragraph" to edit the text. For text correction, the user taps near the erroneous text fragment and dictates the new content for substitution or insertion. VT combines touch and voice inputs with language context, such as a language model and phrase similarity, to infer a user's editing intention, which lets it handle ambiguities and noisy input signals. This is a significant advantage over existing error-correction methods (e.g., iOS's Voice Control), which require precise cursor control or text selection. Our evaluation shows that VT significantly improves the efficiency of text editing and text correction on smartphones over the touch-only method and iOS's Voice Control method. Our user studies showed that VT reduced text editing time by 30.80% and text correction time by 29.97% over the touch-only method, and reduced text editing time by 30.81% and text correction time by 47.96% over iOS's Voice Control method.
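
One way to picture the inference step, as a hedged sketch rather than VT's actual model: score each on-screen word as the correction target by combining a Gaussian likelihood around the tap point with the string similarity between the word and the dictated replacement. The `sigma` value and the use of `difflib` similarity are illustrative stand-ins for the paper's richer language-context signals (language model and phrase similarity).

```python
import difflib
import math

def edit_target_scores(words, word_centers, tap_xy, spoken, sigma=60.0):
    """Score each on-screen word as the likely correction target.

    Combines (i) a Gaussian touch likelihood centered on the tap point
    (sigma in pixels, an assumed value) with (ii) string similarity
    between the word and the dictated replacement.
    """
    ranked = []
    for word, (cx, cy) in zip(words, word_centers):
        d2 = (cx - tap_xy[0]) ** 2 + (cy - tap_xy[1]) ** 2
        log_touch = -d2 / (2.0 * sigma ** 2)
        sim = difflib.SequenceMatcher(None, word.lower(), spoken.lower()).ratio()
        ranked.append((word, log_touch + math.log(sim + 1e-6)))
    return sorted(ranked, key=lambda kv: -kv[1])
```

Because both signals contribute log scores, a tap that lands slightly off the erroneous word can still win if the dictated replacement closely matches it, which is the error tolerance the abstract describes.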

{"title":"Voice and Touch Based Error-tolerant Multimodal Text Editing and Correction for Smartphones.","authors":"Maozheng Zhao,&nbsp;Wenzhe Cui,&nbsp;I V Ramakrishnan,&nbsp;Shumin Zhai,&nbsp;Xiaojun Bi","doi":"10.1145/3472749.3474742","DOIUrl":"https://doi.org/10.1145/3472749.3474742","url":null,"abstract":"<p><p>Editing operations such as cut, copy, paste, and correcting errors in typed text are often tedious and challenging to perform on smartphones. In this paper, we present VT, a voice and touch-based multi-modal text editing and correction method for smartphones. To edit text with VT, the user glides over a text fragment with a finger and dictates a command, such as \"bold\" to change the format of the fragment, or the user can tap inside a text area and speak a command such as \"highlight this paragraph\" to edit the text. For text correcting, the user taps approximately at the area of erroneous text fragment and dictates the new content for substitution or insertion. VT combines touch and voice inputs with language context such as language model and phrase similarity to infer a user's editing intention, which can handle ambiguities and noisy input signals. It is a great advantage over the existing error correction methods (e.g., iOS's Voice Control) which require precise cursor control or text selection. Our evaluation shows that VT significantly improves the efficiency of text editing and text correcting on smartphones over the touch-only method and the iOS's Voice Control method. Our user studies showed that VT reduced the text editing time by 30.80%, and text correcting time by 29.97% over the touch-only method. VT reduced the text editing time by 30.81%, and text correcting time by 47.96% over the iOS's Voice Control method.</p>","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"2021 ","pages":"162-178"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/02/ef/nihms-1777404.PMC8845054.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39930110","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
Modeling Touch Point Distribution with Rotational Dual Gaussian Model.
Yan Ma, Shumin Zhai, I V Ramakrishnan, Xiaojun Bi

Touch point distribution models are important tools for designing touchscreen interfaces. In this paper, we investigate how the finger movement direction affects the touch point distribution, and how to account for it in modeling. We propose the Rotational Dual Gaussian model, a refinement and generalization of the Dual Gaussian model, to account for the finger movement direction in predicting touch point distribution. In this model, the major axis of the prediction ellipse of the touch point distribution is along the finger movement direction, and the minor axis is perpendicular to the finger movement direction. We also propose using projected target width and height, in lieu of nominal target width and height to model touch point distribution. Evaluation on three empirical datasets shows that the new model reflects the observation that the touch point distribution is elongated along the finger movement direction, and outperforms the original Dual Gaussian Model in all prediction tests. Compared with the original Dual Gaussian model, the Rotational Dual Gaussian model reduces the RMSE of touch error rate prediction from 8.49% to 4.95%, and more accurately predicts the touch point distribution in target acquisition. Using the Rotational Dual Gaussian model can also improve the soft keyboard decoding accuracy on smartwatches.
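
The geometric core of the model can be written down directly from the abstract: the touch-point covariance is a diagonal covariance rotated so that its major axis follows the finger movement direction. The sketch below builds that covariance and estimates a miss rate for a rectangular target by Monte Carlo; how the variances are tied to (projected) target width and height is part of the fitted model and is left as free parameters here.

```python
import numpy as np

def rotated_covariance(sigma_major, sigma_minor, movement_angle):
    """Covariance whose major axis follows the finger movement direction.

    movement_angle is in radians; sigma_major/sigma_minor are standard
    deviations along and across the movement direction (free parameters
    here; the fitted model ties them to projected target size).
    """
    c, s = np.cos(movement_angle), np.sin(movement_angle)
    R = np.array([[c, -s], [s, c]])
    return R @ np.diag([sigma_major**2, sigma_minor**2]) @ R.T

def touch_error_rate(center, width, height, cov, n=100_000, seed=0):
    """Monte Carlo miss rate for a rectangular target under a bivariate
    Gaussian touch distribution centered on the target."""
    rng = np.random.default_rng(seed)
    pts = rng.multivariate_normal(center, cov, size=n)
    inside = (np.abs(pts[:, 0] - center[0]) <= width / 2) & \
             (np.abs(pts[:, 1] - center[1]) <= height / 2)
    return 1.0 - inside.mean()
```

For a rightward stroke (`movement_angle=0`) the ellipse elongates along x, so widening the target in x lowers the predicted error rate faster than widening it in y, matching the observation that touch points spread along the movement direction.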

{"title":"Modeling Touch Point Distribution with Rotational Dual Gaussian Model.","authors":"Yan Ma, Shumin Zhai, I V Ramakrishnan, Xiaojun Bi","doi":"10.1145/3472749.3474816","DOIUrl":"10.1145/3472749.3474816","url":null,"abstract":"<p><p>Touch point distribution models are important tools for designing touchscreen interfaces. In this paper, we investigate how the finger movement direction affects the touch point distribution, and how to account for it in modeling. We propose the Rotational Dual Gaussian model, a refinement and generalization of the Dual Gaussian model, to account for the finger movement direction in predicting touch point distribution. In this model, the major axis of the prediction ellipse of the touch point distribution is along the finger movement direction, and the minor axis is perpendicular to the finger movement direction. We also propose using <i>projected</i> target width and height, in lieu of nominal target width and height to model touch point distribution. Evaluation on three empirical datasets shows that the new model reflects the observation that the touch point distribution is elongated along the finger movement direction, and outperforms the original Dual Gaussian Model in all prediction tests. Compared with the original Dual Gaussian model, the Rotational Dual Gaussian model reduces the RMSE of touch error rate prediction from 8.49% to 4.95%, and more accurately predicts the touch point distribution in target acquisition. Using the Rotational Dual Gaussian model can also improve the soft keyboard decoding accuracy on smartwatches.</p>","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"2021 ","pages":"1197-1209"},"PeriodicalIF":0.0,"publicationDate":"2021-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/e0/88/nihms-1777409.PMC8853834.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39941356","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UIST '21: The Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021
{"title":"UIST '21: The Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021","authors":"","doi":"10.1145/3474349","DOIUrl":"https://doi.org/10.1145/3474349","url":null,"abstract":"","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"85 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89532699","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021
{"title":"UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021","authors":"","doi":"10.1145/3472749","DOIUrl":"https://doi.org/10.1145/3472749","url":null,"abstract":"","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"92 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2021-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72715627","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Modeling Two Dimensional Touch Pointing.
Yu-Jung Ko, Hang Zhao, Yoonsang Kim, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi

Modeling touch pointing is essential to touchscreen interface development and research, as pointing is one of the most basic and common touch actions users perform on touchscreen devices. Finger-Fitts Law [4] revised the conventional Fitts' law into a 1D (one-dimensional) pointing model for finger touch by explicitly accounting for the fat finger ambiguity (absolute error) problem which was unaccounted for in the original Fitts' law. We generalize Finger-Fitts law to 2D touch pointing by solving two critical problems. First, we extend two of the most successful 2D Fitts law forms to accommodate finger ambiguity. Second, we discovered that using nominal target width and height is a conceptually simple yet effective approach for defining amplitude and directional constraints for 2D touch pointing across different movement directions. The evaluation shows our derived 2D Finger-Fitts law models can be both principled and powerful. Specifically, they outperformed the existing 2D Fitts' laws, as measured by the regression coefficient and model selection information criteria (e.g., Akaike Information Criterion) considering the number of parameters. Finally, 2D Finger-Fitts laws also advance our understanding of touch pointing and thereby serve as the basis for touch interface designs.
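
For context, one of the classic 2D forms the abstract refers to is Accot and Zhai's weighted-Euclidean model, ID = log2(sqrt((A/W)^2 + eta*(A/H)^2) + 1), where A is movement amplitude and W, H are target width and height. The sketch below fits movement time MT = a + b*ID to data by least squares while grid-searching eta; it deliberately omits the finger-ambiguity (absolute error) correction that distinguishes the 2D Finger-Fitts models, since the abstract does not give that form.

```python
import numpy as np

def weighted_euclidean_id(A, W, H, eta):
    """Accot-Zhai weighted-Euclidean 2D index of difficulty (in bits)."""
    return np.log2(np.sqrt((A / W) ** 2 + eta * (A / H) ** 2) + 1.0)

def fit_fitts_2d(A, W, H, MT, etas=np.linspace(0.1, 2.0, 39)):
    """Fit MT = a + b * ID by least squares, grid-searching eta."""
    A, W, H, MT = (np.asarray(v, float) for v in (A, W, H, MT))
    best = None
    for eta in etas:
        ID = weighted_euclidean_id(A, W, H, eta)
        X = np.column_stack([np.ones_like(ID), ID])  # design matrix [1, ID]
        coef, *_ = np.linalg.lstsq(X, MT, rcond=None)
        sse = float(((X @ coef - MT) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, eta, coef[0], coef[1])
    return {"eta": best[1], "a": best[2], "b": best[3], "sse": best[0]}
```

Model comparison as described in the abstract would then penalize the extra parameters, e.g. via the Akaike Information Criterion, rather than ranking by regression fit alone.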

{"title":"Modeling Two Dimensional Touch Pointing.","authors":"Yu-Jung Ko, Hang Zhao, Yoonsang Kim, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi","doi":"10.1145/3379337.3415871","DOIUrl":"10.1145/3379337.3415871","url":null,"abstract":"<p><p>Modeling touch pointing is essential to touchscreen interface development and research, as pointing is one of the most basic and common touch actions users perform on touchscreen devices. Finger-Fitts Law [4] revised the conventional Fitts' law into a 1D (one-dimensional) pointing model for finger touch by explicitly accounting for the fat finger ambiguity (absolute error) problem which was unaccounted for in the original Fitts' law. We generalize Finger-Fitts law to 2D touch pointing by solving two critical problems. First, we extend two of the most successful 2D Fitts law forms to accommodate finger ambiguity. Second, we discovered that using nominal target width and height is a conceptually simple yet effective approach for defining amplitude and directional constraints for 2D touch pointing across different movement directions. The evaluation shows our derived 2D Finger-Fitts law models can be both principled and powerful. Specifically, they outperformed the existing 2D Fitts' laws, as measured by the regression coefficient and model selection information criteria (e.g., Akaike Information Criterion) considering the number of parameters. Finally, 2D Finger-Fitts laws also advance our understanding of touch pointing and thereby serve as the basis for touch interface designs.</p>","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"2020 ","pages":"858-868"},"PeriodicalIF":0.0,"publicationDate":"2020-10-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8318005/pdf/nihms-1666148.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"39258978","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
UIST '20 Adjunct: The 33rd Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 20-23, 2020
{"title":"UIST '20 Adjunct: The 33rd Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 20-23, 2020","authors":"","doi":"10.1145/3379350","DOIUrl":"https://doi.org/10.1145/3379350","url":null,"abstract":"","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"9 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2020-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77239870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
Using Personal Devices to Facilitate Multi-user Interaction with Large Display Walls
Ulrich von Zadow
Large display walls and personal devices such as Smartphones have complementary characteristics. While large displays are well-suited to multi-user interaction (potentially with complex data), they are inherently public and generally cannot present an interface adapted to the individual user. However, effective multi-user interaction in many cases depends on the ability to tailor the interface, to interact without interfering with others, and to access and possibly share private data. The combination with personal devices facilitates exactly this. Multi-device interaction concepts enable data transfer and include moving parts of UIs to the personal device. In addition, hand-held devices can be used to present personal views to the user. Our work will focus on using personal devices for true multi-user interaction with interactive display walls. It will cover appropriate interaction techniques as well as the technical foundation and will be validated with corresponding application cases.
{"title":"Using Personal Devices to Facilitate Multi-user Interaction with Large Display Walls","authors":"Ulrich von Zadow","doi":"10.1145/2815585.2815592","DOIUrl":"https://doi.org/10.1145/2815585.2815592","url":null,"abstract":"Large display walls and personal devices such as Smartphones have complementary characteristics. While large displays are well-suited to multi-user interaction (potentially with complex data), they are inherently public and generally cannot present an interface adapted to the individual user. However, effective multi-user interaction in many cases depends on the ability to tailor the interface, to interact without interfering with others, and to access and possibly share private data. The combination with personal devices facilitates exactly this. Multi-device interaction concepts enable data transfer and include moving parts of UIs to the personal device. In addition, hand-held devices can be used to present personal views to the user. Our work will focus on using personal devices for true multi-user interaction with interactive display walls. It will cover appropriate interaction techniques as well as the technical foundation and will be validated with corresponding application cases.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"14 1","pages":"25-28"},"PeriodicalIF":0.0,"publicationDate":"2015-11-06","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89889664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4