
Latest publications: Proceedings of the ACM Symposium on User Interface Software and Technology

Accessible Gesture Typing on Smartphones for People with Low Vision.
Dan Zhang, William H Seiple, Zhi Li, I V Ramakrishnan, Vikas Ashok, Xiaojun Bi

While gesture typing is widely adopted on touchscreen keyboards, its support for low vision users is limited. We have designed and implemented two keyboard prototypes, layout-magnified and key-magnified keyboards, to enable gesture typing for people with low vision. Both keyboards facilitate uninterrupted access to all keys while the screen magnifier is active, allowing people with low vision to input text with one continuous stroke. Furthermore, we have created a kinematics-based decoding algorithm to accommodate the typing behavior of people with low vision. This algorithm can decode the gesture input even if the gesture trace deviates from a pre-defined word template, and the starting position of the gesture is far from the starting letter of the target word. Our user study showed that the key-magnified keyboard achieved 5.28 words per minute, 27.5% faster than a conventional gesture typing keyboard with voice feedback.
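The decoding property described in the abstract — matching a gesture on shape even when its starting position is far from the target word's first letter — can be sketched minimally as follows. This is an illustrative reconstruction, not the paper's kinematics-based decoder; the resampling length and the centroid-based offset removal are assumptions introduced here.

```python
import numpy as np

def resample(points, n=32):
    """Resample a 2D stroke to n equidistant points along its arc length."""
    pts = np.asarray(points, dtype=float)
    seg = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    dist = np.concatenate([[0.0], np.cumsum(seg)])
    targets = np.linspace(0.0, dist[-1], n)
    return np.column_stack([
        np.interp(targets, dist, pts[:, 0]),
        np.interp(targets, dist, pts[:, 1]),
    ])

def shape_distance(gesture, template, n=32):
    """Mean point-wise distance after removing the translation offset,
    so a trace that starts far from the template's first letter can
    still match the template on shape alone."""
    g = resample(gesture, n)
    t = resample(template, n)
    g -= g.mean(axis=0)   # discard absolute screen position
    t -= t.mean(axis=0)
    return float(np.linalg.norm(g - t, axis=1).mean())
```

A gesture drawn far from a template but with the same shape scores a near-zero distance, while a differently shaped gesture scores high; a real decoder would combine such a shape term with kinematic features and a language model.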

DOI: 10.1145/3654777.3676447. Published 2024. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11707649/pdf/
Citations: 0
Voice and Touch Based Error-tolerant Multimodal Text Editing and Correction for Smartphones.
Maozheng Zhao, Wenzhe Cui, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi

Editing operations such as cut, copy, paste, and correcting errors in typed text are often tedious and challenging to perform on smartphones. In this paper, we present VT, a voice- and touch-based multimodal text editing and correction method for smartphones. To edit text with VT, the user glides over a text fragment with a finger and dictates a command, such as "bold" to change the format of the fragment, or taps inside a text area and speaks a command such as "highlight this paragraph". To correct text, the user taps approximately on the erroneous text fragment and dictates the new content for substitution or insertion. VT combines touch and voice inputs with language context, such as a language model and phrase similarity, to infer the user's editing intention, handling ambiguities and noisy input signals. This is a major advantage over existing error-correction methods (e.g., iOS's Voice Control), which require precise cursor control or text selection. Our evaluation shows that VT significantly improves the efficiency of text editing and correcting on smartphones over both a touch-only method and iOS's Voice Control. Our user studies showed that VT reduced text-editing time by 30.80% and text-correcting time by 29.97% compared with the touch-only method, and reduced text-editing time by 30.81% and text-correcting time by 47.96% compared with iOS's Voice Control.
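VT's fusion of a noisy tap position with language context can be illustrated with a minimal ranking sketch. The isotropic Gaussian touch model, the sigma value, the weights, and the `lm_logprob` scores are hypothetical placeholders introduced here, not the paper's actual model:

```python
import math

def gaussian_log_likelihood(tap, center, sigma=40.0):
    """Log-likelihood of a tap (x, y) given a word's on-screen center,
    modeling touch noise as an isotropic Gaussian (sigma in pixels is
    an illustrative value, not taken from the paper)."""
    dx, dy = tap[0] - center[0], tap[1] - center[1]
    return -(dx * dx + dy * dy) / (2 * sigma * sigma)

def rank_targets(tap, candidates, lm_logprob, w_touch=1.0, w_lm=1.0):
    """Rank candidate on-screen words to edit by combining a touch-location
    score with a language-model score, so an imprecise tap can still
    resolve to the intended word."""
    scored = [
        (w_touch * gaussian_log_likelihood(tap, pos) + w_lm * lm_logprob[word], word)
        for word, pos in candidates
    ]
    return [word for score, word in sorted(scored, reverse=True)]
```

With equal language-model scores, the word nearest the tap wins; a strong language-model preference can override a small positional difference, which is how ambiguous taps become tolerable.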

DOI: 10.1145/3472749.3474742. Published October 2021, pp. 162-178. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/02/ef/nihms-1777404.PMC8845054.pdf
Citations: 5
Modeling Touch Point Distribution with Rotational Dual Gaussian Model.
Yan Ma, Shumin Zhai, I V Ramakrishnan, Xiaojun Bi

Touch point distribution models are important tools for designing touchscreen interfaces. In this paper, we investigate how the finger movement direction affects the touch point distribution, and how to account for it in modeling. We propose the Rotational Dual Gaussian model, a refinement and generalization of the Dual Gaussian model, to account for the finger movement direction in predicting touch point distribution. In this model, the major axis of the prediction ellipse of the touch point distribution is along the finger movement direction, and the minor axis is perpendicular to the finger movement direction. We also propose using projected target width and height, in lieu of nominal target width and height to model touch point distribution. Evaluation on three empirical datasets shows that the new model reflects the observation that the touch point distribution is elongated along the finger movement direction, and outperforms the original Dual Gaussian Model in all prediction tests. Compared with the original Dual Gaussian model, the Rotational Dual Gaussian model reduces the RMSE of touch error rate prediction from 8.49% to 4.95%, and more accurately predicts the touch point distribution in target acquisition. Using the Rotational Dual Gaussian model can also improve the soft keyboard decoding accuracy on smartwatches.
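The model's core construction — a covariance matrix whose major axis follows the finger movement direction — can be sketched as follows. The per-axis variances use the Dual Gaussian form sigma^2 = a + b * size^2 with projected target dimensions, but the constants a and b here are illustrative, not the paper's fitted values:

```python
import numpy as np

def touch_covariance(theta, w_proj, h_proj, a=1.0, b=0.01):
    """Covariance of the predicted touch-point distribution, with the
    major axis aligned to the finger movement direction theta (radians).
    w_proj is the projected target size along the movement direction,
    h_proj the projected size perpendicular to it."""
    var_major = a + b * w_proj ** 2   # spread along movement direction
    var_minor = a + b * h_proj ** 2   # spread perpendicular to it
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]])  # rotate ellipse axes by theta
    return rot @ np.diag([var_major, var_minor]) @ rot.T
```

For theta = 0 the x-variance dominates; rotating theta by 90 degrees moves the elongation to the y-axis, matching the observation that touch points spread along the movement direction.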

DOI: 10.1145/3472749.3474816. Published October 2021, pp. 1197-1209. Open-access PDF: https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_pdf/e0/88/nihms-1777409.PMC8853834.pdf
Citations: 0
UIST '21: The Adjunct Publication of the 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021
DOI: 10.1145/3474349. Published 2021.
Citations: 0
UIST '21: The 34th Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 10-14, 2021
DOI: 10.1145/3472749. Published 2021.
Citations: 0
Modeling Two Dimensional Touch Pointing.
Yu-Jung Ko, Hang Zhao, Yoonsang Kim, I V Ramakrishnan, Shumin Zhai, Xiaojun Bi

Modeling touch pointing is essential to touchscreen interface development and research, as pointing is one of the most basic and common touch actions users perform on touchscreen devices. Finger-Fitts Law [4] revised the conventional Fitts' law into a 1D (one-dimensional) pointing model for finger touch by explicitly accounting for the fat finger ambiguity (absolute error) problem which was unaccounted for in the original Fitts' law. We generalize Finger-Fitts law to 2D touch pointing by solving two critical problems. First, we extend two of the most successful 2D Fitts law forms to accommodate finger ambiguity. Second, we discovered that using nominal target width and height is a conceptually simple yet effective approach for defining amplitude and directional constraints for 2D touch pointing across different movement directions. The evaluation shows our derived 2D Finger-Fitts law models can be both principled and powerful. Specifically, they outperformed the existing 2D Fitts' laws, as measured by the regression coefficient and model selection information criteria (e.g., Akaike Information Criterion) considering the number of parameters. Finally, 2D Finger-Fitts laws also advance our understanding of touch pointing and thereby serve as the basis for touch interface designs.
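For comparison, the classic Fitts index of difficulty and one common Finger-Fitts-style 1D formulation, which removes the absolute "fat finger" error from the observed touch spread, can be sketched as follows. The constants follow the standard 4.133σ (i.e. sqrt(2πe)·σ) effective-width convention; treat this as a sketch of the 1D idea the paper generalizes, not an implementation of its 2D models:

```python
import math

def fitts_id(amplitude, width):
    """Classic Fitts' index of difficulty (Shannon formulation)."""
    return math.log2(amplitude / width + 1)

def finger_fitts_id(amplitude, sigma_observed, sigma_absolute):
    """Finger-Fitts-style index of difficulty: derive the effective target
    width from the touch-point spread after subtracting the absolute-error
    (finger ambiguity) component sigma_absolute, then apply the
    information-theoretic effective width sqrt(2*pi*e) * sigma."""
    sigma_rel = math.sqrt(max(sigma_observed ** 2 - sigma_absolute ** 2, 1e-12))
    effective_width = math.sqrt(2 * math.pi * math.e) * sigma_rel
    return math.log2(amplitude / effective_width + 1)
```

Removing the absolute-error component shrinks the effective width and therefore raises the index of difficulty relative to the uncorrected estimate, which is the intuition behind accounting for finger ambiguity.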

DOI: 10.1145/3379337.3415871. Published October 2020, pp. 858-868. Open-access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8318005/pdf/nihms-1666148.pdf
Citations: 0
UIST '20 Adjunct: The 33rd Annual ACM Symposium on User Interface Software and Technology, Virtual Event, USA, October 20-23, 2020
DOI: 10.1145/3379350. Published 2020.
Citations: 1
Using Personal Devices to Facilitate Multi-user Interaction with Large Display Walls
Ulrich von Zadow
Large display walls and personal devices such as smartphones have complementary characteristics. While large displays are well suited to multi-user interaction (potentially with complex data), they are inherently public and generally cannot present an interface adapted to the individual user. Effective multi-user interaction, however, often depends on the ability to tailor the interface, to interact without interfering with others, and to access and possibly share private data. Combining large displays with personal devices facilitates exactly this. Multi-device interaction concepts enable data transfer and include moving parts of the UI to the personal device. In addition, hand-held devices can be used to present personal views to the user. Our work will focus on using personal devices for true multi-user interaction with interactive display walls. It will cover appropriate interaction techniques as well as the technical foundation, and will be validated with corresponding application cases.
DOI: 10.1145/2815585.2815592. Published November 2015, pp. 25-28.
Citations: 4
Machine Intelligence and Human Intelligence
B. A. Y. Arcas
There has been a stellar rise in computational power since 2006 in part thanks to GPUs, yet today, we are as an intelligent species essentially singular. There are of course some other brainy species, like chimpanzees, dolphins, crows and octopuses, but if anything they only emphasize our unique position on Earth -- as animals richly gifted with self-awareness, language, abstract thought, art, mathematical capability, science, technology and so on. Many of us have staked our entire self-concept on the idea that to be human is to have a mind, and that minds are the unique province of humans. For those of us who are not religious, this could be interpreted as the last bastion of dualism. Our economic, legal and ethical systems are also implicitly built around this idea. Now, we're well along the road to really understanding the fundamental principles of how a mind can be built, and Moore's Law will put brain-scale computing within reach this decade. (We need to put some asterisks next to Moore's Law, since we are already running up against certain limits in computational scale using our present-day approaches, but I'll stand behind the broader statement.) In this talk I will discuss the relationships between engineered neurally inspired systems and brains today, between humans and machines tomorrow, and how these relationships will alter user interfaces, software and technology.
DOI: 10.1145/2807442.2814655. Published November 2015, p. 665.
Citations: 0
Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology, UIST 2014 Adjunct Volume, Honolulu, Hawaii, USA, October 5-8, 2014
Hrvoje Benko, Mira Dontcheva, Daniel J. Wigdor
It is our pleasure to welcome you to the 27th Annual ACM Symposium on User Interface Software and Technology (UIST), held from October 5-8th 2014, in Honolulu, Hawaii, USA. UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, fabrication, wearable and mobile computing, and computer supported cooperative work. UIST 2014 received a record 333 technical paper submissions from 34 countries. After a thorough review process, the 36-member program committee accepted 74 papers (22.2%). Each anonymous submission was first reviewed by three external reviewers, and a meta-review was provided by a program committee member. If any of the four reviewers deemed a submission to pass a rejection threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns, and a second member of the program committee was asked to examine the paper, rebuttal, and reviews, and to provide their own meta-review. The program committee met in person in Toronto, Ontario, Canada on June 19th and 20th, 2014, to select which papers to invite for the program. Submissions were accepted only after the authors provided a final revision addressing the committee's comments. In addition to papers submitted directly, the symposium program includes two papers from the ACM Transactions on Computer-Human Interaction journal (TOCHI), as well as 31 posters, 48 demonstrations, and 8 student presentations in the tenth annual Doctoral Symposium. 
Our program also features the sixth annual Student Innovation Contest. This year, there are 24 teams taking part in the contest, which is focused on household interfaces based on the Kinoma Create platform by Marvell. UIST 2014 will feature two keynote presentations. The opening keynote will be given by Mark Bolas (University of Southern California) on designing the user in the user interface. Bret Victor will deliver the closing keynote on the impact of dynamic media on representation of thought. Our community has been growing tremendously both in the number of submissions as well as attendees. For the first time, this year's program will be held in two parallel tracks. We hope that you will find our program interesting and thought-provoking and that UIST 2014 will provide you with a valuable opportunity to exchange results at the cutting edge of user interfaces research, to meet friends and colleagues, and to forge future collaborations with other researchers and practitioners from institutions around the world.
{"title":"Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology, UIST 2014 Adjunct Volume, Honolulu, Hawaii, USA, October 5-8, 2014","authors":"Hrvoje Benko, Mira Dontcheva, Daniel J. Wigdor","doi":"10.1145/2658779","DOIUrl":"https://doi.org/10.1145/2658779","url":null,"abstract":"It is our pleasure to welcome you to the 27th Annual ACM Symposium on User Interface Software and Technology (UIST), held from October 5-8th 2014, in Honolulu, Hawaii, USA. UIST is the premier forum for the presentation of research innovations in the software and technology of human-computer interfaces. Sponsored by ACM's special interest groups on computer-human interaction (SIGCHI) and computer graphics (SIGGRAPH), UIST brings together researchers and practitioners from many areas, including web and graphical interfaces, input and output devices, information visualization, sensing technologies, interactive displays, tabletop and tangible computing, interaction techniques, augmented and virtual reality, ubiquitous computing, fabrication, wearable and mobile computing, and computer supported cooperative work. UIST 2014 received a record 333 technical paper submissions from 34 countries. After a thorough review process, the 36-member program committee accepted 74 papers (22.2%). Each anonymous submission was first reviewed by three external reviewers, and a meta-review was provided by a program committee member. If any of the four reviewers deemed a submission to pass a rejection threshold, we asked the authors to submit a short rebuttal addressing the reviewers' concerns, and a second member of the program committee was asked to examine the paper, rebuttal, and reviews, and to provide their own meta-review. The program committee met in person in Toronto, Ontario, Canada on June 19th and 20th, 2014, to select which papers to invite for the program. Submissions were accepted only after the authors provided a final revision addressing the committee's comments. In addition to papers submitted directly, the symposium program includes two papers from the ACM Transactions on Computer-Human Interaction journal (TOCHI), as well as 31 posters, 48 demonstrations, and 8 student presentations in the tenth annual Doctoral Symposium. Our program also features the sixth annual Student Innovation Contest. This year, there are 24 teams taking part in the contest, which is focused on household interfaces based on the Kinoma Create platform by Marvell. UIST 2014 will feature two keynote presentations. The opening keynote will be given by Mark Bolas (University of Southern California) on designing the user in the user interface. Bret Victor will deliver the closing keynote on the impact of dynamic media on representation of thought. Our community has been growing tremendously both in the number of submissions as well as attendees. For the first time, this year's program will be held in two parallel tracks. We hope that you will find our program interesting and thought-provoking and that UIST 2014 will provide you with a valuable opportunity to exchange results at the cutting edge of user interfaces research, to meet friends and colleagues, and to forge future collaborations with other researchers and practitioners from institutions around the world.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"5 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2014-10-05","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"88775343","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}