
Proceedings of the ACM Symposium on User Interface Software and Technology — Latest Publications

DoubleFlip: a motion gesture delimiter for interaction
J. Ruiz, Yang Li
In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.
{"title":"DoubleFlip: a motion gesture delimiter for interaction","authors":"J. Ruiz, Yang Li","doi":"10.1145/1866218.1866265","DOIUrl":"https://doi.org/10.1145/1866218.1866265","url":null,"abstract":"In order to use motion gestures with mobile devices it is imperative that the device be able to distinguish between input motion and everyday motion. In this abstract we present DoubleFlip, a unique motion gesture designed to act as an input delimiter for mobile motion gestures. We demonstrate that the DoubleFlip gesture is extremely resistant to false positive conditions, while still achieving high recognition accuracy. Since DoubleFlip is easy to perform and less likely to be accidentally invoked, it provides an always-active input event for mobile interaction.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"449-450"},"PeriodicalIF":0.0,"publicationDate":"2010-10-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"89474452","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 10
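To make the delimiter idea concrete, here is a minimal sketch of how a double-flip could be spotted in a gyroscope stream: two opposite-direction roll-rate spikes within a short window. The rate threshold, window length, and sample format are assumptions for illustration, not values from the paper.

```python
# Hedged sketch of a DoubleFlip-style delimiter detector. The gesture
# (rotate the device away and back) is approximated as two sign-opposed
# roll-rate spikes close together in time. All thresholds are assumed.

FLIP_RATE = 4.0   # rad/s roll rate that counts as a "flip" (assumed)
WINDOW = 0.6      # max seconds between the two flips (assumed)

def detect_double_flip(roll_rates, timestamps):
    """Return True if the samples contain a flip-away/flip-back pair."""
    flips = []
    for rate, t in zip(roll_rates, timestamps):
        if abs(rate) >= FLIP_RATE:
            # Record direction changes only, so one long rotation does
            # not register as two flips.
            if not flips or (rate > 0) != (flips[-1][0] > 0):
                flips.append((rate, t))
    return any(b - a <= WINDOW for (_, a), (_, b) in zip(flips, flips[1:]))

# Example: flip away at t=1.00 s, flip back at t=1.35 s.
print(detect_double_flip([5.0, -0.5, -5.2], [1.00, 1.10, 1.35]))  # True
```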
User guided audio selection from complex sound mixtures
P. Smaragdis
In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds which is either a waveform, or some variation of a time/frequency transform. Although with these representations a user might be able to visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). This interface uses audio guidance from a user in order to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections which would have been infeasible otherwise.
{"title":"User guided audio selection from complex sound mixtures","authors":"P. Smaragdis","doi":"10.1145/1622176.1622193","DOIUrl":"https://doi.org/10.1145/1622176.1622193","url":null,"abstract":"In this paper we present a novel interface for selecting sounds in audio mixtures. Traditional interfaces in audio editors provide a graphical representation of sounds which is either a waveform, or some variation of a time/frequency transform. Although with these representations a user might be able to visually identify elements of sounds in a mixture, they do not facilitate object-specific editing (e.g. selecting only the voice of a singer in a song). This interface uses audio guidance from a user in order to select a target sound within a mixture. The user is asked to vocalize (or otherwise sonically represent) the desired target sound, and an automatic process identifies and isolates the elements of the mixture that best relate to the user's input. This way of pointing to specific parts of an audio stream allows a user to perform audio selections which would have been infeasible otherwise.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"89-92"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"73491891","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
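As a rough illustration of example-guided selection (not the paper's actual model), the sketch below builds a soft time-frequency mask from the similarity between each mixture frame and the user's vocalized reference; the function name, STFT settings, and similarity heuristic are all assumptions.

```python
import numpy as np
from scipy.signal import stft, istft

def select_by_example(mixture, reference, fs=16000):
    """Emphasize mixture frames that resemble the user's vocal guide."""
    f, t, X = stft(mixture, fs=fs, nperseg=1024)
    _, _, R = stft(reference, fs=fs, nperseg=1024)
    Xm, Rm = np.abs(X), np.abs(R)
    # The average reference spectrum acts as a crude spectral template.
    template = Rm.mean(axis=1, keepdims=True)
    sim = (Xm * template).sum(axis=0) / (
        np.linalg.norm(Xm, axis=0) * np.linalg.norm(template) + 1e-9)
    mask = sim[np.newaxis, :]          # soft per-frame weighting in [0, 1]
    _, selected = istft(X * mask, fs=fs, nperseg=1024)
    return selected

fs = 16000
mixture = np.random.randn(fs)       # stand-in for a one-second mixture
reference = np.random.randn(fs)     # stand-in for the user's vocalization
isolated = select_by_example(mixture, reference, fs)
```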
Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines
E. Solovey, A. Girouard, K. Chauncey, Leanne M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, R. Jacob
Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interfere with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.
{"title":"Using fNIRS brain sensing in realistic HCI settings: experiments and guidelines","authors":"E. Solovey, A. Girouard, K. Chauncey, Leanne M. Hirshfield, A. Sassaroli, F. Zheng, S. Fantini, R. Jacob","doi":"10.1145/1622176.1622207","DOIUrl":"https://doi.org/10.1145/1622176.1622207","url":null,"abstract":"Because functional near-infrared spectroscopy (fNIRS) eases many of the restrictions of other brain sensors, it has potential to open up new possibilities for HCI research. From our experience using fNIRS technology for HCI, we identify several considerations and provide guidelines for using fNIRS in realistic HCI laboratory settings. We empirically examine whether typical human behavior (e.g. head and facial movement) or computer interaction (e.g. keyboard and mouse usage) interfere with brain measurement using fNIRS. Based on the results of our study, we establish which physical behaviors inherent in computer usage interfere with accurate fNIRS sensing of cognitive state information, which can be corrected in data analysis, and which are acceptable. With these findings, we hope to facilitate further adoption of fNIRS brain sensing technology in HCI research.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"8 1","pages":"157-166"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83576584","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 136
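One class of interference the authors say can be corrected in data analysis is slow drift and motion-related artifact. A conventional correction of this kind, shown below as a hedged example rather than the paper's own procedure, is zero-phase band-pass filtering of each channel; the 0.01-0.5 Hz band is a typical fNIRS choice, not a value taken from this study.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_fnirs(channel, fs, low=0.01, high=0.5, order=3):
    """Zero-phase band-pass to suppress drift and fast motion artifacts."""
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, channel)

# Example: filter a simulated 60 s recording sampled at 10 Hz.
fs = 10.0
raw = np.random.randn(int(60 * fs))
clean = bandpass_fnirs(raw, fs)
```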
A screen-space formulation for 2D and 3D direct manipulation
J. Reisman, Philip L. Davidson, Jefferson Y. Han
Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.
{"title":"A screen-space formulation for 2D and 3D direct manipulation","authors":"J. Reisman, Philip L. Davidson, Jefferson Y. Han","doi":"10.1145/1622176.1622190","DOIUrl":"https://doi.org/10.1145/1622176.1622190","url":null,"abstract":"Rotate-Scale-Translate (RST) interactions have become the de facto standard when interacting with two-dimensional (2D) contexts in single-touch and multi-touch environments. Because the use of RST has thus far focused almost entirely on 2D, there are not yet standard techniques for extending these principles into three dimensions. In this paper we describe a screen-space method which fully captures the semantics of the traditional 2D RST multi-touch interaction, but also allows us to extend these same principles into three-dimensional (3D) interaction. Just like RST allows users to directly manipulate 2D contexts with two or more points, our method allows the user to directly manipulate 3D objects with three or more points. We show some novel interactions, which take perspective into account and are thus not available in orthographic environments. Furthermore, we identify key ambiguities and unexpected behaviors that arise when performing direct manipulation in 3D and offer solutions to mitigate the difficulties each presents. Finally, we show how to extend our method to meet application-specific control objectives, as well as show our method working in some example environments.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"11 1","pages":"69-78"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85350830","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 207
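The paper's 3D solver keeps the contact points pinned under the fingers in screen space; in the 2D RST case that constraint has a closed form. The sketch below is the standard two-point similarity solve, shown only to make the screen-space constraint concrete; it is not code from the paper.

```python
import numpy as np

def solve_rst_2d(p1, p2, q1, q2):
    """Rotate-scale-translate that maps contacts (p1, p2) onto (q1, q2)."""
    p1, p2, q1, q2 = map(np.asarray, (p1, p2, q1, q2))
    u, v = p2 - p1, q2 - q1
    scale = np.linalg.norm(v) / np.linalg.norm(u)
    angle = np.arctan2(v[1], v[0]) - np.arctan2(u[1], u[0])
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    translation = q1 - scale * R @ p1   # pins p1 exactly under q1
    return scale, angle, translation

# Two fingers drag apart and rotate: scale 2x, 90 degrees, both pinned.
scale, angle, t = solve_rst_2d((0, 0), (1, 0), (1, 1), (1, 3))
print(scale, np.degrees(angle), t)  # 2.0 90.0 [1. 1.]
```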
A practical pressure sensitive computer keyboard
P. Dietz, Benjamin D. Eidelson, Jonathan Westhues, Steven Bathiche
A pressure sensitive computer keyboard is presented that independently senses the force level on every depressed key. The design leverages existing membrane technologies and is suitable for low-cost, high-volume manufacturing. A number of representative applications are discussed.
{"title":"A practical pressure sensitive computer keyboard","authors":"P. Dietz, Benjamin D. Eidelson, Jonathan Westhues, Steven Bathiche","doi":"10.1145/1622176.1622187","DOIUrl":"https://doi.org/10.1145/1622176.1622187","url":null,"abstract":"A pressure sensitive computer keyboard is presented that independently senses the force level on every depressed key. The design leverages existing membrane technologies and is suitable for low-cost, high-volume manufacturing. A number of representative applications are discussed.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"27 1","pages":"55-58"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"76988800","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 70
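A hedged sketch of how an application might consume such per-key force events follows; the event fields and the 0-255 sensor range are assumptions, since the abstract only specifies that each depressed key reports an independent force level.

```python
# Hypothetical consumer of per-key force events from such a keyboard.

def on_key_event(key, force, max_force=255):
    """Map a raw force reading to an application-level event (assumed API)."""
    level = force / max_force
    if key.isalpha():
        # Illustrative use: pressure modulates font weight while typing.
        weight = 300 + int(level * 600)   # 300 (light) .. 900 (black)
        return {"char": key, "font_weight": weight}
    return {"char": key}

print(on_key_event("a", 40))    # light touch -> light weight
print(on_key_event("a", 220))   # firm press  -> heavy weight
```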
SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices
K. Yatani, K. Truong
One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches on the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.
{"title":"SemFeel: a user interface with semantic tactile feedback for mobile touch-screen devices","authors":"K. Yatani, K. Truong","doi":"10.1145/1622176.1622198","DOIUrl":"https://doi.org/10.1145/1622176.1622198","url":null,"abstract":"One of the challenges with using mobile touch-screen devices is that they do not provide tactile feedback to the user. Thus, the user is required to look at the screen to interact with these devices. In this paper, we present SemFeel, a tactile feedback system which informs the user about the presence of an object where she touches on the screen and can offer additional semantic information about that item. Through multiple vibration motors that we attached to the backside of a mobile touch-screen device, SemFeel can generate different patterns of vibration, such as ones that flow from right to left or from top to bottom, to help the user interact with a mobile device. Through two user studies, we show that users can distinguish ten different patterns, including linear patterns and a circular pattern, at approximately 90% accuracy, and that SemFeel supports accurate eyes-free interactions.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"65 1","pages":"111-120"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"72979757","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 177
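To illustrate the flowing patterns, here is a toy sequencer for the kind of motor layout the paper describes (multiple motors on the back of the device). The Motor class, motor names, and 80 ms step are stand-ins for the real hardware driver, not details from the paper.

```python
import time

class Motor:
    """Stand-in for a vibration motor driver (assumed interface)."""
    def __init__(self, name):
        self.name = name
    def pulse(self, seconds):
        print(f"vibrate {self.name} for {seconds:.2f}s")
        time.sleep(seconds)

MOTORS = {n: Motor(n) for n in ("left", "right", "top", "bottom", "center")}

# Patterns that "flow" across the device, in the spirit of the paper.
PATTERNS = {
    "flow_right_to_left": ["right", "center", "left"],
    "flow_top_to_bottom": ["top", "center", "bottom"],
    "circular": ["top", "right", "bottom", "left"],
}

def play(pattern, step=0.08):
    for name in PATTERNS[pattern]:
        MOTORS[name].pulse(step)

play("flow_right_to_left")
```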
Enabling always-available input with muscle-computer interfaces
T. S. Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, J. Landay
Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.
{"title":"Enabling always-available input with muscle-computer interfaces","authors":"T. S. Saponas, Desney S. Tan, Dan Morris, Ravin Balakrishnan, Jim Turner, J. Landay","doi":"10.1145/1622176.1622208","DOIUrl":"https://doi.org/10.1145/1622176.1622208","url":null,"abstract":"Previous work has demonstrated the viability of applying offline analysis to interpret forearm electromyography (EMG) and classify finger gestures on a physical surface. We extend those results to bring us closer to using muscle-computer interfaces for always-available input in real-world applications. We leverage existing taxonomies of natural human grips to develop a gesture set covering interaction in free space even when hands are busy with other objects. We present a system that classifies these gestures in real-time and we introduce a bi-manual paradigm that enables use in interactive systems. We report experimental results demonstrating four-finger classification accuracies averaging 79% for pinching, 85% while holding a travel mug, and 88% when carrying a weighted bag. We further show generalizability across different arm postures and explore the tradeoffs of providing real-time visual feedback.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"22 1","pages":"167-176"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"78719783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 325
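The pipeline the abstract implies — window the multi-channel forearm EMG, extract per-channel features, classify the gesture — can be sketched as below. RMS features and an SVM are common choices in this literature; the paper's exact feature set and classifier may differ.

```python
import numpy as np
from sklearn.svm import SVC

def emg_features(window):
    """window: (samples, channels) -> per-channel RMS feature vector."""
    return np.sqrt((window ** 2).mean(axis=0))

def train_gesture_classifier(windows, labels):
    X = np.array([emg_features(w) for w in windows])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf

# Example with synthetic 8-channel, 128-sample windows for two gestures.
rng = np.random.default_rng(0)
windows = [rng.normal(0, 1 + (i % 2), size=(128, 8)) for i in range(40)]
labels = [i % 2 for i in range(40)]
clf = train_gesture_classifier(windows, labels)
print(clf.predict([emg_features(windows[0])]))
```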
PhotoelasticTouch: transparent rubbery tangible interface using an LCD and photoelasticity
Toshiki Sato, Haruko Mamiya, H. Koike, K. Fukuchi
PhotoelasticTouch is a novel tabletop system designed to intuitively facilitate touch-based interaction via real objects made from transparent elastic material. The system utilizes vision-based recognition techniques and the photoelastic properties of the transparent rubber to recognize deformed regions of the elastic material. Our system works with elastic materials over a wide variety of shapes and does not require any explicit visual markers. Compared to traditional interactive surfaces, our 2.5 dimensional interface system enables direct touch interaction and soft tactile feedback. In this paper we present our force sensing technique using photoelasticity and describe the implementation of our prototype system. We also present three practical applications of PhotoelasticTouch, a force-sensitive touch panel, a tangible face application, and a paint application.
{"title":"PhotoelasticTouch: transparent rubbery tangible interface using an LCD and photoelasticity","authors":"Toshiki Sato, Haruko Mamiya, H. Koike, K. Fukuchi","doi":"10.1145/1622176.1622185","DOIUrl":"https://doi.org/10.1145/1622176.1622185","url":null,"abstract":"PhotoelasticTouch is a novel tabletop system designed to intuitively facilitate touch-based interaction via real objects made from transparent elastic material. The system utilizes vision-based recognition techniques and the photoelastic properties of the transparent rubber to recognize deformed regions of the elastic material. Our system works with elastic materials over a wide variety of shapes and does not require any explicit visual markers. Compared to traditional interactive surfaces, our 2.5 dimensional interface system enables direct touch interaction and soft tactile feedback. In this paper we present our force sensing technique using photoelasticity and describe the implementation of our prototype system. We also present three practical applications of PhotoelasticTouch, a force-sensitive touch panel, a tangible face application, and a paint application.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"1 1","pages":"43-50"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"87630358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 62
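A sketch of the vision step: between crossed polarizers, stressed regions of the transparent elastomer transmit light, so deformed areas appear as bright blobs on the camera image. The thresholds below are assumptions, and the paper's full pipeline additionally estimates force from the photoelastic response.

```python
import cv2

def find_touch_regions(frame, min_area=50):
    """Return (x, y) centroids of bright (stressed) regions in a BGR frame."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, binary = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    touches = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:   # ignore specular noise
            m = cv2.moments(c)
            touches.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return touches
```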
Changing how people view changes on the web
J. Teevan, S. Dumais, Daniel J. Liebling, Richard L. Hughes
The Web is a dynamic information environment. Web content changes regularly and people revisit Web pages frequently. But the tools used to access the Web, including browsers and search engines, do little to explicitly support these dynamics. In this paper we present DiffIE, a browser plug-in that makes content change explicit in a simple and lightweight manner. DiffIE caches the pages a person visits and highlights how those pages have changed when the person returns to them. We describe how we built a stable, reliable, and usable system, including how we created compact, privacy-preserving page representations to support fast difference detection. Via a longitudinal user study, we explore how DiffIE changed the way people dealt with changing content. We find that much of its benefit came not from exposing expected change, but rather from drawing attention to unexpected change and helping people build a richer understanding of the Web content they frequent.
{"title":"Changing how people view changes on the web","authors":"J. Teevan, S. Dumais, Daniel J. Liebling, Richard L. Hughes","doi":"10.1145/1622176.1622221","DOIUrl":"https://doi.org/10.1145/1622176.1622221","url":null,"abstract":"The Web is a dynamic information environment. Web content changes regularly and people revisit Web pages frequently. But the tools used to access the Web, including browsers and search engines, do little to explicitly support these dynamics. In this paper we present DiffIE, a browser plug-in that makes content change explicit in a simple and lightweight manner. DiffIE caches the pages a person visits and highlights how those pages have changed when the person returns to them. We describe how we built a stable, reliable, and usable system, including how we created compact, privacy-preserving page representations to support fast difference detection. Via a longitudinal user study, we explore how DiffIE changed the way people dealt with changing content. We find that much of its benefit came not from exposing expected change, but rather from drawing attention to unexpected change and helping people build a richer understanding of the Web content they frequent.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"46 3 1","pages":"237-246"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81036539","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 7
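In the spirit of the compact, privacy-preserving page representation the authors mention (their actual encoding differs), a page can be stored as a set of hashes of its text chunks: nothing readable is cached, yet newly added chunks are detectable on a revisit. Paragraph-level chunking is our assumption.

```python
import hashlib

def page_signature(paragraphs):
    """Store only hashes, so the cache reveals nothing readable."""
    return {hashlib.sha1(p.strip().encode("utf-8")).hexdigest()
            for p in paragraphs if p.strip()}

def changed_paragraphs(old_signature, new_paragraphs):
    """Return paragraphs that were not present on the last visit."""
    return [p for p in new_paragraphs
            if hashlib.sha1(p.strip().encode("utf-8")).hexdigest()
            not in old_signature]

old = page_signature(["Welcome!", "Latest news: v1.0 released."])
now = ["Welcome!", "Latest news: v1.1 released."]
print(changed_paragraphs(old, now))  # -> ['Latest news: v1.1 released.']
```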
Mining web interactions to automatically create mash-ups
Jeffrey P. Bigham, R. S. Kaminsky, Jeffrey Nichols
The deep web contains an order of magnitude more information than the surface web, but that information is hidden behind the web forms of a large number of web sites. Metasearch engines can help users explore this information by aggregating results from multiple resources, but previously these could only be created and maintained by programmers. In this paper, we explore the automatic creation of metasearch mash-ups by mining the web interactions of multiple web users to find relations between query forms on different web sites. We also present an implemented system called TX2 that uses those connections to search multiple deep web resources simultaneously and integrate the results in context in a single results page. TX2 illustrates the promise of constructing mash-ups automatically and the potential of mining web interactions to explore deep web resources.
{"title":"Mining web interactions to automatically create mash-ups","authors":"Jeffrey P. Bigham, R. S. Kaminsky, Jeffrey Nichols","doi":"10.1145/1622176.1622215","DOIUrl":"https://doi.org/10.1145/1622176.1622215","url":null,"abstract":"The deep web contains an order of magnitude more information than the surface web, but that information is hidden behind the web forms of a large number of web sites. Metasearch engines can help users explore this information by aggregating results from multiple resources, but previously these could only be created and maintained by programmers. In this paper, we explore the automatic creation of metasearch mash-ups by mining the web interactions of multiple web users to find relations between query forms on different web sites. We also present an implemented system called TX2 that uses those connections to search multiple deep web resources simultaneously and integrate the results in context in a single results page. TX2 illustrates the promise of constructing mash-ups automatically and the potential of mining web interactions to explore deep web resources.","PeriodicalId":93361,"journal":{"name":"Proceedings of the ACM Symposium on User Interface Software and Technology. ACM Symposium on User Interface Software and Technology","volume":"260 1","pages":"203-212"},"PeriodicalIF":0.0,"publicationDate":"2009-10-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"79638321","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
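Once mined interaction traces have related the query fields of several sites' forms, the metasearch step amounts to fanning one query out to all of them and merging the results on one page. The sketch below uses placeholder site URLs and a stubbed fetch; none of it is TX2's actual code.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sites whose query forms were found to be equivalent.
SITES = {
    "site-a": "https://a.example/search?q={}",
    "site-b": "https://b.example/find?query={}",
}

def fetch_results(url):
    # Placeholder: a real system would fetch the page and scrape
    # result records out of it.
    return [f"result from {url}"]

def metasearch(query):
    """Fan one query out to every related form and merge the results."""
    urls = [tmpl.format(query) for tmpl in SITES.values()]
    merged = []
    with ThreadPoolExecutor() as pool:
        for results in pool.map(fetch_results, urls):
            merged.extend(results)
    return merged

print(metasearch("uist 2009"))
```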