
Proceedings of the working conference on Advanced visual interfaces: Latest Publications

Identification and validation of cognitive design principles for automated generation of assembly instructions
Pub Date : 2004-05-25 DOI: 10.1145/989863.989917
Julie Heiser, Doantam Phan, Maneesh Agrawala, B. Tversky, P. Hanrahan
Designing effective instructions for everyday products is challenging. One reason is that designers lack a set of design principles for producing visually comprehensible and accessible instructions. We describe an approach for identifying such design principles through experiments investigating the production, preference, and comprehension of assembly instructions for furniture. We instantiate these principles into an algorithm that automatically generates assembly instructions. Finally, we perform a user study comparing our computer-generated instructions to factory-provided and highly rated hand-designed instructions. Our results indicate that the computer-generated instructions informed by our cognitive design principles significantly reduce assembly time by an average of 35% and errors by 50%. Details of the experimental methodology and the implementation of the automated system are described.
Cited by: 108
DeepDocument: use of a multi-layered display to provide context awareness in text editing
Pub Date : 2004-05-25 DOI: 10.1145/989863.989902
M. Masoodian, Sam McKoy, Bill Rogers, David Ware
Word Processing software usually only displays paragraphs of text immediately adjacent to the cursor position. Generally this is appropriate, for example when composing a single paragraph. However, when reviewing or working on the layout of a document it is necessary to establish awareness of current text in the context of the document as a whole. This can be done by scrolling or zooming, but when doing so, focus is easily lost and hard to regain.We have developed a system called DeepDocument using a two-layered LCD display in which both focussed and document-wide views are presented simultaneously. The overview is shown on the rear display and the focussed view on the front, maintaining full screen size for each. The physical separation of the layers takes advantage of human depth perception capabilities to allow users to perceive the views independently without having to redirect their gaze. DeepDocument has been written as an extension to Microsoft Word™.
Cited by: 13
ValueCharts: analyzing linear models expressing preferences and evaluations
Pub Date : 2004-05-25 DOI: 10.1145/989863.989885
G. Carenini, J. Loyd
In this paper we propose ValueCharts, a set of visualizations and interactive techniques intended to support decision-makers in inspecting linear models of preferences and evaluation. Linear models are popular decision-making tools for individuals, groups and organizations. In Decision Analysis, they help the decision-maker analyze preferential choices under conflicting objectives. In Economics and the Social Sciences, similar models are devised to rank entities according to an evaluative index of interest. The fundamental goal of building models expressing preferences and evaluations is to help the decision-maker organize all the information relevant to a decision into a structure that can be effectively analyzed. However, as models and their domain of application grow in complexity, model analysis can become a very challenging task. We claim that ValueCharts will make the inspection and application of these models more natural and effective. We support our claim by showing how ValueCharts effectively enable a set of basic tasks that we argue are at the core of analyzing and understanding linear models of preferences and evaluation.
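The linear models the abstract refers to assign each alternative a score that is a weighted sum of its attribute values. A minimal sketch of that idea (the attribute names, weights, and data here are hypothetical, not from the paper):

```python
# Hypothetical additive (linear) preference model of the kind ValueCharts
# inspects: score = sum over attributes of weight * normalized value.
weights = {"price": 0.5, "quality": 0.3, "distance": 0.2}  # weights sum to 1

def score(alternative, weights):
    """Weighted additive value of one alternative (values normalized to [0, 1])."""
    return sum(w * alternative[attr] for attr, w in weights.items())

# Two made-up alternatives, ranked by their overall value.
hotels = {
    "A": {"price": 0.8, "quality": 0.6, "distance": 0.9},
    "B": {"price": 0.4, "quality": 0.9, "distance": 0.5},
}
ranked = sorted(hotels, key=lambda h: score(hotels[h], weights), reverse=True)
```

Analyzing such a model means asking how the ranking shifts as the weights or the value functions change, which is what the proposed visualizations support.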
Cited by: 70
Focus dependent multi-level graph clustering
Pub Date : 2004-05-25 DOI: 10.1145/989863.989888
François Boutin, Mountaz Hascoët
In this paper we propose a structure-based clustering technique that transforms a given graph into a specific double tree structure called multi-level outline tree. Each meta-node of the tree - that represents a subset of nodes - is itself hierarchically clustered. So, a meta-node is considered as a tree root of included clusters.The main originality of our approach is to account for the user focus in the clustering process to provide views from different perspectives. Multi-level outline trees are computed in linear time and easy to explore. We think that our technique is well suited to investigate various graphs like Web graphs or citation graphs.
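The basic operation behind a multi-level outline tree is collapsing a cluster of nodes into a single meta-node and keeping only the edges that cross cluster boundaries. A sketch of that one step, not the authors' full algorithm (the function name and data are illustrative):

```python
# Collapse a graph given a node -> cluster assignment: each cluster becomes
# a meta-node, internal edges disappear, and cross-cluster edges are kept
# (deduplicated) as meta-edges.
def collapse(edges, cluster_of):
    meta_edges = set()
    for u, v in edges:
        cu, cv = cluster_of[u], cluster_of[v]
        if cu != cv:  # drop edges internal to a cluster
            meta_edges.add((min(cu, cv), max(cu, cv)))
    return sorted(meta_edges)

# Example: a 4-node path split into two clusters yields one meta-edge.
meta = collapse([("a", "b"), ("b", "c"), ("c", "d")],
                {"a": 0, "b": 0, "c": 1, "d": 1})
```

Applying this recursively, with the clustering chosen relative to the user's focus, yields the hierarchy of meta-nodes the paper describes.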
Cited by: 15
Aligning information browsing and exploration methods with a spatial navigation aid for mobile city visitors
Pub Date : 2004-05-25 DOI: 10.1145/989863.989900
T. Rist, Stephan Baldes, Patrick Brandmeier
Navigation support for both physical space and information spaces addresses fundamental information needs of mobile users in many application scenarios, including the classical shopping visit in the town centre. Therefore it is a particular research objective in the mobile domain to explore, showcase, and test the interplay of physical navigation with navigation in an information space that, metaphorically speaking, superimposes the physical space. We have developed a demonstrator that couples a spatial navigation aid in the form of a 2D interactive map viewer with other information services, such as an interactive web directory service that provides information about shops and restaurants and their product palettes. The research has raised a number of interesting questions, such as how to align interactions performed in the navigation aid with meaningful actions in a coupled twin application, and vice versa, how to reflect navigation in an information space in the aligned spatial navigation aid.
Cited by: 1
Image presentation in space and time: errors, preferences and eye-gaze activity
Pub Date : 2004-05-25 DOI: 10.1145/989863.989884
R. Spence, M. Witkowski, Catherine Fawcett, B. Craft, O. Bruijn
Rapid Serial Visual Presentation (RSVP) is a technique that allows images to be presented sequentially in the time-domain, thereby offering an alternative to the conventional concurrent display of images in the space domain. Such an alternative offers potential advantages where display area is at a premium. However, notwithstanding the flexibility to employ either or both domains for presentation purposes, little is known about the alternatives suited to specific tasks undertaken by a user. As a consequence there is a pressing need to provide guidance for the interaction designer faced with these alternatives.We investigated the task of identifying the presence or absence of a previously viewed image within a collection of images, a requirement of many real activities. In experiments with subjects, the collection of images was presented in three modes (1) 'slide show' RSVP mode; (2) concurrently and statically -- 'static mode'; and (3) a 'mixed' mode. Each mode employed the same display area and the same total presentation time, together regarded as primary resources available to the interaction designer. For each presentation mode, the outcome identified error profiles and subject preferences. Eye-gaze studies detected distinctive differences between the three presentation modes.
Cited by: 19
Integrating expanding annotations with a 3D explosion probe
Pub Date : 2004-05-25 DOI: 10.1145/989863.989871
Henry Sonnet, Sheelagh Carpendale, T. Strothotte
Understanding complex 3D virtual models can be difficult, especially when the model has interior components not initially visible and ancillary text. We describe new techniques for the interactive exploration of 3D models. Specifically, in addition to traditional viewing operations, we present new text extrusion techniques combined with techniques that create an interactive explosion diagram. In our approach, scrollable text annotations that are associated with the various parts of the model can be revealed dynamically, either in part or in full, by moving the mouse cursor within annotation trigger areas. Strong visual connections between model parts and the associated text are included in order to aid comprehension. Furthermore, the model parts can be separated, creating interactive explosion diagrams. Using a 3D probe, occluding objects can be interactively moved apart and then returned to their initial locations. Displayed annotations are kept readable despite model manipulations. Hence, our techniques provide textual context within the spatial context of the 3D model.
Cited by: 64
Task-sensitive user interfaces: grounding information provision within the context of the user's activity
Pub Date : 2004-05-25 DOI: 10.1145/989863.989899
N. Colineau, Andrew Lampert, Cécile Paris
In the context of innovative Airborne Early Warning and Control (AEW&C) platform capabilities, we are building an environment that can support the generation of information tailored to operators' tasks. The challenging issues here are to improve the methods for managing information delivery to the operators, and thus provide them with high-value information on their display whilst avoiding noise and clutter. To this end, we enhance the operator's graphical interface with information delivery mechanisms that support maintenance of situation awareness and improving efficiency. We do this by proactively delivering task-relevant information.
Cited by: 7
Fishnet, a fisheye web browser with search term popouts: a comparative evaluation with overview and linear view
Pub Date : 2004-05-25 DOI: 10.1145/989863.989883
Patrick Baudisch, Bongshin Lee, Libby Hanna
Fishnet is a web browser that always displays web pages in their entirety, independent of their size. Fishnet accomplishes this by using a fisheye view, i.e. by showing a focus region at readable scale while spatially compressing page content above and below that region. Fishnet offers search term highlighting, and assures that those terms are readable by using "popouts". This allows users to visually scan search results within the entire page without scrolling.The scope of this paper is twofold. First, we present fishnet as a novel way of viewing the results of highlighted search and we discuss the design space. Second, we present a user study that helps practitioners determine which visualization technique--- fisheye view, overview, or regular linear view---to pick for which type of visual search scenario.
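The fisheye behaviour described here keeps a focus band at full scale and spatially compresses content above and below it. A minimal sketch of one way such a per-row scale function could look (the function name, falloff constant, and minimum scale are assumptions, not Fishnet's actual implementation):

```python
# Hypothetical fisheye scale function: rows inside the focus band render at
# full scale; rows outside shrink with distance, never below min_scale so
# the page always fits some vertical budget.
def row_scale(y, focus_top, focus_bottom, min_scale=0.2):
    """Vertical scale factor for a page row at coordinate y."""
    if focus_top <= y <= focus_bottom:
        return 1.0  # readable focus region
    # distance from the nearest edge of the focus band
    d = (focus_top - y) if y < focus_top else (y - focus_bottom)
    # smooth compression toward min_scale as distance grows
    return max(min_scale, 1.0 / (1.0 + 0.01 * d))
```

Search-term popouts then override this scaling locally, rendering matched lines at readable size even inside the compressed regions.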
Cited by: 87
A graph-based interface to complex hypermedia structure visualization
Pub Date : 2004-05-25 DOI: 10.1145/989863.989887
Manuel Freire, P. Rodríguez
Complex hypermedia structures can be difficult to author and maintain, especially when the usual hierarchic representation cannot capture important relations. We propose a graph-based direct manipulation interface that uses multiple focus+context techniques to avoid display clutter and information overload. A semantical fisheye lens based on hierarchical clustering allows the user to work on high-level abstracts of the structure. Navigation through the resulting graph is animated in order to avoid loss of orientation, with a force-directed algorithm in charge of generating successive layouts. Multiple views can be generated over the same data, each with independent settings for filtering, clustering and degree of zoom.While these techniques are all well-known in the literature, it is their combination and application to the field of hypermedia authoring that constitutes a powerful tool for the development of next-generation hyperspaces.A generic framework, CLOVER, and two specific applications for existing hypermedia systems have been implemented.
Cited by: 19