"Living Flows: Enhanced Exploration of Edge-Bundled Graphs Based on GPU-Intensive Edge Rendering"
A. Lambert, D. Auber, G. Melançon. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.78

This paper describes an approach that exploits the full capabilities of GPUs to enhance the usability of edge bundling in real applications. Edge bundling, like other edge clustering approaches, relies on high-quality edge rerouting. The typical approach for drawing an edge-bundled graph is to render its edges as curves, but curve generation carries a relatively high computational cost and does not easily support real-time interaction. Furthermore, while edge bundling provides much better overall readability of a graph, the bundles make it more difficult to recover local information. Our goal was thus to provide fluid interaction allowing the recovery of local information through specific interaction techniques. The system we built offers classical interactions such as zoom & pan, fish-eye views and magnifying lenses; we also implemented the Bring & Go technique by Tominski et al. We propose an approach that exploits the full computing power of GPUs to render graph edges as parametric splines. The efficiency gained by running all curve computations on the GPU turns bundling into a technique that can be embedded in interactive systems dealing with graphs of several thousand nodes and edges.

"YACBIR: Yet Another Content Based Image Retrieval System"
S. Ait-Aoudia, R. Mahiou, Billel Benzaid. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.83

Vision is central to human perception, and images are everywhere: real-life applications produce and use huge amounts of images of different types. Retrieving an image with given characteristics from a large database is a crucial task, so mechanisms for indexing and retrieving images are needed. CBIR (Content Based Image Retrieval) systems perform these tasks by indexing images using automatically extracted physical characteristics and searching by image query. We present a CBIR system named YACBIR (Yet Another CBIR) that combines several automatically extracted properties (color, texture and points of interest) to index and retrieve images.

"Network Visualization of Human and Machine-Based Educational Standard Assignment"
R. Reitsma, A. Diekema. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.14

Rapid growth in the availability of digital libraries of K-12 curriculum, coupled with an increased emphasis on standards-based teaching, has led to the development of automated standard assignment tools. To assess the performance of one of those tools, and to gain insight into the differences between how human catalogers and automated tools conduct these standard assignments, we explore the use of network modeling and visualization techniques for comparing and contrasting the two. The results show significant differences between the human-based and machine-based network maps. Unlike the machine-based maps, the human-based assignment maps elegantly reflect the rationales and principles of the assignments; i.e., clusters of standards separate along lines of content and pedagogical principles. In addition, humans seem significantly more adept at assigning so-called ‘methodological’ standards.

"Peek Brush: A High-Speed Lightweight Ad-Hoc Selection for Multiple Coordinated Views"
Wolfgang Berger, H. Piringer. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.30

Linking+Brushing is a proven concept to reveal relationships across multiple views. Defining complex selections, however, may involve a significant interaction overhead. This paper proposes Peek Brush, a point-brush that is designed to temporarily select and highlight items hovered by the user's mouse cursor. This enables quickly skimming through the data to identify relationships between different data projections within seconds. The Peek Brush serves the purpose of defining a starting point to a more focused inspection using brushes with higher complexity. In order to achieve rapid visual updates, we discuss acceleration techniques like preprocessing, threading, and layering. As a result, the Peek Brush is able to scale to datasets with millions of entries. A case study demonstrates how the Peek Brush minimizes the interaction effort required from the user. It delivers a quick overview and reduces the time needed for the initial visual analysis step from minutes to seconds.

"Automatic Application of the Data-State Model in Data-Flow Contexts"
Joseph A. Cottam, A. Lumsdaine. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.10

The data-state and data-flow models of information visualization are known to be expressively equivalent. Each model is most effective for different combinations of analysis processes and data characteristics. Visualization frameworks tend to either (1) work within a single model or (2) permit either model in separate sub-frameworks. In either case, converting between the two models falls entirely to the programmer. The theoretical basis for automatic translation between the two models was established by Chi. However, that process is insufficiently specified to be directly implemented. This paper characterizes the practical advantages of the data-state model. This is used to identify when such a transformation is beneficial. It then expands on Chi's theoretical framework to provide the tools for translating visualization program fragments from the data-flow to the data-state model. A partial implementation of the expanded theory is described for the Stencil visualization environment.

"Double Tree: An Advanced KWIC Visualization for Expert Users"
C. Culy, V. Lyding. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.24

In this paper we present Double Tree, a new visualization of Key Word In Context (KWIC) displays targeted to support linguistic analysis. Inspired by Wattenberg’s and Viégas’ [1] Word Tree visualization, Double Tree extends the idea of representing KWIC results as trees. We address several issues with Word Trees with respect to the specific demands of linguists and discuss the design decisions and implementation details that we chose for Double Tree. In particular we present our approach for displaying a two-sided tree. We describe details of the layout, including how frequency and linguistic information is incorporated, and what user interaction is supported. We conclude with some considerations of possible next steps for Double Tree.

"Embodying Affect: The Stolen Generations, the History Wars and PolesApart by Indigenous New Media Artist r e a"
C. Nicholls. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.64

In her 2009 new media artwork PolesApart, Australian Aboriginal artist r e a, of the Gamilaraay people in northern New South Wales, explores issues relating to the Stolen Generations of Aboriginal children. Based on the personal experiences of her grandmother and great aunt as ‘stolen children’, r e a amplifies the work’s familial dimension by enacting the role of the protagonist fleeing from forced servitude. This paper looks at PolesApart in the broader context of the interrelated phenomena of the stolen generations and the so-called ‘history wars’. It is posited that the power, immediacy and affective dimensions of (moving) visual imagery have been instrumental in shifting Australians’ knowledge about the stolen generations from the margins into the mainstream. The capacity of the moving image to ‘embody affect’ [13], it is argued, has enabled many more Australians than previously to appreciate the historical implications and continuing ramifications of this prolonged episode in Australian history. This has in turn led to the development of a more sympathetic public understanding of the phenomenon of the stolen generations as ‘lived experience’. In turn this broader social knowledge, and its integration into our shared cultural heritage, has contributed to Australians’ general receptiveness to the official Apology issued to members of the stolen generations by Prime Minister Kevin Rudd in Federal Parliament (13th February 2008). It is also the case that the popular reception of mainstream stolen generation-themed movies has influenced Australians’ openness to the themes and issues explored in contemporary non-mainstream new media work such as r e a's PolesApart. In the latter work, through the use of the vehicle of her own body, r e a demonstrates that the personal is inescapably political, and vice versa.

"Immersive Visualization Architectures and Situated Embodiments of Culture and Heritage"
S. Kenderdine. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.63

This paper describes a series of transdisciplinary research projects in five large-scale, interactive visualization architectures. These immersive architectures and their associated visual, sonic and algorithmic techniques offer compelling means for mapping and remediating the tangible, intangible and abstract aspects of culture and heritage landscapes. This paper brings these unique systems and the installations developed for them together for the first time. The task here is to highlight the strategies for embodied, kinaesthetic, multisensory and collaborative engagement as powerful ways to reformulate narrative made possible through these stereographic, panoramic, situated interfaces.

"Real-Time Ray Tracing of Complex Molecular Scenes"
L. Marsalek, Anna Katharina Dehof, Iliyan Georgiev, Hans-Peter Lenhof, P. Slusallek, A. Hildebrandt. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.43

Molecular visualization is one of the cornerstones of structural bioinformatics and related fields. Today, rasterization is typically used for the interactive display of molecular scenes, while ray tracing aims at generating high-quality images, typically taking minutes to hours per image and requiring an external off-line program. Recently, real-time ray tracing has evolved to combine the interactivity of rasterization-based approaches with the superb image quality of ray tracing techniques. We demonstrate how real-time ray tracing integrated into a molecular modelling and visualization tool allows for a better understanding of the structural arrangement of biomolecules and the natural creation of publication-quality images in real time. Unlike most approaches, our technique integrates naturally into the full-featured molecular modelling and visualization tool BALLView, seamlessly extending a standard workflow with interactive high-quality rendering.

"Exploring New Ways of Integration, Visualization and Interaction with Geotechnical and Geophysical Data"
V. Gonçalves, F. Almeida, Paulo Dias, B. Santos. 2010 14th International Conference Information Visualisation. doi:10.1109/IV.2010.35

The work presented in this paper aims at exploring new ways of integrating, visualizing and interacting with geotechnical and geophysical data that are richer and more interactive than those offered by most current Geographic Information Systems (GIS). We propose visualization techniques enabling simultaneous visualization of the several data types available in our case study. Moreover, methods were developed to guide experts while defining layers and other relevant geological structures. The work is still at an early stage, and its main goal has been to assess the validity and adequacy of the proposed techniques for the specific geotechnical and geophysical data under consideration.