Quantitative visual computing
F. Schreiber, D. Weiskopf
DOI: https://doi.org/10.1515/itit-2022-0048 (IT-Information Technology, published 2022-08-01)
Abstract: Visual computing is a broad field ranging from image analysis and computer vision to computer graphics and visualization. All these areas work with visual data, and further including methods from human-computer interaction and perceptual psychology allows for efficient visual data handling, exploration …

Uncertainty visualization: Fundamentals and recent developments
David Hägele, C. Schulz, C. Beschle, Hannah Booth, Miriam Butt, Andrea Barth, O. Deussen, D. Weiskopf
DOI: https://doi.org/10.1515/itit-2022-0033 (IT-Information Technology, published 2022-08-01)
Abstract: This paper provides a brief overview of uncertainty visualization along with some fundamental considerations on uncertainty propagation and modeling. Starting from the visualization pipeline, we discuss how the different stages along this pipeline can be affected by uncertainty, how they can deal with it, and how they can propagate uncertainty information to subsequent processing steps. We illustrate recent advances in the field with a number of examples from a wide range of applications: uncertainty visualization of hierarchical data, multivariate time series, stochastic partial differential equations, and data from linguistic annotation.

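To make the idea of propagating uncertainty along the pipeline concrete, here is a minimal sketch (not taken from the paper; the data values and the smoothing stage are assumptions): a Monte Carlo ensemble of plausible inputs is pushed through one transformation stage, and the result is summarized so that a later rendering stage could encode it, for example as error bars or a confidence band.

```python
# Minimal sketch (assumed data, not from the paper): Monte Carlo propagation
# of input uncertainty through one visualization-pipeline stage (smoothing),
# yielding summary statistics that an uncertainty-aware chart could encode.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical measurements with known standard errors.
values = np.array([2.1, 2.4, 3.0, 3.6, 4.1])
stderr = np.array([0.2, 0.3, 0.1, 0.4, 0.2])

n_samples = 10_000
# Draw an ensemble of plausible inputs, then push each sample through the
# transformation stage (here: a simple moving-average filter).
ensemble = rng.normal(values, stderr, size=(n_samples, values.size))
smooth = lambda v: np.convolve(v, np.ones(3) / 3, mode="same")
smoothed = np.apply_along_axis(smooth, 1, ensemble)

# Propagated uncertainty: per-point mean and a 95% interval that the
# rendering stage could visualize alongside the data.
mean = smoothed.mean(axis=0)
lo, hi = np.percentile(smoothed, [2.5, 97.5], axis=0)
for i, (m, l, h) in enumerate(zip(mean, lo, hi)):
    print(f"point {i}: {m:.2f}  [{l:.2f}, {h:.2f}]")
```
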
Adapting visualizations and interfaces to the user
Francesco Chiossi, Johannes Zagermann, Jakob Karolus, Nils Rodrigues, Priscilla Balestrucci, D. Weiskopf, Benedikt V. Ehinger, Tiare M. Feuchtner, Harald Reiterer, Lewis L. Chuang, Marc Ernst, A. Bulling, Sven Mayer, Albrecht Schmidt
DOI: https://doi.org/10.1515/itit-2022-0035 (IT-Information Technology, published 2022-08-01)
Abstract: Adaptive visualization and interfaces pervade our everyday tasks to improve interaction from the point of view of user performance and experience. This approach allows using several user inputs, whether physiological, behavioral, qualitative, or multimodal combinations, to enhance the interaction. Due to the multitude of approaches, we outline the current research trends of inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of the current research in adaptive systems.

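As a toy illustration of the kind of input-driven adaptation surveyed here (the load signal, thresholds, and state names are hypothetical and not taken from the paper), the sketch below switches a visualization's level of detail based on a normalized estimate of user load.

```python
# Minimal sketch (hypothetical signal and thresholds, not a method from the
# paper): an adaptive view that reduces visual detail when an estimated
# user-load input is high and restores it when the load drops.
from dataclasses import dataclass

@dataclass
class AdaptiveView:
    detail_level: str = "full"

    def adapt(self, estimated_load: float) -> None:
        # Simple threshold rule for brevity; real systems would smooth the
        # signal and add hysteresis to avoid oscillating between states.
        if estimated_load > 0.7:
            self.detail_level = "reduced"   # declutter under high load
        elif estimated_load < 0.3:
            self.detail_level = "full"      # restore detail when load drops

view = AdaptiveView()
for load in (0.2, 0.5, 0.8, 0.4, 0.1):      # assumed normalized load estimates
    view.adapt(load)
    print(f"load={load:.1f} -> detail={view.detail_level}")
```
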
Immersive analytics: An overview
Karsten Klein, M. Sedlmair, Falk Schreiber
DOI: https://doi.org/10.1515/itit-2022-0037 (IT-Information Technology, published 2022-08-01)
Abstract: Immersive Analytics is concerned with the systematic examination of the benefits and challenges of using immersive environments for data analysis, and with the development of corresponding designs that improve the quality and efficiency of the analysis process. While immersive technologies are now broadly available, practical solutions have not received broad acceptance in real-world applications outside of several core areas, and proper guidelines on the design of such solutions are still under development. Both fundamental research and applications bring together topics and questions from several fields and open a wide range of directions regarding underlying theory, evidence from user studies, and practical solutions tailored towards the requirements of application areas. We give an overview of the concepts, topics, research questions, and challenges.

Machine learning meets visualization – Experiences and lessons learned
Quynh Quang Ngo, Frederik L. Dennig, D. Keim, M. Sedlmair
DOI: https://doi.org/10.1515/itit-2022-0034 (IT-Information Technology, published 2022-08-01)
Abstract: In this article, we discuss how Visualization (VIS) and Machine Learning (ML) could mutually benefit from each other. We do so through the lens of our own experience working at this intersection for the last decade. In particular, we focus on describing how VIS supports explaining ML models and aids ML-based dimensionality reduction techniques in solving tasks such as parameter space analysis. In the other direction, we discuss approaches showing how ML helps improve VIS, such as applying ML-based automation to improve visualization design. Based on the examples and our own perspective, we describe a number of open research challenges that we frequently encountered in our endeavors to combine ML and VIS.

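A minimal sketch of one direction discussed above, VIS supporting ML: an ML-based dimensionality reduction projects the data to 2D and each point carries a classifier's confidence, so that a scatter plot could reveal where the model is uncertain. The dataset, the models, and the scikit-learn toolchain are assumptions chosen for illustration, not the authors' setup.

```python
# Minimal sketch (assumed toolchain, not the authors' code): project
# high-dimensional data with an ML-based dimensionality reduction and
# attach classifier outputs, so a scatter plot can show where the model
# is confident or confused.
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.manifold import TSNE

X, y = load_digits(return_X_y=True)

# Fit a classifier and record its per-sample confidence (max class probability).
# Training and scoring on the same data keeps the sketch short; a real analysis
# would use held-out data.
clf = RandomForestClassifier(random_state=0).fit(X, y)
confidence = clf.predict_proba(X).max(axis=1)

# ML-based dimensionality reduction to 2D positions for plotting.
embedding = TSNE(n_components=2, random_state=0).fit_transform(X)

# 'embedding' gives the 2D layout; 'confidence' could drive point color or
# size in a VIS tool to highlight regions where the model is uncertain.
print(embedding.shape, confidence.min(), confidence.max())
```
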
Airborne LiDAR data in landscape archaeology. An introduction for non-archaeologists
Benjamin Štular, Edisa Lozić
DOI: https://doi.org/10.1515/itit-2022-0001 (IT-Information Technology, published 2022-07-28)
Abstract: The use of airborne LiDAR data has become an essential component of landscape archaeology. This review article provides an understandable introduction to airborne LiDAR data processing specific to archaeology with a holistic view from a technical perspective. It is aimed primarily at researchers, students, and experts whose primary field of study is not archaeology. The article first outlines what the archaeological interest in airborne LiDAR data is and how the data processing workflow is archaeology-specific. The article emphasises that the processing workflow is riddled with archaeology-specific details and presents the key processing steps. These are, in order of their impact on the final result, enhanced visualisation, manual reclassification, filtering of ground points, and interpolation. If a single most important characteristic of airborne LiDAR data processing for archaeology is to be emphasised, it is that archaeologists need an archaeology-specific DEM for their work.

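The following sketch illustrates two of the processing steps named above, filtering of ground points and interpolation to a DEM, followed by a simple hillshade as one possible enhanced visualisation. It runs on synthetic points rather than a real point cloud; the class code follows the common ASPRS convention (class 2 = ground), and the grid resolution and light direction are assumed values.

```python
# Minimal sketch (synthetic data, assumed parameters): keep ground-classified
# returns (ASPRS class 2), interpolate them to a DEM grid, and compute a
# simple hillshade for visualisation.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(1)

# Stand-in for a classified point cloud: x, y, z and an ASPRS class code.
n = 20_000
x = rng.uniform(0, 100, n)
y = rng.uniform(0, 100, n)
z = 0.05 * x + np.sin(y / 10) + rng.normal(0, 0.05, n)
cls = rng.choice([2, 3, 5], size=n, p=[0.6, 0.2, 0.2])   # 2 = ground

# Filtering of ground points, then interpolation to a 0.5 m DEM grid.
ground = cls == 2
gx, gy = np.meshgrid(np.arange(0, 100, 0.5), np.arange(0, 100, 0.5))
dem = griddata((x[ground], y[ground]), z[ground], (gx, gy), method="linear")

# Analytical hillshade (assumed light azimuth 315 degrees, altitude 45 degrees).
dz_dy, dz_dx = np.gradient(dem, 0.5)
slope = np.arctan(np.hypot(dz_dx, dz_dy))
aspect = np.arctan2(-dz_dx, dz_dy)
az, alt = np.radians(315), np.radians(45)
hillshade = np.sin(alt) * np.cos(slope) + np.cos(alt) * np.sin(slope) * np.cos(az - aspect)
print(dem.shape, np.nanmin(hillshade), np.nanmax(hillshade))
```
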
From LiDAR to deep learning: A case study of computer-assisted approaches to the archaeology of Guadalupe and northeast Honduras
Mike Lyons, Franziska Fecher, M. Reindel
DOI: https://doi.org/10.1515/itit-2022-0004 (IT-Information Technology, published 2022-07-28)
Abstract: Archaeologists are interested in better understanding matters of our human past based on material culture. The tools we use to approach archaeological research questions range from the trowel and brush to, more recently, even those of artificial intelligence. As access to computing technology has increased over time, the breadth of computer-assisted methods in archaeology has also increased. This proliferation has provided us with a considerable toolset for engaging both new and long-standing questions, especially as interdisciplinary collaboration between archaeologists, computer scientists, and engineers continues to grow. As an example of an archaeological project engaging in computer-based approaches, the Guadalupe/Colón Archaeological Project is presented as a case study. Project applications and methodologies range from the regional-scale identification of sites using a geographic information system (GIS) or light detection and ranging (LiDAR) down to the microscopic scale of classifying ceramic materials with convolutional neural networks. Methods relating to the 3D modeling of sites, features, and artifacts and the benefits therein are also explored. In this paper, we give an overview of the methods used by the project, which include 1) predictive modeling using a GIS slope analysis for the identification of possible site locations, 2) structure from motion (SfM) drone imagery for site mapping and characterization, 3) airborne LiDAR for site identification, mapping, and characterization, 4) 3D modeling of stone features for improved visualization, 5) 3D modeling of ceramic artifacts for more efficient documentation, and 6) the application of deep learning for automated classification of ceramic materials in thin section. These approaches are discussed and critically considered with the understanding that interdisciplinary cooperation between domain experts in engineering, computer science, and archaeology is an important means of improving and expanding upon digital methodologies in archaeology as a whole.

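As an illustration of item 1), a GIS slope analysis used for predictive modeling, the sketch below derives slope from a synthetic DEM and flags flat terrain as candidate cells. The 6-degree threshold and the cell size are assumptions for illustration, not the project's actual criteria.

```python
# Minimal sketch (synthetic DEM, assumed threshold): compute slope from a DEM
# and flag low-slope cells as candidate locations for settlement sites.
import numpy as np

cell_size = 10.0                                 # metres per DEM cell (assumed)
xx, yy = np.meshgrid(np.arange(200), np.arange(200))
dem = 300 + 0.5 * xx + 20 * np.sin(yy / 15.0)    # stand-in terrain

# Slope in degrees from the DEM gradients.
dz_dy, dz_dx = np.gradient(dem, cell_size)
slope_deg = np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

# Flat-enough terrain (here: below 6 degrees) is kept as a candidate mask; a
# real model would combine this with further criteria (water, visibility, ...).
candidates = slope_deg < 6.0
print(f"{candidates.mean():.1%} of cells flagged as candidate terrain")
```
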
Formal verification of multiplier circuits using computer algebra
Daniela Kaufmann
DOI: https://doi.org/10.1515/itit-2022-0039 (IT-Information Technology, published 2022-06-24)
Abstract: Digital circuits are widely utilized in computers because they provide models for various digital components and arithmetic operations. Arithmetic circuits are a subclass of digital circuits that are used to execute Boolean algebra. To avoid problems like the infamous Pentium FDIV bug, it is critical to ensure that arithmetic circuits are correct. Formal verification can be used to determine the correctness of a circuit with respect to a certain specification. However, arithmetic circuits, particularly integer multipliers, represent a challenge to current verification methodologies and, in reality, still necessitate a significant amount of manual labor. In my dissertation, we examine and develop automated reasoning approaches based on computer algebra, where the word-level specification, modeled as a polynomial, is reduced by a Gröbner basis inferred from the gate-level representation of the circuit. We provide a precise formalization of this reasoning process, which includes soundness and completeness arguments and adds to the mathematical background in this field. On the practical side, we present a unique incremental column-wise verification algorithm and preprocessing approaches based on variable elimination that simplify the inferred Gröbner basis. Furthermore, the thesis provides an algebraic proof calculus that allows obtaining certificates as a by-product of circuit verification, in order to boost confidence in the outcomes of automated reasoning tools. These certificates can be efficiently verified with independent proof-checking tools.

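The core reasoning step can be illustrated on a toy circuit. The sketch below (using SymPy, not the dissertation's tool) encodes the gates of a 2-bit multiplier as polynomials, adds Boolean constraints for the inputs, and reduces the word-level specification modulo a Gröbner basis of these polynomials; a zero remainder certifies that the circuit matches the specification.

```python
# Minimal sketch (not the dissertation's tool): reduce the word-level
# specification of a 2-bit multiplier modulo a Groebner basis built from
# its gate-level polynomials; remainder 0 means the circuit is correct.
from sympy import symbols, groebner

a0, a1, b0, b1 = symbols("a0 a1 b0 b1")
p00, p01, p10, p11 = symbols("p00 p01 p10 p11")   # partial products (AND gates)
s0, s1, s2, s3, c1 = symbols("s0 s1 s2 s3 c1")    # sum bits and internal carry

# Gate polynomials: AND(x, y) -> x*y, XOR(x, y) -> x + y - 2*x*y.
gates = [
    p00 - a0 * b0, p01 - a0 * b1, p10 - a1 * b0, p11 - a1 * b1,
    s0 - p00,
    s1 - (p01 + p10 - 2 * p01 * p10), c1 - p01 * p10,
    s2 - (p11 + c1 - 2 * p11 * c1),   s3 - p11 * c1,
]
# Boolean ("field") constraints for the primary inputs.
boolean = [v * v - v for v in (a0, a1, b0, b1)]

# Word-level specification: output word minus the product of the input words.
spec = (8 * s3 + 4 * s2 + 2 * s1 + s0) - (2 * a1 + a0) * (2 * b1 + b0)

# Reverse-topological variable order (outputs before inputs) keeps the
# lex Groebner basis computation cheap for this tiny circuit.
gens = (s3, s2, s1, c1, s0, p11, p10, p01, p00, a1, a0, b1, b0)
G = groebner(gates + boolean, *gens, order="lex")
_, remainder = G.reduce(spec)
print("remainder:", remainder)   # 0 -> circuit implements 2-bit multiplication
```
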
Approximating stochastic numbers to reduce latency
Syoki Kawaminami, Yukino Watanabe, S. Yamashita
DOI: https://doi.org/10.1515/itit-2021-0041 (IT-Information Technology, published 2022-05-10)
Abstract: Approximate Computing (AC) and Stochastic Computing (SC) have been studied as new computing paradigms to achieve energy-efficient designs for error-tolerant applications. The hardware cost of SC is generally small compared to that of AC, but SC has not been applied to as wide a range of applications as AC, because SC needs very many cycles: it relies on long random bit-strings called Stochastic Numbers (SNs) to maintain the desired precision. To mitigate this disadvantage of SC, we propose a new idea for approximating numbers represented by SNs: we use multiple SNs to represent one number. Our method can shorten the length of SNs drastically while keeping the precision level of conventional SNs. We study two specific cases in which we use two and three shorter bit-strings to represent a single conventional SN, which we call dual-rail and triple-rail SNs, respectively. We also discuss the general case in which many SNs correspond to a single conventional SN. Furthermore, we compare triple-rail, dual-rail, and conventional SNs in terms of hardware overhead and calculation errors. From the comparison, we conclude that our idea can be used to shorten the necessary cycles for SC.

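For background on why long streams are needed, the sketch below shows conventional unipolar SC only, not the paper's dual-rail or triple-rail scheme: a value in [0, 1] is encoded as a random bit-string whose fraction of 1s equals the value, multiplication of two independent SNs is a bitwise AND, and the error shrinks only as the bit-strings grow, which is exactly the latency cost the paper aims to reduce.

```python
# Minimal sketch of conventional unipolar stochastic computing (background
# for the paper's idea, not its dual-rail scheme): precision improves only
# with longer streams, which is the latency problem the paper targets.
import numpy as np

rng = np.random.default_rng(42)

def encode_sn(value: float, length: int) -> np.ndarray:
    """Encode 'value' in [0, 1] as a stochastic number of 'length' bits."""
    return (rng.random(length) < value).astype(np.uint8)

def decode_sn(bits: np.ndarray) -> float:
    """Decode an SN back to its estimated value (fraction of 1s)."""
    return float(bits.mean())

x_val, y_val = 0.75, 0.40          # exact product: 0.30
for length in (64, 1024, 16384):
    x, y = encode_sn(x_val, length), encode_sn(y_val, length)
    product = x & y                # SC multiplication: one AND gate per bit
    est = decode_sn(product)
    print(f"{length:6d} bits: estimate {est:.4f}, error {abs(est - x_val * y_val):.4f}")
```
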