Detecting complex behavioral patterns in temporal data, such as moving object trajectories, often relies on precise formal specifications derived from vague domain concepts. However, such specifications are sensitive to noise and minor fluctuations, leading to missed pattern occurrences. Conversely, machine learning (ML) approaches require abundant labeled examples, which are often impractical to obtain. Our visual analytics approach enables domain experts to derive, test, and combine interval-based features to discriminate patterns and to generate training data for ML algorithms. Visual aids enhance the recognition and characterization of expected patterns and the discovery of unexpected ones. Case studies demonstrate the feasibility and effectiveness of the approach, which offers a novel framework for integrating human expertise and analytical reasoning with ML techniques.
Provenance facts, such as who made an image and how, can provide valuable context for users to make trust decisions about visual content. Against a backdrop of inexorable progress in generative AI for computer graphics, over two billion people will vote in public elections this year. Emerging standards and provenance-enhancing tools promise to play an important role in fighting fake news and the spread of misinformation. In this article, we contrast three provenance-enhancing technologies (metadata, fingerprinting, and watermarking) and discuss how we can build upon the complementary strengths of these three pillars to provide robust trust signals to support stories told by real and generative images. Beyond authenticity, we describe how provenance can also underpin new models for value creation in the age of generative AI. In doing so, we address other risks arising with generative AI, such as ensuring training consent and the proper attribution of credit to creatives who contribute their work to train generative models. We show that provenance may be combined with distributed ledger technology to develop novel solutions for recognizing and rewarding creative endeavor in the age of generative AI.
Autonomous driving is no longer a topic of science fiction. Autonomous driving technologies have matured to the point of reliability, and they generate vast amounts of data. Effectively harnessing this information is essential for enhancing the safety, reliability, and efficiency of autonomous vehicles. In this article, we explore the pivotal role of visualization and visual analytics (VA) techniques in autonomous driving. By employing sophisticated data visualization methods, VA researchers and practitioners transform intricate datasets into intuitive visual representations, providing valuable insights for decision-making processes. This article delves into various visualization approaches, including spatial-temporal mapping, interactive dashboards, and machine-learning-driven analytics, tailored specifically for autonomous driving scenarios. Furthermore, it investigates the integration of real-time sensor data, sensor coordination with VA, and machine learning algorithms to create comprehensive visualizations. This research advocates for the pivotal role of visualization and VA in shaping the future of autonomous driving systems, fostering innovation, and ensuring the safe integration of self-driving vehicles.
This article presents a visual analytics framework, idMotif, to support domain experts in identifying motifs in protein sequences. A motif is a short sequence of amino acids usually associated with distinct functions of a protein, and identifying similar motifs in protein sequences helps us to predict certain types of disease or infection. idMotif can be used to explore, analyze, and visualize such motifs in protein sequences. We introduce a deep-learning-based method for grouping protein sequences and allow users to discover motif candidates of protein groups based on local explanations of the decisions of a deep-learning model. idMotif provides several interactive linked views for between- and within-group analysis of protein clusters and sequences. Through a case study and experts' feedback, we demonstrate how the framework helps domain experts analyze protein sequences and identify motifs.
This inaugural article sets the stage and scope for a new department in IEEE Computer Graphics and Applications: @theSource. In this department, we set out to address the questions, "How have open source projects and open standards driven graphics innovations and applications?" and "What can we learn from them?" Thus, we are broadly concerned with how open communities and ecosystems have impacted, and are impacting, computer graphics. The intent is to highlight open source software (such as architectures, engines, frameworks, libraries, and services); open standards and open source data and models; and applications, as well as the impacts of open graphics technologies. We also consider historical and summative reviews on the cultural and economic aspects of open source and open standards graphics ecosystems, such as visualization and mixed reality.