Online job advertisements on job portals and websites have become the most popular way for people to find career opportunities. However, most job sites offer only basic filters such as job titles, keywords, and compensation ranges, which makes it hard for job seekers to efficiently identify, among a vast number of listings, the advertisements that match their skill sets. We therefore propose JobViz, a set of well-coordinated visualizations that provide job seekers with three levels of detail of job information: a skill-job overview visualizes skill sets, job posts, and the relationships between them with a hierarchical visualization design; a post exploration view uses an augmented radar-chart glyph to represent job posts and helps users quickly grasp the skills required by each position; a post detail view lists the specifics of selected job posts for in-depth analysis and comparison. Using a real-world recruitment advertisement dataset collected from 51Job, one of the largest job websites in China, we conducted two case studies and user interviews to evaluate JobViz. The results demonstrate the usefulness and effectiveness of our approach.
Visualization onboarding supports users in reading, interpreting, and extracting information from visual data representations. General-purpose onboarding tools and libraries are applicable to explaining a wide range of graphical user interfaces but cannot handle specific visualization requirements. This paper describes a first step towards developing an onboarding library called VisAhoi, which is easy to integrate, extend, semi-automate, reuse, and customize. VisAhoi supports the creation of onboarding elements for different visualization types and datasets. We demonstrate how to extract and describe onboarding instructions using three well-known high-level descriptive visualization grammars — Vega-Lite, Plotly.js, and ECharts. We show the applicability of our library through two usage scenarios that describe the integration of VisAhoi, first, into a VA tool for the analysis of high-throughput screening (HTS) data and, second, into a Flourish template to provide an authoring tool for data journalists for a treemap visualization. We provide a supplementary website (https://datavisyn.github.io/visAhoi/) that demonstrates the applicability of VisAhoi to various visualizations, including a bar chart, a horizon graph, a change matrix/heatmap, a scatterplot, and a treemap visualization.
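The key idea — that declarative grammars make onboarding instructions extractable — can be illustrated with a small sketch. The function below is hypothetical and is not VisAhoi's actual API; it only shows how mark and encoding declarations in a Vega-Lite-style spec already carry enough semantics to derive reading instructions automatically.

```python
# Hypothetical sketch, NOT VisAhoi's real API: derive onboarding hints
# from the mark and encoding declarations of a Vega-Lite-style spec.

def extract_onboarding_hints(spec: dict) -> list[str]:
    """Turn mark and encoding declarations into reading instructions."""
    hints = []
    mark = spec.get("mark")
    if mark:
        hints.append(f"This chart uses '{mark}' marks to represent data items.")
    for channel, enc in spec.get("encoding", {}).items():
        # Each encoding channel names the field it shows and its data type.
        hints.append(
            f"The {channel} channel encodes the {enc.get('type')} "
            f"field '{enc.get('field')}'."
        )
    return hints

# A minimal bar-chart spec in Vega-Lite style (illustrative only).
bar_spec = {
    "mark": "bar",
    "encoding": {
        "x": {"field": "category", "type": "nominal"},
        "y": {"field": "count", "type": "quantitative"},
    },
}

for hint in extract_onboarding_hints(bar_spec):
    print(hint)
```

Because the spec is data rather than imperative drawing code, the same extraction logic generalizes across visualization types — the property the abstract attributes to high-level descriptive grammars.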
Generative AI (GenAI) has witnessed remarkable progress in recent years and demonstrated impressive performance in various generation tasks in domains such as computer vision and computational design. Many researchers have attempted to integrate GenAI into visualization frameworks, leveraging its superior generative capacity for different operations. Concurrently, recent major breakthroughs in GenAI, such as diffusion models and large language models, have drastically increased the potential of GenAI4VIS. From a technical perspective, this paper looks back on previous visualization studies leveraging GenAI and discusses the challenges and opportunities for future research. Specifically, we cover the applications of different types of GenAI methods — including sequence, tabular, spatial, and graph generation techniques — for different visualization tasks, which we summarize into four major stages: data enhancement, visual mapping generation, stylization, and interaction. For each specific visualization sub-task, we illustrate the typical data and concrete GenAI algorithms, aiming to provide an in-depth understanding of state-of-the-art GenAI4VIS techniques and their limitations. Furthermore, based on the survey, we discuss three major aspects of challenges and research opportunities: evaluation, datasets, and the gap between end-to-end GenAI methods and visualizations. By summarizing different generation algorithms, their current applications, and their limitations, this paper endeavors to provide useful insights for future GenAI4VIS research.
Gaussian mixture models are classical but still popular machine learning models. An appealing feature of Gaussian mixture models is their tractability: they can be learned efficiently and exactly from data, and they support efficient exact inference queries such as soft clustering of data points. Though seemingly simple, Gaussian mixture models can be hard to understand. There are at least four aspects to understanding Gaussian mixture models, namely, understanding the whole distribution, its individual parts (mixture components), the relationships between the parts, and the interplay of the whole and its parts. In a structured literature review of applications of Gaussian mixture models, we found the need to support all four aspects. To identify candidate visualizations that effectively address these user needs, we structure the available design space along three different representations of Gaussian mixture models, namely as functions, sets of parameters, and sampling processes. From the design space, we implemented three design concepts that visualize the overall distribution together with its components. Finally, we assessed the practical usefulness of the design concepts with respect to the different user needs in expert interviews and an insight-based user study.
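The tractability the abstract refers to — efficient fitting plus exact inference queries such as soft clustering — can be made concrete with a small example. This sketch uses scikit-learn's `GaussianMixture` (not the paper's own tooling) on toy data, querying both the whole distribution (its density) and its parts (per-component responsibilities).

```python
# Illustrative sketch, not the paper's tool: fit a Gaussian mixture with
# scikit-learn, then query the whole (density) and the parts
# (per-component responsibilities, i.e. soft clustering).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Two well-separated Gaussian blobs as toy 2D data.
data = np.vstack([
    rng.normal(loc=-3.0, scale=0.5, size=(200, 2)),
    rng.normal(loc=+3.0, scale=0.5, size=(200, 2)),
])

# EM fitting is fast and deterministic given a fixed seed.
gmm = GaussianMixture(n_components=2, random_state=0).fit(data)

# "Whole vs. parts": overall log-density vs. soft cluster assignment.
point = np.array([[0.0, 0.0]])
log_density = gmm.score_samples(point)       # log p(x) under the mixture
responsibilities = gmm.predict_proba(point)  # one probability per component

print("log density:", log_density[0])
print("responsibilities:", responsibilities[0])
```

The responsibilities always sum to one, which is exactly the "relationship between the parts" that the visualization designs in the paper aim to expose.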
Dimensionality reduction is often used to project time series data from multidimensional to two-dimensional space to generate visual representations of the temporal evolution. In this context, we address the problem of multidimensional time series visualization by presenting a new method to show and handle projection errors introduced by dimensionality reduction techniques on multidimensional temporal data. For visualization, subsequent time instances are rendered as dots that are connected by lines or curves to indicate the temporal dependencies. However, inevitable projection artifacts may lead to poor visualization quality and misinterpretation of the temporal information. Wrongly projected data points, inaccurate variations in the distances between projected time instances, and intersections of connecting lines could lead to wrong assumptions about the original data. We adapt local and global quality metrics to measure the visual quality along the projected time series, and we introduce a model to assess the projection error at intersecting lines. These serve as a basis for our new uncertainty visualization techniques that use different visual encodings and interactions to indicate, communicate, and work with the visualization uncertainty from projection errors and artifacts along the timeline of data points, their connections, and intersections. Our approach is agnostic to the projection method and works for linear and non-linear dimensionality reduction methods alike.
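One simple form of the local quality measurement described above — this is a generic sketch under our own assumptions, not the paper's exact metric — compares the length of each consecutive-time-step segment before and after projection. Segments whose projected length deviates strongly from the original length are exactly where the drawn line misrepresents the temporal evolution.

```python
# Sketch of one possible local quality metric (not the paper's exact
# formulation): compare the distances between consecutive time instances
# before and after a PCA projection to expose distorted segments.
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 100)
# A noisy 3D helix as a toy multidimensional time series.
series = np.column_stack([np.cos(t), np.sin(t), 0.3 * t])
series += rng.normal(scale=0.02, size=series.shape)

# PCA via SVD: project onto the top two principal components.
centered = series - series.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
proj = centered @ vt[:2].T

# Per-segment distortion: ratio of projected to original step length.
# For an orthogonal projection this ratio is at most 1; values far
# below 1 mark segments whose temporal spacing the 2D view shrinks.
orig_steps = np.linalg.norm(np.diff(series, axis=0), axis=1)
proj_steps = np.linalg.norm(np.diff(proj, axis=0), axis=1)
distortion = proj_steps / orig_steps

print("mean distortion:", distortion.mean())
print("most distorted segment:", int(np.argmin(distortion)))
```

Such per-segment values can then drive uncertainty encodings along the timeline (e.g., line width or color), independent of whether the projection was linear, as here, or produced by a non-linear method.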

