Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.3.mobmu-354
Nazneen Mansoor, Klaus Schwarz, Reiner Creutzburg
Open-Source Intelligence (OSINT) and Social Media Intelligence (SOCMINT) are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. These OSINT and SOCMINT technologies use sophisticated techniques and special tools to efficiently analyze the continually growing sources of information. There is a great need for training and further education in the OSINT field worldwide. This report describes the importance of open-source and social media intelligence for evaluating disaster management. It also gives an overview of government work in Australia, Haiti, and Japan on disaster management using various OSINT tools and platforms. Thus, decision support for the use of OSINT and SOCMINT tools is provided, and the training needs of investigators can be better estimated.
{"title":"Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan","authors":"Nazneen Mansoor, Klaus Schwarz, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-354","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-354","url":null,"abstract":"Open-source technologies (OSINT) and Social Media Intelligence (SOCMINT) are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. These OSINT and SOCMINT technologies use sophisticated techniques and special tools to efficiently analyze the continually growing sources of information. There is a great need for training and further education in the OSINT field worldwide. This report describes the importance of open source or social media intelligence for evaluating disaster management. It also gives an overview of the government work in Australia, Haiti, and Japan for disaster management using various OSINT tools and platforms. Thus, decision support for using OSINT and SOCMINT tools is given, and the necessary training needs for investigators can be better estimated.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694711","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Am I safe? A preliminary examination of how everyday people interpret covid data visualizations
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.10.hvei-251
Bernice Rogowitz, Paul Borrel
In recent years, international COVID data have been collected by several reputable organizations and made available to the worldwide community. This has resulted in a wellspring of different visualizations. Many different measures can be selected (e.g., cases, deaths, hospitalizations), and for each measure, designers and policy makers can make a myriad of choices about how to represent the data. Data from individual countries may be presented on linear or log scales; daily, weekly, or cumulative; alone or in the context of other countries; scaled to a common grid or to their own range; raw or per capita; etc. It is well known that the data representation can influence the interpretation of data. But what visual features in these different representations affect our judgments? To explore this idea, we conducted an experiment in which we asked participants to look at time-series data plots and assess how safe they would feel if they were traveling to one of the countries represented, and how confident they were in that judgment. Observers rated 48 visualizations of the same data, rendered differently along six controlled dimensions. Our initial results provide insight into how characteristics of the visual representation affect human judgments of time-series data. We also discuss how these results could influence how public policy and news organizations choose to represent data to the public.
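To make that representation space concrete, here is a minimal, hypothetical sketch (not the authors' experimental code) that renders one synthetic case-count series under four of the controlled dimensions discussed above: linear versus log scaling, daily versus cumulative counts, and raw versus per-capita normalization. The data, population figure, and plotting choices are illustrative assumptions only.

```python
# Hypothetical sketch: the same synthetic case-count series rendered under
# several of the representation choices discussed above. Not the study's code.
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
daily_cases = rng.poisson(lam=1000, size=365).astype(float)  # fake daily counts
population = 25_000_000                                      # fake population

# (series to plot, y-axis scale, normalization factor)
variants = {
    "daily, linear, raw": (daily_cases, "linear", 1.0),
    "daily, log, raw": (daily_cases, "log", 1.0),
    "cumulative, linear, raw": (np.cumsum(daily_cases), "linear", 1.0),
    "daily, linear, per 100k": (daily_cases, "linear", 100_000 / population),
}

fig, axes = plt.subplots(2, 2, figsize=(10, 6))
for ax, (title, (series, scale, factor)) in zip(axes.flat, variants.items()):
    ax.plot(series * factor)
    ax.set_yscale(scale)
    ax.set_title(title)
    ax.set_xlabel("day")
fig.tight_layout()
plt.show()
```

Even on this toy series, the four panels look strikingly different, which is exactly the kind of representational variation the experiment probes.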
{"title":"Am I safe? A preliminary examination of how everyday people interpret covid data visualizations","authors":"Bernice Rogowitz, Paul Borrel","doi":"10.2352/ei.2023.35.10.hvei-251","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-251","url":null,"abstract":"During these past years, international COVID data have been collected by several reputable organizations and made available to the worldwide community. This has resulted in a wellspring of different visualizations. Many different measures can be selected (e.g., cases, deaths, hospitalizations). And for each measure, designers and policy makers can make a myriad of different choices of how to represent the data. Data from individual countries may be presented on linear or log scales, daily, weekly, or cumulative, alone or in the context of other countries, scaled to a common grid, or scaled to their own range, raw or per capita, etc. It is well known that the data representation can influence the interpretation of data. But, what visual features in these different representations affect our judgments? To explore this idea, we conducted an experiment where we asked participants to look at time-series data plots and assess how safe they would feel if they were traveling to one of the countries represented, and how confident they are of their judgment. Observers rated 48 visualizations of the same data, rendered differently along 6 controlled dimensions. Our initial results provide insight into how characteristics of the visual representation affect human judgments of time series data. We also discuss how these results could impact how public policy and news organizations choose to represent data to the public.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"2 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695027","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.10.hvei-a10
The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists from all over the world for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.
{"title":"Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.10.hvei-a10","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-a10","url":null,"abstract":"Abstract The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists, from all over the world, for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Conditional synthetic food image generation
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.7.image-268
Wenjin Fu, Yue Han, Jiangpeng He, Sriram Baireddy, Mridul Gupta, Fengqing Zhu
Generative Adversarial Networks (GANs) have been widely investigated for image synthesis owing to their powerful representation learning ability. In this work, we explore StyleGAN and its application to synthetic food image generation. Despite the impressive performance of GANs for natural image generation, food images suffer from high intra-class diversity and inter-class similarity, resulting in overfitting and visual artifacts in synthetic images. Therefore, we aim to explore the capability and improve the performance of GAN methods for food image generation. Specifically, we first choose StyleGAN3 as the baseline method to generate synthetic food images and analyze its performance. Then, we identify two issues that can cause performance degradation on food images during the training phase: (1) inter-class feature entanglement during multi-class food training and (2) loss of high-resolution detail during image downsampling. To address both issues, we propose to train on one food category at a time to avoid feature entanglement and to leverage image patches cropped from high-resolution datasets to retain fine details. We evaluate our method on the Food-101 dataset and show improved quality of generated synthetic food images compared with the baseline. Finally, we demonstrate the great potential of improving the performance of downstream tasks, such as food image classification, by including high-quality synthetic training samples in the data augmentation.
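The two training-side choices lend themselves to a short illustration. Below is a minimal, hypothetical sketch (not the authors' released pipeline) of the data preparation they describe: cropping fixed-size patches from high-resolution photos so fine texture detail survives downsampling, and organizing the patches one food category at a time so each GAN run sees a single class. The directory layout, patch size, and helper function are illustrative assumptions.

```python
# Hypothetical sketch of the patch-based preprocessing described above:
# crop fixed-size patches from high-resolution food photos (so texture detail
# is kept rather than lost to whole-image downsampling), and write them out
# per food class so a GAN can be trained on one category at a time.
import random
from pathlib import Path
from PIL import Image

PATCH = 256  # assumed patch size; StyleGAN3 would then train on these patches

def random_patches(image_path, n_patches=4):
    """Return n random PATCH x PATCH crops from one high-resolution image."""
    img = Image.open(image_path).convert("RGB")
    w, h = img.size
    if w < PATCH or h < PATCH:
        return []  # skip images too small to crop from
    crops = []
    for _ in range(n_patches):
        x = random.randint(0, w - PATCH)
        y = random.randint(0, h - PATCH)
        crops.append(img.crop((x, y, x + PATCH, y + PATCH)))
    return crops

# One-category-at-a-time layout: a separate patch set per food class, so each
# GAN training run never sees features from other classes.
for class_dir in Path("food101/images").iterdir():  # assumed dataset layout
    if not class_dir.is_dir():
        continue
    out_dir = Path("patches") / class_dir.name
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, img_path in enumerate(sorted(class_dir.glob("*.jpg"))):
        for j, patch in enumerate(random_patches(img_path)):
            patch.save(out_dir / f"{i}_{j}.png")
```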
{"title":"Conditional synthetic food image generation","authors":"Wenjin Fu, Yue Han, Jiangpeng He, Sriram Baireddy, Mridul Gupta, Fengqing Zhu","doi":"10.2352/ei.2023.35.7.image-268","DOIUrl":"https://doi.org/10.2352/ei.2023.35.7.image-268","url":null,"abstract":"Generative Adversarial Networks (GAN) have been widely investigated for image synthesis based on their powerful representation learning ability. In this work, we explore the StyleGAN and its application of synthetic food image generation. Despite the impressive performance of GAN for natural image generation, food images suffer from high intra-class diversity and inter-class similarity, resulting in overfitting and visual artifacts for synthetic images. Therefore, we aim to explore the capability and improve the performance of GAN methods for food image generation. Specifically, we first choose StyleGAN3 as the baseline method to generate synthetic food images and analyze the performance. Then, we identify two issues that can cause performance degradation on food images during the training phase: (1) inter-class feature entanglement during multi-food classes training and (2) loss of high-resolution detail during image downsampling. To address both issues, we propose to train one food category at a time to avoid feature entanglement and leverage image patches cropped from high-resolution datasets to retain fine details. We evaluate our method on the Food-101 dataset and show improved quality of generated synthetic food images compared with the baseline. Finally, we demonstrate the great potential of improving the performance of downstream tasks, such as food image classification by including high-quality synthetic training samples in the data augmentation.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135693974","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.10.hvei-220
Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler
The method of loci (memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular use of the method of loci is language learning, where it can support long-term memory of vocabulary by allowing users to associate location and other spatial information with particular words or concepts, thus recruiting spatial memory to assist the memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even better memory enhancement due to their superior visualization abilities. However, a direct comparison of the two technologies in terms of language-learning enhancement has not yet been investigated. Here, we present the results of a study designed to compare AR and VR when using the method of loci for learning vocabulary from a second language.
{"title":"Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)","authors":"Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler","doi":"10.2352/ei.2023.35.10.hvei-220","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-220","url":null,"abstract":"The method of loci (memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular use of the method of loci is for language learning, in which the method can help long-term memory of vocabulary by allowing users to associate location and other spatial information with particular words/concepts, thus making use of spatial memory to assist memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even better memory enhancement due to their superior visualization abilities. However, a direct comparison of the two techniques in terms of language-learning enhancement has not yet been investigated. In this presentation, we present the results of a study designed to compare AR and VR when using the method of loci for learning vocabulary from a second language.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"110 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color Imaging XXVIII: Displaying, Processing, Hardcopy, and Applications Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.15.color-a15
Color imaging has historically been treated as a phenomenon sufficiently described by three independent parameters. Recent advances in computational resources and in the understanding of the human aspects are leading to new approaches that extend the purely metrological view of color towards a perceptual approach describing the appearance of objects, documents, and displays. Part of this perceptual view is the incorporation of spatial aspects, adaptive color processing based on image content, and the automation of color tasks, to name a few. This dynamic nature applies to all output modalities, including hardcopy devices, but to an even larger extent to soft-copy displays with their even larger options of dynamic processing. Spatially adaptive gamut and tone mapping, dynamic contrast, and color management continue to support the unprecedented development of display hardware, covering everything from mobile displays to standard monitors, and all the way to large-size screens and emerging technologies. The scope of inquiry is also broadened by the desire to match not only color but the complete appearance perceived by the user. This conference provides an opportunity to present, to interact, and to learn about the most recent developments in color imaging and material appearance research, technologies, and applications. The focus of the conference is on basic color research and testing, color image input, dynamic color image output and rendering, color image automation, emphasizing color in context and color in images, and reproduction of images across local and remote devices. The conference also covers software, media, and systems related to color and material appearance. Special attention is given to applications and requirements created by and for multidisciplinary fields involving color and/or vision.
{"title":"Color Imaging XXVIII: Displaying, Processing, Hardcopy, and Applications Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.15.color-a15","DOIUrl":"https://doi.org/10.2352/ei.2023.35.15.color-a15","url":null,"abstract":"Abstract Color imaging has historically been treated as a phenomenon sufficiently described by three independent parameters. Recent advances in computational resources and in the understanding of the human aspects are leading to new approaches that extend the purely metrological view of color towards a perceptual approach describing the appearance of objects, documents and displays. Part of this perceptual view is the incorporation of spatial aspects, adaptive color processing based on image content, and the automation of color tasks, to name a few. This dynamic nature applies to all output modalities, including hardcopy devices, but to an even larger extent to soft-copy displays with their even larger options of dynamic processing. Spatially adaptive gamut and tone mapping, dynamic contrast, and color management continue to support the unprecedented development of display hardware covering everything from mobile displays to standard monitors, and all the way to large size screens and emerging technologies. The scope of inquiry is also broadened by the desire to match not only color, but complete appearance perceived by the user. This conference provides an opportunity to present, to interact, and to learn about the most recent developments in color imaging and material appearance researches, technologies and applications. Focus of the conference is on color basic research and testing, color image input, dynamic color image output and rendering, color image automation, emphasizing color in context and color in images, and reproduction of images across local and remote devices. The conference covers also software, media, and systems related to color and material appearance. Special attention is given to applications and requirements created by and for multidisciplinary fields involving color and/or vision.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695222","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Computational Imaging XXI Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.14.coimg-a14
More than ever before, computers and computation are critical to the image formation process. Across diverse applications and fields, remarkably similar imaging problems appear, requiring sophisticated mathematical, statistical, and algorithmic tools. This conference focuses on imaging as a marriage of computation with physical devices. It emphasizes the interplay between mathematical theory, physical models, and computational algorithms that enable effective current and future imaging systems. Contributions to the conference are solicited on topics ranging from fundamental theoretical advances to detailed system-level implementations and case studies.
{"title":"Computational Imaging XXI Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.14.coimg-a14","DOIUrl":"https://doi.org/10.2352/ei.2023.35.14.coimg-a14","url":null,"abstract":"Abstract More than ever before, computers and computation are critical to the image formation process. Across diverse applications and fields, remarkably similar imaging problems appear, requiring sophisticated mathematical, statistical, and algorithmic tools. This conference focuses on imaging as a marriage of computation with physical devices. It emphasizes the interplay between mathematical theory, physical models, and computational algorithms that enable effective current and future imaging systems. Contributions to the conference are solicited on topics ranging from fundamental theoretical advances to detailed system-level implementations and case studies.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"21 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695209","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Image Quality and System Performance XX Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.8.iqsp-a08
We live in a visual world. The perceived quality of images is of crucial importance in industrial, medical, and entertainment application environments. Developments in camera sensors, image processing, 3D imaging, display technology, and digital printing are enabling new or enhanced possibilities for creating and conveying visual content that informs or entertains. Wireless networks and mobile devices expand the ways to share imagery, and autonomous vehicles bring image processing into new aspects of society. The power of imaging rests directly on the visual quality of the images and the performance of the systems that produce them. As the images are generally intended to be viewed by humans, a deep understanding of human visual perception is key to the effective assessment of image quality.
{"title":"Image Quality and System Performance XX Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.8.iqsp-a08","DOIUrl":"https://doi.org/10.2352/ei.2023.35.8.iqsp-a08","url":null,"abstract":"Abstract We live in a visual world. The perceived quality of images is of crucial importance in industrial, medical, and entertainment application environments. Developments in camera sensors, image processing, 3D imaging, display technology, and digital printing are enabling new or enhanced possibilities for creating and conveying visual content that informs or entertains. Wireless networks and mobile devices expand the ways to share imagery and autonomous vehicles bring image processing into new aspects of society. The power of imaging rests directly on the visual quality of the images and the performance of the systems that produce them. As the images are generally intended to be viewed by humans, a deep understanding of human visual perception is key to the effective assessment of image quality.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"74 1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695207","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Media Watermarking, Security, and Forensics 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.4.mwsf-a04
The ease of capturing, manipulating, distributing, and consuming digital media (e.g., images, audio, video, graphics, and text) has enabled new applications and brought a number of important security challenges to the forefront. These challenges have prompted significant research and development in the areas of digital watermarking, steganography, data hiding, forensics, deepfakes, media identification, biometrics, and encryption to protect owners' rights, establish the provenance and veracity of content, and preserve privacy. Research results in these areas have been translated into new paradigms and applications for monetizing media while maintaining ownership rights, and into new biometric and forensic identification techniques that offer novel methods for ensuring privacy. The Media Watermarking, Security, and Forensics Conference is a premier destination for disseminating high-quality, cutting-edge research in these areas. The conference provides an excellent venue for researchers and practitioners to present their innovative work as well as to keep abreast of the latest developments in watermarking, security, and forensics. Early results and fresh ideas are particularly encouraged and supported by the conference review format: only a structured abstract describing the work in progress and preliminary results is initially required, and the full paper is requested just before the conference. A strong focus on how research results are applied by industry, in practice, also gives the conference its unique flavor.
{"title":"Media Watermarking, Security, and Forensics 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.4.mwsf-a04","DOIUrl":"https://doi.org/10.2352/ei.2023.35.4.mwsf-a04","url":null,"abstract":"Abstract The ease of capturing, manipulating, distributing, and consuming digital media (e.g., images, audio, video, graphics, and text) has enabled new applications and brought a number of important security challenges to the forefront. These challenges have prompted significant research and development in the areas of digital watermarking, steganography, data hiding, forensics, deepfakes, media identification, biometrics, and encryption to protect owners’ rights, establish provenance and veracity of content, and to preserve privacy. Research results in these areas has been translated into new paradigms and applications for monetizing media while maintaining ownership rights, and new biometric and forensic identification techniques for novel methods for ensuring privacy. The Media Watermarking, Security, and Forensics Conference is a premier destination for disseminating high-quality, cutting-edge research in these areas. The conference provides an excellent venue for researchers and practitioners to present their innovative work as well as to keep abreast of the latest developments in watermarking, security, and forensics. Early results and fresh ideas are particularly encouraged and supported by the conference review format: only a structured abstract describing the work in progress and preliminary results is initially required and the full paper is requested just before the conference. A strong focus on how research results are applied by industry, in practice, also gives the conference its unique flavor.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695214","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualization and Data Analysis 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 | DOI: 10.2352/ei.2023.35.1.vda-a01
The Conference on Visualization and Data Analysis (VDA) 2023 covers all research, development, and application aspects of data visualization and visual analytics. Since the first VDA conference was held in 1994, the annual event has grown steadily into a major venue for visualization researchers and practitioners from around the world to present their work and share their experiences. We invite you to participate by submitting your original research as a full paper for an oral or interactive (poster) presentation and by attending VDA in the upcoming year.
{"title":"Visualization and Data Analysis 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.1.vda-a01","DOIUrl":"https://doi.org/10.2352/ei.2023.35.1.vda-a01","url":null,"abstract":"Abstract The Conference on Visualization and Data Analysis (VDA) 2023 covers all research, development, and application aspects of data visualization and visual analytics. Since the first VDA conference was held in 1994, the annual event has grown steadily into a major venue for visualization researchers and practitioners from around the world to present their work and share their experiences. We invite you to participate by submitting your original research as a full paper, for an oral or interactive (poster) presentation, and attending VDA in the upcoming year.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695220","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}