
IS&T International Symposium on Electronic Imaging: Latest Publications

Image Processing: Algorithms and Systems XXI Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.9.ipas-a09
Abstract Image Processing: Algorithms and Systems continues the tradition of its predecessor conference, Nonlinear Image Processing and Pattern Analysis, in exploring new image processing algorithms. Specifically, the conference aims to highlight the importance of the interaction between transform-, model-, and learning-based approaches for creating effective algorithms and building modern imaging systems for new and emerging applications. It also echoes the growing call for integrating theoretical research on image processing algorithms with the more applied research on image processing systems.
Citations: 0
Intelligent Robotics and Industrial Applications using Computer Vision 2023 Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.5.iriacv-a05
Abstract This conference brings together real-world practitioners and researchers in intelligent robots and computer vision to share recent applications and developments. Topics of interest include the integration of imaging sensors, supporting hardware, computers, and algorithms for intelligent robots, manufacturing inspection, characterization, and/or control. The decreased cost of computational power and vision sensors has motivated the rapid proliferation of machine vision technology in a variety of industries, including aluminum, automotive, forest products, textiles, glass, steel, metal casting, aircraft, chemicals, food, fishing, agriculture, archaeological products, medical products, artistic products, etc. Other industries, such as semiconductor and electronics manufacturing, have been employing machine vision technology for several decades. Machine vision supporting handling robots is another main topic. With respect to intelligent robotics, another approach is sensor fusion – combining multi-modal sensors in audio, location, image, and video data for signal processing, machine learning, and computer vision, as well as other 3D capture devices. There is a need for accurate, fast, and robust detection of objects and their position in space. Their surface, background, and illumination are uncontrolled, and in most cases the objects of interest lie among many others. For both new and existing industrial users of machine vision, there are numerous innovative methods to improve productivity, quality, and compliance with product standards. There are several broad problem areas that have received significant attention in recent years. For example, some industries are collecting enormous amounts of image data from product monitoring systems. New and efficient methods are required to extract insight and to perform process diagnostics based on this historical record.
Regarding the physical scale of the measurements, microscopy techniques are nearing resolution limits in fields such as semiconductors, biology, and other nano-scale technologies. Techniques such as resolution enhancement, model-based methods, and statistical imaging may provide the means to extend these systems beyond current capabilities. Furthermore, obtaining real-time and robust measurements in-line or at-line in harsh industrial environments is a challenge for machine vision researchers, especially when the manufacturer cannot make significant changes to their facility or process.
Citations: 0
Immersive security personnel training module for active shooter events
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.12.ervr-217
Sharad Sharma, JeeWoong Park, Brendan Tran Morris
There is a need to prepare for emergencies such as active shooter events. Emergency response training drills and exercises are necessary because we cannot predict when emergencies will occur. There has been progress in understanding human behavior, unpredictability, human motion synthesis, crowd dynamics, and their relationships with active shooter events, but challenges remain. This paper presents an immersive security personnel training module for active shooter events in an indoor building. We have created an experimental platform for conducting active shooter training drills that gives a fully immersive feel of the situation and allows one to perform virtual evacuation drills. The security personnel training module incorporates four sub-modules: 1) a situational assessment module, 2) an individual officer intervention module, 3) a team response module, and 4) a rescue task force module. We have developed an immersive virtual reality training module for active shooter events using an Oculus headset for course-of-action planning, visualization, and situational awareness, as shown in Fig. 1. The immersive security personnel training module aims to gather information about the emergency situation inside the building. The dispatched officer verifies the active shooter situation in the building. The security personnel should find a safe zone in the building and secure the people in that area. They should also determine the number and location of persons in possible jeopardy. Upon completion of the initial assessment, the first security personnel on scene shall advise communications and request resources as deemed necessary. This allows determining whether to take immediate action, alone or with another officer, or to wait until additional resources are available. After successfully gathering this information, the personnel relay it to their officer through a communication device.
Citations: 0
Engineering Reality of Virtual Reality 2023 Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.12.ervr-a12
Abstract Virtual and augmented reality systems are evolving. In addition to research, the trend toward content building continues and practitioners find that technologies and disciplines must be tailored and integrated for specific visualization and interactive applications. This conference serves as a forum where advances and practical advice toward both creative activity and scientific investigation are presented and discussed. Research results can be presented and applications can be demonstrated.
Citations: 0
Conditional synthetic food image generation
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.7.image-268
Wenjin Fu, Yue Han, Jiangpeng He, Sriram Baireddy, Mridul Gupta, Fengqing Zhu
Generative Adversarial Networks (GANs) have been widely investigated for image synthesis based on their powerful representation-learning ability. In this work, we explore StyleGAN and its application to synthetic food image generation. Despite the impressive performance of GANs for natural image generation, food images suffer from high intra-class diversity and inter-class similarity, resulting in overfitting and visual artifacts in synthetic images. Therefore, we aim to explore the capability and improve the performance of GAN methods for food image generation. Specifically, we first choose StyleGAN3 as the baseline method to generate synthetic food images and analyze its performance. Then, we identify two issues that can cause performance degradation on food images during the training phase: (1) inter-class feature entanglement during multi-food-class training and (2) loss of high-resolution detail during image downsampling. To address both issues, we propose to train one food category at a time to avoid feature entanglement and to leverage image patches cropped from high-resolution datasets to retain fine details. We evaluate our method on the Food-101 dataset and show improved quality of generated synthetic food images compared with the baseline. Finally, we demonstrate the great potential of improving the performance of downstream tasks, such as food image classification, by including high-quality synthetic training samples in the data augmentation.
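The patch-based strategy the abstract describes, cropping fixed-size patches from high-resolution images instead of downsampling them, can be sketched as below. This is an illustrative sketch only, not the authors' code; the function name and patch size are assumptions.

```python
import numpy as np

def crop_patches(image, patch_size, n_patches, rng=None):
    """Randomly crop square patches from a high-resolution image (H, W, C).

    Cropping patches instead of downsampling the whole image keeps fine
    detail at the GAN's native training resolution.
    """
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    if h < patch_size or w < patch_size:
        raise ValueError("image smaller than patch size")
    patches = []
    for _ in range(n_patches):
        top = int(rng.integers(0, h - patch_size + 1))
        left = int(rng.integers(0, w - patch_size + 1))
        patches.append(image[top:top + patch_size, left:left + patch_size])
    return np.stack(patches)

# Example: 8 patches of 256x256 from a 768x1024 image
img = np.zeros((768, 1024, 3), dtype=np.uint8)
batch = crop_patches(img, 256, 8)
```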
Citations: 0
Importance of OSINT/SOCMINT for modern disaster management evaluation - Australia, Haiti, Japan
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-354
Nazneen Mansoor, Klaus Schwarz, Reiner Creutzburg
Open-Source Intelligence (OSINT) and Social Media Intelligence (SOCMINT) are becoming increasingly popular with investigative and government agencies, intelligence services, media companies, and corporations. These OSINT and SOCMINT technologies use sophisticated techniques and special tools to efficiently analyze the continually growing sources of information. There is a great need for training and further education in the OSINT field worldwide. This report describes the importance of open-source and social media intelligence for evaluating disaster management. It also gives an overview of the government work in Australia, Haiti, and Japan on disaster management using various OSINT tools and platforms. Thus, decision support for using OSINT and SOCMINT tools is given, and the necessary training needs for investigators can be better estimated.
Citations: 0
Am I safe? A preliminary examination of how everyday people interpret covid data visualizations
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.10.hvei-251
Bernice Rogowitz, Paul Borrel
During these past years, international COVID data have been collected by several reputable organizations and made available to the worldwide community. This has resulted in a wellspring of different visualizations. Many different measures can be selected (e.g., cases, deaths, hospitalizations). And for each measure, designers and policy makers can make a myriad of different choices of how to represent the data. Data from individual countries may be presented on linear or log scales, daily, weekly, or cumulative, alone or in the context of other countries, scaled to a common grid, or scaled to their own range, raw or per capita, etc. It is well known that the data representation can influence the interpretation of data. But, what visual features in these different representations affect our judgments? To explore this idea, we conducted an experiment where we asked participants to look at time-series data plots and assess how safe they would feel if they were traveling to one of the countries represented, and how confident they are of their judgment. Observers rated 48 visualizations of the same data, rendered differently along 6 controlled dimensions. Our initial results provide insight into how characteristics of the visual representation affect human judgments of time series data. We also discuss how these results could impact how public policy and news organizations choose to represent data to the public.
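The representational choices the abstract enumerates (daily vs. cumulative, raw vs. per capita, linear vs. log scale) can all be derived from the same underlying series. A minimal sketch of these transforms, with illustrative names and numbers not taken from the study:

```python
import numpy as np

def case_views(daily_cases, population):
    """Derive several common representations of a daily case series.

    Each choice (cumulative vs daily, raw vs per-capita, linear vs log)
    yields a different-looking plot of the same underlying data.
    """
    daily_cases = np.asarray(daily_cases, dtype=float)
    cumulative = np.cumsum(daily_cases)
    per_100k = daily_cases / population * 100_000
    # a log scale is typically applied to the cumulative curve; add 1
    # to avoid log(0) on days before the first case
    log_cumulative = np.log10(cumulative + 1)
    return {"daily": daily_cases, "cumulative": cumulative,
            "per_100k": per_100k, "log_cumulative": log_cumulative}

# Toy five-day series for a population of one million
views = case_views([0, 10, 20, 40, 80], population=1_000_000)
```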
Citations: 0
Improvement of vehicles accident detection using object tracking with U-Net
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-363
Kirsnaragavan Arudpiragasam, Taraka Rama Krishna Kanth Kannuri, Klaus Schwarz, Michael Hartmann, Reiner Creutzburg
Over the past decade, researchers have suggested many methods to find anomalies. However, none of these studies has applied frame reconstruction with Object Tracking (OT) to detect anomalies. Therefore, this study focuses on road accident detection using a combination of OT and U-Net, together with variants such as skip, skip-residual, and attention connections. The U-Net algorithm is developed for reconstructing the images using the UCF-Crime dataset. Furthermore, YOLOv4 and DeepSort are used for object detection and tracking within the frames. Finally, the Mahalanobis distance and the reconstruction error (RCE) are determined using a Kalman filter and the U-Net model.
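The two scores named above, a reconstruction error against the U-Net output and a Mahalanobis distance against a Kalman-filter-predicted track state, can be sketched numerically as follows. This is an illustrative sketch under assumed shapes, not the authors' implementation:

```python
import numpy as np

def reconstruction_error(frame, reconstructed):
    """Mean squared error between a frame and its U-Net reconstruction."""
    diff = np.asarray(frame, dtype=float) - np.asarray(reconstructed, dtype=float)
    return float(np.mean(diff ** 2))

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of an observation x from a track's state
    distribution (mean, covariance), e.g. as predicted by a Kalman filter."""
    d = np.asarray(x, dtype=float) - np.asarray(mean, dtype=float)
    return float(np.sqrt(d @ np.linalg.inv(cov) @ d))

# Toy example: a detection far from the predicted track position;
# with an identity covariance this reduces to the Euclidean distance
pred_mean = np.array([0.0, 0.0])
pred_cov = np.eye(2)
dist = mahalanobis(np.array([3.0, 4.0]), pred_mean, pred_cov)  # 5.0
rce = reconstruction_error(np.zeros((2, 2)), np.ones((2, 2)))  # 1.0
```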
Citations: 0
Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.10.hvei-a10
Abstract The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists, from all over the world, for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.
{"title":"Human Vision and Electronic Imaging 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.10.hvei-a10","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-a10","url":null,"abstract":"Abstract The conference on Human Vision and Electronic Imaging explores the role of human perception and cognition in the design, analysis, and use of electronic media systems. Over the years, it has brought together researchers, technologists, and artists, from all over the world, for a rich and lively exchange of ideas. We believe that understanding the human observer is fundamental to the advancement of electronic media systems, and that advances in these systems and applications drive new research into the perception and cognition of the human observer. Every year, we introduce new topics through our Special Sessions, centered on areas driving innovation at the intersection of perception and emerging media technologies.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695218","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.10.hvei-220
Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler
The method of loci (memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular use of the method of loci is for language learning, in which the method can help long-term memory of vocabulary by allowing users to associate location and other spatial information with particular words/concepts, thus making use of spatial memory to assist memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even better memory enhancement due to their superior visualization abilities. However, a direct comparison of the two techniques in terms of language-learning enhancement has not yet been investigated. In this presentation, we present the results of a study designed to compare AR and VR when using the method of loci for learning vocabulary from a second language.
{"title":"Comparison of AR and VR memory palace quality in second-language vocabulary acquisition (Invited)","authors":"Nicko R. Caluya, Xiaoyang Tian, Damon M. Chandler","doi":"10.2352/ei.2023.35.10.hvei-220","DOIUrl":"https://doi.org/10.2352/ei.2023.35.10.hvei-220","url":null,"abstract":"The method of loci (memory palace technique) is a learning strategy that uses visualizations of spatial environments to enhance memory. One particularly popular use of the method of loci is for language learning, in which the method can help long-term memory of vocabulary by allowing users to associate location and other spatial information with particular words/concepts, thus making use of spatial memory to assist memory typically associated with language. Augmented reality (AR) and virtual reality (VR) have been shown to potentially provide even better memory enhancement due to their superior visualization abilities. However, a direct comparison of the two techniques in terms of language-learning enhancement has not yet been investigated. In this presentation, we present the results of a study designed to compare AR and VR when using the method of loci for learning vocabulary from a second language.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695028","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Journal: IS&T International Symposium on Electronic Imaging