
IS&T International Symposium on Electronic Imaging: Latest Publications

Computational Imaging XXI Conference Overview and Papers Program
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.14.coimg-a14
Abstract: More than ever before, computers and computation are critical to the image formation process. Across diverse applications and fields, remarkably similar imaging problems appear, requiring sophisticated mathematical, statistical, and algorithmic tools. This conference focuses on imaging as a marriage of computation with physical devices. It emphasizes the interplay between mathematical theory, physical models, and computational algorithms that enable effective current and future imaging systems. Contributions to the conference are solicited on topics ranging from fundamental theoretical advances to detailed system-level implementations and case studies.
Citations: 0
Color Imaging XXVIII: Displaying, Processing, Hardcopy, and Applications Conference Overview and Papers Program
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.15.color-a15
Abstract: Color imaging has historically been treated as a phenomenon sufficiently described by three independent parameters. Recent advances in computational resources and in the understanding of the human aspects are leading to new approaches that extend the purely metrological view of color towards a perceptual approach describing the appearance of objects, documents, and displays. Part of this perceptual view is the incorporation of spatial aspects, adaptive color processing based on image content, and the automation of color tasks, to name a few. This dynamic nature applies to all output modalities, including hardcopy devices, but to an even larger extent to soft-copy displays with their even larger options of dynamic processing. Spatially adaptive gamut and tone mapping, dynamic contrast, and color management continue to support the unprecedented development of display hardware covering everything from mobile displays to standard monitors, and all the way to large-size screens and emerging technologies. The scope of inquiry is also broadened by the desire to match not only color, but the complete appearance perceived by the user. This conference provides an opportunity to present, to interact, and to learn about the most recent developments in color imaging and material appearance research, technologies, and applications. The focus of the conference is on basic color research and testing, color image input, dynamic color image output and rendering, color image automation, emphasizing color in context and color in images, and reproduction of images across local and remote devices. The conference also covers software, media, and systems related to color and material appearance. Special attention is given to applications and requirements created by and for multidisciplinary fields involving color and/or vision.
Citations: 0
Image Quality and System Performance XX Conference Overview and Papers Program
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.8.iqsp-a08
Abstract: We live in a visual world. The perceived quality of images is of crucial importance in industrial, medical, and entertainment application environments. Developments in camera sensors, image processing, 3D imaging, display technology, and digital printing are enabling new or enhanced possibilities for creating and conveying visual content that informs or entertains. Wireless networks and mobile devices expand the ways to share imagery, and autonomous vehicles bring image processing into new aspects of society. The power of imaging rests directly on the visual quality of the images and the performance of the systems that produce them. As the images are generally intended to be viewed by humans, a deep understanding of human visual perception is key to the effective assessment of image quality.
Citations: 0
Media Watermarking, Security, and Forensics 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.4.mwsf-a04
Abstract: The ease of capturing, manipulating, distributing, and consuming digital media (e.g., images, audio, video, graphics, and text) has enabled new applications and brought a number of important security challenges to the forefront. These challenges have prompted significant research and development in the areas of digital watermarking, steganography, data hiding, forensics, deepfakes, media identification, biometrics, and encryption to protect owners’ rights, establish provenance and veracity of content, and to preserve privacy. Research results in these areas have been translated into new paradigms and applications for monetizing media while maintaining ownership rights, and into new biometric and forensic identification techniques that offer novel methods for ensuring privacy. The Media Watermarking, Security, and Forensics Conference is a premier destination for disseminating high-quality, cutting-edge research in these areas. The conference provides an excellent venue for researchers and practitioners to present their innovative work as well as to keep abreast of the latest developments in watermarking, security, and forensics. Early results and fresh ideas are particularly encouraged and supported by the conference review format: only a structured abstract describing the work in progress and preliminary results is initially required, and the full paper is requested just before the conference. A strong focus on how research results are applied by industry, in practice, also gives the conference its unique flavor.
Citations: 0
Visualization and Data Analysis 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.1.vda-a01
Abstract: The Conference on Visualization and Data Analysis (VDA) 2023 covers all research, development, and application aspects of data visualization and visual analytics. Since the first VDA conference was held in 1994, the annual event has grown steadily into a major venue for visualization researchers and practitioners from around the world to present their work and share their experiences. We invite you to participate by submitting your original research as a full paper for an oral or interactive (poster) presentation, and by attending VDA in the upcoming year.
Citations: 0
Self-supervised visual representation learning on food images
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.7.image-269
Andrew W. Peng, Jiangpeng He, Fengqing Zhu
Food image classification is the groundwork for image-based dietary assessment, which is the process of monitoring what kinds of food and how much energy is consumed using captured food or eating scene images. Existing deep learning based methods learn the visual representation for food classification based on human annotation of each food image. However, most food images captured from real life are obtained without labels, requiring human annotation to train deep learning based methods. This approach is not feasible for real-world deployment due to high costs. To make use of the vast amount of unlabeled images, many existing works focus on unsupervised or self-supervised learning to learn the visual representation directly from unlabeled data. However, none of these existing works focuses on food images, which are more challenging than general objects due to their high inter-class similarity and intra-class variance. In this paper, we focus on two items: the comparison of existing models and the development of an effective self-supervised learning model for food image classification. Specifically, we first compare the performance of existing state-of-the-art self-supervised learning models, including SimSiam, SimCLR, SwAV, BYOL, MoCo, and the Rotation pretext task, on food images. The experiments are conducted on the Food-101 dataset, which contains 101 different classes of foods with 1,000 images in each class. Next, we analyze the unique features of each model and compare their performance on food images to identify the key factors in each model that can help improve the accuracy. Finally, we propose a new model for unsupervised visual representation learning on food images for the classification task.
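To make the compared objectives concrete, below is a minimal sketch of the SimSiam objective, one of the self-supervised models benchmarked in this paper, written in PyTorch. The backbone choice, MLP sizes, and batch shapes are illustrative assumptions, not the authors' implementation.

```python
# Minimal SimSiam-style sketch (illustrative; not the authors' code).
# Two augmented views of the same unlabeled image are encoded; a predictor
# on one branch is matched against a stop-gradient target from the other.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision

class SimSiam(nn.Module):
    def __init__(self, dim=2048, pred_dim=512):
        super().__init__()
        backbone = torchvision.models.resnet50(weights=None)
        backbone.fc = nn.Identity()          # drop the classification head
        self.encoder = backbone              # image -> 2048-d feature
        self.projector = nn.Sequential(      # projection MLP (shortened here)
            nn.Linear(2048, dim), nn.BatchNorm1d(dim), nn.ReLU(inplace=True),
            nn.Linear(dim, dim), nn.BatchNorm1d(dim))
        self.predictor = nn.Sequential(      # prediction MLP
            nn.Linear(dim, pred_dim), nn.BatchNorm1d(pred_dim), nn.ReLU(inplace=True),
            nn.Linear(pred_dim, dim))

    def forward(self, x1, x2):
        z1 = self.projector(self.encoder(x1))
        z2 = self.projector(self.encoder(x2))
        p1, p2 = self.predictor(z1), self.predictor(z2)
        return p1, p2, z1.detach(), z2.detach()   # stop-gradient on targets

def simsiam_loss(p1, p2, z1, z2):
    # Symmetrized negative cosine similarity between predictions and targets.
    return -(F.cosine_similarity(p1, z2).mean() +
             F.cosine_similarity(p2, z1).mean()) / 2

model = SimSiam()
x1 = torch.randn(8, 3, 224, 224)   # augmented view 1 of a food-image batch
x2 = torch.randn(8, 3, 224, 224)   # augmented view 2 of the same batch
p1, p2, z1, z2 = model(x1, x2)
simsiam_loss(p1, p2, z1, z2).backward()
```

Representations learned this way are typically compared by freezing the encoder and training a linear classifier on the Food-101 labels.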
Citations: 0
iPhone12 imagery in scene-referred computer graphics pipelines
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-350
Eberhard Hasche, Oliver Karaschewski, Reiner Creutzburg
With the release of the Apple iPhone 12 Pro in 2020, various features were integrated that make it attractive as a recording device for scene-referred computer graphics pipelines. The captured Apple RAW images have a much higher dynamic range than standard 8-bit images. Since a scene-based workflow naturally has an extended dynamic range (HDR), the Apple RAW recordings can be well integrated. Another feature is the Dolby Vision HDR recordings, which are primarily adapted to the respective display of the source device. However, these recordings can also be used in the CG workflow since at least the basic HLG transfer function is integrated. The iPhone 12 Pro's two laser scanners can produce complex 3D models and textures for the CG pipeline. On the one hand, there is a scanner on the back that is primarily intended for capturing the surroundings for AR purposes. On the other hand, there is another scanner on the front for facial recognition. In addition, external software can read out the scanning data for integration in 3D applications. To correctly integrate the iPhone 12 Pro Apple RAW data into a scene-referred workflow, two command-line-based software solutions can be used, among others: dcraw and rawtoaces. Dcraw offers the possibility to export RAW images directly to ACES2065-1. Unfortunately, the modifiers for the four RAW color channels to address the different white points are unavailable. Experimental test series are performed under controlled studio conditions to retrieve these modifier values. Subsequently, these RAW-derived images are imported into the computer graphics pipelines of various CG software applications (SideFX Houdini, The Foundry Nuke, Autodesk Maya) with the help of OpenColorIO (OCIO) and ACES. Finally, it is determined whether they can improve the overall color quality. Dolby Vision content can be captured using the native Camera app on an iPhone 12. It captures HDR video using Dolby Vision Profile 8.4, which contains a cross-compatible HLG Rec.2020 base layer and Dolby Vision dynamic metadata. Only the HLG base layer is passed on when exporting the Dolby Vision iPhone video without the corresponding metadata. It is investigated whether iPhone 12 videos transferred this way can increase the quality of the computer graphics pipeline. The 3D Scanner App software controls the two integrated laser scanners and provides a large number of export formats. Therefore, integrating the OBJ 3D data into industry-standard software like Maya and Houdini is unproblematic. Unfortunately, the models and the corresponding UV maps are more or less only machine-readable, so manually improving the 3D geometry (filling holes, refining the geometry, setting up new topology) is cumbersome and time-consuming. It is investigated whether standard techniques such as using the ZRemesher in ZBrush, applying texture and UV projection in Maya, and VEX snippets in Houdini can prepare these models and textures for manual editing.
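As a concrete illustration of the RAW-export step described above, the following is a small sketch that drives dcraw from Python. It assumes a dcraw build whose output-colorspace option -o 6 selects ACES; the directory layout and flag set are illustrative assumptions, not the authors' exact pipeline.

```python
# Hedged sketch: batch-decode Apple RAW (DNG) captures to 16-bit linear
# TIFFs in the ACES primaries using dcraw. Assumes a dcraw build with
# "-o 6" (ACES) support; the "captures" directory is illustrative.
import subprocess
from pathlib import Path

def raw_to_aces_tiff(dng_path: Path) -> Path:
    """Decode one DNG next to itself as a linear ACES TIFF."""
    subprocess.run(
        ["dcraw",
         "-w",       # use the white balance recorded by the camera
         "-4",       # 16-bit linear output (no gamma or auto-brightness)
         "-T",       # write a TIFF instead of a PPM
         "-o", "6",  # output colorspace 6 = ACES
         str(dng_path)],
        check=True)
    return dng_path.with_suffix(".tiff")

for dng in sorted(Path("captures").glob("*.DNG")):
    print("wrote", raw_to_aces_tiff(dng))
```

When the camera multipliers are not usable, experimentally retrieved per-channel modifiers of the kind the paper derives could instead be passed explicitly through dcraw's -r option.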
Citations: 0
Mobile incident command dashboard (MIC-D)
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-358
Yang Cai, Mel Siegel
Incident Command Dashboards (ICDs) play an essential role in Emergency Support Functions (ESFs). They are centralized and aggregate a massive amount of live data. In this project, we explore a decentralized mobile incident commanding dashboard (MIC-D) with an improved mobile augmented reality (AR) user interface (UI) that can access and display multimodal live IoT data streams on phones, tablets, and inexpensive HUDs on first responders' helmets. The new platform is designed to work in the field and to share live data streams among team members. It also enables users to view 3D LiDAR scans of the location, live thermal video, and vital-sign data on the 3D map. We have built a virtual medical helicopter communication center and tested it on launchpad-fire and remote fire-extinguishing scenarios. We have also tested the wildfire prevention scenario “Cold Trailing” in an outdoor environment.
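The abstract does not name a transport for the shared live streams; as one possible sketch, the snippet below uses MQTT (via the paho-mqtt package) to show how a decentralized dashboard node might publish its own vital-sign readings while subscribing to teammates' streams. The broker address and topic layout are assumptions for illustration only.

```python
# Hedged sketch of decentralized live-stream sharing for a MIC-D-style node.
# MQTT is an assumed transport (the paper does not specify one); written for
# paho-mqtt 1.x (2.x additionally takes mqtt.CallbackAPIVersion.VERSION1).
import json
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.local"     # hypothetical field-network broker
TEAM_TOPIC = "micd/team1/#"         # wildcard over every teammate's stream

def on_message(client, userdata, msg):
    # Each message carries one live reading (vital signs, scan URI, ...).
    print(msg.topic, json.loads(msg.payload))

client = mqtt.Client()
client.on_message = on_message
client.connect(BROKER, 1883)
client.subscribe(TEAM_TOPIC)
client.loop_start()                 # receive teammates' streams in background

# Publish this node's own vital-sign stream at the same time.
for _ in range(3):
    reading = {"heart_rate": 72, "spo2": 98, "t": time.time()}
    client.publish("micd/team1/responder7/vitals", json.dumps(reading))
    time.sleep(1)

client.loop_stop()
```

Because every node both publishes and subscribes, no single centralized dashboard has to aggregate the data, which matches the decentralized design described above.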
Citations: 0
Open-source Intelligence (OSINT) investigation in Facebook
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-357
Pranesh Kumar Narasimhan, Chinmay Bhosale, Muhammad Hasban Pervez, Najiba Zainab Naqvi, Mert Ilhan Ecevit, Klaus Schwarz, Reiner Creutzburg
Open Source Intelligence (OSINT) has come a long way, yet the field is still developing, and many investigations are still to come in the near future. The essential requirement for every OSINT investigation is valuable information from a good source. This paper discusses various tools and methodologies for Facebook data collection and analyzes part of the collected data. By the end of the paper, the reader will have a clear and detailed picture of the techniques and tools available for scraping data from the Facebook platform, and of the kinds of investigation and analysis the gathered data supports.
开源智能(OSINT)已经走了很长一段路,它仍在发展想法,在不久的将来还会有很多调查。所有OSINT调查的主要基本要求是来自良好来源的有价值的数据。本文讨论了与Facebook数据收集相关的各种工具和方法,并分析了部分收集到的数据。在论文结束时,读者将深入而清晰地了解可用的技术,工具和描述工具,这些工具用于从Facebook平台中抓取数据,以及收集到的数据可以做的调查和分析类型。
{"title":"Open-source Intelligence (OSINT) investigation in Facebook","authors":"Pranesh Kumar Narasimhan, Chinmay Bhosale, Muhammad Hasban Pervez, Najiba Zainab Naqvi, Mert Ilhan Ecevit, Klaus Schwarz, Reiner Creutzburg","doi":"10.2352/ei.2023.35.3.mobmu-357","DOIUrl":"https://doi.org/10.2352/ei.2023.35.3.mobmu-357","url":null,"abstract":"Open Source Intelligence (OSINT) has come a long way, and it is still developing ideas, and lots of investigations are yet to happen in the near future. The main essential requirement for all the OSINT investigations is the information that is valuable data from a good source. This paper discusses various tools and methodologies related to Facebook data collection and analyzes part of the collected data. At the end of the paper, the reader will get a deep and clear insight into the available techniques, tools, and descriptions about tools that are present to scrape the data out of the Facebook platform and the types of investigations and analyses that the gathered data can do.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135694713","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Autonomous Vehicles and Machines 2023 Conference Overview and Papers Program
Pub Date: 2023-01-16 DOI: 10.2352/ei.2023.35.16.avm-a16
Abstract: Advancements in sensing, computing, image processing, and computer vision technologies are enabling unprecedented growth and interest in autonomous vehicles and intelligent machines, from self-driving cars to unmanned drones, to personal service robots. These new capabilities have the potential to fundamentally change the way people live, work, commute, and connect with each other, and will undoubtedly provoke entirely new applications and commercial opportunities for generations to come. The main focus of AVM is perception. This begins with sensing. While imaging continues to be an essential emphasis in all EI conferences, AVM also embraces other sensing modalities important to autonomous navigation, including radar, LiDAR, and time of flight. Realization of autonomous systems also includes purpose-built processors, e.g., ISPs, vision processors, DNN accelerators, as well as core image processing and computer vision algorithms, system design and architecture, simulation, and image/video quality. AVM topics are at the intersection of these multi-disciplinary areas. AVM is the Perception Conference that bridges the imaging and vision communities, connecting the dots for the entire software and hardware stack for perception, helping people design globally optimized algorithms, processors, and systems for intelligent “eyes” for vehicles and machines.
Citations: 0