
Latest publications from the IS&T International Symposium on Electronic Imaging

Egocentric Boundaries on Distinguishing Colliding and Non-Colliding Pedestrians while Walking in a Virtual Environment.
Pub Date : 2024-01-01 DOI: 10.2352/EI.2024.36.11.HVEI-214
Alex D Hwang, Jaehyun Jung, Alex Bowers, Eli Peli

Avoiding person-to-person collisions is critical for patients with visual field loss. Any intervention claiming to improve the safety of such patients should empirically demonstrate its efficacy. To design a VR mobility testing platform presenting multiple pedestrians, the distinction between colliding and non-colliding pedestrians must be clearly defined. We measured nine normally sighted subjects' collision envelopes (CE; an egocentric boundary distinguishing collision from non-collision) and found that the CE changes with the approaching pedestrian's bearing angle and speed. For person-to-person collision events on the VR mobility testing platform, non-colliding pedestrians should not invade the CE.
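The CE decision can be sketched as a simple egocentric geometry test. Everything below is hypothetical illustration, not the authors' model: the paper determines the envelope empirically, establishing only that it varies with the approaching pedestrian's bearing angle and speed, whereas the linear radius formula, parameter values, and function names here are invented for the sketch.

```python
import math

def collision_envelope_radius(bearing_deg, speed_mps,
                              base_radius=0.5, bearing_gain=0.01, speed_gain=0.2):
    """Hypothetical CE radius in metres: grows with the pedestrian's bearing
    angle and approach speed. The paper measures this relationship empirically;
    the linear form and coefficients here are purely illustrative."""
    return base_radius + bearing_gain * abs(bearing_deg) + speed_gain * speed_mps

def violates_envelope(rel_x, rel_y, speed_mps):
    """Return True if a pedestrian at (rel_x, rel_y) metres in the walker's
    egocentric frame (y axis = walking direction) lies inside the envelope."""
    distance = math.hypot(rel_x, rel_y)
    bearing = math.degrees(math.atan2(rel_x, rel_y))  # 0 deg = straight ahead
    return distance < collision_envelope_radius(bearing, speed_mps)
```

A scripted "non-colliding" pedestrian in the testing platform would be one whose whole trajectory keeps `violates_envelope` false.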

Citations: 0
34th Annual Stereoscopic Displays and Applications Conference - Introduction
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.2.sda-b02
Andrew J. Woods, Nicolas S. Holliman, Takashi Kawai, Bjorn Sommer
This manuscript serves as an introduction to the conference proceedings for the 34th annual Stereoscopic Displays and Applications conference and also provides an overview of the conference.
Citations: 0
Wearable multispectral imaging and telemetry at edge
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.7.image-278
Yang Cai, Mel Siegel
We present a head-mounted holographic display system for thermographic image overlay, biometric sensing, and wireless telemetry. The system is lightweight and reconfigurable for multiple field applications, including object contour detection and enhancement, breathing rate detection, and telemetry over a mobile phone for peer-to-peer communication and an incident command dashboard. Due to the limited computing power of an embedded system, we developed a lightweight image processing algorithm for edge detection and breathing rate detection, as well as an image compression codec. The system can be integrated into a helmet or personal protective equipment such as a face shield or goggles. It can be applied to firefighting, medical emergency response, and other first-response operations. Finally, we present a case study of "Cold Trailing" for forest fire containment.
Citations: 0
Stereoscopic Displays and Applications XXXIV Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.2.sda-a02
Abstract The Stereoscopic Displays and Applications Conference (SD&A) focuses on developments covering the entire stereoscopic 3D imaging pipeline from capture, processing, and display to perception. The conference brings together practitioners and researchers from industry and academia to facilitate an exchange of current information on stereoscopic imaging topics. The highly popular conference demonstration session provides authors with a perfect additional opportunity to showcase their work. The long-running SD&A 3D Theater Session provides conference attendees with a wonderful opportunity to see how 3D content is being created and exhibited around the world. Publishing your work at SD&A offers excellent exposure—across all publication outlets, SD&A has the highest proportion of papers in the top 100 cited papers in the stereoscopic imaging field (Google Scholar, May 2013).
Citations: 0
Improving the performance of web-streaming by super-resolution upscaling techniques
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.3.mobmu-351
Yuriy Reznik, Nabajeet Barman
In recent years, we have seen significant progress in advanced image upscaling techniques, sometimes called super-resolution, ML-based, or AI-based upscaling. Such algorithms are now available not only in the form of specialized software but also in drivers and SDKs supplied with modern graphics cards; the upscaling functions in the NVIDIA Maxine SDK are one recent example. However, to take advantage of this functionality in video streaming applications, one needs to (a) quantify the impacts of super-resolution techniques on perceived visual quality, (b) implement video rendering incorporating super-resolution upscaling techniques, and (c) implement new bitrate+resolution adaptation algorithms in streaming players, enabling such players to deliver better quality of experience, better efficiency (e.g., reduced bandwidth usage), or both. Towards this end, in this paper we propose several techniques that may be helpful to the implementation community. First, we offer a model quantifying the impacts of super-resolution upscaling on perceived quality. Our model is based on the Westerink-Roufs model connecting the true resolution of images/videos to perceived quality, with several additional parameters added, allowing it to be tuned to specific implementations of super-resolution techniques. We verify this model using several recent datasets, including MOS scores measured for several conventional upscaling and super-resolution algorithms. Then, we propose an improved adaptation logic for video streaming players, considering video bitrates, encoded video resolutions, player size, and the upscaling method. This improved logic relies on our modified Westerink-Roufs model to predict perceived quality and suggests choices of renditions that would deliver the best quality for given display and upscaling method characteristics. Finally, we study the impacts of the proposed techniques and show that they can deliver practically appreciable results in terms of expected QoE improvements and bandwidth savings.
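The rendition-selection idea in point (c) can be illustrated with a minimal sketch. The saturating quality function and all parameter values below are placeholders, not the authors' fitted Westerink-Roufs model; the point is only that a player aware of the display size and of the upscaler's effective-resolution gain can rank renditions by predicted quality rather than by bitrate alone.

```python
def perceived_quality(encoded_height, display_height, sr_gain=1.4,
                      half_sat=720.0, slope=3.0):
    """Saturating resolution-to-quality curve in (0, 1), loosely in the spirit
    of a Westerink-Roufs-style model; the constants are illustrative only.
    sr_gain > 1 models the effective-resolution boost from super-resolution
    upscaling; sr_gain = 1 models a plain bilinear/bicubic upscaler."""
    effective = min(encoded_height * sr_gain, display_height)
    return 1.0 / (1.0 + (half_sat / effective) ** slope)

def pick_rendition(renditions, bandwidth_bps, display_height, sr_gain=1.4):
    """From (bitrate, encoded_height) pairs, choose the rendition that fits
    the available bandwidth and maximizes predicted perceived quality."""
    feasible = [r for r in renditions if r[0] <= bandwidth_bps]
    return max(feasible,
               key=lambda r: perceived_quality(r[1], display_height, sr_gain))
```

Because the quality curve saturates near the display resolution, a player with a strong upscaler can predict when a cheaper rendition already delivers near-maximal quality, which is where the bandwidth savings come from.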
Citations: 0
LECA: A learned approach for efficient cover-agnostic watermarking
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.4.mwsf-376
Xiyang Luo, Michael Goebel, Elnaz Barshan, Feng Yang
In this work, we present an efficient multi-bit deep image watermarking method that is cover-agnostic yet robust to geometric distortions such as translation and scaling, as well as other distortions such as JPEG compression and noise. Our design consists of a lightweight watermark encoder jointly trained with a deep neural network based decoder. Such a design allows us to retain the efficiency of the encoder while fully utilizing the power of a deep neural network. Moreover, the watermark encoder is independent of the image content, making the generated watermarks universally applicable to different cover images and allowing users to pre-generate them for further efficiency. To offer robustness towards geometric transformations, we introduce a learned model for predicting the scale and offset of watermarked images. Experiments show that our method outperforms comparably efficient watermarking methods by a large margin.
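The cover-agnostic property, a watermark generated without looking at the cover image, can be illustrated with a classical additive spread-spectrum scheme. This is not LECA's learned encoder/decoder: the key-seeded patterns, the embedding strength, and the correlation decoder below are standard textbook components used only to show why content independence allows pre-generating watermarks.

```python
import numpy as np

def make_watermark(bits, shape, key=42, strength=2.0):
    """Content-independent watermark: each bit flips the sign of a fixed
    pseudo-random pattern derived from a secret key. Because the result does
    not depend on the cover image, it can be pre-generated and reused."""
    rng = np.random.default_rng(key)
    patterns = rng.standard_normal((len(bits),) + shape)
    signs = np.where(np.asarray(bits) > 0, 1.0, -1.0)
    return strength * np.tensordot(signs, patterns, axes=1), patterns

def embed(cover, bits, key=42, strength=2.0):
    """Add the pre-generated watermark to any cover of matching shape."""
    wm, _ = make_watermark(bits, cover.shape, key, strength)
    return cover + wm

def decode(image, n_bits, shape, key=42):
    """Correlation decoder: regenerate the key-seeded patterns and read each
    bit off the sign of the correlation with the received image."""
    rng = np.random.default_rng(key)
    patterns = rng.standard_normal((n_bits,) + shape)
    return [1 if float((image * p).sum()) > 0 else 0 for p in patterns]
```

LECA replaces the fixed patterns and the correlation decoder with learned components, which is what buys robustness to JPEG compression and geometric distortions that this simple scheme lacks.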
Citations: 0
Optimization of ISP parameters for low light conditions using a non-linear reference based approach
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.8.iqsp-314
Shubham Ravindra Alai, Radhesh Bhat
An image signal processor (ISP) transforms a sensor's raw image into an RGB image for use in computer or human vision applications. An ISP is composed of various functional blocks, and each block contributes uniquely to making the image best suited to the target application. Each block has several hyperparameters, and each hyperparameter needs to be tuned (usually done manually by experts in an iterative manner) to achieve the target image quality. The tuning becomes challenging and increasingly iterative in low to very low light conditions, where the amount of detail preserved by the sensor is limited and ISP parameters have to be tuned to balance the amount of detail recovered, noise, sharpness, contrast, etc. To extract maximum information from the image, it is usually necessary to increase the ISO gain, which in turn impacts noise and color accuracy. Also, the number of ISP parameters that need to be tuned is huge, and it becomes impractical to consider all of them in such low light conditions to arrive at the best possible settings. To tackle the challenges of manual tuning, especially for low light conditions, we have implemented an automatic hyperparameter optimization model that can tune low-lux images so that they are perceptually equivalent to high-lux images. The experiments for IQ validation are carried out under challenging low light conditions and scenarios using Qualcomm's Spectra ISP simulator with a 13MP OV sensor, and the performance of automatically tuned IQ is compared with manually tuned IQ for human vision use-cases. With experimental results, we show that, with the help of evolutionary algorithms and local optimization, it is possible to optimize the ISP parameters such that, without using any KPI metrics, a low-lux image captured with a different ISP (test image) can be perceptually improved to be equivalent to a high-lux or well-tuned (reference) image.
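The reference-based tuning loop can be sketched with a toy two-parameter "ISP" and a (1+1) evolution strategy. The real system optimizes many ISP blocks against a perceptual comparison with a high-lux reference; everything below (the gain/gamma pipeline, the MSE loss, the mutation scale) is an illustrative stand-in, not the Spectra simulator or the paper's optimizer.

```python
import random

def toy_isp(raw, gain, gamma):
    """Stand-in for an ISP: digital gain followed by a gamma curve, on pixel
    values in [0, 1]. A real pipeline has many blocks; two parameters suffice
    to illustrate the tuning loop."""
    return [min(1.0, p * gain) ** gamma for p in raw]

def loss(img, ref):
    """Stand-in for a perceptual metric: mean squared error vs. reference."""
    return sum((a - b) ** 2 for a, b in zip(img, ref)) / len(img)

def tune(raw, ref, iters=2000, seed=1):
    """(1+1) evolution strategy: mutate the current best parameter set and
    keep the mutation only if it reduces the loss against the reference."""
    rng = random.Random(seed)
    best = {"gain": 1.0, "gamma": 1.0}
    best_loss = loss(toy_isp(raw, **best), ref)
    for _ in range(iters):
        cand = {k: max(0.05, v + rng.gauss(0, 0.1)) for k, v in best.items()}
        cand_loss = loss(toy_isp(raw, **cand), ref)
        if cand_loss < best_loss:
            best, best_loss = cand, cand_loss
    return best, best_loss
```

Note that the loop never inspects KPI-style metrics of the test image itself; it only compares the processed output against the reference, which mirrors the reference-based formulation in the abstract.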
Citations: 0
High Performance Computing for Imaging 2023 Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.11.hpci-a11
Abstract In recent years, the rapid development of imaging systems and the growth of compute-intensive imaging algorithms have led to a strong demand for High Performance Computing (HPC) for efficient image processing. However, the two communities, imaging and HPC, have largely remained separate, with little synergy. This conference focuses on research topics that converge HPC and imaging research with an emphasis on advanced HPC facilities and techniques for imaging systems/algorithms and applications. In addition, the conference provides a unique platform that brings imaging and HPC people together and discusses emerging research topics and techniques that benefit both the HPC and imaging community. Papers are solicited on all aspects of research, development, and application of high-performance computing or efficient computing algorithms and systems for imaging applications.
Citations: 0
Imaging and Multimedia Analytics at the Edge 2023 Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.7.image-a07
Abstract Recent progress at the intersection of deep learning and imaging has created a new wave of interest in imaging and multimedia analytics topics, from social media sharing to augmented reality, from food and nutrition to health surveillance, from remote sensing and agriculture to wildlife and environment monitoring. Compared to many subjects in traditional imaging, these topics are more multi-disciplinary in nature. This conference will provide a forum for researchers and engineers from various related areas, both academic and industrial, to exchange ideas and share research results in this rapidly evolving field.
Citations: 0
Imaging Sensors and Systems 2023 Conference Overview and Papers Program
Pub Date : 2023-01-16 DOI: 10.2352/ei.2023.35.6.iss-a06
Abstract Solid state optical sensors and solid state cameras have established themselves as the imaging systems of choice for many demanding professional applications such as automotive, space, medical, scientific and industrial applications. The advantages of low-power, low-noise, high-resolution, high-geometric fidelity, broad spectral sensitivity, and extremely high quantum efficiency have led to a number of revolutionary uses. ISS focuses on image sensing for consumer, industrial, medical, and scientific applications, as well as embedded image processing, and pipeline tuning for these camera systems. This conference will serve to bring together researchers, scientists, and engineers working in these fields, and provides the opportunity for quick publication of their work. Topics can include, but are not limited to, research and applications in image sensors and detectors, camera/sensor characterization, ISP pipelines and tuning, image artifact correction and removal, image reconstruction, color calibration, image enhancement, HDR imaging, light-field imaging, multi-frame processing, computational photography, 3D imaging, 360/cinematic VR cameras, camera image quality evaluation and metrics, novel imaging applications, imaging system design, and deep learning applications in imaging.
{"title":"Imaging Sensors and Systems 2023 Conference Overview and Papers Program","authors":"","doi":"10.2352/ei.2023.35.6.iss-a06","DOIUrl":"https://doi.org/10.2352/ei.2023.35.6.iss-a06","url":null,"abstract":"Abstract Solid state optical sensors and solid state cameras have established themselves as the imaging systems of choice for many demanding professional applications such as automotive, space, medical, scientific and industrial applications. The advantages of low-power, low-noise, high-resolution, high-geometric fidelity, broad spectral sensitivity, and extremely high quantum efficiency have led to a number of revolutionary uses. ISS focuses on image sensing for consumer, industrial, medical, and scientific applications, as well as embedded image processing, and pipeline tuning for these camera systems. This conference will serve to bring together researchers, scientists, and engineers working in these fields, and provides the opportunity for quick publication of their work. Topics can include, but are not limited to, research and applications in image sensors and detectors, camera/sensor characterization, ISP pipelines and tuning, image artifact correction and removal, image reconstruction, color calibration, image enhancement, HDR imaging, light-field imaging, multi-frame processing, computational photography, 3D imaging, 360/cinematic VR cameras, camera image quality evaluation and metrics, novel imaging applications, imaging system design, and deep learning applications in imaging.","PeriodicalId":73514,"journal":{"name":"IS&T International Symposium on Electronic Imaging","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal 
Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135695212","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Cited: 0
Journal: IS&T International Symposium on Electronic Imaging