
Journal of Imaging Science and Technology — Latest Publications

Color Image Stitching Elimination Method based on Co-occurrence Matrix
IF 1 | CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-11-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.6.060502
Y. Su
Citations: 0
Development and Implementation of an Augmented Reality Thunderstorm Simulation for General Aviation Weather Theory Training
IF 1 | CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-11-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.6.060402
Kexin Wang, Jack Miller, Philippe Meister, Michael C. Dorneich, Lori Brown, Geoff Whitehurst, E. Winer
In 2021, there were 1,157 general aviation (GA) accidents, 210 of them fatal, making GA the deadliest civil aviation category. Research shows that accidents are partially caused by ineffective weather theory training. Current weather training in classrooms relies on 2D materials that students often find difficult to map onto a real 3D environment. To address these issues, Augmented Reality (AR) was utilized to provide 3D immersive content while running on commodity devices. However, mobile devices have limitations in rendering, camera tracking, and screen size, which make mobile-device-based AR especially challenging for complex visualization of weather phenomena. This paper presents research on how to address the technical challenges of developing and implementing a complex thunderstorm visualization in a marker-based mobile AR application. The development of the system and a technological evaluation of the application's rendering and tracking performance across different devices are presented.
Citations: 0
Digital Modeling on Large Kernel Metamaterial Neural Network
IF 1 | CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-11-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.6.060404
Quan Liu, Hanyu Zheng, Brandon T Swartz, Ho Hin Lee, Zuhayr Asad, Ivan Kravchenko, Jason G Valentine, Yuankai Huo

Deep neural networks (DNNs) are conventionally deployed on physical computational units (e.g., CPUs and GPUs). Such a design can lead to a heavy computational burden, significant latency, and intensive power consumption, which are critical limitations in applications such as the Internet of Things (IoT), edge computing, and drones. Recent advances in optical computational units (e.g., metamaterials) have shed light on energy-free and light-speed neural networks. However, the digital design of the metamaterial neural network (MNN) is fundamentally limited by physical constraints such as precision, noise, and bandwidth during fabrication. Moreover, the unique advantages of MNNs (e.g., light-speed computation) are not fully exploited by standard 3×3 convolution kernels. In this paper, we propose a novel large kernel metamaterial neural network (LMNN) that maximizes the digital capacity of the state-of-the-art (SOTA) MNN with model re-parametrization and network compression, while also considering the optical limitations explicitly. The new digital learning scheme can maximize the learning capacity of the MNN while modeling the physical restrictions of meta-optics. With the proposed LMNN, the computation cost of the convolutional front-end can be offloaded onto fabricated optical hardware. Experimental results on two publicly available datasets demonstrate that the optimized hybrid design improves classification accuracy while reducing computational latency. The development of the proposed LMNN is a promising step towards the ultimate goal of energy-free and light-speed AI.
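The re-parametrization idea behind the LMNN leans on a basic property of linear systems: with no nonlinearity in between, a stack of small convolutions collapses into a single convolution with a larger, composed kernel. A minimal 1D numpy sketch of that equivalence (illustrative only; the paper's optical model and kernel sizes are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(32)
k1 = rng.standard_normal(3)   # first small kernel
k2 = rng.standard_normal(3)   # second small kernel

# Two 3-tap convolutions applied in sequence...
stacked = np.convolve(np.convolve(signal, k1), k2)

# ...equal one pass with the composed 5-tap "large" kernel.
merged_kernel = np.convolve(k1, k2)
single = np.convolve(signal, merged_kernel)

assert np.allclose(stacked, single)
```

This is why, absent activations between layers, a learned stack can be folded into one large kernel at deployment time with no change in output.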

Citations: 0
Acquisition of Color Reproduction Technique based on Deep Learning Using a Database of Color-converted Images in the Printing Industry
CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050402
Ikumi Hirose, Ryosuke Yabe, Toshiyuki Inoue, Koushi Hashimoto, Yoshikatsu Arizono, Kazunori Harada, Vinh-Tiep Nguyen, Thanh Duc Ngo, Duy-Dinh Le, Norimichi Tsumura
Color-space conversion technology is important for outputting accurate colors on different devices. In particular, CMYK (Cyan, Magenta, Yellow, and Key plate) used by printers has a limited range of representable colors compared with the RGB (Red, Green, and Blue) used for normal images. This leads to the problem of color information being lost when printing. When an RGB image captured by a camera is printed as is, colors outside the CMYK gamut are degraded, and colors that differ significantly from the actual image may be output. Therefore, printers and other companies manually correct color tones before printing. This process is based on empirical know-how and human sensitivity and has not yet been automated. This study therefore aims to automate color correction in color-space conversion from RGB to CMYK. Specifically, we use machine learning on a large color-conversion database owned by printing companies, accumulated through past correction work, to learn the color-correction techniques of skilled workers. This reduces the burden of work previously done manually and increases efficiency. In addition, the machine can compensate for some of the empirical know-how, which is expected to simplify the transfer of skills. Quantitative and qualitative evaluation results show the effectiveness of the proposed method for automatic color correction.
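For context, the baseline conversion that such correction starts from can be as simple as the textbook RGB-to-CMYK formula below; it says nothing about gamut mapping or device profiles, which is precisely why skilled manual correction (and the learned model described above) is needed. A hedged sketch, not the paper's method:

```python
def rgb_to_cmyk(r, g, b):
    """Naive, device-independent RGB -> CMYK. All values in [0, 1].

    Ignores ICC profiles and gamut mapping entirely -- illustrative only.
    """
    k = 1.0 - max(r, g, b)          # key (black) from the brightest channel
    if k == 1.0:                    # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    c = (1.0 - r - k) / (1.0 - k)
    m = (1.0 - g - k) / (1.0 - k)
    y = (1.0 - b - k) / (1.0 - k)
    return c, m, y, k

# Pure red maps to full magenta + yellow, no cyan, no black.
print(rgb_to_cmyk(1.0, 0.0, 0.0))   # (0.0, 1.0, 1.0, 0.0)
```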
Citations: 0
Characterization of Wood Materials Using Perception-Related Image Statistics
CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050408
Jiří Filip, Veronika Vilímovská
An efficient computational characterization of real-world materials is one of the challenges in image understanding. Automatic assessment of materials with performance similar to a human observer usually relies on complicated image filtering derived from models of human perception. However, these models become too complicated when a real material is observed in the form of dynamic stimuli. This study tackles the challenge from the other side. First, we collected human ratings of the most common visual attributes for videos of wood samples and analyzed their relationship to selected image statistics. In our experiments on a set of sixty wood samples, we found that such image statistics can perform surprisingly well in discriminating individual samples, with reasonable correlation to human ratings. We also show that these statistics can be effective in discriminating images of the same material taken under different illumination and viewing conditions.
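The kind of compact descriptor that can be correlated with human ratings may be as simple as per-channel moment statistics. A minimal sketch (the study's exact statistics are not specified here, so the function name and feature set are illustrative assumptions):

```python
import numpy as np

def moment_features(image):
    """Per-channel mean, std, and skewness as a compact texture descriptor.

    `image`: H x W x C float array. Illustrative only -- not the
    perception-related statistics used in the paper.
    """
    feats = []
    for c in range(image.shape[2]):
        ch = image[..., c].ravel()
        mu, sigma = ch.mean(), ch.std()
        # Skewness; small epsilon guards the flat-channel case.
        skew = ((ch - mu) ** 3).mean() / (sigma ** 3 + 1e-12)
        feats.extend([mu, sigma, skew])
    return np.array(feats)
```

Samples can then be discriminated by distances between such feature vectors, e.g. with a nearest-neighbour rule.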
Citations: 0
Efficient Hyperspectral Data Processing using File Fragmentation
CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050403
C. Caruncho, P. J. Pardo, H. Cwierz
In this article, we present a method for processing hyperspectral data in an easy and quick manner. We explain how we split the hyperspectral cube into sections so it can be processed using fewer resources. We describe the processing, which includes extraction of the raw data along with white and black calibration data, calibration of the data, and application of the desired light source, color space, and gamma transformation. We then present built-in software, including an easy interactive Graphical User Interface (GUI), that allows fellow researchers to process hyperspectral images in a simple fashion.
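The described pipeline — white/black reference extraction, flat-field calibration, and chunked processing of the cube — can be sketched as follows. The function name and the chunking along the spatial axis are assumptions for illustration; the article's actual software is not reproduced:

```python
import numpy as np

def calibrate_reflectance(raw, white, dark, n_chunks=4):
    """Flat-field calibration of a hyperspectral cube, processed in
    row slices to bound peak memory use (a sketch, not the paper's code).

    raw:   (rows, cols, bands) sensor counts
    white: (cols, bands) white-reference counts
    dark:  (cols, bands) dark-current counts
    """
    out = np.empty(raw.shape, dtype=np.float64)
    denom = np.clip(white - dark, 1e-9, None)   # avoid divide-by-zero
    for chunk in np.array_split(np.arange(raw.shape[0]), n_chunks):
        # Standard reflectance formula: (raw - dark) / (white - dark)
        out[chunk] = (raw[chunk] - dark) / denom
    return out
```

A pixel imaged at the white-reference level comes out as reflectance 1.0; one at the dark level comes out as 0.0.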
Citations: 0
Multi-Attention Guided SKFHDRNet For HDR Video Reconstruction
CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050409
Ehsan Ullah, Marius Pedersen, Kjartan Sebastian Waaseth, Bernt-Erik Baltzersen
We propose a three stage learning-based approach for High Dynamic Range (HDR) video reconstruction with alternating exposures. The first stage performs alignment of neighboring frames to the reference frame by estimating the flows between them, the second stage is composed of multi-attention modules and a pyramid cascading deformable alignment module to refine aligned features, and the final stage merges and estimates the final HDR scene using a series of dilated selective kernel fusion residual dense blocks (DSKFRDBs) to fill the over-exposed regions with details. The proposed model variants give HDR-VDP-2 values on a dynamic dataset of 79.12, 78.49, and 78.89 respectively, compared to Chen et al. [“HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset,” Proc. IEEE/CVF Int’l. Conf. on Computer Vision (IEEE, Piscataway, NJ, 2021), pp. 2502–2511] 79.09, Yan et al. [“Attention-guided network for ghost-free high dynamic range imaging,” Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2019), pp. 1751–1760] 78.69, Kalantari et al. [“Patch-based high dynamic range video,” ACM Trans. Graph. 32 (2013) 202–1] 70.36, and Kalantari et al. [“Deep hdr video from sequences with alternating exposures,” Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 193–205] 77.91. We achieve better detail reproduction and alignment in over-exposed regions compared to state-of-the-art methods and with a smaller number of parameters.
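As background for the merging stage, a classic way to combine aligned, linearized frames captured with alternating exposures is a confidence-weighted average that trusts mid-tone pixels most (a Debevec-style hat-weighting sketch, not the learned merging network proposed in the paper):

```python
import numpy as np

def merge_exposures(frames, exposure_times):
    """Weighted merge of aligned, linearized LDR frames into an HDR
    radiance estimate. Triangle ("hat") weighting: pixels near 0.5 are
    trusted most; near-black and near-saturated pixels contribute little.
    """
    num = np.zeros_like(frames[0], dtype=np.float64)
    den = np.zeros_like(frames[0], dtype=np.float64)
    for img, t in zip(frames, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight in [0, 1]
        num += w * img / t                   # exposure-normalized radiance
        den += w
    return num / np.clip(den, 1e-9, None)
```

With consistent scene radiance, a short and a long exposure of the same pixel agree after dividing by their exposure times, so the weighted average recovers that radiance.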
Citations: 0
Color Correction of Mars Images: A Study of Illumination Discrimination Along Solight Locus
CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050410
Emilie Robert, Che Shen, Magali Estribeau, Edoardo Cucchetti, Mark Fairchild
Geologists consider it crucial to work with faithful images of Mars. However, no color correction is yet done systematically on these images, largely because of poor knowledge of the local Martian weather. The weather is highly fluctuating, and with the planet's low gravity it tends to produce varying amounts of dust in the atmosphere as well as variations in ground illumination. The Human Visual System's low discrimination of light variations is explained by Chromatic Adaptation (CA), so color image processing often includes a step related to CA. This study investigates whether this step must also be applied to Mars images, through an illumination discrimination task performed by 15 observers on stimuli along the daylight locus and the solight locus (the Martian analogue of daylight), generated with a 7-LED lighting system. The results agree with other work on the daylight locus while showing only small differences between results under daylight and solight.
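The chromatic adaptation step the study examines is commonly modeled as a von Kries transform: a diagonal gain applied in a sharpened cone-response space. A minimal sketch using the standard Bradford matrix (illustrative; not the experimental procedure of the paper):

```python
import numpy as np

# Bradford chromatic adaptation matrix (XYZ -> sharpened LMS-like space).
BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, src_white, dst_white):
    """Adapt an XYZ tristimulus vector from one illuminant to another
    by per-channel scaling in the Bradford response space."""
    M, M_inv = BRADFORD, np.linalg.inv(BRADFORD)
    gain = (M @ dst_white) / (M @ src_white)   # diagonal von Kries gains
    return M_inv @ (gain * (M @ xyz))

# The source white itself maps exactly onto the destination white.
D65 = np.array([0.9505, 1.0000, 1.0890])
D50 = np.array([0.9642, 1.0000, 0.8249])
print(von_kries_adapt(D65, D65, D50))   # ~ [0.9642, 1.0, 0.8249]
```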
Citations: 0
Visualizing Perceptual Differences in White Color Constancy
IF 1 | CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050404
Marco Buzzelli
Citations: 0
From the Editor
CAS Zone 4 (Computer Science) | Q3 (Chemistry) | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050101
Chunghui Kuo
Citations: 0