
Latest Publications from the Journal of Imaging Science and Technology

Acquisition of Color Reproduction Technique based on Deep Learning Using a Database of Color-converted Images in the Printing Industry
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050402
Ikumi Hirose, Ryosuke Yabe, Toshiyuki Inoue, Koushi Hashimoto, Yoshikatsu Arizono, Kazunori Harada, Vinh-Tiep Nguyen, Thanh Duc Ngo, Duy-Dinh Le, Norimichi Tsumura
Color-space conversion technology is important for outputting accurate colors on different devices. In particular, CMYK (Cyan, Magenta, Yellow and Key plate) used by printers has a limited range of representable colors compared with RGB (Red, Green and Blue) used for ordinary images, which leads to a loss of color information when printing. When an RGB image captured by a camera is printed as is, colors outside the CMYK gamut are degraded, and colors that differ significantly from the actual scene may be output. Printers and other companies therefore manually correct color tones before printing. This process is based on empirical know-how and human sensitivity and has not yet been automated. This study aims to automate color correction in the color-space conversion from RGB to CMYK. Specifically, we use machine learning, utilizing a large color-conversion database owned by printing companies and built up through past correction work, to learn the color-correction techniques of skilled workers. This reduces the burden of work that previously had to be done manually and leads to increased efficiency. In addition, the machine can compensate for some of the empirical know-how, which is expected to simplify the transfer of skills. Quantitative and qualitative evaluation results show the effectiveness of the proposed method for automatic color correction.
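The learned correction model is not described in enough detail here to reproduce; as background only, the following is a minimal Python sketch, assuming nothing beyond the abstract, of the naive device-independent RGB-to-CMYK mapping that manual (and, in this paper, learned) gamut-aware correction sits on top of. Function names and the example color are illustrative.

```python
import numpy as np

def rgb_to_cmyk_naive(rgb):
    """Naive device-independent RGB -> CMYK mapping (illustrative only).

    rgb: float array in [0, 1] with shape (..., 3).
    Returns a CMYK array in [0, 1] with shape (..., 4).
    """
    rgb = np.clip(np.asarray(rgb, dtype=float), 0.0, 1.0)
    k = 1.0 - rgb.max(axis=-1, keepdims=True)        # key (black) plate
    denom = np.where(k < 1.0, 1.0 - k, 1.0)          # avoid divide-by-zero at pure black
    cmy = (1.0 - rgb - k) / denom
    return np.concatenate([cmy, k], axis=-1)

if __name__ == "__main__":
    # A saturated color near the edge of typical press gamuts.
    print(np.round(rgb_to_cmyk_naive([0.0, 0.55, 1.0]), 3))
```

Because a real press gamut is much smaller than this formula alone suggests, out-of-gamut RGB colors still have to be remapped; that remapping is the correction step the paper learns from a printing company's before-and-after database.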
Citations: 0
Characterization of Wood Materials Using Perception-Related Image Statistics
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050408
Jiří Filip, Veronika Vilímovská
An efficient computational characterization of real-world materials is one of the challenges in image understanding. Automatic assessment of materials with performance similar to that of a human observer usually relies on complicated image filtering derived from models of human perception. However, these models become too complicated when a real material is observed in the form of dynamic stimuli. This study tackles the challenge from the other side. First, we collected human ratings of the most common visual attributes for videos of wood samples and analyzed their relationship to selected image statistics. In our experiments on a set of sixty wood samples, we found that such image statistics can perform surprisingly well in discriminating individual samples, with reasonable correlation to human ratings. We also show that these statistics can be effective in discriminating images of the same material taken under different illumination and viewing conditions.
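The abstract does not list which image statistics were used; as a hedged illustration, the sketch below computes a few generic, perception-related statistics for grayscale frames and correlates them with observer ratings using Spearman's rank correlation. The statistics chosen, the random frames, and the ratings are placeholders, not the paper's data.

```python
import numpy as np
from scipy.stats import skew, spearmanr

def frame_statistics(gray):
    """A few generic, perception-related statistics of a grayscale frame in [0, 1]."""
    gray = np.asarray(gray, dtype=float)
    gy, gx = np.gradient(gray)
    return {
        "mean_luminance": gray.mean(),
        "rms_contrast": gray.std(),
        "luminance_skewness": skew(gray.ravel()),
        "mean_gradient_magnitude": np.hypot(gx, gy).mean(),
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Stand-ins for frames extracted from wood-sample videos.
    frames = [rng.random((128, 128)) ** g for g in np.linspace(0.5, 2.0, 10)]
    stats = np.array([[*frame_statistics(f).values()] for f in frames])
    ratings = rng.random(10)   # hypothetical observer ratings of, e.g., glossiness
    for name, column in zip(frame_statistics(frames[0]).keys(), stats.T):
        rho, p = spearmanr(column, ratings)
        print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.2f})")
```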
Citations: 0
Efficient Hyperspectral Data Processing using File Fragmentation
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050403
C. Caruncho, P. J. Pardo, H. Cwierz
In this article, we present a method for processing hyperspectral data in an easy and quick manner. We explain how we split the hyperspectral cube into sections so that it can be processed using fewer resources. We describe the processing, which includes extraction of the raw data along with white and black calibration data, calibration of the data, and application of the desired light source, color space, and gamma transformation. We then present built-in software, including an easy interactive Graphical User Interface (GUI), that allows fellow researchers to process hyperspectral images in a simple fashion.
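A minimal sketch of the calibration step described above, assuming the usual flat-field formula reflectance = (raw - dark) / (white - dark) and assuming the cube is fragmented along the band axis (the article does not say along which axis it splits the cube). File layout, chunk size, and array shapes are illustrative.

```python
import numpy as np

def calibrate_in_chunks(raw, white, dark, chunk_bands=16, eps=1e-6):
    """Flat-field calibration of a hyperspectral cube, one block of bands at a time.

    raw:   (rows, cols, bands) raw cube
    white: (bands,) or (rows, cols, bands) white-reference measurement
    dark:  same shape convention as white (dark/black reference)
    Returns a reflectance cube, roughly in [0, 1].
    """
    raw = np.asarray(raw, dtype=np.float32)
    reflectance = np.empty_like(raw)
    for start in range(0, raw.shape[-1], chunk_bands):
        sl = slice(start, start + chunk_bands)
        w = np.asarray(white, dtype=np.float32)[..., sl]
        d = np.asarray(dark, dtype=np.float32)[..., sl]
        reflectance[..., sl] = (raw[..., sl] - d) / np.maximum(w - d, eps)
    return np.clip(reflectance, 0.0, None)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cube = rng.integers(100, 4000, size=(64, 64, 128)).astype(np.float32)
    white = np.full(128, 4000.0)
    dark = np.full(128, 100.0)
    print(calibrate_in_chunks(cube, white, dark).max())
```

Processing one fragment at a time keeps peak memory proportional to the chunk size rather than to the full cube, which is the point of the file-fragmentation approach.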
Citations: 0
Multi-Attention Guided SKFHDRNet For HDR Video Reconstruction
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050409
Ehsan Ullah, Marius Pedersen, Kjartan Sebastian Waaseth, Bernt-Erik Baltzersen
We propose a three-stage learning-based approach for High Dynamic Range (HDR) video reconstruction with alternating exposures. The first stage performs alignment of neighboring frames to the reference frame by estimating the flows between them, the second stage is composed of multi-attention modules and a pyramid cascading deformable alignment module to refine aligned features, and the final stage merges and estimates the final HDR scene using a series of dilated selective kernel fusion residual dense blocks (DSKFRDBs) to fill the over-exposed regions with details. The proposed model variants give HDR-VDP-2 values of 79.12, 78.49, and 78.89, respectively, on a dynamic dataset, compared to Chen et al. [“HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset,” Proc. IEEE/CVF Int’l. Conf. on Computer Vision (IEEE, Piscataway, NJ, 2021), pp. 2502–2511] 79.09, Yan et al. [“Attention-guided network for ghost-free high dynamic range imaging,” Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2019), pp. 1751–1760] 78.69, Kalantari et al. [“Patch-based high dynamic range video,” ACM Trans. Graph. 32 (2013) 202–1] 70.36, and Kalantari et al. [“Deep hdr video from sequences with alternating exposures,” Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 193–205] 77.91. We achieve better detail reproduction and alignment in over-exposed regions than state-of-the-art methods, with a smaller number of parameters.
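The proposed network cannot be reconstructed from this listing; for context only, here is a sketch of the classical weighted radiance merge of already-aligned, alternately exposed LDR frames, the step that learned pipelines such as this one replace with attention-guided fusion. The gamma value, triangle weighting, and exposure times are assumptions.

```python
import numpy as np

def merge_exposures(frames, exposure_times, gamma=2.2, eps=1e-6):
    """Classical weighted HDR merge of aligned LDR frames (illustrative baseline).

    frames:         list of (H, W, 3) arrays in [0, 1], already aligned to the reference frame.
    exposure_times: exposure time in seconds for each frame.
    """
    numerator, denominator = 0.0, 0.0
    for frame, t in zip(frames, exposure_times):
        frame = np.clip(np.asarray(frame, dtype=float), 0.0, 1.0)
        linear = frame ** gamma                       # undo an assumed display gamma of 2.2
        weight = 1.0 - np.abs(2.0 * frame - 1.0)      # trust mid-tones, not clipped pixels
        numerator += weight * linear / t
        denominator += weight
    return numerator / np.maximum(denominator, eps)   # HDR radiance estimate

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    frame_short, frame_long = rng.random((2, 4, 4, 3))
    hdr = merge_exposures([frame_short, frame_long], exposure_times=[1 / 500, 1 / 60])
    print(hdr.shape, float(hdr.max()))
```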
Citations: 0
Color Correction of Mars Images: A Study of Illumination Discrimination Along Solight Locus
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050410
Emilie Robert, Che Shen, Magali Estribeau, Edoardo Cucchetti, Mark Fairchild
Geologists consider it crucial to work with faithful images of Mars. However, no color correction is yet done systematically on those images, largely because of poor knowledge of the local Martian weather. The weather fluctuates strongly and, combined with the planet's low gravity, produces varying amounts of dust in the atmosphere and corresponding variations in ground illumination. The Human Visual System's low discrimination of such light variations is explained by Chromatic Adaptation (CA), so color image processing often includes a step related to CA. This study investigates whether this step also has to be applied to Mars images. It is based on an illumination discrimination task performed by 15 observers on stimuli along the daylight locus and the solight locus (lights of the planet Mars) generated with a 7-LED lighting system. The study gives outputs in agreement with others on the daylight locus while showing small differences between the results under daylight and solight.
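A minimal sketch of the chromatic adaptation step mentioned above, using a von Kries-style transform with the standard Bradford matrix. The "solight" white points measured in the study are not given in this listing, so the example adapts from D65 to D50 purely as a stand-in.

```python
import numpy as np

# Bradford cone-response matrix (standard values).
BRADFORD = np.array([
    [ 0.8951,  0.2664, -0.1614],
    [-0.7502,  1.7135,  0.0367],
    [ 0.0389, -0.0685,  1.0296],
])

def von_kries_adapt(xyz, white_src, white_dst, m=BRADFORD):
    """Adapt XYZ colors from a source white point to a destination white point."""
    xyz = np.asarray(xyz, dtype=float)
    lms_src = m @ np.asarray(white_src, dtype=float)
    lms_dst = m @ np.asarray(white_dst, dtype=float)
    gain = np.diag(lms_dst / lms_src)                 # per-channel von Kries scaling
    adapt = np.linalg.inv(m) @ gain @ m
    return xyz @ adapt.T

if __name__ == "__main__":
    d65 = [0.95047, 1.0, 1.08883]   # D65 white point (2 degree observer)
    d50 = [0.96422, 1.0, 0.82521]   # D50 white point, stand-in for a "solight" source
    sample = np.array([0.35, 0.40, 0.30])
    print(np.round(von_kries_adapt(sample, d65, d50), 4))
```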
Citations: 0
Visualizing Perceptual Differences in White Color Constancy
IF 1 | CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050404
Marco Buzzelli
{"title":"Visualizing Perceptual Differences in White Color Constancy","authors":"Marco Buzzelli","doi":"10.2352/j.imagingsci.technol.2023.67.5.050404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050404","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47383306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Comparison of Pigment Classification Algorithms on Non-Flat Surfaces using Hyperspectral Imaging
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050405
Dipendra J. Mandal, Marius Pedersen, Sony George, Clotilde Boust
Cultural heritage objects, such as paintings, provide valuable insights into the history and culture of human societies. Preserving these objects is of utmost importance, and developing new technologies for their analysis and conservation is crucial. Hyperspectral imaging is a technology with a wide range of applications in cultural heritage, including documentation, material identification, visualization, and pigment classification. Pigment classification is crucial for conservators and curators in preserving works of art and acquiring valuable insights into the historical and cultural contexts associated with their origin. Various supervised algorithms, including machine learning, are used to classify pigments based on their spectral signatures. Since many artists employ impasto techniques that produce a relief on the surface of their artworks, transforming them from flat objects into 2.5D or 3D ones, the classification task becomes even more difficult. To our knowledge, no previous research has been conducted on pigment classification of an elevated surface using hyperspectral imaging. Therefore, this study compares different spectral classification techniques that employ deterministic and stochastic methods, their hybrid combinations, and machine learning models on an elevated mockup to determine whether such topographical variation affects classification accuracy. In cultural heritage, the lack of adequate data is also a significant challenge for using machine learning, particularly in domains where data collection is expensive, time-consuming, or impractical. Data augmentation can help mitigate this challenge by generating new samples similar to the original ones. We also analyzed the impact of data augmentation techniques on the effectiveness of machine learning models for cultural heritage applications.
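The abstract does not name the individual classifiers compared; as one representative deterministic example, the sketch below implements the Spectral Angle Mapper (SAM), which assigns each pixel spectrum to the reference pigment spectrum with the smallest spectral angle. The reference spectra and noise level are placeholders.

```python
import numpy as np

def spectral_angle_mapper(pixels, references, eps=1e-12):
    """Classify pixel spectra by smallest spectral angle to reference spectra.

    pixels:     (N, bands) pixel reflectance spectra
    references: (K, bands) reference pigment spectra
    Returns (labels, angles): index of the best reference and its angle in radians.
    """
    p = np.asarray(pixels, dtype=float)
    r = np.asarray(references, dtype=float)
    cos = (p @ r.T) / (
        np.linalg.norm(p, axis=1, keepdims=True) * np.linalg.norm(r, axis=1) + eps
    )
    angles = np.arccos(np.clip(cos, -1.0, 1.0))       # (N, K) spectral angles
    labels = angles.argmin(axis=1)
    return labels, angles[np.arange(len(labels)), labels]

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    references = rng.random((4, 120))                 # 4 placeholder pigment spectra
    pixels = references[rng.integers(0, 4, 50)] + 0.02 * rng.standard_normal((50, 120))
    labels, angles = spectral_angle_mapper(pixels, references)
    print(labels[:10], float(angles.mean()))
```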
Citations: 0
From the Editor
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050101
Chunghui Kuo
{"title":"From the Editor","authors":"Chunghui Kuo","doi":"10.2352/j.imagingsci.technol.2023.67.5.050101","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050101","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"376 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Color Performance Review (CPR): A Color Performance Analyzer for Endoscopy Devices
IF 1 | CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050406
Wei-Chung Cheng
{"title":"Color Performance Review (CPR): A Color Performance Analyzer for Endoscopy Devices","authors":"Wei-Chung Cheng","doi":"10.2352/j.imagingsci.technol.2023.67.5.050406","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050406","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47084038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Appearance Reproduction Framework for Printed 3D Surfaces
CAS Zone 4 (Computer Science) | Q4 IMAGING SCIENCE & PHOTOGRAPHIC TECHNOLOGY | Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050413
Tanzima Habib, Phil Green, Peter Nussbaum
The bidirectional reflectance distribution function (BRDF) is used to measure colour together with gloss and surface geometry. In this paper, we aim to provide a practical way of reproducing the appearance of a 3D printed surface in 2.5D printing, for any slope angle and colour, in a colour-managed workflow as a means of soft proofing. To account for the change in colour due to a change in surface slope, we developed a BRDF interpolation algorithm that adjusts the tristimulus values of the flat target to predict the corresponding colour on a sloped surface. These adjusted colours are then used by the interpolated BRDF workflow to predict the colour parameters for each pixel with a particular slope. The effectiveness of this algorithm in reducing colour differences in 2.5D printing has been successfully demonstrated. Finally, we show how all the components, the slope colour adjustment method, the interpolated BRDF parameter algorithm, and the BRDF model profiles encoded using iccMAX, are connected to form a practical appearance reproduction framework for 2.5D printing.
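The iccMAX-encoded workflow itself is not reproducible from this listing; the sketch below only illustrates the interpolation idea it rests on: given colour parameters fitted at a few measured slope angles, estimate the parameters for an arbitrary slope by piecewise-linear interpolation before soft proofing. The angles and the per-slope CIELAB values are invented placeholders.

```python
import numpy as np

def interpolate_slope_parameters(measured_angles, measured_params, query_angles):
    """Linearly interpolate per-slope colour parameters for arbitrary slope angles.

    measured_angles: (M,) slope angles in degrees at which parameters were fit.
    measured_params: (M, P) parameter vectors (e.g. CIELAB or BRDF model parameters).
    query_angles:    (Q,) slope angles to predict, clamped to the measured range.
    Returns a (Q, P) array of interpolated parameters.
    """
    measured_angles = np.asarray(measured_angles, dtype=float)
    measured_params = np.asarray(measured_params, dtype=float)
    query = np.clip(np.asarray(query_angles, dtype=float),
                    measured_angles.min(), measured_angles.max())
    return np.stack(
        [np.interp(query, measured_angles, measured_params[:, j])
         for j in range(measured_params.shape[1])],
        axis=1,
    )

if __name__ == "__main__":
    angles = [0.0, 15.0, 30.0, 45.0]                       # measured slope angles
    lab = [[62.1, 4.3, 18.0],                              # placeholder CIELAB per slope
           [60.8, 4.9, 17.2],
           [58.9, 5.6, 16.1],
           [56.5, 6.4, 14.8]]
    print(np.round(interpolate_slope_parameters(angles, lab, [10.0, 37.5]), 2))
```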
Citations: 0