Acquisition of Color Reproduction Technique based on Deep Learning Using a Database of Color-converted Images in the Printing Industry
Ikumi Hirose, Ryosuke Yabe, Toshiyuki Inoue, Koushi Hashimoto, Yoshikatsu Arizono, Kazunori Harada, Vinh-Tiep Nguyen, Thanh Duc Ngo, Duy-Dinh Le, Norimichi Tsumura
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050402
Color-space conversion technology is important for outputting accurate colors on different devices. In particular, CMYK (Cyan, Magenta, Yellow and Key plate), used by printers, has a limited range of representable colors compared with RGB (Red, Green and Blue), used for normal images, which leads to a loss of color information when printing. When an RGB image captured by a camera is printed as is, colors outside the CMYK gamut are degraded, and the printed colors may differ significantly from the actual scene. Printing companies therefore manually correct color tones before printing. This process is based on empirical know-how and human sensitivity and has not yet been automated. This study therefore aims to automate color correction in the color-space conversion from RGB to CMYK. Specifically, we use machine learning, utilizing a large color-conversion database owned by printing companies and built up through past correction work, to learn the color-correction techniques of skilled workers. This reduces the burden of work that has so far been done manually and increases efficiency. In addition, the machine can capture some of the empirical know-how, which is expected to simplify the transfer of skills. Quantitative and qualitative evaluation results show the effectiveness of the proposed method for automatic color correction.
Characterization of Wood Materials Using Perception-Related Image Statistics
Jiří Filip, Veronika Vilímovská
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050408
Efficient computational characterization of real-world materials is one of the challenges in image understanding. Automatic assessment of materials with performance similar to that of a human observer usually relies on complicated image filtering derived from models of human perception. However, these models become too complicated when a real material is observed in the form of dynamic stimuli. This study tackles the challenge from the other side. First, we collected human ratings of the most common visual attributes for videos of wood samples and analyzed their relationship to selected image statistics. In our experiments on a set of sixty wood samples, we found that such image statistics can perform surprisingly well in discriminating individual samples, with reasonable correlation to human ratings. We also show that these statistics can be effective in discriminating images of the same material taken under different illumination and viewing conditions.
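The abstract does not name the statistics used, so the sketch below assumes simple first-order luminance statistics (mean, standard deviation, skewness, kurtosis) pooled over a video's frames, with random placeholders for the sixty samples and their ratings.

```python
# A minimal sketch, assuming first-order luminance statistics; the paper's
# exact statistics are not reproduced here. Produces one feature vector per
# video that could be correlated with human ratings.
import numpy as np
from scipy import stats

def video_statistics(frames):
    """frames: array (n_frames, h, w, 3), RGB in [0, 255] or [0, 1]."""
    lum = frames[..., :3].astype(np.float64) @ np.array([0.2126, 0.7152, 0.0722])
    feats = [[f.mean(), f.std(), stats.skew(f.ravel()), stats.kurtosis(f.ravel())]
             for f in lum]                      # one row of statistics per frame
    return np.asarray(feats).mean(axis=0)       # pool over the whole video

# Pearson correlation between one statistic and mean observer ratings
# (both arrays are random stand-ins for the sixty wood samples):
ratings = np.random.rand(60)
features = np.stack([video_statistics(np.random.rand(10, 32, 32, 3)) for _ in range(60)])
print(np.corrcoef(features[:, 0], ratings)[0, 1])
```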
{"title":"Characterization of Wood Materials Using Perception-Related Image Statistics","authors":"Jiří Filip, Veronika Vilímovská","doi":"10.2352/j.imagingsci.technol.2023.67.5.050408","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050408","url":null,"abstract":"An efficient computational characterization of real-world materials is one of the challenges in image understanding. An automatic assessment of materials, with similar performance as human observer, usually relies on complicated image filtering derived from models of human perception. However, these models become too complicated when a real material is observed in the form of dynamic stimuli. This study tackles the challenge from the other side. First, we collected human ratings of the most common visual attributes for videos of wood samples and analyzed their relationship to selected image statistics. In our experiments on a set of sixty wood samples, we have found that such image statistics can perform surprisingly well in the discrimination of individual samples with reasonable correlation to human ratings. We have also shown that these statistics can be also effective in the discrimination of images of the same material taken under different illumination and viewing conditions.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"59 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135433955","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Efficient Hyperspectral Data Processing using File Fragmentation
C. Caruncho, P. J. Pardo, H. Cwierz
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050403
In this article, we present a method for processing hyperspectral data in an easy and quick manner. We explain how we split the hyperspectral cube into different sections to be processed using fewer resources. We describe the processing, which includes extraction of the raw data along with white and black calibration data, calibration of the data, and application of the desired light source, color space, and gamma transformation. We then present software with an easy-to-use interactive Graphical User Interface (GUI) that will allow fellow researchers to process hyperspectral images in a simple fashion.
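The calibration step described here is, in its standard form, reflectance = (raw - dark) / (white - dark). Below is a minimal sketch of that step processed in row chunks, mirroring the file-fragmentation idea of handling one section of the cube at a time; the file names, cube layout, and chunk size are assumptions, not the authors' implementation.

```python
# A minimal sketch: white/black flat-field calibration of a hyperspectral cube,
# memory-mapped and processed one chunk of rows at a time. All file names and
# shapes are hypothetical.
import numpy as np

raw   = np.lib.format.open_memmap("cube_raw.npy", mode="r")   # (rows, cols, bands)
white = np.load("white_ref.npy").astype(np.float64)           # (bands,) averaged white tile
dark  = np.load("dark_ref.npy").astype(np.float64)            # (bands,) sensor dark frame
out   = np.lib.format.open_memmap("cube_refl.npy", mode="w+",
                                  dtype=np.float32, shape=raw.shape)

chunk = 256                                   # rows per section
for r in range(0, raw.shape[0], chunk):
    block = raw[r:r + chunk].astype(np.float64)
    # reflectance = (raw - dark) / (white - dark), guarded against divide-by-zero
    out[r:r + chunk] = ((block - dark) / np.maximum(white - dark, 1e-9)).astype(np.float32)
out.flush()
```

After this step, reflectance spectra are multiplied by an illuminant spectrum and colour-matching functions, converted to the target colour space, and gamma-encoded for display.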
{"title":"Efficient Hyperspectral Data Processing using File Fragmentation","authors":"C. Caruncho, P. J. Pardo, H. Cwierz","doi":"10.2352/j.imagingsci.technol.2023.67.5.050403","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050403","url":null,"abstract":"In this article, we present a method for processing hyperspectral data in an easy and quick manner. We explain how we split the hyperspectral cube in different sections to be processed using fewer resources. We describe the processing, which includes extraction of the raw data along with white and black calibration data, calibration of the data and application of desired light source, color space, and gamma transformation. We then present a built-in software, including an easy interactive Graphical User Interface (GUI) that will allow fellow researchers to process hyperspectral images in a simple fashion.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"43 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135298181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-Attention Guided SKFHDRNet For HDR Video Reconstruction
Ehsan Ullah, Marius Pedersen, Kjartan Sebastian Waaseth, Bernt-Erik Baltzersen
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050409
We propose a three-stage learning-based approach for High Dynamic Range (HDR) video reconstruction with alternating exposures. The first stage aligns neighboring frames to the reference frame by estimating the flows between them; the second stage is composed of multi-attention modules and a pyramid cascading deformable alignment module that refine the aligned features; and the final stage merges and estimates the final HDR scene using a series of dilated selective kernel fusion residual dense blocks (DSKFRDBs) to fill the over-exposed regions with details. The proposed model variants give HDR-VDP-2 values of 79.12, 78.49, and 78.89, respectively, on a dynamic dataset, compared to 79.09 for Chen et al. [“HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset,” Proc. IEEE/CVF Int’l. Conf. on Computer Vision (IEEE, Piscataway, NJ, 2021), pp. 2502–2511], 78.69 for Yan et al. [“Attention-guided network for ghost-free high dynamic range imaging,” Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2019), pp. 1751–1760], 70.36 for Kalantari et al. [“Patch-based high dynamic range video,” ACM Trans. Graph. 32 (2013) 202–1], and 77.91 for Kalantari et al. [“Deep HDR video from sequences with alternating exposures,” Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 193–205]. We achieve better detail reproduction and alignment in over-exposed regions than state-of-the-art methods, with fewer parameters.
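For context, the classical merge that such learned pipelines replace can be written in a few lines: linearized LDR frames, already aligned, are combined with weights that suppress badly exposed pixels. This is only a conceptual baseline, not the paper's network; the hat-shaped weight is one common choice.

```python
# A minimal sketch of classical exposure-weighted HDR merging of aligned,
# linearized frames captured with alternating exposures. The paper's
# multi-attention network and DSKFRDBs learn this merge instead.
import numpy as np

def merge_hdr(frames, exposures, eps=1e-8):
    """frames: (n, h, w, 3) linear RGB in [0, 1]; exposures: (n,) exposure times."""
    acc, wsum = 0.0, 0.0
    for img, t in zip(frames, exposures):
        w = 1.0 - (2.0 * img - 1.0) ** 2     # hat weight: near zero at 0 and 1
        acc += w * img / t                    # per-frame radiance estimate
        wsum += w
    return acc / (wsum + eps)

frames = np.random.rand(3, 8, 8, 3)           # hypothetical aligned frames
print(merge_hdr(frames, np.array([1 / 125, 1 / 30, 1 / 8])).shape)
```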
{"title":"Multi-Attention Guided SKFHDRNet For HDR Video Reconstruction","authors":"Ehsan Ullah, Marius Pedersen, Kjartan Sebastian Waaseth, Bernt-Erik Baltzersen","doi":"10.2352/j.imagingsci.technol.2023.67.5.050409","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050409","url":null,"abstract":"We propose a three stage learning-based approach for High Dynamic Range (HDR) video reconstruction with alternating exposures. The first stage performs alignment of neighboring frames to the reference frame by estimating the flows between them, the second stage is composed of multi-attention modules and a pyramid cascading deformable alignment module to refine aligned features, and the final stage merges and estimates the final HDR scene using a series of dilated selective kernel fusion residual dense blocks (DSKFRDBs) to fill the over-exposed regions with details. The proposed model variants give HDR-VDP-2 values on a dynamic dataset of 79.12, 78.49, and 78.89 respectively, compared to Chen et al. [“HDR video reconstruction: A coarse-to-fine network and a real-world benchmark dataset,” Proc. IEEE/CVF Int’l. Conf. on Computer Vision (IEEE, Piscataway, NJ, 2021), pp. 2502–2511] 79.09, Yan et al. [“Attention-guided network for ghost-free high dynamic range imaging,” Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition (IEEE, Piscataway, NJ, 2019), pp. 1751–1760] 78.69, Kalantari et al. [“Patch-based high dynamic range video,” ACM Trans. Graph. 32 (2013) 202–1] 70.36, and Kalantari et al. [“Deep hdr video from sequences with alternating exposures,” Computer Graphics Forum (Wiley Online Library, 2019), Vol. 38, pp. 193–205] 77.91. We achieve better detail reproduction and alignment in over-exposed regions compared to state-of-the-art methods and with a smaller number of parameters.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"5 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135640659","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color Correction of Mars Images: A Study of Illumination Discrimination Along Solight Locus
Emilie Robert, Che Shen, Magali Estribeau, Edoardo Cucchetti, Mark Fairchild
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050410
Geologists consider it crucial to work with faithful images of Mars. However, no color correction is yet done systematically on those images, mainly because of poor knowledge of the local Martian weather. The weather fluctuates strongly and, combined with the planet's low gravity, produces varying amounts of dust in the atmosphere and, in turn, variations in ground illumination. The Human Visual System's low discrimination of illumination variations is explained by Chromatic Adaptation (CA), and color image processing therefore often includes a CA-related step. This study investigates whether this step must also be applied to Mars images. It does so through an illumination discrimination task performed by 15 observers on stimuli along the daylight locus and the solight locus (the lights of planet Mars), generated with a 7-LED lighting system. The results agree with other studies on the daylight locus while showing small differences between results under daylight and solight.
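Chromatic adaptation itself is commonly modeled as a von Kries scaling in a sharpened cone space. The sketch below uses the Bradford matrix in its simple linear form as one standard choice; the white points shown are illustrative, not the study's measured Mars illuminants.

```python
# Chromatic adaptation in its simplest (von Kries) form, shown only to
# illustrate the CA step the study asks about. The linear Bradford transform
# is one standard choice, not necessarily the authors'.
import numpy as np

BRADFORD = np.array([[ 0.8951,  0.2664, -0.1614],
                     [-0.7502,  1.7135,  0.0367],
                     [ 0.0389, -0.0685,  1.0296]])

def von_kries_adapt(xyz, white_src, white_dst):
    """Map XYZ colours seen under white_src to corresponding colours under white_dst."""
    lms_src = BRADFORD @ white_src            # source white in cone-like space
    lms_dst = BRADFORD @ white_dst            # destination white in cone-like space
    gain = np.diag(lms_dst / lms_src)         # per-channel von Kries scaling
    M = np.linalg.inv(BRADFORD) @ gain @ BRADFORD
    return xyz @ M.T

# e.g. adapt from a hypothetical reddish Mars-like white point to D65
xyz = np.random.rand(5, 3)
print(von_kries_adapt(xyz, np.array([1.02, 1.0, 0.60]), np.array([0.9505, 1.0, 1.089])))
```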
{"title":"Color Correction of Mars Images: A Study of Illumination Discrimination Along Solight Locus","authors":"Emilie Robert, Che Shen, Magali Estribeau, Edoardo Cucchetti, Mark Fairchild","doi":"10.2352/j.imagingsci.technol.2023.67.5.050410","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050410","url":null,"abstract":"Geologists consider it crucial to work on faithful images of Mars. However, no color correction is yet done systematically on those images, especially due to the poor knowledge of the local martian weather. The weather is highly fluctuating and with the low gravity of the planet, it tends to set the conditions for varying amounts of dust in the atmosphere and ground illumination variations as well. Low discrimination of light variations by the Human Visual System is explained by Chromatic Adaptation (CA). Color images processing therefore often accounts for a step related to CA. This study investigates whether this step has to be applied to Mars images as well and is done through an illumination discrimination task performed on 15 observers for stimuli along daylight locus and solight locus (lights of Mars planet) generated through a 7-LEDs lighting system. This study gives outputs in agreement with other on daylight locus while showing low differences between results under daylight and solight.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"93 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135736478","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Visualizing Perceptual Differences in White Color Constancy
Marco Buzzelli
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050404
{"title":"Visualizing Perceptual Differences in White Color Constancy","authors":"Marco Buzzelli","doi":"10.2352/j.imagingsci.technol.2023.67.5.050404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050404","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47383306","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Comparison of Pigment Classification Algorithms on Non-Flat Surfaces using Hyperspectral Imaging
Dipendra J. Mandal, Marius Pedersen, Sony George, Clotilde Boust
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050405
Cultural heritage objects, such as paintings, provide valuable insights into the history and culture of human societies. Preserving these objects is of utmost importance, and developing new technologies for their analysis and conservation is crucial. Hyperspectral imaging is a technology with a wide range of applications in cultural heritage, including documentation, material identification, visualization, and pigment classification. Pigment classification is crucial for conservators and curators in preserving works of art and acquiring valuable insights into the historical and cultural contexts associated with their origin. Various supervised algorithms, including machine learning, are used to classify pigments based on their spectral signatures. Many artists, however, employ impasto techniques that produce a relief on the surface, transforming it from a flat object into a 2.5D or 3D one, which makes the classification task more difficult. To our knowledge, no previous research has been conducted on pigment classification using hyperspectral imaging on an elevated surface. Therefore, this study compares different spectral classification techniques that employ deterministic and stochastic methods, their hybrid combinations, and machine learning models for an elevated mockup to determine whether such topographical variation affects classification accuracy. In cultural heritage, the lack of adequate data is also a significant challenge for using machine learning, particularly in domains where data collection is expensive, time-consuming, or impractical. Data augmentation can help mitigate this challenge by generating new samples similar to the original. We also analyzed the impact of data augmentation techniques on the effectiveness of machine learning models for cultural heritage applications.
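As one concrete example of the deterministic measures compared in such studies, the Spectral Angle Mapper (SAM) assigns each pixel to the pigment whose reference spectrum makes the smallest angle with it. The sketch below assumes hypothetical reference spectra and band count; the paper's full comparison also covers stochastic measures, hybrids, and machine learning models.

```python
# A minimal sketch of the Spectral Angle Mapper (SAM), a common deterministic
# spectral classifier. Pixel spectra and pigment references are hypothetical.
import numpy as np

def spectral_angle(pixels, refs):
    """pixels: (n, bands); refs: (k, bands). Returns angles (n, k) in radians."""
    p = pixels / np.linalg.norm(pixels, axis=1, keepdims=True)
    r = refs / np.linalg.norm(refs, axis=1, keepdims=True)
    return np.arccos(np.clip(p @ r.T, -1.0, 1.0))   # clip guards rounding errors

def classify_sam(pixels, refs):
    # index of the reference pigment with the smallest spectral angle
    return spectral_angle(pixels, refs).argmin(axis=1)

pixels = np.random.rand(100, 186)    # hypothetical hyperspectral pixels
refs = np.random.rand(8, 186)        # hypothetical pigment reference spectra
print(classify_sam(pixels, refs)[:10])
```

Because SAM compares only spectral direction, not magnitude, it is relatively insensitive to brightness changes, which is one reason it is a natural candidate when surface relief alters local shading.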
{"title":"Comparison of Pigment Classification Algorithms on Non-Flat Surfaces using Hyperspectral Imaging","authors":"Dipendra J. Mandal, Marius Pedersen, Sony George, Clotilde Boust","doi":"10.2352/j.imagingsci.technol.2023.67.5.050405","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050405","url":null,"abstract":"Cultural heritage objects, such as paintings, provide valuable insights into the history and culture of human societies. Preserving these objects is of utmost importance, and developing new technologies for their analysis and conservation is crucial. Hyperspectral imaging is a technology with a wide range of applications in cultural heritage, including documentation, material identification, visualization and pigment classification. Pigment classification is crucial for conservators and curators in preserving works of art and acquiring valuable insights into the historical and cultural contexts associated with their origin. Various supervised algorithms, including machine learning, are used to classify pigments based on their spectral signatures. Since many artists employ impasto techniques in their artworks that produce a relief on the surface, i.e., transforming it from a flat object to a 2.5D or 3D, this further makes the classification task difficult. To our knowledge, no previous research has been conducted on pigment classification using hyperspectral imaging concerning an elevated surface. Therefore, this study compares different spectral classification techniques that employ deterministic and stochastic methods, their hybrid combinations, and machine learning models for an elevated mockup to determine whether such topographical variation affects classification accuracy. In cultural heritage, the lack of adequate data is also a significant challenge for using machine learning, particularly in domains where data collection is expensive, time-consuming, or impractical. Data augmentation can help mitigate this challenge by generating new samples similar to the original. We also analyzed the impact of data augmentation techniques on the effectiveness of machine learning models for cultural heritage applications.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"53 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134995368","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the Editor
Chunghui Kuo
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050101
{"title":"From the Editor","authors":"Chunghui Kuo","doi":"10.2352/j.imagingsci.technol.2023.67.5.050101","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050101","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"376 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735519","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Color Performance Review (CPR): A Color Performance Analyzer for Endoscopy Devices
Wei-Chung Cheng
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050406
{"title":"Color Performance Review (CPR): A Color Performance Analyzer for Endoscopy Devices","authors":"Wei-Chung Cheng","doi":"10.2352/j.imagingsci.technol.2023.67.5.050406","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050406","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47084038","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Appearance Reproduction Framework for Printed 3D Surfaces
Tanzima Habib, Phil Green, Peter Nussbaum
Pub Date: 2023-09-01 | DOI: 10.2352/j.imagingsci.technol.2023.67.5.050413
The bidirectional reflectance distribution function (BRDF) is used to measure colour together with gloss and surface geometry. In this paper, we aim to provide a practical way of reproducing the appearance of a 3D printed surface in 2.5D printing, at any slope angle and colour, within a colour-managed workflow, as a means of soft proofing. To account for the change in colour due to a change in surface slope, we developed a BRDF interpolation algorithm that adjusts the tristimulus values of the flat target to predict the corresponding colour on a sloped surface. These adjusted colours are then used by the interpolated BRDF workflow to predict the colour parameters for each pixel at a particular slope. The effectiveness of this algorithm in reducing colour differences in 2.5D printing has been successfully demonstrated. Finally, we show how all the components (the slope colour adjustment method, the interpolated BRDF parameter algorithm, and BRDF model profiles encoded using iccMAX) are connected to form a practical appearance reproduction framework for 2.5D printing.
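The slope colour adjustment can be pictured as interpolation over slope angle. A minimal sketch under that assumption follows: it linearly interpolates tristimulus values measured at a few calibration slopes, whereas the paper's actual algorithm interpolates BRDF model parameters; all angles and XYZ values here are hypothetical.

```python
# A minimal sketch of slope-dependent colour adjustment: XYZ values known at a
# few calibration slope angles are interpolated to the slope of each printed
# pixel. Angles and XYZ values are hypothetical placeholders.
import numpy as np

slope_angles = np.array([0.0, 15.0, 30.0, 45.0])    # calibration slopes (degrees)
xyz_at_slope = np.array([[41.2, 35.7, 24.1],        # XYZ of one patch per slope
                         [40.1, 34.8, 23.5],
                         [38.6, 33.2, 22.4],
                         [36.9, 31.5, 21.0]])

def xyz_for_slope(theta):
    """Per-channel linear interpolation of XYZ over slope angle."""
    return np.array([np.interp(theta, slope_angles, xyz_at_slope[:, c])
                     for c in range(3)])

print(xyz_for_slope(22.5))   # predicted XYZ for a 22.5-degree facet
```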
{"title":"An Appearance Reproduction Framework for Printed 3D Surfaces","authors":"Tanzima Habib, Phil Green, Peter Nussbaum","doi":"10.2352/j.imagingsci.technol.2023.67.5.050413","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050413","url":null,"abstract":"Bidirectional reflection distribution function (BRDF) is used to measure colour with gloss and surface geometry. In this paper, we aim to provide a practical way of reproducing the appearance of a 3D printed surface in 2.5D printing of any slope angle and colour in a colour-managed workflow as a means for softproofing. To account for the change in colour due to a change in surface slope, we developed a BRDF interpolation algorithm that adjusts the colour of the tristimulus values of the flat target to predict the corresponding colour on a surface with a slope. These adjusted colours are then used by the interpolated BRDF workflow to finally predict the colour parameters for each pixel with a particular slope. The effectiveness of this algorithm in reducing colour differences in 2.5D printing has been successfully demonstrated. We then finally show how all the components, slope colour adjustment method, interpolated BRDF parameters algorithm, and BRDF model encoded profiles using iccMAX are connected to make a practical appearance reproduction framework for 2.5D printing.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135735258","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}