Grey Balance in Cross Media Reproductions
Pub Date : 2023-09-01 DOI: 10.2352/j.imagingsci.technol.2023.67.5.050411
Gregory High, Peter Nussbaum, Phil Green
Grey balance plays an important role in determining the device values needed to reproduce colours which appear achromatic throughout the tonal range. However, complete observer adaptation to the media white rarely occurs, and these designated device values can still appear non-neutral. This poses a problem for cross-media reproductions, where a mismatch in neutral colours is often the most noticeable difference between them. This paper presents two related experiments which investigate a means of gaining better visual agreement between reproductions which have different background colours or media whites. The first quantifies the degree of adjustment (the degree of media relative transform) needed to make an appearance match between grey patches on a white background and on background colours of various hues and colourfulness. It was found that the degree of adjustment was near-linearly related to the luminance of the patch itself, with lighter patches requiring greater adjustment towards the background colour. Neither the hue nor the chroma of the patch’s background had any significant effect on the underlying function. In the second experiment, this concept is applied to pictorial images on paper-coloured backgrounds. Three pixelwise rendering strategies were compared. In side-by-side viewing, the adaptive control of neutrals outperformed the media relative transform in all cases. Even for modest differences in paper colour (ΔEab of 3), images with significant neutral content benefited from the adaptive approach.
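For readers who want a concrete picture of the adaptive control of neutrals, the sketch below blends near-neutral CIELAB values toward the destination background colour by a degree that increases linearly with lightness, following the near-linear relation reported in the first experiment. The function name, the linear endpoints, and the use of CIELAB a*b* coordinates are illustrative assumptions rather than the paper's exact transform, and a full implementation would also weight the adjustment by each pixel's distance from the neutral axis.

```python
# Illustrative sketch (not the authors' exact transform): pull near-neutral
# pixel chromaticity toward the destination paper colour by a degree that
# increases linearly with lightness, as the first experiment suggests.
import numpy as np

def adaptive_neutral_adjust(lab, dest_paper_ab, l_min=0.0, l_max=100.0):
    """Blend a*, b* of each pixel toward the destination paper's a*, b*.

    lab           : (N, 3) array of CIELAB values for the rendered image.
    dest_paper_ab : (a*, b*) of the destination media white / background.
    The blend degree d rises linearly from 0 at L* = l_min to 1 at L* = l_max
    (hypothetical endpoints; the paper reports only a near-linear relation).
    """
    lab = np.asarray(lab, dtype=float).copy()
    L = lab[:, 0]
    d = np.clip((L - l_min) / (l_max - l_min), 0.0, 1.0)  # degree of adjustment
    lab[:, 1:] = (1.0 - d[:, None]) * lab[:, 1:] + d[:, None] * np.asarray(dest_paper_ab)
    return lab

# Example: grey patches drift toward a slightly yellowish paper,
# with the lightest patch adjusted the most.
patches = np.array([[25.0, 0.0, 0.0], [50.0, 0.0, 0.0], [90.0, 0.0, 0.0]])
print(adaptive_neutral_adjust(patches, dest_paper_ab=(1.0, 4.0)))
```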
{"title":"Grey Balance in Cross Media Reproductions","authors":"Gregory High, Peter Nussbaum, Phil Green","doi":"10.2352/j.imagingsci.technol.2023.67.5.050411","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050411","url":null,"abstract":"Grey balance plays an important role in determining the device values needed to reproduce colours which appear achromatic throughout the tonal range. However, complete observer adaptation to the media white rarely occurs, and these designated device values can still appear non-neutral. This poses a problem for cross-media reproductions, where a mismatch in neutral colours is often the most noticeable difference between them. This paper presents two related experiments which investigate a means of gaining better visual agreement between reproductions which have different background colours or media whites. The first quantifies the degree of adjustment (the degree of media relative transform) needed to make an appearance match between grey patches on a white background and on background colours of various hues and colourfulness. It was found that the degree of adjustment was near-linearly related to the luminance of the patch itself, with lighter patches requiring greater adjustment towards the background colour. Neither the hue nor the chroma of the patch’s background had any significant effect on the underlying function. In the second experiment, this concept is applied to pictorial images on paper-coloured backgrounds. Three pixelwise rendering strategies were compared. In side-by-side viewing, the adaptive control of neutrals outperformed the media relative transform in all cases. Even for modest differences in paper colour (ΔEab of 3), images with significant neutral content benefited from the adaptive approach.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135588789","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Matrix R-based Visual Response to Optimal Colors and Application to Image Color Gamut Expansion
Pub Date : 2023-09-01 DOI: 10.2352/j.imagingsci.technol.2023.67.5.050414
Hiroaki Kotera
The optimal colors, which have maximum chroma at constant lightness, present an ideal target for colorants pursuing the ultimate wide color gamut. MacAdam proved that optimal colors are composed of square pulse-shaped spectra with at least two transition wavelengths λ1 and λ2 at which the reflectance changes from 0 to 1 or from 1 to 0. The optimal color gamut is built from two types: a convex type with reflectance 1.0 in the band λ1 ∼ λ2 and 0.0 elsewhere, and a concave type with reflectance 0.0 in the band λ1 ∼ λ2 and 1.0 elsewhere. Searching for optimal color candidates at high precision and creating the 3D color gamut is computationally expensive. In addition, the human visual spectral responses to the optimal color spectra remain unknown. This paper (1) proposes an alternative, simple method for creating the optimal color gamut using the GBD (Gamut Boundary Descriptor) technique, (2) clarifies, for the first time, how human vision responds spectrally to the optimal colors based on Matrix-R theory, and (3) presents a novel centroid-invariant color gamut expansion method that treats the optimal colors as an ideal target, finally applying it to actual low-saturation images to verify its effect.
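As a rough illustration of MacAdam's construction described above, the sketch below computes the tristimulus values of convex (band-pass) and concave (band-stop) square-pulse reflectances and sweeps the two transition wavelengths to sample the optimal-color gamut. It assumes a wavelength grid `wl`, CIE colour-matching functions `cmf`, and an illuminant SPD `illum` are already loaded as arrays; it is a brute-force sampler, not the GBD-based method proposed in the paper.

```python
# Minimal sketch of MacAdam optimal colors: square-pulse reflectances with two
# transition wavelengths (band-pass "convex" type, band-stop "concave" type).
# Assumes wl (nm), cmf (len(wl) x 3 colour-matching functions) and illum
# (len(wl) illuminant SPD) are loaded from tabulated CIE data.
import numpy as np

def optimal_color_XYZ(wl, cmf, illum, lam1, lam2, convex=True):
    """XYZ of the optimal color with transition wavelengths lam1 <= lam2."""
    band = (wl >= lam1) & (wl <= lam2)
    refl = np.where(band, 1.0, 0.0) if convex else np.where(band, 0.0, 1.0)
    k = 100.0 / np.sum(illum * cmf[:, 1])      # normalise so the perfect white has Y = 100
    return k * (refl * illum) @ cmf            # integrate reflected light against the CMFs

def optimal_gamut_samples(wl, cmf, illum, step=10):
    """Sweep (lam1, lam2) pairs to sample points on the optimal-color gamut boundary."""
    samples = []
    for lam1 in wl[::step]:
        for lam2 in wl[::step]:
            if lam2 <= lam1:
                continue
            for convex in (True, False):
                samples.append(optimal_color_XYZ(wl, cmf, illum, lam1, lam2, convex))
    return np.asarray(samples)
```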
{"title":"Matrix R-based Visual Response to Optimal Colors and Application to Image Color Gamut Expansion","authors":"Hiroaki Kotera","doi":"10.2352/j.imagingsci.technol.2023.67.5.050414","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050414","url":null,"abstract":"The optimal colors with maximum chroma at constant lightness present an ideal target for the colorants pursuing the ultimate wide color gamut. MacAdam proved that optimal colors are composed of square pulse-shaped spectra with at least two tansition wavelengths λ1 and λ2 whose reflectances change from 0 to 1 or 1 to 0. The optimal color gamut is created from two-types, a convex-type with reflectance 1.0 in w = λ1 ∼ λ2 and 0.0 otherwise, or a concave-type with reflectance 0.0 in w = λ1 ∼ λ2 and 1.0 otherwise. It takes a high computation cost to search the optimal color candidates in high precision and to create the 3D color gamut. In addition, the human visual spectral responses to the optimal color spectra remain unknown. This paper (1) proposes an alternative simple method for creating the optimal color gamut with GBD (Gamujt Boundary Descriptor) technique, and (2) clarifies how human vision spectrally respond to the optimal colors based on Matrix-R theory, for the first time which was unknown until now, and (3) presents centroid-invariant novel color gamut expansion method considering the optimal color as an ideal target and finally apply it to actual low-saturation images to verify its effect.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"32 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135639313","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Exploration of Specific Associations from Words to Colours
Pub Date : 2023-09-01 DOI: 10.2352/j.imagingsci.technol.2023.67.5.050401
Yun Chen, Jie Yang, Fan Zhang, Kaida Xiao, Stephen Westland
{"title":"The Exploration of Specific Associations from Words to Colours","authors":"Yun Chen, Jie Yang, Fan Zhang, Kaida Xiao, Stephen Westland","doi":"10.2352/j.imagingsci.technol.2023.67.5.050401","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050401","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48035398","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Automotive Paint Defect Classification: Factory-Specific Data Generation using CG Software for Deep-Learning Models
Pub Date : 2023-09-01 DOI: 10.2352/j.imagingsci.technol.2023.67.5.050412
Kazuki Iwata, Haotong Guo, Ryuichi Yoshida, Yoshihito Souma, Chawan Koopipat, Masato Takahashi, Norimichi Tsumura
In recent years, advances in technology for detecting paint defects on the exterior surfaces of automobiles have led to research on the automatic classification of defect types using deep learning. Developing a deep-learning model capable of identifying defect types requires a large dataset of sequential images of paint defects captured during inspection. However, generating such a dataset for each factory from actual measurements is expensive. We therefore propose a method for generating training datasets for each factory by simulating the inspection images with computer graphics.
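A minimal sketch of the kind of factory-specific data-generation loop described above: render synthetic defect images with the CG software and record class labels for training. The API name `render_defect_image`, the defect class names, and the CSV layout are hypothetical placeholders, not the pipeline used in the paper.

```python
# Illustrative dataset-generation loop (hypothetical API and class names):
# render synthetic paint-defect images with a CG pipeline and store them with
# class labels so a factory-specific classifier can be trained without costly
# physical measurements.
import csv
import random
from pathlib import Path

DEFECT_CLASSES = ["crater", "dust", "scratch"]   # placeholder class names

def render_defect_image(defect_class, seed, out_path):
    """Placeholder for the CG renderer: render a painted panel with a
    procedurally generated defect of the given class and save the image."""
    raise NotImplementedError("hook the CG software's rendering API in here")

def generate_dataset(root, n_per_class=1000):
    root = Path(root)
    root.mkdir(parents=True, exist_ok=True)
    with open(root / "labels.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["filename", "label"])
        for label in DEFECT_CLASSES:
            for i in range(n_per_class):
                name = f"{label}_{i:05d}.png"
                render_defect_image(label, seed=random.randrange(1 << 30),
                                    out_path=root / name)
                writer.writerow([name, label])

# generate_dataset("synthetic_defects", n_per_class=1000)
```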
{"title":"Automotive Paint Defect Classification: Factory-Specific Data Generation using CG Software for Deep-Learning Models","authors":"Kazuki Iwata, Haotong Guo, Ryuichi Yoshida, Yoshihito Souma, Chawan Koopipat, Masato Takahashi, Norimichi Tsumura","doi":"10.2352/j.imagingsci.technol.2023.67.5.050412","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050412","url":null,"abstract":"In recent years, the advances in technology for detecting paint defects on exterior surfaces of automobiles have led to the emergence of research on automatic classification of defect types using deep learning. To develop a deep-learning model capable of identifying defect types, a large dataset consisting of sequential images of paint defects captured during inspection is required. However, generating such a dataset for each factory using actual measurements is expensive. Therefore, we propose a method for generating datasets to train deep-learning models in each factory by simulating images using computer graphics.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"14 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135640921","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Locus Filters: Theory and Application
Pub Date : 2023-09-01 DOI: 10.2352/j.imagingsci.technol.2023.67.5.050407
Rada Deeb, Graham D. Finlayson, Elaheh Daneshvar
Recently, a theoretical framework was presented for designing colored filters, called locus filters. Locus filters are designed so that any Wien-Planckian light, post filtering, is mapped to another Wien-Planckian light. Moreover, it was shown that only filters designed according to the locus filter framework have this locus-to-locus mapping property. In this paper, we investigate how locus filters work in the real world. We make two main contributions. First, for daylights, we introduce a new daylight locus with respect to which a locus filter always maps a daylight to another daylight (and its correlated color temperature maps analogously to the Wien-Planckian temperature). Importantly, we show that our new locus is close to the standard daylight locus but has a simpler and more elegant formalism. Second, we evaluate the extent to which some commercially available light-balancing and color-correction filters behave like locus filters.
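For context, here is a short derivation under Wien's approximation of why a filter that is exponential in reciprocal wavelength maps one Wien-Planckian light to another. The notation (k, δ) is used only for this illustration of the locus-to-locus property; it is not a restatement of the full locus filter framework.

```latex
% Sketch under Wien's approximation: a filter exponential in 1/lambda maps a
% Wien-Planckian spectrum to another one with a shifted reciprocal temperature.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
Wien's approximation to the Planckian radiator at temperature $T$ is
\begin{equation}
  E(\lambda, T) = c_1 \lambda^{-5} \exp\!\left(-\frac{c_2}{\lambda T}\right).
\end{equation}
Consider a filter with spectral transmittance
\begin{equation}
  f(\lambda) = k \exp\!\left(-\frac{c_2\,\delta}{\lambda}\right), \qquad k > 0.
\end{equation}
Then the filtered light is
\begin{equation}
  f(\lambda)\,E(\lambda, T)
  = k\,c_1 \lambda^{-5}
    \exp\!\left(-\frac{c_2}{\lambda}\Big(\frac{1}{T}+\delta\Big)\right)
  = k\,E(\lambda, T'), \qquad \frac{1}{T'} = \frac{1}{T} + \delta,
\end{equation}
i.e.\ another Wien-Planckian light whose reciprocal temperature is shifted by
$\delta$ (a mired-type shift when $\delta$ is expressed per $10^{6}$\,K).
\end{document}
```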
{"title":"Locus Filters: Theory and Application","authors":"Rada Deeb, Graham D. Finlayson, Elaheh Daneshvar","doi":"10.2352/j.imagingsci.technol.2023.67.5.050407","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.5.050407","url":null,"abstract":"Recently, a theoretical framework was presented for designing colored filters called Locus Filters. Locus filters are designed so that any Wien-Planckian light, post filtering, is mapped to another Wien-Planckian light. Moreover, it was also shown that only filters designed according to the locus filter framework have this locus-to-locus mapping property. In this paper, we investigate how locus filters work in the real world. We make two main contributions. First, for daylights, we introduce a new daylight locus with respect to which a locus filter always maps a daylight to another daylight (and their correlated color temperature maps in analogy to the Wien-Planckian temperatures). Importantly, we show that our new locus is close to the standard daylight locus (but has a simpler and more elegant formalism). Secondly, we evaluate the extent to which some commercially available light balancing and color correction filters behave like locus filters.","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135389547","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Multi-dimension and Multi-level Information Fusion for Facial Expression Recognition
Pub Date : 2023-07-01 DOI: 10.2352/j.imagingsci.technol.2023.67.4.040410
Mei Bie, Huan-Yu Xu, Quanle Liu, Yan Gao, Xiangjiu Che
{"title":"Multi-dimension and Multi-level Information Fusion for Facial Expression Recognition","authors":"Mei Bie, Huan-Yu Xu, Quanle Liu, Yan Gao, Xiangjiu Che","doi":"10.2352/j.imagingsci.technol.2023.67.4.040410","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040410","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"42603807","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
An Intelligent Material Handling System for Hybrid Robot based on Visual Navigation
Pub Date : 2023-07-01 DOI: 10.2352/j.imagingsci.technol.2023.67.4.040409
Xiaorui Zhao, Xue-Fang Chen
{"title":"An Intelligent Material Handling System for Hybrid Robot based on Visual Navigation","authors":"Xiaorui Zhao, Xue-Fang Chen","doi":"10.2352/j.imagingsci.technol.2023.67.4.040409","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040409","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45088939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prediction of Bearing Vibration Fault State based on Fused Bi-LSTM and SVM
Pub Date : 2023-07-01 DOI: 10.2352/j.imagingsci.technol.2023.67.4.040404
Zhu Qingbo, Jialin Han, Cheng Shi, Haoling Gao
{"title":"Prediction of Bearing Vibration Fault State based on Fused Bi-LSTM and SVM","authors":"Zhu Qingbo, Jialin Han, Cheng Shi, Haoling Gao","doi":"10.2352/j.imagingsci.technol.2023.67.4.040404","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040404","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45121120","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
From the Editor
Pub Date : 2023-07-01 DOI: 10.2352/j.imagingsci.technol.2023.67.4.040101
Chunghui Kuo
{"title":"From the Editor","authors":"Chunghui Kuo","doi":"10.2352/j.imagingsci.technol.2023.67.4.040101","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040101","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":"11 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"135762911","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reversible Data Hiding with Neighboring-Prediction-Errors Aided Sorting and CNN Prediction
Pub Date : 2023-07-01 DOI: 10.2352/j.imagingsci.technol.2023.67.4.040408
Junying Yuan, Huicheng Zheng, J. Ni
{"title":"Reversible Data Hiding with Neighboring-Prediction-Errors Aided Sorting and CNN Prediction","authors":"Junying Yuan, Huicheng Zheng, J. Ni","doi":"10.2352/j.imagingsci.technol.2023.67.4.040408","DOIUrl":"https://doi.org/10.2352/j.imagingsci.technol.2023.67.4.040408","url":null,"abstract":"","PeriodicalId":15924,"journal":{"name":"Journal of Imaging Science and Technology","volume":" ","pages":""},"PeriodicalIF":1.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48631720","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":4,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}