
Color Research and Application: Latest Publications

MAMSN: Multi-Attention Interaction and Multi-Scale Fusion Network for Spectral Reconstruction From RGB Images
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-02-28 | DOI: 10.1002/col.22979
Suyu Wang, Lihao Xu

In the present era, hyperspectral images have become a pervasive tool in a multitude of fields. In order to provide a feasible alternative for scenarios where hyperspectral imaging equipment is not accessible, numerous researchers have endeavored to reconstruct hyperspectral information from limited spectral measurements, leading to the development of spectral reconstruction (SR) algorithms that primarily focus on the visible spectrum. In light of the remarkable advancements achieved in many computer vision tasks through the application of deep learning, an increasing number of SR works aim to leverage deeper and wider convolutional neural networks (CNNs) to learn intricate mappings of SR. However, the majority of deep learning methods tend to neglect the design of initial up-sampling when constructing networks. While some methods introduce innovative attention mechanisms, their transferability is limited, impeding further improvement in SR accuracy. To address these issues, we propose a multi-attention interaction and multi-scale fusion network (MAMSN) for SR. It employs a shunt-confluence multi-branch architecture to learn multi-scale information in images. Furthermore, we have devised a separable enhanced up-sampling (SEU) module, situated at the network head, which processes spatial and channel information separately to produce more refined initial up-sampling results. To fully extract features at different scales for visible-spectrum spectral reconstruction, we introduce an adaptive enhanced channel attention (AECA) mechanism and a joint complementary multi-head self-attention (JCMS) mechanism, which are combined into a more powerful feature extraction module, the dual residual double attention block (DRDAB), through a dual residual structure. The experimental results show that the proposed MAMSN network outperforms other SR methods in overall performance, particularly in quantitative metrics and perceptual quality.
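The abstract's core building block is easiest to see in code. Below is a minimal PyTorch sketch of a dual-residual block combining a channel-attention branch with a multi-head self-attention branch; the layer sizes, the SE-style channel attention standing in for AECA, and the stock `nn.MultiheadAttention` standing in for JCMS are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a dual-residual attention block in the spirit of DRDAB.
# The SE-style gate standing in for AECA and the plain multi-head
# self-attention standing in for JCMS are assumptions for illustration.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):          # stand-in for AECA
    def __init__(self, ch, r=4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.mlp = nn.Sequential(
            nn.Conv2d(ch, ch // r, 1), nn.ReLU(),
            nn.Conv2d(ch // r, ch, 1), nn.Sigmoid())

    def forward(self, x):
        return x * self.mlp(self.pool(x))   # per-channel gating

class DualResidualBlock(nn.Module):         # stand-in for DRDAB
    def __init__(self, ch=64, heads=4):
        super().__init__()
        self.conv = nn.Conv2d(ch, ch, 3, padding=1)
        self.ca = ChannelAttention(ch)
        self.sa = nn.MultiheadAttention(ch, heads, batch_first=True)
        self.norm = nn.LayerNorm(ch)

    def forward(self, x):
        b, c, h, w = x.shape
        y = self.ca(torch.relu(self.conv(x))) + x      # inner residual
        t = y.flatten(2).transpose(1, 2)               # (B, HW, C) tokens
        t = self.sa(self.norm(t), self.norm(t), self.norm(t))[0]
        y = y + t.transpose(1, 2).reshape(b, c, h, w)  # attention branch
        return y + x                                   # outer residual

x = torch.randn(1, 64, 32, 32)
print(DualResidualBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```

The two residual paths let the convolutional features and the attention output refine the same signal without either branch having to re-learn the identity mapping.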

Citations: 0
Towards a Model of Color Reproduction Difference
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-02-06 | DOI: 10.1002/col.22969
Gregory High, Peter Nussbaum, Phil Green

It is difficult to predict the visual difference between cross-media color reproductions. Typically, visual difference occurs due to the limitations of each output medium's color gamut, the difference in substrate colors, and the gamut mapping operations used to transform the source material. However, for pictorial images the magnitude of the resulting visual difference is also somewhat content dependent. Previously, we created an interval scale of overall visual difference (ΔV) by comparing gamut mapped images side-by-side on a variety of simulated output media. In this paper we use the preexisting visual difference data, together with the known source images, as well as information relating to the output gamuts, to create a model of color reproduction difference which is both output-gamut and source-image dependent. The model generalizes well for a range of images, and therefore performs better than mean ΔE00 as a predictor of visual difference. In addition, the inclusion of coefficients derived directly from the source images provides insight into the main drivers of the visual difference.
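For reference, the mean ΔE00 baseline the authors compare against can be computed in a few lines. This sketch uses scikit-image's CIEDE2000 implementation; the random arrays stand in for a real source/reproduction pair.

```python
# Baseline predictor discussed in the abstract: the mean CIEDE2000 difference
# between a source image and its reproduction, via scikit-image.
import numpy as np
from skimage.color import rgb2lab, deltaE_ciede2000

rng = np.random.default_rng(0)
src = rng.random((64, 64, 3))          # source image, float RGB in [0, 1]
rep = np.clip(src + 0.02 * rng.standard_normal(src.shape), 0, 1)

dE = deltaE_ciede2000(rgb2lab(src), rgb2lab(rep))  # per-pixel ΔE00 map
print(f"mean ΔE00 = {dE.mean():.3f}")
```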

Citations: 0
Unified Color Harmony Model
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-02-02 | DOI: 10.1002/col.22977
Long Xu, Dongyuan Liu, Su Jin Park, Sangwon Lee

Color harmony is an aesthetic sensation evoked by the balanced and coherent arrangement of the colors of visual elements. While traditional methods define harmonious subspaces from geometric relationships or numerical formulas, we employ a data-driven approach to create a unified model for evaluating and generating color combinations of arbitrary sizes. By treating color sequences as linguistic sentences, we construct a color combinations generator using SeqGAN, a generative model capable of learning discrete data through reinforcement learning. The resulting model produces color combinations as much preferred as those by the best models of each size and excels at penalizing color combinations from random sampling. The distribution of the generated colors has more diverse hues than the input data, in contrast to the NLP-based model that predominantly predicts achromatic colors due to exposure bias. The flexible structure of our model allows for simple extension to additional conditions such as group preference or emotional keywords.
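Treating color sequences as sentences presupposes a discrete color vocabulary. The sketch below shows one plausible tokenization step, quantizing RGB triples onto a fixed 5x5x5 palette grid; the grid size and scheme are assumptions for illustration, not the paper's actual preprocessing.

```python
# Turning color combinations into token "sentences" for a sequence model.
# The 5x5x5 RGB grid (a 125-token vocabulary) is an illustrative assumption.
import numpy as np

LEVELS = 5                                  # vocabulary size = 5**3 = 125

def color_to_token(rgb):
    """Map an (r, g, b) triple in [0, 1] to a discrete vocabulary index."""
    q = np.minimum((np.asarray(rgb) * LEVELS).astype(int), LEVELS - 1)
    return int(q[0] * LEVELS**2 + q[1] * LEVELS + q[2])

def token_to_color(tok):
    """Invert the mapping back to the palette-bin center."""
    r, g, b = tok // LEVELS**2, (tok // LEVELS) % LEVELS, tok % LEVELS
    return tuple((np.array([r, g, b]) + 0.5) / LEVELS)

combo = [(0.9, 0.2, 0.1), (0.95, 0.8, 0.2), (0.2, 0.3, 0.7)]
tokens = [color_to_token(c) for c in combo]  # a "sentence" for the generator
print(tokens, [token_to_color(t) for t in tokens])
```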

Citations: 0
Applying Color Appearance Model CAM16-UCS in Image Processing Under HDR Viewing Conditions
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-01-28 | DOI: 10.1002/col.22972
Xinye Shi, Ming Ronnier Luo, Yuechen Zhu

Achieving successful cross-media color reproduction is very important in image processing. The purpose of this study is to accumulate high-dynamic-range data to verify and modify the CAM16-UCS model. The study comprises two experiments. The first experiment collected corresponding-color data between color patches on a display and the real scene viewed under high-dynamic-range viewing conditions; the results were used to refine the CAM16-UCS model. Six illumination levels (15, 100, 1000, 3160, 10 000, and 32 000 lx) and 13 test color samples were used, and ten observers adjusted the color patches on the display to match the color samples of the real scene. The visual results showed a clear trend: an increase in illumination level raised vividness perception (both lightness and colorfulness increased). However, CAM16-UCS did not accurately predict the visual results, especially in the lightness direction. The model was therefore refined to achieve satisfactory performance and to faithfully reflect the visual phenomena. Even so, the modified model alone could not achieve successful color image reproduction, especially under low illumination conditions. Experiment 2 adjusted the overall lightness and colorfulness of the image, and its results were used to extend the model for image reproduction. An independent experiment verified that images generated by the new model matched the real environment well, indicating that the model performs well in scene restoration.
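For context, converting a stimulus into the CAM16-UCS coordinates the study refines looks roughly like this with the colour-science package (assumed version 0.4+). The D65 white point, adapting luminance L_A, and background luminance factor Y_b are illustrative viewing-condition choices, not the experiment's actual parameters.

```python
# Minimal sketch: XYZ -> CAM16 appearance correlates -> CAM16-UCS J'a'b',
# using the colour-science package. All viewing-condition values are
# illustrative assumptions.
import numpy as np
import colour

XYZ   = np.array([19.01, 20.00, 21.78])      # test stimulus (Y scaled 0-100)
XYZ_w = np.array([95.05, 100.00, 108.88])    # assumed D65 white point
L_A, Y_b = 318.31, 20.0                      # adapting luminance, background

spec = colour.XYZ_to_CAM16(XYZ, XYZ_w, L_A, Y_b)   # CAM16 appearance model
Jab = colour.JMh_CAM16_to_CAM16UCS(np.array([spec.J, spec.M, spec.h]))
print("CAM16-UCS J'a'b' =", Jab)
```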

Citations: 0
Metamer Mismatching Predicts Color Difference Ellipsoids
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-01-24 | DOI: 10.1002/col.22976
Emitis Roshan, Brian Funt

It is well known that color-discrimination thresholds vary throughout color space, as is easily observed from the familiar MacAdam ellipses plotted in chromaticity space. But why is this the case? Existing formulations of uniform color spaces (e.g., CIELAB, CIECAM02, CAM16-UCS) and their associated color-difference ΔE metrics are all models, not theories, based on fits to psychophysical data. While they are of great practical value, they provide no theoretical understanding as to why color discrimination varies as it does. In contrast, the hypothesis advanced and tested here is that the degree of color variability created by metamer mismatching is the primary (although not exclusive) factor underlying the variation in color-discrimination thresholds throughout color space. Not only is it interesting to understand the likely cause of the variation, but knowing the cause may foster the development of more accurate color difference metrics.

Citations: 0
Categorical color perception shown in a cross-lingual comparison of visual search
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-01-10 | DOI: 10.1002/col.22964
Elley Wakui, Dimitris Mylonas, Serge Caparos, Jules Davidoff

Categorical perception (CP) for colors entails that hues within a category look more similar than would be predicted by their perceptual distance. We examined color CP in both a UK and a remote population (Himba) for newly acquired and long-established color terms. Previously, the Himba language used the same color term for blue and green but now they have labels that match the English terms. However, they still have no color terms for the purple areas of color space. Hence, we were able to investigate a color category boundary that exists in the Himba language but not in English as well as a boundary that is the same for both. CP was demonstrated for both populations in a visual search task for one different hue among 12 otherwise similar hues; a task that eliminated concerns of label matching. CP was found at the color-category boundaries that are specific to each language. Alternative explanations of our data are discussed and, in particular, that it is the task-dependent use of categorical rather than non-categorical (perceptual) color networks which produces CP. It is suggested that categorical networks for colors are bilaterally represented and are the default choice in a suprathreshold similarity judgment.

Citations: 0
Color Space Conversion Model From CMYK to CIELab Based on Stacking Ensemble Learning
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-01-10 | DOI: 10.1002/col.22971
Hongwu Zhan, Yifei Zou, Yinwei Zhang, Weiwei Gong, Fang Xu

This paper develops a method based on a stacking ensemble learning model to achieve more accurate conversion from CMYK colors to LAB colors. The model employs tetrahedral interpolation, radial basis function (RBF) interpolation, and KAN as base learners, with linear regression as the meta-learner. Our findings show that the stacking-based model outperforms single models in accuracy for color conversion. In the empirical study, color blocks were printed and the collected data was measured to train and validate the stacking ensemble learning model. The results show that the stacking-based model achieves superior accuracy in color space conversion tasks. This research has substantial practical implications for enhancing color management technology in the printing industry.
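The stacking recipe itself is straightforward to sketch: out-of-fold predictions from the base learners become the features of a linear-regression meta-learner. In the hedged example below, k-nearest-neighbors and random-forest regressors stand in for the paper's tetrahedral-interpolation, RBF, and KAN base learners, and the CMYK/Lab data are synthetic placeholders.

```python
# Minimal stacking sketch: base learners -> out-of-fold predictions ->
# linear-regression meta-learner. The base models and synthetic data are
# stand-ins, not the paper's actual components.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import KFold
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 4))                       # fake CMYK in [0, 1]
Y = rng.random((500, 3)) * [100, 255, 255] - [0, 128, 128]  # fake Lab targets

bases = [KNeighborsRegressor(5), RandomForestRegressor(50, random_state=0)]
oof = np.zeros((len(X), len(bases) * 3))       # out-of-fold base predictions
for i, model in enumerate(bases):
    for tr, va in KFold(5, shuffle=True, random_state=0).split(X):
        model.fit(X[tr], Y[tr])                # fit on 4 folds,
        oof[va, i * 3:(i + 1) * 3] = model.predict(X[va])  # predict the 5th

meta = LinearRegression().fit(oof, Y)          # meta-learner on stacked features
print("meta R^2 on stacked features:", round(meta.score(oof, Y), 3))
```

Using out-of-fold rather than in-sample base predictions keeps the meta-learner from simply memorizing base-learner overfit.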

Citations: 0
Comparing AI and Human Emotional Responses to Color: A Semantic Differential and Word-Color Association Approach
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2025-01-07 | DOI: 10.1002/col.22978
Ling Zheng, Long Xu

This study investigates the ability of artificial intelligence (AI) to simulate human emotional responses to color using two established methods: semantic differential (SD) method and word-color association (WCA) approach. The SD method quantifies emotional reactions to colors through bipolar adjective pairs (e.g., warm–cool, heavy–light), while the WCA method explores associations between specific words and colors. AI responses were compared with data from human participants across various demographics. Results show that AI consistently evaluates basic emotional dimensions, such as warm–cool and heavy–light, with high accuracy, often surpassing human consistency. However, AI struggled with more subjective and culturally influenced dimensions like modern–classical and active-passive. In the WCA experiment, AI replicated many general color associations but faced challenges with complex emotions like joy and anticipation. These findings highlight AI's potential in tasks requiring standardized emotional responses but reveal its limitations in capturing nuanced human emotions, especially in culturally sensitive contexts.
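The SD comparison reduces to a small computation: average the bipolar-scale scores per color, then correlate the AI and human mean profiles per dimension. A minimal sketch with fabricated placeholder ratings:

```python
# Comparing AI and human semantic-differential profiles per bipolar scale.
# All rating values below are fabricated placeholders for real survey data.
import numpy as np

scales = ["warm-cool", "heavy-light", "modern-classical"]
human = np.array([[1.8, -0.5, 0.3],    # rows: colors; cols: mean scale scores
                  [-1.2, 0.9, -0.7],
                  [0.4, 1.1, 1.0]])
ai = np.array([[1.6, -0.4, -0.2],
               [-1.3, 1.0, 0.5],
               [0.5, 0.8, -0.9]])

for j, name in enumerate(scales):
    r = np.corrcoef(human[:, j], ai[:, j])[0, 1]   # AI-human agreement
    print(f"{name:18s} AI-human r = {r:+.2f}")
```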

Citations: 0
The colour Technology of Under the Caribbean (Hans Hass, 1954) Through a Comparison of Original Film Sources and Archival Documents
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2024-12-29 | DOI: 10.1002/col.22974
Rita Clemens

Hans Hass' Under the Caribbean (1954, LI, AT, DE) was one of the world's first underwater colour films. As such, it provides a unique case study and raises interesting questions about the film's colour technology, which combined 35 mm chromogenic negative and 16 mm Kodachrome processes with Technicolor imbibition printing in an interweaving of colour processes. Research into the vast amount of Hass' film material held at the Filmarchiv Austria has not yet revealed any of the original Kodachrome footage of this film, nor its opticals. However, based on archival documents, it was possible to confirm and reconstruct the workflow Technicolor adopted for this film. Investigating the production history of Under the Caribbean not only provides film-historical knowledge of this specific film, but also technical insights into the production of other films of the early 1950s that combined several colour processes. This research is presented together with a discussion of the restoration possibilities offered by the source material, that is, the cut negative and several release prints.

Citations: 0
Auto-White Balance Algorithm of Skin Color Based on Asymmetric Generative Adversarial Network
IF 1.2 | CAS Tier 3 (Engineering & Technology) | JCR Q4 (Chemistry, Applied) | Pub Date: 2024-12-24 | DOI: 10.1002/col.22970
Sicong Zhou, Hesong Li, Wenjun Sun, Fanyi Zhou, Kaida Xiao

Skin color constancy under nonuniform correlated color temperatures (CCT) and multiple light sources has long been a hot issue in color science. A higher-quality skin color reproduction method has broad application prospects in camera photography, face recognition, and other fields. The chain of steps from the 14-bit or 16-bit RAW picture captured by the camera to the final 8-bit JPG output is called the image processing pipeline, in which the auto-white balance stage has a decisive impact on the skin color reproduction result. Traditional auto-white balance algorithms are based on statistical assumptions, and the estimated illuminant color is obtained through illuminant estimation. However, the traditional gray-world, perfect-reflector, and other auto-white balance algorithms perform unsatisfactorily under non-uniform or complex light sources. Methods based on sample statistics approach this problem from another direction. Deep learning algorithms, especially the generative adversarial network (GAN), are well suited to establishing mappings between pictures and have performed excellently in image reconstruction, image translation, defogging, and colorization. This paper proposes a new solution to this problem. An asymmetric UNet3+-shaped generator integrates global and local information to obtain a more refined correction matrix incorporating details of the whole image. The discriminator is a patch discriminator, which focuses more on image details by changing the attention field. The dataset used in this article is the Liverpool-Leeds Skin-color Database (LLSD) plus some supplementary images, covering the skin color of more than 960 subjects under D65 and different light sources. Finally, we calculate the CIEDE2000 color difference and other image quality indices between the test skin-color JPEG picture corrected by the auto-white balance algorithm and the skin color under the corresponding D65 to evaluate the effect of white balance correction. The results show that the asymmetric GAN algorithm proposed in this paper yields higher-quality skin color reproduction than the traditional auto-white balance algorithms and an existing deep-learning WB algorithm.
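The gray-world baseline the abstract contrasts against is worth seeing concretely: it assumes the scene average is achromatic and rescales each channel toward that gray. A minimal sketch, with a random image standing in for a linear RAW frame:

```python
# Gray-world white balance: assume the scene average is achromatic and
# rescale channels so their means match. The random image is a placeholder.
import numpy as np

def gray_world(img):
    """img: float array (H, W, 3), linear RGB. Returns a balanced copy."""
    means = img.reshape(-1, 3).mean(axis=0)        # per-channel averages
    gains = means.mean() / means                   # pull each channel to gray
    return np.clip(img * gains, 0.0, 1.0)

rng = np.random.default_rng(1)
raw = rng.random((4, 4, 3)) * [1.0, 0.8, 0.6]      # simulate a warm color cast
print(gray_world(raw).reshape(-1, 3).mean(axis=0)) # near-equal channel means
```

It is exactly this uniform-illuminant assumption that breaks down under the mixed and non-uniform lighting the paper targets.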

Citations: 0