Harnessing Multi-Modal Large Language Models for Measuring and Interpreting Color Differences
Zhihua Wang, Yu Long, Qiuping Jiang, Chao Huang, Xiaochun Cao
IEEE Transactions on Image Processing, vol. 35, pp. 2292-2304
DOI: 10.1109/TIP.2024.3522802
Published: 2025-01-01
URL: https://ieeexplore.ieee.org/document/10820056/
Citations: 0
Abstract
The accurate measurement of perceptual color differences (CDs) between two images plays an important role in modern smartphone photography. Although traditional CD metrics provide numerical scores to quantify color variations, they often lack the ability to offer intuitive insights or explanations that reflect the factors behind these differences in a way that aligns with human perception and reasoning. Here, we present CD-Reasoning, an innovative method designed not merely to compute numerical CD scores but also to provide a detailed rationale for the observed CDs between images. This method goes beyond simple numerical quantification, delivering a more profound and explanatory analysis that bridges quantitative assessment with the qualitative reasoning characteristic of human perception. The development of the CD-Reasoning model begins with the compilation of a multi-modal CD dataset, dubbed M-SPCD, built on the existing SPCD, for which we collect textual descriptions that detail the quantification of CDs across seven pivotal attributes: white balance, brightness contrast, color contrast, overall brightness, overall color, shadow detail, and highlight detail. Utilizing the newly curated M-SPCD dataset, we enhance the capabilities of cutting-edge Multimodal Large Language Models (MLLMs) not only to accurately assess numerical CD scores but also to provide in-depth reasoning that explains the CDs between two images. Extensive experiments demonstrate that the proposed CD-Reasoning not only achieves superior accuracy compared to state-of-the-art CD metrics but also significantly exceeds leading MLLMs in interpreting CDs. Source code will be available at https://github.com/LongYu-LY/CD-Reasoning.
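For readers unfamiliar with numerical CD scoring, the sketch below shows the kind of classical, score-only baseline the abstract contrasts with: a pixel-wise mean CIELAB Delta E (1976) between two aligned RGB images. This is an illustrative assumption, not the paper's CD-Reasoning method (which predicts scores and textual rationales with an MLLM); it shows why a single number carries no explanation of attributes such as white balance or shadow detail.

```python
# Illustrative baseline (not the paper's method): mean CIELAB Delta E (1976)
# between two aligned images, each given as a list of 8-bit (R, G, B) pixels.

def srgb_to_linear(c):
    """Undo the sRGB gamma on one 8-bit channel value."""
    c /= 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def rgb_to_lab(rgb):
    """Convert one sRGB pixel to CIELAB (D65 white point)."""
    r, g, b = (srgb_to_linear(v) for v in rgb)
    # Linear sRGB -> CIE XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    def f(t):  # CIELAB companding function
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116

    fx, fy, fz = f(x / 0.95047), f(y), f(z / 1.08883)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def mean_delta_e(img_a, img_b):
    """Mean Euclidean distance in CIELAB over paired pixels (Delta E 1976)."""
    total = 0.0
    for pa, pb in zip(img_a, img_b):
        la, lb = rgb_to_lab(pa), rgb_to_lab(pb)
        total += sum((u - v) ** 2 for u, v in zip(la, lb)) ** 0.5
    return total / len(img_a)

# Two tiny 2-pixel "images": identical content yields a score of 0.
print(mean_delta_e([(255, 0, 0), (0, 128, 255)], [(255, 0, 0), (0, 128, 255)]))
```

A score like this quantifies *how much* two photos differ in color but says nothing about *why* (e.g., a cooler white balance versus crushed shadows), which is precisely the interpretive gap CD-Reasoning targets.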