Reconstructing hyperspectral images of textiles from a single RGB image utilizing the multihead self-attention mechanism

Authors: Jianxin Zhang, Jin Ma, Miao Qian, Ming Wang
Journal: Textile Research Journal (Q2, Materials Science, Textiles; Impact Factor 1.6)
DOI: 10.1177/00405175241268790
Publication date: 2024-09-14 (Journal Article)

Abstract: Hyperspectral images possess abundant information and play a pivotal role in enhancing the accuracy of color difference detection in textiles. However, traditional hyperspectral imaging methods require costly equipment and intricate operational procedures. To facilitate the broad application of hyperspectral imaging technology in textile quality inspection, this article proposes a novel deep learning model based on a multihead self-attention mechanism that reconstructs the hyperspectral information of plain weave textiles from a single RGB image. The model employs an encoder-decoder architecture with pyramid pooling convolutional operations to integrate multiscale features of plain weave cotton-linen textiles, capturing details and contextual information in textile images more precisely and thereby improving the accuracy of hyperspectral image reconstruction. In addition, an attention mechanism enlarges the model's receptive field and sharpens its focus on key regions of the input image and feature maps, reducing the weight given to redundant information during network learning and improving the network's feature extraction capability. Through these methods, successful reconstruction of plain weave textile hyperspectral information from a single RGB image was achieved. Quantitative and qualitative tests were conducted on two datasets, namely the NTIRE 2020 dataset and a self-made textile dataset, and the proposed approach exhibited promising results on both. Specifically, on the textile dataset the reconstructed hyperspectral images achieved a root mean square error of 0.0344, a peak signal-to-noise ratio of 29.945, a spectral angle mapper (SAM) value of 3.753, and a structural similarity index measure of 0.955. In the reconstructed hyperspectral colorimetric experiment, the maximum average color difference was 2.641. These results demonstrate that the method can meet the requirements of textile color measurement applications.
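The abstract names three architectural ingredients: an encoder-decoder backbone, pyramid pooling over multiple scales, and multihead self-attention on the feature maps. The sketch below (PyTorch) illustrates one plausible way to combine them to map a 3-channel RGB image to a 31-band spectral cube. All layer sizes, class names, and the 31-band output are my own assumptions for illustration; this is not the authors' published network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidPooling(nn.Module):
    """Pool features at several spatial scales, project, upsample, and fuse
    (in the spirit of PSPNet-style pyramid pooling). Scales are assumed."""

    def __init__(self, channels, scales=(1, 2, 4)):
        super().__init__()
        self.scales = scales
        self.convs = nn.ModuleList(
            nn.Conv2d(channels, channels // len(scales), 1) for _ in scales
        )
        self.fuse = nn.Conv2d(
            channels + (channels // len(scales)) * len(scales), channels, 1
        )

    def forward(self, x):
        h, w = x.shape[-2:]
        feats = [x]
        for s, conv in zip(self.scales, self.convs):
            p = F.adaptive_avg_pool2d(x, s)          # context at scale s x s
            feats.append(F.interpolate(conv(p), size=(h, w),
                                       mode="bilinear", align_corners=False))
        return self.fuse(torch.cat(feats, dim=1))    # merge multiscale context


class RGBToHSI(nn.Module):
    """Hypothetical encoder-decoder mapping RGB (3 channels) to a `bands`-channel
    hyperspectral cube, with multihead self-attention over bottleneck pixels."""

    def __init__(self, bands=31, width=64, heads=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.pyramid = PyramidPooling(width)
        self.attn = nn.MultiheadAttention(width, heads, batch_first=True)
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(width, width, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(width, bands, 3, padding=1),
        )

    def forward(self, rgb):
        z = self.pyramid(self.encoder(rgb))
        b, c, h, w = z.shape
        seq = z.flatten(2).transpose(1, 2)        # (B, H*W, C) pixel tokens
        attn_out, _ = self.attn(seq, seq, seq)    # self-attention across pixels
        z = (seq + attn_out).transpose(1, 2).reshape(b, c, h, w)
        return self.decoder(z)                    # upsample back to input size
```

The self-attention step weights each pixel's features by their similarity to all other pixels, which matches the abstract's claim of enlarging the receptive field and down-weighting redundant regions.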
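The quantitative results are reported as RMSE, PSNR, SAM, and SSIM, plus a CIELAB color difference. The NumPy sketch below shows the standard definitions of these metrics for an (H, W, bands) hyperspectral cube; function names are my own, and this is illustrative rather than the authors' evaluation code (SSIM is omitted, as it needs windowed statistics beyond a short sketch).

```python
import numpy as np


def rmse(gt, rec):
    """Root mean square error over all pixels and bands."""
    return float(np.sqrt(np.mean((gt - rec) ** 2)))


def psnr(gt, rec, data_range=1.0):
    """Peak signal-to-noise ratio in dB for reflectances in [0, data_range]."""
    mse = np.mean((gt - rec) ** 2)
    return float(10.0 * np.log10(data_range ** 2 / mse))


def sam(gt, rec, eps=1e-8):
    """Mean spectral angle (degrees) between per-pixel spectra.
    gt, rec: arrays of shape (H, W, bands)."""
    dot = np.sum(gt * rec, axis=-1)
    denom = np.linalg.norm(gt, axis=-1) * np.linalg.norm(rec, axis=-1) + eps
    angles = np.arccos(np.clip(dot / denom, -1.0, 1.0))
    return float(np.degrees(angles).mean())


def delta_e_ab(lab1, lab2):
    """CIE76 color difference (Euclidean distance between CIELAB triples).
    The paper's exact color-difference formula is not stated in the abstract."""
    d = np.asarray(lab1, dtype=float) - np.asarray(lab2, dtype=float)
    return float(np.linalg.norm(d))
```

A maximum average ΔE of 2.641 sits near the commonly cited perceptibility thresholds for textile color matching, which is the basis of the abstract's claim that the method suits color measurement applications.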
Journal description:
The Textile Research Journal is the leading peer-reviewed journal for textile research. It is devoted to the dissemination of fundamental, theoretical, and applied scientific knowledge in materials, chemistry, manufacture, and system sciences related to fibers, fibrous assemblies, and textiles. The Journal serves authors and subscribers worldwide, and it is selective in accepting contributions on the basis of merit, novelty, and originality.