Design of pattern image generation technique based on self-attentive residual conditional generative adversarial network

IF 1.7 · CAS Tier 4, Multidisciplinary · JCR Q2 (MULTIDISCIPLINARY SCIENCES) · Journal of Radiation Research and Applied Sciences · Pub Date: 2024-10-19 · DOI: 10.1016/j.jrras.2024.101157
Zhihai Wang
{"title":"Design of pattern image generation technique based on self-attentive residual conditional generative adversarial network","authors":"Zhihai Wang","doi":"10.1016/j.jrras.2024.101157","DOIUrl":null,"url":null,"abstract":"<div><div>Image generation techniques have made remarkable progress in the digital image processing and computer vision. However, traditional generation models cannot meet the complexity and diversity requirements of patterned images. In view of this, the study aims to enhance the quality of generated pattern images, which uses improved residual block, and introduces a self-attention mechanism to compute the weight parameters of the input features to enhance the accuracy. Comparing with three image generation models, the research model shows lower Frechette initial distance, which is better than the other three methods, and the average Frechette initial distance values in the four scenes are 175.23, 176.41, 174.41, and 165.23. Generated mouths and eyes: the average values of Frechette initial distances reach 98.23 and 97.24, respectively. For emotion classification, the Frechette initial distance averages for sad, excited, and calm emotions were 82.34, 75.63, and 70.21, respectively. The model was trained up to 2500 iterations, and the loss value was reduced to 0.54, with an accuracy of 98.23%, confirming its effectiveness and high performance. The self attention residual network enhances the model's ability to capture image details, effectively improving the quality and accuracy of image generation, and providing a new technological path for radiation imaging data processing and analysis in radiation science.</div></div>","PeriodicalId":16920,"journal":{"name":"Journal of Radiation Research and Applied Sciences","volume":"17 4","pages":"Article 101157"},"PeriodicalIF":1.7000,"publicationDate":"2024-10-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Journal of Radiation Research and Applied Sciences","FirstCategoryId":"103","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S1687850724003418","RegionNum":4,"RegionCategory":"综合性期刊","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"MULTIDISCIPLINARY SCIENCES","Score":null,"Total":0}
Citations: 0

Abstract

Image generation techniques have made remarkable progress in digital image processing and computer vision. However, traditional generative models cannot meet the complexity and diversity requirements of patterned images. To address this, the study aims to enhance the quality of generated pattern images by using improved residual blocks and introducing a self-attention mechanism that computes weight parameters over the input features to improve accuracy. Compared with three other image generation models, the proposed model achieves a lower Fréchet Inception Distance (FID), with average FID values of 175.23, 176.41, 174.41, and 165.23 across the four test scenes. For generated mouths and eyes, the average FID values reach 98.23 and 97.24, respectively. For emotion classification, the average FID values for sad, excited, and calm emotions were 82.34, 75.63, and 70.21, respectively. After training for up to 2,500 iterations, the loss fell to 0.54 and the accuracy reached 98.23%, confirming the model's effectiveness and high performance. The self-attention residual network strengthens the model's ability to capture image details, effectively improving the quality and accuracy of image generation and providing a new technical path for radiation imaging data processing and analysis in radiation science.
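The abstract does not detail how the self-attention mechanism and residual blocks are combined. Below is a minimal, hypothetical PyTorch sketch of one common construction (SAGAN-style spatial self-attention with a learned residual weight); all class and parameter names are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial feature maps.

    Computes attention weights between all spatial positions and adds
    the attended features back to the input via a learned scalar gamma.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned mixing weight

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)  # (b, h*w, c//8)
        k = self.key(x).flatten(2)                    # (b, c//8, h*w)
        attn = torch.softmax(q @ k, dim=-1)           # (b, h*w, h*w)
        v = self.value(x).flatten(2)                  # (b, c, h*w)
        out = (v @ attn.transpose(1, 2)).reshape(b, c, h, w)
        return self.gamma * out + x                   # residual connection

class ResidualBlock(nn.Module):
    """Plain convolutional residual block; the paper's 'improved'
    variant is not described in the abstract, so this is a baseline."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.body(x) + x)
```

The reported numbers are Fréchet Inception Distance (FID) scores, which measure the distance between Gaussians fitted to Inception-v3 activations of real and generated images; lower is better. A sketch of the standard formula, computed from precomputed activation statistics (not the paper's own code):

```python
import numpy as np
from scipy import linalg

def fid(mu_real, sigma_real, mu_gen, sigma_gen):
    """FID = ||mu_r - mu_g||^2 + Tr(S_r + S_g - 2 * (S_r S_g)^(1/2))."""
    covmean = linalg.sqrtm(sigma_real @ sigma_gen)
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    diff = mu_real - mu_gen
    return float(diff @ diff + np.trace(sigma_real + sigma_gen - 2.0 * covmean))
```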
Source journal: Journal of Radiation Research and Applied Sciences
Self-citation rate: 5.90%
Articles per year: 130
Review time: 16 weeks
About the journal: Journal of Radiation Research and Applied Sciences provides a high-quality medium for the publication of substantial, original scientific and technological papers on the development and applications of nuclear and radiation techniques and isotopes in biology, medicine, drugs, biochemistry, microbiology, agriculture, entomology, food technology, chemistry, physics, solid-state science, engineering, and environmental and applied sciences.
Latest articles from this journal:
- Implementation of homotopy analysis method for entropy-optimized two-phase nanofluid flow in a bioconvective non-Newtonian model with thermal radiation
- Comparative analysis of machine learning techniques for estimating dynamic viscosity in various nanofluids for improving the efficiency of thermal and radiative systems
- Multi-modal feature integration for thyroid nodule prediction: Combining clinical data with ultrasound-based deep features
- The New Extended Exponentiated Burr XII distribution: Properties and applications
- Introducing the unit Zeghdoudi distribution as a novel statistical model for analyzing proportional data