Significance: Traditional optical-property image reconstruction techniques are often constrained by artifacts arising from suboptimal source-detector configurations, the amplification of measurement noise during inversion, and limited depth sensitivity, which particularly impacts the accurate reconstruction of deep-seated anomalies such as tumors.
Aim: To overcome these challenges, this research proposes and implements an end-to-end deep learning framework, the Channel Attention Fusion Network (CAFNet).
Approach: CAFNet employs AUTOMAP for domain transformation, feature extraction modules for multi-scale feature learning, and channel attention mechanisms to prioritize critical features. The proposed model is trained and tested on simulated and experimental datasets, utilizing metrics such as mean squared error (MSE), peak signal-to-noise ratio (PSNR), and structural similarity index (SSIM) for evaluating model performance.
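The channel attention mechanism referenced above can be illustrated with a minimal squeeze-and-excitation-style sketch; the function name, layer sizes, and random weights below are hypothetical stand-ins, not CAFNet's actual implementation:

```python
import numpy as np

def channel_attention(x, reduction=4):
    """Squeeze-and-excitation style channel attention (illustrative sketch).

    x: feature map of shape (channels, height, width).
    Returns the feature map reweighted per channel.
    """
    c = x.shape[0]
    # Squeeze: global average pooling per channel -> vector of shape (c,)
    z = x.mean(axis=(1, 2))
    # Excitation: a two-layer bottleneck (random weights for illustration only)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    h = np.maximum(w1 @ z, 0.0)            # ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # sigmoid gate in (0, 1)
    # Scale: reweight each channel by its learned attention score
    return x * s[:, None, None]

feat = np.ones((8, 4, 4))
out = channel_attention(feat)
```

In a trained network the bottleneck weights would be learned, so channels carrying features critical to reconstruction receive gates near 1 while less informative channels are suppressed.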
Results: CAFNet outperforms traditional and state-of-the-art models, achieving the highest SSIM and PSNR values with the lowest MSE. It effectively reconstructs optical properties with high precision, showcasing its ability to detect and localize inclusions in experimental phantoms. An ablation study is performed to highlight the importance of channel attention in CAFNet.
Conclusions: CAFNet demonstrates a significant advancement in diffuse optical imaging, addressing challenges posed by measurement noise and domain variability. Its robust performance highlights its potential for practical medical imaging applications, offering a reliable solution for reconstructing optical properties in complex scenarios.
