Pub Date: 2024-09-01 | DOI: 10.1016/j.infrared.2024.105546
Naima Farman, Muhammad Mumtaz, M. Ahsan Mahmood, A.H. Dogar, Sadia Tahir, Kashif Raza, Izhar Ahmad
Terahertz time-domain spectroscopy (THz-TDS) has been used to study the temperature-dependent refractive index, absorption coefficient and dielectric constant of polyvinyl chloride/polystyrene (PVC/PS) blends in the frequency range of 0.2–1.8 THz. Moreover, the Sellmeier and thermo-optic coefficients of PVC/PS blends with different weight ratios have been explored in the temperature range of 25–80 °C. These parameters are used to evaluate the dispersion properties of the blends over the observed frequency range. The refractive index and the real part of the dielectric constant of these blends show a clear temperature dependence: their values decrease linearly with increasing temperature up to 80 °C, whereas no noticeable change is observed in the imaginary part of the dielectric constant or in the absorption coefficient. These results provide a database of temperature-dependent optical and dielectric parameters of PVC/PS polymer blends for their efficient use in device fabrication for THz technology.
{"title":"Temperature-dependent optical and dielectric properties of polyvinyl chloride and polystyrene blends in terahertz regime","authors":"Naima Farman, Muhammad Mumtaz, M. Ahsan Mahmood, A.H. Dogar, Sadia Tahir, Kashif Raza, Izhar Ahmad","doi":"10.1016/j.infrared.2024.105546","DOIUrl":"10.1016/j.infrared.2024.105546","url":null,"abstract":"<div><p>Terahertz time-domain spectroscopy (THz-TDS) has been used to study the temperature-dependent refractive index, absorption coefficient and dielectric constant of polyvinyl chloride/polystyrene (PVC/PS) blends in frequency range of 0.2–1.8 THz. Moreover, the Sellmeier and thermo optic coefficients of PVC/PS blends with different weight ratios have been explored in the temperature range of 25–80 °C. These parameters are used to evaluate the dispersion properties of these blends in the observed frequency range. A clear indication of temperature dependence on the values of refractive index and the real dielectric constant of these blends have been observed. Their values decrease linearly with increasing temperature up to 80 °C. Whereas, no noticeable change has been observed in the imaginary dielectric constant and the absorption coefficient. These results provide a database of temperature-dependent optical and dielectric parameters of PVC/PS polymer blends for their efficient utilization for device fabrication in THz technology.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105546"},"PeriodicalIF":3.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142130108","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-09-01 | DOI: 10.1016/j.infrared.2024.105483
Indranath Mukhopadhyay, B.E. Billinghurst
{"title":"Addendum to “Perturbation mediated forbidden transitions in the in-plane rocking mode of O-18 substituted methanol: Very high-resolution Fourier transform spectroscopy using globar and synchrotron radiation sources in the 10-µm region” [Infrared Phys. Technol. 128 (2023) 104525]","authors":"Indranath Mukhopadhyay , B.E. Billinghurst","doi":"10.1016/j.infrared.2024.105483","DOIUrl":"10.1016/j.infrared.2024.105483","url":null,"abstract":"","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"141 ","pages":"Article 105483"},"PeriodicalIF":3.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S1350449524003670/pdfft?md5=1ae401c8e5deb12493bbd750aa234ee9&pid=1-s2.0-S1350449524003670-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142095185","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
It is crucial for both producers and consumers to accurately trace the origin of millet, given the significant differences in price and taste between millets from different origins. The traditional method of identifying the origin of millet is time-consuming, laborious, complex, and destructive. In this study, a new method for fast and non-destructive differentiation of millet origins is developed by combining terahertz time-domain spectroscopy with ensemble learning. First, three machine learning algorithms, namely support vector machine (SVM), random forest (RF), and kernel extreme learning machine (KELM), were used to build discriminative models, and the impact of six different preprocessing methods on the models' classification performance was compared. Models employing Savitzky-Golay preprocessing showed a pronounced advantage in accurately determining the millet's geographical origin. Building on these findings, the study introduces an ensemble learning strategy that leverages both TOPSIS and stacking to harness the collective strengths of the three algorithms. This approach distinguishes millets originating from five distinct locations without any parameter fine-tuning. The accuracy, F1 score, and Kappa on the prediction set are all 100 %, significantly outperforming the single models, the traditional voting method, and the plain stacking method. The study suggests that the integration of terahertz time-domain spectroscopy and TOPSIS-Stacking ensemble learning is a promising method for swift, non-intrusive, and highly precise discrimination of millet geographical origins.
{"title":"Identification of millet origin using terahertz spectroscopy combined with ensemble learning","authors":"Xianhua Yin, Hao Tian, Fuqiang Zhang, Chuanpei Xu, Linkai Tang, Yongbing Wei","doi":"10.1016/j.infrared.2024.105547","DOIUrl":"10.1016/j.infrared.2024.105547","url":null,"abstract":"<div><p>It’s crucial for both producers and consumers to accurately trace the origin of millet, given the significant differences in price and taste that exist between millets from various origins. The traditional method of identifying the origin of millet is time-consuming, laborious, complex, and destructive. In this study, a new method for fast and non-destructive differentiation of millet origins is developed by combining terahertz time domain spectroscopy with ensemble learning. Firstly, three machine learning algorithms, namely support vector machine (SVM), random forest (RF), and kernel extreme learning machine (KELM), were used to build different discriminative models, and then the impact of six different preprocessing methods on the models’ classification performance was compared. It was observed that models employing Savitzky-Golay preprocessing exhibited pronounced superiority in accurately determining the millet’s geographical origins. Building upon these findings, the research introduces an innovative ensemble learning strategy, leveraging both topsis and stacking techniques, to harness the collective strengths of the three algorithms. The outcomes of this approach reveal its remarkable capacity to distinguish millets originating from five distinct locations without the necessity for any parameter fine-tuning. The accuracy, F1 score, and Kappa on the prediction set are all 100 %, which significantly outperforms the single model, traditional voting method, and stacking method. The culmination of this study suggests that the integration of terahertz time-domain spectroscopy and TOPSIS-Stacking ensemble learning emerges as a promising method for the swift and non-intrusive discrimination of millet geographical origins with remarkable precision.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105547"},"PeriodicalIF":3.1,"publicationDate":"2024-09-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162939","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-31 | DOI: 10.1016/j.infrared.2024.105539
Cong Guo, Kan Ren, Qian Chen
Current mainstream object detection networks perform well on RGB visible images, but they require substantial computational resources and their performance degrades on low-resolution infrared images. To address these issues, we propose YOLO-SGF, a lightweight algorithm based on you-only-look-once version 8 (YOLOv8). First, the lightweight cross-scale feature-map fusion network GCFVoV is designed as the neck to improve detection accuracy while keeping the complexity of the lightweight network low. A lightweight GCVF module in the GCFVoV neck uses GSConv and Conv to process deep and shallow features respectively, which preserves the implicit connections between channels as much as possible and integrates multi-scale features. Second, we combine ShuffleNetV2-block1 with C2f for feature extraction, making the algorithm lighter and more effective. Finally, we propose the FIMPDIoU loss function, which focuses on objects that are easily overlooked in complex backgrounds and adjusts the prediction boxes using ratios specific to objects of different sizes. Compared with YOLOv8 on our infrared dataset, YOLO-SGF reduces computational space complexity by 50 % and time complexity by 42 %, increases FPS32 by 36.3 %, and improves mAP@0.5∼0.95 by 1.1 % in object detection. Our algorithm enhances object detection in infrared images, especially at night, in low light, and under occlusion. YOLO-SGF can be deployed on embedded edge devices with limited computing power and provides a new idea for lightweight networks.
{"title":"YOLO-SGF: Lightweight network for object detection in complex infrared images based on improved YOLOv8","authors":"Cong Guo, Kan Ren, Qian Chen","doi":"10.1016/j.infrared.2024.105539","DOIUrl":"10.1016/j.infrared.2024.105539","url":null,"abstract":"<div><p>The current mainstream object detection networks perform well in RGB visible images, but they require high computational resource and degrade in performance when applied to low-resolution infrared images. To address above issues, we propose a lightweight algorithm YOLO-SGF based on you-only-look-once version8 (YOLOv8). Firstly, the lightweight cross-scale feature map fusion network GCFVoV designed as neck to solve poor detection accuracy and maintain low complexity in lightweight networks. And a lightweight GCVF module in GCFVoV neck uses GSConv and Conv to process deep and shallow features respectively, which maximally preserves implicit connections between each channel and integrates multi-scale features. Secondly, we utilize ShuffleNetV2-block1 in combination with C2f for feature extraction, making the algorithm more lightweight and effectively. Finally, we propose the FIMPDIoU loss function, which focuses on overlooked objects in complex backgrounds and adjusts the prediction boxes using ratios specific to different sizes of objects. Compared with YOLOv8 in our infrared dataset, YOLO-SGF reduces the computational space complexity by 50 % and time complexity by 42 %, increases FPS<sub>32</sub> by 36.3 % and improves [email protected] ∼ 0.95 by 1.1 % in object detection. Our algorithm enhances the capability of object detection in infrared images especially in nighttime, low light, and occluded conditions. YOLO-SGF enables deployment on embedded edge devices with limited computing power, and provides a new idea for lightweight networks.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105539"},"PeriodicalIF":3.1,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142162940","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-31 | DOI: 10.1016/j.infrared.2024.105537
Zekun Li, Leiying Xie, Ruonan Ji, Yuanping Chen, Shaowei Wang
Efficiently sorting coal gangue and identifying coal types are vital operations in coal preparation, yet they are traditionally resource-consuming, labor-intensive, and potentially hazardous. This work puts forward a straightforward method employing mid-infrared spectroscopy with first-derivative spectra to address these issues. The proposed technique focuses on delineating and enhancing characteristic spectral features to detect subtle differences among samples. Using only a few characteristic bands at 3740–3700 cm−1, 1790–1750 cm−1, 1615–1583 cm−1, 1580–1540 cm−1, 1550–1440 cm−1, 1270–1210 cm−1 and 867–854 cm−1, the method achieves 100 % accurate classification of coal gangue and identification of coal types (bituminite, anthracite, lignite, roof sandstone and gangue) across a total of 250 spectra, without secondary sample processing or the assistance of machine learning algorithms, which simplifies the process considerably. Such a strategy not only significantly improves the efficiency of coal sorting but also supports real-time on-site detection. It offers a theoretical foundation for advanced coal separation technology and its implementation in real-world mining operations.
{"title":"Classification of coal gangue and identification of coal type based on first-derivative of mid-infrared spectrum","authors":"Zekun Li , Leiying Xie , Ruonan Ji , Yuanping Chen , Shaowei Wang","doi":"10.1016/j.infrared.2024.105537","DOIUrl":"10.1016/j.infrared.2024.105537","url":null,"abstract":"<div><p>Efficiently sorting coal gangue and identifying coal types are vital operations in coal preparation, yet they are traditionally resource-consuming, labor-intensive, and potentially hazardous. This work puts forward an straightforward method employing mid-infrared spectroscopy with first derivative spectrum to address these issues. The proposed technique focuses on the delineation and enhancement of characteristic spectra to detect subtle differences among samples. The method utilizes just a few characteristic spectra of 3740–3700 cm<sup>−1</sup>, 1790–1750 cm<sup>−1</sup>, 1615–1583 cm<sup>−1</sup>, 1580–1540 cm<sup>−1</sup>, 1550–1440 cm<sup>−1</sup>, 1270–1210 cm<sup>−1</sup> and 867–854 cm<sup>−1</sup> to achieve 100 % high-accuracy classification of coal gangue and identification of coal types with total 250 spectra, such as bituminite, anthracite, lignite, roof sandstone and gangue, without the need for secondary sample processing or the assistance of machine learning algorithms, simplifying the process considerably. Such a strategy not only significantly improves the efficiency of coal sorting but also endorses real-time on-site detection. It offers a theoretical foundation for advanced coal separation technology and its implementation in real-world mining operations.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105537"},"PeriodicalIF":3.1,"publicationDate":"2024-08-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142168850","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | DOI: 10.1016/j.infrared.2024.105541
Wangjie Li, Xiaoyi Lv, Yaoyong Zhou, Yunling Wang, Min Li
The aim of fusing visible and infrared images is to generate a single fused image that highlights important targets and preserves textural details. Most deep learning-based fusion algorithms in use today produce decent fusion results; however, their modeling still does not account for the different amounts of information carried by different scenes or regions. We therefore propose SeACPFusion, a luminance-aware adaptive fusion network for infrared and visible images, which adaptively preserves the intensity information of the salient targets in the source images together with the texture information of the background in an optimal ratio. Specifically, we design a pixel-level luminance loss (PBL) to direct the fusion model's training in real time; PBL retains the optimal intensity information according to the pixel luminance ratio of the different source images. In addition, we design the Channel Transformer (CTF) to consider the relationships among different attributes from the perspective of feature channels and to focus on key information through a self-attention mechanism, achieving adaptive fusion. Extensive tests on the MSRS, RoadScene, and TNO datasets demonstrate that SeACPFusion surpasses nine representative deep learning methods on six objective metrics and achieves the best visual results in scenes with overexposure or underexposure. In addition, its relatively efficient operation and small number of model parameters make the algorithm promising as a preprocessing module for downstream complex vision tasks.
{"title":"SeACPFusion: An Adaptive Fusion Network for Infrared and Visible Images based on brightness perception","authors":"Wangjie Li , Xiaoyi Lv , Yaoyong Zhou , Yunling Wang , Min Li","doi":"10.1016/j.infrared.2024.105541","DOIUrl":"10.1016/j.infrared.2024.105541","url":null,"abstract":"<div><p>Generating a single fused image that highlights important targets and preserves textural details is the aim of fusing visible and infrared images. The majority of deep learning-based fusion algorithms now in use can produce decent fusion outcomes; however, the modeling process still lacks consideration of the different amounts of information in different scenes or regions. Thus, we propose in this research SeACPFusion, a luminance-aware adaptive fusion network for infrared and visible images, which adaptively preserves the intensity information of the noticeable targets of the source images with the texture information of the background in an optimal ratio. Specifically, we design pixel-level luminance loss (PBL) to direct the fusion model’s training in real-time, and PBL retains the optimal intensity information according to the pixel luminance ratio of different source images. In addition, we designed the Channel Transformer (CTF) to consider the relationship between different attributes from the point of view of the feature channel and to focus on the key information by using the self-focusing mechanism to achieve the goal of adaptive fusion. Our extensive tests on the MSRS, RoadScene, and TNO datasets demonstrate that SeACPFusion surpasses nine representative deep learning methods on six objective metrics and achieves the best visual results in scenes such as overexposure or underexposure. In addition, the relatively efficient operation and fewer model parameters make our algorithm promising as a preprocessing module for downstream complicated vision tasks.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105541"},"PeriodicalIF":3.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142122125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | DOI: 10.1016/j.infrared.2024.105543
Hong-Wei Chen, Yu-Jun Guo, Yang Li, Yao-Yu Wei
Non-contact sludge measurement methods for storage tanks can address the challenge of measuring the volume of sedimented sludge during long-term storage. While infrared thermography can handle liquid-level detection, its measurement accuracy for the undulating sludge interface is insufficient. This study designed and constructed experimental setups for measuring sludge in storage tanks. Infrared images taken by an infrared camera were used to record the temperature distribution of the tank's outer wall, and a threshold segmentation method was used to determine an accurate sludge boundary line during image processing. Finally, the Three-Dimensional Tank Residue Recovery Algorithm (3D-TRRA) was applied to fit the 3D distribution of the sludge and calculate accurate sludge volumes. The results indicate that the best segmentation is achieved with a threshold of 170, the measurement error for sludge volume is less than 5%, and accurate visual positioning and recognition of the sludge are achieved.
{"title":"Lab-based scale measurements of internal storage of crude oil tank based on non-contact infrared thermography technique","authors":"Hong-Wei Chen, Yu-Jun Guo, Yang Li, Yao-Yu Wei","doi":"10.1016/j.infrared.2024.105543","DOIUrl":"10.1016/j.infrared.2024.105543","url":null,"abstract":"<div><p>Non-contact sludge measurement methods for storage tanks can address the challenge of measuring the volume of sedimented sludge during long-term storage. While infrared thermography technology can address the issue of liquid level detection, its measurement accuracy for the undulating interface of sludge is insufficient. This study designed and constructed experimental setups for measuring sludge in storage tanks. In this study, infrared images taken by an infrared camera were used to record the temperature distribution of the outer wall of the storage tank. The threshold segmentation method is used to determine the accurate sludge boundary line in image processing. Finally, the Three-Dimensional Tank Residue Recovery Algorithm (3D-TRRA) was applied to fit the 3D distribution of the sludge and calculate accurate sludge volumes. The results indicate that the best segmentation is achieved with a threshold of 170. The measurement error for sludge volume is less than 5%. Accurate visual positioning and recognition of sludge are achieved.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105543"},"PeriodicalIF":3.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | DOI: 10.1016/j.infrared.2024.105544
Deming Kong, Yu Zhao, Yuan Dong, Yang Qiu, Shaonan Zheng, Qize Zhong, Quanzhi Sun, Liqi Zhu, Zhikai Gan, Xingyan Zhao, Ting Hu
The n− region is crucial to the performance of n-on-p HgCdTe devices. However, the underlying mechanisms governing its formation remain insufficiently elucidated in the current literature. In this work, the influence of annealing temperature on the n− region formation process was investigated systematically through experiments and one-dimensional (1D) simulation. Two key parameters, the transport rate of interstitials (TrI) and the diffusion coefficient of vacancies (DV), were determined with the 1D model, and their accuracy was validated by experiments. Determining TrI and DV allows more flexible and precise optimization of the n− region in HgCdTe, thereby providing valuable guidance for the cost-effective, high-performance, and reliable preparation of HgCdTe detectors.
{"title":"Study of the n− region formation process in n-on-p HgCdTe devices","authors":"Deming Kong , Yu Zhao , Yuan Dong , Yang Qiu , Shaonan Zheng , Qize Zhong , Quanzhi Sun , Liqi Zhu , Zhikai Gan , Xingyan Zhao , Ting Hu","doi":"10.1016/j.infrared.2024.105544","DOIUrl":"10.1016/j.infrared.2024.105544","url":null,"abstract":"<div><p>The n<sup>−</sup> region is crucial to the performance of n-on-p HgCdTe devices. However, the underlying mechanisms governing its formation process remain insufficiently elucidated in current literature. In this work, the influence of annealing temperature on the n<sup>−</sup> region formation process was investigated systematically through experiments and one-dimensional (1D) simulation. The two key parameters, the transport rate of interstitials (<em>Tr<sub>I</sub></em>) and the diffusion coefficient of vacancies (<em>D<sub>V</sub></em>) were determined through the 1D model, and their accuracy was validated by experiments. The determination of <em>Tr<sub>I</sub></em> and <em>D<sub>V</sub></em> allows for more flexible and precise optimization of the n<sup>−</sup> region in HgCdTe, thereby providing valuable guidance for cost-effective, high performance, and reliable preparation of HgCdTe detectors.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105544"},"PeriodicalIF":3.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-08-30 | DOI: 10.1016/j.infrared.2024.105540
Kun Ding, Shiming Xiang, Chunhong Pan
Infrared-based visual perception is important for the night vision of autonomous vehicles, unmanned aerial vehicles (UAVs), and similar platforms. Semantic segmentation based on deep learning is one of the key techniques for infrared vision-based perception systems. Currently, most advanced methods are based on Transformers, which achieve favorable segmentation accuracy. However, the high complexity of Transformers prevents them from meeting the real-time inference-speed requirements of resource-constrained applications. In view of this, we suggest several lightweight designs that significantly reduce the computational complexity of existing models. To maintain segmentation accuracy, we further introduce the recent large vision model, the Segment Anything Model (SAM), to supply auxiliary supervisory signals during training. Based on these designs, we propose a lightweight segmentation network termed SMALNet (Segment Anything Model Aided Lightweight Network). Compared with the existing state-of-the-art method SegFormer, it reduces FLOPs by 64% while largely maintaining accuracy on two commonly used benchmarks. The proposed SMALNet can be used in various infrared vision perception systems with limited hardware resources.
{"title":"SMALNet: Segment Anything Model Aided Lightweight Network for Infrared Image Segmentation","authors":"Kun Ding , Shiming Xiang , Chunhong Pan","doi":"10.1016/j.infrared.2024.105540","DOIUrl":"10.1016/j.infrared.2024.105540","url":null,"abstract":"<div><p>Infrared based visual perception is important for night vision of autonomous vehicles, unmanned aerial vehicles (UAVs), etc. Semantic segmentation based on deep learning is one of the key techniques for infrared vision-based perception systems. Currently, most of the advanced methods are based on Transformers, which can achieve favorable segmentation accuracy. However, the high complexity of Transformers prevents them from meeting the real-time requirement of inference speed in resource constrained applications. In view of this, we suggest several lightweight designs that significantly reduce existing computational complexity. In order to maintain the segmentation accuracy, we further introduce the recent vision big model — Segment Anything Model (SAM) to supply auxiliary supervisory signals while training models. Based on these designs, we propose a lightweight segmentation network termed SMALNet (<u>S</u>egment Anything <u>M</u>odel <u>A</u>ided <u>L</u>ightweight <u>N</u>etwork). Compared to existing state-of-the-art method, SegFormer, it reduces 64% FLOPs while maintaining the accuracy to a large extent on two commonly-used benchmarks. The proposed SMALNet can be used in various infrared based vision perception systems with limited hardware resources.</p></div>","PeriodicalId":13549,"journal":{"name":"Infrared Physics & Technology","volume":"142 ","pages":"Article 105540"},"PeriodicalIF":3.1,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142097688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"物理与天体物理","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}