Commonly used shoreline extraction methods must first generate a digital elevation model (DEM) from the point cloud, a step that is computationally expensive and prone to introducing errors. The proposed method instead exploits the regular variation of point coordinate values during acquisition, a consequence of the consistent acquisition procedure, to extract the coastal-zone boundary point cloud quickly and accurately; the extracted boundary points are then processed and converted into a coastline point cloud in the strict sense. Compared against measured coastline data and against the coastline extracted by the isoline tracking method, both visual and quantitative results show that the proposed method produces a more continuous coastline with higher accuracy: the overall standard deviation and variance drop from 0.3726 and 0.1415 for isoline tracking to 0.1632 and 0.0266, respectively.
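The boundary-extraction idea can be illustrated with a minimal sketch: assuming points arrive in acquisition order along each scan line, a land/water boundary shows up as an abnormal jump in the spacing of consecutive points. The function name, the `gap_factor` threshold, and the median-based rule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def scan_line_boundary(points, gap_factor=3.0):
    """Flag points that follow an abnormal gap along one ordered scan line.

    `points` is an (N, 3) array ordered by acquisition time; `gap_factor`
    is a hypothetical threshold on consecutive-point spacing relative to
    the median spacing of the line.
    """
    # Horizontal distance between consecutive points in acquisition order.
    d = np.linalg.norm(np.diff(points[:, :2], axis=0), axis=1)
    median = np.median(d)
    # A spacing jump well above the typical spacing marks a candidate boundary.
    idx = np.nonzero(d > gap_factor * median)[0]
    return idx + 1  # indices of candidate boundary points
```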
Chao Lv, Weihua Li, Jianglin Liu, Jiuming Li, "A new method for the extraction of shoreline based on point cloud distribution characteristics," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 129692L, published 2024-01-09, doi:10.1117/12.3014390.
Conventional encryption-and-return methods for UAV converter-station inspection images mainly use Fibonacci scrambling to generate a two-dimensional encryption mapping; this is easily affected by pixel iteration and produces a large difference in peak signal-to-noise ratio (PSNR). A new method is therefore needed, and this paper designs an encryption-and-return algorithm for UAV converter-station inspection images based on a chaotic sequence. The experimental results show that the PSNR difference before and after encrypted return is small, demonstrating that the method is effective and reliable, has practical application value, and helps improve UAV converter-station inspection.
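A common way to build an image cipher from a chaotic sequence is to iterate the logistic map and XOR the resulting keystream with the pixels. The sketch below illustrates that general idea under assumed parameters (`x0`, `r`); it is not the paper's specific algorithm.

```python
import numpy as np

def logistic_keystream(n, x0=0.3579, r=3.99):
    """Generate n chaotic bytes from the logistic map x <- r*x*(1-x).

    x0 and r are illustrative values; r near 4 keeps the map in its
    chaotic regime, and x0 acts as the secret key.
    """
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = r * x * (1.0 - x)
        out[i] = int(x * 256) % 256  # quantize the chaotic state to a byte
    return out

def encrypt(img):
    """XOR the flattened image with the keystream; decryption is identical."""
    ks = logistic_keystream(img.size)
    return (img.reshape(-1) ^ ks).reshape(img.shape)
```

Because XOR is its own inverse, calling `encrypt` twice with the same key recovers the original image.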
Jingxiang Li, Hao Lai, Yanhui Shi, Yuchao Liu, Haitao Yin, "Encryption and return method of inspection image of UAV converter station based on chaotic sequence," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 129691O, published 2024-01-09, doi:10.1117/12.3014493.
With the continuous advance of artificial-intelligence theory and of intelligent hardware processing capability, machine vision and unmanned technology are seeing ever wider application. As cities scale rapidly, fire safety is becoming increasingly important. Taking into account both the safety of firefighters and the efficient handling of accidents, an intelligent unmanned fire car was designed to simulate fire-safety operations in toxic and harmful environments. Its functions include automatic driving, image recognition, and remote and environmental monitoring, helping firefighters extinguish fires in hazardous environments while minimizing property damage.
Xueying Huang, Zhoulin Chang, "Design of intelligent security robot based on machine vision," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 1296922, published 2024-01-09, doi:10.1117/12.3014397.
Functional tissue region segmentation is the segmentation and instance-level description of tissue epithelium, glandular cavities, fibers, and other tissues in an image, which helps accelerate the understanding of the relationships between cells and tissues. By better understanding these relationships, researchers gain deeper insight into the cell functions that affect human health. Based on convolutional neural networks, we combine the structural advantages of UNet and EfficientNet to create an organ tissue segmentation model. The model fuses the UNet structure with the EfficientNet structure and extracts features with a pre-trained EfficientNet backbone to improve feature learning. At the same time, skip connections fuse multi-scale features within the network, improving segmentation accuracy. We compare our model with other models using the Dice similarity coefficient (DSC): our Unet2.5D (ConvNext + Se_resnet101) achieves the highest DSC of 0.702, which is 0.052, 0.024, and 0.052 higher than Unet(ResNet50), Unet(Se_Resnet101), and Unet(ResNet101), respectively.
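The Dice similarity coefficient used for the comparison can be computed directly from two binary masks; a minimal implementation:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient between two binary masks:
    DSC = 2 * |A ∩ B| / (|A| + |B|).

    `eps` avoids division by zero when both masks are empty.
    """
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```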
Xinmei Feng, Zihao Hao, Shunli Gao, Gang Ma, "A deep learning-based method for multiorgan functional tissue units segmentation," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 129691Y, published 2024-01-09, doi:10.1117/12.3014697.
Xiangrui Tian, Yinjun Jia, Tong Xu, Jie Yin, Yihe Chen, Jiansen Mao
Image blur and loss of detail are caused by factors such as the imaging environment and hardware performance; a multi-level image detail enhancement method based on guided filtering is therefore proposed. First, the input image is iteratively filtered with the guided filter to obtain background images of different smoothness; then each background image is subtracted from the original image to obtain detail images at different levels; finally, a dynamic saturation function adjusts the weights of the detail images, which are superimposed on the original image to obtain the enhanced result. The proposed method is compared with existing enhancement algorithms on an open dataset. The experimental results show that, compared with the other enhancement methods, the proposed method achieves a better enhancement effect: the enhanced image has clear edges and a pleasing visual appearance, and the objective indicators of information entropy, average gradient, and spatial frequency improve on average by 1.39%, 27.9%, and 19.3%, respectively.
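The three steps above (iterative smoothing, detail extraction, weighted superposition) can be sketched as follows. A box filter stands in for the guided filter, and fixed weights replace the paper's dynamic saturation function, so this is an illustrative outline rather than the proposed method.

```python
import numpy as np

def box_blur(img, k=5):
    """Simple box filter used here as a stand-in for the guided filter."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def enhance(img, levels=3, weights=(0.5, 0.25, 0.125)):
    """Iteratively smooth, peel off detail layers, and add them back weighted."""
    base, details = img.astype(float), []
    for _ in range(levels):
        smooth = box_blur(base)          # background at the next smoothness level
        details.append(base - smooth)    # detail layer = current - smoothed
        base = smooth
    out = img.astype(float)
    for w, d in zip(weights, details):   # weighted superposition of detail layers
        out += w * d
    return np.clip(out, 0, 255)
```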
"Multi-level image detail enhancement based on guided filtering," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 1296918, published 2024-01-09, doi:10.1117/12.3014387.
Traditional system capability assessment methods can no longer address the challenges of system evaluation, and further development and innovation in assessment algorithms are needed. This paper addresses the new challenges encountered in the system design process and proposes a system capability assessment algorithm centered on evaluating capability measurement values, capability strengths and weaknesses, and capability improvement and decline. These results provide a scientific basis for determining the focus, direction, scale proportion, and optimization of system improvement.
Lanlan Gao, Yijing Liu, Jian Le, Kai Qiu, "Research on system capability assessment algorithms," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 129690M, published 2024-01-09, doi:10.1117/12.3014565.
A marine gas turbine propulsion system generally operates in a healthy state, so the samples collected by the monitoring system contain many normal samples and few fault samples. To address this scarcity of fault samples, on which data-driven fault-diagnosis methods depend, a cross-working-condition fault diagnosis model based on transfer learning is proposed to reduce that dependence. The proposed method was experimentally validated on a dataset verified aboard a real ship. Compared with traditional methods, it achieves cross-working-condition fault diagnosis with fewer fault samples.
Congao Tan, Shijie Shi, "Health assessment of marine gas turbine propulsion system under cross-working conditions based on transfer learning," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 1296923, published 2024-01-09, doi:10.1117/12.3014466.
Automated vehicle driving requires heightened awareness of the surrounding environment, and detecting targets is crucial to reducing the risk of traffic accidents; target detection is therefore essential for autonomous driving. In this paper, we improve the CenterPoint 3D target detection algorithm by introducing a self-calibrated convolutional network into the 2D backbone of the original algorithm, which improves both network extraction speed and feature extraction capability. We also improve the original two-stage refinement module by extracting feature points from the multi-scale feature map rather than a single-scale feature map, reducing the loss of small-target feature information, and we build a data augmentation module to increase the number of training samples and improve the network model's robustness. We validate the algorithm on the KITTI dataset and analyze visualizations of domestic data. Our results show that, compared with the original algorithm, bird's-eye-view mAP for the vehicle class improves by 1.68% and 3D mAP improves by 1.02%.
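The bird's-eye-view mAP metric is built on box IoU. For intuition, the axis-aligned special case (yaw = 0) of BEV IoU can be computed as below; benchmark evaluation uses rotated boxes, so this is a simplified sketch.

```python
def bev_iou(a, b):
    """Axis-aligned IoU of two bird's-eye-view boxes given as (x1, y1, x2, y2).

    Rotated-box IoU, as used in 3D detection benchmarks, reduces to this
    when both boxes have zero yaw.
    """
    # Intersection rectangle (empty when the boxes do not overlap).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    iw, ih = max(0.0, ix2 - ix1), max(0.0, iy2 - iy1)
    inter = iw * ih
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```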
Chunbao Huo, Ya Zheng, Zhibo Tong, Zengwen Chen, "Deep learning-based 3D target detection algorithm," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 129690V, published 2024-01-09, doi:10.1117/12.3014381.
Tao Zhang, Xiaogang Yang, Ruitao Lu, Qi Li, Wenxin Xia, Shuang Su, Bin Tang
Ship target detection and course discrimination in remote sensing images is an important support for building a maritime power. Since ship targets in remote sensing images are generally elongated strips, the IoU score is very sensitive to the angle of the bounding box. Moreover, the ship's angle is a periodic function, and this discontinuity degrades performance. Meanwhile, existing methods generally use oriented bounding boxes as anchors to handle rotated ship targets, introducing excessive hyper-parameters such as box sizes and aspect ratios. To address the costly anchor-traversal mechanism and the angle-regression discontinuity introduced by the added angle attribute, a ship heading detection method based on the ship head point is proposed. The discontinuous angle-regression problem is transformed into a continuous key-point estimation problem, unifying ship target detection and heading recognition. A CA attention mechanism is added to the feature extraction network to strengthen attention on the ship target and predict its center point, and the offset and target width at the center point are regressed. The head point and its offset are then regressed to obtain an accurate head-point position, the ship's rotation angle is determined from the coordinates of the center point and the head point, and, combined with the predicted width and height, rotated-box detection of the ship target is completed. Finally, the center point and the head point are connected to determine the ship's course. The effectiveness of the proposed method is verified on the RFUE and the open-source HRSC2016 datasets, and the method is also robust in complex environments.
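Once the center and head keypoints are predicted, the course follows from a single continuous computation, which is why the key-point formulation sidesteps the periodic angle-regression problem. A minimal sketch (the coordinate convention, angle measured counterclockwise from the +x axis, is an assumption):

```python
import math

def heading_deg(center, head):
    """Ship course from the predicted center and head keypoints.

    Returns the angle of the center-to-head vector in degrees,
    counterclockwise from the +x axis. atan2 is continuous in the
    keypoint coordinates, unlike direct regression of a periodic angle.
    """
    return math.degrees(math.atan2(head[1] - center[1], head[0] - center[0]))
```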
"Ship and course detection in remote sensing images based on key-point extraction," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 129691N, published 2024-01-09, doi:10.1117/12.3014532.
Ke Huang, Guangyuan Yang, Shenghua Zhang, Zheming Li, Hu Li, Xuebin Jiang, Wenting Liu, Wenfeng Liu, Bo Wang, Xin Yan, Weiguo Lin
(1) Background: To improve quality control of characteristic cigarette filter rods, build a filter-rod structure database for cigarette products, and accelerate the digital transformation of filter-rod platform R&D, this study proposes a method for digitally remodeling the cigarette structure. (2) Methods: First, images of the filter-rod end face are acquired by combining a color area-array camera, a telecentric lens, and a coaxial light source to form an area-array camera testing environment; the filter surface is then photographed with a line-scan camera combined with a coaxial light source and a point light source, and the point-light transmittance is measured; a 3D laser scanning camera is used at the same time to establish the contour of the target. (3) Results: Using these methods, the detection images of four types of filter rods were processed to obtain HSV image curves, grayscale images, and color histograms, which were used for 3D model reconstruction, yielding 15 3D feature maps. (4) Conclusions: The 15 reconstructed 3D images accurately distinguish the four filter rods, and the method can serve as a reference for real-time detection during cigarette filter-rod processing.
"Research on digital remodeling of structural features of cigarette filter rod based on 3D digital twin technology," International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023), paper 1296920, published 2024-01-09, doi:10.1117/12.3014423.