
International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023): Latest Publications

A new method for the extraction of shoreline based on point cloud distribution characteristics
Chao Lv, Weihua Li, Jianglin Liu, Jiuming Li
Commonly used shoreline extraction methods require generating a digital elevation model (DEM) from the point cloud, a process that is computationally expensive and prone to introducing errors. The proposed method exploits the regular variation of point coordinates at acquisition time that results from a consistent acquisition pattern, and on this basis puts forward an algorithm with improved accuracy and efficiency, so that the boundary point cloud of the coastal zone can be extracted quickly and accurately; the extracted boundary point cloud is then properly processed and transformed into the coastline point cloud in the strict sense. Compared with the measured coastline data and the coastline extracted by the isoline tracking method, both visual and quantitative results show that the proposed method produces a more continuous extraction with higher accuracy. The data table shows that the overall standard deviation and variance of the coastline extracted by this method are reduced from 0.3726 and 0.1415 (isoline tracking) to 0.1632 and 0.0266, respectively.
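The standard deviation and variance figures quoted above presumably summarize deviations of the extracted coastline from the measured reference. As a generic illustration only (not the authors' evaluation code; the nearest-neighbour point-to-reference distance metric is an assumption), such statistics could be computed as follows:

```python
import numpy as np

def deviation_stats(extracted_pts, reference_pts):
    """For each extracted coastline point, take the distance to its nearest
    reference (measured) coastline point, then report std and variance."""
    ext = np.asarray(extracted_pts, dtype=float)   # shape (N, 2): x, y
    ref = np.asarray(reference_pts, dtype=float)   # shape (M, 2)
    # nearest-neighbour distance from each extracted point to the reference line
    d = np.min(np.linalg.norm(ext[:, None, :] - ref[None, :, :], axis=2), axis=1)
    return d.std(), d.var()

# toy example: an extracted line that wobbles around a straight reference coastline
ref = np.column_stack([np.linspace(0, 100, 200), np.zeros(200)])
ext = np.column_stack([np.linspace(0, 100, 50), 0.2 * np.random.randn(50)])
std, var = deviation_stats(ext, ref)
print(f"std = {std:.4f}, variance = {var:.4f}")
```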
{"title":"A new method for the extraction of shoreline based on point cloud distribution characteristics","authors":"chao lv, weihua li, Jianglin Liu, jiuming li","doi":"10.1117/12.3014390","DOIUrl":"https://doi.org/10.1117/12.3014390","url":null,"abstract":"The commonly used shoreline extraction methods need to go through the generation process of point cloud digital elevation model (DEM) with large amount of calculation and easy to introduce errors. This proposed method used the change law of the coordinate value of the point at the time of obtaining the data due to the consistent acquisition method, puts forward the algorithm, and improves the accuracy and efficiency of the algorithm, so that it can quickly and accurately extract the boundary point cloud of the coastal zone, properly process the extracted boundary point cloud and transform it into the coastline point cloud in the strict sense. Compared with the measured coastline data and the coastline data extracted by isoline tracking method, the visual and quantitative data show that the extraction effect of this method is more continuous and the accuracy is higher. It can be seen from the data table that the overall standard deviation and variance of coastline extracted by this method are reduced from 0.3726 and 0.1415 of isoline tracking method to 0.1632 and 0.0266 respectively","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511372","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Encryption and return method of inspection image of UAV converter station based on chaotic sequence
Jingxiang Li, Hao Lai, Yanhui Shi, Yuchao Liu, Haitao Yin
The conventional encryption-and-return method for UAV converter station inspection images mainly uses Fibonacci scrambling to generate a two-dimensional encryption mapping, which is easily affected by pixel iteration and leads to a large difference in peak signal-to-noise ratio. A new encryption-and-return method is therefore needed, and an encryption-and-return algorithm for UAV converter station inspection images based on a chaotic sequence is designed. The experimental results show that the difference in peak signal-to-noise ratio (PSNR) before and after the encrypted return of the inspection image is small, which demonstrates that the encrypted return is effective and reliable, has practical application value, and contributes to improving the inspection performance of the UAV converter station.
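As a rough illustration of chaotic-sequence image encryption in general (not the authors' exact scheme; the logistic-map parameters are illustrative assumptions), the sketch below XORs an image with a key stream generated from a logistic map and checks the PSNR between the original and the recovered image:

```python
import numpy as np

def logistic_keystream(n, x0=0.4170, mu=3.99):
    """Generate n chaotic bytes from the logistic map x <- mu * x * (1 - x)."""
    x = x0
    out = np.empty(n, dtype=np.uint8)
    for i in range(n):
        x = mu * x * (1.0 - x)
        out[i] = int(x * 256) % 256
    return out

def xor_encrypt(img, x0=0.4170, mu=3.99):
    """Encrypt/decrypt (XOR is its own inverse) a uint8 image with a chaotic key stream."""
    key = logistic_keystream(img.size, x0, mu).reshape(img.shape)
    return img ^ key

def psnr(a, b):
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

if __name__ == "__main__":
    img = np.random.randint(0, 256, (256, 256), dtype=np.uint8)  # stand-in inspection image
    cipher = xor_encrypt(img)
    restored = xor_encrypt(cipher)   # same key stream restores the image
    print("PSNR(original, restored):", psnr(img, restored))  # inf -> lossless return
```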
{"title":"Encryption and return method of inspection image of UAV converter station based on chaotic sequence","authors":"Jingxiang Li, Hao Lai, Yanhui Shi, Yuchao Liu, Haitao Yin","doi":"10.1117/12.3014493","DOIUrl":"https://doi.org/10.1117/12.3014493","url":null,"abstract":"Conventional encryption and return method of UAV converter station inspection image mainly uses Fibonacci scrambling technology to generate two-dimensional encryption mapping, which is easily affected by pixel iteration, resulting in a high difference in peak signal-to-noise ratio. Therefore, a new encryption and return method of UAV converter station inspection image is needed, and an encryption and return algorithm of UAV converter station inspection image is designed based on chaotic sequence. The experimental results show that the difference between the peak signal-to-noise ratio (PSNR) before and after the encrypted return of the inspection encrypted image of the UAV converter station is small, which proves that the encrypted return of the inspection encrypted image is effective, reliable and has certain application value, and has made certain contributions to improving the inspection effect of the UAV converter station.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Design of intelligent security robot based on machine vision
Xueying Huang, Zhoulin Chang
With the continuous advance of artificial intelligence theory and intelligent hardware processing capabilities, machine vision and unmanned technology are becoming increasingly widespread. As cities scale up rapidly, fire safety issues grow ever more important. Taking into account the safety of firefighters and the efficient handling of accidents, an intelligent unmanned fire-fighting vehicle was designed to address fire safety problems in toxic and harmful environments. The functions of this smart vehicle include automatic driving, image recognition, and remote and environmental monitoring, which can help firefighters extinguish fires in hazardous environments and minimize property damage.
{"title":"Design of intelligent security robot based on machine vision","authors":"Xueying Huang, Zhoulin Chang","doi":"10.1117/12.3014397","DOIUrl":"https://doi.org/10.1117/12.3014397","url":null,"abstract":"With the continuous improvement of the application of artificial intelligence theory and intelligent hardware processing capabilities, the application of machine vision and unmanned technology is becoming more and more popular. As the city scales rapidly, fire safety issues are becoming more and more important. Taking into account the safety of firefighters and the efficient handling of accidents, an intelligent unmanned fire car was designed to simulate fire safety problems in toxic and harmful environments. The functions of this smart car include: automatic driving, image recognition, monitoring of remote and environment, etc., which can help firefighters to extinguish fires in a harmful environment and minimize property damage.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A deep learning-based method for multiorgan functional tissue units segmentation
Xinmei Feng, Zihao Hao, Shunli Gao, Gang Ma
Functional tissue region segmentation is the segmentation and instance-level description of tissue epithelium, glandular cavities, fibers, and other tissues in an image, which helps to accelerate the understanding of the relationships between cells and tissues. By better understanding the relationships between cells, researchers can gain deeper insight into the cell functions that affect human health. Based on convolutional neural networks, we combine the structural advantages of UNet and EfficientNet to create an organ tissue segmentation model. The model fuses the UNet structure with the EfficientNet structure and extracts features with the help of the pre-trained optimal EfficientNet structure to improve feature learning. At the same time, multi-scale features are fused in the network through skip connections, improving the segmentation accuracy of the model. We compare our model with other models using the Dice similarity coefficient. Our Unet2.5D (ConvNext + Se_resnet101) achieves the highest DSC of 0.702 among these models, which is 0.052, 0.024, and 0.052 higher than Unet(ResNet50), Unet(Se_Resnet101), and Unet(ResNet101), respectively.
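For reference, the Dice similarity coefficient (DSC) used in the comparison can be computed as below; this is a generic PyTorch sketch rather than the authors' evaluation code, and the smoothing constant is an assumption:

```python
import torch

def dice_coefficient(pred_mask: torch.Tensor, true_mask: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks of the same shape."""
    pred = pred_mask.float().flatten()
    true = true_mask.float().flatten()
    intersection = (pred * true).sum()
    return (2.0 * intersection + eps) / (pred.sum() + true.sum() + eps)

# Example: two 4x4 binary masks that overlap in one column
a = torch.tensor([[1, 1, 0, 0]] * 4)
b = torch.tensor([[1, 0, 0, 0]] * 4)
print(dice_coefficient(a, b))  # 2*4 / (8 + 4) ≈ 0.667
```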
{"title":"A deep learning-based method for multiorgan functional tissue units segmentation","authors":"Xinmei Feng, Zihao Hao, Shunli Gao, Gang Ma","doi":"10.1117/12.3014697","DOIUrl":"https://doi.org/10.1117/12.3014697","url":null,"abstract":"Functional tissue region segmentation is the segmentation and example description of tissue epithelium, glandular cavity, fiber and other tissues in the image, which helps to accelerate the understanding of the relationship between cells and tissues in the world. By better understanding the relationship between cells, researchers will have a deeper understanding of cell functions that affect human health. Based on convolutional neural networks, we combine the structural advantages of UNet and EficientNet to create an organ tissue segmentation model. The model fuses the UNet structure with the EficientNet structure, and extracts features with the help of the pre-trained EficientNet optimal structure to improve the ability of feature learning. At the same time, the fusion of multi-scale features in the network is realized through the jump connection, and the segmentation accuracy of the model is improved. We compare our model with other models using the metrics of dice similarity efficiency. our Unet2.5D (ConvNext+ Se_resnet101) owns the highest DSC 0.702 among these models, which is 0.052, 0.024, 0.052 higher than Unet(ResNet50), Unet(Se_Resnet101), Unet(ResNet101) respectively.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511876","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Multi-level image detail enhancement based on guided filtering
Xiangrui Tian, Yinjun Jia, Tong Xu, Jie Yin, Yihe Chen, Jiansen Mao
Image blur and loss of detail information are caused by factors such as the imaging environment and hardware performance; a multi-level image detail enhancement method based on guided filtering is therefore proposed. Firstly, the input image is iteratively filtered with a guided filter to obtain background images of different smoothness; then each background image is subtracted from the original image to obtain detail images at different levels; finally, a dynamic saturation function is used to adjust the weights of the detail images, which are superimposed on the original image to obtain the enhanced image. The proposed method is compared with existing enhancement algorithms on an open dataset. The experimental results show that, compared with other enhancement methods, the proposed method achieves a better enhancement effect: the enhanced image has clear edges and a satisfactory visual appearance. Compared with other methods, the objective indicators of information entropy, average gradient, and spatial frequency are improved by 1.39%, 27.9%, and 19.3% on average, respectively.
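A minimal sketch of the decomposition-and-recomposition idea described above, assuming the guided filter from opencv-contrib (cv2.ximgproc.guidedFilter) is available; the filter radii, layer weights, and the tanh-based saturation are illustrative stand-ins for the paper's unspecified dynamic saturation function:

```python
import cv2
import numpy as np

def multilevel_detail_enhance(img, radii=(2, 8, 32), eps=1e-2, weights=(1.5, 1.0, 0.6)):
    """img: float32 grayscale in [0, 1]. Requires opencv-contrib (cv2.ximgproc)."""
    base = img
    backgrounds = []
    for r in radii:                                  # iterative guided filtering -> ever smoother backgrounds
        base = cv2.ximgproc.guidedFilter(base, base, r, eps)
        backgrounds.append(base)
    details = [img - b for b in backgrounds]         # detail image at each level
    out = img.copy()
    for w, d in zip(weights, details):
        out = out + w * np.tanh(3.0 * d) / 3.0       # placeholder saturation to limit over-shoot
    return np.clip(out, 0.0, 1.0)

if __name__ == "__main__":
    gray = np.random.rand(256, 256).astype(np.float32)   # stand-in for a real input image
    enhanced = multilevel_detail_enhance(gray)
    print(enhanced.shape, float(enhanced.min()), float(enhanced.max()))
```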
{"title":"Multi-level image detail enhancement based on guided filtering","authors":"Xiangrui Tian, Yinjun Jia, Tong Xu, Jie Yin, Yihe Chen, Jiansen Mao","doi":"10.1117/12.3014387","DOIUrl":"https://doi.org/10.1117/12.3014387","url":null,"abstract":"Image blur and detail information loss are caused by various factors such as imaging environment and hardware performance, therefore a multi-level image detail enhancement method based on guided filtering is proposed. Firstly, the input image is iteratively filtered by using the guided filter, to obtain background images with different smoothness; then the background image is subtracted from the original image to obtain detail images with different levels; finally, a dynamic saturation function is used to adjust the weights of detail images, which are superimposed with the original image to obtain the enhanced image. The proposed method is compared with the existing enhancement algorithms using open dataset. The experimental results show that, compared with other enhancement methods, the proposed method in this paper achieves a better enhancement effect, the enhanced image has clear edges, and the visual effect is suitable. Compared with other methods, the objective indicators of information entropy, average gradient, and spatial frequency are improved on average. 1.39%, 27.9%, and 19.3%.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139640410","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on system capability assessment algorithms
Lanlan Gao, Yijing Liu, Jian Le, Kai Qiu
Traditional system capability assessment methods are no longer able to address the challenges in system evaluation, so further development and innovation in algorithms are needed. This paper addresses the new challenges encountered in the system design process and proposes a system capability assessment algorithm centered on the evaluation of capability measurement values, capability advantages and disadvantages, and capability improvement and decline. These provide a scientific basis for determining the focus, direction, scale proportion, and improvement optimization of the system.
{"title":"Research on system capability assessment algorithms","authors":"Lanlan Gao, Yijing Liu, Jian Le, Kai Qiu","doi":"10.1117/12.3014565","DOIUrl":"https://doi.org/10.1117/12.3014565","url":null,"abstract":"The traditional system capability assessment methods are no longer able to address the challenges in system evaluation. Further development and innovation in algorithms are needed. This paper addresses the new challenges encountered in the system design process and proposes a system capability assessment algorithm centered around the evaluation of capability measurement values, capability advantages and disadvantages, as well as capability improvement and decline. These provide a scientific basis for determining the focus, direction, scale proportion, and improvement optimization of the system.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511508","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Health assessment of marine gas turbine propulsion system under cross-working conditions based on transfer learning
Congao Tan, Shijie Shi
The marine gas turbine propulsion system generally works in a healthy state, so the samples collected by the monitoring system contain many normal samples and few fault samples. To address the lack of fault samples faced by data-driven fault diagnosis methods, a cross-working-condition fault diagnosis model based on transfer learning is proposed to reduce the dependence of data-driven methods on fault samples. The proposed method was experimentally validated on a dataset verified on a real ship. Compared with traditional methods, the proposed method achieves cross-working-condition fault diagnosis with fewer fault samples.
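A minimal sketch of the transfer-learning idea under stated assumptions: a feature extractor trained on the data-rich source working condition is frozen, and only a small classifier head is fine-tuned on the few fault samples from the target condition. The network sizes and training loop are illustrative, not the authors' model:

```python
import torch
import torch.nn as nn

# Feature extractor assumed to have been trained on the source working condition (plentiful data).
feature_extractor = nn.Sequential(
    nn.Linear(16, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
)

# Freeze the transferred layers so the few target-condition samples only adapt the head.
for p in feature_extractor.parameters():
    p.requires_grad = False

classifier_head = nn.Linear(32, 4)   # e.g. 4 health states
model = nn.Sequential(feature_extractor, classifier_head)

optimizer = torch.optim.Adam(classifier_head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Fine-tune on the small target-condition set (x_t: [N, 16] features, y_t: [N] labels).
x_t = torch.randn(20, 16)                 # stand-in for real monitoring features
y_t = torch.randint(0, 4, (20,))
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(x_t), y_t)
    loss.backward()
    optimizer.step()
print("final loss:", float(loss))
```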
{"title":"Health assessment of marine gas turbine propulsion system under cross-working conditions based on transfer learning","authors":"Congao Tan, Shijie Shi","doi":"10.1117/12.3014466","DOIUrl":"https://doi.org/10.1117/12.3014466","url":null,"abstract":"The marine gas turbine propulsion system generally works in a healthy state, and the samples collected by the monitoring system are characterized by more normal samples and fewer fault samples. Aiming at the problem of lack of fault samples faced by data-driven fault diagnosis methods, a cross-working condition fault diagnosis model is proposed by using transfer learning to reduce the dependence of data-driven methods on fault samples. The proposed method was experimentally validated by using a real-ship-validated dataset. Compared with traditional methods, the proposed method can achieve cross-working condition fault diagnosis with fewer fault samples.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511674","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Deep learning-based 3D target detection algorithm
Chunbao Huo, Ya Zheng, Zhibo Tong, Zengwen Chen
Automated vehicle driving requires a heightened awareness of the surrounding environment, and detecting targets is a crucial element in reducing the risk of traffic accidents. Target detection is essential for autonomous driving. In this paper, we improve the CenterPoint 3D target detection algorithm by introducing a self-calibrating convolutional network into the 2D backbone network of the original algorithm. This enhancement improves network extraction speed and feature extraction capability. Additionally, we improve the two-stage refinement module of the original algorithm by extracting feature points from the multi-scale feature map rather than the single-scale feature map. This approach reduces the loss of small target feature information, and we build a data enhancement module to increase the number of training samples and improve the network model’s robustness. We validate the algorithm on the KITTI dataset and analyze domestic data visualizations. Our results show that the bird’s-eye view mAP detection accuracy of the algorithm when the target is a vehicle has improved by 1.68%, and the 3D target mAP detection accuracy has improved by 1.02% compared with the original algorithm.
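The "self-calibrating convolutional network" introduced into the 2D backbone presumably refers to the self-calibrated convolution family of modules; the following is a simplified, hedged PyTorch sketch of that idea (channel split, a down-sampled context branch gating one half, a plain convolution on the other), with all kernel sizes and the pooling rate chosen for illustration only:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelfCalibratedConv(nn.Module):
    """Simplified self-calibrated convolution: one channel half is convolved normally,
    the other is modulated by an attention map computed from a down-sampled view."""
    def __init__(self, channels, pooling_r=4):
        super().__init__()
        c = channels // 2
        self.k1 = nn.Conv2d(c, c, 3, padding=1)   # plain branch
        self.pool = nn.AvgPool2d(pooling_r, stride=pooling_r)
        self.k2 = nn.Conv2d(c, c, 3, padding=1)   # low-resolution context
        self.k3 = nn.Conv2d(c, c, 3, padding=1)   # features to be calibrated
        self.k4 = nn.Conv2d(c, c, 3, padding=1)   # output transform

    def forward(self, x):
        x1, x2 = torch.chunk(x, 2, dim=1)
        # self-calibration branch: gate full-resolution features with up-sampled context
        ctx = F.interpolate(self.k2(self.pool(x1)), size=x1.shape[-2:],
                            mode="bilinear", align_corners=False)
        y1 = self.k4(self.k3(x1) * torch.sigmoid(x1 + ctx))
        # plain branch
        y2 = self.k1(x2)
        return torch.cat([y1, y2], dim=1)

# quick shape check
out = SelfCalibratedConv(64)(torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```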
{"title":"Deep learning-based 3D target detection algorithm","authors":"Chunbao Huo, Ya Zheng, Zhibo Tong, Zengwen Chen","doi":"10.1117/12.3014381","DOIUrl":"https://doi.org/10.1117/12.3014381","url":null,"abstract":"Automated vehicle driving requires a heightened awareness of the surrounding environment, and detecting targets is a crucial element in reducing the risk of traffic accidents. Target detection is essential for autonomous driving. In this paper, we improve the CenterPoint 3D target detection algorithm by introducing a self-calibrating convolutional network into the 2D backbone network of the original algorithm. This enhancement improves network extraction speed and feature extraction capability. Additionally, we improve the two-stage refinement module of the original algorithm by extracting feature points from the multi-scale feature map rather than the single-scale feature map. This approach reduces the loss of small target feature information, and we build a data enhancement module to increase the number of training samples and improve the network model’s robustness. We validate the algorithm on the KITTI dataset and analyze domestic data visualizations. Our results show that the bird’s-eye view mAP detection accuracy of the algorithm when the target is a vehicle has improved by 1.68%, and the 3D target mAP detection accuracy has improved by 1.02% compared with the original algorithm.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511434","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ship and course detection in remote sensing images based on key-point extraction
Tao Zhang, Xiaogang Yang, Ruitao Lu, Qi Li, Wenxin Xia, Shuang Su, Bin Tang
Ship target detection and course discrimination in remote sensing images are among the important supports for building a maritime power. Since ship targets in remote sensing images are generally strip-shaped, the IOU score is very sensitive to the angle of the bounding box. Moreover, the ship angle is periodic, and this discontinuity causes performance degradation. Meanwhile, existing methods generally use oriented bounding boxes as anchors to handle rotated ship targets, and thus introduce excessive hyper-parameters such as box sizes and aspect ratios. Aiming at the complex computation of the anchor-traversal mechanism and the discontinuity of angle regression caused by adding an angle attribute in remote sensing ship detection, a ship heading detection method based on the ship head point is proposed. The discontinuous angle regression problem is transformed into a continuous key-point estimation problem, unifying ship target detection and heading recognition. Second, a CA attention mechanism is added to the feature extraction network to enhance attention to the ship target and predict its center point; the offset and target width at the center point are regressed. Then the head point and its offset are regressed to obtain the accurate head point position. Next, the rotation angle of the ship is determined from the coordinates of the center point and the ship head point, and, combined with the predicted width and height of the ship, rotated-box detection of the ship target is completed. Finally, the center point and the bow point are connected to determine the course of the ship target. The effectiveness of the proposed method is verified on the RFUE and open-source HRSC2016 datasets, and it also shows good robustness in complex environments.
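The geometric step of recovering the heading from the predicted center point and ship head point reduces to an atan2 computation; a minimal sketch (the image-coordinate convention with y pointing down and heading measured clockwise from north is an assumption):

```python
import math

def heading_from_points(center, head):
    """Return the ship heading in degrees, measured clockwise from 'up' (north),
    in image coordinates where x grows right and y grows down."""
    dx = head[0] - center[0]
    dy = head[1] - center[1]
    # atan2 of (dx, -dy): 0 deg points up, 90 deg points right, range (-180, 180]
    angle = math.degrees(math.atan2(dx, -dy))
    return angle % 360.0

print(heading_from_points((100, 100), (100, 60)))   # head above center    -> 0.0  (north)
print(heading_from_points((100, 100), (140, 100)))  # head to the right    -> 90.0 (east)
```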
{"title":"Ship and course detection in remote sensing images based on key-point extraction","authors":"Tao Zhang, Xiaogang Yang, Ruitao Lu, Qi Li, Wenxin Xia, Shuang Su, Bin Tang","doi":"10.1117/12.3014532","DOIUrl":"https://doi.org/10.1117/12.3014532","url":null,"abstract":"Remote sensing image ship target detection and course discrimination is one of the important supports for building a maritime power. Since ship target in remote sensing images are generally in strips, the IOU score is very sensitive to the angle of bounding box. Moreover, the angle of the ship is a periodic function, this discontinuity will cause performance degeneration. Meanwhile, methods generally use oriented bounding boxes as anchors to handle rotated ship target and thus introduce excessive hyper-parameters such as box size, aspect ratios. Aiming at the problem of complex calculation of anchor frame traversal mechanism and discontinuity of angle regression caused by increasing angle attribute in ship target detection of remote sensing image, a ship target heading detection method based on ship head point is proposed. The discontinuous angle regression problem is transformed into a continuous key point estimation problem, and the ship target detection and heading recognition are unified. Second, CA attention mechanism is added to the feature extraction network to enhance the attention to the ship target and predict the center point of the ship target. The offset and target width at the center point are regressed. Then, return the heading point and offset to obtain the accurate heading point position. Next, the rotation angle of the ship is determined according to the coordinates of the center point and the ship head point. Combined with the predicted width and height of the ship, the rotation frame detection of the ship target is completed. Finally, the center point and the bow point are connected to determine the course of the ship target. The effectiveness of the proposed method is verified on the RFUE and open source HRSC2016 datasets, respectively, and it also has good robustness in complex environments.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511350","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Research on digital remodeling of structural features of cigarette filter rod based on 3D digital twin technology
Ke Huang, Guangyuan Yang, Shenghua Zhang, Zheming Li, Hu Li, Xuebin Jiang, Wenting Liu, Wenfeng Liu, Bo Wang, Xin Yan, Weiguo Lin
(1) Background: To improve the quality control of characteristic cigarette filter rods, build a filter rod structure database for cigarette products, and accelerate the digital transformation of filter rod platform technology research and development, this study proposes a method for the digital remodeling of the cigarette filter rod structure. (2) Methods: First, images of the cigarette filter rod end face are acquired by combining a color area-array camera, a telecentric lens, and a coaxial light source to form an area-array camera testing environment; by combining a line-scan camera with a coaxial light source and a point light source, the surface of the cigarette filter is photographed and the point-light-source transmittance is measured; a 3D laser scanning camera is used at the same time to establish the contour of the target. (3) Results: Using the above methods, the detection images of four types of filter rods were processed to obtain HSV image curves, grayscale images, and color histograms; these images were used for 3D model reconstruction, yielding 15 3D feature maps. (4) Conclusions: The 15 reconstructed 3D images can accurately distinguish the four different filter rods, and this method can provide a reference for real-time detection during cigarette filter rod processing.
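The HSV curves, grayscale image, and color histograms mentioned in the Methods and Results can be obtained with standard OpenCV calls; the sketch below is generic (the synthetic stand-in image and bin counts are illustrative), not the authors' pipeline:

```python
import cv2
import numpy as np

# stand-in for a captured end-face image; replace with cv2.imread(...) on real data
img = np.random.randint(0, 256, (128, 128, 3), dtype=np.uint8)

gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # grayscale image
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)     # HSV representation

# per-channel HSV histograms ("HSV image curves"), 32 bins each
h_hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
s_hist = cv2.calcHist([hsv], [1], None, [32], [0, 256])
v_hist = cv2.calcHist([hsv], [2], None, [32], [0, 256])

# color histogram over the three BGR channels
color_hist = [cv2.calcHist([img], [c], None, [32], [0, 256]) for c in range(3)]
print(gray.shape, h_hist.shape, len(color_hist))
```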
{"title":"Research on digital remodeling of structural features of cigarette filter rod based on 3D digital twin technology","authors":"Ke Huang, Guangyuan Yang, Shenghua Zhang, Zheming Li, Hu Li, Xuebin Jiang, Wenting Liu, Wenfeng Liu, Bo Wang, Xin Yan, Weiguo Lin","doi":"10.1117/12.3014423","DOIUrl":"https://doi.org/10.1117/12.3014423","url":null,"abstract":"(1) Background: To improve the quality control ability of cigarette characteristic cigarette filter rods, build the filter rod structure database of cigarette products, and accelerate the Digital transformation of filter rod platform technology research and development, this study proposed a research method for digital remodeling of cigarette structure.; (2) Methods: Firstly, image acquisition of the cigarette filter rod end face is carried out by combining a color area-array camera, telecentric lens, and coaxial light source to form an area-array camera testing environment; By combining a line scanning camera with a coaxial light source and a point light source, the surface of the cigarette filter is photographed and the point light source transmittance is detected; Simultaneously combining 3D laser camera scanners to establish the contour of the target; (3) Results: By using the above methods, the detection images of four types of filter rods were processed to obtain HSV image curves, grayscale images, and color histograms. These images were used for 3D model reconstruction and 15 3D feature maps were obtained; (4) Conclusions: The reconstructed 15 3D images can accurately distinguish four different filter rods, and this method can provide a reference for real-time detection during cigarette filter rod processing.","PeriodicalId":516634,"journal":{"name":"International Conference on Algorithm, Imaging Processing and Machine Vision (AIPMV 2023)","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-01-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140511869","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0