
International Conference on Image Processing and Intelligent Control: Latest Publications

A neural network model for adversarial defense based on deep learning
Pub Date : 2023-08-09 DOI: 10.1117/12.3000789
Zhiying Wang, Yong Wang
Deep learning has achieved great success in many fields, such as image classification and object detection. Adding a small perturbation that is hard for the human eye to detect to an original image can make a neural network output an incorrect result with high confidence; an image with such a perturbation added is called an adversarial example. The existence of adversarial examples poses a serious security problem for deep learning. To defend effectively against adversarial-example attacks, this paper analyzes existing attack and defense methods and proposes a defense method based on image reconstruction. The data set is derived from the ImageNet 1k data set, with some filtering and expansion. Four attack modes, FGSM, BIM, DeepFool, and C&W, are selected to test the defense method. Building on the EDSR network, a multi-scale feature fusion module and a subspace attention module are added. By capturing the global correlation information of the image, the perturbation can be removed while image texture details are better preserved, improving defense performance. Experimental results show that the proposed method achieves a good defense effect.
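FGSM, the first of the four attack modes tested, perturbs an image by a single signed-gradient step. A minimal NumPy sketch, using a toy linear "model" in place of a real network (the weights, input, and `epsilon` value are illustrative, not from the paper):

```python
import numpy as np

def fgsm_perturb(x, grad, epsilon):
    """Fast Gradient Sign Method: move each pixel by +/-epsilon in the
    direction that increases the loss, then clip to the valid pixel range."""
    x_adv = x + epsilon * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy stand-in for a network: a linear score w.x, whose gradient with
# respect to the input is simply w. A real attack backpropagates through
# the network to obtain this gradient.
w = np.array([0.5, -1.0, 0.25])
x = np.array([0.2, 0.8, 0.5])
x_adv = fgsm_perturb(x, grad=w, epsilon=0.1)
```

The defense studied in the paper tries to reconstruct the clean image from `x_adv` before classification, rather than modifying the classifier itself.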
Citations: 0
Research on optical detection technology for underwater archaeology
Pub Date : 2023-08-09 DOI: 10.1117/12.3002208
Wei Mu, Ruohan Zheng, Wenrui Zhang
To address the fact that image processing technology and underwater target recognition algorithms are not yet mature in the field of underwater archaeology, this article applies object detection and underwater image enhancement techniques to that field. We propose a method for detecting and recognizing underwater cultural heritage based on optical devices, comprising ocean image preprocessing and underwater cultural heritage object recognition based on YOLO V4. Experimental results demonstrate that the proposed method can effectively and accurately detect and recognize targets in underwater cultural heritage scenes, and that the clear images of underwater relics produced by the preprocessing step can better assist archaeologists in observing the species and distribution of samples in the real scene.
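The abstract does not detail the preprocessing pipeline, but a common first step for color-cast underwater imagery is a gray-world white balance; a NumPy sketch under that assumption (not necessarily the authors' method):

```python
import numpy as np

def gray_world_balance(img):
    """Gray-world white balance: scale each color channel so its mean
    matches the global mean, removing the blue/green cast typical of
    underwater images. Expects an HxWx3 uint8 image."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / channel_means
    balanced = img * gains
    return np.clip(balanced, 0, 255).astype(np.uint8)

# Synthetic image with a strong color cast: channel means 50, 100, 150.
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[..., 0], img[..., 1], img[..., 2] = 50, 100, 150
out = gray_world_balance(img)
```

After balancing, the three channel means are pulled to the common global mean, which is what makes downstream detectors such as YOLO less sensitive to the water's color absorption.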
Citations: 0
Video description method with fusion of instance-aware temporal features
Pub Date : 2023-08-09 DOI: 10.1117/12.3000765
Ju Huang, He Yan, Lingkun Liu, Yuhan Liu
Video understanding still faces challenges today, especially in describing the visual content of videos in natural language. Existing video encoder-decoder models struggle to extract deep semantic information and to understand the complex contextual semantics of a video sequence. Furthermore, different visual elements in a video contribute differently to the generated text description. In this paper, we propose a video description method that fuses instance-aware temporal features. We extract local features of instances along the temporal sequence to enhance the perception of temporal instances, and employ spatial attention to perform a weighted fusion of the temporal features. Finally, we use a bidirectional long short-term memory network to encode the contextual semantic information of the video sequence, helping to generate higher-quality descriptive text. Experimental results on two public datasets demonstrate that our method achieves good performance on various evaluation metrics.
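The attention-weighted fusion of per-instance features can be sketched as a softmax-weighted sum; the shapes, scores, and function names below are illustrative, not the paper's actual implementation:

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - np.max(scores))  # subtract max for numerical stability
    return e / e.sum()

def attention_fuse(instance_feats, attn_scores):
    """Softmax the per-instance attention scores and return the weighted
    sum of the instance feature vectors (rows of instance_feats)."""
    weights = softmax(np.asarray(attn_scores, dtype=float))
    return weights @ np.asarray(instance_feats, dtype=float)

feats = np.array([[1.0, 0.0],   # instance 1 feature vector
                  [0.0, 1.0]])  # instance 2 feature vector
fused = attention_fuse(feats, attn_scores=[0.0, 0.0])  # equal attention
```

In the full model, the fused vector would then be fed, frame by frame, into the bidirectional LSTM that produces the description.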
Citations: 0
Oilfield water injection surface monitoring system
Pub Date : 2023-08-09 DOI: 10.1117/12.3000768
Shuchen Xing, Nan Song, Xuhui Wen
Because the oilfield production environment is complex and harsh, oilfield enterprises need a digital, intelligent monitoring system to monitor and control water injection in real time. This paper designs a field monitoring system for oilfield water injection based on configuration software. The system uses the general-purpose monitoring configuration software of Beijing Force Control Yuantong Technology Co., Ltd., which supports modular function division and provides fault alarms, data recording, and queries. PID control is applied to the system, and its performance is verified by simulation. The results confirm that the system can control the flow stably and intelligently and can ensure long-term effective operation.
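A discrete PID controller of the kind applied to the water-injection flow loop can be sketched as follows. The gains, time step, and the simple first-order plant are illustrative values for the sketch, not the paper's tuned parameters:

```python
class PID:
    """Discrete PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Illustrative first-order plant: the flow rate responds directly to the
# control signal. The loop drives the flow toward the target setpoint.
pid = PID(kp=1.0, ki=0.5, kd=0.0, dt=0.1)
flow, target = 0.0, 1.0
for _ in range(300):
    flow += pid.step(target, flow) * 0.1
```

The integral term removes steady-state error in the flow, which is why a plain proportional controller is rarely sufficient for injection-rate regulation.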
Citations: 0
Research on the improved apple classification method of AlexNet
Pub Date : 2023-08-09 DOI: 10.1117/12.3000778
Huifang Yang, Weihua Wang, Zhicheng Mao
To address the high cost and low efficiency of manually sorting apples, we propose an improved apple classification method based on the AlexNet architecture. The algorithm adds a batch normalization layer after each convolutional layer to speed up training, and replaces the fully connected layer with a global average pooling layer to reduce the number of trainable parameters and save training time. To improve robustness, we also performed data augmentation on the training samples to obtain an expanded dataset before validating the algorithm. Experimental results show that, compared with the original AlexNet, the improved network shortens training time by 0.54%, increases testing speed by 2.5%, and improves accuracy by 1.12%. Moreover, its training time is lower than that of the other networks compared (AlexNet, ResNet50, VGG16). The improved AlexNet network can classify apples efficiently and quickly and promotes the automation of apple classification.
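The two architectural changes (batch normalization after each convolution, and global average pooling in place of the fully connected head) can be illustrated as operations on NCHW feature maps. This is a NumPy sketch of the operations themselves, not the authors' training code:

```python
import numpy as np

def batch_norm(x, eps=1e-5):
    """Per-channel batch normalization for NCHW feature maps: normalize
    each channel over the batch and spatial dimensions."""
    mean = x.mean(axis=(0, 2, 3), keepdims=True)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def global_average_pool(x):
    """Replace flatten + fully-connected: average each channel's HxW map
    to a single value, leaving no large dense weight matrix to train."""
    return x.mean(axis=(2, 3))

# Toy feature maps: batch of 2, 3 channels, 4x4 spatial.
x = np.arange(2 * 3 * 4 * 4, dtype=float).reshape(2, 3, 4, 4)
normed = batch_norm(x)
pooled = global_average_pool(x)   # shape (2, 3): one value per channel
```

GAP is what produces the parameter saving reported above: a classifier head goes from `channels * H * W * classes` dense weights to just `channels * classes`.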
Citations: 0
Improved PSO-GA-based LSSVM flight conflict detection model
Pub Date : 2023-08-09 DOI: 10.1117/12.3000794
Qiting Liu, Qi Wang, Yulin Cao, Jinyue Wang
With the rapid development of the civil aviation industry, air traffic flow is increasing, placing a heavy load on air traffic control, airports, and other units. The safety of flight activities has therefore become a research hotspot, and flight conflict detection, a necessary link in ensuring that safety, must operate ever more accurately, efficiently, and stably as traffic grows. Based on the least squares support vector machine (LSSVM), this study uses information provided by ADS-B, such as heading, position, and altitude, combined with the regulations and conflict protection zones used in actual operation, to classify both the occurrence and the severity of flight conflicts at a given moment, i.e., to perform multi-class classification. A hybrid genetic + particle swarm optimization algorithm is used to optimize the support vector machine model, yielding an efficient and accurate real-time flight conflict detection model. Simulation analysis shows that the model is faster and more accurate than a traditional SVM and has excellent conflict detection capability; by differentiating the classified conflict levels and performing supervised learning, it can provide accurate warnings of upcoming flight conflicts, drawing the early attention of ATCs and providing a basis for subsequent conflict resolution. Eventually, the conflict detection model is expected to be compatible with airborne and ground surveillance equipment, which can significantly improve the safety of flight activities; it has broad application prospects and important research value.
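Unlike a standard SVM, an LSSVM trains by solving a single linear system (Suykens' formulation) rather than a quadratic program. A NumPy sketch with an RBF kernel and a toy two-class dataset; the kernel choice, `gamma`, `sigma`, and the data are illustrative, and in the paper's pipeline the PSO-GA hybrid would be tuning such hyperparameters:

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gaussian RBF kernel matrix between two sets of row vectors."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_train(X, y, gamma, sigma):
    """Solve the LSSVM dual:  [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]  # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy separable data: two clusters labelled -1 and +1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [3.0, 3.0], [3.0, 4.0]])
y = np.array([-1.0, -1.0, 1.0, 1.0])
b, alpha = lssvm_train(X, y, gamma=10.0, sigma=1.0)
preds = np.sign(lssvm_predict(X, b, alpha, X, sigma=1.0))
```

The linear-system training is what makes LSSVM attractive for a real-time detector: refitting on new ADS-B data is a single solve rather than an iterative optimization.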
Citations: 0
3D target detection based on dynamic occlusion processing
Pub Date : 2023-08-09 DOI: 10.1117/12.3000786
Jishen Peng, Jun Ma, Li Li
To solve the problem of multiple vehicles occluding one another in 3D target detection for self-driving vehicles, this paper proposes a monocular 3D detection method that includes dynamic occlusion determination. The method adds a dynamic occlusion processing module to the CenterNet3D framework to improve the accuracy of 3D detection of occluded vehicles on the road. Specifically, the occlusion determination module uses the 2D detection results as the condition for determining occlusion relationships, with the occlusion determination threshold varying with the depth value. An occlusion compensation module then compensates and adjusts the 3D detection results of the occluded vehicles, and finally the 3D target detection results are output. Experimental results show that the method improves the accuracy of both vehicle center-point detection and 3D dimension estimation under long-distance continuous vehicle occlusion. Compared with other existing methods, the accuracy of the 3D detection results and bird's-eye-view detection results improves by 1%-2.64% at an intersection-over-union threshold of 0.5. The method can compensate for occluded vehicles in 3D target detection and improve accuracy.
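The occlusion determination step (testing 2D-box overlap against a threshold that changes with depth) can be sketched as follows. The `base_thresh` value and the depth coefficient `k` are invented for illustration; the paper does not publish its exact threshold schedule:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def is_occluded(box_near, box_far, depth_far, base_thresh=0.3, k=0.01):
    """Depth-aware occlusion test: distant vehicles project to small boxes,
    so the overlap threshold is relaxed as the farther object's depth grows."""
    thresh = max(0.05, base_thresh - k * depth_far)
    return iou(box_near, box_far) > thresh

occluded = is_occluded([0, 0, 2, 2], [1, 1, 3, 3], depth_far=20.0)
```

Boxes flagged this way would then be handed to the compensation module that adjusts their 3D outputs.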
Citations: 0
Research and application of 3D simulation of truck formation based on Unreal Engine
Pub Date : 2023-08-09 DOI: 10.1117/12.3001392
Zhenzhou Wang, Fang Wu, Jiangnan Zhang, Jianguang Wu
To show the transport conditions of goods on different roads and provide more realistic, three-dimensional transport information for situation-inference users, this paper proposes a simple, PID-controlled 3D simulation method for truck formations based on Unreal Engine. First, building on the basic theory of automatic control [1], a longitudinal lollipop controller and a transverse PID controller are designed from the lollipop-control and PID-control ideas, respectively, and combined with a perception-decision framework to realize automatic driving of each truck along a spline on the road. On this basis, a truck formation controller is designed to realize formation driving with high fidelity based on the leader-follower strategy. The results show that a truck under PID control can accurately follow the road line, and that with the cooperation of the formation controller, the whole process of forming, maintaining, and driving a truck formation can be essentially reproduced.
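In a leader-follower scheme of this kind, each follower typically tracks a point a fixed gap behind the leader along the leader's heading, and the lateral/longitudinal controllers close the loop on that target. A minimal sketch (the function name and gap value are illustrative, not from the paper):

```python
import math

def follower_target(leader_pos, leader_heading_rad, gap):
    """Target point `gap` metres behind the leader, measured along the
    leader's heading; the follower's controllers then steer toward it."""
    x, y = leader_pos
    return (x - gap * math.cos(leader_heading_rad),
            y - gap * math.sin(leader_heading_rad))

# Leader at (10, 0) heading along +x; follower should aim 5 m behind it.
target = follower_target(leader_pos=(10.0, 0.0), leader_heading_rad=0.0, gap=5.0)
```

Chaining this computation down the column (each truck treating the one ahead as its leader) is what keeps the whole formation spaced during driving.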
Citations: 0
Evaluation of design factors of an interactive interface of intangible cultural heritage APP based on user experience
Pub Date : 2023-08-09 DOI: 10.1117/12.3000771
Chengjun Zhou, Ruowei Li
This paper takes the interactive interface of an intangible cultural heritage mobile APP as its carrier and studies its design from the perspective of user experience. Using user interviews, observation, and qualitative and quantitative research, and based on a theoretical model of user experience, the authors collected and analyzed data through interviews and questionnaires to obtain four evaluation indexes and eight sub-criteria for the interactive interface of intangible cultural heritage apps. The analytic hierarchy process was introduced for weight calculation: the weight of each evaluation factor was obtained through investigation and computation, and the evaluation level of each element was determined with reference to a Likert scale. Evaluation data for the design scheme were collected by questionnaire, fuzzy analysis was carried out on the results, and the final evaluation was obtained according to the principle of full membership, yielding implementable suggestions for improving the interactive interface design and the user experience. The research results have theoretical guiding significance for the interactive interface design of intangible cultural heritage apps.
Citations: 0
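The evaluation pipeline described in the abstract above — AHP weights derived from a pairwise-comparison matrix, then fuzzy comprehensive evaluation with the grade chosen by maximum membership — can be sketched in a few lines of Python. This is an illustrative sketch only: the pairwise judgments in `A`, the membership matrix `R`, and the five Likert grade labels are placeholder values, not the authors' survey data.

```python
from math import prod

def ahp_weights(pairwise):
    """Approximate AHP priority weights by the geometric-mean (root) method."""
    n = len(pairwise)
    geo = [prod(row) ** (1.0 / n) for row in pairwise]  # row geometric means
    s = sum(geo)
    return [g / s for g in geo]  # normalize so the weights sum to 1

def consistency_ratio(pairwise, weights):
    """CR = CI / RI; a judgment matrix is usually accepted when CR < 0.1."""
    n = len(pairwise)
    # Estimate lambda_max as the mean of (A w)_i / w_i over all rows.
    aw = [sum(a * w for a, w in zip(row, weights)) for row in pairwise]
    lam = sum(x / w for x, w in zip(aw, weights)) / n
    ci = (lam - n) / (n - 1)
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]  # Saaty's random-index table
    return ci / ri

def fuzzy_evaluate(weights, membership):
    """Fuzzy comprehensive evaluation B = W * R; pick the grade with max membership."""
    scores = [sum(w * row[j] for w, row in zip(weights, membership))
              for j in range(len(membership[0]))]
    return scores, scores.index(max(scores))

# Placeholder pairwise-comparison matrix for the four evaluation indexes.
A = [[1,   2,   4,   3],
     [1/2, 1,   3,   2],
     [1/4, 1/3, 1,   1/2],
     [1/3, 1/2, 2,   1]]
w = ahp_weights(A)
assert consistency_ratio(A, w) < 0.1  # judgments are acceptably consistent

# Placeholder membership matrix R: the share of respondents rating each
# index at each of five Likert grades (rows sum to 1).
R = [[0.3, 0.4, 0.2, 0.1, 0.0],
     [0.2, 0.5, 0.2, 0.1, 0.0],
     [0.1, 0.3, 0.4, 0.2, 0.0],
     [0.2, 0.4, 0.3, 0.1, 0.0]]
grades = ["excellent", "good", "fair", "marginal", "poor"]
scores, best = fuzzy_evaluate(w, R)
print(grades[best])  # prints: good
```

The geometric-mean method is a common closed-form substitute for the principal-eigenvector computation; for a 4x4 matrix the two agree closely, and the consistency check guards against contradictory pairwise judgments.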
Application of Videolog visualization technology in workover operation
Pub Date : 2023-08-09 DOI: 10.1117/12.3001436
Ying Zhang, Jiatian Zhang, Wenhao Jin
Accurate knowledge of actual downhole conditions is of great significance for workover operations. Videolog visualization technology can clearly and accurately capture downhole color video and provide effective guidance for workover work. This paper introduces the system composition, working principle, and functional parameters of the Videolog equipment and gives an example of its practical application in a workover operation, showing that Videolog visualization technology is more efficient, safer, and more intuitive than traditional downhole video technology and has good application prospects in the workover field.
{"title":"Application of Videolog visualization technology in workover operation","authors":"Ying Zhang, Jiatian Zhang, Wenhao Jin","doi":"10.1117/12.3001436","DOIUrl":"https://doi.org/10.1117/12.3001436","url":null,"abstract":"The actual underground situation is of great significance for workover operation. Videolog visualization technology can clearly and accurately obtain the underground color video information, and provide effective guidance for workover operation. This paper introduces the system composition, working principle and functional parameters of Videolog equipment, and gives an example of its practical application in workover operation, which shows that Videolog visualization technology is more efficient, safe and intuitive than traditional downhole video technology, and has a good application prospect in workover operation field.","PeriodicalId":210802,"journal":{"name":"International Conference on Image Processing and Intelligent Control","volume":"114 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-08-09","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116248229","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0