Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208614
Semi-Autonomous Control of Drones/UAVs for Wilderness Search and Rescue
John McConkey, Yugang Liu
Wilderness search and rescue (WiSAR) has been one of the most significant robotic applications of the past decade. To succeed in these life-saving operations, the deployment of drones, or unmanned aerial vehicles (UAVs), has become an inevitable trend. This paper presents a low-cost solution for semi-autonomous control of drones/UAVs in WiSAR applications. An ArduPilot-based flight controller was implemented to enable autonomous trajectory following by the drone/UAV. A high-resolution action camera attached to the drone/UAV recorded video footage during the flight, which was related to GPS location through timestamps. The recorded footage was manually transferred to a laptop for potential target detection using OpenCV and YOLOv3. The system design is reported in detail, and experiments were conducted to verify the effectiveness of the developed system.
{"title":"Semi-Autonomous Control of Drones/UAVs for Wilderness Search and Rescue","authors":"John McConkey, Yugang Liu","doi":"10.1109/CACRE58689.2023.10208614","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208614","url":null,"abstract":"Wilderness search and rescue (WiSAR) has been one of the most significant robotic applications in the past decade. In order to succeed in these life-saving operations, the deployment of drones or unmanned aerial vehicles (UAVs) has become an inevitable trend. This paper presents the development of a low-cost solution for semi-autonomous control of drones/UAVs in WiSAR applications. ArduPilot based flight controller was implemented to enable autonomous trajectory following of the drones/UAVs. A high resolution action camera attached to the drone/UAV was used to take video footage during the flight, which was related to the GPS location through the time stamp. The recorded video footage was manually transferred to a laptop for potential target detection using OpenCV and YOLOv3. The system design is reported in detail, and experiments were conducted to verify the effectiveness of the developed system.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"211 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123580805","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208613
Digital Image Forensic Analyzer to Detect AI-generated Fake Images
Galamo Monkam, Jie Yan
In recent years, the widespread use of smartphones and social media has led to a surge in the amount of digital content available. However, this increase in the use of digital images has also led to a rise in techniques for altering image content. It is therefore essential, for both the image forensics field and the general public, to be able to differentiate genuine or authentic images from manipulated or fake imagery. Deep learning has made it easier to create unreal images, which underscores the need for a more robust platform to distinguish real from fake imagery. However, in the image forensics field, researchers often develop very complicated deep learning architectures to train their models; this training process is expensive, and the resulting models are often huge, which limits their usability. This research focuses on the realism of state-of-the-art image manipulations and how difficult they are to detect, automatically or by humans. We built a machine learning model called G-JOB GAN, based on Generative Adversarial Networks (GANs), that can generate state-of-the-art, realistic-looking images with improved resolution and quality. Our model can detect a realistically generated image with an accuracy of 95.7%. Our near-term aim is to implement a system that can detect fake images with a probability of 1 − P, where P is the chance of identical fingerprints. To achieve this objective, we have implemented and evaluated various GAN architectures, such as StyleGAN, ProGAN, and the original GAN.
{"title":"Digital Image Forensic Analyzer to Detect AI-generated Fake Images","authors":"Galamo Monkam, Jie Yan","doi":"10.1109/CACRE58689.2023.10208613","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208613","url":null,"abstract":"In recent years, the widespread use of smartphones and social media has led to a surge in the amount of digital content available. However, this increase in the use of digital images has also led to a rise in the use of techniques to alter image contents. Therefore, it is essential for both the image forensics field and the general public to be able to differentiate between genuine or authentic images and manipulated or fake imagery. Deep learning has made it easier to create unreal images, which underscores the need to establish a more robust platform to detect real from fake imagery. However, in the image forensics field, researchers often develop very complicated deep learning architectures to train the model. This training process is expensive, and the model size is often huge, which limits the usability of the model. This research focuses on the realism of state-of-the-art image manipulations and how difficult it is to detect them automatically or by humans. We built a machine learning model called G-JOB GAN, based on Generative Adversarial Networks (GAN), that can generate state-of-the-art, realistic-looking images with improved resolution and quality. Our model can detect a realistically generated image with an accuracy of 95.7%. Our near future aim is to implement a system that can detect fake images with a probability of odds of 1- P, where P is the chance of identical fingerprints. To achieve this objective, we have implemented and evaluated various GAN architectures such as Style GAN, Pro GAN, and the Original GAN.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"39 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122503292","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208414
Heterogeneous Graph Convolutional Network for Visual Reinforcement Learning of Action Detection
Liangliang Wang, Chengxi Huang, Xinwei Chen
Existing action detection approaches do not take the spatio-temporal structural relationships of action clips into account, which limits their applicability in real-world scenarios; exploiting these relationships can benefit detection. To this end, this paper formulates the action detection problem as a reinforcement learning process that is rewarded, through adjustment of the detection scheme, for both the clip sampling and the classification results. In particular, our framework consists of a heterogeneous graph convolutional network that represents the spatio-temporal features capturing the inherent relations, a policy network that determines the probabilities over a predefined action sampling space, and a classification network for action clip recognition. We accomplish joint learning of the networks by considering the temporal intersection over union and the Euclidean distance between detected clips and the ground truth. Experiments on ActivityNet v1.3 and THUMOS14 demonstrate the effectiveness of our method.
{"title":"Heterogeneous Graph Convolutional Network for Visual Reinforcement Learning of Action Detection","authors":"Liangliang Wang, Chengxi Huang, Xinwei Chen","doi":"10.1109/CACRE58689.2023.10208414","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208414","url":null,"abstract":"Existing action detection approaches do not take spatio-temporal structural relationships of action clips into account, which leads to a low applicability in real-world scenarios and can benefit detecting if exploited. To this end, this paper proposes to formulate the action detection problem as a reinforcement learning process which is rewarded by observing both the clip sampling and classification results via adjusting the detection schemes. In particular, our framework consists of a heterogeneous graph convolutional network to represent the spatio-temporal features capturing the inherent relation, a policy network which determines the probabilities of a predefined action sampling spaces, and a classification network for action clip recognition. We accomplish the network joint learning by considering the temporal intersection over union and Euclidean distance between detected clips and ground-truth. Experiments on ActivityNet v1.3 and THUMOS14 demonstrate our method.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123777125","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208620
Development of Online Monitoring System for Mine Hoist Cage
Yang Zhao, Yu Feng
The cage of a mine hoist is an important device for personnel and vehicle transportation in a vertical shaft hoisting system. To monitor the internal environment of the cage, the miners' dynamics, and the cage's working conditions in a timely and accurate way, an ARM-based online monitoring system for the mine hoist cage has been developed. The overall scheme of the online monitoring system is proposed; it is composed of a Wi-Fi wireless communication network, an upper-computer monitoring platform, a video monitoring platform, online monitoring substations, and a generator power supply. The substation, based on an STM32F103ZET6 microcontroller, realizes cage working-condition monitoring, historical data query, alarm threshold setting, and threshold alarms. Finally, the wireless transmission base station, the humidity and temperature sensor, and the encoder were tested on a testbed. The experiments show that the measured parameters meet the expected requirements and that the system can support the safe operation of the hoist cage.
{"title":"Development of Online Monitoring System for Mine Hoist Cage","authors":"Yang Zhao, Yu Feng","doi":"10.1109/CACRE58689.2023.10208620","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208620","url":null,"abstract":"The cage of mine hoist is an important device for personnel and vehicle transportation in vertical shaft hoisting system. In order to timely and accurately monitor the internal environment of the cage, miners' dynamics and cage working conditions, the online monitoring system of mine hoist cage based on ARM has been developed. The overall scheme of online monitoring system is proposed, which is composed of WIFI wireless communication network, upper computer monitoring platform, video monitoring platform, online monitoring system sub-station and generator power supply. The on-line monitoring system substation based on stm32f103zet6 microcontroller realizes the functions of cage working condition monitoring, historical data query, alarm threshold setting and threshold alarm. Finally, the wireless transmission base station, wet temperature sensor and encoder are tested by setting up a testbed. The experiment shows that the parameters meet the expected requirements, can provide support for the safe operation of the hoist cage.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126804651","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208741
Detection of River Floating Waste Based on Decoupled Diffusion Model
Changsong Pang, Yuwei Cheng
In recent years, the conservation of water resources has attracted widespread attention. The development and application of water surface robots can achieve efficient cleaning of floating waste. However, owing to the small size of floating waste on the water surface, its detection remains a great challenge in the field of object detection: existing object detection algorithms such as YOLO (You Only Look Once), SSD (Single-Shot Detector), and Faster R-CNN do not perform well. In the past two years, diffusion-based networks have shown powerful capabilities in object detection. In this paper, we decouple the position and size regression of detection boxes and propose a novel decoupled diffusion network for detecting floating waste in images. To further improve detection accuracy, we design a new box renewal strategy to obtain the desired boxes during the inference stage. To evaluate the performance of the proposed method, we test the decoupled diffusion network on a public dataset and verify its superiority over other object detection methods.
{"title":"Detection of River Floating Waste Based on Decoupled Diffusion Model","authors":"Changsong Pang, Yuwei Cheng","doi":"10.1109/CACRE58689.2023.10208741","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208741","url":null,"abstract":"In recent years, the conservation of water resources has attracted widespread attention. The development and application of water surface robots can achieve efficient cleaning of floating waste. However, limited to the small size of floating waste on the water surface, its detection remains a great challenge in the field of object detection. Existing object detection algorithms cannot perform well, such as YOLO (You Only Look Once), SSD (Single-Shot Detector), and Faster R-CNN. In the past two years, diffusion-based networks have shown powerful capabilities in object detection. In this paper, we decouple the position and size regressions of detection boxes, to propose a novel decoupled diffusion network for detecting the floating waste in images. To further promote the detection accuracy of floating waste, we design a new box renewal strategy to obtain desired boxes during the inference stage. To evaluate the performance of the proposed methods, we test the decoupled diffusion network on a public dataset and verify the superiority compared with other object detection methods.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"9 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116674599","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208437
Infrared Image Transformation via Spatial Propagation Network
Ying Xu, Ningfang Song, Xiong Pan, Jingchun Cheng, Chunxi Zhang
In recent years, there has been an increasing demand for intelligent infrared recognition methods. As current high-precision recognition algorithms such as deep networks rely heavily on massive amounts of training data, the lack of infrared databases has become a major limitation for technological development, resulting in an urgent demand for intelligent infrared image simulation technology. Unlike most infrared image simulation techniques, which expand the amount of infrared data under thermal-equilibrium conditions, this paper proposes a novel way to simulate infrared images: generating infrared images of objects in scenes undergoing an unsteady heat conduction process along the time axis. Specifically, this paper incorporates a spatial propagation network to predict the equivalent thermal conductivity coefficients for an input infrared image captured at a certain time point, and then infers the infrared images at subsequent time points by simulating the physical heat conduction process based on the predicted conductivity coefficients. We carry out extensive experiments and analysis on datasets composed of real infrared photos and PDE-simulated images, demonstrating that the proposed infrared image generation method can realize transformation simulation and dataset expansion of infrared images with high speed and high quality.
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10209058
System Design and Workspace Optimization of a Parallel Mechanism-Based Portable Robot for Remote Ultrasound
Zhaokun Deng, Xilong Hou, Mingrui Hao, Shuangyi Wang
Robotic ultrasound systems have the potential to improve the conventional practice of diagnosis. Because of the adequate degrees of freedom embedded in a small footprint, parallel mechanism-based ultrasound robots have attracted attention in the field; however, analysis of their configuration, design parameters, and workspace is limited. To address this issue and further promote potential clinical translation, this paper proposes a task-driven, two-stage mechanism optimization method that uses the effective regular workspace and the local condition index to determine the parameters for the demanding clinical workspace of a parallel mechanism-based ultrasound robot. The design and implementation of the robot are then introduced, along with the justification of the parameter selection. To analyze performance, an optical tracking-based experiment and a phantom-based human-robot comparison study were performed. The results show that the workspace meets the required clinical needs and that, despite its small footprint, the mechanism achieves a reasonable workspace. The kinematic error was found to be 0.2 mm and 0.3°. Based on these results and a quantitative analysis of ultrasound images acquired manually and robotically, it is concluded that the robot can effectively deliver the demanded function and is a promising tool for further deployment.
{"title":"System Design and Workspace Optimization of a Parallel Mechanism-Based Portable Robot for Remote Ultrasound","authors":"Zhaokun Deng, Xilong Hou, Mingrui Hao, Shuangyi Wang","doi":"10.1109/CACRE58689.2023.10209058","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10209058","url":null,"abstract":"The robotic ultrasound system has the potential to improve the conventional practice of diagnosing. Because of the adequate degrees of freedom embedded in a small footprint, the parallel mechanism-based ultrasound robot has attracted attention in the field. However, the analysis of its configuration, design parameters, and workspace is limited. To solve this issue and further promote the potential clinical translation, this paper proposes a task-driven, two-stage mechanism optimization method using the effective regular workspace and the local condition index to determine the parameters for the demanding clinic workspace of a parallel mechanism-based ultrasound robot. The design and implementation method of the robot are then introduced, along with the justification of parameter selection. To analyze the performance, an optical tracking-based experiment and a phantom-based human-robot comparison study were performed. The results show that the workspace meets the required clinical needs, and despite its small footprint, the mechanism could have a reasonable workspace. The kinematic error was found to be 0.2 mm and 0.3°. Based on the above results and the quantitative analysis of the ultrasound images acquired manually and robotically, it was concluded that the robot can effectively deliver the demand function and would be a promising tool for further deployment.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"23 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127649317","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208310
Research on Motion/Force Transmission Characteristics and Good Transmission Workspace Identification Method of Multi-drive Parallel Mechanism
Ming Han, Wangwang Lian, Dong Yang, Tiejun Li
This paper puts forward a novel parallel mechanism with multiple driving modes to address the inherent limitations of workspace and singular configurations in single-driven parallel mechanisms. Taking the planar 6R parallel mechanism as an example, we conduct numerical and simulation-based studies to demonstrate the superior kinematic performance of the multi-drive-mode parallel mechanism. The analysis involves an initial investigation and characterization of the mechanism, the development of a prototype, the establishment of an inverse kinematics model, and the introduction of a local transmission index. Motion/force transmission indices under the single driving mode and the multiple driving modes are then compared and analyzed. Drawing on the motion/force transmission index, we identify the mechanism's good-transmission workspace and perform a performance comparison analysis. The results demonstrate unequivocally that engaging the multi-drive mode substantially enhances the parallel mechanism's kinematic performance.
{"title":"Research on Motion/Force Transmission Characteristics and Good Transmission Workspace Identification Method of Multi-drive Parallel Mechanism","authors":"Ming Han, Wangwang Lian, Dong Yang, Tiejun Li","doi":"10.1109/CACRE58689.2023.10208310","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208310","url":null,"abstract":"This paper puts forward a novel parallel mechanism with multiple driving modes to address the inherent limitations of workspace and singular configurations in single-driven parallel mechanisms. Taking the planar 6R parallel mechanism as an example, we conduct numerical and simulation-based studies to demonstrate the superior kinematic performance of the multi-drive mode parallel mechanism. The analytical process involved initial investigation and characterization of the mechanism, development of prototype, establishment of inverse kinematics model and introduction of local transmission index. Motion/force transmission indices under both single driving mode and multiple driving modes were then compared and analyzed. Drawing on the motion/force transmission index, we identified the good transmission workspace of the mechanism and performed a performance comparison analysis. The results unequivocally demonstrate that engaging the multi-drive mode substantially enhances the parallel mechanism's kinematic performance.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"47 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126895664","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208324
3D Scanning Vision System Design and Implementation in Large Shipbuilding Environments
Hang Yu, Yi-xi Zhao, Ran Zhang, Haiping Guo, Chongben Ni, Jin-hong Ding
To achieve efficient, intelligent welding of common workpieces such as subassemblies without relying on models, and to adapt to the uniquely large manufacturing scenes of the shipbuilding industry, this study used 3D area-array cameras instead of traditional line-laser scanning sensors. Based on 3D vision processing technologies such as multisensor data calibration and point cloud registration, a 3D reconstruction and weld reconstruction vision system was designed for large shipbuilding scenes, and the algorithms were optimized to improve scanning efficiency and accuracy. Through 3D scanning reconstruction and weld reconstruction tests on typical ship workpieces in large scenes, it was verified that the vision system can markedly improve scanning efficiency and accuracy in large scenes and provide efficient, accurate visual data support for intelligent welding of common workpieces such as subassemblies.
{"title":"3D Scanning Vision System Design and Implementation in Large Shipbuilding Environments","authors":"Hang Yu, Yi-xi Zhao, Ran Zhang, Haiping Guo, Chongben Ni, Jin-hong Ding","doi":"10.1109/CACRE58689.2023.10208324","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208324","url":null,"abstract":"To achieve efficient and intelligent welding of common workpieces such as subassemblies without relying on models, and to adapt to the unique large manufacturing scenes of the shipbuilding industry, 3D area-array cameras were used in this study instead of traditional line laser scanning sensors. Based on 3D vision processing technologies such as multisensor data calibration and point cloud registration, the 3D reconstruction and weld reconstruction vision system was designed for large scenes in shipbuilding, and the algorithm was optimized from the perspective of improving scanning efficiency and scanning accuracy. Through 3D scanning reconstruction and weld reconstruction tests on typical ship workpieces in large scenes, it was verified that the vision system in this paper can markedly improve scanning efficiency and scanning accuracy in large scenes, and provide efficient and accurate visual data support for intelligent welding of common workpieces such as subassemblies.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114969302","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01  DOI: 10.1109/CACRE58689.2023.10208384
Crime-Intent Sentiment Detection on Twitter Data Using Machine Learning
B. Bokolo, Ebikela Ogegbene-Ise, Lei Chen, Qingzhong Liu
This research examines sentiment analysis in the context of crime intent using machine learning algorithms. A comparison is made between a crime-intent dataset collected through a Twitter developer account and Kaggle's sentiment140 dataset for Twitter sentiment analysis. The algorithms employed include Support Vector Machine (SVM), Naïve Bayes, and Long Short-Term Memory (LSTM). The findings indicate that LSTM outperforms the other algorithms, achieving high accuracy (97%) and precision (99%) in detecting crime tweets; it is thus concluded that crime tweets can be accurately identified.
{"title":"Crime-Intent Sentiment Detection on Twitter Data Using Machine Learning","authors":"B. Bokolo, Ebikela Ogegbene-Ise, Lei Chen, Qingzhong Liu","doi":"10.1109/CACRE58689.2023.10208384","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208384","url":null,"abstract":"This research examines sentiment analysis in the context of crime intent using machine learning algorithms. A comparison is made between a crime intent dataset generated from a Twitter developer account and Kaggle's sentiment140 dataset for Twitter sentiment analysis. The algorithms employed include Support Vector Machine (SVM), Naïve Bayes, and Long Short-Term Memory (LSTM). The findings indicate that LSTM outperforms the other algorithms, achieving high accuracy (97%) and precision (99%) in detecting crime tweets. Thus, it is concluded that the crime tweets were accurately identified.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131181391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}