
Latest publications: 2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)

Semi-Autonomous Control of Drones/UAVs for Wilderness Search and Rescue
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208614
John McConkey, Yugang Liu
Wilderness search and rescue (WiSAR) has been one of the most significant robotic applications of the past decade. To succeed in these life-saving operations, the deployment of drones, or unmanned aerial vehicles (UAVs), has become an inevitable trend. This paper presents the development of a low-cost solution for semi-autonomous control of drones/UAVs in WiSAR applications. An ArduPilot-based flight controller was implemented to enable autonomous trajectory following by the drone/UAV. A high-resolution action camera attached to the drone/UAV recorded video footage during the flight, which was related to the GPS location through its time stamp. The recorded footage was manually transferred to a laptop for potential-target detection using OpenCV and YOLOv3. The system design is reported in detail, and experiments were conducted to verify the effectiveness of the developed system.
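The abstract's linking of frames to GPS fixes via time stamps can be illustrated with a short sketch (not the authors' code; function and field names are hypothetical): interpolate a position for a frame timestamp from a sorted GPS log.

```python
from bisect import bisect_left

def gps_for_frame(frame_ts, gps_log):
    """Linearly interpolate a (lat, lon) fix for a video frame timestamp.

    gps_log: list of (timestamp, lat, lon) tuples, sorted by timestamp.
    Frames outside the log's time range are clamped to the nearest fix.
    """
    times = [t for t, _, _ in gps_log]
    if frame_ts <= times[0]:
        return gps_log[0][1:]
    if frame_ts >= times[-1]:
        return gps_log[-1][1:]
    i = bisect_left(times, frame_ts)
    t0, lat0, lon0 = gps_log[i - 1]
    t1, lat1, lon1 = gps_log[i]
    w = (frame_ts - t0) / (t1 - t0)
    return (lat0 + w * (lat1 - lat0), lon0 + w * (lon1 - lon0))
```

Linear interpolation between fixes is a reasonable assumption for a slow-moving drone; the paper itself does not specify the matching scheme.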
Citations: 3
Digital Image Forensic Analyzer to Detect AI-generated Fake Images
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208613
Galamo Monkam, Jie Yan
In recent years, the widespread use of smartphones and social media has led to a surge in the amount of digital content available. However, this increase in the use of digital images has also led to a rise in the use of techniques to alter image contents. Therefore, it is essential for both the image forensics field and the general public to be able to differentiate between genuine or authentic images and manipulated or fake imagery. Deep learning has made it easier to create unreal images, which underscores the need for a more robust platform to distinguish real from fake imagery. However, in the image forensics field, researchers often develop very complicated deep learning architectures to train their models. This training process is expensive, and the model size is often huge, which limits the usability of the model. This research focuses on the realism of state-of-the-art image manipulations and how difficult they are to detect, automatically or by humans. We built a machine learning model called G-JOB GAN, based on Generative Adversarial Networks (GAN), that can generate state-of-the-art, realistic-looking images with improved resolution and quality. Our model can detect a realistically generated image with an accuracy of 95.7%. Our near-term aim is to implement a system that can detect fake images with odds of 1 − P, where P is the chance of identical fingerprints. To achieve this objective, we have implemented and evaluated various GAN architectures such as StyleGAN, ProGAN, and the original GAN.
Citations: 1
Heterogeneous Graph Convolutional Network for Visual Reinforcement Learning of Action Detection
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208414
Liangliang Wang, Chengxi Huang, Xinwei Chen
Existing action detection approaches do not take the spatio-temporal structural relationships of action clips into account, which limits their applicability in real-world scenarios; exploiting these relationships can benefit detection. To this end, this paper formulates action detection as a reinforcement learning process that is rewarded by observing both the clip-sampling and classification results while adjusting the detection scheme. In particular, our framework consists of a heterogeneous graph convolutional network that represents the spatio-temporal features capturing the inherent relations, a policy network that determines the probabilities over a predefined action-sampling space, and a classification network for action clip recognition. We accomplish joint learning of the networks by considering the temporal intersection over union and the Euclidean distance between detected clips and the ground truth. Experiments on ActivityNet v1.3 and THUMOS14 demonstrate our method.
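The two quantities the reward relies on, temporal intersection over union and the distance between detected and ground-truth clips, are standard and easy to sketch (a minimal illustration, not the paper's implementation; clips are assumed to be (start, end) pairs):

```python
def temporal_iou(clip_a, clip_b):
    """Temporal intersection-over-union of two clips given as (start, end)."""
    (s1, e1), (s2, e2) = clip_a, clip_b
    inter = max(0.0, min(e1, e2) - max(s1, s2))
    union = (e1 - s1) + (e2 - s2) - inter
    return inter / union if union > 0 else 0.0

def center_distance(clip_a, clip_b):
    """Euclidean distance between clip centers on the (1-D) time axis."""
    (s1, e1), (s2, e2) = clip_a, clip_b
    return abs((s1 + e1) / 2 - (s2 + e2) / 2)
```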
Citations: 0
Development of Online Monitoring System for Mine Hoist Cage
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208620
Yang Zhao, Yu Feng
The cage of a mine hoist is an important device for transporting personnel and vehicles in a vertical-shaft hoisting system. To monitor the internal environment of the cage, the miners' activity, and the cage's working conditions in a timely and accurate manner, an ARM-based online monitoring system for the mine hoist cage has been developed. The overall scheme of the online monitoring system is proposed, comprising a WiFi wireless communication network, an upper-computer monitoring platform, a video monitoring platform, online monitoring sub-stations, and a generator power supply. The sub-station, based on an STM32F103ZET6 microcontroller, realizes cage working-condition monitoring, historical-data query, alarm-threshold setting, and threshold alarms. Finally, the wireless transmission base station, the wet temperature sensor, and the encoder were tested on a purpose-built testbed. The experiments show that the measured parameters meet the expected requirements, so the system can support safe operation of the hoist cage.
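The threshold-alarm function the sub-station provides can be sketched in a few lines (an illustration only; the actual firmware runs on the STM32 and is not published, and all names and limits here are hypothetical):

```python
def check_thresholds(readings, thresholds):
    """Return the alarms raised by one set of cage sensor readings.

    readings:   dict of sensor name -> measured value
    thresholds: dict of sensor name -> (low, high) allowed range;
                sensors without a configured range never alarm.
    """
    alarms = []
    for name, value in readings.items():
        low, high = thresholds.get(name, (float("-inf"), float("inf")))
        if value < low:
            alarms.append((name, value, "below minimum"))
        elif value > high:
            alarms.append((name, value, "above maximum"))
    return alarms
```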
Citations: 0
Detection of River Floating Waste Based on Decoupled Diffusion Model
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208741
Changsong Pang, Yuwei Cheng
In recent years, the conservation of water resources has attracted widespread attention. The development and application of water-surface robots can achieve efficient cleaning of floating waste. However, owing to the small size of floating waste on the water surface, its detection remains a great challenge in the field of object detection; existing algorithms such as YOLO (You Only Look Once), SSD (Single-Shot Detector), and Faster R-CNN do not perform well. In the past two years, diffusion-based networks have shown powerful capabilities in object detection. In this paper, we decouple the position and size regressions of detection boxes and propose a novel decoupled diffusion network for detecting floating waste in images. To further improve detection accuracy, we design a new box-renewal strategy to obtain the desired boxes during the inference stage. To evaluate the proposed methods, we test the decoupled diffusion network on a public dataset and verify its superiority over other object detection methods.
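The decoupling of position and size regressions amounts to parameterizing each box by its center and its extent separately; a minimal sketch of that parameterization (standard box algebra, not the paper's network code):

```python
def decode_box(center, size):
    """Compose a corner-format box (x1, y1, x2, y2) from decoupled
    position (cx, cy) and size (w, h) regressions."""
    cx, cy = center
    w, h = size
    return (cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2)

def encode_box(box):
    """Split a corner-format box back into its position and size parts."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2), (x2 - x1, y2 - y1)
```

Keeping the two parts separate lets a model (or a diffusion process) refine where a small object is independently of how big it is.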
Citations: 0
Infrared Image Transformation via Spatial Propagation Network
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208437
Ying Xu, Ningfang Song, Xiong Pan, Jingchun Cheng, Chunxi Zhang
In recent years, there has been an increasing demand for intelligent infrared recognition methods. As current high-precision intelligent recognition algorithms such as deep networks largely rely on massive amounts of training data, the lack of infrared databases has become a major limitation on technological development, creating an urgent demand for intelligent infrared image simulation technology. Different from most infrared image simulation techniques, which expand the amount of infrared data under thermal-balance conditions, this paper proposes a novel way to simulate infrared images: generating infrared images of objects in a scene undergoing an unsteady heat-conduction process along the time axis. Specifically, this paper incorporates a spatial propagation network to predict the equivalent thermal-conductivity coefficients for an input infrared image captured at a certain time point, and then infers the infrared images at subsequent time points by simulating the physical heat-conduction process with the predicted coefficients.
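The heat-conduction simulation described above can be sketched in one dimension with an explicit finite-difference step using a per-cell diffusivity, standing in for the predicted conductivity map (a minimal illustration under assumed Dirichlet boundaries, not the authors' 2-D implementation):

```python
def heat_step(u, alpha, dt, dx):
    """Advance the 1-D heat equation u_t = alpha * u_xx by one explicit
    finite-difference step. alpha is a per-cell equivalent thermal
    diffusivity, so conduction can vary across the field, analogous to the
    predicted coefficient map. Endpoints are held fixed (Dirichlet
    boundaries). Stable when max(alpha) * dt / dx**2 <= 0.5.
    """
    new = list(u)
    for i in range(1, len(u) - 1):
        new[i] = u[i] + alpha[i] * dt / dx ** 2 * (u[i - 1] - 2 * u[i] + u[i + 1])
    return new
```

Iterating this step propagates an initial temperature field forward in time, which is the mechanism the paper uses to infer infrared frames at later time points.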
Citations: 0
System Design and Workspace Optimization of a Parallel Mechanism-Based Portable Robot for Remote Ultrasound
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10209058
Zhaokun Deng, Xilong Hou, Mingrui Hao, Shuangyi Wang
The robotic ultrasound system has the potential to improve conventional diagnostic practice. Because it packs adequate degrees of freedom into a small footprint, the parallel mechanism-based ultrasound robot has attracted attention in the field. However, analysis of its configuration, design parameters, and workspace has been limited. To address this and further promote potential clinical translation, this paper proposes a task-driven, two-stage mechanism optimization method that uses the effective regular workspace and the local condition index to determine the parameters for the demanding clinical workspace of a parallel mechanism-based ultrasound robot. The design and implementation of the robot are then introduced, along with the justification of the parameter selection. To analyze the performance, an optical tracking-based experiment and a phantom-based human-robot comparison study were performed. The results show that the workspace meets the required clinical needs and that, despite its small footprint, the mechanism has a reasonable workspace. The kinematic error was found to be 0.2 mm and 0.3°. Based on the above results and a quantitative analysis of the ultrasound images acquired manually and robotically, it was concluded that the robot can effectively deliver the demanded function and would be a promising tool for further deployment.
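The local condition index used in the optimization is commonly defined as the reciprocal of the Jacobian's condition number, i.e. the ratio of its smallest to largest singular value (1 means isotropic motion transmission, 0 means a singular pose). A minimal 2×2 sketch, computing the singular values analytically from J^T J (illustrative only; the paper's mechanism has more degrees of freedom):

```python
import math

def condition_index(J):
    """Local condition index 1/kappa(J) for a 2x2 Jacobian J."""
    (a, b), (c, d) = J
    # Squared singular values are the eigenvalues of the symmetric J^T J:
    # trace = sum of squares of entries, det(J^T J) = det(J)^2.
    t = a * a + b * b + c * c + d * d
    det = a * d - b * c
    disc = math.sqrt(max(t * t - 4 * det * det, 0.0))
    s_max = math.sqrt((t + disc) / 2)
    s_min = math.sqrt(max((t - disc) / 2, 0.0))
    return s_min / s_max if s_max > 0 else 0.0
```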
Citations: 0
Research on Motion/Force Transmission Characteristics and Good Transmission Workspace Identification Method of Multi-drive Parallel Mechanism
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208310
Ming Han, Wangwang Lian, Dong Yang, Tiejun Li
This paper puts forward a novel parallel mechanism with multiple driving modes to address the inherent workspace limitations and singular configurations of single-driven parallel mechanisms. Taking the planar 6R parallel mechanism as an example, we conduct numerical and simulation-based studies to demonstrate the superior kinematic performance of the multi-drive-mode parallel mechanism. The analysis involved initial investigation and characterization of the mechanism, development of a prototype, establishment of an inverse kinematics model, and introduction of a local transmission index. Motion/force transmission indices under the single driving mode and multiple driving modes were then compared and analyzed. Drawing on the motion/force transmission index, we identified the mechanism's good-transmission workspace and performed a comparative performance analysis. The results unequivocally demonstrate that engaging the multi-drive mode substantially enhances the parallel mechanism's kinematic performance.
Citations: 0
3D Scanning Vision System Design and Implementation in Large Shipbuilding Environments
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208324
Hang Yu, Yi-xi Zhao, Ran Zhang, Haiping Guo, Chongben Ni, Jin-hong Ding
To achieve efficient and intelligent welding of common workpieces such as subassemblies without relying on models, and to adapt to the unique large manufacturing scenes of the shipbuilding industry, 3D area-array cameras were used in this study instead of traditional line laser scanning sensors. Based on 3D vision processing technologies such as multisensor data calibration and point cloud registration, the 3D reconstruction and weld reconstruction vision system was designed for large scenes in shipbuilding, and the algorithm was optimized from the perspective of improving scanning efficiency and scanning accuracy. Through 3D scanning reconstruction and weld reconstruction tests on typical ship workpieces in large scenes, it was verified that the vision system in this paper can markedly improve scanning efficiency and scanning accuracy in large scenes, and provide efficient and accurate visual data support for intelligent welding of common workpieces such as subassemblies.
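A common way to check point-cloud registration of the kind used above is the root-mean-square error between corresponding points after alignment; a minimal sketch (generic metric, not the paper's pipeline):

```python
import math

def rmse_alignment(source, target):
    """Root-mean-square distance between corresponding 3-D points of two
    equally sized clouds, a standard check after registration."""
    assert len(source) == len(target) and source, "clouds must match in size"
    sq = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(source, target):
        sq += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(sq / len(source))
```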
Citations: 0
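The point cloud registration mentioned in the abstract above is the step that aligns overlapping scans into a common coordinate frame. The paper's own implementation is not shown; as a minimal illustration of the underlying least-squares rigid alignment, here is a self-contained 2D closed-form sketch (the planar analogue of the alignment step in 3D registration pipelines). All function and variable names are illustrative, not from the paper.

```python
import math

def rigid_align_2d(src, dst):
    """Closed-form least-squares rigid registration of matched 2D point pairs.

    Returns (theta, tx, ty) such that rotating src by theta and then
    translating by (tx, ty) best aligns it with dst. This is a toy 2D
    analogue of the rigid-alignment step used when stitching 3D scans.
    """
    n = len(src)
    # Centroids of both point sets.
    sx = sum(p[0] for p in src) / n
    sy = sum(p[1] for p in src) / n
    dx = sum(p[0] for p in dst) / n
    dy = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of the centered pairs.
    num = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        x, y, u, v = x - sx, y - sy, u - dx, v - dy
        num += x * v - y * u   # cross terms -> sin(theta)
        den += x * u + y * v   # dot terms   -> cos(theta)
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation that maps the rotated source centroid onto the target centroid.
    tx = dx - (c * sx - s * sy)
    ty = dy - (s * sx + c * sy)
    return theta, tx, ty
```

In 3D, the rotation no longer has a closed form via a single angle and is typically recovered with an SVD (the Kabsch algorithm), usually inside an ICP loop when correspondences are unknown.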
Crime-Intent Sentiment Detection on Twitter Data Using Machine Learning
Pub Date : 2023-07-01 DOI: 10.1109/CACRE58689.2023.10208384
B. Bokolo, Ebikela Ogegbene-Ise, Lei Chen, Qingzhong Liu
This research examines sentiment analysis in the context of crime intent using machine learning algorithms. A comparison is made between a crime intent dataset generated from a Twitter developer account and Kaggle's sentiment140 dataset for Twitter sentiment analysis. The algorithms employed include Support Vector Machine (SVM), Naïve Bayes, and Long Short-Term Memory (LSTM). The findings indicate that LSTM outperforms the other algorithms, achieving high accuracy (97%) and precision (99%) in detecting crime tweets. Thus, it is concluded that the crime tweets were accurately identified.
{"title":"Crime-Intent Sentiment Detection on Twitter Data Using Machine Learning","authors":"B. Bokolo, Ebikela Ogegbene-Ise, Lei Chen, Qingzhong Liu","doi":"10.1109/CACRE58689.2023.10208384","DOIUrl":"https://doi.org/10.1109/CACRE58689.2023.10208384","url":null,"abstract":"This research examines sentiment analysis in the context of crime intent using machine learning algorithms. A comparison is made between a crime intent dataset generated from a Twitter developer account and Kaggle's sentiment140 dataset for Twitter sentiment analysis. The algorithms employed include Support Vector Machine (SVM), Naïve Bayes, and Long Short-Term Memory (LSTM). The findings indicate that LSTM outperforms the other algorithms, achieving high accuracy (97%) and precision (99%) in detecting crime tweets. Thus, it is concluded that the crime tweets were accurately identified.","PeriodicalId":447007,"journal":{"name":"2023 8th International Conference on Automation, Control and Robotics Engineering (CACRE)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131181391","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
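Of the three classifiers compared in the abstract above, Naïve Bayes is the simplest to show end-to-end. The sketch below is not the paper's implementation; it is a minimal pure-Python multinomial Naïve Bayes with add-one (Laplace) smoothing over tokenized text, trained on made-up toy examples for illustration only.

```python
import math
from collections import Counter, defaultdict

def train_nb(docs):
    """Train a multinomial Naive Bayes text classifier.

    docs: list of (token_list, label) pairs. Returns (log_prior,
    log_likelihood, vocabulary) with add-one smoothing.
    """
    class_counts = Counter(label for _, label in docs)
    word_counts = defaultdict(Counter)
    vocab = set()
    for tokens, label in docs:
        word_counts[label].update(tokens)
        vocab.update(tokens)
    n = len(docs)
    log_prior = {c: math.log(k / n) for c, k in class_counts.items()}
    log_like = {}
    for c in class_counts:
        # Add-one smoothing: every vocabulary word gets a pseudo-count of 1.
        total = sum(word_counts[c].values()) + len(vocab)
        log_like[c] = {w: math.log((word_counts[c][w] + 1) / total)
                       for w in vocab}
    return log_prior, log_like, vocab

def predict_nb(model, tokens):
    """Return the label maximizing log P(class) + sum of log P(word|class)."""
    log_prior, log_like, vocab = model
    scores = {c: log_prior[c] + sum(log_like[c][w] for w in tokens if w in vocab)
              for c in log_prior}
    return max(scores, key=scores.get)
```

A real pipeline on tweets would add tokenization, stop-word handling, and TF-IDF or embedding features; the LSTM that performed best in the study would instead be built with a deep learning framework.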