
Latest publications from UK-RAS Conference: Robots Working For and Among Us Proceedings

Wireless Power Transfer for Gas Pipe Inspection Robots
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.26
Doychinov, B. Kaddouh, G. Mills, B. T. Malik, N. Somjit, I. Robertson
Wireless power transfer in metal pipes is a promising alternative to tethered exploration robots, with strong potential to enable longer operating times. Here we present experimental results, including rectification efficiency, for a prototype gas pipe inspection robot with wireless power receiver functionality.
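The headline metric reported here is rectification efficiency. As a minimal worked example (the function and the numbers are illustrative assumptions, not values from the paper), it is simply the DC power recovered at the receiver divided by the RF power delivered to the rectifier:

```python
def rectification_efficiency(v_dc, r_load, p_rf_in):
    """DC output power delivered to the load divided by RF power into the rectifier."""
    p_dc = v_dc ** 2 / r_load   # P = V^2 / R for a resistive load
    return p_dc / p_rf_in

# Hypothetical numbers for illustration only: 2.0 V across a 1 kilo-ohm load
# recovered from 10 mW of RF input.
print(rectification_efficiency(2.0, 1e3, 10e-3))  # -> 0.4, i.e. 40 %
```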
Citations: 1
Robin: An Autonomous Robot for Diabetic Children
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.5
Matthew Lewis, Cognition Embodied Emotion, Lola Cañamero
{"title":"Robin: An Autonomous Robot for Diabetic Children","authors":"Matthew Lewis, Cognition Embodied Emotion, Lola Cañamero","doi":"10.31256/ukras17.5","DOIUrl":"https://doi.org/10.31256/ukras17.5","url":null,"abstract":"","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"175 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134449961","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Motor Imagery Classification based on RNNs with Spatiotemporal-Energy Feature Extraction
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.55
D-D Zhang, Jianlong Zheng, J. Fathi, M. Sun, F. Deligianni, G. Yang
With the recent advances in artificial intelligence and robotics, the Brain Computer Interface (BCI) has become a rapidly evolving research area. Motor imagery (MI) based BCIs have several applications in neurorehabilitation and the control of robotic prostheses because they offer the potential to seamlessly translate human intentions to machine language. However, to achieve adequate performance, these systems require extensive training with high-density EEG systems, even for two-class paradigms. Effectively extracting and translating EEG data features is a key challenge in BCI development. This paper presents a method based on Recurrent Neural Networks (RNNs) with spatiotemporal-energy feature extraction that significantly improves the performance of existing methods. We present cross-validation results based on EEG data collected with a 16-channel, dry-electrode system to demonstrate the practical use of our algorithm.

Introduction. Robotic control based on brainwave decoding can be used in a range of scenarios, including patients with locked-in syndrome, rehabilitation after a stroke, virtual reality games and so on. In these cases, subjects may not be able to move their limbs. For this reason, the development of MI-task-based BCI is very important [1]. During an MI task, the subjects imagine moving a specific part of their body without initiating the actual movements. This process involves the brain networks that are responsible for motor control, similarly to the actual movements. Decoding brain waves is challenging, since EEG signals have limited spatial resolution and a low signal-to-noise ratio. Furthermore, experimental conditions, such as subjects' concentration and prior experience with BCI, can confound the results. Thus far, several approaches have been proposed to classify MI task data, but their performance is limited even for two-class paradigms involving left- and right-hand MI tasks [2]. EEG-based BCI normally involves noise filtering, feature extraction and classification. Brain signals are normally analysed in cue-triggered or stimulus-triggered time windows. Related methods include identifying changes in Event Potentials (EPs), slow cortical potential shifts, quantifying oscillatory EEG components and so on [3]. These types of BCI operate with predefined time windows. Furthermore, inter- and intra-subject variability cannot be overlooked when finding a suitable feature representation model. Recently, Deep Neural Networks (DNNs) have emerged with promising results in several applications. Their adaptive nature allows them to automatically extract relevant features from data without extensive preprocessing and prior knowledge about the signals [4]. Convolutional Neural Networks (CNNs) have been used to classify EEG features by transforming the temporal domain into the spatial domain [5]. However, the CNN structure is static and inherently not suitable for processing temporal patterns. Furthermore, the trend in BCI is towards reducing the number of channels, which yields a sparse spatial representation of the signal and hinders the effectiveness of CNNs. To handle time-series data, Recurrent Neural Networks (RNNs) based on Long Short-Term Memory (LSTM) appear to be a better choice, since they can preserve the temporal characteristics of the signal [6]. This paper presents a method for decoding raw multi-channel EEG data based on RNNs and spatiotemporal features extracted from the EEG signals. Appropriate spatiotemporal feature extraction plays an important role in improving the learning rate of deep neural networks. The presented results are based on an EEG dataset acquired with a dry, 16-channel, active-electrode g.tec Nautilus system. Although wet active electrodes are the gold standard for EEG signal acquisition, they require long preparation times and conductive gel to reduce the skin-electrode impedance, which is uncomfortable for subjects [7]. Dry electrodes make it easier to bring BCI systems from the laboratory to patients' homes, but they pose the challenge of decoding lower-quality signals. More advanced feature extraction and classification methods therefore need to be developed.
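The abstract does not specify the network configuration, so the following is only a minimal sketch of the general idea: an LSTM-based recurrent classifier that consumes windows of 16-channel EEG and outputs two-class motor-imagery logits. The hidden size, window length and the absence of the spatiotemporal-energy preprocessing stage are assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class MILSTMClassifier(nn.Module):
    """Illustrative RNN classifier for two-class motor imagery.

    Input: a batch of EEG windows shaped (batch, time_steps, channels),
    e.g. 16 dry-electrode channels. Layer sizes are placeholder choices.
    """
    def __init__(self, n_channels=16, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size=n_channels, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):
        _, (h_n, _) = self.lstm(x)   # h_n: (1, batch, hidden), last hidden state
        return self.head(h_n[-1])    # class logits, shape (batch, n_classes)

# One hypothetical 2-second window sampled at 250 Hz: (batch=1, time=500, channels=16).
logits = MILSTMClassifier()(torch.randn(1, 500, 16))
print(logits.shape)  # torch.Size([1, 2])
```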
{"title":"Motor Imagery Classification based on RNNs with Spatiotemporal-Energy Feature Extraction","authors":"D-D Zhang, Jianlong Zheng, J. Fathi, M. Sun, F. Deligianni, G. Yang","doi":"10.31256/ukras17.55","DOIUrl":"https://doi.org/10.31256/ukras17.55","url":null,"abstract":"With the recent advances in artificial intelligence and robotics, Brain Computer Interface (BCI) has become a rapidly evolving research area. Motor imagery (MI) based BCIs have several applications in neurorehabilitation and the control of robotic prosthesis because they offer the potential to seamlessly translate human intentions to machine language. However, to achieve adequate performance, these systems require extensive training with high-density EEG systems even for two-class paradigms. Effectively extracting and translating EEG data features is a key challenge in Brain Computer Interface (BCI) development. This paper presents a method based on Recurrent Neural Networks (RNNs) with spatiotemporal-energy feature extraction that significantly improves the performance of existing methods. We present cross-validation results based on EEG data collected by a 16-channel, dry electrodes system to demonstrate the practical use of our algorithm. Introduction Robotic control, based on brainwave decoding, can be used in a range of scenarios including patients with locked-in syndrome, rehabilitation after a stroke, virtual reality games and so on. In these cases, subjects may not be able to move their limbs. For this reason, the development of MI tasks based BCI is very important [1]. During a MI task, the subjects imagine moving a specific part of their body without initiating the actual movements. This process involves the brain networks, which are responsible for motor control similarly to the actual movements. Decoding brain waves is challenging, since EEG signals have limited spatial resolution and low signal to noise ratio. Furthermore, experimental conditions, such as subjects’ concentration and prior experience with BCI can bring confounds to the results. Thus far, several approaches have been proposed to classify MI tasks based data but their performances are limited even for the two-class paradigms that involve left and right hand MI tasks [2]. EEG-based BCI normally involves noise filtering, feature extraction and classification. Brain signals are normally analysed in cue-triggered or stimulus-triggered time windows. Related methods include identifying changes in Event Potentials (EPs), slow cortical potentials shifts, quantify oscillatory EEG components and so on [3]. These types of BCI are operated with predefined time windows. Furthermore, the interand intra-subject variability cannot be overlooked when finding suitable feature representation model. Recently, Deep Neural Networks (DNNs) have emerged with promising results in several applications. Their adaptive nature allows them to automatically extract relevant features from data without extensive preprocessing and prior knowledge about the signals [4]. Convolutional Neural Networks (CNNs) have been used to classify EEG features by transforming the temporal domain into spatial domain [5]. However, the CNN structure is static and inherently not suitable for processing temporal patterns. 
F","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"204 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127046074","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Assessing Pose Features for Predicting User Intention during Dressing with Deep Networks
Pub Date : 2017-12-12 DOI: 10.31256/UKRAS17.1
Greg Chance, A. Jevtić, P. Caleb-Solly, G. Alenyà, S. Dogramadzi
{"title":"Assessing Pose Features for Predicting User Intention during Dressing with Deep Networks","authors":"Greg Chance, A. Jevtić, P. Caleb-Solly, G. Alenyà, S. Dogramadzi","doi":"10.31256/UKRAS17.1","DOIUrl":"https://doi.org/10.31256/UKRAS17.1","url":null,"abstract":"","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"13 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"123646982","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
A practical mSVG interaction method for patrol, search, and rescue aerobots
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.13
A. Abioye, S. Prior, T. G. Thomas, P. Saddington, S. Ramchurn
This paper briefly presents the multimodal speech and visual gesture (mSVG) control method for aerobots at higher nCA autonomy levels, using a patrol, search, and rescue application example. The developed mSVG control architecture is presented and briefly discussed. It was successfully tested using both MATLAB simulations and Python-based ROS Gazebo UAV simulations. Some limitations were identified, which form the basis for the further work presented.
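The abstract does not describe how the two modalities are combined, so the following is purely an illustrative sketch: the command vocabulary, the gesture representation and the scaling are assumptions, not the paper's mSVG architecture. It shows one simple fusion rule in which a spoken verb selects a motion primitive and a pointing-gesture vector supplies its direction.

```python
from dataclasses import dataclass

@dataclass
class VelocityCommand:
    vx: float        # forward velocity, m/s
    vy: float        # lateral velocity, m/s
    yaw_rate: float  # rad/s

def fuse_speech_and_gesture(speech_word, gesture_xy, speed_scale=0.5):
    """Toy multimodal fusion: speech selects the primitive, gesture refines it.

    speech_word: recognised keyword, e.g. "move", "turn", "stop" (assumed vocabulary)
    gesture_xy:  pointing vector (x forward, y left) from a gesture recogniser
    """
    gx, gy = gesture_xy
    if speech_word == "stop":
        return VelocityCommand(0.0, 0.0, 0.0)
    if speech_word == "move":   # translate in the pointed direction
        return VelocityCommand(speed_scale * gx, speed_scale * gy, 0.0)
    if speech_word == "turn":   # yaw towards the pointed side
        return VelocityCommand(0.0, 0.0, 0.3 if gy > 0 else -0.3)
    raise ValueError(f"unknown command word: {speech_word}")

# e.g. "move" while pointing ahead-left; in a ROS Gazebo test such a setpoint
# would typically be republished as a geometry_msgs/Twist message.
print(fuse_speech_and_gesture("move", (0.9, 0.3)))
```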
Citations: 0
A Modified Computed Torque Control Approach for a Teleoperation Master-Slave Robot Manipulator System
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.10
Ololade O. Obadina, J. Bernth, K. Althoefer, M. Shaheed
{"title":"A Modified Computed Torque Control Approach for a Teleoperation Master- Slave Robot Manipulator System","authors":"Ololade O. Obadina, J. Bernth, K. Althoefer, M. Shaheed","doi":"10.31256/ukras17.10","DOIUrl":"https://doi.org/10.31256/ukras17.10","url":null,"abstract":"","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"703 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116423558","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
An Optimised Deep Neural Network Approach for Forest Trail Navigation for UAV Operation within the Forest Canopy
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.7
Bruna G. Maciel-Pearson, Patrice Carbonneau, T. Breckon
Autonomous flight within a forest canopy represents a key challenge for generalised scene understanding on board a future Unmanned Aerial Vehicle (UAV) platform. Here we present an approach for automatic trail navigation within such an environment that successfully generalises across differing image resolutions, allowing UAVs with varying sensor payload capabilities to operate equally well in such challenging environmental conditions. Specifically, this work presents an optimised deep neural network architecture, capable of state-of-the-art performance across varying-resolution aerial UAV imagery, that improves forest trail detection for UAV guidance even when using significantly lower-resolution images representative of low-cost, search-and-rescue-capable UAV platforms.
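The paper's own architecture is not given in the abstract; the sketch below only illustrates one common way a trail classifier can be made resolution-agnostic, by ending a small convolutional network with global average pooling so that frames of different sizes map to the same set of steering logits. The layer sizes and the three steering classes are assumptions for illustration.

```python
import torch
import torch.nn as nn

class TrailDirectionNet(nn.Module):
    """Illustrative resolution-agnostic trail classifier (not the paper's network).

    Global average pooling lets the same weights accept frames of different
    resolutions, mapping each to steering-class logits.
    """
    def __init__(self, n_classes=3):  # e.g. turn-left / go-straight / turn-right
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # collapses any spatial size to 1x1
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):
        x = self.pool(self.features(x)).flatten(1)
        return self.head(x)

net = TrailDirectionNet()
for h, w in [(101, 101), (480, 640)]:          # two different input resolutions
    print(net(torch.randn(1, 3, h, w)).shape)  # torch.Size([1, 3]) in both cases
```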
Citations: 1
On Decision-making for Computation Offloading in Cloud-assisted Autonomous Vehicle Systems
Pub Date : 2017-12-12 DOI: 10.31256/ukras17.6
Yi Lu, C. Maple, T. Sheik, M. Dianati, A. Mouzakitis
{"title":"On Decision-making for Computation Offloading in Cloud-assisted Autonomous Vehicle Systems","authors":"Yi Lu, C. Maple, T. Sheik, M. Dianati, A. Mouzakitis","doi":"10.31256/ukras17.6","DOIUrl":"https://doi.org/10.31256/ukras17.6","url":null,"abstract":"","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115470036","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Geographies of Robotization and Automation
Pub Date : 2017-12-12 DOI: 10.31256/UKRAS17.4
M. Kovacic, A. Lockhart
{"title":"Geographies of Robotization and Automation","authors":"M. Kovacic, A. Lockhart","doi":"10.31256/UKRAS17.4","DOIUrl":"https://doi.org/10.31256/UKRAS17.4","url":null,"abstract":"","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"15 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133298993","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Wireless Communications in Nuclear Decommissioning Environments
Pub Date : 2017-12-04 DOI: 10.31256/ukras17.23
A. Buono, Neil Cockbain, Peter T. Green, B. Lennox
The use of Wireless Sensor Networks (WSN) is now widespread, with well-documented deployments across a diverse range of sectors including aerospace, agri-science and consumer electronics. In the nuclear industry there have been successful deployments of WSN technologies for instrumentation and control; however, there are significant challenges that need to be addressed before wireless sensing can be used in nuclear decommissioning environments. These challenges include: limited sources of power; the radiation tolerance of the sensor and communication system components; the severe attenuation of wireless signals through reinforced wall structures; and the need to deliver secure, interoperable and reliable communication.

Introduction. Robotics and automation applications within the nuclear decommissioning industry are rapidly increasing, to reduce the cost, time and dose exposure of workers [1]. In addition, there is the need to store nuclear waste and to monitor the condition of the packages in the stores [2]. The design, prototyping and evaluation of a Wireless Sensor Network capable of delivering remote sensing and control can reduce the cost and time of installing robotic applications and improve their performance, collecting data from hard-to-reach places that were not designed to be decommissioned. A successful application can therefore lead to wider use of robotics and automation in the nuclear industry.

Benefits and Challenges. Wireless Sensor Networks are extensively employed in agriculture; classic examples are applications that monitor soil and crop properties [3][4]. Similarly, the aerospace industry provides useful examples of Wireless Sensor Networks operating in harsh environments, for instance to monitor gas turbine engines [5]. In the nuclear industry there have been initiatives to deploy Commercial Off The Shelf (COTS) wireless instrumentation and control systems [6]. One such initiative resulted in Sellafield's first application of this technology [7], with a reported time saving of 16 weeks and a cost saving of £185k. However, there remain a number of significant challenges to address if Wireless Sensor Networks are to be deployed in nuclear decommissioning environments. One key challenge is the damage to COTS integrated circuits caused by the high radiation levels and elevated temperatures. There are also fundamental communication challenges resulting from the very high signal attenuation experienced by Radio Frequency (RF) signals propagating through reinforced concrete wall and floor structures. In addition, many legacy buildings in nuclear facilities were not designed to be decommissioned, and limited access and unknown conditions are a further problem. In these situations the wireless sensing systems will need to be battery-powered, with the possibility of power harvesting.

Wireless Sensor Network for the Nuclear Decommissioning Industry. A research project, sponsored by the Centre for Innovative Nuclear Decommissioning (CINDe) and the University of Manchester, is tasked with designing, prototyping and experimentally evaluating a Wireless Sensor Network capable of communicating through reinforced concrete wall and floor structures in nuclear decommissioning environments. Figure 1 shows a block diagram of the proposed Wireless Sensor Network system. The system consists of two main parts: a set of wireless sensing nodes in the nuclear decommissioning environment and a base station node in the operator environment. The sensing nodes are separated from the base station node by the reinforced concrete wall/floor structures. Each sensing node comprises multiple sensors; a wireless transceiver capable of transmitting sensor data and receiving control and configuration commands from the base station node; storage for the sensor measurements; and a control system that coordinates the functions of the node. The sensing nodes will be powered using energy harvesting and storage techniques. The base station node comprises a wireless transceiver and a control system; it receives the sensor measurements from the wireless sensing nodes and can control the functions of the sensing nodes by transmitting control and configuration information. Although only one base station node is shown in Figure 1, multiple base station nodes can be incorporated to support operation over a larger area. The communication system will be asymmetric: the devices deployed in the nuclear decommissioning environment will be designed with simple COTS electronic components, to limit the effects of radiation and minimise power consumption, while the base station node will compensate for imperfections in the transmitted signal, such as frequency drift, that result from using low-complexity COTS components. The control systems will play a fundamental role in the design of the Wireless Sensor Network; in particular, they will be designed with error detection and forward error correction capability. Another key point is ensuring an adequate and predictable operational lifetime, which underpins the benefits of the system in terms of cost and worker dose exposure. The operational lifetime will depend on the power consumption profile of the sensing nodes (assuming they are operated entirely from batteries) and on the effects of radiation on the COTS electronic components. The latter aspect will be studied using the irradiation capability of the Dalton Cumbrian Facility [8], where electronic components will be tested with a Cobalt-60 irradiator and the effect of total ionising dose on component characteristics will be measured. The Wireless Sensor Network will keep sensitive nuclear information secure by encrypting all transmitted data and actively controlling the range of the wireless transmissions. (Figure 1: block diagram of the Wireless Sensor Network system.)

Conclusion. A concept design of a Wireless Sensor Network system suitable for nuclear decommissioning environments has been presented. A successful prototype will provide the opportunity to increase the use of WSN technology, supporting the deployment of robotic and autonomous systems in the nuclear industry and bringing the benefits of lower installation cost and shorter completion times.
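The abstract stresses that operational lifetime depends on the sensing nodes' power-consumption profile when running from batteries. A minimal back-of-the-envelope sketch of how such a lifetime budget might be estimated follows; the function and all numbers are illustrative assumptions, not figures from the project.

```python
def node_lifetime_days(battery_mah, sleep_ua, active_ma, awake_seconds_per_hour):
    """Rough battery-lifetime estimate for a duty-cycled sensing node.

    battery_mah: usable battery capacity in mAh
    sleep_ua:    sleep current in microamps
    active_ma:   current while sensing/transmitting, in mA
    awake_seconds_per_hour: time spent awake each hour, in seconds
    """
    duty = awake_seconds_per_hour / 3600.0
    avg_ma = active_ma * duty + (sleep_ua / 1000.0) * (1.0 - duty)
    return battery_mah / avg_ma / 24.0

# Hypothetical node: 2600 mAh cell, 5 uA sleep, 20 mA awake, awake 10 s per hour.
print(round(node_lifetime_days(2600, 5, 20, 10)))  # roughly 1800 days
```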
{"title":"Wireless Communications in Nuclear Decommissioning Environments","authors":"A. Buono, Neil Cockbain, Peter T. Green, B. Lennox","doi":"10.31256/ukras17.23","DOIUrl":"https://doi.org/10.31256/ukras17.23","url":null,"abstract":"The use of Wireless Sensor Networks (WSN) is now widespread, with well-documented deployments across a diverse range of sectors including aerospace, agri-science and consumer electronics. In the nuclear industry there have been successful deployments of the WSN technologies for instrumentation and control, however, there are significant challenges that need to be addressed before wireless sensing can be used in nuclear decommissioning environments. These challenges include: limited sources of power; the radiation tolerance of the sensor and communication system components; the severe attenuation of wireless signals through reinforced wall structures; and the need to deliver secure, interoperable and reliable communication. Introduction Robotics and Automation applications within the nuclear decommissioning industry are rapidly increasing to reduce the cost, time and dose exposure of workers [1]. In addition, there is the need to store nuclear waste and monitoring the condition of the packages in the stores [2]. The design, prototype and evaluation of Wireless Sensor Network with the capability to deliver remote sensing and control can result in reduction of the cost and time to install robotics application and improve the performance, collecting data from hard to reach places not designed to be decommissioned. As a result a successful application can lead to an increase of robotics and automation in the nuclear Industry. Benefits and Challenges Wireless Sensor Networks are extensively employed in agriculture, classic examples are application to monitor soil and crop properties [3] [4]. Similarly in the aerospace industry it is possible to find useful example of Wireless Sensors Networks in harsh environments, to monitor gas turbine engines [5]. In the nuclear industry there have been initiatives to deploy Commercial Off The Shelf (COTS) wireless instrumentation and control systems [6]. One such initiative resulted in Sellafield’s first application of this technology [7], with reported time saving of 16 weeks and a cost saving of £185k. However, there remain a number of significant challenges to address if Wireless Sensor Networks are to be deployed in nuclear decommissioning environments. One key challenge is the damaged to COTS integrated circuits caused by the high radiation levels and elevated temperatures. There are also fundamental communication challenges resulting from the very high signal attenuation experienced by Radio Frequency (RF) signals propagating through reinforced concrete wall and floor structures. In addition, many legacy buildings in nuclear facilities were not designed to be decommissioning, and limited access and unknown conditions are a further problem. In these situations the wireless sensing systems will need to be battery-powered, with the possibility of power harvesting. 
Wireless Sensor Network for Nuclear Decommissioning Industry A research project, sponsored by the Centre for Innovative Nuclear Decommissioning (CINDe","PeriodicalId":392429,"journal":{"name":"UK-RAS Conference: Robots Working For and Among Us Proceedings","volume":"30 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2017-12-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"125229796","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0