
Latest publications from the 2017 18th International Conference on Advanced Robotics (ICAR)

Cooperative motion planning of redundant rover manipulators on uneven terrains
Pub Date : 2017-07-01 DOI: 10.1109/ICAR.2017.8023502
R. Raja, B. Dasgupta, A. Dutta
In this paper we consider the problem of cooperative motion planning for a redundant mobile manipulator on uneven terrain. The approach formulates trajectory planning as a non-linear constrained minimization of the mobile manipulator's joint-angle movement at each instant. The main problems to solve are (i) the redundancy in the system, considering the parameters of wheel-terrain interaction, (ii) the cooperative behavior of the mobile manipulator while performing the task, and (iii) manipulability issues. To perform a task, the manipulator moves towards the desired location while the mobile base moves to enlarge the manipulator's task space. Weighting factors are introduced to define the relative importance of the movement of each joint of the mobile manipulator, and a quality measure is computed to quantify the capability of the mobile manipulator in a particular configuration. The trajectory planning and redundancy resolution problem is solved by the Augmented Lagrangian Method (ALM). Several simulations were performed to evaluate the method. The simulation and experimental results show that the method provides feasible trajectories and successfully tracks the desired end-effector path.
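As a toy illustration of the ALM machinery the abstract refers to, the sketch below solves one planning instance for a hypothetical 2-link planar arm: minimize weighted joint movement subject to an end-effector equality constraint. The link lengths, weight matrix, and penalty parameter are illustrative assumptions; the paper's rover model and wheel-terrain parameters are not reproduced.

```python
# A minimal ALM sketch, assuming a toy 2-link planar arm (not the
# paper's rover-manipulator model).
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8                      # hypothetical link lengths

def fk(q):
    """Forward kinematics: end-effector (x, y) of a 2-link planar arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def plan_step(q_prev, x_des, W=np.diag([1.0, 0.5]), mu=10.0, iters=20):
    """One ALM instance: minimize weighted joint movement subject to
    the end-effector reaching x_des (equality constraint h(q) = 0)."""
    lam = np.zeros(2)                  # Lagrange multipliers
    q = q_prev.copy()
    for _ in range(iters):
        def L_aug(q):
            h = fk(q) - x_des          # constraint violation
            dq = q - q_prev
            return (dq @ W @ dq        # weighted joint movement
                    + lam @ h + 0.5 * mu * h @ h)
        q = minimize(L_aug, q).x       # inner unconstrained solve
        h = fk(q) - x_des
        lam += mu * h                  # multiplier update
        if np.linalg.norm(h) < 1e-6:
            break
    return q

q_next = plan_step(np.array([0.3, 0.4]), x_des=np.array([1.2, 0.9]))
```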
Citations: 0
Controlled tactile exploration and haptic object recognition
Pub Date : 2017-06-27 DOI: 10.1109/ICAR.2017.8023495
Massimo Regoli, Nawid Jamali, G. Metta, L. Natale
In this paper we propose a novel method for in-hand object recognition. The method is composed of a grasp stabilization controller and two exploratory behaviours to capture the shape and the softness of an object. Grasp stabilization plays an important role in recognizing objects. First, it prevents the object from slipping and facilitates the exploration of the object. Second, reaching a stable and repeatable position adds robustness to the learning algorithm and increases invariance with respect to the way in which the robot grasps the object. The stable poses are estimated using a Gaussian mixture model (GMM). We present experimental results showing that using our method the classifier can successfully distinguish 30 objects. We also compare our method with a benchmark experiment, in which the grasp stabilization is disabled. We show, with statistical significance, that our method outperforms the benchmark method.
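The stable-pose estimation step can be illustrated with scikit-learn's GaussianMixture. The sketch below fits a GMM to synthetic hand-configuration data and reads the component means as candidate stable poses; the feature space and component count are assumptions, not the authors' setup.

```python
# A minimal GMM sketch for stable-pose estimation, assuming synthetic
# 3-D hand-configuration features (illustrative, not the paper's data).
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Hypothetical stand-in for recorded hand configurations at grasp time.
poses = np.vstack([rng.normal([0.1, 0.3, 0.0], 0.02, (200, 3)),
                   rng.normal([0.4, 0.1, 0.2], 0.02, (200, 3))])

gmm = GaussianMixture(n_components=2, covariance_type='full').fit(poses)
stable_poses = gmm.means_              # component means ~ stable poses
print(stable_poses)
```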
Citations: 10
Independent motion detection with event-driven cameras
Pub Date : 2017-06-27 DOI: 10.1109/ICAR.2017.8023661
Valentina Vasco, Arren J. Glover, Elias Mueggler, D. Scaramuzza, L. Natale, C. Bartolozzi
Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, since only moving objects cause brightness changes. However, cameras mounted on a moving robot are typically non-stationary, and the same tracking problem becomes confounded by background clutter events due to the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and, when no independently moving objects are present, learns the statistics of their motion as a function of the robot's joint velocities. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ∼ 90% and show that the method is robust to changes in speed of both the head and the target.
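The core idea, predicting corner velocities from joint velocities and flagging outliers, can be sketched with a simple regression model. The linear ego-motion map, the synthetic data, and the threshold below are illustrative assumptions; the paper learns richer motion statistics.

```python
# A minimal outlier-detection sketch, assuming a linear map from joint
# velocities to corner image velocities (an illustrative simplification).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Synthetic training set, recorded while no independent object moves.
joint_vel = rng.normal(size=(500, 3))          # e.g. 3 head/neck joints
A_true = np.array([[2.0, -0.5, 0.1],
                   [0.3,  1.5, -0.2]])         # hidden ego-motion map
corner_vel = joint_vel @ A_true.T + rng.normal(0, 0.05, (500, 2))

model = LinearRegression().fit(joint_vel, corner_vel)

def is_independent(jv, cv, thresh=0.5):
    """True if the measured corner velocity cv disagrees with the
    ego-motion prediction for joint velocities jv by more than thresh."""
    pred = model.predict(jv.reshape(1, -1))[0]
    return bool(np.linalg.norm(cv - pred) > thresh)

jv = rng.normal(size=3)
print(is_independent(jv, jv @ A_true.T))            # False: ego-motion only
print(is_independent(jv, jv @ A_true.T + [3, 0]))   # True: moving object
```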
Citations: 27
Robot trajectory planning method based on genetic chaos optimization algorithm
Pub Date : 2017-05-31 DOI: 10.1109/ICAR.2017.8023673
Qiwan Zhang, Mingting Yuan, R. Song
In order to smooth the trajectory of the robot end-effector and optimize the robot's running time, this paper presents a new robot trajectory planning method based on a genetic chaos optimization algorithm. First, a quintic polynomial is used to interpolate the position nodes in joint space to model the robot's running trajectory. Then, a genetic chaos optimization algorithm combining the genetic algorithm with the chaos algorithm is introduced. Finally, simulation and analysis show that the method makes the running trajectory of the robot end-effector smooth and time-optimal under velocity, acceleration, and jerk constraints.
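The quintic interpolation step is standard: six boundary conditions (position, velocity, and acceleration at both ends of a segment) determine six polynomial coefficients. A minimal sketch with illustrative boundary values, leaving out the GA/chaos time-optimization layer:

```python
# Quintic joint-space interpolation: solve for the coefficients of
# q(t) from boundary conditions. Boundary values are illustrative.
import numpy as np

def quintic_coeffs(q0, qf, T, v0=0.0, vf=0.0, a0=0.0, af=0.0):
    """Coefficients c of q(t) = c0 + c1 t + ... + c5 t^5 matching
    position/velocity/acceleration at t = 0 and t = T."""
    M = np.array([
        [1, 0, 0,    0,      0,       0],
        [0, 1, 0,    0,      0,       0],
        [0, 0, 2,    0,      0,       0],
        [1, T, T**2, T**3,   T**4,    T**5],
        [0, 1, 2*T,  3*T**2, 4*T**3,  5*T**4],
        [0, 0, 2,    6*T,    12*T**2, 20*T**3],
    ], dtype=float)
    b = np.array([q0, v0, a0, qf, vf, af], dtype=float)
    return np.linalg.solve(M, b)

c = quintic_coeffs(q0=0.0, qf=1.2, T=2.0)
t = np.linspace(0, 2.0, 5)
q = np.polyval(c[::-1], t)      # evaluate q(t) along the segment
```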
Citations: 6
A fully end-to-end deep learning approach for real-time simultaneous 3D reconstruction and material recognition
Pub Date : 2017-03-14 DOI: 10.1109/ICAR.2017.8023499
Cheng Zhao, Li Sun, R. Stolkin
This paper addresses the problem of simultaneous 3D reconstruction and material recognition and segmentation. Enabling robots to recognise different materials (concrete, metal etc.) in a scene is important for many tasks, e.g. robotic interventions in nuclear decommissioning. Previous work on 3D semantic reconstruction has predominantly focused on recognition of everyday domestic objects (tables, chairs etc.), whereas previous work on material recognition has largely been confined to single 2D images without any 3D reconstruction. Meanwhile, most 3D semantic reconstruction methods rely on computationally expensive post-processing, using Fully-Connected Conditional Random Fields (CRFs), to achieve consistent segmentations. In contrast, we propose a deep learning method which performs 3D reconstruction while simultaneously recognising different types of materials and labeling them at the pixel level. Unlike previous methods, our approach is fully end-to-end and requires neither hand-crafted features nor CRF post-processing. Instead, we use only learned features, and the CRF segmentation constraints are incorporated inside the fully end-to-end learned system. We present the results of experiments in which we trained our system to perform real-time 3D semantic reconstruction for 23 different materials in a real-world application. The run-time performance of the system can be boosted to around 10 Hz using a conventional GPU, which is enough to achieve real-time semantic reconstruction with a 30 fps RGB-D camera. To the best of our knowledge, this work is the first real-time end-to-end system for simultaneous 3D reconstruction and material recognition.
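As a loose sketch of what a fully convolutional, per-pixel material classifier looks like (not the authors' architecture: the layer sizes, the stacked RGB-D input, and the 23-class head are all assumptions for illustration), consider:

```python
# A tiny fully convolutional network sketch in PyTorch, assuming
# 4-channel RGB-D input and 23 material classes (illustrative only).
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=4, n_classes=23):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # A 1x1 conv head keeps the network fully convolutional.
        self.head = nn.Conv2d(64, n_classes, 1)

    def forward(self, x):
        h = self.head(self.encoder(x))
        # Upsample logits back to input resolution for per-pixel labels.
        return nn.functional.interpolate(
            h, size=x.shape[2:], mode='bilinear', align_corners=False)

net = TinyFCN()
logits = net(torch.randn(1, 4, 120, 160))   # (1, 23, 120, 160)
labels = logits.argmax(dim=1)               # per-pixel material ids
```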
Citations: 38
Non-iterative SLAM
Pub Date : 2017-01-19 DOI: 10.1109/ICAR.2017.8023500
Chen Wang, Junsong Yuan, Lihua Xie
The goal of this paper is to create a new framework for dense SLAM that is light enough for micro-robot systems equipped with a depth camera and an inertial sensor. Feature-based and direct methods are the two mainstreams in visual SLAM; both minimize photometric or reprojection error by iterative solutions, which are computationally expensive. To overcome this problem, we propose a non-iterative framework that reduces the computational requirement. First, the attitude and heading reference system (AHRS) and axonometric projection are used to decouple the 6 degree-of-freedom (DoF) data, so that point clouds can be matched in independent spaces. Second, based on single key-frame training, matching is carried out in the frequency domain by Fourier transformation, which provides a closed-form, non-iterative solution. In this manner, the time complexity is reduced to O(n log n), where n is the number of matched points in each frame. To the best of our knowledge, this is the first non-iterative and online-trainable approach to data association in visual SLAM. Compared with the state of the art, it runs faster and obtains 3-D maps with higher resolution at comparable accuracy.
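The closed-form frequency-domain matching the abstract describes is in the spirit of classical phase correlation, which recovers a shift from the FFT cross-power spectrum in O(n log n). A minimal sketch of that classical technique follows; the paper's AHRS decoupling, axonometric projections, and key-frame training are not reproduced.

```python
# Classical phase correlation: a closed-form, non-iterative shift
# estimate via the FFT cross-power spectrum (illustrative of the idea,
# not the paper's full pipeline).
import numpy as np

def phase_correlate(a, b):
    """Estimate the integer (dy, dx) shift of image a relative to b."""
    A, B = np.fft.fft2(a), np.fft.fft2(b)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12              # normalize to pure phase
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image into negative offsets.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

img = np.random.rand(64, 64)
shifted = np.roll(img, (5, -3), axis=(0, 1))
print(phase_correlate(shifted, img))    # -> (5, -3)
```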
Citations: 25
PCA-aided fully convolutional networks for semantic segmentation of multi-channel fMRI
Pub Date : 2016-10-06 DOI: 10.1109/ICAR.2017.8023506
L. Tai, Haoyang Ye, Qiong Ye, Ming Liu
Semantic segmentation of functional magnetic resonance imaging (fMRI) is valuable for pathology diagnosis and for the decision systems of medical robots. Multi-channel fMRI provides more information about pathological features, but the increased amount of data complicates feature detection. This paper proposes a principal component analysis (PCA)-aided fully convolutional network designed specifically for multi-channel fMRI. We transfer the learned weights of contemporary classification networks to the segmentation task by fine-tuning. The results of the convolutional network are compared with various methods, e.g. k-NN. A new labeling strategy is proposed to solve the semantic segmentation problem with unclear boundaries. Even with a small training dataset, the test results demonstrate that our model outperforms other pathological feature detection methods. Moreover, its forward inference takes only 90 milliseconds for a single set of fMRI data. To our knowledge, this is the first time pixel-wise labeling of multi-channel magnetic resonance images has been realized using an FCN.
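The PCA-aided preprocessing can be sketched by treating each pixel's channel vector as a sample and projecting it onto a few principal components before segmentation. The channel and component counts below are illustrative assumptions.

```python
# PCA channel reduction for a multi-channel image, assuming a
# hypothetical 12-channel slice reduced to 3 components.
import numpy as np
from sklearn.decomposition import PCA

H, W, C = 64, 64, 12                 # hypothetical multi-channel slice
fmri = np.random.rand(H, W, C)       # stand-in for real fMRI data

pca = PCA(n_components=3)
# Each pixel's C-dim channel vector is one sample for the PCA.
reduced = pca.fit_transform(fmri.reshape(-1, C)).reshape(H, W, 3)
print(reduced.shape, pca.explained_variance_ratio_.sum())
```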
Citations: 13
ROSRemote, using ROS on cloud to access robots remotely
Pub Date : 1900-01-01 DOI: 10.1109/ICAR.2017.8023621
Alyson B. M. Pereira, G. S. Bastos
Cloud computing is an area that has been attracting a great deal of research and is expanding not only into data processing but also into robotics. Cloud robotics is becoming a well-known subject, but so far it has mainly been used to process data faster, which is essentially the original idea of cloud computing. In this paper we use the cloud not only for this kind of operation but also to create a framework that helps users work with ROS on a remote master, making it possible to build applications that run remotely. Using SpaceBrew, we do not have to worry about finding the robots' addresses, which makes the application easier to implement because programmers only have to code as if the application were local.
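For context, plain rospy can already talk to a remote master if its address is known, which is exactly the bookkeeping ROSRemote and SpaceBrew remove. A minimal sketch with a placeholder master URI and topic (this is standard ROS usage, not ROSRemote's own API):

```python
# Publishing to a remote ROS master with plain rospy; the master URI
# and topic are placeholders, not part of ROSRemote.
import os
os.environ['ROS_MASTER_URI'] = 'http://remote-robot.local:11311'  # placeholder

import rospy
from std_msgs.msg import String

rospy.init_node('remote_client')
pub = rospy.Publisher('/chatter', String, queue_size=10)
rate = rospy.Rate(1)                      # 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='hello from the cloud'))
    rate.sleep()
```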
Citations: 14