
Latest publications: 2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)

Mosquito Staging Apparatus for Producing PfSPZ Malaria Vaccines
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843147
Mengdi Xu, Shengnan Lyu, Yingtian Xu, Can Kocabalkanli, Brian K. Chirikjian, John S. Chirikjian, Joshua D. Davis, J. S. Kim, I. Iordachita, R. Taylor, G. Chirikjian
This paper describes the design of a fully automated apparatus to dispense mosquitoes into isolated units. The automation system consists of several process units: (1) a fan-shaped rotor that drives a water vortex to gently transport the mosquitoes onto sorting slides with a conical geometry, (2) slides that guide the mosquitoes one by one onto gear-driven turntables, and (3) computer-vision-aided reorientation of each mosquito until its proboscis points outward along the radial direction of the cone. The system serves as the first processing stage for collecting mosquito salivary glands. The sporozoites contained in the glands are the source material for Sanaria's first-generation PfSPZ vaccines. The Mosquito Staging System can dramatically enhance the mass production of malaria vaccines, which is essential to prevent the propagation of malaria.
Cited by: 5
An Adaptive Interval Forecast CNN Model for Fault Detection Method
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843086
Junjie He, Junliang Wang, Lu Dai, Jie Zhang, Jin Bao
Machine fault detection (MFD) is critical for the safe operation of petrochemical production. To automatically optimize the pre-warning bounds of the control chart, an interval-forecasting convolutional neural network (IFCNN) model is proposed to forecast the warning interval of a signal from its raw dynamic data. Essentially, the IFCNN model is an improved convolutional neural network with dual outputs that construct the warning interval directly and adaptively. To guide the model to learn the interval automatically during training, the loss function is customized to improve fault detection accuracy. The proposed method is compared with a fixed threshold and with an adaptive interval method based on the exponentially weighted moving average on a petrochemical equipment data set. The results indicate that the proposed method is more robust and has a lower failure rate in fault detection for petrochemical pumps.
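The abstract does not give the customized loss function; below is a minimal numpy sketch of one plausible interval loss, which keeps the predicted warning band narrow while penalizing samples that escape it. The `penalty` weight is an illustrative assumption, not the paper's actual formulation.

```python
import numpy as np

def interval_loss(lower, upper, y, penalty=10.0):
    """Hypothetical interval loss for dual-output training: the band-width
    term keeps [lower, upper] tight, while out-of-band excursions of the
    signal y are penalized by `penalty` (an illustrative hyperparameter)."""
    width = np.maximum(upper - lower, 0.0)
    below = np.maximum(lower - y, 0.0)   # signal under the lower bound
    above = np.maximum(y - upper, 0.0)   # signal over the upper bound
    return float(np.mean(width + penalty * (below + above)))
```

Minimizing such a loss trades off interval tightness against coverage, which is the behavior the paper attributes to its customized training objective.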
Cited by: 1
c-M2DP: A Fast Point Cloud Descriptor with Color Information to Perform Loop Closure Detection
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8842896
Leonardo Perdomo, Diego Pittol, Mathias Mantelli, R. Maffei, M. Kolberg, Edson Prestes e Silva
We present c-M2DP, a fast global point cloud descriptor that combines color and shape information, and perform loop closure detection with it. Our approach extends the M2DP descriptor by incorporating color information. Along with the M2DP shape signatures, we compute color signatures from multiple 2D projections of a point cloud. A compact descriptor is then computed by using SVD to reduce the dimensionality. We performed experiments on publicly available datasets using both camera-LIDAR fusion and stereo depth estimation. Our results show an overall accuracy improvement over M2DP while maintaining its efficiency, and are competitive with another color-and-shape descriptor.
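As a rough illustration of the pipeline described above (multiple 2D projections → per-projection shape and color signatures → SVD compression), here is a numpy sketch; the plane count, bin count, and single scalar color channel are simplifications for illustration, not the paper's actual parameters.

```python
import numpy as np

def compact_descriptor(points, colors, n_planes=4, bins=8):
    """Sketch of an M2DP-style descriptor with color: per-plane radial
    shape histograms plus per-bin mean color, compressed with SVD.
    (Parameters and the scalar color channel are illustrative.)"""
    sigs = []
    for k in range(n_planes):
        theta = np.pi * k / n_planes
        # project points onto a plane whose normal lies in the x-y plane
        axis = np.array([np.cos(theta), np.sin(theta), 0.0])
        proj = points - np.outer(points @ axis, axis)
        r = np.linalg.norm(proj, axis=1)
        edges = np.linspace(0.0, r.max() + 1e-9, bins + 1)
        idx = np.digitize(r, edges) - 1          # radial bin of each point
        shape_sig = np.bincount(idx, minlength=bins)[:bins]
        color_sig = np.array([colors[idx == b].mean() if np.any(idx == b) else 0.0
                              for b in range(bins)])
        sigs.append(np.concatenate([shape_sig, color_sig]))
    A = np.stack(sigs)                            # n_planes x (2*bins) signatures
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # compact descriptor: first left and right singular vectors
    return np.concatenate([U[:, 0], Vt[0]])
```

The SVD step mirrors M2DP's trick of summarizing the whole signature matrix with its dominant singular vectors, keeping the descriptor fixed-length regardless of cloud size.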
Cited by: 3
Automated Extraction of Surgical Needles from Tissue Phantoms
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843089
Priya Sundaresan, Brijen Thananjeyan, Johnathan Chiu, Danyal Fer, Ken Goldberg
We consider the surgical subtask of automated extraction of embedded suturing needles from silicone phantoms and propose a four-step algorithm consisting of calibration, needle segmentation, grasp planning, and path planning. We implement autonomous extraction of needles using the da Vinci Research Kit (dVRK). The proposed calibration method yields an average transformation error of 1.3 mm between the dVRK end-effector and its overhead endoscopic stereo camera, compared to 2.0 mm using a standard rigid body transformation. In the 143/160 images where a needle was detected, the needle segmentation algorithm planned appropriate grasp points with an accuracy of 97.20% and planned an appropriate pull trajectory to achieve extraction in 85.31% of images. For images segmented with >50% confidence, no errors in grasp or pull prediction occurred. In images segmented with 25-50% confidence, no erroneous grasps were planned, but a misdirected pull was planned in 6.45% of cases. In 100 physical trials, the dVRK successfully grasped needles in 75% of cases, and fully extracted needles in 70.7% of cases where a grasp was secured.
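The "standard rigid body transformation" baseline mentioned above can be fit with the classic Kabsch/SVD least-squares method. The sketch below is that generic algorithm, not the paper's proposed calibration.

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid body transform such that Q ≈ P @ R.T + t
    (Kabsch algorithm), the kind of baseline fit between end-effector
    and camera frames that the paper compares its calibration against."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])                # correct a possible reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```

Given matched 3D point pairs from the two frames, the residual after applying `(R, t)` is the kind of transformation error the abstract reports in millimeters.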
Cited by: 25
A Screen-Based Method for Automated Camera Intrinsic Calibration on Production Lines
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8842956
Wenliang Gao, Jiarong Lin, Fu Zhang, S. Shen
In the manufacture of visual system products, a massive number of cameras must be calibrated in limited time and space with highly consistent quality. The traditional chessboard-pattern calibration method is unsuitable for manufacturing, since the motions it requires create problems of consistency and of space and time cost. In this work, we present a screen-based solution for automated camera intrinsic calibration on production lines. Because screens display pixel points clearly and easily, the whole calibration pattern is formed from the dense and uniform points captured by the camera. The calibration accuracy is comparable with that of the traditional chessboard method. Unlike a variety of existing methods, our method needs little human interaction and only a limited amount of space, making it easy to deploy and operate in industrial environments. With several experiments, we show the system's comparable performance for perspective cameras and, as screen technology develops, its potential for fisheye cameras.
Cited by: 1
Industrial Dataspace: A Broker to Run Cyber-Physical-Social Production System in Level of Machining Workshops
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843010
P. Jiang, Chao Liu, Pulin Li, Haoliang Shi
The rapid development and deep integration of emerging information technologies have boosted cyber-physical-social production systems (CPSPS), which coordinate humans and machines in both the physical and cyber worlds by tightening the cyber-physical-social conjoining of static manufacturing resources and dynamic machining processes. An industrial dataspace is regarded as a broker that runs a CPSPS by mediating, via mappings, between bottom-level data sources and upper-level applications with different data access needs. This research proposes a reference architecture for an industrial-dataspace-enabled CPSPS. On that basis, three key enabling technologies are presented. Finally, a demonstrative example is conducted to validate the architecture.
Cited by: 4
Visual-Guided Robot Arm Using Self-Supervised Deep Convolutional Neural Networks
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8842899
Van-Thanh Nguyen, Chao-Wei Lin, C. G. Li, Shu-Mei Guo, J. Lien
Perception-based learning approaches to robotic grasping have shown significant promise, further reinforced by the use of supervised deep learning in robotic arms. However, to properly train deep networks and prevent overfitting, massive datasets of labelled samples must be available. Creating such datasets by human labelling is a laborious task, since most objects can be grasped at multiple points and in several orientations. Accordingly, this work employs a self-supervised learning technique in which the training dataset is labelled by the robot itself. Above all, we propose a cascaded network that reduces the time of the grasping task by eliminating ungraspable samples from the inference process. In addition to the grasping task, which performs pose estimation, we enlarge the network with an auxiliary task, object classification, for which data labelling can easily be done by humans. Notably, our network is capable of estimating 18 grasping poses and classifying 4 objects simultaneously. The experimental results show that the proposed network achieves an accuracy of 94.8% in estimating the grasping pose and 100% in classifying the object category, in 0.65 seconds.
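The cascade idea, a cheap graspability stage rejecting candidates before the expensive pose/classification head runs, can be sketched as follows. The callables and the 0.5 threshold are illustrative stand-ins for the paper's networks, not its actual interfaces.

```python
def cascade_infer(crops, graspable_fn, pose_class_fn, threshold=0.5):
    """Sketch of cascaded inference: stage 1 scores graspability and
    rejects low-scoring candidates early, so stage 2 (pose estimation
    plus object classification) only runs on the survivors."""
    results = []
    for crop in crops:
        if graspable_fn(crop) < threshold:   # stage 1: cheap early reject
            continue
        pose, label = pose_class_fn(crop)    # stage 2: e.g. 18 poses, 4 classes
        results.append((pose, label))
    return results
```

Skipping stage 2 for ungraspable candidates is what yields the reduced per-grasp inference time the abstract highlights.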
Cited by: 4
A New Electrostatic Gripper for Flexible Handling of Fabrics in Automated Garment Manufacturing
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843149
Bin Sun, Xinyu Zhang
The handling of fabrics is a very challenging task throughout automated garment manufacturing, and there are practical difficulties in designing and implementing a reliable gripper that handles fabric panels efficiently. In this paper, we present a new, flexible electrostatic gripper for the handling of fabrics. The gripper consists of four flat pads whose embedded electrode patterns generate electrostatic adhesion fields. The coverage area varies with the expansion of the four electrostatic pads, which allows handling fabric panels of various sizes and flattening folded or wrinkled fabrics. We partially verified the gripper in prototype form and experimentally evaluated its performance on a large number of fabric materials. Moreover, the proposed gripper can be used for handling and transporting garments while avoiding damage to fabric surfaces.
Cited by: 10
An improved Q-learning based rescheduling method for flexible job-shops with machine failures
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843100
Meng Zhao, Xinyu Li, Liang Gao, Ling Wang, Mi Xiao
Scheduling of flexible job shops has been researched for several decades and continues to attract the interest of many scholars. In real manufacturing systems, however, dynamic events such as machine failures are a major issue. In this paper, an improved Q-learning algorithm with double-layer actions is proposed to solve the dynamic flexible job-shop scheduling problem (DFJSP) with machine failures. The initial scheduling scheme is obtained by a Genetic Algorithm (GA), and the rescheduling strategy is acquired by the agent of the proposed Q-learning based on dispatching rules. When a machine failure occurs, the Q-learning agent is able to select both operations and alternative machines optimally. To test this approach, experiments are designed and performed on the Mk03 benchmark problem of the FJSP. The results demonstrate that the optimal rescheduling strategy varies with the machine failure status. Compared with always adopting a single dispatching rule, the proposed Q-learning reduces delay time in a frequently changing dynamic environment, which shows that the agent-based method is suitable for the DFJSP.
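A tabular Q-learning update with dispatching rules as actions can be sketched as below; this is a simplified single-layer stand-in for the paper's double-layer action scheme, and the rule list and hyperparameters are illustrative.

```python
# Illustrative dispatching rules acting as the (simplified) action set:
# shortest/longest processing time, first-in-first-out, most work remaining.
RULES = ["SPT", "LPT", "FIFO", "MWKR"]

def q_update(Q, state, action, reward, next_state, alpha=0.1, gamma=0.9):
    """One tabular Q-learning step for rescheduling: `state` would encode
    the shop and machine-failure status, `action` picks a dispatching
    rule, and `reward` would reflect (negative) delay after rescheduling."""
    best_next = max(Q.get((next_state, a), 0.0) for a in RULES)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return Q[(state, action)]
```

Iterating this update over simulated machine-failure episodes is what lets the agent learn which rule to dispatch in each failure status.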
Cited by: 16
A Labor-Efficient GAN-based Model Generation Scheme for Deep-Learning Defect Inspection among Dense Beans in Coffee Industry
Pub Date : 2019-08-01 DOI: 10.1109/COASE.2019.8843259
Chen-Ju Kuo, Chao-Chun Chen, Tzu-Ting Chen, Zhi-Jing Tsai, Min-Hsiung Hung, Yu-Chuan Lin, Yi-Chung Chen, Ding-Chau Wang, Gwo-Jiun Homg, Wei-Tsung Su
Coffee beans are one of most valuable agricultural products in the world, and defective bean removal plays a critical role to produce high-quality coffee products. In this work, we propose a novel labor-efficient deep learning-based model generation scheme, aiming at providing an effective model with less human labeling effort. The key idea is to iteratively generate new training images containing defective beans in various locations by using a generative-adversarial network framework, and these images incur low successful detection rate so that they are useful for improving model quality. Our proposed scheme brings two main impacts to the intelligent agriculture. First, our proposed scheme is the first work to reduce human labeling effort among solutions of vision-based defective bean removal. Second, our scheme can inspect all classes of defective beans categorized by the SCAA (Specialty Coffee Association of America) at the same time. The above two advantages increase the degree of automation to the coffee industry. We implement the prototype of the proposed scheme for conducting integrated tests. Testin. results of a case study reveal that the proposed scheme ca] efficiently and effectively generating models for identifyin defect beans.Our implementation of the proposed scheme is available a https://github.com/Louis8582/LEGAN.
{"title":"A Labor-Efficient GAN-based Model Generation Scheme for Deep-Learning Defect Inspection among Dense Beans in Coffee Industry","authors":"Chen-Ju Kuo, Chao-Chun Chen, Tzu-Ting Chen, Zhi-Jing Tsai, Min-Hsiung Hung, Yu-Chuan Lin, Yi-Chung Chen, Ding-Chau Wang, Gwo-Jiun Homg, Wei-Tsung Su","doi":"10.1109/COASE.2019.8843259","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843259","url":null,"abstract":"Coffee beans are one of most valuable agricultural products in the world, and defective bean removal plays a critical role to produce high-quality coffee products. In this work, we propose a novel labor-efficient deep learning-based model generation scheme, aiming at providing an effective model with less human labeling effort. The key idea is to iteratively generate new training images containing defective beans in various locations by using a generative-adversarial network framework, and these images incur low successful detection rate so that they are useful for improving model quality. Our proposed scheme brings two main impacts to the intelligent agriculture. First, our proposed scheme is the first work to reduce human labeling effort among solutions of vision-based defective bean removal. Second, our scheme can inspect all classes of defective beans categorized by the SCAA (Specialty Coffee Association of America) at the same time. The above two advantages increase the degree of automation to the coffee industry. We implement the prototype of the proposed scheme for conducting integrated tests. Testing results of a case study reveal that the proposed scheme can efficiently and effectively generate models for identifying defective beans. Our implementation of the proposed scheme is available at https://github.com/Louis8582/LEGAN.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"1 1","pages":"263-270"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90099694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 5
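The iterative loop this abstract describes — generate candidate images with a GAN, keep those the current inspection model detects poorly, and use them to retrain — can be sketched as a hard-example mining loop. The generator and detector below are hypothetical stand-ins for the paper's GAN and inspection model, used only to show the selection logic:

```python
import random

def generate_image(rng):
    """Stand-in GAN generator: returns a synthetic 'image' with a known defect count."""
    return {"defects": rng.randint(0, 5)}

def detect_defects(image, rng):
    """Stand-in inspection model: finds each defect with a fixed probability of 0.7."""
    return sum(rng.random() < 0.7 for _ in range(image["defects"]))

def mine_hard_examples(n_candidates, threshold=0.5, seed=0):
    """Keep generated images whose detection success rate falls below `threshold`.

    These low-success images are the ones the abstract identifies as most
    useful for improving the inspection model in the next training round.
    """
    rng = random.Random(seed)
    hard = []
    for _ in range(n_candidates):
        img = generate_image(rng)
        if img["defects"] == 0:
            continue  # nothing to detect, so no training signal
        rate = detect_defects(img, rng) / img["defects"]
        if rate < threshold:
            hard.append(img)
    return hard
```

In the real scheme, each round would retrain the detector on the mined images and then generate a fresh candidate batch, repeating until detection quality plateaus.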