Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8843147
Mengdi Xu, Shengnan Lyu, Yingtian Xu, Can Kocabalkanli, Brian K. Chirikjian, John S. Chirikjian, Joshua D. Davis, J. S. Kim, I. Iordachita, R. Taylor, G. Chirikjian
This paper describes the design of a fully automated apparatus that dispenses mosquitoes into isolated units. The automation system consists of several process units: (1) a fan-shaped rotor that generates a water vortex to gently transport the mosquitoes onto sorting slides with a conical geometry, (2) slides that guide the mosquitoes one by one onto gear-driven turntables, and (3) a computer-vision-aided stage that reorients each mosquito until its proboscis points outward along the radial direction of the cone. This automation system serves as the first processing stage for collecting mosquito salivary glands. The sporozoites contained in the glands are the source material for producing Sanaria's first-generation PfSPZ vaccines. The Mosquito Staging System can dramatically enhance the mass production of malaria vaccine, which is essential to prevent the propagation of malaria.
{"title":"Mosquito Staging Apparatus for Producing PfSPZ Malaria Vaccines","authors":"Mengdi Xu, Shengnan Lyu, Yingtian Xu, Can Kocabalkanli, Brian K. Chirikjian, John S. Chirikjian, Joshua D. Davis, J. S. Kim, I. Iordachita, R. Taylor, G. Chirikjian","doi":"10.1109/COASE.2019.8843147","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843147","url":null,"abstract":"This paper describes the design of a fully automated apparatus to dispense mosquitoes into isolate units. This automation system consists of several process units including (1) facilitating the water vortex with a fan-shape rotor to gently transport the mosquitoes to the sorting slides with a conical geometry, (2) exploiting slides to guide mosquitoes to turntables driven by gears one by one, and (3) reorienting the mosquito until its proboscis points outward along the radial direction of the cone, aided by computer vision. This automation system serves as the first processing stage for collecting mosquito salivary glands. The sporozoites contained in the mosquito glands are the source material to produce Sanaria’s first generation PfSPZ vaccines. The Mosquito Staging System can dramatically enhance the mass production of Malaria Vaccine which is essential to prevent the propagation of Malaria.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"4 1","pages":"443-449"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"85521314","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8843086
Junjie He, Junliang Wang, Lu Dai, Jie Zhang, Jin Bao
Machine fault detection (MFD) is critical for the safe operation of petrochemical production. Aiming to automatically optimize the pre-warning bounds of the control chart, an interval forecasting convolutional neural network (IFCNN) model is proposed to forecast the warning interval of the signal from the raw dynamic data. Essentially, the IFCNN model is an improved convolutional neural network with dual output values that constructs the warning interval directly and adaptively. To guide the model to learn the interval automatically during training, the loss function is customized to improve fault detection accuracy. The proposed method is compared with a fixed threshold and an adaptive interval method based on the exponentially weighted moving average on a petrochemical equipment data set. The results indicate that the proposed method is more robust and has a lower failure rate in the fault detection of the petrochemical pump.
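The abstract does not give implementation details, so the following is only a minimal PyTorch-style sketch of the general dual-output interval idea: a small 1-D CNN emits a lower and an upper bound, trained with a pinball (quantile) style loss. The layer sizes, window length, quantile levels, and the loss itself are assumptions for illustration; the actual IFCNN architecture and customized loss may differ.

# Hypothetical sketch, not the authors' IFCNN code.
import torch
import torch.nn as nn

class IntervalCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, 2)   # two outputs: lower and upper warning bound

    def forward(self, x):              # x: (batch, 1, window)
        z = self.features(x).squeeze(-1)
        lo, hi = self.head(z).unbind(dim=-1)
        return lo, hi

def pinball_interval_loss(lo, hi, target, q_lo=0.05, q_hi=0.95):
    # Quantile (pinball) loss for each bound; together they approximate a 90% interval.
    def pinball(pred, y, q):
        e = y - pred
        return torch.mean(torch.maximum(q * e, (q - 1) * e))
    return pinball(lo, target, q_lo) + pinball(hi, target, q_hi)

model = IntervalCNN()
x = torch.randn(8, 1, 128)             # synthetic batch of raw signal windows
y = torch.randn(8)                     # synthetic next-step signal values
lo, hi = model(x)
loss = pinball_interval_loss(lo, hi, y)
loss.backward()

A sample would then be flagged when the observed signal falls outside the predicted [lo, hi] band, which is one plausible way to turn the forecast interval into a pre-warning bound.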
{"title":"An Adaptive Interval Forecast CNN Model for Fault Detection Method","authors":"Junjie He, Junliang Wang, Lu Dai, Jie Zhang, Jin Bao","doi":"10.1109/COASE.2019.8843086","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843086","url":null,"abstract":"The machine fault detection (MFD) is critical for the safety operation of the petrochemical production. Aiming to automatically optimizing the pre-warning bounds of the control chart, an interval forecasting convolutional neural network (IFCNN) model has been proposed to forecast the warning interval of the signal with the raw dynamic data. Essentially, the IFCNN model is an improved convolutional neural network with dual output value to construct the warning interval directly and adaptively. To guide the model to learn the interval automatically during the model training, the loss function is customized to improve the fault detection accuracy. The proposed method is compared with the fixed threshold and the adaptive interval method with exponentially weighted moving average on a petrochemical equipment data set. The results indicated that the proposed method is of stronger robustness with lower failure rate in the fault detection of the petrochemical pump.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"18 1","pages":"602-607"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81644609","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8842896
Leonardo Perdomo, Diego Pittol, Mathias Mantelli, R. Maffei, M. Kolberg, Edson Prestes e Silva
We present c-M2DP, a fast global point cloud descriptor that combines color and shape information, and we perform loop closure detection with it. Our approach extends the M2DP descriptor by incorporating color information: along with the M2DP shape signatures, we compute color signatures from multiple 2D projections of the point cloud. A compact descriptor is then obtained by using SVD to reduce its dimensionality. We performed experiments on publicly available datasets using both camera-LIDAR fusion and stereo depth estimation. Our results show an overall accuracy improvement over M2DP while maintaining its efficiency, and they are competitive with another color-and-shape descriptor.
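For readers unfamiliar with the M2DP family, the following NumPy sketch illustrates the general recipe the abstract describes (projection onto several 2D planes, per-bin shape and color signatures, SVD compaction). The plane choices, bin counts, color aggregation, and the way the singular vectors are used are assumptions, not the paper's exact formulation.

# Hypothetical sketch, not the authors' c-M2DP implementation.
import numpy as np

def project_signatures(points, colors, normals, bins=16):
    sigs = []
    for n in normals:                                   # one signature per projection plane
        n = n / np.linalg.norm(n)
        u = np.cross(n, [0.0, 0.0, 1.0])                # orthonormal basis (u, v) of the plane
        if np.linalg.norm(u) < 1e-6:
            u = np.cross(n, [0.0, 1.0, 0.0])
        u /= np.linalg.norm(u)
        v = np.cross(n, u)
        uv = points @ np.stack([u, v], axis=1)          # (N, 2) plane coordinates
        shape_hist, xe, ye = np.histogram2d(uv[:, 0], uv[:, 1], bins=bins)
        ix = np.clip(np.digitize(uv[:, 0], xe) - 1, 0, bins - 1)
        iy = np.clip(np.digitize(uv[:, 1], ye) - 1, 0, bins - 1)
        color_sum = np.zeros((bins, bins, 3))
        np.add.at(color_sum, (ix, iy), colors)          # mean color per 2D bin
        counts = np.maximum(shape_hist, 1)[..., None]
        sigs.append(np.concatenate([shape_hist.ravel(), (color_sum / counts).ravel()]))
    A = np.stack(sigs)                                  # signature matrix, one row per plane
    U, S, Vt = np.linalg.svd(A, full_matrices=False)
    return np.concatenate([(U[:, 0] * S[0]).ravel(), Vt[0]])   # compact descriptor

pts = np.random.rand(1000, 3)
cols = np.random.rand(1000, 3)
planes = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
descriptor = project_signatures(pts, cols, planes)

Loop closure detection would then compare such descriptors (for example by nearest-neighbor distance) between the current frame and previously visited places.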
{"title":"c-M2DP: A Fast Point Cloud Descriptor with Color Information to Perform Loop Closure Detection","authors":"Leonardo Perdomo, Diego Pittol, Mathias Mantelli, R. Maffei, M. Kolberg, Edson Prestes e Silva","doi":"10.1109/COASE.2019.8842896","DOIUrl":"https://doi.org/10.1109/COASE.2019.8842896","url":null,"abstract":"We present c-M2DP, a fast global point cloud descriptor that combines color and shape information, and perform loop closure detection using it. Our approach extends the M2DP descriptor by incorporating color information. Along with M2DP shape signatures, we compute color signatures from multiple 2D projections of a point cloud. Then, a compact descriptor is computed by using SVD to reduce its dimensionality. We performed experiments on public available datasets using both camera-LIDAR fusion and stereo depth estimation. Our results show an overall accuracy improvement over M2DP while maintaining efficiency, and are competitive in comparison with another color and shape descriptor.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"7 1","pages":"1145-1150"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84119784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8843089
Priya Sundaresan, Brijen Thananjeyan, Johnathan Chiu, Danyal Fer, Ken Goldberg
We consider the surgical subtask of automated extraction of embedded suturing needles from silicone phantoms and propose a four-step algorithm consisting of calibration, needle segmentation, grasp planning, and path planning. We implement autonomous extraction of needles using the da Vinci Research Kit (dVRK). The proposed calibration method yields an average of 1.3 mm transformation error between the dVRK end-effector and its overhead endoscopic stereo camera, compared to 2.0 mm transformation error using a standard rigid body transformation. In 143/160 images where a needle was detected, the needle segmentation algorithm planned appropriate grasp points with an accuracy of 97.20% and planned an appropriate pull trajectory to achieve extraction in 85.31% of images. For images segmented with >50% confidence, no errors in grasp or pull prediction occurred. In images segmented with 25-50% confidence, no erroneous grasps were planned, but a misdirected pull was planned in 6.45% of cases. In 100 physical trials, the dVRK successfully grasped needles in 75% of cases, and fully extracted needles in 70.7% of cases where a grasp was secured.
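The "standard rigid body transformation" baseline mentioned above is typically estimated from paired 3-D points in the two frames. As a generic illustration only (not the dVRK calibration code, and the authors' improved method is not shown here), a minimal Kabsch/Procrustes sketch looks like this:

# Hypothetical sketch of the standard rigid-body baseline.
import numpy as np

def rigid_transform(robot_pts, camera_pts):
    # robot_pts, camera_pts: (N, 3) paired 3-D points in the robot and camera frames
    cr, cc = robot_pts.mean(axis=0), camera_pts.mean(axis=0)
    H = (robot_pts - cr).T @ (camera_pts - cc)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))              # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cc - R @ cr
    return R, t

def mean_error_mm(R, t, robot_pts, camera_pts):
    pred = robot_pts @ R.T + t
    return np.linalg.norm(pred - camera_pts, axis=1).mean()

The reported 2.0 mm versus 1.3 mm figures would correspond to this kind of residual error computed on held-out correspondences.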
{"title":"Automated Extraction of Surgical Needles from Tissue Phantoms","authors":"Priya Sundaresan, Brijen Thananjeyan, Johnathan Chiu, Danyal Fer, Ken Goldberg","doi":"10.1109/COASE.2019.8843089","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843089","url":null,"abstract":"We consider the surgical subtask of automated extraction of embedded suturing needles from silicone phantoms and propose a four-step algorithm consisting of calibration, needle segmentation, grasp planning, and path planning. We implement autonomous extraction of needles using the da Vinci Research Kit (dVRK). The proposed calibration method yields an average of 1.3mm transformation error between the dVRK end-effector and its overhead endoscopic stereo camera compared to 2.0mm transformation error using a standard rigid body transformation. In 143/160 images where a needle was detected, the needle segmentation algorithm planned appropriate grasp points with an accuracy of 97.20% and planned an appropriate pull trajectory to achieve extraction in 85.31% of images. For images segmented with $gt50$% confidence, no errors in grasp or pull prediction occurred. In images segmented with 25-50% confidence, no erroneous grasps were planned, but a misdirected pull was planned in 6.45% of cases. In 100 physical trials, the dVRK successfully grasped needles in 75% of cases, and fully extracted needles in 70.7% of cases where a grasp was secured.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"28 1","pages":"170-177"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"84548324","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8842956
Wenliang Gao, Jiarong Lin, Fu Zhang, S. Shen
For the manufacture of visual-system products, it is necessary to calibrate a massive number of cameras in limited time and space with highly consistent quality. The traditional calibration method with a chessboard pattern is not suitable for the manufacturing industry, since the motions it requires lead to problems of consistency and to costs in space and time. In this work, we present a screen-based solution for automated camera intrinsic calibration on production lines. With screens clearly and easily displaying pixel points, the whole calibration pattern is formed by the dense and uniform points captured by the camera. The calibration accuracy is comparable with that of the traditional chessboard method. Unlike a variety of existing methods, our method needs little human interaction and only a limited amount of space, making it easy to deploy and operate in industrial environments. Experiments show comparable performance for perspective cameras and the method's potential for fisheye cameras as screen technology develops.
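To make the workflow concrete, here is a minimal sketch of intrinsic calibration from correspondences between known screen positions and their detected image locations, using OpenCV's standard calibrateCamera. The grid size, pixel pitch, image size, and the calibrate_from_screen / detections names are assumptions; the paper's own detection and optimization steps are not reproduced here.

# Hypothetical sketch, not the paper's pipeline.
import numpy as np
import cv2

def calibrate_from_screen(detections, grid=(9, 6), pitch_mm=5.0, image_size=(1280, 720)):
    # detections: list over screen poses, each an (N, 2) array of detected image
    # coordinates of the displayed dots, in row-major grid order.
    xs, ys = np.meshgrid(np.arange(grid[0]), np.arange(grid[1]))
    board = np.stack([xs.ravel() * pitch_mm,
                      ys.ravel() * pitch_mm,
                      np.zeros(grid[0] * grid[1])], axis=1).astype(np.float32)
    object_points = [board for _ in detections]
    image_points = [d.reshape(-1, 1, 2).astype(np.float32) for d in detections]
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
        object_points, image_points, image_size, None, None)
    return rms, K, dist   # reprojection error, intrinsic matrix, distortion coefficients

Because the screen can display arbitrarily dense and uniform dot patterns without moving the camera, the same call can be fed many virtual "poses" per station, which is what removes the motion requirement of the chessboard procedure.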
{"title":"A Screen-Based Method for Automated Camera Intrinsic Calibration on Production Lines","authors":"Wenliang Gao, Jiarong Lin, Fu Zhang, S. Shen","doi":"10.1109/COASE.2019.8842956","DOIUrl":"https://doi.org/10.1109/COASE.2019.8842956","url":null,"abstract":"For the manufacture of visual system product, it is necessary to calibrate a massive number of cameras in a limited time and space with a high consistency quality. Traditional calibration method with chessboard pattern is not suitable in the manufacturing industry since its requirement of motions leads to the problem of consistency, cost of space and time. In this work, we present a screen-based solution for automated camera intrinsic calibration on production lines. With screens clearly and easily displaying pixel points, the whole calibration pattern is formed with the dense and uniform points captured by the camera. The calibration accuracy is comparable with the traditional method with chessboard pattern. Unlike a variety of existing methods, our method needs little human interaction, as well as only a limited amount of space, making it easy to be deployed and operated in the industrial environments. With some experiments, we show the comparable performance of the system for perspective cameras and its potential in fisheye cameras with the developments of screens.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"55 1","pages":"392-398"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75831628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8843010
P. Jiang, Chao Liu, Pulin Li, Haoliang Shi
The rapid development and deep integration of emerging information technologies have boosted cyber-physical-social production systems (CPSPS), which coordinate humans and machines in both the physical and cyber worlds by tightening the cyber-physical-social conjoining of static manufacturing resources and dynamic machining processes. An industrial dataspace is regarded as a broker that runs a CPSPS by mediating, via mappings, between bottom-level data sources and upper-level applications with different data access needs. This research proposes a reference architecture for an industrial-dataspace-enabled CPSPS, and on that basis three key enabling technologies are presented. Finally, a demonstrative example is conducted to validate the architecture.
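The broker-with-mappings idea can be pictured with a very small, purely illustrative sketch (not taken from the paper): data sources register readers with the dataspace, applications query named views, and the mappings translate raw source data into those views so applications never touch the sources directly. All class, method, and source names below are assumptions for illustration.

# Purely illustrative sketch of the dataspace-as-broker pattern.
from typing import Any, Callable, Dict

class IndustrialDataspace:
    def __init__(self):
        self.sources: Dict[str, Callable[[], Any]] = {}      # source name -> reader
        self.mappings: Dict[str, Callable[[Dict[str, Any]], Any]] = {}  # view -> transform

    def register_source(self, name: str, reader: Callable[[], Any]) -> None:
        self.sources[name] = reader

    def register_mapping(self, view: str, transform: Callable[[Dict[str, Any]], Any]) -> None:
        self.mappings[view] = transform

    def query(self, view: str) -> Any:
        raw = {name: read() for name, read in self.sources.items()}
        return self.mappings[view](raw)

ds = IndustrialDataspace()
ds.register_source("spindle_sensor", lambda: {"rpm": 1200, "load": 0.7})
ds.register_source("mes_orders", lambda: [{"part": "A12", "qty": 50}])
ds.register_mapping("workshop_status",
                    lambda raw: {"rpm": raw["spindle_sensor"]["rpm"],
                                 "open_orders": len(raw["mes_orders"])})
print(ds.query("workshop_status"))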
{"title":"Industrial Dataspace: A Broker to Run Cyber-Physical-Social Production System in Level of Machining Workshops","authors":"P. Jiang, Chao Liu, Pulin Li, Haoliang Shi","doi":"10.1109/COASE.2019.8843010","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843010","url":null,"abstract":"The rapid development of and deep integration of emerging information technologies has boosted cyber-physical-social production systems (CPSPS) which coordinates humans and machines in both physical and cyber worlds by tightening the cyber-physical-social conjoining of static manufacturing resources and dynamic machining processes. Industrial dataspace is regarded as a broker to run CPSPS by mediating between bottom data sources and upper applications with different data access needs via mappings. The presented research proposes a reference architecture for industrial-dataspace-enabled CPSPS. Based on that, three key enabled technologies are presented. Finally, a demonstrative example is conducted to validate the architecture.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"422 1","pages":"1402-1407"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"75871738","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8842899
Van-Thanh Nguyen, Chao-Wei Lin, C. G. Li, Shu-Mei Guo, J. Lien
Perception-based learning approaches to robotic grasping have shown significant promise, and this is further reinforced by the use of supervised deep learning on robotic arms. However, to properly train deep networks and prevent overfitting, massive datasets of labelled samples must be available. Creating such datasets by human labelling is an exhausting task, since most objects can be grasped at multiple points and in several orientations. Accordingly, this work employs a self-supervised learning technique in which the training dataset is labelled by the robot itself. Above all, we propose a cascaded network that reduces the time of the grasping task by eliminating ungraspable samples from the inference process. In addition to the grasping task, which performs pose estimation, we enlarge the network with an auxiliary task, object classification, for which data labelling can easily be done by a human. Notably, our network is capable of estimating 18 grasping poses and classifying 4 objects simultaneously. The experimental results show that the proposed network achieves an accuracy of 94.8% in estimating the grasping pose and 100% in classifying the object category, in 0.65 seconds.
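The cascade can be pictured as a cheap binary "graspable?" gate followed by heavier heads for the 18-way grasp pose and 4-way object class, run only on samples that pass the gate. The PyTorch-style sketch below is a hypothetical illustration of that structure; the backbone, input resolution, gate threshold, and head sizes are assumptions, not the authors' architecture.

# Hypothetical sketch of the cascaded inference idea.
import torch
import torch.nn as nn

class CascadedGraspNet(nn.Module):
    def __init__(self, n_poses=18, n_classes=4):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.gate = nn.Linear(64, 1)            # graspable vs. ungraspable
        self.pose_head = nn.Linear(64, n_poses)
        self.class_head = nn.Linear(64, n_classes)

    def forward(self, x, gate_threshold=0.5):
        feat = self.backbone(x)
        graspable = torch.sigmoid(self.gate(feat)).squeeze(-1)
        keep = graspable > gate_threshold        # heavier heads run only on kept samples
        pose_logits = torch.full((x.shape[0], self.pose_head.out_features), float("nan"))
        class_logits = torch.full((x.shape[0], self.class_head.out_features), float("nan"))
        if keep.any():
            pose_logits[keep] = self.pose_head(feat[keep])
            class_logits[keep] = self.class_head(feat[keep])
        return graspable, pose_logits, class_logits

net = CascadedGraspNet().eval()
with torch.no_grad():
    graspable, poses, classes = net(torch.rand(4, 3, 96, 96))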
{"title":"Visual-Guided Robot Arm Using Self-Supervised Deep Convolutional Neural Networks","authors":"Van-Thanh Nguyen, Chao-Wei Lin, C. G. Li, Shu-Mei Guo, J. Lien","doi":"10.1109/COASE.2019.8842899","DOIUrl":"https://doi.org/10.1109/COASE.2019.8842899","url":null,"abstract":"Perception-based learning approaches to robotic grasping have shown significant promise. This is further reinforced by using supervised deep learning in robotic arm. However, to properly train deep networks and prevent overfitting, massive datasets of labelled samples must be available. Creating such datasets by human labelling is an exhaustive task since most objects can be grasped at multiple points and in several orientations. Accordingly, this work employs a self-supervised learning technique in which the training dataset is labelled by the robot itself. Above all, we propose a cascaded network that reduces the time of the grasping task by eliminating ungraspable samples from the inference process. In addition to grasping task which performs pose estimation, we enlarge the network to perform an auxiliary task, object classification in which data labelling can be done easily by human. Notably, our network is capable of estimating 18 grasping poses and classifying 4 objects simultaneously. The experimental results show that the proposed network achieves an accuracy of 94.8% in estimating the grasping pose and 100% in classifying the object category, in 0.65 seconds.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"30 1","pages":"1415-1420"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82329121","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8843149
Bin Sun, Xinyu Zhang
The handling of fabrics is a very challenging task throughout automated garment manufacturing, and there are practical difficulties in designing and implementing a reliable gripper that can efficiently handle fabric panels. In this paper, we present a new and flexible electrostatic gripper for the handling of fabrics. The gripper consists of four flat pads whose embedded electrode patterns generate electrostatic adhesion fields, and its coverage area varies with the expansion of the four pads. This allows the gripper to handle fabric panels of various sizes and to flatten folded or wrinkled fabrics. We partially verified the new gripper in prototype form and experimentally evaluated its performance on a large number of fabric materials. Moreover, the proposed gripper can be used for handling and transporting garments while avoiding damage to fabric surfaces.
{"title":"A New Electrostatic Gripper for Flexible Handling of Fabrics in Automated Garment Manufacturing","authors":"Bin Sun, Xinyu Zhang","doi":"10.1109/COASE.2019.8843149","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843149","url":null,"abstract":"The handling of fabrics is a very challenging task throughout automated garment manufacturing. There are practical difficulties in designing and implementing a reliable gripper to efficiently handling fabric panels. In this paper, we present a new and flexible electrostatic gripper for the handling of fabrics. Our gripper consists of four flat pads and their embedded electrode patterns generate electrostatic adhesion fields. The coverage area varies with the expansion of the four electrostatic pads. This allows handling various size of fabric panels and flattening folded/wrinkled fabrics. We partially verified our new gripper in prototype form and experimentally evaluated its performance on a large number of fabric materials. Moreover, the proposed gripper can be used for handling and transporting garments while avoiding the damage of fabric surfaces.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"43 1","pages":"879-884"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"82447475","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2019-08-01. DOI: 10.1109/COASE.2019.8843100
Meng Zhao, Xinyu Li, Liang Gao, Ling Wang, Mi Xiao
Scheduling of flexible job shops has been researched for several decades and continues to attract the interest of many scholars, but in real manufacturing systems dynamic events such as machine failures are major issues. In this paper, an improved Q-learning algorithm with double-layer actions is proposed to solve the dynamic flexible job-shop scheduling problem (DFJSP) with machine failures. The initial scheduling scheme is obtained by a Genetic Algorithm (GA), and the rescheduling strategy is acquired by the agent of the proposed Q-learning based on dispatching rules. The Q-learning agent is able to select both operations and alternative machines optimally when a machine failure occurs. To test this approach, experiments are designed and performed on the Mk03 problem of the FJSP benchmark set. The results demonstrate that the optimal rescheduling strategy varies with the machine failure status, and compared with adopting a single dispatching rule all the time, the proposed Q-learning can reduce delay time in a frequently changing dynamic environment, which shows that the agent-based method is suitable for the DFJSP.
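The "double-layer action" can be read as a paired choice: one dispatching rule to pick the next operation and one rule to pick an alternative machine. The tabular sketch below only illustrates that pairing; the state encoding, the specific rules, the reward, and the hyperparameters are simplified assumptions, not the paper's design.

# Hypothetical sketch of Q-learning over (job rule, machine rule) action pairs.
import random
from collections import defaultdict

JOB_RULES = ["SPT", "LPT", "MWR"]        # which queued operation to dispatch next
MACHINE_RULES = ["EFT", "LU"]            # which alternative machine to route it to
ACTIONS = [(j, m) for j in JOB_RULES for m in MACHINE_RULES]

Q = defaultdict(float)                   # Q[(state, action)]
alpha, gamma, eps = 0.1, 0.9, 0.2

def choose_action(state):
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

# One rescheduling step: the state would summarize the failed machine and queue status,
# and the reward could be the negative increase in delay after simulating the rule pair.
state = ("M3_failed", "queue_high")
action = choose_action(state)
update(state, action, reward=-4.0, next_state=("M3_failed", "queue_low"))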
{"title":"An improved Q-learning based rescheduling method for flexible job-shops with machine failures","authors":"Meng Zhao, Xinyu Li, Liang Gao, Ling Wang, Mi Xiao","doi":"10.1109/COASE.2019.8843100","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843100","url":null,"abstract":"Scheduling of flexible job shop has been researched over several decades and continues to attract the interests of many scholars. But in the real manufacturing system, dynamic events such as machine failures are major issues. In this paper, an improved Q-learning algorithm with double-layer actions is proposed to solve the dynamic flexible job-shop scheduling problem (DFJSP) considering machine failures. The initial scheduling scheme is obtained by Genetic Algorithm (GA), and the rescheduling strategy is acquired by the Agent of the proposed Q-learning based on dispatching rules. The agent of Q-learning is able to select both operations and alternative machines optimally when machine failure occurs. To testify this approach, experiments are designed and performed based on Mk03 problem of FJSP. Results demonstrate that the optimal rescheduling strategy varies in different machine failure status. And compared with adopting a single dispatching rule all the time, the proposed Q-learning can reduce time of delay in a frequent dynamic environment, which shows that agent-based method is suitable for DFJSP.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"57 1","pages":"331-337"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"83350168","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Coffee beans are one of the most valuable agricultural products in the world, and defective bean removal plays a critical role in producing high-quality coffee products. In this work, we propose a novel labor-efficient deep-learning-based model generation scheme, aiming at providing an effective model with less human labeling effort. The key idea is to iteratively generate new training images containing defective beans in various locations by using a generative adversarial network framework; these images incur a low successful detection rate, so they are useful for improving model quality. Our proposed scheme brings two main benefits to intelligent agriculture. First, it is the first work to reduce human labeling effort among solutions for vision-based defective bean removal. Second, it can inspect all classes of defective beans categorized by the SCAA (Specialty Coffee Association of America) at the same time. These two advantages increase the degree of automation in the coffee industry. We implemented a prototype of the proposed scheme for conducting integrated tests. Testing results of a case study reveal that the proposed scheme can efficiently and effectively generate models for identifying defective beans. Our implementation of the proposed scheme is available at https://github.com/Louis8582/LEGAN.
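The iterative loop described above (generate synthetic defective-bean images, keep the ones the current detector handles poorly, retrain) can be sketched as follows. This is a hypothetical illustration only, not the LEGAN code; the callables generate, detect_recall, and retrain, and all thresholds, are placeholders for the paper's GAN, detector, and training step.

# Hypothetical sketch of the hard-example generation loop.
import random

def hard_example_loop(generate, detect_recall, retrain, labelled_set,
                      rounds=5, batch=200, keep_threshold=0.5):
    # generate(n)                -> list of (image, ground_truth_boxes) synthesized by the GAN
    # detect_recall(img, boxes)  -> fraction of ground-truth beans the current detector finds
    # retrain(dataset)           -> recall function of a detector retrained on the dataset
    for _ in range(rounds):
        hard = [(img, boxes) for img, boxes in generate(batch)
                if detect_recall(img, boxes) < keep_threshold]   # detector fails on these
        labelled_set.extend(hard)                                # labels are known from generation
        detect_recall = retrain(labelled_set)
    return detect_recall

# Dummy stand-ins just to show the call shape; real GAN and detector code would go here.
final_detector = hard_example_loop(
    generate=lambda n: [(f"img_{i}", ["bean"]) for i in range(n)],
    detect_recall=lambda img, boxes: random.random(),
    retrain=lambda data: (lambda img, boxes: min(1.0, random.random() + 0.001 * len(data))),
    labelled_set=[])

Because the generator places the defective beans itself, the ground-truth labels of every synthesized image are known for free, which is where the reduction in human labeling effort comes from.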
{"title":"A Labor-Efficient GAN-based Model Generation Scheme for Deep-Learning Defect Inspection among Dense Beans in Coffee Industry","authors":"Chen-Ju Kuo, Chao-Chun Chen, Tzu-Ting Chen, Zhi-Jing Tsai, Min-Hsiung Hung, Yu-Chuan Lin, Yi-Chung Chen, Ding-Chau Wang, Gwo-Jiun Homg, Wei-Tsung Su","doi":"10.1109/COASE.2019.8843259","DOIUrl":"https://doi.org/10.1109/COASE.2019.8843259","url":null,"abstract":"Coffee beans are one of most valuable agricultural products in the world, and defective bean removal plays a critical role to produce high-quality coffee products. In this work, we propose a novel labor-efficient deep learning-based model generation scheme, aiming at providing an effective model with less human labeling effort. The key idea is to iteratively generate new training images containing defective beans in various locations by using a generative-adversarial network framework, and these images incur low successful detection rate so that they are useful for improving model quality. Our proposed scheme brings two main impacts to the intelligent agriculture. First, our proposed scheme is the first work to reduce human labeling effort among solutions of vision-based defective bean removal. Second, our scheme can inspect all classes of defective beans categorized by the SCAA (Specialty Coffee Association of America) at the same time. The above two advantages increase the degree of automation to the coffee industry. We implement the prototype of the proposed scheme for conducting integrated tests. Testin. results of a case study reveal that the proposed scheme ca] efficiently and effectively generating models for identifyin defect beans.Our implementation of the proposed scheme is available a https://github.com/Louis8582/LEGAN.","PeriodicalId":6695,"journal":{"name":"2019 IEEE 15th International Conference on Automation Science and Engineering (CASE)","volume":"1 1","pages":"263-270"},"PeriodicalIF":0.0,"publicationDate":"2019-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"90099694","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}