Scene-Adaptive Fusion of Visual and Motion Tracking for Vision-Guided Micromanipulation in Plant Cells
Ishara Paranawithana, U-Xuan Tan, Liangjing Yang, Zhong Chen, K. Youcef-Toumi
2018 IEEE 14th International Conference on Automation Science and Engineering (CASE), pp. 1434-1440, August 2018
DOI: 10.1109/COASE.2018.8560699
Citations: 4
Abstract
This work proposes a fusion mechanism that overcomes the traditional limitations of vision-guided micromanipulation in plant cells. Despite recent advances in vision-guided micromanipulation, only a handful of studies have addressed the intrinsic issues of micromanipulation in plant cells. Unlike single-cell manipulation, the structural complexity of plant cells makes visual tracking extremely challenging. There is therefore a need to complement the visual tracking approach with trajectory data from the manipulator. The two data sources are fused by combining the manipulator trajectory data, projected into the image domain, with template tracking data using a score-based weighted averaging approach. A similarity score reflecting the confidence of each localization result serves as the basis of the weighted average. Because the projected trajectory data of the manipulator is unaffected by visual disturbances such as regional occlusion, fusing the estimates from the two sources improves tracking performance. Experimental results suggest that the fusion-based tracking mechanism maintains a mean error of 2.15 pixels, whereas template tracking and projected trajectory data have mean errors of 2.49 and 2.61 pixels, respectively. Path B of the square trajectory showed a significant improvement, with a mean error of 1.11 pixels while 50% of the tracking ROI was occluded by the plant specimen; under these conditions, template tracking and projected trajectory data performed similarly, with mean errors of 2.59 and 2.58 pixels, respectively. By addressing the limitations and unmet needs in plant cell bio-manipulation, we hope to bridge the gap in the development of automatic vision-guided micromanipulation in plant cells.
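To make the score-based weighted averaging concrete, below is a minimal sketch of how two pixel-domain estimates (template tracking and the manipulator trajectory projected into the image) might be fused, with the template weight driven by its similarity score. The function names, the score-to-weight mapping, and the numeric values are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch (assumptions, not the paper's implementation): fuse a
# template-tracking estimate with a projected-trajectory estimate using a
# weight derived from the template similarity score.
import numpy as np

def fuse_estimates(p_template, p_projected, similarity_score,
                   score_floor=0.0, score_ceil=1.0):
    """Return a fused (x, y) pixel position.

    p_template       : (x, y) from visual template tracking, in pixels.
    p_projected      : (x, y) from the manipulator trajectory projected into
                       the image domain, in pixels.
    similarity_score : confidence of the template-tracking result (e.g. a
                       normalized correlation score); higher means more
                       trustworthy.
    """
    # Map the similarity score to a weight in [0, 1]. A low score (e.g. when
    # the tracking ROI is partially occluded by the specimen) shifts the fused
    # estimate toward the projected trajectory data, which is unaffected by
    # visual disturbances.
    w = np.clip((similarity_score - score_floor) / (score_ceil - score_floor),
                0.0, 1.0)
    return w * np.asarray(p_template, float) + (1.0 - w) * np.asarray(p_projected, float)

# Example: heavy occlusion lowers the similarity score, so the fused estimate
# leans on the projected trajectory.
print(fuse_estimates((120.0, 85.0), (118.0, 87.0), similarity_score=0.3))
```

In this sketch the weight is a simple clipped linear function of the similarity score; any monotone mapping from confidence to weight would fit the same score-based weighted-averaging idea described in the abstract.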