{"title":"Predicting Fruit Fly Behaviour using TOLC device and DeepLabCut","authors":"Sanghoon Lee, Brayden Waugh, Garret O'Dell, Xiji Zhao, Wook-Sung Yoo, Dalhyung Kim","doi":"10.1109/BIBE52308.2021.9635290","DOIUrl":null,"url":null,"abstract":"Animal behavior is an essential element in neuroscience study and noninvasive behavioral tracking of animals during experiments is crucial to many scientific pursuits. However, extracting detailed poses without markers in dynamically changing backgrounds has been a challenge. Transparent Omnidirectional Locomotion Compensator (TOLC), a tracking device, was recently developed to investigate longitudinal studies of a wide range of behavior in an unrestricted walking Drosophila without tethering and the conventional image segmentation method has been used to identify the centroids of the walking Drosophila. Since the shape or morphological features of the pixel-wise mask may vary depending on the captured images, however, the centroid calculation errors could occur when segmenting the walking Drosophila. To solve the problem, DeepLabCut, an open-source deep-learning toolbox performing markerless pose estimation on a sequence of images for quantitative behavioral analysis, was utilized to find the centroids of Drosophila melanogaster in a video recorded by TOLC. One hundred labeled images with centroids were created for the training of ResNet50 among 60,984 images and used for predicting 5,000 images in the experiment. The results of the experiment showed that the centroids predicted by the deep learning model are more accurate than the centroids from the morphological features in a specific part of the sequence of the images. Additionally, we created 200 labeled images with legs for the training of ResN et50 and predicted 5,000 images to investigate the difference between the centroids of a Drosophila melanogaster over the locations of the legs. The centroids generated from morphological features often provide incorrect information when the Drosophila melanogaster stretches out the front legs for some regions. Detailed analysis of experiment results and the future research direction with more extensive experiments are discussed.","PeriodicalId":343724,"journal":{"name":"2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE)","volume":"49 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"2","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 IEEE 21st International Conference on Bioinformatics and Bioengineering (BIBE)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/BIBE52308.2021.9635290","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Citations: 2
Abstract
Animal behavior is an essential element of neuroscience research, and noninvasive behavioral tracking of animals during experiments is crucial to many scientific pursuits. However, extracting detailed poses without markers against a dynamically changing background remains a challenge. The Transparent Omnidirectional Locomotion Compensator (TOLC), a tracking device, was recently developed to enable longitudinal studies of a wide range of behaviors in a freely walking, untethered Drosophila, and a conventional image segmentation method has been used to identify the centroid of the walking fly. However, because the shape and morphological features of the pixel-wise mask vary across captured images, centroid calculation errors can occur when segmenting the walking Drosophila. To address this problem, DeepLabCut, an open-source deep-learning toolbox that performs markerless pose estimation on image sequences for quantitative behavioral analysis, was used to locate the centroid of Drosophila melanogaster in a video recorded by the TOLC. From a set of 60,984 images, 100 images were labeled with centroids to train a ResNet50 network, which was then used to predict centroids for 5,000 images. The results show that the centroids predicted by the deep-learning model are more accurate than those derived from morphological features in a specific part of the image sequence. In addition, 200 images labeled with leg positions were created to train ResNet50, and 5,000 images were predicted to examine how the centroid of a Drosophila melanogaster relates to the locations of its legs. In some regions, the centroids generated from morphological features provide incorrect information when the fly stretches out its front legs. A detailed analysis of the experimental results and directions for future research with more extensive experiments are discussed.
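The comparison described in the abstract can be illustrated with a short, hypothetical sketch. The snippet below estimates the fly's centroid in two ways for a single TOLC frame: from a thresholded pixel-wise mask (the conventional morphological approach) and from the keypoint predicted by a trained DeepLabCut model. All file names, the body-part label "centroid", and the threshold value are illustrative assumptions rather than the authors' actual pipeline; only the DeepLabCut functions named in the comments are real library calls.

```python
# Minimal, hypothetical sketch: mask-based centroid vs. DeepLabCut-predicted centroid.
import cv2
import numpy as np
import pandas as pd

def mask_centroid(frame_gray, thresh=60):
    """Centroid from a binary silhouette mask (the conventional approach)."""
    # The fly appears darker than the transparent sphere behind it, so an
    # inverted binary threshold isolates its silhouette (threshold assumed).
    _, mask = cv2.threshold(frame_gray, thresh, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return None
    return m["m10"] / m["m00"], m["m01"] / m["m00"]

# One grayscale frame recorded by the TOLC camera (path is illustrative).
frame = cv2.imread("tolc_frame_00001.png", cv2.IMREAD_GRAYSCALE)
seg_cx, seg_cy = mask_centroid(frame)

# The DeepLabCut side assumes the standard workflow was already run, e.g.
#   deeplabcut.create_training_dataset(config)
#   deeplabcut.train_network(config)
#   deeplabcut.analyze_videos(config, ["tolc_video.avi"])
# analyze_videos() writes per-frame predictions to an HDF5 file whose columns
# form a (scorer, bodypart, coordinate) hierarchy; "centroid" is the label
# assumed to have been annotated on the training images.
df = pd.read_hdf("tolc_videoDLC_resnet50.h5")
scorer = df.columns.get_level_values(0)[0]
dlc_cx = df[(scorer, "centroid", "x")].iloc[0]
dlc_cy = df[(scorer, "centroid", "y")].iloc[0]

# Euclidean gap between the two estimates for this frame.
gap = np.hypot(dlc_cx - seg_cx, dlc_cy - seg_cy)
print(f"mask: ({seg_cx:.1f}, {seg_cy:.1f})  "
      f"DLC: ({dlc_cx:.1f}, {dlc_cy:.1f})  gap: {gap:.1f} px")
```

Running such a per-frame comparison across the sequence is, in spirit, what the experiment reports: the mask-based centroid drifts in some regions when the fly stretches out its front legs, while the learned keypoint remains a more stable estimate of the body center.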