A Novel Approach on Unsupervised Dynamic Background Extraction Using Autoencoders

Ali Nuri Şeker, Hüseyin Doğan, Muhammet Üsame Öziç
{"title":"A Novel Approach on Unsupervised Dynamic Background Extraction Using Autoencoders","authors":"Ali Nuri Şeker, Hüseyin Doğan, Muhammet Üsame Öziç","doi":"10.1109/ISMSIT52890.2021.9604737","DOIUrl":null,"url":null,"abstract":"In this Study a novel method has been used for extracting the background from given images. Different from the existing approaches, a Convolutional Neural Network (CNN) Autoencoder (AE) has been trained with frames produced from the same stationary camera source, paired with random frames from the same pool for each sample as a label. A little over 4000 RGB images with the dimensions of 640x480 has been used for training and around 450 of them was used for testing. The mentioned model has 4 convolutional layers each in encoder and decoder sections. The training was conducted for 500 epochs and the value of epoch loss went down to 2.13x10-3 and 2.38x10-3 for training and validation respectively. After the training of the model, generated background samples were subtracted from the input images and was turned into a binary image using two different segmentation methods: HSV Thresholding and OTSU. To use as the ground truth, test images were hand labeled. 
Mentioned approach had an F1-score of 62.36% for HSV Thresholding and 69.63% for OTSU methods.","PeriodicalId":120997,"journal":{"name":"2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)","volume":"51 2 1","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2021-10-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"2021 5th International Symposium on Multidisciplinary Studies and Innovative Technologies (ISMSIT)","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1109/ISMSIT52890.2021.9604737","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

In this study, a novel method is proposed for extracting the background from given images. Unlike existing approaches, a Convolutional Neural Network (CNN) autoencoder (AE) is trained on frames produced by a single stationary camera, with each input frame paired with a random frame from the same pool as its label. A little over 4,000 RGB images of 640x480 pixels were used for training, and around 450 of them were used for testing. The model has four convolutional layers each in its encoder and decoder sections. Training was conducted for 500 epochs, with the epoch loss falling to 2.13×10⁻³ for training and 2.38×10⁻³ for validation. After training, the generated background samples were subtracted from the input images, and the result was binarized using two different segmentation methods: HSV thresholding and Otsu's method. Test images were hand-labeled for use as ground truth. The approach achieved an F1-score of 62.36% with HSV thresholding and 69.63% with Otsu's method.
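The post-processing stage described above (subtracting the AE-generated background from an input frame, then binarizing the difference with Otsu's method) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names `otsu_threshold` and `foreground_mask` are invented for this sketch, and collapsing the per-channel RGB difference with a max over channels is an assumption, since the paper's abstract does not specify how the color difference is reduced to one channel.

```python
import numpy as np

def otsu_threshold(gray):
    """Find the Otsu threshold of an 8-bit grayscale image by
    exhaustively searching the threshold that maximizes the
    between-class variance of the background/foreground split."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    mu_total = np.dot(np.arange(256), hist)  # sum of all intensities
    best_t, best_var = 0, -1.0
    cum_w = 0.0   # pixel count of class 0 so far
    cum_mu = 0.0  # intensity sum of class 0 so far
    for t in range(256):
        cum_w += hist[t]
        cum_mu += t * hist[t]
        w0 = cum_w / total
        w1 = 1.0 - w0
        if w0 == 0.0 or w1 == 0.0:
            continue  # one class is empty; split is undefined
        mu0 = cum_mu / cum_w
        mu1 = (mu_total - cum_mu) / (total - cum_w)
        between_var = w0 * w1 * (mu0 - mu1) ** 2
        if between_var > best_var:
            best_var, best_t = between_var, t
    return best_t

def foreground_mask(frame, background):
    """Absolute difference between a frame and its reconstructed
    background, binarized with Otsu: 255 = moving object, 0 = static."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    diff = diff.astype(np.uint8)
    if diff.ndim == 3:          # assumed rule: collapse RGB to one channel
        diff = diff.max(axis=2)
    t = otsu_threshold(diff)
    return np.where(diff > t, 255, 0).astype(np.uint8)
```

In practice the `background` argument would be the autoencoder's reconstruction of the frame; here any array of the same shape works, which makes the segmentation step easy to test in isolation from the network.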