Enhancing lecture capture with deep learning

Authors: R.M. Sales, S. Giani
DOI: 10.1016/j.advengsoft.2024.103732
Journal: Advances in Engineering Software, Volume 196, Article 103732
Publication date: 2024-07-29 (Journal Article)
Article page: https://www.sciencedirect.com/science/article/pii/S096599782400139X
Open-access PDF: https://www.sciencedirect.com/science/article/pii/S096599782400139X/pdfft?md5=a2906a6e69fc7570baf43b6aac3a15be&pid=1-s2.0-S096599782400139X-main.pdf
JCR: Q2 (Computer Science, Interdisciplinary Applications)

Abstract:
This paper provides an insight into the development of a state-of-the-art video processing system that addresses limitations of Durham University’s ‘Encore’ lecture capture solution. The aim of the research described in this paper is to digitally remove the presenters from the view of a whiteboard, providing students with a more effective online learning experience. This work enlists a ‘human entity detection module’, which uses a remodelled version of the Fast Segmentation Neural Network to perform efficient binary image segmentation, and a ‘background restoration module’, which introduces a novel procedure to retain only background pixels across consecutive video frames. The segmentation network is trained from the outset with a Tversky loss function on a dataset of images extracted from various TikTok dance videos. The most effective training techniques are described in detail, and these are found to produce asymptotic convergence to within 5% of the final loss in only 40 training epochs. A cross-validation study then concludes that a Tversky parameter of 0.9 is optimal for balancing recall and precision in the context of this work. Finally, it is demonstrated that the system successfully removes the human form from the view of the whiteboard in a real lecture video. Whilst the system is believed to have the potential for real-time usage, it is not possible to prove this owing to hardware limitations. In the conclusions, wider application of this work is also suggested.
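The two core ideas in the abstract can be illustrated with a short sketch. The first function is the standard Tversky index turned into a loss, where the parameter weighting false negatives is set to 0.9 as reported optimal in the paper (the abstract does not state which term the parameter weights; treating it as the false-negative weight, which favours recall, is an assumption here). The second function is a simplified, hypothetical reading of the 'background restoration module': given a binary human mask per frame, it accumulates only the pixels classified as background across consecutive frames. Both function names and signatures are illustrative, not the authors' implementation.

```python
import numpy as np

def tversky_loss(pred, target, beta=0.9, eps=1e-7):
    """Tversky loss for binary segmentation.

    beta weights false negatives, alpha = 1 - beta weights false positives;
    beta > 0.5 favours recall (fewer missed foreground pixels).
    Assumption: the paper's 'Tversky parameter of 0.9' is this beta.
    """
    alpha = 1.0 - beta
    pred = pred.ravel().astype(float)
    target = target.ravel().astype(float)
    tp = np.sum(pred * target)          # true positives
    fp = np.sum(pred * (1.0 - target))  # false positives
    fn = np.sum((1.0 - pred) * target)  # false negatives
    tversky_index = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - tversky_index

def restore_background(frames, masks):
    """Hypothetical sketch of background retention across frames.

    frames: list of 2-D arrays (grayscale frames for simplicity).
    masks:  list of binary arrays, 1 where a human is detected.
    Each background pixel (mask == 0) overwrites the running canvas,
    so regions occluded in one frame are filled in from later frames.
    """
    background = np.zeros_like(frames[0])
    for frame, mask in zip(frames, masks):
        visible = mask == 0
        background[visible] = frame[visible]
    return background
```

With beta = 0.9 a missed foreground pixel costs noticeably more than a spurious one, which matches the stated goal of reliably erasing the presenter even at the expense of occasionally over-segmenting.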
Journal introduction:
The objective of this journal is to communicate recent and projected advances in computer-based engineering techniques. The fields covered include mechanical, aerospace, civil and environmental engineering, with an emphasis on research and development leading to practical problem-solving.
The scope of the journal includes:
• Innovative computational strategies and numerical algorithms for large-scale engineering problems
• Analysis and simulation techniques and systems
• Model and mesh generation
• Control of the accuracy, stability and efficiency of the computational process
• Exploitation of new computing environments (e.g. distributed, heterogeneous and collaborative computing)
• Advanced visualization techniques, virtual environments and prototyping
• Applications of AI, knowledge-based systems and computational intelligence, including fuzzy logic, neural networks and evolutionary computation
• Application of object-oriented technology to engineering problems
• Intelligent human-computer interfaces
• Design automation, multidisciplinary design and optimization
• CAD, CAE and integrated process and product development systems
• Quality and reliability.