An Overview of Visual Sound Synthesis Generation Tasks Based on Deep Learning Networks

Hongyu Gao
{"title":"An Overview of Visual Sound Synthesis Generation Tasks Based on Deep Learning Networks","authors":"Hongyu Gao","doi":"10.62051/acf99a49","DOIUrl":null,"url":null,"abstract":"Visual sound synthesis (which refers to the process of recreating, as realistically as possible, the sound produced by the movements and actions of objects within a video, given specific conditions such as video content and accompanying text) is an important part of the composition of high-quality films at present. Most traditional methods of sound synthesis are based on the artificial creation of simulated props for sound effects synthesis, which is achieved by using various existing props and constructed scenes. However, traditional methods cannot meet specific conditions for sound effect synthesis and require large amounts of participant, material resources and time. It can take nearly ten hours to simulate realistic sound effects in a minute-long video. In this paper, we systematically summarize and consolidate current advances in deep learning in the field of visual sound synthesis, based on existing related papers. We focus on the exploration and development history of deep learning models for the task of visual sound synthesis, and classify detailed research methods and related dataset information based on their development characteristics. 
By analyzing the technical differences among various model approaches, we can summarize potential research directions in the field, thereby further promoting the rapid development and practical implementation of deep learning models in the video domain.","PeriodicalId":503289,"journal":{"name":"Transactions on Engineering and Technology Research","volume":"1 4","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2023-12-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Transactions on Engineering and Technology Research","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.62051/acf99a49","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}

Abstract

Visual sound synthesis (recreating, as realistically as possible, the sound produced by the movements and actions of objects within a video, given conditions such as the video content and accompanying text) is currently an important part of producing high-quality films. Most traditional sound-synthesis methods rely on the manual creation of simulated props for sound-effect synthesis, achieved with various existing props and constructed scenes. However, traditional methods cannot meet specific conditions for sound-effect synthesis and require large amounts of manpower, material resources, and time: simulating realistic sound effects for a minute-long video can take nearly ten hours. In this paper, we systematically summarize and consolidate current advances in deep learning in the field of visual sound synthesis, based on existing related papers. We focus on the exploration and development history of deep learning models for the visual sound synthesis task, and classify detailed research methods and related dataset information according to their development characteristics. By analyzing the technical differences among the various model approaches, we summarize potential research directions in the field, thereby further promoting the rapid development and practical implementation of deep learning models in the video domain.
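To make the task formulation concrete: the surveyed deep-learning approaches can be viewed as mapping a sequence of per-frame visual features to a sequence of audio (e.g., mel-spectrogram) frames. The NumPy sketch below is purely illustrative; all shapes, the random weights, and the simple recurrence are assumptions for exposition and do not represent the architecture of any specific surveyed model.

```python
import numpy as np

# Toy formulation of visual sound synthesis: per-frame visual
# features are mapped, step by step, to spectrogram frames.
# Every dimension and weight here is an illustrative assumption.

rng = np.random.default_rng(0)

T, D_VISUAL, D_HIDDEN, N_MELS = 8, 16, 32, 10   # assumed sizes

frames = rng.standard_normal((T, D_VISUAL))      # stand-in frame features
W_in = rng.standard_normal((D_VISUAL, D_HIDDEN)) * 0.1
W_rec = rng.standard_normal((D_HIDDEN, D_HIDDEN)) * 0.1
W_out = rng.standard_normal((D_HIDDEN, N_MELS)) * 0.1

h = np.zeros(D_HIDDEN)
spectrogram = []
for x in frames:                        # one audio frame per video frame
    h = np.tanh(x @ W_in + h @ W_rec)   # simple recurrent state update
    spectrogram.append(h @ W_out)       # predict one mel-spectrogram frame
spectrogram = np.stack(spectrogram)

print(spectrogram.shape)  # (8, 10): T audio frames of N_MELS bins each
```

In practice, the surveyed models replace the toy encoder and recurrence with learned networks (e.g., CNN frame encoders and sequence or generative decoders), but the input-to-output shape of the task is the same.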