Automatic cinematography for body movement involved virtual communication

IET Communications · Published: 2024-03-20 · DOI: 10.1049/cmu2.12748 · Impact Factor: 1.5 · JCR Q3 (Engineering, Electrical & Electronic) · CAS Tier 4 (Computer Science)
Zixiao Yu, Honghong Wang, Kim Un
Citations: 0

Abstract

The emergence of novel AI technologies and increasingly portable wearable devices has introduced a wider range of more liberated avenues for communication and interaction between humans and virtual environments. In this context, the distinct emotions and movements expressed by users may convey a variety of meanings. Consequently, an emerging challenge is how to automatically enhance the visual representation of such interactions. Here, a novel Generative Adversarial Network (GAN) based model, AACOGAN, is introduced to tackle this challenge effectively. The AACOGAN model establishes a relationship between player interactions, object locations, and camera movements, subsequently generating camera shots that augment player immersion. Experimental results demonstrate that AACOGAN enhances the correlation between player interactions and camera trajectories by an average of 73%, and improves multi-focus scene quality by up to 32.9%. Consequently, AACOGAN is established as an efficient and economical solution for generating camera shots appropriate for a wide range of interactive motions. Exemplary video footage can be found at https://youtu.be/Syrwbnpzgx8.
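The abstract describes a conditional GAN that maps player interactions and object locations to camera trajectories. The paper's actual architecture is not given here, so the following is only a minimal structural sketch of that idea: a generator that consumes interaction features, object positions, and noise to emit a camera trajectory, and a discriminator that scores (condition, trajectory) pairs. All names and dimensions (`T`, `D_INTER`, `D_OBJ`, `D_CAM`) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 30        # trajectory length in frames (assumed)
D_INTER = 16  # player-interaction feature size (assumed)
D_OBJ = 9     # e.g. three tracked objects x (x, y, z) (assumed)
D_NOISE = 8   # latent noise size (assumed)
D_CAM = 6     # camera pose per frame: position (x, y, z) + look-at (x, y, z)

def mlp(x, sizes, rng):
    """Minimal MLP forward pass with tanh hidden layers and random weights."""
    for i, (n_in, n_out) in enumerate(zip(sizes[:-1], sizes[1:])):
        w = rng.standard_normal((n_in, n_out)) / np.sqrt(n_in)
        x = x @ w
        if i < len(sizes) - 2:
            x = np.tanh(x)
    return x

def generator(interaction, objects, noise, rng):
    """Produce a T x D_CAM camera trajectory conditioned on the scene state."""
    cond = np.concatenate([interaction, objects, noise])
    out = mlp(cond, [cond.size, 64, T * D_CAM], rng)
    return out.reshape(T, D_CAM)

def discriminator(interaction, objects, trajectory, rng):
    """Score how plausible a trajectory is for the given condition, in (0, 1)."""
    x = np.concatenate([interaction, objects, trajectory.ravel()])
    logit = mlp(x, [x.size, 64, 1], rng)
    return 1.0 / (1.0 + np.exp(-logit[0]))  # sigmoid

interaction = rng.standard_normal(D_INTER)
objects = rng.standard_normal(D_OBJ)
noise = rng.standard_normal(D_NOISE)

traj = generator(interaction, objects, noise, rng)
score = discriminator(interaction, objects, traj, rng)
print(traj.shape, float(score))
```

Training (alternating generator/discriminator updates on real versus generated trajectories) is omitted; the sketch only shows the conditioning structure implied by the abstract, where the same interaction and object features are fed to both networks.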


Source journal: IET Communications (Engineering Technology – Electrical & Electronic Engineering)
CiteScore: 4.30
Self-citation rate: 6.20%
Articles per year: 220
Average review time: 5.9 months
Journal description: IET Communications covers fundamental and generic research toward a better understanding of communication technologies, harnessing signals for better-performing communication systems over wired and/or wireless media. The journal is particularly interested in research papers reporting novel solutions to the dominant problems of noise, interference, timing, and errors, and to reducing system deficiencies such as the waste of scarce resources (spectrum, energy, and bandwidth). Topics include, but are not limited to: Coding and Communication Theory; Modulation and Signal Design; Wired, Wireless and Optical Communication; Communication System Special Issues. Current calls for papers: Cognitive and AI-enabled Wireless and Mobile - https://digital-library.theiet.org/files/IET_COM_CFP_CAWM.pdf; UAV-Enabled Mobile Edge Computing - https://digital-library.theiet.org/files/IET_COM_CFP_UAV.pdf
Latest articles in this journal:
- A deep learning-based approach for pseudo-satellite positioning
- Analysis of interference effect in VL-NOMA network considering signal power parameters performance
- An innovative model for an enhanced dual intrusion detection system using LZ-JC-DBSCAN, EPRC-RPOA and EG-GELU-GRU
- A high-precision timing and frequency synchronization algorithm for multi-h CPM signals
- Dual-user joint sensing and communications with time-divisioned bi-static radar