{"title":"Application of Multimedia Data Feature Extraction Technology in Teaching Classical Oil Painting","authors":"Zhuo Chen, Jianmiao Li","doi":"10.4018/ijwltt.333601","DOIUrl":null,"url":null,"abstract":"The cross-modal oil painting image generated by traditional methods makes it easy to miss the important information of the target part, and the generated image lacks realism. This paper combines the feature extraction technology of multimedia data with the generation confrontation network in deep learning, puts forward a generation model of classic oil painting, and applies it to university teaching. Firstly, the key frame extraction algorithm is used to extract the key frames in the video, and the channel attention network is introduced into the pre-trained ResNet-50 network to extract the static features of 2D images in short oil painting videos. Then, the depth feature mapping is carried out in the time dimension by using the double-stream I3D network, and the feature representation is enhanced by combining static and dynamic features. Finally, the high-dimensional features in the depth space are mapped to the two-dimensional space by using the opposition generation network to generate classic oil painting pictures.","PeriodicalId":39282,"journal":{"name":"International Journal of Web-Based Learning and Teaching Technologies","volume":"167 3","pages":"0"},"PeriodicalIF":0.0000,"publicationDate":"2023-11-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"International Journal of Web-Based Learning and Teaching Technologies","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.4018/ijwltt.333601","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"Social Sciences","Score":null,"Total":0}
Abstract
Cross-modal oil painting images generated by traditional methods tend to miss important information about the target region, and the generated images lack realism. This paper combines multimedia data feature extraction technology with the generative adversarial network (GAN) from deep learning, proposes a classical oil painting generation model, and applies it to university teaching. First, a key frame extraction algorithm is used to extract key frames from the video, and a channel attention module is introduced into a pre-trained ResNet-50 network to extract static features from the 2D images in short oil painting videos. Then, depth feature mapping is carried out along the time dimension with a two-stream I3D network, and the feature representation is enhanced by combining the static and dynamic features. Finally, the high-dimensional features in the depth space are mapped to two-dimensional space by the generative adversarial network to generate classical oil painting pictures.
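To make the static feature extraction step concrete, the following is a minimal sketch, not the authors' implementation: it assumes a PyTorch setup in which a squeeze-and-excitation style channel attention block is attached to a pre-trained ResNet-50 trunk, producing one static feature vector per extracted key frame. The class names, the reduction ratio, and the SE-style gating are illustrative assumptions; the paper only states that channel attention is introduced into ResNet-50.

```python
# Sketch (assumed implementation): channel attention on a pre-trained ResNet-50
# to extract static features from key frames of a short oil painting video.
import torch
import torch.nn as nn
from torchvision import models


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (reduction ratio is an assumption)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)           # squeeze: global spatial average
        self.fc = nn.Sequential(                      # excitation: per-channel gating
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        weights = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * weights                            # reweight feature channels


class StaticFeatureExtractor(nn.Module):
    """Pre-trained ResNet-50 trunk + channel attention, yielding a 2048-d vector per key frame."""

    def __init__(self):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
        self.trunk = nn.Sequential(*list(backbone.children())[:-2])  # drop avgpool and fc
        self.attn = ChannelAttention(2048)
        self.pool = nn.AdaptiveAvgPool2d(1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        feat = self.attn(self.trunk(frames))          # (B, 2048, H', W')
        return self.pool(feat).flatten(1)             # (B, 2048) static features


if __name__ == "__main__":
    extractor = StaticFeatureExtractor().eval()
    key_frames = torch.randn(4, 3, 224, 224)          # 4 key frames extracted from a video
    with torch.no_grad():
        print(extractor(key_frames).shape)            # torch.Size([4, 2048])
```

Pooling each attended feature map to a single vector is a design convenience here: it gives per-frame static descriptors of fixed size that could later be fused with the dynamic features from the two-stream I3D branch before being passed to the GAN generator, in line with the pipeline described in the abstract.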