Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia: Latest Publications

Session details: Multimedia Artworks Analysis
T. Yamasaki
{"title":"Session details: Multimedia Artworks Analysis","authors":"T. Yamasaki","doi":"10.1145/3406282","DOIUrl":"https://doi.org/10.1145/3406282","url":null,"abstract":"","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"55 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133091567","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia
{"title":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","authors":"","doi":"10.1145/3379173","DOIUrl":"https://doi.org/10.1145/3379173","url":null,"abstract":"","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"8 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"122063047","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Recommendations for Attractive Hairstyles
Yuto Nakamae, Xueting Wang, T. Yamasaki
People change their hairstyles to make their appearance more attractive; however, it is difficult to determine which hairstyles are attractive. In this study, we aim to recommend a hairstyle that improves the attractiveness of an input face, using deep-learning-based attractiveness evaluation and image generation. In our experiments, we first learned attractiveness and obtained results similar to human intuition. Second, the hairstyle of the input image was changed using two methods: hairstyle attribute conversion and face swapping. Finally, a comparison experiment was performed in which the input image and the image obtained by the proposed method were evaluated subjectively. As a result, the proposed method was able to generate images with high evaluation values.
{"title":"Recommendations for Attractive Hairstyles","authors":"Yuto Nakamae, Xueting Wang, T. Yamasaki","doi":"10.1145/3379173.3393709","DOIUrl":"https://doi.org/10.1145/3379173.3393709","url":null,"abstract":"People change their hairstyles to make their appearance attractive, however it is difficult to determine which hairstyles are attractive. In this study, we aim to recommend a hairstyle that improves the attractiveness for an input face using attractiveness evaluation and image generation by deep learning. In the experiment, we first learned the attractiveness and obtained results similar to human intuition. Second, the hairstyle of the input image was changed using two methods: hairstyle attribute conversion and face swapping. Finally, a comparison experiment was performed by subjectively evaluating the input image and the image obtained by the proposed method. As a result, the proposed method was able to generate images with high evaluation value.","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"12 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"133349037","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
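The paper above ships no code; the sketch below only illustrates the ranking step implied by its abstract: a CNN regressor scores attractiveness, and the best-scoring candidate among generated hairstyle images (from attribute conversion or face swapping) is recommended. The class name, the ResNet-18 backbone, and the recommend_best helper are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class AttractivenessScorer(nn.Module):
    """CNN regressor mapping a face image to a scalar attractiveness score.

    Hypothetical stand-in for the paper's learned attractiveness model.
    """
    def __init__(self):
        super().__init__()
        self.backbone = models.resnet18(weights=None)
        # Replace the 1000-way classifier head with a single regression output.
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 1)

    def forward(self, x):  # x: (N, 3, 224, 224) face crops
        return self.backbone(x).squeeze(1)

def recommend_best(scorer, candidates):
    """Return the index of the highest-scoring generated hairstyle.

    candidates: one tensor batch of hairstyle variants of the same face.
    """
    scorer.eval()
    with torch.no_grad():
        scores = scorer(candidates)  # (N,)
    return int(scores.argmax().item())
```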
Automatic YouTube-Thumbnail Generation and Its Evaluation
Akari Shimono, Yuki Kakui, T. Yamasaki
YouTubers have recently become highly popular, and eye-catching thumbnails are an important factor in attracting viewers. In this study, we propose an automatic YouTube-video-thumbnail generation method that ensures the following: a rich facial expression of the YouTuber in the frame, clear presentation of the subject of the video, and a clear description of the content through a headline. We compared thumbnails generated by the proposed method with those generated by existing methods and with those of actually posted videos (i.e., ground truth), and we evaluated the results.
{"title":"Automatic YouTube-Thumbnail Generation and Its Evaluation","authors":"Akari Shimono, Yuki Kakui, T. Yamasaki","doi":"10.1145/3379173.3393711","DOIUrl":"https://doi.org/10.1145/3379173.3393711","url":null,"abstract":"YouTubers have recently become highly popular. Generating eye-catching thumbnails is an important factor in attracting viewers. In this study, we propose an automatic YouTube-video-thumbnail generation method that ensures the following: rich facial expression of the YouTuber in the frame, clear presentation of the subject of the video, and clear description of the content through a headline. We compared thumbnails generated by the proposed method with those generated by existing methods or those of actually posted videos (i.e., ground truth), and we evaluated the results.","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"7 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115521053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 12
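As a rough illustration of the selection step described above, the hedged sketch below picks the candidate frame with the highest facial-expression score and overlays a headline. The precomputed expression_scores input and the fixed text position are assumptions; the paper's actual scoring model and layout rules are not given here.

```python
from PIL import ImageDraw

def make_thumbnail(frames, expression_scores, headline):
    """Select the most expressive frame and stamp a headline on it.

    frames: list of PIL.Image candidate frames extracted from the video.
    expression_scores: one float per frame, assumed to come from some
    facial-expression model (not specified in this sketch).
    """
    best = max(range(len(frames)), key=lambda i: expression_scores[i])
    thumb = frames[best].copy()
    # Draw the headline with PIL's default bitmap font for simplicity.
    ImageDraw.Draw(thumb).text((10, 10), headline, fill="white")
    return thumb
```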
Session details: Attractiveness Computing in Multimedia
W. Chu
{"title":"Session details: Attractiveness Computing in Multimedia","authors":"W. Chu","doi":"10.1145/3406283","DOIUrl":"https://doi.org/10.1145/3406283","url":null,"abstract":"","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"10 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126527663","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
BatikGAN: A Generative Adversarial Network for Batik Creation
W. Chu, Lin-Yu Ko
We propose a texture synthesis method based on generative adversarial networks, focusing on Batik, a cultural emblem of Southeast Asian countries. We use a two-stage training approach to construct the network: patches are generated first and then combined to produce the entire Batik image. Regular repetition and the removal of synthesis artifacts are jointly considered to guide model training. In the evaluation, we show that the proposed generator fuses two Batik styles, removes blocking artifacts, and generates harmonious Batik images. Qualitative and quantitative evaluations show promising performance from several perspectives.
{"title":"BatikGAN: A Generative Adversarial Network for Batik Creation","authors":"W. Chu, Lin-Yu Ko","doi":"10.1145/3379173.3393710","DOIUrl":"https://doi.org/10.1145/3379173.3393710","url":null,"abstract":"We propose a texture synthesis method based on generative adversarial networks, focusing on a cultural emblem, called Batik, of southeastern Asian countries. We propose a two-stage training approach to construct the network, first generating patches and then combining patches to generate the entire Batik image. Regular repetition and synthesis artifact removal are jointly considered to guide model training. In the evaluation, we show that the proposed generator fuses two Batik styles, removes blocking artifacts, and generates harmonious Batik images. Qualitative and quantitative evaluations are provided to show promising performance from several perspectives.","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"130325517","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
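A minimal sketch of the hand-off between the two training stages described above: patches generated independently in stage one are tiled into a full canvas, which the second stage would then refine to remove the seams (the blocking artifacts mentioned in the abstract). The generator interface, grid size, and function name are assumptions rather than the authors' code.

```python
import torch

def assemble_patches(patch_generator, latents, grid=(4, 4)):
    """Tile independently generated patches into one canvas (stage-two input).

    patch_generator: maps a (1, z_dim) latent to a (1, C, H, W) patch.
    latents: (grid[0] * grid[1], z_dim) latent codes, one per patch.
    """
    n_rows, n_cols = grid
    rows = []
    for i in range(n_rows):
        row = [patch_generator(latents[i * n_cols + j].unsqueeze(0))
               for j in range(n_cols)]
        rows.append(torch.cat(row, dim=-1))   # join patches along width
    return torch.cat(rows, dim=-2)            # stack rows along height
```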
Style Image Retrieval for Improving Material Translation Using Neural Style Transfer
Gibran Benitez-Garcia, Wataru Shimoda, Keiji Yanai
In this paper, we propose a CNN-feature-based image retrieval method to find the ideal style image, the one that best translates the material of an object. An ideal style image must share semantic information with the content image while exhibiting the distinctive characteristics of the desired material. Therefore, we first refine the search by selecting the most discriminative images of the target material. Subsequently, the search focuses on object semantics: we remove style information through instance normalization whitening and perform the search on the normalized CNN features. To translate materials onto object regions, we combine semantic segmentation with neural style transfer: objects are segmented from the content image using a weakly supervised segmentation method, and the material of the retrieved style image is transferred to the segmented areas. We demonstrate quantitatively and qualitatively that using ideal style images significantly improves the results of conventional neural style transfer, outperforming state-of-the-art approaches such as WCT, MUNIT, and StarGAN.
{"title":"Style Image Retrieval for Improving Material Translation Using Neural Style Transfer","authors":"Gibran Benitez-Garcia, Wataru Shimoda, Keiji Yanai","doi":"10.1145/3379173.3393707","DOIUrl":"https://doi.org/10.1145/3379173.3393707","url":null,"abstract":"In this paper, we propose a CNN-feature-based image retrieval method to find the ideal style image that better translates the material of an object. An ideal style image must share semantic information with the content image, while containing distinctive characteristics of the desired material. Therefore, we first refine the search by selecting the most discriminative images from the target material. Subsequently, our search process focuses on the object semantics by removing the style information using instance normalization whitening. Thus, the search is performed using the normalized CNN features. In order to translate materials to object regions, we combine semantic segmentation with neural style transfer. We segment objects from the content image by using a weakly supervised segmentation method, and transfer the material of the retrieved style image to the segmented areas. We demonstrate quantitatively and qualitatively that by using ideal style images, the results of the conventional neural style transfer are significantly improved, overcoming state-of-the-art approaches, such as WCT, MUNIT, and StarGAN.","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"70 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124415685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 4
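The whitening step above is concrete enough to sketch: per-channel instance normalization strips the style statistics (mean and variance) from CNN feature maps, and retrieval then ranks the gallery by cosine similarity over the normalized features. This is a sketch of the idea, not the authors' code; the feature shapes and function names are assumptions.

```python
import torch
import torch.nn.functional as F

def instance_whiten(feat, eps=1e-5):
    """Per-channel, per-instance normalization: removes style statistics.

    feat: (N, C, H, W) CNN feature maps from some backbone.
    """
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True)
    return (feat - mu) / (sigma + eps)

def rank_gallery(query_feat, gallery_feats):
    """Sort gallery images by content similarity to the query."""
    q = instance_whiten(query_feat).flatten(1)     # (1, C*H*W)
    g = instance_whiten(gallery_feats).flatten(1)  # (M, C*H*W)
    sims = F.cosine_similarity(q, g)               # broadcasts to (M,)
    return torch.argsort(sims, descending=True)
```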
Iconify: Converting Photographs into Icons
Takuro Karamatsu, Gibran Benitez-Garcia, Keiji Yanai, S. Uchida
In this paper, we tackle a challenging domain conversion task between photographs and icon images. Although icons often originate from real object images (i.e., photographs), professional graphic designers apply severe abstraction and simplification when creating them. Moreover, there is no one-to-one correspondence between the two domains, so no paired ground truth is available for learning a direct conversion function. Since generative adversarial networks (GANs) can handle domain conversion without any correspondence, we test CycleGAN and UNIT to generate icons from objects segmented from photographs. Our experiments on several image datasets show that CycleGAN learns sufficient abstraction and simplification ability to generate icon-like images.
{"title":"Iconify: Converting Photographs into Icons","authors":"Takuro Karamatsu, Gibran Benitez-Garcia, Keiji Yanai, S. Uchida","doi":"10.1145/3379173.3393708","DOIUrl":"https://doi.org/10.1145/3379173.3393708","url":null,"abstract":"In this paper, we tackle a challenging domain conversion task between photo and icon images. Although icons often originate from real object images (i.e., photographs), severe abstractions and simplifications are applied to generate icon images by professional graphic designers. Moreover, there is no one-to-one correspondence between the two domains, for this reason we cannot use it as the ground-truth for learning a direct conversion function. Since generative adversarial networks (GAN) can undertake the problem of domain conversion without any correspondence, we test CycleGAN and UNIT to generate icons from objects segmented from photo images. Our experiments with several image datasets prove that CycleGAN learns sufficient abstraction and simplification ability to generate icon-like images.","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"33 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-04-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"132064593","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 8
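Given the lack of photo-icon pairs noted above, CycleGAN's key ingredient is a cycle-consistency term that substitutes for paired supervision. The sketch below shows that term only; the generator interfaces are assumptions, and the weight lam=10.0 follows the common CycleGAN convention rather than anything stated in this paper.

```python
import torch.nn.functional as F

def cycle_consistency_loss(G, F_inv, photos, icons, lam=10.0):
    """L1 round-trip loss that stands in for missing photo-icon pairs.

    G: photo -> icon generator; F_inv: icon -> photo generator.
    Both are assumed to be nn.Module instances taking image batches.
    """
    loss_photo = F.l1_loss(F_inv(G(photos)), photos)  # photo -> icon -> photo
    loss_icon = F.l1_loss(G(F_inv(icons)), icons)     # icon -> photo -> icon
    return lam * (loss_photo + loss_icon)
```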
Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia
{"title":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","authors":"","doi":"10.5555/3403748","DOIUrl":"https://doi.org/10.5555/3403748","url":null,"abstract":"","PeriodicalId":416027,"journal":{"name":"Proceedings of the 2020 Joint Workshop on Multimedia Artworks Analysis and Attractiveness Computing in Multimedia","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"129182628","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0