Enhancing Social Media Analysis with Visual Data Analytics: A Deep Learning Approach
D. Shin, Shu He, G. Lee, Andrew Whinston, Suleyman Cetintas, Kuang-chih Lee
MIS Quarterly, published 2020-10-12, DOI: 10.2139/SSRN.2830377
In the present study, we investigate the effect of social media content on subsequent customer engagement (likes and reblogs) using a large-scale dataset from Tumblr. Our study focuses on company-generated posts, which consist of two main information sources: visual (images) and textual (text and tags). We employ state-of-the-art machine learning approaches, including deep learning, to extract data-driven features from both sources that effectively capture their semantics in a systematic and scalable manner. With such semantic representations, we develop novel complexity, similarity, and consistency measures of social media content. Our empirical results show that proper visual stimuli (e.g., beautiful images, adult content, celebrities), complementary textual content, and consistent themes have positive effects on engagement, and that content demanding significant concentration (e.g., video, images with complex semantics, text with diverse topics, complex sentences) has the opposite effect. Further analyses from different perspectives (industry level, hedonic/utilitarian products, followers/non-followers, short/long-term engagement) show the heterogeneous effects of visual and textual features. This work contributes to the literature by exemplifying how unstructured multimedia data (image, video, and audio) can be translated into insights. Our framework for semantic content analysis, particularly for visual content, illustrates how to leverage deep learning methods to better model and analyze multimedia data for effective marketing and social media strategies.
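To make the deep-learning feature extraction and the similarity/consistency measures concrete, the following is a minimal sketch, not the authors' actual pipeline: it assumes a pretrained ResNet-50 from torchvision as the image encoder and uses cosine similarity between two posts' image embeddings as a rough stand-in for the paper's content similarity/consistency measures; the function names and file paths are hypothetical.

import torch
import torch.nn.functional as F
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# Load a pretrained ResNet-50 and drop its classification head so the
# 2048-dimensional pooled activations serve as a semantic image embedding.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
model.fc = torch.nn.Identity()
model.eval()

# Standard ImageNet preprocessing expected by the pretrained weights.
preprocess = T.Compose([
    T.Resize(256),
    T.CenterCrop(224),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def embed_image(path: str) -> torch.Tensor:
    """Return an L2-normalized embedding for one post image."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        feat = model(preprocess(img).unsqueeze(0)).squeeze(0)
    return F.normalize(feat, dim=0)

def visual_similarity(path_a: str, path_b: str) -> float:
    """Cosine similarity between two posts' image embeddings,
    a simple proxy for visual content similarity/consistency."""
    return float(torch.dot(embed_image(path_a), embed_image(path_b)))

# Hypothetical usage: compare two images attached to company posts.
# score = visual_similarity("post_1.jpg", "post_2.jpg")

In a setting like the paper's, such embeddings would be computed for every company-generated post, and the resulting similarity, complexity, and consistency measures would then be related to engagement outcomes (likes and reblogs) in the empirical analysis.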