EDNet: Attention-Based Multimodal Representation for Classification of Twitter Users Related to Eating Disorders
Mohammad Abuhassan, Tarique Anwar, Chengfei Liu, H. Jarman, M. Fuller‐Tyszkiewicz
Proceedings of the ACM Web Conference 2023, 2023-04-30. DOI: 10.1145/3543507.3583863
Social media platforms provide rich data sources in several domains. In mental health, individuals experiencing an Eating Disorder (ED) are often hesitant to seek help through conventional healthcare services. However, many people seek help with diet and body image issues on social media. To better distinguish at-risk users who may need help for an ED from those who are simply commenting on ED in social environments, highly sophisticated approaches are required. Assessment of ED risks in such a situation can be done in various ways, and each has its own strengths and weaknesses. Hence, there is a need for, and potential benefit of, a more complex multimodal approach. To this end, we collect historical tweets, user biographies, and online behaviours of relevant users from Twitter, and generate a reasonably large labelled benchmark dataset. Thereafter, we develop an advanced multimodal deep learning model called EDNet using these data to identify the different types of users with ED engagement (e.g., potential ED sufferers, healthcare professionals, or communicators) and distinguish them from those not experiencing EDs on Twitter. EDNet consists of five deep neural network layers. With the help of its embedding, representation and behaviour modelling layers, it effectively learns from the multiple modalities of social media data. In our experiments, EDNet consistently outperforms all the baseline techniques by significant margins. It achieves an accuracy of up to 94.32% and an F1 score of up to 93.91%. To the best of our knowledge, this is the first such study to propose a multimodal approach for user-level classification according to their engagement with ED content on social media.
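The abstract describes EDNet as a five-layer architecture whose embedding, representation and behaviour modelling layers are combined through attention. The sketch below is a minimal, hypothetical PyTorch illustration of that general pattern only, not the authors' implementation: the encoder choices, layer sizes, behaviour features and the four user categories are illustrative assumptions.

```python
# Minimal sketch of an attention-based multimodal user classifier in the spirit of EDNet.
# NOT the authors' implementation; all architectural details are illustrative assumptions.
import torch
import torch.nn as nn

class MultimodalEDClassifier(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=128, behaviour_dim=16, num_classes=4):
        super().__init__()
        # Embedding layer: shared token embeddings for tweet and biography text.
        self.token_embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Representation layer: one BiLSTM encoder per textual modality.
        self.tweet_encoder = nn.LSTM(embed_dim, embed_dim, batch_first=True, bidirectional=True)
        self.bio_encoder = nn.LSTM(embed_dim, embed_dim, batch_first=True, bidirectional=True)
        # Behaviour modelling layer: a small MLP over numeric activity features
        # (e.g. posting frequency, follower counts), projected to the same width.
        self.behaviour_mlp = nn.Sequential(nn.Linear(behaviour_dim, 2 * embed_dim), nn.ReLU())
        # Attention-based fusion over the three modality vectors.
        self.attention = nn.Linear(2 * embed_dim, 1)
        # Classification layer over hypothetical user categories
        # (e.g. potential sufferer, healthcare professional, communicator, non-ED user).
        self.classifier = nn.Linear(2 * embed_dim, num_classes)

    def forward(self, tweet_ids, bio_ids, behaviour_feats):
        # Encode each modality into a single fixed-size vector.
        _, (tweet_h, _) = self.tweet_encoder(self.token_embedding(tweet_ids))
        _, (bio_h, _) = self.bio_encoder(self.token_embedding(bio_ids))
        tweet_vec = torch.cat([tweet_h[-2], tweet_h[-1]], dim=-1)  # (batch, 2*embed_dim)
        bio_vec = torch.cat([bio_h[-2], bio_h[-1]], dim=-1)
        behaviour_vec = self.behaviour_mlp(behaviour_feats)
        # Stack the modality vectors and compute attention weights over them.
        modalities = torch.stack([tweet_vec, bio_vec, behaviour_vec], dim=1)  # (batch, 3, 2*embed_dim)
        weights = torch.softmax(self.attention(modalities), dim=1)            # (batch, 3, 1)
        fused = (weights * modalities).sum(dim=1)                             # (batch, 2*embed_dim)
        return self.classifier(fused)

# Example usage with random inputs (batch of 2 users).
model = MultimodalEDClassifier()
logits = model(torch.randint(1, 30000, (2, 50)),  # tweet token ids
               torch.randint(1, 30000, (2, 20)),  # biography token ids
               torch.randn(2, 16))                # behaviour features
print(logits.shape)  # torch.Size([2, 4])
```

The attention weighting lets the model learn, per user, how much each modality (tweets, biography, behaviour) should contribute to the final class decision, which is one plausible reading of how an attention-based multimodal representation improves over single-modality baselines.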