Cosine similarity for multiplex network summarization. A. Polychronopoulou, Fang Zhou, Z. Obradovic. https://doi.org/10.1145/3487351.3488331

Many natural systems encountered across disciplines consist of a set of elementary units connected by relationships of different kinds. These complex systems are commonly described in terms of networks, where nodes represent the entities and links represent their interactions. As multiple types of distinct interactions are often observed, these systems are described as multiplex networks, where the different types of interactions between the nodes constitute the different layers of the network. The ever-increasing size of these networks introduces new computational challenges; it is therefore imperative to be able to eliminate the redundant or irrelevant edges of a network and create a summary that maintains the intrinsic properties of the original network with respect to the overall structure of the system. In this work, we present a summarization technique for multiplex networks designed to maintain the structural characteristics of such complex systems by utilizing the intrinsic multiplex structure of the network and taking into consideration the inter-connectivity of the various graph layers. We validate our approach on real-world systems from different domains and show that it allows for the creation of more compact summaries, with minimal change in structural evaluation measures, when compared to baseline methods that aggregate the contributions of multiple types of interactions.
CARE: learning convolutional attentional recurrent embedding for sequential recommendation. Yu-Che Tsai, Cheng-te Li. https://doi.org/10.1145/3487351.3489478

Top-N sequential recommendation aims to predict the next few items based on a user's sequential interactions with past items. This paper aims at boosting the performance of top-N sequential recommendation over a state-of-the-art model, Caser. We point out three shortcomings of Caser: it does not model variant-sized sequential patterns, it treats the impact of each past time step equally, and it cannot learn cumulative features. We then propose a novel Convolutional Attentional Recurrent Embedding (CARE) learning model. Experiments conducted on a large-scale user-location check-in dataset exhibit promising performance compared to Caser.
DiffuScope: inferring post-specific diffusion network. Md Rashidul Hasan, Dheeman Saha, Farhan Asif Chowdhury, J. Degnan, A. Mueen. https://doi.org/10.1145/3487351.3490967

A post-specific diffusion network elucidates the who-saw-from-whom paths of a post on social media. The diffusion network of a specific post can reveal trustworthy and/or incentivized connections among users. Unfortunately, such a network is not directly observable from the information social media platforms expose; hence an inference mechanism is needed. In this paper, we propose an algorithm to infer the diffusion network of a post by exploiting temporal, textual, and network modalities. The proposed algorithm identifies the most likely diffusion network using a conditional point process. The algorithm can scale to thousands of shares of a single post and can be implemented as a real-time analytical tool. We analyze the inferred diffusion networks and show discernible differences in information diffusion within various user groups (e.g., verified vs. unverified, conservative vs. liberal) and across local communities (political, entrepreneurial, etc.). We also discover differences in the inferred networks that reveal a disproportionate presence of automated bots, offering a potential way to measure the true impact of a post.
Predicting COVID-19 with AI techniques: current research and future directions. C. Comito, C. Pizzuti. https://doi.org/10.1145/3487351.3490958

Since the onset of the COVID-19 pandemic at the beginning of last year, Artificial Intelligence (AI) has played an important role in supporting physicians and health authorities in difficult tasks such as modeling virus spread, diagnosing and monitoring patients, and contact tracing. In this paper, we provide an overview of the AI-based methods proposed for COVID-19 forecasting. Summary statistics of the techniques adopted by researchers, categorized by the underlying AI sub-area, are reported, along with the publication venues of the papers. The effectiveness of these approaches is investigated, and their strengths and weaknesses in providing reliable predictions are discussed. Finally, future challenges are analyzed and research directions for improving current tools are suggested.
Limitations of link deletion for suppressing real information diffusion on social media. Shiori Furukawa, Sho Tsugawa. https://doi.org/10.1145/3487351.3488351

Although beneficial information abounds on social media, the dissemination of harmful information such as so-called "fake news" has become a serious issue. Many researchers have therefore devoted considerable effort to limiting the diffusion of harmful information. A promising approach is link deletion in social networks, which has been shown to be effective in reducing the size of information diffusion cascades generated by synthetic models on a given social network. In this study, we evaluate the effectiveness of link deletion methods on Twitter by using actual logs of retweet cascades rather than synthetic diffusion models. Our results show that even after deleting 50% of the links detected by the NetMelt method from a Twitter social network, tweet cascades are estimated to still reach 50% of their original size, which suggests that the effectiveness of the link deletion strategy for suppressing information diffusion on Twitter is limited. Moreover, our results show that a considerable number of cascades have many seed users, which renders link deletion methods inefficient.
Towards automatic generated content website based on content classification and auto-article generation. I. Ting, Chia-Sung Yen. https://doi.org/10.1145/3487351.3488414

In recent years, social media has become a battlefield, not only for online marketing but also for politics. On Facebook, for example, online advertising is now the company's main source of revenue, and the central idea is to attract users to particular fan pages and thereby create traffic flow. "Flow is king" is now an important principle for anyone who wants to run a business online. In this paper, we therefore develop a website based on the concept of auto-article generation (AAG), which gathers useful information and news from other resources on the WWW. The techniques used to build the AAG website include a web crawler, cloud storage and computing, and content classification. The main idea is to attract users to visit the website and thereby generate website traffic.
SCATE: shared cross attention transformer encoders for multimodal fake news detection. Tanmay Sachan, Nikhil Pinnaparaju, Manish Gupta, Vasudeva Varma. https://doi.org/10.1145/3487351.3490965

Social media platforms have democratized the publication process, resulting in the easy and viral propagation of information, including misinformation. Oftentimes this misinformation is accompanied by misleading or doctored images that quickly circulate across the internet and reach many unsuspecting users. Several manual as well as automated efforts have been undertaken to address this critical problem. While manual efforts cannot keep up with the rate at which this content is churned out, many automated approaches only concatenate the image and text representations and thereby fail to build effective crossmodal embeddings. Such architectures fail in many cases because neither the text nor the image needs to be false on its own for the (text, image) pair to constitute misinformation. While some recent work uses attention techniques to compute a crossmodal representation from pretrained text and image embeddings, we present a more effective approach to utilizing such pretrained embeddings to build richer representations that can be classified better. This involves several challenges, such as handling text variations on Twitter and Weibo, encoding the image information, and leveraging the text and image encodings together effectively. Our architecture, SCATE (Shared Cross Attention Transformer Encoders), leverages deep convolutional neural networks and transformer-based methods to encode image and text information, utilizing crossmodal attention and layers shared by the two modalities. Our experiments on three popular benchmark datasets (Twitter, WeiboA, and WeiboB) show that the proposed methods outperform the state of the art by approximately three percentage points on all three datasets.
GPT-2C: a parser for honeypot logs using large pre-trained language models. Febrian Setianto, Erion Tsani, Fatima Sadiq, Georgios Domalis, Dimitris Tsakalidis, Panos Kostakos. https://doi.org/10.1145/3487351.3492723

Deception technologies like honeypots generate large volumes of log data, which include the illegal Unix shell commands used by latent intruders. Several prior works have reported promising results in overcoming the weaknesses of network-level and program-level Intrusion Detection Systems (IDSs) by fusing network traffic with data from honeypots. However, because honeypots lack the plug-in infrastructure to enable real-time parsing of log outputs, it remains technically challenging to feed illegal Unix commands into downstream predictive analytics. As a result, advances in honeypot-based user-level IDSs remain greatly hindered. This article presents a run-time system (GPT-2C) that leverages a large pre-trained language model (GPT-2) to parse the dynamic logs generated by a live Cowrie SSH honeypot instance. After fine-tuning the GPT-2 model on an existing corpus of illegal Unix commands, the model achieved 89% inference accuracy in parsing Unix commands with acceptable execution latency.
Social influence under improved multi-objective metaheuristics. Fabián Riquelme, Francisco Muñoz, Rodrigo Olivares. https://doi.org/10.1145/3487351.3488558

The influence maximization problem (IMP) and the least cost influence problem (LCI) are two relevant and widely studied problems in social network analysis. The first consists of maximizing the influence spread in a social network starting from a seed set of actors of a given size; the second consists of minimizing the size of the seed set needed to reach a given number of influenced nodes. Recently, both problems have been studied together with a multi-objective metaheuristic approach. In this work, diffusion filter restrictions based on the network topology are proposed to reduce the search space and thus improve the convergence speed of the solutions, increasing the quality of the results. The Linear Threshold model is used as the influence spread model. The solution is tested on three social networks of different sizes, showing promising improvements on harder instances.
HawkEye. Rohit Mujumdar, Srijan Kumar. https://doi.org/10.1145/3487351.3488343

Twitter's Birdwatch is a new community-driven misinformation detection platform where users provide notes to label tweet accuracy, and rate the "helpfulness" of other users' notes. This work investigates the robustness of Birdwatch against adversaries injecting fake ratings and shows that the current Birdwatch system is vulnerable to adversarial attacks. To overcome this vulnerability, we develop HawkEye, a cold-start-aware graph-based recursive algorithm, and show that HawkEye is more robust against adversarial manipulation and outperforms Birdwatch in identifying accurate and misleading tweets. Code and data are available at https://github.com/srijankr/hawkeye.