Pub Date : 2024-03-19 DOI: 10.1109/TCSS.2024.3354508
Ming Gu;Tian-Fang Zhao;Liang Yang;Xiao-Kun Wu;Wei-Neng Chen
The formation of information cocoons, driven by limited disclosure and individual preferences, has resulted in the polarization of society. However, the underlying mechanisms and the pathways to escape these cocoons remain unresolved. This article addresses this problem by developing an adaptive imitation process. In this process, information cocoons across the population are measured using Shannon's information entropy, taking neighborhood information into account. The Dirac function is incorporated to formulate the information distribution over networks, and the theoretical results are validated by numerical simulation experiments. Results show that individual backgrounds and preferences are crucial factors in the formation of information cocoons, and that the severity of information cocoons increases with an individual's capacity to stick to their own views. Encouraging connections among diverse communities can effectively mitigate the intensity of information cocoons. This research contributes to the advancement of computational communication systems and offers insights toward dismantling informational boundaries.
{"title":"Modeling Information Cocoons in Networked Populations: Insights From Backgrounds and Preferences","authors":"Ming Gu;Tian-Fang Zhao;Liang Yang;Xiao-Kun Wu;Wei-Neng Chen","doi":"10.1109/TCSS.2024.3354508","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3354508","url":null,"abstract":"The formation of information cocoons, driven by limited disclosure and individual preferences, has resulted in the polarization of society. However, the underlying mechanisms and pathways to escape these cocoons remain unresolved. This article aims to solve it by developing an adaptive imitation process. In this process, the measurement of information cocoons across the population is based on Shannon's information entropy, taking into account neighborhood information. Incorporating the Dirac function to formulate information distribution over networks, theoretical results are validated by numerical simulation experiments. Results show that individual backgrounds and preferences are crucial factors in the formation of information cocoons, and the severity of information cocoon production increases with an individual capacity to stick to oneself. Encouraging connections among diverse communities can effectively mitigate the intensity of information cocoons. 
This research contributes to the advancement of computational communication systems and offers insights toward dismantling informational boundaries.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-03-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319676","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
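The entropy-based cocoon measurement described above can be illustrated with a small sketch. The article's exact formulation is not reproduced here, so the representation of neighborhood information as a list of topic labels is an assumption:

```python
import math
from collections import Counter

def neighborhood_entropy(neighbor_topics):
    """Shannon entropy (in bits) of the topic distribution seen in a node's
    neighborhood; lower entropy suggests a tighter information cocoon."""
    counts = Counter(neighbor_topics)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# A node whose neighbors all share one topic is maximally cocooned (entropy 0),
# while an even mix of two topics yields entropy 1 bit.
cocooned = neighborhood_entropy(["sports"] * 6)
diverse = neighborhood_entropy(["sports", "politics"] * 3)
```

Averaging such per-node scores over the population would give one plausible population-level cocoon intensity.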
Pub Date : 2024-03-18 DOI: 10.1109/TCSS.2024.3367231
Akshi Kumar;Abhishek Mallik;Sanjay Kumar
Temporal link prediction (TLP) is a prominent problem in network analysis that focuses on predicting the existence of future connections or relationships between entities in a dynamic network over time. The predictive capabilities of existing TLP models are often constrained by their difficulty in adapting to changes in dynamic network structures over time. In this article, an improved TLP model, denoted TLP-NEGCN, is introduced by leveraging network embedding, graph convolutional networks (GCNs), and bidirectional long short-term memory (BiLSTM). This integration provides a robust TLP model that leverages historical network structures and captures temporal dynamics, leading to improved performance. We employ graph embedding with self-clustering (GEMSEC) to create lower-dimensional vector representations for all nodes of the network at the initial timestamps. The node embeddings are fed into an iterative training process using GCNs across the timestamps in the dataset. This process enhances the node embeddings by capturing the network's temporal dynamics and integrating neighborhood information. We obtain edge embeddings by concatenating the node embeddings of the end nodes of each edge, encapsulating information about the relationships between nodes in the network. Subsequently, these edge embeddings are processed through a BiLSTM architecture to forecast upcoming links in the network. The performance of the proposed model is compared against several baseline and contemporary TLP models on various real-life temporal datasets. The results, based on various evaluation metrics, demonstrate the superiority of the proposed work.
{"title":"TLP-NEGCN: Temporal Link Prediction via Network Embedding and Graph Convolutional Networks","authors":"Akshi Kumar;Abhishek Mallik;Sanjay Kumar","doi":"10.1109/TCSS.2024.3367231","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3367231","url":null,"abstract":"Temporal link prediction (TLP) is a prominent problem in network analysis that focuses on predicting the existence of future connections or relationships between entities in a dynamic network over time. The predictive capabilities of existing models of TLP are often constrained due to their difficulty in adapting to the changes in dynamic network structures over time. In this article, an improved TLP model, denoted as TLP-NEGCN, is introduced by leveraging network embedding, graph convolutional networks (GCNs), and bidirectional long short-term memory (BiLSTM). This integration provides a robust model of TLP that leverages historical network structures and captures temporal dynamics leading to improved performances. We employ graph embedding with self-clustering (GEMSEC) to create lower dimensional vector representations for all nodes of the network at the initial timestamps. The node embeddings are fed into an iterative training process using GCNs across timestamps in the dataset. This process enhances the node embeddings by capturing the network's temporal dynamics and integrating neighborhood information. We obtain edge embeddings by concatenating the node embeddings of the end nodes of each edge, encapsulating the information about the relationships between nodes in the network. Subsequently, these edge embeddings are processed through a BiLSTM architecture to forecast upcoming links in the network. The performance of the proposed model is compared against several baselines and contemporary TLP models on various real-life temporal datasets. 
The obtained results based on various evaluation metrics demonstrate the superiority of the proposed work.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319703","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
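The edge-embedding step, concatenating the embeddings of an edge's two end nodes, is straightforward to sketch. The 2-D vectors below are placeholders, not GEMSEC or GCN output:

```python
import numpy as np

def edge_embeddings(node_emb, edges):
    """Build an edge representation by concatenating the embeddings of the
    two end nodes of each edge (dimensions here are illustrative)."""
    return np.stack([np.concatenate([node_emb[u], node_emb[v]]) for u, v in edges])

# Toy 2-D node embeddings standing in for the learned representations.
node_emb = {0: np.array([0.1, 0.2]), 1: np.array([0.3, 0.4]), 2: np.array([0.5, 0.6])}
E = edge_embeddings(node_emb, [(0, 1), (1, 2)])
# E has shape (2, 4): one 4-dimensional vector per edge, ready for a sequence model
```

A sequence of such per-timestamp edge matrices is what the BiLSTM stage would consume.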
Pub Date : 2024-03-18 DOI: 10.1109/TCSS.2024.3372519
Shanyan Lai;Junfang Wu;Chunyang Ye;Zhiwei Ma
Text classification techniques have demonstrated great promise for detecting consumer fraud from consumer reviews. However, persistent challenges remain in handling the large number of samples near class boundaries and in identifying unforeseen fraud behaviors. To address these challenges, we propose a novel approach that combines a channel biattention convolutional neural network (CNN) with a pretrained language model. Specifically, we propose a similarity computation module that implicitly learns a metric matrix to characterize the similarity between prior knowledge and consumer reviews in vector space. Through this process, the model learns the relationship between prior knowledge and corresponding samples during training, thereby improving its ability to identify unforeseen fraudulent behaviors. Additionally, we propose a channel biattention CNN module that adaptively emphasizes the importance of relevant prior knowledge, enhancing the model's ability to accurately classify boundary samples. To ensure effective model training, we expand and organize a real-world dataset, reducing noise and increasing the number of fraud samples available for analysis. Experimental results demonstrate that our approach achieves state-of-the-art performance in fraud detection. Notably, our model can detect unforeseen fraud cases without retraining or fine-tuning, making it highly adaptable and efficient in practical applications.
{"title":"UCF-PKS: Unforeseen Consumer Fraud Detection With Prior Knowledge and Semantic Features","authors":"Shanyan Lai;Junfang Wu;Chunyang Ye;Zhiwei Ma","doi":"10.1109/TCSS.2024.3372519","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3372519","url":null,"abstract":"The utilization of text classification techniques has demonstrated great promise in the field of detecting consumer fraud based on consumer reviews. However, persistent challenges remain in handling large samples at the borders and identifying unforeseen fraud behaviors. To address these challenges, we propose a novel approach that combines a channel biattention convolutional neural network (CNN) with a pretrained language model. Specifically, we propose a similarity computation module for implicitly learning a metric matrix to characterize the similarity between prior knowledge and consumer reviews in vector space. Through this process, the model is able to learn and understand the relationship between prior knowledge and corresponding samples during training, thereby improving its ability to identify unforeseen fraudulent behaviors. Additionally, we propose a channel biattention CNN module to adaptively emphasize the importance of relevant prior knowledge to enhance the model's ability to accurately classify boundary samples. To ensure effective model training, we expand and organize a real-world dataset, reducing noise and increasing the number of fraud samples available for analysis. Experimental results demonstrate that our approach achieves state-of-the-art performance in fraud detection. 
Notably, our model is capable of detecting unforeseen fraud cases without the need for retraining or fine-tuning, making it highly adaptable and efficient in practical applications.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":4.5,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141994015","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
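One common way to "implicitly learn a metric matrix" for similarity in vector space is a bilinear form; the paper's exact parameterization is not given in the abstract, so the following is only an assumed sketch:

```python
import numpy as np

def bilinear_similarity(x, y, M):
    """Bilinear similarity sim(x, y) = x^T M y between a prior-knowledge
    vector x and a review vector y; M plays the role of the learned metric
    matrix (the paper's actual parameterization may differ)."""
    return float(x @ M @ y)

x = np.array([1.0, 0.0])   # stand-in prior-knowledge embedding
y = np.array([0.0, 1.0])   # stand-in review embedding
M = np.eye(2)              # identity metric reduces to a plain dot product
orthogonal = bilinear_similarity(x, y, M)   # unrelated vectors score 0
matched = bilinear_similarity(x, x, M)      # identical unit vectors score 1
```

During training, M would be a learnable parameter optimized jointly with the encoder.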
Pub Date : 2024-03-18 DOI: 10.1109/TCSS.2024.3363843
Nan Wang;Dan Liu;Jin Zeng;Lijin Mu;Jinbao Li
Recommendation systems have shifted from personalization for individual users to consensus for groups, as people increasingly join groups to participate in everyday activities such as family meals and workplace reunions, a tendency that social networks have made easier. Group recommendation is the process of suggesting items to groups. To derive group preferences, the majority of current approaches combine the individual preferences of group members using heuristic or attention-based techniques. These approaches, however, have three issues. First, they model only the preferences of individual groups of users, ignoring the complex high-order interactions that occur both inside and outside of groups. Second, a group's final decision is not always determined by its members' individual preferences, yet current approaches cannot adequately represent such group-level preferences. Third, group recommendation suffers from data sparsity due to the sparsity of group–item interactions. To overcome these constraints, we propose employing hypergraph convolutional networks for group recommendation. Specifically, our design derives group preferences from three views: a high-order preference extraction view represented by the hypergraph, a consistent preference extraction view represented by the overlap graph, and a conventional preference extraction view represented by the bipartite graph. Links between the three views are then established using cross-view contrastive learning, so that information from the different views can complement and improve each other. Comprehensive experiments on three publicly available datasets show that our method performs better than the state-of-the-art baselines.
{"title":"HGRec: Group Recommendation With Hypergraph Convolutional Networks","authors":"Nan Wang;Dan Liu;Jin Zeng;Lijin Mu;Jinbao Li","doi":"10.1109/TCSS.2024.3363843","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3363843","url":null,"abstract":"Recommendation systems have shifted from personalization for individual users to consensus for groups as a result of people's growing tendency to join groups to participate in various everyday activities, like family meals and workplace reunions. This is because social networks have made it easier for people to participate in these kinds of events. Group recommendation is the process of suggesting items to groups. To derive group preferences, the majority of current approaches combine the individual preferences of group members utilizing heuristic or attention mechanism-based techniques. These approaches, however, have three issues. First, these approaches ignore the complex high-order interactions that occur both inside and outside of groups, just modeling the preferences of individual groups of users. Second, a group's ultimate decision is not always determined by the members’ preferences. Nevertheless, current approaches are not adequate to represent such preferences across groups. Last, data sparsity affects group recommendations due to the sparsity of group–item interactions. To overcome the aforementioned constraints, we propose employing hypergraph convolutional networks for group recommendation. Specifically, our design aims to achieve excellent group preferences by establishing a high-order preference extraction view represented by the hypergraph, a consistent preference extraction view represented by the overlap graph, and a conventional preference extraction view represented by the bipartite graph. The linkages between the three various views are then established by using cross-view contrastive learning, and the information between different views can be complementary, thereby improving each other. 
Comprehensive experiments on three publicly available datasets show that our method performs better than the state-of-the-art baseline.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319588","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
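A minimal, unweighted version of hypergraph propagation illustrates how a group hyperedge pools its members' features; learnable weights and the paper's normalization details are omitted, and the group membership below is illustrative:

```python
import numpy as np

def hypergraph_conv(X, H):
    """One simplified hypergraph propagation step:
    X' = Dv^{-1} H De^{-1} H^T X,
    where H is the node-by-hyperedge incidence matrix, Dv holds node
    degrees, and De holds hyperedge sizes."""
    Dv = H.sum(axis=1)   # how many hyperedges each node belongs to
    De = H.sum(axis=0)   # how many nodes each hyperedge contains
    return (H / Dv[:, None]) @ ((H.T / De[:, None]) @ X)

# Users 0 and 1 share a group hyperedge; user 2 sits in a hyperedge alone.
H = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
X = np.array([[2.0], [4.0], [9.0]])
out = hypergraph_conv(X, H)   # members of the shared group average to 3.0
```

Stacking such steps with trainable transformations gives the high-order view the abstract describes.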
Pub Date : 2024-03-14 DOI: 10.1109/TCSS.2024.3392288
Yingxu Qiao;Xing Wang;Hongmin Liu;Zhanqiang Huo
Learning-based dehazing methods trained on synthetic datasets generalize poorly to real-world hazy images due to the large domain discrepancy. To tackle this issue, we propose a robust synthetic-to-real dehazing framework built on an intermediate domain and an ensemble learning strategy. First, by mapping all examples to the intermediate domain, a bidirectional match strategy with adversarial training and a constraint on intermediate results is proposed to suppress rich domain-specific information, which facilitates adaptation and performs image dehazing simultaneously. Furthermore, an ensemble dehazing algorithm based on the intermediate domain is proposed in a semisupervised manner. A reconstruction constraint and enhanced ground truths are employed to preserve visual fidelity and remove the dim artifacts of unsupervised dehazing results. Finally, we propose domain-aware residual groups to deal with the distribution discrepancy between synthetic and real hazy images. Extensive experiments on various real-world hazy images demonstrate that the proposed method outperforms state-of-the-art dehazing methods and significantly improves generalization in the real world.
{"title":"Robust Synthetic-to-Real Ensemble Dehazing Algorithm With the Intermediate Domain","authors":"Yingxu Qiao;Xing Wang;Hongmin Liu;Zhanqiang Huo","doi":"10.1109/TCSS.2024.3392288","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3392288","url":null,"abstract":"Learning-based dehazing methods using synthetic datasets cannot generalize well on real-world hazy images due to the large domain discrepancy. To tackle this issue, we propose a robust synthetic-to-real dehazing framework with the construction of an intermediate domain and ensemble learning strategy. First, by mapping all examples to the intermediate domain, the bidirectional match strategy with adversarial training and the constraint of intermediated results is proposed to suppress the rich domain-specific information, which can facilitate the adaptation and perform image dehazing simultaneously. Furthermore, an ensemble dehazing algorithm based on the intermediate domain is proposed in a semisupervised manner. The reconstruction constraint and the enhanced ground-truths are employed to keep the visual fidelity and remove the dim artifacts of unsupervised dehazing results. Finally, we propose the domain-aware residual groups to deal with the distribution discrepancy between the synthetic and real hazy images. 
Extensive experiments of various real-world hazy images demonstrate that the proposed method outperforms the state-of-the-art dehazing methods and significantly improves the generalization in the real world.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":4.5,"publicationDate":"2024-03-14","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141993922","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
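For context, the synthetic domain such frameworks adapt from is typically rendered with the standard atmospheric scattering model; the values below are illustrative, and this sketch is background rather than the paper's own method:

```python
import numpy as np

def synthesize_haze(J, t, A):
    """Standard atmospheric scattering model used to render synthetic hazy
    training images: I = J * t + A * (1 - t), with clean image J,
    transmission map t, and global atmospheric light A."""
    return J * t + A * (1.0 - t)

J = np.full((2, 2), 0.5)    # clean image patch (normalized intensities)
t = np.full((2, 2), 0.25)   # low transmission corresponds to dense haze
I = synthesize_haze(J, t, A=1.0)
# every pixel becomes 0.5 * 0.25 + 1.0 * 0.75 = 0.875 (washed out toward A)
```

Real haze deviates from this idealized model, which is one source of the synthetic-to-real gap the paper targets.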
Pub Date : 2024-03-13 DOI: 10.1109/TCSS.2024.3370669
Hongjie Jia;Yuhao Wu;Qirong Mao;Yang Li;Heping Song
Density peak clustering (DPC) has garnered growing interest over recent decades due to its capability to identify clusters with diverse shapes and its resilience to the presence of noisy data. Most DPC-based methods exhibit high computational complexity. One approach to mitigate this issue involves utilizing density subgraphs. Nevertheless, the utilization of density subgraphs may impose restrictions on cluster sizes and potentially lead to an excessive number of small clusters. Furthermore, effectively handling these small clusters, whether through merging or separation, to derive accurate results poses a significant challenge, particularly in scenarios where the number of clusters is unknown. To address these challenges, we propose an adaptive density subgraph clustering algorithm (ADSC). ADSC follows a systematic three-step procedure. First, the high-density regions in the dataset are recognized as density subgraphs based on k-nearest neighbor (KNN) density. Second, the initial clustering is carried out by utilizing an automated mechanism to identify the important density subgraphs and allocate outliers. Last, the obtained initial clustering results are further refined in an adaptive manner using the cluster self-ensemble technique, ultimately yielding the final clustering outcomes. The clustering performance of the proposed ADSC algorithm is evaluated on nineteen benchmark datasets. The experimental results demonstrate that ADSC possesses the ability to automatically determine the optimal number of clusters from intricate density data, all while maintaining high clustering efficiency. Comparative analysis against other well-known density clustering algorithms that require prior knowledge of cluster numbers reveals that ADSC consistently achieves comparable or superior clustering results.
{"title":"Adaptive Density Subgraph Clustering","authors":"Hongjie Jia;Yuhao Wu;Qirong Mao;Yang Li;Heping Song","doi":"10.1109/TCSS.2024.3370669","DOIUrl":"https://doi.org/10.1109/TCSS.2024.3370669","url":null,"abstract":"Density peak clustering (DPC) has garnered growing interest over recent decades due to its capability to identify clusters with diverse shapes and its resilience to the presence of noisy data. Most DPC-based methods exhibit high computational complexity. One approach to mitigate this issue involves utilizing density subgraphs. Nevertheless, the utilization of density subgraphs may impose restrictions on cluster sizes and potentially lead to an excessive number of small clusters. Furthermore, effectively handling these small clusters, whether through merging or separation, to derive accurate results poses a significant challenge, particularly in scenarios where the number of clusters is unknown. To address these challenges, we propose an adaptive density subgraph clustering algorithm (ADSC). ADSC follows a systematic three-step procedure. First, the high-density regions in the dataset are recognized as density subgraphs based on k-nearest neighbor (KNN) density. Second, the initial clustering is carried out by utilizing an automated mechanism to identify the important density subgraphs and allocate outliers. Last, the obtained initial clustering results are further refined in an adaptive manner using the cluster self-ensemble technique, ultimately yielding the final clustering outcomes. The clustering performance of the proposed ADSC algorithm is evaluated on nineteen benchmark datasets. The experimental results demonstrate that ADSC possesses the ability to automatically determine the optimal number of clusters from intricate density data, all while maintaining high clustering efficiency. 
Comparative analysis against other well-known density clustering algorithms that require prior knowledge of cluster numbers reveals that ADSC consistently achieves comparable or superior clustering results.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":4.5,"publicationDate":"2024-03-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141993910","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
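A KNN density of the kind used in ADSC's first step can be sketched as follows; the inverse-mean-distance definition is one common choice, and the article's exact formula may differ:

```python
import numpy as np

def knn_density(points, k):
    """KNN density estimate per point: the inverse of the mean Euclidean
    distance to the k nearest neighbors (self-distance excluded)."""
    P = np.asarray(points, dtype=float)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    D.sort(axis=1)                      # column 0 is the zero self-distance
    return k / D[:, 1:k + 1].sum(axis=1)

pts = [(0, 0), (0, 1), (1, 0), (10, 10)]
rho = knn_density(pts, k=2)
# the tight trio gets high density; the isolated point (10, 10) gets the lowest
```

Thresholding such densities would carve out the high-density regions that become density subgraphs.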
During any natural disaster or unfortunate accident, both civilians and responders need information urgently. In such events, microblogging sites, particularly Twitter, play an important role in providing real-time information. The raw stream of microblog tweets is prodigiously informative but massive in size: end-users and data analysts have to go through millions of tweets before extracting any information. To ease this process and extract only relevant information, artificial intelligence (AI)-based techniques can be incorporated to generate summaries from the incoming information. Moreover, tweets keep arriving continuously in a streaming manner, so ideally the summaries also need to be updated continuously. In this work, we propose a clustering-based summary generation approach that takes multiviewed representations of data and utilizes a new variant of generative adversarial network (GAN), named triple-GAN, to perform clustering. Triple-GAN consists of three networks: a generator, a discriminator, and a separator. Maintaining equilibrium among these networks requires proper parameter tuning, which makes training the GAN difficult. In the literature, GAN-based techniques have been applied extensively to image datasets. In the proposed method, we explore the use of GANs for text data in an unsupervised manner and also report an analysis of the GAN training. The developed method opens up a new direction in utilizing GANs to solve the clustering problem for text data. The proposed method is applied to two versions of four disaster-based microblog datasets, and the obtained results are compared with many existing methods and a few baseline methods. The comparative study illustrates the superiority and efficacy of the developed method.
{"title":"Online Summarization of Microblog Data: An Aid in Handling Disaster Situations","authors":"Dipanjyoti Paul;Shivani Rana;Sriparna Saha;Jimson Mathew","doi":"10.1109/TCSS.2023.3347520","DOIUrl":"https://doi.org/10.1109/TCSS.2023.3347520","url":null,"abstract":"During any natural disaster or unfortunate accident, both civilians and responders need information on an urgent basis. In such events, microblogging sites particularly Twitter plays an important role in providing real-time information. The raw form of microblog tweets is prodigiously informative but massive in size. The end-users and data analysts have to go through millions of tweets before extraction of any information. To ease the process and extract only relevant information, artificial intelligence (AI)-based techniques can be incorporated to generate summaries from the incoming information. Moreover, tweets keep on arriving continuously in a streaming manner, and therefore in ideal cases, the summaries also need to be updated continuously. In this work, we have proposed a clustering-based summary generation approach that takes multiviewed representations of data and utilizes a new variant of generative adversarial network (GAN) named triple-GAN to perform clustering. Triple-GAN consists of three networks, a generator, a discriminator, and a separator. Maintaining equilibrium among these networks requires proper parameter tuning which makes training of GAN difficult. In the literature, GAN-based techniques have been extensively applied to image datasets. In the proposed method, we have explored the usage of GAN for text data in an unsupervised manner and the analysis of the training of GAN has also been reported. The developed method opens up a new direction in utilizing GAN for solving clustering problem of text data. The proposed method is applied to two versions of four disaster-based microblog datasets and obtained results are compared with many existing and a few baseline methods. 
The comparative study illustrates the superiority and efficacy of the developed method.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141319656","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
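The triple-GAN clustering pipeline itself is too involved to reproduce here, but the downstream summary step, picking a representative tweet per cluster, can be sketched generically; the vectors and cluster labels below are placeholders, not the paper's actual procedure:

```python
import numpy as np

def cluster_representatives(X, labels):
    """Pick one representative item per cluster: the member closest to the
    cluster centroid. A simple stand-in for a summary-selection step once
    tweet vectors X and cluster labels are available."""
    reps = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        centroid = X[idx].mean(axis=0)
        reps[int(c)] = int(idx[np.argmin(np.linalg.norm(X[idx] - centroid, axis=1))])
    return reps

X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0]])   # toy tweet vectors
labels = np.array([0, 0, 1])                          # toy cluster assignments
reps = cluster_representatives(X, labels)
```

In a streaming setting, re-running this selection as clusters absorb new tweets keeps the summary up to date.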
Pub Date : 2024-03-12 DOI: 10.1109/TCSS.2024.3367385
Jinhu Ren;Fuzhong Nian;Xiaochen Yang
Modern social media networks have become an important platform for information competition among countries, regions, companies, and other parties. This article uses the methods of spreading dynamics to investigate how the controlling role of announcements in social networks influences the spreading process. Using the authentication intervention as a boundary, it distinguishes two spreading phases: the unconfirmed spreading phase and the confirmed spreading phase. Based on the actual rules of spreading in online social networks, two kinds of verification results are defined: true information and false information. A two-stage information-spreading dynamics model is developed to analyze how the spreading effects change under the different validation results. The impact of the intervention time on the overall spreading process is analyzed in combination with important control factors such as response cost and time sensitivity. The validity of the model is verified by comparing simulation results with real cases and through adaptive-capacity experiments. The work is analyzed and visualized from multiple perspectives, providing quantitative results. This research provides a scientific basis for intervention by relevant departments and authorities in information management and control.
{"title":"Two-Stage Information Spreading Evolution on the Control Role of Announcements","authors":"Jinhu Ren;Fuzhong Nian;Xiaochen Yang","doi":"10.1109/TCSS.2024.3367385","DOIUrl":"10.1109/TCSS.2024.3367385","url":null,"abstract":"Modern social media networks have become an important platform for information competition among countries, regions, companies, and other parties. This article utilizes the research method of spread dynamics to investigate the influence of the control role of announcements in social networks on the spreading process. This article distinguishes two spreading phases using the authentication intervention as a boundary: the unconfirmed spreading phase and the confirmed spreading phase. Based on the actual rules of spreading in online social networks, two kinds of verification results are defined: true information and false information. The two-stage information spreading dynamics model is developed to analyze the changes in spreading effects due to different validation results. The impact of the intervention time on the overall spread process is analyzed by combining important control factors such as response cost and time sensitivity. The validity of the model is verified by comparing the model simulation results with real cases and the adaptive capacity experiments. This work is analyzed and visualized from multiple perspectives, providing more quantitative results. 
The research content will provide a scientific basis for the intervention behavior of information management control by relevant departments or authorities.","PeriodicalId":13044,"journal":{"name":"IEEE Transactions on Computational Social Systems","volume":null,"pages":null},"PeriodicalIF":5.0,"publicationDate":"2024-03-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"140428970","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"计算机科学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
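The two-phase idea, spreading whose rate changes once the announcement verifies the information, can be sketched with a simple logistic update; the parameter values are illustrative and not taken from the article:

```python
def two_stage_spread(beta_before, beta_after, t_verify, steps, s0=0.01):
    """Discrete-time logistic spreading whose rate switches at the
    verification (announcement) time, mimicking the model's two phases:
    unconfirmed spreading before t_verify, confirmed spreading after."""
    s = s0                  # fraction of the population spreading the item
    history = [s]
    for t in range(steps):
        beta = beta_before if t < t_verify else beta_after
        s = s + beta * s * (1.0 - s)
        history.append(s)
    return history

# Debunked-as-false case: spreading slows sharply after the announcement.
h = two_stage_spread(beta_before=0.8, beta_after=0.05, t_verify=5, steps=20)
```

Sweeping `t_verify` in such a toy model shows the intuition the article quantifies: the later the intervention, the larger the reach before the rate change takes effect.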
Pub Date : 2024-03-12 DOI: 10.1109/TCSS.2024.3352979
Yuling Liu;Cuilin Wang;Jie Wang;Bo Ou;Xin Liao
Generative text steganography has received considerable attention in the covert communication community because it allows secret messages to be sent without modifying carriers. Existing methods typically choose the next word when generating a stego-text based on conditional-probability encoding of candidates, which may lead to generating words inadequate for the underlying secret message. Generating a semantically controllable stego-text with a high capacity for secure embedding of a secret message is a main challenge. We address this challenge by proposing a new paradigm for generative text steganography that takes advantage of certain social media through apparently normal behaviors from the sender. In particular, we make use of the live commenting feature provided by public video sharing platforms (PVSPs), which allows viewers to make comments on video scenes that fly across the screen when the scenes are shown. We show that this feature can be used to construct a generative steganographic system. The sender generates at random a number of distracting words and a certain invertible matrix called W- $d$