2025 Index IEEE Transactions on Computational Social Systems Vol. 12
Pub Date: 2026-01-09 · DOI: 10.1109/TCSS.2026.3652476
IEEE Transactions on Computational Social Systems, vol. 12, no. 6, pp. 1-98
IEEE Transactions on Computational Social Systems Information for Authors
Pub Date: 2025-12-02 · DOI: 10.1109/TCSS.2025.3632585
IEEE Transactions on Computational Social Systems, vol. 12, no. 6, pp. C4-C4
IEEE Systems, Man, and Cybernetics Society Information
Pub Date: 2025-12-02 · DOI: 10.1109/TCSS.2025.3632503
IEEE Transactions on Computational Social Systems, vol. 12, no. 6, pp. C3-C3
MLFormer: Unleashing Efficiency Without Attention for Multimodal Knowledge Graph Embedding
Meng Wang; Changyu Li; Feiyu Chen; Jie Shao; Ke Qin; Shuang Liang
Pub Date: 2025-11-03 · DOI: 10.1109/TCSS.2025.3620089
IEEE Transactions on Computational Social Systems, vol. 12, no. 6, pp. 5536-5549
Multimodal knowledge graphs (MMKGs) have gained widespread adoption across various domains. However, existing transformer-based methods for MMKG representation learning primarily focus on improving representation quality while overlooking time and memory costs, which reduces model efficiency. To address these limitations, we introduce a multimodal lightweight transformer (MLFormer) model that not only retains strong representation capabilities but also considerably improves computational efficiency. We find that the self-attention mechanism in transformers incurs substantial time and memory overhead. We therefore optimize the traditional multimodal knowledge graph embedding (MMKGE) model in two aspects, modality processing and modality fusion, by incorporating a filter gate and the Fourier transform. Experimental results on real-world multimodal knowledge graph completion datasets demonstrate that MLFormer achieves significant improvements in computational efficiency while maintaining competitive performance.
IEEE Transactions on Computational Social Systems Information for Authors
Pub Date: 2025-10-06 · DOI: 10.1109/TCSS.2025.3608423
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. C4-C4
IEEE Transactions on Computational Social Systems Publication Information
Pub Date: 2025-10-06 · DOI: 10.1109/TCSS.2025.3608419
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. C2-C2
Guest Editorial: Special Issue on Trends in Social Multimedia Computing: Models, Methodologies, and Applications
Amit Kumar Singh; Jungong Han; Stefano Berretti
Pub Date: 2025-10-06 · DOI: 10.1109/TCSS.2025.3606570
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3747-3750
IEEE Systems, Man, and Cybernetics Society Information
Pub Date: 2025-10-06 · DOI: 10.1109/TCSS.2025.3608421
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. C3-C3
Unsupervised Video Summarization Based on Spatiotemporal Semantic Graph and Enhanced Attention Mechanism
Xin Cheng; Lei Yang; Rui Li
Pub Date: 2025-07-10 · DOI: 10.1109/TCSS.2025.3579570
IEEE Transactions on Computational Social Systems, vol. 12, no. 5, pp. 3751-3764
Among unsupervised approaches, generative adversarial networks (GANs) have shown potential for improving keyframe selection and video reconstruction through adversarial training. Nevertheless, GANs struggle to capture the intricate spatiotemporal dynamics of videos, which is essential for producing coherent and informative summaries. To address these challenges, we introduce an unsupervised video summarization framework that integrates temporal–spatial semantic graphs (TSSGraphs) with a bilinear additive attention (BAA) mechanism. TSSGraphs model temporal and spatial relationships among video frames by combining temporal convolution with dynamic edge convolution, extracting salient features while limiting model complexity. The BAA mechanism improves the framework's ability to capture critical motion information by addressing feature sparsity and eliminating redundant parameters, ensuring robust attention to significant motion dynamics. Experiments on the SumMe and TVSum benchmark datasets show that our method improves the F-score by up to 4.0% and 3.3%, respectively, over current methods. Moreover, the framework requires fewer parameters during both training and inference and is particularly effective on videos with substantial motion.