The assessment of the impact of scholarly publications has garnered significant attention among researchers, particularly the prediction of future citation-count sequences. However, current studies predominantly regard academic papers as static entities, overlooking that the focus of a paper's content can shift over time. To address this, we implement dynamic representations of the content to mirror chronological changes within a given paper, facilitating the sequential prediction of citation counts. Specifically, we propose a novel deep neural network called the DynamIc Content-aware TrAnsformer (DICTA). The proposed model incorporates a dynamic content module that leverages a sequential module to effectively capture the evolving focus information within each paper. To account for dependencies between historical and future citation counts, our model adopts a transformer-based framework as its backbone. With its encoder-decoder structure, it can effectively encode previous citation accumulations and then predict future citation potentials. Extensive experiments conducted on two scientific datasets demonstrate that DICTA achieves impressive performance and outperforms all baseline approaches. Further analyses underscore the significance of the dynamic content module. The code is available at https://github.com/ECNU-Text-Computing/DICTA
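The abstract describes an encoder-decoder transformer in which the decoder attends over encoded historical citation states to produce future citation predictions. DICTA's exact architecture is not specified here, so the following is only a minimal NumPy sketch of the underlying encoder-decoder cross-attention mechanism; all dimensions (`d_model`, the 10-step history, the 5-step forecast horizon) and variable names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(queries, keys, values):
    # scaled dot-product attention: decoder queries attend to encoder states
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)   # (horizon, hist_len)
    weights = softmax(scores, axis=-1)       # rows sum to 1
    return weights @ values                  # (horizon, d_model)

rng = np.random.default_rng(0)
d_model = 16                                  # hypothetical hidden size
hist_len, horizon = 10, 5                     # hypothetical: 10 observed steps, 5 predicted

# stand-ins for encoded historical citation (and content) states
encoder_states = rng.normal(size=(hist_len, d_model))
# stand-ins for the decoder's future-step query vectors
decoder_queries = rng.normal(size=(horizon, d_model))

context = cross_attention(decoder_queries, encoder_states, encoder_states)
print(context.shape)  # (5, 16): one context vector per future time step
```

In a full model, each context vector would be passed through a projection head to yield the citation count for its time step; the sketch only shows how future steps condition on the encoded history.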
