
Latest publications in Knowledge and Information Systems

Fake review detection techniques, issues, and future research directions: a literature review
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-17 | DOI: 10.1007/s10115-024-02118-2
Ramadhani Ally Duma, Zhendong Niu, Ally S. Nyamawe, Jude Tchaye-Kondi, Nuru Jingili, Abdulganiyu Abdu Yusuf, Augustino Faustino Deve

Recently, the impact of product or service reviews on customers' purchasing decisions has become increasingly significant in online businesses. Consequently, manipulating reviews for fame or profit has become prevalent, with some businesses resorting to paying fake reviewers to post spam reviews. Given the importance of reviews in decision-making, detecting fake reviews is crucial to ensure fair competition and sustainable e-business practices. Although significant efforts have been made in the last decade to distinguish credible reviews from fake ones, it remains challenging. Our literature review has identified several gaps in the existing research: (1) most fake review detection techniques have been proposed for high-resource languages such as English and Chinese, and few studies have investigated low-resource and multilingual fake review detection, (2) there is a lack of research on deceptive review detection for reviews based on language code-switching (code-mix), (3) current multi-feature integration techniques extract review representations independently, ignoring correlations between them, and (4) there is a lack of a consolidated model that can mutually learn from review emotion, coarse-grained (overall rating), and fine-grained (aspect ratings) features to supplement the problem of sentiment and overall rating inconsistency. In light of these gaps, this study aims to provide an in-depth literature analysis describing strengths and weaknesses, open issues, and future research directions.

Citations: 0
Misclassification-guided loss under the weighted cross-entropy loss framework
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-12 | DOI: 10.1007/s10115-024-02123-5
Yan-Xue Wu, Kai Du, Xian-Jie Wang, Fan Min

As deep neural networks for visual recognition gain momentum, many studies have modified the loss function to improve classification performance on long-tailed data. Typical and effective improvement strategies assign different weights to different classes or samples, yielding a series of cost-sensitive re-weighting cross-entropy losses. However, most of these strategies focus only on properties of the training data, such as the data distribution and the samples' distinguishability. This paper unifies these strategies into a weighted cross-entropy loss framework with a simple product form, \(\text{WCEL}_{prod}\), which takes the different features of different losses into account. A new loss function, the misclassification-guided loss (MGL), is also proposed: it generalizes the class-wise difficulty-balanced loss and uses the misclassification rate on validation data to update class weights during training. For MGL, a series of weighting functions with different relative preferences is introduced, and both softmax MGL and sigmoid MGL are derived to address multi-class and multi-label classification, respectively. Experiments are conducted on four public datasets (MNIST-LT, CIFAR-10-LT, CIFAR-100-LT, and ImageNet-LT) and on a self-built dataset of 4 main classes, 44 sub-classes, and 57,944 images in total. On the self-built dataset, the exponential weighting function achieves higher balanced accuracy than the polynomial one. Ablation studies further show that MGL performs better in combination with most other state-of-the-art loss functions under the \(\text{WCEL}_{prod}\) framework.
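The core idea of MGL, re-weighting the cross-entropy loss by per-class misclassification rates measured on validation data, can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' implementation; the exponential weighting function and its `gamma` parameter are assumptions standing in for the paper's family of weighting functions.

```python
import numpy as np

def class_weights_from_misclassification(y_true, y_pred, n_classes, gamma=1.0):
    """Weight each class by exp(gamma * validation error rate):
    harder (more often misclassified) classes get larger weights."""
    rates = np.zeros(n_classes)
    for c in range(n_classes):
        mask = (y_true == c)
        if mask.any():
            rates[c] = np.mean(y_pred[mask] != c)   # per-class error rate
    return np.exp(gamma * rates)

def weighted_cross_entropy(probs, y_true, weights):
    """Cross-entropy with per-class weights, averaged over the batch."""
    eps = 1e-12
    p_true = probs[np.arange(len(y_true)), y_true]
    return float(np.mean(weights[y_true] * -np.log(p_true + eps)))

# Toy validation set: class 0 is misclassified half the time,
# class 1 a third of the time, class 2 never.
y_val = np.array([0, 0, 1, 1, 1, 2])
y_hat = np.array([0, 1, 1, 1, 0, 2])
w = class_weights_from_misclassification(y_val, y_hat, n_classes=3)

probs = np.array([[0.7, 0.2, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.2, 0.2, 0.6]])
loss = weighted_cross_entropy(probs, np.array([0, 1, 2]), w)
```

In a training loop, the weights would be recomputed from validation predictions each epoch, so the loss continually shifts emphasis toward the classes currently being misclassified.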

Citations: 0
C22MP: the marriage of catch22 and the matrix profile creates a fast, efficient and interpretable anomaly detector
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-11 | DOI: 10.1007/s10115-024-02107-5
Sadaf Tafazoli, Yue Lu, Renjie Wu, Thirumalai Vinjamoor Akhil Srinivas, Hannah Dela Cruz, Ryan Mercer, Eamonn Keogh

Many time series data mining algorithms work by reasoning about the relationships between the conserved shapes of subsequences. To facilitate this, the Matrix Profile is a data structure that annotates a time series by recording each subsequence's Euclidean distance to its nearest neighbor. In recent years, the community has shown that, using the Matrix Profile, it is possible to discover many useful properties of a time series, including repeated behaviors (motifs), anomalies, evolving patterns, regimes, etc. However, the Matrix Profile is limited to representing the relationship between subsequence shapes. It is understood that, in some domains, useful information is conserved not in a subsequence's shape but in its features. In recent years, a new set of time series features called catch22 has revolutionized feature-based mining of time series. Combining these two ideas seems to offer many possibilities for novel data mining applications; however, there are two difficulties. A direct application of the Matrix Profile with the catch22 features would be prohibitively slow. Less obviously, as we will demonstrate, in almost all domains, using all twenty-two of the catch22 features produces poor results, and we must somehow select the subset appropriate for the domain. In this work, we introduce novel algorithms to solve both problems and demonstrate that, for most domains, the proposed C22MP is a state-of-the-art anomaly detector.
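The Matrix Profile annotation described above can, for small series, be computed with a brute-force sketch: for each z-normalized subsequence, record the Euclidean distance to its nearest non-trivial neighbor. The exclusion-zone width and the toy anomaly below are illustrative choices, not details from the paper; real implementations use fast algorithms such as STOMP rather than this O(n²) loop.

```python
import numpy as np

def znorm(x):
    s = x.std()
    return (x - x.mean()) / s if s > 0 else x - x.mean()

def matrix_profile(ts, m):
    """Brute-force Matrix Profile: distance from each z-normalized
    subsequence of length m to its nearest neighbor, excluding
    trivial self-matches closer than m // 2 positions."""
    n = len(ts) - m + 1
    subs = [znorm(ts[i:i + m]) for i in range(n)]
    mp = np.full(n, np.inf)
    excl = m // 2
    for i in range(n):
        for j in range(n):
            if abs(i - j) > excl:
                mp[i] = min(mp[i], np.linalg.norm(subs[i] - subs[j]))
    return mp

# A repeating sine with one injected spike: the spike's subsequences
# have no close match elsewhere, so the profile peaks near them.
rng = np.random.default_rng(0)
ts = np.sin(np.linspace(0, 20 * np.pi, 400)) + 0.05 * rng.standard_normal(400)
ts[200:210] += 3.0
mp = matrix_profile(ts, m=20)
anomaly_idx = int(np.argmax(mp))
```

The highest profile value marks the most anomalous subsequence (a "discord"), which is exactly the property C22MP builds on, with feature vectors replacing shapes.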

Citations: 0
An analysis of large language models: their impact and potential applications
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-11 | DOI: 10.1007/s10115-024-02120-8
G. Bharathi Mohan, R. Prasanna Kumar, P. Vishal Krishh, A. Keerthinathan, G. Lavanya, Meka Kavya Uma Meghana, Sheba Sulthana, Srinath Doss

Large language models (LLMs) have transformed the interpretation and creation of human language in the rapidly developing field of computerized language processing. These models, which are based on deep learning techniques such as transformer architectures, have been painstakingly trained on massive text datasets. This paper takes an in-depth look at LLMs, including their architecture, historical evolution, and applications in the education, healthcare, and finance sectors. LLMs provide logical replies by interpreting complicated verbal patterns, making them beneficial in a variety of real-world scenarios. Their development and implementation, however, raise ethical concerns and have societal ramifications. Understanding the importance and limitations of LLMs is critical for guiding future research and ensuring the ethical use of their enormous potential. This survey exposes the influence of these models as they evolve, providing a roadmap for researchers, developers, and policymakers navigating the world of artificial intelligence and language processing.

Citations: 0
A comprehensive ensemble pruning framework based on dual-objective maximization trade-off
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-10 | DOI: 10.1007/s10115-024-02125-3
Anitha Gopalakrishnan, J. Martin Leo Manickam

Ensemble learning has attracted considerable interest because of its capacity to increase predictive accuracy by combining numerous models. However, redundant data and a high level of computational complexity frequently plague ensembles. Ensemble pruning techniques address these problems by choosing a subset of models while maintaining the accuracy and diversity of the ensemble. Accuracy and diversity must coexist, even though their goals conflict. We therefore formulate ensemble pruning as a dual-objective maximization problem using ideas from information theory, and propose a Comprehensive Ensemble Pruning Framework (CEPF) based on the dual-objective maximization (DOM) trade-off metric. Extensive evaluation of our framework on the exclusively collected PhysioSense dataset demonstrates the superiority of our method over existing pruning techniques. The PhysioSense dataset was collected after approval from the Institutional Human Ethics Committee (IHEC) of Panimalar Medical College Hospital and Research Institute, Chennai, Tamil Nadu (Protocol No: PMCHRI-IHEC-059). The proposed framework not only preserves or improves ensemble accuracy and diversity but also achieves a significant reduction in actual ensemble size. Furthermore, the proposed method provides valuable insight into the dual-objective trade-off between accuracy and diversity, paving the way for further research and advancements in ensemble pruning techniques.
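The accuracy–diversity trade-off at the heart of ensemble pruning can be illustrated with a greedy selection sketch. The scoring rule below, a convex combination of member accuracy and mean pairwise disagreement, is a hypothetical stand-in for the paper's information-theoretic DOM metric, which is not specified here.

```python
import numpy as np

def disagreement(pa, pb):
    """Pairwise diversity: fraction of samples on which two members disagree."""
    return float(np.mean(pa != pb))

def prune_ensemble(member_preds, y_true, k, alpha=0.5):
    """Greedy dual-objective pruning: start from the most accurate member,
    then repeatedly add the member maximizing
    alpha * accuracy + (1 - alpha) * mean disagreement with the selection."""
    accs = [float(np.mean(p == y_true)) for p in member_preds]
    selected = [int(np.argmax(accs))]
    while len(selected) < k:
        best, best_score = None, -np.inf
        for i in range(len(member_preds)):
            if i in selected:
                continue
            div = np.mean([disagreement(member_preds[i], member_preds[j])
                           for j in selected])
            score = alpha * accs[i] + (1 - alpha) * div
            if score > best_score:
                best, best_score = i, score
        selected.append(best)
    return selected

# Six noisy copies of the true labels stand in for ensemble members.
rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=100)
members = [np.where(rng.random(100) < 0.8, y, 1 - y) for _ in range(6)]
subset = prune_ensemble(members, y, k=3)
```

Varying `alpha` moves the pruned subset between "most accurate members" (alpha near 1) and "most mutually diverse members" (alpha near 0), which is precisely the trade-off a dual-objective metric must balance.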

Citations: 0
Trends and challenges in sentiment summarization: a systematic review of aspect extraction techniques
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-09 | DOI: 10.1007/s10115-024-02075-w
Nur Hayatin, Suraya Alias, Lai Po Hung

Sentiment summarization is an automated technology that extracts the important features of sentences and then reorganizes selected words or sentences by their aspect class and sentiment polarity. This emerging research area wields considerable influence: a sentiment-based summary can provide insight into users' subjective opinions, creating social engagement that benefits industry players and entrepreneurs. Meanwhile, systematic studies examining sentiment-based summarization, particularly those delving into aspect levels, are still limited, even though aspects are crucial for obtaining a comprehensive assessment of a product or service and for improving sentiment summarization results. Hence, we conducted a comprehensive survey of aspect extraction techniques in sentiment summarization, classifying techniques by sentiment analysis level and feature. This work analyzes current research trends and challenges in the domain from a different perspective. More than 150 publications from 2004 to 2023 were collected, mainly from credible academic databases. We summarized and performed a comparative analysis of sentiment summarization approaches and tabulated their performance across domains, sentiment levels, and features. We also derived a thematic taxonomy of aspect extraction techniques in sentiment summarization from the analysis and illustrated its usage in various applications. Finally, this study presents recommendations on the challenges and opportunities for future research development.

Citations: 0
Big data in transportation: a systematic literature analysis and topic classification
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-08 | DOI: 10.1007/s10115-024-02112-8
Danai Tzika-Kostopoulou, Eftihia Nathanail, Konstantinos Kokkinos

This paper identifies trends in the application of big data in the transport sector and categorizes research work across scientific subfields. The systematic analysis considered literature published between 2012 and 2022. A total of 2671 studies were evaluated from a dataset of 3532 collected papers, and bibliometric techniques were applied to capture the evolution of research interest over the years and to identify the most influential studies. The proposed unsupervised classification model defined categories and classified the relevant articles according to their particular scientific interest, using representative keywords from the title, abstract, and keyword list (referred to as top words). The model's performance was verified with an accuracy of 91% using a Naïve Bayes and Convolutional Neural Network approach. The analysis identified eight research topics, with urban transport planning and smart city applications being the dominant categories. This paper contributes to the literature by proposing a methodology for literature analysis, identifying emerging scientific areas, and highlighting potential directions for future research.
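The kind of keyword-based topic classification described above can be illustrated with a minimal multinomial Naive Bayes sketch. The toy corpus, labels, and whitespace tokenization are assumptions for illustration only; the study's actual model, features, and corpus differ.

```python
import numpy as np
from collections import Counter

def train_nb(docs, labels):
    """Multinomial Naive Bayes with Laplace smoothing on a
    whitespace-tokenized bag of words."""
    vocab = sorted({w for d in docs for w in d.split()})
    classes = sorted(set(labels))
    log_prior, log_lik = {}, {}
    for c in classes:
        class_docs = [d for d, y in zip(docs, labels) if y == c]
        log_prior[c] = np.log(len(class_docs) / len(docs))
        counts = Counter(w for d in class_docs for w in d.split())
        total = sum(counts.values()) + len(vocab)   # Laplace smoothing
        log_lik[c] = {w: np.log((counts[w] + 1) / total) for w in vocab}
    return classes, log_prior, log_lik

def predict_nb(model, doc):
    """Score each class by log prior + summed log likelihoods of known words."""
    classes, log_prior, log_lik = model
    scores = {c: log_prior[c] + sum(log_lik[c][w]
                                    for w in doc.split() if w in log_lik[c])
              for c in classes}
    return max(scores, key=scores.get)

# Hypothetical mini-corpus of paper titles with topic labels.
docs = ["traffic congestion urban planning",
        "smart city sensor data",
        "freight logistics routing",
        "urban mobility planning"]
labels = ["planning", "smart_city", "logistics", "planning"]
model = train_nb(docs, labels)
```

With the "top words" of each article as features, such a classifier assigns each paper to the category whose keyword distribution it best matches.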

Citations: 0
Relational multi-scale metric learning for few-shot knowledge graph completion
IF 2.7 | CAS Zone 4 (Computer Science) | Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE | Pub Date: 2024-05-08 | DOI: 10.1007/s10115-024-02083-w
Yu Song, Mingyu Gui, Kunli Zhang, Zexi Xu, Dongming Dai, Dezhi Kong

Few-shot knowledge graph completion (FKGC) refers to the task of inferring missing facts in a knowledge graph by utilizing a limited number of reference entities. Most FKGC methods assume a single similarity metric, which leads to a single feature space and makes it difficult to separate positive and negative samples effectively. We therefore propose a multi-scale relational metric network (MSRMN) specifically designed for FKGC, which integrates measurement methods at multiple scales to learn a more comprehensive and compact feature space. In this study, we design a complete-neighbor random sampling algorithm to sample complete one-hop neighbor information, and aggregate both one-hop and multi-hop neighbor information to enhance entity representations. MSRMN then adaptively obtains prototype representations of relations and integrates three measurement methods at different scales to learn a more comprehensive feature space and a more discriminative feature mapping, enabling positive query entity pairs to obtain higher measurement scores. Evaluation on two public link-prediction datasets demonstrates that MSRMN attains top-performing results across various few-shot sizes on the NELL dataset.
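The prototype-plus-multi-metric idea can be sketched as follows: form a relation prototype from the few reference (head, tail) embedding pairs, then score a query pair with several similarity measures combined. Everything here (mean-pooled prototypes, the particular three metrics, the equal weights) is a simplified illustration, not MSRMN's actual architecture.

```python
import numpy as np

def relation_prototype(ref_pairs):
    """Relation prototype: mean of the concatenated (head, tail)
    embeddings of the few reference pairs."""
    return np.mean([np.concatenate([h, t]) for h, t in ref_pairs], axis=0)

def multi_scale_score(proto, h, t, weights=(1.0, 1.0, 1.0)):
    """Combine three measurement scales: cosine similarity,
    negative Euclidean distance, and inner product."""
    q = np.concatenate([h, t])
    cos = q @ proto / (np.linalg.norm(q) * np.linalg.norm(proto) + 1e-12)
    l2 = -np.linalg.norm(q - proto)
    dot = q @ proto
    return weights[0] * cos + weights[1] * l2 + weights[2] * dot

# Reference pairs are noisy copies of one (head, tail) pair; a query
# drawn from the same pair should outscore a random negative.
rng = np.random.default_rng(2)
base_h, base_t = rng.standard_normal(8), rng.standard_normal(8)
refs = [(base_h + 0.1 * rng.standard_normal(8),
         base_t + 0.1 * rng.standard_normal(8)) for _ in range(3)]
proto = relation_prototype(refs)
pos_score = multi_scale_score(proto, base_h, base_t)
neg_score = multi_scale_score(proto, rng.standard_normal(8),
                              rng.standard_normal(8))
```

Because each metric separates positives from negatives along a different geometric axis, their combination yields a feature space in which true query pairs score higher than mismatched ones, which is the behavior the abstract describes.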

Citations: 0
CG-FHAUI: an efficient algorithm for simultaneously mining succinct pattern sets of frequent high average utility itemsets
IF 2.7 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-05-07 DOI: 10.1007/s10115-024-02121-7
Hai Duong, Tin Truong, Bac Le, Philippe Fournier-Viger

The identification of both closed frequent high average utility itemsets (CFHAUIs) and generators of frequent high average utility itemsets (GFHAUIs) has substantial significance because they play an essential and concise role in representing frequent high average utility itemsets (FHAUIs). These concise summaries offer a compact yet crucial overview that can be much smaller than the full set of FHAUIs. In addition, they allow the generation of non-redundant high average utility association rules, a crucial factor for decision-makers to consider. However, discovering these representations is difficult, primarily because the average utility function does not satisfy both the monotonic and anti-monotonic properties within each equivalence class, that is, for itemsets sharing the same subset of transactions. To tackle this challenge, this paper proposes an innovative method for efficiently extracting CFHAUIs and GFHAUIs. This approach introduces novel bounds on the average utility, including a weak lower bound called wlbau and a lower bound named auvlb. Efficient pruning strategies are also designed with the aim of early elimination of non-closed and/or non-generator FHAUIs based on the wlbau and auvlb bounds, leading to quicker execution and lower memory consumption. Additionally, the paper introduces a novel algorithm, CG-FHAUI, designed to concurrently discover both GFHAUIs and CFHAUIs. Empirical results highlight the superior performance of the proposed algorithm in terms of runtime, memory usage, and scalability when compared to a baseline algorithm.
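To make the average-utility measure concrete, the toy sketch below enumerates frequent high average utility itemsets by brute force over a three-transaction database, using the standard definition au(X) = (total utility of X in the transactions containing X) / |X|. The database, thresholds, and naive enumeration are illustrative assumptions only; CG-FHAUI instead prunes the search with the wlbau and auvlb bounds and returns just the closed itemsets and generators.

```python
from itertools import combinations

# Toy transaction database: each transaction maps item -> its utility there.
db = [
    {"a": 4, "b": 1, "c": 3},
    {"a": 2, "c": 5},
    {"b": 2, "c": 1, "d": 6},
]

def average_utility(itemset, db):
    """au(X): summed utility of X's items over transactions containing
    all of X, divided by the number of items in X."""
    total = sum(sum(t[i] for i in itemset) for t in db
                if all(i in t for i in itemset))
    return total / len(itemset)

def mine_fhauis(db, min_sup, min_au):
    """Naively enumerate every itemset and keep those that are frequent
    (support >= min_sup) and have high average utility (au >= min_au)."""
    items = sorted({i for t in db for i in t})
    out = {}
    for k in range(1, len(items) + 1):
        for X in combinations(items, k):
            sup = sum(all(i in t for i in X) for t in db)
            if sup >= min_sup and average_utility(X, db) >= min_au:
                out[X] = average_utility(X, db)
    return out

result = mine_fhauis(db, min_sup=2, min_au=5.0)
print(result)  # {('a',): 6.0, ('c',): 9.0, ('a', 'c'): 7.0}
```

For instance, {a, c} appears in two transactions with utilities 4+3 and 2+5, so au({a, c}) = 14 / 2 = 7, clearing the threshold; {b, c} also appears twice but only reaches au = 3.5 and is rejected.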

Citations: 0
Enhancing sentiment analysis via fusion of multiple embeddings using attention encoder with LSTM
IF 2.7 CAS Zone 4 (Computer Science) Q3 COMPUTER SCIENCE, ARTIFICIAL INTELLIGENCE Pub Date: 2024-04-30 DOI: 10.1007/s10115-024-02102-w
Jitendra Soni, Kirti Mathur

Different embeddings capture various linguistic aspects, such as syntactic, semantic, and contextual information. Taking these diverse linguistic facets into account, we propose a novel hybrid model. It hinges on the amalgamation of multiple embeddings through an attention encoder, subsequently channeled into an LSTM framework for sentiment classification. Our approach fuses Paragraph2vec, ELMo, and BERT embeddings to extract contextual information, while FastText is employed to capture syntactic characteristics. These embeddings are then fused with the embeddings obtained from the attention encoder to form the final representation, and an LSTM model predicts the final classification. We conducted experiments on both the Twitter Sentiment140 and Twitter US Airline Sentiment datasets, evaluating our fusion model against established models such as LSTM, bidirectional LSTM, BERT, and Att-Coder. The test results clearly demonstrate that our approach surpasses the baseline models in performance.
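The attention-based fusion step can be sketched as follows: several embedding views of the same text (stand-ins here for Paragraph2vec, ELMo, BERT, and FastText vectors projected to a common dimension) are weighted by softmax attention and summed into a single vector, which would then feed the LSTM classifier. The mean-vector query and dot-product scoring are simplifying assumptions, not the paper's exact attention encoder.

```python
import numpy as np

def attention_fuse(embeddings):
    """Fuse n embedding views of shape (d,) into one (d,) vector.
    Each view is scored against a query (the mean of all views),
    the scores are softmaxed into attention weights, and the fused
    vector is the weighted sum of the views."""
    E = np.stack(embeddings)          # (n_views, d)
    q = E.mean(axis=0)                # query vector: mean of the views
    scores = E @ q                    # one relevance score per view
    w = np.exp(scores - scores.max()) # numerically stable softmax
    w = w / w.sum()                   # attention weights, sum to 1
    return w @ E                      # weighted sum -> fused embedding

d = 6
views = [np.full(d, 1.0), np.full(d, 2.0), np.full(d, 3.0)]  # three toy views
fused = attention_fuse(views)         # shape (d,); the LSTM would consume this
```

With these toy views the third (largest-norm) view dominates the attention weights, so the fused vector lies very close to it; in the real model the weights are learned to emphasize whichever embedding is most informative for sentiment.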

Citations: 0