
Information Processing & Management: Latest Publications

MOOCs video recommendation using low-rank and sparse matrix factorization with inter-entity relations and intra-entity affinity information
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-06 | DOI: 10.1016/j.ipm.2024.103861

Purpose

The severe information overload of MOOC videos decreases students' learning efficiency and the utilization rate of the videos. Two problems deserve attention in matrix factorization (MF)-based video learning resource recommender systems: these methods suffer from the sparsity of the user-item rating matrix, and side information about users and items is seldom used to guide the learning procedure of the MF.

Method

To address these two problems, we propose a new MOOC video resource recommender, LSMFERLI, based on Low-rank and Sparse Matrix Factorization (LSMF) guided by the inter-Entity Relations and intra-entity Latent Information of students and videos. First, we construct the inter-entity relation matrices and the intra-entity latent preference matrix for the students. Second, we construct the inter-entity relation matrices and the intra-entity affinity matrix for the videos. Finally, guided by the inter-entity relation and intra-entity affinity matrices of the students and videos, the student-video rating matrix is factorized into a low-rank matrix and a sparse matrix by an alternating iterative optimization scheme.

Conclusions

Experimental results on the MOOCcube dataset indicate that LSMFERLI outperforms 7 state-of-the-art methods, improving the HR@K and NDCG@K (K = 5, 10, 15) indicators by an average of 20.6% and 21.0%, respectively.
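As a rough, illustrative sketch of the low-rank-plus-sparse idea behind LSMF (not the authors' guided algorithm, which additionally uses the relation and affinity matrices), the snippet below decomposes a toy student-video rating matrix into a low-rank part and a sparse part by alternating singular-value thresholding and soft thresholding; the function names and threshold values are assumptions chosen for the example.

```python
import numpy as np

def soft_threshold(X, tau):
    # Element-wise shrinkage used for the sparse component (illustrative).
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    # Singular-value thresholding used for the low-rank component (illustrative).
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(soft_threshold(s, tau)) @ Vt

def low_rank_sparse(R, tau=2.0, lam=0.5, n_iter=50):
    """Alternating decomposition R ~ L + S; unguided toy version."""
    L = np.zeros_like(R)
    S = np.zeros_like(R)
    for _ in range(n_iter):
        L = svt(R - S, tau)             # update the low-rank part
        S = soft_threshold(R - L, lam)  # update the sparse part
    return L, S

# Toy student-video rating matrix (rows: students, columns: videos).
rng = np.random.default_rng(0)
R = rng.integers(0, 6, size=(8, 10)).astype(float)
L, S = low_rank_sparse(R)
print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
      "nnz(S) =", int((np.abs(S) > 1e-6).sum()))
```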

Citations: 0
A framework for predicting scientific disruption based on graph signal processing
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-05 | DOI: 10.1016/j.ipm.2024.103863

Identifying scientific disruption is consistently recognized as challenging, and predicting it is even more so. We suggest that better predictions are hindered by the inability to integrate multidimensional information and by the limited scalability of existing methods. This paper develops a framework based on graph signal processing (GSP) to predict scientific disruption, achieving an average AUC of about 80% on benchmark datasets and surpassing prior methods by 13.6% on average. The framework is unified, adaptable to any type of information, and scalable, with the potential for further enhancement using techniques from GSP. The intuition behind the framework is as follows: scientific disruption is characterized by dramatic changes in scientific evolution, which can be viewed as a complex system represented by a graph, and GSP is a technique that specializes in analyzing data on graph structures; thus, we argue that GSP is well suited to modeling scientific evolution and predicting disruption. Based on the proposed framework, we proceed with disruption prediction. The content, context, and (citation) structure information are each defined as graph signals. The total variations of these graph signals, which measure the evolutionary amplitude, are the main predictors. To illustrate the unity and scalability of the framework, altmetrics data (online mentions of a paper), seldom considered previously, are also defined as a graph signal, and another indicator, the dispersion entropy of the graph signal (measuring the chaos of scientific evolution), is used for prediction as well. Our framework also provides interpretability for a better understanding of scientific disruption. The analysis indicates that scientific disruption results in dramatic changes not only in knowledge content but also in context (e.g., journals and authors), and leads to chaos in the subsequent evolution. Finally, several practical future directions for disruption prediction based on the framework are proposed.
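The main predictors in the framework are total variations of graph signals. The sketch below computes that quantity under the common assumption that total variation is the quadratic form x^T L x with the graph Laplacian; the toy graph and signals stand in for the citation structure and paper-level data and are not from the paper.

```python
import numpy as np
import networkx as nx

def total_variation(G, signal):
    """Quadratic-form total variation x^T L x of a node signal on graph G."""
    nodes = list(G.nodes())
    L = nx.laplacian_matrix(G, nodelist=nodes).toarray().astype(float)
    x = np.array([signal[n] for n in nodes], dtype=float)
    return float(x @ L @ x)

# Toy "citation" graph; the signal could be, e.g., one coordinate of a content
# embedding or an altmetrics count attached to each paper.
G = nx.karate_club_graph()
smooth = {n: 1.0 for n in G.nodes()}          # constant signal: total variation 0
rng = np.random.default_rng(1)
noisy = {n: rng.normal() for n in G.nodes()}  # irregular signal: larger total variation
print(total_variation(G, smooth), total_variation(G, noisy))
```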

Citations: 0
Evolutions of semantic consistency in research topic via contextualized word embedding
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-03 | DOI: 10.1016/j.ipm.2024.103859

Topic evolution has been studied extensively in the science of science. This study first analyzes topic evolution patterns through topics' semantic consistency in the semantic vector space and explores their possible causes. Specifically, we extract papers in the computer science field from the Microsoft Academic Graph as our dataset. We propose a novel method for encoding a topic with numerous Contextualized Word Embeddings (CWE), in which the title and abstract fields of the papers studying the topic are taken as its context. Subsequently, we employ three geometric metrics to analyze topics' semantic consistency over time, from which the influence of the anisotropy of CWE is excluded. The K-Means clustering algorithm is employed to identify four general evolution patterns of semantic consistency: semantic consistency increases (IM), decreases (DM), increases first and then decreases (inverted U-shape), or decreases first and then increases (U-shape). We also find that research methods tend to show DM and U-shape patterns, whereas research questions tend to show IM and inverted U-shape patterns. Finally, we use regression analysis to explore whether and, if so, how a series of key features of a topic affect its semantic consistency. Importantly, the semantic consistency of a topic varies inversely with the semantic similarity between the topic and other topics. Overall, this study sheds light on the evolution law of topics and helps researchers understand these patterns from a geometric perspective.
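As a hedged illustration of the measurement pipeline (not the paper's exact three metrics), the sketch below scores a topic's semantic consistency as the mean pairwise cosine similarity of its contextualized embeddings after subtracting the corpus mean vector as a crude anisotropy correction, and then clusters yearly consistency trajectories with K-Means; the embeddings are random stand-ins, so the four clusters here will not match the IM/DM/U-shape patterns.

```python
import numpy as np
from sklearn.cluster import KMeans

def consistency(embs, corpus_mean):
    """Mean pairwise cosine similarity after a crude anisotropy correction."""
    X = embs - corpus_mean                      # remove the common direction
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    sims = X @ X.T
    n = len(X)
    return (sims.sum() - n) / (n * (n - 1))     # exclude self-similarity

rng = np.random.default_rng(0)
corpus = rng.normal(size=(1000, 64))            # toy corpus embeddings
corpus_mean = corpus.mean(axis=0)

# One consistency value per year per topic -> a trajectory per topic.
trajectories = []
for _topic in range(30):                        # 30 toy topics
    traj = [consistency(rng.normal(size=(50, 64)), corpus_mean)
            for _year in range(10)]             # 10 toy years
    trajectories.append(traj)

# Four clusters, mirroring the IM / DM / inverted-U / U-shape grouping idea.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(trajectories)
print(np.bincount(labels))
```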

Citations: 0
Rank aggregation with limited information based on link prediction
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-02 | DOI: 10.1016/j.ipm.2024.103860

Rank aggregation is a vital tool for facilitating decision-making processes that consider multiple criteria or attributes. In many applications, however, the available ranked lists are limited and quite partial for various reasons. This scarcity of ranking information poses a significant challenge to rank aggregation effectiveness. To address the problem of rank aggregation with limited information, this study builds on a networked representation of ranking information and employs link prediction to mine potential ranking information. The aim is to optimize the aggregation process and maximize aggregation effectiveness using the available limited information. Experimental results indicate that the proposed approach can significantly enhance the aggregation effectiveness of existing rank aggregation methods, such as Borda's method, the competition graph method, and the Markov chain method. Our work provides a new way to solve the rank aggregation problem with limited information and develops a new research paradigm for future rank aggregation studies from the perspective of network science.
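A toy sketch of the overall idea, under the assumption that partial rankings are represented as a directed preference graph and that missing pairwise preferences are scored with a simple two-hop link-prediction heuristic before a Borda-style aggregation; the heuristic, threshold, and out-degree aggregation below are illustrative choices, not the paper's method.

```python
import itertools
import networkx as nx

# Partial ranked lists (best first); items missing from a list are unranked.
ranked_lists = [["a", "b", "c"], ["b", "d"], ["a", "d", "e"], ["c", "e"]]

# Preference graph: edge u -> v means u was ranked above v in some list.
G = nx.DiGraph()
for lst in ranked_lists:
    G.add_edges_from(itertools.combinations(lst, 2))

def two_hop_score(G, u, v):
    # Count two-hop paths u -> w -> v (a simple directed link-prediction score).
    return sum(1 for w in G.successors(u) if G.has_edge(w, v))

# Infer the most plausible missing preferences and densify the graph.
inferred = [(u, v) for u, v in itertools.permutations(G.nodes(), 2)
            if not G.has_edge(u, v) and two_hop_score(G, u, v) >= 1]
G.add_edges_from(inferred)

# Borda-style aggregation on the densified graph: more "wins" = better rank.
ranking = sorted(G.nodes(), key=lambda n: G.out_degree(n), reverse=True)
print(inferred, ranking)
```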

Citations: 0
Multi-stakeholder recommendation system through deep learning-based preference evaluation and aggregation model with multi-view information embedding
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.ipm.2024.103862

Learning the preferences of consumers, providers, and system stakeholders is a challenging problem in Multi-Stakeholder Recommendation Systems (MSRS). Existing MSRS methods lack the ability to generate equitable recommendations and to investigate implicit relationships between stakeholders and items. This study addresses the issue by proposing a multi-stakeholder preference learning-based recommendation model that exploits information from multiple views to evaluate stakeholders' preferences. The proposed model learns consumer preferences from users' ratings and reviews of an item, and provider preferences from provider utility and provider-item interaction. Furthermore, the model learns the system-level preference of promoting long-tail items through a probabilistic evaluation of stakeholders' interest in popular and unpopular items. Finally, this study develops a multi-stakeholder, multi-view deep neural network model to aggregate stakeholders' preferences and deliver equitable recommendations. This work uses the benchmark MovieLens (ML) 25M, ML-100K, ML-1M, and TripAdvisor datasets to validate and compare the proposed model's performance against other baseline methods using standard evaluation metrics for each stakeholder. On the precision metrics, the proposed model attains minimum improvements of 7.91%, 18.24%, 10.72%, and 20.12% on the ML-25M, ML-100K, ML-1M, and TripAdvisor datasets. On the exposure, hit, and reach metrics, the model exhibits substantial minimum improvements of 19.12%, 14.73%, 5.37%, and 28.46% on the ML-25M, ML-100K, ML-1M, and TripAdvisor datasets. Finally, the proposed model excels at promoting long-tail items and enhancing the cumulative utility gain of the stakeholders, surpassing the baseline methods.
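As a minimal sketch of aggregating per-stakeholder preference scores with a multi-view network (the per-stakeholder towers, layer sizes, and learned fusion layer below are assumptions for illustration, not the authors' architecture):

```python
import torch
import torch.nn as nn

class StakeholderAggregator(nn.Module):
    """Toy aggregator: fuse consumer, provider, and system-level scores."""
    def __init__(self, dim=16):
        super().__init__()
        self.consumer = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, 1))
        self.provider = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, 1))
        self.system   = nn.Sequential(nn.Linear(dim, 8), nn.ReLU(), nn.Linear(8, 1))
        self.fuse = nn.Linear(3, 1)   # learned trade-off between stakeholders

    def forward(self, consumer_x, provider_x, system_x):
        scores = torch.cat([self.consumer(consumer_x),
                            self.provider(provider_x),
                            self.system(system_x)], dim=-1)
        return self.fuse(scores).squeeze(-1)   # one recommendation score per item

model = StakeholderAggregator()
batch = torch.randn(4, 16)               # 4 candidate items, toy features per view
print(model(batch, batch, batch).shape)  # torch.Size([4])
```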

Citations: 0
EIOA: A computing expectation-based influence evaluation method in weighted hypergraphs
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.ipm.2024.103856

Influence maximization (IM) is a key issue in network science. However, previous research on IM has explored binary interaction relationships in ordinary graphs, with little consideration of the higher-order interactions that are more realistic in hypergraphs, especially weighted hypergraphs. Therefore, this study focuses on solving the IM problem in weighted hypergraphs. First, we adopt a novel and more reasonable dissemination model, adaptive dissemination (AD), and incorporate it into weighted hypergraphs. Next, a computing-expectation-based influence evaluation method is proposed to accurately obtain the expected influence in the one-hop area (EIOA) of the seed node set. Meanwhile, three search algorithms are designed using the EIOA to effectively select the initial seed set. Then, multi-level experiments are conducted to compare the proposed algorithms with six other advanced algorithms on eight real-world weighted hypergraph datasets. The experimental results are analyzed visually, and two nonparametric test procedures are applied to verify the significant advantages of the proposed algorithms. Finally, the impact of factors such as seed set correlation, model parameter settings, and the weight attribute on dissemination is explored, and the efficiency and robustness of the algorithms are further validated.
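As one illustrative reading of "expected influence in the one-hop area" (not the paper's AD model or its EIOA formula), the sketch below assumes that each weighted hyperedge containing a seed independently activates its other members with probability equal to its weight, and sums the resulting activation probabilities:

```python
# Weighted hypergraph: each hyperedge is (set_of_nodes, activation_probability).
hyperedges = [
    ({1, 2, 3}, 0.6),
    ({2, 4},    0.3),
    ({3, 4, 5}, 0.5),
    ({5, 6},    0.2),
]

def expected_one_hop_influence(seeds, hyperedges):
    """E[# activated nodes after one hop], assuming each seed-containing
    hyperedge independently activates its other members with its weight."""
    seeds = set(seeds)
    others = set().union(*(members for members, _ in hyperedges)) - seeds
    total = 0.0
    for v in others:
        # Probability that at least one shared hyperedge activates v.
        p_not_activated = 1.0
        for members, w in hyperedges:
            if v in members and members & seeds:
                p_not_activated *= (1.0 - w)
        total += 1.0 - p_not_activated
    return len(seeds) + total   # seeds themselves plus expected new activations

print(expected_one_hop_influence({2}, hyperedges))
```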

Citations: 0
Multi-View disentanglement-based bidirectional generalized distillation for diagnosis of liver cancers with ultrasound images
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-08-01 | DOI: 10.1016/j.ipm.2024.103855

B-mode ultrasound (BUS) mainly reflects the tissue structure, morphology, and echo characteristics of liver tumors, while contrast-enhanced ultrasound (CEUS) offers supplementary information on the dynamic blood perfusion pattern to improve diagnostic accuracy. Transfer learning (TL) can improve the performance of BUS-based computer-aided diagnosis (CAD) of liver cancer by transferring information from CEUS. However, most multi-view TL algorithms cannot fully capture both the view-common and the view-unique information of the three CEUS phase images in the source domain to further promote knowledge transfer. To this end, a multi-view disentanglement-based bidirectional generalized distillation (MD-BGD) algorithm is proposed to explore and learn more potential knowledge from three typical CEUS phase images for multi-view transfer. MD-BGD consists of a multi-view feature disentanglement module and a bidirectional distillation module. The former explores more potential and transferable privileged information by disentangling the three CEUS phase image features in the source domain into view-common and view-unique components. The latter develops a bidirectional generalized distillation algorithm to enhance multi-view knowledge transfer between the source and target domains, guided by shared labels. The BUS-based CAD model is thus significantly improved by the proposed MD-BGD. MD-BGD is evaluated on a bi-modal ultrasound imaging dataset and achieves the best results of 90.75±2.20%, 89.50±3.49%, and 91.89±3.78% in accuracy, sensitivity, and specificity, respectively. These results indicate the effectiveness of MD-BGD in the diagnosis of liver cancer.
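A minimal sketch of the bidirectional distillation ingredient, assuming a symmetric KL divergence between temperature-softened CEUS-teacher and BUS-student logits; the temperature, equal weighting, and two-class setup are assumptions, and the disentanglement module is not reproduced here.

```python
import torch
import torch.nn.functional as F

def bidirectional_kd_loss(student_logits, teacher_logits, T=2.0):
    """Symmetric KL between temperature-softened student and teacher predictions."""
    log_p_s = F.log_softmax(student_logits / T, dim=-1)
    log_p_t = F.log_softmax(teacher_logits / T, dim=-1)
    kd_st = F.kl_div(log_p_s, log_p_t, log_target=True, reduction="batchmean")
    kd_ts = F.kl_div(log_p_t, log_p_s, log_target=True, reduction="batchmean")
    return (kd_st + kd_ts) * (T ** 2)

student = torch.randn(8, 2, requires_grad=True)   # BUS branch logits (toy: benign/malignant)
teacher = torch.randn(8, 2)                       # CEUS branch logits
loss = bidirectional_kd_loss(student, teacher)
loss.backward()
print(float(loss))
```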

Citations: 0
SCFL: Spatio-temporal consistency federated learning for next POI recommendation
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-30 | DOI: 10.1016/j.ipm.2024.103852

Existing personalized federated learning frameworks fail to significantly improve the personalization of user preference learning in next Point-Of-Interest (POI) recommendation, causing notable performance deficits. These frameworks do not fully consider crucial factors such as: (1) how to thoroughly explore the spatial-temporal relationships within user trajectories to deeply understand personalized behavior patterns, and (2) the collaborative signals among users with similar spatio-temporal distributions, whose neglect results in the loss of valuable shared information. To tackle these challenges, this paper introduces the Spatio-temporal Consistency Federated Learning (SCFL) framework, which capitalizes on the spatio-temporal consistency of trajectories to boost the personalized performance of POI recommendation models in FL. Specifically, we develop the trajectory optimization module SCA for clients in isolation to extract deeper behavioral patterns from the spatio-temporal distribution of sparse trajectories. Additionally, we present a hierarchical aggregation strategy based on distribution consistency, utilizing intermediate entities called Edges to aggregate similar users, thereby enhancing the model's learning of shared information. Experimental validation across three real-world datasets (NYC, TKY, and Gowalla) and two models (SASRec and SSEPT) with six scalability settings shows that SCFL substantially outperforms eight strong baselines. In six experimental configurations, SCFL achieves a personalized performance improvement of 10.65% over the best baselines. Additional experiments validate the superiority of SCFL from various perspectives.
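As a toy sketch of the two-level aggregation idea, assuming clients are grouped into "Edges" by clustering their flattened model updates and then averaged hierarchically; the clustering criterion and plain averaging are illustrative assumptions, not the paper's distribution-consistency strategy or its SCA module.

```python
import numpy as np
from sklearn.cluster import KMeans

def hierarchical_aggregate(client_weights, n_edges=3):
    """Two-level FedAvg: cluster similar clients into 'Edges', average within
    each Edge, then average the Edge models into a global model."""
    W = np.stack(client_weights)                       # (n_clients, n_params)
    labels = KMeans(n_clusters=n_edges, n_init=10, random_state=0).fit_predict(W)
    edge_models = [W[labels == k].mean(axis=0) for k in range(n_edges)]
    return np.mean(edge_models, axis=0), labels

rng = np.random.default_rng(0)
# 12 toy clients whose flattened model updates fall into 3 latent groups.
clients = [rng.normal(loc=g, scale=0.1, size=20) for g in (0, 1, 2) for _ in range(4)]
global_model, edge_assignment = hierarchical_aggregate(clients)
print(edge_assignment, global_model[:3])
```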

Citations: 0
Edge contrastive learning for link prediction
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-29 | DOI: 10.1016/j.ipm.2024.103847

Link prediction is a critical task in graph machine learning. While recent advances mainly emphasize node representation learning, the rich information encapsulated in edges, which has proven advantageous in various graph-related tasks, has been somewhat overlooked. To bridge this gap, this paper explores the potential of incorporating edge representation learning into link prediction and identifies three inherent challenges of this approach. We introduce the Edge Contrastive Learning for Link Prediction (ECLiP) framework to tackle these challenges. ECLiP integrates edge information into node representations through edge-level contrastive learning, with the distinctive perspective of treating edges, rather than nodes, as the units of instance discrimination. We first illustrate the implementation of this framework using an established edge representation learning method; however, it incurs significant additional training overhead when the number of edges is huge. To mitigate this issue, we present a computationally efficient variant that employs a multi-layer perceptron (MLP) for direct edge representation learning. Rigorous experiments on eight distinct datasets with node counts ranging from 2k to 235k demonstrate a noteworthy improvement of over 10% on certain datasets, validating the efficacy of the proposed methodology.
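A minimal sketch of edge-level instance discrimination, assuming edge representations are built by an MLP over concatenated endpoint embeddings and trained with an InfoNCE-style loss between two lightly augmented views of the same edge; the encoder, augmentation, and temperature are assumptions rather than ECLiP's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EdgeEncoder(nn.Module):
    """Map a pair of node embeddings to an edge representation with an MLP."""
    def __init__(self, dim=32, out=16):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 64), nn.ReLU(), nn.Linear(64, out))

    def forward(self, h_u, h_v):
        return self.mlp(torch.cat([h_u, h_v], dim=-1))

def edge_info_nce(z1, z2, temperature=0.2):
    """Edges (not nodes) are the units of instance discrimination:
    view 1 of edge k must match view 2 of the same edge k."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0))
    return F.cross_entropy(logits, targets)

enc = EdgeEncoder()
h_u, h_v = torch.randn(64, 32), torch.randn(64, 32)   # endpoints of 64 observed edges
noise = lambda x: x + 0.05 * torch.randn_like(x)      # toy feature augmentation
loss = edge_info_nce(enc(noise(h_u), noise(h_v)), enc(noise(h_u), noise(h_v)))
loss.backward()
print(float(loss))
```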

Citations: 0
Structure-aware sign language recognition with spatial–temporal scene graph
IF 7.4 | CAS Tier 1 (Management Science) | Q1 COMPUTER SCIENCE, INFORMATION SYSTEMS | Pub Date: 2024-07-29 | DOI: 10.1016/j.ipm.2024.103850

Continuous sign language recognition (CSLR) is essential for the social participation of deaf individuals. The structural information of sign language motion units plays a crucial role in semantic representation. However, most existing CSLR methods treat motion units as a whole appearance in the video sequence, neglecting the exploitation and explanation of structural information in the models. This paper proposes a Structure-Aware Graph Convolutional Neural Network (SA-GNN) model for CSLR. The model constructs a spatial-temporal scene graph, explicitly capturing motion units' spatial structure and temporal variation. Furthermore, to effectively train the SA-GNN, we propose an adaptive bootstrap strategy that enhances weak supervision using dense pseudo labels. This strategy incorporates a confidence cross-entropy loss to adaptively adjust the distribution of pseudo labels. Extensive experiments validate the effectiveness of the proposed method, achieving competitive results on popular CSLR datasets.
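As a hedged sketch of a confidence-weighted cross-entropy over dense pseudo labels (the thresholding and weighting scheme below are assumptions, not the paper's exact loss), with frame-level gloss logits as input:

```python
import torch
import torch.nn.functional as F

def confidence_cross_entropy(logits, pseudo_logits, threshold=0.5):
    """Cross-entropy against dense pseudo labels, weighted by their confidence
    and masked below a threshold (an illustrative weak-supervision loss)."""
    probs = F.softmax(pseudo_logits, dim=-1)
    conf, pseudo = probs.max(dim=-1)                  # confidence and hard pseudo label
    weight = conf * (conf > threshold).float()        # drop low-confidence frames
    ce = F.cross_entropy(logits, pseudo, reduction="none")
    return (weight * ce).sum() / weight.sum().clamp(min=1e-8)

frames, classes = 20, 5                               # toy video frames, toy gloss classes
student_logits = torch.randn(frames, classes, requires_grad=True)
pseudo_logits = torch.randn(frames, classes)          # e.g., predictions from an earlier pass
loss = confidence_cross_entropy(student_logits, pseudo_logits)
loss.backward()
print(float(loss))
```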

Citations: 0