
Latest publications in AI Open

LLMKG+: Systematically improving knowledge quality and coverage in KGs using LLMs – A case study in medical domain
IF 14.8 | Pub Date: 2025-01-01 | DOI: 10.1016/j.aiopen.2025.11.003
Xincan Feng , Hejie Cui , Kazuki Hayashi , Huy Hien Vu , Kenta T. Suzuki , Noriki Nishida , Hidetaka Kamigaito , Yuji Matsumoto , Taro Watanabe , Carl Yang
Knowledge graphs (KGs) encode structured information about real-world entities and their relations, supporting core NLP tasks such as question answering and retrieval. Existing LLM-based methods for knowledge extraction and fusion often struggle to balance quality and coverage when adapting to emerging knowledge. We propose LLMKG+, a framework for KG expansion that integrates the generative strengths of LLMs with relevance verification. LLMKG+ features (1) a two-stage pipeline with retrieval-augmented generation followed by hierarchical expansion filtering, where the latter is the first to jointly assess semantic equivalence to eliminate triple-level redundancy while ensuring factual correctness, and (2) a novel KG Reconstruction Test that recognizes semantically equivalent triples to enable more accurate quality and coverage assessment. Evaluated on PubMed abstracts and the UMLS semantic network using eight state-of-the-art LLMs, LLMKG+ improves KG quality and coverage by 20.47%–73.71% over strong baselines. These results demonstrate that LLMKG+ offers an effective solution for KG expansion in domains requiring high quality, broad coverage, and continual knowledge growth. Code: https://github.com/xincanfeng/llmkg.
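The abstract gives no implementation details, so the following is only a minimal Python sketch of the two-stage idea it describes: generate candidate triples with retrieval-augmented prompting, then filter out candidates that duplicate existing knowledge at the semantic level. `generate_triples` and `are_equivalent` are hypothetical stand-ins for LLM calls, not the authors' released API.

```python
from typing import Callable

Triple = tuple[str, str, str]  # (head, relation, tail)

def expand_kg(
    kg: set[Triple],
    documents: list[str],
    generate_triples: Callable[[str], list[Triple]],   # stage 1: RAG-style extraction
    are_equivalent: Callable[[Triple, Triple], bool],  # stage 2: semantic-equivalence check
) -> set[Triple]:
    """Add novel, non-redundant candidate triples extracted from documents."""
    expanded = set(kg)
    for doc in documents:
        for candidate in generate_triples(doc):
            # Keep a candidate only if no existing triple is semantically
            # equivalent to it (triple-level redundancy elimination).
            if not any(are_equivalent(candidate, t) for t in expanded):
                expanded.add(candidate)
    return expanded
```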
{"title":"LLMKG+: Systematically improving knowledge quality and coverage in KGs using LLMs – A case study in medical domain","authors":"Xincan Feng ,&nbsp;Hejie Cui ,&nbsp;Kazuki Hayashi ,&nbsp;Huy Hien Vu ,&nbsp;Kenta T. Suzuki ,&nbsp;Noriki Nishida ,&nbsp;Hidetaka Kamigaito ,&nbsp;Yuji Matsumoto ,&nbsp;Taro Watanabe ,&nbsp;Carl Yang","doi":"10.1016/j.aiopen.2025.11.003","DOIUrl":"10.1016/j.aiopen.2025.11.003","url":null,"abstract":"<div><div>Knowledge graphs (KGs) encode structured information about real-world entities and their relations, supporting core NLP tasks such as question answering and retrieval. Existing LLM-based methods for knowledge extraction and fusion often struggle to balance quality and coverage when adapting to emerging knowledge. We propose LLMKG+, a framework for KG expansion that integrates the generative strengths of LLMs with relevance verification. LLMKG+features (1) a two-stage pipeline with retrieval-augmented generation followed by hierarchical expansion filtering, where the latter is the first to jointly assess semantic equivalence to eliminate triple-level redundancy while ensuring factual correctness, and (2) a novel KG Reconstruction Test that recognizes semantically equivalent triples to enable more accurate quality and coverage assessment. Evaluated on PubMed abstracts and the UMLS semantic network using eight state-of-the-art LLMs, LLMKG+improves KG quality and coverage by 20.47%–73.71% over strong baselines. These results demonstrate that LLMKG+offers an effective solution for KG expansion in domains requiring high quality, broad coverage, and continual knowledge growth. Code: <span><span>https://github.com/xincanfeng/llmkg</span><svg><path></path></svg></span>.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 299-313"},"PeriodicalIF":14.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145623234","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Advancing AI for science: From the revolution of tools to the tools for revolution
IF 14.8 | Pub Date: 2025-01-01 | DOI: 10.1016/j.aiopen.2025.11.002
Bowen Zhou , Ning Ding , Lei Bai , Hao Zhou
Scientific research is not a linear pipeline but a dynamic system built upon the ever-shifting interactions among three elements — research objects, tools, and researchers. Sustained progress depends on how quickly insights circulate within this network, not on optimizing a single node in isolation. With the impending arrival of more general artificial intelligence, we stand at a critical point in how AI might change scientific research in a systemic manner. Recent “AI for Science” achievements – from protein-structure prediction to accelerated climate simulations – have proven the value of task-level AI-driven solutions. Yet much potential remains unrealized while these advances stay siloed in disciplinary “archipelagos”. This paper argues that the real prize is systemic: AI that simultaneously expands the research objects’ data landscape (AI for Data), rewires computational research tools (AI for Computation), and co-creates hypotheses with researchers (AI for Innovation). When these three pushes converge, AI ceases to be merely a revolution of tools and becomes the tool of revolution — a catalyst that raises the frequency, breadth, and depth of discovery across disciplines. By enhancing the full research triad rather than isolated nodes, AI can raise the overall tempo and scope of discovery in a measured, discipline-agnostic way.
{"title":"Advancing AI for science: From the revolution of tools to the tools for revolution","authors":"Bowen Zhou ,&nbsp;Ning Ding ,&nbsp;Lei Bai ,&nbsp;Hao Zhou","doi":"10.1016/j.aiopen.2025.11.002","DOIUrl":"10.1016/j.aiopen.2025.11.002","url":null,"abstract":"<div><div>Scientific research is not a linear pipeline but a dynamic system built upon the ever-shifting interactions among three elements — <em>research objects, tools, and researchers</em>. And sustained progress depends on how quickly insights circulate within this network, not on optimizing a single node in isolation. With the impending arrival of more general artificial intelligence, we stand at a critical point in how AI might change scientific research in a systemic manner. Recent “AI for Science” achievements – from protein-structure prediction to accelerated climate simulations – have proven the value of task-level AI-driven solutions. Yet, potential still remains unrealized when these advances are siloed in disciplinary “archipelagos”. This paper argues that the real prize is systemic: AI that simultaneously expands the research objects’ data landscape (AI for Data), rewires computational research tools (AI for Computation), and co-creates hypotheses with researchers (AI for Innovation). When these three pushes converge, AI stops being merely a revolution of tools but becomes the tool of revolution — a catalyst that raises the frequency, breadth, and depth of discovery across disciplines. By enhancing the full research triad rather than isolated nodes, AI can raise the overall tempo and scope of discovery in a measured, discipline-agnostic way.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 323-328"},"PeriodicalIF":14.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145692937","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Optimal RoPE extension via Bayesian Optimization for training-free length generalization
Pub Date: 2025-01-01 | DOI: 10.1016/j.aiopen.2025.01.002
Xinrong Zhang , Shengding Hu , Weilin Zhao , Huadong Wang , Xu Han , Chaoqun He , Guoyang Zeng , Zhiyuan Liu , Maosong Sun
Transformers are designed to process input of variable length without resource constraints. However, their performance deteriorates significantly when the input surpasses a threshold slightly larger than the pre-training context window. This limit on the effective context window confines the application of the much-anticipated Transformer-based large language models (LLMs). Consequently, generalizing pre-trained LLMs to handle varying input lengths has become a pivotal and formidable challenge. Previous research has endeavored to address this challenge by modifying the Rotary Position Embedding (RoPE), the primary factor responsible for disparities in handling different input lengths. These efforts have provided valuable insights, but they often lack a deep understanding of the root causes of performance degradation and rely heavily on manual parameter tuning. In response, we conduct a comprehensive analysis and identify two primary causes behind the performance drop: global distribution mismatch and local resolution degradation. In light of these challenges, we introduce an Optimal RoPE (ORoPE) extension using Bayesian Optimization (BO), which removes the need for additional model training. Our experiments demonstrate the efficacy of our approach, outperforming baselines by up to 21.9%, 32.1%, and 41.2% at evaluation lengths of 8K, 16K, and 32K, respectively. We will release all code and data when this paper is published.
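As a rough illustration of the search space involved, here is a hedged sketch of tuning a single RoPE extension factor without retraining. Plain random search stands in for Bayesian Optimization to keep the example dependency-free (a real implementation could use, e.g., scikit-optimize's `gp_minimize`), and `eval_ppl` is a hypothetical callback that scores the rescaled model on long-context validation text.

```python
import math
import random

def rope_inv_freq(dim: int, base: float = 10000.0, scale: float = 1.0) -> list[float]:
    """RoPE inverse frequencies; dividing by `scale` interpolates positions."""
    return [1.0 / (scale * base ** (2 * i / dim)) for i in range(dim // 2)]

def search_rope_scale(eval_ppl, dim: int = 128, trials: int = 20, seed: int = 0):
    """Find the extension factor that minimizes perplexity (lower is better)."""
    rng = random.Random(seed)
    best_ppl, best_scale = math.inf, 1.0
    for _ in range(trials):
        scale = rng.uniform(1.0, 16.0)  # candidate context-extension factor
        ppl = eval_ppl(rope_inv_freq(dim, scale=scale))
        if ppl < best_ppl:
            best_ppl, best_scale = ppl, scale
    return best_scale, best_ppl
```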
{"title":"Optimal RoPE extension via Bayesian Optimization for training-free length generalization","authors":"Xinrong Zhang ,&nbsp;Shengding Hu ,&nbsp;Weilin Zhao ,&nbsp;Huadong Wang ,&nbsp;Xu Han ,&nbsp;Chaoqun He ,&nbsp;Guoyang Zeng ,&nbsp;Zhiyuan Liu ,&nbsp;Maosong Sun","doi":"10.1016/j.aiopen.2025.01.002","DOIUrl":"10.1016/j.aiopen.2025.01.002","url":null,"abstract":"<div><div>Transformers are designed to process input of variable length without resource constraints. However, their performance significantly deteriorates when the input surpasses a threshold slightly larger than the pre-training context window. This limitation on the effective context window confines the application of Transformer-based large language models (LLMs) that have been the subject of great anticipation. Consequently, the generalization of pre-trained LLMs to handle varying input lengths becomes a pivotal and formidable challenge. Previous research has endeavored to address this challenge by modifying the Rotary Position Embedding (RoPE), the primary factor responsible for disparities in handling different input lengths. These efforts have provided valuable insights, while they often lack a deep understanding of the root causes of performance degradation and rely heavily on manual parameter tuning. In response to these issues, we conduct a comprehensive analysis and identify two primary causes behind the performance drop: global distribution mismatch and local resolution degradation. In light of these challenges, we introduce an Optimal RoPE (ORoPE) extension using Bayesian Optimization (BO), which alleviates the need for additional model training. Our experiments demonstrate the efficacy of our approach, outperforming baselines by up to 21.9%, 32.1%, and 41.2% at evaluation lengths of 8K, 16K, and 32K, respectively. We will release all code and data when this paper is published.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 1-11"},"PeriodicalIF":0.0,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143134370","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Symbolic learning enables self-evolving agents
IF 14.8 | Pub Date: 2025-01-01 | DOI: 10.1016/j.aiopen.2025.11.004
Yixin Ou , Wangchunshu Zhou , Shengwei Ding , Long Li , Jialong Wu , Tiannan Wang , Jiamin Chen , Shuai Wang , Xiaohua Xu , Ningyu Zhang , Huajun Chen , Yuchen Eleanor Jiang
The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing “language agents”: complex large language model (LLM) workflows involving both prompting techniques and tool usage methods. While language agents have demonstrated impressive capabilities on many real-world tasks, a fundamental limitation of current language agent research is that it is model-centric or engineering-centric. That is to say, designing the prompts, tools, and workflows of language agents requires substantial manual engineering effort from human experts rather than automatic learning from data. We believe the transition from model-centric, or engineering-centric, to data-centric, i.e., the ability of language agents to autonomously learn and evolve in their environments, is the key to their possibly achieving AGI.
In this work, we introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves in a data-centric way using symbolic optimizers. Specifically, we treat agents as symbolic networks in which the learnable weights are defined by prompts, tools, and the way they are stacked together. Agent symbolic learning optimizes the symbolic network within a language agent by mimicking two fundamental algorithms of connectionist learning: back-propagation and gradient descent. Instead of dealing with numeric weights, agent symbolic learning works with text-based weights, losses, and gradients. We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks and show substantial improvements over static agent frameworks and simple prompt/tool optimization methods. In addition, agent symbolic learning enables language agents to update themselves after being created and deployed in the wild, resulting in “self-evolving agents”. We will open-source the agent symbolic learning framework to facilitate future research on data-centric agent learning.
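To make the back-propagation analogy concrete, here is a conceptual sketch in which the prompt is the learnable weight, an LLM critique plays the role of the loss and gradient, and the update step rewrites the prompt. `llm` is a hypothetical text-in/text-out function; the authors' actual framework is more elaborate.

```python
def symbolic_update(llm, prompt: str, example: str, target: str) -> str:
    """One 'gradient step' on a text-based weight (the prompt)."""
    output = llm(f"{prompt}\n\nInput: {example}")
    # Text-based "loss": a natural-language critique of the output.
    critique = llm(
        "Critique this output against the expected answer.\n"
        f"Output: {output}\nExpected: {target}"
    )
    # Text-based "gradient descent": rewrite the prompt to fix the critique.
    return llm(
        "Revise the following prompt so the critique no longer applies.\n"
        f"Prompt: {prompt}\nCritique: {critique}"
    )

def train(llm, prompt: str, dataset, epochs: int = 3) -> str:
    """Iterate symbolic updates over (example, target) pairs."""
    for _ in range(epochs):
        for example, target in dataset:
            prompt = symbolic_update(llm, prompt, example, target)
    return prompt
```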
{"title":"Symbolic learning enables self-evolving agents","authors":"Yixin Ou ,&nbsp;Wangchunshu Zhou ,&nbsp;Shengwei Ding ,&nbsp;Long Li ,&nbsp;Jialong Wu ,&nbsp;Tiannan Wang ,&nbsp;Jiamin Chen ,&nbsp;Shuai Wang ,&nbsp;Xiaohua Xu ,&nbsp;Ningyu Zhang ,&nbsp;Huajun Chen ,&nbsp;Yuchen Eleanor Jiang","doi":"10.1016/j.aiopen.2025.11.004","DOIUrl":"10.1016/j.aiopen.2025.11.004","url":null,"abstract":"<div><div>The AI community has been exploring a pathway to artificial general intelligence (AGI) by developing “language agents”, which are complex large language models (LLMs) workflows involving both prompting techniques and tool usage methods. While language agents have demonstrated impressive capabilities for many real-world tasks, a fundamental limitation of current language agents research is that they are model-centric or engineering-centric. That is to say, the design of prompts, tools, and workflows of language agents requires substantial manual engineering efforts from human experts rather than automatically learning from data. We believe the transition from model-centric, or engineering-centric, to data-centric, i.e., the ability of language agents to autonomously learn and evolve in environments, is the key for them to possibly achieve AGI.</div><div>In this work, we introduce <em>agent symbolic learning</em>, a systematic framework that enables language agents to optimize themselves on their own in a data-centric way using <em>symbolic optimizers</em>. Specifically, we consider agents as symbolic networks in which learnable weights are defined by prompts, tools, and the way they are stacked together. Agent symbolic learning is designed to optimize the symbolic network within language agents in a <em>data-centric</em> way by mimicking two fundamental algorithms in connectionist learning: back-propagation and gradient descent. Instead of dealing with numeric weights, agent symbolic learning works with text-based weights, loss, and gradients. We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks and show substantial improvements over static agent frameworks and simple prompt/tool optimization methods. In addition, agent symbolic learning enables language agents to update themselves after being created and deployed in the wild, resulting in “self-evolving agents”. We will open-source the agent symbolic learning framework to facilitate future research on <em>data-centric</em> agent learning.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"6 ","pages":"Pages 314-322"},"PeriodicalIF":14.8,"publicationDate":"2025-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145692936","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
PM2.5 forecasting under distribution shift: A graph learning approach
Pub Date: 2024-01-01 | DOI: 10.1016/j.aiopen.2023.11.001
Yachuan Liu , Jiaqi Ma , Paramveer Dhillon , Qiaozhu Mei

We present a new benchmark task for graph-based machine learning, aiming to predict future air quality (PM2.5 concentration) observed by a geographically distributed network of environmental sensors. While prior work has successfully applied Graph Neural Networks (GNNs) to a wide family of spatio-temporal prediction tasks, the new benchmark task introduced here brings a technical challenge that has been less studied in the context of graph-based spatio-temporal learning: distribution shift across a long period of time. An important goal of this paper is to understand the behavior of spatio-temporal GNNs under distribution shift. We conduct a comprehensive comparative study of both graph-based and non-graph-based machine learning models under two data split methods, one of which results in distribution shift and one of which does not. Our empirical results suggest that GNN models tend to suffer more from distribution shift than non-graph-based models, which calls for special attention when deploying spatio-temporal GNNs in practice.
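The two split methods are the crux of the study, so here is a small sketch of what they might look like: a chronological split trains on the past and tests on the future, inducing distribution shift, while a random split mixes time periods and largely avoids it. Column names are illustrative, not the benchmark's actual schema.

```python
import pandas as pd

def temporal_split(df: pd.DataFrame, time_col: str = "timestamp", frac: float = 0.8):
    """Train on early observations, test on later ones (induces shift)."""
    df = df.sort_values(time_col)
    cut = int(len(df) * frac)
    return df.iloc[:cut], df.iloc[cut:]

def random_split(df: pd.DataFrame, frac: float = 0.8, seed: int = 0):
    """Mix time periods across train and test (largely avoids shift)."""
    train = df.sample(frac=frac, random_state=seed)
    return train, df.drop(train.index)
```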

{"title":"PM2.5 forecasting under distribution shift: A graph learning approach","authors":"Yachuan Liu ,&nbsp;Jiaqi Ma ,&nbsp;Paramveer Dhillon ,&nbsp;Qiaozhu Mei","doi":"10.1016/j.aiopen.2023.11.001","DOIUrl":"10.1016/j.aiopen.2023.11.001","url":null,"abstract":"<div><p>We present a new benchmark task for graph-based machine learning, aiming to predict future air quality (PM2.5 concentration) observed by a geographically distributed network of environmental sensors. While prior work has successfully applied Graph Neural Networks (GNNs) on a wide family of spatio-temporal prediction tasks, the new benchmark task introduced here brings a technical challenge that has been less studied in the context of graph-based spatio-temporal learning: distribution shift across a long period of time. An important goal of this paper is to understand the behavior of spatio-temporal GNNs under distribution shift. We conduct a comprehensive comparative study of both graph-based and non-graph-based machine learning models under two data split methods, one results in distribution shift and one does not. Our empirical results suggest that GNN models tend to suffer more from distribution shift compared to non-graph-based models, which calls for special attention when deploying spatio-temporal GNNs in practice.</p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 23-29"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666651023000220/pdfft?md5=cec5103867bd9723b31ac8d2aeadf3e7&pid=1-s2.0-S2666651023000220-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139013251","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
MindLLM: Lightweight large language model pre-training, evaluation and domain application
Pub Date: 2024-01-01 | DOI: 10.1016/j.aiopen.2024.08.001
Yizhe Yang, Huashan Sun, Jiawei Li, Runheng Liu, Yinghao Li, Yuhang Liu, Yang Gao, Heyan Huang

Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While progress toward general artificial intelligence is driven by increasingly large-scale models, developing lightweight custom models that better serve certain domains is a complementary branch, given the high cost of training and deploying LLMs and the scarcity of resources. In this paper, we present MindLLM, a novel series of bilingual lightweight large language models trained from scratch, alleviating such burdens by offering models with 1.3 billion and 3 billion parameters. A thorough account of the experience accrued during large model development is given, covering every step of the process, including data construction, model architecture, evaluation, and applications. Such insights are hopefully valuable for fellow academics and developers. MindLLM consistently matches or surpasses the performance of other open-source larger models on some public benchmarks. We also introduce an innovative instruction tuning framework tailored for smaller models to enhance their capabilities efficiently. Moreover, we explore the application of MindLLM in specific vertical domains such as law and finance, underscoring the agility and adaptability of our lightweight models.

{"title":"MindLLM: Lightweight large language model pre-training, evaluation and domain application","authors":"Yizhe Yang,&nbsp;Huashan Sun,&nbsp;Jiawei Li,&nbsp;Runheng Liu,&nbsp;Yinghao Li,&nbsp;Yuhang Liu,&nbsp;Yang Gao,&nbsp;Heyan Huang","doi":"10.1016/j.aiopen.2024.08.001","DOIUrl":"10.1016/j.aiopen.2024.08.001","url":null,"abstract":"<div><p>Large Language Models (LLMs) have demonstrated remarkable performance across various natural language tasks, marking significant strides towards general artificial intelligence. While general artificial intelligence is leveraged by developing increasingly large-scale models, there could be another branch to develop lightweight custom models that better serve certain domains, taking into account the high cost of training and deploying LLMs and the scarcity of resources. In this paper, we present MindLLM, a novel series of bilingual lightweight large language models, trained from scratch, alleviating such burdens by offering models with 1.3 billion and 3 billion parameters. A thorough account of experiences accrued during large model development is given, covering every step of the process, including data construction, model architecture, evaluation, and applications. Such insights are hopefully valuable for fellow academics and developers. MindLLM consistently matches or surpasses the performance of other open-source larger models on some public benchmarks. We also introduce an innovative instruction tuning framework tailored for smaller models to enhance their capabilities efficiently. Moreover, we explore the application of MindLLM in specific vertical domains such as law and finance, underscoring the agility and adaptability of our lightweight models.</p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 1-26"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666651024000111/pdfft?md5=5c01070780bb0f7ea417c3293322b19c&pid=1-s2.0-S2666651024000111-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141992619","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Adaptive negative representations for graph contrastive learning
Pub Date: 2024-01-01 | DOI: 10.1016/j.aiopen.2023.10.005
Qi Zhang, Cheng Yang, Chuan Shi

Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives was introduced to GCL; it provides more challenging self-supervised objectives and alleviates over-fitting. These methods use different graphs in the same mini-batch as negative examples and assign larger weights to truly hard negatives. However, the influence of such weighting strategies is limited in practice, since a small mini-batch may not contain any sufficiently challenging negative examples. In this paper, we offer a more flexible way to control the hardness of negatives by directly manipulating their representations. Assuming that (1) good negative representations should not deviate far from the representations of real graph samples, and (2) the computation process of the graph encoder may introduce biases into graph representations, we first design a negative representation generator (NRG) that (1) employs real graphs as prototypes to perturb, and (2) introduces parameterized perturbations through the feed-forward computation of the graph encoder to match the biases. We then design a generation loss to train the parameters in the NRG and adaptively generate negative representations for more challenging contrastive objectives. Experiments on eight benchmark datasets show that our proposed framework ANGCL achieves a 1.6% relative improvement over the best baseline and can be successfully integrated with three types of graph augmentations. Ablation studies and hyper-parameter experiments further demonstrate the effectiveness of ANGCL.
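A minimal PyTorch sketch of the NRG idea as the abstract describes it: start from the representation of a real graph (the prototype) and add a learnable, bounded perturbation, so the generated negative stays near the data manifold while its hardness can be trained. The single-layer design and the bound `eps` are assumptions for illustration.

```python
import torch
import torch.nn as nn

class NegativeRepGenerator(nn.Module):
    """Generates negatives by perturbing prototype (real-graph) embeddings."""

    def __init__(self, dim: int, eps: float = 0.5):
        super().__init__()
        self.perturb = nn.Linear(dim, dim)  # parameterized perturbation
        self.eps = eps                      # keeps negatives near the prototypes

    def forward(self, prototype: torch.Tensor) -> torch.Tensor:
        delta = torch.tanh(self.perturb(prototype)) * self.eps
        return prototype + delta

# The generated vectors would serve as extra negatives in the contrastive
# loss, with the generator trained by its own generation loss.
gen = NegativeRepGenerator(dim=128)
negatives = gen(torch.randn(32, 128))  # 32 prototype embeddings -> 32 negatives
```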

{"title":"Adaptive negative representations for graph contrastive learning","authors":"Qi Zhang,&nbsp;Cheng Yang,&nbsp;Chuan Shi","doi":"10.1016/j.aiopen.2023.10.005","DOIUrl":"10.1016/j.aiopen.2023.10.005","url":null,"abstract":"<div><p>Graph contrastive learning (GCL) has emerged as a promising paradigm for learning graph representations. Recently, the idea of hard negatives is introduced to GCL, which can provide more challenging self-supervised objectives and alleviate over-fitting issues. These methods use different graphs in the same mini-batch as negative examples, and assign larger weights to true hard negative ones. However, the influence of such weighting strategies is limited in practice, since a small mini-batch may not contain any challenging enough negative examples. In this paper, we aim to offer a more flexible solution to affect the hardness of negatives by directly manipulating the representations of negatives. By assuming that (1) good negative representations should not deviate far from the representations of real graph samples, and (2) the computation process of graph encoder may introduce biases to graph representations, we first design a negative representation generator (NRG) which (1) employs real graphs as prototypes to perturb, and (2) introduces parameterized perturbations through the feed-forward computation of the graph encoder to match the biases. Then we design a generation loss to train the parameters in NRG and adaptively generate negative representations for more challenging contrastive objectives. Experiments on eight benchmark datasets show that our proposed framework ANGCL has 1.6% relative improvement over the best baseline, and can be successfully integrated with three types of graph augmentations. Ablation studies and hyper-parameter experiments further demonstrate the effectiveness of ANGCL.</p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 79-86"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S2666651023000219/pdfft?md5=b0c3c461206c9fd2fcce93a0a80db1a1&pid=1-s2.0-S2666651023000219-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"138992756","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Improving trajectory classification through Kramers–Moyal coefficients
Pub Date: 2024-01-01 | DOI: 10.1016/j.aiopen.2024.06.001
G. Viera-López , J.J. Morgado-Vega , A. Reyes , E. Altshuler , Yudivián Almeida-Cruz , Giorgio Manganini

Trajectory classification focuses on predicting the class or category of a moving object based on its observed movement over time. Classifying trajectory data with classical approaches can be challenging because some trajectories have arbitrary and relatively long lengths. To overcome this, trajectories are often mapped into vector representations that aim to encode their most significant features in a fixed number of dimensions. Here we propose a novel vector representation for trajectories that combines previously employed features with new ones derived from the computation of the Kramers–Moyal coefficients (KMC). Because KMCs originate from a Taylor expansion that progressively encapsulates more information about a stochastic process, it is natural to expect them to be effective for trajectory classification. We evaluated our representation using different classifiers and several benchmark datasets previously used for trajectory classification. With the addition of features extracted from KMCs, our results indicate a reliable increase in classification accuracy and F1 score of around 4% across all datasets and models used for evaluation. Moreover, we observed an increase in accuracy of up to 20% and an increase in F1 score of up to 23% in some scenarios.
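The abstract leaves the exact feature construction to the paper, but the underlying estimator is standard: the k-th finite-time Kramers–Moyal coefficient can be estimated from the k-th moment of the increments, D_k ≈ ⟨(x(t+τ) − x(t))^k⟩ / (k! τ). The sketch below averages over a whole 1-D trajectory (rather than conditioning on the state) to get a fixed-length feature vector; that simplification is ours, not necessarily the authors'.

```python
import math
import numpy as np

def km_features(x: np.ndarray, dt: float, max_order: int = 4) -> np.ndarray:
    """Fixed-length KMC-style features from a 1-D trajectory sampled at step dt."""
    dx = np.diff(x)
    return np.array([
        np.mean(dx ** k) / (math.factorial(k) * dt)
        for k in range(1, max_order + 1)
    ])

# Example: for pure diffusion, the second coefficient approximates the
# diffusion rate (here 0.1**2 / 2 = 0.005).
rng = np.random.default_rng(0)
traj = np.cumsum(rng.normal(0.0, 0.1, size=100_000))
print(km_features(traj, dt=1.0))
```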

{"title":"Improving trajectory classification through Kramers–Moyal coefficients","authors":"G. Viera-López ,&nbsp;J.J. Morgado-Vega ,&nbsp;A. Reyes ,&nbsp;E. Altshuler ,&nbsp;Yudivián Almeida-Cruz ,&nbsp;Giorgio Manganini","doi":"10.1016/j.aiopen.2024.06.001","DOIUrl":"10.1016/j.aiopen.2024.06.001","url":null,"abstract":"<div><p>Trajectory classification focuses on predicting the class or category of a moving object based on its observed movement over time. The classification of trajectory data using classical approaches can be challenging due to the arbitrary and relatively long length of some trajectories. To overcome this, trajectories are often mapped into vector representations that aim to encode their most significant features and for a fixed number of dimensions. Here we propose a novel vector representation for trajectories that combines previously employed features with new ones derived from the computation of the Kramers–Moyal coefficients (KMC). Due to KMC originating from a Taylor expansion that progressively encapsulates more information about a stochastic process, their potential to be effective in trajectory classification is a logical anticipation. We evaluated our representation using different classifiers and several benchmark datasets previously used for trajectory classification. With the addition of features extracted from KMCs, our results indicate a reliable increase in classification accuracy and F1 score of around 4% across all datasets and models used for evaluation. Moreover, we observed an increase in accuracy of up to 20% and an increase in F1 score of up to 23% in some scenarios.</p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 87-93"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665102400010X/pdfft?md5=1530eab784a46e13da719255a80cd3e1&pid=1-s2.0-S266665102400010X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141715791","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Mining contacts from spatio-temporal trajectories
Pub Date: 2024-01-01 | DOI: 10.1016/j.aiopen.2024.10.002
Adikarige Randil Sanjeewa Madanayake, Kyungmi Lee, Ickjai Lee
Contact mining discovers objects that come into close proximity during their movements, in order to reveal possible interactions, infections, collisions or contacts. This process can be significantly beneficial during the spread of an infectious disease, helping to identify potential victims of a known infected human or animal, especially when the victims are asymptomatic. Movements of objects are captured as spatio-temporal trajectories represented by a series of geospatial locations and corresponding timestamps. A large amount of spatio-temporal trajectory data is being gathered by various location-acquiring sensor devices that track the movement behaviours of people, animals, vehicles and natural events. Trajectory data mining techniques have been proposed to discover useful patterns and understand the behaviours of spatio-temporal trajectories. One unexplored pattern is identifying the contacts of a targeted trajectory among spatio-temporal trajectories, which we define as contact mining. The aim of this study is to investigate contact mining from spatio-temporal trajectories. The approach first preprocesses the spatio-temporal data and then applies a robust contact mining framework to efficiently and effectively mine contacts of a trajectory of interest from a given set of trajectories. Experimental results demonstrate the efficiency, effectiveness and scalability of our approach. In addition, parameter sensitivity analysis reveals the robustness and insensitivity of our framework.
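As a baseline definition of what such a framework mines, here is a brute-force sketch: a trajectory is a contact of the target if any pair of samples lies within `max_dist` distance units and `max_dt` seconds of each other. The record layout and thresholds are assumptions; the paper's framework would replace the nested scan with spatio-temporal indexing for scalability.

```python
from math import hypot

# Each trajectory is a list of (timestamp, x, y) samples.
def contacts(target, others, max_dist=2.0, max_dt=60.0):
    """Return the ids of trajectories that come into contact with `target`."""
    found = set()
    for oid, traj in others.items():
        if any(
            abs(t1 - t2) <= max_dt and hypot(x1 - x2, y1 - y2) <= max_dist
            for t1, x1, y1 in target
            for t2, x2, y2 in traj
        ):
            found.add(oid)
    return found

infected = [(0, 0.0, 0.0), (30, 1.0, 0.5)]
print(contacts(infected, {"a": [(10, 0.5, 0.2)], "b": [(10, 50.0, 50.0)]}))  # {'a'}
```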
{"title":"Mining contacts from spatio-temporal trajectories","authors":"Adikarige Randil Sanjeewa Madanayake,&nbsp;Kyungmi Lee,&nbsp;Ickjai Lee","doi":"10.1016/j.aiopen.2024.10.002","DOIUrl":"10.1016/j.aiopen.2024.10.002","url":null,"abstract":"<div><div>Contact mining is discovering objects in close proximity in their movements in order to reveal possible interactions, infections, collisions or contacts. This process can be significantly beneficial in a spread of an infectious disease situation to identify potential victims from a known infected human or animal, especially when the victims are asymptomatic. Movements of objects are captured by spatio-temporal trajectories represented by a series of geospatial locations and corresponding timestamps. A large amount of spatio-temporal trajectory data is being gathered by various location acquiring sensor devices by tracking movement behaviours of people, animals, vehicles and natural events. Trajectory data mining techniques have been proposed to discover useful patterns to understand the behaviours of spatio-temporal trajectories. One unexplored pattern is to identify contacts of targeted trajectory in spatio-temporal trajectories, which is defined as contact mining. The aim of this study is to investigate contact mining from spatio-temporal trajectories. The approach will be initiated by preprocessing spatio-temporal data and then by investigating a robust contact mining framework to efficiently and effectively mine contacts of a trajectory of interest from a given set of trajectories. Experimental results demonstrate the efficiency, effectiveness and scalability of our approach. In addition, parameter sensitivity analysis reveals the robustness and insensitivity of our framework.</div></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 197-207"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142552714","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Enhancing neural network classification using fractional-order activation functions
Pub Date: 2024-01-01 | DOI: 10.1016/j.aiopen.2023.12.003
Meshach Kumar , Utkal Mehta , Giansalvo Cirrincione

In this paper, a series of novel activation functions is presented, derived using the improved Riemann–Liouville conformable fractional derivative (RLCFD). This study investigates the use of fractional activation functions in Multilayer Perceptron (MLP) models and their impact on the performance of classification tasks, verified using the IRIS, MNIST and FMNIST datasets. Fractional activation functions introduce a non-integer power exponent, allowing complex patterns and representations to be captured more effectively. The experiments compare MLP models employing fractional activation functions, such as the fractional sigmoid, hyperbolic tangent and rectified linear units, against traditional models using standard activation functions, their improved versions, and existing fractional functions. The numerical studies confirm the theoretical observations made in the paper. The findings highlight the potential of the new functions as a valuable tool for classification in deep learning. The study suggests that incorporating fractional activation functions in MLP architectures can lead to superior accuracy and robustness.
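The abstract does not give the functional forms, so the following only illustrates the underlying operator: the conformable fractional derivative of order α is T_α f(x) = x^(1−α) f′(x) for x > 0, which reduces to the ordinary derivative at α = 1. Applying it to the sigmoid yields one plausible "fractional sigmoid"-style curve; treating this as the paper's actual activation is our assumption.

```python
import numpy as np

def sigmoid(x: np.ndarray) -> np.ndarray:
    return 1.0 / (1.0 + np.exp(-x))

def conformable_sigmoid(x: np.ndarray, alpha: float = 0.8) -> np.ndarray:
    """x**(1 - alpha) * sigmoid'(x); defined here for x > 0 only."""
    s = sigmoid(x)
    return np.power(x, 1.0 - alpha) * s * (1.0 - s)

x = np.linspace(0.1, 5.0, 5)
print(conformable_sigmoid(x))  # alpha = 1 recovers the plain derivative
```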

{"title":"Enhancing neural network classification using fractional-order activation functions","authors":"Meshach Kumar ,&nbsp;Utkal Mehta ,&nbsp;Giansalvo Cirrincione","doi":"10.1016/j.aiopen.2023.12.003","DOIUrl":"https://doi.org/10.1016/j.aiopen.2023.12.003","url":null,"abstract":"<div><p>In this paper, a series of novel activation functions is presented, which is derived using the improved Riemann–Liouville conformable fractional derivative (<span><math><msup><mrow></mrow><mrow><mi>R</mi><mi>L</mi></mrow></msup></math></span>CFD). This study investigates the use of fractional activation functions in Multilayer Perceptron (MLP) models and their impact on the performance of classification tasks, verified using the IRIS, MNIST and FMNIST datasets. Fractional activation functions introduce a non-integer power exponent, allowing for improved capturing of complex patterns and representations. The experiment compares MLP models employing fractional activation functions, such as fractional sigmoid, hyperbolic tangent and rectified linear units, against traditional models using standard activation functions, their improved versions and existing fractional functions. The numerical studies have confirmed the theoretical observations mentioned in the paper. The findings highlight the potential usage of new functions as a valuable tool in deep learning in classification. The study suggests incorporating fractional activation functions in MLP architectures can lead to superior accuracy and robustness.</p></div>","PeriodicalId":100068,"journal":{"name":"AI Open","volume":"5 ","pages":"Pages 10-22"},"PeriodicalIF":0.0,"publicationDate":"2024-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.sciencedirect.com/science/article/pii/S266665102300030X/pdfft?md5=2be839945dd6c63499655950e9809539&pid=1-s2.0-S266665102300030X-main.pdf","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"139090006","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0