
Latest articles from the Journal of Biomedical Semantics

An application-based ontological knowledge base of medications to support health literacy and adherence for the consumer population: an aging population use case.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2026-01-29 · DOI: 10.1186/s13326-026-00347-8
Clifford Chen, Muhammad Amith, Kirk Roberts, Rebecca Mauldin, Renata Komalasari, Cui Tao
Citations: 0
Ontology development and use for cholangiocarcinoma risk factors and predictions: a term enrichment data analysis and machine learning classification.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2026-01-22 · DOI: 10.1186/s13326-025-00345-2
Anuwat Pengput, Alexander D Diehl
Journal of Biomedical Semantics, 17(1): 2. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12829242/pdf/
Citations: 0
ECLed - a tool supporting the effective use of the SNOMED CT Expression Constraint Language.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2026-01-06 · DOI: 10.1186/s13326-025-00344-3
Tessa Ohlsen, André Sander, Josef Ingenerf

Background: The Expression Constraint Language (ECL) is a powerful query language for SNOMED CT, enabling precise semantic queries across clinical concepts. However, its complex syntax and reliance on the SNOMED CT Concept Model make it difficult for non-experts to use, limiting its broader adoption in clinical research and healthcare analytics.

Objective: This work presents ECLed, a web-based tool designed to simplify access to ECL queries by abstracting the complexity of ECL syntax and the SNOMED CT Concept Model. ECLed is aimed at non-technical users, enabling the creation and modification of ECL queries and facilitating the querying of patient data coded with SNOMED CT.

Methods: ECLed was developed following a detailed requirements analysis, addressing both functional and non-functional needs. The tool supports the creation and editing of SNOMED CT ECL queries, integrates a processed Concept Model, and uses FHIR terminology services for semantic validation. Its modular architecture, with a frontend based on Angular and a backend on Spring Boot, ensures seamless communication through RESTful interfaces.

Result: ECLed demonstrated high usability in a user survey. Technical validation confirmed that it reliably generates and edits complex ECL queries. The tool was successfully integrated into the DaWiMed research platform, enhancing clinical analysis workflows. It also worked effectively with clinical data in FHIR format, although scalability with larger datasets remains to be tested.

Discussion: ECLed overcomes the limitations of existing ECL tools by abstracting the complexity of both the syntax and the SNOMED CT Concept Model. It provides a user-friendly solution that enables both technical and non-technical users to easily create and edit ECL queries.

Conclusion: ECLed offers a practical, user-friendly solution for creating SNOMED CT ECL queries, effectively hiding the underlying complexity while optimizing clinical research and data analysis workflows. It holds significant potential for further development and integration into additional research platforms.
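As an illustration of what ECLed abstracts away, the sketch below builds a SNOMED CT ECL expression and the corresponding FHIR ValueSet/$expand request URL, using the implicit value set convention (`?fhir_vs=ecl/`) supported by SNOMED-aware terminology servers. The server base URL is hypothetical; the ECL expression is the classic example from the ECL specification.

```python
from urllib.parse import quote

def ecl_expand_url(base_url: str, ecl: str) -> str:
    """Build a FHIR ValueSet/$expand request URL for the implicit SNOMED CT
    value set defined by an ECL expression (the ?fhir_vs=ecl/ convention)."""
    implicit_vs = "http://snomed.info/sct?fhir_vs=ecl/" + ecl
    return f"{base_url}/ValueSet/$expand?url={quote(implicit_vs, safe='')}"

# Classic example from the ECL specification: clinical findings whose
# finding site is the pulmonary valve structure or a descendant of it.
ecl = ("< 404684003 |Clinical finding| : "
       "363698007 |Finding site| = << 39057004 |Pulmonary valve structure|")

url = ecl_expand_url("https://tx.example.org/fhir", ecl)
print(url)
```

Tools like ECLed generate such expressions from a graphical model, so end users never have to hand-encode constraint operators or attribute refinements.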

Journal of Biomedical Semantics, 17(1): 1. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12777381/pdf/
Citations: 0
Annotating and indexing scientific articles with rare diseases.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2026-01-06 · DOI: 10.1186/s13326-025-00346-1
Hosein Azarbonyad, Zubair Afzal, Rik Iping, Max Dumoulin, Ilse Nederveen, Jiangtao Yu, Georgios Tsatsaronis

Background: Around 30 million people in Europe are affected by a rare (or orphan) disease, defined as a condition occurring in fewer than 1 in 2,000 individuals. The primary challenge is to automatically and efficiently identify scientific articles and guidelines that address a particular rare disease. We present a novel methodology to annotate and index scientific text with taxonomical concepts describing rare diseases from the OrphaNet taxonomy. This task is complicated by several technical challenges, including the lack of sufficiently large, human-annotated datasets for supervised training and the polysemy/synonymy and surface-form variation of rare disease names, which can hinder any annotation engine.

Results: We introduce a framework that operationalizes OrphaNet for large-scale literature annotation by integrating the TERMite engine with curated synonym expansion, label normalization (including deprecated/renamed concepts), and fuzzy matching. On benchmark datasets, the approach achieves precision = 92%, recall = 75%, and F1 = 83%, outperforming a string-matching baseline. Applying the pipeline to Scopus produces disease-specific corpora suitable for bibliometric and scientometric analyses (e.g., institution, country, and subject-area profiles). These outputs power the Rare Diseases Monitor dashboard for exploring national and global research activity.

Conclusion: To our knowledge, this is the first systematic, scalable semantic framework for annotating and indexing rare disease literature at scale. By operationalizing OrphaNet in an automated, reproducible pipeline and addressing data scarcity and lexical variability, the work advances biomedical semantics for rare diseases and enables disease-centric monitoring, evaluation, and discovery across the research landscape.
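The combination of synonym expansion and fuzzy matching described above can be sketched as follows. The synonym table here is a tiny hypothetical stand-in for OrphaNet's preferred terms and synonyms, and the matcher uses Python's standard-library `difflib` rather than the TERMite engine.

```python
from difflib import SequenceMatcher

# Hypothetical miniature synonym table; real Orphanet entries are far richer.
SYNONYMS = {
    "Ehlers-Danlos syndrome": ["EDS", "Ehlers Danlos syndrome"],
    "Cystic fibrosis": ["Mucoviscidosis", "CF"],
}

def normalize(term: str) -> str:
    """Collapse case, hyphenation, and whitespace (surface-form variation)."""
    return " ".join(term.lower().replace("-", " ").split())

def annotate(mention: str, threshold: float = 0.85):
    """Return the preferred label whose surface forms best match the mention,
    combining synonym expansion with fuzzy matching."""
    best_label, best_score = None, 0.0
    for preferred, syns in SYNONYMS.items():
        for form in [preferred] + syns:
            score = SequenceMatcher(None, normalize(mention),
                                    normalize(form)).ratio()
            if score > best_score:
                best_label, best_score = preferred, score
    return best_label if best_score >= threshold else None

print(annotate("ehlers danlos syndrom"))  # misspelled surface-form variant
```

The threshold trades precision against recall, mirroring the precision/recall balance reported in the abstract.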

Journal of Biomedical Semantics, p. 3. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12870340/pdf/
Citations: 0
SimSUM - simulated benchmark with structured and unstructured medical records.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2025-12-18 · DOI: 10.1186/s13326-025-00341-6
Paloma Rabaey, Stefan Heytens, Thomas Demeester
Journal of Biomedical Semantics, 16(1): 20. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12713242/pdf/
Citations: 0
BabelFSH - a toolkit for an effective HL7 FHIR-based terminology provision.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2025-11-29 · DOI: 10.1186/s13326-025-00343-4
Joshua Wiedekopf, Tessa Ohlsen, Ann-Kristin Kock-Schoppenhauer, Josef Ingenerf

Background: HL7 FHIR terminological services (TS) are a valuable tool towards better healthcare interoperability, but require representations of terminologies using FHIR resources to provide their services. As most terminologies are not natively distributed using FHIR resources, converters are needed. Large-scale FHIR projects, especially those with a national or even an international scope, define enormous numbers of value sets and reference many large and complex code systems, which must be regularly updated in TS and other systems. This necessitates a flexible, scalable and efficient provision of these artifacts. This work aims to develop a comprehensive, extensible and accessible toolkit for FHIR terminology conversion, making it possible for terminology authors, FHIR profilers and other actors to provide standardized TS for large-scale terminological artifacts.

Implementation: Based on the prevalent HL7 FHIR Shorthand (FSH) specification, a converter toolkit, called BabelFSH, was created that utilizes an adaptable plugin architecture to separate the definition of content from that of the needed declarative metadata. The development process was guided by formalized design goals.

Results: All eight design goals were addressed by BabelFSH. Validation of the system's performance and completeness was demonstrated using Alpha-ID-SE, an important terminology used for diagnosis coding, especially of rare diseases, in Germany. The tool is now used extensively within the content delivery pipeline for a central FHIR TS with a national scope within the German Medical Informatics Initiative and Network University Medicine and demonstrates adequate usability for FHIR developers.

Discussion: The first development focus was geared towards the requirements of the central research FHIR TS for the federated FHIR infrastructure in Germany, and the tool has proven very useful towards that goal. Opportunities for further improvement were identified especially in the validation process, as the validation messages are at times imprecise. The design of the application lends itself to further use cases, such as direct connectivity to legacy systems for catalog conversion to FHIR.

Conclusions: The developed BabelFSH tool is a novel, powerful and open-source approach to making heterogenous sources of terminological knowledge accessible as FHIR resources, thus aiding semantic interoperability in healthcare in general.
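The core conversion idea - rendering terminology content as FHIR Shorthand (FSH) while keeping content separate from declarative metadata - can be sketched in a few lines. This is a toy generator, not BabelFSH itself; the identifiers and titles are invented.

```python
def to_fsh_codesystem(cs_id: str, title: str, concepts: dict) -> str:
    """Render a minimal FSH CodeSystem definition from a plain
    code -> display mapping (content), plus fixed metadata caret rules."""
    lines = [
        f"CodeSystem: {cs_id}",            # declarative metadata
        f'Title: "{title}"',
        "* ^content = #complete",
    ]
    for code, display in sorted(concepts.items()):
        lines.append(f'* #{code} "{display}"')  # terminology content
    return "\n".join(lines)

fsh = to_fsh_codesystem("ExampleCS", "Example Code System",
                        {"A01": "First concept", "A02": "Second concept"})
print(fsh)
```

A real converter would emit such FSH for SUSHI to compile into FHIR CodeSystem resources, which a terminology server can then ingest.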

Journal of Biomedical Semantics, p. 19. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12679771/pdf/
Citations: 0
The CLEAR Principle: organizing data and metadata into semantically meaningful types of FAIR Digital Objects to increase their human explorability and cognitive interoperability.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2025-10-28 · DOI: 10.1186/s13326-025-00340-7
Lars Vogt

Background: Ensuring the FAIRness (Findable, Accessible, Interoperable, Reusable) of data and metadata is an important goal in both research and industry. Knowledge graphs and ontologies have been central in achieving this goal, with interoperability of data and metadata receiving much attention. This paper argues that the emphasis on machine-actionability has overshadowed the essential need for human-actionability of data and metadata, and provides three examples that describe the lack of human-actionability within knowledge graphs.

Results: The paper advocates incorporating cognitive interoperability as another vital layer within the European Open Science Cloud Interoperability Framework and discusses the relation between the human explorability of data and metadata and their cognitive interoperability. It suggests adding the CLEAR Principle to support the cognitive interoperability and human contextual explorability of data and metadata. The subsequent sections present the concept of semantic units, elucidating their important role in attaining CLEAR. Semantic units structure a knowledge graph into identifiable and semantically meaningful subgraphs, each represented with its own resource that constitutes a FAIR Digital Object (FDO) and that instantiates a corresponding FDO class. Various categories of FDOs are distinguished. Each semantic unit can be displayed in a user interface either as a mind-map-like graph or as natural language text.

Conclusions: Semantic units organize knowledge graphs into levels of representational granularity, distinct granularity trees, and diverse frames of reference. This organization supports the cognitive interoperability of data and metadata and facilitates their contextual explorability by humans. The development of innovative user interfaces enabled by FDOs that are based on semantic units would empower users to access, navigate, and explore information in CLEAR knowledge graphs with optimized efficiency.
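The notion of a semantic unit - a semantically meaningful subgraph with its own identifier - can be illustrated with a toy partitioning of a flat triple list. All identifiers below are hypothetical; the paper's actual unit types and granularity levels are considerably richer.

```python
# Toy triples: two observation statements about one patient observation,
# and one diagnosis statement about another.
triples = [
    ("ex:obs1", "ex:hasBodyPart", "ex:leftHand"),
    ("ex:obs1", "ex:hasQuality", "ex:weight"),
    ("ex:obs1", "ex:hasValue", "3.2"),
    ("ex:obs2", "ex:hasDiagnosis", "ex:arthritis"),
]

def semantic_units(triples):
    """Partition a flat triple list into subject-centered subgraphs and
    mint one unit identifier per subgraph, so each subgraph can be
    referenced (and displayed) as its own FDO-like resource."""
    units = {}
    for s, p, o in triples:
        unit_id = f"unit:{s.split(':')[1]}"
        units.setdefault(unit_id, []).append((s, p, o))
    return units

units = semantic_units(triples)
for uid, subgraph in units.items():
    print(uid, "contains", len(subgraph), "statement(s)")
```

Each unit identifier could then resolve to a rendering of its subgraph as a mind-map-like graph or as natural language text, as the abstract describes.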

Journal of Biomedical Semantics, 16(1): 18. Open access PDF: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12570660/pdf/
Citations: 0
Three-layered semantic framework for public health intelligence.
IF 2.0 · CAS Zone 3 (Engineering & Technology) · JCR Q3 (Mathematical & Computational Biology) · Pub Date: 2025-09-15 · DOI: 10.1186/s13326-025-00338-1
Sathvik Guru Rao, Pranitha Rokkam, Bide Zhang, Astghik Sargsyan, Abish Kaladharan, Priya Sethumadhavan, Marc Jacobs, Martin Hofmann-Apitius, Alpha Tom Kodamullil

Background: Disease surveillance systems play a crucial role in monitoring and preventing infectious diseases. However, the current landscape, primarily focused on fragmented health data, poses challenges to contextual understanding and decision-making. This paper addresses this issue by proposing a semantic framework using ontologies to provide a unified data representation for seamless integration. The paper demonstrates the effectiveness of this approach using a case study of a COVID-19 incident at a football game in Italy.

Method: In this study, we undertook a comprehensive approach to gather and analyze data for the development of ontologies within the realm of pandemic intelligence. Multiple ontologies were meticulously crafted to cater to different domains related to pandemic intelligence, such as healthcare systems, mass gatherings, travel, and diseases. The ontologies were classified into top-level, domain, and application layers. This classification facilitated the development of a three-layered architecture, promoting reusability, and consistency in knowledge representation, and serving as the backbone of our semantic framework.

Result: Through the utilization of our semantic framework, we accomplished semantic enrichment of both structured and unstructured data. The integration of data from diverse sources involved mapping to ontology concepts, leading to the creation and storage of RDF triples in the triple store. This process resulted in the construction of linked data, ultimately enhancing the discoverability and accessibility of valuable insights. Furthermore, our anomaly detection algorithm effectively leveraged knowledge graphs extracted from the triple store, employing semantic relationships to discern patterns and anomalies within the data. Notably, this capability was exemplified by the identification of correlations between a football game and a COVID-19 event occurring at the same location and time.

Conclusion: The framework showcased its capability to address intricate, multi-domain queries and support diverse levels of detail. Additionally, it demonstrated proficiency in data analysis and visualization, generating graphs that depict patterns and trends; however, challenges related to ontology maintenance, alignment, and mapping must be addressed for the approach's optimal utilization.
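The anomaly-detection idea in the results - flagging semantically related events that share a location and time, as in the football game / COVID-19 case study - can be sketched with plain records standing in for enriched triples. The event types and property names below are invented for illustration.

```python
from collections import defaultdict

# Hypothetical event records after semantic enrichment: each has been
# mapped to an ontology-style type plus place and date properties.
events = [
    {"type": "ex:MassGathering", "label": "football game",
     "place": "Milan", "date": "2020-02-19"},
    {"type": "ex:DiseaseOutbreak", "label": "COVID-19 cluster",
     "place": "Milan", "date": "2020-02-19"},
    {"type": "ex:MassGathering", "label": "concert",
     "place": "Rome", "date": "2020-03-01"},
]

def co_located_events(events):
    """Group events by (place, date) and flag groups that mix event types -
    a crude stand-in for the paper's knowledge-graph anomaly detection."""
    by_key = defaultdict(list)
    for e in events:
        by_key[(e["place"], e["date"])].append(e)
    return [group for group in by_key.values()
            if len({e["type"] for e in group}) > 1]

hits = co_located_events(events)
print(hits[0][0]["label"], "<->", hits[0][1]["label"])
```

In the actual framework these joins happen over RDF in the triple store, where ontology classes let the query range over all mass gatherings and all outbreak events, not just hard-coded labels.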

{"title":"Three-layered semantic framework for public health intelligence.","authors":"Sathvik Guru Rao, Pranitha Rokkam, Bide Zhang, Astghik Sargsyan, Abish Kaladharan, Priya Sethumadhavan, Marc Jacobs, Martin Hofmann-Apitius, Alpha Tom Kodamullil","doi":"10.1186/s13326-025-00338-1","DOIUrl":"10.1186/s13326-025-00338-1","url":null,"abstract":"<p><strong>Background: </strong>Disease surveillance systems play a crucial role in monitoring and preventing infectious diseases. However, the current landscape, primarily focused on fragmented health data, poses challenges to contextual understanding and decision-making. This paper addresses this issue by proposing a semantic framework using ontologies to provide a unified data representation for seamless integration. The paper demonstrates the effectiveness of this approach using a case study of a COVID-19 incident at a football game in Italy.</p><p><strong>Method: </strong>In this study, we undertook a comprehensive approach to gather and analyze data for the development of ontologies within the realm of pandemic intelligence. Multiple ontologies were meticulously crafted to cater to different domains related to pandemic intelligence, such as healthcare systems, mass gatherings, travel, and diseases. The ontologies were classified into top-level, domain, and application layers. This classification facilitated the development of a three-layered architecture, promoting reusability, and consistency in knowledge representation, and serving as the backbone of our semantic framework.</p><p><strong>Result: </strong>Through the utilization of our semantic framework, we accomplished semantic enrichment of both structured and unstructured data. The integration of data from diverse sources involved mapping to ontology concepts, leading to the creation and storage of RDF triples in the triple store. 
This process resulted in the construction of linked data, ultimately enhancing the discoverability and accessibility of valuable insights. Furthermore, our anomaly detection algorithm effectively leveraged knowledge graphs extracted from the triple store, employing semantic relationships to discern patterns and anomalies within the data. Notably, this capability was exemplified by the identification of correlations between a football game and a COVID-19 event occurring at the same location and time.</p><p><strong>Conclusion: </strong>The framework showcased its capability to address intricate, multi-domain queries and support diverse levels of detail. Additionally, it demonstrated proficiency in data analysis and visualization, generating graphs that depict patterns and trends; however, challenges related to ontology maintenance, alignment, and mapping must be addressed for the approach's optimal utilization.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"16 1","pages":"17"},"PeriodicalIF":2.0,"publicationDate":"2025-09-15","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC12439389/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"145069053","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
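The co-location check highlighted in this abstract — linking a football game and a COVID-19 event at the same place and time — can be sketched over plain (subject, predicate, object) tuples. This is only an illustration of the idea: a real deployment would query the triple store (e.g. via SPARQL), and every entity and predicate name below is invented, not taken from the paper's ontologies.

```python
# Toy knowledge graph as a list of (subject, predicate, object) triples.
triples = [
    ("ex:FootballMatch1", "rdf:type", "ex:MassGathering"),
    ("ex:FootballMatch1", "ex:location", "ex:Milan"),
    ("ex:FootballMatch1", "ex:date", "2020-02-19"),
    ("ex:CovidCluster1", "rdf:type", "ex:DiseaseEvent"),
    ("ex:CovidCluster1", "ex:location", "ex:Milan"),
    ("ex:CovidCluster1", "ex:date", "2020-02-19"),
]

def objects(s, p):
    """Return the set of objects for a given subject and predicate."""
    return {o for (s2, p2, o) in triples if s2 == s and p2 == p}

def of_type(t):
    """Return all subjects declared with rdf:type t."""
    return {s for (s, p, o) in triples if p == "rdf:type" and o == t}

def co_occurrences():
    """Pair each mass gathering with disease events sharing a location and date."""
    pairs = []
    for g in of_type("ex:MassGathering"):
        for d in of_type("ex:DiseaseEvent"):
            if (objects(g, "ex:location") & objects(d, "ex:location")
                    and objects(g, "ex:date") & objects(d, "ex:date")):
                pairs.append((g, d))
    return pairs
```

Here `co_occurrences()` would flag the match/cluster pair, mirroring the kind of semantic relationship the paper's anomaly detection algorithm exploits at scale.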
Citations: 0
A prototype ETL pipeline that uses HL7 FHIR RDF resources when deploying pure functions to enrich knowledge graph patient data.
IF 2 3区 工程技术 Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2025-09-01 DOI: 10.1186/s13326-025-00335-4
Adeel Ansari, Marisa Conte, Allen Flynn, Avanti Paturkar

Background: For clinical care and research, knowledge graphs with patient data can be enriched by extracting parameters from a knowledge graph and then using them as inputs to compute new patient features with pure functions. Systematic and transparent methods for enriching knowledge graphs with newly computed patient features are of interest. When enriching the patient data in knowledge graphs this way, existing ontologies and well-known data resource standards can help promote semantic interoperability.

Results: We developed and tested a new data processing pipeline for extracting, computing, and returning newly computed results to a large knowledge graph populated with electronic health record and patient survey data. We show that RDF data resource types already specified by Health Level 7's FHIR RDF effort can be programmatically validated and then used by this new data processing pipeline to represent newly derived patient-level features.

Conclusions: Knowledge graph technology can be augmented with standards-based semantic data processing pipelines for deploying and tracing the use of pure functions to derive new patient-level features from existing data. Semantic data processing pipelines enable research enterprises to report on new patient-level computations of interest with linked metadata that details the origin and background of every new computation.
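The extract-compute-return loop described above can be sketched as follows. This is a hedged illustration, not the paper's pipeline: BMI is an invented stand-in for a derived patient-level feature, plain tuples stand in for FHIR RDF resources, and all predicate names (including the provenance link) are assumptions.

```python
# Toy patient graph as (subject, predicate, object) tuples.
graph = [
    ("ex:Patient1", "ex:heightMeters", 1.75),
    ("ex:Patient1", "ex:weightKg", 70.0),
]

def get_value(g, subject, predicate):
    """Extract a single parameter value from the graph."""
    for s, p, o in g:
        if s == subject and p == predicate:
            return o
    raise KeyError((subject, predicate))

def bmi(height_m, weight_kg):
    """Pure function: its output depends only on its inputs, so the
    derivation is reproducible and traceable."""
    return round(weight_kg / height_m ** 2, 1)

def enrich(g, subject):
    """Extract parameters, apply the pure function, and return new
    triples for the derived feature plus a provenance link."""
    h = get_value(g, subject, "ex:heightMeters")
    w = get_value(g, subject, "ex:weightKg")
    return [
        (subject, "ex:bmi", bmi(h, w)),
        (subject, "ex:bmiDerivedBy", "ex:bmiFunction-v1"),  # traces the computation
    ]

# Write the newly computed feature back into the graph.
graph.extend(enrich(graph, "ex:Patient1"))
```

The provenance triple is the piece that makes the enrichment transparent: every derived value stays linked to the (versioned) function that produced it, which is the property the abstract's "linked metadata" claim rests on.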

Citations: 0
Mapping between clinical and preclinical terminologies: eTRANSAFE's Rosetta stone approach.
IF 2 3区 工程技术 Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY Pub Date : 2025-08-21 DOI: 10.1186/s13326-025-00337-2
Erik M van Mulligen, Rowan Parry, Johan van der Lei, Jan A Kors

Background: The eTRANSAFE project developed tools that support translational research. One of the challenges in this project was to combine preclinical and clinical data, which are coded with different terminologies and granularities, and are expressed as single pre-coordinated, clinical concepts and as combinations of preclinical concepts from different terminologies. This study develops and evaluates the Rosetta Stone approach, which maps combinations of preclinical concepts to clinical, pre-coordinated concepts, allowing for different levels of exactness of mappings.

Methods: Concepts from preclinical and clinical terminologies used in eTRANSAFE have been mapped to the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT). SNOMED CT acts as an intermediary terminology that provides the semantics to bridge between pre-coordinated clinical concepts and combinations of preclinical concepts with different levels of granularity. The mappings from clinical terminologies to SNOMED CT were taken from existing resources, while mappings from the preclinical terminologies to SNOMED CT were manually created. A coordination template defines the relation types that can be explored for a mapping and assigns a penalty score that reflects the inexactness of the mapping. A subset of 60 pre-coordinated concepts was mapped both with the Rosetta Stone semantic approach and with a lexical term matching approach. Both results were manually evaluated.

Results: A total of 34,308 concepts from preclinical terminologies (Histopathology terminology, Standard for Exchange of Nonclinical Data (SEND) code lists, Mouse Adult Gross Anatomy Ontology) and a clinical terminology (MedDRA) were mapped to SNOMED CT as the intermediary bridging terminology. A terminology service has been developed that returns dynamically the exact and inexact mappings between preclinical and clinical concepts. On the evaluation set, the precision of the mappings from the terminology service was high (95%), much higher than for lexical term matching (22%).

Conclusion: The Rosetta Stone approach uses a semantically rich intermediate terminology to map between pre-coordinated clinical concepts and a combination of preclinical concepts with different levels of exactness. The possibility to generate not only exact but also inexact mappings allows to relate larger amounts of preclinical and clinical data, which can be helpful in translational use cases.
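The penalty-scored mapping idea can be sketched in miniature: a pre-coordinated clinical concept is decomposed into intermediary concepts, a combination of preclinical codes is mapped onto the same intermediary layer, and each relation type the coordination template allows adds a penalty reflecting inexactness. All concept names, relations, and penalty values below are invented for illustration and are not the paper's actual SNOMED CT mappings.

```python
# Decomposition of a pre-coordinated clinical concept into intermediary concepts.
clinical_to_intermediary = {
    "MedDRA:Hepatic necrosis": {"finding": "SCT:Necrosis", "site": "SCT:Liver"},
}

# Preclinical codes mapped onto the same intermediary vocabulary.
preclinical_to_intermediary = {
    "HPATH:Necrosis": "SCT:Necrosis",
    "MA:liver": "SCT:Liver",
    "MA:liver lobe": "SCT:LiverLobe",
}

# Relation types the coordination template allows, with penalty scores
# (0 = exact match; higher = less exact).
relation_penalty = {"same-as": 0, "part-of": 1}

# Relations known to hold between intermediary concepts.
intermediary_relations = {("SCT:LiverLobe", "SCT:Liver"): "part-of"}

def match_site(candidate, target):
    """Penalty for mapping a candidate site to the target site, or None."""
    if candidate == target:
        return relation_penalty["same-as"]
    rel = intermediary_relations.get((candidate, target))
    return relation_penalty[rel] if rel else None

def map_combination(finding_code, site_code, clinical_code):
    """Score a (finding, site) preclinical combination against a
    pre-coordinated clinical concept; None means no mapping."""
    target = clinical_to_intermediary[clinical_code]
    if preclinical_to_intermediary[finding_code] != target["finding"]:
        return None
    return match_site(preclinical_to_intermediary[site_code], target["site"])
```

Under this toy model, necrosis coded at the liver maps exactly (penalty 0), while necrosis coded at a liver lobe maps inexactly through a part-of relation (penalty 1) — the same trade-off that lets the Rosetta Stone service return both exact and inexact mappings.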

Citations: 0