
Latest articles from the Journal of Biomedical Semantics

Semantically enabling clinical decision support recommendations.
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-07-18 | DOI: 10.1186/s13326-023-00285-9
Oshani Seneviratne, Amar K Das, Shruthi Chari, Nkechinyere N Agu, Sabbir M Rashid, Jamie McCusker, Jade S Franklin, Miao Qi, Kristin P Bennett, Ching-Hua Chen, James A Hendler, Deborah L McGuinness

Background: Clinical decision support systems have been widely deployed to guide healthcare decisions on patient diagnosis, treatment choices, and patient management through evidence-based recommendations. These recommendations are typically derived from clinical practice guidelines created by clinical specialties or healthcare organizations. Although many different technical approaches have been used to encode guideline recommendations into decision support systems, much of the previous work has not focused on enabling system-generated recommendations through the formalization of changes in a guideline, the provenance of a recommendation, and the applicability of the evidence. Prior work indicates that guideline-derived recommendations do not always meet healthcare providers' needs, for reasons such as lack of relevance, lack of transparency, time pressure, and limited applicability to their clinical practice.

Results: To enhance the capabilities of clinical decision support systems, we introduce several semantic techniques that model diseases based on clinical practice guidelines, the provenance of those guidelines, and the study cohorts they are based on. We have explored ways to equip clinical decision support systems with semantic technologies that can represent and link to details in related items from the scientific literature, quickly adapt to changing guideline information, identify gaps, and support personalized explanations. Previous semantics-driven clinical decision systems offer limited support in all of these aspects. We present ontologies and Semantic Web-based software tools in three distinct areas, unified using a standard set of ontologies and a custom-built knowledge graph framework: (i) guideline modeling to characterize diseases, (ii) guideline provenance to attach evidence from authoritative sources to treatment decisions, and (iii) study cohort modeling to identify relevant research publications for complicated patients.

Conclusions: We have enhanced existing, evidence-based knowledge by developing ontologies and software that enable clinicians to conveniently access guideline updates and provenance, as well as gather additional information from research studies applicable to their patients' unique circumstances. Our software solutions leverage many widely used biomedical ontologies and build upon decades of knowledge representation and reasoning work, leading to explainable results.
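
To make the guideline-provenance idea concrete, here is a minimal sketch (not the authors' actual framework or ontologies) of how a guideline-derived recommendation could be linked to its source guideline with W3C PROV terms using Python's rdflib; every namespace, class, and identifier below is a hypothetical placeholder.

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS
from rdflib.namespace import PROV

# Hypothetical namespace for this illustration; not taken from the paper.
EX = Namespace("http://example.org/cds#")

g = Graph()
g.bind("ex", EX)
g.bind("prov", PROV)

rec = EX["recommendation/first-line-therapy"]
guideline = EX["guideline/diabetes-care-2021"]
evidence = EX["publication/source-study"]  # placeholder for a cited study

# The recommendation itself, with a human-readable label.
g.add((rec, RDF.type, EX.TreatmentRecommendation))
g.add((rec, RDFS.label, Literal("Illustrative first-line therapy recommendation")))

# Provenance: which guideline the recommendation was derived from,
# and which publication that guideline in turn quotes as evidence.
g.add((rec, PROV.wasDerivedFrom, guideline))
g.add((guideline, RDF.type, PROV.Entity))
g.add((guideline, PROV.wasQuotedFrom, evidence))

print(g.serialize(format="turtle"))
```

A decision support front end could then traverse such prov:wasDerivedFrom links to show a clinician where a recommendation comes from and which evidence backs it.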

Citations: 0
FAIR-Checker: supporting digital resource findability and reuse with Knowledge Graphs and Semantic Web standards.
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-07-01 | DOI: 10.1186/s13326-023-00289-5
Alban Gaignard, Thomas Rosnet, Frédéric De Lamotte, Vincent Lefort, Marie-Dominique Devignes

The current rise of Open Science and Reproducibility in the Life Sciences requires the creation of rich, machine-actionable metadata in order to better share and reuse biological digital resources such as datasets, bioinformatics tools, training materials, etc. For this purpose, FAIR principles have been defined for both data and metadata and adopted by large communities, leading to the definition of specific metrics. However, automatic FAIRness assessment is still difficult because computational evaluations frequently require technical expertise and can be time-consuming. As a first step to address these issues, we propose FAIR-Checker, a web-based tool to assess the FAIRness of metadata presented by digital resources. FAIR-Checker offers two main facets: a "Check" module providing a thorough metadata evaluation and recommendations, and an "Inspect" module which assists users in improving metadata quality and therefore the FAIRness of their resource. FAIR-Checker leverages Semantic Web standards and technologies such as SPARQL queries and SHACL constraints to automatically assess FAIR metrics. Users are notified of missing, necessary, or recommended metadata for various resource categories. We evaluate FAIR-Checker both in the context of improving the FAIRification of individual resources through better metadata and in analyzing the FAIRness of more than 25,000 bioinformatics software descriptions.
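
The abstract names SPARQL queries and SHACL constraints as the assessment machinery; the sketch below is only a generic illustration of that combination (not FAIR-Checker's actual checks), using rdflib and pySHACL on a toy metadata record. The shape, properties, and metadata values are assumptions chosen for the example.

```python
from rdflib import Graph
from pyshacl import validate

# Toy metadata record for a digital resource (illustrative only).
metadata_ttl = """
@prefix dct:    <http://purl.org/dc/terms/> .
@prefix schema: <https://schema.org/> .
@prefix ex:     <http://example.org/> .

ex:my-tool a schema:SoftwareApplication ;
    dct:title "My bioinformatics tool" .
"""

# SHACL shape requiring a license, roughly the kind of constraint a
# FAIRness check for the "Reusable" principle might apply.
shapes_ttl = """
@prefix sh:     <http://www.w3.org/ns/shacl#> .
@prefix dct:    <http://purl.org/dc/terms/> .
@prefix schema: <https://schema.org/> .
@prefix ex:     <http://example.org/> .

ex:LicenseShape a sh:NodeShape ;
    sh:targetClass schema:SoftwareApplication ;
    sh:property [
        sh:path dct:license ;
        sh:minCount 1 ;
        sh:message "Resource metadata should declare a license." ;
    ] .
"""

data = Graph().parse(data=metadata_ttl, format="turtle")
shapes = Graph().parse(data=shapes_ttl, format="turtle")

# SPARQL probe: does the resource declare a license at all?
has_license = data.query(
    "ASK { ?r <http://purl.org/dc/terms/license> ?l }").askAnswer
print("license found via SPARQL:", has_license)

# SHACL validation: conforms is False here because the license is missing.
conforms, _, report_text = validate(data, shacl_graph=shapes)
print("conforms to shape:", conforms)
print(report_text)
```

A real assessment tool would run many such probes, roughly one per FAIR metric, and turn the failing ones into recommendations for the metadata author.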

Citations: 1
Features of a FAIR vocabulary.
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-06-01 | DOI: 10.1186/s13326-023-00286-8
Fuqi Xu, Nick Juty, Carole Goble, Simon Jupp, Helen Parkinson, Mélanie Courtot

Background: The Findable, Accessible, Interoperable and Reusable (FAIR) Principles explicitly require the use of FAIR vocabularies, but what precisely constitutes a FAIR vocabulary remains unclear. Being able to define FAIR vocabularies, identify their features, and provide assessment approaches against those features can guide the development of vocabularies.

Results: We differentiate data, data resources, and vocabularies used for FAIR, examine the application of the FAIR Principles to vocabularies, align their requirements with the Open Biomedical Ontologies principles, and propose FAIR Vocabulary Features (FVFs). We also design assessment approaches for FAIR vocabularies by mapping the FVFs to existing FAIR assessment indicators. Finally, we demonstrate how they can be used for evaluating and improving vocabularies, using exemplary biomedical vocabularies.

Conclusions: Our work proposes features of FAIR vocabularies and corresponding indicators for assessing the FAIR levels of different types of vocabularies, identifies use cases for vocabulary engineers, and guides the evolution of vocabularies.

Citations: 2
Multiple sampling schemes and deep learning improve active learning performance in drug-drug interaction information retrieval analysis from the literature.
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-05-30 | DOI: 10.1186/s13326-023-00287-7
Weixin Xie, Kunjie Fan, Shijun Zhang, Lang Li

Background: Drug-drug interaction (DDI) information retrieval (IR) from the PubMed literature is an important natural language processing (NLP) task. For the first time, active learning (AL) is studied in DDI IR analysis. DDI IR analysis from PubMed abstracts faces the challenge of relatively few positive DDI samples among overwhelmingly many negative samples. Random negative sampling and positive sampling are purposely designed to improve the efficiency of AL analysis, and the consistency of the two sampling schemes is shown in the paper.

Results: PubMed abstracts are divided into two pools. The screened pool contains all abstracts that pass the DDI keyword query in PubMed, while the unscreened pool includes all other abstracts. At a prespecified recall rate of 0.95, DDI IR analysis precision is evaluated and compared. In the screened pool IR analysis using a support vector machine (SVM), similarity sampling plus uncertainty sampling improves precision over uncertainty sampling alone, from 0.89 to 0.92. In the unscreened pool IR analysis, integrating random negative sampling, positive sampling, and similarity sampling improves precision over uncertainty sampling alone, from 0.72 to 0.81. When the SVM is replaced with a deep learning method, all sampling schemes consistently improve DDI AL analysis in both the screened and unscreened pools. Deep learning yields a significant improvement in precision over SVM: 0.96 vs. 0.92 in the screened pool and 0.90 vs. 0.81 in the unscreened pool.

Conclusions: By integrating various sampling schemes and deep learning algorithms into AL, DDI IR analysis from the literature is significantly improved. Random negative sampling and positive sampling are highly effective methods for improving AL analysis when positive and negative samples are extremely imbalanced.
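
The abstract does not give the authors' exact pipeline or features, so the following scikit-learn sketch only illustrates the two ideas it names, SVM-based uncertainty sampling and random negative sampling from a large negative pool, on synthetic data; seed sizes, batch sizes, and feature dimensions are arbitrary placeholders.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for abstract feature vectors: few positives (DDI-relevant
# abstracts) among overwhelmingly many negatives, mimicking the imbalance.
X, y = make_classification(n_samples=5000, n_features=50, weights=[0.97],
                           random_state=0)
pos_idx, neg_idx = np.where(y == 1)[0], np.where(y == 0)[0]

# Seed the labeled set with a handful of examples from each class.
labeled = list(rng.choice(pos_idx, 20, replace=False)) + \
          list(rng.choice(neg_idx, 80, replace=False))
unlabeled = sorted(set(range(len(y))) - set(labeled))

for al_round in range(5):
    clf = SVC(kernel="linear", class_weight="balanced").fit(X[labeled], y[labeled])

    # Uncertainty sampling: unlabeled points closest to the decision boundary.
    margins = np.abs(clf.decision_function(X[unlabeled]))
    uncertain = [unlabeled[i] for i in np.argsort(margins)[:20]]

    # Random negative sampling: draw extra negatives from the large pool so the
    # next model also sees easy negatives, not only near-boundary points.
    negatives = [i for i in unlabeled if y[i] == 0]
    random_neg = list(rng.choice(negatives, 20, replace=False))

    batch = list(dict.fromkeys(uncertain + random_neg))  # de-duplicated query batch
    labeled.extend(batch)          # in a real study, an annotator labels these
    unlabeled = [i for i in unlabeled if i not in set(batch)]
    print(f"round {al_round}: labeled abstracts = {len(labeled)}")
```

Precision at a fixed recall (0.95 in the paper) would then be computed on held-out labeled data after each round to compare the sampling schemes.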

Citations: 0
Constructing a knowledge graph for open government data: the case of Nova Scotia disease datasets.
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-04-18 | DOI: 10.1186/s13326-023-00284-w
Enayat Rajabi, Rishi Midha, Jairo Francisco de Souza

The majority of available datasets in open government data are statistical. They are widely published by various governments to be used by the public and data consumers. However, most open government data portals do not provide datasets that meet the five-star Linked Data standard, and the published datasets are isolated from one another even though they are conceptually connected. This paper constructs a knowledge graph for the disease-related datasets of a Canadian government data portal, Nova Scotia Open Data. We leveraged Semantic Web technologies to transform the disease-related datasets into the Resource Description Framework (RDF) and enriched them with semantic rules. An RDF data model using the RDF Data Cube vocabulary was designed in this work to develop a graph that adheres to best practices and standards, allowing for expansion, modification, and flexible re-use. The study also discusses the lessons learned during cross-dimensional knowledge graph construction and the integration of open statistical datasets from multiple sources.
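
As a generic illustration of the RDF Data Cube vocabulary mentioned above (not the paper's actual data model), the rdflib sketch below encodes a single statistical observation for a hypothetical disease dataset; the namespace, dimension, and measure properties, as well as the figure itself, are invented for the example.

```python
from rdflib import Graph, Literal, Namespace, RDF
from rdflib.namespace import XSD

QB = Namespace("http://purl.org/linked-data/cube#")
SDMX_DIM = Namespace("http://purl.org/linked-data/sdmx/2009/dimension#")
EX = Namespace("http://example.org/ns-opendata#")   # hypothetical namespace

g = Graph()
g.bind("qb", QB)
g.bind("sdmx-dimension", SDMX_DIM)
g.bind("ex", EX)

dataset = EX["dataset/reportable-diseases"]
obs = EX["obs/lyme-2019-ns"]

g.add((dataset, RDF.type, QB.DataSet))

# One statistical observation: a disease case count for one year.
# The count of 0 is a placeholder, not real data.
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, dataset))
g.add((obs, SDMX_DIM.refPeriod, Literal("2019", datatype=XSD.gYear)))
g.add((obs, EX.disease, EX["disease/lyme"]))
g.add((obs, EX.caseCount, Literal(0, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```

Each row of a source statistical table would become one such qb:Observation, which is what makes otherwise isolated open datasets queryable as a single graph.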

Citations: 1
The Role of Incremental and Superficial Processing in the Depth Charge Illusion: Experimental and Modeling Evidence
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-04-10 | DOI: 10.1093/jos/ffad003
Dario Paape
The depth charge illusion occurs when compositionally incongruous sentences such as “No detail is too unimportant to be left out” are assigned plausible non-compositional meanings (“Don’t leave out details”). Results of two online reading and judgment experiments show that moving the incongruous degree phrase to the beginning of the sentence in German (lit. “Too unimportant to be left out is surely no detail”) results in an attenuation of this semantic illusion, implying a role for incremental processing. Two further experiments show that readers cannot consistently turn the communicated meaning of depth charge sentences into its opposite, and that acceptability varies greatly between sentences and subjects, which is consistent with superficial interpretation. A meta-analytic fit of the Wiener diffusion model to data from six experiments shows that world knowledge is a systematic driver of the illusion, leading to stable acceptability judgments. Other variables, such as sentiment polarity, influence subjects’ depth of processing. Overall, the results shed new light on the role of superficial processing on the one hand and of communicative competence on the other hand in creating the depth charge illusion. I conclude that the depth charge illusion combines aspects of being a persistent processing “bug” with aspects of being a beneficial communicative “feature”, making it a fascinating object of study.
Citations: 2
Plural and Quantified Protagonists in Free Indirect Discourse and Protagonist Projection
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-04-03 | DOI: 10.1093/jos/ffad004
Márta Abrusán
In this paper I observe a number of new plural and (apparently) quantified examples of free indirect discourse (FID) and protagonist projection (PP). I analyse them within major current theoretical approaches, proposing extensions to these approaches where needed. In order to derive the wide range of readings observed with plural protagonists, I show how we can exploit existing mechanisms for the interpretation of plural anaphora and plural predication. The upshot is that the interpretation of plural examples of perspective shift relies on a remarkable concert of covert semantic and pragmatic operations.
Citations: 0
Are There Pluralities of Worlds?
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-04-03 | DOI: 10.1093/jos/ffad002
V. Schmitt
Indicative conditionals and configurations with neg-raising predicates have been brought up as potential candidates for constructions involving world pluralities. I argue against this hypothesis, showing that cumulativity and quantifiers targeting a plurality’s part structure cannot access the presumed world pluralities. I furthermore argue that this makes worlds special in the sense that the same tests provide evidence for pluralities in various other semantic domains.
Citations: 0
Copredication and Meaning Transfer
IF 1.9 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-04-01 | DOI: 10.1093/jos/ffad001
David Liebesman, Ofra Magidor
Copredication occurs when a sentence receives a true reading despite prima facie ascribing categorically incompatible properties to a single entity. For example, ‘The red book is by Tolstoy’ can have a true reading even though it seems that being red is only a property of physical copies, while being by Tolstoy is only a property of informational texts. A tempting strategy for resolving this tension is to claim that at least one of the predicates has a non-standard interpretation, with the salient proposal involving reinterpretation via meaning transfer. For example, in ‘The red book is by Tolstoy’, one could hold that the predicate ‘by Tolstoy’ is reinterpreted (or on the more specific proposal, transferred) to ascribe a property that physical copies can uncontroversially instantiate, such as expresses an informational text by Tolstoy. On this view, the truth of the copredicational sentence is no longer mysterious. Furthermore, such a reinterpretation view can give a straightforward account of a range of puzzling copredicational sentences involving counting and individuation. Despite these substantial virtues, we will argue that reinterpretation approaches to copredication are untenable. In §1 we introduce reinterpretation views of copredication and contrast them with key alternatives. In §2 we argue against a general reinterpretation theory of copredication on which every copredicational sentence contains at least one reinterpreted predicate. We also raise additional problems for the more specific proposal of implementing reinterpretation via meaning transfer. In §3 we argue against more limited appeals to reinterpretation on which only some copredicational sentences contain reinterpretation. In §4 we criticize a series of arguments in favour of reinterpretation theories. The upshot is that reinterpretation theories of copredication, and in particular, meaning transfer-based accounts, should be rejected.
Citations: 0
The Environmental Conditions, Treatments, and Exposures Ontology (ECTO): connecting toxicology and exposure to human health and beyond.
IF 1.6 | Tier 3 (Engineering & Technology) | Q3 MATHEMATICAL & COMPUTATIONAL BIOLOGY | Pub Date: 2023-02-24 | DOI: 10.1186/s13326-023-00283-x
Lauren E Chan, Anne E Thessen, William D Duncan, Nicolas Matentzoglu, Charles Schmitt, Cynthia J Grondin, Nicole Vasilevsky, Julie A McMurry, Peter N Robinson, Christopher J Mungall, Melissa A Haendel

Background: Evaluating the impact of environmental exposures on organism health is a key goal of modern biomedicine and is critically important in an age of greater pollution and chemicals in our environment. Environmental health utilizes many different research methods and generates a variety of data types. However, to date, no comprehensive database represents the full spectrum of environmental health data. Due to a lack of interoperability between databases, tools for integrating these resources are needed. In this manuscript we present the Environmental Conditions, Treatments, and Exposures Ontology (ECTO), a species-agnostic ontology focused on exposure events that occur as a result of natural and experimental processes, such as diet, work, or research activities. ECTO is intended for use in harmonizing environmental health data resources to support cross-study integration and inference for mechanism discovery.

Methods and findings: ECTO is an ontology designed for describing organismal exposures, covering toxicological research, environmental variables, dietary features, and patient-reported data from surveys. ECTO utilizes the base model established within the Exposure Ontology (ExO). ECTO is developed using a combination of manual curation and Dead Simple OWL Design Patterns (DOSDP); it contains over 2,700 environmental exposure terms and incorporates chemical and environmental ontologies. ECTO is an Open Biological and Biomedical Ontology (OBO) Foundry ontology that is designed for interoperability, reuse, and axiomatization with other ontologies. ECTO terms have been utilized in axioms within the Mondo Disease Ontology to represent diseases caused or influenced by environmental factors, as well as for survey encoding for the Personalized Environment and Genes Study (PEGS).

Conclusions: We constructed ECTO to meet Open Biological and Biomedical Ontology (OBO) Foundry principles to increase translation opportunities between environmental health and other areas of biology. ECTO has a growing community of contributors consisting of toxicologists, public health epidemiologists, and health care providers to provide the necessary expertise for areas that have been identified previously as gaps.
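
DOSDP patterns are YAML templates expanded against tables of term fillers; the short Python sketch below only mimics that underlying idea to show why pattern-based curation scales to thousands of exposure terms. The template text, term IDs, and labels are invented placeholders, not actual ECTO or DOSDP content.

```python
from dataclasses import dataclass

# A "dead simple" pattern in the spirit of DOSDP: one logical template,
# filled row by row from a curated table of term fillers.
PATTERN = (
    "Class: {exposure_id}\n"
    "  Annotations: rdfs:label \"exposure to {stimulus_label}\"\n"
    "  EquivalentTo: 'exposure event' and "
    "'has exposure stimulus' some {stimulus_id}\n"
)

@dataclass
class Filler:
    exposure_id: str
    stimulus_id: str
    stimulus_label: str

# Placeholder rows; a real pattern would be filled from curated term tables.
fillers = [
    Filler("EX:0000001", "CHEBI:placeholder1", "some chemical"),
    Filler("EX:0000002", "ENVO:placeholder2", "some environmental material"),
]

def expand(pattern: str, rows: list) -> str:
    """Generate one Manchester-syntax-like class block per table row."""
    return "\n".join(
        pattern.format(exposure_id=r.exposure_id,
                       stimulus_id=r.stimulus_id,
                       stimulus_label=r.stimulus_label)
        for r in rows
    )

print(expand(PATTERN, fillers))
```

Keeping the logic in one template and the biology in a spreadsheet-like table is what lets a small curation team maintain a large, consistently axiomatized ontology.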

Citations: 0