Pub Date: 2023-07-18 | DOI: 10.1186/s13326-023-00285-9
Oshani Seneviratne, Amar K Das, Shruthi Chari, Nkechinyere N Agu, Sabbir M Rashid, Jamie McCusker, Jade S Franklin, Miao Qi, Kristin P Bennett, Ching-Hua Chen, James A Hendler, Deborah L McGuinness
Background: Clinical decision support systems have been widely deployed to guide healthcare decisions on patient diagnosis, treatment choices, and patient management through evidence-based recommendations. These recommendations are typically derived from clinical practice guidelines created by clinical specialties or healthcare organizations. Although there have been many different technical approaches to encoding guideline recommendations into decision support systems, much of the previous work has not focused on enabling system-generated recommendations through the formalization of changes in a guideline, the provenance of a recommendation, and the applicability of the evidence. Prior work indicates that guideline-derived recommendations do not always meet healthcare providers' needs, for reasons such as lack of relevance, lack of transparency, time pressure, and limited applicability to their clinical practice.
Results: We introduce several semantic techniques that model diseases based on clinical practice guidelines, the provenance of those guidelines, and the study cohorts they are based on, to enhance the capabilities of clinical decision support systems. We have explored ways to equip clinical decision support systems with semantic technologies that can represent and link to details in related items from the scientific literature, quickly adapt to changing information from the guidelines, identify gaps, and support personalized explanations. Previous semantics-driven clinical decision support systems offer limited support in all of these aspects. We present ontologies and Semantic Web-based software tools in three distinct areas, unified using a standard set of ontologies and a custom-built knowledge graph framework: (i) guideline modeling to characterize diseases, (ii) guideline provenance to attach evidence from authoritative sources to treatment decisions, and (iii) study cohort modeling to identify relevant research publications for complicated patients.
Conclusions: We have enhanced existing evidence-based knowledge by developing ontologies and software that enable clinicians to conveniently access updates to guidelines and their provenance, as well as gather additional information from research studies applicable to their patients' unique circumstances. Our software solutions leverage many widely used biomedical ontologies and build upon decades of knowledge representation and reasoning work, leading to explainable results.
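To make the guideline-provenance idea concrete: below is a minimal sketch, assuming rdflib and W3C PROV-O, of linking a recommendation to the guideline edition it was derived from and to the organization behind that edition. The ex: namespace and all identifiers are hypothetical illustrations, not the authors' actual knowledge graph framework.

```python
# Hedged sketch: attach PROV-O provenance to a guideline-derived
# recommendation. All ex: identifiers are invented for illustration.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

PROV = Namespace("http://www.w3.org/ns/prov#")
EX = Namespace("http://example.org/cds#")  # hypothetical namespace

g = Graph()
g.bind("prov", PROV)
g.bind("ex", EX)

# A treatment recommendation derived from a dated guideline edition,
# which is in turn attributed to its publishing organization.
g.add((EX.recommendation42, RDF.type, PROV.Entity))
g.add((EX.recommendation42, PROV.wasDerivedFrom, EX.guideline2021))
g.add((EX.guideline2021, PROV.generatedAtTime,
       Literal("2021-01-01", datatype=XSD.date)))
g.add((EX.guideline2021, PROV.wasAttributedTo, EX.guidelinePublisher))

# Retrieve the evidence trail behind every recommendation.
q = """
PREFIX prov: <http://www.w3.org/ns/prov#>
SELECT ?rec ?src ?agent WHERE {
    ?rec prov:wasDerivedFrom ?src .
    ?src prov:wasAttributedTo ?agent .
}"""
for rec, src, agent in g.query(q):
    print(rec, "derived from", src, "attributed to", agent)
```

Because the provenance triples are ordinary graph data, a new guideline edition can be added alongside the old one, which is one way the "updates to guidelines" use case can be supported.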
{"title":"Semantically enabling clinical decision support recommendations.","authors":"Oshani Seneviratne, Amar K Das, Shruthi Chari, Nkechinyere N Agu, Sabbir M Rashid, Jamie McCusker, Jade S Franklin, Miao Qi, Kristin P Bennett, Ching-Hua Chen, James A Hendler, Deborah L McGuinness","doi":"10.1186/s13326-023-00285-9","DOIUrl":"https://doi.org/10.1186/s13326-023-00285-9","url":null,"abstract":"<p><strong>Background: </strong>Clinical decision support systems have been widely deployed to guide healthcare decisions on patient diagnosis, treatment choices, and patient management through evidence-based recommendations. These recommendations are typically derived from clinical practice guidelines created by clinical specialties or healthcare organizations. Although there have been many different technical approaches to encoding guideline recommendations into decision support systems, much of the previous work has not focused on enabling system generated recommendations through the formalization of changes in a guideline, the provenance of a recommendation, and applicability of the evidence. Prior work indicates that healthcare providers may not find that guideline-derived recommendations always meet their needs for reasons such as lack of relevance, transparency, time pressure, and applicability to their clinical practice.</p><p><strong>Results: </strong>We introduce several semantic techniques that model diseases based on clinical practice guidelines, provenance of the guidelines, and the study cohorts they are based on to enhance the capabilities of clinical decision support systems. We have explored ways to enable clinical decision support systems with semantic technologies that can represent and link to details in related items from the scientific literature and quickly adapt to changing information from the guidelines, identifying gaps, and supporting personalized explanations. Previous semantics-driven clinical decision systems have limited support in all these aspects, and we present the ontologies and semantic web based software tools in three distinct areas that are unified using a standard set of ontologies and a custom-built knowledge graph framework: (i) guideline modeling to characterize diseases, (ii) guideline provenance to attach evidence to treatment decisions from authoritative sources, and (iii) study cohort modeling to identify relevant research publications for complicated patients.</p><p><strong>Conclusions: </strong>We have enhanced existing, evidence-based knowledge by developing ontologies and software that enables clinicians to conveniently access updates to and provenance of guidelines, as well as gather additional information from research studies applicable to their patients' unique circumstances. 
Our software solutions leverage many well-used existing biomedical ontologies and build upon decades of knowledge representation and reasoning work, leading to explainable results.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"14 1","pages":"8"},"PeriodicalIF":1.9,"publicationDate":"2023-07-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10353186/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9847112","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-07-01 | DOI: 10.1186/s13326-023-00289-5
Alban Gaignard, Thomas Rosnet, Frédéric De Lamotte, Vincent Lefort, Marie-Dominique Devignes
The current rise of Open Science and Reproducibility in the Life Sciences requires the creation of rich, machine-actionable metadata to better share and reuse biological digital resources such as datasets, bioinformatics tools, and training materials. For this purpose, FAIR principles have been defined for both data and metadata and adopted by large communities, leading to the definition of specific metrics. However, automatic FAIRness assessment is still difficult because computational evaluations frequently require technical expertise and can be time-consuming. As a first step to address these issues, we propose FAIR-Checker, a web-based tool to assess the FAIRness of metadata presented by digital resources. FAIR-Checker offers two main facets: a "Check" module providing a thorough metadata evaluation and recommendations, and an "Inspect" module which assists users in improving metadata quality and therefore the FAIRness of their resource. FAIR-Checker leverages Semantic Web standards and technologies such as SPARQL queries and SHACL constraints to automatically assess FAIR metrics. Users are notified of missing, necessary, or recommended metadata for various resource categories. We evaluate FAIR-Checker both in improving the FAIRification of individual resources through better metadata and in analyzing the FAIRness of more than 25,000 bioinformatics software descriptions.
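To illustrate the SPARQL/SHACL approach in miniature: the sketch below, using rdflib and pySHACL, checks a resource description for a title and a license, two metadata fields commonly required by FAIR metrics. The shape and the example dataset are invented for illustration and are not FAIR-Checker's actual rule set.

```python
# Hedged sketch: SHACL-based metadata validation in the spirit of
# FAIR-Checker. The dataset and shape below are illustrative only.
from rdflib import Graph
from pyshacl import validate

data = Graph().parse(data="""
@prefix schema: <http://schema.org/> .
<https://example.org/dataset/1> a schema:Dataset ;
    schema:name "Example dataset" .
""", format="turtle")

shapes = Graph().parse(data="""
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix schema: <http://schema.org/> .
<https://example.org/shapes/DatasetShape> a sh:NodeShape ;
    sh:targetClass schema:Dataset ;
    sh:property [ sh:path schema:name ; sh:minCount 1 ] ;
    sh:property [ sh:path schema:license ; sh:minCount 1 ] .
""", format="turtle")

conforms, _, report = validate(data, shacl_graph=shapes)
print(conforms)  # False: the license triple is missing
print(report)    # human-readable report naming the violated constraint
```

The validation report is what makes such a check actionable: it names the missing property, which is the kind of recommendation the "Inspect" module surfaces to users.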
{"title":"FAIR-Checker: supporting digital resource findability and reuse with Knowledge Graphs and Semantic Web standards.","authors":"Alban Gaignard, Thomas Rosnet, Frédéric De Lamotte, Vincent Lefort, Marie-Dominique Devignes","doi":"10.1186/s13326-023-00289-5","DOIUrl":"https://doi.org/10.1186/s13326-023-00289-5","url":null,"abstract":"<p><p>The current rise of Open Science and Reproducibility in the Life Sciences requires the creation of rich, machine-actionable metadata in order to better share and reuse biological digital resources such as datasets, bioinformatics tools, training materials, etc. For this purpose, FAIR principles have been defined for both data and metadata and adopted by large communities, leading to the definition of specific metrics. However, automatic FAIRness assessment is still difficult because computational evaluations frequently require technical expertise and can be time-consuming. As a first step to address these issues, we propose FAIR-Checker, a web-based tool to assess the FAIRness of metadata presented by digital resources. FAIR-Checker offers two main facets: a \"Check\" module providing a thorough metadata evaluation and recommendations, and an \"Inspect\" module which assists users in improving metadata quality and therefore the FAIRness of their resource. FAIR-Checker leverages Semantic Web standards and technologies such as SPARQL queries and SHACL constraints to automatically assess FAIR metrics. Users are notified of missing, necessary, or recommended metadata for various resource categories. We evaluate FAIR-Checker in the context of improving the FAIRification of individual resources, through better metadata, as well as analyzing the FAIRness of more than 25 thousand bioinformatics software descriptions.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"14 1","pages":"7"},"PeriodicalIF":1.9,"publicationDate":"2023-07-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10315041/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9799838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-06-01 | DOI: 10.1186/s13326-023-00286-8
Fuqi Xu, Nick Juty, Carole Goble, Simon Jupp, Helen Parkinson, Mélanie Courtot
Background: The Findable, Accessible, Interoperable and Reusable (FAIR) Principles explicitly require the use of FAIR vocabularies, but what precisely constitutes a FAIR vocabulary remains unclear. Being able to define FAIR vocabularies, identify their features, and provide approaches for assessing vocabularies against those features can guide the development of vocabularies.
Results: We differentiate data, data resources, and vocabularies used for FAIR, examine the application of the FAIR Principles to vocabularies, align their requirements with the Open Biomedical Ontologies principles, and propose FAIR Vocabulary Features (FVFs). We also design assessment approaches for FAIR vocabularies by mapping the FVFs to existing FAIR assessment indicators. Finally, we demonstrate how these can be used for evaluating and improving exemplary biomedical vocabularies.
Conclusions: Our work proposes features of FAIR vocabularies and corresponding indicators for assessing the FAIR levels of different types of vocabularies, identifies use cases for vocabulary engineers, and guides the evolution of vocabularies.
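The sketch below shows what mapping vocabulary features to automatable indicators can look like in practice. The feature names (FVF-1, FVF-2) and the checks are assumptions for illustration, not the published FVF list; the Gene Ontology PURL is used only as an example input.

```python
# Hedged sketch: hypothetical feature-to-indicator mapping for
# assessing a vocabulary. Feature names and checks are illustrative.
import requests

def resolvable(iri: str) -> bool:
    # Findable/Accessible-style indicator: the identifier resolves
    # over standard HTTP(S). (Some servers reject HEAD requests, so a
    # production check might fall back to GET.)
    try:
        resp = requests.head(iri, allow_redirects=True, timeout=10)
        return resp.status_code < 400
    except requests.RequestException:
        return False

def uses_obo_purl(iri: str) -> bool:
    # Interoperable-style indicator: the vocabulary is served from a
    # shared, persistent namespace.
    return iri.startswith("http://purl.obolibrary.org/obo/")

FVF_INDICATORS = {
    "FVF-1: persistent, resolvable identifier": resolvable,
    "FVF-2: shared community namespace": uses_obo_purl,
}

vocab_iri = "http://purl.obolibrary.org/obo/go.owl"  # Gene Ontology PURL
for feature, check in FVF_INDICATORS.items():
    print(feature, "->", "pass" if check(vocab_iri) else "fail")
```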
{"title":"Features of a FAIR vocabulary.","authors":"Fuqi Xu, Nick Juty, Carole Goble, Simon Jupp, Helen Parkinson, Mélanie Courtot","doi":"10.1186/s13326-023-00286-8","DOIUrl":"https://doi.org/10.1186/s13326-023-00286-8","url":null,"abstract":"<p><strong>Background: </strong>The Findable, Accessible, Interoperable and Reusable(FAIR) Principles explicitly require the use of FAIR vocabularies, but what precisely constitutes a FAIR vocabulary remains unclear. Being able to define FAIR vocabularies, identify features of FAIR vocabularies, and provide assessment approaches against the features can guide the development of vocabularies.</p><p><strong>Results: </strong>We differentiate data, data resources and vocabularies used for FAIR, examine the application of the FAIR Principles to vocabularies, align their requirements with the Open Biomedical Ontologies principles, and propose FAIR Vocabulary Features. We also design assessment approaches for FAIR vocabularies by mapping the FVFs with existing FAIR assessment indicators. Finally, we demonstrate how they can be used for evaluating and improving vocabularies using exemplary biomedical vocabularies.</p><p><strong>Conclusions: </strong>Our work proposes features of FAIR vocabularies and corresponding indicators for assessing the FAIR levels of different types of vocabularies, identifies use cases for vocabulary engineers, and guides the evolution of vocabularies.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"14 1","pages":"6"},"PeriodicalIF":1.9,"publicationDate":"2023-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10236849/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9672525","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-05-30 | DOI: 10.1186/s13326-023-00287-7
Weixin Xie, Kunjie Fan, Shijun Zhang, Lang Li
Background: Drug-drug interaction (DDI) information retrieval (IR) is an important natural language processing (NLP) task over the PubMed literature. For the first time, active learning (AL) is studied for DDI IR analysis. DDI IR from PubMed abstracts faces the challenge of relatively few positive DDI samples among overwhelmingly many negative samples. Random negative sampling and positive sampling are purposely designed to improve the efficiency of the AL analysis, and the consistency of random negative sampling and positive sampling is shown in the paper.
Results: PubMed abstracts are divided into two pools: the screened pool contains all abstracts that pass the DDI keyword query in PubMed, while the unscreened pool includes all other abstracts. At a prespecified recall rate of 0.95, DDI IR precision is evaluated and compared. In the screened-pool IR analysis using a support vector machine (SVM), similarity sampling plus uncertainty sampling improves precision over uncertainty sampling alone, from 0.89 to 0.92. In the unscreened-pool IR analysis, integrating random negative sampling, positive sampling, and similarity sampling improves precision over uncertainty sampling alone, from 0.72 to 0.81. When the SVM is replaced with a deep learning method, all sampling schemes consistently improve DDI AL analysis in both the screened and unscreened pools. Deep learning yields a significant improvement in precision over SVM: 0.96 vs. 0.92 in the screened pool, and 0.90 vs. 0.81 in the unscreened pool.
Conclusions: By integrating various sampling schemes and deep learning algorithms into AL, DDI IR from the literature is significantly improved. Random negative sampling and positive sampling are highly effective in improving AL analysis when positive and negative samples are extremely imbalanced.
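A minimal sketch of two of the sampling ideas working together: random negative sampling to tame the class imbalance, and uncertainty sampling to choose the next batch of abstracts to label. It uses scikit-learn with toy feature matrices standing in for vectorized PubMed abstracts; the 10:1 negative ratio, batch size, and data shapes are assumptions, not the paper's settings.

```python
# Hedged sketch: random negative sampling + SVM uncertainty sampling
# for an active learning loop. Toy data; not the paper's pipeline.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def random_negative_sample(X_neg, n):
    # Down-sample the overwhelming negative pool to n examples.
    idx = rng.choice(len(X_neg), size=n, replace=False)
    return X_neg[idx]

def uncertainty_sample(clf, X_unlabeled, batch_size=20):
    # Select the abstracts closest to the SVM decision boundary.
    margins = np.abs(clf.decision_function(X_unlabeled))
    return np.argsort(margins)[:batch_size]

# Toy feature matrices standing in for vectorized abstracts:
# 50 positives among 5000 negatives, i.e., extreme imbalance.
X_pos = rng.normal(1.0, 1.0, size=(50, 8))
X_neg = rng.normal(-1.0, 1.0, size=(5000, 8))

X_neg_s = random_negative_sample(X_neg, 10 * len(X_pos))  # assumed 10:1
X = np.vstack([X_pos, X_neg_s])
y = np.array([1] * len(X_pos) + [0] * len(X_neg_s))

clf = SVC(kernel="linear").fit(X, y)
X_unlabeled = rng.normal(0.0, 1.5, size=(1000, 8))
query_idx = uncertainty_sample(clf, X_unlabeled)
print("next abstracts to label:", query_idx)
```

In a full AL loop, the queried abstracts would be labeled, added to the training set, and the classifier refit; swapping the SVM for a deep model changes only the scoring function used by the uncertainty step.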
{"title":"Multiple sampling schemes and deep learning improve active learning performance in drug-drug interaction information retrieval analysis from the literature.","authors":"Weixin Xie, Kunjie Fan, Shijun Zhang, Lang Li","doi":"10.1186/s13326-023-00287-7","DOIUrl":"10.1186/s13326-023-00287-7","url":null,"abstract":"<p><strong>Background: </strong>Drug-drug interaction (DDI) information retrieval (IR) is an important natural language process (NLP) task from the PubMed literature. For the first time, active learning (AL) is studied in DDI IR analysis. DDI IR analysis from PubMed abstracts faces the challenges of relatively small positive DDI samples among overwhelmingly large negative samples. Random negative sampling and positive sampling are purposely designed to improve the efficiency of AL analysis. The consistency of random negative sampling and positive sampling is shown in the paper.</p><p><strong>Results: </strong>PubMed abstracts are divided into two pools. Screened pool contains all abstracts that pass the DDI keywords query in PubMed, while unscreened pool includes all the other abstracts. At a prespecified recall rate of 0.95, DDI IR analysis precision is evaluated and compared. In screened pool IR analysis using supporting vector machine (SVM), similarity sampling plus uncertainty sampling improves the precision over uncertainty sampling, from 0.89 to 0.92 respectively. In the unscreened pool IR analysis, the integrated random negative sampling, positive sampling, and similarity sampling improve the precision over uncertainty sampling along, from 0.72 to 0.81 respectively. When we change the SVM to a deep learning method, all sampling schemes consistently improve DDI AL analysis in both screened pool and unscreened pool. Deep learning has significant improvement of precision over SVM, 0.96 vs. 0.92 in screened pool, and 0.90 vs. 0.81 in the unscreened pool, respectively.</p><p><strong>Conclusions: </strong>By integrating various sampling schemes and deep learning algorithms into AL, the DDI IR analysis from literature is significantly improved. The random negative sampling and positive sampling are highly effective methods in improving AL analysis where the positive and negative samples are extremely imbalanced.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"14 1","pages":"5"},"PeriodicalIF":1.9,"publicationDate":"2023-05-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10228061/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9740363","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-04-18 | DOI: 10.1186/s13326-023-00284-w
Enayat Rajabi, Rishi Midha, Jairo Francisco de Souza
The majority of available datasets in open government data are statistical. They are widely published by various governments for use by the public and data consumers. However, most open government data portals do not provide datasets that meet the five-star Linked Data standard, and the published datasets are isolated from one another even though they are conceptually connected. This paper constructs a knowledge graph for the disease-related datasets of a Canadian government data portal, Nova Scotia Open Data. We leveraged Semantic Web technologies to transform the disease-related datasets into the Resource Description Framework (RDF) and enriched them with semantic rules. An RDF data model using the RDF Data Cube vocabulary was designed in this work to develop a graph that adheres to best practices and standards, allowing for expansion, modification, and flexible re-use. The study also discusses the lessons learned during cross-dimensional knowledge graph construction and the integration of open statistical datasets from multiple sources.
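A minimal sketch of how one statistical record can be encoded as an RDF Data Cube observation with rdflib. The dataset IRI, dimension properties, and values below are invented for illustration and do not reflect the actual Nova Scotia schema.

```python
# Hedged sketch: one statistical record as a qb:Observation.
# All ex: identifiers and values are illustrative placeholders.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

QB = Namespace("http://purl.org/linked-data/cube#")
EX = Namespace("http://example.org/ns-opendata/")  # hypothetical

g = Graph()
g.bind("qb", QB)
g.bind("ex", EX)

obs = EX["obs/2019-lyme-halifax"]
g.add((obs, RDF.type, QB.Observation))
g.add((obs, QB.dataSet, EX.lymeDiseaseCounts))          # the cube
g.add((obs, EX.refPeriod, Literal("2019", datatype=XSD.gYear)))
g.add((obs, EX.region, Literal("Halifax")))              # dimension
g.add((obs, EX.caseCount, Literal(42, datatype=XSD.integer)))  # measure

print(g.serialize(format="turtle"))
```

Modeling each record as an observation against shared dimension properties is what lets conceptually connected but separately published datasets be queried together in one graph.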
{"title":"Constructing a knowledge graph for open government data: the case of Nova Scotia disease datasets.","authors":"Enayat Rajabi, Rishi Midha, Jairo Francisco de Souza","doi":"10.1186/s13326-023-00284-w","DOIUrl":"https://doi.org/10.1186/s13326-023-00284-w","url":null,"abstract":"<p><p>The majority of available datasets in open government data are statistical. They are widely published by various governments to be used by the public and data consumers. However, most open government data portals do not provide the five-star Linked Data standard datasets. The published datasets are isolated from one another while conceptually connected. This paper constructs a knowledge graph for the disease-related datasets of a Canadian government data portal, Nova Scotia Open Data. We leveraged the Semantic Web technologies to transform the disease-related datasets into Resource Description Framework (RDF) and enriched them with semantic rules. An RDF data model using the RDF Cube vocabulary was designed in this work to develop a graph that adheres to best practices and standards, allowing for expansion, modification and flexible re-use. The study also discusses the lessons learned during the cross-dimensional knowledge graph construction and integration of open statistical datasets from multiple sources.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"14 1","pages":"4"},"PeriodicalIF":1.9,"publicationDate":"2023-04-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC10111831/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9478716","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The depth charge illusion occurs when compositionally incongruous sentences such as No detail is too unimportant to be left out are assigned plausible non-compositional meanings (Don’t leave out details). Results of two online reading and judgment experiments show that moving the incongruous degree phrase to the beginning of the sentence in German (lit. “Too unimportant to be left out is surely no detail”) results in an attenuation of this semantic illusion, implying a role for incremental processing. Two further experiments show that readers cannot consistently turn the communicated meaning of depth charge sentences into its opposite, and that acceptability varies greatly between sentences and subjects, which is consistent with superficial interpretation. A meta-analytic fit of the Wiener diffusion model to data from six experiments shows that world knowledge is a systematic driver of the illusion, leading to stable acceptability judgments. Other variables, such as sentiment polarity, influence subjects’ depth of processing. Overall, the results shed new light on the role of superficial processing on the one hand and of communicative competence on the other hand in creating the depth charge illusion. I conclude that the depth charge illusion combines aspects of being a persistent processing “bug” with aspects of being a beneficial communicative “feature”, making it a fascinating object of study.
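For readers unfamiliar with the model: below is a minimal simulation sketch of the Wiener diffusion process underlying the meta-analytic fit, in which noisy evidence accumulates toward an "accept" or "reject" boundary and the drift rate can stand in for world-knowledge support of the plausible reading. Parameter values are illustrative, not the paper's fitted estimates.

```python
# Hedged sketch: Euler simulation of a two-boundary Wiener diffusion
# process producing binary judgments and response times.
import numpy as np

def wiener_trial(v, a, z=0.5, dt=0.001, sigma=1.0, rng=None):
    # v: drift rate; a: boundary separation; z: relative start point.
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return ("accept" if x >= a else "reject"), t

rng = np.random.default_rng(1)
# Positive drift ~ world knowledge favoring the plausible reading.
results = [wiener_trial(v=1.5, a=2.0, rng=rng) for _ in range(500)]
p_accept = np.mean([resp == "accept" for resp, _ in results])
print(f"P(accept) = {p_accept:.2f}")
```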
{"title":"The Role of Incremental and Superficial Processing in the Depth Charge Illusion: Experimental and Modeling Evidence","authors":"Dario Paape","doi":"10.1093/jos/ffad003","DOIUrl":"https://doi.org/10.1093/jos/ffad003","url":null,"abstract":"\u0000 The depth charge illusion occurs when compositionally incongruous sentences such as No detail is too unimportant to be left out are assigned plausible non-compositional meanings (Don’t leave out details). Results of two online reading and judgment experiments show that moving the incongruous degree phrase to the beginning of the sentence in German (lit. “Too unimportant to be left out is surely no detail”) results in an attenuation of this semantic illusion, implying a role for incremental processing. Two further experiments show that readers cannot consistently turn the communicated meaning of depth charge sentences into its opposite, and that acceptability varies greatly between sentences and subjects, which is consistent with superficial interpretation. A meta-analytic fit of the Wiener diffusion model to data from six experiments shows that world knowledge is a systematic driver of the illusion, leading to stable acceptability judgments. Other variables, such as sentiment polarity, influence subjects’ depth of processing. Overall, the results shed new light on the role of superficial processing on the one hand and of communicative competence on the other hand in creating the depth charge illusion. I conclude that the depth charge illusion combines aspects of being a persistent processing “bug” with aspects of being a beneficial communicative “feature”, making it a fascinating object of study.","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"40 1","pages":"93-125"},"PeriodicalIF":1.9,"publicationDate":"2023-04-10","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77665550","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
In this paper I observe a number of new plural and (apparently) quantified examples of free indirect discourse (FID) and protagonist projection (PP). I analyse them within major current theoretical approaches, proposing extensions to these approaches where needed. In order to derive the wide range of readings observed with plural protagonists, I show how we can exploit existing mechanisms for the interpretation of plural anaphora and plural predication. The upshot is that the interpretation of plural examples of perspective shift relies on a remarkable concert of covert semantic and pragmatic operations.
{"title":"Plural and Quantified Protagonists in Free Indirect Discourse and Protagonist Projection","authors":"Márta Abrusán","doi":"10.1093/jos/ffad004","DOIUrl":"https://doi.org/10.1093/jos/ffad004","url":null,"abstract":"\u0000 In this paper I observe a number of new plural and (apparently) quantified examples of free indirect discourse (FID) and protagonist projection (PP). I analyse them within major current theoretical approaches, proposing extensions to these approaches where needed. In order to derive the wide range of readings observed with plural protagonists, I show how we can exploit existing mechanisms for the interpretation of plural anaphora and plural predication. The upshot is that the interpretation of plural examples of perspective shift relies on a remarkable concert of covert semantic and pragmatic operations.","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"18 1","pages":"127-151"},"PeriodicalIF":1.9,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"77176856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Indicative conditionals and configurations with neg-raising predicates have been brought up as potential candidates for constructions involving world pluralities. I argue against this hypothesis, showing that cumulativity and quantifiers targeting a plurality’s part structure cannot access the presumed world pluralities. I furthermore argue that this makes worlds special in the sense that the same tests provide evidence for pluralities in various other semantic domains.
{"title":"Are There Pluralities of Worlds?","authors":"V. Schmitt","doi":"10.1093/jos/ffad002","DOIUrl":"https://doi.org/10.1093/jos/ffad002","url":null,"abstract":"\u0000 Indicative conditionals and configurations with neg-raising predicates have been brought up as potential candidates for constructions involving world pluralities. I argue against this hypothesis, showing that cumulativity and quantifiers targeting a plurality’s part structure cannot access the presumed world pluralities. I furthermore argue that this makes worlds special in the sense that the same tests provide evidence for pluralities in various other semantic domains.","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"7 1","pages":"153-178"},"PeriodicalIF":1.9,"publicationDate":"2023-04-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81520300","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Copredication occurs when a sentence receives a true reading despite prima facie ascribing categorically incompatible properties to a single entity. For example, ‘The red book is by Tolstoy’ can have a true reading even though it seems that being red is only a property of physical copies, while being by Tolstoy is only a property of informational texts. A tempting strategy for resolving this tension is to claim that at least one of the predicates has a non-standard interpretation, with the salient proposal involving reinterpretation via meaning transfer. For example, in ‘The red book is by Tolstoy’, one could hold that the predicate ‘by Tolstoy’ is reinterpreted (or on the more specific proposal, transferred) to ascribe a property that physical copies can uncontroversially instantiate, such as expresses an informational text by Tolstoy. On this view, the truth of the copredicational sentence is no longer mysterious. Furthermore, such a reinterpretation view can give a straightforward account of a range of puzzling copredicational sentences involving counting and individuation. Despite these substantial virtues, we will argue that reinterpretation approaches to copredication are untenable. In §1 we introduce reinterpretation views of copredication and contrast them with key alternatives. In §2 we argue against a general reinterpretation theory of copredication on which every copredicational sentence contains at least one reinterpreted predicate. We also raise additional problems for the more specific proposal of implementing reinterpretation via meaning transfer. In §3 we argue against more limited appeals to reinterpretation on which only some copredicational sentences contain reinterpretation. In §4 we criticize a series of arguments in favour of reinterpretation theories. The upshot is that reinterpretation theories of copredication, and in particular, meaning transfer-based accounts, should be rejected.
{"title":"Copredication and Meaning Transfer","authors":"David Liebesman, Ofra Magidor","doi":"10.1093/jos/ffad001","DOIUrl":"https://doi.org/10.1093/jos/ffad001","url":null,"abstract":"\u0000 Copredication occurs when a sentence receives a true reading despite prima facie ascribing categorically incompatible properties to a single entity. For example, ‘The red book is by Tolstoy’ can have a true reading even though it seems that being red is only a property of physical copies, while being by Tolstoy is only a property of informational texts.\u0000 A tempting strategy for resolving this tension is to claim that at least one of the predicates has a non-standard interpretation, with the salient proposal involving reinterpretation via meaning transfer. For example, in ‘The red book is by Tolstoy’, one could hold that the predicate ‘by Tolstoy’ is reinterpreted (or on the more specific proposal, transferred) to ascribe a property that physical copies can uncontroversially instantiate, such as expresses an informational text by Tolstoy. On this view, the truth of the copredicational sentence is no longer mysterious. Furthermore, such a reinterpretation view can give a straightforward account of a range of puzzling copredicational sentences involving counting an individuation.\u0000 Despite these substantial virtues, we will argue that reinterpretation approaches to copredication are untenable. In §1 we introduce reinterpretation views of copredication and contrast them with key alternatives. In §2 we argue against a general reinterpretation theory of copredication on which every copredicational sentence contains at least one reinterpreted predicate. We also raise additional problems for the more specific proposal of implementing reinterpretation via meaning transfer. In §3 we argue against more limited appeals to reinterpretation on which only some copredicational sentences contain reinterpretation. In §4 we criticize a series of arguments in favour of reinterpretation theories. The upshot is that reinterpretation theories of copredication, and in particular, meaning transfer-based accounts, should be rejected.","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"3 1","pages":"69-91"},"PeriodicalIF":1.9,"publicationDate":"2023-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"81558248","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-02-24 | DOI: 10.1186/s13326-023-00283-x
Lauren E Chan, Anne E Thessen, William D Duncan, Nicolas Matentzoglu, Charles Schmitt, Cynthia J Grondin, Nicole Vasilevsky, Julie A McMurry, Peter N Robinson, Christopher J Mungall, Melissa A Haendel
Background: Evaluating the impact of environmental exposures on organism health is a key goal of modern biomedicine and is critically important in an age of greater pollution and chemicals in our environment. Environmental health utilizes many different research methods and generates a variety of data types. However, to date, no comprehensive database represents the full spectrum of environmental health data. Due to a lack of interoperability between databases, tools for integrating these resources are needed. In this manuscript we present the Environmental Conditions, Treatments, and Exposures Ontology (ECTO), a species-agnostic ontology focused on exposure events that occur as a result of natural and experimental processes, such as diet, work, or research activities. ECTO is intended for use in harmonizing environmental health data resources to support cross-study integration and inference for mechanism discovery.
Methods and findings: ECTO is an ontology for describing organismal exposures, covering toxicological research, environmental variables, dietary features, and patient-reported survey data. ECTO utilizes the base model established within the Exposure Ontology (ExO). It is developed using a combination of manual curation and Dead Simple OWL Design Patterns (DOSDP), contains over 2,700 environmental exposure terms, and incorporates chemical and environmental ontologies. ECTO is an Open Biological and Biomedical Ontology (OBO) Foundry ontology designed for interoperability, reuse, and axiomatization with other ontologies. ECTO terms have been used in axioms within the Mondo Disease Ontology to represent diseases caused or influenced by environmental factors, as well as for survey encoding for the Personalized Environment and Genes Study (PEGS).
Conclusions: We constructed ECTO to meet Open Biological and Biomedical Ontology (OBO) Foundry principles and to increase translation opportunities between environmental health and other areas of biology. ECTO has a growing community of contributors, consisting of toxicologists, public health epidemiologists, and health care providers, who supply the necessary expertise for areas previously identified as gaps.
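A minimal sketch of programmatic access to ECTO with rdflib follows. The PURL follows the OBO Foundry convention for ontology releases; the label filter is an illustrative assumption, so the exact terms returned depend on the release (and the full ontology may take a while to download and parse).

```python
# Hedged sketch: load ECTO and list a few exposure classes by label.
from rdflib import Graph
from rdflib.namespace import RDFS

g = Graph()
g.parse("http://purl.obolibrary.org/obo/ecto.owl")  # OBO Foundry PURL

q = """
SELECT ?term ?label WHERE {
    ?term rdfs:label ?label .
    FILTER(CONTAINS(LCASE(STR(?label)), "exposure to"))
} LIMIT 10
"""
for term, label in g.query(q, initNs={"rdfs": RDFS}):
    print(term, label)
```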
{"title":"The Environmental Conditions, Treatments, and Exposures Ontology (ECTO): connecting toxicology and exposure to human health and beyond.","authors":"Lauren E Chan, Anne E Thessen, William D Duncan, Nicolas Matentzoglu, Charles Schmitt, Cynthia J Grondin, Nicole Vasilevsky, Julie A McMurry, Peter N Robinson, Christopher J Mungall, Melissa A Haendel","doi":"10.1186/s13326-023-00283-x","DOIUrl":"10.1186/s13326-023-00283-x","url":null,"abstract":"<p><strong>Background: </strong>Evaluating the impact of environmental exposures on organism health is a key goal of modern biomedicine and is critically important in an age of greater pollution and chemicals in our environment. Environmental health utilizes many different research methods and generates a variety of data types. However, to date, no comprehensive database represents the full spectrum of environmental health data. Due to a lack of interoperability between databases, tools for integrating these resources are needed. In this manuscript we present the Environmental Conditions, Treatments, and Exposures Ontology (ECTO), a species-agnostic ontology focused on exposure events that occur as a result of natural and experimental processes, such as diet, work, or research activities. ECTO is intended for use in harmonizing environmental health data resources to support cross-study integration and inference for mechanism discovery.</p><p><strong>Methods and findings: </strong>ECTO is an ontology designed for describing organismal exposures such as toxicological research, environmental variables, dietary features, and patient-reported data from surveys. ECTO utilizes the base model established within the Exposure Ontology (ExO). ECTO is developed using a combination of manual curation and Dead Simple OWL Design Patterns (DOSDP), and contains over 2700 environmental exposure terms, and incorporates chemical and environmental ontologies. ECTO is an Open Biological and Biomedical Ontology (OBO) Foundry ontology that is designed for interoperability, reuse, and axiomatization with other ontologies. ECTO terms have been utilized in axioms within the Mondo Disease Ontology to represent diseases caused or influenced by environmental factors, as well as for survey encoding for the Personalized Environment and Genes Study (PEGS).</p><p><strong>Conclusions: </strong>We constructed ECTO to meet Open Biological and Biomedical Ontology (OBO) Foundry principles to increase translation opportunities between environmental health and other areas of biology. ECTO has a growing community of contributors consisting of toxicologists, public health epidemiologists, and health care providers to provide the necessary expertise for areas that have been identified previously as gaps.</p>","PeriodicalId":15055,"journal":{"name":"Journal of Biomedical Semantics","volume":"14 1","pages":"3"},"PeriodicalIF":1.6,"publicationDate":"2023-02-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9951428/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9257159","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":3,"RegionCategory":"工程技术","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}