"Using Multi-Objective Optimization to build non-Random Forest"
Joanna Klikowska, Michał Woźniak
Logic Journal of the IGPL, 2024-09-11. DOI: 10.1093/jigpal/jzae110

The use of multi-objective optimization to build classifier ensembles is becoming increasingly popular. This approach optimizes more than one criterion simultaneously and returns a set of solutions, so the final solution can be better tailored to the user's needs. This work proposes the MOONF method, which uses one or two criteria depending on the method's version. The optimization returns solutions as feature subspaces, which are then used to train decision tree models. In this way, the ensemble is created non-randomly, unlike the popular Random Subspace approach (used, for example, by the Random Forest classifier). Experiments on many imbalanced datasets compare the proposed methods with state-of-the-art methods and show the advantage of the multi-objective version of MOONF.
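The core idea of selecting feature subspaces by more than one criterion can be illustrated with a minimal sketch: candidate subspaces are scored on two objectives and only the non-dominated (Pareto-optimal) ones are kept, one decision tree per survivor. The subspaces and scores below are invented for illustration; MOONF's actual optimizer and criteria are not specified in the abstract.

```python
def pareto_front(scored):
    """Keep the candidates not dominated on (accuracy, diversity).

    scored: list of (subspace, accuracy, diversity); higher is better
    on both objectives.
    """
    front = []
    for s in scored:
        dominated = any(
            o[1] >= s[1] and o[2] >= s[2] and (o[1] > s[1] or o[2] > s[2])
            for o in scored
        )
        if not dominated:
            front.append(s)
    return front

# Hypothetical candidate feature subspaces with toy objective scores.
candidates = [
    (frozenset({0, 3}), 0.90, 0.2),
    (frozenset({1, 4}), 0.80, 0.5),
    (frozenset({1, 2}), 0.70, 0.4),  # dominated by ({1, 4}, 0.80, 0.5)
    (frozenset({2, 3}), 0.95, 0.1),
]
front = pareto_front(candidates)
# each surviving subspace would train one decision tree of the ensemble
```

Because no survivor is worse on both objectives than another, the user can pick from the front according to their own trade-off, which is the "tailored to the user's needs" aspect the abstract emphasizes.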
"Detection of transiting exoplanets and phase-folding their host star's light curves from K2 data with 1D-CNN"
Santiago Iglesias Álvarez, Enrique Díez Alonso, Javier Rodríguez Rodríguez, Saúl Pérez Fernández, Ronny Steveen Anangonó Tutasig, Carlos González Gutiérrez, Alejandro Buendía Roca, Julia María Fernández Díaz, Maria Luisa Sánchez Rodríguez
Logic Journal of the IGPL, 2024-09-09. DOI: 10.1093/jigpal/jzae106

In this research, we present two 1D Convolutional Neural Network (CNN) models that were trained, validated and tested on simulated light curves designed to mimic those expected from the Kepler Space Telescope during its extended mission (K2); we also tested them on real K2 data. Our light curve simulator considers different stellar variability phenomena, such as rotations, pulsations and flares, which, along with the stellar noise expected in K2 data, hinder transit-signal detection, as in real data. The first model identifies transit-like signals in light curves, classifying them by the presence or absence of such signals. The second model not only phase-folds the light curves but also removes stellar noise, a crucial step when fitting transits to the Mandel and Agol theoretical transit shape. The results include an accuracy of $\sim 99\%$ when classifying light curves by the presence or absence of transit-like signals, and a MAPE of $\sim 6\%$ for the transits' depth and duration when phase-folding the light curves, showing the strong capability of 1D-CNNs for automating the transit search in light curves, on both simulated and real data.
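The phase-folding step the second model performs has a simple classical counterpart: map each timestamp to its orbital phase so that every transit of a periodic planet stacks at the same phase. A stdlib-only sketch with a synthetic box-shaped transit (the CNN itself is not reproduced here, and the period and flux values are made up):

```python
def phase_fold(times, fluxes, period, t0=0.0):
    """Map each timestamp to its phase in [0, 1) and sort by phase,
    so repeated transits of a periodic planet line up."""
    phases = [((t - t0) / period) % 1.0 for t in times]
    order = sorted(range(len(times)), key=lambda i: phases[i])
    return [phases[i] for i in order], [fluxes[i] for i in order]

# Synthetic box transit: period 2.0 days, a dip at the start of each orbit.
times = [i * 0.25 for i in range(40)]
fluxes = [0.99 if (t % 2.0) < 0.25 else 1.0 for t in times]
phases, folded = phase_fold(times, fluxes, period=2.0)
# after folding, every dipped sample sits at the same (zero) phase
```

Stacking the transits this way boosts the effective signal-to-noise ratio, which is why folding precedes fitting the Mandel and Agol transit shape.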
"Virtual active power sensor for eolic self-consumption installations based on wind-related variables"
Antonio Díaz-Longueira, Paula Arcano-Bea, Roberto Casado-Vara, Andrés-José Piñón-Pazos, Esteban Jove
Logic Journal of the IGPL, 2024-09-09. DOI: 10.1093/jigpal/jzae109

Green energy production is expanding in both individual and large-scale electricity grids, driven by the imperative to reduce greenhouse gas emissions. This research performs a comparative analysis of several linear and non-linear regression models, aiming to identify the most effective method for estimating the active power produced by a mini wind turbine from meteorological variables, in search of a reliable virtual sensor. The modelling process applied a feature-selection step before eight machine learning techniques, whose results were statistically analysed to determine the best performer. The implemented virtual sensor accurately estimated the active power, making it a useful tool for anomaly detection, maintenance management and decision-making.
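As a minimal illustration of the virtual-sensor idea (not one of the paper's eight techniques, which the abstract does not enumerate), a least-squares baseline can regress active power on the cube of wind speed, the physically motivated feature in the sub-rated operating regime. The readings below are synthetic:

```python
def fit_power_model(speeds, powers):
    """Least-squares fit of P = a * v**3 + b: simple linear regression on
    the cubed wind speed, a physically motivated feature for turbine power."""
    xs = [v ** 3 for v in speeds]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(powers) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, powers))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Synthetic readings generated from P = 0.5 * v**3 + 2.0 (made-up numbers).
speeds = [2.0, 3.0, 4.0, 5.0, 6.0]
powers = [0.5 * v ** 3 + 2.0 for v in speeds]
a, b = fit_power_model(speeds, powers)
# on noise-free data the fit recovers the generating coefficients
```

A virtual sensor of this kind then flags anomalies whenever the measured power deviates persistently from the model's prediction for the observed wind conditions.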
"Inferential knowledge and epistemic dimensions"
Yves Bouchard
Logic Journal of the IGPL, 2024-09-07. DOI: 10.1093/jigpal/jzae095

Knowledge representation is one way to exploit expertise in a given domain by logical means. But what kind of knowledge does one acquire from an inference (or from an inference over a query result on a knowledge base)? Such a question may appear awkward, since the answer seems so obvious: from an inference, one simply acquires knowledge. This is undoubtedly the case when only one type of knowledge (for instance, expert knowledge) is involved in an inference. But what if several types of knowledge are involved? What type of knowledge can one deduce from a plurality of knowledge types? I claim that reasoning with different knowledge concepts requires a fine-grained representation of knowledge in which every knowledge type finds a singular expression, in order to avoid the epistemic equivocity associated with a coarse-grained representation of knowledge. In the first part of the paper, I revisit the Muddy Children Puzzle, which usually serves to illustrate common knowledge in dynamic epistemic logic. I try to show that this problem also exhibits a sort of epistemic equivocity between concepts of knowledge and, consequently, that it calls for some epistemological refinements concerning the representation of the types of knowledge at play in an inference. In the second part, I address this issue from a semantic point of view and develop a fragment of epistemic logic capable of providing a solution to the problem of epistemic equivocity.
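The Muddy Children Puzzle the paper revisits can be simulated directly: each round, the public fact that nobody yet knows their own state eliminates possible worlds, until the muddy children can identify themselves. This is a sketch of the standard dynamic-epistemic analysis only, not of the paper's fine-grained representation of knowledge types.

```python
from itertools import product

def muddy_children(actual, max_rounds=10):
    """Iterate public announcements for the Muddy Children Puzzle.

    actual: tuple of booleans, True = muddy. The father announces that at
    least one child is muddy; each round, children who can deduce their own
    state step forward, and collective silence is itself informative."""
    n = len(actual)

    def knows(world, i, worlds):
        # child i sees everyone but herself; she knows her own state iff
        # all remaining worlds agreeing with `world` elsewhere fix her value
        values = {w[i] for w in worlds
                  if all(w[j] == world[j] for j in range(n) if j != i)}
        return len(values) == 1

    # initial announcement: at least one child is muddy
    worlds = {w for w in product([False, True], repeat=n) if any(w)}
    for rnd in range(1, max_rounds + 1):
        knowers = [i for i in range(n) if knows(actual, i, worlds)]
        if knowers:
            return rnd, knowers
        # announcement "nobody knows": drop worlds where someone would know
        worlds = {w for w in worlds
                  if not any(knows(w, i, worlds) for i in range(n))}
    return None, []

rounds, who = muddy_children((True, True, False))
# with k = 2 muddy children, both identify themselves in round 2
```

The familiar pattern — with $k$ muddy children, exactly the muddy ones step forward in round $k$ — falls out of the world-elimination dynamics alone.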
"Microservices architecture to enable an open platform for realizing zero defects in cyber-physical manufacturing"
Rui Pedro Lopes, Ahmed Ibrahim, José Barbosa, Paulo Leitao
Logic Journal of the IGPL, 2024-09-07. DOI: 10.1093/jigpal/jzae112

The market's demand for high-quality products necessitates innovative manufacturing approaches that emphasize flexibility, adaptability, and the reduction of defects. Traditional systems are currently evolving towards embracing I4.0 technologies, including data collection, processing, analytics and digital twins, aiming for zero-defect manufacturing quality. This paper introduces an open platform, compliant with RAMI4.0 standards, designed to improve manufacturing quality. The platform integrates data using Asset Administration Shells with microservices adaptation for data ingestion and advanced analytics. Additionally, it incorporates Non-Destructive Inspection tools, demonstrating a seamless integration of measurement and quality assurance. This study details the architectural specification and validation of the openZDM platform, employing microservices to ensure flexibility, modularity and scalability, aligning with the RAMI4.0 model. The platform was validated through deployment in an automotive assembly line, highlighting its effectiveness in integrating inspection scenarios and early defect detection tools. The architecture employs a choreographed approach to manage loosely coupled microservices, enabling efficient data lifecycle management from collection through analytics to visualization.
"Explanatory frameworks in complex change and resilience system modelling"
Mark Addis, Claudia Eckert
Logic Journal of the IGPL, 2024-09-07. DOI: 10.1093/jigpal/jzae087

Heterogeneous flows across system boundaries continue to pose significant problems for efficient resource allocation, both for long-term strategic planning and for the immediate problem of allocating resources to address particular shortages. The approach taken here to modelling such flows is one of engineering change prediction. This enables margin modelling by producing system models as dependency matrices with different linkage types. Change prediction approaches from engineering design can analyse where bottlenecks in integrated systems would arise, so that resources can be deployed flexibly to avoid them and to address them when they occur. The current state of the art in margin research can be furthered by identifying margins at multiple levels of system composition. It can usefully be complemented by a category-theory-based approach, which allows the representation of variable and constant properties of models under changing conditions and the identification of flows within models. Category theory is useful for formalising such explanatory frameworks, as it can both structure systems and permit analysis of their applications in a complementary way.
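The dependency-matrix idea can be made concrete: with a design structure matrix recording which components a change can force changes in, change propagation is just reachability, and bottlenecks show up as components reachable from many starting points. The four-component system below is invented for illustration; the paper's linkage types and margin analysis are richer than this sketch.

```python
def change_propagation(dsm, start):
    """Reachability over a dependency structure matrix.

    dsm[i][j] = 1 when a change in component j can force a change in
    component i. Returns every component a change to `start` can reach."""
    n = len(dsm)
    reached, frontier = {start}, [start]
    while frontier:
        j = frontier.pop()
        for i in range(n):
            if dsm[i][j] and i not in reached:
                reached.add(i)
                frontier.append(i)
    return reached

# Toy system: changes to 0 propagate to 1, changes to 1 propagate to 2,
# and component 3 is isolated.
dsm = [
    [0, 0, 0, 0],
    [1, 0, 0, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
affected = change_propagation(dsm, 0)
# components with large reachable sets are candidate bottlenecks,
# and hence natural places to hold margins
```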
"The higher dimensional propositional calculus"
A Bucciarelli, P-L Curien, A Ledda, F Paoli, A Salibra
Logic Journal of the IGPL, 2024-08-28. DOI: 10.1093/jigpal/jzae100

In recent research, some of the present authors introduced the concept of an $n$-dimensional Boolean algebra and its corresponding propositional logic $n\textrm{CL}$, generalizing the Boolean propositional calculus to $n \geq 2$ perfectly symmetric truth values. This paper presents a sound and complete sequent calculus for $n\textrm{CL}$, named $n\textrm{LK}$. We provide two proofs of completeness: one syntactic and one semantic. The former implies, as a corollary, that $n\textrm{LK}$ enjoys the cut admissibility property. The latter relies on the generalization to the $n$-ary case of the classical proof based on the Lindenbaum algebra of formulas and Boolean ultrafilters.
"Generic reasoning: A programmatic sketch"
Federico L G Faroldi
Logic Journal of the IGPL, 2024-08-20. DOI: 10.1093/jigpal/jzae083

A single significant instance may support general conclusions, with possible exceptions being tolerated. This is the case in practical human reasoning (e.g. moral and legal normativity: general rules tolerating exceptions), in theoretical human reasoning engaging with external reality (e.g. empirical and social sciences: the use of case studies and model organisms) and in abstract domains (possibly mind-unrelated, e.g. pure mathematics: the use of arbitrary objects). While this has been recognized in modern times, such a process is not captured by current models of supporting general conclusions. This paper articulates the thesis that there is a kind of reasoning, generic reasoning, previously unrecognized as an independent type of reasoning. A theory of generic reasoning explains how a single significant instance may support general conclusions, with possible exceptions being tolerated. This paper adopts, as a working hypothesis, that generic reasoning is irreducible to currently recognized kinds of 'pure' reasoning. The aim is to understand generic reasoning, both theoretically and in its applications.
"Modal semantics for reasoning with probability and uncertainty"
Nino Guallart
Logic Journal of the IGPL, 2024-08-20. DOI: 10.1093/jigpal/jzae089

This paper belongs to the field of probabilistic modal logic and focuses on a comparative analysis of two distinct semantics: one rooted in Kripke semantics and the other in neighbourhood semantics. The primary distinction is that the latter allows us to adequately express belief functions (lower probabilities) over propositions, whereas the former does not; thus, neighbourhood semantics is more expressive. The main part of the work studies the modal equivalence between probabilistic Kripke models and a subclass of belief neighbourhood models, namely additive ones, and shows how to obtain modally equivalent structures.
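The Kripke-style side of the comparison can be sketched concretely: each world carries a probability distribution over its successors, and a formula of the form $P_{\geq r}\, p$ holds at a world when enough of that mass lands on $p$-worlds. The two-world model below is invented for illustration and covers only the Kripke semantics, not the belief-function neighbourhood semantics the paper shows to be more expressive.

```python
def satisfies_at_least(prob, valuation, world, atom, threshold):
    """Probabilistic Kripke semantics for P>=threshold(atom): a world w
    satisfies it when the probability mass that w's distribution assigns
    to atom-worlds meets the threshold."""
    mass = sum(p for v, p in prob[world].items() if atom in valuation[v])
    return mass >= threshold

# Toy model: from w, successor u has probability 0.6 and v has 0.4;
# p is true at u only.
prob = {"w": {"u": 0.6, "v": 0.4}}
valuation = {"u": {"p"}, "v": set()}
# w satisfies P>=0.5(p) but not P>=0.7(p)
```

A belief function drops the additivity of `prob` and assigns mass to sets of worlds, which is exactly what the neighbourhood models in the paper are designed to capture.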
"The logic of medical discovery: the case of Jenner's inquiry on variolae vaccinae"
Cristina Barés Gómez, Matthieu Fontaine
Logic Journal of the IGPL, 2024-08-16. DOI: 10.1093/jigpal/jzae084

Jenner (1749–1826) is known as the father of the smallpox vaccine. Through an inferential analysis of Jenner's report of inquiry, in which medical practice and medical research are intrinsically intertwined, we highlight his use of abductive hypotheses. Our understanding of abduction is based on the Gabbay and Woods model (2005, The Reach of Abduction: Insight and Trial, 2, 39), in which hypotheses can be activated even when they have not been confirmed, as well as on Magnani's Select and Test model (1992, 2017). We discuss the fundamental role of abductive hypotheses in experimental medicine, understood in the sense of Bernard (1865, Introduction à l'étude de la médecine expérimentale). We conclude with remarks on the mechanistic hypotheses discussed by Jenner, in relation to the thesis advocated by Russo and Williamson (2007, Int. Stud. Philos. Sci., 21, 157–170).