“Foundation models for research: A matter of trust?”
Koen Bruynseels, Lotte Asveld, Jeroen van den Hoven
Artificial intelligence in the life sciences, Volume 7, Article 100126 (published 2025-02-06)
DOI: 10.1016/j.ailsci.2025.100126
URL: https://www.sciencedirect.com/science/article/pii/S2667318525000029
Citations: 0
Abstract
Science would not be possible without trust among experts, trust of the public in experts, and reliance on scientific instruments and methods. The rapid adoption of scientific foundation models and their use in AI agents is changing scientific practices and thereby impacting this epistemic fabric, which hinges on trust and reliance. Foundation models are machine learning models that are trained on large bodies of data and can be applied to a multitude of tasks. Their application in science raises the question of whether, and to what extent, scientific foundation models can be relied upon as research tools, or even trusted as if they were research partners.
Conceptual clarification of the notions of trust and reliance in science is pivotal in the face of foundation models. Trust and reliance form the glue for the increasingly distributed epistemic labour within contemporary technoscientific systems. We build on two concepts of trust in science: trust in science as shared values, and trust in science based on commitments to processes that provide objective claims. We analyse whether scientific foundation models are research tools, to which the concept of reliance applies, or research partners, which can be trustworthy or untrustworthy. We consider these foundation models within their socio-technical contexts.
Allocation of trust should be reserved for human agents and the organizations they operate in; reliance applies to foundation models and artificial intelligence agents. This distinction is important for unambiguously allocating responsibility, which is crucial to maintaining the fabric of trust that underpins science.