Pub Date : 2026-02-04 | DOI: 10.1080/08989621.2026.2626740
Adrian Barnett, Jennifer Byrne
Scientific fakery is a centuries-old problem. Twinned with the long history of hard-working scientists earning fame for genuine discoveries runs a tawdry history of those willing to fabricate results to gain prestige falsely. Fraud in the past relied on bespoke fakery, but today's fraudsters can exploit the online scientific world to create realistic-looking papers quickly and on an industrial scale. Fraudsters are using open data sets to create meaningless analyses and combining these results with text from large language models. There has been an explosion of these low-value papers using openly available and highly regarded data sets, such as the US National Health and Nutrition Examination Survey (NHANES). The paper miners will likely exploit whatever open data resources they can find until data custodians put more stringent controls in place, or journals and publishers push back. Some scientific data may be too open, even though making research data openly available is a recommended policy for increasing research integrity. Journals and researchers need to be aware of this new threat to research integrity.
Title: "Closing the paper mines." (Accountability in Research - Policies and Quality Assurance)
Pub Date : 2026-02-02 | DOI: 10.1080/08989621.2026.2623480
Gert Helgesson, William Bülow
The value of scientific knowledge and fairness in the distribution of academic credit are core values in research publication. However, the literature has paid little attention to the fact that these values may come into conflict, particularly in interdisciplinary research. The point of this paper is to acknowledge and describe the conflict and to discuss potential solutions. We use collaborations between pre-clinical (laboratory) researchers and clinicians at hospitals as an exemplifying case. We conclude that, without changing the preconditions for the value conflict, there is no general solution involving systematically prioritizing one value over the other. However, a potential way out of the conflict would be a general shift from authorship to contributorship in the evaluation of contributions, although most journals do not yet have the required routines in place.
Title: "On the potential value conflict between scientific knowledge production and fair recognition of authorship."
Pub Date : 2026-01-31 | DOI: 10.1080/08989621.2026.2622302
Jacopo Ambrosj, Kris Dierickx, Hugh Desmond
In this paper, we document a tension in the European Code of Conduct for Research Integrity: the Code seeks to limit the influence of non-epistemic values, and yet it recognizes that such values play a legitimate role in research. By comparing various versions of the Code, we argue that, over time, there has been less explicit recognition of the complexity of the relation between research and societal values. Currently, the Code gives no guidance on which value influences count as undesirable or as "undue pressure," and it conflates the issues of value-freedom and scientific freedom. As the impact of non-epistemic values becomes increasingly evident, we recommend that future codes start by explicitly acknowledging the challenges inherent in the relation between science and societal values, and we offer an example of how the ECoC could be revised to meet this recommendation.
Title: "To be or not to be value-free? A tension in the European Code of Conduct for Research Integrity."
Pub Date : 2026-01-31 | DOI: 10.1080/08989621.2026.2623487
Ricardo Ayala, Pedro Hervé-Fernández
Background: Artificial intelligence (AI) is reshaping research practices, yet its ethical implications remain under-examined, particularly in cross-national contexts.
Objective: To explore how AI integration into environmental science complicates informed consent, privacy and data sovereignty, and to identify the ethical duties that follow for researchers.
Case context: Drawing on a Chilean case study that adopts the European Union's General Data Protection Regulation (GDPR) as a normative framework, we focus on everyday AI-mediated tools embedded in research infrastructures (e.g., transcription, cloud services, meeting assistants) and the tensions they introduce.
Findings: AI intensifies, rather than replaces, ethical accountability, especially where legal protections are weak or infrastructures are unequal. Algorithmic opacity constrains researcher autonomy and undermines data sovereignty.
Conclusions: A governance approach grounded in data sovereignty and researcher autonomy is required to safeguard consent, privacy, and accountability in AI-mediated research.
Implications for policy and practice: We propose a revised model of ethical governance to support researchers working across fragmented regulations and opaque AI systems.
Title: "Emerging ethical duties in AI-mediated research: A case of data sovereignty in applying cross-national regulation."
Pub Date : 2026-01-14 | DOI: 10.1080/08989621.2025.2601225
Katherine Cheung, Rebecca Ehrenkranz, Brian D Earp, Edward Jacobs, David B Yaden
A number of proposals across different fields have suggested incorporating "independent" actors into the research process as a way to manage potential bias. For example, in response to allegations of bias in psychedelic science, some have suggested independent auditors for adverse events, as well as the incorporation of independent researchers into the research teams of psychedelic trials. However, despite growing interest in these methods, the concept of independence itself frequently remains undefined. Moreover, although introducing independent actors may seem like a prima facie beneficial solution to help reduce bias and improve the scientific rigor of research, it may also come with significant drawbacks. Here, we argue that the sense of independence on which these proposals implicitly rely is freedom from any influence that might alter the actors' choices in a way that reduces the trustworthiness or accuracy of research findings. We then further explore whether it is possible to identify and involve such actors without incurring trade-offs with other scientific desiderata (e.g., due to the risk of inadequate expertise). We conclude by providing two models in law and science that may be helpful to draw upon when seeking to incorporate independent actors.
Title: "Analyzing the concept of independence in psychedelic research."
Background: Failure to declare a conflict of interest (COI) may bias research outcomes and undermine the integrity of readers' decision-making. This study aims to examine common practices in health sciences where COIs were inadequately disclosed.
Methods: We identified and analyzed papers with post-publication COI issues by searching PubMed/MEDLINE, Web of Science and Retraction Watch Databases.
Results: A total of 328 medical papers were identified with COI issues. Among them, 128 (39.0%) articles were retracted, 53 (16.2%) received expressions of concern, and 147 (44.8%) were corrected. Most actions (224, 68.2%) were initiated by editors or publishers. Despite these issues, papers reached a median of 4 post-publication citations. Of 189 papers failing to declare financial COIs, 33.8% were retracted, while 61.2% received corrections or expressions of concern.
Conclusions: Journals should adopt more detailed guidelines for COI disclosures and standardize retraction notices to improve transparency. There is an urgent need for robust mechanisms to address potential COI issues effectively and to encourage authors to disclose COIs transparently. Furthermore, to mitigate the risk of retractions, expressions of editorial concern, or corrections, these disclosure protocols must remain enforceable even post-publication.
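As a sanity check, the percentages in the Results section above follow directly from the stated counts. This illustrative Python snippet (assuming only the counts reported in the abstract: 328 papers, of which 128 were retracted, 53 received expressions of concern, and 147 were corrected) recomputes them:

```python
# Counts as reported in the abstract; the variable names are illustrative.
counts = {"retracted": 128, "expression_of_concern": 53, "corrected": 147}
total = 328  # total papers identified with COI issues

# The three editorial actions account for every identified paper.
assert sum(counts.values()) == total

# Percentage of papers receiving each action, rounded to one decimal place.
shares = {action: round(100 * n / total, 1) for action, n in counts.items()}
print(shares)  # retracted 39.0, expression_of_concern 16.2, corrected 44.8
```

The recomputed shares match the abstract's 39.0%, 16.2%, and 44.8%, confirming that the three actions partition the full sample of 328 papers.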
Title: "Consequences of undisclosed conflicts of interest in academic publishing."
Authors: Lili Yang, Jiamin Lv, Jufang Shao, Panzhi Wang, Siyun Xu, Rongwang Yang
Pub Date : 2026-01-14 | DOI: 10.1080/08989621.2026.2616765
Pub Date : 2026-01-14 | DOI: 10.1080/08989621.2026.2614062
Liu Yiru, Liu Yi, Yuan Zihan
Purpose/significance: This study investigates the awareness, perceptions, and responses of library and information science (LIS) researchers toward retracted papers, aiming to inform the improvement of research integrity governance.
Method/process: A questionnaire survey of 280 LIS researchers examined their sources of retraction information, understanding of causes, perceived consequences, and attitudes toward evaluation. The influence of academic background, publication volume, and discipline was also explored.
Result/conclusion: Findings indicate generally low retraction awareness and a primary reliance on informal channels. Critically, the analysis reveals several nuanced patterns: (1) Significant disciplinary differences exist in perceiving retraction causes; (2) Opinions are sharply divided on including retraction records in research evaluation, reflecting concerns about uniform responsibility attribution; (3) A considerable proportion of researchers mistakenly view retraction's impact as reversible. These attitudes are strongly associated with educational background and publication experience. In response, this paper proposes five key recommendations: establishing authoritative retraction platforms, improving journal retraction mechanisms, differentiating retraction types in evaluation, strengthening integrity education, and building a coordinated governance framework. These measures contribute to fostering a more transparent, fair, and sustainable scholarly correction ecosystem.
Title: "Low awareness, informal channels: How LIS researchers perceive retracted papers and its implications for research integrity."
Pub Date : 2026-01-08 | DOI: 10.1080/08989621.2025.2612564
Mohammad Hosseini, Daniel Eisenman, James Riddle, Stephanie Pyle, Anju Peters, Nichelle Cobb, And Kristi Holmes
Three oversight bodies review research proposals to help ensure the safe and responsible conduct of biomedical research, each focusing on unique aspects of research ethics: institutional review boards (IRBs), institutional biosafety committees (IBCs), and institutional animal care and use committees (IACUCs). The role of artificial intelligence (AI) in research oversight is rapidly expanding, particularly in the preparation and review of applications. Although using AI may reduce administrative costs and burdens, it may also create new concerns, since AI tools can make mistakes of fact and reasoning and are susceptible to bias. Furthermore, outsourcing the ethical planning and oversight of research to AI could compromise ethical understanding. Although the arguments for and against using AI in the preparation or review of IRB, IBC, or IACUC applications differ fundamentally from those concerning AI use in manuscript writing and peer review, there is currently minimal guidance on the responsible use of AI in research oversight from government agencies, professional organizations, universities, hospitals, and other entities that conduct research. We argue that 1) additional guidance is urgently needed to minimize the risks of using AI in research oversight; and 2) humans must always be the final decision-makers, because ethical planning and oversight involve value judgments that should not be outsourced to AI.
Title: "Guidelines needed for the use of AI in the preparation or review of IRB, IBC, and IACUC applications."
Pub Date : 2026-01-02 | DOI: 10.1080/08989621.2025.2611383
Siun Gallagher, Sara Attinger, Ian Kerridge, Robert J Norman, Wendy Lipworth
In many parts of the world, an increasing number of clinical healthcare services are delivered through corporations. These corporations are also increasingly required to shape and undertake vital medical research. In this paper we outline the challenges of setting research priorities in corporatised clinics and ensuring that researchers are accountable to society and alert to the broader societal impacts of their work. We propose that the approach to research governance known as "Responsible Innovation" might provide a useful framework for selecting and shaping corporate research priorities so that they are grounded in population health priorities and wider social benefit. Responsible innovation also provides guidance for engaging patients, consumers, regulators and payers in constructive collaboration with researchers; encouraging ethical reflection by both corporations and individual scientists; and promoting responsiveness to contingencies in the processes, outcomes, and reception of research.
Title: "Research prioritization and societal accountability in corporatised healthcare services - What can Responsible Innovation offer?"
Pub Date : 2026-01-01 | Epub Date: 2024-11-05 | DOI: 10.1080/08989621.2024.2420811
Seliem El-Sayed
Background: Computational Social Science (CSS) uses large digital datasets and computational methods to study human behavior, raising ethical concerns about data privacy, informed consent, and potential misuse.
Methods: This study employs a constructivist grounded theory approach, analyzing 15 in-depth interviews with CSS practitioners in Germany, Austria, and Switzerland. These countries share a European legal context regarding data privacy and thereby provide a comparable regulatory environment for examining ethical considerations.
Results: Findings highlight key challenges in CSS research, including power imbalances with data providers, uncertainties around surveillance and data privacy (especially with longitudinal data), and limitations of current ethics frameworks. Researchers face tensions between established ethical principles and practical realities, often feeling disempowered and unsupported by ethics boards with limited CSS expertise. Regulatory ambiguity further discourages research through fear of sanctions.
Conclusions: To foster responsible CSS practices, this paper recommends establishing specialized ethics boards with CSS expertise. It also advocates acknowledging CSS's distinctive nature in research policy by developing tailored data guidelines and providing legal certainty through clear guidance. Grounding its recommendations in practitioners' experiences, this study offers actionable steps toward enabling ethical CSS research.
Title: "A practitioner-centered policy roadmap for ethical computational social science in Germany, Austria, and Switzerland."