Web archives for data collection: An ethics case study
Pub Date : 2024-09-08 | DOI: 10.1080/08989621.2024.2396940
Deanna Zarrillo, Mat Kelly, Erjia Yan, Chaoqun Ni
Background: Web archives offer researchers a promising source for large-scale longitudinal data collection; however, their complex social and technical infrastructures create an array of ethical concerns. In addition, there is a notable lack of guidance available for researchers hoping to conduct ethical research using web archives.
Methods: We present an ethical decision-making case study based on an ongoing research project using the Internet Archive's Wayback Machine to study faculty appointments and mobility at Historically Black Colleges and Universities (HBCUs).
Results: This paper contributes to information ethics discourse by expanding on the Association of Internet Researchers' recommendations for ethical decision-making, and mapping ethical considerations for each stage of the project within existing conceptual frameworks for research using web archives.
Conclusions: By utilizing internet research guidance and web archive research frameworks in a case study approach, we hope to aid future researchers conducting internet research of a similar nature by serving as a useful reference.
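The abstract gives no tooling details, but as context for readers unfamiliar with web-archive data collection, the sketch below shows one way to query the Internet Archive's public Wayback Machine availability API for the capture of a page closest to a target date. The endpoint and response fields follow the publicly documented API; the helper function name and the faculty-page URL are hypothetical placeholders, not taken from the study.

```python
# Illustrative sketch only (not from the paper): query the public Wayback
# Machine availability API for the snapshot of a page closest to a given date.
# The faculty-page URL used below is a hypothetical placeholder.
import requests

def closest_snapshot(url, timestamp):
    """Return the Wayback Machine capture of `url` closest to `timestamp`
    (YYYYMMDD), or None if no capture is available."""
    resp = requests.get(
        "https://archive.org/wayback/available",
        params={"url": url, "timestamp": timestamp},
        timeout=30,
    )
    resp.raise_for_status()
    closest = resp.json().get("archived_snapshots", {}).get("closest")
    return closest if closest and closest.get("available") else None

if __name__ == "__main__":
    snap = closest_snapshot("example.edu/faculty", "20150101")
    if snap:
        print(snap["timestamp"], snap["url"])
    else:
        print("No archived capture found near that date.")
```

Any large-scale collection built on such queries would still raise the consent, attribution, and re-identification questions the case study discusses, which is precisely why the authors map ethical considerations to each project stage.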
{"title":"Web archives for data collection: An ethics case study.","authors":"Deanna Zarrillo, Mat Kelly, Erjia Yan, Chaoqun Ni","doi":"10.1080/08989621.2024.2396940","DOIUrl":"https://doi.org/10.1080/08989621.2024.2396940","url":null,"abstract":"<p><strong>Background: </strong>Web archives offer researchers a promising source for large-scale longitudinal data collection; however, their complex social and technical infrastructures create an array of ethical concerns. In addition, there is a notable lack of guidance available for researchers hoping to conduct0 ethical research using web archives.</p><p><strong>Methods: </strong>We present an ethical decision-making case study based on an ongoing research project using the Internet Archive's Wayback Machine to study faculty appointments and mobility at Historically Black Colleges and Universities (HBCUs).</p><p><strong>Results: </strong>This paper contributes to information ethics discourse by expanding on the Association of Internet Researchers' recommendations for ethical decision-making, and mapping ethical considerations for each stage of the project within existing conceptual frameworks for research using web archives.</p><p><strong>Conclusions: </strong>By utilizing internet research guidance and web archive research frameworks in a case study approach, we hope to aid future researchers conducting internet research of a similar nature by serving as a useful reference.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-18"},"PeriodicalIF":2.8,"publicationDate":"2024-09-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142156565","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Promoting research integrity in funding: Co-creating guidelines for research funding organizations
Pub Date : 2024-08-28 | DOI: 10.1080/08989621.2024.2388232
Roshni Jegan, Krishma Labib, Kris Dierickx, Noémie Aubert Bonn, Joeri Tijdink, Ana Marušić, Daniel Pizzolato
Research Funding Organizations (RFOs) play an important role in promoting research integrity (RI). Not only do they allocate resources to research institutions and researchers, but they also set and monitor research standards in their funded projects. In spite of their crucial role, there is a paucity of guidance on how RFOs can promote research integrity. As part of the EU-funded SOPs4RI project, we aimed to address this gap by co-creating guidelines to help RFOs promote RI, engaging a diverse group of stakeholders. Based on a Delphi survey, reviews of evidence, and stakeholder interviews, three guideline topics were identified: 1) the selection and evaluation of proposals; 2) monitoring of funded projects; and 3) prevention of unjustified interference. Four sets of co-creation workshops were conducted for each guideline topic, and the input was revised and finalized. Understanding the debates that arose could help RFOs from diverse cultural and organizational backgrounds who are developing their own RI guidelines. Therefore, in this paper, we summarize the key results and emphasize the final recommendations. Further, we present the main points of discussion that occurred during the workshops, explain how they were addressed or resolved in the final guidelines, and show how they can help in future endeavors to improve funders' practices to foster RI.
{"title":"Promoting research integrity in funding: Co-creating guidelines for research funding organizations.","authors":"Roshni Jegan, Krishma Labib, Kris Dierickx, Noémie Aubert Bonn, Joeri Tijdink, Ana Marušić, Daniel Pizzolato","doi":"10.1080/08989621.2024.2388232","DOIUrl":"https://doi.org/10.1080/08989621.2024.2388232","url":null,"abstract":"<p><p>Research Funding Organizations (RFOs) play an important role in promoting research integrity (RI). Not only do they allocate resources to research institutions and researchers, but they also set and monitor research standards in their funded projects. In spite of their crucial role, there is a paucity of guidance on how RFOs can promote research integrity. As part of the EU-Funded SOPs4RI project, we aimed to address this gap by co-creating guidelines to help RFOs promote RI, engaging a diverse group of stakeholders. Based on a Delphi survey, reviews of evidence and stakeholder interviews, three guideline topics were identified: 1) the selection and evaluation of proposals; 2) monitoring of funded projects; and 3) prevention of unjustified interference. Four sets of co-creation workshops were conducted for each guideline topic, and the input revised and finalized. Understanding these debates could help RFOs from diverse cultural and organizational backgrounds who are developing their own RI guidelines. Therefore, in this paper, we summarize the key results and emphasize the final recommendations. Further, we provide the main points of discussion that occurred during the workshops and explain how they were addressed or resolved in the final guidelines and how they can help in future endeavors to improve funders' practices to foster RI.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-20"},"PeriodicalIF":2.8,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082492","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Are there 34,000 human emotions? Deconstructing patterns of scientific misinformation
Pub Date : 2024-08-28 | DOI: 10.1080/08989621.2024.2393813
Jonas Polfuß
Background: Scientific misinformation is a much-discussed topic, and the COVID-19 crisis has highlighted the importance of reliability in science and research. However, limiting misinformation is complicated because of the growing number of communication channels, in which scientific and nonscientific content are often mixed.
Methods: This case study combines the examination of references, online observation, and a content and frequency analysis to investigate the dissemination of scientific misinformation in the interplay of different genres and media.
Results: Using the example of the claimed existence of 34,000 human emotions, this study demonstrates how questionable statements are spread in science, popular science, and pseudoscience, making it particularly challenging to track and correct them.
Conclusions: The findings highlight epistemic authority, trust, and injustice within and between scientific and nonscientific communities. The author argues that, in the digital age, researchers should defend and monitor scientific principles beyond academia.
{"title":"Are there 34,000 human emotions? Deconstructing patterns of scientific misinformation.","authors":"Jonas Polfuß","doi":"10.1080/08989621.2024.2393813","DOIUrl":"https://doi.org/10.1080/08989621.2024.2393813","url":null,"abstract":"<p><strong>Background: </strong>Scientific misinformation is a much-discussed topic, and the COVID-19 crisis has highlighted the importance of reliability in science and research. However, limiting misinformation is complicated because of the growing number of communication channels, in which scientific and nonscientific content are often mixed.</p><p><strong>Methods: </strong>This case study combines the examination of references, online observation, and a content and frequency analysis to investigate the dissemination of scientific misinformation in the interplay of different genres and media.</p><p><strong>Results: </strong>Using the example of the claimed existence of 34,000 human emotions, this study demonstrates how questionable statements are spread in science, popular science, and pseudoscience, making it particularly challenging to track and correct them.</p><p><strong>Conclusions: </strong>The findings highlight epistemic authority, trust, and injustice within and between scientific and nonscientific communities. The author argues that, in the digital age, researchers should defend and monitor scientific principles beyond academia.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-20"},"PeriodicalIF":2.8,"publicationDate":"2024-08-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142082491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Prevalence of plagiarism in hijacked journals: A text similarity analysis
Pub Date : 2024-08-17 | DOI: 10.1080/08989621.2024.2387210
Anna Abalkina
Background: The study examines the prevalence of plagiarism in hijacked journals, a category of problematic journals that have proliferated over the past decade.
Methods: A quasi-random sample of 936 papers published in 58 hijacked journals that provided free access to their archive as of June 2021 was selected for the analysis. The study utilizes Urkund (Ouriginal) software and manual verification to investigate plagiarism and finds a significant prevalence of plagiarism in hijacked journals.
Results: Out of the analyzed sample papers, 618 (66%) were found to contain instances of plagiarism, and 28% of papers from the sample (n = 259) displayed text similarities of 25% or more. The analysis reveals that a majority of authors originate from developing and ex-Soviet countries, with limited affiliation ties to developed countries and scarce international cooperation in papers submitted to hijacked journals. The absence of rigorous publication requirements, peer review processes, and plagiarism checks in hijacked journals creates an environment where authors can publish texts with a significant amount of plagiarism.
Conclusions: These findings suggest a tendency for fraudulent journals to attract authors who do not uphold scientific integrity principles. The legitimization of papers from hijacked journals in bibliographic databases, along with their citation, poses significant challenges to scientific integrity.
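As a small worked check of the figures reported in the Results above (assuming, as the abstract suggests, that both percentages are computed over the full sample of 936 papers), the counts round to the stated 66% and 28%:

```python
# Quick arithmetic check of the proportions reported in the abstract,
# assuming both percentages use the full sample of 936 papers as denominator.
sample_size = 936
plagiarism_cases = 618         # papers with any detected plagiarism
high_similarity_cases = 259    # papers with text similarity of 25% or more

print(f"Any plagiarism:    {plagiarism_cases / sample_size:.0%}")      # ~66%
print(f"Similarity >= 25%: {high_similarity_cases / sample_size:.0%}") # ~28%
```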
{"title":"Prevalence of plagiarism in hijacked journals: A text similarity analysis.","authors":"Anna Abalkina","doi":"10.1080/08989621.2024.2387210","DOIUrl":"https://doi.org/10.1080/08989621.2024.2387210","url":null,"abstract":"<p><strong>Background: </strong>The study examines the prevalence of plagiarism in hijacked journals, a category of problematic journals that have proliferated over the past decade.</p><p><strong>Methods: </strong>A quasi-random sample of 936 papers published in 58 hijacked journals that provided free access to their archive as of June 2021 was selected for the analysis. The study utilizes Urkund (Ouriginal) software and manual verification to investigate plagiarism and finds a significant prevalence of plagiarism in hijacked journals.</p><p><strong>Results: </strong>Out of the analyzed sample papers, 618 (66%) were found to contain instances of plagiarism, and 28% of papers from the sample (n = 259) displayed text similarities of 25% or more. The analysis reveals that a majority of authors originate from developing and ex-Soviet countries, with limited affiliation ties to developed countries and scarce international cooperation in papers submitted to hijacked journals. The absence of rigorous publication requirements, peer review processes, and plagiarism checks in hijacked journals creates an environment where authors can publish texts with a significant amount of plagiarism.</p><p><strong>Conclusions: </strong>These findings suggest a tendency for fraudulent journals to attract authors who do not uphold scientific integrity principles. The legitimization of papers from hijacked journals in bibliographic databases, along with their citation, poses significant challenges to scientific integrity.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-19"},"PeriodicalIF":2.8,"publicationDate":"2024-08-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141996901","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The PubPeer conundrum: Administrative challenges in research misconduct proceedings
Pub Date : 2024-08-13 | DOI: 10.1080/08989621.2024.2390007
Minal M Caron, Carolyn T Lye, Barbara E Bierer, Mark Barnes
The founders of PubPeer envisioned their website as an online form of a "journal club" that would facilitate post-publication peer review. Recently, PubPeer comments have led to a significant number of research misconduct proceedings - a development that could not have been anticipated when the current federal research misconduct regulations were developed two decades ago. Yet the number, frequency, and velocity of PubPeer comments identifying data integrity concerns, and institutional and government practices that treat all such comments as potential research misconduct allegations, have overwhelmed institutions and threaten to divert attention and resources away from other research integrity initiatives. Recent, high-profile research misconduct cases accentuate the increasing public interest in research integrity and make it inevitable that the use of platforms such as PubPeer to challenge research findings will intensify. This article examines the origins of PubPeer and its central role in the modern era of online scouring of scientific publications for potential problems, and outlines the challenges that institutions must manage in addressing issues identified on PubPeer. In conclusion, we discuss some potential enhancements to the investigatory process specified under federal regulations that could, if implemented, allow institutions to manage some of these challenges more efficiently.
{"title":"The PubPeer conundrum: Administrative challenges in research misconduct proceedings.","authors":"Minal M Caron, Carolyn T Lye, Barbara E Bierer, Mark Barnes","doi":"10.1080/08989621.2024.2390007","DOIUrl":"10.1080/08989621.2024.2390007","url":null,"abstract":"<p><p>The founders of PubPeer envisioned their website as an online form of a \"journal club\" that would facilitate post-publication peer review. Recently, PubPeer comments have led to a significant number of research misconduct proceedings - a development that could not have been anticipated when the current federal research misconduct regulations were developed two decades ago. Yet the number, frequency, and velocity of PubPeer comments identifying data integrity concerns, and institutional and government practices that treat all such comments as potential research misconduct allegations, have overwhelmed institutions and threaten to divert attention and resources away from other research integrity initiatives. Recent, high profile research misconduct cases accentuate the increasing public interest in research integrity and make it inevitable that the use of platforms such as PubPeer to challenge research findings will intensify. This article examines the origins of PubPeer and its central role in the modern era of online-based scouring of scientific publications for potential problems and outlines the challenges that institutions must manage in addressing issues identified on PubPeer. In conclusion, we discuss some potential enhancements to the investigatory process specified under federal regulations that could, if implemented, allow institutions to manage some of these challenges more efficiently.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-19"},"PeriodicalIF":2.8,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141977141","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities
Pub Date : 2024-08-13 | DOI: 10.1080/08989621.2024.2383349
Maarten Derksen, Stephanie Meirmans, Jonna Brenninkmeijer, Jeannette Pols, Annemarijn de Boer, Hans van Eyghen, Surya Gayet, Rolf Groenwold, Dennis Hernaus, Pim Huijnen, Nienke Jonker, Renske de Kleijn, Charlotte F Kroll, Angelos-Miltiadis Krypotos, Nynke van der Laan, Kim Luijken, Ewout Meijer, Rachel S A Pear, Rik Peels, Robin Peeters, Charlotte C S Rulkens, Christin Scholz, Nienke Smit, Rombert Stapel, Joost de Winter
Drawing on our experiences conducting replications, we describe the lessons we learned about replication studies and formulate recommendations for researchers, policy makers, and funders about the role of replication in science and how it should be supported and funded. We first identify a variety of benefits of doing replication studies. Second, we argue that it is often necessary to improve aspects of the original study, even if that means deviating from the original protocol. Third, we argue that replication studies highlight the importance of and need for more transparency of the research process, but also make clear how difficult that is. Fourth, we underline that it is worth trying out replication in the humanities. We finish by formulating recommendations regarding reproduction and replication research, aimed specifically at funders, editors and publishers, and universities and other research institutes.
{"title":"Replication studies in the Netherlands: Lessons learned and recommendations for funders, publishers and editors, and universities.","authors":"Maarten Derksen, Stephanie Meirmans, Jonna Brenninkmeijer, Jeannette Pols, Annemarijn de Boer, Hans van Eyghen, Surya Gayet, Rolf Groenwold, Dennis Hernaus, Pim Huijnen, Nienke Jonker, Renske de Kleijn, Charlotte F Kroll, Angelos-Miltiadis Krypotos, Nynke van der Laan, Kim Luijken, Ewout Meijer, Rachel S A Pear, Rik Peels, Robin Peeters, Charlotte C S Rulkens, Christin Scholz, Nienke Smit, Rombert Stapel, Joost de Winter","doi":"10.1080/08989621.2024.2383349","DOIUrl":"https://doi.org/10.1080/08989621.2024.2383349","url":null,"abstract":"<p><p>Drawing on our experiences conducting replications we describe the lessons we learned about replication studies and formulate recommendations for researchers, policy makers, and funders about the role of replication in science and how it should be supported and funded. We first identify a variety of benefits of doing replication studies. Next, we argue that it is often necessary to improve aspects of the original study, even if that means deviating from the original protocol. Thirdly, we argue that replication studies highlight the importance of and need for more transparency of the research process, but also make clear how difficult that is. Fourthly, we underline that it is worth trying out replication in the humanities. We finish by formulating recommendations regarding reproduction and replication research, aimed specifically at funders, editors and publishers, and universities and other research institutes.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-19"},"PeriodicalIF":2.8,"publicationDate":"2024-08-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141972304","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Is AI my co-author? The ethics of using artificial intelligence in scientific publishing
Pub Date : 2024-08-07 | DOI: 10.1080/08989621.2024.2386285
Barton Moffatt, Alicia Hall
The recent emergence of Large Language Models (LLMs) and other forms of Artificial Intelligence (AI) has led people to wonder whether they could act as an author on a scientific paper. This paper argues that AI systems should not be included on the author by-line. We agree with current commentators that LLMs are incapable of taking responsibility for their work and thus do not meet current authorship guidelines. We identify other problems with responsibility and authorship. In addition, the problems go deeper as AI tools also do not write in a meaningful sense nor do they have persistent identities. From a broader publication ethics perspective, adopting AI authorship would have detrimental effects on an already overly competitive and stressed publishing ecosystem. Deterrence is possible as backward-looking tools will likely be able to identify past AI usage. Finally, we question the value of using AI to produce more research simply for publication's sake.
{"title":"Is AI my co-author? The ethics of using artificial intelligence in scientific publishing.","authors":"Barton Moffatt, Alicia Hall","doi":"10.1080/08989621.2024.2386285","DOIUrl":"10.1080/08989621.2024.2386285","url":null,"abstract":"<p><p>The recent emergence of Large Language Models (LLMs) and other forms of Artificial Intelligence (AI) has led people to wonder whether they could act as an author on a scientific paper. This paper argues that AI systems should not be included on the author by-line. We agree with current commentators that LLMs are incapable of taking responsibility for their work and thus do not meet current authorship guidelines. We identify other problems with responsibility and authorship. In addition, the problems go deeper as AI tools also do not write in a meaningful sense nor do they have persistent identities. From a broader publication ethics perspective, adopting AI authorship would have detrimental effects on an already overly competitive and stressed publishing ecosystem. Deterrence is possible as backward-looking tools will likely be able to identify past AI usage. Finally, we question the value of using AI to produce more research simply for publication's sake.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-17"},"PeriodicalIF":2.8,"publicationDate":"2024-08-07","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141898856","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Training, networking, and support infrastructure for ombudspersons for good research practice: A survey of the status quo in the Berlin research area
Pub Date : 2024-08-04 | DOI: 10.1080/08989621.2024.2376644
Simona Olivieri, Viktor Ullmann
Recent developments in the German academic landscape have seen a shifting approach to promoting research integrity. In 2019, the German Research Foundation (DFG) incentivized all research and higher education institutions to appoint ombudspersons who advise members of their institution in matters of good research practice or suspected research misconduct. These ombudspersons for good research practice, usually professors who act in this function on a voluntary basis, need institutional support to be prepared for and fulfill their diverse duties. The Ombuds-Modelle@BUA (2020) and OBUA - Ombudswesen@BUA (2021-2023) projects worked to advance the professionalization of ombudspersons in the Berlin research area by first investigating the current situation and then offering a meta-level of support in training, networking, and knowledge exchange. Furthermore, the OBUA project engaged in meta-research, investigating the status quo of local ombuds systems and demands for support. The project findings, discussed in this contribution, show that the professionalization of local ombuds systems has been evolving in past years, especially in the areas of training and networking. Infrastructural support measures, however, remain largely underdeveloped.
{"title":"Training, networking, and support infrastructure for ombudspersons for good research practice: A survey of the status quo in the Berlin research area.","authors":"Simona Olivieri, Viktor Ullmann","doi":"10.1080/08989621.2024.2376644","DOIUrl":"https://doi.org/10.1080/08989621.2024.2376644","url":null,"abstract":"<p><p>Recent developments in the German academic landscape have seen a shifting approach to promoting research integrity. In 2019, the German Research Foundation (DFG) incentivized all research and higher education institutions to appoint ombudspersons who advise members of their institution in matters of good research practice or suspected research misconduct. These ombudspersons for good research practice, usually professors who act in this function on a voluntary basis, need institutional support to be prepared for and fulfill their diverse duties. The Ombuds-Modelle@BUA (2020) and OBUA - Ombudswesen@BUA (2021-2023) projects worked to advance the professionalization of ombudspersons in the Berlin research area by first investigating the current situation and then offering a meta-level of support in training, networking, and knowledge exchange. Furthermore, the OBUA project engaged in meta-research, investigating the status quo of local ombuds systems and demands for support. The project findings, discussed in this contribution, show that the professionalization of local ombuds systems has been evolving in past years, especially in the areas of training and networking. Infrastructural support measures, however, remain largely underdeveloped.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"1-20"},"PeriodicalIF":2.8,"publicationDate":"2024-08-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141890860","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A systematic scoping review of the ethics of Contributor Role Ontologies and Taxonomies
Pub Date : 2024-08-01 | Epub Date: 2023-01-14 | DOI: 10.1080/08989621.2022.2161049
Mohammad Hosseini, Bert Gordijn, Q Eileen Wafford, Kristi L Holmes
Contributor Role Ontologies and Taxonomies (CROTs) provide a standard list of roles to specify individual contributions to research. CROTs' most common application has been their inclusion alongside author bylines in scholarly publications. With the recent uptake of CROTs among publishers - particularly the Contributor Role Taxonomy (CRediT) - some have anticipated a positive impact on ethical issues regarding the attribution of credit and responsibilities, but others have voiced concerns about CROTs' shortcomings and ways they could be misunderstood or have unintended consequences. Since these discussions have never been consolidated, this review collated and explored published viewpoints about the ethics of CROTs. After searching Ovid Medline, Scopus, Web of Science, and Google Scholar, 30 papers met the inclusion criteria and were analyzed. We identified eight themes and 20 specific issues related to the ethics of CROTs and provided four recommendations for CROT developers, custodians, or others seeking to use CROTs in their workflows, policy, and practice: 1) compile comprehensive instructions that explain how CROTs should be used; 2) improve the coherence of the terms used; 3) translate roles into languages other than English; and 4) communicate a clear vision about future development plans and be transparent about CROTs' strengths and weaknesses. We conclude that CROTs are not a panacea for unethical attributions and should be complemented with initiatives that support the social and infrastructural transformation of scholarly publications.
{"title":"A systematic scoping review of the ethics of Contributor Role Ontologies and Taxonomies.","authors":"Mohammad Hosseini, Bert Gordijn, Q Eileen Wafford, Kristi L Holmes","doi":"10.1080/08989621.2022.2161049","DOIUrl":"10.1080/08989621.2022.2161049","url":null,"abstract":"<p><p>Contributor Role Ontologies and Taxonomies (CROTs) provide a standard list of roles to specify individual contributions to research. CROTs most common application has been their inclusion alongside author bylines in scholarly publications. With the recent uptake of CROTs among publishers -particularly the Contributor Role Taxonomy (CRediT)- some have anticipated a positive impact on ethical issues regarding the attribution of credit and responsibilities, but others have voiced concerns about CROTs shortcomings and ways they could be misunderstood or have unintended consequences. Since these discussions have never been consolidated, this review collated and explored published viewpoints about the ethics of CROTs. After searching Ovid Medline, Scopus, Web of Science, and Google Scholar, 30 papers met the inclusion criteria and were analyzed. We identified eight themes and 20 specific issues related to the ethics of CROTs and provided four recommendations for CROT developers, custodians, or others seeking to use CROTs in their workflows, policy and practice: 1) Compile comprehensive instructions that explain how CROTs should be used; 2) Improve the coherence of used terms, 3) Translate roles in languages other than English, 4) Communicate a clear vision about future development plans and be transparent about CROTs' strengths and weaknesses. We conclude that CROTs are not the panacea for unethical attributions and should be complemented with initiatives that support social and infrastructural transformation of scholarly publications.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"678-705"},"PeriodicalIF":2.8,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"10533075","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Using co-creation methods for research integrity guideline development - how, what, why and when?
Pub Date : 2024-08-01 | Epub Date: 2023-01-15 | DOI: 10.1080/08989621.2022.2154154
Krishma Labib, Daniel Pizzolato, Pieter Jan Stappers, Natalie Evans, Iris Lechner, Guy Widdershoven, Lex Bouter, Kris Dierickx, Katinka Bergema, Joeri Tijdink
Existing methods for developing research integrity (RI) guidelines are limited in their inclusion of diverse perspectives. While co-creation methods could help to address this, there is little information available to researchers and practitioners on how, why, and when to use co-creation for developing RI guidelines, or on what the outcomes of co-creation methods are. In this paper, we aim to address this gap. First, we discuss how co-creation methods can be used for RI guideline development, based on our experience of developing RI guidelines. We elaborate on steps including preparation of the aims and design; participant sensitization; organizing and facilitating workshops; and analyzing data and translating them into guidelines. Second, we present the resulting RI guidelines, to show what the outcome of co-creation methods is. Third, we reflect on why and when researchers might want to use co-creation methods for developing RI guidelines. We discuss how stakeholder engagement and the inclusion of diverse perspectives are key strengths of co-creation methods. We also reflect that co-creation methods have the potential to make guidelines implementable if followed by additional steps such as revision working groups. We conclude that co-creation methods are a valuable approach to creating new RI guidelines when used together with additional methods.
{"title":"Using co-creation methods for research integrity guideline development - how, what, why and when?","authors":"Krishma Labib, Daniel Pizzolato, Pieter Jan Stappers, Natalie Evans, Iris Lechner, Guy Widdershoven, Lex Bouter, Kris Dierickx, Katinka Bergema, Joeri Tijdink","doi":"10.1080/08989621.2022.2154154","DOIUrl":"10.1080/08989621.2022.2154154","url":null,"abstract":"<p><p>Existing research integrity (RI) guideline development methods are limited in including various perspectives. While co-creation methods could help to address this, there is little information available to researchers and practitioners on how, why and when to use co-creation for developing RI guidelines, nor what the outcomes of co-creation methods are. In this paper, we aim to address this gap. First, we discuss <i>how</i> co-creation methods can be used for RI guideline development, based on our experience of developing RI guidelines. We elaborate on steps including preparation of the aims and design; participant sensitization; organizing and facilitating workshops; and analyzing data and translating them into guidelines. Secondly, we present the resulting RI guidelines, to show <i>what</i> the outcome of co-creation methods are. Thirdly, we reflect on <i>why</i> and <i>when</i> researchers might want to use co-creation methods for developing RI guidelines. We discuss that stakeholder engagement and inclusion of diverse perspectives are key strengths of co-creation methods. We also reflect that co-creation methods have the potential to make guidelines implementable if followed by additional steps such as revision working groups. We conclude that co-creation methods are a valuable approach to creating new RI guidelines when used together with additional methods.</p>","PeriodicalId":50927,"journal":{"name":"Accountability in Research-Policies and Quality Assurance","volume":" ","pages":"531-556"},"PeriodicalIF":2.8,"publicationDate":"2024-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"9090783","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":1,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}