Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems
Pub Date: 2025-01-24 | DOI: 10.1007/s11948-025-00528-1
Dario Cecchini, Veljko Dubljević
The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders' trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency, is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment. Specifically, we identify three main components of traffic moral judgment: driving style, compliance with traffic rules, and risk distribution. We then suggest distinguishable ethical settings for each traffic component.
{"title":"Moral Complexity in Traffic: Advancing the ADC Model for Automated Driving Systems.","authors":"Dario Cecchini, Veljko Dubljević","doi":"10.1007/s11948-025-00528-1","DOIUrl":"10.1007/s11948-025-00528-1","url":null,"abstract":"<p><p>The incorporation of ethical settings in Automated Driving Systems (ADSs) has been extensively discussed in recent years with the goal of enhancing potential stakeholders' trust in the new technology. However, a comprehensive ethical framework for ADS decision-making, capable of merging multiple ethical considerations and investigating their consistency is currently missing. This paper addresses this gap by providing a taxonomy of ADS decision-making based on the Agent-Deed-Consequences (ADC) model of moral judgment. Specifically, we identify three main components of traffic moral judgment: driving style, traffic rules compliance, and risk distribution. Then, we suggest distinguishable ethical settings for each traffic component.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"31 1","pages":"5"},"PeriodicalIF":2.7,"publicationDate":"2025-01-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11761772/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143034715","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LLMs, Truth, and Democracy: An Overview of Risks
Pub Date: 2025-01-23 | DOI: 10.1007/s11948-025-00529-0
Mark Coeckelbergh
While there are many public concerns about the impact of AI on truth and knowledge, especially when it comes to the widespread use of LLMs, there is little systematic philosophical analysis of these problems and their political implications. This paper aims to assist this effort by providing an overview of some truth-related risks in which LLMs may play a role, including risks concerning hallucination and misinformation, epistemic agency and epistemic bubbles, bullshit and relativism, and epistemic anachronism and epistemic incest. It then offers arguments for why these problems are not only epistemic issues but also raise problems for democracy, since they undermine its epistemic basis, especially if we assume theories of democracy that go beyond minimalist views. I end with a short reflection on what can be done about these political-epistemic risks, pointing to education as one of the sites for change.
{"title":"LLMs, Truth, and Democracy: An Overview of Risks.","authors":"Mark Coeckelbergh","doi":"10.1007/s11948-025-00529-0","DOIUrl":"10.1007/s11948-025-00529-0","url":null,"abstract":"<p><p>While there are many public concerns about the impact of AI on truth and knowledge, especially when it comes to the widespread use of LLMs, there is not much systematic philosophical analysis of these problems and their political implications. This paper aims to assist this effort by providing an overview of some truth-related risks in which LLMs may play a role, including risks concerning hallucination and misinformation, epistemic agency and epistemic bubbles, bullshit and relativism, and epistemic anachronism and epistemic incest, and by offering arguments for why these problems are not only epistemic issues but also raise problems for democracy since they undermine its epistemic basis- especially if we assume democracy theories that go beyond minimalist views. I end with a short reflection on what can be done about these political-epistemic risks, pointing to education as one of the sites for change.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"31 1","pages":"4"},"PeriodicalIF":2.7,"publicationDate":"2025-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11759458/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"143030055","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research
Pub Date: 2024-12-24 | DOI: 10.1007/s11948-024-00523-y
Theresa Willem, Marie-Christine Fritzsche, Bettina M Zimmermann, Anna Sierawska, Svenja Breuer, Maximilian Braun, Anja K Ruess, Marieke Bak, Franziska B Schönweitz, Lukas J Meier, Amelia Fiske, Daniel Tigard, Ruth Müller, Stuart McLennan, Alena Buyx
Integrating artificial intelligence (AI) into critical domains such as healthcare holds immense promise. Nevertheless, significant challenges must be addressed to avoid harm, promote the well-being of individuals and societies, and ensure ethically sound and socially just technology development. Innovative approaches like Embedded Ethics, which refers to integrating ethics and social science into technology development based on interdisciplinary collaboration, are emerging to address issues of bias, transparency, misrepresentation, and more. This paper aims to develop this approach further to enable future projects to effectively deploy it. Based on the practical experience of using ethics and social science methodology in interdisciplinary AI-related healthcare consortia, this paper presents several methods that have proven helpful for embedding ethical and social science analysis and inquiry. They include (1) stakeholder analyses, (2) literature reviews, (3) ethnographic approaches, (4) peer-to-peer interviews, (5) focus groups, (6) interviews with affected groups and external stakeholders, (7) bias analyses, (8) workshops, and (9) interdisciplinary results dissemination. We believe that applying Embedded Ethics offers a pathway to stimulate reflexivity, proactively anticipate social and ethical concerns, and foster interdisciplinary inquiry into such concerns at every stage of technology development. This approach can help shape responsible, inclusive, and ethically aware technology innovation in healthcare and beyond.
{"title":"Embedded Ethics in Practice: A Toolbox for Integrating the Analysis of Ethical and Social Issues into Healthcare AI Research.","authors":"Theresa Willem, Marie-Christine Fritzsche, Bettina M Zimmermann, Anna Sierawska, Svenja Breuer, Maximilian Braun, Anja K Ruess, Marieke Bak, Franziska B Schönweitz, Lukas J Meier, Amelia Fiske, Daniel Tigard, Ruth Müller, Stuart McLennan, Alena Buyx","doi":"10.1007/s11948-024-00523-y","DOIUrl":"10.1007/s11948-024-00523-y","url":null,"abstract":"<p><p>Integrating artificial intelligence (AI) into critical domains such as healthcare holds immense promise. Nevertheless, significant challenges must be addressed to avoid harm, promote the well-being of individuals and societies, and ensure ethically sound and socially just technology development. Innovative approaches like Embedded Ethics, which refers to integrating ethics and social science into technology development based on interdisciplinary collaboration, are emerging to address issues of bias, transparency, misrepresentation, and more. This paper aims to develop this approach further to enable future projects to effectively deploy it. Based on the practical experience of using ethics and social science methodology in interdisciplinary AI-related healthcare consortia, this paper presents several methods that have proven helpful for embedding ethical and social science analysis and inquiry. They include (1) stakeholder analyses, (2) literature reviews, (3) ethnographic approaches, (4) peer-to-peer interviews, (5) focus groups, (6) interviews with affected groups and external stakeholders, (7) bias analyses, (8) workshops, and (9) interdisciplinary results dissemination. We believe that applying Embedded Ethics offers a pathway to stimulate reflexivity, proactively anticipate social and ethical concerns, and foster interdisciplinary inquiry into such concerns at every stage of technology development. This approach can help shape responsible, inclusive, and ethically aware technology innovation in healthcare and beyond.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"31 1","pages":"3"},"PeriodicalIF":2.7,"publicationDate":"2024-12-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11668859/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142883535","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Moral Intuition Regarding the Possibility of Conscious Human Brain Organoids: An Experimental Ethics Study
Pub Date: 2024-12-19 | DOI: 10.1007/s11948-024-00525-w
Koji Ota, Tetsushi Tanibe, Takumi Watanabe, Kazuki Iijima, Mineki Oguchi
The moral status of human brain organoids (HBOs) has been debated in view of the future possibility that they may acquire phenomenal consciousness. This study empirically investigates the moral sensitivity in people's intuitive judgments about actions toward conscious HBOs. The results showed that the presence/absence of pain experience in HBOs affected the judgment about the moral permissibility of actions such as creating and destroying the HBOs; however, the presence/absence of visual experience in HBOs also affected the judgment. These findings suggest that people's intuitive judgments about the moral status of HBOs are sensitive to the valence-independent value of phenomenal consciousness. We discuss how these observations can have normative implications; particularly, we argue that they put pressure on the theoretical view that the moral status of conscious HBOs is grounded solely in the valence-dependent value of consciousness. We also discuss how our findings can be informative even when such a theoretical view is finally justified or when the future possibility of conscious HBOs is implausible.
{"title":"Moral Intuition Regarding the Possibility of Conscious Human Brain Organoids: An Experimental Ethics Study.","authors":"Koji Ota, Tetsushi Tanibe, Takumi Watanabe, Kazuki Iijima, Mineki Oguchi","doi":"10.1007/s11948-024-00525-w","DOIUrl":"10.1007/s11948-024-00525-w","url":null,"abstract":"<p><p>The moral status of human brain organoids (HBOs) has been debated in view of the future possibility that they may acquire phenomenal consciousness. This study empirically investigates the moral sensitivity in people's intuitive judgments about actions toward conscious HBOs. The results showed that the presence/absence of pain experience in HBOs affected the judgment about the moral permissibility of actions such as creating and destroying the HBOs; however, the presence/absence of visual experience in HBOs also affected the judgment. These findings suggest that people's intuitive judgments about the moral status of HBOs are sensitive to the valence-independent value of phenomenal consciousness. We discuss how these observations can have normative implications; particularly, we argue that they put pressure on the theoretical view that the moral status of conscious HBOs is grounded solely in the valence-dependent value of consciousness. We also discuss how our findings can be informative even when such a theoretical view is finally justified or when the future possibility of conscious HBOs is implausible.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"31 1","pages":"2"},"PeriodicalIF":2.7,"publicationDate":"2024-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11659373/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142856491","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers
Pub Date: 2024-12-17 | DOI: 10.1007/s11948-024-00526-9
Gabriela Arriagada-Bruneau, Claudia López, Alexandra Davidoff
We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the "isolationist approach to AI bias," a trend in the AI literature where biases are treated as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can either overwhelm developers who try to manage each potential bias individually or promote an uncritical approach to understanding how biases influence developers' decision-making. The BNA fosters dialogue and a critical stance among developers, guided by external experts, using graphical representations to depict biased connections. To test the BNA, we conducted a pilot case study on the "waiting list" project, in which a small AI developer team created a healthcare waiting-list NLP model in Chile. The analysis showed promising findings: (i) the BNA aids in visualizing interconnected biases and their impacts, facilitating ethical reflection in a more accessible way; (ii) it promotes transparency in decision-making throughout AI development; and (iii) more focus is needed on professional biases and material limitations as sources of bias in AI development.
{"title":"A Bias Network Approach (BNA) to Encourage Ethical Reflection Among AI Developers.","authors":"Gabriela Arriagada-Bruneau, Claudia López, Alexandra Davidoff","doi":"10.1007/s11948-024-00526-9","DOIUrl":"10.1007/s11948-024-00526-9","url":null,"abstract":"<p><p>We introduce the Bias Network Approach (BNA) as a sociotechnical method for AI developers to identify, map, and relate biases across the AI development process. This approach addresses the limitations of what we call the \"isolationist approach to AI bias,\" a trend in AI literature where biases are seen as separate occurrences linked to specific stages in an AI pipeline. Dealing with these multiple biases can trigger a sense of excessive overload in managing each potential bias individually or promote the adoption of an uncritical approach to understanding the influence of biases in developers' decision-making. The BNA fosters dialogue and a critical stance among developers, guided by external experts, using graphical representations to depict biased connections. To test the BNA, we conducted a pilot case study on the \"waiting list\" project, involving a small AI developer team creating a healthcare waiting list NPL model in Chile. The analysis showed promising findings: (i) the BNA aids in visualizing interconnected biases and their impacts, facilitating ethical reflection in a more accessible way; (ii) it promotes transparency in decision-making throughout AI development; and (iii) more focus is necessary on professional biases and material limitations as sources of bias in AI development.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"31 1","pages":"1"},"PeriodicalIF":2.7,"publicationDate":"2024-12-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11652403/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142840116","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Correction: Transforming Ethics Education Through a Faculty Learning Community: "I'm Coming Around to Seeing Ethics as Being Maybe as Important as Calculus"
Pub Date: 2024-12-11 | DOI: 10.1007/s11948-024-00527-8
Justin L Hess, Elizabeth Sanders, Grant A Fore, Martin Coleman, Mary Price, Samuel Cornelius Nyarko, Brandon Sorge
{"title":"Correction: Transforming Ethics Education Through a Faculty Learning Community: \"I'm Coming Around to Seeing Ethics as Being Maybe as Important as Calculus\".","authors":"Justin L Hess, Elizabeth Sanders, Grant A Fore, Martin Coleman, Mary Price, Samuel Cornelius Nyarko, Brandon Sorge","doi":"10.1007/s11948-024-00527-8","DOIUrl":"10.1007/s11948-024-00527-8","url":null,"abstract":"","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"62"},"PeriodicalIF":2.7,"publicationDate":"2024-12-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11634907/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142814779","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Editorial: Topical Collection "Ethical and Societal Implications of AgeTech"
Pub Date: 2024-12-02 | DOI: 10.1007/s11948-024-00521-0
Giovanni Rubeis, Andrew Sixsmith
AgeTech refers to a growing sector that is advancing the use of technologies such as information and communication technologies (ICTs), mobile technologies, robotics, wearables, and smart home systems to enhance the lives of older adults. Although AgeTech can be seen as an opportunity to empower older people and enhance their overall quality of life, crucial ethical issues have to be addressed. The articles in this topical collection focus on these and other ethical questions, particularly with respect to the key emerging technologies of AI and robotics. The overall aim is to explore the multifaceted ethical landscape of emerging AgeTech and to provide frameworks and strategies for ethically appropriate technologies that support the health, well-being, and quality of life of older adults.
{"title":"Editorial: Topical Collection \"Ethical and Societal Implications of AgeTech\".","authors":"Giovanni Rubeis, Andrew Sixsmith","doi":"10.1007/s11948-024-00521-0","DOIUrl":"10.1007/s11948-024-00521-0","url":null,"abstract":"<p><p>AgeTech refers to a growing sector that is advancing the use of technologies, such as information and communication technologies (ICTs), mobile technologies, robotics, wearables and smart home systems to enhance the lives of older adults. Although AgeTech can be seen as an opportunity for empowering older people and enhance their overall quality of life, crucial ethical issues have to be addressed. The articles in this topical collection focus on these and other ethical questions, particularly in respect to key emerging technologies of AI and robotics. The overall aim is to explore the multifaceted ethical landscape of emerging AgeTech and to provide frameworks and strategies for ethically-appropriate technologies that support the health, well-being, and quality of life of older adults.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"61"},"PeriodicalIF":2.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611967/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774080","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism
Pub Date: 2024-12-02 | DOI: 10.1007/s11948-024-00524-x
Henrik Skaug Sætra, Evan Selinger
Can technology resolve social problems by reducing them to engineering challenges? In the 1960s, Alvin Weinberg answered yes, popularizing the term "techno-fix" in the process. The concept was immediately criticized and over time evolved into a disparaging term: a synonym for unrealistic technological proposals and their advocates. As the debate progressed, skepticism grew to include condemnation of a related term, "techno-solutionism." Despite extensive criticism, both "techno-fix" and "techno-solutionism" remain ill-defined concepts. In this article, we provide more precise definitions and clearly distinguish between techno-fixes and techno-solutionism through conceptual engineering. By refining these concepts, we aim to advance the discussion and lay the groundwork for more productive analyses of the role of technology in solving social problems.
{"title":"Technological Remedies for Social Problems: Defining and Demarcating Techno-Fixes and Techno-Solutionism.","authors":"Henrik Skaug Sætra, Evan Selinger","doi":"10.1007/s11948-024-00524-x","DOIUrl":"10.1007/s11948-024-00524-x","url":null,"abstract":"<p><p>Can technology resolve social problems by reducing them to engineering challenges? In the 1960s, Alvin Weinberg answered yes, popularizing the term \"techno-fix\" in the process. The concept was immediately criticized and over time evolved into a disparaging term-a synonym for unrealistic technological proposals and their advocates. As the debate progressed, skepticism grew to include condemnation of a related term: \"techno-solutionism.\" Despite extensive criticism, both \"techno-fix\" and \"techno-solutionism\" remain ill-defined concepts. In this article, we provide more precise definitions and clearly distinguish between techno-fixes and techno-solutionism through conceptual engineering. By refining these concepts, we aim to advance the discussion and lay the groundwork for more productive analyses of the role of technology in solving social problems.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"60"},"PeriodicalIF":2.7,"publicationDate":"2024-12-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11611926/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142774082","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Decisions, Decisions, Decisions: An Ethnographic Study of Researcher Discretion in Practice
Pub Date: 2024-11-29 | DOI: 10.1007/s11948-024-00481-5
Tom van Drimmelen, M Nienke Slagboom, Ria Reis, Lex M Bouter, Jenny T van der Steen
This paper is a study of the decisions that researchers take during the execution of a research plan: their researcher discretion. Flexible research methods are generally seen as undesirable, and many methodologists urge researchers to eliminate these so-called 'researcher degrees of freedom' from research practice. However, what this looks like in practice is unclear. Based on twelve months of ethnographic fieldwork in two end-of-life research groups, in which we observed research practice, conducted interviews, and collected documents, we explore when researchers are required to make decisions and what these decisions entail. An abductive analysis of these data showed that researchers are constantly required to further interpret research plans, indicating that there is no clear division between planning and plan execution. This discretion emerges when a research protocol is either underdetermined or overdetermined, in which case researchers need to operationalise or adapt the plans, respectively. In addition, we found that many of these instances of researcher discretion are exercised implicitly: within the research groups, it was occasionally unclear which topic merited an active decision, or which action could retroactively be categorised as one. Our ethnographic study of research practice suggests that researcher discretion is an integral and inevitable aspect of research practice, as many elements of a research protocol will need to be either further operationalised or adapted during execution. Moreover, researchers may find it difficult to identify their own discretion, which limits how transparent they can be about it.
{"title":"Decisions, Decisions, Decisions: An Ethnographic Study of Researcher Discretion in Practice.","authors":"Tom van Drimmelen, M Nienke Slagboom, Ria Reis, Lex M Bouter, Jenny T van der Steen","doi":"10.1007/s11948-024-00481-5","DOIUrl":"10.1007/s11948-024-00481-5","url":null,"abstract":"<p><p>This paper is a study of the decisions that researchers take during the execution of a research plan: their researcher discretion. Flexible research methods are generally seen as undesirable, and many methodologists urge to eliminate these so-called 'researcher degrees of freedom' from the research practice. However, what this looks like in practice is unclear. Based on twelve months of ethnographic fieldwork in two end-of-life research groups in which we observed research practice, conducted interviews, and collected documents, we explore when researchers are required to make decisions, and what these decisions entail.An abductive analysis of this data showed that researchers are constantly required to further interpret research plans, indicating that there is no clear division between planning and plan execution. This discretion emerges either when a research protocol is underdetermined or overdetermined, in which case they need to operationalise or adapt the plans respectively. In addition, we found that many of these instances of researcher discretion are exercised implicitly. Within the research groups it was occasionally not clear which topic merited an active decision, or which action could retroactively be categorised as one.Our ethnographic study of research practice suggests that researcher discretion is an integral and inevitable aspect of research practice, as many elements of a research protocol will either need to be further operationalised or adapted during its execution. Moreover, it may be difficult for researchers to identify their own discretion, limiting their effectivity in transparency.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"59"},"PeriodicalIF":2.7,"publicationDate":"2024-11-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11607100/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142752029","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Awareness of Jordanian Researchers About Predatory Journals: A Need for Training
Pub Date: 2024-11-28 | DOI: 10.1007/s11948-024-00519-8
Omar F Khabour, Karem H Alzoubi, Wesal M Aldarabseh
Open publishing is expected to become the dominant model in the future. However, alongside this model, predatory journals are increasingly appearing. The current study investigated the awareness of researchers in Jordan about predatory journals and the strategies they use to avoid them. The study included 558 researchers from Jordan. A total of 34.0% of the participants reported a high ability to identify predatory journals, while 27.0% reported a low ability to do so. Most participants (64.0%) apply the "Think. Check. Submit." strategy to avoid predatory journals. Nevertheless, 11.9% of the sample reported having been the victim of a predatory journal. Multinomial regression analysis showed that gender, number of publications, use of Beall's list of predatory journals, and application of the "Think. Check. Submit." strategy were predictors of a high ability to identify predatory journals. Participants also reported using resources such as Scopus, Clarivate, and the DOAJ, as well as membership of publication ethics committees, to validate journals before publication. Finally, most participants (88.4%) agreed to attend a training module on how to identify predatory journals. In conclusion, Jordanian researchers use valid strategies to avoid predatory journals, and implementing a training module may enhance their ability to identify them.
{"title":"Awareness of Jordanian Researchers About Predatory Journals: A Need for Training.","authors":"Omar F Khabour, Karem H Alzoubi, Wesal M Aldarabseh","doi":"10.1007/s11948-024-00519-8","DOIUrl":"10.1007/s11948-024-00519-8","url":null,"abstract":"<p><p>The use of the open publishing is expected to be the dominant model in the future. However, along with the use of this model, predatory journals are increasingly appearing. In the current study, the awareness of researchers in Jordan about predatory journals and the strategies utilized to avoid them was investigated. The study included 558 researchers from Jordan. A total of 34.0% of the participants reported a high ability to identify predatory journals, while 27.0% reported a low ability to identify predatory journals. Most participants (64.0%) apply \"Think. Check. Submit.\" strategy to avoid predatory journals. However, 11.9% of the sample reported being a victim of a predatory journal. Multinomial regression analysis showed gender, number of publications, using Beall's list of predatory journals, and applying \"Think. Check. Submit.\" strategy were predictors of the high ability to identify predatory journals. Participants reported using databases such as Scopus, Clarivate, membership in the publishing ethics committee, and DOAJ to validate the journal before publication. Finally, most participants (88.4%) agreed to attend a training module on how to identify predatory journals. In conclusion, Jordanian researchers use valid strategies to avoid predatory journals. Implementing a training module may enhance researchers' ability to identify predatory journals.</p>","PeriodicalId":49564,"journal":{"name":"Science and Engineering Ethics","volume":"30 6","pages":"58"},"PeriodicalIF":2.7,"publicationDate":"2024-11-28","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://www.ncbi.nlm.nih.gov/pmc/articles/PMC11604683/pdf/","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142741196","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":2,"RegionCategory":"哲学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}