The ethics of using generative AI for qualitative data analysis

IF 6.5 | Tier 2 (Management) | Q1 INFORMATION SCIENCE & LIBRARY SCIENCE | Information Systems Journal | Pub Date: 2024-01-21 | DOI: 10.1111/isj.12504
Robert M. Davison, Hameed Chughtai, Petter Nielsen, Marco Marabelli, Federico Iannacci, Marjolein van Offenbeek, Monideepa Tarafdar, Manuel Trenz, Angsana A. Techatassanasoontorn, Antonio Díaz Andrade, Niki Panteli
{"title":"使用生成式人工智能进行定性数据分析的伦理问题","authors":"Robert M. Davison,&nbsp;Hameed Chughtai,&nbsp;Petter Nielsen,&nbsp;Marco Marabelli,&nbsp;Federico Iannacci,&nbsp;Marjolein van Offenbeek,&nbsp;Monideepa Tarafdar,&nbsp;Manuel Trenz,&nbsp;Angsana A. Techatassanasoontorn,&nbsp;Antonio Díaz Andrade,&nbsp;Niki Panteli","doi":"10.1111/isj.12504","DOIUrl":null,"url":null,"abstract":"<p>It is important to note that the text of this editorial is entirely written by humans without any Generative Artificial Intelligence (GAI) contribution or assistance. The Editor of the ISJ (Robert M. Davison) was contacted by one of the ISJ's Associate Editors (AE) (Marjolein van Offenbeek) who explained that the qualitative data analysis software ATLAS.ti was offering a free-of-charge analysis of research data if the researcher shared the same data with ATLAS.ti for training purposes for their GAI1 analysis tool. Marjolein believed that this spawned an ethical dilemma. Robert forwarded Marjolein's email to the ISJ's Senior Editors (SEs) and Associate Editors (AEs) and invited their comments. Nine of the SEs and AEs replied with feedback. We (the 11 contributing authors) then engaged in a couple of rounds of brainstorming before amalgamating the text in a shared document. This was initially created by Hameed Chughtai, but then commented on and edited by all the members of the team. The final version constitutes the shared opinion of the 11 members of the team, after several rounds of discussion. It is important to emphasise that the 11 authors have contrasting views about whether GAI should be used in qualitative data analysis, but we have reached broad agreement about the ethical issues associated with this use of GAI. Although many other topics related to the use of GAI in research could be discussed, for example, how GAI could be effectively used for qualitative analysis, we believe that ethical concerns overarch many of these other topics. Thus, in this editorial we exclusively focus on the ethics associated with using GAI for qualitative data analysis.</p><p>The emergence and ready availability of GAI has profound implications for research. This powerful technology, capable of generating human-like text, has the potential to create many opportunities for researchers in all disciplines. However, the technology brings ethical challenges and risks. We unearth and comment on many facets of qualitative data-related ethics. Our goal is to engage with and inform the many stakeholders of the ISJ, including other editors, (prospective) authors, reviewers and readers.</p><p>We intend that this discussion serves as a starting point for a broader conversation on how we can responsibly navigate the evolving landscape of GAI in research. It is important to point out that we are not advocating for or against the use of GAI in research, nor are we attempting to find ways to make it easier (or harder) for researchers to incorporate GAI in their research designs and practices. Our focus relates to the ethical issues associated with GAI use in analysing qualitative data that scholars, in the conduct of their academic research, may encounter and should consider.</p><p>One of the allures of GAI lies in its capability to discover patterns to produce new codes in a data corpus faster and more comprehensively than humans, by drawing from its trained data. This capability implies that GAI may identify patterns missed by humans. 
However, speed and comprehensiveness do not necessarily translate to appropriateness, substantive helpfulness or insightful understanding. More fundamentally, speed and comprehensiveness should not be achieved at the cost of unethical research practices or of the commitment to ‘do no harm’ to individuals, communities, organisations and society from research participation (Iphofen &amp; Tolich, <span>2018</span>). Thus, in our view, a utilitarian argument (the ends justify the means) constitutes an inadequate justification for GAI use for qualitative data analysis. Such a utilitarian argument would permit the unscrupulous or unprincipled researcher to use GAI, however they liked, in pursuit of a goal that might superficially benefit that individual researcher yet that would also violate the codes of ethics or behaviour that we esteem. Thus, the means that we employ to undertake research must be ethical, and must be seen to be ethical by our peers via the peer review system.</p><p>The use of GAI in research is not merely a question of tool selection, but also a matter that touches on the essence of research integrity, conduct and value. It challenges us to redefine what we consider as ‘doing research’ and pushes us to revisit how we maximise the benefits of research and minimise risks and harms for individuals and society (Gibbs, <span>2018</span>). It also challenges us to consider our understanding and the implications of authorship, data ownership and rights, responsibility, privacy and transparency. Thus, the overarching question that we ask is ‘What are the ethical issues that might surface when using GAI for the analysis of qualitative data’? (cf. UNESCO, <span>2021</span>).</p><p>To address this question, we focus on five areas: (1) data ownership and rights; (2) data privacy and transparency; (3) interpretive sufficiency; (4) biases manifested in GAI, and (5) researcher responsibilities and agency. We foresee that this exploration would eventually enable us to inform the development of living guidelines for qualitative data analysis, pertaining to ISJ and the Information Systems field more generally, in the context of GAI. Our hope is that such living guidelines will align with broader discussions in scholarship and emerging AI policies around the world2<sup>,</sup>3. In addition, this editorial reacts to GAI-related policies that other journals have already set, for instance, the recent Academy of Management Review's editorial (Grimes et al., <span>2023</span>), which are indicators of the siloed approach that not just academic fields but also individual journals are taking, for making sense of the use of GAI in scholarly settings. We instead aim at developing a fluid document that simply points to potential ethical implications of using GAI, and we do so by focusing specifically on qualitative data analysis.</p><p>We are concerned about surrendering research data to commercial entities, for example, sharing data with a GAI tool in exchange for automated analysis because that could violate data rights and confidentiality. While automated data analysis is standard for quantitative data, using Language Learning Models such as ChatGPT for qualitative data is different as they require data for training their models. Qualitative research is typically an in-depth inquiry that uses ‘relatively unstructured forms of data, whether produced through observation, interviewing and/or the analysis of documents’ (Hammersley &amp; Traianou, <span>2012</span>, p. 1). 
As such, the ‘production of such data can involve researchers in quite close, and sometimes long-term, relationships with people’ (ibid.), suggesting that the participant should not simply be viewed as a data machine. Instead, participants should be highly valued as collaborators in our research endeavours (Oakley, <span>2013</span>). Therefore, researchers can be considered as violating the commitment of non-maleficence given to our participants by volunteering their data to train Language Learning Models. This would happen mainly because we cannot guarantee that robust precautions are in place to avoid possible harm stemming from sharing their data with these models.</p><p>A related concern stems from providing research data to a profit-driven entity to enhance the quality of its product. We acknowledge that research organisations and universities may have internal GAI platforms for such purposes. However, unless explicitly included in Institutional Review Board (IRB) applications, providing research data to GAI platforms, whether hosted by internal or profit-driven entities, could conflict with data privacy protection laws and most IRB approvals.4 It is suggested that IRBs may soon start considering the use of GAI in their approval processes, that is, they will (and probably should) become more sensitive to the application of GAI by researchers, and may even proscribe or restrict such application.</p><p>One of the ethical principles agreed by the UK Academy of Social Sciences in 2015 is ‘[a]ll social science should respect the privacy, autonomy, diversity, values and dignity of individuals, groups and communities’ (UK Academy of Social Sciences, <span>2015</span>). Therefore, the application of GAI raises privacy concerns, particularly when sensitive research data is shared with an AI tool. When a research study involves organisations, privacy issues surface when they require the signing of Non-Disclosure Agreements (NDAs) as a condition for data collection but are not made aware that the researchers may share the same data with an AI tool and its owners (Pearlman, <span>2017</span>). Researchers will need to ensure that the involved organisations affirm their consent to this data exchange, if they plan to use a GAI analysis tool that uses the data they collected for further training. In addition, individual data is also subject to privacy protection. NDAs are negotiated between the researcher and the organisation's leadership or their legal office. However, the data collected relates to employees, clients and potentially other stakeholders, who might not be aware that their conversations with researchers will be handed over to third parties that do not fully disclose to which ends and how they will use these data and whether they will operate appropriate, and for future re-combinations, sufficient deidentification practices. Thus, the researcher must arguably also obtain consent from each employee and any other stakeholder, to transfer their data to each and every specific third party, such as a qualitative GAI-powered software. Participants who decline to provide such consent, or who simply fail to respond to the request for consent, must have their data excluded from what is shared. Steps to ensure data privacy of individuals must be included in the IRB approval documents. 
In the European Union and the United States, for instance, human subjects participating in research must consent to how data about them is processed and have the right to withdraw any consent previously given with the consequence that this person's personal data must be deleted.</p><p>We highlight two intertwined concerns about interpretive sufficiency in qualitative analysis. Software for qualitative data analysis has been in use since at least the early 1980s.5 For instance, both ATLAS.ti and NVivo (among others) allow automated coding by structure, style or existing coding patterns. However, the incorporation of GAI into this process has new implications because it allows coding ‘from scratch’ based on the external datasets with which the GAI was trained. There is an inherent value associated with manual (i.e., not software assisted) data analysis, especially in qualitative interpretive and ethnographic research. Analysis of qualitative data often relies on the researcher's creative and conceptual ability to discern meaning, salience and interconnectedness of logic in emerging themes (Amis &amp; Silk, <span>2008</span>). GAI tools, while powerful and efficient, only have access to the text. As of today, they cannot capture the nuances of the research environment, body language, facial expressions, tone of voice, interactions between the researcher and the research subjects, and the researcher's own accumulated understanding of the domain. Moreover, automated systems detecting these ‘soft’ characteristics of human interactions are highly questioned because of their poor reliability and potential for discriminations (Crawford, <span>2021</span>). Relying on such tools thus constitutes both an abdication of our responsibilities as researchers and a voluntary diminishment in our own agency and human consciousness required in the construction of knowledge (Amis &amp; Silk, <span>2008</span>).</p><p>A deeper and more contextually nuanced analysis of qualitative data is important because knowledge encompasses syntactic, semantic and pragmatic layers (Mingers, <span>2008</span>). Automated qualitative coding can only examine syntax, but cannot genuinely grasp data's semantic and pragmatic aspects. As researchers, we do not claim to be neutral or value-free in our analyses, and indeed being value-free may be inappropriate; for instance, with respect to critical interpretive studies, an automated coding process could lead to a banal and neutral analysis that fails to identify or disclose hidden aspects in the qualitative data. The output analysis will then incorporate an incomplete and potentially superficial reading of the data. Further, mainstream (or neutral) chunking and coding could influence and limit our potential learning from the data analysis. For instance, it is important that the researcher be aware of the risks associated with introducing and reinforcing existing biases, especially in research on marginalisation, oppression, activism, conflicts and decolonisation. In addition, in an investigation of socially relevant problems stemming from digital technology, the researcher bounded by personal and community responsibility joins forces with the researched to produce understanding that empowers those disadvantaged by the technology (Amis &amp; Silk, <span>2008</span>). These are areas where human insight, empathy and understanding (verstehen) are crucial. 
For these reasons, the use of GAI as a data analysis assistant becomes ethically questionable because it will necessarily exclude some aspects that are central to the analysis and that would be central if it was undertaken by humans.</p><p>OpenAI's ChatGPT6 service openly acknowledges that their results are ‘not free from biases and stereotypes’, are ‘skewed towards Western views’ and can ‘reinforce a user's biases’. As an example, ATLAS.ti (qualitative analysis software) uses OpenAI's GPT model and clearly acknowledges that their results may encode ‘social biases, for example, via stereotypes or negative sentiment towards certain groups’.7 Thus, GAI may produce analyses that are biased, unjust or discriminatory to certain groups or individuals, based on the data they are trained on or the criteria they use to analyse qualitative data. GAI may generate patterns (text, images, etc.) that reinforce stereotypes, biases or prejudices against people of different race, gender, culture or background. GAI may not account for the different needs, preferences and values of diverse stakeholders and communities, and may impose a dominant or hegemonic perspective on the data analysis process. GAI-based analysis can also perpetuate or exacerbate the colonisation or marginalisation of other modes of knowledge, cultures, or values, by privileging a certain perspective on the data analysis process, for instance, one reflecting Western cultures, because of training data prevalently collected online (Bender &amp; Friedman, <span>2018</span>). GAI may rely on data sources, methods or frameworks that are derived from or influenced by colonial or imperial histories, ideologies or power structures. GAI may not acknowledge or address any of these ethical, social or political implications of its data coding.</p><p>In the sections above, we have noted the difficulties associated with developing a fair and objective analysis with GAI. As a result, an interpretation developed partially or fully through GAI-based data analysis may be difficult to critically explain, as the algorithms and models that underlie the data analysis process may be complex, opaque or black-boxed. For example, GAI tools often use a combination of neural networks, genetic algorithms and machine learning techniques that are not easily interpretable or transparent to human users or researchers. GAI also does not provide clear and coherent rationales for the outputs it produces and may not allow for feedback or correction.</p><p>The emergent and interactional nature of most qualitative research requires more scrutiny of the researcher and their conduct (Iphofen &amp; Tolich, <span>2018</span>). Drawing on the discussion on data ethics in the age of algorithms, some authors argue that ‘the gradual reduction of human involvement or even oversight over many automatic processes, pose pressing issues of fairness, responsibility and respect of human rights, among others’ (Floridi &amp; Taddeo, <span>2016</span>, p. 2). In addition to the researcher's responsibilities and obligations to participants, the researcher also has to take epistemic responsibility, which involves being accountable to the evidence where evidence is relationally constituted between the researcher and the researched and to assume responsibility for what the researcher claims to know (Code, <span>2001</span>). Some researchers may argue that GAI could help them identify preliminary patterns from large datasets, providing them with initial insights. 
However, it is recognised that GAI is not infallible. For instance, it is prone to what are known as ‘hallucinations’, where it ‘lies’ and ‘fabricates facts’ (Ji et al., <span>2023</span>). Thus, the veracity of any ‘preliminary patterns’ identified through technology (such as GAI) must be checked by the researcher who must both claim authorship of them and so take responsibility for the text; accountability remains with the researcher (Gregor, <span>2024</span>). GAI cannot be listed as a co-author (this is publisher policy at ISJ), and thus cannot be permitted to have any agency in the research or its outputs. To wit, we consider blind, automated applications of GAI for data analysis without human agency unethical in every aspect of the research process.</p><p>We acknowledge that some ethical issues are specific to particular GAI implementations, which change over time, emphasising the need for clear quality criteria. GAI implementations could also be private. For example, some research organisations have established their own GAI service, enabling students and researchers to use OpenAI's GPT models within university and national data privacy requirements.8 In this view, the ethical concerns, such as privacy, are specific to implementations of GAI and are not necessarily general issues with the technology class. However, private Language Learning Models are not necessarily expected to improve the quality of coding; they might still be too generic to address specific research questions. While they can, to a certain extent, address privacy issues, they cannot unequivocally improve analysis quality and their biases may still be present.</p><p>Following our analysis and given all these characteristics of GAI, we suggest that researchers should engage in critical reflexivity and vigilance to identify, understand and robustly address the ethical issues regarding the use of GAI in their research practices involving qualitative data analysis. We do not wish to see a situation where we are lulled into thinking that GAI use is ‘normal’ and that researchers do not need either to pay particular attention to it, or to report their use of it.</p><p>Robert M. Davison, Hameed Chughtai, Petter Nielsen, Marco Marabelli, Federico Iannacci, Marjolein van Offenbeek, Monideepa Tarafdar, Manuel Trenz, Angsana A. Techatassanasoontorn, Antonio Díaz Andrade, and Niki Panteli contributed equally to this editorial.</p>","PeriodicalId":48049,"journal":{"name":"Information Systems Journal","volume":null,"pages":null},"PeriodicalIF":6.5000,"publicationDate":"2024-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12504","citationCount":"0","resultStr":"{\"title\":\"The ethics of using generative AI for qualitative data analysis\",\"authors\":\"Robert M. Davison,&nbsp;Hameed Chughtai,&nbsp;Petter Nielsen,&nbsp;Marco Marabelli,&nbsp;Federico Iannacci,&nbsp;Marjolein van Offenbeek,&nbsp;Monideepa Tarafdar,&nbsp;Manuel Trenz,&nbsp;Angsana A. Techatassanasoontorn,&nbsp;Antonio Díaz Andrade,&nbsp;Niki Panteli\",\"doi\":\"10.1111/isj.12504\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p>It is important to note that the text of this editorial is entirely written by humans without any Generative Artificial Intelligence (GAI) contribution or assistance. The Editor of the ISJ (Robert M. 
Davison) was contacted by one of the ISJ's Associate Editors (AE) (Marjolein van Offenbeek) who explained that the qualitative data analysis software ATLAS.ti was offering a free-of-charge analysis of research data if the researcher shared the same data with ATLAS.ti for training purposes for their GAI1 analysis tool. Marjolein believed that this spawned an ethical dilemma. Robert forwarded Marjolein's email to the ISJ's Senior Editors (SEs) and Associate Editors (AEs) and invited their comments. Nine of the SEs and AEs replied with feedback. We (the 11 contributing authors) then engaged in a couple of rounds of brainstorming before amalgamating the text in a shared document. This was initially created by Hameed Chughtai, but then commented on and edited by all the members of the team. The final version constitutes the shared opinion of the 11 members of the team, after several rounds of discussion. It is important to emphasise that the 11 authors have contrasting views about whether GAI should be used in qualitative data analysis, but we have reached broad agreement about the ethical issues associated with this use of GAI. Although many other topics related to the use of GAI in research could be discussed, for example, how GAI could be effectively used for qualitative analysis, we believe that ethical concerns overarch many of these other topics. Thus, in this editorial we exclusively focus on the ethics associated with using GAI for qualitative data analysis.</p><p>The emergence and ready availability of GAI has profound implications for research. This powerful technology, capable of generating human-like text, has the potential to create many opportunities for researchers in all disciplines. However, the technology brings ethical challenges and risks. We unearth and comment on many facets of qualitative data-related ethics. Our goal is to engage with and inform the many stakeholders of the ISJ, including other editors, (prospective) authors, reviewers and readers.</p><p>We intend that this discussion serves as a starting point for a broader conversation on how we can responsibly navigate the evolving landscape of GAI in research. It is important to point out that we are not advocating for or against the use of GAI in research, nor are we attempting to find ways to make it easier (or harder) for researchers to incorporate GAI in their research designs and practices. Our focus relates to the ethical issues associated with GAI use in analysing qualitative data that scholars, in the conduct of their academic research, may encounter and should consider.</p><p>One of the allures of GAI lies in its capability to discover patterns to produce new codes in a data corpus faster and more comprehensively than humans, by drawing from its trained data. This capability implies that GAI may identify patterns missed by humans. However, speed and comprehensiveness do not necessarily translate to appropriateness, substantive helpfulness or insightful understanding. More fundamentally, speed and comprehensiveness should not be achieved at the cost of unethical research practices or of the commitment to ‘do no harm’ to individuals, communities, organisations and society from research participation (Iphofen &amp; Tolich, <span>2018</span>). Thus, in our view, a utilitarian argument (the ends justify the means) constitutes an inadequate justification for GAI use for qualitative data analysis. 
Such a utilitarian argument would permit the unscrupulous or unprincipled researcher to use GAI, however they liked, in pursuit of a goal that might superficially benefit that individual researcher yet that would also violate the codes of ethics or behaviour that we esteem. Thus, the means that we employ to undertake research must be ethical, and must be seen to be ethical by our peers via the peer review system.</p><p>The use of GAI in research is not merely a question of tool selection, but also a matter that touches on the essence of research integrity, conduct and value. It challenges us to redefine what we consider as ‘doing research’ and pushes us to revisit how we maximise the benefits of research and minimise risks and harms for individuals and society (Gibbs, <span>2018</span>). It also challenges us to consider our understanding and the implications of authorship, data ownership and rights, responsibility, privacy and transparency. Thus, the overarching question that we ask is ‘What are the ethical issues that might surface when using GAI for the analysis of qualitative data’? (cf. UNESCO, <span>2021</span>).</p><p>To address this question, we focus on five areas: (1) data ownership and rights; (2) data privacy and transparency; (3) interpretive sufficiency; (4) biases manifested in GAI, and (5) researcher responsibilities and agency. We foresee that this exploration would eventually enable us to inform the development of living guidelines for qualitative data analysis, pertaining to ISJ and the Information Systems field more generally, in the context of GAI. Our hope is that such living guidelines will align with broader discussions in scholarship and emerging AI policies around the world2<sup>,</sup>3. In addition, this editorial reacts to GAI-related policies that other journals have already set, for instance, the recent Academy of Management Review's editorial (Grimes et al., <span>2023</span>), which are indicators of the siloed approach that not just academic fields but also individual journals are taking, for making sense of the use of GAI in scholarly settings. We instead aim at developing a fluid document that simply points to potential ethical implications of using GAI, and we do so by focusing specifically on qualitative data analysis.</p><p>We are concerned about surrendering research data to commercial entities, for example, sharing data with a GAI tool in exchange for automated analysis because that could violate data rights and confidentiality. While automated data analysis is standard for quantitative data, using Language Learning Models such as ChatGPT for qualitative data is different as they require data for training their models. Qualitative research is typically an in-depth inquiry that uses ‘relatively unstructured forms of data, whether produced through observation, interviewing and/or the analysis of documents’ (Hammersley &amp; Traianou, <span>2012</span>, p. 1). As such, the ‘production of such data can involve researchers in quite close, and sometimes long-term, relationships with people’ (ibid.), suggesting that the participant should not simply be viewed as a data machine. Instead, participants should be highly valued as collaborators in our research endeavours (Oakley, <span>2013</span>). Therefore, researchers can be considered as violating the commitment of non-maleficence given to our participants by volunteering their data to train Language Learning Models. 
This would happen mainly because we cannot guarantee that robust precautions are in place to avoid possible harm stemming from sharing their data with these models.</p><p>A related concern stems from providing research data to a profit-driven entity to enhance the quality of its product. We acknowledge that research organisations and universities may have internal GAI platforms for such purposes. However, unless explicitly included in Institutional Review Board (IRB) applications, providing research data to GAI platforms, whether hosted by internal or profit-driven entities, could conflict with data privacy protection laws and most IRB approvals.4 It is suggested that IRBs may soon start considering the use of GAI in their approval processes, that is, they will (and probably should) become more sensitive to the application of GAI by researchers, and may even proscribe or restrict such application.</p><p>One of the ethical principles agreed by the UK Academy of Social Sciences in 2015 is ‘[a]ll social science should respect the privacy, autonomy, diversity, values and dignity of individuals, groups and communities’ (UK Academy of Social Sciences, <span>2015</span>). Therefore, the application of GAI raises privacy concerns, particularly when sensitive research data is shared with an AI tool. When a research study involves organisations, privacy issues surface when they require the signing of Non-Disclosure Agreements (NDAs) as a condition for data collection but are not made aware that the researchers may share the same data with an AI tool and its owners (Pearlman, <span>2017</span>). Researchers will need to ensure that the involved organisations affirm their consent to this data exchange, if they plan to use a GAI analysis tool that uses the data they collected for further training. In addition, individual data is also subject to privacy protection. NDAs are negotiated between the researcher and the organisation's leadership or their legal office. However, the data collected relates to employees, clients and potentially other stakeholders, who might not be aware that their conversations with researchers will be handed over to third parties that do not fully disclose to which ends and how they will use these data and whether they will operate appropriate, and for future re-combinations, sufficient deidentification practices. Thus, the researcher must arguably also obtain consent from each employee and any other stakeholder, to transfer their data to each and every specific third party, such as a qualitative GAI-powered software. Participants who decline to provide such consent, or who simply fail to respond to the request for consent, must have their data excluded from what is shared. Steps to ensure data privacy of individuals must be included in the IRB approval documents. In the European Union and the United States, for instance, human subjects participating in research must consent to how data about them is processed and have the right to withdraw any consent previously given with the consequence that this person's personal data must be deleted.</p><p>We highlight two intertwined concerns about interpretive sufficiency in qualitative analysis. Software for qualitative data analysis has been in use since at least the early 1980s.5 For instance, both ATLAS.ti and NVivo (among others) allow automated coding by structure, style or existing coding patterns. 
However, the incorporation of GAI into this process has new implications because it allows coding ‘from scratch’ based on the external datasets with which the GAI was trained. There is an inherent value associated with manual (i.e., not software assisted) data analysis, especially in qualitative interpretive and ethnographic research. Analysis of qualitative data often relies on the researcher's creative and conceptual ability to discern meaning, salience and interconnectedness of logic in emerging themes (Amis &amp; Silk, <span>2008</span>). GAI tools, while powerful and efficient, only have access to the text. As of today, they cannot capture the nuances of the research environment, body language, facial expressions, tone of voice, interactions between the researcher and the research subjects, and the researcher's own accumulated understanding of the domain. Moreover, automated systems detecting these ‘soft’ characteristics of human interactions are highly questioned because of their poor reliability and potential for discriminations (Crawford, <span>2021</span>). Relying on such tools thus constitutes both an abdication of our responsibilities as researchers and a voluntary diminishment in our own agency and human consciousness required in the construction of knowledge (Amis &amp; Silk, <span>2008</span>).</p><p>A deeper and more contextually nuanced analysis of qualitative data is important because knowledge encompasses syntactic, semantic and pragmatic layers (Mingers, <span>2008</span>). Automated qualitative coding can only examine syntax, but cannot genuinely grasp data's semantic and pragmatic aspects. As researchers, we do not claim to be neutral or value-free in our analyses, and indeed being value-free may be inappropriate; for instance, with respect to critical interpretive studies, an automated coding process could lead to a banal and neutral analysis that fails to identify or disclose hidden aspects in the qualitative data. The output analysis will then incorporate an incomplete and potentially superficial reading of the data. Further, mainstream (or neutral) chunking and coding could influence and limit our potential learning from the data analysis. For instance, it is important that the researcher be aware of the risks associated with introducing and reinforcing existing biases, especially in research on marginalisation, oppression, activism, conflicts and decolonisation. In addition, in an investigation of socially relevant problems stemming from digital technology, the researcher bounded by personal and community responsibility joins forces with the researched to produce understanding that empowers those disadvantaged by the technology (Amis &amp; Silk, <span>2008</span>). These are areas where human insight, empathy and understanding (verstehen) are crucial. For these reasons, the use of GAI as a data analysis assistant becomes ethically questionable because it will necessarily exclude some aspects that are central to the analysis and that would be central if it was undertaken by humans.</p><p>OpenAI's ChatGPT6 service openly acknowledges that their results are ‘not free from biases and stereotypes’, are ‘skewed towards Western views’ and can ‘reinforce a user's biases’. 
As an example, ATLAS.ti (qualitative analysis software) uses OpenAI's GPT model and clearly acknowledges that their results may encode ‘social biases, for example, via stereotypes or negative sentiment towards certain groups’.7 Thus, GAI may produce analyses that are biased, unjust or discriminatory to certain groups or individuals, based on the data they are trained on or the criteria they use to analyse qualitative data. GAI may generate patterns (text, images, etc.) that reinforce stereotypes, biases or prejudices against people of different race, gender, culture or background. GAI may not account for the different needs, preferences and values of diverse stakeholders and communities, and may impose a dominant or hegemonic perspective on the data analysis process. GAI-based analysis can also perpetuate or exacerbate the colonisation or marginalisation of other modes of knowledge, cultures, or values, by privileging a certain perspective on the data analysis process, for instance, one reflecting Western cultures, because of training data prevalently collected online (Bender &amp; Friedman, <span>2018</span>). GAI may rely on data sources, methods or frameworks that are derived from or influenced by colonial or imperial histories, ideologies or power structures. GAI may not acknowledge or address any of these ethical, social or political implications of its data coding.</p><p>In the sections above, we have noted the difficulties associated with developing a fair and objective analysis with GAI. As a result, an interpretation developed partially or fully through GAI-based data analysis may be difficult to critically explain, as the algorithms and models that underlie the data analysis process may be complex, opaque or black-boxed. For example, GAI tools often use a combination of neural networks, genetic algorithms and machine learning techniques that are not easily interpretable or transparent to human users or researchers. GAI also does not provide clear and coherent rationales for the outputs it produces and may not allow for feedback or correction.</p><p>The emergent and interactional nature of most qualitative research requires more scrutiny of the researcher and their conduct (Iphofen &amp; Tolich, <span>2018</span>). Drawing on the discussion on data ethics in the age of algorithms, some authors argue that ‘the gradual reduction of human involvement or even oversight over many automatic processes, pose pressing issues of fairness, responsibility and respect of human rights, among others’ (Floridi &amp; Taddeo, <span>2016</span>, p. 2). In addition to the researcher's responsibilities and obligations to participants, the researcher also has to take epistemic responsibility, which involves being accountable to the evidence where evidence is relationally constituted between the researcher and the researched and to assume responsibility for what the researcher claims to know (Code, <span>2001</span>). Some researchers may argue that GAI could help them identify preliminary patterns from large datasets, providing them with initial insights. However, it is recognised that GAI is not infallible. For instance, it is prone to what are known as ‘hallucinations’, where it ‘lies’ and ‘fabricates facts’ (Ji et al., <span>2023</span>). 
Thus, the veracity of any ‘preliminary patterns’ identified through technology (such as GAI) must be checked by the researcher who must both claim authorship of them and so take responsibility for the text; accountability remains with the researcher (Gregor, <span>2024</span>). GAI cannot be listed as a co-author (this is publisher policy at ISJ), and thus cannot be permitted to have any agency in the research or its outputs. To wit, we consider blind, automated applications of GAI for data analysis without human agency unethical in every aspect of the research process.</p><p>We acknowledge that some ethical issues are specific to particular GAI implementations, which change over time, emphasising the need for clear quality criteria. GAI implementations could also be private. For example, some research organisations have established their own GAI service, enabling students and researchers to use OpenAI's GPT models within university and national data privacy requirements.8 In this view, the ethical concerns, such as privacy, are specific to implementations of GAI and are not necessarily general issues with the technology class. However, private Language Learning Models are not necessarily expected to improve the quality of coding; they might still be too generic to address specific research questions. While they can, to a certain extent, address privacy issues, they cannot unequivocally improve analysis quality and their biases may still be present.</p><p>Following our analysis and given all these characteristics of GAI, we suggest that researchers should engage in critical reflexivity and vigilance to identify, understand and robustly address the ethical issues regarding the use of GAI in their research practices involving qualitative data analysis. We do not wish to see a situation where we are lulled into thinking that GAI use is ‘normal’ and that researchers do not need either to pay particular attention to it, or to report their use of it.</p><p>Robert M. Davison, Hameed Chughtai, Petter Nielsen, Marco Marabelli, Federico Iannacci, Marjolein van Offenbeek, Monideepa Tarafdar, Manuel Trenz, Angsana A. Techatassanasoontorn, Antonio Díaz Andrade, and Niki Panteli contributed equally to this editorial.</p>\",\"PeriodicalId\":48049,\"journal\":{\"name\":\"Information Systems Journal\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":6.5000,\"publicationDate\":\"2024-01-21\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"https://onlinelibrary.wiley.com/doi/epdf/10.1111/isj.12504\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Information Systems Journal\",\"FirstCategoryId\":\"91\",\"ListUrlMain\":\"https://onlinelibrary.wiley.com/doi/10.1111/isj.12504\",\"RegionNum\":2,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"INFORMATION SCIENCE & LIBRARY SCIENCE\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Information Systems Journal","FirstCategoryId":"91","ListUrlMain":"https://onlinelibrary.wiley.com/doi/10.1111/isj.12504","RegionNum":2,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"INFORMATION SCIENCE & LIBRARY SCIENCE","Score":null,"Total":0}
引用次数: 0

摘要

值得注意的是,这篇社论的文字完全由人类撰写,没有任何生成人工智能(GAI)的贡献或协助。ISJ的一位副主编(AE)(Marjolein van Offenbeek)与ISJ的编辑(Robert M. Davison)取得了联系,她解释说,定性数据分析软件ATLAS.ti提供免费的研究数据分析服务,条件是研究人员将相同的数据与ATLAS.ti共享,以便为其GAI1分析工具提供培训。Marjolein 认为这引发了道德难题。罗伯特将 Marjolein 的电子邮件转发给了 ISJ 的高级编辑 (SE) 和副编辑 (AE),并邀请他们发表评论。九位高级编辑和副编辑回复了反馈意见。随后,我们(11 位撰稿人)进行了几轮头脑风暴,最后将文本合并到一份共享文件中。该文件最初由 Hameed Chughtai 创建,但随后由团队所有成员进行了评论和编辑。经过几轮讨论后,最终版本构成了团队 11 名成员的共同意见。需要强调的是,11 位作者对是否应在定性数据分析中使用 GAI 持有截然不同的观点,但我们对与使用 GAI 相关的伦理问题达成了广泛共识。尽管我们还可以讨论与在研究中使用 GAI 相关的许多其他话题,例如,如何将 GAI 有效地用于定性分析,但我们认为,伦理问题是许多其他话题的重中之重。因此,在这篇社论中,我们将专门讨论与使用 GAI 进行定性数据分析相关的伦理问题。这种能够生成类人文本的强大技术有可能为所有学科的研究人员创造许多机会。然而,这项技术也带来了伦理挑战和风险。我们发掘并评论了定性数据相关伦理的许多方面。我们的目标是让 ISJ 的众多利益相关者(包括其他编辑、(潜在)作者、审稿人和读者)参与进来,并为他们提供信息。我们希望以此次讨论为起点,就如何在研究中负责任地驾驭不断变化的 GAI 环境展开更广泛的对话。需要指出的是,我们并不是在提倡或反对在研究中使用 GAI,也不是在试图寻找方法,让研究人员更容易(或更难)将 GAI 纳入其研究设计和实践中。我们关注的重点是与使用 GAI 分析定性数据相关的伦理问题,学者们在进行学术研究时可能会遇到这些问题,也应该考虑这些问题。GAI 的诱惑之一在于它能够通过利用训练有素的数据,比人类更快、更全面地发现模式,从而在数据语料库中生成新的代码。这种能力意味着 GAI 可以识别人类遗漏的模式。然而,速度和全面性并不一定意味着适当性、实质性帮助或深刻的理解。更根本的是,速度和全面性的实现不应以不道德的研究实践或研究参与对个人、社区、组织和社会 "无害 "的承诺为代价(Iphofen &amp; Tolich, 2018)。因此,在我们看来,功利主义论点(目的证明手段的正当性)不足以作为将 GAI 用于定性数据分析的理由。这样的功利主义论点会允许不择手段或无原则的研究人员随心所欲地使用 GAI,以追求表面上可能有利于研究人员个人的目标,但同时也违反了我们所推崇的道德规范或行为准则。因此,我们开展研究的手段必须符合道德规范,而且必须通过同行评审制度被同行视为符合道德规范。在研究中使用 GAI 不仅仅是一个工具选择问题,还是一个触及研究诚信、行为和价值本质的问题。它挑战我们重新定义我们认为的 "做研究",促使我们重新审视如何最大限度地提高研究效益,最大限度地降低对个人和社会的风险和危害(Gibbs,2018 年)。它还挑战我们思考对作者身份、数据所有权和权利、责任、隐私和透明度的理解和影响。因此,我们提出的首要问题是 "在使用 GAI 分析定性数据时,可能会出现哪些伦理问题"(参见 UNESCO, 2021)。为了解决这个问题,我们重点关注五个方面:(1) 数据所有权和权利;(2) 数据隐私和透明度;(3) 解释的充分性;(4) GAI 中表现出的偏见;以及 (5) 研究人员的责任和代理。 GAI 可能依赖于源自殖民或帝国历史、意识形态或权力结构或受其影响的数据来源、方法或框架。GAI 可能不承认或不处理其数据编码的任何这些道德、社会或政治影响。在以上各节中,我们已经指出了使用 GAI 进行公正客观分析的相关困难。因此,部分或全部通过基于 GAI 的数据分析得出的解释可能难以批判性地解释,因为数据分析过程所依据的算法和模型可能很复杂、不透明或黑箱化。例如,GAI 工具通常使用神经网络、遗传算法和机器学习技术的组合,这些技术对人类用户或研究人员来说不容易解释或不透明。大多数定性研究的突发性和互动性要求对研究人员及其行为进行更严格的审查(Iphofen &amp; Tolich, 2018)。借鉴算法时代数据伦理的讨论,一些作者认为,"人类对许多自动流程的参与甚至监督逐渐减少,带来了公平、责任和尊重人权等紧迫问题"(Floridi &amp; Taddeo, 2016, p.2)。除了研究者对参与者的责任和义务外,研究者还必须承担认识论责任,这涉及到对证据负责,因为证据是研究者与被研究者之间的关系构成,并对研究者声称知道的事情承担责任(Code, 2001)。一些研究人员可能会认为,全球信息获取方法可以帮助他们从大型数据集中识别出初步模式,为他们提供初步见解。不过,人们也认识到,GAI 并非无懈可击。例如,它容易产生所谓的 "幻觉",即 "说谎 "和 "捏造事实"(Ji 等人,2023 年)。因此,通过技术(如 GAI)确定的任何 "初步模式 "的真实性都必须由研究人员进行检查,研究人员必须声称自己是这些模式的作者,从而对文本负责;责任仍由研究人员承担(Gregor, 2024)。GAI 不能被列为共同作者(这是 ISJ 出版商的政策),因此不能在研究或其成果中拥有任何代理权。也就是说,我们认为在研究过程的各个方面,盲目、自动地应用GAI进行数据分析而没有人的参与是不道德的。我们承认,有些道德问题是特定的GAI实施所特有的,它们会随着时间的推移而改变,这就强调了明确质量标准的必要性。GAI 的实施也可能是私下进行的。例如,一些研究机构已经建立了自己的 GAI 服务,使学生和研究人员能够在符合大学和国家数据隐私要求的情况下使用 OpenAI 的 GPT 模型。然而,私人语言学习模型并不一定能提高编码质量;它们可能仍然过于通用,无法解决具体的研究问题。根据我们的分析并考虑到 GAI 的所有这些特点,我们建议研究人员应进行批判性反思并保持警惕,以识别、理解并有力地解决在涉及定性数据分析的研究实践中使用 GAI 的伦理问题。Robert M. Davison、Hameed Chughtai、Petter Nielsen、Marco Marabelli、Federico Iannacci、Marjolein van Offenbeek、Monideepa Tarafdar、Manuel Trenz、Angsana A. Techatassanasoontorn、Antonio Díaz Andrade 和 Niki Panteli 为本社论做出了同样的贡献。
本文章由计算机程序翻译,如有差异,请以英文原文为准。
查看原文
分享 分享
微信好友 朋友圈 QQ好友 复制链接
本刊更多论文

It is important to note that the text of this editorial is entirely written by humans without any Generative Artificial Intelligence (GAI) contribution or assistance. The Editor of the ISJ (Robert M. Davison) was contacted by one of the ISJ's Associate Editors (AE) (Marjolein van Offenbeek) who explained that the qualitative data analysis software ATLAS.ti was offering a free-of-charge analysis of research data if the researcher shared the same data with ATLAS.ti for training purposes for their GAI1 analysis tool. Marjolein believed that this spawned an ethical dilemma. Robert forwarded Marjolein's email to the ISJ's Senior Editors (SEs) and Associate Editors (AEs) and invited their comments. Nine of the SEs and AEs replied with feedback. We (the 11 contributing authors) then engaged in a couple of rounds of brainstorming before amalgamating the text in a shared document. This was initially created by Hameed Chughtai, but then commented on and edited by all the members of the team. The final version constitutes the shared opinion of the 11 members of the team, after several rounds of discussion. It is important to emphasise that the 11 authors have contrasting views about whether GAI should be used in qualitative data analysis, but we have reached broad agreement about the ethical issues associated with this use of GAI. Although many other topics related to the use of GAI in research could be discussed, for example, how GAI could be effectively used for qualitative analysis, we believe that ethical concerns overarch many of these other topics. Thus, in this editorial we exclusively focus on the ethics associated with using GAI for qualitative data analysis.

The emergence and ready availability of GAI has profound implications for research. This powerful technology, capable of generating human-like text, has the potential to create many opportunities for researchers in all disciplines. However, the technology brings ethical challenges and risks. We unearth and comment on many facets of qualitative data-related ethics. Our goal is to engage with and inform the many stakeholders of the ISJ, including other editors, (prospective) authors, reviewers and readers.

We intend that this discussion serves as a starting point for a broader conversation on how we can responsibly navigate the evolving landscape of GAI in research. It is important to point out that we are not advocating for or against the use of GAI in research, nor are we attempting to find ways to make it easier (or harder) for researchers to incorporate GAI in their research designs and practices. Our focus relates to the ethical issues associated with GAI use in analysing qualitative data that scholars, in the conduct of their academic research, may encounter and should consider.

One of the allures of GAI lies in its capability to discover patterns and produce new codes in a data corpus faster and more comprehensively than humans can, by drawing on the data it was trained on. This capability implies that GAI may identify patterns missed by humans. However, speed and comprehensiveness do not necessarily translate into appropriateness, substantive helpfulness or insightful understanding. More fundamentally, speed and comprehensiveness should not be achieved at the cost of unethical research practices or of abandoning the commitment to ‘do no harm’ to individuals, communities, organisations and society through research participation (Iphofen & Tolich, 2018). Thus, in our view, a utilitarian argument (the ends justify the means) constitutes an inadequate justification for using GAI in qualitative data analysis. Such an argument would permit the unscrupulous or unprincipled researcher to use GAI however they liked, in pursuit of a goal that might superficially benefit that individual researcher but would also violate the codes of ethics or behaviour that we esteem. Thus, the means that we employ to undertake research must be ethical, and must be seen to be ethical by our peers via the peer review system.
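
To make concrete what GAI-assisted coding involves in practice, the minimal Python sketch below shows one way a researcher might ask a hosted LLM to propose candidate codes for a single interview excerpt. The model name, prompt wording and excerpt are hypothetical illustrations rather than a recommended workflow, and the sketch also makes visible the data-sharing issue discussed later: the excerpt itself is transmitted to the provider.

```python
# Illustrative sketch only: one way a researcher might ask a hosted LLM to
# propose candidate codes for a single interview excerpt. The model name,
# prompt wording and excerpt are hypothetical. Note that the excerpt itself
# is transmitted to a third-party service, which is the data-sharing concern
# discussed later in this editorial.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

excerpt = (
    "Honestly, the new rostering system decides my shifts before anyone asks me. "
    "I only hear about changes after the fact."
)  # hypothetical, already de-identified interview fragment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are assisting with qualitative coding. "
                "Propose up to three short candidate codes, one per line."
            ),
        },
        {"role": "user", "content": excerpt},
    ],
)

# The output is only a candidate list; interpretation, verification and any
# claim made about the data remain the researcher's responsibility.
print(response.choices[0].message.content)
```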

The use of GAI in research is not merely a question of tool selection, but also a matter that touches on the essence of research integrity, conduct and value. It challenges us to redefine what we consider as ‘doing research’ and pushes us to revisit how we maximise the benefits of research and minimise risks and harms for individuals and society (Gibbs, 2018). It also challenges us to consider our understanding and the implications of authorship, data ownership and rights, responsibility, privacy and transparency. Thus, the overarching question that we ask is ‘What are the ethical issues that might surface when using GAI for the analysis of qualitative data’? (cf. UNESCO, 2021).

To address this question, we focus on five areas: (1) data ownership and rights; (2) data privacy and transparency; (3) interpretive sufficiency; (4) biases manifested in GAI, and (5) researcher responsibilities and agency. We foresee that this exploration would eventually enable us to inform the development of living guidelines for qualitative data analysis, pertaining to ISJ and the Information Systems field more generally, in the context of GAI. Our hope is that such living guidelines will align with broader discussions in scholarship and emerging AI policies around the world2,3. In addition, this editorial reacts to GAI-related policies that other journals have already set, for instance, the recent Academy of Management Review's editorial (Grimes et al., 2023), which are indicators of the siloed approach that not just academic fields but also individual journals are taking, for making sense of the use of GAI in scholarly settings. We instead aim at developing a fluid document that simply points to potential ethical implications of using GAI, and we do so by focusing specifically on qualitative data analysis.

We are concerned about surrendering research data to commercial entities, for example, sharing data with a GAI tool in exchange for automated analysis, because that could violate data rights and confidentiality. While automated data analysis is standard for quantitative data, using Large Language Models (LLMs) such as ChatGPT for qualitative data is different, as they require data for training their models. Qualitative research is typically an in-depth inquiry that uses ‘relatively unstructured forms of data, whether produced through observation, interviewing and/or the analysis of documents’ (Hammersley & Traianou, 2012, p. 1). As such, the ‘production of such data can involve researchers in quite close, and sometimes long-term, relationships with people’ (ibid.), suggesting that the participant should not simply be viewed as a data machine. Instead, participants should be highly valued as collaborators in our research endeavours (Oakley, 2013). Researchers who volunteer their participants' data to train LLMs can therefore be considered to be violating the commitment of non-maleficence given to those participants, mainly because we cannot guarantee that robust precautions are in place to avoid possible harm stemming from sharing the data with these models.

A related concern stems from providing research data to a profit-driven entity to enhance the quality of its product. We acknowledge that research organisations and universities may have internal GAI platforms for such purposes. However, unless explicitly included in Institutional Review Board (IRB) applications, providing research data to GAI platforms, whether hosted by internal or profit-driven entities, could conflict with data privacy protection laws and most IRB approvals.4 We anticipate that IRBs will soon start considering the use of GAI in their approval processes; that is, they will (and probably should) become more sensitive to researchers' use of GAI, and may even proscribe or restrict it.

One of the ethical principles agreed by the UK Academy of Social Sciences in 2015 is ‘[a]ll social science should respect the privacy, autonomy, diversity, values and dignity of individuals, groups and communities’ (UK Academy of Social Sciences, 2015). Therefore, the application of GAI raises privacy concerns, particularly when sensitive research data is shared with an AI tool. When a research study involves organisations, privacy issues surface if those organisations require the signing of Non-Disclosure Agreements (NDAs) as a condition for data collection but are not made aware that the researchers may share the same data with an AI tool and its owners (Pearlman, 2017). If researchers plan to use a GAI analysis tool that uses the collected data for further training, they will need to ensure that the organisations involved affirm their consent to this data exchange. In addition, individual data is also subject to privacy protection. NDAs are negotiated between the researcher and the organisation's leadership or its legal office. However, the data collected relates to employees, clients and potentially other stakeholders, who might not be aware that their conversations with researchers will be handed over to third parties that do not fully disclose for which purposes and how they will use these data, or whether their de-identification practices are appropriate and sufficient to withstand future re-combinations of the data. Thus, the researcher must arguably also obtain consent from each employee and any other stakeholder to transfer their data to each and every specific third party, such as GAI-powered qualitative analysis software. Participants who decline to provide such consent, or who simply fail to respond to the request for consent, must have their data excluded from what is shared. Steps to ensure the data privacy of individuals must be included in the IRB approval documents. In the European Union and the United States, for instance, human subjects participating in research must consent to how data about them is processed and have the right to withdraw any consent previously given, with the consequence that the person's personal data must be deleted.
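
As an illustration of the kind of safeguard this implies, the minimal Python sketch below filters out participants who have not explicitly consented to third-party sharing and applies a crude pseudonymisation pass before any excerpt leaves the researcher's environment. Field names, patterns and records are hypothetical, and this is not a complete de-identification solution.

```python
# Minimal sketch, not a complete de-identification solution: exclude records
# from participants who have not explicitly consented to third-party sharing,
# then apply a crude pseudonymisation pass before anything leaves the
# researcher's environment. Field names, patterns and records are hypothetical.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def pseudonymise(text: str, name_map: dict) -> str:
    """Replace known participant names and e-mail addresses with placeholders."""
    for real_name, alias in name_map.items():
        text = text.replace(real_name, alias)
    return EMAIL.sub("[email]", text)

def shareable_excerpts(records: list, name_map: dict) -> list:
    """Keep only excerpts from participants who affirmed consent to sharing."""
    return [
        {
            "participant": name_map.get(r["participant"], "[participant]"),
            "text": pseudonymise(r["text"], name_map),
        }
        for r in records
        if r.get("consent_to_third_party_sharing") is True  # no response => excluded
    ]

# Hypothetical usage
records = [
    {"participant": "Alice Ng", "consent_to_third_party_sharing": True,
     "text": "Alice Ng said the rota tool ignores her availability (alice@example.org)."},
    {"participant": "Bob Lim", "consent_to_third_party_sharing": None,
     "text": "Bob Lim prefers the old system."},
]
print(shareable_excerpts(records, {"Alice Ng": "P01", "Bob Lim": "P02"}))
```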

We highlight two intertwined concerns about interpretive sufficiency in qualitative analysis. Software for qualitative data analysis has been in use since at least the early 1980s.5 For instance, both ATLAS.ti and NVivo (among others) allow automated coding by structure, style or existing coding patterns. However, the incorporation of GAI into this process has new implications because it allows coding ‘from scratch’ based on the external datasets with which the GAI was trained. There is an inherent value associated with manual (i.e., not software assisted) data analysis, especially in qualitative interpretive and ethnographic research. Analysis of qualitative data often relies on the researcher's creative and conceptual ability to discern meaning, salience and interconnectedness of logic in emerging themes (Amis & Silk, 2008). GAI tools, while powerful and efficient, only have access to the text. As of today, they cannot capture the nuances of the research environment, body language, facial expressions, tone of voice, interactions between the researcher and the research subjects, and the researcher's own accumulated understanding of the domain. Moreover, automated systems that detect these ‘soft’ characteristics of human interactions are highly questioned because of their poor reliability and potential for discrimination (Crawford, 2021). Relying on such tools thus constitutes both an abdication of our responsibilities as researchers and a voluntary diminishment of our own agency and of the human consciousness required in the construction of knowledge (Amis & Silk, 2008).
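
To make the contrast concrete, the sketch below shows rule-based auto-coding in the spirit of the pattern- or keyword-driven features mentioned above (it is not the actual ATLAS.ti or NVivo implementation, and the codebook is hypothetical): every rule is written by the researcher and remains fully inspectable, whereas GAI-based coding ‘from scratch’ draws on external training data that the researcher neither selected nor can inspect.

```python
# Minimal sketch of rule-based auto-coding in the spirit of keyword/pattern
# auto-coding (not the actual ATLAS.ti or NVivo implementation). Every rule is
# specified by the researcher and remains inspectable and revisable; nothing is
# inferred from external training data. The codebook and segment are hypothetical.
CODEBOOK = {
    "autonomy": ["decide", "choice", "control"],
    "workload": ["overtime", "shift", "hours"],
}

def auto_code(segment: str) -> list:
    """Return the codes whose keywords appear in a text segment."""
    lowered = segment.lower()
    return [
        code
        for code, keywords in CODEBOOK.items()
        if any(keyword in lowered for keyword in keywords)
    ]

print(auto_code("I have no control over my shift patterns"))
# -> ['autonomy', 'workload']: each match is traceable to an explicit keyword
```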

A deeper and more contextually nuanced analysis of qualitative data is important because knowledge encompasses syntactic, semantic and pragmatic layers (Mingers, 2008). Automated qualitative coding can only examine syntax, but cannot genuinely grasp data's semantic and pragmatic aspects. As researchers, we do not claim to be neutral or value-free in our analyses, and indeed being value-free may be inappropriate; for instance, with respect to critical interpretive studies, an automated coding process could lead to a banal and neutral analysis that fails to identify or disclose hidden aspects in the qualitative data. The output analysis will then incorporate an incomplete and potentially superficial reading of the data. Further, mainstream (or neutral) chunking and coding could influence and limit our potential learning from the data analysis. For instance, it is important that the researcher be aware of the risks associated with introducing and reinforcing existing biases, especially in research on marginalisation, oppression, activism, conflicts and decolonisation. In addition, in an investigation of socially relevant problems stemming from digital technology, the researcher bounded by personal and community responsibility joins forces with the researched to produce understanding that empowers those disadvantaged by the technology (Amis & Silk, 2008). These are areas where human insight, empathy and understanding (verstehen) are crucial. For these reasons, the use of GAI as a data analysis assistant becomes ethically questionable because it will necessarily exclude some aspects that are central to the analysis and that would be central if it was undertaken by humans.

OpenAI's ChatGPT6 service openly acknowledges that its results are ‘not free from biases and stereotypes’, are ‘skewed towards Western views’ and can ‘reinforce a user's biases’. Similarly, ATLAS.ti (qualitative analysis software) uses OpenAI's GPT model and clearly acknowledges that its results may encode ‘social biases, for example, via stereotypes or negative sentiment towards certain groups’.7 Thus, GAI may produce analyses that are biased, unjust or discriminatory towards certain groups or individuals, depending on the data it is trained on or the criteria it uses to analyse qualitative data. GAI may generate patterns (text, images, etc.) that reinforce stereotypes, biases or prejudices against people of a different race, gender, culture or background. It may not account for the different needs, preferences and values of diverse stakeholders and communities, and may impose a dominant or hegemonic perspective on the data analysis process. GAI-based analysis can also perpetuate or exacerbate the colonisation or marginalisation of other modes of knowledge, cultures or values by privileging a particular perspective, for instance one reflecting Western cultures, because its training data is predominantly collected online (Bender & Friedman, 2018). GAI may rely on data sources, methods or frameworks that are derived from or influenced by colonial or imperial histories, ideologies or power structures, and it may not acknowledge or address any of these ethical, social or political implications of its data coding.

In the sections above, we have noted the difficulties associated with developing a fair and objective analysis with GAI. As a result, an interpretation developed partially or fully through GAI-based data analysis may be difficult to explain critically, as the algorithms and models that underlie the data analysis process may be complex, opaque or black-boxed. For example, GAI tools often use a combination of neural networks, genetic algorithms and machine learning techniques that are not easily interpretable or transparent to human users or researchers. GAI also fails to provide clear and coherent rationales for the outputs it produces and may not allow for feedback or correction.

The emergent and interactional nature of most qualitative research requires greater scrutiny of the researcher and their conduct (Iphofen & Tolich, 2018). Drawing on the discussion of data ethics in the age of algorithms, some authors argue that ‘the gradual reduction of human involvement or even oversight over many automatic processes, pose pressing issues of fairness, responsibility and respect of human rights, among others’ (Floridi & Taddeo, 2016, p. 2). In addition to the researcher's responsibilities and obligations to participants, the researcher also has to take epistemic responsibility: being accountable to the evidence, where evidence is relationally constituted between the researcher and the researched, and assuming responsibility for what the researcher claims to know (Code, 2001). Some researchers may argue that GAI could help them identify preliminary patterns in large datasets, providing them with initial insights. However, GAI is not infallible. For instance, it is prone to what are known as ‘hallucinations’, where it ‘lies’ and ‘fabricates facts’ (Ji et al., 2023). Thus, the veracity of any ‘preliminary patterns’ identified through technology (such as GAI) must be checked by the researcher, who must claim authorship of them and thereby take responsibility for the text; accountability remains with the researcher (Gregor, 2024). GAI cannot be listed as a co-author (this is publisher policy at the ISJ) and thus cannot be permitted any agency in the research or its outputs. In short, we consider blind, automated applications of GAI for data analysis without human agency to be unethical in every aspect of the research process.

We acknowledge that some ethical issues are specific to particular GAI implementations, which change over time, emphasising the need for clear quality criteria. GAI implementations can also be private. For example, some research organisations have established their own GAI services, enabling students and researchers to use OpenAI's GPT models within university and national data privacy requirements.8 In this view, ethical concerns such as privacy are specific to particular implementations of GAI and are not necessarily general issues with the technology class. However, private Large Language Models (LLMs) are not necessarily expected to improve the quality of coding; they might still be too generic to address specific research questions. While they can, to a certain extent, address privacy issues, they cannot unequivocally improve analysis quality and their biases may still be present.

Following our analysis and given all these characteristics of GAI, we suggest that researchers should engage in critical reflexivity and vigilance to identify, understand and robustly address the ethical issues regarding the use of GAI in their research practices involving qualitative data analysis. We do not wish to see a situation where we are lulled into thinking that GAI use is ‘normal’ and that researchers need neither to pay particular attention to it nor to report their use of it.

Robert M. Davison, Hameed Chughtai, Petter Nielsen, Marco Marabelli, Federico Iannacci, Marjolein van Offenbeek, Monideepa Tarafdar, Manuel Trenz, Angsana A. Techatassanasoontorn, Antonio Díaz Andrade, and Niki Panteli contributed equally to this editorial.
