Theory-Driven Perspectives on Generative Artificial Intelligence in Business and Management

IF 4.5, CAS Zone 2 (Management), Q1 BUSINESS. British Journal of Management. Pub Date: 2024-01-19. DOI: 10.1111/1467-8551.12788. Open-access PDF: https://onlinelibrary.wiley.com/doi/epdf/10.1111/1467-8551.12788
Olivia Brown, Robert M. Davison, Stephanie Decker, David A. Ellis, James Faulconbridge, Julie Gore, Michelle Greenwood, Gazi Islam, Christina Lubinski, Niall G. MacKenzie, Renate Meyer, Daniel Muzio, Paolo Quattrone, M. N. Ravishankar, Tammar Zilber, Shuang Ren, Riikka M. Sarala, Paul Hibbert
{"title":"商业和管理领域生成式人工智能的理论驱动视角","authors":"Olivia Brown,&nbsp;Robert M. Davison,&nbsp;Stephanie Decker,&nbsp;David A. Ellis,&nbsp;James Faulconbridge,&nbsp;Julie Gore,&nbsp;Michelle Greenwood,&nbsp;Gazi Islam,&nbsp;Christina Lubinski,&nbsp;Niall G. MacKenzie,&nbsp;Renate Meyer,&nbsp;Daniel Muzio,&nbsp;Paolo Quattrone,&nbsp;M. N. Ravishankar,&nbsp;Tammar Zilber,&nbsp;Shuang Ren,&nbsp;Riikka M. Sarala,&nbsp;Paul Hibbert","doi":"10.1111/1467-8551.12788","DOIUrl":null,"url":null,"abstract":"<p>Shuang Ren, Riikka M. Sarala, Paul Hibbert</p><p>The advent of generative artificial intelligence (GAI) has sparked both enthusiasm and anxiety as different stakeholders grapple with the potential to reshape the business and management landscape. This dynamic discourse extends beyond GAI itself to encompass closely related innovations that have existed for some time, for example, machine learning, thereby creating a collective anticipation of opportunities and dilemmas surrounding the transformative or disruptive capacities of these emerging technologies. Recently, ChatGPT's ability to access information from the web in real time marks a significant advancement with profound implications for businesses. This feature is argued to enhance the model's capacity to provide up-to-date, contextually relevant information, enabling more dynamic customer interactions. For businesses, this could mean improvements in areas like market analysis, trend tracking, customer service and real-time data-driven problem-solving. However, this also raises concerns about the accuracy and reliability of the information sourced, given the dynamic and sometimes unverified nature of web content. Additionally, real-time web access might complicate data privacy and security, as the boundaries of GAI interactions extend into the vast and diverse Internet landscape. These factors necessitate a careful and responsible approach to evaluating and using advanced GAI capabilities in business and management contexts.</p><p>GAI is attracting much interest both in the academic and business practitioner literature. A quick search in Google Scholar, using the search terms ‘generative artificial intelligence’ and ‘business’ or ‘management’, yields approximately 1740 results. Within this extensive repository, scholars delve into diverse facets, exploring GAI's potential applications across various business and management functions, contemplating its implications for management educators and scrutinizing specific technological applications. Learned societies such as the British Academy of Management have also joined forces in leading the discussion on AI and digitalization in business and management academe. Meanwhile, practitioners and consultants alike (e.g. McKinsey &amp; Company, PWC, World Economic Forum) have produced dedicated discussions, reports and forums to offer insights into the multifaceted impacts and considerations surrounding the integration of GAI in contemporary business and management practices. Table 1 illustrates some current applications of GAI as documented in the practitioner literature.</p><p>In an attempt to capture the new opportunities and challenges brought about by this technology and to hopefully find a way forward to guide research and practice, management journals have been swift to embrace the trend, introducing special issues on GAI. These issues aim to promote intellectual debate, for instance in relation to specific business disciplines (e.g. 
Benbya, Pachidi and Jarvenpaa, 2021) or organizational possibilities and pitfalls (Chalmers et al., 2023). However, amidst these commendable efforts that reflect a broad spectrum of perspectives, a critical examination of the burgeoning hype around GAI reveals a significant gap. Despite the proliferation of discussions from scholars, practitioners and the general public, the prevailing discourse is often speculative, lacking a robust theoretical foundation. This deficiency points to the limits of existing theories in explaining the unique demands created by GAI and indicates an urgent need to refine prior theories or even develop new ones. There is a pressing need to move beyond the current wave of hype and explore the theoretical underpinnings of GAI and the dynamics of its potential impact, to ensure a more nuanced and informed discussion that can guide future research and application in this rapidly evolving area.

In this direction, the British Journal of Management (BJM) invited prominent scholars who serve as editors of leading business and management journals to weigh in and contribute their diverse theoretical knowledge to this symposium paper on the emerging GAI phenomenon. This collaborative effort aims to advance the theorization of business and management research in relation to the intricacies associated with the impact of GAI by engaging in intensive discussions on how theoretical attempts can be made to make sense of the myths and truths around GAI.

The quest for theory, whether seeking or refining it, is a long-standing tradition in business and management research (e.g. Colquitt and Zapata-Phelan, 2007). While the seven pieces below place different elements under the spotlight of theoretical scrutiny, one common thread is the need to reconceptualize the relational realm of workplaces. The introduction of GAI in the workplace redefines the norm of working together from a person-to-person group to a human–GAI group, with the latter illustrating three novel conceptual contributions in comparison to traditional understandings of workplace dynamics.

Paolo Quattrone, Tammar Zilber, Renate Meyer

The etymology of words is often a source of insights that allow us not only to make sense of their meaning, but also to speculate about and imagine meanings that are not so obvious, and thereby to see the phenomena signalled by these words in new and surprising ways. The etymology of ‘artificial’ and ‘intelligence’ does not disappoint. ‘Artificial’ comes from ‘art’ and -fex ‘maker’, from facere ‘to do, make’. ‘Intelligence’ comes from inter ‘between’ and legere ‘choose, pick out, read’ but also ‘collect, gather’. There is enough in these etymologies to offer a few speculations and imagine the contours of generative artificial intelligence (GAI) and its possible futures.

The first of these is inspired by the craft of making and relates to the very function and use of AI. Most of the current fascination with AI emphasizes the predictive capacity of the various tools increasingly available and at easy disposal. Indeed, marketers know well in advance when we will need the next toothbrush, fuel our cars, buy new clothes, and so forth. The list is long. This feature of AI enchants us when, for instance, one thinks of a product and, invariably, an advertisement related to that product appears on our social media page.
This quasi-magical predictive ability captures collective imaginations and draws upon well-ingrained forms of knowledge production which presuppose that data techniques are there to represent the world, paradoxically, even when it is not there, as is the case with predictions. The issue is that the future is not out there; we do not know what future generations want from us and still, we are increasingly called to respond to their demands. Despite the availability of huge amounts of data and intelligence, the future, even if proximal and mundane, as in our examples above, always holds surprises. This means that AI may be useful not to predict the future, but to actually imagine and make it, as the -fex in ‘artificial’ reveals. This is the art in the ‘artificial’, and it points to the possibility of conceiving of AI as a compositional art, which helps us to create images of the future, sparks imagination and creativity and, hopefully, offers a space for speculation and reflection.

The word intelligence is our second cue, which stresses how ‘inter’ means to be in, and to explore, what is ‘in between’. Just as entrepreneurs are in between different ventures and explore what is not yet there (Hjorth and Holt, 2022), AI may be useful to probe grey areas between statuses and courses of action. It can be used to create scenarios, to make sure that the very same set of data produces alternative options that leave space for juggling among different decision-making criteria, without reducing decisions about complex states of affairs to a single criterion, most likely value rather than values. This is how, for instance, one could wisely refrain from both the apocalyptic and the salvific scenarios that characterize the debate about AI. On the one hand, AI is seen as one of the worst possible menaces to humankind: it will take control of our minds and direct our habits, making us entirely dependent. Very likely, just as the Luddites were proven wrong (but not completely) in the first and second Industrial Revolutions, the pessimist views will prove wrong, but not completely, as it is clear that AI has agency (Latour, 1987) in informing our judgement, and it does so through various forms of multimodal affects, that is, by relying on our vast repertoire of senses, all mobilized by new forms of technology (think, for example, of smartwatches and how they influence our training habits). On the other hand, AI – similar to the first enterprise resource planning (ERP) systems – is seen as a panacea for many of our problems, diseases and grand challenges, from poverty to climate change, at least until one realizes that SAP does not stand for ‘Solves All Problems’ (Quattrone and Hopper, 2006). These dystopian and utopian attitudes will soon be debunked and leave room for more balanced views, which will acknowledge that AI is both a means to address wicked problems and a wicked problem itself, and, again, realize that wisdom is always to be found in the middle, the very same middle in between views. In this case, a more balanced in-between view is to realize that AI itself is a construction. Like all resources (Feldman and Worline, 2006) and technologies (Orlikowski, 2000), its function and effect are not pre-given but will be determined by our use thereof.
For example, AI will be productive of ‘facts’, but of those that remind us that facts are ‘made’, and that there is nothing less factual than a fact: as the Romans knew so well (from factum, i.e. made), a fact is always constructed, and AI will be making them in huge quantities. This will be good for speculation, fostering imagination by making a huge number of them available, but also potentially bad, as those who own the ability to establish them as facts will magnify Foucault's adage that knowledge is power.

The third cue lies in the root leg-, from which originate so many words that characterize our contemporary world, both academic and not, including legere (to read, but also to pick and choose), legare (to knot) and indeed ‘religion’. Just as medieval classifying techniques used inventories of data to invent new solutions to old problems by recombining such data in novel forms, choosing and picking data depending on the purpose of the calculation, to imagine the future and reimagine the past (Carruthers, 1998), AI will use even bigger inventories of data to generate inventions until we finally realize that to explore ‘what is not’ and could become is much more fruitful in imagining the future and the unprecedented than to define ‘what is’ (Quattrone, 2017). Only then will AI be truly generative. Consider Steve Ballmer, then CEO of Microsoft, when presented with the first iPhone. He exclaimed, ‘who would want to pay five hundred dollars for a phone?’ He had not realized that to comprehend the power and complexities of technologies, it is better to think in terms of what they are not, rather than what they are. The cell phone is not a phone so much as it is a camera, a TV or cinema, a newspaper, a journal/calendar. Google begins a search with X, a negative, and then by creating correlations defines what Z could be (a phone may be a cinema) and what it could become (a meeting place). This move from the negative to the potential, from what is not to what can be, is the core of AI. AI can facilitate this exploration into what is not obvious and help us avoid taking things for granted. So, predicting how AI will develop and affect our lives is bound to fail, as there are so many ways this can go and many unintended consequences. At this stage, it may be more fruitful not to predict the future but to explore how we try to make sense of the unknowable future in the present, and which potential pathways we thereby open and which we close. Exploring the framing contests around AI, the actors involved and the various interests they attempt to serve may tell us more about ourselves than about AI – about our collective fantasies, fears and hopes that shape our present and future.

This brings us to whether, and to what extent, AI can inform human thinking and actions. That technologies influence our behaviour is now taken for granted, but given that this influence is not deterministic, and that technologies have affordances that go beyond the intentions of their designers, what counts as agency and where to find it is possibly a black box that GAI can contribute to reopening. Since the invention of the printing press, and the debate between Roland Barthes and Michel Foucault, the notion of authorship has been questioned (Barthes, 1994; Foucault, 1980), along with authors' authority and accountability.
This is even truer now, when algorithms of various kinds already take decisions seemingly autonomously, from high-frequency trading in finance to digital twins in construction, and are now also able to write meaningful sentences that potentially disrupt not only research but also the outlets where these texts are typically published, that is, academic journals (Conroy, 2023). We are moving from non-human ‘decision-makers’, be they self-driving cars or rovers autonomously exploring Mars, to non-human ‘makers’ tout court, with the difference that they have no responsibility and no accountability. And yet they influence the world and affect our personal, social and work lives. This has policy and theoretical implications. In policy terms, just as the legal form of the corporation emerged to limit and regulate individual greed (Meyer, Leixnering and Veldman, 2022), we may witness the emergence of a new fictitious persona, this time even more virtual than the corporation, with no factories and no employees, while still producing and distributing value through, and to, them, respectively. Designing anticipatory governance is even more intricate than with corporations, as these non-human ‘makers’ are even more dispersed and ephemeral, not to say slippery.

Theoretically, we may be at the edge of a revolution as important as the emergence of organization theory in the twentieth century. It was Herbert Simon (1969) who foresaw the need for a science of the artificial, that is, a science whose object was the organization of the production of artefacts of various kinds and the need to make sense of the relationship between means and ends when new forms of bounded rationality inform decision-making. We would not be surprised if a ‘New Science of the Artificial’, this time related to the study of AI rationality, emerged in the twenty-first century. For sure, there will be a need to govern AI and to study how the governance and organization of AI intertwine with human rationality, possibly changing the contours of both.

Niall G. MacKenzie, Stephanie Decker, Christina Lubinski

Recently, generative artificial intelligence (GAI) has been subject to breathless treatments by academics and commentators alike, with claims of impending ubiquity (or doom, depending on your perspective), of life as we know it being upended and of millions of jobs destroyed (Eglash et al., 2020). Historians will, of course, point out that this is nothing new. Technological innovation and adoption have a long and generally well-researched history (Chandler, 2006; Scranton, 2018), and the same is true for resistance to these innovations (Juma, 2016; Mokyr, 1990; Thompson, 1963) and moral panics (Orben, 2020). What, if anything, does history have to tell us about GAI from a theoretical perspective other than ‘it's not new…’?

Good historical practice requires a dialogue between past and present (Wadhwani and Decker, 2017). Thus, if we want to understand GAI we should understand the character of its development and the context in which it occurred and occurs.
GAI's history was, and is, underpinned by progress in several other areas, including mathematics, information technology and telecommunications, warfare, mining and computing science (amongst many more) (Buchanan, 2006; Chalmers, MacKenzie and Carter, 2021; Haenlein and Kaplan, 2019). This means that despite GAI's rapid recent progress, it is still the result of iterative developments across various other sectors which enable(d) and facilitate(d) it. Consistent within this are the imagined futures (Beckert, 2016) pushed by technologists, entrepreneurs, policymakers and futurists about what it could mean for society.

The value of historical thinking with regard to new technologies like GAI can be illustrated by considering the social imaginaries (Taylor, 2004) generated as part of the experience of previous technologies and their development and adoption. When a technology emerges, there may be a fanfare about how it will change our lives for the better, and/or concerns about how it will disrupt settled societal arrangements (Budhwar, 2023). Technologies posited as ubiquitous, like GAI, are then often subject to competing claims: promises of imagined new futures where existing ways of doing things are improved, better alternatives averred and economic and societal benefits promised, often accompanied by challenges and concerns regarding job destruction, societal upheaval and the threat of machines taking over. As a consequence, the imaginaries compete with each other and are generative in and of themselves, in that they create spaces of possibility that frame experiments of adoption (Wadhwani and Viebig, 2021). We can analyse past imaginaries of existing technologies to better understand what the emergence of new technologies, and the auguries posited with them, tell us about how societies adopt and adapt to the changes they bring. However, it is only in a post-hoc fashion that we can understand the efficacy of such claims. For example, recent work by business historians has considered how we understand posited past futures of entrepreneurs across a range of technological and non-technological transformations (Lubinski et al., 2023), illustrating the value that historical work brings to theorizing societal change brought about by such actions.

The imaginaries, good and bad, associated with technologies like GAI play an important role in their legitimation and adoption, as well as their opposition. Given the contested nature of such societally important technologies, it is therefore important to recognize and consider the context in which new technologies such as GAI emerge, in terms of the promises associated with them, the societal effects they have and how they unfold, in order to provide appropriate theories and conceptual lenses to better understand them. When exploring the integration of new technologies in context, historical analysis of both the technology in question and other technologies reveals nuances and insights that inform deeper theory about what a technology like GAI can mean to society.
The different imaginaries associated with GAI have clear parallels with what has come before.

The Luddite riots of the nineteenth century, in which textile workers sought to destroy machinery that was replacing their labour (Mokyr, 1990; Thompson, 1963), are probably the most famous negative societal response to the introduction of new technology, giving rise to the term ‘Luddite’ that is still commonly used today to describe someone opposed to technology. Contrastingly, the playwright Oscar Wilde posited in his 1891 essay ‘The soul of man under socialism’ that ‘All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery’ (Wilde, 1891/2007). More recently, Lawrence Katz, a labour economist at Harvard, echoed Wilde's suggestion by predicting that ‘information technology and robots will eliminate traditional jobs and make possible a new artisanal economy’ (Thompson, 2015). Both Wilde's and Katz's comments point to the imaginary of the benefits that technology and automation can bring in freeing up people's time to focus on more creative and rewarding work and pursuits, whilst the Luddites were expressing serious misgivings about the imaginary in which their jobs, livelihoods and way of life were under serious threat from mechanization.

Good and bad imaginaries are a necessary part of the development of all new technologies but are only really understood post hoc and within context. As Mary O'Sullivan recently pointed out, based on her analysis of the emergence of steam engine use in Cornish copper mines in the eighteenth century, technology itself does not bring the general societal rewards suggested if the economic system in which it is developed remains controlled by small groups of powerful individuals (O'Sullivan, 2023). Similar concerns have been raised about GAI, with its principal proponents comprising a few global multinationals, as well as state-controlled interests such as the military, racing for dominance in the technology (Piper, 2023). The economic and political systems in which GAI is being developed are important to understand in relation to the imaginaries and promises being made concerning its value and the warnings of its threats, particularly in light of the history of societally important technological shifts.

As scholars, we face ongoing challenges in explaining new, ubiquity-focused technologies and the accompanying imaginaries (which often constitute noise, albeit with kernels of truth hidden therein). In this sense, when we seek to theorize about GAI and its potential impact on business and management (and vice versa), it is important to recognize that historical analysis does not foretell the future, but rather provides a critical understanding of how new innovations impact, and are impacted by, the societies in which they take place. Interrogating the contested imaginaries through the incorporation of historical thinking in our conceptualization of new technologies such as GAI will provide a deeper understanding of their impact, which in turn will allow us to better harness them for the greater good.

Olivia Brown, David A. Ellis, Julie Gore

Digital technologies continue to permeate society, not least in the way they allow individuals and teams to collaborate (Barley, Bechky and Milliken, 2017).
For instance, innovations in communication have led to a shift towards virtual working and the proliferation of globally distributed corporate teams (see Gilson et al., 2015). As the volume and variety of data types that can be linked together have also accelerated, we have witnessed the emergence of large language models (LLMs), with the introduction of ChatGPT bringing them to the attention of a much wider audience. Broadly referred to as a form of generative artificial intelligence (GAI), ChatGPT allows individuals (or teams) to ask questions and quickly be provided with detailed, actionable, conversational responses. Sometimes referred to as virtual agents within customer service and information retrieval systems, these conversational systems can effectively become virtual team members.

The view of technology as a means of facilitating effective teamwork in organizations has now shifted towards questions of whether, and under what circumstances, we can consider GAI a ‘team member’ (Malone, 2018). Conceptualizing GAI in this manner suggests a trend away from viewing technology as a supportive tool that is adjunct to human decision-making (see Robert, 2019 for a discussion of this in healthcare) towards, instead, giving it a direct and intrinsic role within the decision-making and task-execution processes of teams (O'Neill et al., 2022). New questions are therefore being raised: do AI team members improve the performance of a team, and would organizations trust them? If so, how much? To what degree are AI team members merely adjuncts to, or replacements for, real team members when it comes to decision-making? When a hybrid AI team completes a task, who takes responsibility for successes and failures? How can or should managers or leaders quantify accountability? Addressing these early questions suggests that it may soon be necessary to reframe and readdress the way in which teams are studied from theoretical, practical and ethical perspectives.

From a theoretical perspective, across the many definitions of teams that have been developed within the management literature, one constant is that they are generally understood to comprise ‘two or more individuals’ (Kozlowski and Bell, 2003; Kozlowski and Ilgen, 2006; Salas et al., 1992). If we are indeed approaching the point at which AI will ‘become an increasingly valuable team member’ (Salmon et al., 2023, p. 371), we will need to reconsider our definitions of what constitutes a team (i.e. is one human individual sufficient when paired with an AI member?). In turn, we then need to assess how the theoretical frameworks and constructs that facilitate teamwork operate within the context of AI–human teams. For instance, in Ilgen et al.'s (2005) widely adopted input–mediator–output–input (IMOI) model of teamwork, the input element has typically focused on the composition of the team (i.e. individual characteristics), alongside the structure of the team and the environment in which it operates (see also Mathieu et al., 2008). As GAI is incorporated into organizational structure and design, it is pertinent to consider where (and indeed whether) it ought to be placed within this framework.
Should GAI be considered part of the team composition, as an input factor, or is it best accounted for in the technological capabilities of the wider organizational context? The answer to this question will have important implications both for research designs and for the way in which the academic community relays findings to practitioners. Time will tell, and the answers to these questions will require further systematic thought; however, this may then warrant the start of a ‘necessary scientific revolution’ of the kind Kuhn advocated (Kuhn, 1962).

Alongside situating GAI's place within our theoretical framing, we must also consider how established team constructs operate within this new frontier of teaming. For example, interpersonal trust is a key component in the performance of highly functioning teams, especially in instances where there is a high level of task interdependence between team members (De Jong, Dirks and Gillespie, 2016). Research has shown that communication behaviours (e.g. style, openness, responsiveness; see Henttonen and Blomqvist, 2005) influence the development of trust in virtual teams. This raises the question of how, in a wholly virtual interaction, we conceptualize and explore the development of interpersonal trust in AI–human teams. Is it possible that individuals will develop trust in AI in the same manner as they would in their human team members, and how might this then impact organizational performance and transform our understanding of what it means to interpersonally relate to technology?

These questions, amongst others, are documented in a growing body of literature (see O'Neill et al., 2022; Salmon et al., 2023; Seeber et al., 2020); however, at present, empirical research on GAI within the management literature remains limited (Dwivedi et al., 2023). It is now pertinent for management scholars to begin addressing these questions empirically, as we face rapidly evolving and potentially disruptive changes to the world of work, not seen since the beginning of the digital age. For example, lab-based experiments could manipulate AI team members by changing their ‘personality’ or by adding/removing them from teams. Such effects could be further explored across contexts, for example where an AI team member is given a greater or reduced physical presence (e.g. via a robot) or across task types (creative versus procedural). Observational research and interview studies will also be valuable in providing an initial understanding of perceptions of GAI, alongside insights into how GAI is currently being incorporated into working structures and organizational teams, and where managers and employees perceive it might be incorporated in the future.
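To make the lab-based manipulations suggested above more concrete, the sketch below lays out one way such a factorial design might be organized. It is a minimal illustration only: the factors, levels and number of teams are assumptions made for the example, not a design proposed by the authors.

```python
# Minimal sketch of a factorial design for studying AI team members.
# Factors, levels and sample size are illustrative assumptions only.
import itertools
import random

factors = {
    "ai_member": ["absent", "present"],          # add/remove the AI team member
    "ai_personality": ["neutral", "proactive"],  # the manipulated 'personality'
    "task_type": ["creative", "procedural"],     # task context
}

# Full factorial: every combination of levels (2 x 2 x 2 = 8 conditions).
conditions = [dict(zip(factors, levels))
              for levels in itertools.product(*factors.values())]

def assign_teams(n_teams: int, seed: int = 42) -> list[dict]:
    """Randomly assign teams to conditions, roughly balanced across the design."""
    rng = random.Random(seed)
    pool = (conditions * (n_teams // len(conditions) + 1))[:n_teams]
    rng.shuffle(pool)
    return [{"team_id": i, **cond} for i, cond in enumerate(pool)]

if __name__ == "__main__":
    for row in assign_teams(n_teams=16)[:4]:
        print(row)
```

Outcome measures such as decision quality or trust ratings could then be compared across cells in the usual way; the point of the sketch is simply to show how small such a design is to specify relative to the theoretical questions it would be asked to answer.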
Alongside definitional issues and the need to re-examine how teamwork constructs operate within human–GAI teams, there are practical considerations posed by the introduction of GAI at work. As researchers, we are already facing a poignant challenge in connecting the myriad ways individuals can interact with networked technologies with their offline behaviours (Brown et al., 2022; Smith et al., 2023). At present, efforts to capture the interplay between actions taken online and actions taken in the real world have largely failed to understand the nuanced behavioural and psychological mechanisms that might link the two (see Smith et al., 2023). For instance, while digital technologies such as Microsoft Teams, Slack and Zoom are now widespread across organizations, scholars have noted that our understanding of how teams engage with these technologies, and how they might improve or hamper team effectiveness, remains limited when compared to the individual- and organizational-level impacts (Larson and DeChurch, 2020). The introduction of GAI may only serve to widen this gap in understanding, as the line between technologically driven and human-driven behaviour becomes increasingly blurred (see Dwivedi et al., 2023). To overcome this, management scholars must carefully consider the methods that will be required to study GAI in teams and be open to utilizing innovative practices from other disciplines (e.g. human factors, computer science, psychology). This will allow for the triangulation of findings from experimental and observational studies with data derived directly from the digital services that sit at the centre of modern working life.

Finally, at the forefront of our exploration of GAI in work teams, ethical considerations must be addressed. Indeed, there has been much conjecture about the perils of AI in organizational psychology and human resource management amongst both scholars and practitioners (CIPD, 2021, 2022a, 2022b, 2022c). Practitioner-centred outlets and public discourse are filled with a focus on risk mitigation, the implications for recruitment practices, legal and cross-country considerations, unwanted employee monitoring software and a somewhat Luddite philosophy surrounding the dark side of AI (Cheatham, Javanmardian and Samandari, 2019; Giermindl et al., 2022; McLean et al., 2023). Despite this, it remains plausible that in the coming years ChatGPT will become an everyday reality at work, such that it is used as frequently as virtual meeting platforms and email. While, for some, a team readily supported by GAI might be a welcome prospect, such a reality could also be perceived as a dystopian nightmare, with any number of ethical challenges (see Mozur, 2018). This equally applies to how we study any effects on people and organizations. In considering the ethical implications of GAI in teams, it is, of course, important to outline the recognized potential for societal benefits. For instance, many challenges whereby teams become unable to make decisions due to increased cognitive load, especially in atypical, high-reliability organizations, could be mitigated with the use of AI (Brown, Power and Conchie, 2020, 2021). For example, an artificial agent with no cognitive limitations could remind a team that some solutions will bring risks that members have failed to consider (Steyvers and Kumar, 2023).

On the other hand, ChatGPT and similar systems have been predominantly trained on English text, and such systems build in existing societal biases that are then further magnified (Dwivedi et al., 2023; Weinberger, 2019).
Furthermore, whereas traditional software is developed by humans writing computer code that provides explicit step-by-step instructions, ChatGPT is built on a neural network trained on billions of words. Therefore, while there is some understanding of how such systems work at a technical level, there are also many gaps in existing knowledge which will not be filled overnight, generating issues relating to the transparency of these systems (Dwivedi et al., 2023; Robert, 2019). While there are no easy answers to the current (and yet-to-come) ethical concerns that accompany the study of AI in teams, there are uncontroversial processes by which we can continually operate and self-reflect. Our developing ability to make comprehensive assessments of digital, hybrid and traditional teams' performance carries with it weighty questions about how this power will be used and who will be using it. We must therefore consider how organizations (and indeed we, as researchers) might incorporate these tools into teamwork and research processes thoughtfully and humanely. Introducing interdisciplinary ethics committees that include a wider range of stakeholders (e.g. members of the public, technology developers) offers a potential solution here, and would help to engender responsible and innovative research into GAI within management studies.

Encompassing all the above, management scholars will need to become increasingly comfortable engaging with other disciplines, the public and policymakers, all of whom have unique perspectives (Kindon, Pain and Kesby, 2007), as part of an interdisciplinary endeavour to address the methodological and theoretical challenges that lie ahead. This involves accepting that while the study of GAI in teams is certainly not staring into the abyss, our current theories, methods, expertise and ethical explorations remain far from conclusive.

Daniel Muzio, James Faulconbridge

There has been a great deal of journalistic, practitioner and academic attention on the topic of artificial intelligence (AI) and the professions. Some authors (Armour and Sako, 2020; Faulconbridge, Sarwar and Spring, 2023; Goto, 2021; Pemer and Werr, 2023; Spring, Faulconbridge and Sarwar, 2022) have focused on how professional services firms introduce and use increasingly sophisticated technological solutions. Others (Leicht and Fennell, 2023; Sako, Qian and Attolini, 2022) have focused on the impact of AI on professional labour markets. Indeed, the consensus seems to be that, unlike previous technological revolutions, the current one will primarily concern professional and knowledge workers. However, given the prospect of wide-ranging change, surprisingly little attention has been paid to how AI may affect our theoretical understanding of professionalism as a distinct work organization principle. This is unfortunate, since the new AI revolution is likely to challenge some deeply held assumptions and understandings which underpin the sociology of the professions as a distinct body of knowledge (Abbott, 1988; Johnson, 1972; Larson, 1977; Muzio, Kirkpatrick and Aulakh, 2019).
In this contribution, we focus on this issue and reflect on how AI might affect the way we understand professionalism.

Gazi Islam, Michelle Greenwood

The founding of the Royal Society of London's scholarly journal, described by the historian Biagioli (as cited in Strathern, 2017), illustrates how scientific production rests on paradoxes and precarious relationships at a distance. Biagioli describes how the Royal Society became the locus of a plethora of scholarly correspondence from distant geographies, which it acknowledged in its title as ‘giving some accompt (account)… of the ingenious in many parts of the world’ (Royal Society London, 1665, cover). In contrast to its sparsely attended, gentlemanly, in-person meetings, the broadening of the transactions through correspondence produced a publicly available, globalized scholarly record, but also led to a problem regarding the credibility of the interlocutors. The Society's solution was to develop an ‘epistolary etiquette’ by which the value of contributions could be assessed without direct personal relationships. The current system of scholarly peer review and journal publication descends from this system of partial connections and evaluation at a distance (Strathern, 2017).

The case of the Royal Society journal is interesting because it lays bare the relational infrastructure that undergirded the production of scholarship. Both collegial (because it required ongoing scholarly interaction and etiquette) and impersonal (because it required judgement at a distance between strangers), scholarly production involved a balancing act between proximity and distance, a system of partial relations that was itself emblematic of emerging modern conceptions of civil society (Strathern, 2020). Beyond flashes of creative insight or financial patronage – although both were present – it was this relational infrastructure that allowed the emergence of modern scholarship within newly forming national civil societies.

We do not argue that such epistolary conventions are the only (or best) way to produce scholarly advancement, but they are the structures we have inherited, and they are quickly being called into question by the emergence of recent technologies. One of these – not the only one – is generative artificial intelligence (GAI), or its recent incarnation in large language models (LLMs) like ChatGPT. LLMs promise to intervene in the scholarly process at virtually every point of knowledge production, from writing text and simulating data to ‘peer’ reviewing and editing. It is likely that the mix of human creation and mechanical supplement already woven into scholarly publishing will shift considerably. With what results?

Taking a relational perspective on knowledge production allows us to imagine how scholarly knowledge may be shaped by LLMs. Specifically, drawing on Strathern's (2000, 2004, 2020) work on relations and knowledge practices, we argue that networks of relationships (and the actors thereby constituted) change both the production of knowledge and the nature of its accountability. The embedding of LLMs in these networks could produce a radically reshaped research landscape, with unpredictable consequences for what counts as knowledge in our field.

Robert M. Davison, M. N. Ravishankar

Theorizing is a messy business. It involves multiple sources of evidence and multiple possible explanations.
The sources of data may include interviews, observations, literature, documents and diaries. They may be coded in multiple (human) languages and in multiple registers, from the formal to the informal, from the technical to the mundane. While there are clear guidelines for how researchers can approach theorizing (Gioia, Corley and Hamilton, 2013; Hassan, Lowry and Mathiassen, 2022; Martinsons, Davison and Ou, 2015; Weick, 1989), in practice, theorizing is an idiosyncratic activity that reflects the style, personality, values and culture of the theorizer. Thus, the most convincing theoretical explanation may be one that is more parsimonious, interesting, counterintuitive and/or provocative. Crafting that convincing theoretical explanation requires adherence to multiple standards (parsimony, interestingness, etc.), each of which competes with the others for attention.

Generative artificial intelligence (GAI) programs like ChatGPT have several useful attributes that might assist researchers as they theorize. For instance, GAI programs may be able to synthesize some of the literature or other documents. Such syntheses can be invaluable, as producing them manually often requires considerable time. But synthesizing the literature is not simply a mechanical task with a precise end state (the synthesis): it is also a way of understanding how prior research has been conceived, or not conceived. When reading a series of research papers, the perspicacious researcher will, in addition to synthesizing, note both the prominent and the absent trends or patterns. For instance, the researcher may recall a study or method or theory from some years previously, in a different field or discipline, that could usefully be compared with or inform this literature. Naturally, the human brain is somewhat selective: the researcher is unlikely to have read the entirety of the literature across multiple disciplines, and so this comparison is limited by the researcher's own reading. Can the GAI program help here, perhaps by suggesting the relevance of a study in a very different discipline? To give two real examples: when writing a paper (Liu et al., 2023) about the role of Chief Digital Officers in digital transformation, one of us employed punctuated equilibrium theory (PET), a theory first proposed in evolutionary biology (Eldredge and Gould, 1972) and occasionally encountered in the management and information systems literatures (Gersick, 1991; Wong and Davison, 2018). In our discussion, we found that we needed to examine more closely the way PET had been applied in recent business research, and then to draw parallels between the focus on digital transformation and the evolutionary biology sources. The literature in the latter area is huge: perhaps GAI could have helped identify salient sources, in effect working as a research assistant? No doubt GAI could also synthesize those sources and even render their technical jargon into a form that an information systems researcher could more readily comprehend. But what would this type of non-active participation in the research process cause researchers to lose? As it turned out, in these examples we were not assisted by GAI. We simply Googled the relevant terms and quickly enough found exactly the paper that we needed to support and develop our arguments (see Liu et al., 2023). Similarly, when writing papers on the role of framing in IT-enabled sourcing, the other author could have benefitted immensely from GAI's ability to synthesize the huge corpus of scholarship on framing in the social psychology literature (Ravishankar, 2015; Sandeep and Ravishankar, 2016). However, we had to do the dejargonizing work ourselves, a process that admittedly took some time but was intellectually stimulating. Indeed, these examples neatly encapsulate many of the things that we appreciate about research, and we would be loath to relinquish them to GAI.
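Although the authors ultimately preferred to do this synthesis and ‘dejargonizing’ work themselves, a minimal sketch of what delegating that step to a GAI program might look like is given below. It assumes the OpenAI Python client and an API key in the environment; the model name and prompt wording are illustrative placeholders, not a description of the authors' workflow.

```python
# Minimal sketch: asking an LLM to 'dejargonize' a technical abstract.
# Assumes the openai Python package (v1+) and OPENAI_API_KEY set in the environment;
# the model name and prompt wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

def dejargonize(abstract: str, audience: str = "information systems researchers") -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; substitute whatever is available
        messages=[
            {"role": "system",
             "content": ("Summarize technical abstracts in plain language for "
                         f"{audience}. Keep the key findings and caveats; avoid "
                         "discipline-specific jargon.")},
            {"role": "user", "content": abstract},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample = ("Punctuated equilibrium posits that lineages persist in morphological "
              "stasis, with change concentrated in rapid speciation events.")
    print(dejargonize(sample))
```

Any such output would, of course, still need the kind of careful human reading the authors describe before it could responsibly inform theorizing.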
A second example where GAI may help concerns data transcription. As researchers, we often collect data through interviews. Traditionally, we transcribe the interviews to text and, where necessary, translate them into the language in which we wish to code them, often English. GAI programs can certainly be used for interview transcription and translation. The software can speed up the initial process considerably, but its error rate is non-trivial, that is, careful manual checking of the transcription/translation is needed. For instance, we recently used GAI to transcribe and then translate interviews from Chinese to English. As part of our preparation, we needed to inform the software that the source material was in Chinese (Mandarin), so that the Chinese language module would be applied. However, the audio included English words embedded in it, that is, the interviewees spoke both Chinese and English in their interview responses. This is technically referred to as code mixing, and it is quite common among second-language users: they use their first language for much of their communication but mix in words from second languages on an ad hoc basis, often because the second-language word expresses an idea or concept more succinctly than the corresponding first-language word would. Such code mixing exists in both spoken and written communication. The GAI transcription software accurately recognized and transcribed the Chinese words, but was unable to deal with the English words because it was not expecting them, so it rendered them by converting them phonetically. For instance, the abbreviation EDI (electronic data interchange) was rendered in Chinese characters not as the correct translation of EDI (電子數據交換) but as characters that approximated the sound of the letters E D I (一點愛). These inserted characters (which actually mean ‘a little love’) were totally inappropriate in the context and made no sense at all. Perhaps in the future, GAI programs could be instructed to look out for words in specific languages and so transcribe or translate appropriately.
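As a concrete illustration of the transcription step described above, the sketch below uses the open-source openai-whisper package; this is an assumed tool chosen for illustration (with a placeholder file name), not necessarily the software the authors used. Passing a single language hint is precisely what makes code-mixed English terms such as ‘EDI’ liable to be rendered phonetically, so the manual checking described above remains essential.

```python
# Minimal sketch: transcribing a Mandarin interview with a language hint.
# Assumes the open-source openai-whisper package; not necessarily the tool the authors used.
import whisper

model = whisper.load_model("small")  # larger models trade speed for accuracy

# The language hint applies the Chinese module, but code-mixed English terms
# (e.g. 'EDI') may still be transcribed phonetically and need manual correction.
result = model.transcribe("interview_01.wav", language="zh")  # placeholder file name

for segment in result["segments"]:
    print(f"[{segment['start']:6.1f}s] {segment['text']}")
```

A second manual pass over the timestamped segments, looking for suspicious phonetic renderings of known acronyms, is one pragmatic way to catch artefacts of the ‘一點愛’ kind described above.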
When it comes to the analysis of data, that is, the identification of themes and patterns and the generation of theoretical arguments, our earlier comments about parsimony, interestingness, counterintuitiveness and provocativeness come to the fore. Although the efficiency of human analytical capacity may not be superior to GAI's, given GAI's potential to analyse vast quantities of data quickly, to compare those data with past literature and presumably to generate many possible options, we suggest that the effectiveness of human intuition is superior because of our ability to identify an interesting or provocative or counterintuitive angle that is worth exploring. Quite what is interesting or provocative or counterintuitive is hard to pin down, as it depends to a large extent on the subjective assessment of the researcher who is going to create an argument to justify that interesting, provocative or counterintuitive theoretical explanation. This human capability goes beyond creating new content from patterns in data, and it is central to theorizing: the researcher(s) need to draw on their innate imagination and creativity to craft that theoretical explanation. Could a GAI program be trained to identify potentially interesting, provocative or counterintuitive positions, and then to craft the supporting arguments? The answer must be yes, but how convincing they would be is moot. They might help the researcher to identify promising new lines of thought, or might stimulate further intellectual engagement, with the GAI program acting as an agent provocateur. A final point, which slightly contradicts our arguments so far, is worth making. Concerns are being voiced that the apparent limits of GAI really reflect users' inability to ask the system the ‘right’ questions. If GAI's intuition and reasoning powers appear unable to produce sophisticated theorizing, could it be that the issue is less about GAI capability and more about scholars' relatively limited experience and knowledge in employing ‘prompts’? This line of thought opens the intriguing possibility that GAI is far more potent than we realize, and that it may indeed produce academically sound, rigorous, novel and elegant theorizing of significant value.
However, this also raises concerns about the accuracy and reliability of the information sourced, given the dynamic and sometimes unverified nature of web content. Additionally, real-time web access might complicate data privacy and security, as the boundaries of GAI interactions extend into the vast and diverse Internet landscape. These factors necessitate a careful and responsible approach to evaluating and using advanced GAI capabilities in business and management contexts.</p><p>GAI is attracting much interest both in the academic and business practitioner literature. A quick search in Google Scholar, using the search terms ‘generative artificial intelligence’ and ‘business’ or ‘management’, yields approximately 1740 results. Within this extensive repository, scholars delve into diverse facets, exploring GAI's potential applications across various business and management functions, contemplating its implications for management educators and scrutinizing specific technological applications. Learned societies such as the British Academy of Management have also joined forces in leading the discussion on AI and digitalization in business and management academe. Meanwhile, practitioners and consultants alike (e.g. McKinsey &amp; Company, PWC, World Economic Forum) have produced dedicated discussions, reports and forums to offer insights into the multifaceted impacts and considerations surrounding the integration of GAI in contemporary business and management practices. Table 1 illustrates some current applications of GAI as documented in the practitioner literature.</p><p>In an attempt to capture the new opportunities and challenges brought about by this technology and to hopefully find a way forward to guide research and practice, management journals have been swift to embrace the trend, introducing special issues on GAI. These issues aim to promote intellectual debate, for instance in relation to specific business disciplines (e.g. Benbya, Pachidi and Jarvenpaa, <span>2021</span>) or organizational possibilities and pitfalls (Chalmers <i>et al.</i>, <span>2023</span>). However, amidst these commendable efforts that reflect a broad spectrum of perspectives, a critical examination of the burgeoning hype around GAI reveals a significant gap. Despite the proliferation of discussions from scholars, practitioners and the general public, the prevailing discourse is often speculative, lacking a robust theoretical foundation. This deficiency points to the challenges to existing theories in terms of their efficacy in explaining the unique demands created by GAI and indicates an urgent need for refining prior theories or even redeveloping new theories. There is a pressing need to move beyond the current wave of hype and explore the theoretical underpinnings of GAI and the dynamics of its potential impact, to ensure a more nuanced and informed discussion that can guide future research and application in this rapidly evolving area.</p><p>In this direction, the <i>British Journal of Management</i> (BJM) invited prominent scholars who serve as editors in leading business and management journals to weigh in and contribute with their diverse theoretical knowledge to this symposium paper on the emerging GAI phenomenon. 
This collaborative effort aims to advance the theorization of business and management research in relation to the intricacies associated with the impact of GAI by engaging in intensive discussions on how theoretical attempts can be made to make sense of the myths and truths around GAI.</p><p>The quest for theory, either seeking or refining, is a long-standing tradition in business and management research (e.g. Colquitt and Zapata-Phelan, <span>2007</span>). While the seven pieces below place different elements under the spotlight of theoretical scrutiny, one common thread is the need to reconceptualize the relational realm of workplaces. The introduction of GAI in the workplace refines the norm of working together as a person-to-person group to working in a human–GAI group, with the latter illustrating three novel conceptual contributions in comparison to traditional understandings of the dynamics in the workplace.</p><p>Paolo Quattrone, Tammar Zilber, Renate Meyer</p><p>The etymology of words is often a source of insights to not only make sense of their meaning, but also speculate and imagine meanings that are not so obvious and thereby see the phenomena signalled by these words in new and surprising ways. The etymology of ‘artificial’ and ‘intelligence’ does not disappoint. ‘Artificial’ comes from ‘art’ and <i>-fex</i> ‘maker’, from <i>facere</i> ‘to do, make’. ‘Intelligence’ comes from <i>inter</i> ‘between’ and <i>legere</i> ‘choose, pick out, read’ but also ‘collect, gather’. There is enough in these etymologies to offer a few speculations and imagine the contours of generative artificial intelligence (GAI) and its possible futures.</p><p>The first of these is inspired by the craft of making and relates to the very function and use of AI. Most of the current fascinations with AI emphasize the predictive capacity of the various tools increasingly available and at easy disposal. Indeed, marketers know well in advance when we will need the next toothbrush, fuel our cars, buy new clothes, and so forth. The list is long. This feature of AI enchants us when, for instance, one thinks of a product and, invariably, an advertisement related to that product appears on our social media page. This quasi-magical predictive ability captures collective imaginations and draws upon very well-ingrained forms of knowledge production which presuppose that data techniques are there to represent the world, paradoxically, even when it is not there, as is the case with predictions. The issue is that the future is not out there; we do not know what future generations want from us and still, we are increasingly called to respond to their demands. Despite the availability of huge amounts of data points and intelligence, the future, even if proximal and mundane – as our examples above, always holds surprises. This means that AI may be useful not to predict the future, but to actually imagine and make it, as the -<i>fex</i> in ‘artificial’ reveals. This is the art in the ‘artificial’ and points to the possibility of conceiving AI as a compositional art, which helps us to create images of the future, sparks imagination and creativity and, hopefully, offers a space for speculation and reflection.</p><p>The word intelligence is our second cue, which stresses how ‘inter’ means to be and explore what is ‘in between’. As entrepreneurs are in between different ventures and explore what is not yet there (Hjorth and Holt, <span>2022</span>), AI may be useful to probe grey areas between statuses and courses of action. 
It can be used to create scenarios, to make sure that the very same set of data produces alternative options that leave space for juggling among different decision-making criteria without reducing decisions about complex states of affairs to single criteria, most likely, value rather than values. This is how, for instance, one could wisely refrain from both apocalyptic and salvific scenarios that characterize the debate about AI. On the one hand, AI is seen as one of the worst possible menaces to humankind. It will take control of our minds and direct our habits, making us entirely dependent. Very likely, just as the Luddites were proven wrong (but not completely) in the first and second Industrial Revolutions, the pessimistic views will prove wrong, but not completely, as it is clear that AI has agency (Latour, <span>1987</span>) in informing our judgement and it does so through various forms of multimodal affects, that is, relying on our vast repertoire of senses, all mobilized by new forms of technology (e.g. think of smartwatches and how they influence our training habits). On the other hand, AI – similar to the first enterprise resource planning (ERP) systems – is seen as a panacea for many of our problems, diseases and grand challenges, from poverty to climate change, at least until one realizes that SAP does not stand for ‘Solves All Problems’ (Quattrone and Hopper, <span>2006</span>). These dystopian and utopian attitudes will soon be debunked and leave room for more balanced views, which will acknowledge that AI is both a means to address wicked problems and a wicked problem itself, and, again, realize that wisdom is always to be found in the middle, the very same middle in between views. In this case, a more balanced in-between view is to realize that AI itself is a construction. As with all resources (Feldman and Worline, <span>2006</span>) and technologies (Orlikowski, <span>2000</span>), its function and effect are not pre-given but will be determined by our use thereof. For example, AI will be productive of ‘facts’ but of those that are reminiscent of the fact that facts are ‘made’, and that there is nothing less factual than a fact for, as the Romans knew so well (from <i>factum</i>, i.e. made), a fact is always constructed, and AI will be making them in huge quantities. This will be good for speculation, fostering imagination by making a huge number of them available, but also potentially bad, as those who will own the ability to establish them as facts will magnify Foucault's adage that knowledge is power.</p><p>The third cue stands in the root <i>leg</i>-, which originates so many words that characterize our contemporary world, both academic and not, including <i>legere</i> (to read, but also to pick and choose), <i>legare</i> (to knot) and indeed a religion. As much as medieval classifying techniques used <i>inventories</i> of data to <i>invent</i> new solutions to old problems by recombining such data in novel forms, by choosing and picking data depending on the purpose of the calculation, to imagine the future and reimagine the past (Carruthers, <span>1998</span>), AI will use even bigger inventories of data to generate inventions until we finally realize that to explore ‘what is not’ and could become is much more fruitful in imagining the future and the unprecedented than to define ‘what is’ (Quattrone, <span>2017</span>). Only then will AI be truly generative. Consider Steve Ballmer, then CEO of Microsoft, when presented with the first iPhone.
He exclaimed, ‘Who would want to pay five hundred dollars for a phone?’ He had not realized that to comprehend the power and complexities of technologies, it is better to think in terms of what they are not, rather than what they are. The cell phone is not a phone so much as it is a camera, a TV or cinema, a newspaper, a journal/calendar. Google begins a search with X, a negative, and then by creating correlations defines what Z could be (a phone may be a cinema) and what it could become (a meeting place). This move from the negative to the potential, from what is not to what can be, is the core of AI. AI can facilitate this exploration into what is not obvious and help us avoid taking things for granted. So, predicting how AI will develop and affect our lives is bound to fail as there are so many ways this can go and many unintended consequences. At this stage, it may be more fruitful not to predict the future but to explore how we try to make sense of the unknowable future in the present and which potential pathways we thereby open and which we close. Exploring the framing contests around AI, the actors involved and the various interests they attempt to serve may tell us more about ourselves than about AI – about our collective fantasies, fears and hopes that shape our present and future.</p><p>This brings us to whether and to what extent AI can inform human thinking and actions. That technologies influence our behaviour is now taken for granted, but given that this influence is not deterministic, and technologies have affordances that go beyond the intentions of the designers, what counts as agency and where to find it is possibly a black box that GAI can contribute to reopen. Since the invention of the printing press, and the debate between Roland Barthes and Michel Foucault, the notion of authorship has been questioned (Barthes, <span>1994</span>; Foucault, <span>1980</span>), along with authors’ authority and accountability. This is even truer now, when algorithms of various kinds already take decisions seemingly autonomously, from high-frequency trading in finance to digital twins in construction, and are now also able to write meaningful sentences that potentially disrupt not only research but also the outlets where these texts are typically published, that is, academic journals (Conroy, <span>2023</span>). We are moving from a non-human ‘decision-maker’, be it a self-driving car or a rover autonomously exploring Mars, to non-human ‘makers’ <i>tout court</i>, with the difference that they have no responsibility and no accountability. And yet they influence the world and affect our personal, social and work lives. This has policy and theoretical implications. In policy terms, as much as the legal form of the corporation emerged to limit and regulate individual greed (Meyer, Leixnering and Veldman, <span>2022</span>), we may witness the emergence of a new fictitious persona, this time even more virtual than the corporation, with no factories and employees, while still producing and distributing value through, and to, them, respectively. Designing anticipatory governance is even more intricate than with corporations, as these non-human ‘makers’ are even more dispersed and ephemeral, not to say slippery.</p><p>Theoretically, we may be at the edge of a revolution as important as the emergence of organization theory in the twentieth century.
It was Herbert Simon (<span>1969</span>) who foresaw the need for a science of the artificial, that is, a science whose object was the organization of the production of artefacts of various kinds, and which made sense of the relationship between means and ends when new forms of bounded rationality informed decision-making. We would not be surprised if a ‘New Science of the Artificial’, this time related to the study of AI rationality, emerged in the twenty-first century. For sure, there will be a need to govern AI and study how the governance and organization of AI intertwine with human rationality, possibly changing the contours of both.</p><p>Niall G. MacKenzie, Stephanie Decker, Christina Lubinski</p><p>Recently, generative artificial intelligence (GAI) has been subject to breathless treatments by academics and commentators alike, with claims of impending ubiquity (or doom, depending on your perspective) and life as we know it being upended, with millions of jobs destroyed (Eglash <i>et al.</i>, <span>2020</span>). Historians will, of course, point out that this is nothing new. Technological innovation and adoption have a long and generally well-researched history (Chandler, <span>2006</span>; Scranton, <span>2018</span>) and the same is true for resistance to these innovations (Juma, <span>2016</span>; Mokyr, <span>1990</span>; Thompson, <span>1963</span>) and moral panics (Orben, <span>2020</span>). What, if anything, does history have to tell us about GAI from a theoretical perspective other than ‘it's not new…’?</p><p>Good historical practice requires a dialogue between past and present (Wadhwani and Decker, <span>2017</span>). Thus, if we want to understand GAI we should understand the character of its development and the context in which it occurred and occurs. GAI's history was/is underpinned by progression in several other areas including mathematics, information technology and telecommunications, warfare, mining and computing science (amongst many more) (Buchanan, <span>2006</span>; Chalmers, MacKenzie, and Carter, <span>2021</span>; Haenlein and Kaplan, <span>2019</span>). This means that despite GAI's rapid recent progress, it is still the result of iterative developments across various other sectors which enable(d) and facilitate(d) it. Consistent within this are the imagined futures (Beckert, <span>2016</span>) pushed by technologists, entrepreneurs, policymakers and futurists about what it could mean for society.</p><p>The value of historical thinking with regard to new technologies like GAI can be illustrated by considering the social imaginaries (Taylor, <span>2004</span>) that have been generated as part of the experience of previous technologies and their development and adoption. When a technology emerges, there may be a fanfare about how it will change our lives for the better, and/or concerns about how it will disrupt settled societal arrangements (Budhwar, <span>2023</span>). Ubiquity-posited technologies like GAI are then often subject to competing claims – promises of imagined new futures where existing ways of doing things are improved, better alternatives averred and economic and societal benefits promised, but are also often accompanied by challenges and concerns regarding job destruction, societal upheaval and the threat of machines taking over.
As a consequence, the imaginaries compete with each other and are generative in and of themselves in that they create spaces of possibility that frame experiments of adoption (Wadhwani and Viebig, <span>2021</span>). We can analyse past imaginaries of existing technologies to better understand what the emergence of new technologies and the auguries posited with them tell us about how societies adopt and adapt to the changes they bring. However, it is only in a post-hoc fashion that we can understand the efficacy of such claims. For example, recent work by business historians has considered how we understand posited past futures of entrepreneurs across a range of technological and non-technological transformations (Lubinski <i>et al.</i>, <span>2023</span>), illustrating the value that historical work brings to theorizing societal change brought about by such actions.</p><p>The imaginaries, good and bad, associated with technologies like GAI play an important role in their legitimation and adoption, as well as their opposition. Given the contested nature of such societally important technologies, it is therefore important to also recognize and consider the context in which new technologies such as GAI emerge in terms of the promises associated with them, the societal effect they have and how they unfold in order to provide appropriate theories and conceptual lenses to better understand them. When exploring the integration of new technologies in context, historical analysis of both the technology in question and other technologies reveals nuances and insights that inform deeper theory about what a technology like GAI can mean to society. The different imaginaries associated with GAI possess clear parallels with what has come in the past.</p><p>The Luddite riots of the nineteenth century, whereby textile workers sought to destroy machinery that was replacing their labour (Mokyr, <span>1990</span>; Thompson, <span>1963</span>), are probably the most famous negative societal response to the introduction of new technology, giving rise to the term ‘Luddite’, which is still commonly used today to describe someone opposed to technology. By contrast, the playwright Oscar Wilde posited in his 1891 essay ‘The soul of man under socialism’ that ‘All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery’ (Wilde, <span>1891</span>/2007). More recently, Lawrence Katz, a labour economist at Harvard, echoed Wilde's suggestion by predicting that ‘information technology and robots will eliminate traditional jobs and make possible a new artisanal economy’ (Thompson, <span>2015</span>). Both Wilde's and Katz's comments point to the imaginary of the benefits that technology and automation can bring in freeing up people's time to focus on more creative and rewarding work and pursuits, whilst the Luddites were expressing serious misgivings about the imaginary that their jobs, livelihoods and way of life were under serious threat from mechanization.</p><p>Good and bad imaginaries are a necessary part of the development of all new technologies but are only really understood post hoc and within context.
As Mary O'Sullivan recently pointed out, based on her analysis of the emergence of steam engine use in Cornish copper mines in the eighteenth century, technology itself does not bring the general societal rewards suggested if the economic system in which it is developed remains controlled by small groups of powerful individuals (O'Sullivan, <span>2023</span>). Similar concerns have been raised about GAI, with its principal proponents comprising a few global multinationals, as well as state-controlled interests such as the military, racing for dominance in the technology (Piper, <span>2023</span>). The economic and political systems in which GAI is being developed are important to understand in relation to the imaginaries and promises being made concerning its value and warnings of its threats, particularly in light of the history of societally important technological shifts.</p><p>As scholars, we face ongoing challenges to explain new, ubiquity-focused technologies and the accompanying imaginaries (which often constitute noise, albeit with kernels of truth/accuracy hidden therein). In this sense, when we seek to theorize about GAI and its potential impact on business and management (and vice versa), it is important to recognize that historical analysis does not foretell the future, but rather provides a critical understanding of how new innovations impact and are impacted by the societies they take place in. Interrogating the contested imaginaries through the incorporation of historical thinking in our conceptualization of new technologies such as GAI will provide a deeper understanding of their impact, which in turn will allow us to better harness them for the greater good.</p><p>Olivia Brown, David A. Ellis, Julie Gore</p><p>Digital technologies continue to permeate society, not least the way in which they allow individuals and teams to collaborate (Barley, Bechky and Milliken, <span>2017</span>). For instance, innovations in communication have led to a shift towards virtual working and the proliferation of globally distributed corporate teams (see Gilson <i>et al.</i>, <span>2015</span>). As the volume and variety of data types that can be linked together have also accelerated, we have witnessed the emergence of large language models (LLMs), with the introduction of ChatGPT bringing them to the attention of a much wider audience. Broadly referred to as a form of generative artificial intelligence (GAI), ChatGPT allows individuals (or teams) to ask questions and quickly be provided with detailed, actionable, conversational responses. Sometimes referred to as virtual agents in customer service and information retrieval systems, these conversational agents can effectively become virtual team members.</p><p>The view of technology as a means with which to facilitate effective teamwork in organizations has now shifted towards questions of whether, and under what circumstances, we can consider GAI a ‘team member’ (Malone, <span>2018</span>). Conceptualizing GAI in this manner suggests a trend away from viewing technology as a supportive tool that is adjunct to human decision-making (see Robert, <span>2019</span> for a discussion of this in healthcare) to, instead, having a direct and intrinsic role within the decision-making and task-execution processes in teams (O'Neill <i>et al.</i>, <span>2022</span>). New questions are therefore being raised as to whether AI team members improve the performance of a team and whether organizations would trust them. And if so, how much?
To what degree are AI team members merely adjunct to, or replacements for, human team members when it comes to decision-making? When a hybrid human–AI team completes a task, who takes responsibility for successes and failures? How can or should managers or leaders quantify accountability? Addressing these early questions suggests that it may soon be necessary to reframe the way in which teams are studied from theoretical, practical and ethical perspectives.</p><p>From a <b>theoretical</b> perspective, across the many definitions of teams that have been developed within the management literature, one constant is that they are generally understood to comprise ‘two or more individuals’ (Kozlowski and Bell, <span>2003</span>; Kozlowski and Ilgen, <span>2006</span>; Salas <i>et al.</i>, <span>1992</span>). If we are indeed approaching the point at which AI will ‘become an increasingly valuable team member’ (Salmon <i>et al.</i>, <span>2023</span>, p. 371), we will need to reconsider our definitions of what constitutes a team (i.e. is one human individual sufficient when paired with an AI member?). In turn, we then need to assess how theoretical frameworks and constructs that facilitate teamwork operate within the context of AI–human teams. For instance, in Ilgen <i>et al.</i>’s (<span>2005</span>) widely adopted input–mediator–output–input (IMOI) model of teamwork, the input element has typically focused on the composition of the team (i.e. individual characteristics), alongside the structure of the team and the environment in which they are operating (see also Mathieu <i>et al.</i>, <span>2008</span>). As GAI is incorporated into organizational structure and design, it is pertinent to consider where (and indeed whether) it ought to be placed within this framework. Should GAI be considered part of the team composition as an input factor or is it best accounted for in the technological capabilities of the wider organizational context? The answer to this question will have important implications both for research designs and for the way in which the academic community relays findings to practitioners. Time will tell, and the answers to these questions will require further systematic thought; however, this may then warrant the start of a ‘necessary scientific revolution’ of the kind Kuhn described (Kuhn, <span>1962</span>).</p><p>Alongside situating GAI within our theoretical framing, we must also consider how established team constructs operate within this new frontier of teaming. For example, interpersonal trust is a key component in the performance of highly functioning teams, especially in instances where there is a high level of task interdependence between team members (De Jong, Dirks and Gillespie, <span>2016</span>). Research has shown that communication behaviours (e.g. style, openness, responsiveness; see Henttonen and Blomqvist, <span>2005</span>) influence the development of trust in virtual teams, thus raising the question of how, in a wholly virtual interaction, we conceptualize and explore the development of interpersonal trust in AI–human teams.
Is it possible that individuals will develop trust in AI in the same manner that they would in their human team members, and how might this then impact organizational performance and transform our understanding of what it means to interpersonally relate to technology?</p><p>These questions, amongst others, are documented in a growing body of literature (see O'Neill <i>et al.</i>, <span>2022</span>; Salmon <i>et al.</i>, <span>2023</span>; Seeber <i>et al.</i>, <span>2020</span>); however, at present, empirical research on GAI within the management literature remains limited (Dwivedi <i>et al.</i>, <span>2023</span>). It is now pertinent for management scholars to begin addressing these questions empirically, as we face rapidly evolving and potentially disruptive changes to the world of work, not seen since the beginning of the digital age. For example, lab-based experiments could manipulate AI team members by changing their ‘personality’ or adding/removing them from teams. This could be further understood in terms of effects in different contexts (where an AI team member is given a greater or reduced physical presence, e.g. via a robot) and across different tasks (creative vs procedural). Observational research and interview studies will also be valuable in providing an initial understanding of the perceptions of GAI, alongside insights into how GAI is being incorporated into working structures and organizational teams at present and where managers and employees perceive it might be incorporated in the future.</p><p>Alongside definitional issues and the need to re-examine how teamwork constructs operate within human–GAI teams, there are <b>practical considerations</b> posed by the introduction of GAI at work. As researchers, we are already facing a pressing challenge in connecting the myriad ways individuals can interact with networked technologies with their offline behaviours (Brown <i>et al.</i>, <span>2022</span>; Smith <i>et al.</i>, <span>2023</span>). At present, efforts to capture the interplay between actions taken online and actions taken in the real world have largely failed to understand the nuanced behavioural and psychological mechanisms that might link the two (see Smith <i>et al.</i>, <span>2023</span>). For instance, while digital technologies such as Microsoft Teams, Slack and Zoom are now widespread across organizations, scholars have noted that our understanding of how teams engage with these technologies and how they might improve, or hamper, team effectiveness remains limited when compared to the individual- and organizational-level impacts (Larson and DeChurch, <span>2020</span>). The introduction of GAI may only serve to widen this gap in understanding, as the line between technologically driven and human-driven behaviour becomes increasingly blurred (see Dwivedi <i>et al.</i>, <span>2023</span>). To overcome this, management scholars must carefully consider the methods that will be required to study GAI in teams and be open to utilizing innovative practices from other disciplines (e.g. human factors, computer science, psychology). This will allow for the triangulation of findings from experimental and observational studies with data derived directly from the digital services that sit at the centre of modern working life.</p><p>Finally, at the forefront of our exploration of GAI in work teams, ethical considerations must be addressed.
Indeed, there has been much conjecture about the perils of AI in organizational psychology and human resource management amongst both scholars and practitioners (CIPD, <span>2021, 2022a</span>, <span>2022b, 2022c</span>). Practitioner-centred outlets and public discourse focus heavily on risk mitigation, the implications for recruitment practices, legal and cross-country considerations, unwanted employee monitoring software and a somewhat Luddite philosophy surrounding the dark side of AI (Cheatham, Javanmardian and Samandari, <span>2019</span>; Giermindl <i>et al.</i>, <span>2022</span>; McLean <i>et al.</i>, <span>2023</span>). Despite this, it remains plausible that in the coming years, ChatGPT will become an everyday reality at work, such that it is used as frequently as virtual meeting platforms and email. While, for some, a team that is readily supported by GAI might be a welcome prospect, such a reality could also be perceived as a dystopian nightmare, with any number of ethical challenges (see Mozur, <span>2018</span>). This equally applies to how we study any effects on people and organizations. In considering the ethical implications of GAI in teams, it is, of course, important to outline the recognized potential for societal benefits. For instance, many challenges whereby teams become unable to make decisions due to increased cognitive load, especially in atypical, high-reliability organizations, could be mitigated with the use of AI (Brown, Power and Conchie, <span>2020, 2021</span>). For example, an artificial agent without the same cognitive limitations could remind a team that some solutions will bring risks that members have failed to consider (Steyvers and Kumar, <span>2023</span>).</p><p>On the other hand, ChatGPT and similar systems have been predominantly trained on English text, and such systems build in existing societal biases that are then further magnified (Dwivedi <i>et al.</i>, <span>2023</span>; Weinberger, <span>2019</span>). Furthermore, whereas traditional software is developed by humans, with computer code providing explicit step-by-step instructions, ChatGPT is built on a neural network that was trained using billions of words. Therefore, while there is some understanding about how such systems work at a technical level, there are also many gaps in existing knowledge which will not be filled overnight, generating issues relating to the transparency of these systems (Dwivedi <i>et al.</i>, <span>2023</span>; Robert, <span>2019</span>). While there are no easy answers to the current (and yet-to-come) ethical concerns that accompany the study of AI in teams, there are uncontroversial processes by which we can perpetually operate and self-reflect. Our developing ability to make comprehensive assessments of digital, hybrid and traditional teams’ performance carries with it heavy questions about how this power will be used and who will be using it. We must therefore consider how organizations (and indeed we, as researchers) might incorporate these tools into teamwork <i>and</i> research processes thoughtfully and humanely. Introducing interdisciplinary ethics committees that include a wider range of stakeholders (e.g.
members of the public, technology developers) offers a potential solution here, and will help to engender responsible and innovative research into GAI within management studies.</p><p>Encompassing all the above, management scholars will need to become increasingly comfortable when engaging with other disciplines, the public and policymakers, all of whom have unique perspectives (Kindon, Pain and Kesby, <span>2007</span>), as part of an interdisciplinary endeavour to address the methodological and theoretical challenges that lie ahead. This involves accepting that while the study of GAI in teams for management scholars is certainly not staring into the abyss, our current theories, methods, expertise and ethical explorations remain far from conclusive.</p><p>Daniel Muzio, James Faulconbridge</p><p>There has been a great deal of journalistic, practitioner and academic attention to the topic of artificial intelligence (AI) and the professions. Some authors (Armour and Sako, <span>2020</span>; Faulconbridge, Sarwar and Spring, <span>2023</span>; Goto, <span>2021</span>; Pemer and Werr, <span>2023</span>; Spring, Faulconbridge and Sarwar, <span>2022</span>) have focused on how professional services firms introduce and use increasingly sophisticated technological solutions. Others (Leicht and Fennell, <span>2023</span>; Sako, Qian and Attolini, <span>2022</span>) have focused on the impact of AI on professional labour markets. Indeed, the consensus seems to be that unlike previous technological revolutions, the current one will concern primarily professional and knowledge workers. However, given the prospect of wide-ranging change, surprisingly little attention has been paid to how AI may affect our theoretical understanding of professionalism as a distinct work organization principle. This is unfortunate, since the new AI revolution is likely to challenge some deeply held assumptions and understandings which underpin the sociology of the professions as a distinct body of knowledge (Abbott, <span>1988</span>; Johnson, <span>1972</span>; Larson, <span>1977</span>; Muzio, Kirkpatrick and Aulakh, <span>2019</span>). In this contribution, we focus on this issue and reflect on how AI might affect the way we understand professionalism.</p><p>Gazi Islam, Michelle Greenwood</p><p>The founding of the scholarly <i>Journal of the Royal Society of London</i>, described by the historian Biagioli (as cited in Strathern, <span>2017</span>), illustrates how scientific production rests on paradoxes and precarious relationships at a distance. Biagioli describes how the Royal Society became the locus of a plethora of scholarly correspondence from distant geographies, which it acknowledged in its title as ‘giving some accompt (account)… of the ingenious in many parts of the world’ (Royal Society London, <span>1665</span>, cover). In contrast to its sparsely attended, gentlemanly, in-person meetings, the broadening of the transactions through correspondence produced a publicly available, globalized scholarly record, but also led to a problem regarding the credibility of the interlocutors. The Society's solution was to develop an ‘epistolary etiquette’, by which the value of contributions could be assessed without direct personal relationships.
The current system of scholarly peer review and journal publication descends from this system of partial connections and evaluation at a distance (Strathern, <span>2017</span>).</p><p>The case of the Royal Society journal is interesting because it lays bare the relational infrastructure that undergirded the production of scholarship. Both collegial (because it required ongoing scholarly interaction and etiquette) and impersonal (because it required judgement at a distance between strangers), scholarly production involved a balancing act between proximity and distance, a system of partial relations that was itself emblematic of emerging modern conceptions of civil society (Strathern, <span>2020</span>). Beyond flashes of creative insight or financial patronage – although both were present – it was this relational infrastructure that allowed the emergence of modern scholarship within newly forming national civil societies.</p><p>We do not argue that such epistolary conventions are the only (or best) way to produce scholarly advancement, but these are the structures we have inherited, and they are quickly being called into question by the emergence of recent technologies. One of these – not the only one – is generative artificial intelligence (GAI), or its recent incarnation in large language models (LLMs) like ChatGPT. LLMs promise to intervene in the scholarly process at virtually every point of knowledge production, from writing text and simulating data to ‘peer’ reviewing and editing. It is likely that the mix of human creation and mechanical supplement already woven into scholarly publishing will shift considerably. With what results?</p><p>Taking a relational perspective on knowledge production allows us to imagine how scholarly knowledge may be shaped by LLMs. Specifically, drawing on Strathern's (<span>2000, 2004</span>, <span>2020</span>) work around relations and knowledge practices, we argue that networks of relationships (and the actors thereby constituted) change both the production of knowledge and the nature of its accountability. The embeddedness of LLMs in these networks could produce a radically reshaped research landscape, with unpredictable consequences for what counts as knowledge in our field.</p><p>Robert M. Davison, M. N. Ravishankar</p><p>Theorizing is a messy business. It involves multiple sources of evidence and multiple possible explanations. The sources of data may include interviews, observations, literature, documents and diaries. They may be coded in multiple (human) languages and in multiple registers from the formal to the informal, from the technical to the mundane. While there are clear guidelines for how researchers can approach theorizing (Gioia, Corley and Hamilton, <span>2013</span>; Hassan, Lowry and Mathiassen, <span>2022</span>; Martinsons, Davison and Ou, <span>2015</span>; Weick, <span>1989</span>), in practice, theorizing is an idiosyncratic activity that reflects the style, personality, values and culture of the theorizer. Thus, the most convincing theoretical explanation may be one that is more parsimonious, interesting, counterintuitive and/or provocative. Crafting that convincing theoretical explanation requires adherence to multiple standards (parsimony, interestingness, etc.), each of which competes with the others for attention.</p><p>Generative artificial intelligence (GAI) programs like ChatGPT have several useful attributes that might assist researchers as they theorize.
For instance, GAI programs may be able to synthesize some of the literature or other documents. Such syntheses can be invaluable, as producing them manually often requires considerable time. But synthesizing the literature is not simply a mechanical task with a precise end state: the synthesis. It is also a way of understanding how prior research has been conceived, or not conceived. When reading a series of research papers, the perspicacious researcher will, in addition to synthesizing, note both the prominent and the absent trends or patterns. For instance, the researcher may recall a study or method or theory from some years previously in a different field or discipline that could usefully be compared with or inform this literature. Naturally, the human brain is somewhat selective: the researcher is unlikely to have read the entirety of the literature across multiple disciplines, and so this comparison is limited by the researcher's own reading. Can the GAI program help here, perhaps suggesting the relevance of a study in a very different discipline? To give two real examples, when writing a paper (Liu <i>et al.</i>, <span>2023</span>) about the role of Chief Digital Officers in digital transformation, one of us employed punctuated equilibrium theory (PET), a theory first proposed in evolutionary biology (Eldredge and Gould, <span>1972</span>) and occasionally encountered in the management and information systems literatures (Gersick, <span>1991</span>; Wong and Davison, <span>2018</span>). In our discussion, we found that we needed to examine more closely the way PET had been applied in recent business research, and then to draw parallels between the focus on digital transformation and the evolutionary biology sources. The literature in the latter area is huge: perhaps GAI could have helped identify salient sources, in effect working as a research assistant? No doubt GAI could also synthesize those sources and even render their technical jargon into a form that an information science researcher could more readily comprehend. But what will this type of passive participation in the research process cause researchers to lose? As it turned out, in these examples we were not assisted by GAI. We simply Googled the relevant terms and quickly enough found exactly the paper that we needed to support and develop our arguments (see Liu <i>et al.</i>, <span>2023</span>). Similarly, when writing papers on the role of framing in IT-enabled sourcing, the other author could have benefitted immensely from GAI's ability to synthesize the huge corpus of scholarship on framing in the social psychology literature (Ravishankar, <span>2015</span>; Sandeep and Ravishankar, <span>2016</span>). However, we had to do the dejargonizing work ourselves, a process that admittedly took some time but was intellectually stimulating. Indeed, these examples neatly encapsulate many of the things that we appreciate about research, and we would be loath to relinquish them to GAI.</p><p>A second example where GAI may help out concerns data transcription. As researchers, we often collect data through interviews. Traditionally, we transcribe the interviews to text and where necessary translate them into the language that we wish to code them in, often English. GAI programs can certainly be used for interview transcription and translation. The GAI software can speed up the initial process, but its error rate is non-trivial; that is, careful manual checking of the transcription/translation is needed.
For instance, we recently used GAI to transcribe and then translate interviews from Chinese to English. As part of our preparation, we needed to inform the software that the source material was in Chinese (Mandarin), so that the Chinese language module would be applied. However, the audio included English words embedded in it; that is, the interviewees spoke both Chinese and English in their interview responses. This is technically referred to as code mixing, and is quite common among second-language users; that is, they use their first language for much of their communication but mix in words from second languages on an ad hoc basis, often because the second-language word expresses an idea or concept more succinctly than would the corresponding first-language word. Such code mixing exists in both spoken and written communication. The GAI transcription software accurately recognized and transcribed the Chinese words, but was unable to deal with the English words because it was not expecting them, so it rendered them by converting them phonetically. For instance, the abbreviation EDI (electronic data interchange) was rendered in Chinese characters not as the correct translation of EDI (電子數據交換) but as characters that approximated the sound of the letters E D I (一點愛). However, these inserted characters (which actually mean ‘a little love’) were totally inappropriate in the context and made no sense at all. Perhaps in the future, GAI programs could be instructed to look out for words in specific languages and so transcribe or translate appropriately (a brief illustrative sketch of such a vocabulary hint is given at the end of this section).</p><p>When it comes to the analysis of data, that is, the identification of themes and patterns, and the generation of theoretical arguments, our earlier comments about parsimony, interestingness, counterintuitiveness and provocativeness come to the fore. Although the efficiency of human analytical capacity may not be superior to GAI, given GAI's potential to analyse vast quantities of data quickly, to compare that data with past literature, and presumably to generate many possible options, we suggest that the effectiveness of human intuition is superior because of our ability to identify an interesting or provocative or counterintuitive angle that is worth exploring. Quite what is interesting or provocative or counterintuitive is hard to pin down, as it depends to a large extent on the subjective assessment of the researcher who is going to create an argument to justify that interesting, provocative or counterintuitive theoretical explanation. This human capability goes beyond creating new content using patterns in data, and it is central to theorizing: the researcher(s) need to draw on their innate imagination and creativity to craft that theoretical explanation. Could a GAI program be trained to identify potentially interesting, provocative or counterintuitive positions, and then to craft the supporting arguments? The answer must be yes, but how convincing they would be is moot. They might help the researcher to identify promising new lines of thought, or might stimulate further intellectual engagement, with the GAI program acting as an agent provocateur. A final point, which slightly contradicts our arguments so far, is worth making. It has been suggested that the apparent limits of GAI are really a reflection of users’ inability to ask the system the ‘right’ questions.
If GAI's intuition and reasoning powers appear unable to produce sophisticated theorizing, could it be that the issue is less about GAI capability and more about scholars’ relatively limited experience and knowledge of how to craft effective prompts? This line of thought opens the intriguing possibility that GAI is far more potent than we realize, and that it may indeed produce academically sound, rigorous, novel and elegant theorizing of significant value.</p>
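<p>To make the earlier point about language hints concrete, the short sketch below (our illustration, not part of the original transcription study) shows how an open-source speech-recognition model such as Whisper can be told both the dominant source language and a list of domain terms that are likely to be code-mixed into the audio, so that an abbreviation such as ‘EDI’ is less likely to be rendered phonetically. The file name and vocabulary list are hypothetical, and whether such a hint would have prevented the mistranscription described above remains an empirical question.</p>

```python
# Hedged sketch: transcribing code-mixed Chinese/English interview audio with the
# open-source Whisper library (pip package 'openai-whisper'). The initial prompt
# biases the decoder towards domain terms that interviewees may switch into English
# for, instead of approximating them phonetically in Chinese characters.
import whisper

# Illustrative vocabulary of English terms expected in otherwise Chinese speech.
DOMAIN_TERMS = "EDI, electronic data interchange, ERP, supply chain"

# Larger models generally cope better with mixed-language audio.
model = whisper.load_model("medium")

result = model.transcribe(
    "interview_01.wav",           # hypothetical audio file
    language="zh",                # declare the dominant source language
    initial_prompt=DOMAIN_TERMS,  # hint at terms likely to be code-mixed in
    task="transcribe",            # use task="translate" for direct English output
)

# The raw transcript still requires careful manual checking, as noted above.
print(result["text"])
```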
Theory-Driven Perspectives on Generative Artificial Intelligence in Business and Management

Shuang Ren, Riikka M. Sarala, Paul Hibbert

The advent of generative artificial intelligence (GAI) has sparked both enthusiasm and anxiety as different stakeholders grapple with the potential to reshape the business and management landscape. This dynamic discourse extends beyond GAI itself to encompass closely related innovations that have existed for some time, for example, machine learning, thereby creating a collective anticipation of opportunities and dilemmas surrounding the transformative or disruptive capacities of these emerging technologies. Recently, ChatGPT's ability to access information from the web in real time marks a significant advancement with profound implications for businesses. This feature is argued to enhance the model's capacity to provide up-to-date, contextually relevant information, enabling more dynamic customer interactions. For businesses, this could mean improvements in areas like market analysis, trend tracking, customer service and real-time data-driven problem-solving. However, this also raises concerns about the accuracy and reliability of the information sourced, given the dynamic and sometimes unverified nature of web content. Additionally, real-time web access might complicate data privacy and security, as the boundaries of GAI interactions extend into the vast and diverse Internet landscape. These factors necessitate a careful and responsible approach to evaluating and using advanced GAI capabilities in business and management contexts.

GAI is attracting much interest both in the academic and business practitioner literature. A quick search in Google Scholar, using the search terms ‘generative artificial intelligence’ and ‘business’ or ‘management’, yields approximately 1740 results. Within this extensive repository, scholars delve into diverse facets, exploring GAI's potential applications across various business and management functions, contemplating its implications for management educators and scrutinizing specific technological applications. Learned societies such as the British Academy of Management have also joined forces in leading the discussion on AI and digitalization in business and management academe. Meanwhile, practitioners and consultants alike (e.g. McKinsey & Company, PWC, World Economic Forum) have produced dedicated discussions, reports and forums to offer insights into the multifaceted impacts and considerations surrounding the integration of GAI in contemporary business and management practices. Table 1 illustrates some current applications of GAI as documented in the practitioner literature.

In an attempt to capture the new opportunities and challenges brought about by this technology and to hopefully find a way forward to guide research and practice, management journals have been swift to embrace the trend, introducing special issues on GAI. These issues aim to promote intellectual debate, for instance in relation to specific business disciplines (e.g. Benbya, Pachidi and Jarvenpaa, 2021) or organizational possibilities and pitfalls (Chalmers et al., 2023). However, amidst these commendable efforts that reflect a broad spectrum of perspectives, a critical examination of the burgeoning hype around GAI reveals a significant gap. Despite the proliferation of discussions from scholars, practitioners and the general public, the prevailing discourse is often speculative, lacking a robust theoretical foundation. This deficiency points to the challenges to existing theories in terms of their efficacy in explaining the unique demands created by GAI and indicates an urgent need for refining prior theories or even redeveloping new theories. There is a pressing need to move beyond the current wave of hype and explore the theoretical underpinnings of GAI and the dynamics of its potential impact, to ensure a more nuanced and informed discussion that can guide future research and application in this rapidly evolving area.

In this direction, the British Journal of Management (BJM) invited prominent scholars who serve as editors in leading business and management journals to weigh in and contribute with their diverse theoretical knowledge to this symposium paper on the emerging GAI phenomenon. This collaborative effort aims to advance the theorization of business and management research in relation to the intricacies associated with the impact of GAI by engaging in intensive discussions on how theoretical attempts can be made to make sense of the myths and truths around GAI.

The quest for theory, either seeking or refining, is a long-standing tradition in business and management research (e.g. Colquitt and Zapata-Phelan, 2007). While the seven pieces below place different elements under the spotlight of theoretical scrutiny, one common thread is the need to reconceptualize the relational realm of workplaces. The introduction of GAI in the workplace refines the norm of working together as a person-to-person group to working in a human–GAI group, with the latter illustrating three novel conceptual contributions in comparison to traditional understandings of the dynamics in the workplace.

Paolo Quattrone, Tammar Zilber, Renate Meyer

The etymology of words is often a source of insights to not only make sense of their meaning, but also speculate and imagine meanings that are not so obvious and thereby see the phenomena signalled by these words in new and surprising ways. The etymology of ‘artificial’ and ‘intelligence’ does not disappoint. ‘Artificial’ comes from ‘art’ and -fex ‘maker’, from facere ‘to do, make’. ‘Intelligence’ comes from inter ‘between’ and legere ‘choose, pick out, read’ but also ‘collect, gather’. There is enough in these etymologies to offer a few speculations and imagine the contours of generative artificial intelligence (GAI) and its possible futures.

The first of these is inspired by the craft of making and relates to the very function and use of AI. Most of the current fascinations with AI emphasize the predictive capacity of the various tools increasingly available and at easy disposal. Indeed, marketers know well in advance when we will need the next toothbrush, fuel our cars, buy new clothes, and so forth. The list is long. This feature of AI enchants us when, for instance, one thinks of a product and, invariably, an advertisement related to that product appears on our social media page. This quasi-magical predictive ability captures collective imaginations and draws upon very well-ingrained forms of knowledge production which presuppose that data techniques are there to represent the world, paradoxically, even when it is not there, as is the case with predictions. The issue is that the future is not out there; we do not know what future generations want from us and still, we are increasingly called to respond to their demands. Despite the availability of huge amounts of data points and intelligence, the future, even if proximal and mundane – as our examples above, always holds surprises. This means that AI may be useful not to predict the future, but to actually imagine and make it, as the -fex in ‘artificial’ reveals. This is the art in the ‘artificial’ and points to the possibility of conceiving AI as a compositional art, which helps us to create images of the future, sparks imagination and creativity and, hopefully, offers a space for speculation and reflection.

The word intelligence is our second cue, which stresses how ‘inter’ means to be and explore what is ‘in between’. As entrepreneurs are in between different ventures and explore what is not yet there (Hjorth and Holt, 2022), AI may be useful to probe grey areas between statuses and courses of action. It can be used to create scenarios, to make sure that the very same set of data produces alternative options that leave space for juggling among different decision-making criteria without reducing decisions about complex states of affairs to single criteria, most likely, value rather than values. This is how, for instance, one could wisely refrain from both apocalyptic and salvific scenarios that characterize the debate about AI. On the one hand, AI is seen as one of the worst possible menaces to humankind. It will take control of our minds and direct our habits, making us entirely dependent. Very likely, as the Luddites were proven wrong (but not completely) when looking at the first and second Industrial Revolutions, the pessimist views will prove wrong, but not completely, as it is clear that AI has agency (Latour, 1987) in informing our judgement and it does so through various forms of multimodal affects, that is, relying on our vast repertoire of senses, all mobilized by new forms of technology (e.g. think of smartwatches and how they influence our training habits). On the other hand, AI – similar to the first enterprise resource planning (ERP) systems – is seen as a panacea for many of our problems, diseases and grand challenges, from poverty to climate change, at least until one realizes that SAP does not stand for ‘Solves All Problems’ (Quattrone and Hopper, 2006). These dystopian and utopian attitudes will soon be debunked and leave room for more balanced views, which will acknowledge that AI is both a means to address wicked problems and a wicked problem itself, and, again, realize that wisdom is always to be found in the middle, the very same middle in between views. In this case, a more balanced in-between view is to realize that AI itself is a construction. Like all resources (Feldman and Worline, 2006) and technologies (Orlikowski, 2000), their function and effect are not pre-given but will be determined by our use thereof. For example, AI will be productive of ‘facts’ but of those that are reminiscent of the fact that facts are ‘made’, and that there is nothing less factual than a fact for, as the Romans knew so well (from factum, i.e. made), a fact is always constructed, and AI will be making them in huge quantities. This will be good to speculate, to foster imagination by having a huge amount of them available, but also potentially bad, as those who will own the ability to establish them as facts will magnify Foucault's adage that knowledge is power.

The third cue stands in the root leg-, which originates so many words that characterize our contemporary world, both academic and not, including legere (to read, but also to pick and choose), legare (to knot) and indeed a religion. As much as medieval classifying techniques used inventories of data to invent new solutions to old problems by recombining such data in novel forms, by choosing and picking data depending on the purpose of the calculation, to imagine the future and reimagine the past (Carruthers, 1998), AI will use even bigger inventories of data to generate inventions until we finally realize that to explore ‘what is not’ and could become is much more fruitful in imagining the future and the unprecedented than to define ‘what is’ (Quattrone, 2017). Only then will AI be truly generative. As was the case with Steve Ballmer, then CEO of Microsoft, when presented with the first iPhone. He exclaimed ‘who would want to pay five hundred dollars for a phone?’. He had not realized that to comprehend the power and complexities of technologies, it is better to think in terms of what they are not, rather than what they are. The cell phone is not a phone so much as it is a camera, a TV or cinema, a newspaper, a journal/calendar. Google begins a search with X, a negative, and then by creating correlations defines what Z could be (a phone may be a cinema) and what it could become (a meeting place). This move from the negative to the potential, from what is not to what can be, is the core of AI. AI can facilitate this exploration into what is not obvious and help us avoid taking things for granted. So, predicting how AI will develop and affect our lives is bound to fail as there are so many ways this can go and many unintended consequences. At this stage, it may be more fruitful not to predict the future but to explore how we try to make sense of the unknowable future in the present and which potential pathways we thereby open and which we close. Exploring the framing contests around AI, the actors involved and the various interests they attempt to serve may tell us more about ourselves than about AI – about our collective fantasies, fears and hopes that shape our present and future.

This brings us to whether and to what extent AI can inform human thinking and actions. That technologies influence our behaviour is now taken for granted, but given that this influence is not deterministic, and technologies have affordances that go beyond the intentions of the designers, what counts as agency and where to find it is possibly a black box that GAI can contribute to reopen. Since the invention of the printing press, and the debate between Roland Barthes and Michael Foucault, the notion of authorship has been questioned (Barthes, 1994; Foucault, 1980), along with authors’ authority and accountability. This is even truer now, when algorithms of various kinds already take decisions seemingly autonomously, from high-frequency trading in finance to digital twins in construction, and now also being able to write meaningful sentences that potentially disrupt not only research but also the outlets where these texts are typically published, that is, academic journals (Conroy, 2023). We are moving from a non-human ‘decision-maker’, be it a self-driving car or a rover autonomously exploring Mars, to non-human ‘makers’ tout court, with the difference that they have no responsibility and no accountability. And yet they influence the world and affect our personal, social and work lives. This has policy and theoretical implications. In policy terms, as much as the legal form of the corporation emerged to limit and regulate individual greed (Meyer, Leixnering and Veldman, 2022), we may witness the emergence of a new fictitious persona, this time even more virtual than the corporation, with no factories and employees, while still producing and distributing value through, and to, them, respectively. Designing anticipatory governance is even more intricate than with corporations, as these non-human ‘makers’ are even more dispersed and ephemeral, not to say slippery.

Theoretically, we may be at the edge of a revolution as important as the emergence of organization theory in the twentieth century. It was Herbert Simon (1969) who foresaw the need for a science of the artificial, that is, a science whose object was the organization of the production of artefacts of various kinds, and the need to make sense of the relationship between means and ends when new forms of bounded rationality informed decision-making. We would not be surprised if a 'New Science of the Artificial', this time devoted to the study of AI rationality, emerged in the twenty-first century. For sure, there will be a need to govern AI and to study how the governance and organization of AI intertwine with human rationality, possibly changing the contours of both.

Niall G. MacKenzie, Stephanie Decker, Christina Lubinski

Recently, generative artificial intelligence (GAI) has been subject to breathless treatments by academics and commentators alike, with claims of impending ubiquity (or doom, depending on your perspective) and life as we know it being upended, with millions of jobs destroyed (Eglash et al., 2020). Historians will, of course, point out that this is nothing new. Technological innovation and adoption have a long and generally well-researched history (Chandler, 2006; Scranton, 2018) and the same is true for resistance to these innovations (Juma, 2016; Mokyr, 1990; Thompson, 1963) and moral panics (Orben, 2020). What, if anything, does history have to tell us about GAI from a theoretical perspective other than ‘it's not new…’?

Good historical practice requires a dialogue between past and present (Wadhwani and Decker, 2017). Thus, if we want to understand GAI, we should understand the character of its development and the context in which it occurred and occurs. GAI's history was, and is, underpinned by progress in several other areas, including mathematics, information technology and telecommunications, warfare, mining and computing science, amongst many more (Buchanan, 2006; Chalmers, MacKenzie and Carter, 2021; Haenlein and Kaplan, 2019). This means that, despite GAI's rapid recent progress, it is still the result of iterative developments across various other sectors which enable(d) and facilitate(d) it. Running through this history are the imagined futures (Beckert, 2016) pushed by technologists, entrepreneurs, policymakers and futurists about what it could mean for society.

The value of historical thinking with regard to new technologies like GAI can be illustrated by considering the social imaginaries (Taylor, 2004) generated as part of the experience of previous technologies and their development and adoption. When a technology emerges, there may be fanfare about how it will change our lives for the better, and/or concerns about how it will disrupt settled societal arrangements (Budhwar, 2023). Technologies posited as ubiquitous, like GAI, are then often subject to competing claims: promises of imagined new futures in which existing ways of doing things are improved, better alternatives averred and economic and societal benefits promised, often accompanied by challenges and concerns regarding job destruction, societal upheaval and the threat of machines taking over. As a consequence, the imaginaries compete with each other and are generative in and of themselves, in that they create spaces of possibility that frame experiments of adoption (Wadhwani and Viebig, 2021). We can analyse past imaginaries of existing technologies to better understand what the emergence of new technologies, and the auguries posited with them, tell us about how societies adopt and adapt to the changes they bring. However, it is only in a post-hoc fashion that we can assess the efficacy of such claims. For example, recent work by business historians has considered how we understand the posited past futures of entrepreneurs across a range of technological and non-technological transformations (Lubinski et al., 2023), illustrating the value that historical work brings to theorizing societal change brought about by such actions.

The imaginaries, good and bad, associated with technologies like GAI play an important role in their legitimation and adoption, as well as in opposition to them. Given the contested nature of such societally important technologies, it is therefore important to recognize and consider the context in which new technologies such as GAI emerge, in terms of the promises associated with them, the societal effects they have and how they unfold, in order to provide appropriate theories and conceptual lenses with which to better understand them. When exploring the integration of new technologies in context, historical analysis of both the technology in question and other technologies surfaces nuances and insights that inform deeper theorizing about what a technology like GAI can mean to society. The different imaginaries associated with GAI have clear parallels with what has come before.

The Luddite riots of the early nineteenth century, in which textile workers sought to destroy machinery that was replacing their labour (Mokyr, 1990; Thompson, 1963), are probably the most famous negative societal response to the introduction of new technology, giving rise to the term 'Luddite', still commonly used today to describe someone opposed to technology. Contrastingly, the playwright Oscar Wilde posited in his 1891 essay 'The soul of man under socialism' that 'All unintellectual labour, all monotonous, dull labour, all labour that deals with dreadful things, and involves unpleasant conditions, must be done by machinery' (Wilde, 1891/2007). More recently, Lawrence Katz, a labour economist at Harvard, echoed Wilde's suggestion by predicting that 'information technology and robots will eliminate traditional jobs and make possible a new artisanal economy' (Thompson, 2015). Both Wilde's and Katz's comments gesture towards the imaginary of the benefits that technology and automation can bring in freeing up people's time to focus on more creative and rewarding work and pursuits, whilst the Luddites were expressing serious misgivings about the imaginary in which their jobs, livelihoods and way of life were under serious threat from mechanization.

Good and bad imaginaries are a necessary part of the development of all new technologies, but are only really understood post hoc and within context. As Mary O'Sullivan recently pointed out, based on her analysis of the emergence of steam engine use in Cornish copper mines in the eighteenth century, technology itself does not bring the general societal rewards suggested if the economic system in which it is developed remains controlled by small groups of powerful individuals (O'Sullivan, 2023). Similar concerns have been raised about GAI, with its principal proponents comprising a few global multinationals, as well as state-controlled interests such as the military, racing for dominance in the technology (Piper, 2023). The economic and political systems in which GAI is being developed are important to understand in relation to the imaginaries and promises being made concerning its value, and the warnings of its threats, particularly in light of the history of societally important technological shifts.

As scholars, we face ongoing challenges in explaining new, ubiquity-focused technologies and the accompanying imaginaries (which often constitute noise, albeit with kernels of truth or accuracy hidden therein). In this sense, when we seek to theorize about GAI and its potential impact on business and management (and vice versa), it is important to recognize that historical analysis does not foretell the future, but rather provides a critical understanding of how new innovations impact, and are impacted by, the societies in which they take place. Interrogating the contested imaginaries through the incorporation of historical thinking in our conceptualization of new technologies such as GAI will provide a deeper understanding of their impact, which in turn will allow us to better harness them for the greater good.

Olivia Brown, David A. Ellis, Julie Gore

Digital technologies continue to permeate society, not least in the way they allow individuals and teams to collaborate (Barley, Bechky and Milliken, 2017). For instance, innovations in communication have led to a shift towards virtual working and the proliferation of globally distributed corporate teams (see Gilson et al., 2015). As the volume and variety of data types that can be linked together have also accelerated, we have witnessed the emergence of large language models (LLMs), with the introduction of ChatGPT bringing them to the attention of a much wider audience. Broadly referred to as a form of generative artificial intelligence (GAI), ChatGPT allows individuals (or teams) to ask questions and quickly be provided with detailed, actionable, conversational responses. Sometimes deployed as virtual agents within customer service and information retrieval systems, these conversational systems can effectively become virtual team members.

The view of technology as a means of facilitating effective teamwork in organizations has now shifted towards questions of whether, and under what circumstances, we can consider GAI a 'team member' (Malone, 2018). Conceptualizing GAI in this manner suggests a trend away from viewing technology as a supportive tool that is adjunct to human decision-making (see Robert, 2019 for a discussion of this in healthcare) towards technology having a direct and intrinsic role within the decision-making and task-execution processes of teams (O'Neill et al., 2022). New questions are therefore being raised: do AI team members improve the performance of a team? Would organizations trust them and, if so, how much? To what degree are AI team members merely adjuncts to, or replacements for, real team members when it comes to decision-making? When a hybrid AI team completes a task, who takes responsibility for successes and failures? How can or should managers or leaders quantify accountability? Addressing these early questions suggests that it may soon be necessary to reframe and readdress the way in which teams are studied from theoretical, practical and ethical perspectives.

From a theoretical perspective, across the many definitions of teams developed within the management literature, one constant is that they are generally understood to comprise 'two or more individuals' (Kozlowski and Bell, 2003; Kozlowski and Ilgen, 2006; Salas et al., 1992). If we are indeed approaching the point at which AI will 'become an increasingly valuable team member' (Salmon et al., 2023, p. 371), we will need to reconsider our definitions of what constitutes a team (i.e. is one human individual sufficient when paired with an AI member?). In turn, we then need to assess how the theoretical frameworks and constructs that facilitate teamwork operate within the context of AI–human teams. For instance, in Ilgen et al.'s (2005) widely adopted input–mediator–output–input (IMOI) model of teamwork, the input element has typically focused on the composition of the team (i.e. individual characteristics), alongside the structure of the team and the environment in which it operates (see also Mathieu et al., 2008). As GAI is incorporated into organizational structure and design, it is pertinent to consider where (and indeed whether) it ought to be placed within this framework. Should GAI be considered part of the team composition, as an input factor, or is it best accounted for in the technological capabilities of the wider organizational context? The answer to this question will have important implications both for research designs and for the way in which the academic community relays findings to practitioners. Time will tell, and the answers to these questions will require further systematic thought; this may then warrant the start of a 'necessary scientific revolution' of the kind Kuhn advocated (Kuhn, 1962).

Alongside situating GAI's place within our theoretical framing, we must also consider how established team constructs operate within this new frontier of teaming. For example, interpersonal trust is a key component of the performance of highly functioning teams, especially where there is a high level of task interdependence between team members (De Jong, Dirks and Gillespie, 2016). Research has shown that communication behaviours (e.g. style, openness, responsiveness; see Henttonen and Blomqvist, 2005) influence the development of trust in virtual teams. This raises the question: in a wholly virtual interaction, how do we conceptualize and explore the development of interpersonal trust in AI–human teams? Is it possible that individuals will develop trust in AI in the same manner as they would in their human team members, and how might this then impact organizational performance and transform our understanding of what it means to interpersonally relate to technology?

These questions, amongst others, are documented in a growing body of literature (see O'Neill et al., 2022; Salmon et al., 2023; Seeber et al., 2020); however, at present, empirical research on GAI within the management literature remains limited (Dwivedi et al., 2023). It is now pertinent for management scholars to begin addressing these questions empirically, as we face rapidly evolving and potentially disruptive changes to the world of work not seen since the beginning of the digital age. For example, lab-based experiments could manipulate AI team members by changing their 'personality' or by adding them to, or removing them from, teams. Such effects could be further explored across contexts (e.g. where an AI team member is given a greater or reduced physical presence, such as via a robot) and across task types (creative versus procedural). Observational research and interview studies will also be valuable in providing an initial understanding of perceptions of GAI, alongside insights into how GAI is currently being incorporated into working structures and organizational teams, and where managers and employees perceive it might be incorporated in the future.

Alongside definitional issues and the need to re-examine how teamwork constructs operate within human–GAI teams, there are practical considerations posed by the introduction of GAI at work. As researchers, we already face a pressing challenge in connecting the myriad ways individuals can interact with networked technologies with their offline behaviours (Brown et al., 2022; Smith et al., 2023). At present, efforts to capture the interplay between actions taken online and actions taken in the real world have largely failed to uncover the nuanced behavioural and psychological mechanisms that might link the two (see Smith et al., 2023). For instance, while digital technologies such as Microsoft Teams, Slack and Zoom are now widespread across organizations, scholars have noted that our understanding of how teams engage with these technologies, and how they might improve or hamper team effectiveness, remains limited when compared with individual- and organizational-level impacts (Larson and DeChurch, 2020). The introduction of GAI may only serve to widen this gap in understanding, as the line between technologically driven and human-driven behaviour becomes increasingly blurred (see Dwivedi et al., 2023). To overcome this, management scholars must carefully consider the methods that will be required to study GAI in teams and be open to utilizing innovative practices from other disciplines (e.g. human factors, computer science, psychology). This will allow for the triangulation of findings from experimental and observational studies with data derived directly from the digital services that sit at the centre of modern working life.

Finally, at the forefront of our exploration of GAI in work teams, ethical considerations must be addressed. Indeed, there has been much conjecture about the perils of AI in organizational psychology and human resource management amongst both scholars and practitioners (CIPD, 2021, 2022a, 2022b, 2022c). Practitioner-centred outlets and public discourse focus heavily on risk mitigation, the implications for recruitment practices, legal and cross-country considerations, unwanted employee monitoring software and a somewhat Luddite philosophy surrounding the dark side of AI (Cheatham, Javanmardian and Samandari, 2019; Giermindl et al., 2022; McLean et al., 2023). Despite this, it remains plausible that in the coming years ChatGPT will become an everyday reality at work, used as frequently as virtual meeting platforms and email. While, for some, a team readily supported by GAI might be a welcome prospect, such a reality could also be perceived as a dystopian nightmare, with any number of ethical challenges (see Mozur, 2018). This applies equally to how we study any effects on people and organizations. In considering the ethical implications of GAI in teams, it is, of course, important to outline the recognized potential for societal benefits. For instance, many situations in which teams become unable to make decisions owing to increased cognitive load, especially in atypical, high-reliability organizations, could be mitigated with the use of AI (Brown, Power and Conchie, 2020, 2021). For example, an artificial agent with no cognitive limitations could remind a team that some solutions carry risks that members have failed to consider (Steyvers and Kumar, 2023).

On the other hand, ChatGPT and similar systems have been predominantly trained on English text, and such systems build in existing societal biases that are then further magnified (Dwivedi et al., 2023; Weinberger, 2019). Furthermore, whereas traditional software is developed by humans writing computer code that provides explicit step-by-step instructions, ChatGPT is built on a neural network trained on billions of words. Therefore, while there is some understanding of how such systems work at a technical level, there are also many gaps in existing knowledge that will not be filled overnight, generating issues relating to the transparency of these systems (Dwivedi et al., 2023; Robert, 2019). While there are no easy answers to the current (and yet-to-come) ethical concerns that accompany the study of AI in teams, there are uncontroversial processes by which we can perpetually operate and self-reflect. Our developing ability to make comprehensive assessments of digital, hybrid and traditional teams' performance carries with it weighty questions about how this power will be used and who will be using it. We must therefore consider how organizations (and indeed we, as researchers) might incorporate these tools into teamwork and research processes thoughtfully and humanely. Introducing interdisciplinary ethics committees that include a wider range of stakeholders (e.g. members of the public, technology developers) offers a potential solution here, and will help to engender responsible and innovative research into GAI within management studies.

Taking all of the above together, management scholars will need to become increasingly comfortable engaging with other disciplines, the public and policymakers, all of whom have unique perspectives (Kindon, Pain and Kesby, 2007), as part of an interdisciplinary endeavour to address the methodological and theoretical challenges that lie ahead. This involves accepting that, while management scholars studying GAI in teams are certainly not staring into the abyss, our current theories, methods, expertise and ethical explorations remain far from conclusive.

Daniel Muzio, James Faulconbridge

There has been a great deal of journalistic, practitioner and academic attention to the topic of artificial intelligence (AI) and the professions. Some authors (Armour and Sako, 2020; Faulconbridge, Sarwar and Spring, 2023; Goto, 2021; Pemer and Werr, 2023; Spring, Faulconbridge and Sarwar, 2022) have focused on how professional services firms introduce and use increasingly sophisticated technological solutions. Others (Leicht and Fennell, 2023; Sako, Qian and Attolini, 2022) have focused on the impact of AI on professional labour markets. Indeed, the consensus seems to be that, unlike previous technological revolutions, the current one will concern primarily professional and knowledge workers. However, given the prospect of wide-ranging change, surprisingly little attention has been paid to how AI may affect our theoretical understanding of professionalism as a distinct work organization principle. This is unfortunate, since the new AI revolution is likely to challenge some deeply held assumptions and understandings which underpin the sociology of the professions as a distinct body of knowledge (Abbott, 1988; Johnson, 1972; Larson, 1977; Muzio, Kirkpatrick and Aulakh, 2019). In this contribution, we focus on this issue and reflect on how AI might affect the way we understand professionalism.

Gazi Islam, Michelle Greenwood

The founding of the scholarly journal of the Royal Society of London, described by the historian Biagioli (as cited in Strathern, 2017), illustrates how scientific production rests on paradoxes and precarious relationships at a distance. Biagioli describes how the Royal Society became the locus of a plethora of scholarly correspondence from distant geographies, which it acknowledged in its title as 'giving some accompt (account)… of the ingenious in many parts of the world' (Royal Society of London, 1665, cover). In contrast to its sparsely attended, gentlemanly, in-person meetings, the broadening of the transactions through correspondence produced a publicly available, globalized scholarly record, but also led to a problem regarding the credibility of the interlocutors. The Society's solution was to develop an 'epistolary etiquette' by which the value of contributions could be assessed without direct personal relationships. The current system of scholarly peer review and journal publication descends from this system of partial connections and evaluation at a distance (Strathern, 2017).

The case of the Royal Society journal is interesting because it lays bare the relational infrastructure that undergirded the production of scholarship. Both collegial (because it required ongoing scholarly interaction and etiquette) and impersonal (because it required judgement at a distance between strangers), scholarly production involved a balancing act between proximity and distance, a system of partial relations that was itself emblematic of emerging modern conceptions of civil society (Strathern, 2020). Beyond flashes of creative insight or financial patronage – although both were present – it was this relational infrastructure that allowed the emergence of modern scholarship within newly forming national civil societies.

We do not argue that such epistolary conventions are the only (or best) way to produce scholarly advancement, but they are the structures we have inherited, and they are quickly being called into question by the emergence of recent technologies. One of these – not the only one – is generative artificial intelligence (GAI), or its recent incarnation in large language models (LLMs) like ChatGPT. LLMs promise to intervene in the scholarly process at virtually every point of knowledge production, from writing text and simulating data to 'peer' reviewing and editing. It is likely that the mix of human creation and mechanical supplement already woven into scholarly publishing will shift considerably. With what results?

Taking a relational perspective on knowledge production allows us to imagine how scholarly knowledge may be shaped by LLMs. Specifically, drawing on Strathern's (2000, 2004, 2020) work on relations and knowledge practices, we argue that networks of relationships (and the actors thereby constituted) change both the production of knowledge and the nature of its accountability. The result of embedding LLMs in these networks could be a radically reshaped research landscape, with unpredictable consequences for what counts as knowledge in our field.

Robert M. Davison, M. N. Ravishankar

Theorizing is a messy business. It involves multiple sources of evidence and multiple possible explanations. The sources of data may include interviews, observations, literature, documents and diaries. They may be coded in multiple (human) languages and in multiple registers from the formal to the informal, from the technical to the mundane. While there are clear guidelines for how researchers can approach theorizing (Gioia, Corley and Hamilton, 2013; Hassan, Lowry and Mathiassen, 2022; Martinsons, Davison and Ou, 2015; Weick, 1989), in practice, theorizing is an idiosyncratic activity that reflects the style, personality, values and culture of the theorizer. Thus, the most convincing theoretical explanation may be one that is more parsimonious, interesting, counterintuitive and/or provocative. Crafting that convincing theoretical explanation requires adherence to multiple standards (parsimony, interestingness, etc.), each of which competes with the others for attention.

Generative artificial intelligence (GAI) programs like ChatGPT have several useful attributes that might assist researchers as they theorize. For instance, GAI programs may be able to synthesize some of the literature or other documents. Such syntheses can be invaluable, as producing them often requires considerable time. But synthesizing the literature is not simply a mechanical task with a precise end state: the synthesis. It is also a way of understanding how prior research has been conceived, or not conceived. When reading a series of research papers, the perspicacious researcher will, in addition to synthesizing, note both the prominent and the absent trends or patterns. For instance, the researcher may recall a study or method or theory from some years previously in a different field or discipline that could usefully be compared with, or inform, this literature. Naturally, the human brain is somewhat selective: the researcher is unlikely to have read the entirety of the literature across multiple disciplines, and so this comparison is limited by the researcher's own reading. Can the GAI program help here, perhaps suggesting the relevance of a study in a very different discipline?

To give two real examples: when writing a paper (Liu et al., 2023) about the role of Chief Digital Officers in digital transformation, one of us employed punctuated equilibrium theory (PET), a theory first proposed in evolutionary biology (Eldredge and Gould, 1972) and occasionally encountered in the management and information systems literatures (Gersick, 1991; Wong and Davison, 2018). In our discussion, we found that we needed to examine more closely the way PET had been applied in recent business research, and then to draw parallels between the focus on digital transformation and the evolutionary biology sources. The literature in the latter area is huge: perhaps GAI could have helped identify salient sources, in effect working as a research assistant? No doubt GAI could also synthesize those sources and even render their technical jargon into a form that an information systems researcher could more readily comprehend. But what might this more passive form of participation in the research process cause researchers to lose? As it turned out, in these examples we were not assisted by GAI. We simply Googled the relevant terms and quickly enough found exactly the paper that we needed to support and develop our arguments (see Liu et al., 2023). Similarly, when writing papers on the role of framing in IT-enabled sourcing, the other author could have benefitted immensely from GAI's ability to synthesize the huge corpus of scholarship on framing in the social psychology literature (Ravishankar, 2015; Sandeep and Ravishankar, 2016). However, we had to do the dejargonizing work ourselves, a process that admittedly took some time but was intellectually stimulating. Indeed, these examples neatly encapsulate many of the things that we appreciate about research, and we would be loath to relinquish them to GAI.
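To illustrate what such assistance might look like in practice, the sketch below asks a large language model to summarize and dejargonize an abstract from an unfamiliar discipline for a management audience. It is a minimal sketch, not the approach used in the examples above: it assumes OpenAI's Python client is installed and an API key is configured, and the model name, prompt wording and file name are illustrative placeholders.

# A minimal sketch of using an LLM as a literature 'research assistant'.
# The model and file names below are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical file containing one abstract from another discipline
with open("abstract.txt", encoding="utf-8") as f:
    abstract = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You summarize academic abstracts for researchers in "
                    "another field, avoiding discipline-specific jargon."},
        {"role": "user",
         "content": "Summarize this abstract in three plain-language "
                    "sentences for a management researcher:\n\n" + abstract},
    ],
)

print(response.choices[0].message.content)

Any output of this kind would, of course, still require the careful human reading, checking and interpretive work described above.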

A second example where GAI may help concerns data transcription. As researchers, we often collect data through interviews. Traditionally, we transcribe the interviews to text and, where necessary, translate them into the language in which we wish to code them, often English. GAI programs can certainly be used for interview transcription and translation, and they can speed up the initial process, but the error rate is non-trivial; that is, careful manual checking of the transcription/translation is needed. For instance, we recently used GAI to transcribe and then translate interviews from Chinese to English. As part of our preparation, we needed to inform the software that the source material was in Chinese (Mandarin), so that the Chinese language module would be applied. However, the audio included English words embedded in it; that is, the interviewees spoke both Chinese and English in their interview responses. This is technically referred to as code mixing and is quite common among second-language users: they use their first language for much of their communication but mix in words from second languages on an ad hoc basis, often because the second-language word expresses an idea or concept more succinctly than the corresponding first-language word would. Such code mixing exists in both spoken and written communication. The GAI transcription software accurately recognized and transcribed the Chinese words, but was unable to deal with the English words because it was not expecting them, so it rendered them phonetically. For instance, the abbreviation EDI (electronic data interchange) was rendered in Chinese characters not as the correct translation of EDI (電子數據交換) but as characters that approximated the sound of the letters E D I (一點愛). These inserted characters (which actually mean 'a little love') were totally inappropriate in the context and made no sense at all. Perhaps in the future, GAI programs could be instructed to look out for words in specific languages and so transcribe or translate them appropriately.
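As a concrete sketch of the transcription step described above, the example below uses the open-source Whisper speech-recognition model. This is an illustrative choice under stated assumptions (Whisper itself, the model size and the file name are ours, not necessarily the software used for the interviews), and, as noted, declaring the source language does not by itself solve the code-mixing problem.

# Minimal transcription/translation sketch using the open-source Whisper
# library (an illustrative choice, not necessarily the authors' tool).
import whisper

model = whisper.load_model("small")  # model size chosen arbitrarily here

# Declaring the source language applies the Chinese decoding path; embedded
# English terms such as 'EDI' may still be rendered phonetically.
transcript = model.transcribe("interview_01.mp3", language="zh")
print(transcript["text"])

# A second pass can translate the same audio into English for coding;
# the output still requires careful manual checking against the original.
translation = model.transcribe("interview_01.mp3", language="zh", task="translate")
print(translation["text"])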

When it comes to the analysis of data, that is, the identification of themes and patterns and the generation of theoretical arguments, our earlier comments about parsimony, interestingness, counterintuitiveness and provocativeness come to the fore. Although the efficiency of human analytical capacity may not be superior to GAI's, given GAI's potential to analyse vast quantities of data quickly, to compare those data with past literature and presumably to generate many possible options, we suggest that the effectiveness of human intuition is superior because of our ability to identify an interesting or provocative or counterintuitive angle that is worth exploring. Quite what is interesting or provocative or counterintuitive is hard to pin down, as it depends to a large extent on the subjective assessment of the researcher who is going to create an argument to justify that interesting, provocative or counterintuitive theoretical explanation. This human capability goes beyond creating new content from patterns in data, and it is central to theorizing: the researcher(s) need to draw on their innate imagination and creativity to craft that theoretical explanation. Could a GAI program be trained to identify potentially interesting, provocative or counterintuitive positions, and then to craft the supporting arguments? The answer must be yes, but how convincing they would be is moot. They might help the researcher to identify promising new lines of thought, or might stimulate further intellectual engagement, with the GAI program acting as an agent provocateur. A final point, which slightly contradicts our arguments so far, is worth making. It has been suggested that the apparent limits of GAI really reflect the inability of users to ask the system the 'right' questions. If GAI's intuition and reasoning powers appear unable to produce sophisticated theorizing, could it be that the issue is less about GAI's capability and more about scholars' relatively limited experience and knowledge in employing the 'prompts'? This line of thought opens the intriguing possibility that GAI is far more potent than we realize, and that it may indeed produce academically sound, rigorous, novel and elegant theorizing of significant value.
