This paper acts as a counterweight to the recent swell of optimism among some accounting scholars about how the performative potential of environmental accounting might be harnessed to address the planetary crisis. Mimicking the style of Ruth Hines’ seminal contribution, ‘Financial accounting: In communicating reality, we construct reality’ (1988), the paper questions whether environmental accounting can play the transformational role that many envisage for it.
This research explores two-way, face-to-face interactions between directors and shareholders at the shareholder meetings of Cecil Rhodes’s British South Africa Company, as reflected in verbatim minutes of those meetings. Shareholder meetings are traditionally viewed as forums for accountability. Alternatively, directors may engage in manipulative game-playing, skillfully maneuvering shareholders politically and thereby compromising accountability. The research analyzes unique data, comprising 25 full sets of verbatim minutes from the 29 shareholder meetings held over the period 1895–1925, using manual close-reading interpretive content analysis. Our unit of analysis is the interaction between directors and supportive/approving and dissenting shareholders. Our research question asks how directors and shareholders interacted at these meetings. We develop a typology of 16 types of interaction that directors and shareholders used in their exchanges. We complement our analysis with illustrations of these interaction types drawn from the verbatim minutes. We demonstrate how powerful directors mobilized supportive/approving shareholders to quash the voices of dissenting shareholders. Our findings have implications for how modern-day shareholder meetings are conducted.
This paper compares the extent of engagement between the Ghanaian government and the nation’s oil and gas firms with the nature of the financial accountability offered by each of these parties to the citizenry. The findings indicate that whilst the Ghanaian government and the nation’s oil firms pay (at best) cursory regard to societal needs for information and engagement, an effective and unapologetic form of accountability discharge exists when the two parties interact with each other, suggesting the existence of an exclusionary hegemony. We mobilize elements of the work of Jessop (2003a, 2003b), Joseph (2002, 2003) and Andrew and Baker (2020) to contextualise the evidence around recent theoretical debates on selectivity in information flows and accountability discharge in developing-nation settings.
This article draws on the Foucauldian concept of “discursive formation” to conceptualize the potential influence of generative artificial intelligence (GAI) on management. It shows how ChatGPT, a typical GAI, can affect practices, decision-making responsibility and the management disciplines through a dual discourse. The first discourse emanates from technological solutionism, the belief that any problem can be solved with the assistance of technology (the technosolutionist discourse); the second concerns the utterances generated by ChatGPT itself, which are shaped by various algorithmic, epistemic and linguistic influences (the generative discourse). In simpler terms, what ChatGPT can do to management appears to depend on “what is said about it” and “what it says”. In contrast to the existing literature on ChatGPT’s potential influence on organizations, this article takes a non-normative, discursive approach to reveal the subtler influences of generative artificial intelligence and to highlight the individual and organizational responsibilities of actors interacting with these two discourses. The conclusions may be of interest to management readers in general, and of particular interest to the accounting profession, as the conceptualization draws largely on examples taken from accounting, given the close link between the accounting profession and information technologies.
The European Commission (EC) conducted a major reform of statutory audit between 2010 and 2014, with the primary aim of improving competition in the market. The Big 4 firms, which were targeted by this reform, strongly opposed it. The present study employs the typology of Oliver (1991) and the “presentation of self” theory developed by Goffman (1956) to identify the strategic responses employed, frontstage and backstage, by the regulator (the EC) and the regulatees (the Big 4) to influence the outcome of the reform. The results reveal that passive strategic responses tend to take place frontstage, while active and aggressive strategies generally take place backstage. This work also reveals the incursion of the neoliberal model into the European regulatory arena, and allows us to characterize the Big 4 as cynical actors defending a commercial logic.
This article advances understanding of how tax administration influences social justice. Critical accounting research is paying increasing attention to social justice, but conceptualisations and empirical studies of tax administration are scarce. Drawing on Bourdieu’s social theory, we analyse how accounting technologies exercise relational power that reproduces or worsens socio-economic inequalities. Our critical ethnography of the Tax Credits (TC) system in the United Kingdom identifies four original practices through which claimants interact with accounting technologies. We reveal how claimants utilise certain types of capital to play the game of the TC system and reproduce their habitus and position of powerlessness in the field. While some claimants manage to play the game successfully and improve their position, most end up ‘giving in’ to living with financial and emotional hardship and accepting the relational power of the field. We conclude by developing a research and reform agenda for analysing and changing the relational power of accounting technologies in tax administration towards social justice.
Capitalizing on what we currently know about artificial intelligence (AI), the editorial of this special issue, entitled “Artificial Intelligence in the Spotlight”, adds our voice to a call to order in the face of the unbridled enthusiasm we often encounter regarding the benefits of AI. In short, we maintain that there is a crucial need for skepticism about the all-out colonization project vigorously pursued by AI and its sustaining infrastructure. We draw on our own analysis and that of the contributors to this special issue to consider what we see as a bold agenda for colonizing our communities, our ways of doing, and our minds – so that we become fundamentally dependent on technologies whose reliability is dubious and whose algorithms are secretly maintained behind the safety of corporate walls. Our thesis is that the cacophony of aberrations, disorder, and worries that emerge in the wake of AI can be meaningfully viewed as a juggernaut, an inexorable force ready to unsettle all things in its path. The juggernaut metaphor constitutes our way of putting “artificial intelligence in the spotlight”. We call for researchers from all disciplines to engage in the study of the AI juggernaut and to speak out as much as they can, in public and in academic spheres, about its dangers.