This study undertakes a hermeneutic analysis of the growing literature on algorithmic management. Algorithmic management, also referred to as algorithmic work, is a subset of algorithmic decision-making. To date, the underlying norms and assumptions of researchers, and how these assumptions shape understandings of algorithmic management, have been under-investigated. Using a hermeneutic methodology, we uncover four different onto-epistemological positions in the literature, based on two overarching worldviews. The first is techno-human dualism, rooted in dualist ontological assumptions that foreground entities. The second is techno-human entanglement, grounded in relational perspectives that view the social and material as inseparable. These worldviews comprise four meta-understandings that form our framework: (1) the ‘techno-centric’ view gives primacy to the technology, with humans seen as a secondary feature; (2) the ‘techno-mediated control’ view focuses on managerial power, with technology as a tool for control and the organization of labor; (3) the ‘techno-human enactment’ view focuses on the performative aspects of algorithmic management; and (4) the ‘techno-human being’ view explores how algorithmic management affects identity (re)formation and meaning-making. We demonstrate how onto-epistemological assumptions configure interpretations of algorithmic management, focusing on algorithms (a foundational and integral characteristic), organizational control (a core function), and human-in-the-loop configurations (a possible safeguard). By surfacing the plurality of assumptions in algorithmic management research, we seek to foster more engaged scholarship and encourage the virtue of choosing a research position rather than inheriting it.
This study offers a nuanced exploration of the intersection of expertise and AI-powered decision-making, particularly within the realm of high-volume recruitment. It draws on theory from the evolving discourse on relational expertise and human-AI interaction to examine how experts navigate, interpret, and sometimes challenge AI tool outputs. Through in-depth interviews with 42 recruitment experts, the study focuses on the concept of algorithmic folk theories, the interpretive frameworks through which experts engage with algorithmic recommendations. Central to the study's findings is the range of perceptions among experts toward AI technologies, viewed through the lens of expert-AI pairings. These perceptions oscillate between viewing AI as a complementary ally or a challenging rival, and are significantly shaped by organizational contexts. Factors influencing these views include oversight levels, trust in AI outputs, and the prioritization of AI tools in decision-making processes. The findings also reveal instances of algoactivism, in which experts actively resist or work around AI outputs to align with their professional judgment. Algorithmic folk theories thus emerge as interpretive frameworks that are informed by and situated within organizational structures.
Theoretically, this study deepens our understanding of the relational dynamics between human expertise and AI systems in professional settings. It highlights the critical role of context-specific factors in shaping these interactions and offers new perspectives on the complexities of integrating AI into workplace decision-making. I discuss the findings in relation to the broader discourse on artificial intelligence use at work. Finally, I offer theoretical and practical considerations for future research and practice.