Ico Maly is associate professor of Digital Culture Studies at Tilburg University, The Netherlands.
In her opening essay, Helen Kelly-Holmes asks herself and us ‘how Artificial intelligence will change the way that sociolinguists carry out research’. Instead of giving a clear-cut answer to that question, I would like to take one step back. Before we can think about the concrete ways sociolinguists can use artificial intelligence (AI), we would do well to first develop a sociolinguistic theory of AI. AI is not a neutral tool: it has its own epistemology, produces specific discourses and changes sociolinguistic environments. I do not pretend to have such a full-blown sociolinguistic theory of AI, but I would like to use this opportunity to give a first preliminary sketch of what such a theorization could look like.
Starting with the latter, it strikes me how Kelly-Holmes downplays her own work and states that ‘the writing (of ChatGPT) is substantially more correct than my own rambling’ (Kelly-Holmes, 2024). She is clearly not alone in such an assessment of AI. Most users of ChatGPT are equally impressed, which explains the success of the app among our students and the world at large. By February 2023, the app had reached 100 million users, and by 2024 that number had risen to 180 million. ChatGPT is now so omnipresent that we have to understand it as a cultural force.
The discourses ChatGPT produces are being used in a vast number of fields: journalism, law, academia, marketing, politics and digital culture in general. What is more, the app is now also embedded in social media like Instagram. Other companies have implemented their own large language models (LLMs) in search engines, smartphones and social media platforms. AI generates language and is used to moderate language, to help you search, to give you a more personalized digital experience and much more. AI has become a central social structure (re)producing and policing language, and in that sense it gives direction to discourse and culture.
It is exactly this success that warrants sociolinguistic attention, as it has effects on individuals, society and language. At the most micro-level, understanding the relation between AI-produced language and society means studying it as interaction. When we do that, we see that users are entering a specific type of communicative relation with specific communicative norms. One entity—the human—takes up the role of the one asking for information, placing the other—the AI system—in a position of knowledge. This framing of the AI bot as the producer of knowledge is a cultural format. It is steered by the example prompts on the ChatGPT website, but also by the many social media pages and YouTube videos dedicated to developing the ‘correct prompts’. The other side of the interaction—the chatbot—is programmed to respond in particular ways. This specifically programmed relation is inherent in the design of the interaction.

