Membership categorisation, sociological description and role prompt engineering with ChatGPT

Discourse & Communication | Impact Factor 2.1 | CAS Region 2 (Literature) | JCR Q2 (Communication) | Pub Date: 2024-08-19 | DOI: 10.1177/17504813241267068
William Housley, Patrik Dahl
{"title":"Membership categorisation, sociological description and role prompt engineering with ChatGPT","authors":"William Housley, Patrik Dahl","doi":"10.1177/17504813241267068","DOIUrl":null,"url":null,"abstract":"Large Language Models (LLMs) and generative Artificial Intelligence (A.I.) have become the latest disruptive digital technologies to breach the dividing lines between scientific endeavour and public consciousness. LLMs such as ChatGPT are platformed through commercial providers such as OpenAI, which provide a conduit through which interaction is realised, via a series of exchanges in the form of written natural language text called ‘prompt engineering’. In this paper, we use Membership Categorisation Analysis to interrogate a collection of prompt engineering examples gathered from the endogenous ranking of prompting guides hosted on emerging generative AI community and practitioner-relevant social media. We show how both formal and vernacular ideas surrounding ‘natural’ sociological concepts are mobilised in order to configure LLMs for useful generative output. In addition, we identify some of the interactional limitations and affordances of using role prompt engineering for generating interactional stances with generative AI chatbots and (potentially) other formats. We conclude by reflecting the consequences of these everyday social-technical routines and the rise of ‘ethno-programming’ for generative AI that is realised through natural language and everyday sociological competencies.","PeriodicalId":46726,"journal":{"name":"Discourse & Communication","volume":"50 1","pages":""},"PeriodicalIF":2.1000,"publicationDate":"2024-08-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Discourse & Communication","FirstCategoryId":"98","ListUrlMain":"https://doi.org/10.1177/17504813241267068","RegionNum":2,"RegionCategory":"文学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"COMMUNICATION","Score":null,"Total":0}
Citations: 0

Abstract

Large Language Models (LLMs) and generative Artificial Intelligence (A.I.) have become the latest disruptive digital technologies to breach the dividing lines between scientific endeavour and public consciousness. LLMs such as ChatGPT are platformed through commercial providers such as OpenAI, which provide a conduit through which interaction is realised, via a series of exchanges in the form of written natural language text called ‘prompt engineering’. In this paper, we use Membership Categorisation Analysis to interrogate a collection of prompt engineering examples gathered from the endogenous ranking of prompting guides hosted on emerging generative AI community and practitioner-relevant social media. We show how both formal and vernacular ideas surrounding ‘natural’ sociological concepts are mobilised in order to configure LLMs for useful generative output. In addition, we identify some of the interactional limitations and affordances of using role prompt engineering for generating interactional stances with generative AI chatbots and (potentially) other formats. We conclude by reflecting on the consequences of these everyday socio-technical routines and the rise of ‘ethno-programming’ for generative AI that is realised through natural language and everyday sociological competencies.
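The ‘role prompt engineering’ the authors analyse consists of natural-language instructions that assign the chatbot a membership category (a role) before the user's request is made. As a purely illustrative sketch, not drawn from the article, such a role prompt can also be issued programmatically; the example below uses OpenAI's Python client, and the model name, the ‘careers advisor’ role text and the user query are all assumptions introduced here for illustration.

```python
# Illustrative sketch only: a "role prompt" assigns the model a membership
# category (here, a careers advisor) before the user's request is sent.
# The model name, role wording and query are assumptions, not from the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model; any chat-capable model would do
    messages=[
        # The system message does the categorisation work: it specifies the
        # category the model should speak from and its associated predicates.
        {"role": "system",
         "content": ("You are an experienced careers advisor. Give practical, "
                     "non-judgemental guidance and ask clarifying questions.")},
        {"role": "user",
         "content": ("I want to move from retail work into software testing. "
                     "Where should I start?")},
    ],
)

print(response.choices[0].message.content)
```

Read in the paper's terms, the system message is where the everyday sociological work happens: the assigned category and its predicates shape the interactional stance of every subsequent model turn.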
Source journal: Discourse & Communication
CiteScore: 3.30
Self-citation rate: 5.30%
Articles published: 41
Journal description: Discourse & Communication is an international, peer-reviewed journal that publishes articles that pay specific attention to the qualitative, discourse analytical approach to issues in communication research. Besides the classical social scientific methods in communication research, such as content analysis and frame analysis, a more explicit study of the structures of discourse (text, talk, images or multimedia messages) allows unprecedented empirical insights into the many phenomena of communication. Contemporary discourse study is not limited to the account of "texts" or "conversation" alone, but has extended its field to the study of the cognitive, interactional, social and cultural aspects of text and talk.
Latest articles in this journal:
- Interactive probes: Towards action-level evaluation for dialogue systems
- Book review: Hiroki Nomoto and Elin McCready, Discourse Particles in Asian Languages Volume II
- ‘Have you insured yourself in any way?’ Salespersons’ mapping questions and their follow-ups in insurance sales negotiations
- Book review: Zsófia Demjén, Sarah Atkins and Elena Semino, Researching Language and Health: A Student Guide
- Book review: Claudio Scarvaglieri, Eva-Maria Graf, and Thomas Spranz-Fogasy (eds), Relationships in Organized Helping: Analyzing Interaction in Psychotherapy, Medical Encounters, Coaching and in Social Media