Ronald A. Beghetto, Wendy Ross, Maciej Karwowski, Vlad P. Glăveanu
{"title":"与人工智能合作开发仪器:可能性与陷阱","authors":"Ronald A. Beghetto , Wendy Ross , Maciej Karwowski , Vlad P. Glăveanu","doi":"10.1016/j.newideapsych.2024.101121","DOIUrl":null,"url":null,"abstract":"<div><p>Recent advances in generative artificial intelligence (AI), specifically large language models (LLMs), provide new possibilities for researchers to partner with AI when developing and refining psychological instruments. In this paper we demonstrate how LLMs, such as OpenAI's ChatGPT 4 model, might be used to support the development of new psychometric scales. Partnering with AI for the purpose of developing and refining instruments, however, comes with its share of potential pitfalls. We thereby discuss throughout the paper that instrument development and refinement start and end with human judgment and expertise. We open with two use-cases that describe how we used LLMs in the development and refinement of two new psychological instruments. Next, we discuss possibilities for where and how researchers can use LLMs in the process of instrument development more broadly, including considerations for maximizing the benefits of LLMs and addressing the potential hazards when working with LLMs. Finally, we close by offering initial suggestions for psychology researchers interested in partnering with LLMs in this capacity.</p></div>","PeriodicalId":51556,"journal":{"name":"New Ideas in Psychology","volume":"76 ","pages":"Article 101121"},"PeriodicalIF":2.3000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Partnering with AI for instrument development: Possibilities and pitfalls\",\"authors\":\"Ronald A. Beghetto , Wendy Ross , Maciej Karwowski , Vlad P. Glăveanu\",\"doi\":\"10.1016/j.newideapsych.2024.101121\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<div><p>Recent advances in generative artificial intelligence (AI), specifically large language models (LLMs), provide new possibilities for researchers to partner with AI when developing and refining psychological instruments. In this paper we demonstrate how LLMs, such as OpenAI's ChatGPT 4 model, might be used to support the development of new psychometric scales. Partnering with AI for the purpose of developing and refining instruments, however, comes with its share of potential pitfalls. We thereby discuss throughout the paper that instrument development and refinement start and end with human judgment and expertise. We open with two use-cases that describe how we used LLMs in the development and refinement of two new psychological instruments. Next, we discuss possibilities for where and how researchers can use LLMs in the process of instrument development more broadly, including considerations for maximizing the benefits of LLMs and addressing the potential hazards when working with LLMs. 
Finally, we close by offering initial suggestions for psychology researchers interested in partnering with LLMs in this capacity.</p></div>\",\"PeriodicalId\":51556,\"journal\":{\"name\":\"New Ideas in Psychology\",\"volume\":\"76 \",\"pages\":\"Article 101121\"},\"PeriodicalIF\":2.3000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"New Ideas in Psychology\",\"FirstCategoryId\":\"102\",\"ListUrlMain\":\"https://www.sciencedirect.com/science/article/pii/S0732118X24000497\",\"RegionNum\":3,\"RegionCategory\":\"心理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q2\",\"JCRName\":\"PSYCHOLOGY, EXPERIMENTAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"New Ideas in Psychology","FirstCategoryId":"102","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S0732118X24000497","RegionNum":3,"RegionCategory":"心理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q2","JCRName":"PSYCHOLOGY, EXPERIMENTAL","Score":null,"Total":0}
Partnering with AI for instrument development: Possibilities and pitfalls
Recent advances in generative artificial intelligence (AI), specifically large language models (LLMs), provide new possibilities for researchers to partner with AI when developing and refining psychological instruments. In this paper, we demonstrate how LLMs, such as OpenAI's ChatGPT 4 model, might be used to support the development of new psychometric scales. Partnering with AI for the purpose of developing and refining instruments, however, comes with its share of potential pitfalls. We therefore emphasize throughout the paper that instrument development and refinement start and end with human judgment and expertise. We open with two use cases that describe how we used LLMs in the development and refinement of two new psychological instruments. Next, we discuss where and how researchers can use LLMs in the process of instrument development more broadly, including considerations for maximizing the benefits of LLMs and addressing the potential hazards of working with them. Finally, we close by offering initial suggestions for psychology researchers interested in partnering with LLMs in this capacity.
Journal introduction:
New Ideas in Psychology is a journal for theoretical psychology in its broadest sense. We are looking for new and seminal ideas, from within Psychology and from other fields that have something to bring to Psychology. We welcome presentations and criticisms of theory, of background metaphysics, and of fundamental issues of method, both empirical and conceptual. We put special emphasis on the need for informed discussion of psychological theories to be interdisciplinary. Empirical papers are accepted at New Ideas in Psychology, but only as long as they focus on conceptual issues and are theoretically creative. We are also open to comments or debate, interviews, and book reviews.