Can Deniz Deveci, Jason Joe Baker, Binyamin Sikander, Jacob Rosenberg
{"title":"ChatGPT-4和人类写的求职信的比较。","authors":"Can Deniz Deveci, Jason Joe Baker, Binyamin Sikander, Jacob Rosenberg","doi":"","DOIUrl":null,"url":null,"abstract":"<p><strong>Introduction: </strong>Artificial intelligence has started to become a part of scientific studies and may help researchers with a wide range of tasks. However, no scientific studies have been published on its ussefulness in writing cover letters for scientific articles. This study aimed to determine whether Generative Pre-Trained Transformer (GPT)-4 is as good as humans in writing cover letters for scientific papers.</p><p><strong>Methods: </strong>In this randomised non-inferiority study, we included two parallel arms consisting of cover letters written by humans and by GPT-4. Each arm had 18 cover letters, which were assessed by three different blinded assessors. The assessors completed a questionnaire in which they had to assess the cover letters with respect to impression, readability, criteria satisfaction, and degree of detail. Subsequently, we performed readability tests with Lix score and Flesch Kincaid grade level.</p><p><strong>Results: </strong>No significant or relevant difference was found on any parameter. A total of 61% of the blinded assessors guessed correctly as to whether the cover letter was written by GPT-4 or a human. GPT-4 had a higher score according to our objective readability tests. Nevertheless, it performed better than human writing on readability in the subjective assessments.</p><p><strong>Conclusion: </strong>We found that GPT-4 was non-inferior at writing cover letters compared to humans. 
This may be used to streamline cover letters for researchers, providing an equal chance to all researchers for advancement to peer-review.</p><p><strong>Funding: </strong>This study received no financial support from external sources.</p><p><strong>Trial registration: </strong>This study was not registered before the study commenced.</p>","PeriodicalId":11119,"journal":{"name":"Danish medical journal","volume":"70 12","pages":""},"PeriodicalIF":1.0000,"publicationDate":"2023-11-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"A comparison of cover letters written by ChatGPT-4 or humans.\",\"authors\":\"Can Deniz Deveci, Jason Joe Baker, Binyamin Sikander, Jacob Rosenberg\",\"doi\":\"\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<p><strong>Introduction: </strong>Artificial intelligence has started to become a part of scientific studies and may help researchers with a wide range of tasks. However, no scientific studies have been published on its ussefulness in writing cover letters for scientific articles. This study aimed to determine whether Generative Pre-Trained Transformer (GPT)-4 is as good as humans in writing cover letters for scientific papers.</p><p><strong>Methods: </strong>In this randomised non-inferiority study, we included two parallel arms consisting of cover letters written by humans and by GPT-4. Each arm had 18 cover letters, which were assessed by three different blinded assessors. The assessors completed a questionnaire in which they had to assess the cover letters with respect to impression, readability, criteria satisfaction, and degree of detail. Subsequently, we performed readability tests with Lix score and Flesch Kincaid grade level.</p><p><strong>Results: </strong>No significant or relevant difference was found on any parameter. 
A total of 61% of the blinded assessors guessed correctly as to whether the cover letter was written by GPT-4 or a human. GPT-4 had a higher score according to our objective readability tests. Nevertheless, it performed better than human writing on readability in the subjective assessments.</p><p><strong>Conclusion: </strong>We found that GPT-4 was non-inferior at writing cover letters compared to humans. This may be used to streamline cover letters for researchers, providing an equal chance to all researchers for advancement to peer-review.</p><p><strong>Funding: </strong>This study received no financial support from external sources.</p><p><strong>Trial registration: </strong>This study was not registered before the study commenced.</p>\",\"PeriodicalId\":11119,\"journal\":{\"name\":\"Danish medical journal\",\"volume\":\"70 12\",\"pages\":\"\"},\"PeriodicalIF\":1.0000,\"publicationDate\":\"2023-11-23\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Danish medical journal\",\"FirstCategoryId\":\"3\",\"ListUrlMain\":\"\",\"RegionNum\":4,\"RegionCategory\":\"医学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q3\",\"JCRName\":\"MEDICINE, GENERAL & INTERNAL\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Danish medical journal","FirstCategoryId":"3","ListUrlMain":"","RegionNum":4,"RegionCategory":"医学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q3","JCRName":"MEDICINE, GENERAL & INTERNAL","Score":null,"Total":0}
A comparison of cover letters written by ChatGPT-4 or humans.
Introduction: Artificial intelligence has started to become a part of scientific studies and may help researchers with a wide range of tasks. However, no scientific studies have been published on its usefulness in writing cover letters for scientific articles. This study aimed to determine whether Generative Pre-Trained Transformer (GPT)-4 is as good as humans at writing cover letters for scientific papers.
Methods: In this randomised non-inferiority study, we included two parallel arms consisting of cover letters written by humans and by GPT-4. Each arm had 18 cover letters, which were assessed by three different blinded assessors. The assessors completed a questionnaire in which they had to assess the cover letters with respect to impression, readability, criteria satisfaction, and degree of detail. Subsequently, we performed readability tests with the Lix score and the Flesch-Kincaid grade level.
Results: No significant or relevant difference was found on any parameter. A total of 61% of the blinded assessors guessed correctly as to whether the cover letter was written by GPT-4 or a human. GPT-4 scored higher on the objective readability tests, indicating more difficult text. Nevertheless, it performed better than human writing on readability in the subjective assessments.
Conclusion: We found that GPT-4 was non-inferior to humans at writing cover letters. This may be used to streamline cover-letter writing for researchers, giving all researchers an equal chance of advancing to peer review.
Funding: This study received no financial support from external sources.
Trial registration: This study was not registered before the study commenced.
Journal description:
The Danish Medical Journal (DMJ) is a general medical journal. The journal publishes original research in English, conducted in or in relation to the Danish health-care system. When writing for the Danish Medical Journal, please remember the target audience, which is the general reader. This means that the research area should be relevant to many readers, and the paper should be presented in a way that most readers will understand the content.
DMJ will publish the following articles:
• Original articles
• Protocol articles from large randomized clinical trials
• Systematic reviews and meta-analyses
• PhD theses from Danish faculties of health sciences
• DMSc theses from Danish faculties of health sciences.