Akiko Hanai, Tetsuo Ishikawa, Shoichiro Kawauchi, Yuta Iida, Eiryo Kawakami
{"title":"生成式人工智能与非药物偏差:癌症患者性健康沟通实验研究","authors":"Akiko Hanai, Tetsuo Ishikawa, Shoichiro Kawauchi, Yuta Iida, Eiryo Kawakami","doi":"10.1136/bmjhci-2023-100924","DOIUrl":null,"url":null,"abstract":"Objectives The objective of this study was to explore the feature of generative artificial intelligence (AI) in asking sexual health among cancer survivors, which are often challenging for patients to discuss. Methods We employed the Generative Pre-trained Transformer-3.5 (GPT) as the generative AI platform and used DocsBot for citation retrieval (June 2023). A structured prompt was devised to generate 100 questions from the AI, based on epidemiological survey data regarding sexual difficulties among cancer survivors. These questions were submitted to Bot1 (standard GPT) and Bot2 (sourced from two clinical guidelines). Results No censorship of sexual expressions or medical terms occurred. Despite the lack of reflection on guideline recommendations, ‘consultation’ was significantly more prevalent in both bots’ responses compared with pharmacological interventions, with ORs of 47.3 (p<0.001) in Bot1 and 97.2 (p<0.001) in Bot2. Discussion Generative AI can serve to provide health information on sensitive topics such as sexual health, despite the potential for policy-restricted content. Responses were biased towards non-pharmacological interventions, which is probably due to a GPT model designed with the ’s prohibition policy on replying to medical topics. 
This shift warrants attention as it could potentially trigger patients’ expectations for non-pharmacological interventions.","PeriodicalId":9050,"journal":{"name":"BMJ Health & Care Informatics","volume":"213 1","pages":""},"PeriodicalIF":4.1000,"publicationDate":"2024-04-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Generative artificial intelligence and non-pharmacological bias: an experimental study on cancer patient sexual health communications\",\"authors\":\"Akiko Hanai, Tetsuo Ishikawa, Shoichiro Kawauchi, Yuta Iida, Eiryo Kawakami\",\"doi\":\"10.1136/bmjhci-2023-100924\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Objectives The objective of this study was to explore the feature of generative artificial intelligence (AI) in asking sexual health among cancer survivors, which are often challenging for patients to discuss. Methods We employed the Generative Pre-trained Transformer-3.5 (GPT) as the generative AI platform and used DocsBot for citation retrieval (June 2023). A structured prompt was devised to generate 100 questions from the AI, based on epidemiological survey data regarding sexual difficulties among cancer survivors. These questions were submitted to Bot1 (standard GPT) and Bot2 (sourced from two clinical guidelines). Results No censorship of sexual expressions or medical terms occurred. Despite the lack of reflection on guideline recommendations, ‘consultation’ was significantly more prevalent in both bots’ responses compared with pharmacological interventions, with ORs of 47.3 (p<0.001) in Bot1 and 97.2 (p<0.001) in Bot2. Discussion Generative AI can serve to provide health information on sensitive topics such as sexual health, despite the potential for policy-restricted content. 
Responses were biased towards non-pharmacological interventions, which is probably due to a GPT model designed with the ’s prohibition policy on replying to medical topics. This shift warrants attention as it could potentially trigger patients’ expectations for non-pharmacological interventions.\",\"PeriodicalId\":9050,\"journal\":{\"name\":\"BMJ Health & Care Informatics\",\"volume\":\"213 1\",\"pages\":\"\"},\"PeriodicalIF\":4.1000,\"publicationDate\":\"2024-04-01\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"BMJ Health & Care Informatics\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/10.1136/bmjhci-2023-100924\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"HEALTH CARE SCIENCES & SERVICES\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"BMJ Health & Care Informatics","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/10.1136/bmjhci-2023-100924","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"HEALTH CARE SCIENCES & SERVICES","Score":null,"Total":0}
Generative artificial intelligence and non-pharmacological bias: an experimental study on cancer patient sexual health communications
Objectives: The objective of this study was to explore the characteristics of generative artificial intelligence (AI) when asked about sexual health among cancer survivors, a topic that is often challenging for patients to discuss.

Methods: We employed Generative Pre-trained Transformer-3.5 (GPT-3.5) as the generative AI platform and used DocsBot for citation retrieval (June 2023). A structured prompt was devised to generate 100 questions from the AI, based on epidemiological survey data regarding sexual difficulties among cancer survivors. These questions were submitted to Bot1 (standard GPT) and Bot2 (grounded in two clinical guidelines).

Results: No censorship of sexual expressions or medical terms occurred. Although guideline recommendations were not reflected in the responses, 'consultation' was significantly more prevalent in both bots' responses than pharmacological interventions, with odds ratios of 47.3 (p<0.001) for Bot1 and 97.2 (p<0.001) for Bot2.

Discussion: Generative AI can serve to provide health information on sensitive topics such as sexual health, despite the potential for policy-restricted content. Responses were biased towards non-pharmacological interventions, probably because the GPT model was designed under a policy restricting replies on medical topics. This bias warrants attention, as it could raise patients' expectations for non-pharmacological interventions.
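As a minimal sketch of the kind of comparison reported above, the odds ratio for a 2x2 table can be computed directly from its cell counts. The counts below are hypothetical (chosen only so the ratio lands near Bot1's reported 47.3); they are not the study's data.

```python
# Illustrative sketch (hypothetical counts, not the study's data):
# the sample odds ratio contrasting how often responses mention a
# non-pharmacological intervention ('consultation') versus a
# pharmacological one.

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """Sample odds ratio (a*d)/(b*c) for a 2x2 table [[a, b], [c, d]]."""
    return (a * d) / (b * c)

# Hypothetical counts over 100 responses from one bot:
# row 1: 90 mention consultation, 10 do not
# row 2: 16 mention a pharmacological option, 84 do not
or_bot = odds_ratio(90, 10, 16, 84)
print(f"OR = {or_bot:.2f}")  # → OR = 47.25
```

A significance test for such a table (e.g. Fisher's exact test) would supply the accompanying p value; the abstract does not specify which test the authors used.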