{"title":"患者如何使用人工智能","authors":"Chris Stokel-Walker","doi":"10.1136/bmj.q2393","DOIUrl":null,"url":null,"abstract":"Artificial intelligence (AI) tools such as ChatGPT have hundreds of millions of users—but are they medically safe and reliable? Chris Stokel-Walker asks patients and physicians about the benefits and risks in an AI world In August this year Hayley Brackley lost a large part of her vision, completely out of the blue. She’d gone to her local chemist with eye pain, and a prescribing pharmacist diagnosed sinusitis. She took the recommended medicine to try to resolve the pain, but it began affecting her ability to see. Her first thought was to turn to ChatGPT for advice on what to do next. The chatbot advised her to go back and get the problem checked out more, which she did. Further examination by an optician found that she had significant inflammation and a haemorrhage in her optic nerve, which is currently being treated. It’s not surprising that Brackley’s first port of call was ChatGPT. She prefers ChatGPT to a search engine such as Google because it can hold a conversation and more quickly find the information she wants. She’s not alone: 200 million of us use the world’s most popular generative AI chatbot every day.1 Neither is it surprising that, before her meeting with the eye consultant in which her condition was diagnosed, she sought to use ChatGPT to see what sorts of questions might be asked. Brackley has attention deficit/hyperactivity disorder (ADHD) and autism, and she thought that being forewarned about what she might be asked could help her in the interaction. But this begs several questions. Should patients be using AI tools? How should the healthcare system react to patients using a new, often untested, tool in addition to human diagnoses? And what does patients’ use of AI tell us about the gaps in the health service and how …","PeriodicalId":22388,"journal":{"name":"The BMJ","volume":"35 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-11-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"How patients are using AI\",\"authors\":\"Chris Stokel-Walker\",\"doi\":\"10.1136/bmj.q2393\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Artificial intelligence (AI) tools such as ChatGPT have hundreds of millions of users—but are they medically safe and reliable? Chris Stokel-Walker asks patients and physicians about the benefits and risks in an AI world In August this year Hayley Brackley lost a large part of her vision, completely out of the blue. She’d gone to her local chemist with eye pain, and a prescribing pharmacist diagnosed sinusitis. She took the recommended medicine to try to resolve the pain, but it began affecting her ability to see. Her first thought was to turn to ChatGPT for advice on what to do next. The chatbot advised her to go back and get the problem checked out more, which she did. Further examination by an optician found that she had significant inflammation and a haemorrhage in her optic nerve, which is currently being treated. It’s not surprising that Brackley’s first port of call was ChatGPT. She prefers ChatGPT to a search engine such as Google because it can hold a conversation and more quickly find the information she wants. 