Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study.

Frontiers in Digital Health · IF 3.2 · Q1 (Health Care Sciences & Services) · Pub Date: 2024-11-29 · eCollection Date: 2024-01-01 · DOI: 10.3389/fdgth.2024.1410758
Meghan Reading Turchioe, Pooja Desai, Sarah Harkins, Jessica Kim, Shiveen Kumar, Yiye Zhang, Rochelle Joly, Jyotishman Pathak, Alison Hermann, Natalie Benda
Citations: 0

Abstract

Differing perspectives on artificial intelligence in mental healthcare among patients: a cross-sectional survey study.

Introduction: Artificial intelligence (AI) is being developed for mental healthcare, but patients' perspectives on its use are unknown. This study examined differences in attitudes towards AI being used in mental healthcare by history of mental illness, current mental health status, demographic characteristics, and social determinants of health.

Methods: We conducted a cross-sectional survey of an online sample of 500 adults asking about general perspectives, comfort with AI, specific concerns, explainability and transparency, responsibility and trust, and the importance of relevant bioethical constructs.

Results: Multiple vulnerable subgroups perceive potential harms related to AI being used in mental healthcare, place importance on upholding bioethical constructs, and would blame or reduce trust in multiple parties, including mental healthcare professionals, if harm or conflicting assessments resulted from AI.

Discussion: Future research examining strategies for ethical AI implementation and supporting clinician AI literacy is critical for optimal patient and clinician interactions with AI in mental healthcare.

Source journal: Frontiers in Digital Health
CiteScore: 4.20
Self-citation rate: 0.00%
Articles published: 0
Review time: 13 weeks