Safer interaction with IVAs: The impact of privacy literacy training on competent use of intelligent voice assistants

André Markus, Maximilian Baumann, Jan Pfister, Astrid Carolus, Andreas Hotho, Carolin Wienrich
{"title":"Safer interaction with IVAs: The impact of privacy literacy training on competent use of intelligent voice assistants","authors":"André Markus ,&nbsp;Maximilian Baumann ,&nbsp;Jan Pfister ,&nbsp;Astrid Carolus ,&nbsp;Andreas Hotho ,&nbsp;Carolin Wienrich","doi":"10.1016/j.caeai.2025.100372","DOIUrl":null,"url":null,"abstract":"<div><div>Intelligent voice assistants (IVAs) are widely used in households but can compromise privacy by inadvertently recording or encouraging personal disclosures through social cues. Against this backdrop, interventions that promote privacy literacy, sensitize users to privacy risks, and empower them to self-determine IVA interactions are becoming increasingly important. This work aims to develop and evaluate two online training modules that promote privacy literacy in the context of IVAs by providing knowledge about the institutional practices of IVA providers and clarifying users' privacy rights when using IVAs. Results show that the training modules have distinct strengths. For example, Training Module 1 increases subjective privacy literacy, raises specific concerns about IVA companies, and fosters the intention to engage more reflectively with IVAs. In contrast, Training Module 2 increases users' perceptions of control over their privacy and raises concerns about devices. Both modules share common outcomes, including increased privacy awareness, decreased trust, and social anthropomorphic perceptions of IVAs. Overall, these modules represent a significant advance in promoting the competent use of speech-based technology and provide valuable insights for future research and education on privacy in AI applications.</div></div>","PeriodicalId":34469,"journal":{"name":"Computers and Education Artificial Intelligence","volume":"8 ","pages":"Article 100372"},"PeriodicalIF":0.0000,"publicationDate":"2025-01-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"Computers and Education Artificial Intelligence","FirstCategoryId":"1085","ListUrlMain":"https://www.sciencedirect.com/science/article/pii/S2666920X25000128","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"Social Sciences","Score":null,"Total":0}
引用次数: 0

Abstract

Intelligent voice assistants (IVAs) are widely used in households but can compromise privacy by inadvertently recording or encouraging personal disclosures through social cues. Against this backdrop, interventions that promote privacy literacy, sensitize users to privacy risks, and empower them to self-determine IVA interactions are becoming increasingly important. This work aims to develop and evaluate two online training modules that promote privacy literacy in the context of IVAs by providing knowledge about the institutional practices of IVA providers and clarifying users' privacy rights when using IVAs. Results show that the training modules have distinct strengths. For example, Training Module 1 increases subjective privacy literacy, raises specific concerns about IVA companies, and fosters the intention to engage more reflectively with IVAs. In contrast, Training Module 2 increases users' perceptions of control over their privacy and raises concerns about devices. Both modules share common outcomes, including increased privacy awareness, decreased trust, and social anthropomorphic perceptions of IVAs. Overall, these modules represent a significant advance in promoting the competent use of speech-based technology and provide valuable insights for future research and education on privacy in AI applications.
Source journal: Computers and Education Artificial Intelligence
CiteScore: 16.80 · Self-citation rate: 0.00% · Articles published: 66 · Review time: 50 days