Inside Out or Not: Privacy Implications of Emotional Disclosure
Elham Naghizade, Kaixin Ji, Benjamin Tag, Flora Salim
Privacy is dynamic, sensitive, and contextual, much like our emotions. Previous studies have explored the interplay between privacy and context, privacy and emotion, and emotion and context. However, there remains a significant gap in understanding the interplay of these aspects simultaneously. In this paper, we present a preliminary study investigating the role of emotions in driving individuals' information sharing behaviour, particularly in relation to urban locations and social ties. We adopt a novel methodology that integrates context (location and time), emotion, and personal information sharing behaviour, providing a comprehensive analysis of how contextual emotions affect privacy. Emotions are assessed with both self-reports and electrodermal activity (EDA). Our findings reveal that self-reported emotions influence personal information-sharing behaviour with distant social groups, while neutral emotions lead individuals to share less precise information with close social circles, a pattern that is potentially detectable with wrist-worn EDA. Our study helps lay the foundation for personalised emotion-aware strategies to mitigate oversharing risks and enhance user privacy in the digital age.
{"title":"Inside Out or Not: Privacy Implications of Emotional Disclosure","authors":"Elham Naghizade, Kaixin Ji, Benjamin Tag, Flora Salim","doi":"arxiv-2409.11805","DOIUrl":"https://doi.org/arxiv-2409.11805","url":null,"abstract":"Privacy is dynamic, sensitive, and contextual, much like our emotions.\u0000Previous studies have explored the interplay between privacy and context,\u0000privacy and emotion, and emotion and context. However, there remains a\u0000significant gap in understanding the interplay of these aspects simultaneously.\u0000In this paper, we present a preliminary study investigating the role of\u0000emotions in driving individuals' information sharing behaviour, particularly in\u0000relation to urban locations and social ties. We adopt a novel methodology that\u0000integrates context (location and time), emotion, and personal information\u0000sharing behaviour, providing a comprehensive analysis of how contextual\u0000emotions affect privacy. The emotions are assessed with both self-reporting and\u0000electrodermal activity (EDA). Our findings reveal that self-reported emotions\u0000influence personal information-sharing behaviour with distant social groups,\u0000while neutral emotions lead individuals to share less precise information with\u0000close social circles, a pattern is potentially detectable with wrist-worn EDA.\u0000Our study helps lay the foundation for personalised emotion-aware strategies to\u0000mitigate oversharing risks and enhance user privacy in the digital age.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"11 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263686","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Law-based and standards-oriented approach for privacy impact assessment in medical devices: a topic for lawyers, engineers and healthcare practitioners in MedTech
Yuri R. Ladeia, David M. Pereira
Background: The integration of the General Data Protection Regulation (GDPR) and the Medical Device Regulation (MDR) creates complexities in conducting Data Protection Impact Assessments (DPIAs) for medical devices. The adoption of non-binding standards, such as those published by ISO and IEC, can harmonize these processes by enhancing accountability and privacy by design. Methods: This study employs a multidisciplinary literature review, focusing on the intersection of the GDPR and the MDR for medical devices that process personal health data. It evaluates key standards, including ISO/IEC 29134 and IEC 62304, to propose a unified approach for DPIAs that aligns with legal and technical frameworks. Results: The analysis reveals the benefits of integrating ISO/IEC standards into DPIAs, which provide detailed guidance on implementing privacy by design, risk assessment, and mitigation strategies specific to medical devices. The proposed framework ensures that DPIAs are living documents, continuously updated to adapt to evolving data protection challenges. Conclusions: A unified approach combining European Union (EU) regulations and international standards offers a robust framework for conducting DPIAs in medical devices. This integration balances security, innovation, and privacy, enhancing compliance and fostering trust in medical technologies. The study advocates for leveraging both hard law and standards to systematically address privacy and safety in the design and operation of medical devices, thereby raising the maturity of the MedTech ecosystem.
{"title":"Law-based and standards-oriented approach for privacy impact assessment in medical devices: a topic for lawyers, engineers and healthcare practitioners in MedTech","authors":"Yuri R. Ladeia, David M. Pereira","doi":"arxiv-2409.11845","DOIUrl":"https://doi.org/arxiv-2409.11845","url":null,"abstract":"Background: The integration of the General Data Protection Regulation (GDPR)\u0000and the Medical Device Regulation (MDR) creates complexities in conducting Data\u0000Protection Impact Assessments (DPIAs) for medical devices. The adoption of\u0000non-binding standards like ISO and IEC can harmonize these processes by\u0000enhancing accountability and privacy by design. Methods: This study employs a\u0000multidisciplinary literature review, focusing on GDPR and MDR intersection in\u0000medical devices that process personal health data. It evaluates key standards,\u0000including ISO/IEC 29134 and IEC 62304, to propose a unified approach for DPIAs\u0000that aligns with legal and technical frameworks. Results: The analysis reveals\u0000the benefits of integrating ISO/IEC standards into DPIAs, which provide\u0000detailed guidance on implementing privacy by design, risk assessment, and\u0000mitigation strategies specific to medical devices. The proposed framework\u0000ensures that DPIAs are living documents, continuously updated to adapt to\u0000evolving data protection challenges. Conclusions: A unified approach combining\u0000European Union (EU) regulations and international standards offers a robust\u0000framework for conducting DPIAs in medical devices. This integration balances\u0000security, innovation, and privacy, enhancing compliance and fostering trust in\u0000medical technologies. The study advocates for leveraging both hard law and\u0000standards to systematically address privacy and safety in the design and\u0000operation of medical devices, thereby raising the maturity of the MedTech\u0000ecosystem.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"31 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263685","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Idiosyncratic properties of Australian STV election counting
Andrew Conway, Michelle Blom, Alexander Ek, Peter J. Stuckey, Vanessa J. Teague, Damjan Vukcevic
Single Transferable Vote (STV) counting, used in several jurisdictions in Australia, is a system for choosing multiple election winners given voters' preferences over candidates. A variety of different STV versions are legislated and/or applied across Australia. This paper shows some of the unintuitive properties of some of these systems.
{"title":"Idiosyncratic properties of Australian STV election counting","authors":"Andrew Conway, Michelle Blom, Alexander Ek, Peter J. Stuckey, Vanessa J. Teague, Damjan Vukcevic","doi":"arxiv-2409.11627","DOIUrl":"https://doi.org/arxiv-2409.11627","url":null,"abstract":"Single Transferable Vote (STV) counting, used in several jurisdictions in\u0000Australia, is a system for choosing multiple election winners given voters'\u0000preferences over candidates. There are a variety of different versions of STV\u0000legislated and/or applied across Australia. This paper shows some of the\u0000unintuitive properties of some of these systems.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"16 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263687","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes
Li Qiwei, Shihui Zhang, Andrew Timothy Kasper, Joshua Ashkinaze, Asia A. Eaton, Sarita Schoenebeck, Eric Gilbert
Non-consensual intimate media (NCIM) inflicts significant harm. Currently, victim-survivors can use two mechanisms to report NCIM: as a non-consensual nudity violation or as copyright infringement. We conducted an audit study of the takedown speed of NCIM reported to X (formerly Twitter) under both mechanisms. We uploaded 50 AI-generated nude images and reported half under X's "non-consensual nudity" reporting mechanism and half under its "copyright infringement" mechanism. The copyright condition resulted in successful image removal within 25 hours for all images (100% removal rate), while non-consensual nudity reports resulted in no image removal for over three weeks (0% removal rate). We stress the need for targeted legislation to regulate NCIM removal online. We also discuss ethical considerations for auditing NCIM on social platforms.
{"title":"Reporting Non-Consensual Intimate Media: An Audit Study of Deepfakes","authors":"Li Qiwei, Shihui Zhang, Andrew Timothy Kasper, Joshua Ashkinaze, Asia A. Eaton, Sarita Schoenebeck, Eric Gilbert","doi":"arxiv-2409.12138","DOIUrl":"https://doi.org/arxiv-2409.12138","url":null,"abstract":"Non-consensual intimate media (NCIM) inflicts significant harm. Currently,\u0000victim-survivors can use two mechanisms to report NCIM - as a non-consensual\u0000nudity violation or as copyright infringement. We conducted an audit study of\u0000takedown speed of NCIM reported to X (formerly Twitter) of both mechanisms. We\u0000uploaded 50 AI-generated nude images and reported half under X's\u0000\"non-consensual nudity\" reporting mechanism and half under its \"copyright\u0000infringement\" mechanism. The copyright condition resulted in successful image\u0000removal within 25 hours for all images (100% removal rate), while\u0000non-consensual nudity reports resulted in no image removal for over three weeks\u0000(0% removal rate). We stress the need for targeted legislation to regulate NCIM\u0000removal online. We also discuss ethical considerations for auditing NCIM on\u0000social platforms.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"18 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263932","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gender Representation and Bias in Indian Civil Service Mock Interviews
Somonnoy Banerjee, Sujan Dutta, Soumyajit Datta, Ashiqur R. KhudaBukhsh
This paper makes three key contributions. First, via a substantial corpus of 51,278 interview questions sourced from 888 YouTube videos of mock interviews of Indian civil service candidates, we demonstrate stark gender bias in the broad nature of questions asked to male and female candidates. Second, our experiments with large language models show a strong presence of gender bias in explanations provided by the LLMs on the gender inference task. Finally, we present a novel dataset of 51,278 interview questions that can inform future social science studies.
{"title":"Gender Representation and Bias in Indian Civil Service Mock Interviews","authors":"Somonnoy Banerjee, Sujan Dutta, Soumyajit Datta, Ashiqur R. KhudaBukhsh","doi":"arxiv-2409.12194","DOIUrl":"https://doi.org/arxiv-2409.12194","url":null,"abstract":"This paper makes three key contributions. First, via a substantial corpus of\u000051,278 interview questions sourced from 888 YouTube videos of mock interviews\u0000of Indian civil service candidates, we demonstrate stark gender bias in the\u0000broad nature of questions asked to male and female candidates. Second, our\u0000experiments with large language models show a strong presence of gender bias in\u0000explanations provided by the LLMs on the gender inference task. Finally, we\u0000present a novel dataset of 51,278 interview questions that can inform future\u0000social science studies.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"307 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263688","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Strategic Insights in Human and Large Language Model Tactics at Word Guessing Games
Matīss Rikters, Sanita Reinsone
At the beginning of 2022, a simplistic word-guessing game took the world by storm and was further adapted to many languages beyond the original English version. In this paper, we examine the strategies of daily word-guessing game players that have evolved during a period of over two years. A survey gathered from 25% of frequent players reveals their strategies and motivations for continuing the daily journey. We also explore the capability of several popular open-access large language model systems and open-source models at comprehending and playing the game in two different languages. Results highlight the struggles of certain models to maintain correct guess length and generate repetitions, as well as hallucinations of non-existent words and inflections.
{"title":"Strategic Insights in Human and Large Language Model Tactics at Word Guessing Games","authors":"Matīss Rikters, Sanita Reinsone","doi":"arxiv-2409.11112","DOIUrl":"https://doi.org/arxiv-2409.11112","url":null,"abstract":"At the beginning of 2022, a simplistic word-guessing game took the world by\u0000storm and was further adapted to many languages beyond the original English\u0000version. In this paper, we examine the strategies of daily word-guessing game\u0000players that have evolved during a period of over two years. A survey gathered\u0000from 25% of frequent players reveals their strategies and motivations for\u0000continuing the daily journey. We also explore the capability of several popular\u0000open-access large language model systems and open-source models at\u0000comprehending and playing the game in two different languages. Results\u0000highlight the struggles of certain models to maintain correct guess length and\u0000generate repetitions, as well as hallucinations of non-existent words and\u0000inflections.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"48 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263691","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
The Role of AI Safety Institutes in Contributing to International Standards for Frontier AI Safety
Kristina Fort
International standards are crucial for ensuring that frontier AI systems are developed and deployed safely around the world. Since AI Safety Institutes (AISIs) possess in-house technical expertise, a mandate for international engagement, and convening power in the national AI ecosystem, all while being government institutions, we argue that they are particularly well-positioned to contribute to international standard-setting processes for AI safety. In this paper, we propose and evaluate three models for AISI involvement: 1. Seoul Declaration Signatories, 2. US (and other Seoul Declaration Signatories) and China, and 3. Globally Inclusive. Leveraging their diverse strengths, these models are not mutually exclusive. Rather, they offer a multi-track system solution in which the central role of AISIs guarantees coherence among the different tracks and consistency in their AI safety focus.
{"title":"The Role of AI Safety Institutes in Contributing to International Standards for Frontier AI Safety","authors":"Kristina Fort","doi":"arxiv-2409.11314","DOIUrl":"https://doi.org/arxiv-2409.11314","url":null,"abstract":"International standards are crucial for ensuring that frontier AI systems are\u0000developed and deployed safely around the world. Since the AI Safety Institutes\u0000(AISIs) possess in-house technical expertise, mandate for international\u0000engagement, and convening power in the national AI ecosystem while being a\u0000government institution, we argue that they are particularly well-positioned to\u0000contribute to the international standard-setting processes for AI safety. In\u0000this paper, we propose and evaluate three models for AISI involvement: 1. Seoul\u0000Declaration Signatories, 2. US (and other Seoul Declaration Signatories) and\u0000China, and 3. Globally Inclusive. Leveraging their diverse strengths, these\u0000models are not mutually exclusive. Rather, they offer a multi-track system\u0000solution in which the central role of AISIs guarantees coherence among the\u0000different tracks and consistency in their AI safety focus.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"6 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263689","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools
Rosemarie Santa Gonzalez, Ryan Piansky, Sue M Bae, Justin Biddle, Daniel Molzahn
The integration of artificial intelligence (AI) and optimization holds substantial promise for improving the efficiency, reliability, and resilience of engineered systems. Due to the networked nature of many engineered systems, ethically deploying methodologies at this intersection poses challenges that are distinct from other AI settings, thus motivating the development of ethical guidelines tailored to AI-enabled optimization. This paper highlights the need to go beyond fairness-driven algorithms and to systematically address the ethical decisions spanning the stages of modeling, data curation, results analysis, and implementation of optimization-based decision support tools. Accordingly, this paper identifies the ethical considerations required when deploying algorithms at the intersection of AI and optimization via case studies in power systems as well as supply chain and logistics. Rather than providing a prescriptive set of rules, this paper aims to foster reflection and awareness among researchers and encourage consideration of ethical implications at every step of the decision-making process.
{"title":"Beyond Algorithmic Fairness: A Guide to Develop and Deploy Ethical AI-Enabled Decision-Support Tools","authors":"Rosemarie Santa Gonzalez, Ryan Piansky, Sue M Bae, Justin Biddle, Daniel Molzahn","doi":"arxiv-2409.11489","DOIUrl":"https://doi.org/arxiv-2409.11489","url":null,"abstract":"The integration of artificial intelligence (AI) and optimization hold\u0000substantial promise for improving the efficiency, reliability, and resilience\u0000of engineered systems. Due to the networked nature of many engineered systems,\u0000ethically deploying methodologies at this intersection poses challenges that\u0000are distinct from other AI settings, thus motivating the development of ethical\u0000guidelines tailored to AI-enabled optimization. This paper highlights the need\u0000to go beyond fairness-driven algorithms to systematically address ethical\u0000decisions spanning the stages of modeling, data curation, results analysis, and\u0000implementation of optimization-based decision support tools. Accordingly, this\u0000paper identifies ethical considerations required when deploying algorithms at\u0000the intersection of AI and optimization via case studies in power systems as\u0000well as supply chain and logistics. Rather than providing a prescriptive set of\u0000rules, this paper aims to foster reflection and awareness among researchers and\u0000encourage consideration of ethical implications at every step of the\u0000decision-making process.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"210 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269355","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Gaps or Hallucinations? Gazing into Machine-Generated Legal Analysis for Fine-grained Text Evaluations
Abe Bohan Hou, William Jurayj, Nils Holzenberger, Andrew Blair-Stanek, Benjamin Van Durme
Large Language Models (LLMs) show promise as a writing aid for professionals performing legal analyses. However, LLMs often hallucinate in this setting, in ways that are difficult for non-professionals and existing text evaluation metrics to recognize. In this work, we pose the question: when can machine-generated legal analysis be evaluated as acceptable? We introduce the neutral notion of gaps, as opposed to hallucinations in a strictly erroneous sense, to refer to the difference between human-written and machine-generated legal analysis. Gaps do not always equate to invalid generation. Working with legal experts, we consider the CLERC generation task proposed in Hou et al. (2024b), leading to a taxonomy, a fine-grained detector for predicting gap categories, and an annotated dataset for automatic evaluation. Our best detector achieves a 67% F1 score and 80% precision on the test set. Employing this detector as an automated metric on legal analysis generated by SOTA LLMs, we find that around 80% contain hallucinations of different kinds.
{"title":"Gaps or Hallucinations? Gazing into Machine-Generated Legal Analysis for Fine-grained Text Evaluations","authors":"Abe Bohan Hou, William Jurayj, Nils Holzenberger, Andrew Blair-Stanek, Benjamin Van Durme","doi":"arxiv-2409.09947","DOIUrl":"https://doi.org/arxiv-2409.09947","url":null,"abstract":"Large Language Models (LLMs) show promise as a writing aid for professionals\u0000performing legal analyses. However, LLMs can often hallucinate in this setting,\u0000in ways difficult to recognize by non-professionals and existing text\u0000evaluation metrics. In this work, we pose the question: when can\u0000machine-generated legal analysis be evaluated as acceptable? We introduce the\u0000neutral notion of gaps, as opposed to hallucinations in a strict erroneous\u0000sense, to refer to the difference between human-written and machine-generated\u0000legal analysis. Gaps do not always equate to invalid generation. Working with\u0000legal experts, we consider the CLERC generation task proposed in Hou et al.\u0000(2024b), leading to a taxonomy, a fine-grained detector for predicting gap\u0000categories, and an annotated dataset for automatic evaluation. Our best\u0000detector achieves 67% F1 score and 80% precision on the test set. Employing\u0000this detector as an automated metric on legal analysis generated by SOTA LLMs,\u0000we find around 80% contain hallucinations of different kinds.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"21 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142269358","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
LLMs as information warriors? Auditing how LLM-powered chatbots tackle disinformation about Russia's war in Ukraine
Mykola Makhortykh, Ani Baghumyan, Victoria Vziatysheva, Maryna Sydorova, Elizaveta Kuznetsova
The rise of large language models (LLMs) has a significant impact on information warfare. By facilitating the production of content related to disinformation and propaganda campaigns, LLMs can amplify different types of information operations and mislead online users. In our study, we empirically investigate how LLM-powered chatbots developed by Google, Microsoft, and Perplexity handle disinformation about Russia's war in Ukraine, and whether the chatbots' ability to provide accurate information on the topic varies across languages and over time. Our findings indicate that some chatbots (Perplexity) show a significant improvement in performance over time in several languages, whereas others (Gemini) improve only in English and deteriorate in low-resource languages.
{"title":"LLMs as information warriors? Auditing how LLM-powered chatbots tackle disinformation about Russia's war in Ukraine","authors":"Mykola Makhortykh, Ani Baghumyan, Victoria Vziatysheva, Maryna Sydorova, Elizaveta Kuznetsova","doi":"arxiv-2409.10697","DOIUrl":"https://doi.org/arxiv-2409.10697","url":null,"abstract":"The rise of large language models (LLMs) has a significant impact on\u0000information warfare. By facilitating the production of content related to\u0000disinformation and propaganda campaigns, LLMs can amplify different types of\u0000information operations and mislead online users. In our study, we empirically\u0000investigate how LLM-powered chatbots, developed by Google, Microsoft, and\u0000Perplexity, handle disinformation about Russia's war in Ukraine and whether the\u0000chatbots' ability to provide accurate information on the topic varies across\u0000languages and over time. Our findings indicate that while for some chatbots\u0000(Perplexity), there is a significant improvement in performance over time in\u0000several languages, for others (Gemini), the performance improves only in\u0000English but deteriorates in low-resource languages.","PeriodicalId":501112,"journal":{"name":"arXiv - CS - Computers and Society","volume":"93 1","pages":""},"PeriodicalIF":0.0,"publicationDate":"2024-09-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142263690","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}