{"title":"使用机器学习的假新闻检测:一种对抗性协作方法","authors":"Karen M. DSouza, Aaron M. French","doi":"10.1108/intr-03-2022-0176","DOIUrl":null,"url":null,"abstract":"<h3>Purpose</h3>\n<p>Purveyors of fake news perpetuate information that can harm society, including businesses. Social media's reach quickly amplifies distortions of fake news. Research has not yet fully explored the mechanisms of such adversarial behavior or the adversarial techniques of machine learning that might be deployed to detect fake news. Debiasing techniques are also explored to combat against the generation of fake news using adversarial data. The purpose of this paper is to present the challenges and opportunities in fake news detection.</p><!--/ Abstract__block -->\n<h3>Design/methodology/approach</h3>\n<p>First, this paper provides an overview of adversarial behaviors and current machine learning techniques. Next, it describes the use of long short-term memory (LSTM) to identify fake news in a corpus of articles. Finally, it presents the novel adversarial behavior approach to protect targeted business datasets from attacks.</p><!--/ Abstract__block -->\n<h3>Findings</h3>\n<p>This research highlights the need for a corpus of fake news that can be used to evaluate classification methods. Adversarial debiasing using IBM's Artificial Intelligence Fairness 360 (AIF360) toolkit can improve the disparate impact of unfavorable characteristics of a dataset. Debiasing also demonstrates significant potential to reduce fake news generation based on the inherent bias in the data. These findings provide avenues for further research on adversarial collaboration and robust information systems.</p><!--/ Abstract__block -->\n<h3>Originality/value</h3>\n<p>Adversarial debiasing of datasets demonstrates that by reducing bias related to protected attributes, such as sex, race and age, businesses can reduce the potential of exploitation to generate fake news through adversarial data.</p><!--/ Abstract__block -->","PeriodicalId":54925,"journal":{"name":"Internet Research","volume":"6 6","pages":""},"PeriodicalIF":5.9000,"publicationDate":"2023-10-11","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Fake news detection using machine learning: an adversarial collaboration approach\",\"authors\":\"Karen M. DSouza, Aaron M. French\",\"doi\":\"10.1108/intr-03-2022-0176\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"<h3>Purpose</h3>\\n<p>Purveyors of fake news perpetuate information that can harm society, including businesses. Social media's reach quickly amplifies distortions of fake news. Research has not yet fully explored the mechanisms of such adversarial behavior or the adversarial techniques of machine learning that might be deployed to detect fake news. Debiasing techniques are also explored to combat against the generation of fake news using adversarial data. The purpose of this paper is to present the challenges and opportunities in fake news detection.</p><!--/ Abstract__block -->\\n<h3>Design/methodology/approach</h3>\\n<p>First, this paper provides an overview of adversarial behaviors and current machine learning techniques. Next, it describes the use of long short-term memory (LSTM) to identify fake news in a corpus of articles. 
Finally, it presents the novel adversarial behavior approach to protect targeted business datasets from attacks.</p><!--/ Abstract__block -->\\n<h3>Findings</h3>\\n<p>This research highlights the need for a corpus of fake news that can be used to evaluate classification methods. Adversarial debiasing using IBM's Artificial Intelligence Fairness 360 (AIF360) toolkit can improve the disparate impact of unfavorable characteristics of a dataset. Debiasing also demonstrates significant potential to reduce fake news generation based on the inherent bias in the data. These findings provide avenues for further research on adversarial collaboration and robust information systems.</p><!--/ Abstract__block -->\\n<h3>Originality/value</h3>\\n<p>Adversarial debiasing of datasets demonstrates that by reducing bias related to protected attributes, such as sex, race and age, businesses can reduce the potential of exploitation to generate fake news through adversarial data.</p><!--/ Abstract__block -->\",\"PeriodicalId\":54925,\"journal\":{\"name\":\"Internet Research\",\"volume\":\"6 6\",\"pages\":\"\"},\"PeriodicalIF\":5.9000,\"publicationDate\":\"2023-10-11\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"Internet Research\",\"FirstCategoryId\":\"94\",\"ListUrlMain\":\"https://doi.org/10.1108/intr-03-2022-0176\",\"RegionNum\":3,\"RegionCategory\":\"管理学\",\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"Q1\",\"JCRName\":\"BUSINESS\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"Internet Research","FirstCategoryId":"94","ListUrlMain":"https://doi.org/10.1108/intr-03-2022-0176","RegionNum":3,"RegionCategory":"管理学","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"Q1","JCRName":"BUSINESS","Score":null,"Total":0}
Fake news detection using machine learning: an adversarial collaboration approach
Purpose
Purveyors of fake news perpetuate information that can harm society, including businesses. Social media's reach quickly amplifies the distortions of fake news. Research has not yet fully explored the mechanisms of such adversarial behavior or the adversarial machine learning techniques that might be deployed to detect fake news. Debiasing techniques are also explored to combat the generation of fake news from adversarial data. The purpose of this paper is to present the challenges and opportunities in fake news detection.
Design/methodology/approach
First, this paper provides an overview of adversarial behaviors and current machine learning techniques. Next, it describes the use of long short-term memory (LSTM) to identify fake news in a corpus of articles. Finally, it presents a novel adversarial behavior approach to protect targeted business datasets from attacks.
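The abstract does not specify the authors' implementation, but the LSTM classification step it describes can be illustrated with a minimal Keras sketch. The vocabulary size, sequence length, layer sizes and the two placeholder articles below are illustrative assumptions, not the paper's actual configuration or data.

```python
# Minimal sketch of an LSTM-based fake news classifier (assumed setup, not the authors' pipeline).
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense, Dropout
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences

VOCAB_SIZE = 20000   # assumed vocabulary cap
MAX_LEN = 300        # assumed maximum article length in tokens

def build_lstm_classifier():
    """Embed tokens, encode the sequence with an LSTM, and output P(article is fake)."""
    model = Sequential([
        Embedding(VOCAB_SIZE, 128),      # learned word embeddings
        LSTM(64, dropout=0.2),           # sequence encoder
        Dense(32, activation="relu"),
        Dropout(0.3),
        Dense(1, activation="sigmoid"),  # 1 = fake, 0 = real
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

# Placeholder corpus; in practice this would be the labeled news-article dataset.
texts = ["example genuine article text ...", "example fabricated article text ..."]
labels = np.array([0, 1])

tokenizer = Tokenizer(num_words=VOCAB_SIZE, oov_token="<OOV>")
tokenizer.fit_on_texts(texts)
x = pad_sequences(tokenizer.texts_to_sequences(texts), maxlen=MAX_LEN)

model = build_lstm_classifier()
model.fit(x, labels, epochs=3, batch_size=32)
print(model.predict(x))  # predicted probability that each article is fake
```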
Findings
This research highlights the need for a corpus of fake news that can be used to evaluate classification methods. Adversarial debiasing using IBM's Artificial Intelligence Fairness 360 (AIF360) toolkit can mitigate the disparate impact of unfavorable characteristics in a dataset. Debiasing also demonstrates significant potential to reduce fake news generation based on the inherent bias in the data. These findings provide avenues for further research on adversarial collaboration and robust information systems.
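The AIF360 workflow referenced here, measuring disparate impact and then training with adversarial debiasing, can be sketched as follows. The toy dataframe, the protected attribute "sex", the group encoding and the hyperparameters are placeholder assumptions; AIF360's AdversarialDebiasing trains a classifier alongside an adversary that tries to recover the protected attribute, using a TF1-style session.

```python
# Sketch of measuring disparate impact and applying AIF360's AdversarialDebiasing
# on a small placeholder dataset (column names and values are assumptions).
import pandas as pd
import tensorflow.compat.v1 as tf
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.inprocessing import AdversarialDebiasing

tf.disable_eager_execution()  # AdversarialDebiasing uses TF1-style sessions

# Toy tabular data: numeric features, a binary label, and a protected attribute.
df = pd.DataFrame({
    "feature1": [0.2, 0.8, 0.5, 0.9, 0.1, 0.7],
    "sex":      [0,   1,   0,   1,   0,   1],   # 1 = privileged group (assumed encoding)
    "label":    [0,   1,   0,   1,   1,   1],
})

dataset = BinaryLabelDataset(
    df=df, label_names=["label"], protected_attribute_names=["sex"]
)
privileged = [{"sex": 1}]
unprivileged = [{"sex": 0}]

# Disparate impact: ratio of favorable outcome rates (unprivileged / privileged).
metric = BinaryLabelDatasetMetric(
    dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact (before):", metric.disparate_impact())

# Train a debiased classifier; the adversary penalizes predictions that reveal "sex".
sess = tf.Session()
debiaser = AdversarialDebiasing(
    privileged_groups=privileged,
    unprivileged_groups=unprivileged,
    scope_name="debiased_classifier",
    sess=sess,
    num_epochs=10,
    batch_size=2,
)
debiaser.fit(dataset)
predictions = debiaser.predict(dataset)

metric_after = BinaryLabelDatasetMetric(
    predictions, privileged_groups=privileged, unprivileged_groups=unprivileged
)
print("Disparate impact (after):", metric_after.disparate_impact())
sess.close()
```

A disparate impact ratio near 1.0 indicates similar favorable outcome rates across groups; the before/after comparison is how the improvement reported in the findings would be quantified.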
Originality/value
Adversarial debiasing of datasets demonstrates that by reducing bias related to protected attributes, such as sex, race and age, businesses can reduce the potential for their data to be exploited to generate fake news through adversarial data.
About the journal
This wide-ranging interdisciplinary journal looks at the social, ethical, economic and political implications of the internet. Recent issues have focused on online and mobile gaming, the sharing economy, and the dark side of social media.