{"title":"社交和对抗性数据源下值得信赖的机器学习","authors":"Han Shao","doi":"arxiv-2408.01596","DOIUrl":null,"url":null,"abstract":"Machine learning has witnessed remarkable breakthroughs in recent years. As\nmachine learning permeates various aspects of daily life, individuals and\norganizations increasingly interact with these systems, exhibiting a wide range\nof social and adversarial behaviors. These behaviors may have a notable impact\non the behavior and performance of machine learning systems. Specifically,\nduring these interactions, data may be generated by strategic individuals,\ncollected by self-interested data collectors, possibly poisoned by adversarial\nattackers, and used to create predictors, models, and policies satisfying\nmultiple objectives. As a result, the machine learning systems' outputs might\ndegrade, such as the susceptibility of deep neural networks to adversarial\nexamples (Shafahi et al., 2018; Szegedy et al., 2013) and the diminished\nperformance of classic algorithms in the presence of strategic individuals\n(Ahmadi et al., 2021). Addressing these challenges is imperative for the\nsuccess of machine learning in societal settings.","PeriodicalId":501316,"journal":{"name":"arXiv - CS - Computer Science and Game Theory","volume":"57 1","pages":""},"PeriodicalIF":0.0000,"publicationDate":"2024-08-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"Trustworthy Machine Learning under Social and Adversarial Data Sources\",\"authors\":\"Han Shao\",\"doi\":\"arxiv-2408.01596\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"Machine learning has witnessed remarkable breakthroughs in recent years. As\\nmachine learning permeates various aspects of daily life, individuals and\\norganizations increasingly interact with these systems, exhibiting a wide range\\nof social and adversarial behaviors. These behaviors may have a notable impact\\non the behavior and performance of machine learning systems. Specifically,\\nduring these interactions, data may be generated by strategic individuals,\\ncollected by self-interested data collectors, possibly poisoned by adversarial\\nattackers, and used to create predictors, models, and policies satisfying\\nmultiple objectives. As a result, the machine learning systems' outputs might\\ndegrade, such as the susceptibility of deep neural networks to adversarial\\nexamples (Shafahi et al., 2018; Szegedy et al., 2013) and the diminished\\nperformance of classic algorithms in the presence of strategic individuals\\n(Ahmadi et al., 2021). 
Addressing these challenges is imperative for the\\nsuccess of machine learning in societal settings.\",\"PeriodicalId\":501316,\"journal\":{\"name\":\"arXiv - CS - Computer Science and Game Theory\",\"volume\":\"57 1\",\"pages\":\"\"},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-08-02\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computer Science and Game Theory\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2408.01596\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computer Science and Game Theory","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2408.01596","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
Trustworthy Machine Learning under Social and Adversarial Data Sources
Machine learning has witnessed remarkable breakthroughs in recent years. As
machine learning permeates various aspects of daily life, individuals and
organizations increasingly interact with these systems, exhibiting a wide range
of social and adversarial behaviors. These behaviors can notably affect how
machine learning systems behave and perform. Specifically,
during these interactions, data may be generated by strategic individuals,
collected by self-interested data collectors, possibly poisoned by adversarial
attackers, and used to create predictors, models, and policies satisfying
multiple objectives. As a result, the outputs of machine learning systems may
degrade; well-known examples include the susceptibility of deep neural networks
to adversarial examples (Shafahi et al., 2018; Szegedy et al., 2013) and the
diminished performance of classical algorithms in the presence of strategic
individuals (Ahmadi et al., 2021). Addressing these challenges is imperative for the
success of machine learning in societal settings.
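
As a concrete illustration of the adversarial-example phenomenon cited above, the following minimal sketch (not taken from the paper; the toy data, model architecture, and perturbation budget `eps` are illustrative assumptions) trains a small PyTorch classifier and attacks it with the fast gradient sign method (FGSM) of Goodfellow et al. (2015), which typically drives accuracy well below the clean accuracy.

```python
# Minimal sketch: adversarial examples via FGSM on a toy classifier.
# Everything here (data, architecture, eps) is an illustrative assumption.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy two-class data: two Gaussian blobs in 20 dimensions.
n, d = 200, 20
y = (torch.arange(n) % 2).long()
x = torch.randn(n, d) + (2.0 * y.float() - 1.0).unsqueeze(1)

# Small feed-forward classifier.
model = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

# Brief training loop on the clean data.
for _ in range(200):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()

# FGSM: move each input a small step in the sign of the loss gradient.
eps = 0.3  # illustrative perturbation budget
x_adv = x.clone().requires_grad_(True)
loss_fn(model(x_adv), y).backward()
x_adv = (x_adv + eps * x_adv.grad.sign()).detach()

clean_acc = (model(x).argmax(dim=1) == y).float().mean().item()
adv_acc = (model(x_adv).argmax(dim=1) == y).float().mean().item()
print(f"clean accuracy: {clean_acc:.2f}  adversarial accuracy: {adv_acc:.2f}")
```

Even on this toy problem, a small, sign-based perturbation of each input is usually enough to flip many of the model's predictions, which mirrors the susceptibility of deep networks discussed in the abstract.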