
Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia: Latest Publications

Fighting Filterbubbles with Adversarial Training
Lukas Pfahler, K. Morik
Recommender engines play a role in the emergence and reinforcement of filter bubbles. When these systems learn that a user prefers content from a particular site, the user will be less likely to be exposed to different sources or opinions and, ultimately, is more likely to develop extremist tendencies. We trace the roots of this phenomenon to the way the recommender engine represents news articles. The vectorial features modern systems extract from the plain text of news articles are already highly predictive of the associated news outlet. We propose a new training scheme based on adversarial machine learning to tackle this issue. Our preliminary experiments show that the features we can extract this way are significantly less predictive of the news outlet and thus offer the possibility of reducing the risk that new filter bubbles form.
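The scheme described above can be sketched with a gradient-reversal adversary, a standard construction for adversarial feature learning. The following is a minimal illustration assuming PyTorch, bag-of-words article features, and made-up layer sizes; `DebiasedEncoder`, `GradReverse`, and the dummy batch are illustrative, not the authors' implementation.

```python
# Minimal sketch of adversarial feature training (assumed setup, not the
# authors' code): an encoder learns article features while a gradient-reversal
# layer penalizes any signal that predicts the news outlet.
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass."""
    @staticmethod
    def forward(ctx, x, lamb):
        ctx.lamb = lamb
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lamb * grad_output, None

class DebiasedEncoder(nn.Module):
    def __init__(self, vocab_dim=10000, feat_dim=128, n_outlets=20, lamb=1.0):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(vocab_dim, 256), nn.ReLU(),
                                     nn.Linear(256, feat_dim))
        # The adversary tries to recover the outlet from the features.
        self.outlet_head = nn.Linear(feat_dim, n_outlets)
        self.lamb = lamb

    def forward(self, x):
        feats = self.encoder(x)
        outlet_logits = self.outlet_head(GradReverse.apply(feats, self.lamb))
        return feats, outlet_logits

model = DebiasedEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 10000)             # dummy bag-of-words article batch
outlets = torch.randint(0, 20, (32,))  # dummy outlet labels
feats, outlet_logits = model(x)
adv_loss = nn.functional.cross_entropy(outlet_logits, outlets)
adv_loss.backward()                    # encoder gradients are reversed here
opt.step()
```

In full training, a recommendation loss on `feats` would be added to `adv_loss`; the reversal means the outlet head learns to predict the outlet while the encoder is optimized to make it fail.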
{"title":"Fighting Filterbubbles with Adversarial Training","authors":"Lukas Pfahler, K. Morik","doi":"10.1145/3422841.3423535","DOIUrl":"https://doi.org/10.1145/3422841.3423535","url":null,"abstract":"Recommender engines play a role in the emergence and reinforcement of filter bubbles. When these systems learn that a user prefers content from a particular site, the user will be less likely to be exposed to different sources or opinions and, ultimately, is more likely to develop extremist tendencies. We trace roots of this phenomenon to the way the recommender engine represents news articles. The vectorial features modern systems extract from the plain text of news articles are already highly predictive of the associated news outlet. We propose a new training scheme based on adversarial machine learning to tackle this issue . Our preliminary experiments show that the features we can extract this way are significantly less predictive of the news outlet and thus offer the possibility to reduce the risk of manifestation of new filter bubbles.","PeriodicalId":428850,"journal":{"name":"Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia","volume":"17 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"127111308","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Balancing Fairness and Accuracy in Sentiment Detection using Multiple Black Box Models
Abdulaziz A. Almuzaini, V. Singh
Sentiment detection is an important building block for multiple information retrieval tasks such as product recommendation, cyberbullying, fake news and misinformation detection. Unsurprisingly, multiple commercial APIs, each with different levels of accuracy and fairness, are now publicly available for sentiment detection. Users can easily incorporate these APIs in their applications. While combining inputs from multiple modalities or black-box models for increasing accuracy is commonly studied in multimedia computing literature, there has been little work on combining different modalities for increasing fairness of the resulting decision. In this work, we audit multiple commercial sentiment detection APIs for gender bias in two-actor news headline settings and report on the level of bias observed. Next, we propose a "Flexible Fair Regression" approach, which ensures satisfactory accuracy and fairness by jointly learning from multiple black-box models. The results pave the way for fair yet accurate sentiment detectors for multiple applications.
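One plausible reading of this joint learning is a constrained ensemble over the API scores: minimize prediction error plus a penalty on the prediction gap between demographic groups. The sketch below assumes that formulation; `api_scores`, `labels`, `group`, and `lam` are illustrative placeholders, and the paper's exact "Flexible Fair Regression" objective may differ.

```python
# Sketch of one way to combine several black-box sentiment scores under a
# fairness penalty (assumed formulation, not necessarily the paper's).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n, K = 500, 3
api_scores = rng.uniform(-1, 1, (n, K))  # sentiment scores from K APIs (dummy)
labels = rng.uniform(-1, 1, n)           # ground-truth sentiment (dummy)
group = rng.integers(0, 2, n)            # e.g., gender of the headline actor

lam = 5.0  # fairness/accuracy trade-off; larger = fairer, less accurate

def objective(w):
    pred = api_scores @ w
    mse = np.mean((pred - labels) ** 2)
    # Demographic-parity-style gap: mean prediction difference across groups.
    gap = abs(pred[group == 0].mean() - pred[group == 1].mean())
    return mse + lam * gap

w0 = np.full(K, 1.0 / K)
res = minimize(objective, w0, method="SLSQP",
               bounds=[(0.0, 1.0)] * K,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
print("learned ensemble weights:", res.x)
```

Sweeping `lam` traces out the accuracy/fairness trade-off curve implied by the title.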
{"title":"Balancing Fairness and Accuracy in Sentiment Detection using Multiple Black Box Models","authors":"Abdulaziz A. Almuzaini, V. Singh","doi":"10.1145/3422841.3423536","DOIUrl":"https://doi.org/10.1145/3422841.3423536","url":null,"abstract":"Sentiment detection is an important building block for multiple information retrieval tasks such as product recommendation, cyberbullying, fake news and misinformation detection. Unsurprisingly, multiple commercial APIs, each with different levels of accuracy and fairness, are now publicly available for sentiment detection. Users can easily incorporate these APIs in their applications. While combining inputs from multiple modalities or black-box models for increasing accuracy is commonly studied in multimedia computing literature, there has been little work on combining different modalities for increasingfairness of the resulting decision. In this work, we audit multiple commercial sentiment detection APIs for the gender bias in two-actor news headlines settings and report on the level of bias observed. Next, we propose a \"Flexible Fair Regression\" approach, which ensures satisfactory accuracy and fairness by jointly learning from multiple black-box models. The results pave way for fair yet accurate sentiment detectors for multiple applications.","PeriodicalId":428850,"journal":{"name":"Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia","volume":"102 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"126993019","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research
Jiachen Jiang, Soroush Vosoughi
Social media has shaken the foundations of our society, unlikely as it may seem. Many of the popular tools used to moderate harmful digital content, however, have received widespread criticism from both the academic community and the public sphere for middling performance and lack of accountability. Though social media research is thought to center primarily on natural language processing, we demonstrate the need for the community to understand multimedia processing and its unique ethical considerations. Specifically, we identify statistical differences in the performance of Amazon Mechanical Turk (MTurk) annotators when different modalities of information are provided and discuss the patterns of harm that arise from crowd-sourced human demographic prediction. Finally, we discuss the consequences of those biases by auditing the performance of a toxicity detector called Perspective API on the language of Twitter users across a variety of demographic categories.
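The auditing step can be sketched as scoring the same kind of content for each demographic group and comparing score distributions. In the sketch below, `score_toxicity` is a hypothetical stand-in for a call to the Perspective API (or any toxicity classifier), not its real client code, and the sample tweets are placeholders.

```python
# Sketch of a per-group toxicity audit; `score_toxicity` is a hypothetical
# placeholder for a real Perspective API call returning a score in [0, 1].
from statistics import mean

def score_toxicity(text: str) -> float:
    # Placeholder scorer: a real audit would query a toxicity classifier here.
    return min(1.0, len(text) / 280)

def audit_by_group(tweets_by_group: dict[str, list[str]]) -> dict[str, float]:
    """Mean toxicity score per demographic group; large gaps suggest bias."""
    return {g: mean(score_toxicity(t) for t in ts)
            for g, ts in tweets_by_group.items()}

sample = {
    "group_a": ["example tweet one", "another example"],
    "group_b": ["a third example tweet", "and one more"],
}
report = audit_by_group(sample)
gap = max(report.values()) - min(report.values())
print(report, "max inter-group gap:", round(gap, 3))
```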
{"title":"Not Judging a User by Their Cover: Understanding Harm in Multi-Modal Processing within Social Media Research","authors":"Jiachen Jiang, Soroush Vosoughi","doi":"10.1145/3422841.3423534","DOIUrl":"https://doi.org/10.1145/3422841.3423534","url":null,"abstract":"Social media has shaken the foundations of our society, unlikely as it may seem. Many of the popular tools used to moderate harmful digital content, however, have received widespread criticism from both the academic community and the public sphere for middling performance and lack of accountability. Though social media research is thought to center primarily on natural language processing, we demonstrate the need for the community to understand multimedia processing and its unique ethical considerations. Specifically, we identify statistical differences in the performance of Amazon Turk (MTurk) annotators when different modalities of information are provided and discuss the patterns of harm that arise from crowd-sourced human demographic prediction. Finally, we discuss the consequences of those biases through auditing the performance of a toxicity detector called Perspective API on the language of Twitter users across a variety of demographic categories.","PeriodicalId":428850,"journal":{"name":"Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia","volume":"116 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-10-12","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124785353","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation
Jungseock Joo, Kimmo Kärkkäinen
Automated computer vision systems have been applied in many domains including security, law enforcement, and personal devices, but recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups. Diagnosing and understanding the underlying true causes of model biases, however, are challenging tasks because modern computer vision systems rely on complex black-box models whose behaviors are hard to decode. We propose to use an encoder-decoder network developed for image attribute manipulation to synthesize facial images varying in the dimensions of gender and race while keeping other signals intact. We use these synthesized images to measure counterfactual fairness of commercial computer vision classifiers by examining the degree to which these classifiers are affected by gender and racial cues controlled in the images, e.g., feminine faces may elicit higher scores for the concept of nurse and lower scores for STEM-related concepts.
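The measurement itself reduces to a paired comparison: classify an image and its attribute-flipped synthetic counterpart, and average the score shift. In the sketch below, `flip_attribute` and `classify` are hypothetical placeholders for the paper's encoder-decoder manipulation network and a commercial classifier.

```python
# Sketch of a counterfactual-fairness measurement; both helper functions are
# hypothetical stand-ins, not the paper's models or a real vision API.
import numpy as np

def flip_attribute(image: np.ndarray, attribute: str) -> np.ndarray:
    # Placeholder: the paper uses an encoder-decoder network to alter e.g.
    # perceived gender while keeping other signals intact.
    return image  # identity stand-in

def classify(image: np.ndarray, concept: str) -> float:
    # Placeholder for a commercial vision API score for `concept` in [0, 1].
    return float(image.mean() % 1)

def counterfactual_gap(images, attribute="gender", concept="nurse"):
    """Mean score change when only the protected attribute is altered.
    Near-zero gaps indicate counterfactual fairness for this concept."""
    diffs = [classify(flip_attribute(im, attribute), concept)
             - classify(im, concept)
             for im in images]
    return float(np.mean(diffs))

faces = [np.random.rand(64, 64, 3) for _ in range(10)]
print("mean score shift:", counterfactual_gap(faces))
```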
{"title":"Gender Slopes: Counterfactual Fairness for Computer Vision Models by Attribute Manipulation","authors":"Jungseock Joo, Kimmo Kärkkäinen","doi":"10.1145/3422841.3423533","DOIUrl":"https://doi.org/10.1145/3422841.3423533","url":null,"abstract":"Automated computer vision systems have been applied in many domains including security, law enforcement, and personal devices, but recent reports suggest that these systems may produce biased results, discriminating against people in certain demographic groups. Diagnosing and understanding the underlying true causes of model biases, however, are challenging tasks because modern computer vision systems rely on complex black-box models whose behaviors are hard to decode. We propose to use an encoder-decoder network developed for image attribute manipulation to synthesize facial images varying in the dimensions of gender and race while keeping other signals intact. We use these synthesized images to measure counterfactual fairness of commercial computer vision classifiers by examining the degree to which these classifiers are affected by gender and racial cues controlled in the images, e.g., feminine faces may elicit higher scores for the concept of nurse and lower scores for STEM-related concepts.","PeriodicalId":428850,"journal":{"name":"Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia","volume":"113 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115957766","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 36
Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia
{"title":"Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia","authors":"","doi":"10.1145/3422841","DOIUrl":"https://doi.org/10.1145/3422841","url":null,"abstract":"","PeriodicalId":428850,"journal":{"name":"Proceedings of the 2nd International Workshop on Fairness, Accountability, Transparency and Ethics in Multimedia","volume":"78 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"1900-01-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116342555","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0