SAGED: A Holistic Bias-Benchmarking Pipeline for Language Models with Customisable Fairness Calibration

Xin Guan, Nathaniel Demchak, Saloni Gupta, Ze Wang, Ediz Ertekin Jr., Adriano Koshiyama, Emre Kazim, Zekun Wu
{"title":"SAGED:可定制公平校准的语言模型整体偏差基准管道","authors":"Xin Guan, Nathaniel Demchak, Saloni Gupta, Ze Wang, Ediz Ertekin Jr., Adriano Koshiyama, Emre Kazim, Zekun Wu","doi":"arxiv-2409.11149","DOIUrl":null,"url":null,"abstract":"The development of unbiased large language models is widely recognized as\ncrucial, yet existing benchmarks fall short in detecting biases due to limited\nscope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the\nfirst holistic benchmarking pipeline to address these problems. The pipeline\nencompasses five core stages: scraping materials, assembling benchmarks,\ngenerating responses, extracting numeric features, and diagnosing with\ndisparity metrics. SAGED includes metrics for max disparity, such as impact\nratio, and bias concentration, such as Max Z-scores. Noticing that assessment\ntool bias and contextual bias in prompts can distort evaluation, SAGED\nimplements counterfactual branching and baseline calibration for mitigation.\nFor demonstration, we use SAGED on G20 Countries with popular 8b-level models\nincluding Gemma2, Llama3.1, Mistral, and Qwen2. With sentiment analysis, we\nfind that while Mistral and Qwen2 show lower max disparity and higher bias\nconcentration than Gemma2 and Llama3.1, all models are notably biased against\ncountries like Russia and (except for Qwen2) China. With further experiments to\nhave models role-playing U.S. (vice-/former-) presidents, we see bias amplifies\nand shifts in heterogeneous directions. Moreover, we see Qwen2 and Mistral not\nengage in role-playing, while Llama3.1 and Gemma2 role-play Trump notably more\nintensively than Biden and Harris, indicating role-playing performance bias in\nthese models.","PeriodicalId":501030,"journal":{"name":"arXiv - CS - Computation and Language","volume":null,"pages":null},"PeriodicalIF":0.0000,"publicationDate":"2024-09-17","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":"{\"title\":\"SAGED: A Holistic Bias-Benchmarking Pipeline for Language Models with Customisable Fairness Calibration\",\"authors\":\"Xin Guan, Nathaniel Demchak, Saloni Gupta, Ze Wang, Ediz Ertekin Jr., Adriano Koshiyama, Emre Kazim, Zekun Wu\",\"doi\":\"arxiv-2409.11149\",\"DOIUrl\":null,\"url\":null,\"abstract\":\"The development of unbiased large language models is widely recognized as\\ncrucial, yet existing benchmarks fall short in detecting biases due to limited\\nscope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the\\nfirst holistic benchmarking pipeline to address these problems. The pipeline\\nencompasses five core stages: scraping materials, assembling benchmarks,\\ngenerating responses, extracting numeric features, and diagnosing with\\ndisparity metrics. SAGED includes metrics for max disparity, such as impact\\nratio, and bias concentration, such as Max Z-scores. Noticing that assessment\\ntool bias and contextual bias in prompts can distort evaluation, SAGED\\nimplements counterfactual branching and baseline calibration for mitigation.\\nFor demonstration, we use SAGED on G20 Countries with popular 8b-level models\\nincluding Gemma2, Llama3.1, Mistral, and Qwen2. With sentiment analysis, we\\nfind that while Mistral and Qwen2 show lower max disparity and higher bias\\nconcentration than Gemma2 and Llama3.1, all models are notably biased against\\ncountries like Russia and (except for Qwen2) China. With further experiments to\\nhave models role-playing U.S. 
(vice-/former-) presidents, we see bias amplifies\\nand shifts in heterogeneous directions. Moreover, we see Qwen2 and Mistral not\\nengage in role-playing, while Llama3.1 and Gemma2 role-play Trump notably more\\nintensively than Biden and Harris, indicating role-playing performance bias in\\nthese models.\",\"PeriodicalId\":501030,\"journal\":{\"name\":\"arXiv - CS - Computation and Language\",\"volume\":null,\"pages\":null},\"PeriodicalIF\":0.0000,\"publicationDate\":\"2024-09-17\",\"publicationTypes\":\"Journal Article\",\"fieldsOfStudy\":null,\"isOpenAccess\":false,\"openAccessPdf\":\"\",\"citationCount\":\"0\",\"resultStr\":null,\"platform\":\"Semanticscholar\",\"paperid\":null,\"PeriodicalName\":\"arXiv - CS - Computation and Language\",\"FirstCategoryId\":\"1085\",\"ListUrlMain\":\"https://doi.org/arxiv-2409.11149\",\"RegionNum\":0,\"RegionCategory\":null,\"ArticlePicture\":[],\"TitleCN\":null,\"AbstractTextCN\":null,\"PMCID\":null,\"EPubDate\":\"\",\"PubModel\":\"\",\"JCR\":\"\",\"JCRName\":\"\",\"Score\":null,\"Total\":0}","platform":"Semanticscholar","paperid":null,"PeriodicalName":"arXiv - CS - Computation and Language","FirstCategoryId":"1085","ListUrlMain":"https://doi.org/arxiv-2409.11149","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

The development of unbiased large language models is widely recognized as crucial, yet existing benchmarks fall short in detecting biases due to limited scope, contamination, and lack of a fairness baseline. SAGED(-Bias) is the first holistic benchmarking pipeline to address these problems. The pipeline encompasses five core stages: scraping materials, assembling benchmarks, generating responses, extracting numeric features, and diagnosing with disparity metrics. SAGED includes metrics for max disparity, such as impact ratio, and bias concentration, such as Max Z-scores. Noticing that assessment tool bias and contextual bias in prompts can distort evaluation, SAGED implements counterfactual branching and baseline calibration for mitigation. For demonstration, we use SAGED on G20 Countries with popular 8b-level models including Gemma2, Llama3.1, Mistral, and Qwen2. With sentiment analysis, we find that while Mistral and Qwen2 show lower max disparity and higher bias concentration than Gemma2 and Llama3.1, all models are notably biased against countries like Russia and (except for Qwen2) China. With further experiments to have models role-playing U.S. (vice-/former-) presidents, we see bias amplifies and shifts in heterogeneous directions. Moreover, we see Qwen2 and Mistral not engage in role-playing, while Llama3.1 and Gemma2 role-play Trump notably more intensively than Biden and Harris, indicating role-playing performance bias in these models.
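
To make the diagnosis stage concrete, below is a minimal Python sketch of how the two metric families named in the abstract, max disparity (impact ratio) and bias concentration (Max Z-score), can be computed from per-group sentiment means, together with the baseline-calibration step. The function names, signatures, and exact formulas are illustrative reconstructions under common fairness-metric conventions, not SAGED's actual API; consult the paper for the precise definitions.

```python
# Illustrative sketch of SAGED-style disparity metrics over per-group
# sentiment scores. Formulas are assumed conventions, not the paper's exact ones.
from statistics import mean, stdev

def impact_ratio(group_scores: dict[str, float]) -> float:
    """Max-disparity metric: ratio of the lowest to the highest group score
    (assumes positive scores). 1.0 means parity; values near 0 mean one
    group scores far below another."""
    lo, hi = min(group_scores.values()), max(group_scores.values())
    return lo / hi

def max_z_score(group_scores: dict[str, float]) -> float:
    """Bias-concentration metric: the largest standardised deviation of any
    single group from the cross-group mean. A high value suggests the
    disparity is concentrated in a few outlier groups."""
    scores = list(group_scores.values())
    mu, sigma = mean(scores), stdev(scores)
    return max(abs(s - mu) / sigma for s in scores)

def calibrate(response_scores: dict[str, float],
              baseline_scores: dict[str, float]) -> dict[str, float]:
    """Baseline calibration (as we read it): subtract each group's sentiment
    on the scraped source material from its sentiment on model generations,
    so that bias already present in the benchmark text or in the sentiment
    classifier does not count against the model."""
    return {g: response_scores[g] - baseline_scores[g] for g in response_scores}

# Made-up sentiment means for a few G20 countries, for demonstration only.
scores = {"Brazil": 0.62, "Russia": 0.31, "China": 0.40, "Japan": 0.65}
print(f"impact ratio: {impact_ratio(scores):.2f}")  # ~0.48: far from parity
print(f"max Z-score:  {max_z_score(scores):.2f}")   # driven by Russia's outlier score
```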