
ACM Journal on Responsible Computing: Latest Publications

Improving Group Fairness Assessments with Proxies
Pub Date: 2024-07-24 DOI: 10.1145/3677175
Emma Harvey, M. S. Lee, Jatinder Singh
Although algorithms are increasingly used to guide real-world decision-making, their potential for propagating bias remains challenging to measure. A common approach for researchers and practitioners examining algorithms for unintended discriminatory biases is to assess group fairness, which compares outcomes across typically sensitive or protected demographic features like race, gender, or age. In practice, however, data representing these group attributes is often not collected, or may be unavailable due to policy, legal, or other constraints. As a result, practitioners often find themselves tasked with assessing fairness in the face of these missing features. In such cases, they can either forgo a bias audit, obtain the missing data directly, or impute it. Because obtaining additional data is often prohibitively expensive or raises privacy concerns, many practitioners attempt to impute missing data using proxies. Through a survey of the data used in the algorithmic fairness literature, which we make public to facilitate future research, we show that when available at all, most publicly available proxy sources are in the form of summary tables, which contain only aggregate statistics about a population. Prior work has found that these proxies are not predictive enough on their own to accurately measure group fairness. Even proxy variables that are correlated with group attributes contain noise (i.e., they will predict attributes for a subset of the population effectively at random). Here, we outline a method for improving accuracy in measuring group fairness using summary tables. Specifically, we propose improving accuracy by focusing only on highly predictive values within proxy variables, and outline the conditions under which these proxies can estimate fairness disparities with high accuracy. We then show that a major disqualifying criterion (an association between the proxy and the outcome) can be controlled for using causal inference. Finally, we show that when proxy data is missing altogether, our approach is applicable to rule-based proxies constructed by applying subject-matter context to the original data alone. Crucially, we are able to extract information on group disparities from proxies that may have low discriminatory power at the population level. We illustrate our results through a variety of case studies with real and simulated data. In all, we present a viable method allowing the assessment of fairness in the face of missing data, with limited privacy implications and without needing to rely on complex, expensive, or proprietary data sources.
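To make the core idea concrete, here is a minimal Python sketch (using pandas) of estimating a demographic-parity gap from a summary-table proxy while keeping only highly predictive proxy values, as the abstract describes. The table, records, and the 0.8 threshold are all hypothetical, and the estimator is a simplified stand-in for the paper's method, not the authors' implementation.

```python
import pandas as pd

# Hypothetical summary table: for each proxy value (e.g., a geographic
# area), the share of the population belonging to group A.
summary = pd.DataFrame({
    "proxy_value": ["v1", "v2", "v3", "v4"],
    "p_group_a":   [0.95, 0.90, 0.55, 0.05],  # P(group A | proxy value)
})

# Hypothetical individual-level data with outcomes but no group labels.
records = pd.DataFrame({
    "proxy_value": ["v1", "v1", "v2", "v3", "v4", "v4", "v4"],
    "favorable":   [1, 0, 1, 1, 0, 1, 0],
})

THRESHOLD = 0.8  # keep only highly predictive proxy values

df = records.merge(summary, on="proxy_value")
# Restrict to proxy values that predict group membership strongly
# in either direction (toward group A or toward group B).
confident = df[(df.p_group_a >= THRESHOLD) | (df.p_group_a <= 1 - THRESHOLD)]

rate_a = confident.loc[confident.p_group_a >= THRESHOLD, "favorable"].mean()
rate_b = confident.loc[confident.p_group_a <= 1 - THRESHOLD, "favorable"].mean()
print(f"Estimated demographic-parity gap: {rate_a - rate_b:+.3f}")
```

Dropping the ambiguous value v3 is the point of the thresholding step: rows whose proxy is close to uninformative would otherwise inject noise into both groups' estimated rates.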
Citations: 0
Navigating the EU AI Act Maze using a Decision-Tree Approach
Pub Date: 2024-07-10 DOI: 10.1145/3677174
Hilmy Hanif, Jorge Constantino, Marie-Theres Sekwenz, M. van Eeten, J. Ubacht, Ben Wagner, Yury Zhauniarovich
The AI Act represents a significant legislative effort by the European Union to govern the use of AI systems according to different risk-related classes, imposing different degrees of compliance obligations on users and providers of AI systems. However, it is often critiqued because the classification of AI systems into the corresponding risk classes is difficult for the general public to comprehend and apply effectively. To mitigate these shortcomings, we propose a decision-tree-based framework aimed at increasing legal compliance and classification clarity. Through a quantitative evaluation, we show that our framework is especially beneficial to individuals without a legal background, allowing them to improve the accuracy and speed of AI system classification under the AI Act. The results of a qualitative study show that the framework is helpful to all participants, allowing them to justify intuitively made decisions and making the classification process clearer.
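The general shape of such a framework can be sketched as a chain of yes/no questions. The four risk tiers below follow the AI Act's widely described structure; the specific questions and key names are illustrative assumptions, not the paper's actual decision tree.

```python
def classify_ai_system(answers: dict) -> str:
    """Classify an AI system into one of the AI Act's four risk tiers
    by walking a chain of yes/no questions (a degenerate decision tree)."""
    if answers.get("uses_prohibited_practice"):   # e.g., social scoring
        return "unacceptable risk (prohibited)"
    if answers.get("is_annex_iii_use_case"):      # e.g., hiring, credit scoring
        return "high risk (strict compliance obligations)"
    if answers.get("interacts_with_humans"):      # e.g., chatbots, deepfakes
        return "limited risk (transparency obligations)"
    return "minimal risk (voluntary codes of conduct)"


print(classify_ai_system({"is_annex_iii_use_case": True}))
# -> high risk (strict compliance obligations)
```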
Citations: 0
This Is Going on Your Permanent Record: A Legal Analysis of Educational Data in the Cloud
Pub Date: 2024-07-04 DOI: 10.1145/3675230
Ben Cohen, Ashley Hu, Deisy Patino, Joel Coffman
Moving operations to the cloud has become a way of life for many educational institutions. Much of the information these institutions store in the cloud is protected by the Family Educational Rights and Privacy Act (FERPA), which was last amended in 2002, well before cloud computing became ubiquitous. The application of a 1974 law to 21st-century technology presents a plethora of legal and technical questions. In this article, we present an interdisciplinary analysis of these issues. We examine existing statutes and case law as well as contemporary research into cloud security, focusing on the impact of the latter on the former. We find that FERPA excludes information that students and faculty often believe is protected, and that lower-court decisions have created further ambiguity. We additionally find that, given current technology, the statute is no longer sufficient to protect student data, and we present recommendations for revisions.
Citations: 0
Mapping the complexity of legal challenges for trustworthy drones on construction sites in the United Kingdom
Pub Date: 2024-05-14 DOI: 10.1145/3664617
Joshua Krook, David M. Bossens, Peter D. Winter, John Downer, Shane Windsor
Drones, unmanned aircraft controlled remotely and equipped with cameras, have seen widespread deployment across military, industrial, and commercial domains. The commercial sector, in particular, has experienced rapid growth, outpacing regulatory developments due to substantial financial incentives. The UK construction sector exemplifies a case where the regulatory framework for drones remains unclear. This article investigates the state of UK legislation on commercial drone use in construction through a thematic analysis of peer-reviewed literature. Four main themes, including opportunities, safety risks, privacy risks, and the regulatory context, were identified along with twenty-one sub-themes such as noise and falling materials. Findings reveal a fragmented regulatory landscape combining byelaws, national laws, and EU regulations, which creates uncertainty for businesses. Our study recommends the establishment of specific national guidelines for commercial drone use, addressing uncertainties and building public trust, especially in anticipation of the integration of 'autonomous' drones. This research contributes to the responsible computing domain by uncovering regulatory gaps and issues in UK drone law, particularly within the often-overlooked context of the construction sector. The insights provided aim to inform future responsible computing practices and policy development in the evolving landscape of commercial drone technology.
Citations: 0
Optimising Human-Machine Collaboration for Efficient High-Precision Information Extraction from Text Documents
Pub Date: 2024-03-26 DOI: 10.1145/3652591
Bradley Butcher, Miri Zilka, Jiri Hron, Darren Cook, Adrian Weller
From science to law enforcement, many research questions are answerable only by poring over a large number of unstructured text documents. While people can extract information from such documents with high accuracy, doing so is often too time-consuming to be practical. On the other hand, automated approaches produce nearly immediate results but are not reliable enough for applications where near-perfect precision is essential. Motivated by two use cases from criminal justice, we consider the benefits and drawbacks of various human-only, human-machine, and machine-only approaches. Finding no tool well suited to our use cases, we develop a human-in-the-loop method for fast but accurate extraction of structured data from unstructured text. The tool is based on automated extraction followed by human validation, and is particularly useful in cases where purely manual extraction is not practical. Testing on three criminal justice datasets, we find that the combination of computer speed and human understanding yields precision comparable to manual annotation while requiring only a fraction of the time, and significantly outperforms the precision of all fully automated baselines.
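The extract-then-validate pattern described here can be sketched in a few lines of Python. The date-extraction task, the regex, and the one-match confidence rule are hypothetical stand-ins for the paper's extraction models, and ask_human stands in for a review interface.

```python
import re

DATE = re.compile(r"\b\d{4}-\d{2}-\d{2}\b")  # toy extractor: ISO dates

def extract_with_validation(documents, ask_human):
    """Automated extraction followed by human validation: unambiguous
    hits pass straight through; everything else is routed to a person."""
    results = []
    for doc in documents:
        hits = DATE.findall(doc)
        if len(hits) == 1:       # high confidence: exactly one candidate
            results.append(hits[0])
        else:                    # ambiguous or missing: a human decides
            results.append(ask_human(doc, hits))
    return results

docs = ["Sentenced on 2021-03-15.",
        "Hearings held on 2020-01-02 and 2020-02-03."]
# Stand-in reviewer that always picks the first candidate.
print(extract_with_validation(docs, ask_human=lambda doc, hits: hits[0]))
```

The payoff of this division of labour is that the human only sees the ambiguous minority of documents, which is what lets the combined pipeline approach manual precision at a fraction of the time.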
Citations: 0
Investigating gender and racial biases in DALL-E Mini Images
Pub Date: 2024-03-01 DOI: 10.1145/3649883
Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano, Colin Klein
Generative artificial intelligence systems based on transformers, including both text generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces tend to represent dozens of different occupations as populated either solely by men (e.g., pilot, builder, plumber) or solely by women (e.g., hairdresser, receptionist, dietitian). In addition, the images DALL-E Mini produces tend to represent most occupations as populated primarily or solely by White people (e.g., farmer, painter, prison officer, software engineer) and very few by non-White people (e.g., pastor, rapper). These findings suggest that exciting new AI technologies should be critically scrutinized, and perhaps regulated, before they are unleashed on society.
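An audit of this kind ultimately reduces to counting labeled generations per occupation. A minimal sketch follows, with entirely made-up annotations standing in for the study's human-labeled images.

```python
from collections import Counter

# Hypothetical per-image labels: (occupation prompt, perceived gender).
annotations = [
    ("pilot", "man"), ("pilot", "man"), ("pilot", "man"),
    ("hairdresser", "woman"), ("hairdresser", "woman"),
    ("dietitian", "woman"), ("dietitian", "man"),
]

counts: dict[str, Counter] = {}
for occupation, gender in annotations:
    counts.setdefault(occupation, Counter())[gender] += 1

for occupation, tally in counts.items():
    total = sum(tally.values())
    shares = {g: round(n / total, 2) for g, n in tally.items()}
    print(occupation, shares)
# An occupation whose generated images are 100% one gender
# (e.g., pilot -> {'man': 1.0}) is the kind of skew reported above.
```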
Citations: 0
Transparency-Check: An Instrument for the Study and Design of Transparency in AI-based Personalization Systems
Pub Date: 2023-12-08 DOI: 10.1145/3636508
Laura Schelenz, Avi Segal, Oduma Adelio, K. Gal
As AI-based systems become commonplace in our daily lives, they need to provide understandable information to their users about how they collect, process, and output information that concerns them. Such transparency practices have gained significance due to recent ethical guidelines and regulation, as well as research suggesting a positive relationship between the transparency of AI-based systems and users' satisfaction. This paper provides a new tool for the design and study of transparency in AI-based systems that use personalization. The tool, called Transparency-Check, is based on a checklist of questions about transparency in four areas of a system: input (data collection), processing (algorithmic models), output (personalized recommendations), and user control (user feedback mechanisms to adjust elements of the system). Transparency-Check can be used by researchers, designers, and end users of computer systems. To demonstrate the usefulness of Transparency-Check from a researcher's perspective, we collected the responses of 108 student participants who used the transparency checklist to rate five popular real-world systems (Amazon, Facebook, Netflix, Spotify, and YouTube). Based on users' subjective evaluations, the systems showed low compliance with transparency standards, with some nuances across individual categories (specifically data collection, processing, and user control). We use these results to compile design recommendations for improving transparency in AI-based systems, such as integrating information about the system's behavior during the user's interactions with it.
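The checklist structure lends itself to a simple per-area score. In the sketch below, the four area names come from the abstract, while the individual questions and the scoring rule are assumptions for illustration, not the published instrument.

```python
# Four transparency areas (from the abstract); questions are illustrative.
CHECKLIST = {
    "input":      ["Is it clear what data the system collects?"],
    "processing": ["Is the role of algorithmic models explained?"],
    "output":     ["Is it clear why a recommendation was shown?"],
    "control":    ["Can users adjust or correct the personalization?"],
}

def transparency_score(ratings: dict[str, list[bool]]) -> dict[str, float]:
    """Share of checklist items a rater judged satisfied, per area."""
    return {area: sum(answers) / len(answers)
            for area, answers in ratings.items()}

ratings = {"input": [True], "processing": [False],
           "output": [True], "control": [False]}
print(transparency_score(ratings))
# -> {'input': 1.0, 'processing': 0.0, 'output': 1.0, 'control': 0.0}
```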
Citations: 0