Advances in automatically rating the trustworthiness of text processing services

Biplav Srivastava, Kausik Lakkaraju, Mariana Bernagozzi, Marco Valtorta
{"title":"Advances in automatically rating the trustworthiness of text processing services","authors":"Biplav Srivastava,&nbsp;Kausik Lakkaraju,&nbsp;Mariana Bernagozzi,&nbsp;Marco Valtorta","doi":"10.1007/s43681-023-00391-5","DOIUrl":null,"url":null,"abstract":"<div><p>AI services are known to have unstable behavior when subjected to changes in data, models or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black-box setting, where the consumer does not have access to the AI’s source code or training data, is limited. The consumer has to rely on the AI developer’s documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is at the risk of the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in food industry to promote health and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems, so that the consumer is informed about the risks and can make an informed decision. In this paper, we will first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising with user studies. 
Then, we will outline challenges and vision for a principled, multimodal, causality-based rating methodologies and its implication for decision-support in real-world scenarios like health and food recommendation.</p></div>","PeriodicalId":72137,"journal":{"name":"AI and ethics","volume":"4 1","pages":"5 - 13"},"PeriodicalIF":0.0000,"publicationDate":"2023-12-19","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":"0","resultStr":null,"platform":"Semanticscholar","paperid":null,"PeriodicalName":"AI and ethics","FirstCategoryId":"1085","ListUrlMain":"https://link.springer.com/article/10.1007/s43681-023-00391-5","RegionNum":0,"RegionCategory":null,"ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":null,"EPubDate":"","PubModel":"","JCR":"","JCRName":"","Score":null,"Total":0}
引用次数: 0

Abstract

AI services are known to have unstable behavior when subjected to changes in data, models, or users. Such behaviors, whether triggered by omission or commission, lead to trust issues when AI works with humans. The current approach of assessing AI services in a black-box setting, where the consumer does not have access to the AI's source code or training data, is limited. The consumer has to rely on the AI developer's documentation and trust that the system has been built as stated. Further, if the AI consumer reuses the service to build other services which they sell to their customers, the consumer is exposed to the risks introduced by the service providers (both data and model providers). Our approach, in this context, is inspired by the success of nutritional labeling in the food industry to promote health, and seeks to assess and rate AI services for trust from the perspective of an independent stakeholder. The ratings become a means to communicate the behavior of AI systems, so that the consumer is informed about the risks and can make an informed decision. In this paper, we first describe recent progress in developing rating methods for text-based machine translator AI services that have been found promising in user studies. Then, we outline challenges and a vision for principled, multimodal, causality-based rating methodologies and their implications for decision support in real-world scenarios like health and food recommendation.
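To make the black-box rating idea concrete, here is a minimal illustrative sketch (not the authors' published method) of how an independent stakeholder might probe a text service without access to its source code or training data: send paraphrased versions of the same input, compare the outputs, and map average output stability to a coarse label-style grade. The `rate_service` function, the three-level grades, and the similarity thresholds are all assumptions made for illustration.

```python
import difflib
from statistics import mean

def similarity(a: str, b: str) -> float:
    """Crude lexical similarity between two output strings (0.0 to 1.0)."""
    return difflib.SequenceMatcher(None, a, b).ratio()

def rate_service(service, paraphrase_sets, thresholds=(0.9, 0.7)):
    """Black-box stability probe: for each set of paraphrases of one input,
    call the service on every variant, average pairwise output similarity,
    and map the overall average to a coarse three-level rating.
    `service` is any callable str -> str; internals are never inspected."""
    set_scores = []
    for variants in paraphrase_sets:
        outputs = [service(v) for v in variants]
        pairs = [similarity(outputs[i], outputs[j])
                 for i in range(len(outputs))
                 for j in range(i + 1, len(outputs))]
        set_scores.append(mean(pairs))
    avg = mean(set_scores)
    high, low = thresholds
    if avg >= high:
        return "stable", avg
    if avg >= low:
        return "partially stable", avg
    return "unstable", avg

# Demo with a mock "service" (uppercasing stands in for a real translator):
rating, score = rate_service(str.upper,
                             [["hello world", "hello, world"],
                              ["good morning", "good morning!"]])
print(rating, round(score, 2))
```

A real rating pipeline would replace the lexical similarity with a task-appropriate metric (e.g., semantic similarity for translation) and add bias probes across user groups, but the black-box contract is the same: the rater only observes input-output behavior.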
