
Latest articles from IEEE transactions on technology and society

A Framework for the Interpretable Modeling of Household Wealth in Rural Communities From Satellite Data
Pub Date: 2024-03-14 DOI: 10.1109/TTS.2024.3377541
Emily J. Zuetell;Paulina Jaramillo
Data-driven policy development and investment are necessary for aligning policies across administrative levels, targeting interventions, and meeting the 2030 Sustainable Development Goals. However, local-level economic well-being data at timely intervals, critical to informing policy development and ensuring equity of outcomes, are unavailable in many parts of the world. Yet, filling these data gaps with black-box predictive models like neural networks introduces risk and inequity to the decision-making process. In this work, we construct an alternative interpretable model to these black-box models to predict household wealth, a key socioeconomic well-being indicator, at 5-km scale from widely available satellite data. Our interpretable model promotes transparency, the identification of potential drivers of bias and harmful outcomes, and inclusive design for human-ML decision-making. We model household wealth as a low-order function of productive land use that can be interpreted and integrated with domain knowledge by decision-makers. We aggregate remotely sensed land cover change data from 2006–2019 to construct an interpretable linear regression model for household wealth and wealth change in Uganda at a 5-km scale with $r^{2}$ = 72%. Our results demonstrate that there is not a clear performance-interpretability tradeoff in modeling household wealth from satellite imagery at high spatial and temporal resolution. Finally, we recommend a tiered framework to model socioeconomic outcomes from remote sensing data.
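The modeling step lends itself to a compact illustration. Below is a minimal sketch, on synthetic data, of the kind of interpretable regression the abstract describes: ordinary least squares relating a wealth index to per-cell land-use features. The class names, cell count, and coefficients are hypothetical stand-ins, not the paper's aggregated 2006–2019 land cover data.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_cells = 500  # hypothetical number of 5-km grid cells

# Hypothetical per-cell intensities of four productive land-use classes.
X = rng.uniform(0.0, 1.0, size=(n_cells, 4))
true_coefs = np.array([0.8, 1.5, 0.3, -0.2])           # synthetic ground truth
wealth = X @ true_coefs + rng.normal(scale=0.2, size=n_cells)

model = LinearRegression().fit(X, wealth)
print("r^2:", round(r2_score(wealth, model.predict(X)), 2))

# Each coefficient reads directly as the marginal association between a
# land-use feature and the wealth index, which is the interpretability
# the abstract argues for.
for name, coef in zip(["cropland", "built-up", "pasture", "forest"], model.coef_):
    print(f"{name}: {coef:+.2f}")
```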
Citations: 0
Rating Sentiment Analysis Systems for Bias Through a Causal Lens
Pub Date: 2024-03-11 DOI: 10.1109/TTS.2024.3375519
Kausik Lakkaraju;Biplav Srivastava;Marco Valtorta
Sentiment Analysis Systems (SASs) are data-driven Artificial Intelligence (AI) systems that assign one or more numbers to convey the polarity and emotional intensity of a given piece of text. However, like other automatic machine learning systems, SASs can exhibit model uncertainty, resulting in drastic swings in output with even small changes in input. This issue becomes more problematic when inputs involve protected attributes like gender or race, as it can be perceived as bias or unfairness. To address this, we propose a novel method to assess and rate SASs. We perturb inputs in a controlled causal setting to test if the output sentiment is sensitive to protected attributes while keeping other components of the textual input, such as chosen emotion words, fixed. Based on the results, we assign labels (ratings) at both fine-grained and overall levels to indicate the robustness of the SAS to input changes. The ratings can help decision-makers improve online content by reducing hate speech, often fueled by biases related to protected attributes such as gender and race. These ratings provide a principled basis for comparing SASs and making informed choices based on their behavior. The ratings also benefit all users, especially developers who reuse off-the-shelf SASs to build larger AI systems but do not have access to their code or training data to compare.
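A minimal sketch of the perturbation idea follows, under assumed details: hold the emotion word fixed, vary only a protected attribute, and flag shifts in the output sentiment. The toy lexicon scorer, template, and names are illustrative assumptions; a real audit would call the SAS under test and map the measured gaps onto the fine-grained and overall ratings.

```python
def toy_sentiment(text: str) -> float:
    # Placeholder scorer: a real test would query the SAS under audit.
    lexicon = {"happy": 1.0, "furious": -1.0}
    return sum(v for k, v in lexicon.items() if k in text.lower())

names = {"male": "John", "female": "Mary"}   # protected-attribute perturbation
emotions = ["happy", "furious"]              # emotion words held fixed
template = "{name} feels {emotion} about the decision."

for emotion in emotions:
    scores = {g: toy_sentiment(template.format(name=n, emotion=emotion))
              for g, n in names.items()}
    gap = abs(scores["male"] - scores["female"])
    # A nonzero gap for the same emotion word flags sensitivity to the
    # protected attribute; thresholds on such gaps would drive the ratings.
    print(emotion, scores, "gap:", gap)
```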
Citations: 0
Understanding Data Valuation: Valuing Google’s Data Assets
Pub Date: 2024-03-08 DOI: 10.1109/TTS.2024.3398400
Kean Birch;Sarah Marquis;Guilherme Cavalcante Silva
Digital personal data are increasingly understood as a key asset in our digital economies. But how should we value such data? Numerous policymakers, regulators, and stakeholders are trying to work out how to manage the collection, use, and valuation of data in order to balance the advantages and disadvantages of its collection and use. The negative implications of data practices may include privacy loss, data breaches, or declining market competition, while social and economic benefits include improved service delivery, more efficient welfare systems, or better products. Increasingly, data are conceptualized as an asset. To understand the value of data as an asset means understanding how data are configured as an asset; data value does not reflect ownership and property rights per se, but rather diverse modes of access and use restrictions (usually delineated by opaque contractual agreements). Data are increasingly controlled by a few, large digital technology firms, especially so-called ‘Big Tech’ firms. In this paper, we use Google as a case study of how Big Tech firms configure and value digital data as an asset. We analyse how Google understands, frames, values, and monetizes the data they collect from users. We qualitatively analyse an extensive dataset of financial documentary materials produced by and about Google to identify the different modes of access and use restrictions that Google deploys to turn digital data into a valuable asset. We conclude that, despite being highly ambiguous, Google’s approach to data value focuses on monetizing users.
Citations: 0
Social and Ethical Norms in Annotation Task Design
Pub Date: 2024-03-07 DOI: 10.1109/TTS.2024.3374639
Razvan Amironesei;Mark Díaz
The development of many machine learning (ML) and artificial intelligence (AI) systems depends on human-labeled data. Human-provided labels act as tags or enriching information that enable algorithms to more easily learn patterns in data in order to train or evaluate a wide range of AI systems. These annotations ultimately shape the behavior of AI systems. Given the scale of ML datasets, which can contain thousands to billions of data points, cost and efficiency play a major role in how data annotations are collected. Yet important tensions arise between meeting scale-related needs and collecting data in a way that reflects real-world nuance and variation. Annotators are typically treated as interchangeable workers who provide a ‘view from nowhere’. We question assumptions of universal ground truth by focusing on the social and ethical aspects that shape annotation task design.
Citations: 0
The Future Role of Clinical Artificial Intelligence: View of Chronic Patients
Pub Date: 2024-03-07 DOI: 10.1109/TTS.2024.3374647
Bijun Wang;Onur Asan;Ting Liao;Mo Mansouri
Artificial intelligence (AI) can transform various aspects of healthcare, including diagnosis, treatment, monitoring, and preventative care. Patients’ attitudes and views are considered critical factors for the development and success of AI-based technology in healthcare delivery. This study explores chronic patients’ perceptions, including their knowledge of AI, their concerns regarding its misuse and abuse, their attitude toward AI involvement, and their views on the future role of AI in healthcare delivery. Recruited through convenience sampling, 219 participants with chronic conditions completed an online survey. This study leveraged the Hayes PROCESS macro to develop a moderated mediation model to analyze the collected data. Our results showed that patients’ knowledge of AI did not directly influence their perceptions of the future of AI in healthcare. Nonetheless, the evidence from the mediational analysis revealed an indirect effect, in which concerns about AI misuse and abuse and extensive AI involvement played a role. Additionally, the level of trustworthiness moderated the relationship between acceptance of extensive AI involvement and patients’ perception of AI’s future role. These findings highlight the importance of considering patients’ views and attitudes towards AI and addressing any concerns or fears they may have in order to build trust and confidence in clinical AI systems, which can ultimately lead to better health outcomes.
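The statistical design can be sketched compactly. The snippet below simulates a moderated mediation in the spirit of the Hayes PROCESS approach the study cites: knowledge (X) to concern (M) to perceived future role (Y), with trustworthiness (W) moderating the second path. The variable roles and effect sizes are hypothetical, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 219  # matches the reported sample size; the data here are simulated
X = rng.normal(size=n)                          # AI knowledge
M = 0.5 * X + rng.normal(size=n)                # concern about misuse (mediator)
W = rng.normal(size=n)                          # trustworthiness (moderator)
Y = 0.4 * M + 0.3 * M * W + rng.normal(size=n)  # perceived future role of AI
df = pd.DataFrame({"X": X, "M": M, "W": W, "Y": Y})

a = smf.ols("M ~ X", df).fit().params["X"]      # path a: X -> M
out = smf.ols("Y ~ X + M + M:W", df).fit()      # path b plus its moderation by W
b, b_mod = out.params["M"], out.params["M:W"]

# Conditional indirect effect of X on Y at trust level w: a * (b + b_mod * w).
for w in (-1.0, 0.0, 1.0):
    print(f"indirect effect at W={w:+.0f}: {a * (b + b_mod * w):+.3f}")
```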
Citations: 0
When AI Fails, Who Do We Blame? Attributing Responsibility in Human–AI Interactions
Pub Date: 2024-03-01 DOI: 10.1109/TTS.2024.3370095
Jordan Richard Schoenherr;Robert Thomson
While previous studies of trust in artificial intelligence have focused on perceived user trust, this paper examines how an external agent (e.g., an auditor) assigns responsibility, perceives trustworthiness, and explains the successes and failures of AI. In two experiments, participants (university students) reviewed scenarios about automation failures and assigned perceived responsibility, trustworthiness, and preferred explanation type. Participants’ cumulative responsibility ratings for three agents (operators, developers, and AI) exceeded 100%, implying that participants were not attributing trust in a wholly rational manner, and that trust in the AI might serve as a proxy for trust in the human software developer. Dissociation between responsibility and trustworthiness suggested that participants used different cues, with the kind of technology and perceived autonomy affecting judgments. Finally, we found that the kind of explanation used to understand a situation differed based on whether the AI succeeded or failed.
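The headline finding, cumulative responsibility ratings exceeding 100%, is easy to see with a toy tally; the scenario names and numbers below are invented for illustration only.

```python
# Invented mean responsibility ratings (%); real values come from participant data.
scenarios = {
    "automation failure A": {"operator": 45, "developer": 50, "AI": 40},
    "automation failure B": {"operator": 30, "developer": 55, "AI": 50},
}
for name, ratings in scenarios.items():
    total = sum(ratings.values())
    flag = " (exceeds 100%: non-additive attribution)" if total > 100 else ""
    print(f"{name}: total = {total}%{flag}")
```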
Citations: 0
Public Interest Technology for Innovation in Global Development: Recommendations for Launching PIT Projects
Pub Date: 2024-03-01 DOI: 10.1109/TTS.2024.3375431
Roba Abbas;Katina Michael;Dinara Davlembayeva;Savvas Papagiannidis;Jeremy Pitt
This paper serves as an Introduction to the Public Interest Technology (PIT) for Innovation in Global Development Special Issue, based on a workshop of the same title held in September 2023. The paper’s contribution is in proposing recommendations and practical guidance to aid in launching PIT projects. We begin by situating the Special Issue in evolving definitions of PIT in Section II, followed by an overview of the PIT ecosystem in Section III to offer a succinct account of the current state of PIT scholarship. The corresponding links to the innovation in global development context are subsequently described in Section IV, in keeping with the theme of the workshop. These links relate to an overview of adjacent fields and concepts; an illustrative example in the information technology for development (ICT4D) field; the identification of gaps in current PIT scholarship; and the preliminary questions that require attention. Next, Section V presents workshop outcomes, in the form of a general overview of the event; the identification of prevalent themes that emerged from and/or were reinforced in the workshop; and a summary of Special Issue papers. The workshop is used as an interdisciplinary catalyst for the explication of more recent PIT developments. These developments are encapsulated in ten recommendations for launching PIT projects in Section VI, intended to direct PIT project managers or lead investigators prior to project launch or during the initial stages of a project.
Citations: 0
Failures in the Loop: Human Leadership in AI-Based Decision-Making
Pub Date: 2024-03-01 DOI: 10.1109/TTS.2024.3378587
Katina Michael;Jordan Richard Schoenherr;Kathleen M. Vogel
The dark side of AI has been a persistent focus in discussions of popular science and academia (Appendix A), with some claiming that AI is “evil” [1]. Many commentators make compelling arguments for their concerns. Techno-elites have also contributed to the polarization of these discussions, with ultimatums that in this new era of industrialized AI, citizens will need to “[join] with the AI or risk being left behind” [2]. With such polarizing language, debates about AI adoption run the risk of being oversimplified. Discussion of technological trust frequently takes an all-or-nothing approach. All technologies – cognitive, social, material, or digital – introduce tradeoffs when they are adopted, and contain both ‘light and dark’ features [3]. But descriptions of these features can take on deceptively (or unintentionally) anthropomorphic tones, especially when stakeholders refer to the features as ‘agents’ [4], [5]. When used as an analogical heuristic, this can inform the design of AI, provide knowledge for AI operations, and potentially even predict its outcomes [6]. However, if AI agency is accepted at face value, we run the risk of having unrealistic expectations for the capabilities of these systems.
Citations: 0
Locating Responsibility in the Future of Human–AI Interactions
Pub Date: 2024-03-01 DOI: 10.1109/TTS.2024.3386247
Ehsan Nabavi;Rob Nicholls;George Roussos
Whether we, as end-users of technology, are aware of it or not, our societies are becoming increasingly entangled in a complex network of interactions with Artificial Intelligence (AI) systems. This goes beyond what is often called ‘Human-AI collaboration’, involving the broader socio-political systems supporting these technologies [1]. Faced with these complex interactions, society grapples with the timeless question: where does responsibility lie for the consequences or results produced by AI systems or applications, whether they are successful or not? Is it with the human operator, AI developer, user, or the AI agent itself? In the case of failure, AI cannot be held accountable, as such software systems are not yet recognized as separate legal entities [2].
Citations: 0
IEEE Transactions on Technology and Society Publication Information
Pub Date: 2024-03-01 DOI: 10.1109/TTS.2024.3374980
{"title":"IEEE Transactions on Technology and Society Publication Information","authors":"","doi":"10.1109/TTS.2024.3374980","DOIUrl":"https://doi.org/10.1109/TTS.2024.3374980","url":null,"abstract":"","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2024-03-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10539307","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164679","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0