
Latest publications in IEEE Transactions on Technology and Society

Explaining Technology We Do Not Understand
Pub Date : 2023-01-30 DOI: 10.1109/TTS.2023.3240107
Greg Adamson
Since 2016 a significant program of work has been initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) under the title of explainable artificial intelligence (XAI). This program is seen as important for AI adoption, in this case to include the needs of warfighters to effectively collaborate with AI “partners.” Technology adoption is often promoted based on beliefs that bear little relationship to the benefit a technology will provide. These beliefs include “progress,” technology superiority, and technology as cornucopia. The XAI program has widely promoted a new belief: that AI is in general explainable. As AI systems often have concealed or black-box characteristics, the problem of explainability is significant. This paper argues that, due to their complexity, AI systems should be approached in a way similar to the way the scientific method is used to approach natural phenomena. One approach encouraged by DARPA, model induction, is based on post-hoc reasoning. Such inductive reasoning is consistent with the scientific method. However, that method has a history of controls that are applied to create confidence in an uncertain, inductive outcome. The paper proposes some controls consistent with a philosophical examination of black boxes. As AI systems are being used to determine who should have access to scarce resources and who should be punished and in what way, the claim that AI can be explained is important. Widespread recent experimentation with ChatGPT has also highlighted the challenges and expectations of AI systems.
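For readers unfamiliar with model induction, one common realization of the idea is the global surrogate: an interpretable model is induced post hoc from the observed input-output behavior of a black box, and its fidelity to that black box is then quantified. The sketch below illustrates the general technique and is not code from the paper; the synthetic dataset, model choices, and depth limit are all assumptions.

```python
# Illustrative sketch of post-hoc model induction via a global surrogate:
# an interpretable tree is induced from the *predictions* of a black box,
# then checked for fidelity. Data and model choices are assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# The "black box": its internals are treated as unobservable.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Induce a surrogate from the black box's outputs, not the true labels.
y_hat = black_box.predict(X)
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y_hat)

# Fidelity: how often the surrogate agrees with the black box it explains.
fidelity = accuracy_score(y_hat, surrogate.predict(X))
print(f"Surrogate fidelity to black box: {fidelity:.3f}")
print(export_text(surrogate))  # the induced, human-readable explanation
```

The fidelity measure is one concrete example of the kind of control the paper calls for: an induced explanation is only as trustworthy as its measured agreement with the system it claims to explain.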
{"title":"Explaining Technology We Do Not Understand","authors":"Greg Adamson","doi":"10.1109/TTS.2023.3240107","DOIUrl":"10.1109/TTS.2023.3240107","url":null,"abstract":"Since 2016 a significant program of work has been initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) under the title of explainable artificial intelligence (XAI). This program is seen as important for AI adoption, in this case to include the needs of warfighters to effectively collaborate with AI “partners.” Technology adoption is often promoted based on beliefs, which bears little relationship to the benefit a technology will provide. These beliefs include “progress,” technology superiority, and technology as cornucopia. The XAI program has widely promoted a new belief: that AI is in general explainable. As AI systems often have concealed or black box characteristics, the problem of explainability is significant. This paper argues that due to their complexity, AI systems should be approached in a way similar to the way the scientific method is used to approach natural phenomena. One approach encouraged by DARPA, model induction, is based on post-hoc reasoning. Such inductive reasoning is consistent with the scientific method. However, that method has a history of controls that are applied to create confidence in an uncertain, inductive, outcome. The paper proposes some controls consistent with a philosophical examination of black boxes. As AI systems are being used to determine who should have access to scarce resources and who should be punished and in what way, the claim that AI can be explained is important. Widespread recent experimentation with ChatGPT has also highlighted the challenges and expectations of AI systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"34-45"},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45789424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
It is Not “Accuracy vs. Explainability”—We Need Both for Trustworthy AI Systems
Pub Date : 2023-01-30 DOI: 10.1109/TTS.2023.3239921
Dragutin Petkovic
We are witnessing the emergence of an “AI economy and society” where AI technologies and applications are increasingly impacting health care, business, transportation, defense and many aspects of everyday life. Many successes have been reported where AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These recent shortcomings and concerns have been documented in both the scientific and general press: accidents with self-driving cars; biases in healthcare, hiring, and face recognition systems for people of color; and seemingly correct decisions later found to have been made for the wrong reasons. This has resulted in the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency and safety. The challenges in delivering trustworthy AI systems have motivated intense research on explainable AI systems (XAI). The original aim of XAI is to provide human-understandable information about how AI systems make their decisions in order to increase user trust. In this paper we first very briefly summarize current XAI work and then challenge recent arguments that present “accuracy vs. explainability” as mutually exclusive and that focus mainly on deep learning with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivery of high-stakes trustworthy AI systems, e.g., development; validation/certification; and trustworthy production and maintenance.
{"title":"It is Not “Accuracy vs. Explainability”—We Need Both for Trustworthy AI Systems","authors":"Dragutin Petkovic","doi":"10.1109/TTS.2023.3239921","DOIUrl":"10.1109/TTS.2023.3239921","url":null,"abstract":"We are witnessing the emergence of an “AI economy and society” where AI technologies and applications are increasingly impacting health care, business, transportation, defense and many aspects of everyday life. Many successes have been reported where AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency resulting in reduction in trust and challenges to their adoption. These recent shortcomings and concerns have been documented in both the scientific and general press such as accidents with self-driving cars, biases in healthcare or hiring and face recognition systems for people of color, and seemingly correct decisions later found to be made due to wrong reasons etc. This has resulted in the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency and safety. The challenges in delivery of trustworthy AI systems have motivated intense research on explainable AI systems (XAI). The original aim of XAI is to provide human understandable information of how AI systems make their decisions in order to increase user trust. In this paper we first very briefly summarize current XAI work and then challenge the recent arguments that present “accuracy vs. explainability” as being mutually exclusive and for focusing mainly on deep learning with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivery of high stakes trustworthy AI systems, e.g., development; validation/certification; and trustworthy production and maintenance.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"46-53"},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48834986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2
Smart Education in Smart Cities: Layered Implications for Networked and Ubiquitous Learning
Pub Date : 2023-01-25 DOI: 10.1109/TTS.2023.3239586
Jason C. K. Tham;Gustav Verhulsdonck
The development of smart cities worldwide is bringing about new processes and methods for enhancing teaching and learning in a networked age. As smart cities rely on analytics and digital capabilities to connect people and everyday activities so as to improve the quality of life, they can bring new layers of concerns for schools and educational institutions engaging with next-generation learning environments. Drawing on cases from around the world, and specifically from developing smart cities, this paper calls attention to key implications of smart cities and smart education design for networked learning. We focus on the layers of design ethics, data practices, roles, and delivery afforded by new learning infrastructures in smart cities, and then propose a “stack” analogy for designing ubiquitous learning.
{"title":"Smart Education in Smart Cities: Layered Implications for Networked and Ubiquitous Learning","authors":"Jason C. K. Tham;Gustav Verhulsdonck","doi":"10.1109/TTS.2023.3239586","DOIUrl":"10.1109/TTS.2023.3239586","url":null,"abstract":"The development of smart cities worldwide is bringing about new processes and methods for enhancing teaching and learning in a networked age. As smart cities rely on analytics and digital capabilities to connect people and everyday activities so as to improve the quality of life, they can bring new layers of concerns for schools and educational institutions engaging the next-gen learning environment. Drawing from cases from around the world and specifically from developing smart cities, this paper calls attention to key implications of smart cities and smart education design on networked learning. We focus on layers of design ethics, data practices, roles, and delivery afforded by new learning infrastructures in smart cities, then proposing a “stack” analogy for designing ubiquitous learning.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"87-95"},"PeriodicalIF":0.0,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48428918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 6
Automatic and Efficient Framework for Identifying Multiple Neurological Disorders From EEG Signals
Pub Date : 2023-01-25 DOI: 10.1109/TTS.2023.3239526
Md. Nurul Ahad Tawhid;Siuly Siuly;Kate Wang;Hua Wang
Neurological disorders place a huge burden on global health and are recognized as major causes of death and disability worldwide. There are more than 600 neurological diseases, but there is not yet a standard automatic detection system that can identify multiple neurological disorders within a single framework. Hence, this study aims to develop a common computer-aided diagnosis (CAD) system for the automatic detection of multiple neurological disorders from EEG signals. We introduce a new single framework for the automatic identification of four common neurological disorders, namely autism, epilepsy, Parkinson’s disease, and schizophrenia, from EEG data. The proposed framework is based on a convolutional neural network (CNN) and spectrogram images of the EEG signal, and classifies the four neurological disorders against healthy subjects (five classes). In the proposed design, the EEG signals are first pre-processed to remove artifacts and noise and then converted into two-dimensional time-frequency spectrogram images using the short-time Fourier transform. A CNN model is then designed to perform five-class classification using those spectrogram images. The proposed method achieves much better performance in both efficiency and accuracy than two other popular CNN models, AlexNet and ResNet50. In addition, the performance of the proposed model is also evaluated on binary classification (disease vs. healthy), where it likewise outperforms state-of-the-art results on the tested datasets. The obtained results suggest that the proposed framework will be helpful in developing a CAD system to assist clinicians and experts in the automatic diagnosis process.
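The described pipeline (artifact and noise removal, short-time Fourier transform to a spectrogram image, then a CNN over five classes) can be sketched briefly. The filter band, STFT window, and network layout below are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch of the described pipeline: EEG -> band-pass filter ->
# STFT spectrogram image -> small CNN for five-class classification.
# All hyperparameters are assumptions, not the paper's published settings.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import butter, filtfilt, stft

FS = 256  # assumed EEG sampling rate in Hz

def preprocess(eeg: np.ndarray) -> np.ndarray:
    """Band-pass 0.5-45 Hz to suppress drift and high-frequency noise."""
    b, a = butter(4, [0.5, 45.0], btype="band", fs=FS)
    return filtfilt(b, a, eeg)

def to_spectrogram(eeg: np.ndarray) -> np.ndarray:
    """Short-time Fourier transform -> 2-D time-frequency magnitude image."""
    _, _, Z = stft(eeg, fs=FS, nperseg=128, noverlap=64)
    return np.log1p(np.abs(Z)).astype(np.float32)

class SpectrogramCNN(nn.Module):
    """Small CNN over one-channel spectrogram images; five output classes
    (healthy plus the four disorders)."""
    def __init__(self, n_classes: int = 5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage on one synthetic four-second, single-channel recording:
spec = to_spectrogram(preprocess(np.random.randn(FS * 4)))
logits = SpectrogramCNN()(torch.from_numpy(spec)[None, None])
print(logits.shape)  # torch.Size([1, 5])
```

In a real setting, one spectrogram would be produced per channel or segment and the network trained with cross-entropy over the five diagnostic classes.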
{"title":"Automatic and Efficient Framework for Identifying Multiple Neurological Disorders From EEG Signals","authors":"Md. Nurul Ahad Tawhid;Siuly Siuly;Kate Wang;Hua Wang","doi":"10.1109/TTS.2023.3239526","DOIUrl":"10.1109/TTS.2023.3239526","url":null,"abstract":"The burden of neurological disorders is huge on global health and recognized as major causes of death and disability worldwide. There are more than 600 neurological diseases, but there is no unique automatic standard detection system yet to identify multiple neurological disorders using a single framework. Hence, this study aims to develop a common computer-aided diagnosis (CAD) system for automatic detection of multiple neurological disorders from EEG signals. In this study, we introduce a new single framework for automatic identification of four common neurological disorders, namely autism, epilepsy, parkinson’s disease, and schizophrenia, from EEG data. The proposed framework is designed based on convolutional neural network (CNN) and spectrogram images of EEG signal for classifying four neurological disorders from healthy subjects (five classes). In the proposed design, firstly, the EEG signals are pre-processed for removing artifacts and noises and then converted into two-dimensional time-frequency-based spectrogram images using short-time Fourier transform. Afterwards, a CNN model is designed to perform five-class classification using those spectrogram images. The proposed method achieves much better performance in both efficiency and accuracy compared to two other popular CNN models: AlexNet and ResNet50. In addition, the performance of the proposed model is also evaluated on binary classification (disease vs. healthy) which also outperforms the state-of-the-art results for tested datasets. The obtained results recommend that our proposed framework will be helpful for developing a CAD system to assist the clinicians and experts in the automatic diagnosis process.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"76-86"},"PeriodicalIF":0.0,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43529626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 3
Exploring the Application of Design Thinking Methodology in Cellular Communications Network Planning and Deployment
Pub Date : 2023-01-23 DOI: 10.1109/TTS.2023.3239261
Iñigo Cuiñas;Anna Laska-Leśniewicz;Katarzyna Znajdek;Dorota Kamińska
Cellular network planning and deployment represent a set of activities that operators have been performing since the advent of mobile wireless communication systems. Decisions along this route have traditionally been based on engineering or commercial criteria, focusing on providing the best service to users. Although operator companies gather information on perceived quality of service among their subscribers, mainly through surveys, engineers work with numerical data more than with people’s interests, feelings, perceptions, or hopes. In other sectors, Design Thinking has arisen as a methodology that allows designers to involve real-world users in implementing new products or services. We therefore introduce this methodology to cellular networking, exploring its application in a process that could be designed in a more human-centered way. With this aim, we analyzed the different steps that could be included in network planning and deployment. From this analysis, we identified which actions could be explored from a human-centered view and thus improved with some Design Thinking techniques. Finally, we conducted a limited pilot experience, comparing the insights obtained from an empathic interview with those given by traditional surveys, to show what Design Thinking could provide.
{"title":"Exploring the Application of Design Thinking Methodology in Cellular Communications Network Planning and Deployment","authors":"Iñigo Cuiñas;Anna Laska-Leśniewicz;Katarzyna Znajdek;Dorota Kamińska","doi":"10.1109/TTS.2023.3239261","DOIUrl":"10.1109/TTS.2023.3239261","url":null,"abstract":"Cellular network planning and deployment represent a set of activities that operators have been performing since the advent of mobile wireless communication systems. Decisions along this route traditionally have been based on engineering or commercial criteria, focusing on providing the best service to users. Although operator companies gather information regarding perceived quality of service among their subscribers, mainly using surveys, engineers work with numerical data more than with people’s interests, feelings, perceptions, or hopes. In other sectors, Design Thinking has arisen as a methodology that allows designers to involve real-world users in implementing new products or services. Thus, we are introducing this methodology for cellular networking, exploring its application in a process that could be designed in a more human-centered way. With this aim, we analyzed the different steps that could be included in the network planning and deployment. From this analysis, we detected which actions have chances to be explored in a human-centered view and thus, can be improved with some Design Thinking tips. Finally, we developed an experience of limited potential, comparing the insights obtained by an empathic interview to those given by traditional surveys, to show what Design Thinking could provide.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 3","pages":"269-278"},"PeriodicalIF":0.0,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62589289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 1
The Value of Trust in Encryption: Impact and Implications on Technology Law and Policy
Pub Date : 2023-01-18 DOI: 10.1109/TTS.2023.3237987
Michael Anthony C. Dizon
Encryption is an enigmatic technology from the viewpoint of technology law and policy. It is essential for ensuring information security and data privacy, but it can be similarly used for illicit means and ends. This article contends that understanding the underlying values of encryption can help clarify the legal and policy debates about whether or how to regulate this technology. This article specifically focuses on the value of trust and examines it from the perspective of three groups of stakeholders: members of the general public, business, and government. In particular, the article analyses the four direct objects of trust in relation to encryption: the technology, specific persons, institutions, and general others involved in encryption. It further delves into how this value impacts technology law and policy. The article concludes that trust is a paramount value of encryption and should be used as a principal consideration and guide when evaluating existing or proposed encryption regulations.
{"title":"The Value of Trust in Encryption: Impact and Implications on Technology Law and Policy","authors":"Michael Anthony C. Dizon","doi":"10.1109/TTS.2023.3237987","DOIUrl":"10.1109/TTS.2023.3237987","url":null,"abstract":"Encryption is an enigmatic technology from the viewpoint of technology law and policy. It is essential for ensuring information security and data privacy, but it can be similarly used for illicit means and ends. This article contends that understanding the underlying values of encryption can help clarify the legal and policy debates about whether or how to regulate this technology. This article specifically focuses on the value of trust and examines it from the perspective of three groups of stakeholders: members of the general public, business, and government. In particular, the article analyses the four direct objects of trust in relation to encryption: the technology, specific persons, institutions, and general others involved in encryption. It further delves into how this value impacts technology law and policy. The article concludes that trust is a paramount value of encryption and should be used as a principal consideration and guide when evaluating existing or proposed encryption regulations.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 4","pages":"343-351"},"PeriodicalIF":0.0,"publicationDate":"2023-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62589160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults
Pub Date : 2023-01-16 DOI: 10.1109/TTS.2023.3237124
Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette
This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.
{"title":"Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults","authors":"Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette","doi":"10.1109/TTS.2023.3237124","DOIUrl":"10.1109/TTS.2023.3237124","url":null,"abstract":"This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 4","pages":"291-301"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62589094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
The Re-Conception of AI: Beyond Artificial, and Beyond Intelligence
Pub Date : 2023-01-04 DOI: 10.1109/TTS.2023.3234051
Roger Clarke
The original conception of artificial intelligence (old-AI) was as a simulation of human intelligence. That has proven to be an ill-judged quest. It has led too many researchers repetitively down too many blind alleys, and embodies many threats to individuals, societies and economies. To increase value and reduce harm, it is necessary to re-conceptualise the field. A review is undertaken of old-AI’s flavours, operational definitions and important exemplars. The heart of the problem is argued to be an inappropriate focus on achieving substitution for human intelligence, either by replicating it in silicon or by inventing something functionally equivalent to it. Humankind instead needs its artefacts to deliver intellectual value different from human intelligence. By devising complementary artefact intelligence (CAI), and combining it with human intelligence, the mission becomes the delivery of augmented intelligence (new-AI). These alternative conceptions can serve the needs of the human race far better than either human or artefact intelligence can alone. The proposed re-conception goes a step further. Inferencing and decision-making lay the foundations for action. Old-AI has tended to compartmentalise discussion, with robotics considered as though it were a parallel or at best overlapping field of endeavour. Combining the intellectual with the physical leads to broader conceptions of far greater value: complementary artefact capability (CAC) and augmented capability (AC). These enable the re-orientation of research to avoid dead-ends and misdirected designs, and deliver techniques that serve real-world needs and amplify humankind’s capacity for responsible innovation.
{"title":"The Re-Conception of AI: Beyond Artificial, and Beyond Intelligence","authors":"Roger Clarke","doi":"10.1109/TTS.2023.3234051","DOIUrl":"10.1109/TTS.2023.3234051","url":null,"abstract":"The original conception of artificial intelligence (old-AI) was as a simulation of human intelligence. That has proven to be an ill-judged quest. It has led too many researchers repetitively down too many blind alleys, and embodies many threats to individuals, societies and economies. To increase value and reduce harm, it is necessary to re-conceptualise the field. A review is undertaken of old-AI’s flavours, operational definitions and important exemplars. The heart of the problem is argued to be an inappropriate focus on achieving substitution for human intelligence, either by replicating it in silicon or by inventing something functionally equivalent to it. Humankind instead needs its artefacts to deliver intellectual value different from human intelligence. By devising complementary artefact intelligence (CAI), and combining it with human intelligence, the mission becomes the delivery of augmented intelligence (new-AI). These alternative conceptions can serve the needs of the human race far better than either human or artefact intelligence can alone. The proposed re-conception goes a step further. Inferencing and decision-making lay the foundations for action. Old-AI has tended to compartmentalise discussion, with robotics considered as though it were a parallel or at best overlapping field of endeavour. Combining the intellectual with the physical leads to broader conceptions of far greater value: complementary artefact capability (CAC) and augmented capability (AC). These enable the re-orientation of research to avoid dead-ends and misdirected designs, and deliver techniques that serve real-world needs and amplify humankind’s capacity for responsible innovation.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"24-33"},"PeriodicalIF":0.0,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44601858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 0
Challenges of Deep Learning in Medical Image Analysis—Improving Explainability and Trust
Pub Date : 2023-01-04 DOI: 10.1109/TTS.2023.3234203
Tribikram Dhar;Nilanjan Dey;Surekha Borra;R. Simon Sherratt
Deep learning has revolutionized the detection of diseases and is helping the healthcare sector break barriers in terms of accuracy and robustness to achieve efficient and robust computer-aided diagnostic systems. The application of deep learning techniques empowers automated AI-based utilities, requiring minimal human supervision, to perform tasks related to the medical diagnosis of fractures, tumors, and internal hemorrhage; preoperative planning; intra-operative guidance; and more. However, deep learning also faces some major challenges in the flourishing healthcare domain. This paper traverses the major challenges that the deep learning community of researchers and engineers faces, particularly in medical image diagnosis, such as the unavailability of balanced annotated medical image data, adversarial attacks on deep neural networks and architectures due to noisy medical image data, a lack of trust among users and patients, and ethical and privacy issues related to medical data. This study explores the possibilities of AI autonomy in healthcare by overcoming the concerns about trust that society has in autonomous intelligent systems.
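Of the challenges listed, adversarial vulnerability is the easiest to demonstrate concretely: the fast gradient sign method (FGSM) perturbs an image by a small step in the direction of the loss gradient's sign, often changing a classifier's output while remaining nearly imperceptible. The sketch below uses an untrained stand-in network; the model, input shape, and epsilon are assumptions for illustration.

```python
# Illustrative FGSM sketch: a small, gradient-aligned perturbation can
# change a classifier's output. The network is an untrained stand-in;
# epsilon and the input shape are assumptions for illustration.
import torch
import torch.nn as nn

model = nn.Sequential(  # placeholder for a trained diagnostic CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 2),
)
model.eval()

def fgsm(image: torch.Tensor, label: torch.Tensor, eps: float = 0.02) -> torch.Tensor:
    """Perturb `image` one step in the sign of the loss gradient."""
    image = image.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adv = image + eps * image.grad.sign()  # core FGSM step
    return adv.clamp(0, 1).detach()        # keep pixel values in range

x = torch.rand(1, 1, 64, 64)   # stand-in "medical image"
y = model(x).argmax(dim=1)     # the model's own clean prediction
x_adv = fgsm(x, y)
print("Prediction changed:", bool((model(x_adv).argmax(dim=1) != y).item()))
```

Defending against such perturbations, especially under the noisy acquisition conditions typical of medical imaging, is part of the trust problem the paper describes.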
{"title":"Challenges of Deep Learning in Medical Image Analysis—Improving Explainability and Trust","authors":"Tribikram Dhar;Nilanjan Dey;Surekha Borra;R. Simon Sherratt","doi":"10.1109/TTS.2023.3234203","DOIUrl":"10.1109/TTS.2023.3234203","url":null,"abstract":"Deep learning has revolutionized the detection of diseases and is helping the healthcare sector break barriers in terms of accuracy and robustness to achieve efficient and robust computer-aided diagnostic systems. The application of deep learning techniques empowers automated AI-based utilities requiring minimal human supervision to perform any task related to medical diagnosis of fractures, tumors, and internal hemorrhage; preoperative planning; intra-operative guidance, etc. However, deep learning faces some major threats to the flourishing healthcare domain. This paper traverses the major challenges that the deep learning community of researchers and engineers faces, particularly in medical image diagnosis, like the unavailability of balanced annotated medical image data, adversarial attacks faced by deep neural networks and architectures due to noisy medical image data, a lack of trustability among users and patients, and ethical and privacy issues related to medical data. This study explores the possibilities of AI autonomy in healthcare by overcoming the concerns about trust that society has in autonomous intelligent systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"68-75"},"PeriodicalIF":0.0,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47545868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 25
Abstracted Power and Responsibility in Computer Science Ethics Education
Pub Date : 2023-01-03 DOI: 10.1109/TTS.2022.3233776
Tina L. Peterson;Rodrigo Ferreira;Moshe Y. Vardi
As computing becomes more powerful and extends the reach of those who wield it, the imperative grows for computing professionals to make ethical decisions regarding the use of that power. We propose the concept of abstracted power to help computer science students understand how technology may distance them perceptually from consequences of their actions. Specifically, we identify technological intermediation and computational thinking as two factors in computer science that contribute to this distancing. To counter the abstraction of power, we argue for increased emotional engagement in computer science ethics education, to encourage students to feel as well as think regarding the potential impacts of their power on others. We suggest four concrete pedagogical approaches to enable this emotional engagement in computer science ethics curriculum, and we share highlights of student reactions to the material.
{"title":"Abstracted Power and Responsibility in Computer Science Ethics Education","authors":"Tina L. Peterson;Rodrigo Ferreira;Moshe Y. Vardi","doi":"10.1109/TTS.2022.3233776","DOIUrl":"10.1109/TTS.2022.3233776","url":null,"abstract":"As computing becomes more powerful and extends the reach of those who wield it, the imperative grows for computing professionals to make ethical decisions regarding the use of that power. We propose the concept of abstracted power to help computer science students understand how technology may distance them perceptually from consequences of their actions. Specifically, we identify technological intermediation and computational thinking as two factors in computer science that contribute to this distancing. To counter the abstraction of power, we argue for increased emotional engagement in computer science ethics education, to encourage students to feel as well as think regarding the potential impacts of their power on others. We suggest four concrete pedagogical approaches to enable this emotional engagement in computer science ethics curriculum, and we share highlights of student reactions to the material.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"96-102"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45166270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Citations: 2