Pub Date: 2023-01-30 | DOI: 10.1109/TTS.2023.3240107
Greg Adamson
Since 2016, the U.S. Defense Advanced Research Projects Agency (DARPA) has pursued a significant program of work under the title of explainable artificial intelligence (XAI). This program is seen as important for AI adoption, in this case to meet the needs of warfighters to collaborate effectively with AI “partners.” Technology adoption is often promoted on the basis of beliefs that bear little relationship to the benefit a technology will provide. These beliefs include “progress,” technology superiority, and technology as cornucopia. The XAI program has widely promoted a new belief: that AI is in general explainable. As AI systems often have concealed or black box characteristics, the problem of explainability is significant. This paper argues that, due to their complexity, AI systems should be approached in a way similar to the way the scientific method is used to approach natural phenomena. One approach encouraged by DARPA, model induction, is based on post-hoc reasoning. Such inductive reasoning is consistent with the scientific method. However, that method has a history of controls that are applied to create confidence in an uncertain, inductive outcome. The paper proposes some controls consistent with a philosophical examination of black boxes. As AI systems are being used to determine who should have access to scarce resources and who should be punished and in what way, the claim that AI can be explained is important. Widespread recent experimentation with ChatGPT has also highlighted the challenges and expectations of AI systems.
{"title":"Explaining Technology We Do Not Understand","authors":"Greg Adamson","doi":"10.1109/TTS.2023.3240107","DOIUrl":"10.1109/TTS.2023.3240107","url":null,"abstract":"Since 2016 a significant program of work has been initiated by the U.S. Defense Advanced Research Projects Agency (DARPA) under the title of explainable artificial intelligence (XAI). This program is seen as important for AI adoption, in this case to include the needs of warfighters to effectively collaborate with AI “partners.” Technology adoption is often promoted based on beliefs, which bears little relationship to the benefit a technology will provide. These beliefs include “progress,” technology superiority, and technology as cornucopia. The XAI program has widely promoted a new belief: that AI is in general explainable. As AI systems often have concealed or black box characteristics, the problem of explainability is significant. This paper argues that due to their complexity, AI systems should be approached in a way similar to the way the scientific method is used to approach natural phenomena. One approach encouraged by DARPA, model induction, is based on post-hoc reasoning. Such inductive reasoning is consistent with the scientific method. However, that method has a history of controls that are applied to create confidence in an uncertain, inductive, outcome. The paper proposes some controls consistent with a philosophical examination of black boxes. As AI systems are being used to determine who should have access to scarce resources and who should be punished and in what way, the claim that AI can be explained is important. Widespread recent experimentation with ChatGPT has also highlighted the challenges and expectations of AI systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"34-45"},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45789424","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-30 | DOI: 10.1109/TTS.2023.3239921
Dragutin Petkovic
We are witnessing the emergence of an “AI economy and society” in which AI technologies and applications increasingly affect health care, business, transportation, defense, and many aspects of everyday life. Many successes have been reported in which AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency, resulting in reduced trust and challenges to their adoption. These shortcomings and concerns have been documented in both the scientific and general press: accidents involving self-driving cars, bias against people of color in healthcare, hiring, and face recognition systems, and seemingly correct decisions later found to have been made for the wrong reasons. This has resulted in the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency, and safety. The challenges in delivering trustworthy AI systems have motivated intense research on explainable AI (XAI). The original aim of XAI is to provide human-understandable information about how AI systems make their decisions in order to increase user trust. In this paper we first briefly summarize current XAI work and then challenge recent arguments that present “accuracy vs. explainability” as mutually exclusive and that focus mainly on deep learning, with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivering high-stakes trustworthy AI systems: development; validation/certification; and trustworthy production and maintenance.
{"title":"It is Not “Accuracy vs. Explainability”—We Need Both for Trustworthy AI Systems","authors":"Dragutin Petkovic","doi":"10.1109/TTS.2023.3239921","DOIUrl":"10.1109/TTS.2023.3239921","url":null,"abstract":"We are witnessing the emergence of an “AI economy and society” where AI technologies and applications are increasingly impacting health care, business, transportation, defense and many aspects of everyday life. Many successes have been reported where AI systems even surpassed the accuracy of human experts. However, AI systems may produce errors, can exhibit bias, may be sensitive to noise in the data, and often lack technical and judicial transparency resulting in reduction in trust and challenges to their adoption. These recent shortcomings and concerns have been documented in both the scientific and general press such as accidents with self-driving cars, biases in healthcare or hiring and face recognition systems for people of color, and seemingly correct decisions later found to be made due to wrong reasons etc. This has resulted in the emergence of many government and regulatory initiatives requiring trustworthy and ethical AI to provide accuracy and robustness, some form of explainability, human control and oversight, elimination of bias, judicial transparency and safety. The challenges in delivery of trustworthy AI systems have motivated intense research on explainable AI systems (XAI). The original aim of XAI is to provide human understandable information of how AI systems make their decisions in order to increase user trust. In this paper we first very briefly summarize current XAI work and then challenge the recent arguments that present “accuracy vs. explainability” as being mutually exclusive and for focusing mainly on deep learning with its limited XAI capabilities. We then present our recommendations for the broad use of XAI in all stages of delivery of high stakes trustworthy AI systems, e.g., development; validation/certification; and trustworthy production and maintenance.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"46-53"},"PeriodicalIF":0.0,"publicationDate":"2023-01-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48834986","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-25 | DOI: 10.1109/TTS.2023.3239586
Jason C. K. Tham;Gustav Verhulsdonck
The development of smart cities worldwide is bringing about new processes and methods for enhancing teaching and learning in a networked age. As smart cities rely on analytics and digital capabilities to connect people and everyday activities in order to improve quality of life, they can also bring new layers of concern for schools and educational institutions engaging with next-generation learning environments. Drawing on cases from around the world, specifically from developing smart cities, this paper calls attention to key implications of smart cities and smart education design for networked learning. We focus on the layers of design ethics, data practices, roles, and delivery afforded by new learning infrastructures in smart cities, and then propose a “stack” analogy for designing ubiquitous learning.
{"title":"Smart Education in Smart Cities: Layered Implications for Networked and Ubiquitous Learning","authors":"Jason C. K. Tham;Gustav Verhulsdonck","doi":"10.1109/TTS.2023.3239586","DOIUrl":"10.1109/TTS.2023.3239586","url":null,"abstract":"The development of smart cities worldwide is bringing about new processes and methods for enhancing teaching and learning in a networked age. As smart cities rely on analytics and digital capabilities to connect people and everyday activities so as to improve the quality of life, they can bring new layers of concerns for schools and educational institutions engaging the next-gen learning environment. Drawing from cases from around the world and specifically from developing smart cities, this paper calls attention to key implications of smart cities and smart education design on networked learning. We focus on layers of design ethics, data practices, roles, and delivery afforded by new learning infrastructures in smart cities, then proposing a “stack” analogy for designing ubiquitous learning.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"87-95"},"PeriodicalIF":0.0,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"48428918","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-25 | DOI: 10.1109/TTS.2023.3239526
Md. Nurul Ahad Tawhid;Siuly Siuly;Kate Wang;Hua Wang
Neurological disorders impose a huge burden on global health and are recognized as major causes of death and disability worldwide. There are more than 600 neurological diseases, yet there is no standard automatic detection system that can identify multiple neurological disorders within a single framework. Hence, this study aims to develop a common computer-aided diagnosis (CAD) system for automatic detection of multiple neurological disorders from EEG signals. We introduce a new single framework for automatic identification of four common neurological disorders, namely autism, epilepsy, Parkinson’s disease, and schizophrenia, from EEG data. The proposed framework is designed around a convolutional neural network (CNN) and spectrogram images of EEG signals for classifying the four neurological disorders against healthy subjects (five classes). In the proposed design, the EEG signals are first pre-processed to remove artifacts and noise and then converted into two-dimensional time-frequency spectrogram images using the short-time Fourier transform. A CNN model is then designed to perform five-class classification using those spectrogram images. The proposed method achieves much better performance in both efficiency and accuracy than two other popular CNN models, AlexNet and ResNet50. In addition, the performance of the proposed model is also evaluated on binary classification (disease vs. healthy), where it also outperforms state-of-the-art results on the tested datasets. The obtained results suggest that the proposed framework will be helpful for developing a CAD system to assist clinicians and experts in the automatic diagnosis process.
{"title":"Automatic and Efficient Framework for Identifying Multiple Neurological Disorders From EEG Signals","authors":"Md. Nurul Ahad Tawhid;Siuly Siuly;Kate Wang;Hua Wang","doi":"10.1109/TTS.2023.3239526","DOIUrl":"10.1109/TTS.2023.3239526","url":null,"abstract":"The burden of neurological disorders is huge on global health and recognized as major causes of death and disability worldwide. There are more than 600 neurological diseases, but there is no unique automatic standard detection system yet to identify multiple neurological disorders using a single framework. Hence, this study aims to develop a common computer-aided diagnosis (CAD) system for automatic detection of multiple neurological disorders from EEG signals. In this study, we introduce a new single framework for automatic identification of four common neurological disorders, namely autism, epilepsy, parkinson’s disease, and schizophrenia, from EEG data. The proposed framework is designed based on convolutional neural network (CNN) and spectrogram images of EEG signal for classifying four neurological disorders from healthy subjects (five classes). In the proposed design, firstly, the EEG signals are pre-processed for removing artifacts and noises and then converted into two-dimensional time-frequency-based spectrogram images using short-time Fourier transform. Afterwards, a CNN model is designed to perform five-class classification using those spectrogram images. The proposed method achieves much better performance in both efficiency and accuracy compared to two other popular CNN models: AlexNet and ResNet50. In addition, the performance of the proposed model is also evaluated on binary classification (disease vs. healthy) which also outperforms the state-of-the-art results for tested datasets. The obtained results recommend that our proposed framework will be helpful for developing a CAD system to assist the clinicians and experts in the automatic diagnosis process.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"76-86"},"PeriodicalIF":0.0,"publicationDate":"2023-01-25","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"43529626","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-23 | DOI: 10.1109/TTS.2023.3239261
Iñigo Cuiñas;Anna Laska-Leśniewicz;Katarzyna Znajdek;Dorota Kamińska
Cellular network planning and deployment represent a set of activities that operators have been performing since the advent of mobile wireless communication systems. Decisions along this route have traditionally been based on engineering or commercial criteria, focusing on providing the best service to users. Although operator companies gather information on perceived quality of service among their subscribers, mainly through surveys, engineers work with numerical data more than with people’s interests, feelings, perceptions, or hopes. In other sectors, Design Thinking has arisen as a methodology that allows designers to involve real-world users in implementing new products or services. Thus, we introduce this methodology to cellular networking, exploring its application in a process that could be designed in a more human-centered way. With this aim, we analyzed the different steps that could be included in network planning and deployment. From this analysis, we identified which actions can be explored from a human-centered perspective and thus improved with Design Thinking techniques. Finally, we carried out a small-scale exercise, comparing the insights obtained from an empathic interview with those given by traditional surveys, to show what Design Thinking could provide.
{"title":"Exploring the Application of Design Thinking Methodology in Cellular Communications Network Planning and Deployment","authors":"Iñigo Cuiñas;Anna Laska-Leśniewicz;Katarzyna Znajdek;Dorota Kamińska","doi":"10.1109/TTS.2023.3239261","DOIUrl":"10.1109/TTS.2023.3239261","url":null,"abstract":"Cellular network planning and deployment represent a set of activities that operators have been performing since the advent of mobile wireless communication systems. Decisions along this route traditionally have been based on engineering or commercial criteria, focusing on providing the best service to users. Although operator companies gather information regarding perceived quality of service among their subscribers, mainly using surveys, engineers work with numerical data more than with people’s interests, feelings, perceptions, or hopes. In other sectors, Design Thinking has arisen as a methodology that allows designers to involve real-world users in implementing new products or services. Thus, we are introducing this methodology for cellular networking, exploring its application in a process that could be designed in a more human-centered way. With this aim, we analyzed the different steps that could be included in the network planning and deployment. From this analysis, we detected which actions have chances to be explored in a human-centered view and thus, can be improved with some Design Thinking tips. Finally, we developed an experience of limited potential, comparing the insights obtained by an empathic interview to those given by traditional surveys, to show what Design Thinking could provide.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 3","pages":"269-278"},"PeriodicalIF":0.0,"publicationDate":"2023-01-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62589289","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-18 | DOI: 10.1109/TTS.2023.3237987
Michael Anthony C. Dizon
Encryption is an enigmatic technology from the viewpoint of technology law and policy. It is essential for ensuring information security and data privacy, but it can be similarly used for illicit means and ends. This article contends that understanding the underlying values of encryption can help clarify the legal and policy debates about whether or how to regulate this technology. This article specifically focuses on the value of trust and examines it from the perspective of three groups of stakeholders: members of the general public, business, and government. In particular, the article analyses the four direct objects of trust in relation to encryption: the technology, specific persons, institutions, and general others involved in encryption. It further delves into how this value impacts technology law and policy. The article concludes that trust is a paramount value of encryption and should be used as a principal consideration and guide when evaluating existing or proposed encryption regulations.
{"title":"The Value of Trust in Encryption: Impact and Implications on Technology Law and Policy","authors":"Michael Anthony C. Dizon","doi":"10.1109/TTS.2023.3237987","DOIUrl":"10.1109/TTS.2023.3237987","url":null,"abstract":"Encryption is an enigmatic technology from the viewpoint of technology law and policy. It is essential for ensuring information security and data privacy, but it can be similarly used for illicit means and ends. This article contends that understanding the underlying values of encryption can help clarify the legal and policy debates about whether or how to regulate this technology. This article specifically focuses on the value of trust and examines it from the perspective of three groups of stakeholders: members of the general public, business, and government. In particular, the article analyses the four direct objects of trust in relation to encryption: the technology, specific persons, institutions, and general others involved in encryption. It further delves into how this value impacts technology law and policy. The article concludes that trust is a paramount value of encryption and should be used as a principal consideration and guide when evaluating existing or proposed encryption regulations.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 4","pages":"343-351"},"PeriodicalIF":0.0,"publicationDate":"2023-01-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62589160","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-16 | DOI: 10.1109/TTS.2023.3237124
Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette
This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.
{"title":"Ethical Issues in Near-Future Socially Supportive Smart Assistants for Older Adults","authors":"Alex John London;Yosef S. Razin;Jason Borenstein;Motahhare Eslami;Russell Perkins;Paul Robinette","doi":"10.1109/TTS.2023.3237124","DOIUrl":"10.1109/TTS.2023.3237124","url":null,"abstract":"This paper considers novel ethical issues pertaining to near-future artificial intelligence (AI) systems that seek to support, maintain, or enhance the capabilities of older adults as they age and experience cognitive decline. In particular, we focus on smart assistants (SAs) that would seek to provide proactive assistance and mediate social interactions between users and other members of their social or support networks. Such systems would potentially have significant utility for users and their caregivers if they could reduce the cognitive load for tasks that help older adults maintain their autonomy and independence. However, proactively supporting even simple tasks, such as providing the user with a summary of a meeting or a conversation, would require a future SA to engage with ethical aspects of human interactions which computational systems currently have difficulty identifying, tracking, and navigating. If SAs fail to perceive ethically relevant aspects of social interactions, the resulting deficit in moral discernment would threaten important aspects of user autonomy and well-being. After describing the dynamic that generates these ethical challenges, we note how simple strategies for prompting user oversight of such systems might also undermine their utility. We conclude by considering how near-future SAs could exacerbate current worries about privacy, commodification of users, trust calibration and injustice.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 4","pages":"291-301"},"PeriodicalIF":0.0,"publicationDate":"2023-01-16","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"62589094","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-04 | DOI: 10.1109/TTS.2023.3234051
Roger Clarke
The original conception of artificial intelligence (old-AI) was as a simulation of human intelligence. That has proven to be an ill-judged quest. It has led too many researchers repetitively down too many blind alleys, and embodies many threats to individuals, societies and economies. To increase value and reduce harm, it is necessary to re-conceptualise the field. A review is undertaken of old-AI’s flavours, operational definitions and important exemplars. The heart of the problem is argued to be an inappropriate focus on achieving substitution for human intelligence, either by replicating it in silicon or by inventing something functionally equivalent to it. Humankind instead needs its artefacts to deliver intellectual value different from human intelligence. By devising complementary artefact intelligence (CAI), and combining it with human intelligence, the mission becomes the delivery of augmented intelligence (new-AI). These alternative conceptions can serve the needs of the human race far better than either human or artefact intelligence can alone. The proposed re-conception goes a step further. Inferencing and decision-making lay the foundations for action. Old-AI has tended to compartmentalise discussion, with robotics considered as though it were a parallel or at best overlapping field of endeavour. Combining the intellectual with the physical leads to broader conceptions of far greater value: complementary artefact capability (CAC) and augmented capability (AC). These enable the re-orientation of research to avoid dead-ends and misdirected designs, and deliver techniques that serve real-world needs and amplify humankind’s capacity for responsible innovation.
{"title":"The Re-Conception of AI: Beyond Artificial, and Beyond Intelligence","authors":"Roger Clarke","doi":"10.1109/TTS.2023.3234051","DOIUrl":"10.1109/TTS.2023.3234051","url":null,"abstract":"The original conception of artificial intelligence (old-AI) was as a simulation of human intelligence. That has proven to be an ill-judged quest. It has led too many researchers repetitively down too many blind alleys, and embodies many threats to individuals, societies and economies. To increase value and reduce harm, it is necessary to re-conceptualise the field. A review is undertaken of old-AI’s flavours, operational definitions and important exemplars. The heart of the problem is argued to be an inappropriate focus on achieving substitution for human intelligence, either by replicating it in silicon or by inventing something functionally equivalent to it. Humankind instead needs its artefacts to deliver intellectual value different from human intelligence. By devising complementary artefact intelligence (CAI), and combining it with human intelligence, the mission becomes the delivery of augmented intelligence (new-AI). These alternative conceptions can serve the needs of the human race far better than either human or artefact intelligence can alone. The proposed re-conception goes a step further. Inferencing and decision-making lay the foundations for action. Old-AI has tended to compartmentalise discussion, with robotics considered as though it were a parallel or at best overlapping field of endeavour. Combining the intellectual with the physical leads to broader conceptions of far greater value: complementary artefact capability (CAC) and augmented capability (AC). These enable the re-orientation of research to avoid dead-ends and misdirected designs, and deliver techniques that serve real-world needs and amplify humankind’s capacity for responsible innovation.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"24-33"},"PeriodicalIF":0.0,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"44601858","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-04 | DOI: 10.1109/TTS.2023.3234203
Tribikram Dhar;Nilanjan Dey;Surekha Borra;R. Simon Sherratt
Deep learning has revolutionized the detection of diseases and is helping the healthcare sector break barriers in accuracy and robustness to achieve efficient computer-aided diagnostic systems. The application of deep learning techniques empowers automated AI-based utilities, requiring minimal human supervision, to perform tasks ranging from the medical diagnosis of fractures, tumors, and internal hemorrhage to preoperative planning and intra-operative guidance. However, deep learning in the healthcare domain also faces some major threats. This paper traverses the major challenges that the community of deep learning researchers and engineers faces, particularly in medical image diagnosis: the unavailability of balanced, annotated medical image data; adversarial attacks on deep neural networks and architectures arising from noisy medical image data; a lack of trust among users and patients; and ethical and privacy issues related to medical data. This study explores the possibilities of AI autonomy in healthcare by addressing the trust concerns that society has about autonomous intelligent systems.
{"title":"Challenges of Deep Learning in Medical Image Analysis—Improving Explainability and Trust","authors":"Tribikram Dhar;Nilanjan Dey;Surekha Borra;R. Simon Sherratt","doi":"10.1109/TTS.2023.3234203","DOIUrl":"10.1109/TTS.2023.3234203","url":null,"abstract":"Deep learning has revolutionized the detection of diseases and is helping the healthcare sector break barriers in terms of accuracy and robustness to achieve efficient and robust computer-aided diagnostic systems. The application of deep learning techniques empowers automated AI-based utilities requiring minimal human supervision to perform any task related to medical diagnosis of fractures, tumors, and internal hemorrhage; preoperative planning; intra-operative guidance, etc. However, deep learning faces some major threats to the flourishing healthcare domain. This paper traverses the major challenges that the deep learning community of researchers and engineers faces, particularly in medical image diagnosis, like the unavailability of balanced annotated medical image data, adversarial attacks faced by deep neural networks and architectures due to noisy medical image data, a lack of trustability among users and patients, and ethical and privacy issues related to medical data. This study explores the possibilities of AI autonomy in healthcare by overcoming the concerns about trust that society has in autonomous intelligent systems.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"68-75"},"PeriodicalIF":0.0,"publicationDate":"2023-01-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"47545868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2023-01-03 | DOI: 10.1109/TTS.2022.3233776
Tina L. Peterson;Rodrigo Ferreira;Moshe Y. Vardi
As computing becomes more powerful and extends the reach of those who wield it, the imperative grows for computing professionals to make ethical decisions regarding the use of that power. We propose the concept of abstracted power to help computer science students understand how technology may distance them perceptually from the consequences of their actions. Specifically, we identify technological intermediation and computational thinking as two factors in computer science that contribute to this distancing. To counter the abstraction of power, we argue for increased emotional engagement in computer science ethics education, to encourage students to feel as well as think about the potential impacts of their power on others. We suggest four concrete pedagogical approaches to enable this emotional engagement in the computer science ethics curriculum, and we share highlights of student reactions to the material.
{"title":"Abstracted Power and Responsibility in Computer Science Ethics Education","authors":"Tina L. Peterson;Rodrigo Ferreira;Moshe Y. Vardi","doi":"10.1109/TTS.2022.3233776","DOIUrl":"10.1109/TTS.2022.3233776","url":null,"abstract":"As computing becomes more powerful and extends the reach of those who wield it, the imperative grows for computing professionals to make ethical decisions regarding the use of that power. We propose the concept of abstracted power to help computer science students understand how technology may distance them perceptually from consequences of their actions. Specifically, we identify technological intermediation and computational thinking as two factors in computer science that contribute to this distancing. To counter the abstraction of power, we argue for increased emotional engagement in computer science ethics education, to encourage students to feel as well as think regarding the potential impacts of their power on others. We suggest four concrete pedagogical approaches to enable this emotional engagement in computer science ethics curriculum, and we share highlights of student reactions to the material.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"4 1","pages":"96-102"},"PeriodicalIF":0.0,"publicationDate":"2023-01-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"45166270","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}