Pub Date: 2024-06-01  DOI: 10.1109/TTS.2024.3413591
Joseph R. Carvalko
This paper discusses generative pre-trained transformer technology and its intersection with forms of creativity and law. It highlights the potential of generative AI to change considerable elements of society, including modes of creative endeavor, problem-solving, employment, education, justice, medicine, and governance. The author emphasizes the need for policymakers and experts to join in regulating the potential risks and implications of this technology. The European Commission has taken steps to address the risks of AI through the European AI Act, which categorizes AI uses based on their potential harm and aims to ensure scrutiny and control in extreme cases such as autonomous weapons or medical devices. However, the author criticizes the lack of meaningful AI oversight in the United States and argues that the time has come for government to step in and offer meaningful regulation, given (1) the technology's rate of diffusion, (2) its virtually uncountable product permutations, and (3) the purposes, extent, and depth to which it is anticipated to penetrate institutional and daily life.
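To make the Act's tiered approach concrete, the short sketch below (an illustration added here, not drawn from the paper) maps a few example AI uses to the four risk tiers the EU AI Act defines; the specific use-case assignments and names are simplified assumptions, not legal determinations.

# Illustrative sketch: the EU AI Act's four risk tiers and example obligations.
# The tier assignments below are simplified assumptions for illustration only.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited"            # e.g., social scoring by public authorities
    HIGH = "strict conformity assessment"  # e.g., medical devices, critical infrastructure
    LIMITED = "transparency obligations"   # e.g., chatbots must disclose they are AI
    MINIMAL = "no additional obligations"  # e.g., spam filters, video games

# Hypothetical lookup table; a real classification depends on the context of use.
USE_CASE_TIERS = {
    "social_scoring": RiskTier.UNACCEPTABLE,
    "medical_diagnosis_support": RiskTier.HIGH,
    "customer_service_chatbot": RiskTier.LIMITED,
    "email_spam_filter": RiskTier.MINIMAL,
}

def required_oversight(use_case: str) -> str:
    """Return the obligation attached to a use case's assumed risk tier."""
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)  # default conservatively
    return f"{use_case}: {tier.name} -> {tier.value}"

if __name__ == "__main__":
    for case in USE_CASE_TIERS:
        print(required_oversight(case))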
{"title":"Generative AI, Ingenuity, and Law","authors":"Joseph R. Carvalko","doi":"10.1109/TTS.2024.3413591","DOIUrl":"https://doi.org/10.1109/TTS.2024.3413591","url":null,"abstract":"This paper discusses generative pre-trained transformer technology and its intersection with forms of creativity and law. It highlights the potential of generative AI to change considerable elements of society, including modes of creative endeavors, problem-solving, employment, education, justice, medicine, and governance. The author emphasizes the need for policymakers and experts to join in regulating against the potential risks and implications of this technology. The European Commission has taken steps to address the risks of AI through the European AI Act (EIA), which categorizes AI uses based on their potential harm. The legislation aims to ensure scrutiny and control in extreme cases like autonomous weapons or medical devices. However, the author criticizes the lack of meaningful AI oversight in the United States and argues that time has come for government to step in and offer meaningful regulation given the technology’s (1) rate of diffusion (2) virtually uncountable product permutations, the purposes, extent and depths to which it is anticipated to penetrate institutional and daily life.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"169-182"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964838","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-01  DOI: 10.1109/TTS.2024.3421490
{"title":"IEEE Transactions on Technology and Society Publication Information","authors":"","doi":"10.1109/TTS.2024.3421490","DOIUrl":"https://doi.org/10.1109/TTS.2024.3421490","url":null,"abstract":"","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"C2-C2"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10632875","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964842","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-06-01  DOI: 10.1109/TTS.2024.3423208
Katina Michael
In November 2023, IEEE TTS underwent its first Periodicals Review and Advisory Committee (PRAC) review with the Technical Activities Board (TAB). It was successful in its demonstration of key indicators as required by the Institute. It now embarks on a growth period, having invited new board members selected through a competitive application process that required a clear demonstration of dedication to the field. This paper provides an overview of Editorial Board members and their respective profiles. We celebrate the appointment of new board members and thank those who have completed their terms. We also appreciate the ongoing support of members who have stayed on for a second term with the publication, given their role in the IEEE Society on Social Implications of Technology (IEEE SSIT) and their recognized standing in the international community of interdisciplinary scholars. It is important to note that the criteria for choosing board members were stipulated in the September 2023 IEEE TTS issue and required a holistic commitment to the field of technology and society, prior evidence of service to the field as reviewers, authorship in IEEE TSM/IEEE TTS or other related publications, participation at conferences sponsored by the IEEE Society on Social Implications of Technology or related societies, satisfaction of diversity requirements as understood by IEEE, and more as specified. In this paper we present a summary of the PRAC results related to the editorial board between 2020-2023, and include a complete list of profiles for the Editorial Board of IEEE TTS.
{"title":"Editorial IEEE Transactions on Technology and Society Editorial Board Profiles","authors":"Katina Michael","doi":"10.1109/TTS.2024.3423208","DOIUrl":"https://doi.org/10.1109/TTS.2024.3423208","url":null,"abstract":"In November 2023, IEEE TTS underwent its first Periodicals Review and Advisory Committee (PRAC) with the Technical Activities Board (TAB). It was successful in its demonstration of key indicators as required by the Institute. It now embarks on a growth period where it has invited new board members that have been successful through a competitive application process with clear demonstration to dedication in the field. This paper provides an overview of Editorial Board members and their respective profiles. We celebrate the appointment of new board members, and thank those who have completed their terms. We also appreciate the ongoing support of members who have stayed on to continue participation for a second term with the publication, given their role in the Society on the Social Implications of Technology (IEEE SSIT), and recognized standing in the international community of interdisciplinary scholars. It is important to note, the criteria for choosing board members was stipulated in September 2023 IEEE TTS issue and required a holistic demonstration to the field of technology and society, prior evidence of service to the field, as previous reviewers, authorship in IEEE TSM/IEEE TTS or other related publications, participation at conferences sponsored by the Society on Social Implications of Technology or related societies, requirements to satisfy diversity as understood by IEEE, and more as specified. In this paper we present a summary of the PRAC results related to the editorial board between 2020–2023, and include a complete list of profiles for the Editorial Board of IEEE TTS.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"119-148"},"PeriodicalIF":0.0,"publicationDate":"2024-06-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10632876","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964834","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-04-29  DOI: 10.1109/TTS.2024.3395175
Victor Stroele;Lorenza Leão Oliveira Moreno;Jorão Gomes;Thalita Thamires de Oliveira Silva;Enayat Rajabi;Jairo Francisco de Souza
Although the disappearance of individuals is not a recent phenomenon, it remains a prevalent issue that inflicts significant emotional distress upon the families of the missing. Unfortunately, state action on this matter is lacking in several countries. One promising approach to address this problem involves appealing for information and reaching out to a wider network of individuals who may possess the ability to assist in locating the missing person. Social media platforms, such as Twitter, have proven to be particularly effective in disseminating information. However, effective information dissemination is crucial to raising awareness within the community as a whole. This paper presents a method for identifying influential individuals on Twitter, with a focus on their geographic location, to maximize the diffusion of information about missing persons. Given the significance of the social circles and communities associated with the disappeared individuals, incorporating location data becomes an essential feature in the missing person domain. The contribution of this paper is threefold: (i) a novel method to identify location-aware influencers on Twitter based on an operational research model, (ii) an analysis of the information dissemination using publicly available missing person data collected from Brazilian non-governmental organizations and state websites, and (iii) a new missing person dataset that can serve as a valuable resource for further research.
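As a rough illustration of what location-aware influencer selection can look like, the sketch below scores candidate accounts by audience reach discounted by distance from the case's last known location and keeps the top k. It is a hypothetical toy added here, not the operational-research model the paper proposes; the account data, decay constant, and scoring function are assumptions.

# Hypothetical sketch of location-aware influencer selection (not the paper's model).
import math
from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    followers: int
    lat: float
    lon: float

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def influence_score(acc: Account, case_lat: float, case_lon: float, decay_km: float = 50.0) -> float:
    """Audience reach weighted by exponential decay with distance from the case location."""
    d = haversine_km(acc.lat, acc.lon, case_lat, case_lon)
    return math.log1p(acc.followers) * math.exp(-d / decay_km)

def pick_influencers(accounts, case_lat, case_lon, k=3):
    return sorted(accounts, key=lambda a: influence_score(a, case_lat, case_lon), reverse=True)[:k]

# Example: a case reported near Juiz de Fora, Brazil (approx. -21.76, -43.35).
accounts = [
    Account("local_news", 80_000, -21.76, -43.35),
    Account("national_celebrity", 2_000_000, -23.55, -46.63),  # based in São Paulo
    Account("neighborhood_group", 5_000, -21.74, -43.37),
]
print(pick_influencers(accounts, -21.76, -43.35, k=2))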
{"title":"Who is Going to Help? Detecting Social Media Influencers to Spread Information About Missing Persons","authors":"Victor Stroele;Lorenza Leão Oliveira Moreno;Jorão Gomes;Thalita Thamires de Oliveira Silva;Enayat Rajabi;Jairo Francisco de Souza","doi":"10.1109/TTS.2024.3395175","DOIUrl":"https://doi.org/10.1109/TTS.2024.3395175","url":null,"abstract":"Although the disappearance of individuals is not a recent phenomenon, it remains a prevalent issue that inflicts significant emotional distress upon the families of the missing. Unfortunately, state action about this matter is lacking in several countries. One promising approach to address this problem involves appealing for information and reaching out to a wider network of individuals who may possess the ability to assist in locating the missing person. Social media platforms, such as Twitter, have proven to be particularly effective in disseminating information. However, the effectiveness of information dissemination is crucial to raise awareness within the community as a whole. This paper presents a method for identifying influential individuals on Twitter, with a focus on their geographic location, to maximize the diffusion of information about missing persons. Given the significance of the social circles and communities associated with the disappeared individuals, incorporating location data becomes an essential feature in the missing person domain. The contribution of this paper is threefold: (i) a novel method to identify location-aware influencers on Twitter based on an operational research model, (ii) an analysis of the information dissemination using publicly available missing person data collected from Brazilian non-governmental organizations and state websites, and (iii) a new missing person dataset that can serve as a valuable resource for further research.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"242-251"},"PeriodicalIF":0.0,"publicationDate":"2024-04-29","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964821","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-30  DOI: 10.1109/TTS.2024.3406513
Theodore C. McCullough
An interdisciplinary approach to Artificial Intelligence (AI) and Machine Learning (ML) is necessary to address issues arising from the overlap in the areas of Reinforcement Learning (RL), ethics, and the law. Some types of RL, due to their use of evaluative feedback in combination with function approximation, give rise to new strategies for problem-solving that are not easily foreseen or anticipated, and embody the monkey's paw problem: the RL system grants what one asked for, not what one should have asked for or what one actually intended. Sometimes these new strategies promote a social good, but they may also give rise to outcomes that are not aligned with social goods. Control applications in the form of supervised learning (SL)-based solutions may be used to control for unaligned new strategies. These control applications, however, may introduce bias, such that ethical and legal regimes may need to be put in place to address such biases. These ethical and legal regimes may be based upon generally agreed-upon social conventions, since traditional ethical frameworks such as utilitarianism and deontological ethics may provide an incomplete solution. Further, these social conventions may need to be implemented by people and, ultimately, by the corporations instructing those people on how to perform their jobs.
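The toy sketch below illustrates the monkey's paw problem in miniature: an agent that greedily maximizes the literal (proxy) reward chooses behavior the designer never intended. It is a didactic example added here, not the paper's formalism; the scenario and reward numbers are invented.

# Toy illustration of reward misspecification ("monkey's paw"): the literal
# reward and the designer's intent rank the available actions differently.
# A cleaning robot is rewarded per unit of dirt it picks up.
# Intended behavior: clean the room. Literal optimum: dump dirt and re-collect it.
ACTIONS = {
    "clean_room":         {"dirt_collected": 5,  "room_actually_clean": True},
    "dump_and_recollect": {"dirt_collected": 50, "room_actually_clean": False},
}

def literal_reward(outcome):
    # The specified (proxy) reward: only dirt collected counts.
    return outcome["dirt_collected"]

def intended_value(outcome):
    # What the designer actually wanted but never encoded in the reward.
    return 1.0 if outcome["room_actually_clean"] else 0.0

best_by_reward = max(ACTIONS, key=lambda a: literal_reward(ACTIONS[a]))
best_by_intent = max(ACTIONS, key=lambda a: intended_value(ACTIONS[a]))

print("Agent optimizing the literal reward picks:", best_by_reward)  # dump_and_recollect
print("Designer actually wanted:", best_by_intent)                   # clean_room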
{"title":"Explaining and Exploring Ethical and Trustworthy AI in the Context of Reinforcement Learning","authors":"Theodore C. McCullough","doi":"10.1109/TTS.2024.3406513","DOIUrl":"https://doi.org/10.1109/TTS.2024.3406513","url":null,"abstract":"An interdisciplinary approach to Artificial Intelligence (AI) and Machine Learning (ML) is necessary to address issues arising from the overlap in the areas of Reinforcement Learning (RL), ethics, and the law. Some types of RL, due to their use of evaluative feedback in combination with function approximation, give rise to new strategies for problem-solving that are not easily foreseen or anticipated, and embody the monkey paw problem. This is the problem related to RL that grants what one asked for, and not what one should have asked for or in terms of what was intended. Sometimes these new strategies can be characterized as promoting a social good, but there is the possibility that they could give rise to outcomes that are not aligned with social goods. Control applications in the form of supervised learning (SL)-based solutions may be used to control for unaligned new strategies. These control applications, however, may introduce bias such that ethical and legal regimes may need to be put into place to solve for such biases. These ethical and legal regimes may be based upon generally agreed to social conventions as traditional ethical regimes in the form of utilitarianism and deontological ethics may provide an incomplete solution. Further, these social conventions may need to be implemented by people and ultimately the corporations instructing these people on how to perform their jobs.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"198-204"},"PeriodicalIF":0.0,"publicationDate":"2024-03-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964817","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-24  DOI: 10.1109/TTS.2024.3405309
Wolfgang Koch;Dierk Spreen;Kairi Talves;Wolfgang Wagner;Eleri Lillemäe;Matthias Klaus;Auli Viidalepp;Camilla Guldahl Cooper;Janar Pekarev
In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering, and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened in both the developers and users of artificial intelligent automation. Only then can critical innovations like cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for responsible use of AI-automation. Finally, these developments need to be accompanied by a politically supported open discourse, involving as many stakeholders from diverse backgrounds as possible. This serves as an extensive approach to both manage the risks of these new technologies and prevent exaggerated risk avoidance impeding necessary development.
{"title":"On the Ethics of Employing Artificial Intelligent Automation in Military Operational Contexts","authors":"Wolfgang Koch;Dierk Spreen;Kairi Talves;Wolfgang Wagner;Eleri Lillemäe;Matthias Klaus;Auli Viidalepp;Camilla Guldahl Cooper;Janar Pekarev","doi":"10.1109/TTS.2024.3405309","DOIUrl":"https://doi.org/10.1109/TTS.2024.3405309","url":null,"abstract":"In this paper, we explore the ethical dimension of artificial intelligent automation (often called AI) in military systems engineering, and present conclusions. Morality, ethics, and ethos, as well as technical excellence, need to be strengthened in both the developers and users of artificial intelligent automation. Only then can critical innovations like cognitive and volitive assistance systems or automated weapon systems be wielded efficiently and beneficially within the given legal constraints. Meaningful human control takes center stage here, which we understand in a broad sense as involving both technical controllability and accountability for outcomes. Explainable AI is essential for this task and requires rigorous testing to ensure deliberate decision making by the user. The military and industrial communities must work together to ensure adequate training for responsible use of AI-automation. Finally, these developments need to be accompanied by a politically supported open discourse, involving as many stakeholders from diverse backgrounds as possible. This serves as an extensive approach to both manage the risks of these new technologies and prevent exaggerated risk avoidance impeding necessary development.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"231-241"},"PeriodicalIF":0.0,"publicationDate":"2024-03-24","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964819","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-23  DOI: 10.1109/TTS.2024.3403482
Maya Menon;Marie C. Paretti
Engineering education for sustainable development (EESD) has gained increasing attention since the early 1990s, reflecting the broader integration of sustainable development (SD) principles in education worldwide. While SD has received global support and recognition, its adoption in engineering education varies by country; within the United States, it also varies widely by institution. To better support the widespread, sustainable implementation of EESD, this study examines factors influencing instructors' involvement in EESD via a U.S.-based case study. Drawing upon Lattuca and Pollard's model of instructor decision-making in curricular change, this research characterizes the perspectives of instructors at a large public U.S. university. Using the United Nations Sustainable Development Goals (SDGs) to bound the study and operationalize SD, we explore the external, internal, and individual factors that influence engineering instructors in incorporating the SDGs into their courses. The findings reveal that all three levels of influence are present, but engagement in EESD at the case study site was driven primarily by individual factors, representing a bottom-up phenomenon with limited external and internal supports. Importantly, the findings indicate that while individuals can act as change agents in the absence of strong external and internal influences, their efforts alone may have limited sustained impact on the practice of EESD.
{"title":"Faculty Perspectives on Integrating Sustainable Development Into Engineering Education","authors":"Maya Menon;Marie C. Paretti","doi":"10.1109/TTS.2024.3403482","DOIUrl":"https://doi.org/10.1109/TTS.2024.3403482","url":null,"abstract":"Engineering education for sustainable development (EESD) has gained increasing attention since the early 1990s, reflecting the broader integration of sustainable development (SD) principles in education worldwide. While SD has received global support and recognition, its adoption in engineering education (termed EESD – engineering education for sustainable development) varies by country; within the United States, it also varies widely by institution. To better support the widespread, sustainable implementation of EESD, this study examines factors influencing instructors’ involvement in EESD via a U.S.-based case study. Drawing upon Lattuca and Pollard’s model of instructor decision-making in curricular change, this research characterizes the perspectives of instructors at a large public U.S. university. Using the United Nations Sustainable Development Goals (SDGs) to bound the study and operationalize SD, we explore the external, internal, and individual factors that influence engineering instructors in incorporating the SDGs into their courses. The findings reveal that all three levels of influence are present, but engagement in EESD at the case study site was driven primarily by individual factors, representing a bottom-up phenomenon with limited external and internal supports. Importantly, the findings indicate that while individuals can act as change agents in the absence of strong external and internal influences, their efforts alone may have limited sustained impact on the practice of EESD.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 3","pages":"316-324"},"PeriodicalIF":0.0,"publicationDate":"2024-03-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142235784","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-21  DOI: 10.1109/TTS.2024.3403681
J. Berengueres
Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include: (i) a lack of content or source attribution, and (ii) a lack of transparency in what was used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize a more responsible AI in LLMs. The key finding is that applying safeguards before the training occurs aligns with established engineering practices of addressing issues at the source. However, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.
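As a hypothetical illustration of the "safeguards before training" touchpoint, the sketch below filters a training corpus at the source, keeping only documents with a recorded permissive license and attaching attribution metadata to each record. The field names and license list are assumptions for illustration, not an established standard or the paper's proposal.

# Hypothetical pre-training safeguard: filter the corpus before any model sees it,
# keeping provenance (attribution) with every retained document.
PERMISSIVE_LICENSES = {"CC-BY", "CC0", "public-domain"}

def prepare_training_corpus(raw_documents):
    """Yield training records that pass a license check and carry attribution."""
    for doc in raw_documents:
        license_tag = doc.get("license")
        if license_tag not in PERMISSIVE_LICENSES:
            continue  # excluded at the source, before training ever occurs
        yield {
            "text": doc["text"],
            "attribution": {"source_url": doc.get("url"), "license": license_tag},
        }

raw = [
    {"text": "An openly licensed article...", "url": "https://example.org/a", "license": "CC-BY"},
    {"text": "A scraped page with unknown rights...", "url": "https://example.org/b", "license": None},
]
corpus = list(prepare_training_corpus(raw))
print(f"kept {len(corpus)} of {len(raw)} documents")  # kept 1 of 2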
{"title":"How to Regulate Large Language Models for Responsible AI","authors":"J. Berengueres","doi":"10.1109/TTS.2024.3403681","DOIUrl":"https://doi.org/10.1109/TTS.2024.3403681","url":null,"abstract":"Large Language Models (LLMs) are predictive probabilistic models capable of passing several professional tests at a level comparable to humans. However, these capabilities come with ethical concerns. Ethical oversights in several LLM-based products include: (i) a lack of content or source attribution, and (ii) a lack of transparency in what was used to train the model. This paper identifies four touchpoints where ethical safeguards can be applied to realize a more responsible AI in LLMs. The key finding is that applying safeguards before the training occurs aligns with established engineering practices of addressing issues at the source. However, this approach is currently shunned. Finally, historical parallels are drawn with the U.S. automobile industry, which initially resisted safety regulations but later embraced them once consumer attitudes evolved.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"191-197"},"PeriodicalIF":0.0,"publicationDate":"2024-03-21","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10536000","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964839","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-20  DOI: 10.1109/TTS.2024.3403412
Clinton J. Andrews
If people want the benefits of innovations, must they simply accept the unintended adverse consequences? Versions of this question haunt many who care about the social implications of technology. Technological design processes could include impact assessment steps, but not all do. Adoption in the marketplace may ignore spillover effects. Jurisprudence is often reactive and focused on remediating obvious wrongs. Public policy also often requires evidence of harm before legislators or administrators are willing to act. The failure to anticipate adverse consequences is sometimes framed as a moral lapse, but it could equally be about competence or incentives. This paper considers the relative merits of methodology (analogizing, interpolating, projecting) and procedure (reflecting, reasoning, discourse) as systematic approaches to anticipating unintended consequences of innovation. It weighs the efficacy of such approaches against current reactive remedies, highlighting the importance of tailoring the approach to context and building in early learning opportunities (observing and testing). Several examples suggest that society is often playing catch-up, trying to avoid adverse consequences only after the innovation is widely deployed rather than before it is initially introduced.
{"title":"Better Anticipating Unintended Consequences","authors":"Clinton J. Andrews","doi":"10.1109/TTS.2024.3403412","DOIUrl":"https://doi.org/10.1109/TTS.2024.3403412","url":null,"abstract":"If people want the benefits of innovations, must they simply accept the unintended adverse consequences? Versions of this question haunt many who care about the social implications of technology. Technological design processes could include impact assessment steps, but not all do. Adoption in the marketplace may ignore spillover effects. Jurisprudence is often reactive and focused on remediating obvious wrongs. Public policy also often requires evidence of harm before legislators or administrators are willing to act. The failure to anticipate adverse consequences is sometimes framed as a moral lapse, but it could equally be about competence or incentives. This paper considers the relative merits of methodology (analogizing, interpolating, projecting,) and procedure (reflecting, reasoning, discourse) as systematic approaches to anticipating unintended consequences of innovation. It weighs the efficacy of such approaches against current reactive remedies, highlighting the importance of tailoring approach to context, and building in early learning opportunities (observing and testing). Several examples suggest that society is often playing catch-up and trying to avoid adverse consequences before the innovation is widely deployed rather than before it is initially introduced.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 2","pages":"205-216"},"PeriodicalIF":0.0,"publicationDate":"2024-03-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10535391","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141964818","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"OA","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Pub Date: 2024-03-18  DOI: 10.1109/TTS.2024.3378057
Erica O’Neil;Elizabeth Grumbach;Gaymon Bennett;Elizabeth Langland
The Human(e) Technology Design Studio is a discourse-driven, action-oriented modality developed by the Lincoln Center for Applied Ethics at Arizona State University to shape generative opportunities for critical technology discussions with user groups closest to the problem. We outline the rationale for the creation of this modality, with theoretical commitments rooted in the domains of participatory action research and co-creation, as well as the design aspirations informing the studios’ rhythms of insight identification, integration, and activation. We then present a detailed case study of this model that outlines the collective insights and actions generated by our first cohort of academics and technologists across six Design Studios, which culminated in the creation of a Humane Tech Oracle Deck. That two-year process allowed us to iterate the model in response to challenges, as we now move toward creating a public Design Studio toolkit.
{"title":"The Human(e) Technology Design Studios: An Action-Oriented, Co-Creative Modality for Centering the Human in Critical Technology Discussions","authors":"Erica O’Neil;Elizabeth Grumbach;Gaymon Bennett;Elizabeth Langland","doi":"10.1109/TTS.2024.3378057","DOIUrl":"https://doi.org/10.1109/TTS.2024.3378057","url":null,"abstract":"The Human(e) Technology Design Studio is a discourse-driven, action-oriented modality developed by the Lincoln Center for Applied Ethics at Arizona State University to shape generative opportunities for critical technology discussions with user groups closest to the problem. We outline the rationale for the creation of this modality, with theoretical commitments rooted in the domains of participatory action research and co-creation, as well as the design aspirations informing the studios’ rhythms of insight identification, integration, and activation. We then present a detailed case study of this model that outlines the collective insights and actions generated by our first cohort of academics and technologists across six Design Studios, which culminated in the creation of a Humane Tech Oracle Deck. That two-year process allowed us to iterate the model in response to challenges, as we now move toward creating a public Design Studio toolkit.","PeriodicalId":73324,"journal":{"name":"IEEE transactions on technology and society","volume":"5 1","pages":"24-35"},"PeriodicalIF":0.0,"publicationDate":"2024-03-18","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"141164730","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}