
Latest publications: Journal of responsible technology

Start doing the right thing: Indicators for socially responsible start-ups and investors
Pub Date : 2024-09-27 DOI: 10.1016/j.jrt.2024.100094
This paper addresses the gap in the literature on social responsibility guidance for start-ups and start-up investors. It begins by evaluating research from two fields, socially responsible investment (SRI) and responsible research and innovation (RRI), and how each can guide social responsibility in STEM (Science, Technology, Engineering, Mathematics) start-ups. To do this, we evaluate an industry-standard SRI catalogue of metrics, the Global Impact Investing Network's (GIIN) Impact Reporting and Investment Standards (IRIS+), alongside indicators from 12 EC-funded RRI projects. Based on this analysis, we propose a framework of 24 indicators for assessing the social responsibility of start-ups and investors. The framework's purpose is twofold: first, to provide clear guidance for start-ups aiming to implement socially responsible behaviours; second, to give start-up investors criteria for identifying whether start-ups are socially responsible. While the indicators are phrased prescriptively for start-ups, investors can also use them to check whether start-ups are implementing the indicators in practice.
Citations: 0
Virtual Social Labs – Requirements and Challenges for Effective Team Collaboration
Pub Date : 2024-09-19 DOI: 10.1016/j.jrt.2024.100095
In response to the challenges posed by the complex field of food safety, the FOODSAFETY4EU project established four social labs conducting multi-actor co-creation processes. These labs served as platforms for developing and piloting innovative ideas aimed at addressing these challenges. Due to the COVID-19 pandemic, the lab process, typically held in person, had to be moved to a virtual setting. As a result, all workshops, meetings, collaboration processes, and pilot activities took place entirely online, creating the novel situation of teams collaborating virtually throughout the entire social lab process. Virtual collaboration was already on the rise before the pandemic, evidenced by an increase in virtual meetings and workshops. This study examines the requirements and challenges for effective team collaboration in virtual social lab processes, investigating virtual collaboration, team dynamics, and the use of online tools. Findings reveal advantages such as increased participation, but also drawbacks including technical issues and unclear role accountability. Despite these challenges, all four virtual social labs ultimately succeeded in engaging diverse stakeholders and achieving significant outcomes addressing food safety challenges.
Citations: 0
A call to action: Designing a more transparent online world for children and young people
Pub Date : 2024-09-01 DOI: 10.1016/j.jrt.2024.100093

This paper reports on a qualitative research study that explored the practical and emotional experiences of young people aged 13–17 using algorithmically-mediated online platforms. It demonstrates an RI-based methodology for responsible two-way dialogue with the public, through listening to young people's needs and responding to their concerns. Participants discussed in detail how online algorithms work, enabling the young people to reflect, question, and develop their own critiques on issues related to the use of internet technologies. The paper closes with action areas proposed by the young people for a fairer, meaningfully transparent, and more responsible online environment. These actions include a desire to be informed about what data (both personal and situational) is collected, how, by whom, and why, as well as policy recommendations for meaningful algorithmic transparency and accountability. Finally, participants argued that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, including more regulation to ensure transparency is both meaningful and sustained.

Citations: 0
Pub Date : 2024-08-28 DOI: 10.1016/j.jrt.2024.100092
Citations: 0
Embedding responsible innovation into R&D practices: A case study of socially assistive robot development
Pub Date : 2024-08-10 DOI: 10.1016/j.jrt.2024.100091

The Responsible Innovation (RI) approach aims to make research and development (R&D) more anticipatory, inclusive, reflective, and responsive. This study highlights the challenges of embedding RI in R&D practices. We fostered collective learning on RI in a socially assistive robot development project by applying participatory action research (PAR). In the PAR, we employed a mixed-methods approach, combining interviews, workshops, and online questionnaires, to collectively explore opportunities for RI and elicit team members' perceptions, opinions, and beliefs about RI. Our PAR led to some modest yet purposeful, deliberate efforts to address particular concerns regarding, for instance, privacy, control, and energy consumption. However, we also found that the embedding of RI in R&D practices can be hampered by four partly interrelated barriers: lack of an action perspective, the noncommittal nature of RI, the misconception that co-design equals RI, and limited integration between different R&D task groups. In this paper, we discuss the implications of these barriers for R&D teams and funding bodies, and we recommend PAR as a solution to address them.

Citations: 0
Jürgen Habermas revisited via Tim Cook's Wikipedia biography: A hermeneutic approach to critical Information Systems research
Pub Date : 2024-08-03 DOI: 10.1016/j.jrt.2024.100090
Critical Information Systems (IS) research is sometimes appreciated for the shades of gray it adds to sunny portraits of technology's emancipatory potential. In this article, we revisit a theory about Wikipedia’s putative freedom from the authority of corporate media's editors and authors. We present the curious example of Tim Cook's Wikipedia biography and its history of crowd-sourced editorial decisions, published on Wikipedia's talk pages. We use a hermeneutic method to subject the theory about Wikipedia's “rational discourse” and “emancipatory potential” to a soft, empirical test. When we examined Cook's Wikipedia biography and its editorial decisions, what we found pertained to authoritative discourse – the opposite of “rational discourse” – as well as Jürgen Habermas's concept of dramaturgical action. Our discussion aims to change how critical scholars think about IS's Habermasian theories and emancipatory technology. Our contribution – a critical intervention – is a clear alternative to mainstream IS research's moral prescriptions and mechanistic causes.
Citations: 0
Decoding faces: Misalignments of gender identification in automated systems
Pub Date : 2024-06-17 DOI: 10.1016/j.jrt.2024.100089

Automated Facial Analysis technologies, predominantly used for facial detection and recognition, have garnered significant attention in recent years. Although these technologies have advanced and been widely adopted, biases embedded within the systems have raised ethical concerns. This research delves into the disparities of Automatic Gender Recognition systems (AGRs), particularly their oversimplification of gender identities through a binary lens. Such a reductionist perspective is known to marginalize and misgender individuals. This study investigates how an individual's gender identity, and its expression through the face, aligns with societal norms, and the perceived difference between being misgendered by machines versus by humans. Insights were gathered through an online survey that used an AGR system to simulate misgendering experiences. The overarching goal is to shed light on the nuances of gender identity and to guide the creation of more ethically responsible and inclusive facial recognition software.

Citations: 0
Infrastructural justice for responsible software engineering
Pub Date : 2024-06-04 DOI: 10.1016/j.jrt.2024.100087
Sarah Robinson, Jim Buckley, Luigina Ciolfi, Conor Linehan, Clare McInerney, Bashar Nuseibeh, John Twomey, Irum Rauf, John McCarthy

In recent years, we have seen many examples of software products unintentionally causing demonstrable harm. Many guidelines for ethical and responsible computing have been developed in response. Dominant approaches typically attribute liability and blame to individual companies or actors, rather than examining how the working practices, norms, and cultural understandings of the software industry contribute to such outcomes. In this paper, we propose an understanding of responsibility that is infrastructural, relational, and cultural, thus providing a foundation to better enable responsible software engineering into the future. Our approach draws on Young's (2006) social connection model of responsibility and Star and Ruhleder's (1994) concept of infrastructure. By bringing these theories together we introduce a concept called infrastructural injustice, which offers software engineers a new way to consider their opportunities for responsible action with respect to society and the planet. We illustrate the utility of this approach by applying it to an open-source software community's development of Deepfake technology, finding key leverage points of responsibility relevant both to Deepfake technology and to software engineering more broadly.

Citations: 0
European technological protectionism and the risk of moral isolationism: The case of quantum technology development
Pub Date : 2024-06-01 DOI: 10.1016/j.jrt.2024.100084
Clare Shelley-Egan, Pieter Vermaas

In this editorial, we engage with the European Commission's 2023 recommendation calling for risk assessment with Member States on four critical technology areas, including quantum technology. Particular emphasis is placed on the risks associated with technology security and technology leakage, risks that may prompt protectionist measures. Mobilising European normative anchor points that inform the "right impacts" of research and innovation, we argue that a protectionist approach on the part of the European Union can lead to moral isolationism. This, in turn, can limit Europe's contribution to global development with respect to technological advances, sustainable development and quality of life. We contend that decisions on protectionism around quantum technology should not be made with a protectionist mindset about European values.

Citations: 0
Enabling affordances for AI Governance
Pub Date : 2024-05-15 DOI: 10.1016/j.jrt.2024.100086
Siri Padmanabhan Poti, Christopher J Stanton

Organizations dealing with mission-critical AI-based autonomous systems may need to provide continuous risk management controls and establish means for their governance. To achieve this, organizations must embed trustworthiness and transparency in these systems, with human oversight and accountability. Autonomous systems gain trustworthiness, transparency, quality, and maintainability through the assurance of outcomes, explanations of behavior, and interpretations of intent. However, technical, commercial, and market challenges during the software development lifecycle (SDLC) of autonomous systems can lead to compromises in their quality, maintainability, interpretability, and explainability. This paper conceptually models a transformation of the SDLC to enable affordances for assurance, explanations, interpretations, and overall governance in autonomous systems. We argue that opportunities for transforming the SDLC are available through concerted interventions such as technical debt management, a shift-left approach, and non-ephemeral artifacts. This paper contributes to the theory and practice of governance of autonomous systems, and to building trustworthiness incrementally and hierarchically.

Cited by: 0