This paper reports on a qualitative research study that explored the practical and emotional experiences of young people aged 13–17 who use algorithmically mediated online platforms. It demonstrates a Responsible Innovation (RI)-based methodology for responsible two-way dialogue with the public, listening to young people's needs and responding to their concerns. Participants discussed in detail how online algorithms work, enabling the young people to reflect, question, and develop their own critiques of issues related to the use of internet technologies. The paper closes with action areas proposed by the young people for a fairer, usefully transparent, and more responsible online environment. These include a desire to be informed about what data (both personal and situational) is collected and how, and who uses it and why, as well as policy recommendations for meaningful algorithmic transparency and accountability. Finally, participants argued that whilst transparency is an important first principle, they also need more control over how platforms use the information they collect from users, along with stronger regulation to ensure that transparency is both meaningful and sustained.
The Responsible Innovation (RI) approach aims to make research and development (R&D) more anticipatory, inclusive, reflective, and responsive. This study highlights the challenges of embedding RI in R&D practices. We fostered collective learning on RI in a socially assistive robot development project by applying participatory action research (PAR). Within the PAR, we employed a mixed-methods approach, combining interviews, workshops, and online questionnaires, to collectively explore opportunities for RI and to elicit team members' perceptions, opinions, and beliefs about it. Our PAR led to modest yet deliberate efforts to address particular concerns regarding, for instance, privacy, control, and energy consumption. However, we also found that the embedding of RI in R&D practices can be hampered by four partly interrelated barriers: the lack of an action perspective, the noncommittal nature of RI, the misconception that co-design equals RI, and limited integration between different R&D task groups. In this paper, we discuss the implications of these barriers for R&D teams and funding bodies, and recommend PAR as a means of addressing them.
Automated Facial Analysis technologies, predominantly used for face detection and recognition, have garnered significant attention in recent years. Although these technologies have seen advances and widespread adoption, the biases embedded within them have raised ethical concerns. This research examines the disparities produced by Automatic Gender Recognition (AGR) systems, particularly their oversimplification of gender identities through a binary lens, a reductionist perspective known to marginalize and misgender individuals. The study investigated how an individual's gender identity, and its expression through the face, aligns with societal norms, and how misgendering by machines is perceived differently from misgendering by humans. Insights were gathered through an online survey that used an AGR system to simulate misgendering experiences. The overarching goal is to shed light on the nuances of gender identity and to guide the creation of more ethically responsible and inclusive facial recognition software.