Sleep plays an integral role in human health and is vitally important for neurological development in infants. In this study, we propose the PneuMat, an interactive shape-changing system integrating sensors and pneumatic drives, to help ensure sleep safety through novel human-computer interaction. The system comprises sensor units, control units and inflatable units. The sensor units mediate information exchange between the infant and the system: they detect the infant's sleeping posture and send the raw data to the control units. For a better sleep experience, the inflatable units are divided into nine areas; these areas are multi-mode and can be inflated independently or in combination. We aim to ensure sleep safety by autonomously actuating the PneuMat's shape-changing capability so that infants stay in a safe sleeping position while in bed. In this article, we describe the area division of the PneuMat, the design of the control unit, the integration of the sensors, and our preliminary experiments to evaluate the feasibility of our interaction system. Finally, based on the results, we discuss future work involving the PneuMat.
Yijun Zhao, Yong Shen, Xiaoqing Wang, Jiacheng Cao, Shang Xia, Fangtian Ying, and Guanyun Wang. "PneuMat: Pneumatic Interaction System for Infant Sleep Safety Using Shape-Changing Interfaces." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3451597
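The sense-classify-actuate loop the abstract describes can be sketched as follows. This is a minimal illustration only: the 3x3 zone indexing, the posture labels, the pressure-based heuristic, and the inflate-to-reposition policy are all assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch of PneuMat's loop: sensors -> control unit -> inflatable zones.
SAFE_POSTURES = {"supine"}

# Nine inflatable areas, as in the paper; a 3x3 (row, column) layout is assumed.
ZONES = [(r, c) for r in range(3) for c in range(3)]

def classify_posture(pressure_grid):
    """Toy classifier: a much heavier reading on one side suggests side-lying."""
    left = sum(pressure_grid[r][0] for r in range(3))
    right = sum(pressure_grid[r][2] for r in range(3))
    if left > 2 * right:
        return "left-side"
    if right > 2 * left:
        return "right-side"
    return "supine"

def zones_to_inflate(posture):
    """Hypothetical policy: inflate the column the infant is rolling toward,
    nudging them back to a supine position; do nothing if already safe."""
    if posture == "left-side":
        return [(r, 0) for r in range(3)]
    if posture == "right-side":
        return [(r, 2) for r in range(3)]
    return []

def control_step(pressure_grid):
    """One iteration of the control loop: classify, then choose zones."""
    posture = classify_posture(pressure_grid)
    return posture, zones_to_inflate(posture)
```

The multi-mode behaviour from the abstract corresponds to `zones_to_inflate` returning either a single zone, several zones together, or none.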
HCI research has long been dedicated to facilitating information transfer between humans and machines better and more naturally. Unfortunately, humans' most natural form of communication, speech, is also one of the most difficult modalities for machines to understand – despite, and perhaps because, it is the highest-bandwidth communication channel we possess. While significant research efforts, from engineering to linguistics to the cognitive sciences, have been spent on improving machines' ability to understand speech, the CHI community (and the HCI field at large) has only recently started embracing this modality as a central focus of research. This can be attributed in part to the unexpected variations in error rates when processing speech, in contrast with often-unfounded claims of success from industry, but also to the intrinsic difficulty of designing, and especially evaluating, speech and natural language interfaces. As such, the development of interactive speech-based systems is mostly driven by engineering efforts to improve such systems with respect to largely arbitrary performance metrics. Such developments have often been void of any user-centered design principles or consideration for usability or usefulness, in the way that graphical user interfaces have benefited from heuristic design guidelines. The goal of this course is to inform the CHI community of the current state of speech and natural language research, to dispel some of the myths surrounding speech-based interaction, and to provide an opportunity for researchers and practitioners to learn how speech recognition and speech synthesis work, what their limitations are, and how they could be used to enhance current interaction paradigms. Through this, we hope that HCI researchers and practitioners will learn how to combine recent advances in speech processing with user-centred principles to design more usable and useful speech-based interactive systems.
Cosmin Munteanu, Gerald Penn, and Christine Murad. "Conversational Voice User Interfaces: Connecting Engineering Fundamentals to Design Considerations." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3445008
Existing artificial skin interfaces lack on-skin compute that can provide fast neural network inference for time-critical application scenarios. In this paper, we propose AI-on-skin, a wearable artificial skin interface integrated with a neural network hardware accelerator that can be reconfigured across diverse neural network models and applications. AI-on-skin is designed to scale to the entire body, comprising tiny, low-power accelerators distributed across the body. We built a prototype of AI-on-skin that covers the entire forearm (17 by 10 cm) based on off-the-shelf FPGAs. Our electronic-skin prototype can perform (a) handwriting recognition with 96% accuracy, (b) gesture recognition with 95% accuracy and (c) handwritten word recognition with 93.5% accuracy. AI-on-skin achieves 20X and 35X speedups over off-body inference via Bluetooth and an on-body microcontroller-based inference approach, respectively. To the best of our knowledge, AI-on-skin is the first wearable prototype to demonstrate skin interfaces with on-body neural network inference.
A. N. Balaji and L. Peh. "AI-on-skin: Enabling On-body AI Inference for Wearable Artificial Skin Interfaces." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3451689
Artificial Intelligence (AI)'s impact on our lives is far-reaching – as AI systems proliferate in high-stakes domains such as healthcare, finance, mobility and law, these systems must be able to explain their decisions comprehensibly to diverse end-users. Yet the discourse of Explainable AI (XAI) has been predominantly focused on algorithm-centered approaches, suffering from gaps in meeting user needs and exacerbating issues of algorithmic opacity. To address these issues, researchers have called for human-centered approaches to XAI. There is a need to chart the domain and shape the discourse of XAI with reflective discussions from diverse stakeholders. The goal of this workshop is to examine how human-centered perspectives in XAI can be operationalized at the conceptual, methodological, and technical levels. Encouraging holistic (historical, sociological, and technical) approaches, we put an emphasis on “operationalizing”, aiming to produce actionable frameworks, transferable evaluation methods, concrete design guidelines, and a coordinated research agenda for XAI.
Upol Ehsan, Philipp Wintersberger, Q. Liao, Martina Mara, M. Streit, Sandra Wachter, A. Riener, and Mark O. Riedl. "Operationalizing Human-Centered Perspectives in Explainable AI." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3441342
We often have questions about the processes and things we observe in our surroundings, yet there is little practical support for exploring these questions. Exploring such curiosity can lead to learning new science concepts. We propose a post-event recall and reflection approach to support curiosity-inspired learning in everyday life. This approach uses wearables to capture contextual cues at the moment of curiosity in daily life, and later uses those cues for recall and focused reflection. First, we conducted a preliminary study to explore different cues and their effectiveness in recalling curiosity moments. We then conducted a virtual study to evaluate the amount of exploration under post-event recall and reflection, compared with in-situ recall and reflection. Results show a significant increase in questions and reflections with the post-event approach, providing evidence for better learning outcomes from everyday curiosity.
Neha Rani, Sharon Lynn Chu Yew Yee, Yvette G. Williamson, and Sindy Wu. "Curiosity-Inspired Learning: Insitu versus Post-Event Approaches to Recall and Reflection." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3451715
The home is a place of shelter, a place for family, and a place of separation from other parts of life, such as work. Global challenges, currently the most pressing being the COVID-19 pandemic and climate change, have forced extra roles into many homes and will continue to do so in the future. Biodesign integrates living organisms into designed solutions and can offer opportunities for new kinds of technologies to facilitate a transition to the home of the future. Many families have had to learn to work alongside each other, and technology has mediated a transition away from standard models of operation for industries. These are the challenges of the 21st century that demand careful thinking about interactive systems and innovations that support new ways of living and working at home. In this workshop, we will explore opportunities for biodesign interactive systems in the future home. We will bring together a broad group of researchers in HCI, design, and the biosciences to build the biodesign community and discuss speculative design futures. The outcome will be an understanding of the role of interactive biodesign systems in the home as a place with extended functionalities.
Phillip Gough, J. Forman, Pat Pataranutaporn, L. Hepburn, Carolina Ramirez Figueroa, Clare Cooper, Angela Vujic, D. S. Kong, Raphael Kim, P. Maes, Hiroshi Ishii, Misha Sra, and N. Ahmadpour. "Speculating on Biodesign in the Future Home." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3441353
Visualizations are now widely adopted across disciplines, providing effective means to understand and communicate data. However, people still frequently create misleading visualizations that distort the underlying data and ultimately misinform the audience. While design guidelines exist, they are currently scattered across different sources and devised by different people, often missing design trade-offs in different contexts and providing inconsistent and conflicting design knowledge to visualization practitioners. Our goal in this work is to investigate the ontology of visualization design guidelines and derive a unified framework for structuring the guidelines. We collected existing guidelines on the web and analyzed them using the grounded theory approach. We describe the current landscape of the available guidelines and propose a structured template for describing visualization design guidelines.
Jinhan Choi, Changhoon Oh, B. Suh, and N. Kim. "Toward a Unified Framework for Visualization Design Guidelines." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3451702
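A structured template for design guidelines, as the abstract above proposes, might look like the record below. The field names and the example guideline are illustrative assumptions, not the template or data from the paper.

```python
# Hypothetical structured record for a visualization design guideline,
# capturing the rule, its rationale, its scope, and known trade-offs.
from dataclasses import dataclass, field

@dataclass
class DesignGuideline:
    statement: str                                   # the prescriptive rule itself
    rationale: str                                   # why the rule holds
    scope: str                                       # chart types / contexts it applies to
    trade_offs: list = field(default_factory=list)   # exceptions and conflicting advice
    sources: list = field(default_factory=list)      # where the guideline was collected

# Example entry (illustrative content, not from the paper's corpus).
truncated_axis = DesignGuideline(
    statement="Bar charts should start their value axis at zero.",
    rationale="Truncated axes exaggerate differences between bars.",
    scope="bar charts",
    trade_offs=["Line charts may truncate the axis to reveal small variation."],
    sources=["example web guideline"],
)
```

Making trade-offs and sources explicit fields is one way a template could surface the inconsistent and conflicting advice the abstract identifies across scattered sources.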
Learners consume video-based learning content on various mobile devices due to their mobility and accessibility. However, most video-based learning content is originally designed for desktop without consideration of constraints in mobile learning environments. We focus on readability and visibility problems caused by visual design elements such as text and images on varying screen sizes. To reveal design issues of current content, we examined mobile learning adequacy of content with 681 video frames from 108 video lectures. The content analysis revealed a distribution and guideline compliance rate of visual design elements. We also conducted semi-structured interviews with six video production engineers to investigate current practices and challenges in content design for mobile devices. Based on the interview results, we present a prototype that supports a guideline-based design of video learning content. Our findings can inform engineers and design tool makers on the challenges of editing mobile video-based learning content for accessible and adaptive design across devices.
Jeongyeon Kim and Juho Kim. "Guideline-Based Evaluation and Design Opportunities for Mobile Video-based Learning." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3451725
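A guideline compliance check of the kind described above can be sketched as a simple scaling computation: text authored at desktop resolution shrinks when the frame is fit to a phone screen. The 2 mm minimum and the fill-screen-height model are assumptions for illustration, not the paper's actual guideline or tool.

```python
# Hypothetical readability check: is on-screen text physically large enough
# once a lecture frame is scaled to a given device's display?
MIN_TEXT_HEIGHT_MM = 2.0  # assumed threshold, not taken from the paper

def displayed_text_mm(text_px, source_height_px, screen_height_mm):
    """Physical height of text when the source frame is scaled to fill the
    screen height (letterboxing and safe margins ignored for simplicity)."""
    return text_px / source_height_px * screen_height_mm

def passes_guideline(text_px, source_height_px, screen_height_mm):
    """True if the rendered text meets the assumed minimum height."""
    return displayed_text_mm(text_px, source_height_px, screen_height_mm) >= MIN_TEXT_HEIGHT_MM
```

For example, 24 px text in a 1080 px-tall frame renders at roughly 1.4 mm on a 65 mm-tall phone display (failing the assumed threshold) but about 4.2 mm on a 190 mm laptop display, which is the desktop-versus-mobile gap the abstract describes.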
While the accuracy of Natural Language Processing (NLP) models has kept improving, users' expectations go beyond accuracy alone. Despite practitioners' attempts to inspect model blind spots or missing capabilities, the status-quo processes can be ad hoc and biased. My thesis focuses on helping practitioners organize and explore the inputs and outputs of their models, such that they can gain more systematic insights into their models' behaviors. I identified two building blocks that are essential for informative analysis: (1) scaling up the analysis by grouping similar instances, and (2) isolating important components by generating counterfactuals. To support multiple analysis stages (training data assessment, error analysis, model testing), I designed various interactive tools that instantiate these two building blocks. In the process, I characterized the design space of grouping and counterfactual generation, seeking to balance machine power and practitioners' domain expertise. My future work will explore how the grouping and counterfactual techniques can benefit non-experts in the data collection process.
Tongshuang Wu. "Principles and Interactive Tools for Evaluating and Improving the Behavior of Natural Language Processing models." Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, May 2021. https://doi.org/10.1145/3411763.3443423
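The two building blocks named in the abstract above can be illustrated on a toy sentiment task: grouping instances by a shared surface feature to scale up error analysis, and generating counterfactuals by minimally perturbing that feature. The negation rule and the example data are illustrative assumptions, not the thesis tooling.

```python
# Toy instantiation of the two building blocks from the abstract.
from collections import defaultdict

NEGATIONS = {"not", "never", "no"}

def group_by_negation(examples):
    """Building block 1: bucket (text, gold, pred) triples by whether the
    text contains a negation token, exposing error patterns at group level."""
    groups = defaultdict(list)
    for text, gold, pred in examples:
        key = "negated" if NEGATIONS & set(text.lower().split()) else "plain"
        groups[key].append((text, gold, pred))
    return groups

def counterfactual(text):
    """Building block 2: a minimal perturbation that drops negation words,
    isolating their effect on a model's prediction."""
    return " ".join(w for w in text.split() if w.lower() not in NEGATIONS)

# Illustrative (text, gold label, model prediction) triples.
examples = [
    ("the film was not good", "neg", "pos"),
    ("the film was good", "pos", "pos"),
]
```

Comparing a model's prediction on an instance and on its counterfactual (here, "the film was not good" versus "the film was good") is one way to test whether the model is actually sensitive to negation.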
I remember when I started my DPhil studies joking with friends that my research was improving the sum total of human happiness – by one. I was enjoying the work. It was as a post-doc, however, that I began to see how knowledge, or at the very least the pursuit of knowledge, is not neutral. The things that we choose to study, the problems that we choose to focus on, and the way we frame our questions, lead towards different benefits for different interests. If we want to make a better world, then perhaps we should focus on asking better questions. One question that was a turning point for me was posed by Steve Walker in 2002. In a world where e-commerce and e-government already had thriving, well-financed research communities, he convened a workshop asking “Can there be a Social Movement Informatics?” The topics ranged from designing with voluntary organizations and trade-unions, to investigating hate speech in Internet bulletin boards and chat rooms. Together with colleagues, we ran projects around “Design for Civil Society” and “Technology and Social Action”, exploring how we as technologists, designers and researchers can connect and collaborate more effectively with groups promoting social change. Following on from that work, I won an opportunity to explore how participatory approaches in international social and economic development relate to understandings of participatory design in HCI. Working with the Sironj Crop Producers Company Ltd (a co-operative of small and marginal farmers in Madhya Pradesh, India) and Safal Solutions (a small software house focused on rural development, based in Telangana, India), this was my first attempt to apply participatory design methods in a context with very limited infrastructure and resources. How can we facilitate meaningful communications about priorities and possibilities across wide social, cultural, geographical, linguistic, experiential and economic divides?
How does the way we arrange, organize and conduct projects aiming to advance ‘development’ affect the outputs, the outcomes and the impacts that are achieved? How can agency, creativity and control be shared in ways that move systems towards a more just world? I don't know all the answers to those questions, but I have learned that the inequalities of this world are far greater than I had originally imagined. I started with high hopes that expertise in participatory design, together with a commitment to participatory development, would deliver radical results. I discovered that true participation and reciprocity are tougher than I thought. We cannot communicate effectively across such huge social divides without questioning, acknowledging and responding to our own positionality in the wider context. For example, we should ask how our own actions are contributing to harming others, such as the millions who will become, or are already, climate refugees. A few short-term “bungee research” visits will not lead us to real understanding. When key decision-making power remains in the usual centers of power, such visits only reinforce neocolonial arrangements and entrench the very marginalization we say we want to change. To create a future for humanity as part of life on Earth, we must see behavioral change in those close to the centers of power – and that includes ourselves. We are already caught up in an unjust system of socio-economic relations. The “problem” is not something “out there”; it is also “in here”, all around us. Are we asking the questions that really matter? 
{"title":"SIGCHI Social Impact Award: Asking Better Questions","authors":"A. Dearden","doi":"10.1145/3411763.3457779","DOIUrl":"https://doi.org/10.1145/3411763.3457779","url":null,"abstract":"I remember when I started my DPhil studies joking with friends that my research was improving the sum total of human happiness – by one. I was enjoying the work. It was as a post-doc, however, that I began to see how knowledge, or at the very least the pursuit of knowledge, is not neutral. The things that we choose to study, the problems that we choose to focus on, and the way we frame our questions, lead towards different benefits for different interests. If we want to make a better world, then perhaps we should focus on asking better questions. One question that was a turning point for me was posed by Steve Walker in 2002. In a world where e-commerce and e-government already had thriving, well-financed research communities, he convened a workshop asking “Can there be a Social Movement Informatics?” The topics ranged from designing with voluntary organizations and trade-unions, to investigating hate speech in Internet bulletin boards and chat rooms. Together with colleagues, we ran projects around “Design for Civil Society”, and “Technology and Social Action”, exploring how we as technologists, designers and researchers can connect and collaborate more effectively with groups promoting social change. Following on from that work, I won an opportunity to explore how participatory approaches in international social and economic development relate to understandings of participatory design in HCI. Working with the Sironj Crop Producers Company Ltd (a co-operative of small and marginal farmers in Madhya Pradesh, India) and Safal Solutions (a small software house focused on rural development, based in Telengana, India), this was my first attempt to apply participatory design methods in a context with very limited infrastructure and resources. 
How can we facilitate meaningful communications about priorities and possibilities across wide social, cultural, geographical, linguistic, experiential and economic divides? How does the way we arrange, organize and conduct projects aiming to advance ‘development’ affect the outputs, the outcomes and the impacts that are achieved? How can agency, creativity and control be shared in ways that move systems towards a more just world? I don't know all the answers to those questions, but I have learned that the inequalities of this world are far greater than I had originally imagined. I started with high hopes that expertise in participatory design, together with a commitment to participatory development would deliver radical results. I discovered that true participation and reciprocity is tougher than I thought. We cannot communicate effectively across such huge social divides without questioning, acknowledging and responding to our own positionality in the wider context. For example, we should ask how our own actions are contributing to harming others, such as the millions who will become, or are already, climate refugees? A few short-term “bungee research” visits will not lead us to real understanding. When key deci","PeriodicalId":265192,"journal":{"name":"Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2021-05-08","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"131833423","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}