Commanding and Re-Dictation
Debjyoti Ghosh, Can Liu, Shengdong Zhao, Kotaro Hara
Existing voice-based interfaces have limited support for text editing, especially when seeing the text is difficult, e.g., while walking or cooking. This research develops voice interaction techniques for eyes-free text editing. First, with a Wizard-of-Oz study, we identified two primary user strategies: using commands, e.g., “replace go with goes” and re-dictating over an erroneous portion, e.g., correcting “he go there” by saying “he goes there.” To support these user strategies with an actual system implementation, we developed two eyes-free voice interaction techniques, Commanding and Re-dictation, and evaluated them with a controlled experiment. Results showed that while Re-dictation performs significantly better for more semantically complex edits, Commanding is more suitable for making one-word edits, especially deletions. We developed VoiceRev to combine both the techniques in the same interface and evaluated it with realistic tasks. Results showed improved usability of the combined techniques over either of the two techniques used individually.
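To make the two editing strategies concrete, here is a minimal Python sketch, not the authors' implementation: apply_command handles a "replace X with Y" or "delete X" command, and apply_redictation overwrites the transcript span that best matches a re-dictated phrase. The command grammar, function names, and word-overlap heuristic are illustrative assumptions.

```python
import re
from difflib import SequenceMatcher

def apply_command(text: str, command: str) -> str:
    """Handle a Commanding-style edit such as 'replace go with goes' or 'delete really'.
    The command grammar here is a guess for illustration, not the paper's grammar."""
    m = re.fullmatch(r"replace (.+) with (.+)", command.strip(), re.IGNORECASE)
    if m:
        return text.replace(m.group(1), m.group(2), 1)
    m = re.fullmatch(r"delete (.+)", command.strip(), re.IGNORECASE)
    if m:
        return re.sub(r"\s*\b" + re.escape(m.group(1)) + r"\b", "", text, count=1)
    return text  # unrecognized command: leave the text unchanged

def apply_redictation(text: str, redictation: str) -> str:
    """Handle a Re-dictation-style edit: find the span of the transcript that best
    matches the re-dictated phrase and overwrite it. Uses a simple similarity
    heuristic; a real system would rely on ASR timing and language models."""
    words = text.split()
    n = len(redictation.split())
    best_i, best_score = 0, -1.0
    for i in range(len(words) - n + 1):
        window = " ".join(words[i:i + n])
        score = SequenceMatcher(None, window.lower(), redictation.lower()).ratio()
        if score > best_score:
            best_i, best_score = i, score
    return " ".join(words[:best_i] + redictation.split() + words[best_i + n:])

print(apply_command("he go there", "replace go with goes"))  # he goes there
print(apply_redictation("he go there", "he goes there"))      # he goes there
```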
{"title":"Commanding and Re-Dictation","authors":"Debjyoti Ghosh, Can Liu, Shengdong Zhao, Kotaro Hara","doi":"10.1145/3390889","DOIUrl":"https://doi.org/10.1145/3390889","url":null,"abstract":"Existing voice-based interfaces have limited support for text editing, especially when seeing the text is difficult, e.g., while walking or cooking. This research develops voice interaction techniques for eyes-free text editing. First, with a Wizard-of-Oz study, we identified two primary user strategies: using commands, e.g., “replace go with goes” and re-dictating over an erroneous portion, e.g., correcting “he go there” by saying “he goes there.” To support these user strategies with an actual system implementation, we developed two eyes-free voice interaction techniques, Commanding and Re-dictation, and evaluated them with a controlled experiment. Results showed that while Re-dictation performs significantly better for more semantically complex edits, Commanding is more suitable for making one-word edits, especially deletions. We developed VoiceRev to combine both the techniques in the same interface and evaluated it with realistic tasks. Results showed improved usability of the combined techniques over either of the two techniques used individually.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"26 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116963640","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
A Longitudinal Study of Pervasive Display Personalisation
Mateusz Mikusz, Peter Shaw, N. Davies, P. Nurmi, S. Clinch, Ludwig Trotter, Ivan Elhart, Marc Langheinrich, A. Friday
Widespread sensing devices enable a world in which physical spaces become personalised in the presence of mobile users. An important example of such personalisation is the use of pervasive displays to show content that matches the requirements of proximate viewers. Despite prior work on prototype systems that use mobile devices to personalise displays, no significant attempts to trial such systems have been carried out. In this article, we report on our experiences of designing, developing and operating the world’s first comprehensive display personalisation service for mobile users. Through a set of rigorous quantitative measures and interviews with 11 potential users and stakeholders, we demonstrate the success of the platform in realising display personalisation, and offer a series of reflections to inform the design of future systems.
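As a rough illustration of the core idea, not the deployed system described in the article, the sketch below scores catalogue items against the preference tags of viewers currently detected near a display and picks the best match; the data shapes, tags, and scoring rule are assumptions made for illustration.

```python
from collections import Counter

def pick_content(catalogue, proximate_viewers):
    """Choose the catalogue item whose tags match the most preferences of the
    viewers currently near the display. Purely illustrative scoring; a deployed
    service must also handle privacy, scheduling, and display-owner constraints."""
    demand = Counter(tag for viewer in proximate_viewers for tag in viewer["preferences"])
    def score(item):
        return sum(demand[tag] for tag in item["tags"])
    return max(catalogue, key=score) if catalogue else None

catalogue = [
    {"id": "bus-times", "tags": ["transport", "campus"]},
    {"id": "lab-news",  "tags": ["research", "campus"]},
]
viewers = [{"id": "u1", "preferences": ["research"]},
           {"id": "u2", "preferences": ["campus", "research"]}]
print(pick_content(catalogue, viewers)["id"])  # lab-news (matches research twice, campus once)
```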
{"title":"A Longitudinal Study of Pervasive Display Personalisation","authors":"Mateusz Mikusz, Peter Shaw, N. Davies, P. Nurmi, S. Clinch, Ludwig Trotter, Ivan Elhart, Marc Langheinrich, A. Friday","doi":"10.1145/3418352","DOIUrl":"https://doi.org/10.1145/3418352","url":null,"abstract":"Widespread sensing devices enable a world in which physical spaces become personalised in the presence of mobile users. An important example of such personalisation is the use of pervasive displays to show content that matches the requirements of proximate viewers. Despite prior work on prototype systems that use mobile devices to personalise displays, no significant attempts to trial such systems have been carried out. In this article, we report on our experiences of designing, developing and operating the world’s first comprehensive display personalisation service for mobile users. Through a set of rigorous quantitative measures and 11 potential user/stakeholder interviews, we demonstrate the success of the platform in realising display personalisation, and offer a series of reflections to inform the design of future systems.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"34 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-08-01","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"134411181","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Engagement by Design
Mochen Yang, Yuqing Ren, G. Adomavicius
We study the impact and interplay of social design features on engagement behaviors toward user-generated content on Facebook business pages. By examining the introduction of the “Reactions” feature on Facebook, we aim to understand how the introduction of a new engagement feature affects overall engagement activity and the use of existing engagement features. We found evidence of a positive effect of Reactions on overall engagement levels. Furthermore, the introduction of the Reactions feature had heterogeneous effects on the use of existing engagement features. Posts that received Reactions also ended up receiving more Likes and Comments than they would have received before the feature change. However, the opposite is true for posts that received no Reactions, although the effect sizes were small. These effects were detected within the first four weeks after the feature introduction and persisted after six months, indicating long-term structural changes in users’ engagement behaviors.
{"title":"Engagement by Design","authors":"Mochen Yang, Yuqing Ren, G. Adomavicius","doi":"10.1145/3412844","DOIUrl":"https://doi.org/10.1145/3412844","url":null,"abstract":"We study the impact and interplay of social design features on the engagement behaviors toward user-generated content on Facebook business pages. By examining the introduction of the “Reactions” feature on Facebook, we aim to understand how the introduction of a new engagement feature affects the overall engagement activities and the use of existing engagement features. We found evidence of a positive effect of Reactions on overall engagement levels. Furthermore, the introduction of the Reactions feature had heterogeneous effects on the use of existing engagement features. Posts that received Reactions also ended up receiving more Likes and Comments than what they would have received before the feature change. However, the opposite is true for posts that received no Reactions, although the effect sizes were small. These effects were detected within the first four weeks after the feature introduction, and persisted after six months, indicating long-term structural changes in users’ engagement behaviors.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"35 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-26","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121424898","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Taking the Long, Holistic, and Intersectional View to Women’s Wellbeing
Neha Kumar, Naveena Karusala, Azra Ismail, A. Tuli
In this article, we present 6 cases (contained in 13 studies) variously connected with women’s health in a range of Indian contexts. Analyzing these cases, we highlight that “women’s health” is inextricably linked with extrinsic factors that also need addressing, and we propose a broadened focus on “women’s wellbeing,” defined through the lens of Martha Nussbaum’s central human capabilities. Drawing again on our cases, we discuss the importance of taking a long, holistic, and intersectional view to women’s wellbeing. Consolidating lessons learned across studies, we emphasize the potential of framing challenges around women’s health as learning problems, rather than problems of information access alone. Leveraging this perspective, we propose the use of design-based implementation research as a potential approach in identified learning ecologies, given its emphasis on long-term engagement with multiple stakeholders in the learning process. Although the empirical research we draw from took place in various Indian contexts, we conclude by arguing that key contextual characteristics may translate to other cultures and geographies as well.
{"title":"Taking the Long, Holistic, and Intersectional View to Women’s Wellbeing","authors":"Neha Kumar, Naveena Karusala, Azra Ismail, A. Tuli","doi":"10.1145/3397159","DOIUrl":"https://doi.org/10.1145/3397159","url":null,"abstract":"In this article, we present 6 cases (contained in 13 studies) variously connected with women’s health in a range of Indian contexts. Analyzing these cases, we highlight that “women’s health” is inextricably linked with extrinsic factors that also need addressing, to propose a broadened focus of “women’s wellbeing,” as defined through the lens of Martha Nussbaum’s central human capabilities. Drawing again on our cases, we discuss the importance of taking a long, holistic, and intersectional view to women’s wellbeing. Consolidating lessons learned across studies, we emphasize the potential of framing challenges around women’s health as learning problems, rather than problems of information access alone. Leveraging this perspective, we propose the use of design-based implementation research as a potential approach in identified learning ecologies, given its emphasis on long-term engagement with multiple stakeholders in the learning process. Although the empirical research we draw from took place in various Indian contexts, we conclude by arguing that key contextual characteristics may translate to other cultures and geographies as well.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"1 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-07-20","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124247541","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Machine Learning in Mental Health
Anja Thieme, D. Belgrave, Gavin Doherty
The high prevalence of mental illness and the need for effective mental health care, combined with recent advances in AI, have led to an increase in explorations of how the field of machine learning (ML) can assist in the detection, diagnosis and treatment of mental health problems. ML techniques can potentially offer new routes for learning patterns of human behavior; identifying mental health symptoms and risk factors; developing predictions about disease progression; and personalizing and optimizing therapies. Despite the potential opportunities for using ML within mental health, this is an emerging research area, and the development of effective ML-enabled applications that are implementable in practice is bound up with an array of complex, interwoven challenges. Aiming to guide future research and identify new directions for advancing development in this important domain, this article presents an introduction to, and a systematic review of, current ML work regarding psycho-socially based mental health conditions from the computing and HCI literature. A quantitative synthesis and qualitative narrative review of 54 papers that were included in the analysis surfaced common trends, gaps, and challenges in this space. Discussing our findings, we (i) reflect on the current state-of-the-art of ML work for mental health, (ii) provide concrete suggestions for a stronger integration of human-centered and multi-disciplinary approaches in research and development, and (iii) invite more consideration of the potentially far-reaching personal, social, and ethical implications that ML models and interventions can have, if they are to find widespread, successful adoption in real-world mental health contexts.
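As a toy illustration of the kind of ML pipeline surveyed in the review, not an example taken from the article, the sketch below trains a simple classifier to flag elevated symptom risk from a handful of behavioural features; the feature names, synthetic data, and choice of logistic regression are all assumptions. A real application would of course require clinically validated measures, careful evaluation, and attention to the ethical issues the article foregrounds.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic behavioural features: [hours_sleep, messages_sent, minutes_outside].
# Entirely fabricated, only to make the pipeline runnable end to end.
X = rng.normal(loc=[7.0, 40.0, 60.0], scale=[1.5, 20.0, 30.0], size=(500, 3))
y = (X[:, 0] < 6.0).astype(int)  # toy label: short sleep stands in for "elevated risk"

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X_train, y_train)
print("held-out accuracy:", round(model.score(X_test, y_test), 2))
```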
{"title":"Machine Learning in Mental Health","authors":"Anja Thieme, D. Belgrave, Gavin Doherty","doi":"10.1145/3398069","DOIUrl":"https://doi.org/10.1145/3398069","url":null,"abstract":"High prevalence of mental illness and the need for effective mental health care, combined with recent advances in AI, has led to an increase in explorations of how the field of machine learning (ML) can assist in the detection, diagnosis and treatment of mental health problems. ML techniques can potentially offer new routes for learning patterns of human behavior; identifying mental health symptoms and risk factors; developing predictions about disease progression; and personalizing and optimizing therapies. Despite the potential opportunities for using ML within mental health, this is an emerging research area, and the development of effective ML-enabled applications that are implementable in practice is bound up with an array of complex, interwoven challenges. Aiming to guide future research and identify new directions for advancing development in this important domain, this article presents an introduction to, and a systematic review of, current ML work regarding psycho-socially based mental health conditions from the computing and HCI literature. A quantitative synthesis and qualitative narrative review of 54 papers that were included in the analysis surfaced common trends, gaps, and challenges in this space. Discussing our findings, we (i) reflect on the current state-of-the-art of ML work for mental health, (ii) provide concrete suggestions for a stronger integration of human-centered and multi-disciplinary approaches in research and development, and (iii) invite more consideration of the potentially far-reaching personal, social, and ethical implications that ML models and interventions can have, if they are to find widespread, successful adoption in real-world mental health contexts.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"27 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-23","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"121743013","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Ordinary User Experiences at Work
Torkil Clemmensen, M. Hertzum, J. Abdelnour-Nocera
We investigate professional greenhouse growers’ user experience (UX) when using climate-management systems in their daily work. We build on the literature on UX, in particular UX at work, and extend it to ordinary UX at work. In a 10-day diary study, we collected data with a general UX instrument (AttrakDiff), a domain-specific instrument, and interviews. We find that AttrakDiff is valid at work; its three-factor structure of pragmatic quality, hedonic identification quality, and hedonic stimulation quality is recognizable in the growers’ responses. In this article, UX at work is understood as interactions among technology, tasks, structure, and actors. Our data support the recent proposal for the ordinariness of UX at work. We find that during continued use, UX at work is middle-of-the-scale, remains largely constant over time, and varies little across use situations. For example, the largest slope of the four AttrakDiff constructs when regressed over the 10 days was as small as 0.04. The findings contrast with existing assumptions and findings in UX research, which is mainly about extraordinary and positive experiences. In this way, the present study contributes to UX research by calling attention to the mundane, unremarkable, and ordinary user experiences at work.
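To make the "slope over 10 days" statistic concrete, here is a minimal sketch of regressing one construct's daily mean score on the day index; the scores are invented, and only the least-squares fit illustrates where a slope such as 0.04 comes from.

```python
import numpy as np

days = np.arange(1, 11)  # the 10 diary-study days
# Invented daily mean scores for one AttrakDiff construct on a 1-7 scale
pragmatic_quality = np.array([4.1, 4.2, 4.0, 4.3, 4.2, 4.1, 4.4, 4.3, 4.2, 4.4])

slope, intercept = np.polyfit(days, pragmatic_quality, 1)
print(f"slope per day: {slope:.3f}, intercept: {intercept:.2f}")
# A slope near zero (the article reports at most 0.04) means the experience
# stayed essentially constant over the diary period.
```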
{"title":"Ordinary User Experiences at Work","authors":"Torkil Clemmensen, M. Hertzum, J. Abdelnour-Nocera","doi":"10.1145/3386089","DOIUrl":"https://doi.org/10.1145/3386089","url":null,"abstract":"We investigate professional greenhouse growers’ user experience (UX) when using climate-management systems in their daily work. We build on the literature on UX, in particular UX at work, and extend it to ordinary UX at work. In a 10-day diary study, we collected data with a general UX instrument (AttrakDiff), a domain-specific instrument, and interviews. We find that AttrakDiff is valid at work; its three-factor structure of pragmatic quality (PQ), hedonic identification quality, and hedonic stimulation quality is recognizable in the growers’ responses. In this article, UX at work is understood as interactions among technology, tasks, structure, and actors. Our data support the recent proposal for the ordinariness of UX at work. We find that during continued use, UX at work is middle-of-the-scale, remains largely constant over time, and varies little across use situations. For example, the largest slope of the four AttrakDiff constructs when regressed over the 10 days was as small as 0.04. The findings contrast existing assumptions and findings in UX research, which is mainly about extraordinary and positive experiences. In this way, the present study contributes to UX research by calling attention to the mundane, unremarkable, and ordinary UXs at work.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"103 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-06-13","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"124689190","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Self-monitoring devices are becoming increasingly popular in the support of physical activity experiences. These devices mostly represent on-screen data using numbers and graphs and in doing so, they may miss multi-sensorial methods for engaging with data. Embracing the opportunity for pleasurable interactions with one's own data through the use of different materials and digital fabrication technology, we designed and studied three systems that turn this data into 3D-printed plastic artifacts, sports drinks, and 3D-printed chocolate treats. We utilize the insights gained from associated studies, related literature, and our experiences in designing these systems to develop a conceptual framework, “Shelfie.” The “Shelfie” framework has 13 cards that convey key themes for creating material representations of physical activity data. Through this framework, we present a conceptual understanding of relationships between material representation and physical activity data and contribute guidelines to the design of meaningful material representations of physical activity data.
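As a hedged illustration of turning activity data into a printable artifact, not the authors' fabrication pipeline, the sketch below emits a tiny OpenSCAD description whose bar heights encode daily step counts; the scaling factor, bar layout, and output format are assumptions.

```python
def steps_to_scad(step_counts, bar_width=8, mm_per_1000_steps=2.0):
    """Emit an OpenSCAD model: one bar per day, height proportional to steps.
    Purely illustrative; the article's systems also produced drinks and chocolate."""
    parts = []
    for day, steps in enumerate(step_counts):
        height = max(1.0, steps / 1000.0 * mm_per_1000_steps)
        parts.append(
            f"translate([{day * (bar_width + 2)}, 0, 0]) "
            f"cube([{bar_width}, {bar_width}, {height:.1f}]);"
        )
    return "\n".join(parts)

print(steps_to_scad([10500, 7300, 12800, 4100, 9600, 11200, 8000]))
```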
{"title":"Shelfie","authors":"R. A. Khot, L. Hjorth, F. Mueller","doi":"10.1145/3379539","DOIUrl":"https://doi.org/10.1145/3379539","url":null,"abstract":"Self-monitoring devices are becoming increasingly popular in the support of physical activity experiences. These devices mostly represent on-screen data using numbers and graphs and in doing so, they may miss multi-sensorial methods for engaging with data. Embracing the opportunity for pleasurable interactions with one's own data through the use of different materials and digital fabrication technology, we designed and studied three systems that turn this data into 3D-printed plastic artifacts, sports drinks, and 3D-printed chocolate treats. We utilize the insights gained from associated studies, related literature, and our experiences in designing these systems to develop a conceptual framework, “Shelfie.” The “Shelfie” framework has 13 cards that convey key themes for creating material representations of physical activity data. Through this framework, we present a conceptual understanding of relationships between material representation and physical activity data and contribute guidelines to the design of meaningful material representations of physical activity data.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"49 4 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"114331259","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Designing Digital Vertigo Experiences
Richard Byrne, Joe Marshall, F. Mueller
Many people enjoy “vertigo” sensations caused by intense playful bodily activities such as spinning in circles and riding fairground rides. Game scholar Caillois calls such experiences “vertigo play,” elucidating that these enjoyable activities are a result of confusion between sensory channels. In HCI, designers are often cautious to avoid deliberately causing sensory confusion in players, but we believe there is an opportunity to transition and extend Caillois’ thinking to the digital realm, allowing designers to create novel and intriguing digital bodily experiences inspired by traditional vertigo play activities. To this end, we present the Digital Vertigo Experience framework. Derived from four case studies and the development of three different digital vertigo experiences, this framework aims to bring the excitement of traditional vertigo play experiences to the digital world, allowing designers to create more engaging and exciting body-based games and providing players with more possibilities to enjoy novel and exciting play experiences.
{"title":"Designing Digital Vertigo Experiences","authors":"Richard Byrne, Joe Marshall, F. Mueller","doi":"10.1145/3387167","DOIUrl":"https://doi.org/10.1145/3387167","url":null,"abstract":"Many people enjoy “vertigo” sensations caused by intense playful bodily activities such as spinning in circles, and riding fairground rides. Game scholar Caillois calls such experiences “vertigo play,” elucidating that these enjoyable activities are a result of confusion between sensory channels. In HCI, designers are often cautious to avoid deliberately causing sensory confusion in players, but we believe there is an opportunity to transition and extend Caillois’ thinking to the digital realm, allowing designers to create novel and intriguing digital bodily experiences inspired by traditional vertigo play activities. To this end, we present the Digital Vertigo Experience framework. Derived from four case studies and the development of three different digital vertigo experiences, this framework aims to bring the excitement of traditional vertigo play experiences to the digital world, allowing designers to create more engaging and exciting body-based games, and provides players with more possibilities to enjoy novel and exciting play experiences.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"5 3 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"116790291","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Expanding the Bounds of Seated Virtual Workspaces
Mark Mcgill, Aidan Kehoe, Euan Freeman, S. Brewster
Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on-the-go, e.g., to improve remote working or make better use of commuter journeys.
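A minimal sketch of the kind of gaze-angle remapping described, assuming a simple constant amplification gain (the mappings actually studied may differ, e.g., be non-linear): the user's physical head yaw is scaled so that a comfortable range of neck movement sweeps a wider virtual display space.

```python
def amplified_yaw(head_yaw_deg, gain=1.8, display_span_deg=180.0):
    """Map physical head yaw to a virtual viewing direction.
    With gain 1.8, turning the head +-50 degrees sweeps +-90 degrees of display space.
    The gain value and clamping behaviour are illustrative assumptions."""
    virtual = head_yaw_deg * gain
    half_span = display_span_deg / 2.0
    return max(-half_span, min(half_span, virtual))  # clamp to the display bounds

for yaw in (-50, -25, 0, 25, 50):
    print(yaw, "->", amplified_yaw(yaw))
```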
{"title":"Expanding the Bounds of Seated Virtual Workspaces","authors":"Mark Mcgill, Aidan Kehoe, Euan Freeman, S. Brewster","doi":"10.1145/3380959","DOIUrl":"https://doi.org/10.1145/3380959","url":null,"abstract":"Mixed Reality (MR), Augmented Reality (AR) and Virtual Reality (VR) headsets can improve upon existing physical multi-display environments by rendering large, ergonomic virtual display spaces whenever and wherever they are needed. However, given the physical and ergonomic limitations of neck movement, users may need assistance to view these display spaces comfortably. Through two studies, we developed new ways of minimising the physical effort and discomfort of viewing such display spaces. We first explored how the mapping between gaze angle and display position could be manipulated, helping users view wider display spaces than currently possible within an acceptable and comfortable range of neck movement. We then compared our implicit control of display position based on head orientation against explicit user control, finding significant benefits in terms of user preference, workload and comfort for implicit control. Our novel techniques create new opportunities for productive work by leveraging MR headsets to create interactive wide virtual workspaces with improved comfort and usability. These workspaces are flexible and can be used on-the-go, e.g., to improve remote working or make better use of commuter journeys.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"13 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"128333105","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Disclosure, Privacy, and Stigma on Social Media
Nazanin Andalibi
Disclosures of distress and stigma on identified social media can be beneficial. Yet, many who may benefit from such disclosures do not engage in them. I examine factors that inform decisions not to disclose stigmatized experiences on identified social media. I conducted in-depth interviews with women in the US who used social media, had experienced pregnancy loss, and had not disclosed their loss on identified social media. I detail six types of factors related to the self, audience, network, society, platform, and temporality that contribute to non-disclosure decisions. I show that the Disclosure Decision-Making (DDM) framework, introduced in prior work to explain disclosures when they do occur, also explains non-disclosure decisions on social media. I show how DDM builds from and bridges prior privacy theories, namely Communication Privacy Management and Contextual Integrity. I discuss design implications around removing barriers to disclosure to facilitate beneficial disclosures and reduce stigma.
{"title":"Disclosure, Privacy, and Stigma on Social Media","authors":"Nazanin Andalibi","doi":"10.1145/3386600","DOIUrl":"https://doi.org/10.1145/3386600","url":null,"abstract":"Disclosures of distress and stigma on identified social media can be beneficial. Yet, many who may benefit from such disclosures do not engage in them. I examine factors that inform decisions to not disclose stigmatized experiences on identified social media. I conducted in-depth interviews with women in the US who used social media, had experienced pregnancy loss, and had not disclosed about their loss on identified social media. I detail six types of factors related to the self, audience, network, society, platform, and temporality that contribute to non-disclosure decisions. I show that the Disclosure Decision-Making (DDM) framework introduced in prior work explaining disclosures when they do occur, also explains non-disclosure decisions on social media. I show how DDM builds from and bridges prior privacy theories, namely, Communication Privacy Management and Contextual Integrity. I discuss design implications around removing barriers to disclosure to facilitate beneficial disclosures and reduce stigma.","PeriodicalId":322583,"journal":{"name":"ACM Transactions on Computer-Human Interaction (TOCHI)","volume":"6 1","pages":"0"},"PeriodicalIF":0.0,"publicationDate":"2020-05-31","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"115903865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}