Gesture typing, entering a word by gliding the finger sequentially from letter to letter, is widely supported on smartphones for sighted users. However, this input paradigm is currently inaccessible to blind users: it is difficult to draw shape gestures on a virtual keyboard without access to key visuals. This paper describes the design of accessible gesture typing, bringing this input paradigm to blind users. To help blind users locate keys, the design incorporates the familiar screen-reader-supported touch exploration that narrates the keys as the user drags a finger across the keyboard. The design allows users to seamlessly switch between exploration and gesture-typing modes by simply lifting the finger. Continuous audio feedback, similar to that of touch exploration, is provided during word-shape construction, helping the user glide toward the key locations constituting the word. Exploration mode resumes once the word shape is completed. Distinct earcons distinguish gesture-typing mode from touch-exploration mode, thereby avoiding unintended mix-ups. A user study with 14 blind people showed a 35% increase in their typing speed, indicative of the promise and potential of gesture typing technology for non-visual text entry.
Play is the work of children, but access to play is not equal from child to child. Having access to a place to play is a challenge for marginalized children, such as children with disabilities. For autistic children, playing with other children in the physical world may be uncomfortable or even painful. Yet the practice in social skills that play provides is essential for childhood development. In this ethnographic work, I explore how one community uses the sense of place and the digital embodied experience in a virtual world specifically to give autistic children access to play with their peers. The contribution of this work is twofold. First, I demonstrate how various physical and virtual spaces work together to make play possible. Second, I demonstrate that these spaces, though some of them are digital, are no more or less "real" than the physical spaces making up a schoolyard or playground.
Camera manipulation confounds the use of object recognition applications by blind people. This is exacerbated when photos from this population are also used to train models, as with teachable machines, where out-of-frame or partially included objects against cluttered backgrounds degrade performance. Leveraging prior evidence on the ability of blind people to coordinate hand movements using proprioception, we propose a deep learning system that jointly models hand segmentation and object localization for object classification. We investigate the utility of hands as a natural interface for including and indicating the object of interest in the camera frame. We confirm the potential of this approach by analyzing existing datasets from people with visual impairments for object recognition. With a new publicly available egocentric dataset and an extensive error analysis, we provide insights into this approach in the context of teachable recognizers.
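To make the joint-modeling idea above concrete, the sketch below shows one way a shared encoder could drive both a hand-segmentation head and an object-localization head, with the object map then focusing classification on the region the hand indicates. This is a minimal illustration under assumptions, not the paper's architecture: the class name HandGuidedRecognizer, the toy backbone, all layer sizes, and the map-weighted pooling are hypothetical.

```python
# Minimal sketch (not the authors' implementation): a hand-primed object
# recognizer in PyTorch with a shared encoder and two dense heads.
import torch
import torch.nn as nn

class HandGuidedRecognizer(nn.Module):
    """Shared encoder, a hand-mask head, an object-localization head,
    and a classifier pooled over the predicted object region."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.encoder = nn.Sequential(               # toy backbone
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.hand_head = nn.Conv2d(32, 1, 1)        # per-pixel hand logits
        self.obj_head = nn.Conv2d(32, 1, 1)         # per-pixel object logits
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, image: torch.Tensor):
        feats = self.encoder(image)                       # (B, 32, H, W)
        hand_mask = torch.sigmoid(self.hand_head(feats))  # where the hand is
        obj_map = torch.sigmoid(self.obj_head(feats))     # where the object is
        # Pool features weighted by the object map, so classification
        # attends to the region the hand points out in the frame.
        weights = obj_map / (obj_map.sum(dim=(2, 3), keepdim=True) + 1e-6)
        pooled = (feats * weights).sum(dim=(2, 3))        # (B, 32)
        return hand_mask, obj_map, self.classifier(pooled)

model = HandGuidedRecognizer()
hand, obj, logits = model(torch.randn(1, 3, 224, 224))
print(hand.shape, obj.shape, logits.shape)
```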
Suicide is the second leading cause of death among young adults, but the challenges of preventing suicide are significant because the signs often seem invisible. Research has shown that clinicians cannot reliably predict when someone is at greatest risk. In this paper, we describe the design, collection, and analysis of text messages from individuals with a history of suicidal thoughts and behaviors to build a model that identifies periods of suicidality (i.e., suicidal ideation and non-fatal suicide attempts). By reconstructing the timeline of recent suicidal behaviors through a retrospective clinical interview, this study utilizes a prospective research design to understand whether text communications can predict periods of suicidality versus periods of depression. Identifying subtle clues in communication that indicate when someone is at heightened risk of a suicide attempt may allow for more effective prevention of suicide.
Low-vision users struggle to browse the web with screen magnifiers. First, magnifiers occlude significant portions of the webpage, making it cumbersome to get an overview of the page and quickly locate desired content. Magnification also causes loss of the spatial locality and visual cues that commonly define semantic relationships on the page; reconstructing these relationships exclusively from narrow views dramatically increases the cognitive burden on users. Second, low-vision users have widely varying needs requiring a range of interface customizations for different page sections, and dynamic customization in extant magnifiers is disruptive to users' browsing. We present SteeringWheel, a magnification interface that leverages content semantics to preserve local context. In combination with a physical dial supporting simple rotate and press gestures, users can quickly navigate different webpage sections, easily locate desired content, get a quick overview, and seamlessly customize the interface. A user study with 15 low-vision participants showed that their web-browsing efficiency improved by at least 20 percent with SteeringWheel compared to extant screen magnifiers.
The hospital setting creates a high-stakes environment where patients' lives depend on accurate tracking of health data. Despite recent work emphasizing the importance of patients' engagement in their own health care, less is known about how patients track their health and care in the hospital. Through interviews and design probes, we investigated hospitalized patients' tracking activity and analyzed our results using the stage-based personal informatics model. We used this model to understand how to support the tracking needs of hospitalized patients at each stage. In this paper, we discuss hospitalized patients' needs for collaboratively tracking their health with their care team. We suggest future extensions of the stage-based model to accommodate collaborative tracking situations, such as hospitals, where data is collected, analyzed, and acted on by multiple people. Our findings uncover new directions for HCI research and highlight ways to support patients in tracking their care and improving patient safety.
We consider why and how women track their menstrual cycles, examining their experiences to uncover design opportunities and extend the field's understanding of personal informatics tools. To understand menstrual cycle tracking practices, we collected and analyzed data from three sources: 2,000 reviews of popular menstrual tracking apps, a survey of 687 people, and follow-up interviews with 12 survey respondents. We find that women track their menstrual cycle for varied reasons, including remembering and predicting their period as well as informing conversations with healthcare providers. Participants described six methods of tracking their menstrual cycles, including use of technology, awareness of their premenstrual physiological states, and simply remembering. Although women find apps and calendars helpful, these methods fall short when predictions of future menstrual cycles are inaccurate, and their designs can create feelings of exclusion for gender and sexual minorities. Existing apps also generally fail to consider the life stages women experience, including young adulthood, pregnancy, and menopause. Our findings encourage expanding the field's conceptions of personal informatics.
Diagnostic self-tracking, the recording of personal information to diagnose or manage a health condition, is a common practice, especially for people with chronic conditions. Unfortunately, many who attempt diagnostic self-tracking have trouble accomplishing their goals. People often lack knowledge and skills needed to design and conduct scientifically rigorous experiments, and current tools provide little support. To address these shortcomings and explore opportunities for diagnostic self-tracking, we designed, developed, and evaluated a mobile app that applies a self-experimentation framework to support patients suffering from irritable bowel syndrome (IBS) in identifying their personal food triggers. TummyTrials aids a person in designing, executing, and analyzing self-experiments to evaluate whether a specific food triggers their symptoms. We examined the feasibility of this approach in a field study with 15 IBS patients, finding that participants could use the tool to reliably undergo a self-experiment. However, we also discovered an underlying tension between scientific validity and the lived experience of self-experimentation. We discuss challenges of applying clinical research methods in everyday life, motivating a need for the design of self-experimentation systems to balance rigor with the uncertainties of everyday life.
Many people appropriate social media and online communities in their pursuit of personal health goals, such as healthy eating or increased physical activity. However, people struggle with impression management, and with reaching the right audiences when they share health information on these platforms. Instagram, a popular photo-based social media platform, has attracted many people who post and share their food photos. We aim to inform the design of tools to support healthy behaviors by understanding how people appropriate Instagram to track and share food data, the benefits they obtain from doing so, and the challenges they encounter. We interviewed 16 women who consistently record and share what they eat on Instagram. Participants tracked to support themselves and others in their pursuit of healthy eating goals. They sought social support for their own tracking and healthy behaviors and strove to provide that support for others. People adapted their personal tracking practices to better receive and give this support. Applying these results to the design of health tracking tools has the potential to help people better access social support.

