In this paper, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also by explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people’s preference for and perception of directive explanations through two online studies, one quantitative and the other qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanations compared to non-directive counterfactual explanations. However, we also find that these preferences are influenced by many factors, including individual preferences and social factors. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centred and context-specific approach to explainable AI.
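To make the distinction concrete, the following is a minimal illustrative sketch, not the paper's generation method: it contrasts a non-directive counterfactual with a directive explanation for a hypothetical threshold-based credit scorer. The scoring rule, the approval threshold, and the action catalogue (`credit_score`, `APPROVAL_THRESHOLD`, `ACTIONS`) are all assumptions introduced here for illustration only.

```python
# Toy illustration only: the scorer, threshold, and action catalogue are
# hypothetical and are NOT the paper's computational generation algorithm.

def credit_score(income: float, debt: float) -> float:
    """Hypothetical linear scorer; approval requires a score of at least 50."""
    return 0.01 * income - 0.05 * debt

APPROVAL_THRESHOLD = 50.0

# Hypothetical catalogue of actions an applicant could take, keyed by the
# feature each action changes.
ACTIONS = {
    "debt": "pay down ${delta:,.0f} of existing debt",
    "income": "increase annual income by ${delta:,.0f}",
}

def counterfactual(income: float, debt: float) -> str:
    """Non-directive: states a feature change that would flip the decision."""
    shortfall = APPROVAL_THRESHOLD - credit_score(income, debt)
    delta_debt = shortfall / 0.05
    return f"If your debt were ${delta_debt:,.0f} lower, you would be approved."

def directive(income: float, debt: float) -> str:
    """Directive (loosely mirroring the 'directive-specific' form):
    recommends a concrete action the individual could take."""
    shortfall = APPROVAL_THRESHOLD - credit_score(income, debt)
    delta_debt = shortfall / 0.05
    action = ACTIONS["debt"].format(delta=delta_debt)
    return f"To be approved, {action}."

if __name__ == "__main__":
    print(counterfactual(income=40_000, debt=8_000))
    print(directive(income=40_000, debt=8_000))
```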
Web data records are usually accompanied by auxiliary webpage segments, such as filters, sort options, search forms, and multi-page links, to enhance interaction efficiency and convenience for end users. However, blind and visually impaired (BVI) persons are presently unable to fully exploit these auxiliary segments like their sighted peers, since the segments are scattered across the screen, and the assistive technologies used by BVI users, i.e., screen readers and screen magnifiers, are not geared for efficient interaction with such scattered content. Specifically, for blind screen reader users, content navigation is predominantly one-dimensional despite the support for skipping content, and therefore navigating to and fro between different parts of the webpage is tedious and frustrating. Similarly, low vision screen magnifier users have to continuously pan back and forth between different portions of a webpage, given that only a portion of the screen is viewable at any instant due to content enlargement. Extant techniques for overcoming inefficient web interaction for BVI users have mostly focused on general web-browsing activities, and as such they provide little to no support for data record-specific interaction activities such as filtering and sorting, activities that are equally important for facilitating quick and easy access to desired data records. To fill this void, we present InSupport, a browser extension that: (i) employs custom machine learning-based algorithms to automatically extract auxiliary segments on any webpage containing data records; and (ii) provides an instantly accessible proxy one-stop interface for easily navigating the extracted auxiliary segments using either basic keyboard shortcuts or mouse actions. Evaluation studies with 14 blind participants and 16 low vision participants showed significant improvements in web usability with InSupport, driven by reduced interaction time and user effort, compared to state-of-the-art solutions.
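As a rough sense of the extraction step, the sketch below uses simple keyword heuristics over HTML attributes and text to flag candidate auxiliary segments (filters, sort options, search forms, pagination). InSupport itself relies on custom machine learning models; the `AuxiliarySegmentFinder` class, the `SEGMENT_CUES` keyword lists, and the sample markup are illustrative assumptions, not the extension's actual pipeline.

```python
# Heuristic stand-in for ML-based auxiliary-segment extraction.
# Cue lists and sample HTML are assumptions for illustration only.

from html.parser import HTMLParser

SEGMENT_CUES = {
    "filter": ["filter", "refine", "narrow"],
    "sort": ["sort", "order by"],
    "search": ["search"],
    "pagination": ["next", "prev", "page"],
}

VOID_TAGS = {"input", "br", "img", "hr", "meta", "link"}

class AuxiliarySegmentFinder(HTMLParser):
    """Collects elements whose attributes or text match auxiliary-segment cues."""

    def __init__(self):
        super().__init__()
        self.hits = []      # (segment_type, tag, evidence)
        self._stack = []    # open tags, so text data can be attributed

    def handle_starttag(self, tag, attrs):
        if tag not in VOID_TAGS:
            self._stack.append(tag)
        attr_blob = " ".join(value.lower() for _, value in attrs if value)
        self._match(tag, attr_blob)

    def handle_endtag(self, tag):
        if self._stack and self._stack[-1] == tag:
            self._stack.pop()

    def handle_data(self, data):
        if self._stack:
            self._match(self._stack[-1], data.lower())

    def _match(self, tag, text):
        for segment_type, cues in SEGMENT_CUES.items():
            if any(cue in text for cue in cues):
                self.hits.append((segment_type, tag, text.strip()[:40]))

if __name__ == "__main__":
    sample = """
    <div class="sidebar-filters">Filter by price</div>
    <select id="sort-order"><option>Sort: relevance</option></select>
    <form role="search"><input name="q"></form>
    <nav class="pagination"><a href="?page=2">Next</a></nav>
    """
    finder = AuxiliarySegmentFinder()
    finder.feed(sample)
    for segment_type, tag, evidence in finder.hits:
        print(f"{segment_type:<10} <{tag}>  {evidence!r}")
```

The detected segments could then be surfaced in a single keyboard-navigable panel, which is the role the abstract describes for the proxy one-stop interface.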
Research has shown that free-form Game-Design (GD) environments can be very effective in fostering Computational Thinking (CT) skills at a young age. However, some students may still need guidance during the learning process due to the highly open-ended nature of these environments. Intelligent Pedagogical Agents (IPAs) can provide personalized, real-time assistance to alleviate this challenge. This paper presents our results in evaluating such an agent deployed in Unity-CT, a real-world free-form GD learning environment designed to foster CT in early K-12 education. We focus on the effect of repetition by comparing student behaviors across no-intervention, 1-shot, and repeated-intervention groups for two different errors that are known to be challenging in the online lessons of Unity-CT. Our findings showed that the agent was perceived very positively by the students and that repeated intervention showed promising results in terms of helping students make fewer errors and exhibit more correct behaviors, albeit only for one of the two target errors. Building on these results, we provide insights on how to deliver IPA interventions in free-form GD environments.