When people are talking together in front of digital signage, advertisements that are aware of the context of the dialogue work most effectively. However, it has been challenging for computer systems to retrieve the appropriate advertisement from among the many options stored in large databases. Our proposed system, the Conversational Context-sensitive Advertisement generator (CoCoA), is the first attempt to apply masked word prediction to web information retrieval that takes the dialogue context into account. The novelty of CoCoA is that advertisers need only prepare a few abstract phrases, called Core-Queries; CoCoA then automatically generates a context-sensitive expression as a complete search query by using masked word prediction to add a word related to the dialogue context to one of the prepared Core-Queries. This automatic generation frees advertisers from having to come up with context-sensitive phrases to attract users’ attention. Another unique point is that the modified Core-Query offers users speaking in front of the CoCoA system a list of context-sensitive advertisements. CoCoA was evaluated by crowd workers on the context-sensitivity of the generated search queries against dialogue texts from multiple domains prepared in advance. The results indicated that CoCoA could present more contextual and practical advertisements than other web-retrieval systems. Moreover, CoCoA received a higher evaluation in a conversation that included many travel topics, the domain for which the Core-Queries were designed, implying that it adapted the Core-Queries to the specific ongoing context better than the compared method without any effort on the part of the advertisers. In addition, case studies with users and advertisers revealed that the context-sensitive advertisements generated by CoCoA also affected the content of the ongoing dialogue. Specifically, pairs unfamiliar with each other referred more frequently to the advertisements CoCoA displayed, so the advertisements influenced the topics about which the pairs spoke. Moreover, participants in the advertiser role recognized that some of the search queries generated by CoCoA fitted the context of the conversation and that CoCoA improved the effect of the advertisement. In particular, they easily got the hang of designing a good Core-Query by observing users’ responses to the advertisements retrieved with the generated search queries.
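To make the masked-word-prediction step concrete, the sketch below shows one way a masked language model could expand an abstract Core-Query with a word drawn from the dialogue context. This is a minimal illustration, not CoCoA's actual implementation: the model choice, the prompt format, and the example Core-Query are all assumptions.

```python
# Minimal sketch (not CoCoA's implementation) of expanding a Core-Query with a
# context word via masked word prediction. Model and prompt format are assumed.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

def expand_core_query(core_query: str, dialogue_context: str, top_k: int = 3):
    """Place a mask token next to the Core-Query and let the masked language
    model propose context-related words, conditioning on the recent dialogue."""
    prompt = f"{dialogue_context} {fill_mask.tokenizer.mask_token} {core_query}"
    candidates = fill_mask(prompt, top_k=top_k)
    # Each candidate contributes one context word prepended to the Core-Query.
    return [f"{c['token_str'].strip()} {core_query}" for c in candidates]

# Hypothetical travel-domain example adapted to an ongoing conversation.
queries = expand_core_query(
    core_query="hotels in Kyoto",
    dialogue_context="We were just talking about cherry blossoms and spring trips.",
)
print(queries)  # e.g. ['cheap hotels in Kyoto', 'nice hotels in Kyoto', ...]
```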
Existing eXplainable Artificial Intelligence (XAI) techniques support people in interpreting AI advice. However, while previous work evaluates users’ understanding of explanations, factors influencing decision support are largely overlooked in the literature. This paper addresses this gap by studying the impact of user uncertainty, AI correctness, and the interaction between AI uncertainty and explanation logic-styles on classification tasks. We conducted two separate studies: one asking participants to recognise hand-written digits and one asking them to classify the sentiment of reviews. To assess decision making, we analysed task performance, agreement with the AI suggestion, and the user’s reliance on the XAI interface elements. Participants made their decisions relying on three pieces of information in the XAI interface (the image or text instance, the AI prediction, and the explanation). Each participant was shown one explanation style (between-participants design), drawn from three styles of logical reasoning (inductive, deductive, and abductive). This allowed us to study how different levels of AI uncertainty influence the effectiveness of different explanation styles. The results show that user uncertainty and AI correctness significantly affected users’ classification decisions across the analysed metrics. In both domains (images and text), users relied mainly on the instance to decide. Users were usually overconfident about their choices, and this was more pronounced for text. Furthermore, the inductive explanation style led to over-reliance on the AI advice in both domains: it was the most persuasive, even when the AI was incorrect. The abductive and deductive styles had more complex effects that depended on the domain and the AI uncertainty level.
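As an illustration of the kind of metrics analysed here, the sketch below computes task performance, agreement with the AI, and over-reliance (following the AI when it is wrong) from trial-level data. The column names, the toy data, and the exact metric definitions are assumptions for illustration, not the paper's analysis pipeline.

```python
# Minimal sketch (assumed column names and metric definitions) of computing
# task performance, agreement with AI advice, and over-reliance per style.
import pandas as pd

# Each row: one participant decision on one classification trial.
trials = pd.DataFrame({
    "explanation_style": ["inductive", "inductive", "deductive", "abductive"],
    "ai_correct":        [True, False, True, False],
    "user_followed_ai":  [True, True, True, False],
    "user_correct":      [True, False, True, True],
})

summary = trials.groupby("explanation_style").agg(
    task_accuracy=("user_correct", "mean"),       # task performance
    agreement_rate=("user_followed_ai", "mean"),  # agreement with the AI advice
)

# Over-reliance: agreeing with the AI specifically when its advice was wrong.
over_reliance = (
    trials[~trials["ai_correct"]]
    .groupby("explanation_style")["user_followed_ai"]
    .mean()
    .rename("over_reliance_rate")
)
print(summary.join(over_reliance))
```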
Adversarial attacks on a convolutional neural network (CNN), which inject human-imperceptible perturbations into an input image, can fool a high-performance CNN into making incorrect predictions. The success of adversarial attacks raises serious concerns about the robustness of CNNs and prevents them from being used in safety-critical applications, such as medical diagnosis and autonomous driving. Our work introduces a visual analytics approach to understanding adversarial attacks by answering two questions: (1) which neurons are more vulnerable to attacks and (2) which image features these vulnerable neurons capture during the prediction.
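For readers unfamiliar with how such perturbations are produced, the sketch below shows one common attack, the Fast Gradient Sign Method (FGSM). The abstract does not state which attack is analysed, so this is only an illustrative example of a human-imperceptible perturbation; the function and variable names are hypothetical.

```python
# Minimal FGSM sketch: perturb an image in the gradient-sign direction that
# increases the classification loss, keeping the change visually imperceptible.
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return an adversarial version of `image` for the given `model`."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbed = image + epsilon * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()

# Assumed usage: a pretrained CNN, a batch of images normalised to [0, 1],
# and their true labels.
# adversarial = fgsm_attack(cnn, images, labels)
# predictions = cnn(adversarial).argmax(dim=1)  # often wrong despite the tiny change
```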