People spend an enormous amount of time and effort looking for lost objects. To help people keep track of object locations, various computational systems have been developed. However, prior systems for assisting people in finding objects require users to register the target objects in advance. This requirement imposes a cumbersome burden on users, and such systems cannot help with objects that users did not anticipate losing. We propose GO-Finder (“Generic Object Finder”), a registration-free wearable-camera-based system for assisting people in finding an arbitrary number of objects, built on two key features: automatic discovery of hand-held objects and image-based candidate selection. Given video taken from a wearable camera, GO-Finder automatically detects and groups hand-held objects to form a visual timeline of the objects. Users can retrieve the last appearance of an object by browsing the timeline through a smartphone app. We conducted user studies to investigate how users benefit from GO-Finder. In the first study, we asked participants to perform an object retrieval task and confirmed that providing clear visual cues on object locations improved accuracy and reduced mental load. In the second study, we verified the system’s usability in a longer and more realistic scenario, adding a feature for context-based candidate filtering. Participant feedback suggested that GO-Finder remains useful in realistic scenarios where more than one hundred objects appear.
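The abstract leaves the grouping step unspecified; as a hedged illustration only, the following minimal Python sketch shows one way hand-held object detections might be grouped into a visual timeline by appearance similarity. The embedding source, the cosine measure, and the 0.8 threshold are assumptions for illustration, not the authors’ method.

```python
# Hypothetical sketch: grouping hand-held object detections into a visual
# timeline by appearance similarity. The upstream detector/embedder and the
# 0.8 similarity threshold are illustrative assumptions.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def build_timeline(detections, threshold=0.8):
    """detections: list of (timestamp, embedding) for hand-held object crops.
    Returns one entry per discovered object group, each holding the
    timestamp of the object's last appearance."""
    groups = []  # each: {"proto": embedding, "last_seen": timestamp}
    for ts, emb in detections:
        best = max(groups, key=lambda g: cosine(g["proto"], emb), default=None)
        if best is not None and cosine(best["proto"], emb) >= threshold:
            best["last_seen"] = ts          # same object seen again
        else:
            groups.append({"proto": emb, "last_seen": ts})  # new object
    return sorted(groups, key=lambda g: g["last_seen"])
```

Sorting by last appearance mirrors the browsing interaction: the timeline surfaces where each object was most recently seen.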
Open-domain chatbots engage with users in natural conversations to socialize and establish bonds. However, designing and developing an effective open-domain chatbot is challenging, and it is unclear which qualities of a chatbot best match users’ expectations and preferences. Even though existing work has considered a wide range of aspects, some key components are still missing. For example, the role of a chatbot’s ability to communicate with humans at the emotional level remains an open subject of study. Furthermore, these qualities are likely to span several dimensions, so it is crucial to understand how the different qualities relate to and interact with one another and which aspects are core. To this end, we first conducted an exploratory user study to gain a basic understanding of the desired qualities of chatbots, with a special focus on their emotional intelligence. Using the findings from this study, we constructed a model of the desired traits by carefully selecting a set of features, and then validated the model by applying structural equation modeling to data collected through a large-scale survey. The final outcome is the PEACE model (Politeness, Entertainment, Attentive Curiosity, and Empathy). By analyzing the dependencies between the PEACE constructs, we shed light on the importance of and interplay between the chatbots’ qualities and on how users’ attitudes and concerns shape their expectations of the technology. Not only does PEACE define the key ingredients of a chatbot’s social qualities, it also helped us derive a set of design implications useful for the development of socially adequate and emotionally aware open-domain chatbots.
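For readers unfamiliar with the validation step, a four-construct model like PEACE could be specified and fit roughly as below. This is a hedged sketch using the semopy Python library; the survey item names (p1, e1, ...) and the structural path between constructs are invented placeholders, not the paper’s actual specification.

```python
# Hypothetical sketch of validating a four-construct model such as PEACE
# with structural equation modeling via the semopy library. Item names and
# the regression among constructs are placeholders for illustration.
import pandas as pd
import semopy

MODEL_DESC = """
Politeness         =~ p1 + p2 + p3
Entertainment      =~ e1 + e2 + e3
AttentiveCuriosity =~ c1 + c2 + c3
Empathy            =~ m1 + m2 + m3
Empathy ~ Politeness + AttentiveCuriosity
"""

survey = pd.read_csv("survey_responses.csv")  # one column per survey item
model = semopy.Model(MODEL_DESC)
model.fit(survey)
print(model.inspect())            # factor loadings and path estimates
print(semopy.calc_stats(model))   # fit indices (CFI, RMSEA, ...)
```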
With the rapid evolution of technology, computers and the workspaces built around them have become an essential part of everyday life. Today, many people use computers both for work and for personal needs, spending long hours sitting at a desk in front of a screen while changing their pose only slightly from time to time. This behavior adversely affects people’s health, particularly their musculoskeletal and ocular systems. To mitigate these risks, several ergonomic solutions have been suggested. This study proposes, demonstrates, and evaluates a technological solution that automatically adjusts the computer screen’s position and orientation to the user’s current pose, using a simple RGB camera and a robotic arm. The automatic adjustment reduces the physical load on users and better fits their changing poses. The user’s pose is extracted from images continuously acquired by the system’s camera, the most suitable screen position is calculated according to the pose and ergonomic guidelines, and the robotic arm then adjusts the screen accordingly. The evaluation was done through a user study with 35 users, who rated both the idea and the prototype system itself highly.
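As a hedged illustration of the “calculate the most suitable screen position” step, the sketch below derives a target screen pose from an estimated eye position using common ergonomic rules of thumb (viewing distance of roughly 50-70 cm, screen top at or slightly below eye level). The numeric values, the pose source, and the coordinate conventions are assumptions; the study’s actual guidelines and geometry may differ.

```python
# Hypothetical sketch: deriving a target screen pose from the user's eye
# position under common ergonomic guidance. Constants are illustrative
# rules of thumb, not the paper's calibrated values.
from dataclasses import dataclass

VIEWING_DISTANCE_M = 0.65   # ~50-70 cm is a common recommendation
TOP_BELOW_EYES_M   = 0.05   # screen top slightly below eye height

@dataclass
class Pose3D:
    x: float
    y: float
    z: float  # metres, camera coordinates

def target_screen_position(eye: Pose3D, gaze_dir=(0.0, 0.0, 1.0)) -> Pose3D:
    """Place the screen along the gaze direction at the recommended
    distance, with its top edge slightly below eye height."""
    gx, gy, gz = gaze_dir
    return Pose3D(
        x=eye.x + gx * VIEWING_DISTANCE_M,
        y=eye.y + gy * VIEWING_DISTANCE_M - TOP_BELOW_EYES_M,
        z=eye.z + gz * VIEWING_DISTANCE_M,
    )
```

Recomputing this target as the pose estimate changes, and feeding it to the arm’s inverse kinematics, would close the loop the abstract describes.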
We propose a new research framework by which the nascent discipline of human-AI teaming can be explored within experimental environments in preparation for transfer to real-world contexts. We examine the existing literature and unanswered research questions through the lens of an Agile approach to construct our proposed framework. The framework aims to provide a structure for understanding the macro features of this research landscape, supporting holistic research into the acceptability of human-AI teaming to human team members and the affordances of AI team members. It has the potential to enhance the decision-making and performance of hybrid human-AI teams. Further, our framework proposes the application of Agile methodology for research management and knowledge discovery. We propose a transferability pathway in which hybrid teaming is initially tested in a safe environment, such as a real-time strategy video game, with lessons learned that can then be transferred to real-world situations.
Traditional matrix factorisation (MF)-based recommender systems, despite their success in producing accurate recommendations, cannot explain them, because the learned latent features carry no interpretable meaning. This article introduces an MF-based explainable recommender system framework that utilises the user-item rating data and the available item information to model meaningful user and item latent features. These features are exploited to enhance both rating prediction accuracy and recommendation explainability. Our feature-based explainable recommender system framework uses these meaningful user and item latent features to explain its recommendations without relying on private or external data. Recommendations are explained to the user through text messages and bar charts. The proposed model has been evaluated in terms of rating prediction accuracy and the reasonableness of its explanations on six real-world benchmark datasets covering movie, book, video game, and fashion recommendation. The results show that the proposed model can produce accurate, explainable recommendations.
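As a hedged sketch of the general idea behind feature-based explainable MF, the code below ties item factors to known item attributes (e.g., genres) so that each learned user weight is human-readable. The squared-error SGD objective, hyperparameters, and explanation format are illustrative assumptions, not the article’s exact model.

```python
# Hypothetical sketch of feature-based explainable recommendation: item
# "latent" factors are fixed to known item attributes, so each learned
# user weight has a human-readable meaning.
import numpy as np

def fit_user_profiles(R, F, epochs=50, lr=0.01, reg=0.1):
    """R: (n_users, n_items) ratings, np.nan where unknown.
    F: (n_items, n_features) item attribute matrix.
    Returns P: (n_users, n_features) interpretable user preferences."""
    P = np.zeros((R.shape[0], F.shape[1]))
    users, items = np.where(~np.isnan(R))
    for _ in range(epochs):
        for u, i in zip(users, items):
            err = R[u, i] - P[u] @ F[i]            # prediction error
            P[u] += lr * (err * F[i] - reg * P[u])  # SGD update
    return P

def explain(u, i, P, F, feature_names, top_k=3):
    """Top contributing item attributes for user u's predicted rating of item i."""
    contrib = P[u] * F[i]                          # per-feature contribution
    top = np.argsort(contrib)[::-1][:top_k]
    return [(feature_names[j], round(float(contrib[j]), 2)) for j in top]
```

The per-feature contributions returned by explain could, for instance, drive the kind of text and bar-chart explanations the abstract describes.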
Ensuring fairness in artificial intelligence (AI) is important to counteract bias and discrimination in far-reaching applications. Recent work has started to investigate how humans judge fairness and how to support machine learning experts in making their AI models fairer. Drawing inspiration from an Explainable AI approach called explanatory debugging used in interactive machine learning, our work explores designing interpretable and interactive human-in-the-loop interfaces that allow ordinary end-users without any technical or domain background to identify potential fairness issues and possibly fix them in the context of loan decisions. Through workshops with end-users, we co-designed and implemented a prototype system that allowed end-users to see why predictions were made, and then to change weights on features to “debug” fairness issues. We evaluated the use of this prototype system through an online study. To investigate the implications of diverse human values about fairness around the globe, we also explored how cultural dimensions might play a role in using this prototype. Our results contribute to the design of interfaces to allow end-users to be involved in judging and addressing AI fairness through a human-in-the-loop approach.
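The weight-adjustment interaction could be prototyped along the lines of the following hedged sketch: a simple linear scoring model whose feature weights the end-user can change, with a group fairness metric recomputed after each change. The linear model, the feature names, and the demographic-parity metric are illustrative assumptions, not the study’s actual prototype.

```python
# Hypothetical sketch of "debugging" fairness by adjusting feature weights:
# the user lowers the weight of a proxy feature and sees a group fairness
# metric respond. All model and data details are illustrative.
import numpy as np

def approve(X, weights, threshold=0.5):
    """Linear score -> approve (1) / deny (0) loan decisions."""
    return (X @ weights >= threshold).astype(int)

def demographic_parity_gap(decisions, group):
    """Difference in approval rate between two groups (0 = parity)."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

rng = np.random.default_rng(0)
X = rng.random((200, 3))             # income, debt_ratio, zip_code_proxy
group = (X[:, 2] > 0.5).astype(int)  # group membership correlates with proxy
for w_proxy in (1.0, 0.5, 0.0):      # user drags the proxy weight to zero
    w = np.array([1.0, -0.5, w_proxy])
    gap = demographic_parity_gap(approve(X, w), group)
    print(f"proxy weight {w_proxy:.1f} -> parity gap {gap:.2f}")
```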
Sketching is an intuitive and simple way to depict scenes containing objects with varied forms and appearances. In the past few years, widely available touchscreen devices have made sketch-based human-AI co-creation applications increasingly popular. One key issue in sketch-oriented interaction is how non-professionals can prepare input sketches efficiently: drawing an ideal sketch with appropriate outlines and rich details is usually difficult and time-consuming, especially for novice users with no sketching skills. Sketching is thus a major obstacle to the everyday use of sketch-based applications. At the same time, hand-drawn sketches are scarce and hard to collect. Several large-scale sketch datasets exist, but they usually cover a limited number of objects and categories and do not support users in collecting new sketch materials according to their personal preferences. In addition, few sketch-related applications support the reuse of existing sketch elements. Knowing how to extract sketches from existing drawings and effectively reuse them in interactive scene sketch composition would therefore provide an elegant path for sketch-based image retrieval (SBIR) applications, which are widely used on touchscreen devices. In this study, we first conduct a study of current SBIR to better understand the main requirements and challenges of sketch-oriented applications. We then develop SketchMaker, an interactive sketch extraction and composition system that helps users generate scene sketches by reusing object sketches from existing scene sketches with minimal manual intervention. Moreover, we demonstrate how SBIR improves when queries are composited scene sketches, verifying the performance of our interactive sketch processing system, and we include a sketch-based video localization task as an alternative application of our sketch composition scheme. Our pilot study shows that the system is effective and efficient and provides a way to promote practical applications of sketches.
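As a hedged sketch of how a composited scene sketch might serve as an SBIR query, the code below ranks gallery images by cosine similarity in a shared embedding space. The two encoders are placeholders standing in for whatever cross-modal model trained to align sketches with photos a real SBIR back end would use.

```python
# Hypothetical sketch of an SBIR back end: a query scene sketch and gallery
# images are embedded into a shared space and ranked by cosine similarity.
# embed_sketch/embed_image are placeholder encoders, not a real model.
import numpy as np

def embed_sketch(sketch):  # placeholder for a trained sketch encoder
    return np.asarray(sketch, dtype=float).ravel()

def embed_image(image):    # placeholder for a trained image encoder
    return np.asarray(image, dtype=float).ravel()

def retrieve(query_sketch, gallery, top_k=5):
    """Return (similarity, gallery index) pairs for the top_k matches."""
    q = embed_sketch(query_sketch)
    q = q / np.linalg.norm(q)
    sims = []
    for idx, img in enumerate(gallery):
        v = embed_image(img)
        sims.append((float(q @ (v / np.linalg.norm(v))), idx))
    return sorted(sims, reverse=True)[:top_k]
```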