Anticipatory thinking (AT) and design have many commonalities. We identify three challenges for all computational AT systems: representation, generation, and evaluation. We discuss how existing artificial intelligence techniques provide partial methods for addressing these challenges but still fall significantly short. Next, we articulate where AT concepts appear in three computational design paradigms: configuration design, design for resilience, and conceptual design. We close by identifying two promising future directions at the intersection of AT and design: modeling other humans and new interfaces to support human decision-makers.
Anticipatory thinking is the act of identifying problems that may arise in the future and preparing for them in order to mitigate the risk of negative impacts (or capitalize on the opportunity for positive ones). In this paper, we argue that a critical process underlying anticipatory thinking is cognitive priming, in which one's current thoughts influence the next without conscious intention. We make this argument in terms of two aspects of human cognition that are related to anticipatory thinking: context and creativity. We then use the parallels between context, creativity, and anticipatory thinking to support our position that cognitive priming plays a key role in various aspects of anticipatory thinking. As part of this analysis, we also discuss broader implications, including how cognitive priming can be used to improve computational systems that perform anticipatory thinking, as well as how it can be leveraged to improve anticipatory thinking in people.
Anticipatory thinking is necessary for managing risk in the safety- and mission-critical domains where AI systems are being deployed. We analyze the intersection of anticipatory thinking, the optimization paradigm, and metaforesight to advance our understanding of AI systems and their adaptive capabilities when encountering low-likelihood/high-impact risks. We describe this intersection as the anticipatory paradigm. We illustrate the challenges of this paradigm with concrete examples and propose new types of anticipatory thinking, moving toward a paradigm shift in how AI systems are evaluated.
We theorize that anticipatory thinking (AT) uses the same computational infrastructure as general cognition, as described by the Common Model of Cognition. We extend the Common Model with results from research on event cognition. Using these building blocks, we present a five-step process model of AT as realized in cognitive architecture components. We then revisit the simplifying assumptions underlying our model and expand our theory in response. Finally, we make predictions that are entailed by our account of AT, focusing on how computational limits in both natural and artificial cognitive systems can impact support for AT.
AI is transforming the way we live and work, with the potential to improve our lives in many ways. However, there are risks associated with AI deployments, including failures of model robustness and security, explainability and interpretability, bias and fairness, and privacy and ethics. While there are international efforts to define governance standards for responsible AI, these are currently only principles-based, leaving organizations uncertain how to prepare for emerging regulations or evaluate their effectiveness. We propose the use of anticipatory thinking and a flexible model risk audit (MRA) framework to bridge this gap and enable organizations to take advantage of the benefits of responsible AI. This approach enables organizations to characterize risk at the model level and to apply the anticipatory thinking employed by high-reliability organizations to achieve responsible AI deployments.
On March 27–29, 2023, the AAAI symposium “HRI in Academia and Industry: Bridging the Gap” was held in a hybrid format, with both in-person and remote participants. The use of robots that operate in spaces where humans are physically co-present is growing at a dramatic rate; we are seeing more and more robots in our warehouses, on our streets, and even in our homes. All of these robots will interact with humans in some way, whether intentionally or unintentionally, and to be successful, their interactions with humans will have to be carefully designed. For more than a decade, the field of Human-Robot Interaction (HRI) has been growing at the intersection of robotics, Artificial Intelligence (AI), human-computer interaction (HCI), psychology, and other fields; however, until quite recently, it has been a largely academic area, with university researchers proposing, implementing, and reporting on experiments at a limited scale. With the current increase in commercially available robots, HRI is starting to make its way into the robotics industry in a meaningful way. This symposium brought together HRI researchers and practitioners from academia, industry, and national research laboratories to find common ground, understand the different constraints at play, and determine how to work together effectively.
We report on the first-ever symposium on the assessment of AI trustworthiness, which led to the birth of a new research community on this topic.