Challenges, evaluation and opportunities for open-world learning
Mayank Kejriwal, Eric Kildebeck, Robert Steininger, Abhinav Shrivastava
Nature Machine Intelligence, published 2024-06-24
DOI: 10.1038/s42256-024-00852-4
https://www.nature.com/articles/s42256-024-00852-4
Citations: 0
Abstract
Environmental changes can profoundly impact the performance of artificial intelligence systems operating in the real world, with effects ranging from overt catastrophic failures to non-robust behaviours that do not take changing context into account. Here we argue that designing machine intelligence that can operate in open worlds, including detecting, characterizing and adapting to structurally unexpected environmental changes, is a critical goal on the path to building systems that can solve complex and relatively under-determined problems. We present and distinguish between three forms of open-world learning (OWL)—weak, semi-strong and strong—and argue that a fully developed OWL system should be antifragile, rather than merely robust. An antifragile system, an example of which is the immune system, is not only robust to adverse events, but adapts to them quickly and becomes better at handling them in subsequent encounters. We also argue that, because OWL approaches must be capable of handling the unexpected, their practical evaluation can pose an interesting conceptual problem.

AI systems operating in the real world unavoidably encounter unexpected environmental changes and need a built-in robustness and capability to learn fast, making use of advances such as lifelong and few-shot learning. Kejriwal et al. discuss three categories of such open-world learning and applications such as self-driving cars and robotic inspection.
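To make the weakest of the three OWL forms concrete: "weak" open-world learning only requires that a system detect when its environment has changed, without characterizing or adapting to the change. A minimal sketch of such novelty detection is shown below; the Gaussian training data, Euclidean distance metric, and 99th-percentile threshold are illustrative assumptions for this example, not the method proposed in the paper.

```python
import math
import random

# Hedged sketch of "weak" open-world learning: flag inputs that fall
# far outside the distribution seen during training. Everything here
# (data, metric, threshold) is an illustrative assumption.

random.seed(0)
# Familiar environment: 500 four-dimensional samples from N(0, 1).
train = [[random.gauss(0.0, 1.0) for _ in range(4)] for _ in range(500)]

# Centroid of the familiar environment.
centroid = [sum(col) / len(train) for col in zip(*train)]

def dist(x):
    """Euclidean distance from the training centroid."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, centroid)))

# Calibrate the novelty threshold from distances observed in training.
train_dists = sorted(dist(x) for x in train)
threshold = train_dists[int(0.99 * len(train_dists))]

def is_novel(x):
    """True when x lies outside the calibrated familiar region."""
    return dist(x) > threshold

print(is_novel([0.0] * 4))   # familiar input
print(is_novel([10.0] * 4))  # structurally shifted environment
```

Semi-strong and strong OWL would go further: characterizing what changed and adapting the model to it, which a fixed threshold like this cannot do on its own.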
About the journal
Nature Machine Intelligence is a distinguished publication that presents original research and reviews on various topics in machine learning, robotics, and AI. Our focus extends beyond these fields, exploring their profound impact on other scientific disciplines, as well as societal and industrial aspects. We recognize limitless possibilities wherein machine intelligence can augment human capabilities and knowledge in domains like scientific exploration, healthcare, medical diagnostics, and the creation of safe and sustainable cities, transportation, and agriculture. Simultaneously, we acknowledge the emergence of ethical, social, and legal concerns due to the rapid pace of advancements.
To foster interdisciplinary discussions on these far-reaching implications, Nature Machine Intelligence serves as a platform for dialogue facilitated through Comments, News Features, News & Views articles, and Correspondence. Our goal is to encourage a comprehensive examination of these subjects.
Similar to all Nature-branded journals, Nature Machine Intelligence operates under the guidance of a team of skilled editors. We adhere to a fair and rigorous peer-review process, ensuring high standards of copy-editing and production, swift publication, and editorial independence.