This paper introduces a novel incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting (MCS) problems, enabling decision makers to progressively provide assignment example preference information. Specifically, we first construct a max-margin optimization-based model to represent potentially non-monotonic preferences and accommodate inconsistent assignment example preference information in each iteration of the incremental preference elicitation process. Using the optimal objective function value of this model, we devise information amount measurement methods and question selection strategies to pinpoint the most informative alternative in each iteration within the framework of uncertainty sampling in active learning. Once the termination criterion is satisfied, the sorting result for non-reference alternatives can be determined by two optimization models: the max-margin optimization-based model and a complexity-controlling optimization model. Subsequently, two incremental preference elicitation-based algorithms are developed to learn potentially non-monotonic preferences under different termination criteria. Finally, we apply the proposed approach to a credit rating problem to illustrate the detailed implementation steps, and perform computational experiments on both artificial and real-world data sets to compare the proposed question selection strategies with several benchmark strategies.
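The question-selection idea above can be illustrated with a small sketch. The toy code below is not the paper's model: it assumes a simple additive value function with hypothetical criterion weights and category thresholds, and applies the uncertainty-sampling principle of querying the alternative whose predicted value lies closest to a category threshold.

```python
# Illustrative sketch (not the paper's actual model): uncertainty sampling
# for multi-criteria sorting. We assume a linear value function and pick
# the unassigned alternative whose value lies closest to a category
# threshold, i.e. the one the current model is least certain about.

def value(weights, alternative):
    """Additive value of an alternative under per-criterion weights."""
    return sum(w * x for w, x in zip(weights, alternative))

def margin(weights, thresholds, alternative):
    """Distance from the alternative's value to the nearest threshold."""
    v = value(weights, alternative)
    return min(abs(v - t) for t in thresholds)

def most_informative(weights, thresholds, candidates):
    """Uncertainty sampling: the smallest-margin candidate is queried next."""
    return min(candidates, key=lambda a: margin(weights, thresholds, a))

weights = [0.5, 0.3, 0.2]          # hypothetical learned criterion weights
thresholds = [0.4, 0.7]            # hypothetical category thresholds
candidates = [(0.9, 0.8, 0.9),     # clearly in the top category
              (0.5, 0.4, 0.3),     # value near a threshold -> uncertain
              (0.1, 0.1, 0.2)]     # clearly in the bottom category

print(most_informative(weights, thresholds, candidates))
```

The decision maker would then be asked to assign the selected alternative, and the value model would be re-estimated before the next question.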
{"title":"An incremental preference elicitation-based approach to learning potentially non-monotonic preferences in multi-criteria sorting","authors":"Zhuolin Li, Zhen Zhang, Witold Pedrycz","doi":"arxiv-2409.02760","DOIUrl":"https://doi.org/arxiv-2409.02760","url":null,"abstract":"This paper introduces a novel incremental preference elicitation-based\u0000approach to learning potentially non-monotonic preferences in multi-criteria\u0000sorting (MCS) problems, enabling decision makers to progressively provide\u0000assignment example preference information. Specifically, we first construct a\u0000max-margin optimization-based model to model potentially non-monotonic\u0000preferences and inconsistent assignment example preference information in each\u0000iteration of the incremental preference elicitation process. Using the optimal\u0000objective function value of the max-margin optimization-based model, we devise\u0000information amount measurement methods and question selection strategies to\u0000pinpoint the most informative alternative in each iteration within the\u0000framework of uncertainty sampling in active learning. Once the termination\u0000criterion is satisfied, the sorting result for non-reference alternatives can\u0000be determined through the use of two optimization models, i.e., the max-margin\u0000optimization-based model and the complexity controlling optimization model.\u0000Subsequently, two incremental preference elicitation-based algorithms are\u0000developed to learn potentially non-monotonic preferences, considering different\u0000termination criteria. 
Ultimately, we apply the proposed approach to a credit\u0000rating problem to elucidate the detailed implementation steps, and perform\u0000computational experiments on both artificial and real-world data sets to\u0000compare the proposed question selection strategies with several benchmark\u0000strategies.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-04","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193865","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Each concept used in IFOL has an associated list of sorted attributes, and the sorts are themselves intensional concepts. The need to extend unsorted IFOL (Intensional FOL) to a many-sorted IFOL stems mainly from the fact that natural language is implicitly many-sorted and that we intend to use IFOL to support applications involving natural languages. Thus, the proposed many-sorted version of IFOL simply completes this conceptual feature of IFOL.
{"title":"Intensional FOL: Many-Sorted Extension","authors":"Zoran Majkic","doi":"arxiv-2409.04469","DOIUrl":"https://doi.org/arxiv-2409.04469","url":null,"abstract":"The concepts used in IFOL have associated to them a list of sorted\u0000attributes, and the sorts are the intensional concepts as well. The requirement\u0000to extend the unsorted IFOL (Intensional FOL) to many-sorted IFOL is mainly\u0000based on the fact that a natural language is implicitly many-sorted and that we\u0000intend to use IFOL to support applications that use natural languages. Thus,\u0000the proposed version of many-sorted IFOL is just the completion of this\u0000conceptual feature of the IFOL.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193909","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Anna L. Trella, Kelly W. Zhang, Hinal Jajal, Inbal Nahum-Shani, Vivek Shetty, Finale Doshi-Velez, Susan A. Murphy
Dental disease is a prevalent chronic condition associated with substantial financial burden, personal suffering, and increased risk of systemic diseases. Despite widespread recommendations for twice-daily tooth brushing, adherence to recommended oral self-care behaviors remains sub-optimal due to factors such as forgetfulness and disengagement. To address this, we developed Oralytics, an mHealth intervention system designed to complement clinician-delivered preventative care for marginalized individuals at risk for dental disease. Oralytics incorporates an online reinforcement learning (RL) algorithm to determine optimal times to deliver intervention prompts that encourage oral self-care behaviors. We have deployed Oralytics in a registered clinical trial. The deployment required careful design to manage challenges specific to the clinical trial setting in the U.S. In this paper, we (1) highlight key design decisions of the RL algorithm that address these challenges and (2) conduct a re-sampling analysis to evaluate those design decisions. A second phase (a randomized controlled trial) of Oralytics is planned to start in spring 2025.
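Oralytics' actual algorithm is more sophisticated, but the core online decision, whether to send a prompt at a given decision time, can be sketched with a generic Thompson-sampling bandit. Everything below (the priors, reward probabilities, and two-arm setup) is an illustrative assumption, not the trial's design.

```python
# Hedged sketch of an online send/hold decision via Thompson sampling.
# Not the Oralytics algorithm; priors and reward model are invented.
import random

class BetaBernoulliArm:
    """Beta-Bernoulli posterior over one action's success probability."""
    def __init__(self):
        self.successes, self.failures = 1, 1  # uniform Beta(1, 1) prior
    def sample(self):
        return random.betavariate(self.successes, self.failures)
    def update(self, reward):
        if reward:
            self.successes += 1
        else:
            self.failures += 1

def decide(send_arm, hold_arm):
    """Send a prompt iff its sampled success rate beats not sending."""
    return send_arm.sample() > hold_arm.sample()

random.seed(0)
send, hold = BetaBernoulliArm(), BetaBernoulliArm()
for _ in range(200):                        # simulated decision times
    if decide(send, hold):
        send.update(random.random() < 0.6)  # assume prompts help 60% of the time
    else:
        hold.update(random.random() < 0.4)  # assume brushing happens 40% anyway
```

In a deployed clinical-trial setting, the real algorithm must additionally handle pooled data across users, delayed rewards, and constraints on prompt frequency, which is exactly the kind of design decision the paper discusses.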
{"title":"A Deployed Online Reinforcement Learning Algorithm In An Oral Health Clinical Trial","authors":"Anna L. Trella, Kelly W. Zhang, Hinal Jajal, Inbal Nahum-Shani, Vivek Shetty, Finale Doshi-Velez, Susan A. Murphy","doi":"arxiv-2409.02069","DOIUrl":"https://doi.org/arxiv-2409.02069","url":null,"abstract":"Dental disease is a prevalent chronic condition associated with substantial\u0000financial burden, personal suffering, and increased risk of systemic diseases.\u0000Despite widespread recommendations for twice-daily tooth brushing, adherence to\u0000recommended oral self-care behaviors remains sub-optimal due to factors such as\u0000forgetfulness and disengagement. To address this, we developed Oralytics, a\u0000mHealth intervention system designed to complement clinician-delivered\u0000preventative care for marginalized individuals at risk for dental disease.\u0000Oralytics incorporates an online reinforcement learning algorithm to determine\u0000optimal times to deliver intervention prompts that encourage oral self-care\u0000behaviors. We have deployed Oralytics in a registered clinical trial. The\u0000deployment required careful design to manage challenges specific to the\u0000clinical trials setting in the U.S. In this paper, we (1) highlight key design\u0000decisions of the RL algorithm that address these challenges and (2) conduct a\u0000re-sampling analysis to evaluate algorithm design decisions. 
A second phase\u0000(randomized control trial) of Oralytics is planned to start in spring 2025.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193868","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Jonas Stein, Florentin D Hildebrandt, Barrett W Thomas, Marlin W Ulmer
Home repair and installation services require technicians to visit customers and resolve tasks of varying complexity. Technicians often have heterogeneous skills and working experience. The geographical spread of customers makes insisting on only perfect matches between technician skills and task requirements impractical. Additionally, technicians are regularly absent due to sickness. With non-perfect assignments of technician skill to task requirements, some tasks may remain unresolved and require a revisit and rework. Companies seek to minimize customer inconvenience due to delay. We model the problem as a sequential decision process in which, over a number of service days, customers request service while heterogeneously skilled technicians are routed to serve customers in the system. Each day, our policy iteratively builds tours by adding "important" customers. The importance score is based on analytical considerations and jointly accounts for routing efficiency, urgency of service, and risk of rework. We propose a state-dependent balance of these factors via reinforcement learning. A comprehensive study shows that accepting a few non-perfect assignments can be quite beneficial for overall service quality. We further demonstrate the value provided by a state-dependent parametrization.
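As a rough illustration of the kind of importance score described, the sketch below combines routing efficiency, urgency, and rework risk with state-dependent weights. The weight values, units, and customer data are hypothetical; in the paper the balance is learned via reinforcement learning rather than fixed by hand.

```python
# Hedged sketch of an "importance" score combining routing efficiency,
# urgency, and rework risk. Weights are hypothetical illustrations of a
# state-dependent parametrization, not the paper's learned values.

def importance(detour_km, days_waiting, rework_risk, weights):
    """Higher score -> customer is added to today's tours earlier.

    detour_km    : extra travel needed to insert the customer (lower is better)
    days_waiting : how long the request has been pending (higher is more urgent)
    rework_risk  : probability the assigned technician cannot finish the task
    weights      : state-dependent trade-off (w_route, w_urgency, w_rework)
    """
    w_route, w_urgency, w_rework = weights
    return -w_route * detour_km + w_urgency * days_waiting - w_rework * rework_risk

# A "busy day" state might penalise detours and rework more than a quiet one.
busy_day = (2.0, 1.0, 5.0)
customers = {"A": (3.0, 4, 0.1), "B": (1.0, 1, 0.6), "C": (8.0, 6, 0.2)}
ranked = sorted(customers,
                key=lambda c: importance(*customers[c], busy_day),
                reverse=True)
print(ranked)
```

A state-dependent policy would output different weight triples for different system states (e.g. backlog size, technician availability), which is the flexibility the abstract argues for.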
{"title":"Learning State-Dependent Policy Parametrizations for Dynamic Technician Routing with Rework","authors":"Jonas Stein, Florentin D Hildebrandt, Barrett W Thomas, Marlin W Ulmer","doi":"arxiv-2409.01815","DOIUrl":"https://doi.org/arxiv-2409.01815","url":null,"abstract":"Home repair and installation services require technicians to visit customers\u0000and resolve tasks of different complexity. Technicians often have heterogeneous\u0000skills and working experiences. The geographical spread of customers makes\u0000achieving only perfect matches between technician skills and task requirements\u0000impractical. Additionally, technicians are regularly absent due to sickness.\u0000With non-perfect assignments regarding task requirement and technician skill,\u0000some tasks may remain unresolved and require a revisit and rework. Companies\u0000seek to minimize customer inconvenience due to delay. We model the problem as a\u0000sequential decision process where, over a number of service days, customers\u0000request service while heterogeneously skilled technicians are routed to serve\u0000customers in the system. Each day, our policy iteratively builds tours by\u0000adding \"important\" customers. The importance bases on analytical considerations\u0000and is measured by respecting routing efficiency, urgency of service, and risk\u0000of rework in an integrated fashion. We propose a state-dependent balance of\u0000these factors via reinforcement learning. A comprehensive study shows that\u0000taking a few non-perfect assignments can be quite beneficial for the overall\u0000service quality. 
We further demonstrate the value provided by a state-dependent\u0000parametrization.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193900","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Deriving a representative model using value function-based methods from the perspective of preference disaggregation has emerged as a prominent and growing topic in multi-criteria sorting (MCS) problems. Notably, many existing approaches to learning a representative model for MCS problems assume the monotonicity of criteria, which may not always align with the complexities found in real-world MCS scenarios. Consequently, this paper proposes approaches to learning a representative model for MCS problems with non-monotonic criteria through the integration of the threshold-based value-driven sorting procedure. To do so, we first define transformation functions that map the marginal values and category thresholds into a UTA-like functional space. Subsequently, we construct constraint sets to model non-monotonic criteria in MCS problems and develop optimization models to check and rectify inconsistency in the decision maker's assignment example preference information. By simultaneously considering the complexity and discriminative power of the models, two distinct lexicographic optimization-based approaches are developed to derive a representative model for MCS problems with non-monotonic criteria. Finally, we present an illustrative example and conduct comprehensive simulation experiments to demonstrate the feasibility and validity of the proposed approaches.
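The lexicographic principle underlying such approaches can be sketched generically: optimize a primary objective first, then break ties with a secondary objective restricted to primary-optimal solutions. The toy candidates and objectives below are invented for illustration and do not reproduce the paper's actual optimization models.

```python
# Generic two-stage lexicographic optimization sketch (not the paper's
# exact models): first optimize the primary objective, then optimize the
# secondary objective over the primary-optimal solutions.

def lexicographic_min(candidates, primary, secondary, tol=1e-9):
    best_primary = min(primary(x) for x in candidates)
    optimal = [x for x in candidates if primary(x) <= best_primary + tol]
    return min(optimal, key=secondary)

# Toy stand-ins: primary = number of inconsistent assignment examples,
# secondary = model complexity (e.g. number of non-monotonic segments).
candidates = [("m1", 2, 1), ("m2", 0, 5), ("m3", 0, 2), ("m4", 1, 0)]
chosen = lexicographic_min(candidates,
                           primary=lambda m: m[1],
                           secondary=lambda m: m[2])
print(chosen[0])
```

Here "m2" and "m3" both achieve zero inconsistency, and the secondary stage selects the simpler "m3"; in the paper, each stage is a full mathematical program rather than a search over a finite list.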
{"title":"Lexicographic optimization-based approaches to learning a representative model for multi-criteria sorting with non-monotonic criteria","authors":"Zhen Zhang, Zhuolin Li, Wenyu Yu","doi":"arxiv-2409.01612","DOIUrl":"https://doi.org/arxiv-2409.01612","url":null,"abstract":"Deriving a representative model using value function-based methods from the\u0000perspective of preference disaggregation has emerged as a prominent and growing\u0000topic in multi-criteria sorting (MCS) problems. A noteworthy observation is\u0000that many existing approaches to learning a representative model for MCS\u0000problems traditionally assume the monotonicity of criteria, which may not\u0000always align with the complexities found in real-world MCS scenarios.\u0000Consequently, this paper proposes some approaches to learning a representative\u0000model for MCS problems with non-monotonic criteria through the integration of\u0000the threshold-based value-driven sorting procedure. To do so, we first define\u0000some transformation functions to map the marginal values and category\u0000thresholds into a UTA-like functional space. Subsequently, we construct\u0000constraint sets to model non-monotonic criteria in MCS problems and develop\u0000optimization models to check and rectify the inconsistency of the decision\u0000maker's assignment example preference information. By simultaneously\u0000considering the complexity and discriminative power of the models, two distinct\u0000lexicographic optimization-based approaches are developed to derive a\u0000representative model for MCS problems with non-monotonic criteria. 
Eventually,\u0000we offer an illustrative example and conduct comprehensive simulation\u0000experiments to elaborate the feasibility and validity of the proposed\u0000approaches.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193903","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Haoming Li, Zhaoliang Chen, Jonathan Zhang, Fei Liu
Effective planning is essential for the success of any task, from organizing a vacation to routing autonomous vehicles and developing corporate strategies. It involves setting goals, formulating plans, and allocating resources to achieve them. Large language models (LLMs) are particularly well-suited for automated planning due to their strong capabilities in commonsense reasoning. They can deduce a sequence of actions needed to achieve a goal from a given state and identify an effective course of action. However, plans generated through direct prompting often fail upon execution. Our survey highlights the existing challenges in planning with language models, focusing on key areas such as embodied environments, optimal scheduling, competitive and cooperative games, task decomposition, reasoning, and planning. Through this study, we explore how LLMs transform AI planning and provide unique insights into the future of LM-assisted planning.
{"title":"LASP: Surveying the State-of-the-Art in Large Language Model-Assisted AI Planning","authors":"Haoming Li, Zhaoliang Chen, Jonathan Zhang, Fei Liu","doi":"arxiv-2409.01806","DOIUrl":"https://doi.org/arxiv-2409.01806","url":null,"abstract":"Effective planning is essential for the success of any task, from organizing\u0000a vacation to routing autonomous vehicles and developing corporate strategies.\u0000It involves setting goals, formulating plans, and allocating resources to\u0000achieve them. LLMs are particularly well-suited for automated planning due to\u0000their strong capabilities in commonsense reasoning. They can deduce a sequence\u0000of actions needed to achieve a goal from a given state and identify an\u0000effective course of action. However, it is frequently observed that plans\u0000generated through direct prompting often fail upon execution. Our survey aims\u0000to highlight the existing challenges in planning with language models, focusing\u0000on key areas such as embodied environments, optimal scheduling, competitive and\u0000cooperative games, task decomposition, reasoning, and planning. Through this\u0000study, we explore how LLMs transform AI planning and provide unique insights\u0000into the future of LM-assisted planning.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193870","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This paper presents our research towards a near-term future in which legal entities, such as individuals and organisations, can entrust semi-autonomous AI-driven agents to carry out online interactions on their behalf. The author's research concerns the development of semi-autonomous Web agents, which consult users if and only if the system does not have sufficient context or confidence to proceed autonomously. This creates a user-agent dialogue that allows the user to teach the agent about the information sources they trust, their data-sharing preferences, and their decision-making preferences. Ultimately, this enables the user to maximise control over their data and decisions while retaining the convenience of using agents, including those driven by LLMs. With a view to developing near-term solutions, the research seeks to answer the question: "How do we build a trustworthy and reliable network of semi-autonomous agents which represent individuals and organisations on the Web?". After identifying key requirements, the paper presents a demo for a sample use case of a generic personal assistant. This is implemented using (Notation3) rules to enforce safety guarantees around belief, data sharing, and data usage, and LLMs to allow natural language interaction with users and serendipitous dialogues between software agents.
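The "consult the user if and only if context or confidence is lacking" behaviour can be sketched in a few lines. The task structure, preference keys, and confidence threshold below are invented for illustration; the actual system enforces such guarantees with Notation3 rules rather than Python.

```python
# Hedged sketch of a semi-autonomous agent's core control decision:
# proceed autonomously only when it has both the required context and
# sufficient confidence; otherwise open a dialogue with the user.
# All field names and the threshold are illustrative assumptions.

def act(task, known_preferences, confidence, threshold=0.8):
    """Proceed autonomously when confident and informed; otherwise ask."""
    missing = [k for k in task["required_context"] if k not in known_preferences]
    if missing or confidence < threshold:
        return ("ask_user", missing)       # dialogue: agent learns preferences
    return ("proceed", task["action"])

prefs = {"trusted_sources": ["https://example.org"]}
task = {"action": "book_flight",
        "required_context": ["trusted_sources", "data_sharing_policy"]}
print(act(task, prefs, confidence=0.95))
```

Each answered question enriches `known_preferences`, so over time the agent asks less and acts autonomously more, which is the user-agent dialogue the abstract describes.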
{"title":"Here's Charlie! Realising the Semantic Web vision of Agents in the age of LLMs","authors":"Jesse Wright","doi":"arxiv-2409.04465","DOIUrl":"https://doi.org/arxiv-2409.04465","url":null,"abstract":"This paper presents our research towards a near-term future in which legal\u0000entities, such as individuals and organisations can entrust semi-autonomous\u0000AI-driven agents to carry out online interactions on their behalf. The author's\u0000research concerns the development of semi-autonomous Web agents, which consult\u0000users if and only if the system does not have sufficient context or confidence\u0000to proceed working autonomously. This creates a user-agent dialogue that allows\u0000the user to teach the agent about the information sources they trust, their\u0000data-sharing preferences, and their decision-making preferences. Ultimately,\u0000this enables the user to maximise control over their data and decisions while\u0000retaining the convenience of using agents, including those driven by LLMs. In view of developing near-term solutions, the research seeks to answer the\u0000question: \"How do we build a trustworthy and reliable network of\u0000semi-autonomous agents which represent individuals and organisations on the\u0000Web?\". After identifying key requirements, the paper presents a demo for a\u0000sample use case of a generic personal assistant. 
This is implemented using\u0000(Notation3) rules to enforce safety guarantees around belief, data sharing and\u0000data usage and LLMs to allow natural language interaction with users and\u0000serendipitous dialogues between software agents.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-03","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142194039","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Solim LeGris, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis
The Abstraction and Reasoning Corpus (ARC) is a visual program synthesis benchmark designed to test challenging out-of-distribution generalization in humans and machines. Since 2019, limited progress has been observed on the challenge using existing artificial intelligence methods. Comparing human and machine performance is important for the validity of the benchmark. While previous work explored how well humans can solve tasks from the ARC benchmark, they either did so using only a subset of tasks from the original dataset, or from variants of ARC, and therefore only provided a tentative estimate of human performance. In this work, we obtain a more robust estimate of human performance by evaluating 1729 humans on the full set of 400 training and 400 evaluation tasks from the original ARC problem set. We estimate that average human performance lies between 73.3% and 77.2% correct with a reported empirical average of 76.2% on the training set, and between 55.9% and 68.9% correct with a reported empirical average of 64.2% on the public evaluation set. However, we also find that 790 out of the 800 tasks were solvable by at least one person in three attempts, suggesting that the vast majority of the publicly available ARC tasks are in principle solvable by typical crowd-workers recruited over the internet. Notably, while these numbers are slightly lower than earlier estimates, human performance still greatly exceeds current state-of-the-art approaches for solving ARC. To facilitate research on ARC, we publicly release our dataset, called H-ARC (human-ARC), which includes all of the submissions and action traces from human participants.
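The reported ranges (e.g. 73.3%-77.2% on training tasks) are interval estimates of a success proportion. As an illustration only, since H-ARC's exact methodology may differ, here is a 95% normal-approximation confidence interval around a hypothetical empirical solve rate:

```python
# Illustrative 95% normal-approximation interval for a success
# proportion. The 762/1000 figures are hypothetical and chosen to echo
# the reported 76.2% training-set average; H-ARC's own interval
# construction may differ.
import math

def proportion_ci(successes, trials, z=1.96):
    p = successes / trials
    half = z * math.sqrt(p * (1 - p) / trials)
    return p - half, p + half

low, high = proportion_ci(762, 1000)
print(round(low, 3), round(high, 3))
```

With more attempts per task (the study collected multiple attempts from 1729 participants), such intervals tighten, which is why the full-dataset estimate is more robust than earlier subset-based ones.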
{"title":"H-ARC: A Robust Estimate of Human Performance on the Abstraction and Reasoning Corpus Benchmark","authors":"Solim LeGris, Wai Keen Vong, Brenden M. Lake, Todd M. Gureckis","doi":"arxiv-2409.01374","DOIUrl":"https://doi.org/arxiv-2409.01374","url":null,"abstract":"The Abstraction and Reasoning Corpus (ARC) is a visual program synthesis\u0000benchmark designed to test challenging out-of-distribution generalization in\u0000humans and machines. Since 2019, limited progress has been observed on the\u0000challenge using existing artificial intelligence methods. Comparing human and\u0000machine performance is important for the validity of the benchmark. While\u0000previous work explored how well humans can solve tasks from the ARC benchmark,\u0000they either did so using only a subset of tasks from the original dataset, or\u0000from variants of ARC, and therefore only provided a tentative estimate of human\u0000performance. In this work, we obtain a more robust estimate of human\u0000performance by evaluating 1729 humans on the full set of 400 training and 400\u0000evaluation tasks from the original ARC problem set. We estimate that average\u0000human performance lies between 73.3% and 77.2% correct with a reported\u0000empirical average of 76.2% on the training set, and between 55.9% and 68.9%\u0000correct with a reported empirical average of 64.2% on the public evaluation\u0000set. However, we also find that 790 out of the 800 tasks were solvable by at\u0000least one person in three attempts, suggesting that the vast majority of the\u0000publicly available ARC tasks are in principle solvable by typical crowd-workers\u0000recruited over the internet. Notably, while these numbers are slightly lower\u0000than earlier estimates, human performance still greatly exceeds current\u0000state-of-the-art approaches for solving ARC. 
To facilitate research on ARC, we\u0000publicly release our dataset, called H-ARC (human-ARC), which includes all of\u0000the submissions and action traces from human participants.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193872","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
Online corner case detection is crucial for ensuring the safety of autonomous driving vehicles. Current autonomous driving approaches can be categorized into modular approaches and end-to-end approaches. To leverage the advantages of both, we propose a method for online corner case detection that integrates an end-to-end approach into a modular system. The modular system takes over the primary driving task, while the end-to-end network runs in parallel as a secondary one; the disagreement between the two systems is then used for corner case detection. We implement this method on a real vehicle and evaluate it qualitatively. Our results demonstrate that end-to-end networks, known for their superior situational awareness, can effectively contribute to corner case detection when used as secondary driving systems. These findings suggest that such an approach holds potential for enhancing the safety of autonomous vehicles.
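The disagreement idea can be sketched directly: compare the modular stack's planned trajectory with the end-to-end network's, and flag a corner case when they diverge. The distance metric and threshold below are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch of disagreement-based corner case detection: a corner
# case is flagged when the modular planner's trajectory and the
# end-to-end network's trajectory diverge beyond a threshold. The
# waypoints, metric, and threshold are invented for illustration.
import math

def trajectory_disagreement(modular, end_to_end):
    """Mean Euclidean distance between matched waypoints of two plans."""
    dists = [math.dist(a, b) for a, b in zip(modular, end_to_end)]
    return sum(dists) / len(dists)

def is_corner_case(modular, end_to_end, threshold=1.5):
    return trajectory_disagreement(modular, end_to_end) > threshold

modular_plan   = [(0, 0), (1, 0), (2, 0), (3, 0)]
e2e_plan_close = [(0, 0), (1, 0.2), (2, 0.1), (3, 0)]
e2e_plan_far   = [(0, 0), (1, 2.0), (2, 3.0), (3, 3.5)]
print(is_corner_case(modular_plan, e2e_plan_close),
      is_corner_case(modular_plan, e2e_plan_far))
```

In the deployed setting the comparison would run online at each planning cycle, with only the modular plan actually executed, so flagging costs nothing in control authority.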
{"title":"Integrating End-to-End and Modular Driving Approaches for Online Corner Case Detection in Autonomous Driving","authors":"Gemb Kaljavesi, Xiyan Su, Frank Diermeyer","doi":"arxiv-2409.01178","DOIUrl":"https://doi.org/arxiv-2409.01178","url":null,"abstract":"Online corner case detection is crucial for ensuring safety in autonomous\u0000driving vehicles. Current autonomous driving approaches can be categorized into\u0000modular approaches and end-to-end approaches. To leverage the advantages of\u0000both, we propose a method for online corner case detection that integrates an\u0000end-to-end approach into a modular system. The modular system takes over the\u0000primary driving task and the end-to-end network runs in parallel as a secondary\u0000one, the disagreement between the systems is then used for corner case\u0000detection. We implement this method on a real vehicle and evaluate it\u0000qualitatively. Our results demonstrate that end-to-end networks, known for\u0000their superior situational awareness, as secondary driving systems, can\u0000effectively contribute to corner case detection. These findings suggest that\u0000such an approach holds potential for enhancing the safety of autonomous\u0000vehicles.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193907","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
This booklet, "Unlocking the Wisdom of Large Language Models," serves as an introduction to the comprehensive work "The Path to Artificial General Intelligence." Through a series of nine aphorisms, we distill key insights and principles that underpin the larger exploration of AI's future through adversarial LLM dialogue. We propose this approach as a potential path to realizing artificial general intelligence (AGI). This booklet also includes the titles, abstracts, and introductions of the chapters in the main book, and presents the first two chapters in their entirety.
{"title":"Unlocking the Wisdom of Large Language Models: An Introduction to The Path to Artificial General Intelligence","authors":"Edward Y. Chang","doi":"arxiv-2409.01007","DOIUrl":"https://doi.org/arxiv-2409.01007","url":null,"abstract":"This booklet, \"Unlocking the Wisdom of Large Language Models,\" serves as an\u0000introduction to the comprehensive work \"The Path to Artificial General\u0000Intelligence.\" Through a series of nine aphorisms, we distill key insights and\u0000principles that underpin the larger exploration of AI's future through\u0000adversarial LLM dialogue. We propose this approach as a potential path to\u0000realizing artificial general intelligence (AGI). This booklet also includes the\u0000titles, abstracts, and introductions of the chapters in the main book, and\u0000presents the first two chapters in their entirety.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-09-02","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193905","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}