While LLMs in the RAG paradigm have shown remarkable performance on a variety of tasks, they still underperform on unseen domains, especially on complex tasks like procedural question answering. In this work, we introduce a novel formalism and structure for manipulating text-based procedures. Based on this formalism, we further present a novel dataset called LCStep, scraped from the LangChain Python docs. Moreover, we extend the traditional RAG system to propose a novel system called analogy-augmented generation (AAG), which draws inspiration from human analogical reasoning and the ability to assimilate past experiences to solve unseen problems. The proposed method uses a frozen language model with a custom procedure memory store to adapt to specialized knowledge. We demonstrate that AAG outperforms few-shot and RAG baselines on the LCStep, RecipeNLG, and CHAMP datasets under a pairwise LLM-based evaluation, corroborated by human evaluation in the case of RecipeNLG.
"Pairing Analogy-Augmented Generation with Procedural Memory for Procedural Q&A". K Roth, Rushil Gupta, Simon Halle, Bang Liu. arXiv:2409.01344 (2024-09-02).
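The procedure-memory retrieval at the heart of AAG can be illustrated with a minimal sketch. All names here (`ProcedureMemory`, `build_prompt`) are hypothetical, and a toy bag-of-words similarity stands in for whatever embedding model the paper actually uses; the idea shown is only the general pattern of retrieving analogous procedures and prepending them to a frozen LM's prompt.

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ProcedureMemory:
    """Stores (goal, steps) pairs and retrieves the most analogous ones."""
    def __init__(self):
        self.items = []

    def add(self, goal: str, steps: list[str]):
        self.items.append((goal, steps, embed(goal)))

    def retrieve(self, query: str, k: int = 2):
        q = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(q, it[2]), reverse=True)
        return [(goal, steps) for goal, steps, _ in ranked[:k]]

def build_prompt(query: str, memory: ProcedureMemory) -> str:
    # Analogous past procedures become in-context examples for a frozen LM.
    analogies = memory.retrieve(query)
    blocks = [f"Goal: {g}\nSteps: {'; '.join(s)}" for g, s in analogies]
    return "\n\n".join(blocks) + f"\n\nGoal: {query}\nSteps:"
```

The frozen model never changes; adaptation to a specialized domain happens entirely through what is stored in, and retrieved from, the memory.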
Mahsa Khosravi, Matthew Carroll, Kai Liang Tan, Liza Van der Laan, Joscif Raigne, Daren S. Mueller, Arti Singh, Aditya Balu, Baskar Ganapathysubramanian, Asheesh Kumar Singh, Soumik Sarkar
Agricultural production requires careful management of inputs such as fungicides, insecticides, and herbicides to ensure a successful crop that is high-yielding, profitable, and of superior seed quality. Current state-of-the-art field crop management relies on coarse-scale strategies, where entire fields are sprayed with pest- and disease-controlling chemicals, leading to increased cost and sub-optimal soil and crop management. To overcome these challenges and optimize crop production, we utilize machine learning tools within a virtual field environment to generate localized management plans for farmers to manage biotic threats while maximizing profits. Specifically, we present AgGym, a modular, crop- and stress-agnostic simulation framework to model the spread of biotic stresses in a field and estimate yield losses with and without chemical treatments. Our validation with real data shows that AgGym can be customized with limited data to simulate yield outcomes under various biotic stress conditions. We further demonstrate that deep reinforcement learning (RL) policies can be trained using AgGym for designing ultra-precise biotic stress mitigation strategies with the potential to increase yield recovery with fewer chemicals and lower cost. Our proposed framework enables personalized decision support that can transform biotic stress management from schedule-based and reactive to opportunistic and prescriptive. We also release the AgGym software implementation as a community resource and invite experts to contribute to this open-source, publicly available modular environment framework. The source code can be accessed at: https://github.com/SCSLabISU/AgGym.
"AgGym: An agricultural biotic stress simulation environment for ultra-precision management planning". arXiv:2409.00735 (2024-09-01).
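A gym-style environment of the kind AgGym describes can be sketched in miniature. This toy (`TinyBioticStressEnv` is an invented name, not AgGym's API) models stress spreading along a 1-D row of plots, with a binary spray action that trades chemical cost against yield saved; the real framework is far more general, so treat this only as a sketch of the reset/step/reward pattern an RL policy would train against.

```python
import random

class TinyBioticStressEnv:
    """Minimal gym-style sketch: infection spreads along a 1-D field of
    plots; each step the action chooses whether to spray (cost vs. yield)."""
    def __init__(self, n_plots=10, spread_p=0.5, spray_cost=2.0, seed=0):
        self.n, self.p, self.cost = n_plots, spread_p, spray_cost
        self.rng = random.Random(seed)

    def reset(self):
        self.infected = [False] * self.n
        self.infected[0] = True  # stress enters at the field edge
        return tuple(self.infected)

    def step(self, spray: bool):
        if spray:
            self.infected = [False] * self.n  # treatment clears the stress
        else:
            nxt = list(self.infected)
            for i, inf in enumerate(self.infected):
                if inf and i + 1 < self.n and self.rng.random() < self.p:
                    nxt[i + 1] = True  # stress spreads to the neighbor plot
            self.infected = nxt
        healthy = self.n - sum(self.infected)
        reward = healthy - (self.cost if spray else 0.0)  # yield minus chemical cost
        return tuple(self.infected), reward, False, {}
```

An ultra-precise policy would learn when spraying is worth its cost given how far the stress has spread, rather than following a fixed spray schedule.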
This paper establishes a connection between the fields of machine learning (ML) and philosophy concerning the phenomenon of behaving neutrally. It investigates a specific class of ML systems capable of delivering a neutral response to a given task, referred to as abstaining machine learning systems, which have not yet been studied from a philosophical perspective. The paper introduces and explains various abstaining machine learning systems and categorizes them into distinct types. It examines how abstention in the different system types aligns with the epistemological counterpart of suspended judgment, addressing both the nature of suspension and its normative profile. Additionally, a philosophical analysis of the autonomy and explainability of the abstaining response is offered. It is argued, specifically, that one of the distinguished types of abstaining systems is preferable, as it aligns more closely with our criteria for suspended judgment. Moreover, it is better equipped to autonomously generate abstaining outputs and offer explanations for abstaining outputs when compared to the other type.
"Abstaining Machine Learning -- Philosophical Considerations". Daniela Schuster. arXiv:2409.00706 (2024-09-01).
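One common mechanism for the abstaining behavior discussed above is a confidence threshold with a reject option; the paper distinguishes several system types, and this sketch (with an invented function name) illustrates only the threshold-based kind.

```python
def abstaining_predict(probs: dict[str, float], threshold: float = 0.75):
    """Selective prediction: return the top label only if its probability
    clears the threshold; otherwise abstain (a neutral response)."""
    label, p = max(probs.items(), key=lambda kv: kv[1])
    return label if p >= threshold else "ABSTAIN"
```

The abstention here is derived from the model's own uncertainty, which is relevant to the paper's question of how autonomously a system can generate, and explain, its neutral outputs.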
In this paper, we study the shortest path problem (SPP) with multiple source-destination pairs (MSD), namely MSD-SPP, to minimize the average travel time of all shortest paths. The inherent traffic capacity limits within a road network contribute to the competition among vehicles. Multi-agent reinforcement learning (MARL) models cannot offer effective and efficient path-planning cooperation due to the asynchronous decision-making setting in MSD-SPP, where vehicles (a.k.a. agents) cannot simultaneously complete routing actions in the previous time step. To tackle the efficiency issue, we propose to divide an entire road network into multiple sub-graphs and subsequently execute a two-stage process of inter-region and intra-region route planning. To address the asynchrony issue, in the proposed asyn-MARL framework, we first design a global state, which exploits a low-dimensional vector to implicitly represent the joint observations and actions of multiple agents. Then we develop a novel trajectory collection mechanism to decrease the redundancy in training trajectories. Additionally, we design a novel actor network to facilitate cooperation among vehicles towards the same or nearby destinations, and a reachability graph aimed at preventing infinite loops in routing paths. On both synthetic and real road networks, our evaluation results demonstrate that our approach outperforms state-of-the-art planning approaches.
"Cooperative Path Planning with Asynchronous Multiagent Reinforcement Learning". Jiaming Yin, Weixiong Rao, Yu Xiao, Keshuang Tang. arXiv:2409.00754 (2024-09-01).
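The inter-region / intra-region decomposition can be sketched without any of the MARL machinery. In this hypothetical illustration (function names are mine, and unweighted BFS stands in for whatever planner each stage actually uses), stage 1 plans a sequence of regions over a coarse region graph, and stage 2 solves the path restricted to nodes in those regions.

```python
from collections import deque

def bfs_path(adj, src, dst):
    """Unweighted shortest path via BFS (a stand-in for any SPP solver)."""
    prev, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        if u == dst:
            path = []
            while u is not None:
                path.append(u)
                u = prev[u]
            return path[::-1]
        for v in adj.get(u, []):
            if v not in prev:
                prev[v] = u
                q.append(v)
    return None

def two_stage_route(adj, region_of, region_adj, src, dst):
    """Stage 1: plan over the region graph; stage 2: plan on the road graph
    restricted to the planned regions (the inter-/intra-region decomposition)."""
    region_seq = bfs_path(region_adj, region_of[src], region_of[dst])
    allowed = set(region_seq)
    sub = {u: [v for v in vs if region_of[v] in allowed]
           for u, vs in adj.items() if region_of[u] in allowed}
    return bfs_path(sub, src, dst)
```

Restricting stage 2 to the planned regions is what makes the decomposition efficient: each intra-region search touches only a fraction of the full network.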
The article focuses on the urgent issue of femicide in Veracruz, Mexico, and the development of the MFM_FEM_VER_CP_2024 model, a mathematical framework designed to predict femicide risk using fuzzy logic. This model addresses the complexity and uncertainty inherent in gender-based violence by formalizing risk factors such as coercive control, dehumanization, and the cycle of violence. These factors are mathematically modeled through membership functions that assess the degree of risk associated with various conditions, including personal relationships and specific acts of violence. The study enhances the original model by incorporating new rules and refining existing membership functions, which significantly improve the model's predictive accuracy.
"Predicting Femicide in Veracruz: A Fuzzy Logic Approach with the Expanded MFM-FEM-VER-CP-2024 Model". Carlos Medel-Ramírez, Hilario Medel-López. arXiv:2409.00359 (2024-08-31).
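The fuzzy-logic machinery the abstract describes, membership functions combined by rules, follows a standard pattern that can be sketched briefly. The breakpoints, factor names, and the single min-rule below are purely illustrative; they are not the parameters or rules of the MFM_FEM_VER_CP_2024 model.

```python
def triangular(x, a, b, c):
    """Triangular membership function: degree rises from a to a peak at b,
    then falls back to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def risk_degree(coercive_control, dehumanization):
    """Toy two-factor fuzzy rule: IF coercive control is high AND
    dehumanization is high THEN risk is high (min = fuzzy AND).
    All breakpoints are illustrative, not the published model's."""
    high_cc = triangular(coercive_control, 0.4, 1.0, 1.6)
    high_dh = triangular(dehumanization, 0.4, 1.0, 1.6)
    return min(high_cc, high_dh)
```

Refining a membership function in such a model means adjusting breakpoints like `a`, `b`, `c`; adding a rule means adding another min/max combination of memberships, which is how the expanded model improves on the original.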
I. de Rodrigo, A. Sanchez-Cuadrado, J. Boal, A. J. Lopez-Lopez
This paper introduces the MERIT Dataset, a multimodal (text + image + layout) fully labeled dataset within the context of school reports. Comprising over 400 labels and 33k samples, the MERIT Dataset is a valuable resource for training models in demanding Visually-rich Document Understanding (VrDU) tasks. By its nature (student grade reports), the MERIT Dataset can potentially include biases in a controlled way, making it a valuable tool to benchmark biases induced in Large Language Models (LLMs). The paper outlines the dataset's generation pipeline and highlights its main features in the textual, visual, layout, and bias domains. To demonstrate the dataset's utility, we present a benchmark with token classification models, showing that the dataset poses a significant challenge even for SOTA models and that these would greatly benefit from including samples from the MERIT Dataset in their pretraining phase.
"The MERIT Dataset: Modelling and Efficiently Rendering Interpretable Transcripts". arXiv:2409.00447 (2024-08-31).
Antonio Rago, Bence Palfi, Purin Sukpanichnant, Hannibal Nabli, Kavyesh Vivek, Olga Kostopoulou, James Kinross, Francesca Toni
In recent years, various methods have been introduced for explaining the outputs of "black-box" AI models. However, it is not well understood whether users actually comprehend and trust these explanations. In this paper, we focus on explanations for a regression tool for assessing cancer risk and examine the effect of the explanations' content and format on the user-centric metrics of comprehension and trust. Regarding content, we experiment with two explanation methods: the popular SHAP, based on game-theoretic notions and thus potentially complex for everyday users to comprehend, and occlusion-1, based on feature occlusion which may be more comprehensible. Regarding format, we present SHAP explanations as charts (SC), as is conventional, and occlusion-1 explanations as charts (OC) as well as text (OT), to which their simpler nature also lends itself. The experiments amount to user studies questioning participants, with two different levels of expertise (the general population and those with some medical training), on their subjective and objective comprehension of and trust in explanations for the outputs of the regression tool. In both studies we found a clear preference in terms of subjective comprehension and trust for occlusion-1 over SHAP explanations in general, when comparing based on content. However, direct comparisons of explanations when controlling for format only revealed evidence for OT over SC explanations in most cases, suggesting that the dominance of occlusion-1 over SHAP explanations may be driven by a preference for text over charts as explanations. Finally, we found no evidence of a difference between the explanation types in terms of objective comprehension. Thus overall, the choice of the content and format of explanations needs careful attention, since in some contexts format, rather than content, may play the critical role in improving user experience.
"Exploring the Effect of Explanation Content and Format on User Comprehension and Trust". arXiv:2408.17401 (2024-08-30).
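Of the two explanation methods compared above, occlusion-1 is simple enough to sketch directly: each feature is replaced, one at a time, with a baseline value, and the attribution is the resulting drop in the model's output. The function signature below is my own illustration, not the authors' implementation.

```python
def occlusion_1(model, x, baseline=0.0):
    """Attribute each feature by the prediction drop when that feature
    alone is replaced with a baseline value (feature occlusion)."""
    base_pred = model(x)
    scores = []
    for i in range(len(x)):
        occluded = list(x)
        occluded[i] = baseline  # occlude exactly one feature
        scores.append(base_pred - model(occluded))
    return scores
```

The directness of this procedure ("what happens if this feature is removed?") is plausibly why the study found it easier for non-experts to comprehend than SHAP's game-theoretic averaging over feature coalitions.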
Thomas Schnake, Farnoush Rezaei Jafaria, Jonas Lederer, Ping Xiong, Shinichi Nakajima, Stefan Gugler, Grégoire Montavon, Klaus-Robert Müller
Explainable Artificial Intelligence (XAI) plays a crucial role in fostering transparency and trust in AI systems, where traditional XAI approaches typically offer one level of abstraction for explanations, often in the form of heatmaps highlighting single or multiple input features. However, we ask whether abstract reasoning or problem-solving strategies of a model may also be relevant, as these align more closely with how humans approach solutions to problems. We propose a framework, called Symbolic XAI, that attributes relevance to symbolic queries expressing logical relationships between input features, thereby capturing the abstract reasoning behind a model's predictions. The methodology is built upon a simple yet general multi-order decomposition of model predictions. This decomposition can be specified using higher-order propagation-based relevance methods, such as GNN-LRP, or perturbation-based explanation methods commonly used in XAI. The effectiveness of our framework is demonstrated in the domains of natural language processing (NLP), vision, and quantum chemistry (QC), where abstract symbolic domain knowledge is abundant and of significant interest to users. The Symbolic XAI framework provides an understanding of the model's decision-making process that is both flexible for customization by the user and human-readable through logical formulas.
"Towards Symbolic XAI -- Explanation Through Human Understandable Logical Relationships Between Features". arXiv:2408.17198 (2024-08-30).
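A flavor of attributing relevance to a logical relationship between features, rather than to single features, can be given with a second-order perturbation sketch: the relevance of the query "feature i AND feature j" computed by inclusion-exclusion over masked inputs. This is a simplified stand-in for the paper's multi-order decomposition, with invented function names.

```python
def mask(x, idxs, baseline=0.0):
    """Replace the features at the given indices with a baseline value."""
    y = list(x)
    for i in idxs:
        y[i] = baseline
    return y

def interaction_relevance(model, x, i, j):
    """Second-order perturbation relevance for the symbolic query
    'feature i AND feature j', via inclusion-exclusion over masks."""
    f = model
    return f(x) - f(mask(x, [i])) - f(mask(x, [j])) + f(mask(x, [i, j]))
```

For a purely additive model this quantity is zero, so a nonzero value signals a genuine interaction between the two features, exactly the kind of relational structure a single-feature heatmap cannot express.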
Sara Jaber (Univ. Gustave Eiffel, COSYS, GRETTIA, Paris, France; VEDECOM, mobiLAB, Department of new solutions of mobility services and shared energy, Versailles, France), Mostafa Ameli (Univ. Gustave Eiffel, COSYS, GRETTIA, Paris, France), S. M. Hassan Mahdavi (VEDECOM, mobiLAB, Department of new solutions of mobility services and shared energy, Versailles, France), Neila Bhouri (Univ. Gustave Eiffel, COSYS, GRETTIA, Paris, France)
Public transportation systems are experiencing an increase in commuter traffic. This increase underscores the need for resilience strategies to manage unexpected service disruptions, ensuring rapid and effective responses that minimize adverse effects on stakeholders and enhance the system's ability to maintain essential functions and recover quickly. This study explores the management of public transport disruptions through resilience as a service (RaaS) strategies, developing an optimization model to effectively allocate resources and minimize the cost for operators and passengers. The proposed model includes multiple transportation options, such as buses, taxis, and automated vans, and evaluates them as bridging alternatives to rail-disrupted services based on factors such as their availability, capacity, speed, and proximity to the disrupted station. This ensures that the most suitable vehicles are deployed to maintain service continuity. Applied to a case study in the Île-de-France region (Paris and its suburbs), complemented by a microscopic simulation, the model is compared to existing solutions such as bus bridging and reserve fleets. The results highlight the model's performance in minimizing costs and enhancing stakeholder satisfaction, optimizing transport management during disruptions.
{"title":"A methodological framework for Resilience as a Service (RaaS) in multimodal urban transportation networks","authors":"Sara Jaber (Univ. Gustave Eiffel, COSYS, GRETTIA, Paris, France and VEDECOM, mobiLAB, Department of new solutions of mobility services and shared energy, Versailles, France), Mostafa Ameli (Univ. Gustave Eiffel, COSYS, GRETTIA, Paris, France), S. M. Hassan Mahdavi (VEDECOM, mobiLAB, Department of new solutions of mobility services and shared energy, Versailles, France), Neila Bhouri (Univ. Gustave Eiffel, COSYS, GRETTIA, Paris, France)","doi":"arxiv-2408.17233","DOIUrl":"https://doi.org/arxiv-2408.17233","url":null,"abstract":"Public transportation systems are experiencing an increase in commuter traffic. This increase underscores the need for resilience strategies to manage unexpected service disruptions, ensuring rapid and effective responses that minimize adverse effects on stakeholders and enhance the system's ability to maintain essential functions and recover quickly. This study aims to explore the management of public transport disruptions through resilience as a service (RaaS) strategies, developing an optimization model to effectively allocate resources and minimize the cost for operators and passengers. The proposed model includes multiple transportation options, such as buses, taxis, and automated vans, and evaluates them as bridging alternatives to rail-disrupted services based on factors such as their availability, capacity, speed, and proximity to the disrupted station. This ensures that the most suitable vehicles are deployed to maintain service continuity. Applied to a case study in the Ile de France region, Paris and suburbs, complemented by a microscopic simulation, the model is compared to existing solutions such as bus bridging and reserve fleets. The results highlight the model's performance in minimizing costs and enhancing stakeholder satisfaction, optimizing transport management during disruptions.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193913","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
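As a rough illustration of the kind of multi-criteria vehicle selection the abstract describes, the sketch below scores candidate bridging vehicles by travel time to the disrupted station plus the trips needed to clear stranded demand, then picks the cheapest. The vehicle data, cost formula, and all names here are hypothetical assumptions for demonstration, not taken from the paper's actual optimization model.

```python
# Toy sketch of multi-criteria bridging-vehicle selection (hypothetical,
# not the paper's model): rank available vehicles by a simple cost that
# combines proximity, speed, and capacity against stranded demand.
from dataclasses import dataclass

@dataclass
class Vehicle:
    name: str
    capacity: int       # passengers per trip
    speed_kmh: float
    distance_km: float  # proximity to the disrupted station
    available: bool

def bridging_cost(v: Vehicle, demand: int) -> float:
    """Lower is better: hours to reach the station plus trips needed
    to move the stranded demand."""
    time_to_station_h = v.distance_km / v.speed_kmh
    trips_needed = -(-demand // v.capacity)  # ceiling division
    return time_to_station_h + trips_needed

def select_vehicle(fleet: list[Vehicle], demand: int) -> Vehicle:
    """Deploy the most suitable available vehicle for the disruption."""
    candidates = [v for v in fleet if v.available]
    return min(candidates, key=lambda v: bridging_cost(v, demand))

fleet = [
    Vehicle("bus", capacity=60, speed_kmh=30, distance_km=3.0, available=True),
    Vehicle("taxi", capacity=4, speed_kmh=40, distance_km=0.5, available=True),
    Vehicle("automated_van", capacity=12, speed_kmh=25, distance_km=1.0, available=True),
]
best = select_vehicle(fleet, demand=120)  # high demand favors the high-capacity bus
```

The real model is an optimization over a fleet and cost structure for both operators and passengers; this greedy single-vehicle pick only conveys the flavor of trading off capacity, speed, and proximity.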
Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita
Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations for how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, a gap remains in the literature: no comprehensive review delves into the detailed mathematical representations, design methodologies of XAI models, and other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods across different domains. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.
{"title":"Explainable Artificial Intelligence: A Survey of Needs, Techniques, Applications, and Future Direction","authors":"Melkamu Mersha, Khang Lam, Joseph Wood, Ali AlShami, Jugal Kalita","doi":"arxiv-2409.00265","DOIUrl":"https://doi.org/arxiv-2409.00265","url":null,"abstract":"Artificial intelligence models encounter significant challenges due to their black-box nature, particularly in safety-critical domains such as healthcare, finance, and autonomous vehicles. Explainable Artificial Intelligence (XAI) addresses these challenges by providing explanations for how these models make decisions and predictions, ensuring transparency, accountability, and fairness. Existing studies have examined the fundamental concepts of XAI, its general principles, and the scope of XAI techniques. However, there remains a gap in the literature as there are no comprehensive reviews that delve into the detailed mathematical representations, design methodologies of XAI models, and other associated aspects. This paper provides a comprehensive literature review encompassing common terminologies and definitions, the need for XAI, beneficiaries of XAI, a taxonomy of XAI methods, and the application of XAI methods in different application areas. The survey is aimed at XAI researchers, XAI practitioners, AI model developers, and XAI beneficiaries who are interested in enhancing the trustworthiness, transparency, accountability, and fairness of their AI models.","PeriodicalId":501479,"journal":{"name":"arXiv - CS - Artificial Intelligence","volume":null,"pages":null},"PeriodicalIF":0.0,"publicationDate":"2024-08-30","publicationTypes":"Journal Article","fieldsOfStudy":null,"isOpenAccess":false,"openAccessPdf":"","citationCount":null,"resultStr":null,"platform":"Semanticscholar","paperid":"142193946","PeriodicalName":null,"FirstCategoryId":null,"ListUrlMain":null,"RegionNum":0,"RegionCategory":"","ArticlePicture":[],"TitleCN":null,"AbstractTextCN":null,"PMCID":"","EPubDate":null,"PubModel":null,"JCR":null,"JCRName":null,"Score":null,"Total":0}
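To make one branch of such a taxonomy concrete, the sketch below implements permutation-style feature importance, a post-hoc, model-agnostic explanation technique of the kind XAI surveys catalogue. The toy model, the data, and the deterministic cyclic shift (standing in for a random shuffle, to keep the result reproducible) are illustrative assumptions, not material from the paper.

```python
# Illustrative sketch: permutation-style feature importance, a post-hoc,
# model-agnostic XAI method. The toy "black box" and data are hypothetical.

def model(x):
    """A toy black-box model: depends only on feature 0, ignores feature 1."""
    return 3.0 * x[0] + 0.0 * x[1]

def mse(data, targets):
    """Mean squared error of the model over a dataset."""
    return sum((model(x) - y) ** 2 for x, y in zip(data, targets)) / len(data)

def permutation_importance(data, targets, feature):
    """Error increase when one feature's column is permuted.
    A deterministic cyclic shift stands in for a random shuffle."""
    baseline = mse(data, targets)
    column = [x[feature] for x in data]
    column = column[1:] + column[:1]          # break the feature/target link
    permuted = [list(x) for x in data]
    for row, value in zip(permuted, column):
        row[feature] = value
    return mse(permuted, targets) - baseline  # importance = error increase

data = [[1.0, 5.0], [2.0, 1.0], [3.0, 9.0], [4.0, 2.0]]
targets = [model(x) for x in data]  # model fits exactly, so baseline error is 0

imp0 = permutation_importance(data, targets, feature=0)  # large: feature 0 matters
imp1 = permutation_importance(data, targets, feature=1)  # zero: feature 1 is ignored
```

Because the toy model ignores feature 1, permuting that column leaves the error unchanged, while permuting feature 0 inflates it; that gap is exactly the "explanation" this class of methods produces for a black-box model.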