Title: Two-Sided Deep Reinforcement Learning for Dynamic Mobility-on-Demand Management with Mixed Autonomy
Authors: Jiaohong Xie, Yang Liu, Nan Chen
Journal: Transportation Science (INFORMS), Operations Research & Management Science (JCR Q1)
DOI: 10.1287/trsc.2022.1188
Publication date: 2023-07-01
Citations: 3
Abstract
Autonomous vehicles (AVs) are expected to operate on mobility-on-demand (MoD) platforms because AV technology enables flexible self-relocation and system-optimal coordination. Unlike existing studies, which focus on MoD systems served by a pure AV fleet or a pure conventional-vehicle (CV) fleet, we aim to optimize the real-time fleet management of an MoD system with mixed autonomy, in which CVs and AVs coexist. We consider the realistic case in which heterogeneous, boundedly rational drivers determine and learn their relocation strategies to improve their own compensation, whereas AVs fully comply with the platform’s operational decisions. To achieve a high level of service with a mixed fleet, we propose that the platform prioritize human drivers in matching decisions when on-demand requests arrive and dynamically determine both the AV relocation tasks and the optimal commission fee so as to influence drivers’ behavior. However, making efficient real-time fleet management decisions is challenging when spatiotemporal uncertainty in demand and the complex interactions between human drivers and the operator must be anticipated and accounted for in the operator’s decision making. To tackle these challenges, we develop a two-sided multiagent deep reinforcement learning (DRL) approach in which the operator acts as a supervisor agent on one side, making centralized decisions for the mixed fleet, and each CV driver acts as an individual agent on the other side, learning to make decentralized decisions noncooperatively. We establish a two-sided multiagent advantage actor-critic algorithm that simultaneously trains the agents on both sides; to our knowledge, this is the first scalable algorithm developed for mixed fleet management. Furthermore, we formulate a two-head policy network that enables the supervisor agent to make multitask decisions efficiently from a single policy network, which greatly reduces computational time.
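The two-head idea above can be illustrated with a minimal sketch: a shared trunk maps the system state to one representation, and two softmax heads read off a relocation-task distribution and a commission-fee distribution in a single forward pass. All layer sizes, names, and the dense-layer architecture below are illustrative assumptions, not the network described in the paper.

```python
import math
import random

random.seed(0)

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def matvec(W, x):
    # row-major matrix-vector product
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

class TwoHeadPolicy:
    """Toy two-head policy: one shared trunk, one softmax head per task.

    All sizes (state_dim, n_zones, n_fees, hidden) are illustrative
    assumptions, not the architecture used in the paper.
    """

    def __init__(self, state_dim, n_zones, n_fees, hidden=16):
        rand = lambda rows, cols: [
            [random.gauss(0.0, 0.1) for _ in range(cols)] for _ in range(rows)
        ]
        self.W_trunk = rand(hidden, state_dim)   # shared representation
        self.W_zone = rand(n_zones, hidden)      # head 1: AV relocation task
        self.W_fee = rand(n_fees, hidden)        # head 2: commission-fee level

    def forward(self, state):
        h = [math.tanh(v) for v in matvec(self.W_trunk, state)]
        return softmax(matvec(self.W_zone, h)), softmax(matvec(self.W_fee, h))

policy = TwoHeadPolicy(state_dim=8, n_zones=5, n_fees=3)
p_zone, p_fee = policy.forward([random.random() for _ in range(8)])
```

Because both heads share the trunk, one forward pass yields both task distributions, which is the intuition behind the reported computational saving over maintaining two separate policy networks.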
The two-sided multiagent DRL approach is demonstrated in a case study of New York City based on real taxi trip data. Results show that our algorithm makes high-quality decisions quickly and outperforms benchmark policies. The efficiency of the two-head policy network is demonstrated by comparison with the case of two separate policy networks. Our fleet management strategy makes both the platform and the drivers better off, especially in scenarios with high demand volume. History: This paper has been accepted for the Transportation Science Special Issue on Emerging Topics in Transportation Science and Logistics. Funding: This work was supported by the Singapore Ministry of Education Academic Research Fund [Grant MOE2019-T2-2-165] and the Singapore Ministry of Education [Grant R-266-000-135-114].
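The two-sided training scheme can be sketched in a tabular toy: a supervisor agent picks a commission-fee level while independent driver agents pick relocation zones, and each side runs its own advantage actor-critic update (a scalar value baseline as critic, policy logits as actor) from its own reward. The zone demands, fee levels, and reward rules below are invented for illustration and are much simpler than the paper's matching and relocation model.

```python
import math
import random

random.seed(1)
ZONES, FEES, LR, STEPS = 3, 3, 0.1, 2000
DEMAND = [1.0, 2.0, 3.0]        # toy per-zone demand (assumed)
FEE_RATE = [0.1, 0.2, 0.3]      # toy commission levels (assumed)

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

def sample(probs):
    u, acc = random.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if u < acc:
            return i
    return len(probs) - 1

def a2c_update(agent, action, reward):
    """One advantage actor-critic step with a scalar value baseline."""
    adv = reward - agent["V"]                  # advantage estimate
    agent["V"] += LR * adv                     # critic update
    probs = softmax(agent["logits"])
    for a in range(len(probs)):                # actor: policy-gradient step
        indicator = 1.0 if a == action else 0.0
        agent["logits"][a] += LR * adv * (indicator - probs[a])

# one supervisor (platform) agent and two noncooperative driver agents
supervisor = {"logits": [0.0] * FEES, "V": 0.0}
drivers = [{"logits": [0.0] * ZONES, "V": 0.0} for _ in range(2)]

for _ in range(STEPS):
    fee = sample(softmax(supervisor["logits"]))
    zones = [sample(softmax(d["logits"])) for d in drivers]
    rate = FEE_RATE[fee]
    # drivers keep demand net of commission; the platform keeps the commission
    driver_rewards = [DEMAND[z] * (1.0 - rate) for z in zones]
    platform_reward = sum(DEMAND[z] * rate for z in zones)
    a2c_update(supervisor, fee, platform_reward)
    for d, z, r in zip(drivers, zones, driver_rewards):
        a2c_update(d, z, r)

p_driver0 = softmax(drivers[0]["logits"])
```

In this toy reward, drivers learn to favor the highest-demand zone. The actual problem adds spatiotemporal demand uncertainty, request matching, AV relocation tasks, and the fee's feedback effect on driver behavior, all of which this sketch deliberately omits.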
Journal introduction:
Transportation Science, published quarterly by INFORMS, is the flagship journal of the Transportation Science and Logistics Society of INFORMS. As the foremost scientific journal in the cross-disciplinary operational research field of transportation analysis, Transportation Science publishes high-quality original contributions and surveys on phenomena associated with all modes of transportation, present and prospective, including all levels of planning, design, economic, operational, and social aspects. Transportation Science focuses primarily on fundamental theories, coupled with observational and experimental studies of transportation and logistics phenomena and processes, mathematical models, advanced methodologies, and novel applications in transportation and logistics systems analysis, planning, and design. The journal covers a broad range of topics, including vehicular and human traffic flow theories, models, and their application to traffic operations and management; strategic, tactical, and operational planning of transportation and logistics systems; performance analysis methods and system design and optimization; theories and analysis methods for network and spatial activity interaction, equilibrium, and dynamics; economics of transportation system supply and evaluation; and methodologies for the analysis of transportation user behavior and the demand for transportation and logistics services.
Transportation Science is international in scope, with editors from nations around the globe. The editorial board reflects the diverse interdisciplinary interests of the transportation science and logistics community, with members that hold primary affiliations in engineering (civil, industrial, and aeronautical), physics, economics, applied mathematics, and business.