Title: Online longitudinal trajectory planning for connected and autonomous vehicles in mixed traffic flow with deep reinforcement learning approach
Language: English
Authors: Yanqiu Cheng; Xianbiao Hu; Kuanmin Chen; Xinlian Yu; Yulong Luo
Affiliations: Department of Traffic Engineering, College of Transportation Engineering, Chang'an University, Xi'an, Shaanxi, China and Department of Civil, Architectural and Environmental Engineering, Missouri University of Science and Technology, Rolla, Missouri, USA; Department of Civil and Environmental Engineering, Pennsylvania State University, University Park, Pennsylvania, USA; Department of Traffic Engineering, College of Transportation Engineering, Chang'an University, Xi'an, Shaanxi, China; School of Transportation Engineering, Southeast University, Nanjing, China; School of Architecture and Urban Planning, Guangdong University of Technology, Guangzhou, Guangdong, China
Keywords: Connected and automated vehicles; deep Q-learning; longitudinal trajectory planning; reinforcement learning
Abstract: This manuscript presents an Adam optimization-based Deep Reinforcement Learning model for Mixed Traffic Flow control (ADRL-MTF) to guide the longitudinal trajectory of a Connected and Autonomous Vehicle (CAV) on a typical urban roadway with signalized intersections. Two improvements are made over the prior literature. First, common simplifying assumptions, such as dividing a vehicle trajectory into several segments of constant acceleration/deceleration, are avoided to improve modeling realism. Second, built on the efficient Adam optimization and Deep Q-Learning, the proposed model avoids enumerating states and actions, and is computationally efficient and suitable for real-time applications. The mixed traffic flow dynamics are first formulated as a finite Markov decision process (MDP) model. Because time, space, and speed are discretized, this MDP has a high-dimensional state space and is very challenging to solve. We then propose a temporal difference-based deep reinforcement learning approach with ε-greedy action selection to balance exploration and exploitation. Two neural networks are developed to replace the traditional Q function and to generate the targets in the Q-learning update. These two networks are trained with the Adam optimization algorithm, which extends stochastic gradient descent by tracking the second moments of the gradients, and is therefore highly computationally efficient with low memory requirements. The proposed model is shown to reduce fuel consumption by 7.8%, outperforming a prior benchmark model based on Monte Carlo Tree Search. The model's runtime efficiency and stability are tested, and a sensitivity analysis is also performed.
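The training scheme the abstract outlines — an online Q estimate, a separate periodically synchronized copy that generates the Q-learning update targets, ε-greedy exploration, and Adam updates tracking first and second moments of the gradients — can be sketched as follows. This is an illustrative toy only: the tabular Q stands in for the paper's two neural networks, and the tiny 1-D chain MDP (reward peaking at the top state) is an assumption for demonstration, not the paper's mixed-traffic-flow model.

```python
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 6, 3           # small discretized toy state/action sets
GAMMA, EPSILON, LR = 0.9, 0.1, 0.2
BETA1, BETA2, ADAM_EPS = 0.9, 0.999, 1e-8

q_online = np.zeros((N_STATES, N_ACTIONS))  # stands in for the online Q network
q_target = q_online.copy()                  # stands in for the target network
m = np.zeros_like(q_online)                 # Adam first-moment estimate
v = np.zeros_like(q_online)                 # Adam second-moment estimate
t = 0

def step(s, a):
    """Toy dynamics: action 0/1/2 moves the state down/keeps it/moves it up;
    reward peaks at the top state (think: a preferred cruising speed)."""
    s2 = int(np.clip(s + a - 1, 0, N_STATES - 1))
    return s2, -abs(s2 - (N_STATES - 1))

for episode in range(400):
    s = 0                                   # fixed start state for the toy
    for _ in range(15):
        # epsilon-greedy: balance exploration against exploitation
        if rng.random() < EPSILON:
            a = int(rng.integers(N_ACTIONS))
        else:
            a = int(np.argmax(q_online[s]))
        s2, r = step(s, a)

        # temporal-difference target generated by the *target* copy
        td_target = r + GAMMA * np.max(q_target[s2])
        grad = np.zeros_like(q_online)
        grad[s, a] = q_online[s, a] - td_target  # d/dQ of 0.5 * TD-error^2

        # Adam update with bias-corrected first and second moments
        t += 1
        m = BETA1 * m + (1 - BETA1) * grad
        v = BETA2 * v + (1 - BETA2) * grad ** 2
        q_online -= LR * (m / (1 - BETA1 ** t)) / (
            np.sqrt(v / (1 - BETA2 ** t)) + ADAM_EPS)
        s = s2

    if episode % 10 == 0:
        q_target = q_online.copy()          # periodic target synchronization

print(int(np.argmax(q_online[0])))          # greedy action at the start state
```

After training, the greedy policy at the start state should select the "move up" action, since the reward peaks at the top of the chain; the target copy's lag is what stabilizes the bootstrapped TD targets, which is the role the paper's second neural network plays.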
Year: 2023
Journal: Journal of Intelligent Transportation Systems
Volume: 27
Issue: 1/6
Pages: 396-410