Actor-critic based deep reinforcement learning framework for energy management of extended range electric delivery vehicles

Pengyue Wang, Yan Li, Shashi Shekhar, William F. Northrop

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

In recent years, reinforcement learning (RL) algorithms have been successfully used in energy management strategies (EMS) for hybrid electric vehicles (HEVs) and extended range electric vehicles (EREVs) operating on standard driving cycles and fixed driving routes. However, for many real-world applications like last-mile package delivery, although vehicles may traverse the same region, the actual distance and energy intensity can differ significantly from day to day. Such variation renders existing RL approaches less useful for optimizing energy consumption because vehicle velocity trajectories and routes are not known a priori. This paper presents an actor-critic based RL framework with continuous output that optimizes a rule-based (RB) vehicle parameter in the engine control logic in real time during the trip, under uncertainty. The EMS was then tested on an in-use delivery EREV equipped with two-way vehicle-to-cloud connectivity. The algorithm was trained on 52 historical trips to learn a generalized strategy across trips of differing distance and energy intensity. An average fuel efficiency improvement of 21.8%, measured in miles per gallon gasoline equivalent, was demonstrated on 51 unforeseen test trips made by the same vehicle over distances of 31 to 54 miles. The framework can be extended to other RB methods and to EREV applications such as transit buses and commuter vehicles.
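The abstract's core idea, an actor-critic agent whose continuous action tunes a rule-based engine-control parameter given trip-level state, can be illustrated with a deliberately simplified sketch. Everything below is assumed for illustration and is not from the paper: the toy reward shape, the linear-Gaussian actor, the linear critic, and the notion of a state-dependent "ideal" parameter are stand-ins for the vehicle model and neural networks the authors actually use.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy environment: state = (normalized trip distance, energy
# intensity); action = a continuous RB parameter (e.g., an engine-on
# threshold). Reward peaks when the parameter matches a state-dependent
# "ideal" value -- an assumption made purely for this sketch.
def reward(state, action):
    ideal = 0.3 * state[0] + 0.5 * state[1]
    return -(action - ideal) ** 2

w_actor = np.zeros(2)    # Gaussian policy mean: mu = w_actor @ state
w_critic = np.zeros(2)   # value baseline:       V  = w_critic @ state
sigma = 0.1              # fixed exploration noise
alpha_a, alpha_c = 0.02, 0.1

for _ in range(20000):
    state = rng.uniform(0.0, 1.0, size=2)
    mu = w_actor @ state
    action = mu + sigma * rng.standard_normal()
    r = reward(state, action)
    td_error = r - w_critic @ state          # one-step episodic TD error
    w_critic += alpha_c * td_error * state   # critic update
    # Policy gradient: grad of log N(a | mu, sigma) w.r.t. mu is (a - mu)/sigma^2
    w_actor += alpha_a * td_error * (action - mu) / sigma**2 * state

# After training, the actor's mean should approximate the ideal parameter map.
```

The critic's value estimate serves only as a variance-reducing baseline for the policy-gradient update; in the paper's setting the same structure would be realized with neural networks and a reward derived from measured fuel and battery energy use.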

Original language: English (US)
Title of host publication: Proceedings of the 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2019
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 1379-1384
Number of pages: 6
ISBN (Electronic): 9781728124933
DOIs
State: Published - Jul 2019
Event: 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2019 - Hong Kong, China
Duration: Jul 8, 2019 - Jul 12, 2019

Publication series

Name: IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM
Volume: 2019-July

Conference

Conference: 2019 IEEE/ASME International Conference on Advanced Intelligent Mechatronics, AIM 2019
Country/Territory: China
City: Hong Kong
Period: 7/8/19 - 7/12/19

Bibliographical note

Funding Information:
The information, data, or work presented herein was funded in part by the Advanced Research Projects Agency-Energy (ARPA-E), U.S. Department of Energy, under Award Number DE-AR0000795. The views and opinions of authors expressed herein do not necessarily state or reflect those of the United States Government or any agency thereof.

P. Wang (corresponding author), Y. Li, S. Shekhar, and W. F. Northrop are with the University of Minnesota, Minneapolis, MN 55455 (email: wang6609@umn.edu; lixx4266@umn.edu; shekhar@umn.edu; wnorthro@umn.edu).

Publisher Copyright:
© 2019 IEEE.
