
Communications of the ACM

ACM Careers

Deep Learning Tapped to Improve Autonomous EVs and Ride-Hailing Services


EV charger station

Credit: Getty Images

The future of commuter traffic probably looks something like this: ride-hailing companies operating fleets of autonomous electric vehicles alongside an increasing number of semi-autonomous EVs co-piloted by humans, all supported by a large infrastructure of charging stations. This scenario is particularly likely in California, which has committed to reducing carbon emissions to 40 percent below 1990 levels by 2030.

Computer scientists at Lawrence Livermore National Laboratory are preparing for this potential future by applying deep reinforcement learning — the same kind of goal-driven algorithms that have defeated video game experts and world champions in the strategy game Go — to determine the most efficient strategy for charging and driving electric vehicles used for ride-hailing services. The goal is to maximize service while reducing carbon emissions and the impact on the electrical grid, with an emphasis on autonomous EVs capable of 24-hour service.

In "Increasing Performance of Electric Vehicles in Ride-Hailing Services Using Deep Reinforcement Learning," presented at the NeurIPS 2019 Workshop on Tackling Climate Change with Machine Learning, LLNL computer scientists applied deep reinforcement learning to data gathered from ride-hailing services and utility providers to determine when EV drivers or autonomous electric cars should charge their vehicles and when they should pick up customers. The researchers hope to eventually create a robust tool that could provide ride-hailing drivers or autonomous cars with an optimal driving policy based on surge pricing, wait times at charging stations, carbon emissions released while charging, the current cost of energy, and other factors that can change throughout the day.

"This project is a simple environment to train autonomous agents to improve their driving behavior," says LLNL principal investigator and machine learning researcher Ruben Glatt. "We wanted to build a simulation with input from the ride-sharing and energy data so we could simulate typical rides, including costs and energy implications given a certain location or time. We wanted to know: How can we balance ecological factors like the carbon footprint, which is important for the society, while at the same time optimizing revenue that benefits the individual?"

While EVs are clearly a major step toward reducing carbon emissions, they have downsides compared with combustion-engine vehicles, the researchers say. For example, drivers who currently use electric vehicles for ride-hailing companies must constantly weigh numerous options in deciding when to offer a ride and when to charge their cars.

"It's hard to be a ride-share driver with a normal EV because you don't get as much range with your car when it's fully charged as you would with a full tank of gas. And waiting times at charging stations can be very high, compared to a couple of minutes to fill your gas car," says main author and LLNL machine learning researcher Jacob Pettit. "There's a lot of opportunity cost involved; if you drive for a ride-sharing company you might waste a lot of time just recharging and not providing as much service."

In training the deep reinforcement learning algorithms, each time the agent (representing an autonomous EV or driver of a shared EV capable of driving 24 hours per day) dropped off a customer, it faced a decision to either charge the vehicle or give a ride to a customer. It was rewarded for successfully completing trips with an expected fare amount and penalized for producing carbon emissions when charging or attempting to provide a ride with insufficient battery power.
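
The paper does not include code, but the decision process described above — an agent that chooses between charging and picking up a ride after each drop-off, earning fares and being penalized for charging emissions or a dead battery — maps naturally onto a standard reinforcement learning environment. The sketch below is a minimal, hypothetical illustration of that structure in Python; the class name, state variables, demand and carbon-intensity curves, and reward constants are all assumptions for illustration, not the LLNL team's actual simulation.

```python
# Illustrative sketch only: a toy version of the charge-or-ride decision problem
# described in the article. All names, curves, and constants are assumptions.
import math
import random

class EVRideHailEnv:
    """Toy episode: after each drop-off the agent chooses CHARGE (0) or RIDE (1)."""

    CHARGE, RIDE = 0, 1

    def __init__(self, episode_hours=24):
        self.episode_hours = episode_hours
        self.reset()

    def reset(self):
        self.hour = 0.0            # time of day, accumulated over the episode
        self.battery = 1.0         # state of charge, 0..1
        return self._obs()

    def _obs(self):
        # Observation: time of day (cyclically encoded) and battery level.
        angle = 2 * math.pi * (self.hour % 24) / 24
        return (math.sin(angle), math.cos(angle), self.battery)

    def _demand(self):
        # Assumed ride-demand profile: peaks in the afternoon/evening.
        return 0.3 + 0.7 * max(0.0, math.sin(math.pi * ((self.hour % 24) - 6) / 18))

    def _carbon_intensity(self):
        # Assumed grid carbon intensity: lowest midday, highest overnight.
        return 0.8 - 0.5 * max(0.0, math.sin(math.pi * ((self.hour % 24) - 8) / 12))

    def step(self, action):
        reward = 0.0
        if action == self.CHARGE:
            # Charging: wait time scales with demand, penalty scales with grid carbon.
            wait_hours = 0.5 + self._demand()
            reward -= 2.0 * self._carbon_intensity()     # emissions penalty
            self.battery = min(1.0, self.battery + 0.5)
            self.hour += wait_hours
        else:  # RIDE
            trip_energy = random.uniform(0.05, 0.20)
            if self.battery < trip_energy:
                reward -= 5.0                             # insufficient battery to finish the trip
            else:
                fare = 3.0 + 10.0 * self._demand()        # surge-like pricing
                reward += fare
                self.battery -= trip_energy
            self.hour += random.uniform(0.3, 0.8)
        done = self.hour >= self.episode_hours
        return self._obs(), reward, done
```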

The agent learned that a beneficial strategy is to charge the vehicle when energy is cheap or low in carbon intensity, when demand for rides is low, and when wait times at charging stations are short. Overall, the agent discovered how to maximize the number of rides it provided (revenue) while minimizing charging wait times and emissions.

"It learned to look at time of day and extrapolate that, given the time, it wouldn't have to wait long, and wouldn't pay much money to charge the car, says co-author and LLNL machine learning researcher Brenden Petersen. "The surprising thing was that although we were mostly optimizing for money, the policy also produced less emissions per mile. Even though the agents were acting selfishly it still helped the environment, which is basically a win-win."  

The researchers are seeking to collaborate with ride-hailing companies and energy providers to make the infrastructure that will eventually support autonomous EV ride-hailing services more stable, and to increase the adoption of EVs for such services generally. They envision a machine-learning-based tool that could help utilities and city planners decide where to place future electric vehicle charging stations and how to build an electrical infrastructure that can accommodate autonomous EV traffic.

The team has applied for a Technology Commercialization Fund grant from the U.S. Department of Energy to expand the simulation to include multiple agents and alternative scenarios.

"We want to make the simulation closer to real life," Glatt says. "Here we only investigated for a single agent. We want to see what happens if we can control a fleet of agents and if additional networking effects evolve that we can benefit from."

The work was funded by the U.S. Department of Energy. Former LLNL researcher John Donadee initiated the project and also contributed to the publication.  


 
