Using simulated self-play to solve all OpenAI Gym classic control problems with Pytorch

Posted on Thu 14 November 2019 in Machine Learning • Tagged with python, pytorch, reinforcement, learning, openai, gym

I use simulated self-play by ranking episodes by their summed reward. Episodes are split in two at the median: those above it count as wins and receive a reward of +1, those below as losses and receive -1, mimicking the win/loss outcomes of games such as Go and Chess. Unlike the naive policy gradient used in previous posts, this version solves all OpenAI Gym classic control problems, albeit slowly.
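A minimal sketch of the median-split relabeling idea, assuming episodes are collected as dicts with a "rewards" list (the episode format and the subsequent policy update are assumptions here, not the post's exact code):

    import numpy as np

    def relabel_episodes(episodes):
        """Median-split relabeling: episodes above the median summed
        reward are treated as wins (+1), the rest as losses (-1)."""
        returns = np.array([sum(ep["rewards"]) for ep in episodes])
        median = np.median(returns)
        for ep, ret in zip(episodes, returns):
            # Every step in the episode shares the same +1/-1 outcome,
            # mimicking the win/loss signal of games like Go and Chess.
            ep["outcome"] = 1.0 if ret > median else -1.0
        return episodes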

OpenAI mountaincar

Continue reading

Applying policy gradient to OpenAI Gym classic control problems with Pytorch

Posted on Tue 12 November 2019 in Machine Learning • Tagged with python, pytorch, reinforcement, learning, openai, gym

I try to generalize the policy gradient algorithm introduced earlier to solve all the OpenAI Gym classic control problems. It works for CartPole and Acrobot, but not for the Pendulum and MountainCar environments.
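For context, a rough sketch of a REINFORCE-style policy gradient update in PyTorch; the function name, normalization step, and hyperparameters are illustrative assumptions rather than the post's exact implementation:

    import torch

    def reinforce_update(optimizer, log_probs, rewards, gamma=0.99):
        """One policy gradient step from a single episode.
        log_probs: list of log pi(a_t | s_t) tensors collected during the episode.
        rewards:   list of scalar rewards for the same episode."""
        # Discounted returns, accumulated backwards from the end of the episode.
        returns, g = [], 0.0
        for r in reversed(rewards):
            g = r + gamma * g
            returns.insert(0, g)
        returns = torch.tensor(returns)
        # Normalizing returns keeps gradient magnitudes comparable across environments.
        returns = (returns - returns.mean()) / (returns.std() + 1e-8)

        loss = -(torch.stack(log_probs) * returns).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()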

OpenAI classic control environments

Continue reading

Machine Learning at Fetchr

Posted on Tue 29 October 2019 in Machine Learning • Tagged with machine, learning, fetchr

Opportunities for automating, optimizing and enabling processes with ML at a delivery company such as Fetchr are plentiful. We put three families of ML models into production, covering three areas: Scheduling, Notifications and Operational choice.

Operational choice

Continue reading

Solving the CartPole Reinforcement Learning problem with Pytorch

Posted on Tue 22 October 2019 in Machine Learning • Tagged with python, pytorch, reinforcement, learning, openai, gym, cartpole

The CartPole problem is the Hello World of Reinforcement Learning, originally described in 1983 by Barto, Sutton and Anderson. The environment is a pole balanced on a cart. CartPole is one of the environments in OpenAI Gym, so we don't have to code up the physics. Here I walk through a simple solution using Pytorch.
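A minimal sketch of interacting with the CartPole environment with a tiny PyTorch policy network; the network size is an arbitrary illustration, and the loop uses the Gym API as it was at the time of writing (reset returns the state, step returns four values):

    import gym
    import torch
    import torch.nn as nn

    env = gym.make("CartPole-v1")
    # Illustrative two-layer policy mapping the 4-dim state to 2 action logits.
    policy = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

    state = env.reset()
    done = False
    while not done:
        logits = policy(torch.as_tensor(state, dtype=torch.float32))
        # Sample an action from the softmax over the logits.
        action = torch.distributions.Categorical(logits=logits).sample().item()
        state, reward, done, info = env.step(action)
    env.close()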

Cartpole animation

Continue reading