
Beating Atari Pong on a Raspberry Pi Without Backpropagation

Source: https://ogma.ai/2020/03/beating-atari-pong-on-a-raspberry-pi-without-backpropagation/

Hello,

In our previous post, we showed that we can now play Atari games from pixels on low-power hardware such as the Raspberry Pi, in an online, continually learning fashion.

However, the version of OgmaNeo2 used in that post still relied on backpropagation for one part of the algorithm: reinforcement learning. It used a “routing” method to perform backpropagation despite the heavy sparsity, in order to approximate a value function. This works reasonably well, but has some drawbacks:

  • Sacrifices biological plausibility
  • Can have exploding/vanishing gradients
  • Runs slower (the backward pass is slow)
  • Limits the hierarchy to reinforcement learning only (inelegant integration with time series prediction/world model building)

We have now completely removed backpropagation from our algorithm, and the resulting algorithm performs better than before (and runs faster)!

The new algorithm relies entirely on the bidirectional temporal nature of the hierarchy to perform credit assignment. The reinforcement learning occurs only at the “bottom” (input/output) layer of the hierarchy. All layers above learn to predict the representation of the layer directly below, one timestep ahead of time. The reinforcement learning layer simply selects actions based on the state of the first layer and the feedback from the layers above. For more information on our technology, see our whitepaper (draft).
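To make that structure concrete, here is a toy sketch in plain Python. This is not the OgmaNeo2 implementation or its API: the discrete states, the count-table predictor, and the tabular Q-learning readout are illustrative stand-ins. What it shows is the wiring described above: the upper layer learns a purely local one-step prediction of the layer below (no gradients cross layers), while reinforcement learning happens only at the bottom, conditioned on top-down feedback.

```python
# Toy sketch of backprop-free credit assignment in a two-layer predictive
# hierarchy. All names and sizes are illustrative, NOT OgmaNeo2 code.
import random
from collections import defaultdict

random.seed(0)

NUM_STATES = 16   # discrete stand-in for a lower-layer representation
NUM_ACTIONS = 3   # e.g. paddle up / stay / down
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

# Upper layer: a count table predicting the lower layer's NEXT state.
# Its learning rule is purely local -- no gradients flow between layers.
pred_counts = defaultdict(lambda: [0] * NUM_STATES)

def upper_predict(lower_state):
    counts = pred_counts[lower_state]
    return counts.index(max(counts))  # most likely next lower-layer state

def upper_learn(prev_lower_state, lower_state):
    pred_counts[prev_lower_state][lower_state] += 1  # local update

# Bottom layer: tabular Q-values over (own state, feedback from above).
# This is the ONLY place reinforcement learning happens.
q = defaultdict(lambda: [0.0] * NUM_ACTIONS)

def act(state, feedback):
    key = (state, feedback)
    if random.random() < EPS:
        return random.randrange(NUM_ACTIONS)  # epsilon-greedy exploration
    return q[key].index(max(q[key]))

def rl_learn(key, action, reward, next_key):
    target = reward + GAMMA * max(q[next_key])
    q[key][action] += ALPHA * (target - q[key][action])

# One training loop over a dummy environment (illustrative dynamics only).
state = 0
for t in range(1000):
    feedback = upper_predict(state)            # top-down prediction as feedback
    action = act(state, feedback)
    next_state = (state + action) % NUM_STATES  # dummy transition
    reward = 1.0 if next_state == NUM_STATES - 1 else 0.0
    next_feedback = upper_predict(next_state)
    rl_learn((state, feedback), action, reward, (next_state, next_feedback))
    upper_learn(state, next_state)             # upper layer's local target
    state = next_state
```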

Here we have a video of the agent playing Atari Pong on a Raspberry Pi 4. It found an exploitable position, although it sometimes misses at random and then has to play “normally” as well. Training is actually ongoing in this video, since training and inference run at about the same speed in OgmaNeo2. Not shown in the video: the agent has managed to achieve a perfect game several times.

Pong on a Pi

Our agent is composed of only 2 layers in our “exponential memory” structure, plus an additional third layer for the image encoder. Our CSDRs (columnar sparse distributed representations) are all of size 4x4x32 (width x height x column size), including the image encoder's. The rough architecture of the Pong agent is shown below.

[Figure: rough architecture of the Pong agent]
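As a rough illustration of those sizes, the sketch below encodes the described stack as plain Python data (the field and layer names are hypothetical; this is not an OgmaNeo2 configuration format) and checks the resulting sparsity: with one active cell per column, only 16 of each CSDR's 512 cells are active at a time.

```python
# Hypothetical description of the agent described above, for illustration.
from dataclasses import dataclass

@dataclass
class CSDRSize:
    width: int = 4        # columns across
    height: int = 4       # columns down
    column_size: int = 32 # cells per column; exactly 1 is active

agent_layers = [
    {"name": "image_encoder", "csdr": CSDRSize()},   # pixels -> CSDR
    {"name": "memory_layer_1", "csdr": CSDRSize()},  # exponential memory
    {"name": "memory_layer_2", "csdr": CSDRSize()},  # exponential memory
]

# Sparsity check: one active cell per column.
s = CSDRSize()
active = s.width * s.height           # 16 active cells
total = active * s.column_size        # 512 cells total
print(f"{active}/{total} cells active ({100 * active / total:.3f}% density)")
```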

We have released the version of OgmaNeo2 used in the video (master branch). As mentioned above, a handy feature of this newest, backprop-free version is that one can perform both time series prediction and reinforcement learning with the same hierarchy.
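The sketch below illustrates why a single hierarchy can serve both modes (hypothetical interface, not real OgmaNeo2 calls): the predictive layers always run, and the reinforcement-learning readout at the bottom is simply unused when no reward signal is supplied.

```python
# Toy illustration of one hierarchy serving two modes. NOT the OgmaNeo2 API.
class ToyHierarchy:
    def __init__(self):
        self.last_obs = None

    def step(self, observation, reward=None):
        prediction = self.last_obs   # stand-in one-step "world model"
        self.last_obs = observation
        if reward is None:
            return prediction, None  # pure time series prediction
        action = 0 if reward >= 0 else 1  # stand-in RL readout at the bottom
        return prediction, action

h = ToyHierarchy()
print(h.step([1, 2, 3]))             # prediction only, no action
print(h.step([2, 3, 4], reward=1.0)) # prediction plus an action
```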

Finally, here is a peek at what will hopefully become our next demo.

[Image: preview of our next demo]

Until next time!


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK