
How DeepMind’s UNREAL Agent Performed 9 Times Better Than Experts on Atari


Auxiliary Control Tasks

We can think of auxiliary tasks as “side quests.” Although they don’t directly help achieve the overall goal, they help the agent learn about environment dynamics and extract relevant information. In turn, that helps the agent learn how to achieve the desired overall end state. We can also view them as additional pseudo-reward functions for the agent to interact with.

Overall, the goal is to maximize the sum of two terms:

  1. The expected cumulative extrinsic reward
  2. The expected cumulative sum of auxiliary rewards
$$\max_{\theta}\;\; \mathbb{E}_{\pi}\!\left[R_{1:\infty}\right] \;+\; \lambda_{c} \sum_{c \in \mathcal{C}} \mathbb{E}_{\pi_{c}}\!\left[R^{(c)}_{1:\infty}\right]$$

Overall Maximization Goal

where the superscript c denotes an auxiliary control task reward, π_c is the policy for auxiliary task c, and λ_c weights the auxiliary terms. Here are the two control tasks used by UNREAL:

  • Pixel Changes (Pixel Control): The agent tries to maximize changes in pixel values since these changes often correspond to important events.
  • Network Features (Feature Control): The agent tries to maximize the activation of all units in a given layer. This can force the policy and value networks to extract more task-relevant, high-level information.

For more details on how these tasks are defined and learned, feel free to skim this paper [1]. For now, just know that the agent tries to find accurate Q value functions to best achieve these auxiliary tasks, using auxiliary rewards defined by the user.
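
To make the pixel-control pseudo-reward concrete, here is a minimal NumPy sketch. The 84x84 grayscale observations and the 4x4 cell size are hypothetical choices for this illustration, not necessarily the paper's exact setup: the pseudo-reward for each cell is simply the average absolute change in pixel intensity within that cell between consecutive frames.

```python
import numpy as np

def pixel_change_rewards(prev_frame, frame, cell=4):
    """Pixel-control pseudo-rewards: mean absolute intensity change in each
    cell of a coarse grid over the observation.

    prev_frame, frame: (H, W) grayscale arrays with values in [0, 1].
    cell: side length of each grid cell in pixels (an assumed choice).
    Returns an (H // cell, W // cell) array of per-cell pseudo-rewards.
    """
    diff = np.abs(frame.astype(np.float32) - prev_frame.astype(np.float32))
    h, w = diff.shape
    diff = diff[: h - h % cell, : w - w % cell]   # drop any ragged border
    # Average the per-pixel change inside each grid cell.
    grid = diff.reshape(h // cell, cell, w // cell, cell)
    return grid.mean(axis=(1, 3))

# Example with two random 84x84 "frames" (sizes are illustrative).
prev = np.random.rand(84, 84)
curr = np.random.rand(84, 84)
cell_rewards = pixel_change_rewards(prev, curr)   # shape (21, 21)
```

Each cell's value then serves as the reward signal that the corresponding auxiliary Q-function tries to maximize.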

Okay, perfect! Now we just add the extrinsic and auxiliary rewards, then run A3C using the sum as a newly defined reward. Right?

How UNREAL is Clever

In actuality, UNREAL does something different. Instead of training a single policy to optimize this combined reward, it trains a separate policy for each auxiliary task on top of the base A3C policy. While all auxiliary task policies share some network components with the base A3C agent, each also adds its own components that define a separate policy.

For example, the “Pixel Control” task has a deconvolutional network after the shared convolutional network and LSTM. The output defines the Q-values for the pixel control policy. (Skim [1] for details on the implementation)
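
To make the weight sharing concrete, here is a minimal PyTorch sketch of such an architecture. The module names, layer sizes, and output resolutions below are my own illustrative assumptions, not the exact UNREAL configuration: a shared convolutional encoder and LSTM feed both the base A3C policy/value heads and a deconvolutional pixel-control Q head.

```python
import torch
import torch.nn as nn

class UnrealStyleNet(nn.Module):
    """Shared conv encoder + LSTM, with a base A3C head and a pixel-control
    Q head hanging off the same trunk (sizes are illustrative)."""

    def __init__(self, num_actions, lstm_size=256):
        super().__init__()
        # Shared encoder over 84x84 grayscale frames (assumed input size).
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.flat_size = 32 * 9 * 9
        self.lstm = nn.LSTMCell(self.flat_size, lstm_size)

        # Base A3C heads: policy logits and state value.
        self.policy = nn.Linear(lstm_size, num_actions)
        self.value = nn.Linear(lstm_size, 1)

        # Pixel-control head: deconvolution producing one Q-map per action.
        self.pc_fc = nn.Linear(lstm_size, 32 * 7 * 7)
        self.pc_deconv = nn.ConvTranspose2d(32, num_actions, kernel_size=4, stride=2)

    def forward(self, obs, hidden):
        x = self.conv(obs).flatten(start_dim=1)
        h, c = self.lstm(x, hidden)
        logits, value = self.policy(h), self.value(h)
        # Q-values for the pixel-control policy, one spatial cell per grid cell.
        pc = torch.relu(self.pc_fc(h)).view(-1, 32, 7, 7)
        pc_q = self.pc_deconv(pc)        # (batch, num_actions, 16, 16)
        return logits, value, pc_q, (h, c)
```

Because the pixel-control head reads from the same trunk, a gradient step on its loss also updates `self.conv` and `self.lstm`, which is the shared-parameter effect discussed below.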

Each of the policies optimizes an n-step Q-learning loss:

$$\mathcal{L}^{(c)}_{Q} = \mathbb{E}\!\left[\left(R_{t:t+n} + \gamma^{n} \max_{a'} Q^{(c)}(s', a'; \theta^{-}) - Q^{(c)}(s, a; \theta)\right)^{2}\right]$$

Auxiliary Control Loss Using N-Step Q
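
As a rough sketch of what computing such a loss could look like in code (the variable names, shapes, and the simplification to a single scalar pseudo-reward per step are my assumptions, not taken from [1]):

```python
import torch

def n_step_q_loss(q_values, actions, aux_rewards, q_next_max, gamma=0.99):
    """n-step Q-learning loss for one auxiliary control task (simplified to
    a scalar pseudo-reward per step rather than per-cell rewards).

    q_values:    (T, num_actions) Q estimates along a rollout.
    actions:     (T,) integer actions taken by the base agent.
    aux_rewards: (T,) auxiliary pseudo-rewards (e.g. summed pixel changes).
    q_next_max:  bootstrap value max_a' Q(s_{T+1}, a'), already detached.
    """
    T = aux_rewards.shape[0]
    returns = torch.empty(T)
    bootstrap = q_next_max
    # Build the n-step returns by working backwards through the rollout.
    for t in reversed(range(T)):
        bootstrap = aux_rewards[t] + gamma * bootstrap
        returns[t] = bootstrap
    q_taken = q_values[torch.arange(T), actions]
    # Squared error between n-step returns and Q-values of the taken actions.
    return ((returns - q_taken) ** 2).mean()

# Example with random tensors (shapes are illustrative).
q = torch.randn(20, 6, requires_grad=True)
a = torch.randint(0, 6, (20,))
r = torch.rand(20)
loss = n_step_q_loss(q, a, r, q_next_max=torch.tensor(0.0))
```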

Even more amazingly, we never explicitly use these auxiliary control task policies. Although we learn which actions optimize each auxiliary task, only the base A3C agent's actions are ever taken in the environment. You may think, then, that all this auxiliary training was for nothing!

Not quite. The key is that the A3C agent and the auxiliary control tasks share parts of the architecture! As we optimize the auxiliary task policies, we are changing parameters that are also used by the base agent. This has what I like to call a "nudging effect."

Updating shared components not only helps learn auxiliary tasks but also better equips the agent to solve the overall problem by extracting relevant information from the environment.

In other words, we get more information from the environment than if we did not use auxiliary tasks.
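
Continuing the earlier architecture sketch, here is a hypothetical joint update that illustrates the point. The losses below are placeholders, and the λ weight is an assumed value rather than the paper's; what matters is the gradient flow, since both losses reach the shared convolutional encoder and LSTM.

```python
import torch

# Continuing the UnrealStyleNet sketch above: one joint update in which the
# gradients from both losses reach the shared conv encoder and LSTM.
net = UnrealStyleNet(num_actions=6)
optimizer = torch.optim.RMSprop(net.parameters(), lr=7e-4)

obs = torch.zeros(1, 1, 84, 84)
hidden = (torch.zeros(1, 256), torch.zeros(1, 256))
logits, value, pc_q, hidden = net(obs, hidden)

# Placeholder losses, purely to show the gradient flow; in practice these are
# the A3C actor-critic loss and the n-step Q loss on the pixel-control rewards.
a3c_loss = -value.mean()
pixel_control_loss = (pc_q ** 2).mean()
lambda_pc = 0.1   # assumed auxiliary-loss weight, not a value from the paper

optimizer.zero_grad()
(a3c_loss + lambda_pc * pixel_control_loss).backward()
optimizer.step()   # shared parameters move; only the base policy ever acts
```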


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK