Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask


At Uber, we apply neural networks to fundamentally improve how we understand the movement of people and things in cities. Among other use cases, we employ them to enable faster customer service response with natural language models and lower wait times via spatiotemporal prediction of demand across cities, and in the process have developed infrastructure to scale up training and support faster model development.

Though neural networks are powerful, widely used tools, many of their subtle properties are still poorly understood. As scientists around the world make strides towards illuminating fundamental network properties, much of our research at Uber AI aligns in this direction as well, including our work measuring intrinsic network complexity, finding more natural input spaces, and uncovering hidden flaws in popular models.

In our most recent paper aimed at demystifying neural networks, Deconstructing Lottery Tickets: Zeros, Signs, and the Supermask , we build upon the fascinating Lottery Ticket Hypothesis developed by Frankle and Carbin. Their work surprised many researchers by showing that a very simple algorithm—delete small weights and retrain—can find sparse trainable subnetworks, or “lottery tickets”, within larger networks that perform as well as the full network. Although they clearly demonstrated lottery tickets to be effective, their work (as often occurs with great research) raised as many questions as it answered, and many of the underlying mechanics were not yet well understood. Our paper proposes explanations behind these mechanisms, uncovers curious quirks of these subnetworks, introduces competitive variants of the lottery ticket algorithm, and derives a surprising by-product: the Supermask.

The Lottery Ticket Hypothesis

We begin by briefly summarizing Frankle and Carbin’s paper, The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks, which we abbreviate as “LT”. In this paper, the authors proposed a simple approach for producing sparse, performant networks: after training a network, set all weights smaller than some threshold to zero (prune them), rewind the rest of the weights to their initial configuration, and then retrain the network from this starting configuration, keeping the pruned weights frozen (not trained). Using this approach, they obtained two intriguing results.
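Before turning to those results, here is a minimal sketch of one prune/rewind round in PyTorch to make the procedure concrete. The helper names (lottery_ticket_round, train_fn) and the choice to prune each layer independently by weight magnitude are our own illustrative assumptions, not the authors’ released code.

```python
import copy
import torch

def lottery_ticket_round(model, train_fn, prune_fraction=0.2):
    """One train -> prune-smallest -> rewind cycle (illustrative sketch)."""
    # Snapshot the initial weights so we can rewind to them after training.
    init_state = copy.deepcopy(model.state_dict())

    train_fn(model)  # user-supplied training loop; trains to convergence

    masks = {}
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() < 2:                      # skip biases for simplicity
                continue
            k = max(1, int(prune_fraction * param.numel()))
            threshold = param.abs().flatten().kthvalue(k).values
            masks[name] = (param.abs() > threshold).float()  # 1 = keep, 0 = prune

        # Rewind kept weights to their initial values; pruned weights become zero.
        for name, param in model.named_parameters():
            if name in masks:
                param.copy_(init_state[name] * masks[name])
    return masks
```

During the subsequent retraining, the pruned weights must be held at zero, for example by multiplying each layer’s weights (or gradients) by its mask after every optimizer step.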

First, they showed that the pruned networks performed well. Aggressively pruned networks (with 95 percent to 99.5 percent of weights pruned) showed no drop in performance compared to the much larger, unpruned network. Moreover, networks only moderately pruned (with 50 percent to 90 percent of weights pruned) often outperformed their unpruned counterparts.

Second, as compelling as these results were, the characteristics of the remaining network structure and weights were just as interesting. Normally, if you take a trained network, re-initialize it with random weights, and then re-train it, its performance will be about the same as before. But with the skeletal Lottery Ticket (LT) networks, this property does not hold. The network trains well only if it is rewound to its initial state, including the specific initial weights that were used. Reinitializing it with new weights causes it to train poorly. As pointed out in Frankle and Carbin’s study, it would appear that the specific combination of pruning mask (a per-weight binary value indicating whether or not to delete the weight) and weights underlying the mask form a lucky sub-network found within the larger network, or, as named by the original study, a winning “Lottery Ticket.”

We found this demonstration intriguing because of all it left untold. What about LT networks causes them to show better performance? Why are the pruning mask and the initial set of weights so tightly coupled, such that re-initializing the network makes it less trainable? Why does simply selecting large weights constitute an effective criterion for choosing a mask? Would other criteria for creating a mask work, too?

Curiously effective masks

We start our investigation with the observation of a curious phenomenon that demands explanation. While training LT networks, we observed that many of the rewound, masked networks had accuracy significantly better than chance at initialization . That is, an untrained network with a particular mask applied to it results in a partially working network.

This might come as a surprise, because if you use a randomly initialized and untrained network to, say, classify images of handwritten digits from the MNIST dataset, you would expect accuracy to be no better than chance (about 10 percent). But now imagine you multiply the network weights by a mask containing only zeros and ones. In this instance, weights are either unchanged or deleted entirely, but the resulting network now achieves nearly 40 percent accuracy at the task! This is strange, but it is exactly what happens when applying masks created using the procedure in the LT paper that selects weights with large final values (which we will call the “large final” mask criterion):
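This check is easy to reproduce in a few lines. The sketch below (hypothetical names; the masks would come from a separate LT pruning run) zeroes out the masked weights of a freshly initialized network and measures its test accuracy with no training at all:

```python
import torch

@torch.no_grad()
def masked_accuracy(model, masks, test_loader, device="cpu"):
    """Accuracy of an untrained network after zeroing out masked weights."""
    model.to(device).eval()
    for name, param in model.named_parameters():
        if name in masks:
            param.mul_(masks[name].to(device))  # keep (x1) or delete (x0) each weight

    correct = total = 0
    for images, labels in test_loader:
        preds = model(images.to(device)).argmax(dim=1)
        correct += (preds == labels.to(device)).sum().item()
        total += labels.size(0)
    return correct / total  # ~0.1 for random masks on MNIST; far higher for LT masks
```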

Figure 1. Untrained networks perform at chance (for example, 10 percent accuracy on the MNIST dataset, as depicted) if they are randomly initialized, or randomly initialized and randomly masked. However, applying the Lottery Ticket mask improves the network accuracy beyond the chance level.

We call masks that immediately produce partially working networks, without any training of the underlying weights, Supermasks.

As depicted in Figure 1, in randomly-initialized networks and randomly-initialized networks with random masks, neither weights nor the mask contain any information about the labels, so accuracy cannot reliably be better than chance. In randomly-initialized networks with LT “large final” masks, it is not entirely implausible to have better-than-chance performance since the masks are indeed derived from the training process. But it was unexpected since the only transmission of information from the training back to the initial network is via a zero-one mask, and the criterion for masking simply selects weights with large final magnitudes.

Masking is training, or why zeros matter

So why do we see a large improvement in test accuracy from simply applying an LT mask?

The masking procedure as implemented in the LT paper performs two actions: it sets weights to zero, and it freezes them. By figuring out which of these two components leads to increased performance in trained networks, we can also uncover the principles underlying the peculiar performance of the untrained networks.

To separate these two factors, we run a simple experiment: we reproduce the LT iterative pruning experiments, in which network weights are masked out in alternating train/mask/rewind cycles, but also try an additional treatment in which zero-masked weights are frozen at their initial values instead of at zero. If zero isn’t special, both treatments should perform similarly. We follow Frankle and Carbin (2019) and train three convolutional neural networks (CNNs), Conv2, Conv4, and Conv6 (small CNNs with 2/4/6 convolutional layers, same as used in the LT paper), on CIFAR-10.
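Here is a minimal sketch of how the two treatments differ inside the training loop, assuming the masks and saved initial state from the pruning run (all names are illustrative): the pruned entries are simply overwritten after every optimizer step.

```python
import torch

def masked_training_step(model, masks, init_state, batch, loss_fn, optimizer,
                         freeze_at="zero"):
    """One optimizer step with pruned weights re-frozen afterwards."""
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

    with torch.no_grad():
        for name, param in model.named_parameters():
            if name not in masks:
                continue
            pruned = masks[name] == 0
            if freeze_at == "zero":                       # LT treatment
                param[pruned] = 0.0
            else:                                         # freeze at initial values
                param[pruned] = init_state[name][pruned]
    return loss.item()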

Results are shown in Figure 2, below, with pruning (or more correctly, “freezing at some value”) progressing from unpruned on the left to very pruned networks on the right. The horizontal black lines represent the performance of the original, unpruned networks, averaged over five runs. The uncertainty bands here and in other figures represent minimum and maximum values over five runs. Solid blue lines represent networks trained using the LT algorithm, which sets pruned weights to zero and freezes them. Dotted blue lines represent networks trained using the LT algorithm except that pruned weights are frozen at their initial values:

Figure 2. When testing three CNNs on CIFAR-10, we find that the accuracy of networks with pruned weights frozen at their initial values degrades significantly more than those with pruned weights set to zero.

We see that networks perform better when weights are frozen specifically at zero rather than at random initial values. For these networks masked via the LT “large final” criterion, zero would seem to be a particularly good value to set weights to when they had small final values.

So why is zero an ideal value? One hypothesis is that the mask criterion we use tends to mask to zero those weights that were headed toward zero anyway . To test out this hypothesis, let’s consider a new approach to freezing. We run another experiment interpolated between the previous two: for any weight to be frozen, we freeze it to zero if it moved toward zero over the course of training, and we freeze it at its random initial value if it moved away from zero. Results are shown in Figure 3, below:
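In code, the interpolated treatment amounts to choosing, for each pruned weight, between zero and its initial value based on whether training moved it toward zero. A sketch under assumed tensor names:

```python
import torch

def selective_freeze_values(init_w, final_w, mask):
    """Values at which to freeze each weight for the interpolated treatment."""
    moved_toward_zero = final_w.abs() < init_w.abs()
    frozen = torch.where(moved_toward_zero,
                         torch.zeros_like(init_w),  # headed toward zero: freeze at 0
                         init_w)                    # headed away from zero: freeze at init
    # Kept weights (mask == 1) are rewound to their initial values as usual.
    return torch.where(mask.bool(), init_w, frozen)
```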

Figure 3: Selectively freezing weights to their initial value or zero depending on the direction they move during training produces better performance than freezing all weights at zero or init.

We see that performance is better for this treatment than when freezing all to zero or all to init! This supports our hypothesis that the benefit derived from freezing values to zero comes from the fact that those values were moving toward zero anyway. For a deeper discussion of why the “large final” mask criterion biases toward selecting those weights heading toward zero, see our paper.

Thus we find for certain mask criteria, like “large final”, that masking is training : the masking operation tends to move weights in the direction they would have moved during training.

This simultaneously explains why Supermasks exist and hints that other mask criteria may produce better Supermasks if they preferentially mask to zero weights that training drives toward zero.

Alternate mask criteria

Now that we’ve explored why the original LT mask criterion, “large final,” works as well as it does, we can ask what other masking criteria would also perform well. The “large final” criterion keeps weights with large final magnitudes and sets the rest to zero. We can think of this pruning criterion and many others as a division of the 2D (wi = initial weight, wf = final weight) space into regions corresponding to weights that should be kept (mask-1) vs. pruned (mask-0), as shown in Figure 5, below:

Figure 5. Different mask criteria can be thought of as segmenting the (wi, wf) space into regions corresponding to mask values of one vs. zero. The ellipse represents in cartoon form the area occupied by the positively correlated initial and final weights from a given layer. The mask shown corresponds to a “large final” criterion, which was used in the LT paper: weights with large final magnitude are kept, and weights with final values near zero are pruned. Note that this criterion ignores the initial magnitude of the weight.

In the previous section, we showed some supporting evidence for the hypothesis that networks work well when those weights already moving toward zero are set to zero. This hypothesis suggests that other criteria may also work if they respect this basic rule. One such mask criterion is to preferentially keep those weights that move most away from zero, which we can write as the scoring function |wf| − |wi|. We call this criterion “magnitude increase” and depict it along with other criteria run as control cases in Figure 6, below:
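A few of these criteria written as scoring functions over (wi, wf), with the top-scoring fraction of weights kept; treat the set below as an illustrative subset (the full list and exact formulas are in Figure 6 and the paper):

```python
import numpy as np

# Each criterion maps a (w_i, w_f) pair to a score; the top-scoring weights are kept.
CRITERIA = {
    "large_final":        lambda wi, wf: np.abs(wf),                 # LT paper's criterion
    "magnitude_increase": lambda wi, wf: np.abs(wf) - np.abs(wi),
    "random":             lambda wi, wf: np.random.rand(*wi.shape),  # control baseline
}

def make_mask(wi, wf, criterion, keep_fraction):
    scores = CRITERIA[criterion](wi, wf)
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    return (scores >= threshold).astype(np.float32)  # 1 = keep, 0 = prune
```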

Figure 6. The eight mask criteria considered in this study are shown, starting with the “large final” criterion that starred in the LT paper. Names we use to refer to the various methods are given along with the formula that projects each (wi, wf) pair to a score. Weights with the largest scores (colored regions) are kept, and weights with the smallest scores (gray regions) are pruned.

This “magnitude increase” criterion turns out to work just as well as the “large final” criterion, and in some cases significantly better. Results for all criteria are shown in Figure 7, below, for the fully connected (FC) and Conv4 networks; see our paper for performance results on other networks. As a baseline, we also show results for a random pruning criterion that simply chooses a random mask with the desired pruning percentage. Note that the first six of the eight criteria form three opposing pairs; in each case, we see that when one member of the pair performs better than the random baseline, the opposing member performs worse than it.

Figure 7. Measurements of the accuracy vs. pruning percentage for two networks, FC on MNIST (left) and Conv4 on CIFAR-10 (right), show that multiple mask criteria—large final, magnitude increase, and two others—reliably outperform the black random pruning baseline. In the Conv4 network, the performance boost of “magnitude increase” is larger than that of other mask criteria; asterisks mark where the difference between “large final” and “magnitude increase” is statistically significant at the p=0.05 level.

In general, we observe that those methods that bias towards keeping weights with large final magnitude are able to uncover performant subnetworks.

Show me a sign

We have explored various ways of choosing which weights to prune and what values to set pruned weights to. We will now consider what values to set kept weights to. In particular, we want to explore an interesting observation in Frankle and Carbin (2019), which showed that the pruned, skeletal LT networks train well when rewound to their original initialization, but degrade in performance when randomly reinitialized.

Why does reinitialization cause LT networks to train poorly? Which components of the initialization are important?

To find out, we evaluate a number of reinitialization variants (sketched in code after the list below):

  • “Reinit” experiments: reinitialize kept weights by sampling from the original initialization distribution
  • “Reshuffle” experiments: reinitialize kept weights by reshuffling their initial values within each layer, thereby respecting that layer’s original distribution of remaining weights
  • “Constant” experiments: reinitialize by setting each kept weight to a positive or negative constant, with the constant set to the standard deviation of each layer’s original initialization
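Here is a hedged sketch of the three variants applied to a single layer’s kept weights (the tensor names and the zero-mean normal initialization are assumptions for illustration):

```python
import torch

def reinit_kept_weights(init_w, mask, variant, init_std):
    """Reassign values to the kept (mask == 1) weights of one layer."""
    kept = mask.bool()
    n = int(kept.sum())
    new_w = torch.zeros_like(init_w)
    if variant == "reinit":
        # Fresh draw from the original init distribution (zero-mean normal assumed).
        new_w[kept] = torch.randn(n) * init_std
    elif variant == "reshuffle":
        # Permute the kept weights' own initial values within the layer.
        vals = init_w[kept]
        new_w[kept] = vals[torch.randperm(n)]
    elif variant == "constant":
        # +/- the layer's original init standard deviation, random sign here.
        new_w[kept] = torch.sign(torch.randn(n)) * init_std
    return new_w
```

The sign-consistent versions discussed below additionally copy the sign of each weight’s original initial value onto its reassigned value.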

All of the reinitialization experiments are based on the same original networks and use the “large final” mask criterion with iterative pruning. We include the original LT network (rewind, large final) and the randomly pruned network (random) as baselines for comparison.

We find that none of these three variants alone is able to train as well as the original LT network, as shown by the dashed lines in Figure 8, below:

Figure 8: We show test accuracy vs. pruning percentage for two networks, FC (left) and Conv4 (right), using different reinitialization methods. The clear performance gap between methods that respect the consistency of signs and those that do not suggests that the specific initial values of kept weights do not matter as much as their signs.

However, all three variants work better when we control for the consistency of sign by ensuring that the reassigned values of the kept weights have the same sign as their original initial values. These are shown as solid color lines in Figure 8. Clearly, the common factor in all the variants that perform better than chance, including the original “rewind,” is the sign. This suggests that reinitialization is not the deal breaker as long as you keep the sign. In fact, as long as we respect the original sign, even a scheme as simple as setting all kept weights to a constant value consistently performs well!
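In code, the sign-consistent “constant” variant is a one-liner; init_std stands for the layer’s original initialization standard deviation, and the names are again illustrative:

```python
import torch

def signed_constant(init_w, mask, init_std):
    # Every kept weight becomes +init_std or -init_std, matching its original sign.
    return mask * torch.sign(init_w) * init_std
```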

Better Supermasks

At the beginning of the article we introduced the idea of Supermasks, which are binary masks that when applied to a randomly initialized network, produce better-than-chance accuracy without additional training. We now turn our attention to finding methods that would produce the best Supermasks.

We can evaluate the same pruning methods and pruning percentages seen in Figure 7 for their potential as Supermasks. We can also consider additional mask criteria optimized for generating Supermasks. Based on the insight about the importance of the initial sign of LT weights and the idea of having weights close to their final values, we introduce a new mask criterion that selects for weights with large final magnitudes that also maintained the same sign at the end of training. This method is referred to as “large final, same sign”, and we depict it in Figure 9, below. We also add “large final, diff sign” as a control case, which looks for weights that changed sign at the end of training.
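One way to express this criterion as a score (our illustrative formulation, not necessarily the paper’s exact one) is wf · sign(wi), which is large and positive only when the final magnitude is large and the sign is unchanged:

```python
import numpy as np

def large_final_same_sign_mask(wi, wf, keep_fraction):
    # Large positive score only if |wf| is big and sign(wf) == sign(wi).
    scores = wf * np.sign(wi)
    threshold = np.quantile(scores, 1.0 - keep_fraction)
    return (scores >= threshold).astype(np.float32)
```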

Figure 9. The “large final, same sign” mask criterion produces the highest performing Supermasks in this study. In contrast to the “large final” mask in Figure 5, note this criterion masks out the quadrants where the sign of wi and wf differ.

By using the simple mask criterion of “large final, same sign,” we can create networks that obtain a remarkable 80 percent test accuracy on MNIST and 24 percent on CIFAR-10 without training. Another curious observation is that if we apply the mask to a signed constant (as described in the previous section) rather than the actual initial weights, we can produce even higher test accuracy of up to 86 percent on MNIST and 41 percent on CIFAR-10.

Figure 10: We evaluate accuracy at initialization (with no training) of a single FC network on MNIST subject to the application of various masks. The x-axis depicts the percent of weights remaining in the network; all other weights are set to zero. The “large final, same sign” mask creates the highest performing Supermask by a wide margin. Note that aside from the five independent runs performed to generate uncertainty bands for this plot, every data point on the plot is the same underlying network, just with different masks applied.

We find it fascinating that these Supermasks exist and can be found via such simple criteria. Besides being a scientific curiosity, they could have implications for transfer learning and meta-learning: networks that approximately solve, say, any permutation of MNIST’s input pixels and output classes are all in there, just under different masks. They also present us with a method for network compression, since we only need to save a binary mask and a single random seed to reconstruct the full weights of the network.
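As a sketch of that compression idea (illustrative names; the paper does not prescribe this exact interface), a layer’s dense weights can be regenerated from a stored seed and then multiplied by the stored binary mask:

```python
import torch

def reconstruct_layer(seed, shape, mask, init_std):
    gen = torch.Generator().manual_seed(seed)
    dense = torch.randn(shape, generator=gen) * init_std  # recreate the random init
    return dense * mask                                   # apply the stored Supermask
```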

If you’re curious how far we can push the performance of these Supermasks, check out our paper where we try training for them directly.

If working with neural networks interests you, consider applying for a machine learning role at Uber.

The authors would like to acknowledge Jonathan Frankle, Joel Lehman, and Sam Greydanus for combinations of helpful discussion and comments on early drafts of this work.

