
PyTorch-LBFGS: A PyTorch Implementation of L-BFGS

Authors: Hao-Jun Michael Shi (Northwestern University) and Dheevatsa Mudigere (Facebook)

What is it?

PyTorch-LBFGS is a modular implementation of L-BFGS, a popular quasi-Newton method, for PyTorch. It is compatible with many recent algorithmic advancements for improving and stabilizing stochastic quasi-Newton methods, and it addresses many of the deficiencies of the existing PyTorch L-BFGS implementation. It is designed to provide maximal flexibility to researchers and practitioners in the design and implementation of stochastic quasi-Newton methods for training neural networks.

Main Features

  1. Compatible with multi-batch and full-overlap L-BFGS
  2. Line searches including (stochastic) Armijo backtracking line search (with or without cubic interpolation) and weak Wolfe line search for automatic steplength (or learning rate) selection
  3. Powell damping with more sophisticated curvature pair rejection or damping criterion for constructing the quasi-Newton matrix

Getting Started

To use the L-BFGS optimizer module, simply add /functions/LBFGS.py to your current path and use

from LBFGS import LBFGS

to import the LBFGS optimizer.

Alternatively, you can add LBFGS.py to torch.optim in your local PyTorch installation. To do this, simply copy LBFGS.py to /path/to/site-packages/torch/optim, then modify /path/to/site-packages/torch/optim/__init__.py to include the lines from .LBFGS import LBFGS and del LBFGS. After restarting your Python kernel, you will be able to use PyTorch-LBFGS's LBFGS optimizer like any other optimizer in PyTorch.

To see how full-batch, full-overlap, or multi-batch L-BFGS may be easily implemented with a fixed steplength, Armijo backtracking line search, or Wolfe line search, please see the example codes provided in the /examples/ folder.

Understanding the Main Features

We give a brief overview of (L-)BFGS and each of the main features of the optimization algorithm.

0. Quasi-Newton Methods

Quasi-Newton methods build an approximation to the Hessian $H_k$ to apply a Newton-like algorithm $x_{k+1} = x_k - \alpha_k H_k \nabla F(x_k)$. To do this, they solve for a matrix that satisfies the secant condition $H_k (x_k - x_{k-1}) = \nabla F(x_k) - \nabla F(x_{k-1})$. L-BFGS is one particular optimization algorithm in the family of quasi-Newton methods that approximates the BFGS algorithm using limited memory. Whereas BFGS requires storing a dense matrix, L-BFGS only requires storing 5-20 vectors to approximate the matrix implicitly and constructs the matrix-vector product on-the-fly via a two-loop recursion.

In the deterministic or full-batch setting, L-BFGS constructs an approximation to the Hessian by collecting curvature pairs $(s_k, y_k)$ defined by differences in consecutive gradients and iterates, i.e. $s_k = x_k - x_{k-1}$ and $y_k = \nabla F(x_k) - \nabla F(x_{k-1})$.
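For readers who want to see the mechanics, the following is a minimal, self-contained sketch of the standard two-loop recursion (Nocedal and Wright, 2006) that forms $H_k v$ from the stored curvature pairs. It is illustrative only and is not the module's own two_loop_recursion; the Barzilai-Borwein-type initial scaling shown is consistent with the scaling mentioned under "To Do" below, but the module's exact initialization may differ.

```python
import torch

def two_loop_recursion_sketch(v, s_list, y_list):
    """Illustrative L-BFGS two-loop recursion: returns an approximation to H_k v.

    s_list, y_list hold the stored curvature pairs s_i = x_i - x_{i-1} and
    y_i = grad F(x_i) - grad F(x_{i-1}) as 1-D tensors, oldest first.
    Assumes at least one stored pair.
    """
    q = v.clone()
    alphas, rhos = [], []
    # First loop: walk the history from the newest pair to the oldest.
    for s, y in zip(reversed(s_list), reversed(y_list)):
        rho = 1.0 / torch.dot(y, s)
        alpha = rho * torch.dot(s, q)
        q = q - alpha * y
        rhos.append(rho)
        alphas.append(alpha)
    # Initial matrix H_k^0 = gamma_k * I (Barzilai-Borwein-type scaling).
    gamma = torch.dot(s_list[-1], y_list[-1]) / torch.dot(y_list[-1], y_list[-1])
    r = gamma * q
    # Second loop: walk the history from the oldest pair back to the newest.
    for (s, y), alpha, rho in zip(zip(s_list, y_list), reversed(alphas), reversed(rhos)):
        beta = rho * torch.dot(y, r)
        r = r + (alpha - beta) * s
    return r  # approximately H_k v
```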

Note that other popular optimization methods for deep learning, such as Adam, construct diagonal scalings, whereas L-BFGS constructs a positive definite matrix for scaling the (stochastic) gradient direction.

There are three components to using this algorithm:

  1. two_loop_recursion(vec): Applies the L-BFGS two-loop recursion to construct the vector $H_k v$.
  2. step(p_k, g_Ok, g_Sk=None, options={}): Takes a step $x_{k+1} = x_k + \alpha_k p_k$ and stores $g_{O_k} = \nabla F_{O_k}(x_k)$ for constructing the next curvature pair. In addition, $g_{S_k} = \nabla F_{S_k}(x_k)$ may be provided to store $B_k s_k$ for Powell damping or the curvature pair rejection criterion. (If it is not specified, then $g_{S_k} = g_{O_k}$.) The options dict passes necessary parameters or callable functions to the line search.
  3. curvature_update(flat_grad, eps=0.2, damping=True): Updates the L-BFGS matrix by computing the curvature pair using flat_grad and the stored $g_{O_k}$, then checks the Powell damping criterion to possibly reject or modify the curvature pair.
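Put together, a single full-batch iteration built from these three calls might look like the following sketch, modeled loosely on the scripts in /examples/. The option keys 'closure' and 'current_loss', the helper _gather_flat_grad, and the toy model are assumptions for illustration; consult LBFGS.py and the example scripts for the exact interface and return values.

```python
import torch
from LBFGS import LBFGS   # /functions/LBFGS.py on the path, as described above

# Toy full-batch regression problem.
X, y = torch.randn(64, 10), torch.randn(64, 1)
model = torch.nn.Linear(10, 1)
loss_fn = torch.nn.MSELoss()

# lr and line_search are explained in the sections below.
optimizer = LBFGS(model.parameters(), lr=1.0, line_search='Wolfe')

def closure():
    optimizer.zero_grad()
    return loss_fn(model(X), y)

loss = closure()
loss.backward()
grad = optimizer._gather_flat_grad()        # flattened gradient g_k (assumed helper name)

p = optimizer.two_loop_recursion(-grad)     # 1. search direction p_k = -H_k g_k
# 2. line search and parameter update; option keys are assumptions, see /examples/
optimizer.step(p, grad, options={'closure': closure, 'current_loss': loss})

loss = closure()                            # re-evaluate at the new iterate x_{k+1}
loss.backward()
# 3. form (s_k, y_k) and apply the rejection / Powell damping check
optimizer.curvature_update(optimizer._gather_flat_grad(), eps=0.2, damping=True)
```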

Using quasi-Newton methods in the noisy regime requires more work. We will describe below some of the key features of our implementation that will help stabilize L-BFGS when used in conjunction with stochastic gradients.

1. Stable Quasi-Newton Updating

The key to applying quasi-Newton updating in the noisy setting is to require consistency in the gradient difference $y_k$ in order to prevent differencing noise.

We provide examples of two approaches for doing this:

  1. Full-Overlap: This approach evaluates the gradient on the same sample at both the current and next iterate, hence introducing the additional cost of a forward and backward pass over the sample at each iteration, depending on the line search that is used. In particular, given a sample $S_{k-1}$, we obtain $y_k$ by computing $y_k = \nabla F_{S_{k-1}}(x_k) - \nabla F_{S_{k-1}}(x_{k-1})$.

[Figure: full-overlap sampling]

  2. Multi-Batch: This approach uses the difference between the gradients over the overlap between two consecutive samples $O_k = S_k \cap S_{k-1}$, hence requiring no additional cost for curvature pair updating, but it incurs sampling bias. This approach is also generally more tedious to code, although it is more efficient. In particular, given two consecutive samples $S_k$ and $S_{k-1}$, we obtain $y_k$ by computing $y_k = \nabla F_{O_k}(x_k) - \nabla F_{O_k}(x_{k-1})$. In multi_batch_lbfgs_example.py, the variable g_Ok denotes $\nabla F_{O_k}(x_k)$ and the variable g_Ok_prev represents $\nabla F_{O_k}(x_{k+1})$.

[Figure: multi-batch sampling]

The code is designed to allow for both of these approaches by delegating control of the samples and the gradients passed to the optimizer to the user. Whereas the existing PyTorch L-BFGS module runs L-BFGS on a fixed sample (possibly the full batch) for a set number of iterations or until convergence, this implementation permits sampling a new mini-batch stochastic gradient at each iteration and is hence amenable to stochastic quasi-Newton methods; it also follows the design of other PyTorch optimizers, where one step is equivalent to a single iteration of the algorithm.
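As a concrete illustration of the full-overlap pattern under this design, the hypothetical loop below draws a fresh mini-batch at each iteration, uses it for the step, and then re-evaluates the gradient on the same batch at the new iterate to form the curvature pair. It continues with the model, loss_fn, and optimizer from the earlier sketch, plus a standard DataLoader named loader; the option keys and _gather_flat_grad remain assumed names.

```python
# Full-overlap stochastic L-BFGS sketch: each outer iteration draws a new
# mini-batch, and the curvature pair differences gradients of F over the same
# mini-batch at two consecutive iterates.
for X_batch, y_batch in loader:                     # a torch.utils.data.DataLoader
    def closure():
        optimizer.zero_grad()
        return loss_fn(model(X_batch), y_batch)

    loss = closure()
    loss.backward()
    g_Sk = optimizer._gather_flat_grad()            # grad over the sample at x_k (assumed helper)

    p = optimizer.two_loop_recursion(-g_Sk)
    optimizer.step(p, g_Sk, options={'closure': closure, 'current_loss': loss})

    loss = closure()                                # same sample, new iterate x_{k+1}
    loss.backward()
    optimizer.curvature_update(optimizer._gather_flat_grad())
```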

2. Line Searches

Deterministic quasi-Newton methods, particularly BFGS and L-BFGS, have traditionally been coupled with line searches that automatically determine a good steplength (or learning rate) and exploit these well-constructed search directions. Although these line searches have been crucial to the success of quasi-Newton algorithms in deterministic nonlinear optimization, the power of line searches in machine learning has generally been overlooked due to concerns regarding computational cost. To overcome these issues, stochastic or probabilistic line searches have been developed to determine steplengths in the noisy setting.

We provide three basic (stochastic) line searches that may be used in conjunction with L-BFGS in the step function:

  1. (Stochastic) Armijo Backtracking Line Search: Ensures that the Armijo or sufficient decrease condition is satisfied on the function evaluated by the closure() function by backtracking, multiplying the trial steplength by a constant factor less than 1 until the condition holds.
  2. (Stochastic) Armijo Backtracking Line Search with Cubic Interpolation: Similar to the basic backtracking line search, but uses quadratic or cubic interpolation to determine the next trial steplength. This is based on Mark Schmidt's minFunc MATLAB code.
  3. (Stochastic) Weak Wolfe Line Search: Based on Michael Overton's weak Wolfe line search implementation in MATLAB, ensures that both the sufficient decrease condition and curvature condition are satisfied on the function evaluated by the closure() function by performing a bisection search.

Note: For quasi-Newton algorithms, the weak Wolfe line search, although immensely simple, gives similar performance to the strong Wolfe line search, a more complex line search algorithm that utilizes a bracketing and zoom phase, for smooth, nonlinear optimization. In the nonsmooth setting, the weak Wolfe line search (not the strong Wolfe line search) is essential for quasi-Newton algorithms. For these reasons, we only implement a weak Wolfe line search here.

One may also use a constant steplength provided by the user, as in the original PyTorch implementation. See https://en.wikipedia.org/wiki/Wolfe_conditions for more detail on the sufficient decrease and curvature conditions.
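For reference, with search direction $p_k$, trial steplength $\alpha$, and constants $0 < c_1 < c_2 < 1$, the sufficient decrease (Armijo) condition is $F(x_k + \alpha p_k) \le F(x_k) + c_1 \alpha \nabla F(x_k)^T p_k$, and the curvature condition enforced by the weak Wolfe line search is $\nabla F(x_k + \alpha p_k)^T p_k \ge c_2 \nabla F(x_k)^T p_k$.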

To use these, when defining the optimizer, the user can specify the line search by setting line_search to Armijo, Wolfe, or None. The user must then define the options (typically a closure for reevaluating the model and loss) passed to the step function to perform the line search. The lr parameter defines the initial steplength in the line search algorithm.
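As an illustration of these settings (with the caveat that the exact option keys, and the spelling of the "no line search" value, are defined in LBFGS.py and demonstrated in /examples/):

```python
# Armijo backtracking from an initial steplength lr=1.0; the closure is used to
# re-evaluate the loss at each trial point. Option keys here are assumed names.
optimizer = LBFGS(model.parameters(), lr=1.0, line_search='Armijo')
optimizer.step(p, grad, options={'closure': closure, 'current_loss': loss})

# Fixed steplength (no line search): lr is applied directly as the steplength.
optimizer = LBFGS(model.parameters(), lr=0.1, line_search='None')
optimizer.step(p, grad)
```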

3. Curvature Pair Rejection Criterion and Powell Damping

Another key component of L-BFGS is determining when the history used in constructing the L-BFGS matrix should be updated. In particular, one needs to ensure that the matrix remains positive definite. Existing implementations of L-BFGS have generally checked if $y^T s > \epsilon$ or $y^T s > \epsilon \|s\|_2^2$, rejecting the curvature pair if the condition is not satisfied. However, both of these approaches suffer from a lack of scale-invariance of the objective function and reject curvature pairs when the algorithm is converging close to the solution.

Rather than doing this, we propose using the Powell damping condition described in Nocedal and Wright (2006) as the rejection criterion, which ensures that $y_k^T s_k > \epsilon s_k^T B_k s_k$. Alternatively, one can modify the definition of $y_k$ to ensure that the condition explicitly holds by applying Powell damping to the gradient difference. This has been found to be useful for the stochastic nonconvex setting.
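To make the damping step concrete, here is a small standalone sketch of the standard Powell damping modification from Nocedal and Wright (2006); it is illustrative and is not the module's internal routine. Bs stands for the product $B_k s_k$.

```python
import torch

def powell_damping_sketch(s, y, Bs, eps=0.2):
    """Illustrative Powell damping (after Nocedal and Wright, 2006).

    Given s_k, y_k and Bs = B_k s_k, returns a (possibly damped) y_k that
    satisfies y_k^T s_k >= eps * s_k^T B_k s_k, so the updated quasi-Newton
    matrix stays positive definite.
    """
    sBs = torch.dot(s, Bs)
    sy = torch.dot(s, y)
    if sy >= eps * sBs:
        return y                              # curvature is good enough; keep y_k
    theta = (1.0 - eps) * sBs / (sBs - sy)    # interpolation weight in (0, 1)
    return theta * y + (1.0 - theta) * Bs     # damped gradient difference
```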

One can perform curvature pair rejection by setting damping=False or apply Powell damping by simply setting damping=True in the step function. Powell damping is not applied by default.

Which variant of stochastic L-BFGS should I use?

By default, the algorithm uses a (stochastic) Wolfe line search without Powell damping. We recommend implementing this in conjunction with the full-overlap approach with a sufficiently large batch size (say, 2048, 4096, or 8192) as this is easiest to implement and leads to the most stable performance. If one uses an Armijo backtracking line search or fixed steplength, we suggest incorporating Powell damping to prevent skipping curvature updates. Since stochastic quasi-Newton methods are still an active research area, this is by no means the final algorithm. We encourage users to try other variants of stochastic L-BFGS to see what works well.

To Do

In maintaining this module, we are working to add the following features:

  • Additional initializations of the L-BFGS matrix aside from the Barzilai-Borwein scaling.
  • Wrappers for specific optimizers developed in various papers.
  • Using Hessian-vector products for computing curvature pairs.
  • More sophisticated stochastic line searches.

Acknowledgements

Thanks to Raghu Bollapragada, Jorge Nocedal, and Yuchen Xie for feedback on the details of this implementation, and Kenjy Demeester, Jaroslav Fowkes, and Dominique Orban for help on installing CUTEst and its Python interface for testing the implementation.

References

  1. Berahas, Albert S., Jorge Nocedal, and Martin Takác. "A Multi-Batch L-BFGS Method for Machine Learning." Advances in Neural Information Processing Systems. 2016.
  2. Bollapragada, Raghu, et al. "A Progressive Batching L-BFGS Method for Machine Learning." International Conference on Machine Learning. 2018.
  3. Lewis, Adrian S., and Michael L. Overton. "Nonsmooth Optimization via Quasi-Newton Methods." Mathematical Programming 141.1-2 (2013): 135-163.
  4. Nocedal, Jorge, and Stephen J. Wright. "Numerical Optimization." Springer New York, 2006.
  5. Schmidt, Mark. "minFunc: Unconstrained Differentiable Multivariate Optimization in Matlab." Software available at http://www.cs.ubc.ca/~schmidtm/Software/minFunc.html (2005).
  6. Schraudolph, Nicol N., Jin Yu, and Simon Günter. "A Stochastic Quasi-Newton Method for Online Convex Optimization." Artificial Intelligence and Statistics. 2007.
  7. Wang, Xiao, et al. "Stochastic Quasi-Newton Methods for Nonconvex Stochastic Optimization." SIAM Journal on Optimization 27.2 (2017): 927-956.

Questions or Suggestions?

Please use the Issues tab on the GitHub repository. Any suggestions on improving the modules are welcome!

