
FLUTE is Microsoft’s New Framework for Federated Learning

 3 years ago
source link: https://jrodthoughts.medium.com/flute-is-microsofts-new-framework-for-federated-learning-8120fb2570bd

The new framework enables large scale, offline simulations of federated learning scenarios.


Image Credit: Microsoft Research

I recently started an AI-focused educational newsletter that already has over 125,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers and concepts. Please give it a try by subscribing below:

Federated learning is one of the most popular architectures for enabling privacy in deep learning solutions. Despite its popularity, federated learning architectures remain very difficult to implement. From the computational cost of setting up test environments to the enforcement of privacy guarantees and the complexity of model updates, implementing federated learning solutions can be nothing short of a nightmare. Not surprisingly, there are not many federated learning libraries available in mainstream deep learning frameworks. To address some of these challenges, Microsoft Research recently open sourced Federated Learning Utilities and Tools for Experimentation (FLUTE), a framework for running large-scale federated learning simulations.

FLUTE was designed to enable researchers to perform rapid prototyping of offline simulations of federated learning scenarios without incurring high computational costs. The framework includes a core set of capabilities that enable that goal:

· Ability to simulate millions of endpoints, sampling tens of thousands of instances per round.

· Multi-GPU, multi-node orchestration.

· Rich model portfolio including popular CNNs, RNNs and transformer architectures.

· Extensible programming model.

· Native integration with Azure ML.

A typical FLUTE architecture consists of a number of nodes, which are physical or virtual machines that execute a number of workers. One of the nodes acts as an orchestrator, distributing the model and tasks to the different workers.


Image Credit: Microsoft Research

Each worker processes its tasks sequentially, calculates the model delta, and sends the resulting pseudo-gradients back to the orchestrator, which federates them into the centralized model.
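To make the aggregation step concrete, here is a minimal sketch (not FLUTE's actual API) of how an orchestrator might federate per-client deltas into the central model, using a weighted average over plain Python lists of parameters. The function name `federate` and the list-of-floats model representation are illustrative assumptions.

```python
# Illustrative sketch (not the FLUTE API): an orchestrator federating
# model deltas returned by workers into the central model.
# Models are represented as plain lists of floats for simplicity.

def federate(global_model, deltas, client_weights):
    """Apply a weighted average of client deltas to the global model.

    global_model   : list[float], current server-side parameters
    deltas         : list[list[float]], per-client parameter deltas
    client_weights : list[float], e.g. number of local samples per client
    """
    total = sum(client_weights)
    updated = list(global_model)
    for delta, w in zip(deltas, client_weights):
        for i, d in enumerate(delta):
            # Each client's delta contributes proportionally to its weight.
            updated[i] += (w / total) * d
    return updated

# Example: two clients return deltas for a 2-parameter model;
# the second client has 3x the data, so its delta counts 3x as much.
model = [1.0, 2.0]
deltas = [[0.2, -0.4], [0.4, 0.0]]
weights = [1.0, 3.0]
new_model = federate(model, deltas, weights)  # → [1.35, 1.9]
```

Weighting deltas by local sample count is the convention popularized by federated averaging; a uniform average is the special case where all weights are equal.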


Image Credit: Microsoft Research

Extrapolating this workflow to a large number of clients, we get something like the following architecture.


Image Credit: Microsoft Research

In this architecture, a federated learning workflow is based on the following steps (from the research paper):

  1. Send an initial global model to clients.
  2. Train instances of the global model with locally available data on each client.
  3. Send training information to the Server (e.g. adapted models, logits, pseudo-gradients).
  4. Combine the returned information on the server to produce a new model.
  5. Optionally, update the global model with an additional server-side rehearsal step.
  6. Send the updated global model back to the clients.
  7. Repeat steps 2–6 after sampling a new subset of clients for the next training iteration.
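The seven steps above can be sketched as a toy simulation. This is a hedged illustration of the general federated averaging loop, not FLUTE's code: each "client" fits a one-parameter model y = w·x on its local data with a single SGD step and returns a pseudo-gradient, which the server combines. The function names and the learning rate are assumptions made for the example.

```python
# Toy sketch of the federated learning loop (not FLUTE's actual code).
import random

def local_update(w, data, lr=0.1):
    # Steps 2-3: train the global model on locally available data and
    # return a pseudo-gradient (a delta to apply to w) to the server.
    grad = sum(2 * x * (w * x - y) for x, y in data) / len(data)
    return -lr * grad

def federated_round(w, clients, sample_size, rng):
    # Step 7: sample a subset of clients for this round.
    sampled = rng.sample(clients, sample_size)
    # Step 1: the current global model w is "sent" to each sampled client.
    deltas = [local_update(w, data) for data in sampled]
    # Step 4: combine the returned pseudo-gradients into a new model.
    return w + sum(deltas) / len(deltas)

rng = random.Random(0)
# Ten clients, each holding local data drawn from the ground truth y = 3x.
clients = [[(1.0, 3.0), (2.0, 6.0)] for _ in range(10)]
w = 0.0
for _ in range(20):  # Step 6: send the updated model back and repeat.
    w = federated_round(w, clients, sample_size=4, rng=rng)
# After 20 rounds, w has converged close to the true slope 3.0.
```

The optional server-side rehearsal of step 5 would slot in between combining the deltas and broadcasting the new model; it is omitted here to keep the loop minimal.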

The initial version of FLUTE is based on PyTorch, which enables interoperability with a large number of deep learning architectures. The communication protocols are implemented using OpenMPI, which guarantees high levels of performance and scalability. An open source implementation of FLUTE is available at https://github.com/microsoft/msrflute.
