

FLUTE is Microsoft’s New Framework for Federated Learning
source link: https://jrodthoughts.medium.com/flute-is-microsofts-new-framework-for-federated-learning-8120fb2570bd

The new framework enables large scale, offline simulations of federated learning scenarios.

Image Credit: Microsoft Research
I recently started an AI-focused educational newsletter that already has over 125,000 subscribers. TheSequence is a no-BS (meaning no hype, no news, etc.) ML-oriented newsletter that takes 5 minutes to read. The goal is to keep you up to date with machine learning projects, research papers, and concepts. Please give it a try by subscribing.
Federated learning is one of the most popular architectures for enabling privacy in deep learning solutions. Despite its popularity, federated learning architectures remain very difficult to implement. From the computational cost of setting up test environments to the enforcement of privacy guarantees and the complexity of model updates, implementing federated learning solutions can be nothing short of a nightmare. Not surprisingly, there are not many federated learning libraries available in mainstream deep learning frameworks. To address some of these challenges, Microsoft Research recently open sourced Federated Learning Utilities and Tools for Experimentation (FLUTE), a framework for running large scale federated learning simulations.
FLUTE was designed to enable researchers to perform rapid prototyping of offline simulations of federated learning scenarios without incurring high computational costs. The framework includes a core set of capabilities that enable that goal:
· Ability to simulate millions of endpoints, sampling tens of thousands of instances per round.
· Multi-GPU, multi-node orchestration.
· Rich model portfolio including popular CNNs, RNNs and transformer architectures.
· Extensible programming model.
· Native integration with Azure ML.
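The first capability above, drawing a small cohort each round from a huge population of simulated endpoints, boils down to uniform client sampling. A minimal sketch (not FLUTE's actual API; the function name and parameters are illustrative):

```python
import random

def sample_clients(num_endpoints, clients_per_round, seed=None):
    """Uniformly sample a subset of client IDs for one training round,
    without replacement."""
    rng = random.Random(seed)
    return rng.sample(range(num_endpoints), clients_per_round)

# Simulate 1,000,000 endpoints with 10,000 participating per round.
round_clients = sample_clients(1_000_000, 10_000, seed=0)
```

Because only the sampled cohort trains in a given round, the simulation cost scales with `clients_per_round`, not with the total number of endpoints.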
A typical FLUTE architecture consists of a number of nodes, which are physical or virtual machines that execute a number of workers. One of the nodes acts as an orchestrator, distributing the model and tasks to the different workers.

Image Credit: Microsoft Research
Each worker processes its tasks sequentially, calculates the model delta, and sends the pseudo-gradients back to the orchestrator, which federates them into the centralized model.
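The "model delta" a worker returns is often called a pseudo-gradient: the element-wise difference between the global weights the client received and the weights after local training. A toy sketch of this exchange, with weights modeled as flat lists of floats rather than PyTorch tensors (the function names are illustrative, not FLUTE's API):

```python
def model_delta(global_weights, local_weights):
    """Pseudo-gradient: the difference between the global model and the
    locally trained copy. Only this delta leaves the client, not the data."""
    return [g - l for g, l in zip(global_weights, local_weights)]

def federate(global_weights, deltas, lr=1.0):
    """Server-side federation: average the returned deltas and take one
    gradient-style step on the global model."""
    n = len(deltas)
    avg = [sum(d[i] for d in deltas) / n for i in range(len(global_weights))]
    return [w - lr * a for w, a in zip(global_weights, avg)]
```

With `lr=1.0` and a single client, federating the delta simply recovers that client's locally trained weights; averaging over many clients is what makes the update federated.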

Image Credit: Microsoft Research
Extrapolating this workflow to a large number of clients, we get something like the following architecture.

Image Credit: Microsoft Research
In this architecture, a federated learning workflow is based on the following steps (from the research paper):
- Send an initial global model to clients.
- Train instances of the global model with locally available data on each client.
- Send training information back to the server (e.g., adapted models, logits, pseudo-gradients).
- Combine the returned information on the server to produce a new model.
- Optionally, update the global model with an additional server-side rehearsal step.
- Send the updated global model back to the clients.
- Repeat steps 2–6 after sampling a new subset of clients for the next training iteration.
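The steps above can be condensed into a single simulated training round. The sketch below is a generic federated-averaging loop under simplifying assumptions (weights as lists of floats, a toy `local_train` stand-in for on-device training); it illustrates the workflow, not FLUTE's implementation:

```python
import random

def local_train(weights, data):
    """Toy stand-in for step 2: nudge each weight toward the mean of the
    client's local data. Real clients would run epochs of SGD here."""
    target = sum(data) / len(data)
    return [w + 0.1 * (target - w) for w in weights]

def federated_round(global_weights, client_data, clients_per_round, rng):
    # Step 7: sample a subset of clients for this round.
    sampled = rng.sample(list(client_data), clients_per_round)
    # Steps 1-3: send the global model, train locally, collect pseudo-gradients.
    deltas = []
    for cid in sampled:
        local = local_train(list(global_weights), client_data[cid])
        deltas.append([g - l for g, l in zip(global_weights, local)])
    # Step 4: combine the returned deltas into a new global model.
    avg = [sum(d[i] for d in deltas) / len(deltas)
           for i in range(len(global_weights))]
    # Step 6: the returned weights are what gets sent back to the clients.
    return [w - a for w, a in zip(global_weights, avg)]

rng = random.Random(0)
# 100 simulated clients, each holding a small local dataset.
clients = {i: [float(i % 5)] * 4 for i in range(100)}
weights = [0.0, 0.0]
for _ in range(3):  # repeat steps 2-6 across rounds
    weights = federated_round(weights, clients, clients_per_round=10, rng=rng)
```

The optional server-side rehearsal of step 5 would slot in between computing the averaged update and broadcasting the new model.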
The initial version of FLUTE is based on PyTorch, which gives it interoperability with a large number of deep learning architectures. The communication protocols are implemented using OpenMPI, which provides high levels of performance and scalability. An open source implementation of FLUTE is available at https://github.com/microsoft/msrflute.