
Stop making data scientists manage Kubernetes clusters

source link: https://towardsdatascience.com/stop-making-data-scientists-manage-kubernetes-clusters-53c3b584cb08?gi=ff40c4dee482

Building models is hard enough

Feb 17 · 5 min read


Source: Pexels

Disclaimer: The following is based on my observations of machine learning teams, not an academic survey of the industry. For context, I’m a contributor to Cortex, an open source platform for deploying models in production.

Production machine learning has an organizational problem, one that is a byproduct of its relative youth. While more mature fields—web development, for example—have developed best practices over decades, production machine learning hasn’t yet.

To illustrate, imagine you were tasked with growing a product engineering org for your startup, which develops a web app. Even if you had no experience building a team, you could find thousands of articles and books on how your engineering org should be structured and grown.

Now imagine you are at a startup that has dabbled with machine learning. You’ve hired a data scientist to lead the initial efforts, and the results have been good. As machine learning becomes more deeply embedded into your product, it becomes obvious that the machine learning team needs to grow, as the responsibilities of the data scientist have ballooned.

In this situation, there are not thousands of articles and books on how a production machine learning team should be structured.

This is not an uncommon scenario, and what frequently happens is that the new responsibilities of the machine learning org, infrastructure in particular, get passed on to the data scientist(s).

This is a mistake.

The difference between machine learning and machine learning infrastructure

The difference between a platform engineer and a product engineer is well understood at this point. Similarly, data analysts and data engineers are clearly differentiated roles.

Machine learning, at many companies, is still missing that specialization.

To see why the delineation between machine learning and machine learning infrastructure is important, it’s helpful to look at the work and tooling required for each.

To design and train new models, a data scientist is going to:

  • Spend their time in a notebook, analyzing data and running experiments.
  • Worry about things like data hygiene and selecting the right model architecture for their dataset.
  • Use a programming language like Python, R, Swift, or Julia.
  • Be opinionated about machine learning frameworks like PyTorch or TensorFlow (a minimal sketch of this workflow follows this list).
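To make that workflow concrete, here is a minimal sketch of the kind of experimentation loop a data scientist lives in, written with PyTorch. The dataset, architecture, and hyperparameters are hypothetical placeholders chosen for illustration, not anything prescribed by the article.

```python
# Minimal sketch of a data scientist's experimentation loop (hypothetical model and data).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy dataset standing in for whatever has been cleaned and featurized in the notebook.
X = torch.randn(1024, 20)
y = (X.sum(dim=1) > 0).long()
loader = DataLoader(TensorDataset(X, y), batch_size=64, shuffle=True)

# A simple architecture; in practice this is where model-selection decisions happen.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    for xb, yb in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")

# The deliverable is a trained artifact, handed off for deployment.
torch.save(model.state_dict(), "model.pt")
```

The output of this loop is a saved model; getting that model served reliably is a different job entirely, as the next list shows.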

In other words, their responsibilities, skills, and tools are going to revolve around manipulating data to develop models, and their ultimate output will be models that deliver the most accurate predictions possible.

The infrastructure side is fundamentally different.

A common way to put a model into production is to deploy it to the cloud as a microservice. To deploy a model as a production API, an engineer is going to:

  • Spend their time split between config files, their terminal, and their cloud provider’s console, trying to optimize stability, latency, and cost.
  • Worry about things like autoscaling their instances, updating models without crashing APIs, and serving inference on GPUs.
  • Use tools like Docker, Kubernetes, Istio, Flask, and whatever services/APIs their cloud provider offers (a minimal serving sketch follows this list).
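For contrast, here is a minimal sketch of what the serving side might look like as a Flask microservice. The model path, route, and request schema are illustrative assumptions that happen to match the training sketch above; this is not Cortex’s API or any particular team’s setup.

```python
# Minimal sketch of serving a trained model as an HTTP API (hypothetical paths and schema).
import torch
import torch.nn as nn
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the artifact produced during training; the architecture must match what was saved.
model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
model.load_state_dict(torch.load("model.pt", map_location="cpu"))
model.eval()

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body like {"features": [[...20 floats...], ...]}.
    features = torch.tensor(request.get_json()["features"], dtype=torch.float32)
    with torch.no_grad():
        preds = model(features).argmax(dim=1).tolist()
    return jsonify({"predictions": preds})

if __name__ == "__main__":
    # In production this would run behind a WSGI server, inside a Docker image,
    # on Kubernetes with autoscaling and routing in front of it, not the dev server.
    app.run(host="0.0.0.0", port=8080)
```

Note that nothing in this script touches the modeling work itself; the real effort is in the containerization, scaling, and cloud configuration wrapped around it.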

An easy way to visualize the difference between working on machine learning and working on machine learning infrastructure is like this:


Machine learning vs. Machine learning infrastructure

Intuitively, it makes sense that a data scientist should handle the circle on the left, but not so much the circle on the right.

What’s wrong with having non-specialists manage infrastructure?

Let’s run this as a hypothetical. Say you had to assign someone to manage your machine learning infrastructure, but you didn’t want to dedicate someone full-time to it. Your only two options would be:

  • A data scientist, because of their familiarity with machine learning.
  • A devops engineer, because of their familiarity with general infrastructure.

Both of these options have issues.

First, data scientists should spend as much time as possible doing what they’re best at—data science. While learning infrastructure certainly isn’t beyond them, both infrastructure and data science are full-time jobs, and splitting a data scientist’s time between them will reduce the quality of output in both roles.

Second, your organization needs someone dedicated specifically to machine learning infrastructure. Serving models in production is different from hosting a web app. You need someone specialized for the role, who can advocate for machine learning infrastructure within your org.

This advocacy piece turns out to be crucial. I get to see inside a lot of machine learning orgs, and you’d be surprised how often their bottlenecks stem not from technical challenges, but from organizational ones.

For instance, I’ve seen machine learning teams who need GPUs for inference (big models like GPT-2 basically require them for reasonable latency) but who can’t get them because their infrastructure is managed by the broader devops team, who don’t want to put the cost on their account.

Having someone dedicated to your machine learning infrastructure means you not only have a team member who is constantly improving your infrastructure, but also an advocate who can get your team what it needs.

Who should manage the infrastructure then?

Machine learning infrastructure engineers.

Now, before you disagree about the official title, let’s just acknowledge that it’s still early days for production machine learning and that it’s the wild west when it comes to titles. Different companies might call it:

  • Machine learning infrastructure engineer
  • Data science platform engineer
  • ML production engineer

We can already see mature machine learning organizations hiring for this role, including Spotify:


Source: Spotify

As well as Netflix:


Source: Netflix

As ML-powered features like Gmail’s Smart Compose, Uber’s ETA prediction, and Netflix’s content recommendation become ubiquitous in software, machine learning infrastructure is becoming more and more important.

If we want a future in which ML-powered software is truly commonplace, removing the infrastructure bottleneck is essential—and to do that, we need to treat it as a real specialization, and let data scientists focus on data science.

