Hosting your ML model on Azure Functions


Hosting your ML model on Azure Functions — Part 1

Nov 17 · 7 min read


It’s been weeks of feature engineering, model selection and hyperparameter tuning, and you’re done.

Your colleagues are so confused by your constant use of the word “pickle” that they’ve stopped asking what you’re on about and just assume you’ve really got into preserving veggies.

Now let’s put that model somewhere that you can get some value out of it.

If you’re looking for how to do it with AWS Lambdas, I’ve got a few tutorials to help you:

Here’s the link to my Git repo if you want to follow along

Azure Functions vs AWS Lambdas

Azure Functions are very similar to AWS Lambdas, and as a model deployment option they have some great benefits over them.

Specifying your Azure Function runtime is easy as pi

Remember how painful it was to get AWS Lambdas to use pandas? I sure do. Change one small thing (update your internal ML package, or try a fancy new sklearn model) and your AWS Lambda kicks off.

Next thing you know you’re using Docker to recreate layers, and heaven forbid you forget to change your layer version.

With Azure Functions you just specify a requirements.txt and it’ll install all those packages for you when you deploy your Azure Function to your account.

It also lets you upload models from your local machine directly to your Azure Function.

That is pretty bloody cool.

The Azure CLI allows you to debug your Azure functions locally

If you followed my AWS Lambda tutorial, you probably spent a lot of time writing simple tests to run over your Lambdas to make sure you could import pandas or do something really small.

With Azure Functions you can instead run everything locally and send POST requests via Python’s requests package, so you can be sure it’s all working before you deploy.

Creating an HTTP Azure Function deploys and exposes the endpoint in one simple step

Previously, when deploying a model to AWS Lambda and API Gateway, you’d have to do the two steps separately. With Azure Functions it’s a single step.

In addition to this, your Azure Functions come pre-secured (i.e. they have an API key), so you don’t have to worry about shutting down your AWS Lambda while you configure things on the API Gateway side.
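For example, once a function is deployed, the key can be passed either as a code query parameter or as an x-functions-key header. Here’s a quick sketch using Python’s requests package (the URL and key are placeholders; copy the real ones from the Azure Portal):

import requests

# placeholder values: copy the real URL and function key from the Azure Portal
url = 'https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>'
key = '<YOUR_FUNCTION_KEY>'

# the key can go in the 'code' query parameter...
requests.post(url, params={'code': key}, json=[])

# ...or in the x-functions-key header (empty payload, just to show where the key goes)
requests.post(url, headers={'x-functions-key': key}, json=[])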

Getting started

Now we get into the fun part. If you want to follow along, here’s the GitHub repo.

First you’ll need to install the Azure CLI tools. These will let you test your Azure Function locally and then push it to your Azure account.

At this point it’s probably worthwhile getting your Azure account. If you’ve never signed up, you can probably get $200 of free credit. But if you can’t, don’t worry — these deployments won’t break the bank.

Follow this to get the Azure CLI installed so we can then start building our function.

To test this you’ll want to try:

func new

If the installation has worked then you’ll be able to create an Azure Function locally. But it can be a bit tough to get working on Windows.

Getting Azure CLI working on Windows

If you get an error saying func not found, you’ll need to do a few more steps:

Apparently this is because func is not on your PATH. So if you’re smarter than me and sort that stuff out, let me know and I’ll add it here.

This is where you initialize your Azure Function. The nice thing is that if you get your Azure Function to respond to HTTP requests, then you’ve deployed your model and made it available via an API in one easy step.

Setting up your Azure Function and local python environment

#1 Initialize the folder you’re working in

func init titanic_model --python

#2 Set up the bare bones Azure Function

cd titanic_model
func new
  • Select HTTP trigger
  • Give your function a name; I called mine model_deployment

And if you’ve done it right you should see this:

[Image: CLI output after creating the HTTP trigger function]

Eagle-eyed readers will notice that I’m running those commands off my base Python installation.

When we run the Azure Function locally, it executes in the Python environment we specify. This lets us use requirements.txt to build a local environment that mirrors what will run on the Azure Function.

I create my environment using conda because it’s easy, but you can use venv or any other solution.

You’ll also want to add to your requirements.txt so it covers all your dependencies. Here’s what mine looks like:

azure-functions==1.0.4
pandas==0.25.3
scikit-learn==0.21.3
requests==2.22.0

Now run the following to create an environment and install the packages, so you can build the model for deployment:

conda create --name model_deploy python=3.7
activate model_deploy
pip install -r requirements.txt

Building the model and configuring the Azure Function

I’ve put a simple model in the Git repo to guide you. To create the model .pkl you’ll need to run:

python model_build.py

This model is built from the Titanic dataset and predicts survival from:

  • Age
  • Pclass (Passenger class)
  • Embarked (where a passenger embarked)
  • Sex
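The real model_build.py lives in the repo, but if you’re not following along, here’s a minimal sketch of what such a script might look like. The classifier choice, CSV name and output path are my assumptions, not necessarily what the repo uses:

import pickle

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

# assumes a local copy of the Titanic training data
df = pd.read_csv('titanic.csv')
df = df.dropna(subset=['Age', 'Embarked'])  # keep the sketch simple: drop missing values

X = df[['Age', 'Pclass', 'Embarked', 'Sex']]
y = df['Survived']

# one-hot encode the categorical features, pass Age through untouched
preprocess = ColumnTransformer(
    [('cat', OneHotEncoder(handle_unknown='ignore'), ['Pclass', 'Embarked', 'Sex'])],
    remainder='passthrough'
)

clf = Pipeline([
    ('preprocess', preprocess),
    ('model', RandomForestClassifier(n_estimators=100))
])
clf.fit(X, y)

# save the fitted pipeline where the Azure Function can load it
with open('model_deployment/model.pkl', 'wb') as f:
    pickle.dump(clf, f)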

Altering __init__.py to handle your requests

__init__.py is what your Azure Function runs off.

Using JSON inputs/outputs can be a bit of a faff, and it took a while for me to get right.

I’ll paste the main sections of the code here and highlight the things that confused me, so you can learn from my mistakes.

data = req.get_json()
data = json.loads(data)

You’ll be using a POST request for this model. But when you read in the request body, the JSON will still be a string, so we need to convert it to a proper dict/JSON object using json.loads before we can predict on the data.
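To make that concrete, here’s what the conversion looks like with an example record (the field values are just illustrations):

import json

# what req.get_json() hands back: a JSON-encoded string
body = '[{"Age": 22, "Pclass": 3, "Embarked": "S", "Sex": "male"}]'

# json.loads turns it into a list of dicts we can loop over
data = json.loads(body)
print(data[0]['Age'])  # 22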

response = []
for data_row in data:

    pred_df = pd.DataFrame([data_row])
    pred_label = clf.predict(pred_df)[0]
    pred_probs = clf.predict_proba(pred_df)[0]

    results_dict = {
        'pred_label': int(pred_label),
        'pred_prob_0': pred_probs[0],
        'pred_prob_1': pred_probs[1]
    }

    response.append(results_dict)

return json.dumps(response)

There are a few things I’ll quickly mention:

  • pd.DataFrame([data_row]): lets you create a one-row dataframe in pandas. Otherwise you’ll get an index error
  • int(pred_label): used because the predicted class comes back as a numpy datatype (int64), which isn’t JSON serializable, so I convert it
  • json.dumps(response): even though we’re working with JSON, you need to convert it to a string before you send it back
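Putting those pieces together, here’s roughly what the full __init__.py looks like. The model filename and the module-level load are my assumptions about the repo’s layout:

import json
import pathlib
import pickle

import azure.functions as func
import pandas as pd

# load the pickled model once, at import time, so it gets reused across requests
model_path = pathlib.Path(__file__).parent / 'model.pkl'
with open(model_path, 'rb') as f:
    clf = pickle.load(f)


def main(req: func.HttpRequest):
    # the body arrives as a JSON-encoded string, so it gets decoded twice
    data = req.get_json()
    data = json.loads(data)

    response = []
    for data_row in data:
        pred_df = pd.DataFrame([data_row])
        pred_label = clf.predict(pred_df)[0]
        pred_probs = clf.predict_proba(pred_df)[0]

        response.append({
            'pred_label': int(pred_label),
            'pred_prob_0': pred_probs[0],
            'pred_prob_1': pred_probs[1]
        })

    # the runtime wraps a returned string in an HTTP 200 response
    return json.dumps(response)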

Now let’s deploy that bad boi — locally

func host start

That should give you the below result once it’s up and running

[Image: func host start output listing the local endpoint]

http://localhost:7071/api/model_deployment is what we want to send our requests to. Once the local Azure Function is running, use test_api.py to ping data at our API. You should see the below results:

[Image: test_api.py output showing predictions returned by the local API]
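If you’re not following along with the repo, here’s a minimal sketch of what test_api.py might look like (the sample passengers are made up):

import json

import requests

url = 'http://localhost:7071/api/model_deployment'

passengers = [
    {'Age': 22, 'Pclass': 3, 'Embarked': 'S', 'Sex': 'male'},
    {'Age': 38, 'Pclass': 1, 'Embarked': 'C', 'Sex': 'female'},
]

# the function calls json.loads on the body, so send a JSON-encoded string
response = requests.post(url, json=json.dumps(passengers))
print(response.json())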

Booo yah. Now it’s working!!!!! So now we’ve got an Azure Function working locally, and we need to push it to Azure so we can deploy it once and for all.

Deploying to Azure

Now that we’ve got the Azure Function working locally, we can push it to Azure and deploy this bad boi.

If you haven’t created an Azure account, then this is your time to do it. Once you’ve done that, go to your Portal and spin up an Azure Function App.

Here’s how I configured mine

[Image: my Azure Function App configuration in the Portal]

Using Docker is a good shout; I’ll do my next blog post on that.

Now that you’ve created your Azure Function App, you can deploy your model to Azure and try it out.

From your local Azure Function directory, you’ll want to run the following command:

az login

This will either execute seamlessly or ask you to log in to your Azure account. Once that’s done you’re ready to go. Now let’s push your local code to Azure:

func azure functionapp publish <APP_NAME> --build remote

Where APP_NAME is the name you gave your Function App (duh). Mine is titanicmodel, but yours will be different.

Once that’s deployed, you need to find the URL of your Azure Function App. You can find it here:

[Image: Function App overview page in the Azure Portal, showing the app URL]

This URL is what you’re going to use to access your model on Azure. So replace azure_url in test_api.py with your Azure Function URL and give it a try.
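For reference, a deployed HTTP-triggered function’s endpoint generally looks like this (the key is a placeholder; the app and function names are the ones used above):

https://titanicmodel.azurewebsites.net/api/model_deployment?code=<YOUR_FUNCTION_KEY>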

If everything has gone to plan you’ll get:

[
  {
    "pred_label": 1,
    "pred_prob_0": 0.13974358581161161,
    "pred_prob_1": 0.86025641418838839
  },
  {
    "pred_label": 0,
    "pred_prob_0": 0.65911568636955931,
    "pred_prob_1": 0.34088431363044069
  },
  {
    "pred_label": 1,
    "pred_prob_0": 0.13974358581161161,
    "pred_prob_1": 0.86025641418838839
  },
  {
    "pred_label": 0,
    "pred_prob_0": 0.65911568636955931,
    "pred_prob_1": 0.34088431363044069
  }
]

And now you’ll also have that warm and fuzzy feeling of deploying your first ML model to an Azure Function!!

Since the API side is natively handled by Azure Functions, I’ll make my second part about recreating this process using Docker, which might make things even easier.

