Deploying models to production with TensorFlow model server

TF Serving in Practice

Of course, before starting you need to install TF Serving. The code examples for this blog in the repo have the installation commands typed out for you. Installing it is pretty straightforward; you can find the installation steps here.
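
If you are on a Debian/Ubuntu machine (or Colab), the apt route looks roughly like the sketch below; the repository URL and package name are taken from the TF Serving docs, so double-check them there before running-

# Add the TensorFlow Serving apt repository, then install the server binary
!echo "deb http://storage.googleapis.com/tensorflow-serving-apt stable tensorflow-model-server tensorflow-model-server-universal" | tee /etc/apt/sources.list.d/tensorflow-serving.list
!curl https://storage.googleapis.com/tensorflow-serving-apt/tensorflow-serving.release.pub.gpg | apt-key add -
!apt-get update && apt-get install tensorflow-model-server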

So, you already have a model, and the first thing you would do is simply save it in a format usable by TF Serving. The directory_path argument here tells it where to save the model, and the other two arguments map names to the model's input and output tensors-

import tensorflow as tf
from tensorflow import keras

# Export the trained Keras model (TF 1.x API) as a SavedModel that TF Serving can load
tf.saved_model.simple_save(
    keras.backend.get_session(),   # session holding the trained weights
    directory_path,                # where the SavedModel will be written
    inputs={'input_image': model.input},
    outputs={i.name: i for i in model.outputs})

If you navigate to the path where you saved this model, you will see a directory structure like this. I also made a directory called 1, which is my model version; we will see how TF Serving helps us manage and deploy these versions. Also note that your model is saved with a .pb extension.

Saved model directory
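
Roughly, that structure looks like the sketch below; the top-level folder is just whatever directory_path pointed to, and 1 is the version directory-

directory_path/
└── 1/                    # model version
    ├── saved_model.pb    # the serialized graph
    └── variables/        # the trained weights
        ├── variables.data-00000-of-00001
        └── variables.index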

There is another great interface called the SavedModel CLI, which I find pretty useful. It gives you a lot of useful information about your saved model, like operation signatures and input-output shapes.

!saved_model_cli show --dir [DIR] --all

Here is sample output showing the information this tool provides-

Saved Model CLI Output
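The CLI also has a run sub-command that is handy for smoke-testing the SavedModel before you serve it. The signature name and the dummy input shape below are assumptions for illustration, so replace them with whatever show reported for your model-

!saved_model_cli run --dir [DIR] --tag_set serve \
  --signature_def serving_default \
  --input_exprs 'input_image=np.zeros((1, 28, 28, 1))'
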
  • Starting the model server

So here is how you would then start the model server; let us break this down-

os.environ["MODEL_DIR"] = MODEL_DIR%%bash --bgnohup tensorflow_model_server \ --rest_api_port = 8501 \ --model_name = test \ --model_base_path="${MODEL_DIR}" >server.log 2>&1

The nohup tensorflow_model_server line tells it to use the TensorFlow Model Server. Of course, you would not include the bash magic, i.e. the line %%bash --bg, while implementing this in practice, but as I assume most of you might use Colab, I have added it since Colab doesn't provide you a direct terminal.

The --rest_api_port flag specifies the port on which you want the TF Model Server to serve its REST API, and is pretty straightforward too.

A thing to note is that the --model_name will also appear in the URL on which you will be serving your models, so if you have multiple models in action, managing your serving URLs also becomes a lot easier.
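
To make that concrete, here is a minimal sketch of hitting the REST endpoint with Python once the server is up; test_batch is a hypothetical NumPy array shaped like the model's input-

import json
import requests

# The "test" segment of the URL comes from --model_name in the launch command
data = json.dumps({"signature_name": "serving_default",
                   "instances": test_batch.tolist()})  # hypothetical input batch
response = requests.post("http://localhost:8501/v1/models/test:predict",
                         headers={"content-type": "application/json"},
                         data=data)
predictions = json.loads(response.text)["predictions"]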

The last part, >server.log 2>&1, redirects the server's output to a log file, and sometimes logs are just so helpful while debugging. I have personally used them quite a lot to figure out errors easily.
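
In Colab you can peek at that file directly, for example-

!tail server.log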

