End-to-End Machine Learning Project

source link: https://towardsdatascience.com/end-to-end-machine-learning-project-627ed48f8109?gi=4ab70180e887
End-to-End Machine Learning Project: Part-2

In the previous post, we saw how I trained an image classification model, starting from data preparation through training different iterations of the model, using both Convolutional Neural Networks (CNNs) and Transfer Learning, to arrive at a final model that classifies US dollar bills. If you haven’t already, I suggest skimming through that post first before reading this one.

Part-1 (previous post): preparing the data and training an image classification model

Part-2 (this post): deploying the model using Flask and Docker

After training the model, the next challenge was to showcase it. I didn’t want to spend too much on cloud resources for hosting, so I looked into many different options but couldn’t find one that seemed easy enough to get up and running quickly. Believe me when I say that there are tons of resources and tutorials discussing how to create and train your deep learning models, but only a handful that talk about deploying these models.

What “tools” to use?

This question was rolling around my head as I came closer to the completion of building this model. I knew that I wanted to deploy this model but wasn’t sure of the platform — whether it should be a native mobile app or should it be a web application. After debating some more, I went with a quick web application so that it could be accessed from any device by using a browser.

I chose to go with Flask, a lightweight WSGI web application framework. It is one of the most popular Python web application frameworks available and is known for its simplicity and ease of use, along with the ability to scale up to more complex applications.
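As a minimal illustration of how little boilerplate Flask needs (this is a generic sketch, not the actual project code):

```python
# Minimal Flask app: a single route returning a JSON health check.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/")
def index():
    # A real app would render the upload page here.
    return jsonify(status="ok")

if __name__ == "__main__":
    # Flask's built-in development server; use a production WSGI
    # server such as gunicorn when deploying for real.
    app.run(host="0.0.0.0", port=5000)
```

That is the entire skeleton; everything else in the deployed app builds on routes like this one.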

For this deployment, I chose to use my existing cloud server, a Nanode, to avoid investing in additional cloud resources, and decided to go with a container-based approach.

A container is a standard unit of software that packages up code and all its dependencies so the application runs quickly and reliably from one computing environment to another. (Source)

I decided to go with Docker, which provides a way of running multiple containers on the same machine using a common runtime, the Docker Engine.

Image-1: Containerized Applications using Docker (Photo by Author)

For hosting the application, I went with Nginx, which provides a proxy server in front of your web apps.

NGINX is a free, open-source, high-performance HTTP server and reverse proxy, as well as an IMAP/POP3 proxy server. NGINX is known for its high performance, stability, rich feature set, simple configuration, and low resource consumption. (Source)

Building a Flask-based web application

I found this very useful GitHub project [1], which provides a simple template for deploying trained image classification models as a web app using Flask. I made a few changes to the code from [1] to make it specific to my model’s objective: loading my trained model (I’m using a .tf SavedModel rather than a .h5 file), setting the output class names, setting a probability threshold for a successful prediction, etc. All these changes were made to the app.py file, which serves as the main backend for this Flask web app.
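The thresholding part of those changes boils down to something like the sketch below. The function and class names here are illustrative, not the exact ones from my repository:

```python
# Map a model's output probabilities to a class name, rejecting the
# prediction when the top probability falls below a threshold.
CLASS_NAMES = ["1 USD", "5 USD", "10 USD", "20 USD", "50 USD", "100 USD"]

def interpret_prediction(probs, class_names=CLASS_NAMES, threshold=0.7):
    """Return (label, confidence), or ("unknown", confidence) below threshold."""
    top_index = max(range(len(probs)), key=lambda i: probs[i])
    confidence = probs[top_index]
    if confidence < threshold:
        return "unknown", confidence
    return class_names[top_index], confidence
```

In the real app.py, the probabilities come from calling the loaded model on the uploaded image; the threshold keeps the app from confidently labeling images that are not currency at all.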

I also made some minor changes to the CSS (styling of the web page) and JavaScript files for this project.

You can refer to all these changes in my GitHub repository for this project.

Setting up the Server (correctly…)

This is the most important piece of this deployment puzzle, and the one I probably spent the most time researching, finalizing, and setting up, even more than building the model itself.

The first step is to install the latest version of Docker on your server for its Operating System (OS). I have an Ubuntu box as my server, so I followed this quick guide [2] to install Docker, which was smooth sailing.

The next step was to find a way to use multiple Docker containers to deploy different applications on the same server. After a lot of googling around, I stumbled upon this amazing article [3], which describes a setup similar to what I was trying to achieve, explained in a very nice and simple way.

Image-2: Deployment using Nginx reverse proxy and Docker (Photo by Author)

Here, the nginx reverse proxy acts as the master container for this setup and listens on external ports 80 (HTTP) and 443 (HTTPS), serving as the communication channel between the outside world and the slave container(s). The internal workings of each slave container are hidden, and the user sees no difference when accessing those applications.

Following the sample docker-compose.yml in [3], I set up my own version of the nginx reverse proxy with SSL for my server, which acts as the master container, along with the shared network defined in my docker-compose file.

The next step was to create a slave container to host my web application. For this, I created a single-container application based on a combination of [3] and this post [4], which uses a Dockerfile to set up the container that hosts the project. I modified the docker-compose file and the Dockerfile based on my project’s needs.

Image-3: Dockerfile (Photo by Author)

The project container is based on the tensorflow-2.0.0-py3 Docker image from Docker Hub, which comes with Python 3 and TensorFlow 2.0. Check here for a list of other TensorFlow images that you could use in your projects. Once the base image is downloaded, I copy my project’s Flask app and the model into the container and install the project’s requirements using pip.
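The steps above translate into a Dockerfile roughly like the following sketch (paths and filenames here are placeholders, not the exact ones from my repository):

```dockerfile
# Base image with Python 3 and TensorFlow 2.0 preinstalled
FROM tensorflow/tensorflow:2.0.0-py3

# Copy the Flask app and the trained model into the container
WORKDIR /app
COPY . /app

# Install the remaining Python dependencies (Flask, etc.)
RUN pip install -r requirements.txt

# Port the Flask app listens on inside the container
EXPOSE 5000

CMD ["python", "app.py"]
```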

6NvyMb6.jpg!web

Image-4: Docker Compose (Photo by Author)

The final docker-compose file uses the above Dockerfile to build a container called usbills, which hosts the model and the web app. It also creates an nginx service that depends on the usbills container; i.e., if there is a problem building the usbills container, the nginx container will not be created.

The environment section of this nginx container sets a VIRTUAL_HOST, the DNS name at which the web app will be accessible once the container is deployed. This nginx container acts as the slave container and serves requests forwarded by the server’s master container.
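Put together, a docker-compose file for this kind of setup can look roughly like the sketch below. The service names, domain, and network name are illustrative, not the exact values from my project:

```yaml
version: "3"

services:
  usbills:
    build: .              # built from the Dockerfile above
    container_name: usbills
    expose:
      - "5000"            # reachable only inside the compose network

  nginx:
    image: nginx:latest
    depends_on:
      - usbills           # nginx is only created if usbills builds
    environment:
      # DNS name picked up by the master reverse-proxy container
      - VIRTUAL_HOST=usbills.example.com
    networks:
      - webproxy

networks:
  webproxy:
    external: true        # shared network created by the master container
```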

Once these two files were created and I had verified that the master container was working correctly, hosting the application was simply a matter of one command:

$ docker-compose up -d

Conclusion

As shown in the docker-compose above, this web application can be accessed from any device and browser by visiting this link. It took a lot of searching, reading, understanding, trying, and failing before I was finally able to set up and deploy this model. This multi-container approach also sets the stage for easily deploying other applications on the same server in the future, using a similar docker-compose setup.

Hopefully, you enjoyed this two-part series. I understand this approach may not suit everyone, but hopefully it helps someone in the same boat I was in when looking for ways to deploy a trained model without spending a ton on cloud resources. Feel free to share links to your model deployments using this approach, or any other approaches you find better than this one.

Keep learning and building cool models. Until next time…

References

[1] Deploy Keras model with flask as web app in 10 minutes

[2] How to Install Docker and Pull Images for Container Deployment

[3] Hosting multiple sites using Docker and NGINX reverse proxy with SSL

[4] Nginx as reverse proxy for a flask app using Docker

[5] What is a container?
