Personal Staging Environment for Micro-Services Architecture

source link: https://lambda.grofers.com/personal-staging-testing-environment-for-micro-service-architecture-b98e62e439d9


Micro-Services Architecture is a method of developing applications as independently deployable, small, modular services in which each service runs a unique process and communicates through a well-defined, lightweight mechanism to serve a business goal.

Grofers has been following micro-services architecture for a while now. While there are plenty of advantages of this architecture, it also comes with its own challenges. One of those is managing a stable stage infrastructure and providing a good dev experience to developers. We have taken multiple steps in the past to improve these. This post is about another improvement that we made to our micro-services architecture by improving our stage infrastructure.

Staging Environment

A good production environment is always supported by a great development environment and a big part of that is the staging environment.

A stage or staging environment is an environment for testing that exactly resembles the production environment. In other words, it’s a complete but independent copy of the production environment.

You usually execute your QA process in this environment at the end of a development cycle, to gain confidence that the new code does not introduce regressions in production. So it is imperative to have an accurate staging environment that mimics production to get the ideal level of confidence.

Personal Staging Environment

While you can work towards building a great staging environment for your QA process, that is by no means all that you need to ship your software. You also need a great development environment to be able to test as you write code. Usually developers treat their workstations as their development environment. But in a complicated and large infrastructure running more than 30 services, it can get extremely difficult to set up everything locally. And even if you manage to do that, there is no guarantee it will run stably, because local resources are limited. So developers often set up only a few services locally and then try to test fully in a stable stage environment.

As products, services and teams grow, a single shared staging setup becomes a bottleneck. Shared staging becomes a resource that multiple people compete for and wait on, which in turn limits the speed of feature releases.

We realised that we needed to address this problem to improve our productivity. The idea we arrived at was the ability to spawn new, isolated staging environments on demand.


We wanted an isolated stage environment that would be:

* Available on-demand
* Easy to use and quick to deploy
* Customizable enough such that any git branch of any service could be tested independently


We achieved this by Dockerising each service, so that the whole application ecosystem becomes a collection of Docker containers communicating with each other and the outside world.

Tools Used

* Docker
* Ngrok
* Packer.io
* Ansible
* Jenkins
* Docker Swarm
* OpenResty

Read here about how we use Ngrok and OpenResty for setting up our local environment.


All of our API requests are routed through a single API gateway middleware service container, which authenticates the requests. After successful authentication, requests are routed to an Nginx container, which proxies them to the individual micro-service's container based on the requested path.

All the internal communication between services happens directly among containers using container name for service discovery.
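As a rough illustration of this routing layer, the Nginx config below proxies by path prefix to containers addressed by name, relying on Docker's embedded DNS for service discovery. The service names, ports and paths here are hypothetical, not Grofers' actual configuration:

```nginx
http {
    server {
        listen 8080;

        # Route by path prefix; "orders-service" and "catalog-service"
        # are container names resolved by Docker's internal DNS.
        location /orders/ {
            proxy_pass http://orders-service:5000/;
        }

        location /catalog/ {
            proxy_pass http://catalog-service:5000/;
        }
    }
}
```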


How to containerise a service

We use Ansible to deploy our applications on AWS machines.

Ansible helps us provision our machines with required libraries, packages, files, repositories, applications and services.

We use Packer.io for creating docker images, which uses Ansible playbooks for provisioning images. This helped us reuse our Ansible playbooks for building images, instead of writing a Dockerfile for each service.

Read here for more details on how to use packer and Ansible to build docker images.

Typically, we use extra_arguments like branch while building an image for a particular branch of our code.

"extra_arguments": ["--extra-vars=branch={{ user `branch` }}"]

We also use the branch variable as an image tag. This way we can have an image built for every git branch for every repository.
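Putting the pieces together, a Packer template along these lines builds a Docker image with an Ansible playbook and tags it with the branch. The base image, playbook path, and registry/repository names below are illustrative assumptions, not the actual templates:

```json
{
  "variables": { "branch": "master" },
  "builders": [{
    "type": "docker",
    "image": "ubuntu:16.04",
    "commit": true
  }],
  "provisioners": [{
    "type": "ansible",
    "playbook_file": "./provision.yml",
    "extra_arguments": ["--extra-vars=branch={{ user `branch` }}"]
  }],
  "post-processors": [[{
    "type": "docker-tag",
    "repository": "registry.example.com/orders-service",
    "tag": "{{ user `branch` }}"
  }]]
}
```

The docker-tag post-processor is what makes the branch name reusable as the image tag, so every branch of every repo gets its own pullable image.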

Auto building image on code changes

We use Jenkins as our CI tool, where we build images. Jenkins jobs are triggered by GitHub whenever there are changes to a particular repo (a new branch, new commits on a branch, etc.).
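The build step inside such a Jenkins job could look roughly like this; the template filename, registry and image name are hypothetical placeholders:

```shell
# Build an image for the branch that triggered the job,
# then push it tagged with that branch name.
packer build -var "branch=${GIT_BRANCH}" service.json
docker push "registry.example.com/orders-service:${GIT_BRANCH}"
```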


Now finally when the images are ready, we use Docker Compose to deploy them on an EC2 instance.
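A minimal sketch of such a compose file is shown below, assuming branch names are injected as environment variables; the service and image names are illustrative:

```yaml
version: "3"
services:
  api-gateway:
    image: registry.example.com/api-gateway:${GATEWAY_BRANCH:-master}
    ports:
      - "80:80"   # the only port exposed to the outside world
  nginx:
    image: registry.example.com/stage-nginx:master
  orders-service:
    image: registry.example.com/orders-service:${ORDERS_BRANCH:-master}
```

Because each image tag is a git branch, testing a feature branch of one service is just a matter of changing one variable and re-deploying.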


An Nginx container is a vital component of the whole architecture because all the inter-container communication is handled by Nginx. All the internal calls are directed to Nginx and Nginx routes a request to one of the containers as explained in the illustration below.


Our API Gateway container runs on port 80. This is the entry point for HTTP traffic from the external world, hence port 80.

Ports of all other containers are decided internally by Docker, so using container names instead of fixed ports is always better.

Docker Swarm

Docker Swarm is used to scale up the solution. You create a swarm and add as many machines as required, and Docker distributes the containers and their multiple instances across those machines by itself.
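The basic swarm workflow amounts to a few commands; the IPs and service name below are placeholders:

```shell
# On the manager node: initialise the swarm.
docker swarm init --advertise-addr <manager-ip>

# On each additional EC2 instance: join as a worker
# (the join token is printed by "swarm init").
docker swarm join --token <worker-token> <manager-ip>:2377

# Scale a service; Swarm spreads the replicas across nodes.
docker service scale orders-service=3
```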

Management and Dashboard

Portainer is used as a management tool. You can scale up/down, restart, add and remove containers and images easily with Portainer.

Read about Portainer for more details.


Ngrok is used to provide a public endpoint, which can be used to connect to services running privately or those that don't have a dedicated public IP. Ngrok's Docker image is available on Docker Hub. This also makes it easy for everyone to remember how to connect to a particular staging environment, and to share it.
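As one possible setup (not necessarily the one used here), the official ngrok/ngrok image from Docker Hub can tunnel to the gateway container over the compose network; the network name and token are placeholders:

```shell
# Run ngrok on the same Docker network as the gateway and
# tunnel public HTTP traffic to it by container name.
docker run -d --name ngrok --network stage_default \
    -e NGROK_AUTHTOKEN=<your-token> \
    ngrok/ngrok http api-gateway:80
```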


This setup has improved our productivity manyfold. What we achieved here with containers could potentially be achieved with Ansible or any other configuration management and orchestration tool. However, we took this path to learn more about containerisation, as we foresee ourselves using containers in the near future. It has been a great learning experience while working towards our goal of improving our staging infrastructure.

Needless to say that we are working on taking containers to production. And, we’re hiring!
