Microservices in Golang - part 2 - Docker and go-micro

Source: https://ewanvalentine.io/microservices-in-golang-part-2/

Support this content

If you're finding this content useful, especially if you're using an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. The more sponsors/donations I get, the more time I can justify updating these articles and writing new content. I really appreciate your help and support. Cheers! https://monzo.me/ewanvalentine

Or you can buy me a coffee, or sponsor me on Patreon to support more content like this.

UPDATED: 11th April 2019
UPDATED: 12th January 2020
UPDATED: 6th June 2020

Read in Chinese

Introduction - Part 2: Docker and go-micro

In the previous post, we covered the basics of writing a gRPC-based microservice. In this part, we will cover the basics of Dockerising a service; we will also update our service to use go-micro, and finally, introduce a second service.

Introducing Docker

With the advent of cloud computing and the birth of microservices, the pressure to deploy smaller chunks of code more frequently has led to some interesting new ideas and technologies, one of which is the concept of containers.

Traditionally, teams would deploy a monolith to static servers, running a set operating system with a predefined set of dependencies to keep track of, or perhaps to a VM provisioned by Chef or Puppet, for example. Scaling was expensive and not all that effective. The most common option was vertical scaling, i.e. throwing more and more resources at static servers.

Tools like Vagrant came along and made provisioning VMs fairly trivial. But running a VM was still a fairly hefty operation. You were running a full operating system in all its glory, kernel and all, within your host machine. In terms of resources, this is pretty expensive. So when microservices came along, it became infeasible to run so many separate codebases in their own environments.

Along came containers

Containers are slimmed-down versions of an operating system. Containers don't contain a kernel, a guest OS, or any of the lower-level components which would typically make up an OS.

Containers contain only the top-level libraries and their run-time components. The kernel is shared with the host machine. So the host machine runs a single Unix kernel, which is then shared by n containers, running very different sets of run-times.

Under the hood, containers utilise various kernel utilities, in order to share resources and network functionality across the container space.


This means you can run the run-time and the dependencies your code needs without booting up several complete operating systems. This is a game changer, because the overall size of a container vs a VM is orders of magnitude smaller. Ubuntu, for example, is typically a little under 1GB in size, whereas its Docker image counterpart is a mere 188MB.

You will notice I spoke more broadly of containers in that introduction, rather than 'Docker containers'. It's common to think that Docker and containers are the same thing. However, containers are more of a concept or set of capabilities within Linux. Docker is just one flavour of containers, which became popular largely due to its ease of use. There are others, too. But we'll be sticking with Docker, as it's in my opinion the best supported, and the simplest for newcomers.

So, now that you hopefully see the value in containerisation, we can start Dockerising our first service. Let's create a Dockerfile:

$ touch shippy-service-consignment/Dockerfile

In that file, add the following:
# Build container
FROM golang:alpine as builder

RUN apk update && apk upgrade && \
    apk add --no-cache git

RUN mkdir /app
WORKDIR /app

ENV GO111MODULE=on

COPY . .

RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment

# Run container
FROM alpine:latest

RUN apk --no-cache add ca-certificates

RUN mkdir /app
WORKDIR /app
COPY --from=builder /app/shippy-service-consignment .

CMD ["./shippy-service-consignment"]

So what's going on here? We're using what's called a multi-stage Dockerfile, which allows us to use separate Docker images for building and running our containers. When we build our container, the first 'builder' container fetches our dependencies, using the Golang runtime image as the base image, and then builds our binary. The second part of the Dockerfile, underneath where I left a 'Run container' comment, takes the binary from our build container (stages can share artefacts like this) and runs it using the Alpine base image, without the Golang runtime. This means it has just enough dependencies to connect to a network and execute a compiled binary, which of course means our containers are smaller. The reason it's good to have smaller containers is that they can deploy faster, scale faster, and take up less storage space.

More on multi-stage builds here.

If you're running on Linux, you might run into issues using Alpine. So if you're following this article on a Linux machine, simply replace alpine with debian, and you should be good to go. We'll touch on an even better way to build our binaries later on.

We can then build our Docker image using:
$ docker build -t shippy-service-consignment .

We can run this image using:
$ docker run -p 50051:50051 shippy-service-consignment

The -p flag is a port mapping, which maps port 50051 on our internal Docker network to the same port on our host network. We could map that internal port to a different port on the host network; for example, 8080:50051 would expose the service on port 8080 on localhost.
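So, for example, to expose the service on port 8080 of the host instead, the run command becomes:
$ docker run -p 8080:50051 shippy-service-consignment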

You can read more about how Docker's networking works here.

Run your newly created Docker image using the command above, then, in a separate terminal pane, run your CLI client again with $ go run main.go and double-check it still works.

When you run $ docker build, you are building your code and run-time environment into an image. Docker images are portable snapshots of your environment and its dependencies. You can share Docker images by publishing them to Docker Hub, which is a sort of npm or yum repo for Docker images. When you define a FROM in your Dockerfile, you are telling Docker to pull that image from Docker Hub to use as your base. You can then extend and override parts of that base image by re-defining them in your own. We won't be publishing our Docker images, but feel free to peruse Docker Hub and note how just about any piece of software has been containerised already. Some really remarkable things have been Dockerised.

Each declaration within a Dockerfile is cached as a layer when it's first built. This saves having to re-build the entire run-time each time you make a change. Docker is clever enough to work out which parts have changed and which parts need re-building. This makes the build process incredibly quick.
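One way to lean on that cache (a tweak, not something our Dockerfile above does) is to copy the module files and download dependencies before copying the rest of the source, so that code changes don't invalidate the dependency layer:

# Copy the module files first; this layer stays cached until
# go.mod or go.sum actually change
COPY go.mod go.sum ./
RUN go mod download

# Source changes only invalidate the layers from this point down
COPY . .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-service-consignment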

Enough about containers! Let's get back to our code.

Back to the code

When creating a gRPC service, there's quite a lot of boilerplate code for creating connections, and you have to hard-code the address of a service into a client, or another service, in order to connect to it. This is tricky, because when you are running services in the cloud, they may not share the same host, and the address or IP may change after a service is re-deployed.

This is where service discovery comes into play. Service discovery keeps an up-to-date catalogue of all your services and their locations. Each service registers itself with it at runtime and de-registers itself on closure. Each service has a name or ID assigned to it, so that even though it may have a new IP address or host address, as long as the service name remains the same, you don't need to update calls to this service from your other services.
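To make that concrete, here's a minimal sketch, assuming go-micro v2's registry package (which we'll be using shortly), of looking a service up by name. Note that nowhere do we hard-code an IP address:

package main

import (
	"fmt"
	"log"

	"github.com/micro/go-micro/v2/registry"
)

func main() {
	// Query the default registry (mDNS locally) for instances of a
	// service, by name; the same registry our services register with
	services, err := registry.GetService("shippy.service.consignment")
	if err != nil {
		log.Fatal(err)
	}

	for _, service := range services {
		for _, node := range service.Nodes {
			// Addresses may change between deploys; the name never does
			fmt.Println(service.Name, "->", node.Address)
		}
	}
}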

There are many approaches to this problem, but like most things in programming, if someone has tackled it already, there's no point re-inventing the wheel. One person who has tackled these problems with fantastic clarity and ease of use is @chuhnk (Asim Aslam), creator of Micro. Micro is a suite of tools and a framework for building reliable microservices in Go. Micro now has a full team behind it, is growing very rapidly, and is backed by some big names.

Micro

Micro has useful features for making microservices in Go trivial. But we'll start with probably the most common issue it solves, and that's service discovery.

We will need to make a few updates to our service in order to work with go-micro. Micro integrates as a protoc plugin, in this case replacing the standard gRPC plugin we're currently using.

Be sure to install the Micro go dependencies:

$ go get github.com/micro/micro/v2
$ go get github.com/micro/micro/v2/cmd/protoc-gen-micro@master
The new command to update our proto files is:
$ cd shippy-service-consignment
$ protoc --proto_path=. --go_out=. --micro_out=. \
		proto/consignment/consignment.proto

Note: if you get an error at this point regarding an etcd dependency, try adding the following to your go.mod file:

...

replace google.golang.org/grpc => google.golang.org/grpc v1.26.0

require (
  ...
)

Now we will need to update our shippy/shippy-service-consignment/main.go file to use go-micro. This will abstract much of our previous gRPC code. It handles registering and spinning up our service with ease.
// shippy/shippy-service-consignment/main.go

package main

import (
	"log"

	// Import the generated protobuf code
	"context"

	pb "github.com/EwanValentine/shippy/shippy-service-consignment/proto/consignment"
	"github.com/micro/go-micro/v2"
)

type repository interface {
	Create(*pb.Consignment) (*pb.Consignment, error)
	GetAll() []*pb.Consignment
}

// Repository - Dummy repository, this simulates the use of a datastore
// of some kind. We'll replace this with a real implementation later on.
type Repository struct {
	consignments []*pb.Consignment
}

func (repo *Repository) Create(consignment *pb.Consignment) (*pb.Consignment, error) {
	updated := append(repo.consignments, consignment)
	repo.consignments = updated
	return consignment, nil
}

func (repo *Repository) GetAll() []*pb.Consignment {
	return repo.consignments
}

// Service should implement all of the methods to satisfy the service
// we defined in our protobuf definition. You can check the interface
// in the generated code itself for the exact method signatures etc
// to give you a better idea.
type consignmentService struct {
	repo repository
}

// CreateConsignment - we created just one method on our service,
// which is a create method, which takes a context and a request as an
// argument, these are handled by the gRPC server.
func (s *consignmentService) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {

	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	// Return matching the `Response` message we created in our
	// protobuf definition.
	res.Created = true
	res.Consignment = consignment
	return nil
}

func (s *consignmentService) GetConsignments(ctx context.Context, req *pb.GetRequest, res *pb.Response) error {
	consignments := s.repo.GetAll()
	res.Consignments = consignments
	return nil
}

func main() {

	repo := &Repository{}

	// Create a new service. Optionally include some options here.
	service := micro.NewService(

		// This name must match the package name given in your protobuf definition
		micro.Name("shippy.service.consignment"),
	)

	// Init will parse the command line flags.
	service.Init()

	// Register service
	if err := pb.RegisterShippingServiceHandler(service.Server(), &consignmentService{repo}); err != nil {
		log.Panic(err)
	}

	// Run the server
	if err := service.Run(); err != nil {
		log.Panic(err)
	}
}

The main changes here are the way in which we instantiate our gRPC server, which has been abstracted neatly behind a micro.NewService() method that handles registering our service, and the service.Run() function, which handles the connection itself. As before, we register our implementation, but this time using a slightly different method.

Finally, we are no longer hard-coding the port. Micro should be configured using environment variables or command line arguments. To set the address, use MICRO_SERVER_ADDRESS=:50051. By default, Micro utilises mDNS (multicast DNS) as the service discovery broker for local use. You wouldn't typically use mDNS for service discovery in production, but we want to avoid having to run something like Consul or etcd locally for the sake of testing. More on this in a later post.
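As an aside, if you'd prefer to set the address in code rather than through the environment, go-micro also accepts it as an option when constructing the service. A small sketch; the environment variable approach is usually a better fit for containers:

service := micro.NewService(
	// This name must match the name clients use to resolve us
	micro.Name("shippy.service.consignment"),

	// Equivalent to MICRO_SERVER_ADDRESS=:50051, though flags and
	// environment variables parsed by service.Init() can still override it
	micro.Address(":50051"),
)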

Now we need to pass in an environment variable when running our container, to define what port to run on:
$ docker run -p 50051:50051 \
      -e MICRO_SERVER_ADDRESS=:50051 \
      shippy-service-consignment

The -e flag allows you to pass environment variables into your Docker container. You must have one flag per variable, for example -e ENV=staging -e DB_HOST=localhost, etc.

Now you will have a Dockerised service with service discovery. So let's update our CLI tool to utilise this:
import (
    ...
    micro "github.com/micro/go-micro/v2"
)

func main() {
    service := micro.NewService(micro.Name("shippy.consignment.cli"))
	service.Init()

	client := pb.NewShippingService("shippy.service.consignment", service.Client())
    ...
}

See here for full file

Here we've imported the Micro libraries for creating clients and replaced our existing connection code with the go-micro client code, which uses service resolution instead of connecting directly to an address.
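For context, here's a condensed sketch of what fills in the '...' above (see the linked file for the full version): the parseFile helper from part one reads consignment.json into the protobuf type, and the resolved client sends it:

// parseFile - reads consignment.json into our protobuf Consignment type
// (requires encoding/json and io/ioutil)
func parseFile(file string) (*pb.Consignment, error) {
	var consignment *pb.Consignment
	data, err := ioutil.ReadFile(file)
	if err != nil {
		return nil, err
	}
	if err := json.Unmarshal(data, &consignment); err != nil {
		return nil, err
	}
	return consignment, nil
}

// ...then inside main(), after creating the client:
consignment, err := parseFile("consignment.json")
if err != nil {
	log.Fatalf("Could not parse file: %v", err)
}

// The client resolves shippy.service.consignment through the
// registry; no address needed
resp, err := client.CreateConsignment(context.Background(), consignment)
if err != nil {
	log.Fatalf("Could not create consignment: %v", err)
}
log.Printf("Created: %t", resp.Created)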

However, if you run this, it won't work. This is because we're now running our service in a Docker container, which has its own mDNS, separate to the host mDNS we are currently using. The easiest way to fix this is to ensure both service and client are running in "dockerland", so that they are both running on the same host and using the same network layer. So let's run our CLI tool in Docker as well.

Now let's create a Dockerfile for our CLI tool:
FROM golang:alpine as builder

RUN apk update && apk upgrade && \
    apk add --no-cache git

RUN mkdir /app
WORKDIR /app

ENV GO111MODULE=on

COPY . .

RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o shippy-cli-consignment


FROM alpine:latest

RUN apk --no-cache add ca-certificates

RUN mkdir /app
WORKDIR /app
ADD consignment.json /app/consignment.json
COPY --from=builder /app/shippy-cli-consignment .

CMD ["./shippy-cli-consignment"]

This is very similar to our service's Dockerfile, except it also pulls in our JSON data file. We can build and run it with:

$ docker build -t shippy-cli-consignment .
$ docker run shippy-cli-consignment

Now when you run this Docker image, ensuring your consignment service image is also running, you should see Created: true, the same as before.

Vessel service

Let's create a second service. We have a consignment service; now we need a vessel service, which will deal with matching a consignment of containers to the vessel best suited to that consignment. In order to make the match, we need to send the weight and the number of containers to our new vessel service, which will then find a vessel capable of handling that consignment.

Now let's create a new project, shippy-service-vessel, following the same set-up and Docker image as our previous service. Create a new proto file, as with our existing service:
$ mkdir -p shippy-service-vessel/proto/vessel/
$ touch shippy-service-vessel/proto/vessel/vessel.proto
Since the protobuf definition is really the core of our domain design, let's start there:
// shippy-service-vessel/proto/vessel/vessel.proto
syntax = "proto3";

package vessel;

service VesselService {
  rpc FindAvailable(Specification) returns (Response) {}
}

message Vessel {
  string id = 1;
  int32 capacity = 2;
  int32 max_weight = 3;
  string name = 4;
  bool available = 5;
  string owner_id = 6;
}

message Specification {
  int32 capacity = 1;
  int32 max_weight = 2;
}

message Response {
  Vessel vessel = 1;
  repeated Vessel vessels = 2;
}

As you can see, this is very similar to our first service. We create a service with a single RPC method called FindAvailable. This takes a Specification type and returns a Response type, which contains either a single Vessel or multiple Vessels, using the repeated field.
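For reference, the micro plugin turns this definition into (roughly) the following handler interface, which our implementation below has to satisfy; a trimmed sketch of the generated code:

// Trimmed sketch of what --micro_out generates from the definition above
type VesselServiceHandler interface {
	FindAvailable(context.Context, *Specification, *Response) error
}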

We can build and run our new service using similar commands as before:

echo "Build proto file"
$ protoc --proto_path=. --go_out=. --micro_out=. \
		proto/vessel/vessel.proto

echo "Build docker image"
$ docker build -t shippy-service-vessel .

echo "Run docker image"
$ docker run shippy-service-vessel

The only difference here is we're ensuring our new service doesn't run on the same host port as our existing service.

Finally, we can start on our implementation:
// shippy-service-vessel/main.go
package main

import (
	"context"
	"errors"
	"log"

	pb "github.com/<YourUsername>/shippy/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro/v2"
)

type Repository interface {
	FindAvailable(*pb.Specification) (*pb.Vessel, error)
}

type VesselRepository struct {
	vessels []*pb.Vessel
}

// FindAvailable - checks a specification against a map of vessels,
// if capacity and max weight are below a vessels capacity and max weight,
// then return that vessel.
func (repo *VesselRepository) FindAvailable(spec *pb.Specification) (*pb.Vessel, error) {
	for _, vessel := range repo.vessels {
		if spec.Capacity <= vessel.Capacity && spec.MaxWeight <= vessel.MaxWeight {
			return vessel, nil
		}
	}
	return nil, errors.New("no vessel found by that spec")
}

// Our grpc service handler
type vesselService struct {
	repo Repository
}

func (s *vesselService) FindAvailable(ctx context.Context, req *pb.Specification, res *pb.Response) error {

	// Find the next available vessel
	vessel, err := s.repo.FindAvailable(req)
	if err != nil {
		return err
	}

	// Set the vessel as part of the response message type
	res.Vessel = vessel
	return nil
}

func main() {
	vessels := []*pb.Vessel{
		&pb.Vessel{Id: "vessel001", Name: "Boaty McBoatface", MaxWeight: 200000, Capacity: 500},
	}
	repo := &VesselRepository{vessels}

	service := micro.NewService(
		micro.Name("shippy.service.vessel"),
	)

	service.Init()

	// Register our implementation with the service
	if err := pb.RegisterVesselServiceHandler(service.Server(), &vesselService{repo}); err != nil {
		log.Panic(err)
	}

	if err := service.Run(); err != nil {
		log.Panic(err)
	}
}

Now let's get to the interesting part. When we create a consignment, we need to alter our consignment service to call our new vessel service, find an available vessel, and update the vessel_id in the created consignment. Here's the updated file in brief (the repository code and the GetConsignments handler are unchanged from earlier, so they're elided here):
// shippy-service-consignment/main.go
package main

import (
	"context"
	"log"

	// Import the generated protobuf code
	pb "github.com/EwanValentine/shippy/shippy-service-consignment/proto/consignment"
	vesselProto "github.com/EwanValentine/shippy/shippy-service-vessel/proto/vessel"
	"github.com/micro/go-micro/v2"
)

...

type consignmentService struct {
	repo         repository
	vesselClient vesselProto.VesselService
}

func (s *consignmentService) CreateConsignment(ctx context.Context, req *pb.Consignment, res *pb.Response) error {

	// Here we call a client instance of our vessel service with our
	// consignment weight, and the amount of containers as the capacity value
	vesselResponse, err := s.vesselClient.FindAvailable(ctx, &vesselProto.Specification{
		MaxWeight: req.Weight,
		Capacity:  int32(len(req.Containers)),
	})
	if err != nil {
		return err
	}
	log.Printf("Found vessel: %s\n", vesselResponse.Vessel.Name)

	// Set the VesselId to the vessel we got back from the vessel service
	req.VesselId = vesselResponse.Vessel.Id

	// Save our consignment
	consignment, err := s.repo.Create(req)
	if err != nil {
		return err
	}

	res.Created = true
	res.Consignment = consignment
	return nil
}

func main() {

	repo := &Repository{}

	service := micro.NewService(
		micro.Name("shippy.service.consignment"),
	)

	service.Init()

	// Create a client instance for the vessel service, resolved by name
	// through the registry, just as our CLI resolves this service
	vesselClient := vesselProto.NewVesselService("shippy.service.vessel", service.Client())

	// Register our handler, now with the vessel client injected
	if err := pb.RegisterShippingServiceHandler(service.Server(), &consignmentService{repo, vesselClient}); err != nil {
		log.Panic(err)
	}

	if err := service.Run(); err != nil {
		log.Panic(err)
	}
}

Here we've created a client instance for our vessel service, which allows us to use the service name, i.e. shippy.service.vessel, to call the vessel service as a client and interact with its methods. In this case, just the one method (FindAvailable). We send our consignment weight, along with the number of containers we want to ship, as a specification to the vessel service, which then returns an appropriate vessel.

Update the shippy-cli-consignment/consignment.json file and remove the hardcoded vessel_id; we want to confirm our vessel service is providing one. Let's also add a few more containers and up the weight. For example:
{
  "description": "This is a test consignment",
  "weight": 55000,
  "containers": [
    { "customer_id": "cust001", "user_id": "user001", "origin": "Manchester, United Kingdom" },
    { "customer_id": "cust002", "user_id": "user001", "origin": "Derby, United Kingdom" },
    { "customer_id": "cust005", "user_id": "user001", "origin": "Sheffield, United Kingdom" }
  ]
}

Repository for this tutorial

Now rebuild and re-run your Docker images. You should see a response with a list of created consignments, and in your consignments you should now see a vessel_id has been set.

So there we have it: two inter-connected microservices and a command-line interface! In the next part of the series, we will look at persisting some of this data using MongoDB. We will also add a third service, and use docker-compose to manage our growing ecosystem of containers locally.

If you are finding this series useful, and you use an ad-blocker (who can blame you), please consider chucking me a couple of quid for my time and effort. Cheers! https://monzo.me/ewanvalentine

Or, sponsor me on Patreon to support more content like this.

Accolades: Docker Newsletter (22nd November 2017).


About Joyk


Aggregate valuable and interesting links.
Joyk means Joy of geeK