source link: https://blog.oio.de/2021/09/17/how-provider-agnostic-is-terraform-really-a-multi-provider-approach-for-a-java-application/

How provider-agnostic is Terraform really – a multi-provider approach for a Java application

There are a couple of big cloud providers, according to Gartner’s Magic Quadrant for Cloud Infrastructure and Platform Services. At the time of writing this blog post, those are Amazon with AWS (Amazon Web Services), Microsoft Azure, Google Cloud Platform, Alibaba Cloud, Oracle, IBM and Tencent Cloud. Due to the frequent and ongoing changes in this field, it might be useful not to rely on just one of those big providers, but to take a distributed multi-provider approach when migrating your infrastructure and applications to the cloud.

In this blog post, we will take a look at how you can use multiple providers when applying Terraform as the tool of choice for the infrastructure, since it claims to be provider-independent.

Which way to go

This post is not intended to align strictly with any good or best practices; rather, it is the extended documentation of a test of how easy it is to create infrastructure with an IaC approach using Terraform as a multi-cloud-provider tool, presuming only basic background knowledge.

There are a myriad of ways to get your Java application running on one of the cloud providers’ platforms. If you want to dig deeper into the specific possibilities, you can find the respective documentation linked. Only some services of Azure and AWS are mentioned here, although the other big providers offer similar services.

In a recent blog post by Oliver, you can find an example of how to deploy a containerized Java application on Azure. An even more automated version can be found here.

Now how easy is it to use multiple providers with infrastructure as code when we do not want to focus on just one of them?

For testing purposes, we will use the Container Registry service from Azure as a repository for our Docker images, which we will build from a basic Java Spring Boot application. For the deployment, we will provision components of the AWS Elastic Container Service (ECS) to run our image on AWS.

Prerequisites

Following along with a tutorial that involves provisioning infrastructure on one of the cloud providers’ platforms often requires some up-front effort to get rolling.

Please be warned that you have to register with a credit card, although all the services we are using should be free to use for the first 12 months.

Starting with our Java-Application …

If you already followed along with Oliver’s previous blog post mentioned above, you have a good base for the upcoming steps, since we will be using the same approach: building a Docker image from a basic Java Spring Boot application and later pushing the image to our container registry in Azure, all automated by the Gradle plugin named jib. If not, feel free to take a look at the post or at this repository, where you can find the demo application as well.

We can configure the Gradle plugin as follows in the build.gradle of the project.

jib {
    from {
        image = 'adoptopenjdk:11-jre'
    }
    to {
        image = 'repository/image'
        tags = ['0.1']
    }
    container {
        ports = ['8080']
        creationTime = 'USE_CURRENT_TIMESTAMP'
    }
}
tasks.build.dependsOn tasks.jib

This creates an image with the latest tag as default and an additional image with the tag 0.1. The port 8080 should be remembered, since we have to use it later on in the configuration of the container service in AWS.

Before we can build and push our image with ./gradlew build to the registry in Azure, we should first create the registry. So now we will get to build all the infrastructure we need on Azure and AWS.

… before building the infrastructure with Terraform

With Terraform, multiple providers can be used as plugins that enable the communication between Terraform and platforms like AWS, Azure, Google Cloud Platform, Alibaba Cloud and many more.

To start with Terraform, we first create a directory next to our application directory, called e.g. infrastructure, and create a file named main.tf there for our infrastructure code.

For provisioning two different services from two different providers – like AWS and Azure in our case – we first have to declare the providers inside provider blocks. Additionally, the versions of the providers can be specified in the required_providers block within the top-level terraform block. So we have the following configuration as a start.

main.tf

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.27"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=2.46.0"
    }
  }
  required_version = ">= 0.14.9"
}

# Configure AWS Provider
provider "aws" {
  profile = "default"
  region  = "us-east-1"
}

# Configure Microsoft Azure Provider
provider "azurerm" {
  features {}
}

So far so good, now let’s continue with configuring the container registry in Azure for our image to reside in. For that, we have to connect Terraform to our Azure account, which we can achieve with the installed Azure CLI. There are certainly other, more robust ways to authenticate Terraform against Azure for different purposes. For the sake of simplicity, we will use the CLI here.

az login
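For a non-interactive setup, e.g. on a build server, the azurerm provider could instead be authenticated with a service principal. A minimal sketch, assuming a service principal already exists and replacing the CLI-based provider block from above (all variable names are placeholders):

# Keep the credentials out of the code by declaring them as input variables.
variable "subscription_id" { type = string }
variable "tenant_id"       { type = string }
variable "client_id"       { type = string }

variable "client_secret" {
  type      = string
  sensitive = true
}

# Configure Microsoft Azure Provider with a service principal
provider "azurerm" {
  features {}

  subscription_id = var.subscription_id
  tenant_id       = var.tenant_id
  client_id       = var.client_id
  client_secret   = var.client_secret
}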

Now let’s add the code to build the container registry on Azure. For more details, take a look at the respective documentation here.

resource "azurerm_resource_group" "rg" {
  name     = "terraform-container-registry"
  location = "West Europe"
}

resource "azurerm_container_registry" "acr" {
  name                     = "acr4711"
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = azurerm_resource_group.rg.location
  sku                      = "Standard"
  admin_enabled            = true
}

Within a resource block in HCL (HashiCorp Configuration Language), we can reference other resources and their attributes. All the attributes that can be referenced can be found in the documentation as well.
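For example, we could expose the registry’s login server and admin credentials as Terraform outputs by referencing exactly those attributes. A small sketch based on the acr resource from above (keep in mind that even sensitive outputs end up in the state file, more on that later):

output "acr_login_server" {
  value = azurerm_container_registry.acr.login_server
}

output "acr_admin_username" {
  value = azurerm_container_registry.acr.admin_username
}

# Marked as sensitive so it is not printed to the console by default.
output "acr_admin_password" {
  value     = azurerm_container_registry.acr.admin_password
  sensitive = true
}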

Now that we have poured our infrastructure for the registry into code, let’s continue with the components we need on AWS, where there is a little bit more to do.

First, we need to configure the respective network components, which are needed later on to reach our running container from the internet. The network configuration consists of a VPC (Virtual Private Cloud), subnets, security groups and a load balancer.

## Configure networking
data "aws_vpc" "default" {
  default = true
}

data "aws_subnet_ids" "default" {
  vpc_id = "${data.aws_vpc.default.id}"
}

resource "aws_security_group" "lb" {
  name        = "lb-sg"
  description = "controls access to the Application Load Balancer (ALB)"

  ingress {
    protocol    = "tcp"
    from_port   = 80
    to_port     = 80
    cidr_blocks = ["0.0.0.0/0"]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_security_group" "ecs_tasks" {
  name        = "ecs-tasks-sg"
  description = "allow inbound access from the ALB only"

  ingress {
    protocol        = "tcp"
    from_port       = 8080
    to_port         = 8080
    # Allow inbound traffic only from the load balancer's security group.
    security_groups = [aws_security_group.lb.id]
  }

  egress {
    protocol    = "-1"
    from_port   = 0
    to_port     = 0
    cidr_blocks = ["0.0.0.0/0"]
  }
}

resource "aws_lb" "staging" {
  name               = "alb"
  subnets            = data.aws_subnet_ids.default.ids
  load_balancer_type = "application"
  security_groups    = [aws_security_group.lb.id]
}

resource "aws_lb_listener" "https_forward" {
  load_balancer_arn = aws_lb.staging.arn
  port              = 80
  protocol          = "HTTP"

  default_action {
    type             = "forward"
    target_group_arn = aws_lb_target_group.staging.arn
  }
}

resource "aws_lb_target_group" "staging" {
  name        = "alb-tg"
  port        = 80
  protocol    = "HTTP"
  vpc_id      = data.aws_vpc.default.id
  target_type = "ip"

  health_check {
    healthy_threshold   = "3"
    interval            = "90"
    protocol            = "HTTP"
    matcher             = "200-299"
    timeout             = "20"
    path                = "/"
    unhealthy_threshold = "2"
  }
}

Furthermore, we need a role to allow pulling and running the image.

## Configure roles
data "aws_iam_policy_document" "ecs_task_execution_role" {
  statement {
    sid     = "1"
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type        = "Service"
      identifiers = ["ecs-tasks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "ecs_task_execution_role" {
  name               = "ecs-staging-execution-role"
  assume_role_policy = data.aws_iam_policy_document.ecs_task_execution_role.json
}

resource "aws_iam_role_policy_attachment" "ecs_task_execution_role" {
  role       = aws_iam_role.ecs_task_execution_role.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AmazonECSTaskExecutionRolePolicy"
}

Last but not least, the most important part is to configure the components which are ultimately responsible for running our Docker image on AWS. Those are:

  • a data block containing the definition for the Docker image
  • a task definition describing how the container is to be started within the Elastic Container Service
  • a service that manages tasks
  • a cluster for the service to deploy in
[Figure: AWS container deployment scheme]

## Configure ECS itself
data "template_file" "springboot-app" {
  template = file("./springboot-app.json")
  vars = {
    repository = "acr4711.azurecr.io/4711"
    tag        = "0.1"
  }
}

resource "aws_ecs_task_definition" "service" {
  family                   = "staging"
  network_mode             = "awsvpc"
  execution_role_arn       = aws_iam_role.ecs_task_execution_role.arn
  cpu                      = 256
  memory                   = 2048
  requires_compatibilities = ["FARGATE"]
  container_definitions    = data.template_file.springboot-app.rendered
}

resource "aws_ecs_cluster" "staging" {
  name = "tf-ecs-cluster"
}

resource "aws_ecs_service" "staging" {
  name            = "staging"
  cluster         = aws_ecs_cluster.staging.id
  task_definition = aws_ecs_task_definition.service.arn
  desired_count   = 1
  launch_type     = "FARGATE"

  network_configuration {
    security_groups  = [aws_security_group.ecs_tasks.id]
    subnets          = data.aws_subnet_ids.default.ids
    assign_public_ip = true
  }

  load_balancer {
    target_group_arn = aws_lb_target_group.staging.arn
    container_name   = "springboot-app"
    container_port   = 8080
  }

  depends_on = [aws_lb_listener.https_forward, aws_iam_role_policy_attachment.ecs_task_execution_role]
}

The template file for the container definition could look as follows:

[
  {
    "name": "springboot-app",
    "image": "${repository}:${tag}",
    "repositoryCredentials": {
      "credentialsParameter": "<enter the ARN (Amazon Resources Name)>"
    },
    "essential": true,
    "portMappings": [
      {
        "containerPort": 8080,
        "hostPort": 8080,
        "protocol": "tcp"
      }
    ],
    "cpu": 1,
    "environment": [
      {
        "name": "PORT",
        "value": "8080"
      }
    ],
    "ulimits": [
      {
        "name": "nofile",
        "softLimit": 65536,
        "hardLimit": 65536
      }
    ],
    "mountPoints": [],
    "memory": 2048,
    "volumesFrom": []
  }
]
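As a side note: since Terraform 0.12 there is also a built-in templatefile() function, so the rendering could be done without the separate template provider. A sketch, assuming the template file and values from above:

locals {
  # Renders springboot-app.json with the given variables, like data.template_file above.
  container_definitions = templatefile("${path.module}/springboot-app.json", {
    repository = "acr4711.azurecr.io/4711"
    tag        = "0.1"
  })
}

The rendered string in local.container_definitions could then be passed to container_definitions in the task definition.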

Now we are almost done…

To authenticate at the registry in Azure, we need the respective credentials. We can find them in the terraform.tfstate file, which will be created after an initial run of terraform apply. In a more advanced scenario, these credentials could be extracted and saved to the secret manager of the respective cloud platform. One should, however, always treat the Terraform state files themselves as sensitive data, because even output variables marked as “sensitive” are saved to the state files in cleartext. If you are using Azure, for example, it is recommended to save your Terraform state in a secured storage account.
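A minimal sketch of such a remote state configuration, assuming a storage account and blob container already exist (all names are placeholders for your own resources):

terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstate4711"
    container_name       = "tfstate"
    key                  = "terraform.tfstate"
  }
}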

So finally, let’s get our hands on Terraform. Just move to the directory where you created the main.tf with the contents from above and run

terraform init

You will see that Terraform loads the providers, stores them in a hidden directory .terraform and creates a file .terraform.lock.hcl which contains the exact versions of the providers used at that specific moment. This can be compared to a package-lock.json, if you have already wandered through the lands of JavaScript.

Now let’s apply our configuration to the platforms with

terraform apply

At first, Terraform will list all the changes that will be applied, in this case adding all the resources we defined in main.tf, and will then prompt you for confirmation, which you can answer with yes.

Now Terraform should give you the feedback that it did indeed create all the resources, and additionally we can find our credentials for the registry on Azure in the file called terraform.tfstate. There you should find admin_username and admin_password, which you can use to log in with your local Docker CLI. You can pass the password with the --password option, or enter it at the prompt:

docker login acr4711.azurecr.io --username acr4711

After a quick check in the build.gradle of our Java application that the image is defined with the correct repository, image name and tag, we can finally build and push our containerized Java application to the registry with

./gradlew build

Due to time restrictions on our side, we only followed manual steps to give our components on AWS access to the registry. Alternatively, you can use Terraform to achieve the same result.
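A sketch of what this Terraform alternative could look like, reusing the registry resource from above (the secret name is a placeholder; ECS expects the secret to contain the keys username and password):

resource "aws_secretsmanager_secret" "acr_credentials" {
  name = "acr-credentials"
}

resource "aws_secretsmanager_secret_version" "acr_credentials" {
  secret_id = aws_secretsmanager_secret.acr_credentials.id
  secret_string = jsonencode({
    username = azurerm_container_registry.acr.admin_username
    password = azurerm_container_registry.acr.admin_password
  })
}

The ARN of this secret would then be the value for the credentialsParameter field in the container definition above.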

For the manual way, we have to do the following last steps:

  • create an AWS secret containing the credentials to access the registry in Azure
  • create a policy in AWS for the IAM role that is assigned to the task running the Docker image on ECS
    • go to the roles in the IAM console and open the role we created with Terraform
    • there you should have the option to attach a policy and directly create a new custom one
    • to create the custom policy, you can hand over the policy in JSON format as below and enter the ARN of the secret you created before (you can find the ARN in the Secrets Manager)
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "<ARN of the secret>"
    }
  ]
}
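The same policy could also be expressed in Terraform and attached to the execution role directly, a sketch assuming the secret resource from the snippet further above:

resource "aws_iam_role_policy" "allow_secret_access" {
  name = "allow-acr-secret-access" # placeholder name
  role = aws_iam_role.ecs_task_execution_role.id

  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Effect   = "Allow"
        Action   = "secretsmanager:GetSecretValue"
        Resource = aws_secretsmanager_secret.acr_credentials.arn
      }
    ]
  })
}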

After that, we can go back to the dashboard of the Elastic Container Service. Within the task definitions, we should find the task definition which Terraform created for us and which has the respective role attached, where we just added the policy. Now we can run the task, which will create a container instance for us, and based on our network configuration we can access our Java application via the DNS name of our load balancer, which we can find here:

https://console.aws.amazon.com/ec2/v2/home?region=<your-selected-region>#LoadBalancers:sort=loadBalancerName
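Alternatively, Terraform can print the DNS name right after an apply. A small sketch based on the load balancer resource from above:

output "alb_dns_name" {
  value = aws_lb.staging.dns_name
}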

Conclusion and Further Resources

Terraform enables us to configure resources for both Azure and AWS. Still, it requires some provider-specific steps, for example using the Azure CLI to log in locally before running Terraform, even though that might matter less when a server is used to run the Terraform templates against the cloud platforms. It seems that there is still a lot of potential for automation and integration across the different providers, and I guess that Terraform is working hard to improve the integration and make the usage even more enjoyable.

Short URL for this post: https://blog.oio.de/2nm51
