How To Deploy Multiple Environments in Your Terraform Project Without Duplicating Code

Tutorial


Terraform, Infrastructure

The author selected the Free and Open Source Fund to receive a donation as part of the Write for DOnations program.

Introduction

Terraform offers advanced features that become increasingly useful as your project grows in size and complexity. It’s possible to alleviate the cost of maintaining complex infrastructure definitions for multiple environments by structuring your code to minimize repetitions and introducing tool-assisted workflows for easier testing and deployment.

Terraform associates a state with a backend, which determines where and how state is stored and retrieved. Every state has only one backend and is tied to an infrastructure configuration. Certain backends, such as local or s3, may contain multiple states. In that case, each pairing of state and infrastructure configuration within the backend describes a workspace. Workspaces allow you to deploy multiple distinct instances of the same infrastructure configuration without storing them in separate backends.
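For example, the s3 backend can hold the states of several workspaces in one bucket, with each workspace's state file stored under a separate key prefix. The following is a minimal sketch of such a backend block; the bucket name and region are hypothetical placeholders, and this tutorial continues to use the default local backend:

backend.tf (example)
terraform {
  backend "s3" {
    bucket = "example-terraform-state" # hypothetical bucket name
    key    = "terraform.tfstate"
    region = "us-east-1"               # hypothetical region
  }
}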

In this tutorial, you’ll first deploy multiple infrastructure instances using different workspaces. You’ll then deploy a stateful resource, which, in this tutorial, will be a DigitalOcean Volume. Finally, you’ll reference pre-made modules from the Terraform Registry, which you can use to supplement your own.

Prerequisites

  • A DigitalOcean Personal Access Token, which you can create via the DigitalOcean Control Panel. You can find instructions for this in the How to Generate a Personal Access Token tutorial.
  • Terraform installed on your local machine and a project set up with the DO provider. Complete Step 1 and Step 2 of the How To Use Terraform with DigitalOcean tutorial, and be sure to name the project folder terraform-advanced, instead of loadbalance. During Step 2, do not include the pvt_key variable and the SSH key resource.

Note: We have specifically tested this tutorial using Terraform 0.13.
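For reference, the provider configuration you created in the prerequisite tutorial will look roughly like the following sketch, assuming you kept the do_token variable name from that tutorial:

provider.tf
terraform {
  required_providers {
    digitalocean = {
      source = "digitalocean/digitalocean"
    }
  }
}

variable "do_token" {}

provider "digitalocean" {
  token = var.do_token
}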

Deploying Multiple Infrastructure Instances Using Workspaces

Multiple workspaces are useful when you want to deploy or test a modified version of your main infrastructure without creating a separate project and setting up authentication keys again. Once you have developed and tested a feature using the separate state, you can incorporate the new code into the main workspace and possibly delete the additional state. When you initialize a Terraform project, regardless of the backend, Terraform creates a workspace called default. It is always present and you can never delete it.

However, multiple workspaces are not a suitable solution for creating multiple environments, such as staging and production. This is because workspaces only track the state; they do not store the code or its modifications.

Since workspaces do not track the actual code, you should manage the code separation between multiple workspaces at the version control (VCS) level by matching them to their infrastructure variants. How you achieve this depends on the VCS tool itself; for example, in Git, branches would be a fitting abstraction. To make it easier to manage the code for multiple environments, you can break it up into reusable modules, so that you avoid repeating similar code for each environment.
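For example, with Git you could keep one branch per infrastructure variant and pair each branch with a workspace of the same name. A sketch of that workflow, with illustrative branch and workspace names:

git checkout -b staging            # branch holding the staging variant of the code
terraform workspace new staging    # separate state for the staging deployment
terraform apply -var "do_token=${DO_PAT}"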

Deploying Resources in Workspaces

You’ll now create a project that deploys a Droplet, which you’ll apply from multiple workspaces.

You’ll store the Droplet definition in a file called droplets.tf.

Assuming you’re in the terraform-advanced directory, create and open it for editing by running:

nano droplets.tf  

Add the following lines:

droplets.tf
resource "digitalocean_droplet" "web" {
  image  = "ubuntu-18-04-x64"
  name   = "web-${terraform.workspace}"
  region = "fra1"
  size   = "s-1vcpu-1gb"
}
 

This definition will create a Droplet running Ubuntu 18.04 with one CPU core and 1 GB RAM in the fra1 region. Its name will contain the name of the current workspace it is deployed from. When you’re done, save and close the file.
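Because terraform.workspace is an ordinary string value, you can also use it to vary other arguments per workspace. For example, the following sketch (not part of this tutorial's configuration) gives the default workspace a larger Droplet than any test workspace:

size = terraform.workspace == "default" ? "s-2vcpu-2gb" : "s-1vcpu-1gb"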

Apply the project for Terraform to run its actions with:

terraform apply -var "do_token=${DO_PAT}"  

Your output will be similar to the following:

Output
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-default"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

Enter yes when prompted to deploy the Droplet in the default workspace.

The name of the Droplet will be web-default, because the workspace you start with is called default. You can list the workspaces to confirm that it’s the only one available:

terraform workspace list  

You’ll receive the following output:

Output
* default

The asterisk (*) means that you currently have that workspace selected.

Create and switch to a new workspace called testing, which you’ll use to deploy a different Droplet, by running workspace new:

terraform workspace new testing  

You’ll have output similar to:

Output
Created and switched to workspace "testing"!

You're now on a new, empty workspace. Workspaces isolate their state,
so if you run "terraform plan" Terraform will not see any existing state
for this configuration.

Plan the deployment of the Droplet again by running:

terraform plan -var "do_token=${DO_PAT}"  

The output will be similar to the previous run:

Output
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-testing"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

Plan: 1 to add, 0 to change, 0 to destroy.
...

Notice that Terraform plans to deploy a Droplet called web-testing, which it has named differently from web-default. This is because the default and testing workspaces have separate states and have no knowledge of each other’s resources—even though they stem from the same code.
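If you want to confirm this isolation yourself, you can optionally compare the two states with terraform state list, which only reads the state of the currently selected workspace (remember to switch back to testing afterwards, as shown):

terraform workspace select default
terraform state list    # shows digitalocean_droplet.web from the default state
terraform workspace select testing
terraform state list    # empty until you apply in this workspace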

To confirm that you're in the testing workspace, print the current workspace with workspace show:

terraform workspace show  

The output will be the name of the current workspace:

Output
testing

To delete a workspace, you first need to destroy all its deployed resources. Then, if it’s active, you need to switch to another one using workspace select. Since the testing workspace here is empty, you can switch to default right away:

terraform workspace select default  

You’ll receive output from Terraform confirming the switch:

Output
Switched to workspace "default".

You can then delete it by running workspace delete:

terraform workspace delete testing  

Terraform will then perform the deletion:

Output
Deleted workspace "testing"!

You can destroy the Droplet you’ve deployed in the default workspace by running:

terraform destroy -var "do_token=${DO_PAT}"  

Enter yes when prompted to finish the process.

In this section, you’ve worked in multiple Terraform workspaces. In the next section, you’ll deploy a stateful resource.

Deploying Stateful Resources

Stateless resources do not store data, so you can create and replace them quickly, because they are not unique. Stateful resources, on the other hand, contain data that is unique or not simply re-creatable; therefore, they require persistent data storage.

Since you may end up destroying such resources, or because multiple resources may require their data, it’s best to store that data in a separate entity, such as a DigitalOcean Volume.

Volumes are objects that you can attach to Droplets (servers), but are separate from them, and provide additional storage space. In this step, you’ll define the Volume and connect it to a Droplet in droplets.tf.

Open it for editing:

nano droplets.tf  

Add the following lines:

droplets.tf
resource "digitalocean_droplet" "web" {
  image  = "ubuntu-18-04-x64"
  name   = "web-${terraform.workspace}"
  region = "fra1"
  size   = "s-1vcpu-1gb"
}

resource "digitalocean_volume" "volume" {
  region                  = "fra1"
  name                    = "new-volume"
  size                    = 10
  initial_filesystem_type = "ext4"
  description             = "New Volume for Droplet"
}

resource "digitalocean_volume_attachment" "volume_attachment" {
  droplet_id = digitalocean_droplet.web.id
  volume_id  = digitalocean_volume.volume.id
}
 

Here you define two new resources, the Volume itself and a Volume attachment. The Volume will be 10GB, formatted as ext4, called new-volume, and located in the same region as the Droplet. To connect the Volume to the Droplet, since they are separate entities, you define a Volume attachment object. volume_attachment takes the Droplet and Volume IDs and instructs the DigitalOcean cloud to make the Volume available to the Droplet as a disk device.

When you’re done, save and close the file.

Plan this configuration by running:

terraform plan -var "do_token=${DO_PAT}"  

Terraform will plan the following actions:

Output
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # digitalocean_droplet.web will be created
  + resource "digitalocean_droplet" "web" {
      + backups              = false
      + created_at           = (known after apply)
      + disk                 = (known after apply)
      + id                   = (known after apply)
      + image                = "ubuntu-18-04-x64"
      + ipv4_address         = (known after apply)
      + ipv4_address_private = (known after apply)
      + ipv6                 = false
      + ipv6_address         = (known after apply)
      + ipv6_address_private = (known after apply)
      + locked               = (known after apply)
      + memory               = (known after apply)
      + monitoring           = false
      + name                 = "web-default"
      + price_hourly         = (known after apply)
      + price_monthly        = (known after apply)
      + private_networking   = (known after apply)
      + region               = "fra1"
      + resize_disk          = true
      + size                 = "s-1vcpu-1gb"
      + status               = (known after apply)
      + urn                  = (known after apply)
      + vcpus                = (known after apply)
      + volume_ids           = (known after apply)
      + vpc_uuid             = (known after apply)
    }

  # digitalocean_volume.volume will be created
  + resource "digitalocean_volume" "volume" {
      + description             = "New Volume for Droplet"
      + droplet_ids             = (known after apply)
      + filesystem_label        = (known after apply)
      + filesystem_type         = (known after apply)
      + id                      = (known after apply)
      + initial_filesystem_type = "ext4"
      + name                    = "new-volume"
      + region                  = "fra1"
      + size                    = 10
      + urn                     = (known after apply)
    }

  # digitalocean_volume_attachment.volume_attachment will be created
  + resource "digitalocean_volume_attachment" "volume_attachment" {
      + droplet_id = (known after apply)
      + id         = (known after apply)
      + volume_id  = (known after apply)
    }

Plan: 3 to add, 0 to change, 0 to destroy.
...

The output details that Terraform would create a Droplet, a Volume, and a Volume attachment, which connects the Volume to the Droplet.
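If you would also like Terraform to report an identifier for the Volume after an apply, you could optionally add an output block to droplets.tf. The following is a minimal sketch using the urn attribute that appears in the plan output above; the output name volume_urn is arbitrary:

output "volume_urn" {
  description = "Uniform resource name of the attached Volume"
  value       = digitalocean_volume.volume.urn
}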

You’ve now defined and connected a Volume (a stateful resource) to a Droplet. In the next section, you’ll review public, pre-made Terraform modules that you can incorporate in your project.

Referencing Pre-made Modules

Aside from creating your own custom modules for your projects, you can also use pre-made modules and providers from other developers, which are publicly available at the Terraform Registry.

In the modules section, you can search the database of available modules and sort by provider to find the module with the functionality you need. Once you’ve found one, you can read its description, which lists the inputs and outputs the module provides, as well as its external module and provider dependencies.

[Figure: Terraform Registry page for the SSH key module]

You’ll now add the DigitalOcean SSH key module to your project. You’ll store the code separate from existing definitions in a file called ssh-key.tf. Create and open it for editing by running:

nano ssh-key.tf  

Add the following lines:

ssh-key.tf
module "ssh-key" {
  source         = "clouddrove/ssh-key/digitalocean"
  key_path       = "~/.ssh/id_rsa.pub"
  key_name       = "new-ssh-key"
  enable_ssh_key = true
}
 

This code defines an instance of the clouddrove/ssh-key/digitalocean module from the registry and sets some of the parameters it offers. It adds a public SSH key to your account by reading it from ~/.ssh/id_rsa.pub.
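Optionally, you can pin the module to an exact version with the version argument, so that future terraform init runs keep downloading the same code. A sketch pinned to version 0.13.0, which is the version the init output later in this step shows being downloaded:

module "ssh-key" {
  source         = "clouddrove/ssh-key/digitalocean"
  version        = "0.13.0" # pin to the version shown in the init output below
  key_path       = "~/.ssh/id_rsa.pub"
  key_name       = "new-ssh-key"
  enable_ssh_key = true
}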

When you’re done, save and close the file.

Before you plan this code, you must download the referenced module by running:

terraform init  

You’ll receive output similar to the following:

Output
Initializing modules...
Downloading clouddrove/ssh-key/digitalocean 0.13.0 for ssh-key...
- ssh-key in .terraform/modules/ssh-key

Initializing the backend...

Initializing provider plugins...
- Using previously-installed digitalocean/digitalocean v1.22.2

Terraform has been successfully initialized!
...

You can now plan the code for the changes:

terraform plan -var "do_token=${DO_PAT}"  

You’ll receive output similar to this:

Output
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.

------------------------------------------------------------------------

An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
  + create

Terraform will perform the following actions:
...
  # module.ssh-key.digitalocean_ssh_key.default[0] will be created
  + resource "digitalocean_ssh_key" "default" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "devops"
      + public_key  = "ssh-rsa ... demo@clouddrove"
    }

Plan: 4 to add, 0 to change, 0 to destroy.
...

The output shows that you would create the SSH key resource, which means that you downloaded and invoked the module from your code.

Conclusion

Bigger projects can make use of some advanced features Terraform offers to help reduce complexity and make maintenance easier. Workspaces allow you to test new additions to your code without touching the stable main deployments. You can also couple workspaces with a version control system to track code changes. Using pre-made modules can also shorten development time, but may incur additional expenses or time in the future if the module becomes obsolete.

For further resources on using Terraform, check out our How To Manage Infrastructure With Terraform series.

