Upgrade: Level Up Resource Deployment on AWS


Upgrade is a series of short tutorials describing how to level up your agility and skill with AWS/Azure and Python. In each tutorial, we lead you from entry-level practices to more advanced techniques that can help improve your products and processes in the cloud.

Just like physical infrastructure, cloud infrastructure benefits from specialized tools to build and maintain it.

The first time I used AWS, I was amazed at the power I suddenly possessed. I could sign into the AWS console, click several buttons, and then an entire (t2.micro!) server was available to me. With several clicks of a button, I could configure and assemble several AWS services into a tiny working application.

A few weeks later, I watched over a colleague’s shoulder as he typed a command in a terminal using the AWS command line interface (CLI). With a simple “aws ec2 describe-instances”, he displayed every EC2 instance in a company account, then with a few keystrokes deployed two more. I was dazzled at how efficiently these resources could be spun up.

I discovered CloudFormation soon after. While experimenting with the CLI, I came upon documentation that explained how with CloudFormation, a developer could write formatted JSON or YAML that would instruct AWS to deploy (or redeploy) an entire architecture, the kind I was building slowly by hand in the console or with the CLI.

Since then I’ve been introduced to and use Terraform, a cloud-agnostic peer of CloudFormation, to deploy infrastructure from configuration files, as well as Pulumi and the AWS Cloud Development Kit (CDK), both infrastructure-as-code tools that allow development in common languages like Python and TypeScript. I’ve also regularly used the AWS Python SDK, boto3, to work with AWS resources programmatically, and the low-level AWS API to send specific provisioning requests.

I had assumed that there were only a couple of defined ways to provision cloud resources, but nothing could be further from the truth.


My journey using AWS has been filled with pleasant surprises like these. I would start with a basic understanding and then incrementally discover the more efficient, more abstracted, and more managed — all in all, better — ways of doing things. This tutorial will lead you through that exact journey.

In this tutorial, we will provision basic architecture using the following methods:

Level 1: Manual deployment in the console

Level 2: Deployment using the command line

Level 3a: Infrastructure-as-Code (IaC) deployment

  • using CloudFormation (console and CLI)
  • using Terraform

Level 3b: Programmatic Deployment using an SDK, Boto3

Level 3c: Next-Gen IaC Deployment

  • using Pulumi
  • using the AWS Cloud Development Kit (CDK)

Extra credit: Low-level API requests (GET and POST)

Each of these methods has its own advantages and disadvantages. For instance, the console, with its full user interface and prompts, can be a better explanatory tool than Terraform, while Terraform is better for repeatedly deploying and managing large-scale infrastructure. The table below provides a brief synopsis comparing the tools we will investigate. We will discuss some of these considerations in greater detail later in the article:

There are many methods of provisioning resources on AWS. Which one best suits your use case?

For the sake of this tutorial, we will say that our goal is to deploy and maintain infrastructure to support modern, data-driven applications. Once you’ve graduated from early console-based tutorials from AWS or A Cloud Guru, the task of organizing all your AWS resources becomes increasingly complex. We need to level up.

So without further ado, here we go.

The Scenario

We will focus on the techniques rather than the specific infrastructure for this tutorial. In each level, we will deploy a single EC2 instance.

Most real-world infrastructure will be much more complex. We will point to additional documentation along the way to help you with your own use cases.

Prerequisites

This tutorial can be completed with an AWS account using the free tier. You will need:

  • The ability to provision EC2 resources and create IAM resources
  • Login information for the console
  • A user with an access key and secret access key
  • To clone the accompanying GitHub repository
  • For the CLI: download and configure the AWS CLI (instructions)
  • For CloudFormation: permissions to create and deploy stacks (this can be restricted on company accounts)
  • For Terraform: download and install Terraform from HashiCorp (instructions)
  • Optional, for Pulumi: download and install Pulumi (instructions)

Level 1: Manual deployment in the console

The AWS console is an easy way to start provisioning resources. It can be a great way to manage resources for a proof-of-concept, troubleshoot specific issues, monitor the health of infrastructure, or explain structure to a client. It’s also where my provisioning journey began.

Please follow these instructions if you haven’t provisioned an instance before. It’s important to see these steps visually before we try the other, more abstract methods. These instructions are longer than the rest, so if this is old hat to you, feel free to skip ahead to Level 2.

  1. Log in to the console

2. Switch to the us-east-1 region.


3. Navigate to the EC2 landing page

4. Click on “Instances”

5. Click on “Launch Instances”


6. This is Step 1 of the Instance Launch “Wizard”, which leads you through all the decisions needed to launch a new instance. These are the same options that will be available through the CLI or in Terraform — take note of what you see as you click through. Scroll through some of the “AMIs”, or Amazon Machine Images, which are essentially base machine templates with an operating system and preinstalled software, many of them maintained by AWS.


7. Choose the first option (Amazon Linux 2 AMI) and note the AMI id. As of the creation of this article, this is ami-0915bcb5fa77e4892. In future steps, replace this AMI with your own if yours is different.


8. Step 2 of the Launch Wizard is about choosing a size for your instance. To keep us in the free tier, we will select a t2.micro instance, which has one virtual CPU (vCPU) and 1 GiB of memory. This is a small computer; for reference, a new MacBook Pro comes with 8 GB of memory, roughly eight times as much. (What’s the difference between GB and GiB? They measure the same thing, but a GiB is a power of two, 2^30 bytes, while a GB is a power of ten, 10^9 bytes, so 1 GiB ≈ 1.07 GB.)


9. Click “Next: Configure Instance Details”, “Next: Add Storage”, “Next: Add Tags” and then “Next: Configure Security Group”. These steps we’ve skipped allow you to place your instance within your architecture, configure access via SSH, add storage, and tag your resources to organize them better.

10. In the security group screen, we will only allow connections from our own IP address. We want to avoid opening up our instance to the IP address range 0.0.0.0/0, which covers every IP address and is insecure. In the existing SSH rule, change the drop-down from “Custom” to “My IP”. AWS automatically detects your IP address and will only allow SSH traffic from that address.


11. Click “Review and Launch”. Feel free to look through the information you have there. When you’re ready, click “Launch”.


12. Another warning may arise, suggesting that you “select an existing key pair or create a new key pair”. Doing so allows you to log into the instance remotely via SSH and run commands on it (e.g. installing a new package). For the purposes of this tutorial, you can select “Proceed without a key pair” from the drop down, check the checkbox, and “Launch Instances”.

13. Click on the id of the instance (e.g. i-0f39a691987a28d32), or go to the “Instances” tab in the left-hand menu to see the instance launching.

14. After two status checks have completed, you should be set. Congratulations! You have deployed a virtual machine in minutes, a feat that would make a 1989 computer engineer’s jaw drop.


The launch wizard is an excellent place to start. We can see the variety of decisions we have to make, such as “what is the AMI for this instance?” or “should this instance be part of a security group?”. In other deployment methods, these decisions can be opaque; we aren’t prompted to answer them. Similarly, the two warnings we received — about the security group and key pairs — are a convenient reminder that we might not be thinking about certain decisions we are making. Together, these factors make the console an excellent place to play with AWS.

Level 2: Deployment using the command line


The AWS CLI is an everyday tool for the experienced cloud engineer. We will use it to deploy an EC2 instance with one command. Before we do that, we will quickly create a profile. Once you’ve created a user in your personal account or sandbox (or had one provisioned automatically), we want to add a profile, a set of user-specific configurations used to execute commands via the command line. When we use the CLI, the tool uses that user’s credentials to interact with AWS.

1. Set up the AWS CLI (instructions). You will download the CLI tool, ensure you have a user set up with access keys, and run the command “aws configure”, which will lead you through the steps to add a default user profile. This profile authenticates to AWS using the Access Key and Secret Access Key of your user. Note: throughout this tutorial, I use a non-default profile on my account called “academy” (since I am using a sample account through Linux Academy) by passing “--profile academy” with each command. See here for more information on using a non-default profile.
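
For reference, a named profile created this way lives in your AWS credentials file (~/.aws/credentials on macOS/Linux, %UserProfile%\.aws\credentials on Windows). A sketch with placeholder values, showing both a default profile and the “academy” profile used in this tutorial:

```ini
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

[academy]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
```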

2. Once your AWS CLI is set up, you can always verify that it is working with the command “aws” or “aws help”


3. List existing instances with “aws ec2 describe-instances”. If there are no instances, the output should be empty. If there are instances (and there should be, because of the one we launched in Level 1), you will notice a large block of output for each. To reduce this output, we can query for specific information. Try the command aws ec2 describe-instances --query "Reservations[].Instances[].InstanceId"


4. Let’s assemble the command to deploy our instance now. We will use the same configuration we used earlier. The “run-instances” command creates new instances. We will also specify the following:

  • --image-id ami-0915bcb5fa77e4892 (or your own)
  • --count 1
  • --instance-type t2.micro
  • --profile my-user (only needed if you don’t want to use your default CLI profile)

Altogether, the command is “aws ec2 run-instances --image-id ami-0915bcb5fa77e4892 --count 1 --instance-type t2.micro --profile my-user”.
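
Spelled out over multiple lines for easier copy-and-paste (substitute your own AMI id, and drop --profile if you use your default profile):

```bash
# Launch a single t2.micro instance from the Amazon Linux 2 AMI used in Level 1
aws ec2 run-instances \
    --image-id ami-0915bcb5fa77e4892 \
    --count 1 \
    --instance-type t2.micro \
    --profile my-user
```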


5. Run your command. You will see the same long output generated by “aws ec2 describe-instances”


6. Check that the instance deployed properly with the describe-instances command: “aws ec2 describe-instances” with the flag --query "Reservations[].Instances[].InstanceId". This shows us all of the account’s instances, including any recently terminated ones (terminated instances remain visible for a short while).


To filter out terminated instances, let’s add the “instance-state-name” filter. Run aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].InstanceId"


Congratulations! You’ve used the command line to deploy an instance. Once you’ve set up the right configuration and know the exact syntax, the CLI is much faster for deploying a variety of infrastructure. There is a learning curve to using this tool, but it gives the experienced developer easy access to the inner workings of infrastructure without the headache of navigating dozens of browser pages.

For additional AWS instructions on EC2 provisioning consult here. You can also peruse the AWS CLI User Guide. Using the CLI as a beginner will require frequent consultation of the documentation for exact syntax to execute commands. Stay patient!

Level 3a: Infrastructure-as-Code Deployment


The two tools in this section are widely used and well-established methods of maintaining large infrastructures. Their wide use and documentation are some of their biggest advantages. More recent IaC tools — like Pulumi and the Cloud Development Kit (CDK) — have focused on addressing the primary weaknesses of these tools, such as their lack of support for common programming languages and for code-versioning best practices.

Make sure that you have cloned this GitHub repository from the “Prerequisites” above. We will be using code from the “level-3a-cloudFormation” folder.

Using CloudFormation (console and CLI)

CloudFormation is AWS’s configuration-file-based system for maintaining infrastructure. At its most basic, CloudFormation can be thought of as stored CLI commands. Instead of assembling a command each time it is needed, CloudFormation uses static files to send instructions. The features of CloudFormation go deeper, however, allowing us to visualize architecture, make small adjustments, deploy several similar infrastructure sets, and monitor the whole “stack”.
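
The template in the repo isn’t reproduced here, but a minimal CloudFormation template for a single, named EC2 instance looks roughly like the following sketch (the logical resource name is illustrative; the AMI id and Name tag match those used elsewhere in this tutorial):

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal template: one t2.micro EC2 instance",
  "Resources": {
    "MyEC2Instance": {
      "Type": "AWS::EC2::Instance",
      "Properties": {
        "ImageId": "ami-0915bcb5fa77e4892",
        "InstanceType": "t2.micro",
        "Tags": [{ "Key": "Name", "Value": "my-cloud-formation-ec2" }]
      }
    }
  }
}
```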

We can either use the console or the CLI to deploy CloudFormation templates. We will quickly do both.

Console

  1. Search for CloudFormation in the console

2. Click “Create Stack”

3. Choose “Template is ready”


4. Upload the cloudFormationTemplateEC2.json from the GitHub repo


5. Name the stack, e.g. “example-ec2-deployment-stack”


6. Optionally, configure stack options. Feel free to peruse the options that are available to you.

7. Click “Next”

8. Click “Create Stack”

9. The stack should go to “CREATE_IN_PROGRESS”. Wait for a few minutes.


10. Once the status switches to “CREATE_COMPLETE”, the EC2 instance is provisioned. Congratulations! There should be an event for each provisioned resource.


11. Clean up: Click “Delete” in the options for the stack. Confirm the deletion.


Command Line Interface (CLI)

As we saw for provisioning without CloudFormation, the CLI is often quicker and more direct than the console.

  1. Open a terminal
  2. Let’s list existing stacks using the command “aws cloudformation list-stacks”. Just as we saw with the “describe-instances” command earlier, there may be a lot of output that is not useful to us. For example, we will see the deleted stack from the previous step. We can use the --stack-status-filter option to limit the output to just those stacks that have completed creation, “CREATE_COMPLETE”. In the following query, the stack that is listed is one that set up basic IAM roles and other resources in the test account used for this tutorial.

3. Now we create the command to deploy our stack (reference). We will use the same configuration file we used earlier. The “deploy” command creates a new stack. We will also specify the following:

--template-file path/to/template.json

--stack-name example-cli-ec2-deployment-stack

--profile my-user (if you don’t want to use your default AWS CLI profile)
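
Assembled, the deploy command looks like this (the template path assumes you are at the root of the cloned repo; drop --profile to use your default profile):

```bash
# Create (or update) the stack from the template in the repo
aws cloudformation deploy \
    --template-file level-3a-cloudFormation/cloudFormationTemplateEC2.json \
    --stack-name example-cli-ec2-deployment-stack \
    --profile my-user
```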


4. Enter the command and wait for the output that the stack was created “successfully”

5. Let’s see what the stack looks like now. List the stacks again. You should now see the stack we created.


6. We can also find the EC2 instance that we created. As we can see in the template file, the name of the instance is “my-cloud-formation-ec2”. We can use a modification of the describe-instances command to see just the names of our instances:

  • aws ec2 describe-instances --filters "Name=instance-state-name,Values=running" --query "Reservations[].Instances[].Tags[?Key=='Name'].Value[]". Is “my-cloud-formation-ec2” there?

7. As an extra step, let’s see how easy it can be to update resources. Go to the JSON template in the repository and change the name to “my-cli-cloud-formation-ec2”. Then run the same deploy command we ran above.


8. List the names of the EC2 instances as we did above. Do you get the updated name?


9. Clean up: Use the “aws cloudformation delete-stack” command with the option “--stack-name example-cli-ec2-deployment-stack”.

  • Verify by running the “aws cloudformation list-stacks --stack-status-filter CREATE_COMPLETE” command and ensuring that the stack you created is no longer there. (IMPORTANT: you will keep paying for these instances if you do not clean up after yourself)

We’re starting to get a sense of the benefits of configuration-based deployment. After we go through the initial work of creating a template, CloudFormation takes care of managing the resources. We can easily make updates to our resources. If provisioning problems occur, CloudFormation has a robust system for rolling back changes and reporting errors. And if we run the same command many times, we will always achieve the same result.

These are all advantages of what we call a declarative approach to infrastructure. Declarative infrastructure-as-code specifies the end state, and our executor — in this case, the CloudFormation service — does the work of changing the current state to the desired end state. This approach makes it easier for the user to make changes without having to track the state of a large infrastructure.

The converse of declarative commands is imperative commands. The console and CLI commands we ran are examples of imperative commands. These imperative commands give the exact instructions of what to provision, without any context of the current state or the final desired state. While small infrastructure may be easy to manage with imperative commands, declarative infrastructure-as-code makes maintaining large projects simpler.

Using Terraform

If you work with CloudFormation, another Infrastructure as Code tool will come up: Terraform. Terraform is a mature tool that uses the HashiCorp Configuration Language (HCL) to manage resources defined in .tf files, and it is popular for enterprise cloud resource management.

Configuration File

The example.tf file outlines the resources to be created. The variables.tf file defines the variables that we see used in the example.tf file, including the access keys, region, and list of AMIs.

For details on the Terraform configuration file, check out Terraform documentation.
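
The repo’s files aren’t reproduced here, but an example.tf for a single instance looks roughly like this sketch (variable and tag names are assumptions based on the description above, and credentials are assumed to come from your CLI profile or environment):

```hcl
# example.tf (sketch) — provision one t2.micro instance
provider "aws" {
  region = var.region
}

resource "aws_instance" "example" {
  # Pick the AMI for the chosen region from the map defined in variables.tf
  ami           = lookup(var.amis, var.region)
  instance_type = "t2.micro"

  tags = {
    Name = "my-terraform-ec2"
  }
}
```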

1. Open a terminal

2. Change directories to the “level-3a-terraform” directory. For example, cd C:\Users\nathaniel.larson\Desktop\Repos\upgrade-tutorial-aws-deployment\level-3a-terraform

3. Run the initialization command: “terraform init”. This downloads the AWS provider into a hidden .terraform directory. Later, when we apply our configuration, a tfstate file will be created; this is how Terraform monitors the infrastructure and “remembers” the cloud resources it created. Do not edit it by hand.


4. Since we already have our Terraform files prepared, we can go directly to provisioning our infrastructure. As the initialization output suggests, run “terraform plan” to see what Terraform would do if you deploy. It should give detailed output describing each resource to be provisioned. Take a moment to review this output.


5. Let’s provision! Run “terraform apply” to apply the changes that you’ve planned. You’ll notice that the apply command runs an implicit “terraform plan” and requires your approval (i.e. “yes”) before provisioning. These protections are useful, especially on larger projects. However, they are only valuable if you review the changes mentioned. Again, scroll through the plan.


6. Enter “yes”

7. Wait for the “Apply complete!” status. Congratulations!


8. Clean Up: Cleaning up Terraform is simple. Run “terraform destroy” and then approve with “yes”.

You’ve now seen two common systems for configuration-based IaC management, and they have much in common. You will notice from the simplicity of the Terraform commands that more of the information lives in the configuration itself. Because Terraform has a full configuration language built around it, we get some additional flexibility when specifying configurations.

Level 3b: Programmatic deployment using boto3


There are instances in which you might want to query or provision resources within the context of another application. Instead of embedding a CLI command or shelling out to Terraform or CloudFormation, you can use one of the Software Development Kits (SDKs) that AWS maintains for common programming languages. In our case, we will examine the AWS SDK for Python, Boto3. We’ll use the same GitHub repository as above.

1. Install the SDK by running “pip install boto3”

2. Change directories to the “level-3b-boto3” directory. For example, cd C:\Users\nathaniel.larson\Desktop\Repos\upgrade-tutorial-aws-deployment\level-3b-boto3

3. The code for provisioning our newest instance is in the createEc2withBoto3.py file. Take a look at the code.

4. Choose an option for creating our EC2 client. We can authenticate using a profile or by copying and pasting our access key and secret access key directly.
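
The repo’s script isn’t reproduced here, but the core of such a script looks roughly like this sketch (the profile name is the one used throughout this tutorial, and the Name tag is illustrative; swap in your own values or omit the profile to use your default):

```python
# createEc2withBoto3.py (sketch) — launch one t2.micro instance with boto3
import boto3

# Option 1: authenticate with a named CLI profile
session = boto3.Session(profile_name="academy", region_name="us-east-1")
ec2 = session.client("ec2")

# Option 2 (not recommended): pass keys directly
# ec2 = boto3.client(
#     "ec2",
#     region_name="us-east-1",
#     aws_access_key_id="YOUR_ACCESS_KEY",
#     aws_secret_access_key="YOUR_SECRET_KEY",
# )

# Launch the instance, mirroring the console and CLI examples above
response = ec2.run_instances(
    ImageId="ami-0915bcb5fa77e4892",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[
        {"ResourceType": "instance", "Tags": [{"Key": "Name", "Value": "my-boto3-ec2"}]}
    ],
)

print("Launched:", response["Instances"][0]["InstanceId"])
```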


5. Run the code in the terminal: “python createEc2withBoto3.py”


6. Clean up: Query the existing instances by running “python deleteEC2withBoto3.py”. If the list looks correct, enter “y”. NOTE: the confirmation prompt here is user-created. The boto3 SDK does not require confirmation, even when performing drastic actions, so be careful when using this functionality.


Level 3c: Next-Gen Infrastructure-as-Code


Congratulations on completing the walkthroughs included in this article. You may have enough knowledge to get started with provisioning resources for your next project, but you should also consider some of the newer IaC deployment tools that are available. Among them, Pulumi and the AWS CDK are well-known and growing because of their emphasis on usability. Both offer sensible defaults and support for many common programming languages, allowing for easy integration into your existing software frameworks.

Pulumi

Pulumi is a peer of CloudFormation and Terraform. It has a smaller footprint and less documentation than the other two mature tools. However, it has the major advantage of working with more common languages. Instead of JSON or YAML templates with CloudFormation or HCL with Terraform, Pulumi allows its users to write TypeScript, Python, .NET languages, and others to specify resources. Pulumi’s backend state-management service can be expensive, but depending on the project it can be well worth the investment.

For a tutorial on deploying an AWS EC2 instance using Pulumi, see here.
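
As a taste, a minimal Pulumi program in Python (a sketch, not the linked tutorial’s exact code) might look like the following; you would run it with “pulumi up” inside an initialized Pulumi project:

```python
# __main__.py (sketch) — one t2.micro instance, mirroring the earlier examples
import pulumi
import pulumi_aws as aws

instance = aws.ec2.Instance(
    "upgrade-tutorial-instance",
    ami="ami-0915bcb5fa77e4892",  # Amazon Linux 2 in us-east-1 at the time of writing
    instance_type="t2.micro",
    tags={"Name": "my-pulumi-ec2"},
)

# Expose the instance id as a stack output
pulumi.export("instance_id", instance.id)
```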

The AWS Cloud Development Kit

The AWS Cloud Development Kit (CDK) is an Infrastructure-as-Code tool, released in July 2019, that uses common coding languages. The CDK uses high-level “constructs” to define AWS resources and automatically synthesizes CloudFormation templates to provision them. The CDK is a powerful abstraction that takes some of the advantages of Pulumi — using common coding languages and ease-of-use — and makes them free to use and AWS-native.

For a tutorial on deploying an AWS EC2 instance using the CDK, reference the CDK Examples GitHub here.
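
Again as a sketch rather than the linked example’s exact code, a CDK app in Python (CDK v2-style imports; the stack and construct names are illustrative) might look like this; running “cdk deploy” synthesizes the CloudFormation template and provisions the stack:

```python
# app.py (sketch) — one t2.micro instance defined with CDK constructs
import aws_cdk as cdk
from aws_cdk import aws_ec2 as ec2
from constructs import Construct

class Ec2Stack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # The Instance construct needs a VPC; here we create a small new one
        # (looking up an existing or default VPC is also possible)
        vpc = ec2.Vpc(self, "TutorialVpc", max_azs=1)

        ec2.Instance(
            self,
            "MyCdkInstance",
            vpc=vpc,
            instance_type=ec2.InstanceType("t2.micro"),
            machine_image=ec2.MachineImage.latest_amazon_linux(),
        )

app = cdk.App()
Ec2Stack(app, "example-cdk-ec2-stack")
app.synth()
```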

Extra credit: Low-level API requests (GET and POST)

Are you ready for an extra challenge? Feel free to explore the “extra-RestApi” folder in the GitHub repository.

There are situations in which boto3 will not meet our needs. While it is rare, you may find yourself in a position where low-level API requests are necessary to provision resources. Feel free to explore the AWS documentation on how to construct these HTTPS REST API requests here, or run the files in the “extra-RestApi” folder in the GitHub repository.
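
To give a sense of what this involves, the sketch below signs a read-only DescribeInstances call to the EC2 Query API with Signature Version 4, borrowing botocore only for credential loading and request signing. Treat it as an illustration rather than the repo’s code; the endpoint and API version are the standard ones for us-east-1:

```python
# Sketch: call the EC2 Query API directly with a SigV4-signed GET request
import urllib.parse
import urllib.request

import botocore.session
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest

region = "us-east-1"
params = {"Action": "DescribeInstances", "Version": "2016-11-15"}
url = f"https://ec2.{region}.amazonaws.com/?" + urllib.parse.urlencode(params)

# Load credentials the same way the CLI and boto3 do, then sign the request
credentials = botocore.session.Session().get_credentials().get_frozen_credentials()
request = AWSRequest(method="GET", url=url)
SigV4Auth(credentials, "ec2", region).add_auth(request)

# Send the signed request with the generated Authorization and date headers
signed = urllib.request.Request(url, headers=dict(request.headers))
with urllib.request.urlopen(signed) as response:
    print(response.read().decode())  # XML listing of reservations and instances
```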

Conclusion

The cloud is all about growth. It’s about the ability to start quickly, develop innovative new systems, and transform how we conduct business. We need to be able to keep up with the new tools and practices that the cloud has enabled. In this article, we took a slice of the cloud development process — provisioning resources — and examined various approaches. As we have seen, each approach has its advantages and disadvantages, and leveling up from more rudimentary techniques can provide you and your team incredible benefits in terms of robustness, flexibility, and maintainability.

Be sure to keep an eye out for new developments, and good luck on your cloud journey.
