VS Code’s Development Container: A Stepping Stone To IaC

Luke Patterson September 28, 2021 AWS, Development Technologies, DevOps, Tutorial

I’m currently helping a client transition from manual infrastructure provisioning to Collaborative Infrastructure as Code.

I’ve found that Visual Studio Code’s Development Container feature can be a handy stepping stone as part of a larger infrastructure automation strategy.

In this post, I explain how we used Visual Studio Code's Development Container feature as a stepping stone in our long-term effort to achieve Collaborative Infrastructure as Code. This single step gave us a versioned, repeatable working environment and bought us time to determine the next steps toward IaC.

Four General Phases of Operational Maturity

It’s worth noting that this is just one small step in the phased approach to operational maturity and Collaborative IaC. HashiCorp’s “Four Levels of Operational Maturity” is a good outline of what each of these automation phases typically looks like.

Terraform's documentation lists the following levels:

1. Manual

  • A UI or CLI is used to provision infrastructure. This is often the easiest option for one-off tasks, but recurring operations become a significant time drain and are hard to track and change.
  • Configuration changes do not leave a traceable history. This makes troubleshooting tough without a record of who made which changes when.
  • Limited or no naming standards are in place.
  • Manual deployments and changes are made to existing infrastructure—which is generally inefficient and prone to human error.

Generally, infrastructure at this stage is hard to reproduce, hard to scale, and hard to audit. As with any initiative, it's good practice to start small to gain expertise: attempt a proof of concept with a small scope and clear goals, such as provisioning infrastructure for a new application on AWS. If that effort goes well, your team has a foundation of understanding from which to make more meaningful changes that complement the small PoC.

2. Semi-automated

  • Terraform defines this level as a mix of at least two of the following practices: manual CLI or GUI processes, scripts, and infrastructure as code.
  • This is faster and more reliable than manual changes and adds efficiency through repeatable recurring operations, but the lack of consistency and versioning can become a real challenge to manage over time.
  • Traceability is limited as varied record-keeping methods are used across the organization.
  • Rollbacks are hard to achieve due to differing record-keeping methods.

The best way to move from Semi-Automated to Infrastructure as Code is to implement foundational practices that ensure consistent and reliable infrastructure, such as the following:

  • Reduce the use of manual, recurring processes (and manual scripts)
  • Ensure there is some type of version control (GitHub, GitLab, Atlassian Bitbucket, etc.) providing a versioned, repeatable working environment
  • Use reusable code modules and configuration units to manage pieces of infrastructure as a single package
  • Adopt a configuration management tool
  • Create standard build architectures to use as guidelines for any future development

These key activities eliminate a great deal of technical complexity, inconsistency, and personnel time requirements.

3. Infrastructure as Code

  • Infrastructure is provisioned using Terraform OSS or CloudFormation to enable scalable, repeatable, and versioned infrastructure.
  • Provisioning is fully automated and consistent, with a lot of reusability throughout the provisioning and deployment processes.
  • Infrastructure configuration details are fully documented (i.e., nothing is siloed in a sysadmin’s head).
  • Source files are stored in version control to record editing history, and, if necessary, roll back to older versions.
  • Some Terraform code is split out into modules, to promote consistent reuse of your organization’s more common architectural patterns.
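
For that last point, here is a minimal sketch of what consuming a shared module can look like. The module name, source path, and inputs below are hypothetical, not from any real codebase:

main.tf file (hypothetical)

# consume a reusable module so common architectural patterns stay consistent
module "network" {
  source     = "./modules/network"

  cidr_block = "10.0.0.0/16"
  env_name   = "staging"
}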

4. Collaborative Infrastructure as Code

  • Users across the organization can safely provision infrastructure without conflicts and with clear access permissions.
  • Expert users within an organization can produce standardized infrastructure templates that allow beginner users to consume and follow company infrastructure best practices.
  • Per-workspace access control helps committers and approvers on workspaces protect production environments.
  • Functional groups that don’t directly write Terraform code have visibility into infrastructure status and changes through Terraform Cloud’s UI.

Determining which level your infrastructure currently operates at helps determine the best method for evolving your provisioning. That said, these phases aren't hard-and-fast rules; I have seen many clients with both manual and semi-automated processes in place. Additionally, different teams in the same company might have different automation processes.

No matter where a client is on the path to Infrastructure as Code, any change that makes the infrastructure more automated is a step in the right direction.

Our Example

In this client's case, provisioning was mostly manual, and scripts were not all version controlled. It wasn't always clear which executables (and which versions) needed to be on the execution path.

This is fairly common in an enterprise environment, where ad-hoc provisioning solutions written by many different individuals build up over the years. There generally isn't a good overall inventory of what is in place. And sometimes a dependency for a script is updated without the knowledge of, or consultation with, the scriptwriter, a problem common with shared "pet" servers.

From this stage in the phased approach, the goal was to reduce the use of manual processes and imperative scripts while moving forward with other practices that make the infrastructure more consistent.

The Visual Studio Code Development Container feature gave us a versioned, repeatable working environment where we could collaborate while the next level of operational maturity was being planned out. Once everything is fully automated, the development container will likely be retired, or possibly kept around only for exploratory tool testing.

How We Implemented Development Container

We created a new git repo containing all the scripts currently in use. Then we built up the Dockerfile containing all the executables necessary to run the scripts.

In our case, we had Terraform, the Kubernetes CLI, the Postgres CLI, the AWS CLI, Python, and a few others. We then extracted all sensitive parameters into a gitignored env file:

.devcontainer/devcontainer.json file

{
  // devcontainer.json is JSONC, so // comments are allowed
  "dockerFile": "Dockerfile", // install the executables our scripts need
  // create an empty gitignored .env on first open (Windows CMD syntax)
  "initializeCommand": "IF NOT EXIST .devcontainer\\.env ECHO # gitignored environmental variables for devcontainer> .devcontainer\\.env",
  "runArgs": [
    "--env-file", "${localWorkspaceFolder}/.devcontainer/.env"
  ],
  ...
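
For reference, here is a minimal sketch of what such a Dockerfile might contain. The base image, tool versions, and install commands below are illustrative assumptions, not the client's actual file:

Dockerfile file (sketch)

# illustrative base image and versions -- pin whatever your scripts actually need
FROM ubuntu:20.04

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    bash-completion curl unzip python3 python3-pip postgresql-client

# Terraform, pinned so every developer gets the same binary
RUN curl -fsSLo terraform.zip https://releases.hashicorp.com/terraform/1.0.7/terraform_1.0.7_linux_amd64.zip \
    && unzip terraform.zip -d /usr/local/bin && rm terraform.zip

# Kubernetes CLI, plus bash completion for exploratory kubectl use
RUN curl -fsSLo /usr/local/bin/kubectl https://dl.k8s.io/release/v1.21.0/bin/linux/amd64/kubectl \
    && chmod +x /usr/local/bin/kubectl \
    && kubectl completion bash > /etc/bash_completion.d/kubectl

# AWS CLI v2
RUN curl -fsSLo awscliv2.zip https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip \
    && unzip awscliv2.zip && ./aws/install && rm -rf aws awscliv2.zip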

Now when developers clone the repo and open it in VS Code, the integrated terminal runs in an environment where all the necessary tools are configured and available. We also included bash autocompletion, which really helps with exploratory kubectl use. VS Code extensions like file- and language-specific assistance and autocomplete can be configured to install automatically.

This is how easy it was to have the extensions we needed automatically installed:

.devcontainer/devcontainer.json file

{
  ...
  "extensions": [
    "hashicorp.terraform",
    "ms-azuretools.vscode-docker",
    "ms-kubernetes-tools.vscode-kubernetes-tools",
    "adamhartford.vscode-base64",
    "codecontemplator.kubeseal",
    "eamodio.gitlens"
  ],
  ...

Also, if you bind to a port in the container, for example with a kubectl proxy command, VS Code automatically binds the host port and allows localhost access from your host OS. This removes some of the logistics overhead of developing inside a container, as you don't need to manually reconfigure the container, expose new ports, and restart it.
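
If you would rather declare a port up front than rely on auto-detection, devcontainer.json also accepts an explicit list. A small example (8001 is just kubectl proxy's default port, used here for illustration):

.devcontainer/devcontainer.json file

{
  ...
  // 8001 is kubectl proxy's default port; adjust to your tools
  "forwardPorts": [8001],
  ...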

There were some gotchas to keep in mind as we built and ran the dev container locally. They weren't related to dev containers so much as to the fact that we were running on Windows, but I'm listing them here because they're easy to forget.

.gitattributes file

# use Linux line endings, since the system reading the files will be Linux
* text=auto eol=lf

Dockerfile file

# if need to copy scripts into container for use inside VS Code terminal
# make sure they are executable even if cloned via Windows
# https://github.com/moby/moby/issues/34819

# note: --chmod takes an octal mode and requires BuildKit (DOCKER_BUILDKIT=1)
COPY --chmod=0755 myscript.sh ./

Time will tell how the licensing changes for Docker Desktop will affect things. For clients unwilling to purchase licenses, hopefully a viable drop-in option will come along that will still allow seamless startup and run/upgrade paths for dev containers. I’ll be checking out how well Podman can help here just in case.

Next Steps Toward Collaborative Infrastructure as Code

The work that went into creating a development environment image will also pay off in the next phase, where a basic level of automation will be implemented.

The very same image used for development, captured by the Dockerfile, could be used as a first draft in the automation system. In this client's case, the next phase will include automation through Terraform in GitLab pipelines before ultimately transitioning to Terraform Cloud.
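
As a rough sketch of that direction, a GitLab pipeline reusing the same image could look something like the following. The registry path, stages, and commands are assumptions for illustration, not the client's actual pipeline:

.gitlab-ci.yml file (sketch)

# hypothetical registry path for the image built from the dev container Dockerfile
image: registry.example.com/infra/devcontainer:latest

stages:
  - plan
  - apply

plan:
  stage: plan
  script:
    - terraform init
    - terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - plan.tfplan

apply:
  stage: apply
  when: manual   # require a human to approve infrastructure changes
  script:
    - terraform init
    - terraform apply plan.tfplan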

Final Thoughts

I recommend checking out the Development Container feature of VS Code. It helps provide a versioned, repeatable development experience and can be used as part of an overall automation strategy.

