
9 Reasons DevOps Is Better With Docker and Kubernetes

One of the main challenges companies face is a long time to market, which usually happens when the development process slows down. When deploying applications, most teams run into friction between Dev and Ops, because these two departments work on the same application but in completely different ways.

Wouldn’t it be nice if they worked together without misunderstandings and shortened the time to market? I’ve assembled this list of the advantages that DevOps with Docker and Kubernetes can provide compared to a traditional DevOps approach.

The Traditional Approach to DevOps

In a traditional DevOps approach, developers write code and commit it to a Git repository. They check how it works locally and in a development environment, then launch a build of the code with a CI tool such as Jenkins, which also runs functional tests during the build. If the tests pass, the changes are merged into a release branch. Further testing happens in a staging environment, and at the end of the sprint the release is issued. System administrators prepare scripts for deploying the application to production using Ansible, Puppet, or Chef, and finally roll out the changes to production (updating the version). A sketch of such a deployment script is shown below.
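
To make that hand-off concrete, here is a minimal sketch of the kind of deployment playbook a system administrator might maintain in this model. The host group, artifact path, and service name are illustrative assumptions, not details from the article.

```yaml
# Hypothetical Ansible playbook for the traditional release flow described above.
# The host group, file paths, and service name are illustrative assumptions.
- hosts: app_servers
  become: true
  tasks:
    - name: Copy the new application build to the servers
      ansible.builtin.copy:
        src: build/myapp-1.2.3.jar
        dest: /opt/myapp/myapp.jar

    - name: Restart the service so the new version is picked up
      ansible.builtin.systemd:
        name: myapp
        state: restarted
```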

Problems of the Traditional Approach

  • The first problem is that system administrators and developers use different tools. For example, most developers don’t know how to work with Ansible, Puppet, or Chef. A common outcome is that the task of preparing a release falls on the shoulders of system administrators, yet system administrators often do not understand how the application should work, because the developers are the ones with expertise in that area.

  • The second problem is that development environments are usually updated manually, without any automation. As a result, they are unstable and constantly breaking down: changes made by one developer break changes made by another, figuring out what went wrong takes a lot of time, and in the end time to market suffers.

  • The third problem is that development environments can differ significantly from staging and production, and staging can in turn be very different from production. This leads to many difficulties: a release prepared by developers may not work correctly in the staging environment, and even if the tests pass there, issues may appear unexpectedly in production. Meanwhile, rolling back a broken version from production is problematic and far from trivial, even with Ansible, Puppet, or Chef.

  • The fourth problem is that writing Ansible playbooks (or Puppet and Chef manifests) is time-consuming and difficult. It is very easy to lose track of the changes made in them as an application moves from version to version, which results in a high number of mistakes.

Improving the DevOps Approach With Docker

The main advantage of this approach is that both developers and system administrators use the same tool: Docker. Developers create Docker images from Dockerfiles on their local machines at the development stage and run them in a development environment.

The same Docker images are then used by system administrators to update the staging and production environments with Docker. Importantly, Docker containers are never patched when updating to a new version of the software: a new version is represented by a new Docker image and a new container created from it, not a patch applied to the old container.

As a result, you get immutable dev, staging, and production environments, which brings several benefits. First, there is a high level of control over all changes, because every change is made through immutable Docker images and containers, and you can roll back to the previous version at any moment. Second, development, staging, and production become far more similar to each other than with Ansible alone. With Docker, you can be much more confident that a feature that works in the development environment will also work in staging and production.
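
As a minimal sketch of this versioned-image idea, a Compose file can pin the exact image that is promoted unchanged from dev through staging to production. The registry, image name, and tag below are assumptions, not details from the article.

```yaml
# Minimal Docker Compose sketch; the image name and tag are hypothetical.
# A release is a new immutable image tag and a freshly created container,
# never a patch applied to a running container.
services:
  web:
    image: registry.example.com/myapp:1.4.0   # bump the tag (e.g. 1.5.0) to release
    ports:
      - "8080:8080"
```

The same tag is then deployed to dev, staging, and production, which is what keeps the three environments immutable and alike.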

How to Get DevOps Superpowers With Docker and Kubernetes

  • The process of creating an application topology containing multiple interconnected components becomes much easier and more understandable with Docker and Kubernetes.

  • Configuring load balancing becomes much simpler thanks to the built-in Service and Ingress concepts.

  • Thanks to Kubernetes’ built-in Deployments, StatefulSets, and ReplicaSets, rolling updates and blue/green deployments become very easy; a minimal Deployment and Service sketch follows this list.

  • You can run CI/CD using Helm (a chart sketch also follows this list), which is more convenient than working with plain Docker containers, because:

    • Helm charts are more production-ready and stable than individual Docker images. You have probably faced this issue when trying to interconnect different Docker containers into a joint topology, only to fail because those images were not built for such interconnection.

    • It provides you with a high-level template language and a concept of application releases that can be rolled back if needed.

    • Moreover, you can use existing Helm charts as dependencies of your own charts, which lets you build complex topologies from third-party building blocks.

  • Kubernetes supports multi-cloud deployment scenarios (AWS, Google, Hidora, or another hosting provider) out of the box through Federation or service mesh tools.
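
As mentioned in the list above, here is a minimal sketch of a Deployment and the Service that load-balances across its Pods. The names, image, replica count, and ports are illustrative assumptions; changing the image tag and re-applying the manifest is enough to trigger a rolling update.

```yaml
# Minimal Deployment + Service sketch (names, image, and ports are hypothetical).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: registry.example.com/myapp:1.4.0   # bump the tag to roll out a new version
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  selector:
    app: myapp
  ports:
    - port: 80
      targetPort: 8080
```

An Ingress resource can then route external traffic to this Service, and kubectl rollout undo deployment/myapp reverts a bad rollout.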
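
And here is the Helm chart sketch referenced above: a hypothetical Chart.yaml that pulls in a third-party chart as a dependency. The chart name, versions, and repository URL are assumptions for illustration.

```yaml
# Hypothetical Chart.yaml for an application chart that reuses a third-party
# chart as a building block; the name, versions, and repository are assumptions.
apiVersion: v2
name: myapp
version: 0.1.0
dependencies:
  - name: postgresql
    version: "12.x.x"
    repository: https://charts.bitnami.com/bitnami
```

Running helm upgrade --install myapp ./myapp then creates a versioned release, and helm rollback myapp returns to the previous revision if something goes wrong.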

Have you simplified your DevOps processes using Docker and Kubernetes? Please share your experience!

