
Google now allows using gVisor virtualization in Kubernetes Engine

source link: https://www.tuicool.com/articles/Rz6nyev

Editor's note: This is one of several posts in a series on the unique capabilities you can find in Google Kubernetes Engine (GKE) Advanced.

There’s a saying among security experts: containers do not contain. Security researchers have demonstrated vulnerabilities that allow an attacker to compromise a container and gain access to the shared host operating system (OS), also known as “container escape.” For applications that use untrusted code, container escape is a critical part of the threat profile.

At Google Cloud Next ‘19 we announced GKE Sandbox in beta, a new feature in Google Kubernetes Engine (GKE) that increases the security and isolation of your containers by adding an extra layer between your containers and host OS. At general availability, GKE Sandbox will be available as part of the upcoming GKE Advanced, which offers enhanced features to help you build demanding production applications on top of our managed Kubernetes service.

Let’s look at an example of what could happen with a container escape. Say you have a software as a service (SaaS) application that runs machine learning (ML) workloads for users. Imagine that an attacker uploads malicious code that escalates privileges to the host OS and, from there, accesses the models and data of other users’ ML workloads.

GKE Sandbox is based on gVisor, the open-source container sandbox runtime that we released last year. We originally created gVisor to defend against a host compromise when running arbitrary, untrusted code, while still integrating with our container-based infrastructure. And because we use gVisor to increase the security of Google’s own internal workloads, it continuously benefits from our expertise and experience running containers at scale in a security-first environment. We also use gVisor in Google Cloud Platform (GCP) services like the App Engine standard environment, Cloud Functions, Cloud ML Engine, and most recently Cloud Run.

gVisor works by providing an independent operating system kernel to each container. Applications interact with the virtualized environment provided by gVisor's kernel rather than with the host kernel. gVisor also manages and restricts file and network operations, ensuring that there are two isolation layers between the containerized application and the host OS. Because gVisor reduces and restricts the application's interaction with the host kernel, attackers have a smaller attack surface with which to circumvent the container's isolation mechanism.
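To make that extra kernel layer concrete, here is a small, hypothetical check you could run once you have both a regular pod and a sandboxed pod in a cluster (the pod names below are placeholders): the kernel identification reported inside a gVisor sandbox comes from gVisor's user-space kernel rather than from the node's host kernel.

# Pod names are placeholders; substitute pods from your own cluster.
# In a regular pod, this reports the GKE node's host kernel.
kubectl exec regular-pod -- uname -rv

# In a pod running on GKE Sandbox, the same command is answered by
# gVisor's user-space kernel instead of the host kernel.
kubectl exec sandboxed-pod -- uname -rv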

GKE Sandbox takes gVisor, abstracts the internals, and presents it as an easy-to-use service. When you create a pod, simply choose GKE Sandbox and continue to interact with your containers as you normally would—no need to learn a new set of controls or a new mental model.

In addition to limiting potential attacks, GKE Sandbox helps teams running multi-tenant clusters, such as SaaS providers, who often execute unknown or untrusted code. There are many components to multi-tenancy, and technologies like GKE Sandbox take the first step toward delivering more secure multi-tenancy in GKE.

How users are hardening containers with GKE Sandbox

Data refinery creator Descartes Labs applies machine intelligence to massive data sets. “At Descartes Labs, we have a wide range of remote sensing data measuring the Earth and we wanted to enable our users to build unique custom models that deliver value to their organizations,” said Tim Kelton, Co-Founder and Head of SRE, Security, and Cloud Operations at Descartes Labs. “As a multi-tenant SaaS provider, we still wanted to leverage Kubernetes scheduling to achieve cost optimizations, but build additional security layers on top of users’ individual workloads. GKE Sandbox provides an additional layer of isolation that is quick to deploy, scales, and performs well on the ML workloads we execute for our users."

We also heard from early customer Shopify about how they’re using GKE Sandbox. “Shopify is always looking for more secure ways of running our merchants’ stores,” said Catherine Jones, Infrastructure Security Engineer at Shopify. “Hosting over 800,000 stores and running customer code (such as custom templates and third-party applications) requires substantial work to ensure that a vulnerability in an application cannot be exploited to affect other services running in the same cluster.”

Jones and her team developed proof-of-concept trials to use GKE Sandbox and now plan on upgrading existing clusters and enabling it for all new clusters for developers. “GKE Sandbox’s userland kernel acts as a firewall between applications and the cluster node’s kernel, preventing a compromised application from exploiting other applications through it,” said Jones. “This will allow us to provide more security to our 600+ applications without impacting developers’ workflows or requiring our security team to maintain custom seccomp and apparmor profiles for each individual application. In addition, because GKE Sandbox is based on the open-source gVisor project, we can troubleshoot it more effectively and contribute code to support our use cases as need be.”

Getting started with GKE Sandbox

When we say that running a cluster with GKE Sandbox is easy, we really mean it. The following command creates a node pool with GKE Sandbox enabled, which you can attach to your existing cluster.

gcloud beta container node-pools create YOUR-NODE-POOL \
  --cluster=YOUR-CLUSTER \
  --image-type=cos_containerd \
  --sandbox type=gvisor \
  --enable-autoupgrade
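Once the node pool is up, you can sanity-check it with kubectl. The sketch below assumes the sandbox.gke.io/runtime=gvisor label and taint that GKE applies to sandbox-enabled nodes; consult the GKE Sandbox documentation for the authoritative names.

# List the nodes that have the GKE Sandbox (gVisor) runtime enabled.
kubectl get nodes -l sandbox.gke.io/runtime=gvisor

# Inspect the taint GKE adds so that non-sandboxed pods are not
# scheduled onto these nodes.
kubectl describe nodes -l sandbox.gke.io/runtime=gvisor | grep -A1 Taints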

To run your application in GKE Sandbox, you just need to set runtimeClassName: gvisor in your Kubernetes pod spec. The following example creates a Kubernetes deployment to run on a node with GKE Sandbox enabled.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  labels:
    app: httpd
spec:
  ...
  template:
    ...
    spec:
      runtimeClassName: gvisor
      containers:
      - name: httpd
        image: httpd
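To try it out, you could save the manifest and apply it, then confirm the pods landed on the sandbox-enabled nodes. The filename below is just an assumption for the example.

# Save the manifest above as httpd-deployment.yaml (placeholder name)
# and create the deployment.
kubectl apply -f httpd-deployment.yaml

# Confirm the pods were scheduled onto sandbox-enabled nodes.
kubectl get pods -l app=httpd -o wide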

For a more detailed explanation of GKE Sandbox, check out the documentation.

Applications that are a great fit for GKE Sandbox

GKE Sandbox uses gVisor efficiently, but running in a sandbox still carries some overhead. Memory overhead is typically on the order of tens of megabytes, while CPU overhead depends more on the workload. GKE Sandbox is therefore well suited to compute- and memory-bound applications, such as:

  • Microservices and functions: Microservices and functions built with third-party and open-source components often have varying levels of trust. GKE Sandbox enables additional defense in depth while preserving low spin-up times and high service density. gVisor itself can launch in less than 150 ms, and its memory footprint can be as low as 15 MB.
  • Data processing: Processing untrusted sensor inputs, complex media, or data formats may require using potentially vulnerable tools or parsers. Isolating these activities in sandboxed services can help to reduce the risk of exploitation. The CPU overhead of sandboxing data processing depends on how I/O intensive the service is, but is less than 5 percent for streaming disk I/O and compute-bound applications like FFmpeg. Other examples are MapReduce, ETL (Extract, Transform, Load), and media processing.
  • CPU-based machine learning: Training and executing machine learning models frequently involves large quantities of data and complex workflows. Often the data or the model itself is from a third party. Typically, the CPU overhead of sandboxing compute-bound machine learning tasks is less than 10 percent.

The above list is not exhaustive, and GKE Sandbox works with a wide variety of applications. Keep in mind that the extra validation for file system and network operations can increase your overhead. We recommend that you always test your specific use case and application with GKE Sandbox.
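One minimal way to do that testing, sketched below with assumed names and a placeholder image, is to run the same workload twice, once with and once without runtimeClassName: gvisor, and compare resource usage and runtime. This is an illustration, not a prescribed benchmark methodology.

# Run the same image as a regular pod and as a sandboxed pod.
# Names and the image are placeholders; substitute your own workload.
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: bench-regular
spec:
  restartPolicy: Never
  containers:
  - name: bench
    image: YOUR-BENCHMARK-IMAGE
---
apiVersion: v1
kind: Pod
metadata:
  name: bench-sandboxed
spec:
  runtimeClassName: gvisor
  restartPolicy: Never
  containers:
  - name: bench
    image: YOUR-BENCHMARK-IMAGE
EOF

# Compare CPU and memory usage while both pods are running.
kubectl top pod bench-regular
kubectl top pod bench-sandboxed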

Try GKE Sandbox today

To get started using GKE Sandbox today, visit our feature page. To learn more, check out our GKE Sandbox and gVisor sessions.

As GKE Sandbox gets closer to general availability, look for a free trial of GKE Advanced coming soon.
