
Vultr Kubernetes Engine

source link: https://www.vultr.com/docs/vultr-kubernetes-engine

Introduction

Welcome to the Open Beta release of Vultr Kubernetes Engine!

Vultr Kubernetes Engine (VKE) is a fully managed Kubernetes product with predictable pricing. When you deploy VKE, you'll get a managed Kubernetes control plane that includes our Cloud Controller Manager (CCM) and the Container Storage Interface (CSI). In addition, you can configure block storage and load balancers or install add-ons such as Vultr's ExternalDNS and Cert Manager. We've made Kubernetes hosting easy, so you can focus on scaling your application.

Audience

This quickstart guide explains how to deploy a VKE cluster and assumes you have experience using Kubernetes. If you have comments about this guide, please use the Suggest an Update button at the bottom of the page.
Please see our changelog for information about supported versions of Kubernetes.

How to Deploy a VKE Cluster

You can deploy a new VKE cluster in a few clicks. Here's how to get started.

  1. Navigate to the Kubernetes page in the Customer Portal.
  2. Click Add Cluster.
  3. Enter a descriptive label for the Cluster Name.
  4. Select the Kubernetes version.
  5. Choose a deployment location.
  6. Create a Node Pool.

    About Node Pools

    When creating a VKE cluster, you can assign one or more Node Pools with multiple nodes per pool. For each Node Pool, you'll need to make a few selections.

    Node Pools Screenshot

    • Node Pool Name: Enter a descriptive label for the node pool.
    • Node Pool Type: Choose a Cloud Compute or High Frequency Compute type.
    • Plan: All nodes in the pool will be the same plan. Choose a size appropriate for your workload.
    • Amount of Nodes: Choose how many nodes should be in this pool. It's strongly recommended to use more than one node.

    The monthly rate for the node pool is calculated as you make your selections. If you want to deploy more than one pool, click Add Another Node Pool.

  7. When ready, click Deploy Now.
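If you prefer the command line, you can also create a cluster with vultr-cli, which uses the same Kubernetes endpoints of the Vultr API. The sketch below is illustrative only: the region, version string, plan ID, and node-pool flag syntax are examples, so check `vultr-cli kubernetes create --help` for the exact options your CLI version supports.

```shell
# Sketch: create a VKE cluster with vultr-cli (requires an API key in the environment).
# Region, version, plan, and node-pool syntax below are example values; verify with --help.
export VULTR_API_KEY="your-api-key-here"
vultr-cli kubernetes create \
    --label "my-vke-cluster" \
    --region "ewr" \
    --version "v1.23.5+1" \
    --node-pools "quantity:3,plan:vc2-2c-4gb,label:my-pool"
```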

Note: During the open beta, you'll notice that the cluster status reports Running soon after deployment, which indicates that the nodes have booted. However, Kubernetes requires additional time to inventory and configure the nodes. Please allow several minutes for VKE to complete the configuration. We will correct the status reporting before the final release. To verify the status of your cluster, please download your kubeconfig file (as described in the next section) and run:

    $ kubectl --kubeconfig={PATH TO THE FILE} cluster-info
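Because the nodes may still be joining the cluster after the status first shows Running, you can also block until every node reports Ready. This is a sketch using the same kubeconfig placeholder as above; adjust the timeout to suit your pool size.

```shell
$ kubectl --kubeconfig={PATH TO THE FILE} wait --for=condition=Ready node --all --timeout=600s
```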

How to Manage a VKE Cluster

After deploying your VKE cluster, you'll need to gather some connection information to manage it.

  1. Navigate to the Kubernetes section of the Customer Portal.
  2. Click the Manage button to the right of the desired cluster.

    On the Overview tab, you'll see the IP address and Endpoint information for your cluster.

    Manage Cluster

  3. Click the Download Configuration button in the upper-right to download your kubeconfig file, which has credentials and endpoint information to control your cluster. Use this file with kubectl as shown:

    $ kubectl --kubeconfig={PATH TO THE FILE} get nodes
    

About kubeconfig

kubectl uses a configuration file, known as the kubeconfig, to access your Kubernetes cluster.

A kubeconfig file has information about the cluster, such as users, namespaces, and authentication mechanisms. The kubectl command uses the kubeconfig to find a cluster and communicate with it. The default kubeconfig is ~/.kube/config unless you override that location on the command line or with an environment variable. The order of precedence is:

  1. If you set the --kubeconfig flag, kubectl loads only that file. Only one file may be specified this way, and no merging occurs.
  2. If you set the $KUBECONFIG environment variable, it is parsed as a list of filesystem paths according to the normal path delimiting rules for your system.
  3. Otherwise, kubectl uses the ~/.kube/config file, and no merging occurs.
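For example, to make kubectl consult both your default kubeconfig and a downloaded VKE kubeconfig at once, you can set $KUBECONFIG to a colon-separated list of paths (on Linux or macOS; the paths below are examples):

```shell
# Point kubectl at both files; contexts from each are merged (example paths).
export KUBECONFIG="$HOME/.kube/config:$HOME/Downloads/vke-config.yaml"
# Show the list of files kubectl will consult, one per line:
echo "$KUBECONFIG" | tr ':' '\n'
```

With this set, `kubectl config get-contexts` lists contexts from both files, and you can switch between clusters without passing --kubeconfig on every command.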

Please see the Kubernetes documentation for more details.

Managing the Node Pools

To manage Node Pools, click the Nodes tab on the Manage Cluster page.

Manage Nodes

You have several controls available:

  • Click the Node Pool name to expand the pool and view the individual nodes. You can replace or remove nodes individually.
  • Click Add Node Pool to add another pool.
  • Click - or + to decrease or increase the number of nodes.
  • Click the X icon to the right of the pool to destroy the pool.

Important: You must use the VKE dashboard or the Kubernetes endpoints of the Vultr API to delete VKE worker nodes. If you delete a worker node from elsewhere in the customer portal or with the Instance endpoints of the Vultr API, Vultr will redeploy the worker node to preserve the defined VKE cluster node pool configuration.

Managing Resources

To manage the resources linked to VKE, such as Block Storage and Load Balancers, click the Linked Resources tab on the Manage Cluster page.

Linked Resources screenshot

About the Managed Control Plane

When you deploy VKE, you automatically get several managed components. Although you don't need to deploy or configure them yourself, here's a brief description with links to more information.

Cloud Controller Manager

Vultr Cloud Controller Manager (CCM) is part of the managed control plane that connects Vultr features to your Kubernetes cluster. The CCM monitors the node's state, assigns their IP addresses, and automatically deploys managed Load Balancers as needed for your Kubernetes Load Balancer/Ingress services. Learn more about the CCM on GitHub.

Container Storage Interface

The Container Storage Interface (CSI) driver connects your Kubernetes cluster with Vultr's high-speed block storage. It's included as part of the managed control plane in VKE. Learn more about the CSI on GitHub.

VKE Block Storage

If your application persists data, you need storage. We've made it easy for you because VKE automatically deploys Vultr Container Storage Interface (CSI) configured with Vultr Block Storage as the default storage provider. To use block storage with VKE, you'll deploy a Persistent Volume Claim (PVC). For example, to deploy a 10Gi block on your account for VKE, use a PersistentVolumeClaim template like this:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: vultr-block-storage

To attach this PVC to a Pod, define a volume entry in your Pod template. Note that the claimName below is csi-pvc, referencing the PersistentVolumeClaim in the example above.

kind: Pod
apiVersion: v1
metadata:
  name: readme-app
spec:
  containers:
    - name: readme-app
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: vultr-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: vultr-volume
      persistentVolumeClaim:
        claimName: csi-pvc

To learn more about Persistent Volumes, see the Kubernetes documentation. If you'd like to learn more about Vultr CSI, see our GitHub repository.
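To try the two manifests above, save them to files (the filenames here are arbitrary) and apply them with your VKE kubeconfig. The block storage volume is created on your account when the claim binds:

```shell
$ kubectl --kubeconfig={PATH TO THE FILE} apply -f pvc.yaml
$ kubectl --kubeconfig={PATH TO THE FILE} apply -f pod.yaml
$ kubectl --kubeconfig={PATH TO THE FILE} get pvc csi-pvc
```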

VKE Load Balancer

Load Balancers in VKE offer all the same features and capabilities as standalone managed Load Balancers. To deploy a VKE load balancer for your application, add a LoadBalancer type to your service configuration file and use metadata annotations to tell the CCM how to configure the VKE load balancer. VKE deploys the Kubernetes service load balancer according to your service configuration and attaches it to the cluster.

Here's an example service configuration file that declares a load balancer for HTTP traffic on port 80. The selector app: app-name matches an existing set of Pods on your cluster.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
  name: vultr-lb-http
spec:
  type: LoadBalancer
  selector:
    app: app-name
  ports:
    - port: 80
      name: "http"

Notice the annotations in the metadata section. Annotations are how you configure the load balancer, and you'll find the complete list of available annotations in our GitHub repository.

Here is another load balancer example that listens on HTTP port 80, and HTTPS port 443. The SSL certificate is declared as a Kubernetes TLS secret named ssl-secret, which this example assumes was already deployed. See the TLS Secrets documentation to learn how to deploy a TLS secret.

apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/vultr-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/vultr-loadbalancer-https-ports: "443"
    # You will need to have created a TLS Secret and pass in the name as the value
    service.beta.kubernetes.io/vultr-loadbalancer-ssl: "ssl-secret"
  name: vultr-lb-https
spec:
  type: LoadBalancer
  selector:
    app: app-name
  ports:
    - port: 80
      name: "http"
    - port: 443
      name: "https"
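The manifest above references a TLS secret named ssl-secret. If you haven't created one yet, a minimal Secret manifest looks like this sketch, where the placeholders stand in for your real base64-encoded certificate and key:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: ssl-secret
type: kubernetes.io/tls
data:
  # Replace with base64-encoded PEM data (e.g. the output of base64-encoding tls.crt)
  tls.crt: <base64-encoded certificate>
  tls.key: <base64-encoded private key>
```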

As you increase or decrease the number of cluster worker nodes, VKE manages their attachment to the load balancer. If you'd like to learn general information about Kubernetes load balancers, see the documentation at kubernetes.io.

VKE Cert Manager

VKE Cert Manager adds certificates and certificate issuers as resource types in VKE and simplifies the process of obtaining, renewing, and using those certificates. Our Cert Manager documentation is on GitHub, and you can use Vultr's Helm chart to install Cert Manager.

VKE ExternalDNS

ExternalDNS makes Kubernetes resources discoverable via public DNS servers. For more information, see our tutorial to set up ExternalDNS with Vultr DNS.

Frequently Asked Questions

What is Vultr Kubernetes Engine?

Vultr Kubernetes Engine is a fully-managed product offering with predictable pricing that makes Kubernetes easy to use. Vultr manages the control plane and worker nodes and provides integration with other managed services such as Load Balancers, Block Storage, and DNS.

What versions of Kubernetes does VKE Support?

Please see our changelog for information about supported versions of Kubernetes.

How much does Vultr Kubernetes Engine cost?

Vultr Kubernetes Engine includes the managed control plane free of charge. You pay for the Worker Nodes, Load Balancers, and Block Storage resources you deploy. Worker nodes and Load Balancers run on Vultr cloud server instances of your choice with 2 GB of RAM or more. See our hourly rates.

Is there a minimum size for Block Storage volumes?

Yes, the minimum size for a Block Storage volume is 10GB.

Can I deploy a Bare Metal server to my Kubernetes cluster?

VKE runs on Vultr cloud servers and does not support Bare Metal servers.

Does VKE come with an ingress controller?

No, VKE does not come with an ingress controller preconfigured. Vultr Load Balancers will work with any ingress controller you deploy. Popular ingress controllers include Nginx, HAProxy, and Traefik.
