
Deploying OpenFaaS on Digital Ocean with Terraform

source link: https://willschenk.com/articles/2021/deploying_open_faa_s_on_digital_ocean_with_terraform/

Everything functional

Published June 2, 2021 #openfaas, #kubernetes, #terraform, #helm

We are going to look at how to use Terraform to deploy a Kubernetes cluster on Digital Ocean, add a managed Postgres database, and run Redis and OpenFaaS inside Kubernetes. This will show how to use Terraform to manage the configuration, and how OpenFaaS functions can access both cloud-managed and Kubernetes-managed services.

We are going to use the digitalocean, kubernetes, and helm Terraform providers, plus random to generate a Redis password.

The plan

  1. Provision a digitalocean_kubernetes_cluster

  2. Provision a digitalocean_database_cluster

  3. Provision 2 kubernetes_namespace resources for openfaas and openfaas-fn

  4. Provision a helm_release for openfaas

  5. Provision a helm_release for redis

  6. Provision 2 kubernetes_secret resources pointing to the databases

  7. Deploy an OpenFaaS function that reads those secrets and talks to the database.

Let's go.
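All of the Terraform files we create below (providers.tf, digitalocean.tf, and so on) live in a single directory; Terraform reads every *.tf file in the working directory together, so the split into files is purely organizational. A possible starting layout (the directory name is arbitrary):

mkdir openfaas-do && cd openfaas-do
touch providers.tf digitalocean.tf kubernetes.tf helm.tf secrets.tf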

Install the software

I'm using Debian; your mileage may vary.

OpenFaaS

curl -sL https://cli.openfaas.com | sudo sh

Terraform

sudo apt-get install apt-transport-https --yes
curl -fsSL https://apt.releases.hashicorp.com/gpg | sudo apt-key add -
sudo apt-add-repository "deb [arch=$(dpkg --print-architecture)] https://apt.releases.hashicorp.com $(lsb_release -cs) main"
sudo apt update
sudo apt install terraform

kubectl

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg
echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list
sudo apt-get update
sudo apt-get install -y kubectl

Helm

This is optional since we are using Terraform, but here it is for reference.

curl https://baltocdn.com/helm/signing.asc | sudo apt-key add -
echo "deb https://baltocdn.com/helm/stable/debian/ all main" | sudo tee /etc/apt/sources.list.d/helm-stable-debian.list
sudo apt-get update
sudo apt-get install helm

Terraform

Providers

First we need to define our providers, which we will do in providers.tf:

  terraform {
    required_providers {
      digitalocean = {
        source = "digitalocean/digitalocean"
        version = "~> 2.0"
      }
    }
  }

  variable "do_token" {
    description = "digitalocean access token"
    type        = string
  }

  provider "digitalocean" {
    token             = var.do_token
  }

Then run

terraform init

to download the provider plugins locally.

You also need to supply your do_token; the easiest way is via the TF_VAR_do_token environment variable.
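For example (the value below is a placeholder; generate a real token under API → Tokens in the DigitalOcean control panel):

export TF_VAR_do_token=your-digitalocean-api-token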

Digital Ocean Resources

Define digitalocean.tf:

  resource "digitalocean_kubernetes_cluster" "gratitude" {
    name    = "gratitude"
    region  = "nyc1"
    version = "1.20.2-do.0"

    node_pool {
      name       = "worker-pool"
      size       = "s-2vcpu-2gb"
      node_count = 3
    }
  }

  resource "digitalocean_database_cluster" "gratitude-postgres" {
    name       = "gratitude-postgres-cluster"
    engine     = "pg"
    version    = "11"
    size       = "db-s-1vcpu-1gb"
    region     = "nyc1"
    node_count = 1
  }

  output "cluster-id" {
    value = "${digitalocean_kubernetes_cluster.gratitude.id}"
  }

We can spin these up using terraform apply. This takes about 6 minutes for me.
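One caveat: the version slug above goes stale as DigitalOcean retires old Kubernetes releases. If the apply fails on the version, the provider's digitalocean_kubernetes_versions data source can pick the latest supported slug; a minimal sketch:

  data "digitalocean_kubernetes_versions" "current" {}

  # Then, in the cluster resource, replace the hard-coded slug with:
  #   version = data.digitalocean_kubernetes_versions.current.latest_version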

Kubernetes

Now we can add our kubernetes namespaces. In another file called kubernetes.tf:

  provider "kubernetes" {
    host             = digitalocean_kubernetes_cluster.gratitude.endpoint
    token            = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    cluster_ca_certificate = base64decode(
      digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate
    )
  }

  resource "kubernetes_namespace" "openfaas" {
    metadata {
      name = "openfaas"
      labels = {
        role = "openfaas-system"
        access = "openfaas-system"
        istio-injection = "enabled"
      }
    }
  }

  resource "kubernetes_namespace" "openfaas-fn" {
    metadata {
      name = "openfaas-fn"
      labels = {
        role = "openfaas-fn"
        istio-injection = "enabled"
      }
    }
  }

We'll need to run terraform init again since we added a provider, and then we can terraform apply.

Next we configure the Helm provider and define releases for OpenFaaS and Redis; this can live in another file, say helm.tf:

  provider "helm" {
    kubernetes {
      host = digitalocean_kubernetes_cluster.gratitude.endpoint
      cluster_ca_certificate = base64decode( digitalocean_kubernetes_cluster.gratitude.kube_config[0].cluster_ca_certificate )
      token = digitalocean_kubernetes_cluster.gratitude.kube_config[0].token
    }
  }

  resource "helm_release" "openfaas" {
    repository = "https://openfaas.github.io/faas-netes"
    chart = "openfaas"
    name = "openfaas"
    namespace = "openfaas"

    set {
      name = "functionalNamepsace"
      value = "openfaas-fn"
    }

    set {
      name = "generateBasicAuth"
      value = "true"
    }

    set {
      name = "ingress.enabled"
      value = "true"
    }
  }

  resource "random_password" "redis_password" {
    length           = 16
    special          = true
    override_special = "_%@"
  }

  resource "helm_release" "redis" {
    repository = "https://charts.bitnami.com/bitnami"
    chart = "redis"
    name = "redis"

    set {
      name = "auth.password"
      value = random_password.redis_password.result
    }

    set {
      name = "architecture"
      value = "standalone"
    }
  }

Once you have this file, run terraform init again (it also installs the random provider that random_password needs) and then terraform apply, and both OpenFaaS and Redis should be deployed to your cluster.
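Once kubectl is configured (see Setup kubectl below), you can cross-check the Terraform-managed releases with the optional helm CLI we installed earlier (the -A flag lists releases across all namespaces):

helm list -A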

Secrets

Finally, we create two Kubernetes secrets in the openfaas-fn namespace so that our functions can read the Redis password and the Postgres connection string, say in secrets.tf:

  resource "kubernetes_secret" "redispassword" {
    metadata {
      name = "redispassword"
      namespace = "openfaas-fn"
    }

    data = {
      password = random_password.redis_password.result
    }
  }

  resource "kubernetes_secret" "postgresconnection" {
    metadata {
      name = "postgresconnection"
      namespace = "openfaas-fn"
    }

    data = {
       host     = digitalocean_database_cluster.gratitude-postgres.private_uri
    }
  }
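If you also want to read the connection string from your shell, a minimal sketch of a sensitive output (added to the same file; terraform output postgres-uri will print it on demand):

  output "postgres-uri" {
    value     = digitalocean_database_cluster.gratitude-postgres.private_uri
    sensitive = true
  }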

Verifying the deployment

Setup kubectl

  export CLUSTER_ID=$(terraform output -raw cluster-id)
  mkdir -p ~/.kube/
  curl -X GET \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${TF_VAR_do_token}" \
  "https://api.digitalocean.com/v2/kubernetes/clusters/$CLUSTER_ID/kubeconfig" \
  > ~/.kube/config

If you have your TF_VAR_do_token set up correctly, it should create a valid config file.
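Alternatively, if you happen to have doctl installed and authenticated, it can write the kubeconfig for you:

doctl kubernetes cluster kubeconfig save $CLUSTER_ID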

Test this with

kubectl cluster-info
Kubernetes control plane is running at https://39cef8c8-ca33-40f1-9454-3373707a22ef.k8s.ondigitalocean.com
CoreDNS is running at https://39cef8c8-ca33-40f1-9454-3373707a22ef.k8s.ondigitalocean.com/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.

Verifying OpenFaaS

We can then verify the deploy with:

kubectl -n openfaas get deployments -l "release=openfaas, app=openfaas"
NAME                READY   UP-TO-DATE   AVAILABLE   AGE
alertmanager        1/1     1            1           19m
basic-auth-plugin   1/1     1            1           19m
gateway             1/1     1            1           19m
nats                1/1     1            1           19m
prometheus          1/1     1            1           19m
queue-worker        1/1     1            1           19m

Connecting to OpenFaaS

Setup the proxy

In a new window, let's set up port forwarding from your local machine to the OpenFaaS gateway in Kubernetes.

kubectl port-forward svc/gateway -n openfaas 8080:8080

Get the OpenFaaS login credentials and login

# This command retrieves your password
PASSWORD=$(kubectl get secret -n openfaas basic-auth -o jsonpath="{.data.basic-auth-password}" | base64 --decode; echo)

# This command logs in and saves a file to ~/.openfaas/config.yml
echo -n $PASSWORD | faas-cli login --username admin --password-stdin
Calling the OpenFaaS server to validate the credentials...
credentials saved for admin http://127.0.0.1:8080
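faas-cli defaults to http://127.0.0.1:8080, which matches the port-forward above. If your gateway lives somewhere else, you can point the CLI at it with the standard OPENFAAS_URL environment variable:

export OPENFAAS_URL=http://127.0.0.1:8080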

And now we can list out our deployed functions:

faas-cli list
Function                        Invocations     Replicas

Not a whole lot there yet.

Testing out deploying a function

faas-cli store deploy nodeinfo
Deployed. 202 Accepted.
URL: http://127.0.0.1:8080/function/nodeinfo

echo | faas-cli invoke nodeinfo

Hostname: nodeinfo-8545846564-wpqm6

Arch: x64
CPUs: 2
Total mem: 1995MB
Platform: linux
Uptime: 361

Writing an OpenFaaS function that talks to Redis

Get the template running

Let's create our first function. We need to pull the templates locally, so let's do that with:

faas-cli template pull
Fetch templates from repository: https://github.com/openfaas/templates.git at 

And create our function; I'm going to use Ruby.

faas-cli new --lang ruby rubyredis
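This generates a stack file plus a handler directory; roughly (the file names come from the ruby template):

rubyredis.yml        # stack file describing the function
rubyredis/handler.rb # the function body
rubyredis/Gemfile    # gem dependencies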

Change the image in rubyredis.yml to use your Docker Hub username, and then let's deploy it:

faas-cli up -f rubyredis.yml

And if that's successful, we can invoke it with:

echo | faas-cli invoke rubyredis
Hello world from the Ruby template

Adding redis

Now that we have it working, let's add Redis to the picture.

First we need to add the secret to the rubyredis.yml file, so that it references the secret we defined above in terraform:

  version: 1.0
  provider:
    name: openfaas
    gateway: http://127.0.0.1:8080
  functions:
    rubyredis:
      lang: ruby
      handler: ./rubyredis
      image: wschenk/rubyredis:latest
      secrets:
      - redispassword

In the Gemfile add the redis gem:

  source 'https://rubygems.org'

  gem "redis"

Now we need to change handler.rb to connect to the Redis service on the cluster, which is redis-master.default (default being the namespace the Helm release landed in, since we didn't set one), with the password that we load from /var/openfaas/secrets/password.

  require 'redis'

  class Handler
    def run(req)
      @redis = Redis.new( host: "redis-master.default", password: File.read( '/var/openfaas/secrets/password' ) )

      return @redis.incr( 'mykey' )
    end
  end

We can then redeploy using

faas-cli up -f rubyredis.yml

And we can invoke it now using

echo | faas-cli invoke rubyredis

Each time you run this you should see the result increment.
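A small design note: the handler above opens a new Redis connection on every invocation. Since an OpenFaaS container serves many requests, here is a hedged variant that creates the client once per container load (and strips a possible trailing newline from the secret file):

  require 'redis'

  class Handler
    # Created once when the container starts, then reused across invocations.
    REDIS = Redis.new(
      host: "redis-master.default",
      password: File.read('/var/openfaas/secrets/password').strip
    )

    def run(req)
      REDIS.incr('mykey')
    end
  end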

Writing an OpenFaaS function that talks to Postgres

Start a template

faas-cli new --lang ruby rubypostgres

Then let's tweak the rubypostgres.yml file to add the secret (and your Docker username!):

  version: 1.0
  provider:
    name: openfaas
    gateway: http://127.0.0.1:8080
  functions:
    rubypostgres:
      lang: ruby
      handler: ./rubypostgres
      image: wschenk/rubypostgres:latest
      build_args:
        ADDITIONAL_PACKAGE: build-base postgresql-dev
      secrets:
      - postgresconnection

Then we add the pg gem, along with database_url and json, which the handler uses:

  source 'https://rubygems.org'

  gem "pg", "~> 1.2"
  gem "database_url"
  gem "json"

Then, in the handler:

  require 'json'
  require 'database_url'
  require 'pg'

  class Handler
    def run(req)
      c = DatabaseUrl.to_active_record_hash(File.read( '/var/openfaas/secrets/host' ) )

  #    {"adapter":"postgresql","host":"private-gratitude-postgres-cluster-do-user-1078430-0.b.db.ondigitalocean.com","database":"defaultdb","port":25060,"user":"doadmin","password":"ievezzbyz0a1stxa"}

      # Output a table of current connections to the DB
      conn = PG.connect(
        c[:host],
        c[:port],
        nil,
        nil,
        c[:database],
        c[:user],
        c[:password] )

      r = []
      conn.exec( "SELECT * FROM pg_stat_activity" ) do |result|
        r << "     PID | User             | Query"
        result.each do |row|
          r << " %7d | %-16s | %s " %
               row.values_at('pid', 'usename', 'query')
        end
      end

      return r.join( "\n" );
    end
  end
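One caveat: this opens a new Postgres connection per invocation and never closes it, which can eventually exhaust the managed cluster's connection limit. A sketch of the same handler with an ensure block for cleanup:

  require 'json'
  require 'database_url'
  require 'pg'

  class Handler
    def run(req)
      c = DatabaseUrl.to_active_record_hash(File.read('/var/openfaas/secrets/host'))
      conn = PG.connect(c[:host], c[:port], nil, nil, c[:database], c[:user], c[:password])

      r = []
      conn.exec("SELECT * FROM pg_stat_activity") do |result|
        r << "     PID | User             | Query"
        result.each do |row|
          r << " %7d | %-16s | %s " % row.values_at('pid', 'usename', 'query')
        end
      end

      r.join("\n")
    ensure
      # Release the server-side connection even if the query raises.
      conn&.close
    end
  end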

Now we build it:

faas-cli up -f rubypostgres.yml

And invoke:

echo | faas-cli invoke rubypostgres
     PID | User             | Query
      76 | postgres         |  
      69 | postgres         |  
      65 |                  |  
      67 | postgres         |  
      72 | postgres         |  
      78 | _dodb            |  
   22113 | doadmin          | SELECT * FROM pg_stat_activity 
      63 |                  |  
      62 |                  |  
      64 |                  | 

Conclusion

When you are done, you can use terraform destroy to remove everything. Don't do that for production!!!

Terraform is pretty nifty in that it lets you spin up the whole environment easily, and OpenFaaS is a very nice way to work with functions. Kubernetes is a bit daunting, but once it's up and running it gives you a great way to scale things up and down.

We set up the cluster and deployed OpenFaaS and Redis on it. We showed how to connect to Redis from an OpenFaaS function, as well as how to connect to a managed Postgres install from one.
