

Run Your Own AWS APIs on OpenShift
source link: https://zwischenzugs.com/2017/07/31/run-your-own-aws-apis-on-openshift/

tl;dr
This article shows you how you can use OpenShift to set up and test against AWS APIs using localstack.
Example code to run through this using ShutIt is available here.
An asciicast of the process is embedded in the original article.
Introduction
In this walkthrough you’re going to set up an OpenShift system using minishift, and then run localstack in a pod on it.
OpenShift is a Red Hat-sponsored wrapper around Kubernetes that provides extra functionality more suited to enterprise production deployments of Kubernetes. Many features from OpenShift have swum upstream to be integrated into Kubernetes (e.g. role-based access control).
The open source version of OpenShift is called Origin.
Localstack
Localstack is a project that aims to give you as complete as possible a set of AWS APIs to develop against without incurring any cost. This is great for testing or trying code out before running it ‘for real’ against AWS and potentially wasting time and money.
Localstack spins up the following core Cloud APIs on your local machine (ports as exposed by the container, per the routes and services shown later in this article):
- API Gateway (4567)
- Kinesis (4568)
- DynamoDB (4569)
- DynamoDB Streams (4570)
- Elasticsearch (4571)
- S3 (4572)
- Firehose (4573)
- Lambda (4574)
- SNS (4575)
- SQS (4576)
- Redshift (4577)
- ES (Elasticsearch Service) (4578)
- SES (4579)
- Route53 (4580)
- CloudFormation (4581)
- CloudWatch (4582)
At present it supports running in a Docker container, or natively on a machine.
It is built on moto, which is a mocking framework in turn built on boto, which is a python AWS SDK.
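For reference, the service-to-port mapping used throughout this walkthrough can be captured in a small shell helper. This is a sketch of my own (the function name is illustrative, not part of localstack); the ports are taken from the routes and services shown later in this article:

```shell
#!/bin/sh
# Map a localstack service name to the port it listens on.
localstack_port() {
  case "$1" in
    apigateway)      echo 4567 ;;
    kinesis)         echo 4568 ;;
    dynamodb)        echo 4569 ;;
    dynamodbstreams) echo 4570 ;;
    elasticsearch)   echo 4571 ;;
    s3)              echo 4572 ;;
    firehose)        echo 4573 ;;
    lambda)          echo 4574 ;;
    sns)             echo 4575 ;;
    sqs)             echo 4576 ;;
    redshift)        echo 4577 ;;
    es)              echo 4578 ;;
    ses)             echo 4579 ;;
    route53)         echo 4580 ;;
    cloudformation)  echo 4581 ;;
    cloudwatch)      echo 4582 ;;
    web)             echo 8080 ;;
    *) echo "unknown service: $1" >&2; return 1 ;;
  esac
}

localstack_port sqs   # prints 4576
```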
Running localstack within an OpenShift cluster gives you the capability to run many of these AWS API environments at once. You can then create distinct endpoints for each set of services, and isolate them from one another. Also, you can worry less about resource usage, as the cluster scheduler will take care of that.
However, it doesn't run on OpenShift out of the box, so this will guide you through what needs to be done to get it to work.
Start Minishift
If you don’t have an OpenShift cluster to hand, then you can run up minishift, which gives you a standalone VM with a working OpenShift on it.
Installing minishift is documented here. You’ll need to install it first and run ‘minishift start’ successfully.
Once you have started minishift, you will need to set up your shell so that you are able to communicate with the OpenShift server.
$ eval $(minishift oc-env)
Change the default security context constraints
Security Context Constraints (scc) are an OpenShift concept that allows more granular control over Docker containers’ powers.
They control seLinux contexts, can drop capabilities from the running containers, can determine which user the pod can run as, and so on.
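For orientation, here is a fragment of the default 'restricted' scc showing the two fields you are about to change. This is an assumption-laden sketch: the exact capability list varies between OpenShift versions, so check the output of your own `oc edit`:

```yaml
# Fragment of the default 'restricted' scc (values may differ per version)
requiredDropCapabilities:
- KILL
- MKNOD
- SETUID
- SETGID
runAsUser:
  type: MustRunAsRange
```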
To get this running you’re going to change the default ‘restricted’ scc, but you could create a separate scc and apply that to a particular project. To change the ‘restricted’ scc you will need to become a cluster administrator:
$ oc login -u system:admin
Then you need to edit the restricted scc with:
$ oc edit scc restricted
You will see the definition of the restricted scc.
At this point you’re going to have to do two things:
- Allow containers to run as any user (in this case ‘root’)
- Prevent the scc from restricting your capabilities to setuid and setgid
1) Allow RunAsAny
The localstack container runs as root by default.
For security reasons, OpenShift does not allow containers to run as root by default. Instead it picks a random UID within a very high range, and runs as that.
To simplify matters, and allow the localstack container to run as root, change the lines:
runAsUser:
  type: MustRunAsRange
to read:
runAsUser:
  type: RunAsAny
This allows containers to run as any user.
2) Allow SETUID and SETGID Capabilities
When localstack starts up it needs to become another user to start Elasticsearch, because the Elasticsearch service will not start up as the root user.
To get round this, localstack su's the startup command to the localstack user in the container.
Because the ‘restricted’ scc explicitly disallows actions that change your user or group id, you need to remove these restrictions. Do this by deleting the lines:
- SETUID
- SETGID
Once you have done these two steps, save the file.
Make a note of the host
If you run:
$ minishift console --machine-readable | grep HOST | sed 's/^HOST=\(.*\)/\1/'
you will get the host that the minishift instance is accessible as from your machine. Make a note of this, as you’ll need to substitute it in later.
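The routes created later in this walkthrough follow the naming scheme <service>-<project>.HOST.nip.io. Assuming that scheme, a small helper of my own (not part of OpenShift) builds the external hostname from the host you just noted:

```shell
#!/bin/sh
# Build the external route hostname for a localstack service,
# given the OpenShift project name and the minishift host.
route_host() {
  service="$1"; project="$2"; host="$3"
  echo "${service}-${project}.${host}.nip.io"
}

route_host sqs test 192.168.64.2   # prints sqs-test.192.168.64.2.nip.io
```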
Deploy the pod
Deploying the localstack is as easy as running:
$ oc new-app localstack/localstack --name="localstack"
This takes the localstack/localstack image and creates an OpenShift application around it for you, setting up internal services (based on the exposed ports in the Dockerfile), running the container in a pod, and various other management tasks.
Create the routes
If you want to access the services from outside, you need to create OpenShift routes, which create an external address to access services within the OpenShift network.
For example, to create a route for the sqs service, create a file like this:
apiVersion: v1
items:
- apiVersion: v1
  kind: Route
  metadata:
    annotations:
      openshift.io/host.generated: "true"
    name: sqs
    selfLink: /oapi/v1/namespaces/test/routes/sqs
  spec:
    host: sqs-test.HOST.nip.io
    port:
      targetPort: 4576-tcp
    to:
      kind: Service
      name: localstack
      weight: 100
    wildcardPolicy: None
  status:
    ingress:
    - conditions:
      - lastTransitionTime: 2017-07-28T17:49:18Z
        status: "True"
        type: Admitted
      host: sqs-test.HOST.nip.io
      routerName: router
      wildcardPolicy: None
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
then create the route with:
$ oc create -f <filename>
See above for the list of services and their ports.
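Rather than hand-editing a file per service, the route definition above can be generated. This is a sketch of my own (the make_route helper is not part of the article's code): it prints a minimal Route definition to stdout, which you could pipe straight to oc:

```shell
#!/bin/sh
# Print a minimal OpenShift Route definition for one localstack service.
# Usage: make_route <service> <port> <project> <host>
make_route() {
  service="$1"; port="$2"; project="$3"; host="$4"
  cat <<EOF
apiVersion: v1
kind: Route
metadata:
  name: ${service}
spec:
  host: ${service}-${project}.${host}.nip.io
  port:
    targetPort: ${port}-tcp
  to:
    kind: Service
    name: localstack
EOF
}

make_route sqs 4576 test 192.168.64.2
# To actually create the route:
#   make_route sqs 4576 test 192.168.64.2 | oc create -f -
```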
If you have multiple localstacks running on your OpenShift cluster, you might want to prepend the host name with a unique name for the instance, eg
host: localstackenv1-sqs-test.HOST.nip.io
Look upon your work
Run an ‘oc get all’ to see what you have created within your OpenShift project:
$ oc get all
NAME            DOCKER REPO                            TAGS      UPDATED
is/localstack   172.30.1.1:5000/myproject/localstack   latest    15 hours ago

NAME            REVISION   DESIRED   CURRENT   TRIGGERED BY
dc/localstack   1          1         1         config,image(localstack:latest)

NAME              DESIRED   CURRENT   READY     AGE
rc/localstack-1   1         1         1         15h

NAME                     HOST/PORT                                  PATH      SERVICES     PORT       TERMINATION   WILDCARD
routes/apigateway        apigateway-test.192.168.64.2.nip.io                  localstack   4567-tcp                 None
routes/cloudformation    cloudformation-test.192.168.64.2.nip.io              localstack   4581-tcp                 None
routes/cloudwatch        cloudwatch-test.192.168.64.2.nip.io                  localstack   4582-tcp                 None
routes/dynamodb          dynamodb-test.192.168.64.2.nip.io                    localstack   4569-tcp                 None
routes/dynamodbstreams   dynamodbstreams-test.192.168.64.2.nip.io             localstack   4570-tcp                 None
routes/es                es-test.192.168.64.2.nip.io                          localstack   4578-tcp                 None
routes/firehose          firehose-test.192.168.64.2.nip.io                    localstack   4573-tcp                 None
routes/kinesis           kinesis-test.192.168.64.2.nip.io                     localstack   4568-tcp                 None
routes/lambda            lambda-test.192.168.64.2.nip.io                      localstack   4574-tcp                 None
routes/redshift          redshift-test.192.168.64.2.nip.io                    localstack   4577-tcp                 None
routes/route53           route53-test.192.168.64.2.nip.io                     localstack   4580-tcp                 None
routes/s3                s3-test.192.168.64.2.nip.io                          localstack   4572-tcp                 None
routes/ses               ses-test.192.168.64.2.nip.io                         localstack   4579-tcp                 None
routes/sns               sns-test.192.168.64.2.nip.io                         localstack   4575-tcp                 None
routes/sqs               sqs-test.192.168.64.2.nip.io                         localstack   4576-tcp                 None
routes/web               web-test.192.168.64.2.nip.io                         localstack   8080-tcp                 None

NAME             CLUSTER-IP      EXTERNAL-IP   PORT(S)                                                                                                                                                    AGE
svc/localstack   172.30.187.65   <none>        4567/TCP,4568/TCP,4569/TCP,4570/TCP,4571/TCP,4572/TCP,4573/TCP,4574/TCP,4575/TCP,4576/TCP,4577/TCP,4578/TCP,4579/TCP,4580/TCP,4581/TCP,4582/TCP,8080/TCP   15h

NAME                    READY     STATUS    RESTARTS   AGE
po/localstack-1-hnvpw   1/1       Running   0          15h
Each route created is now accessible as an AWS service ready to test your code.
Access the services
You can now hit the services from your host, like this:
$ aws --endpoint-url=http://kinesis-test.192.168.64.2.nip.io kinesis list-streams
{
    "StreamNames": []
}
For example, to create a kinesis stream:
$ aws --endpoint-url=http://kinesis-test.192.168.64.2.nip.io kinesis create-stream --stream-name teststream --shard-count 2
$ aws --endpoint-url=http://kinesis-test.192.168.64.2.nip.io kinesis list-streams
{
    "StreamNames": [
        "teststream"
    ]
}
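To avoid repeating the --endpoint-url flag on every call, you can wrap the aws CLI. The ls_endpoint and lsaws names below are my own invention, and HOST is assumed to be the minishift host noted earlier:

```shell
#!/bin/sh
# Minishift host noted earlier (substitute your own).
HOST=192.168.64.2

# Build the external localstack endpoint URL for a service,
# following the <service>-test.HOST.nip.io route naming scheme.
ls_endpoint() {
  echo "http://${1}-test.${HOST}.nip.io"
}

# Call the aws CLI against the localstack endpoint for a service.
lsaws() {
  service="$1"; shift
  aws --endpoint-url="$(ls_endpoint "$service")" "$service" "$@"
}

# Example usage:
#   lsaws kinesis list-streams
#   lsaws sqs list-queues
```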
This is an extract from my book: a work in progress from the second edition of Docker in Practice.
Get 39% off with the code: 39miell2
