Taming Kubernetes with REST APIs

AMIT RAWAT
Jul 19, 2020

Let’s see how we can automate Kubernetes workloads using its REST APIs.

Kubernetes is quickly becoming the new de facto standard for container deployment and orchestration in the cloud. In day-to-day work, we mostly use kubectl to interact with the Kubernetes cluster. Kubectl is a command-line application written in Golang which takes your CLI commands and passes them to the Kubernetes API server via REST calls.

Let’s look at this picture to make it clearer.

Kubernetes REST APIs

The kube-apiserver is the main interface: it exposes a communication channel that allows us to interact with the whole K8s cluster.

I usually don’t trust people even when they show me fancy diagrams, and I hope you are equally skeptical. Let’s try to validate what the diagram above claims.

Let’s create a minimalistic demo pod which prints “Hello” every 10 seconds:
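For example, something along these lines would do (the busybox image and the shell loop here are illustrative assumptions):

kubectl run demo-pod --image=busybox --restart=Never -- /bin/sh -c 'while true; do echo Hello; sleep 10; done'
kubectl get pod demo-pod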

As we can see, the pod is created successfully and we are also able to query it.

Now, let’s dissect what exactly happens when we fire a kubectl command like the ‘kubectl get’ we used above.

To find out, we need to fire the same command again with an additional flag: ‘--v=9’.

kubectl --v=9 get pod demo-pod

This extra flag enables verbose logging, which clearly shows the order of activities happening behind the scenes:

  • First, kubectl reads the client key and certificate from the kubeconfig file “/Users/amitrawat/.kube/config” to authenticate itself.
  • Then it fires the GET request for the pod named demo-pod, which the logs print as an equivalent curl command:
curl -k -v -XGET  -H "User-Agent: kubectl/v1.18.5 (darwin/amd64) kubernetes/e6503f8" -H "Accept: application/json;as=Table;v=v1;g=meta.k8s.io,application/json;as=Table;v=v1beta1;g=meta.k8s.io,application/json" 'https://192.168.99.100:8443/api/v1/namespaces/default/pods/demo-pod'
  • In the end, it also prints the JSON response sent back to the kubectl client by the kube-apiserver.

Trying the API using the curl client:

Now it starts to make sense: there is REST API-based communication happening between the kubectl client and the Kubernetes master.

So, can we now replicate the same curl command independently, without using the kubectl CLI? For the sake of simplicity, I have removed some unwanted headers from the curl command above.

$ curl -k -XGET -H "Accept: application/json" 'https://192.168.99.100:8443/api/v1/namespaces/default/pods/demo-pod'
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "pods \"demo-pod\" is forbidden: User \"system:anonymous\" cannot get resource \"pods\" in API group \"\" in the namespace \"default\"",
  "reason": "Forbidden",
  "details": {
    "name": "demo-pod",
    "kind": "pods"
  },
  "code": 403
}

As we can see, we got an error saying that we are not authorized to get resources in the namespace called “default”.

What exactly are we missing here?

To understand this bit, we need to see how these REST APIs are authenticated using a bearer token, which is generated from a service account.

By default, Kubernetes creates all our pods and services in a namespace called “default”. It also creates a default service account in this namespace, and we need to get the bearer token for it as shown below.

First, we list the secrets to find the default one. Then we look at the service account and its details as YAML, which confirm the name of its token secret; on my cluster it is “default-token-xmxlk”. Finally, we read the token out of that secret.
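On a typical cluster, these steps boil down to kubectl commands along these lines (the secret name will differ on your cluster):

kubectl get secrets
kubectl get serviceaccount default -o yaml
kubectl get secret default-token-xmxlk -o jsonpath='{.data.token}' | base64 --decode   # use "base64 -D" on older macOS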

Now let’s try the same curl command with an additional header in which we pass this token:

$ TOKEN="<token value>"
$ curl -k -XGET -H "Accept: application/json" -H "Authorization: Bearer $TOKEN" 'https://192.168.99.100:8443/api/v1/namespaces/default/pods/demo-pod'
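This time the API server returns the Pod object itself instead of a 403 Status; heavily trimmed, the response looks along these lines:

{
  "kind": "Pod",
  "apiVersion": "v1",
  "metadata": {
    "name": "demo-pod",
    "namespace": "default",
    ...
  },
  ...
}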

We can see the curl command has finally worked!

Time to come back to the original idea of this blog post: how to tame Kubernetes using these REST APIs.

Let’s cover some real-life scenarios to showcase the power of these REST APIs:

1. Building your own Chaos Testing client like Chaos Monkey

Let’s think of building a simple chaos testing agent which deletes a random pod every x seconds to simulate the pod failures that can happen in a real production scenario.

The agent first queries all the running pods in the default namespace and then kills one random pod every 10 seconds, which can create some real chaos in the cluster :-)
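In Golang, a minimal sketch of such an agent (reusing the kube-apiserver address and bearer token from the examples above, both of which are specific to my cluster) could look like this:

package main

import (
	"crypto/tls"
	"encoding/json"
	"fmt"
	"io"
	"math/rand"
	"net/http"
	"time"
)

const (
	apiServer = "https://192.168.99.100:8443" // kube-apiserver address (assumption: same cluster as above)
	token     = "<token value>"               // bearer token of the default service account
)

// podList mirrors only the fields we need from the PodList response.
type podList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
	} `json:"items"`
}

// call sends an authenticated request to the kube-apiserver and returns the response body.
// TLS verification is skipped, mirroring the -k flag in the curl examples above.
func call(client *http.Client, method, url string) ([]byte, error) {
	req, err := http.NewRequest(method, url, nil)
	if err != nil {
		return nil, err
	}
	req.Header.Set("Authorization", "Bearer "+token)
	req.Header.Set("Accept", "application/json")
	resp, err := client.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	client := &http.Client{Transport: &http.Transport{
		TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
	}}
	for {
		// Query all pods in the default namespace.
		body, err := call(client, http.MethodGet, apiServer+"/api/v1/namespaces/default/pods")
		if err != nil {
			fmt.Println("listing pods failed:", err)
		} else {
			var pods podList
			if json.Unmarshal(body, &pods) == nil && len(pods.Items) > 0 {
				// Pick one pod at random and delete it to create some chaos.
				victim := pods.Items[rand.Intn(len(pods.Items))].Metadata.Name
				if _, err := call(client, http.MethodDelete,
					apiServer+"/api/v1/namespaces/default/pods/"+victim); err != nil {
					fmt.Println("deleting pod failed:", err)
				} else {
					fmt.Println("deleted pod:", victim)
				}
			}
		}
		time.Sleep(10 * time.Second)
	}
}

Point apiServer and token at your own cluster before running it; every 10 seconds it lists the pods in the default namespace and deletes one of them at random.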

Let’s run this program, and we will see it deleting random pods.

2. Building your own custom horizontal auto-scaler:

Kubernetes provides a built-in HPA (Horizontal Pod Autoscaler), but it primarily scales on the basis of metrics like the CPU, memory, or requests per second of each pod.

Now suppose you want to scale your application deployment up or down on the basis of some custom, tailor-made metric derived from an external system. Configuring such an auto-scaler with the native K8s HPA is not possible, so we would need to come up with our own K8s operator that can do this auto-scaling.

We can easily use the REST APIs to do this on-demand scaling. Here is an example curl request that changes the scale of a deployment to 2 replicas:

API_TOKEN=<BEARERTOKEN>
curl -X PATCH https://192.168.99.100:8443/apis/apps/v1/namespaces/default/deployments/selenium-node-chrome-deployment/scale \
--header "Authorization: Bearer $API_TOKEN" \
--header 'Accept: application/json' \
--header 'Content-Type: application/strategic-merge-patch+json' \
--insecure \
--data '{"spec": {"replicas": 2}}'

I have built a similar auto-scaler in Java; more details can be found here.

3. Doing Resiliency Testing of your Microservices

This is quite a hot topic these days in the era of SRE and DevOps. The idea is to proactively bring down a few services and then see how resiliently your entire microservices ecosystem recovers from that simulated outage.

Usually, there are different kinds of outages we can trigger, such as:

  • Deleting a pod and waiting to see whether K8s brings it back automatically
  • Deleting a service to trip a circuit breaker and seeing how your clients use their retry mechanisms to handle it
  • Simulating a disk failure for StatefulSet applications like Kafka, Cassandra, etc. by temporarily removing a mounted volume
  • Removing an entire Kubernetes node from the cluster

We can add many more scenarios to this list, and all of them can be automated easily with the K8s REST APIs, as the examples below show.
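For instance, the pod, service, and node scenarios map directly onto simple DELETE calls (the service and node names below are placeholders):

# Delete a pod and watch whether its controller brings it back
curl -k -XDELETE -H "Authorization: Bearer $TOKEN" 'https://192.168.99.100:8443/api/v1/namespaces/default/pods/demo-pod'

# Delete a service to cut its callers off
curl -k -XDELETE -H "Authorization: Bearer $TOKEN" 'https://192.168.99.100:8443/api/v1/namespaces/default/services/<service-name>'

# Remove a node object from the cluster
curl -k -XDELETE -H "Authorization: Bearer $TOKEN" 'https://192.168.99.100:8443/api/v1/nodes/<node-name>'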

Conclusion:

In this blog we have seen the power of the K8s REST APIs, and how easily we can observe these API calls by just adding the “--v=9” flag to normal kubectl commands. Once we have the details of these REST calls, we can automate them in the programming language of our choice and tame this huge elephant called “Kubernetes”.

Please feel free to post your feedback/questions in the comments section.

AMIT RAWAT

I am a Civil Engineer by qualification, an Engineering Manager by profession and a Developer by passion. (amitrawat.dev)