Kubernetes keeps increasing in popularity, and not just in the public cloud: it keeps making inroads into the on-premises market. This is creating a need for automation. In many Kubernetes environments you tend to find developers using CI/CD pipelines not just for their application code but also for the Kubernetes objects that deploy that code in the cluster (ex: deployment, service …). This means that most of the automation needs are covered. However, there are several instances where you might want to use automation tools (ex: Ansible) either to replace or to supplement CI/CD tools. By the way, I am not talking about the deployment of the Kubernetes cluster itself, which is also a valid use case. I am talking about the things that you would normally do with the “kubectl” tool.
While creating a new video for the IaC Avengers channel on YouTube I came across one such use case, and this prompted me to investigate how to manage Kubernetes with Ansible. This article contains my lessons learned.
My use case is as follows. I wanted to expose the creation of namespaces in any cloud to end-users from ServiceNow. The idea is that, rather than giving developers and other personas the right to create their own namespaces, an organization would like to keep a central control plane where it can implement the much needed governance and cost transparency. This use case is very important in Red Hat OpenShift environments because the general guidance there is to share a few clusters, as opposed to creating a cluster per tenant as other vendors recommend. With this approach, namespaces are the native mechanism to keep tenants separate.
This “Multi-Cloud Kubernetes as a Service” is the latest in a growing set of demos that we have been creating for a while.
In this article we are going to cover:
- Architecture
- Installation in command line Ansible
- Installation in AWX/Tower
- A practical example
Architecture
We will use a single Ansible module for this solution: “kubernetes.core.k8s”, which might surprise many of you. When I first started thinking about this solution I assumed there would be multiple modules to manage all the different objects in the Kubernetes API (pods, deployments, secrets …), but no, there is a single one. To put this into perspective, bear in mind that there are more than 150 different modules to manage all aspects of vSphere environments.
So why is there a single module for Kubernetes? At the end of the day, Kubernetes and Ansible have much in common. Both frameworks use a declarative syntax where you express your desired state and the system does whatever is necessary to implement it. Furthermore, they both use YAML files. So rather than creating multiple modules, you embed each individual Kubernetes manifest inside its own Ansible task. You need to watch out for the right indentation, but that is in essence how it works. We will see some examples in a later section.
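As a quick taste of what that looks like, here is a minimal sketch (not part of the namespace use case, with a made-up ConfigMap name) showing how an ordinary Kubernetes manifest is simply nested under the module’s “definition” parameter:
- name: Create a ConfigMap by embedding its manifest in the task
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: demo-config        # hypothetical name, for illustration only
        namespace: default
      data:
        greeting: "hello"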
Another clever shortcut the creators of the module took is that the module doesn’t include its own Kubernetes client. Instead, the Ansible engine will SSH into a machine that has “kubectl” and the “kubeconfig” files installed. You could install “kubectl” on your Ansible system if you wanted (and use “localhost” as the target) but you don’t have to. In my case I have created a separate VM with “kubectl” and the “kubeconfig” files for all the clusters I am managing, and the Ansible playbook targets that VM, which is defined in the inventory. In OpenShift environments your Kubernetes client machine will also need the “oc” tool.
In our video we assumed there would be multiple clusters available for different combinations of:
- Cloud (vSphere based private cloud, AWS, Azure and GCP)
- Production or development (you might want more, such as UAT …)
- Different Kubernetes versions (v1.22, v1.23, v1.24)
The actual selections made by the user determine the target cluster in which to create the “namespace” (a.k.a. “project” in Red Hat parlance). The playbook takes the 3 parameters selected by the user and builds the name of the “kubeconfig” file to use. The Ansible module allows you to specify a “kubeconfig” file, and from that point on any tasks are run against the relevant cluster.
The Ansible module also allows you to specify a “context”. At the beginning I started with a single “kubeconfig” containing multiple contexts, but as I kept adding clusters it was getting hard to manage. I think the one-“kubeconfig”-per-cluster method is easier. Every time you create a new cluster, grab the file, rename it to match the type/location of the cluster (ex: “aws-prod-22.config”), place it in the directory where the client machine expects to find them, and you are done.
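To give an idea, the kubeconfig directory on the client machine ends up looking something like this (apart from “aws-prod-22.config” and “gcp-dev-25.config”, which appear later in this article, the file names are just illustrative):
# on the Kubernetes client VM
ls ~/.kube/
aws-prod-22.config  azure-dev-23.config  gcp-dev-25.config  vsphere-prod-24.config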

Installation in command line Ansible
The installation requires you to install things on both the Ansible system and the Kubernetes client system. With other modules you typically install some Python libraries as a prerequisite and then install the Ansible collection. A very important difference with the Kubernetes collection is that the libraries are required on the Kubernetes client system, not on the Ansible system. Of course, if you have decided to run the Kubernetes client on your Ansible system, you will install everything on the same machine.
Before you start, please make sure you are running Python 3.6 or higher on the client. In my case I started installing this on a system with CentOS 7, which comes with Python 2.7 by default, and I was getting errors until I ran:
ln -s /usr/bin/python3 /usr/bin/python
In terms of libraries you need the following in the Kubernetes client machine:
- kubernetes >= 12.0.0
- PyYAML >= 3.11
- jsonpatch
In my case I just did “pip install kubernetes” and it pulled in everything else. If you are managing OpenShift environments, which are typically handled with the “oc” tool, you also need an additional library called “openshift”.
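In other words, something along these lines on the client machine (the second command only applies to OpenShift environments):
pip install kubernetes
pip install openshift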
The “kubernetes” library expects the kubeconfig file to be present in ~/.kube/config. However, as we discussed earlier, you can specify a different location and kubeconfig file name as part of the task inside the playbook.
Now, on the Ansible machine, you need to install the Ansible collection:
ansible-galaxy collection install kubernetes.core
Finally, you will need to add your Kubernetes client to the inventory on the Ansible machine. This is mine:
[root@ansible-vm ~] # cat inv.ini
[kubectl01]
172.24.167.53
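Optionally, you can first check that Ansible can reach the client over SSH with the built-in ping module:
[root@ansible-vm ~] # ansible -i inv.ini kubectl01 -m ping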
You can test that everything works by running a simple playbook
[root@ansible-vm ~] # cat create-ns.yaml
- name: Create namespaces in kubernetes cluster
  hosts: kubectl01
  tasks:
    - name: Create namespace in default Kubernetes cluster
      kubernetes.core.k8s:
        name: "ansible-ns"
        api_version: v1
        kind: Namespace
        state: present
[root@ansible-vm ~] # ansible-playbook create-ns.yaml
The above syntax assumes that the kubeconfig is in the default location, i.e. ~/.kube/config in the home directory of the user running the tasks on the Kubernetes client system. Keep reading to see how to store the config in a different location.
Installation in AWX/Tower
If we need to run the playbook in AWX or Ansible Tower, nothing we discussed previously for the Kubernetes client changes. So you still need the following on the client:
- the Python libraries
- a supported version of Python in the client
- the “kubectl” tool (and “oc” if you are managing OpenShift clusters)
However, on the Ansible system you need to:
- create the inventory entry that points to the Kubernetes client system
- install the “kubernetes.core” collection in the “task” container
- create a job template as usual
This is how I installed the “kubernetes.core” collection in my AWX system. Notice how I install it in the “awx_task” container
[root@awx17 ~]# docker exec -it awx_task /bin/bash
bash-4.4# ansible-galaxy collection install kubernetes.core
However, when I went to trigger the job template I got this error
TASK [Create namespace in target Kubernetes cluster] ***************************
fatal: [172.24.167.53]: FAILED! => {"msg": "Could not find imported module support code for ansiblemodule. Looked for either AnsibleTurboModule.py or module.py"}
I fixed it by installing the “cloud.common” collection also inside the “task” container:
[root@awx17 ~]# docker exec -it awx_task /bin/bash
bash-4.4# ansible-galaxy collection install cloud.common
Process install dependency map
Starting collection install process
Installing 'cloud.common:2.1.2' to '/var/lib/awx/.ansible/collections/ansible_collections/cloud/common'
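If you want to double-check what ended up inside the container, both collections should now show up under the path reported above:
bash-4.4# ls /var/lib/awx/.ansible/collections/ansible_collections/
cloud  kubernetes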
A practical example
The example we are going to use will do 2 things:
- create a namespace
- assign permissions to the namespace to the user that requested the namespace
In this Kubernetes as a Service design, the assumption is that developers and other personas cannot create or join namespaces by themselves; instead, they request a new namespace or access to an existing one through ServiceNow. Hence the need to assign the relevant permissions in the playbook. A future blog post will show the “join namespace” scenario, which includes involving the creator of the namespace in a ServiceNow workflow approval.
The first thing the playbook does is figure out which kubeconfig file needs to be used. It does so by combining 3 pieces of information. In the video you can see how these details are provided by the user requesting the namespace in ServiceNow. They allow us to uniquely identify the Kubernetes cluster in which we have to apply the changes:
- name: Build the kubeconfig file name out of input parameters
  set_fact:
    configname: "{{ cloud }}-{{ envtype }}-{{ version }}"
So for example, if the user selects “aws”, “production” and “1.22”, the playbook will look for a file named “aws-prod-22.config” and run the remaining tasks on the cluster that is defined in that kubeconfig file. Note how we decided to drop the “1.” from the Kubernetes version to keep the file names streamlined. With this approach, onboarding a new cluster couldn’t be easier. Let’s say in the future we want to add a new development cluster in GCP running v1.25. All we need to do is grab its kubeconfig file, rename it to “gcp-dev-25.config” and place it in the same directory as the other files on the client. No further changes are required.
Let’s take a look at the playbook
---
- name: Create a namespace in a kubernetes cluster
  hosts: kubectl01
  gather_facts: false
  vars:
    #nsname: ansible          # needs to be provided by end-user
    #version: 22              # corresponds to k8s version 1.22, 1.23 ...
    #envtype: dev             # type of environment: prod, dev ...
    #cloud: vsphere           # vsphere, gcp, aws ...
    #snow_username: finance1  # this comes also in the API call
    #backup_type: gold        # user needs to choose between gold/silver policies
  tasks:
    - name: Build the kubeconfig file name out of input parameters
      set_fact:
        configname: "{{ cloud }}-{{ envtype }}-{{ version }}"

    - debug:
        msg: "Let's create namespace {{ nsname }} with kubeconfig {{ configname }}.config"

    - name: Create namespace in target Kubernetes cluster
      kubernetes.core.k8s:
        state: present
        kubeconfig: "~/.kube/{{ configname }}.config"
        kind: Namespace
        name: "{{ nsname }}"
        definition:
          metadata:
            labels:
              backuptype: "{{ backup_type }}"
              snowowner: "{{ snow_username }}"

    - name: Create role binding for user {{ snow_username }}
      kubernetes.core.k8s:
        state: present
        kubeconfig: "~/.kube/{{ configname }}.config"
        definition:
          kind: RoleBinding
          apiVersion: rbac.authorization.k8s.io/v1
          metadata:
            name: "{{ nsname }}-owner"
            namespace: "{{ nsname }}"
          subjects:
            - kind: User
              name: "{{ snow_username }}"
          roleRef:
            kind: ClusterRole
            name: admin
I have commented out all the required variables as they are being passed in as parameters, but you can remove the comments when you are testing the playbook.
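For a quick test from the command line, the same variables can also be supplied as extra vars; the playbook file name and values below are just an example:
[root@ansible-vm ~] # ansible-playbook create-ns.yaml \
    -e "cloud=aws envtype=prod version=22 nsname=finance-app" \
    -e "snow_username=finance1 backup_type=gold"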
Pay close attention to the “definition” section in the “role binding” task. If you took everything under it, inserted it into a YAML file and ran “kubectl apply”, it would accomplish the same thing. This is what I was referring to about the beauty of how the creators have designed the Ansible module.
Notice how we are adding 2 labels to the namespace. These will be used for the “join namespace” workflow and for automatically adding the namespace to a backup policy in PPDM (PowerProtect Data Manager). We will cover these two features in future posts
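As a trivial example of how those labels can be consumed later, namespaces can be selected by label from any tool or playbook, e.g.:
kubectl get namespaces -l backuptype=gold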
The “snow_username” is the username of the user that places the request in ServiceNow. In our demo we used Keycloak to create a seamless authentication infrastructure across ServiceNow and the rest of our infrastructure, including OpenShift.
Finally, notice how we are binding the default “admin” role to the user, but restricted to the namespace, which is what you would expect for an owner. However, following the principle of least privilege, you could restrict the permissions to whatever you need by defining a specific role. You could potentially create that role only once, at the cluster level; in that case it wouldn’t need to be part of this playbook. We will use this technique for offering various roles in the “join namespace” workflow. The following code is an example of a “deployment manager” role in a specific namespace:
- name: Create a new role for deployment managers
  kubernetes.core.k8s:
    state: present
    kubeconfig: "~/.kube/{{ configname }}.config"
    definition:
      kind: Role
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        namespace: office
        name: deployment-manager
      rules:
        - apiGroups: ["", "extensions", "apps"]
          resources: ["deployments", "replicasets", "pods"]
          verbs: ["*"]
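And, as a sketch of how the “join namespace” workflow could then grant that role to a requesting user (the binding name is made up; the rest reuses the variables and the “office” namespace from the example above):
- name: Bind user {{ snow_username }} to the deployment-manager role
  kubernetes.core.k8s:
    state: present
    kubeconfig: "~/.kube/{{ configname }}.config"
    definition:
      kind: RoleBinding
      apiVersion: rbac.authorization.k8s.io/v1
      metadata:
        name: deployment-manager-binding   # hypothetical name
        namespace: office
      subjects:
        - kind: User
          name: "{{ snow_username }}"
          apiGroup: rbac.authorization.k8s.io
      roleRef:
        kind: Role
        name: deployment-manager
        apiGroup: rbac.authorization.k8s.io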
I hope you found this helpful. Keep an eye out for the follow-up video and the two follow-up blog articles.