I have been working with Ansible to automate Kubernetes deployments using CentOS VM templates. As a prerequisite, we need to ensure that net.bridge.bridge-nf-call-iptables and net.bridge.bridge-nf-call-ip6tables are both set to 1.
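For reference, the end state we are after on each node is roughly the following persisted configuration. This is only a sketch of the target values; by default Ansible's sysctl module persists them in /etc/sysctl.conf unless you point it elsewhere with sysctl_file.

# Desired bridge netfilter settings on every K8S node
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1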
I created the tasks below in my Ansible role playbook.
# Set net.bridge.bridge-nf-call-ip6tables value to 1 on all K8S cluster nodes
- name: ensure net.bridge.bridge-nf-call-ip6tables is set to 1
  sysctl:
    name: net.bridge.bridge-nf-call-ip6tables
    value: 1
    state: present

# Set net.bridge.bridge-nf-call-iptables value to 1 on all K8S cluster nodes
- name: ensure net.bridge.bridge-nf-call-iptables is set to 1
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
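The same change can also be written as a single task that loops over the two keys. This is just a more compact sketch of the tasks above, using the same module and values:

# Set both bridge-nf-call keys to 1 on all K8S cluster nodes in one task
- name: ensure bridge-nf-call sysctl keys are set to 1
  sysctl:
    name: "{{ item }}"
    value: 1
    state: present
  loop:
    - net.bridge.bridge-nf-call-iptables
    - net.bridge.bridge-nf-call-ip6tables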
But when I executed this playbook, I got the error below:
fatal: [prod-k8s-master01]: FAILED! => {"changed": false, "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
fatal: [prod-k8s-worker01]: FAILED! => {"changed": false, "msg": "Failed to reload sysctl: sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-ip6tables: No such file or directory\nsysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory\n"}
After a lot of reading and research, I found that I had not escalated privileges in my main YAML file. Adding become: yes to the main playbook resolved the issue. Below is the syntax of my main playbook.
- hosts: all
  gather_facts: false
  become: yes
  vars_files:
    - answerfile.yml
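If you would rather not escalate the whole play, become can also be applied per task. A minimal sketch of the same sysctl task with task-level escalation instead of play-level:

- name: ensure net.bridge.bridge-nf-call-iptables is set to 1
  sysctl:
    name: net.bridge.bridge-nf-call-iptables
    value: 1
    state: present
  become: yes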
Sometimes the most common mistakes are the most time-consuming, because we take the basics for granted.
Note that in my example I am using CentOS.