Deploy Patu as a Kubernetes CNI
The Patu deployment consists of the Patu CNI binary and the KPNG eBPF backend.
Deploy Kubernetes without kube-proxy
If you have not deployed the cluster yet, you can skip the kube-proxy addon phase during cluster deployment:
kubeadm init --upload-certs --pod-network-cidr=10.200.0.0/16 --v=6 --skip-phases=addon/kube-proxy
If you already have a running cluster, remove any existing CNI deployment and delete the kube-proxy daemonset:
kubectl delete daemonsets -n kube-system kube-proxy
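To double-check that kube-proxy is gone before installing Patu (a sanity check, not a required step):

```shell
# kube-proxy should no longer appear in this list
kubectl get daemonsets -n kube-system
```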
Clone Patu Repository
git clone https://github.com/redhat-et/patu.git
Kubernetes Kubeconfig
By default, kubeadm places the kubeconfig file at /etc/kubernetes/admin.conf with root-only permissions, and the installer defaults to that location as well. To use a custom kubeconfig location, pass KUBECONFIG as an environment variable:
KUBECONFIG=~/.kube/config ./deploy/kubernetes/patu-installer apply all
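Alternatively, the usual kubeadm post-install steps copy the admin kubeconfig into your home directory so kubectl and the installer can run without root (shown here as a convenience; adjust paths as needed):

```shell
# Copy the admin kubeconfig to the current user and take ownership
mkdir -p "$HOME/.kube"
sudo cp /etc/kubernetes/admin.conf "$HOME/.kube/config"
sudo chown "$(id -u):$(id -g)" "$HOME/.kube/config"
```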
Install PATU CNI Binary
./deploy/kubernetes/patu-installer apply cni
Ensure all pods are in the Running state. The CoreDNS pods should have IPs from the Patu CIDR specified in ./deploy/patu.yaml; they will be Running but not Ready, because ClusterIP services are not yet enabled.
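A quick way to verify (pod names and IPs will differ per cluster):

```shell
# Pod IPs should fall inside the Patu CIDR (10.200.0.0/16 in the
# kubeadm example above); CoreDNS will show READY 0/1 at this point
kubectl get pods -n kube-system -o wide
```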
Install KPNG eBPF backend
./deploy/kubernetes/patu-installer apply kpng
All pods should now be Running and Ready. The CoreDNS pods should have IPs from the Patu CIDR, and the KPNG pod should have three containers, all Running and Ready.
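To confirm, you can wait for CoreDNS to turn Ready once ClusterIP services are served by KPNG (k8s-app=kube-dns is the standard CoreDNS label in kubeadm clusters):

```shell
# CoreDNS becomes Ready once ClusterIP services are functional
kubectl wait --namespace kube-system --for=condition=Ready \
  pod -l k8s-app=kube-dns --timeout=120s
```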
Remove KPNG
./deploy/kubernetes/patu-installer delete kpng
Remove PATU CNI
./deploy/kubernetes/patu-installer delete cni
Install full Patu stack (CNI Binary and KPNG eBPF backend)
./deploy/kubernetes/patu-installer apply all
Uninstall full Patu stack (CNI Binary and KPNG eBPF backend)
./deploy/kubernetes/patu-installer delete all
Manual Instructions:
Install Patu CNI
- Deploy Patu CNI from the patu directory
`kubectl apply -f ./deploy/patu.yaml`
- Ensure the status: all pods should be in the Running state. The CoreDNS pods should have IPs from the Patu CIDR specified in patu.yaml, and should be Running but not Ready (a quick check is sketched below).
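One way to inspect the CoreDNS pod IPs directly (again assuming the standard k8s-app=kube-dns label):

```shell
# Print each CoreDNS pod and its IP; the IPs should fall inside
# the Patu CIDR configured in patu.yaml
kubectl get pods -n kube-system -l k8s-app=kube-dns \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.podIP}{"\n"}{end}'
```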
Install KPNG
- Extract node name:
local_node=$(kubectl get nodes -l 'node-role.kubernetes.io/control-plane' -o custom-columns=NAME:.metadata.name --no-headers)
- Remove the control-plane and master taints from the node:
kubectl taint nodes $local_node node-role.kubernetes.io/master:NoSchedule- node-role.kubernetes.io/control-plane:NoSchedule-
- Label node:
kubectl label node $local_node kube-proxy=kpng
- Create configmap:
kubectl create configmap kpng --namespace kube-system --from-file /etc/kubernetes/admin.conf
- Deploy kpng from the patu directory:
kubectl apply -f ./deploy/kpngebpf.yaml
- Ensure the status: all pods should be Running and Ready. The CoreDNS pods should have IPs from the Patu CIDR, and the KPNG pod should have three containers, all Running and Ready. (The full sequence is collected in the sketch below.)
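For convenience, the manual KPNG install steps above can be run as a single script. This is a sketch assuming a single kubeadm control-plane node with the admin kubeconfig at /etc/kubernetes/admin.conf:

```shell
#!/usr/bin/env bash
set -euo pipefail

# Single control-plane node assumed
local_node=$(kubectl get nodes -l 'node-role.kubernetes.io/control-plane' \
  -o custom-columns=NAME:.metadata.name --no-headers)

# Remove the taints so workloads can schedule on the control plane;
# either taint may be absent depending on the Kubernetes version
kubectl taint nodes "$local_node" node-role.kubernetes.io/master:NoSchedule- || true
kubectl taint nodes "$local_node" node-role.kubernetes.io/control-plane:NoSchedule- || true

# Mark the node for the KPNG backend
kubectl label node "$local_node" kube-proxy=kpng --overwrite

# KPNG reads the cluster kubeconfig from this configmap
kubectl create configmap kpng --namespace kube-system \
  --from-file /etc/kubernetes/admin.conf

kubectl apply -f ./deploy/kpngebpf.yaml
```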
Remove KPNG
- Remove the KPNG daemonset, service account, and cluster role and binding:
kubectl delete -f ./deploy/kpngebpf.yaml
- Remove configmap:
kubectl delete cm kpng -n kube-system
- Extract node name:
local_node=$(kubectl get nodes -l 'node-role.kubernetes.io/control-plane' -o custom-columns=NAME:.metadata.name --no-headers)
- Remove the label from the node (the removal sequence is collected in the sketch below):
kubectl label node $local_node kube-proxy-
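The removal steps can likewise be run as one sketch, under the same single-node assumption:

```shell
#!/usr/bin/env bash
set -euo pipefail

kubectl delete -f ./deploy/kpngebpf.yaml
kubectl delete cm kpng -n kube-system

local_node=$(kubectl get nodes -l 'node-role.kubernetes.io/control-plane' \
  -o custom-columns=NAME:.metadata.name --no-headers)
kubectl label node "$local_node" kube-proxy-
```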
Remove PATU
- Remove the PATU daemonset, service account, and cluster role and binding:
kubectl delete -f ./deploy/patu.yaml
Testing setup:
kubectl create -f https://k8s.io/examples/application/deployment.yaml
kubectl expose deployment nginx-deployment --type=ClusterIP
kubectl run tmp-shell --rm -i --tty --image nicolaka/netshoot
Expected: curl to the exposed IP of the nginx-deployment service (obtained with kubectl get svc -A -o wide) should work from the tmp-shell, and the CoreDNS pods should be in the Ready state.
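For example, from inside the tmp-shell session (the ClusterIP below is a placeholder; use the one reported by kubectl get svc):

```shell
# Run inside tmp-shell. The DNS name resolves only once CoreDNS is
# Ready; the IP form uses the service's CLUSTER-IP.
curl http://nginx-deployment.default.svc.cluster.local
curl http://10.107.132.100   # hypothetical ClusterIP; substitute yours
```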