This example shows that NSM keeps working after the death of a remote NSE.
The NSC and NSE use the kernel mechanism to connect to their local forwarders.
The forwarders use the vxlan mechanism to connect with each other.
Make sure that you have completed the steps from the basic or memory setup.
Deploy NSC and NSE:
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/heal/remote-nse-death-ip/nse-before-death?ref=aad7c26ad32fb4c3b515179bbe85d59c811c52f1
Wait for the applications to be ready:
kubectl wait --for=condition=ready --timeout=1m pod -l app=alpine -n ns-remote-nse-death-ip
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -n ns-remote-nse-death-ip
Ping from NSC to NSE:
kubectl exec pods/alpine -n ns-remote-nse-death-ip -- ping -c 4 172.16.1.100
Ping from NSE to NSC:
kubectl exec deployments/nse-kernel -n ns-remote-nse-death-ip -- ping -c 4 172.16.1.101
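The two ping checks above can be wrapped in a small helper that reports PASS/FAIL per direction instead of raw ping output. This is a sketch, not part of the example; `check_ping` is a hypothetical name, and the arguments mirror the commands above.

```shell
# Hypothetical helper: run one of the walkthrough's ping checks and report
# PASS/FAIL instead of printing raw ping output.
check_ping() {
  local from=$1 ns=$2 target=$3
  if kubectl exec "$from" -n "$ns" -- ping -c 4 "$target" >/dev/null 2>&1; then
    echo "PASS: $from -> $target"
  else
    echo "FAIL: $from -> $target"
  fi
}

# Mirrors the two checks above (commented out so the helper can be sourced
# without a cluster):
# check_ping pods/alpine ns-remote-nse-death-ip 172.16.1.100
# check_ping deployments/nse-kernel ns-remote-nse-death-ip 172.16.1.101
```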
Apply the patch. It recreates the NSE with a new label:
kubectl apply -k https://github.com/networkservicemesh/deployments-k8s/examples/heal/remote-nse-death-ip/nse-after-death?ref=aad7c26ad32fb4c3b515179bbe85d59c811c52f1
Wait for new NSE to start:
kubectl wait --for=condition=ready --timeout=1m pod -l app=nse-kernel -l version=new -n ns-remote-nse-death-ip
Find new NSE pod:
NEW_NSE=$(kubectl get pods -l app=nse-kernel -l version=new -n ns-remote-nse-death-ip --template '{{range .items}}{{.metadata.name}}{{"\n"}}{{end}}')
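If the selector matches no pod, NEW_NSE ends up empty and the later exec fails with a confusing error. A small guard can fail fast instead; `require_nonempty` is a hypothetical helper, not part of the example.

```shell
# Hypothetical guard: fail fast when a variable the next step depends on is empty.
require_nonempty() {
  [ -n "$2" ] || { echo "missing: $1" >&2; return 1; }
}

# Usage after the lookup above (commented out so the helper can be sourced):
# require_nonempty NEW_NSE "${NEW_NSE}"
```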
Ping from NSC to new NSE:
kubectl exec pods/alpine -n ns-remote-nse-death-ip -- ping -c 4 172.16.1.102
Ping from new NSE to NSC:
kubectl exec ${NEW_NSE} -n ns-remote-nse-death-ip -- ping -c 4 172.16.1.103
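Right after the patch is applied, healing may still be in progress, so a first ping can fail even though the connection recovers moments later. A retry wrapper makes the check robust; the sketch below is an assumption (the `retry` name and the 30-attempt budget are not from the example).

```shell
# Hypothetical retry helper: re-run a command once per second until it
# succeeds or the attempt budget is spent.
retry() {
  local attempts=$1; shift
  local i
  for i in $(seq 1 "$attempts"); do
    "$@" && return 0
    sleep 1
  done
  return 1
}

# Example: wait for the healed datapath before asserting connectivity
# (commented out so the helper can be sourced without a cluster):
# retry 30 kubectl exec pods/alpine -n ns-remote-nse-death-ip -- ping -c 1 172.16.1.102
```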
Delete the namespace:
kubectl delete ns ns-remote-nse-death-ip