netassert: network security testing for DevSecOps workflows
NOTE: this framework is in beta state as we move towards our first 1.0 release. Please file any issues you find and note the version used.
This is a security testing framework for fast, safe iteration on firewall, routing, and NACL rules for Kubernetes (Network Policies, services) and non-containerised hosts (cloud provider instances, VMs, bare metal). It aggressively parallelises `nmap` to test outbound network connections and ports from any accessible host, container, or Kubernetes pod by joining the same network namespace as the instance under test.
- netassert
- Example
- Configuration
The alternative is to `exec` into a container and `curl`, or spin up new pods with the same selectors and `curl` from there. This has lots of problems (extra tools in the container image, or tool installation despite immutable root filesystems, or egress prevention). `netassert` aims to fix this:
- does not rely on a dedicated tool speaking the correct target protocol (e.g. doesn't need `curl`, GRPC client, etc)
- does not bloat the pod under test or increase the pod's attack surface with non-production tooling
- works with `FROM scratch` containers
- is parallelised to run in near-constant time for large or small test suites
- does not appear to the Kubernetes API server that it's changing the system under test
- uses TCP/IP (layers 3 and 4) so does not show up in HTTP logs (e.g. `nginx` access logs)
- produces TAP output for humans and build servers
More information and background can be found in this presentation from Configuration Management Camp 2018.
```
Usage: netassert [options] [filename]

Options:
  --image        Name of test image
  --no-pull      Don't pull test container on target nodes
  --timeout      Integer time to wait before giving up on tests (default 120)
  --ssh-user     SSH user for kubelet host
  --ssh-options  Optional options to pass to the 'gcloud compute ssh' command
  --known-hosts  A known_hosts file (default: ${HOME}/.ssh/known_hosts)
  --debug        More debug
  -h --help      Display this message
```
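For example, to point netassert at a specific test image with a longer timeout and a non-default SSH user (the image name and user below are placeholders, not project defaults):

```bash
# image name and SSH user are illustrative - substitute your own values
./netassert \
  --image my-registry/netassert-test:latest \
  --timeout 300 \
  --ssh-user core \
  test/test-k8s.yaml
```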
Required tools on the test host:

- `jq`
- `yj` (checked in to the root of this repo, direct download)
- `parallel`
- `timeout`

These will be moved into a container runner in the future. `docker` is also required.
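A quick way to check that these are available before running tests (a convenience sketch, not part of netassert itself; `yj` may instead be the copy checked in to the repo root):

```bash
# verify that the local dependencies are on the PATH
for tool in jq yj parallel timeout docker; do
  command -v "$tool" >/dev/null || echo "missing: $tool"
done
```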
Example: create three test deployments, each exposed as a service:

```bash
for DEPLOYMENT_TYPE in \
  frontend \
  microservice \
  database \
  ; do
  DEPLOYMENT="test-${DEPLOYMENT_TYPE}"
  kubectl run "${DEPLOYMENT}" \
    --image=busybox \
    --labels=app=web,role="${DEPLOYMENT_TYPE}" \
    --requests='cpu=10m,memory=32Mi' \
    --expose \
    --port 80 \
    -- sh -c "while true; do { printf 'HTTP/1.1 200 OK\r\n\n I am a ${DEPLOYMENT_TYPE}\n'; } | nc -l -p 80; done"
  kubectl scale deployment "${DEPLOYMENT}" --replicas=3
done
```
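You can confirm the pods and services are running before testing (plain `kubectl`, nothing netassert-specific):

```bash
kubectl get deployments,services -l app=web
kubectl get pods -l app=web -o wide
```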
As we haven't applied network policies, this should FAIL:

```bash
./netassert test/test-k8s.yaml
```
Ensure your user has SSH access to the node names listed by `kubectl get nodes`. To change the SSH user, set `--ssh-user MY_USER`. To configure your SSH keys, use DNS-resolvable names (or `/etc/hosts` entries) for the nodes, and/or add login directives to `~/.ssh/config`:

```
# ~/.ssh/config
Host node-1
  HostName 192.168.10.1
  User sublimino
  IdentityFile ~/.ssh/node-1-key.pem
```
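To sanity-check access, load the key into your agent and try a non-interactive login (the node name and key path here match the example config above):

```bash
ssh-add ~/.ssh/node-1-key.pem                      # load the node's key into the agent
ssh -o BatchMode=yes node-1 true && echo "SSH OK"  # fails fast if keys or config are wrong
```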
Apply the network policies that the test suite expects:

```bash
kubectl apply -f resource/net-pol/web-deny-all.yaml
kubectl apply -f resource/net-pol/test-services-allow.yaml
```
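To see what was applied, the policies can be inspected with standard `kubectl` commands:

```bash
kubectl get networkpolicy
kubectl describe networkpolicy web-deny-all
```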
Now that we've applied the policies that these tests reflect, this should pass:

```bash
./netassert test/test-k8s.yaml
```
For manual verification of the test results we can `exec` and `curl` in the pods under test (see above for reasons that this is a bad idea):

```bash
kubectl exec -it test-frontend-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-microservice
kubectl exec -it test-microservice-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-database
kubectl exec -it test-database-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-frontend
```
These should all pass as they have equivalent network policies.
The network policies do not allow the `frontend` pods to communicate with the `database` pods.
Let's verify that manually - this should FAIL:

```bash
kubectl exec -it test-frontend-$YOUR_POD_ID -- wget -qO- --timeout=2 http://test-database
```
netassert takes a single YAML file as input. This file lists the hosts to test from, and describes the hosts and ports that each should be able to reach.
It can test from any reachable host, and from inside Kubernetes pods.
A simple example:
```yaml
host:                # child keys must be ssh-accessible hosts
  localhost:         # host to run test from, must be accessible via SSH
    8.8.8.8: UDP:53  # host and ports to test for access
```
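Saved to a file (the filename below is arbitrary), this can be run directly from the test host:

```bash
./netassert simple-dns-test.yaml
```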
A full example:
```yaml
host:                        # child keys must be ssh-accessible hosts
  localhost:                 # host to run test from, can be a remote host
    8.8.8.8: UDP:53          # host and ports to test from localhost
    google.co.uk: 443        # if no protocol is specified then TCP is implied
    control-plane.io: 80, 81, 443, 22  # ports can be comma or space delimited
    kubernetes.io:           # this can be anything SSH can access
      - 443                  # ports can be provided as a list
      - 80
    localhost:               # this tests ports on the local machine
      - 22
      - -999                 # ports can be negated with `-`, this checks that 999 TCP is not open
      - -TCP:30731           # TCP is implied, but can be specified
      - -UDP:1234            # UDP must be explicitly stated, otherwise TCP assumed
      - -UDP:555

  control-plane.io:          # this must be accessible via ssh (perhaps via ssh-agent), or `localhost`
    8.8.8.8: UDP:53          # this tests 8.8.8.8:53 is accessible from control-plane.io
    8.8.4.4: UDP:53          # this tests 8.8.4.4:53 is accessible from control-plane.io
    google.com: 443          # this tests google.com:443 is accessible from control-plane.io

k8s:                         # child keys must be Kubernetes entities
  deployment:                # only deployments currently supported
    test-frontend:           # pod name, defaults to `default` namespace
      test-microservice: 80  # `test-microservice` is the DNS name of the target service
      test-database: -80     # test-frontend should not be able to access test-database port 80

    new-namespace:test-microservice:    # `new-namespace` is the namespace name
      test-database.new-namespace: 80   # longer DNS names can be used for other namespaces
      test-frontend.default: 80

    default:test-database:
      test-frontend.default.svc.cluster.local: 80  # full DNS names can be used
      test-microservice.default.svc.cluster.local: -80
```
To test that `localhost` can reach `8.8.8.8` and `8.8.4.4` on port 53 UDP:

```yaml
host:
  localhost:
    8.8.8.8: UDP:53
    8.8.4.4: UDP:53
```
What this test does:

- Starts on the test runner host
- Pulls the test container
- Checks that port `UDP:53` is open on `8.8.8.8` and `8.8.4.4`
- Shows TAP results
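Since netassert drives `nmap` for these checks, the equivalent manual probe from the test runner host would be something like (UDP scans generally require root):

```bash
# manual equivalent of the UDP:53 checks above
sudo nmap -sU -p 53 8.8.8.8 8.8.4.4
```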
Test that `control-plane.io` can reach `github.com`:

```yaml
host:
  control-plane.io:
    github.com:
      - 22
      - 443
```
What this test does:

- Starts on the test runner host
- SSHes to `control-plane.io`
- Pulls the test container
- Checks that ports `22` and `443` are open
- Returns TAP results to the test runner host
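The manual equivalent, run over SSH from the test host, would look roughly like this (assuming `nmap` is installed on `control-plane.io`; netassert instead ships its tooling in the test container):

```bash
# manual TCP reachability check from the remote host
ssh control-plane.io nmap -Pn -p 22,443 github.com
```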
Multiple source hosts can be combined in one file. Test that `localhost` can reach `control-plane.io`, and that `control-plane.io` can reach `github.com`, both on port 22:

```yaml
host:
  localhost:
    control-plane.io:
      - 22
  control-plane.io:
    github.com:
      - 22
```
Test that a pod can reach `8.8.8.8`:

```yaml
k8s:
  deployment:
    some-namespace:my-pod:
      8.8.8.8: UDP:53
```
Test that `my-pod` in namespace `default` can reach `other-pod` in `other-namespace`, and that `other-pod` cannot reach `my-pod`:

```yaml
k8s:
  deployment:
    default:my-pod:
      other-namespace:other-pod: 80
    other-namespace:other-pod:
      default:my-pod: -80
```
How it works:

- from the test host, run `./netassert test/test-k8s.yaml`
- netassert looks up the deployments, pods, and namespaces to test in the Kube API
- for each pod, it SSHes to a worker node running an instance
- it connects a test container to the container's network namespace
- it runs that pod's test suite from inside the network namespace
- results are reported via TAP
- the test host gathers the TAP results and reports them
- the same process applies to non-Kubernetes instances accessible via SSH
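A rough sketch of that per-pod mechanism, for orientation only (node, pod, and image names are illustrative assumptions; the real implementation derives them from the YAML file and the Kubernetes API, and runs the probes in parallel):

```bash
# Names below are illustrative assumptions, not values netassert hard-codes
NODE="worker-1"                                  # node running the pod under test
POD="test-frontend-abc123"                       # pod under test
TEST_IMAGE="my-registry/netassert-test:latest"   # placeholder test image

# Find a container belonging to the pod via the Kubernetes-set docker labels,
# then run the test image inside that container's network namespace and probe a target
CONTAINER_ID=$(ssh "$NODE" docker ps -q --filter "label=io.kubernetes.pod.name=${POD}" | head -n 1)
ssh "$NODE" docker run --rm --net="container:${CONTAINER_ID}" "$TEST_IMAGE" \
  nmap -Pn -p 80 test-microservice
```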