- About this repository
- How to use this repository
- Clone the repository
- Move to the local repository
- Build an image from the Dockerfile
- Update the SaltStack pillar
- Create a container
- Move to the container
- SaltStack configuration directory and configuration files
- Start salt-master and salt-minion
- Start a salt-proxy daemon for each device
- Check if the keys are accepted
- Test if the minion and proxies are up and responding to the master
- Grains module usage examples
- Pillar module usage examples
- About SaltStack targeting system
- About SaltStack modules
- About templates
- Napalm proxy usage examples
- Netmiko proxy usage examples
- pyeapi execution module usage examples
- Writing Execution Modules
- Troubleshooting
- Source code
- Move to the container
- Print basic information about the operating system
- List the installed python packages
- Check the SaltStack Version
- SaltStack help
- Verbose
- Start SaltStack in foreground with a debug log level
- Check log
- To kill a process
- tcpdump
- Watch the event bus
- Check port connectivity
- More content about SaltStack and Arista Networks
Arista EOS automation demo using SaltStack.
SaltStack runs in a single container; everything in this repository is designed for one single SaltStack container.
git clone https://github.com/arista-netdevops-community/saltstack-hello-world.git
cd saltstack-hello-world
docker build --tag salt_eos:1.5 .
List images and verify
docker images | grep salt_eos
Update the pillar with your devices IP/username/password
docker run -d -t --rm --name salt \
-p 5001:5001 -p 4505:4505 -p 4506:4506 \
-v $PWD/master:/etc/salt/master \
-v $PWD/proxy:/etc/salt/proxy \
-v $PWD/minion:/etc/salt/minion \
-v $PWD/pillar/.:/srv/pillar/. \
-v $PWD/states/.:/srv/salt/states/. \
-v $PWD/templates/.:/srv/salt/templates/. \
-v $PWD/eos/.:/srv/salt/eos \
-v $PWD/_modules/.:/srv/salt/_modules/. \
salt_eos:1.5
List containers and verify
docker ps | grep salt
docker exec -it salt bash
SaltStack default configuration directory
ls /etc/salt/
Using the above docker run command:
master configuration file
more /etc/salt/master
proxy configuration file
more /etc/salt/proxy
minion configuration file
more /etc/salt/minion
This can be done:
- using the python script start_saltstack.py from the host
- or manually from the container using
- Ubuntu services
- or SaltStack command-line
python3 start_saltstack.py
List all the services
service --status-all
We can use start/stop/restart/status with these services.
service salt-master start
service salt-master status
service salt-minion start
service salt-minion status
Start as a daemon (in background)
salt-master -d
salt-minion -d
ps -ef | grep salt
If you did not use the python script start_saltstack.py, you also need to start a salt-proxy daemon for each device:
salt-proxy --proxyid=leaf1 -d
salt-proxy --proxyid=leaf2 -d
salt-proxy --proxyid=spine1 -d
salt-proxy --proxyid=spine2 -d
ps -ef | grep proxy
Help
salt-key --help
To list all keys
salt-key -L
Run this command to accept one pending key
salt-key -a minion1 -y
Run this command to accept all pending keys
salt-key -A -y
Or set this in the master configuration file to auto-accept keys:
auto_accept: True
test.ping is not an ICMP ping: it simply checks that the minion or proxy responds to the master.
salt minion1 test.ping
salt leaf1 test.ping
salt '*' test.ping
salt 'leaf1' grains.items
salt 'leaf1' grains.ls
salt 'leaf1' grains.item os vendor version host
salt 'leaf1' pillar.ls
salt 'leaf1' pillar.items
salt 'leaf1' pillar.get pyeapi
salt 'leaf1' pillar.item pyeapi vlans
The SaltStack targeting system is very flexible.
salt -L "minion1, leaf1" test.ping
salt "leaf*" test.ping
salt '*' test.ping
salt -G 'os:eos' test.ping
salt -G 'os:eos' cmd.run 'uname'
salt -G 'os:eos' net.cli 'show version'
Include this in the master configuration file:
nodegroups:
  leaves: 'L@leaf1,leaf2'
  spines:
    - spine1
    - spine2
  eos: 'G@os:eos'
salt -N eos test.ping
salt -N leaves test.ping
salt -N spines test.ping
salt 'leaf1' sys.list_modules 'napalm*'
salt 'leaf1' sys.list_functions net
salt 'leaf1' sys.list_functions napalm
salt 'leaf1' sys.list_functions napalm_net
net and napalm_net are the same module.
Example with Napalm
salt 'leaf1' sys.doc net
salt 'leaf1' sys.doc net.traceroute
or
salt 'leaf1' net -d
salt 'leaf1' net.traceroute -d
The file vlans.j2 is in the master file server
salt '*' slsutil.renderer salt://vlans.j2 'jinja'
The file render.sls and the file vlans.j2 are in the master file server
salt -G 'os:eos' state.sls render
ls /srv/salt/eos/*cfg
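For illustration, the rendering step can be reproduced in plain Python. This is a sketch with sample data and an assumed pillar layout; the actual vlans.j2 template and pillar structure in the repository may differ:

```python
# Plain-Python equivalent of rendering a VLAN Jinja template from pillar data.
# The pillar layout and VLAN values here are assumed for illustration only.
pillar = {"vlans": {"10": "blue", "20": "red"}}  # sample pillar data

lines = []
for vlan_id, name in sorted(pillar["vlans"].items()):
    lines.append(f"vlan {vlan_id}")
    lines.append(f"   name {name}")
config = "\n".join(lines)
print(config)
```

In vlans.j2 the same loop would be written with Jinja `{% for %}` syntax, and slsutil.renderer would supply the pillar data at render time.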
This repository uses the Napalm proxy
Pillar example for Napalm proxy (pillar/leaf1.sls):
proxy:
  proxytype: napalm
  driver: eos
  host: 10.73.1.105
  username: ansible
  password: ansible
The Napalm proxy uses different modules to interact with network devices.
net and napalm_net are the same module.
net module source code
Examples:
We can use the net or napalm.net commands:
salt 'leaf*' net.load_config text='vlan 8' test=True
The file vlan.cfg is available in the master file server
salt 'leaf*' net.load_config filename='salt://vlan.cfg' test=True
salt 'leaf*' net.cli 'show version' 'show vlan'
salt 'leaf1' net.cli 'show vlan | json'
salt 'leaf1' net.cli 'show version' --out=json
salt 'leaf1' net.cli 'show version' --output=json
salt 'leaf1' net.cli 'show vlan' --output-file=show_vlan.txt
salt 'leaf1' net.cli 'show version' > show_version.txt
salt 'leaf1' net.lldp
salt 'leaf1' net.lldp interface='Ethernet1'
salt 'leaf1' net.arp
salt 'leaf1' net.connected
salt 'leaf1' net.facts
salt 'leaf1' net.interfaces
salt 'leaf1' net.ipaddrs
salt 'leaf1' net.config source=running --output-file=leaf1_running.cfg
Examples:
salt 'leaf1' napalm.alive
salt 'leaf1' napalm.pyeapi_run_commands 'show version' encoding=json
salt 'leaf1' napalm.pyeapi_run_commands 'show version' --out=raw
salt 'leaf1' napalm.pyeapi_run_commands 'show version' --out=json
napalm.pyeapi_run_commands forwards to pyeapi.run_commands.
The Netmiko execution module can be used with a Netmiko proxy
Netmiko execution module source code
This repository uses the Napalm proxy. You can replace it with a Netmiko proxy. Here's an example pillar for the Netmiko proxy:
proxy:
  proxytype: netmiko
  device_type: arista_eos
  host: spine1
  ip: 10.73.1.101
  username: ansible
  password: ansible
Examples:
salt '*' netmiko.send_command -d
salt 'spine1' netmiko.send_command 'show version'
The pyeapi execution module can be used to interact with Arista switches.
It is flexible enough to execute commands both when running under a pyeapi proxy and when running under a regular minion, by specifying the connection arguments (host, username, password, transport, etc.).
Examples:
salt 'leaf1' pyeapi.run_commands 'show version'
salt 'leaf1' pyeapi.get_config as_string=True
salt -G 'os:eos' state.sls collect_commands
ls /tmp/*/*.json
The file push_vlans.sls and the file vlans.j2 are in the master file server
salt 'leaf1' state.sls push_vlans
or
salt 'leaf1' state.apply push_vlans
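As a sketch, a state like push_vlans.sls could use the Napalm netconfig state to render and apply the template. This is assumed content, not the repository's actual state file:

```yaml
# Hypothetical sketch of push_vlans.sls using the Napalm netconfig state.
push_vlans:
  netconfig.managed:
    - template_name: salt://vlans.j2
```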
Verify:
salt 'leaf1' net.cli 'show vlan'
The file render.sls and the file vlans.j2 are in the master file server
salt -G 'os:eos' state.sls render
ls /srv/salt/eos/*cfg
The file push_config.sls is in the master file server
salt -G 'os:eos' state.sls push_config
A Salt execution module is a Python module placed in a directory called _modules at the root of the Salt fileserver.
In this setup, the _modules directory is /srv/salt/_modules, so the execution module _modules/custom_eos.py is /srv/salt/_modules/custom_eos.py.
salt 'leaf1' custom_eos.version
salt 'leaf1' custom_eos.model
If you create a new execution module, run this command to sync the execution modules placed in the _modules directory:
salt '*' saltutil.sync_modules
After loading the modules, you can use them
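For illustration, a custom execution module could look roughly like this. It is a hypothetical sketch, not the repository's actual _modules/custom_eos.py; it assumes the Napalm net.facts function is reachable through the __salt__ dictionary that the Salt loader injects:

```python
# Sketch of a custom execution module, e.g. _modules/custom_eos.py.
# __salt__ is injected by the Salt loader; net.facts is provided by the
# Napalm proxy and returns device facts under the 'out' key.

def version():
    """Return the EOS version from the Napalm facts."""
    return __salt__["net.facts"]()["out"]["os_version"]

def model():
    """Return the hardware model from the Napalm facts."""
    return __salt__["net.facts"]()["out"]["model"]

# Outside Salt, the loader injection can be emulated for a quick smoke test
# (sample values, not taken from a real device):
__salt__ = {"net.facts": lambda: {"out": {"os_version": "4.29.2F",
                                          "model": "cEOSLab"}}}
print(version(), model())
```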
https://github.com/saltstack/salt
docker exec -it salt bash
uname -a
lsb_release -a
pip3 list
pip3 freeze
salt --versions-report
salt --version
salt-master --version
salt-minion --version
salt-proxy --version
salt --help
Use -v to also display the job id:
salt 'leaf1' net.cli 'show version' 'show vlan' -v
salt-master -l debug
salt-minion -l debug
salt-proxy --proxyid=leaf1 -l debug
ps -ef | grep salt
more /var/log/salt/master
more /var/log/salt/proxy
more /var/log/salt/minion
tail -f /var/log/salt/master
ps -ef | grep salt
kill <PID>
Run this command on the master if you need to display received packets:
tcpdump -i <interface> port <port> -vv
Example
tcpdump -i eth0 port 5001 -vv
Run this command on the master if you need to watch the event bus:
salt-run state.event pretty=True
Run this command to fire an event:
salt "minion1" event.fire_master '{"data": "message to be sent in the event"}' 'tag/blabla'
From outside the container, check port connectivity with the nc command:
From the host where the container runs:
nc -v -z <salt_container_ip> 4505
nc -v -z <salt_container_ip> 4506
Example if the container ip is 172.17.0.2:
nc -v -z 172.17.0.2 4505
nc -v -z 172.17.0.2 4506
From another host:
nc -v -z <host_that_has_the_container> 4505
nc -v -z <host_that_has_the_container> 4506
Example if the host ip where the container runs is 10.83.28.180:
nc -v -z 10.83.28.180 4505
nc -v -z 10.83.28.180 4506