An Ansible role to back up MySQL or MariaDB databases to an S3 bucket or SFTP server.
- The `s3cmd` command must be installed and executable by the user running the role.
- The `mysql` command must be installed and executable by the user running the role.
- The `scp` command must be installed and executable by the user running the role.
- The `ssh` command must be installed and executable by the user running the role.

Please see `s3cmd`'s documentation on how to install the command.
The following collections must be installed:
- cloud.common
- amazon.aws
- community.general
- community.mysql
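
For convenience, the collections can be listed in a `requirements.yml` and installed with `ansible-galaxy collection install -r requirements.yml`. A minimal sketch (this file is not shipped with the role):

```yaml
# requirements.yml -- illustrative only; not part of this role.
collections:
  - cloud.common
  - amazon.aws
  - community.general
  - community.mysql
```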
This role requires one dictionary as configuration, `mysql_backup`:
```yaml
mysql_backup:
  s3cmd: "/usr/local/bin/s3cmd"
  debug: true
  stopOnFailure: false
  sources: {}
  remotes: {}
  backups: []
```
Where:

- `s3cmd` is the full path to the `s3cmd` executable. Optional, defaults to `s3cmd`.
- `debug` is `true` to enable debugging output. Optional, defaults to `false`.
- `stopOnFailure` is `true` to stop the entire role if any one backup fails. Optional, defaults to `false`.
- `sources` is a dictionary of database sources. Required.
- `remotes` is a dictionary of remote upload locations. Required.
- `backups` is a list of backups to perform. Required.
In this role, "sources" specify the databases from which backups are generated. Each must have a unique key which is later used in the `mysql_backup.backups` list.
```yaml
mysql_backup:
  sources:
    my-prod-db:
      host: "mysql.example.com"
      port: 3306
      usernameFile: "/path/to/username.txt"
      passwordFile: "/path/to/password.txt"
      tlsCertFile: "/path/to/tls.cert"
      tlsKeyFile: "/path/to/tls.key"
      retryCount: 3
      retryDelay: 30
```
Where, in each entry:

- `host` is the hostname of the database server.
- `port` is the port with which to connect to the database.
- `usernameFile` is the path to a file containing the username. Optional if `username` is specified.
- `username` is the username with which to connect to the database. Ignored if `usernameFile` is specified.
- `passwordFile` is the path to a file containing the password. Optional if `password` is specified.
- `password` is the password with which to connect to the database. Ignored if `passwordFile` is specified. (An inline-credentials sketch follows this list.)
- `tlsCertFile` is the path to the TLS certificate necessary to connect to the database. Optional.
- `tlsKeyFile` is the path to the TLS certificate key necessary to connect to the database. Optional.
- `retryCount` is the number of times to retry `platform` commands if they fail. Optional, defaults to `3`.
- `retryDelay` is the time in seconds to wait before retrying a failed `platform` command. Optional, defaults to `30`.
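
Credentials may also be supplied inline via `username` and `password` rather than from files. A minimal sketch, with placeholder hostnames and values:

```yaml
mysql_backup:
  sources:
    my-dev-db:
      host: "mysql.dev.example.com"
      port: 3306
      # Inline credentials instead of usernameFile/passwordFile; values are placeholders.
      username: "backup_user"
      password: "not-a-real-password"
```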
In this role, "remotes" are upload destinations for backups. This role supports S3 or SFTP as remotes. Each remote must have a unique key which is later used in the `mysql_backup.backups` list.
```yaml
- hosts: servers
  vars:
    mysql_backup:
      remotes:
        example-s3-bucket:
          type: "s3"
          bucket: "my-s3-bucket"
          provider: "AWS"
          accessKeyFile: "/path/to/aws-s3-key.txt"
          secretKeyFile: "/path/to/aws-s3-secret.txt"
          hostBucket: "my-example-bucket.s3.example.com"
          s3Url: "https://my-example-bucket.s3.example.com"
          region: "us-east-1"
        sftp.example.com:
          type: "sftp"
          host: "sftp.example.com"
          user: "example_user"
          keyFile: "/config/id_example_sftp"
          pubKeyFile: "/config/id_example_sftp.pub"
```
For `s3` type remotes:

- `bucket` is the name of the S3 bucket.
- `provider` is the S3 provider. See rclone's S3 documentation on `--s3-provider` for possible values. Optional, defaults to `AWS`.
- `accessKeyFile` is the path to a file containing the access key. Optional if `accessKey` is specified.
- `accessKey` is the value of the access key necessary to access the bucket. Ignored if `accessKeyFile` is specified.
- `secretKeyFile` is the path to a file containing the secret key. Optional if `secretKey` is specified.
- `secretKey` is the value of the secret key necessary to access the bucket. Ignored if `secretKeyFile` is specified.
- `region` is the AWS region in which the bucket resides. Required if using AWS S3, may be optional for other providers.
- `endpoint` is the S3 endpoint to use. Optional if using AWS, required for other providers (see the sketch after this list).
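
As an illustration only, a non-AWS remote might combine inline credentials with the `endpoint` key. The provider name, hostnames, and key values below are placeholders; consult rclone's `--s3-provider` documentation for valid provider values:

```yaml
mysql_backup:
  remotes:
    example-other-bucket:
      type: "s3"
      bucket: "my-other-bucket"
      # Placeholder provider; see rclone's --s3-provider documentation for valid values.
      provider: "Other"
      # Inline credentials instead of accessKeyFile/secretKeyFile; values are placeholders.
      accessKey: "EXAMPLEACCESSKEY"
      secretKey: "EXAMPLESECRETKEY"
      # endpoint is required for non-AWS providers.
      endpoint: "https://s3.example.net"
```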
For `sftp` type remotes:

- `host` is the hostname of the SFTP server. Required.
- `user` is the username necessary to log in to the SFTP server. Required.
- `keyFile` is the path to a file containing the SSH private key. Required.
- `pubKeyFile` is the path to a file containing the SSH public key. Required for database backups, ignored for file backups.
The `mysql_backup.backups` list specifies the database backups to perform, referencing the `mysql_backup.sources` and `mysql_backup.remotes` sections for connectivity details.
```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      path: "path/in/source/bucket"
      disabled: false
      targets: []
```
Where:

- `name` is the display name of the backup. Optional, but makes the logs easier to read.
- `source` is the name of the key under `mysql_backup.sources` from which to generate the backup. Required.
- `database` is the name of the database to back up. Required.
- `path` is the path in the source bucket from which to get files. Optional.
- `disabled` is `true` to disable (skip) the backup. Optional, defaults to `false`.
- `targets` is a list of remotes and additional destination information about where to upload backups. Required.
Backup targets reference a key in `mysql_backup.remotes`, and combine that with additional information used to upload this specific backup.
```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      path: "path/in/target/bucket"
      disabled: false
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          disabled: true
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          disabled: false
```
Where:

- `remote` is the key under `mysql_backup.remotes` to use when uploading the backup. Required.
- `path` is the path on the remote to which to upload the backup. Optional.
- `disabled` is `true` to skip uploading to the specified `remote`. Optional, defaults to `false`.
When a backup completes, you have the option to ping a URL via HTTP:
```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "my-prod-db"
      database: "my-site-live"
      path: "path/in/target/bucket"
      healthcheckUrl: "https://pings.example.com/path/to/service"
      disabled: false
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          disabled: true
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          disabled: false
```
Where:

- `healthcheckUrl` is the URL to ping when the backup completes successfully. Optional.
Backups are uploaded to the remote with the filename `<database_name>.<host>.<port>-0.sql.gz`. Often, you'll want to retain previous backups in case an older backup can aid in research or recovery. This role supports retaining and rotating multiple backups using the `retainCount` key.
```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "example.com"
      database: "my-site-live"
      element: "database"
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          retainCount: 3
          disabled: true
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          retainCount: 3
          disabled: false
```
Where:

- `retainCount` is the total number of backups to retain in the directory. Optional. Defaults to `1`, or no rotation.
During a backup, if `retainCount` is set:

- The backup ending in `<retainCount - 1>.sql.gz` is deleted.
- Starting with `<retainCount - 2>.sql.gz`, each backup is renamed, incrementing the ending index.
- The new backup is uploaded with a `0` index as `<database_name>.<host>.<port>-0.sql.gz`.
This feature works with both S3 and SFTP remotes.
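
For example, with `retainCount: 3` and existing backups `my-site-live.mysql.example.com.3306-0.sql.gz`, `-1.sql.gz`, and `-2.sql.gz` already on the remote, the next run deletes the `-2` backup, renames `-1` to `-2` and `-0` to `-1`, then uploads the new dump as `my-site-live.mysql.example.com.3306-0.sql.gz`. (The filenames here follow the pattern above and are illustrative.)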
Sometimes an upload will fail due to transient network issues. You can control retry behavior using the `retries` and `retryDelay` keys on each target:
```yaml
mysql_backup:
  backups:
    - name: "example.com database"
      source: "example.com"
      database: "my-site-live"
      element: "database"
      targets:
        - remote: "example-s3-bucket"
          path: "example.com/database"
          retries: 3
          retryDelay: 30
        - remote: "sftp.example.com"
          path: "backups/example.com/database"
          retries: 3
          retryDelay: 30
```
Where:

- `retries` is the total number of retries to perform if the upload fails. Defaults to no retries.
- `retryDelay` is the number of seconds to wait between retries. Defaults to no delay.
Putting it all together, a complete example playbook:

```yaml
- hosts: servers
  vars:
    mysql_backup:
      sources:
        my-prod-db:
          host: "mysql.example.com"
          port: 3306
          usernameFile: "/path/to/username.txt"
          passwordFile: "/path/to/password.txt"
          tlsCertFile: "/path/to/tls.cert"
          tlsKeyFile: "/path/to/tls.key"
          retryCount: 3
          retryDelay: 30
      remotes:
        example-s3-bucket:
          type: "s3"
          bucket: "my-s3-bucket"
          provider: "AWS"
          accessKeyFile: "/path/to/aws-s3-key.txt"
          secretKeyFile: "/path/to/aws-s3-secret.txt"
          hostBucket: "my-example-bucket.s3.example.com"
          s3Url: "https://my-example-bucket.s3.example.com"
          region: "us-east-1"
        sftp.example.com:
          type: "sftp"
          host: "sftp.example.com"
          user: "example_user"
          keyFile: "/config/id_example_sftp"
          pubKeyFile: "/config/id_example_sftp.pub"
      backups:
        - name: "example.com database"
          source: "my-prod-db"
          database: "my-site-live"
          path: "path/in/target/bucket"
          healthcheckUrl: "https://pings.example.com/path/to/service"
          disabled: false
          targets:
            - remote: "example-s3-bucket"
              path: "example.com/database"
              retainCount: 3
              disabled: true
            - remote: "sftp.example.com"
              path: "backups/example.com/database"
              retainCount: 3
              disabled: false
  roles:
    - { role: ten7.mysql_backup }
```
This role is licensed under GPL v3.

This role was created by TEN7.