How to Backup your MySQL database on a Bitnami WordPress site

I recently managed to explode my WordPress site (whilst trying to upgrade PHP). Luckily I had created an AMI a month ago – but I had written a few articles since then and wanted to avoid rewriting them. So below is a method to back up your WordPress MySQL database to S3 and restore it onto a new WordPress server. Note: I actually mounted the corrupt instance as a volume and did this the long way around.

Step 1: Create an S3 bucket to store the backup

$ aws s3api create-bucket \
>     --bucket andrewbakerninjabackupdb \
>     --region af-south-1 \
>     --create-bucket-configuration LocationConstraint=af-south-1
Unable to locate credentials. You can configure credentials by running "aws configure".
$ aws configure
AWS Access Key ID [None]: XXXXX
AWS Secret Access Key [None]: XXXX
Default region name [None]: af-south-1
Default output format [None]: 
$ aws s3api create-bucket     --bucket andrewbakerninjabackupdb     --region af-south-1     --create-bucket-configuration LocationConstraint=af-south-1
{
    "Location": "http://andrewbakerninjabackupdb.s3.amazonaws.com/"
}
$ 

Note: To get your API credentials, go to IAM in the AWS console, select Users, choose your user and then select Create access key (under the Security credentials tab).
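Alternatively, if you already have a machine with admin credentials configured, you can create the key from the CLI (the user name below is just an example):

# Create an access key for an existing IAM user (the user name here is just an example)
aws iam create-access-key --user-name my-backup-user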

Step 2: Create a backup of your MySQL database and copy it to S3

For full backups, follow the below script (note: this won't be restorable across MySQL versions, as it will include the system “mysql” database):

# Check MySQL is installed and note the version (you cannot restore a dump across MySQL versions)
mysql --version
# First get your MySQL credentials
sudo cat /home/bitnami/bitnami_credentials
Welcome to the Bitnami WordPress Stack

******************************************************************************
The default username and password is XXXXXXX.
******************************************************************************

You can also use this password to access the databases and any other component the stack includes.

# Now create a backup using this password
$ mysqldump -A -u root -p > backupajb.sql
Enter password: 
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami      17 Jun 15  2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami      27 Jun 15  2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami      12 Jun 15  2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami      13 Nov 18  2020 bitnami_application_password
-r-------- 1 bitnami bitnami     424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 3635504 Aug 26 07:24 backupajb.sql

# Next copy the file to your S3 bucket
$ aws s3 cp backupajb.sql s3://andrewbakerninjabackupdb
upload: ./backupajb.sql to s3://andrewbakerninjabackupdb/backupajb.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09    3635504 backupajb.sql
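
Before relying on the dump, it is worth a quick sanity check: mysqldump normally writes a “-- Dump completed” comment as its final line (unless you passed --skip-comments), so a truncated file is easy to spot:

# The last line of a healthy dump should read something like: -- Dump completed on 2022-08-26 ...
tail -n 1 backupajb.sql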

Or, for partial backups, follow the below to back up just the bitnami_wordpress database:

# Login to database
mysql -u root -p
show databases;
+--------------------+
| Database           |
+--------------------+
| bitnami_wordpress  |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
exit
$ mysqldump -u root -p --databases bitnami_wordpress > backupajblight.sql
Enter password: 
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami      17 Jun 15  2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami      27 Jun 15  2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami      12 Jun 15  2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami      13 Nov 18  2020 bitnami_application_password
-r-------- 1 bitnami bitnami     424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 2635204 Aug 26 07:24 backupajblight.sql
# Next copy the file to your S3 bucket
$ aws s3 cp backupajblight.sql s3://andrewbakerninjabackupdb
upload: ./backupajblight.sql to s3://andrewbakerninjabackupdb/backupajblight.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09    2635204 backupajblight.sql
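
If the dump is large, you can optionally compress it before uploading (a minor variation on the above; the .gz file name is just an example):

# Dump and compress in one step, then upload the compressed file
mysqldump -u root -p --databases bitnami_wordpress | gzip > backupajblight.sql.gz
aws s3 cp backupajblight.sql.gz s3://andrewbakerninjabackupdb
# To restore later: gunzip -c backupajblight.sql.gz | mysql -u root -p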

Step 3: Restore the file on your new WordPress server

Note: If you need the password, use the cat command from Step 2.

# Copy the file down from S3
$ aws s3 cp s3://andrewbakerninjabackupdb/backupajb.sql restoreajb.sql --region af-south-1
# Restore the database
$ mysql -u root -p < restoreajb.sql
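
If you restored the full dump (which includes the system “mysql” database and its grants), it is worth reloading privileges and restarting the stack afterwards. On Bitnami images the control script normally lives at /opt/bitnami/ctlscript.sh, though the exact service names can vary by image (newer stacks use mariadb):

# Reload the grant tables after a full restore
$ mysql -u root -p -e "FLUSH PRIVILEGES;"
# Restart the Bitnami services (service names may differ on your image)
$ sudo /opt/bitnami/ctlscript.sh restart mysql
$ sudo /opt/bitnami/ctlscript.sh restart apache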

Step 4: Optional – Automate the Backups using Cron and S3 Versioning

This part is unnecessary (and one could credibly argue that AWS Backup is the way to go – but I am not a fan of its clunky UI). Below I enable S3 versioning and create a cron job to back up the database every week. I will also set the S3 lifecycle policy to delete anything older than 90 days.

# Enable bucket versioning
aws s3api put-bucket-versioning --bucket andrewbakerninjabackupdb --versioning-configuration Status=Enabled
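# (Optional) Confirm versioning is now enabled on the bucket
aws s3api get-bucket-versioning --bucket andrewbakerninjabackupdb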
# Now set the bucket lifecycle policy
nano lifecycle.json 

Now paste the following policy into nano and save it (as lifecycle.json):

{
    "Rules": [
        {
            "Prefix": "",
            "Status": "Enabled",
            "Expiration": {
                "Days": 90
            },
            "ID": "NinetyDays"
        }
    ]
}

Next add the lifecycle policy to delete anything older than 90 days (as per above policy):

aws s3api put-bucket-lifecycle --bucket andrewbakerninjabackupdb --lifecycle-configuration file://lifecycle.json
## View the policy
aws s3api get-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb
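
One caveat: with versioning enabled, the expiration rule above only places a delete marker on the current version, so older versions will linger. If you also want noncurrent versions removed after 90 days, a rule along these lines (using the newer put-bucket-lifecycle-configuration call, which expects a Filter rather than a bare Prefix) should do it; adjust to taste:

{
    "Rules": [
        {
            "ID": "NinetyDaysAllVersions",
            "Filter": { "Prefix": "" },
            "Status": "Enabled",
            "Expiration": { "Days": 90 },
            "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
        }
    ]
}

# Apply the updated policy
aws s3api put-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb --lifecycle-configuration file://lifecycle.json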

Now add a cron job to run every week:

## List the cron jobs
crontab -l
## Edit the cron jobs
crontab -e
## Enter these lines.
## Back up early on Saturday morning (00:01 full dump, 02:01 WordPress-only dump) and copy them to S3 at 03:00 and 04:00
## (cron format: minute hour day-of-month month weekday, where Sunday is day zero)
1 0 * * SAT /opt/bitnami/mysql/bin/mysqldump -A -uroot -pPASSWORD > backupajbcron.sql
1 2 * * SAT /opt/bitnami/mysql/bin/mysqldump -u root -pPASSWORD --databases bitnami_wordpress > backupajbcronlight.sql
0 3 * * SAT aws s3 cp backupajbcron.sql s3://andrewbakerninjabackupdb
0 4 * * SAT aws s3 cp backupajbcronlight.sql s3://andrewbakerninjabackupdb
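
One gotcha: cron runs with a minimal PATH and from the user's home directory, so if the S3 copy silently fails, switch to absolute paths and log the output somewhere you can inspect. A sketch of the idea (the aws binary path is an assumption; check yours with “which aws”):

## Same jobs with absolute paths and a log file
1 0 * * SAT /opt/bitnami/mysql/bin/mysqldump -A -uroot -pPASSWORD > /home/bitnami/backupajbcron.sql 2>> /home/bitnami/backup.log
0 3 * * SAT /usr/local/bin/aws s3 cp /home/bitnami/backupajbcron.sql s3://andrewbakerninjabackupdb >> /home/bitnami/backup.log 2>&1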

How to Fix SSH error: WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED

If the fingerprint of your remote host changes, you will see the following error message:

 ~ % ssh -i "mykey.pem" bitnami@my_host.af-south-1.compute.amazonaws.com
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@    WARNING: REMOTE HOST IDENTIFICATION HAS CHANGED!     @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
IT IS POSSIBLE THAT SOMEONE IS DOING SOMETHING NASTY!
Someone could be eavesdropping on you right now (man-in-the-middle attack)!
It is also possible that a host key has just been changed.
The fingerprint for the ED25519 key sent by the remote host is
SHA256:S60CvpE17ri+E594StxXBQcNIrga4Nb7uX4s7BPr3dw.
Please contact your system administrator.
Add correct host key in /Users/user_id/.ssh/known_hosts to get rid of this message.
Offending ED25519 key in /Users/user_id/.ssh/known_hosts:2
Host key for my_host.af-south-1.compute.amazonaws.com has changed and you have requested strict checking.
Host key verification failed.

There are many ways to fix this; the easiest is simply to delete your “known_hosts” file. This means you will just need to accept new fingerprints for all of your SSH hosts again. Yes, this is very lazy…

rm ~/.ssh/known_hosts
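
A less drastic alternative is to remove only the offending host's entry, which keeps the rest of your known hosts intact:

# Remove only the stale key for the offending host
ssh-keygen -R my_host.af-south-1.compute.amazonaws.com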

Tip: Using the Watch command to poll a URL

If you want to quickly test a URL for changes, the Linux watch command coupled with curl is a really simple way to hit a URL every n seconds (I use this for blue/green deployment testing, to make sure there is no downtime when cutting over):

# Install watch command using homebrew
brew install watch
# Poll andrewbaker.ninja every second
watch -n 1 curl andrewbaker.ninja
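
If you only care about whether the site is up (rather than the full HTML), a variation that just prints the HTTP status code each second is handy during a cutover:

# Print just the HTTP response code every second
watch -n 1 'curl -s -o /dev/null -w "%{http_code}\n" andrewbaker.ninja'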

How to trigger Scaling Events using the stress-ng Command

If you are testing how your autoscaling policies respond to CPU load, then a really simple way to test this is using the “stress-ng” command. Note: this is a very crude mechanism and wherever possible you should try to generate synthetic application load instead.

#!/bin/bash

# DESCRIPTION: After updating from the repo, installs stress-ng, a tool used to create various system load for testing purposes.
sudo apt update -y
# Install stress-ng
sudo apt install -y stress-ng

# CPU spike: Run a CPU spike for 5 seconds
uptime
stress-ng --cpu 4 --timeout 5s --metrics-brief
uptime

# Disk Test: Start N (2) workers continually writing, reading and removing temporary files:
stress-ng --disk 2 --timeout 5s --metrics-brief

# Memory stress test
# Populate memory. Use mmap N bytes per vm worker, the default is 256MB. 
# You can also specify the size as % of total available memory or in units of 
# Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g:
# Note: The --vm 2 will start N workers (2 workers) continuously calling 
# mmap/munmap and writing to the allocated memory. Note that this can cause 
# systems to trip the kernel OOM killer on Linux if there is not enough
# physical memory and swap available
stress-ng --vm 2 --vm-bytes 1G --timeout 5s

# Combination Stress
# To run for 5 seconds with 4 cpu stressors, 2 io stressors and 1 vm 
# stressor using 1GB of virtual memory, enter:
stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 5s --metrics-brief
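
While a stress run is in progress, you can watch the scaling event from another terminal; the auto scaling group name below is a placeholder for your own:

# Poll the lifecycle state of the instances in your auto scaling group every 10 seconds
watch -n 10 "aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names my-asg-name --query 'AutoScalingGroups[0].Instances[].LifecycleState'"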