MacBook: View the list of DNS nameservers used for resolution

To view the list of nameservers your Mac is using, simply open Terminal and paste the following:

myMac ~ % scutil --dns | grep 'nameserver\[[0-9]*\]'
  nameserver[0] : 100.64.0.1
  nameserver[0] : 9.9.9.9
  nameserver[1] : 1.1.1.1
  nameserver[2] : 8.8.8.8
  nameserver[0] : 9.9.9.9
  nameserver[1] : 1.1.1.1
  nameserver[2] : 8.8.8.8

Alternatively, you can copy the DNS servers to clipboard directly from the command line (using pbcopy):

networksetup -getdnsservers Wi-Fi | pbcopy
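
If you just want the unique resolver IPs (for example, to paste into another tool), you can parse the scutil output directly; a small sketch:

# Print each configured resolver IP once
scutil --dns | awk '/nameserver\[[0-9]+\]/ {print $3}' | sort -u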

How to Backup your MySQL database on a Bitnami WordPress site

I recently managed to explode my WordPress site (whilst trying to upgrade PHP). Luckily, I had created an AMI a month earlier, but I had written a few articles since then and wanted to avoid rewriting them. So below is a method to back up your WordPress MySQL database to S3 and restore it onto a new WordPress server. Note: I actually mounted the corrupt instance as a volume and did this the long way around.

Step 1: Create an S3 bucket to store the backup

$ aws s3api create-bucket \
>     --bucket andrewbakerninjabackupdb \
>     --region af-south-1 \
>     --create-bucket-configuration LocationConstraint=af-south-1
Unable to locate credentials. You can configure credentials by running "aws configure".
$ aws configure
AWS Access Key ID [None]: XXXXX
AWS Secret Access Key [None]: XXXX
Default region name [None]: af-south-1
Default output format [None]: 
$ aws s3api create-bucket     --bucket andrewbakerninjabackupdb     --region af-south-1     --create-bucket-configuration LocationConstraint=af-south-1
{
    "Location": "http://andrewbakerninjabackupdb.s3.amazonaws.com/"
}
$ 

Note: To get your API credentials, simply go to IAM, select the Users tab and then select Create Access Key.
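
Once configured, you can sanity-check that the CLI is picking up your credentials before creating the bucket:

# Confirm the CLI can authenticate (prints your account ID and user ARN)
aws sts get-caller-identity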

Step 2: Create a backup of your MySQL database and copy it to S3

For full backups, follow the below script (note: this won't be restorable across MySQL versions, as it will include the system “mysql” db):

# Check the installed MySQL version (note: you cannot restore across versions)
mysql --version
# First get your mysql credentials
sudo cat /home/bitnami/bitnami_credentials
Welcome to the Bitnami WordPress Stack

******************************************************************************
The default username and password is XXXXXXX.
******************************************************************************

You can also use this password to access the databases and any other component the stack includes.

# Now create a backup using this password
$ mysqldump -A -u root -p > backupajb.sql
Enter password: 
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami      17 Jun 15  2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami      27 Jun 15  2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami      12 Jun 15  2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami      13 Nov 18  2020 bitnami_application_password
-r-------- 1 bitnami bitnami     424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 3635504 Aug 26 07:24 backupajb.sql

# Next copy the file to your S3 bucket
$ aws s3 cp backupajb.sql s3://andrewbakerninjabackupdb
upload: ./backupajb.sql to s3://andrewbakerninjabackupdb/backupajb.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09    3635504 backupajb.sql
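
Tip: for larger databases, it may be worth compressing the dump before uploading; a minimal sketch (the file name is just an example):

# Dump all databases and gzip the output in one pass
mysqldump -A -u root -p | gzip > backupajb.sql.gz
# Upload the compressed dump
aws s3 cp backupajb.sql.gz s3://andrewbakerninjabackupdb
# To restore later, decompress on the fly:
# gunzip < backupajb.sql.gz | mysql -u root -p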

Alternatively, for partial backups, follow the below to back up just the bitnami_wordpress database:

# Login to database
mysql -u root -p
show databases;
+--------------------+
| Database           |
+--------------------+
| bitnami_wordpress  |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
exit
$ mysqldump -u root -p --databases bitnami_wordpress > backupajblight.sql
Enter password: 
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami      17 Jun 15  2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami      27 Jun 15  2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami      12 Jun 15  2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami      13 Nov 18  2020 bitnami_application_password
-r-------- 1 bitnami bitnami     424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 2635204 Aug 26 07:24 backupajblight.sql
# Next copy the file to your S3 bucket
$ aws s3 cp backupajblight.sql s3://andrewbakerninjabackupdb
upload: ./backupajblight.sql to s3://andrewbakerninjabackupdb/backupajblight.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09    2635204 backupajblight.sql

Step 3: Restore the file on your new WordPress server

Note: If you need the password, use the cat command from Step 2.

# Copy the file down from S3
$ aws s3 cp s3://andrewbakerninjabackupdb/backupajb.sql restoreajb.sql --region af-south-1
# Restore the db
$ mysql -u root -p < restoreajb.sql
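
To confirm the restore worked, you can list the databases and check that bitnami_wordpress is back:

$ mysql -u root -p -e "SHOW DATABASES;"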

Step 4: Optional – Automate the Backups using Cron and S3 Versioning

This part is optional (and one could credibly argue that AWS Backup is the way to go, but I am not a fan of its clunky UI). Below I enable S3 versioning and create a cron job to back up the database every week. I also set the S3 lifecycle policy to delete anything older than 90 days.

# Enable bucket versioning
aws s3api put-bucket-versioning --bucket andrewbakerninjabackupdb --versioning-configuration Status=Enabled
# Now set the bucket lifecycle policy
nano lifecycle.json 

Now paste the following policy into nano and save it (as lifecycle.json):

{
    "Rules": [
        {
            "ID": "NinetyDays",
            "Status": "Enabled",
            "Filter": {
                "Prefix": ""
            },
            "Expiration": {
                "Days": 90
            }
        }
    ]
}

Next, apply the lifecycle policy to delete anything older than 90 days (as per the above policy):

aws s3api put-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb --lifecycle-configuration file://lifecycle.json
## View the policy
aws s3api get-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb

Now add a cron job to run every week:

## List the cron jobs
crontab -l
## Edit the cron jobs
crontab -e
## Enter these lines.
## Take the backups on Saturday at 00:01 and 02:01, then copy them to S3 at 03:00 and 04:00
## (cron format: minute hour day-of-month month weekday)
1 0 * * SAT /opt/bitnami/mysql/bin/mysqldump -A -u root -pPASSWORD > backupajbcron.sql
1 2 * * SAT /opt/bitnami/mysql/bin/mysqldump -u root -pPASSWORD --databases bitnami_wordpress > backupajbcronlight.sql
0 3 * * SAT aws s3 cp backupajbcron.sql s3://andrewbakerninjabackupdb
0 4 * * SAT aws s3 cp backupajbcronlight.sql s3://andrewbakerninjabackupdb
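
Note that cron runs with a minimal environment, so it is safer to use absolute paths in the crontab entries and to capture output in a log. For example, a variant of the first S3 copy line (the aws binary path is an assumption; check yours with which aws):

0 3 * * SAT /usr/local/bin/aws s3 cp /home/bitnami/backupajbcron.sql s3://andrewbakerninjabackupdb >> /home/bitnami/backup.log 2>&1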

MacBook: How to get your Mac to behave like MS Windows and restore minimised windows when using Command + Tab (Alt + Tab)

For those who like to maximise or minimise their windows on a Mac, you will likely be frustrated by the default behaviour of your MacBook (in that it doesn't restore/focus minimised or maximised windows). Below are a few steps to make window restores on your Mac behave like Microsoft Windows:

Install Homebrew (if you don't have it):

## Install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
## IMPORTANT: Once the install finishes run the two commands displayed in the terminal window
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> $HOME/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Install AltTab:

brew install --cask alt-tab

Next, run the AltTab application (click the magnifying glass in the top right of your MacBook's menu bar, near the clock, and then type “AltTab”). When it starts up, it will ask you for permission to access various system accessibility functions (i.e. window previews). If you don't adjust the settings, you will need to switch from using “Command + Tab” to using “Option + Tab”; read the note below to adjust the settings…

Note: I recommend the following tweaks…

If you want to use the default Windows-style tab keystrokes, you will need to change the “Controls” tab setting called “Hold” from “Option” to “Command” as per below:

Next, go to the Appearance tab and change the Theme to “Windows 10” (as it's hard to see the focused window with the Mac style):

Note: detailed documents on AltTab can be found here: https://alt-tab-macos.netlify.app/

How to Automatically Turn your Bluetooth off and on when you Close and Open your MacBook

If you're like me, little things bother you. When I turn on my Bluetooth headset and it connects to my MacBook while it's closed/sleeping, I get very frustrated. So I wrote a simple script to fix this behaviour. After running the script below, when you close the lid on your MacBook it will automatically turn Bluetooth off; when you open your MacBook it will automatically re-enable Bluetooth. Simple 🤓

If you need to install Homebrew on your Mac, then run this:

## Install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
## IMPORTANT: Once the install finishes run the two commands displayed in the terminal window
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> $HOME/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Script to automatically enable/disable bluetooth:

## Install the bluetooth util and sleepwatcher
brew install sleepwatcher blueutil
## This creates a file which switches Bluetooth off when the MacBook lid is closed
echo "$(which blueutil) -p 0" > ~/.sleep
## This creates a file which switches Bluetooth back on when the lid is opened
echo "$(which blueutil) -p 1" > ~/.wakeup
## This makes both files runnable
chmod 755 ~/.sleep ~/.wakeup
## Finally, restart the sleepwatcher service (to pick up the new files)
brew services restart sleepwatcher
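
You can verify the setup at any point with blueutil and brew services:

## Print the current Bluetooth power state (1 = on, 0 = off)
blueutil -p
## Check that the sleepwatcher service is running
brew services list | grep sleepwatcher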

Tip: Using the Watch command to poll a URL

If you want to quickly test a URL for changes, then the Linux watch command coupled with curl is a really simple way to hit a URL every n seconds (I use this for blue/green deployment testing, to make sure there is no downtime when cutting over):

# Install watch command using homebrew
brew install watch
# Poll andrewbaker.ninja every second
watch -n 1 curl andrewbaker.ninja
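
If you only care about availability (rather than seeing the page body change), a variant that polls just the HTTP status code works well:

# Print only the HTTP status code every second (expect a steady stream of 200s)
watch -n 1 'curl -s -o /dev/null -w "%{http_code}" andrewbaker.ninja'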

How to Trigger Scaling Events using the Stress-ng Command

If you are testing how your autoscaling policies respond to CPU load, then a really simple way to test this is using the “stress-ng” command. Note: this is a very crude mechanism, and wherever possible you should try to generate synthetic application load instead.

#!/bin/bash

# DESCRIPTION: Updates the package index, then installs stress-ng, a tool used to create various kinds of system load for testing purposes.
# Note: the below assumes a Debian/Ubuntu based AMI; on Amazon Linux, install via yum (with the EPEL repo) instead
sudo apt update -y
# Install stress-ng
sudo apt install stress-ng -y

# CPU spike: Run a CPU spike for 5 seconds
uptime
stress-ng --cpu 4 --timeout 5s --metrics-brief
uptime

# Disk Test: Start N (2) workers continually writing, reading and removing temporary files:
stress-ng --disk 2 --timeout 5s --metrics-brief

# Memory stress test
# Populate memory. Use mmap N bytes per vm worker, the default is 256MB. 
# You can also specify the size as % of total available memory or in units of 
# Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g:
# Note: The --vm 2 will start N workers (2 workers) continuously calling 
# mmap/munmap and writing to the allocated memory. Note that this can cause 
# systems to trip the kernel OOM killer on Linux if there is not enough 
# physical memory and swap available
stress-ng --vm 2 --vm-bytes 1G --timeout 5s

# Combination Stress
# To run for 5 seconds with 4 cpu stressors, 2 io stressors and 1 vm 
# stressor using 1GB of virtual memory, enter:
stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 5s --metrics-brief
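
Tip: if you want to hold the CPU at a fixed level just above your scale-out threshold (rather than pinning all cores at 100%), the --cpu-load flag is useful; for example, assuming a 70% CPU alarm:

# Hold 4 workers at ~80% CPU for 10 minutes to trip a 70% scale-out alarm
stress-ng --cpu 4 --cpu-load 80 --timeout 600s --metrics-brief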

Using TPC-H tools to Create Test Data for AWS Redshift and AWS EMR

If you need to test out your big data tools, below is a useful set of scripts that I have used in the past for AWS EMR and Redshift:

# Install git and make
sudo yum install make git -y
# Install the tpch-kit
git clone https://github.com/gregrahn/tpch-kit
cd tpch-kit/dbgen
sudo yum install gcc -y
# Compile the tpch-kit
make OS=LINUX
# Go home
cd ~
# Now make your EMR data
mkdir emrdata
# Tell tpch to use this dir
export DSS_PATH=$HOME/emrdata
cd tpch-kit/dbgen
# Now run dbgen in verbose mode, with tables (orders), 10gb data size
./dbgen -v -T o -s 10
# Move the data to an S3 bucket
cd $HOME/emrdata
aws s3api create-bucket --bucket andrewbakerbigdata --region af-south-1 --create-bucket-configuration LocationConstraint=af-south-1
aws s3 cp $HOME/emrdata s3://andrewbakerbigdata/emrdata --recursive
cd $HOME
mkdir redshiftdata
# Tell tpch to use this dir
export DSS_PATH=$HOME/redshiftdata
# Now make your Redshift data
cd tpch-kit/dbgen
# Now run dbgen in verbose mode, with tables (orders), 40gb data size
./dbgen -v -T o -s 40
# These are big files, so let's find out how big they are and split them
# Count lines
cd $HOME/redshiftdata
wc -l orders.tbl
# Now split orders into 15m lines per file
split -d -l 15000000 -a 4 orders.tbl orders.tbl.
# Now split line items
wc -l lineitem.tbl
split -d -l 60000000 -a 4 lineitem.tbl lineitem.tbl.
# Now clean up the master files
rm orders.tbl
rm lineitem.tbl
# Move the split data to an S3 bucket
aws s3 cp $HOME/redshiftdata s3://andrewbakerbigdata/redshiftdata --recursive
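
Once the split files are in S3, you can load them into Redshift with a COPY command. Below is a rough sketch run via psql (the cluster endpoint, database, user and IAM role ARN are all placeholders, and the orders table must already exist). Because the S3 path is treated as a key prefix, all the orders.tbl.* parts load in parallel:

# Load all orders.tbl.* parts in parallel (dbgen output is pipe-delimited)
psql -h mycluster.abc123.af-south-1.redshift.amazonaws.com -p 5439 -U awsuser -d dev \
  -c "COPY orders FROM 's3://andrewbakerbigdata/redshiftdata/orders.tbl.' IAM_ROLE 'arn:aws:iam::111111111111:role/myRedshiftRole' DELIMITER '|';"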

Setting up SSH for ec2-user on your WordPress sites

So after getting frustrated (and even recreating my EC2 instances) due to a “Permission denied (publickey)” error, I finally realised that the WordPress builds are by default set up for SSH using the bitnami account (or at least my build was).

This means each time I login using ec2-user I get:

sudo ssh -i CPT_Default_Key.pem ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com
ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com: Permission denied (publickey).

Being a limited human being, I will never cope with two user names, so moving over to a standard login name (ec2-user) is the way to go. It is relatively simple; just follow the below steps (after logging in using the bitnami account):

# Create ec2-user sharing the bitnami user's uid/gid (run this while logged in as bitnami)
sudo useradd -s /bin/bash -o -u $(id -u) -g $(id -g) ec2-user

sudo mkdir ~ec2-user/
sudo cp -rp ~bitnami/.ssh ~ec2-user/
sudo cp -rp ~bitnami/.bashrc ~ec2-user/
sudo cp -rp ~bitnami/.profile ~ec2-user/

Next, you need to copy your public key into the authorized_keys file:

cat mypublickey.pub >> /home/ec2-user/.ssh/authorized_keys
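
sshd is strict about key file permissions, so it is also worth making sure the .ssh directory and authorized_keys file are locked down:

sudo chmod 700 ~ec2-user/.ssh
sudo chmod 600 ~ec2-user/.ssh/authorized_keys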

Next, to allow the ec2-user to execute commands as the root user, add the new user account to the bitnami-admins group by executing the following command while logged in as the bitnami user:

sudo usermod -aG bitnami-admins ec2-user

Linux: Quick guide to the cd command – for Windows dudes :)

Ok, so I am a Windows dude, and only after Docker and K8s came along did I start to get all the hype around Linux. To be fair, Linux is special and I have been blown away by the engineering effort behind this OS (and I am also glad to leave my Daniel Appleman Win32 API book on the shelf for a few years!).

What surprises me about Linux is the number of shortcuts, so before I forget them I am going to document a few of my favorites (the context here is that I use WSL2 a lot, and these are my favorite navigation commands).

Exchanging files between Linux and Windows:

This is a bit of a pain, so I just create a symbolic link to the Windows root directory in my Linux home directory, so that I can easily copy files back and forth.

cd ~
ln -s /mnt/c/ mywindowsroot
cd mywindowsroot
ls
# Copy everything from the Windows root folder into my WSL home directory
cd ~
cp -r mywindowsroot/. .

Show the previous directory

echo $OLDPWD

Switch back to your previous directory

cd -

Move to Home Directory

cd ~
or just use
cd

Pushing and Popping Directories

pushd and popd are shell builtins in bash (and certain other shells): pushd saves the current working directory onto a stack, and popd pops the most recent directory off the stack and changes back to it. This is very handy when you're jumping around but don't want to create symbolic links.

# Push the current directory onto the stack (you can also enter an absolute directory here, like pushd /var/www)
pushd .
# Go to the home dir
cd
ls
# Now move back to this directory
popd
ls
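
You can inspect the stack that pushd maintains at any point with the dirs builtin:

# Show the directory stack, one numbered entry per line
dirs -v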