Macbook: mtr (My traceroute), an alternative ICMP route-tracing tool that works with Zscaler / Zero Trust architectures

If you're on a zero trust network agent like Zscaler or Netskope, you will find that traceroute doesn't work as expected. The steps below show how to install mtr (My traceroute) using brew:

## Install xcode
xcode-select --install
## Install mtr
brew install mtr


Next we need to change the owner of the mtr packet helper and its permissions (otherwise you will need to run it as root every time):

sudo chown root /opt/homebrew/Cellar/mtr/0.95/sbin/mtr-packet
sudo chmod 4755 /opt/homebrew/Cellar/mtr/0.95/sbin/mtr-packet
## Symlink to the new mtr binaries instead of the default macOS tools (note: adjust the 0.95 version in these paths to match your installed version)
ln -s /opt/homebrew/Cellar/mtr/0.95/sbin/mtr /opt/homebrew/bin/
ln -s /opt/homebrew/Cellar/mtr/0.95/sbin/mtr-packet /opt/homebrew/bin/
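Homebrew installs mtr under a versioned path, so the 0.95 paths above will break after an upgrade. A version-agnostic sketch (assuming a default Homebrew install under /opt/homebrew) uses brew --prefix, which always resolves to the current version:

```shell
# Resolve the current mtr install path, whatever the version
MTR_PREFIX="$(brew --prefix mtr)"
# Make mtr-packet setuid root so mtr can run without sudo
sudo chown root "$MTR_PREFIX/sbin/mtr-packet"
sudo chmod 4755 "$MTR_PREFIX/sbin/mtr-packet"
# Symlink both binaries onto the PATH (-f overwrites stale links from older versions)
ln -sf "$MTR_PREFIX/sbin/mtr" "$MTR_PREFIX/sbin/mtr-packet" /opt/homebrew/bin/
```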


To run a rolling traceroute using ICMP echoes, use the following:

mtr andrewbaker.ninja
Keys:  Help   Display mode   Restart statistics   Order of fields   quit
                                       Packets               Pings
 Host                                Loss%   Snt   Last   Avg  Best  Wrst StDev

The issue is that Zscaler will attempt to tunnel this traffic. This can be observed by viewing your current routes:

netstat -rn
Internet:
Destination        Gateway            Flags           Netif Expire
default            192.168.0.1        UGScg             en0
1                  100.64.0.1         UGSc            utun6
2/7                100.64.0.1         UGSc            utun6
4/6                100.64.0.1         UGSc            utun6
8/5                100.64.0.1         UGSc            utun6
10/12              100.64.0.1         UGSc            utun6
10.1.30.3          100.64.0.1         UGHS            utun6
10.1.30.15         100.64.0.1         UGHS            utun6
10.1.31/24         100.64.0.1         UGSc            utun6
10.1.31.3          100.64.0.1         UGHS            utun6
10.1.31.41         100.64.0.1         UGHS            utun6
10.1.31.101        100.64.0.1         UGHS            utun6
10.1.31.103        100.64.0.1         UGHS            utun6
10.10.0.11         100.64.0.1         UGHS            utun6
10.10.0.12         100.64.0.1         UGHS            utun6
10.10.160.86       100.64.0.1         UGHS            utun6

As you can see from the above, the routes are being sent to the Zscaler tunnel interface "utun6" (the interface name is unique to your machine but will look similar). To get around this you can specify the interface mtr should run from with the "-I" flag. Below we instruct mtr to use en0 (the physical LAN/Wi-Fi interface):

mtr andrewbaker.ninja -I en0
                                       Packets               Pings
 Host                                Loss%   Snt   Last   Avg  Best  Wrst StDev
 1. unfisecuregateway                 1.8%    56    2.0   2.2   1.5   4.5   0.6
 2. 41.71.48.65                       0.0%    56    4.2   8.1   3.1  28.3   6.0
 3. 41.74.176.249                     0.0%    56    4.2   4.5   3.4   8.2   0.9
 4. 196.10.140.105                    0.0%    55    3.0   4.0   2.6  18.8   2.4
 5. 52.93.57.88                       0.0%    55    5.1   6.3   3.7  12.4   2.0
 6. 52.93.57.103                      0.0%    55    4.9   4.1   2.6  12.5   1.5
 7. (waiting for reply)
 8. 150.222.94.230                    0.0%    55    4.0   4.8   3.1  13.8   1.8
 9. 150.222.94.243                    0.0%    55    4.3   5.3   2.9  37.6   5.2
10. 150.222.94.242                    0.0%    55   15.2   4.9   2.9  15.2   2.2
11. 150.222.94.237                    0.0%    55    3.4   5.7   3.1  18.9   2.9
12. 150.222.93.218                    0.0%    55    4.6   5.5   3.8  11.4   1.3
13. (waiting for reply)
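If you're not sure which interface carries your default (non-tunnelled) route, you can ask the routing table directly. A sketch assuming macOS's route output format:

```shell
# Print the interface used for the default route (e.g. en0)
route -n get default | awk '/interface:/ {print $2}'
```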

mtr supports TCP, UDP and SCTP based traceroutes. This is useful when testing path latency and packet loss in external or internal networks where QoS is applied to different protocols and ports. Multiple flags are available (see man mtr), but for a TCP-based trace use -T (indicates TCP should be used) and -P (the port to trace to):

mtr andrewbaker.ninja -T -P 443 -I en0

Ping: specifying the source address

Ping supports specifying the source address you would like to initiate the ping from. The "-S" flag indicates that the following IP is the source IP address the ping should be sent from. This is useful if you want to ping an internal resource while bypassing a route manipulator such as Zscaler.

ping outlook.office.com -S 10.220.64.37
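Note that -S expects an IP address rather than an interface name; on macOS you can look the address up with ipconfig getifaddr. A sketch (the interface name en0 is an assumption, adjust to your setup):

```shell
# Find the IPv4 address bound to en0, then ping from that source address
SRC_IP="$(ipconfig getifaddr en0)"
ping -S "$SRC_IP" outlook.office.com
```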

Macbook: View the list of DNS nameservers used for resolution

To view the list of nameservers your Mac is using, simply open Terminal and paste the following:

myMac ~ % scutil --dns | grep 'nameserver'
  nameserver[0] : 100.64.0.1
  nameserver[0] : 9.9.9.9
  nameserver[1] : 1.1.1.1
  nameserver[2] : 8.8.8.8
  nameserver[0] : 9.9.9.9
  nameserver[1] : 1.1.1.1
  nameserver[2] : 8.8.8.8

Alternatively, you can copy the DNS servers to clipboard directly from the command line (using pbcopy):

networksetup -getdnsservers Wi-Fi | pbcopy

How to Backup your MySQL database on a Bitnami WordPress site

I recently managed to explode my WordPress site (whilst trying to upgrade PHP). Luckily I had created an AMI a month earlier, but I had written a few articles since then and wanted to avoid rewriting them. So below is a method to back up your WordPress MySQL database to S3 and restore it onto a new WordPress server. Note: I actually mounted the corrupt instance as a volume and did this the long way around.

Step 1: Create an S3 bucket to store the backup

$ aws s3api create-bucket \
>     --bucket andrewbakerninjabackupdb \
>     --region af-south-1 \
>     --create-bucket-configuration LocationConstraint=af-south-1
Unable to locate credentials. You can configure credentials by running "aws configure".
$ aws configure
AWS Access Key ID [None]: XXXXX
AWS Secret Access Key [None]: XXXX
Default region name [None]: af-south-1
Default output format [None]: 
$ aws s3api create-bucket     --bucket andrewbakerninjabackupdb     --region af-south-1     --create-bucket-configuration LocationConstraint=af-south-1
{
    "Location": "http://andrewbakerninjabackupdb.s3.amazonaws.com/"
}
$ 

Note: To get your API credentials, simply go to IAM, select the Users tab and then select Create Access Key.

Step 2: Create a backup of your MySQL database and copy it to S3

For full backups follow the script below (note: this won't be restorable across MySQL versions, as it includes the system "mysql" database):

# Check mysql is installed and its version (note: you cannot restore across versions)
mysql --version
# First get your mysql credentials
sudo cat /home/bitnami/bitnami_credentials
Welcome to the Bitnami WordPress Stack

******************************************************************************
The default username and password is XXXXXXX.
******************************************************************************

You can also use this password to access the databases and any other component the stack includes.

# Now create a backup using this password
$ mysqldump -A -u root -p > backupajb.sql
Enter password: 
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami      17 Jun 15  2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami      27 Jun 15  2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami      12 Jun 15  2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami      13 Nov 18  2020 bitnami_application_password
-r-------- 1 bitnami bitnami     424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 3635504 Aug 26 07:24 backupajb.sql

# Next copy the file to your S3 bucket
$ aws s3 cp backupajb.sql s3://andrewbakerninjabackupdb
upload: ./backupajb.sql to s3://andrewbakerninjabackupdb/backupajb.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09    3635504 backupajb.sql
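Dumps compress well, so you can also gzip on the fly and stream straight to S3 without writing an intermediate file. A sketch (the bucket name matches the one created above):

```shell
# Dump, compress and upload in one pipeline (aws s3 cp reads stdin when the source is "-")
mysqldump -A -u root -p | gzip | aws s3 cp - s3://andrewbakerninjabackupdb/backupajb.sql.gz
# To restore later: download, gunzip and pipe into mysql
aws s3 cp s3://andrewbakerninjabackupdb/backupajb.sql.gz - | gunzip | mysql -u root -p
```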

OR, for partial backups, follow the below to back up just the bitnami_wordpress database:

# Login to database
mysql -u root -p
show databases;
+--------------------+
| Database           |
+--------------------+
| bitnami_wordpress  |
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
exit
$ mysqldump -u root -p --databases bitnami_wordpress > backupajblight.sql
Enter password: 
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami      17 Jun 15  2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami      27 Jun 15  2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami      12 Jun 15  2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami      13 Nov 18  2020 bitnami_application_password
-r-------- 1 bitnami bitnami     424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 2635204 Aug 26 07:24 backupajblight.sql
# Next copy the file to your S3 bucket
$ aws s3 cp backupajblight.sql s3://andrewbakerninjabackupdb
upload: ./backupajblight.sql to s3://andrewbakerninjabackupdb/backupajblight.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09    2635204 backupajblight.sql

Step 3: Restore the file on your new wordpress server

Note: If you need the password, use the cat command from Step 2.

# Copy the file down from S3
$ aws s3 cp s3://andrewbakerninjabackupdb/backupajbcron.sql restoreajb.sql --region af-south-1
# Restore the db
$ mysql -u root -p < restoreajb.sql

Step 4: Optional – Automate the Backups using Cron and S3 Versioning

This part is optional (and one could credibly argue that AWS Backup is the way to go, but I am not a fan of its clunky UI). Below I enable S3 versioning and create a cron job to back up the database every week. I also set the S3 lifecycle policy to delete anything older than 90 days.

# Enable bucket versioning
aws s3api put-bucket-versioning --bucket andrewbakerninjabackupdb --versioning-configuration Status=Enabled
# Now set the bucket lifecycle policy
nano lifecycle.json 

Now paste the following policy into nano and save it (as lifecycle.json):

{
    "Rules": [
        {
            "ID": "NinetyDays",
            "Filter": {
                "Prefix": ""
            },
            "Status": "Enabled",
            "Expiration": {
                "Days": 90
            }
        }
    ]
}

Next apply the lifecycle policy to delete anything older than 90 days (as per the above policy):

aws s3api put-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb --lifecycle-configuration file://lifecycle.json
## View the policy
aws s3api get-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb

Now add a CronJob to run every week:

## List the cron jobs
crontab -l
## Edit the cron jobs
crontab -e
## Enter these lines. 
## Back up on Saturday at 00:01 (full) and 02:01 (WordPress db only), then copy to S3 at 03:00 and 04:00 (cron format: min hour day month weekday; Sunday is day zero)
1 0 * * SAT /opt/bitnami/mysql/bin/mysqldump -A -uroot -pPASSWORD > backupajbcron.sql
1 2 * * SAT /opt/bitnami/mysql/bin/mysqldump -u root -pPASSWORD --databases bitnami_wordpress > backupajbcronlight.sql
0 3 * * SAT aws s3 cp backupajbcron.sql s3://andrewbakerninjabackupdb
0 4 * * SAT aws s3 cp backupajbcronlight.sql s3://andrewbakerninjabackupdb
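The cron lines above overwrite the same file each week; since versioning is enabled on the bucket that's fine, but you can also date-stamp the dumps so each week's backup is its own object. A sketch of a single script you could call from one cron entry (PASSWORD and the paths are carried over from the entries above):

```shell
#!/bin/bash
# Weekly backup: dump the WordPress database to a dated file and push it to S3
BACKUP_FILE="backupajb-$(date +%F).sql"
/opt/bitnami/mysql/bin/mysqldump -u root -pPASSWORD --databases bitnami_wordpress > "$BACKUP_FILE"
aws s3 cp "$BACKUP_FILE" s3://andrewbakerninjabackupdb/
rm -f "$BACKUP_FILE"
```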

Macbook: How to get your Mac to behave like MS Windows and restore minimised windows when using Command + Tab (Alt + Tab)

For those who like to maximise or minimise their windows on a Mac, you will likely be frustrated by the default behaviour of macOS (it doesn't restore/focus minimised or maximised windows). Below are a few steps to make window restores on your Mac behave like Microsoft Windows.

Install Homebrew (if you don't have it):

## Install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
## IMPORTANT: Once the install finishes run the two commands displayed in the terminal window
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> $HOME/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Install AltTab:

brew install --cask alt-tab

Next run the AltTab application (click the magnifying glass/Spotlight icon in the top right of your menu bar (near the clock) and then type "AltTab"). When it starts up it will ask you for permission to access various system accessibility functions (i.e. window preview). If you don't adjust the settings you will need to switch from using "Command + Tab" to using "Option + Tab"; read the note below to adjust the settings.

Note: I recommend the following tweaks…

If you want to use the default Windows-style tab keystrokes, you will need to change the "Controls" tab setting called "Hold" from "Option" to "Command".

Next, go to the Appearance tab and change the Theme to "Windows 10" (it's hard to see the focused window with the Mac style).

Note: detailed documents on AltTab can be found here: https://alt-tab-macos.netlify.app/

How to Automatically Turn your Bluetooth off and on when you Close and Open your MacBook

If you're like me, little things bother you. When I turn on my Bluetooth headset and it connects to my MacBook while it's closed/sleeping, I get very frustrated. So I wrote a simple script to fix this behaviour. After running the script below, closing the lid on your MacBook will automatically turn Bluetooth off, and opening it will automatically re-enable Bluetooth. Simple 🤓

If you need to install brew/homebrew on your mac then run this:

## Install homebrew
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
## IMPORTANT: Once the install finishes run the two commands displayed in the terminal window
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> $HOME/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"

Script to automatically enable/disable bluetooth:

## Install the bluetooth util and sleepwatcher
brew install sleepwatcher blueutil
## This creates a file which switches bluetooth off when the macbook lid is closed
echo "$(which blueutil) -p 0" > ~/.sleep
## This creates a file which switches on bluetooth when the lid is open
echo "$(which blueutil) -p 1" > ~/.wakeup
## This makes both files executable
chmod 755 ~/.sleep ~/.wakeup
## Finally restart the sleepwatcher service (to pickup the new files)
brew services restart sleepwatcher
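To sanity-check the setup, you can confirm that both hook files exist, are executable, and contain the blueutil command (an optional quick check, not part of the setup itself):

```shell
# Verify the sleep/wake hooks are in place and executable
for f in ~/.sleep ~/.wakeup; do
  if [ -x "$f" ]; then
    echo "$f: $(cat "$f")"
  else
    echo "$f is missing or not executable" >&2
  fi
done
```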

Tip: Using the Watch command to poll a URL

If you want to quickly test a URL for changes, the Linux watch command coupled with curl is a really simple way to hit a URL every n seconds (I use this for blue/green deployment testing, to make sure there is no downtime when cutting over):

# Install watch command using homebrew
brew install watch
# Poll andrewbaker.ninja every second
watch -n 1 curl andrewbaker.ninja
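If you want response codes and timings rather than the page body, curl's -w format flag works well inside watch. A sketch (the URL is the one from above; a bounded plain loop is shown as an alternative if you don't have watch installed):

```shell
# Poll every second, showing only the HTTP status code and total request time
watch -n 1 "curl -s -o /dev/null -w '%{http_code} in %{time_total}s' https://andrewbaker.ninja"

# Without watch, a plain loop does the same job (here limited to 5 polls)
for i in 1 2 3 4 5; do
  curl -s -o /dev/null -w '%{http_code}\n' https://andrewbaker.ninja
  sleep 1
done
```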

How to trigger Scaling Events using the Stress-ng Command

If you are testing how your autoscaling policies respond to CPU load, a really simple way to generate load is the "stress-ng" command. Note: this is a very crude mechanism and wherever possible you should try to generate synthetic application load instead.

#!/bin/bash

# DESCRIPTION: After updating from the repo, installs stress-ng, a tool used to create various system loads for testing purposes.
sudo apt update
# Install stress-ng (on yum-based distros use: sudo yum install stress-ng)
sudo apt install stress-ng -y

# CPU spike: Run a CPU spike for 5 seconds
uptime
stress-ng --cpu 4 --timeout 5s --metrics-brief
uptime

# Disk Test: Start N (2) workers continually writing, reading and removing temporary files:
stress-ng --disk 2 --timeout 5s --metrics-brief

# Memory stress test
# Populate memory. Use mmap N bytes per vm worker, the default is 256MB. 
# You can also specify the size as % of total available memory or in units of 
# Bytes, KBytes, MBytes and GBytes using the suffix b, k, m or g:
# Note: The --vm 2 will start N workers (2 workers) continuously calling 
# mmap/munmap and writing to the allocated memory. Note that this can cause 
# systems to trip the kernel OOM killer on Linux if there is not enough 
# physical memory and swap available
stress-ng --vm 2 --vm-bytes 1G --timeout 5s

# Combination Stress
# To run for 5 seconds with 4 cpu stressors, 2 io stressors and 1 vm 
# stressor using 1GB of virtual memory, enter:
stress-ng --cpu 4 --io 2 --vm 1 --vm-bytes 1G --timeout 5s --metrics-brief

Using TPC-H tools to Create Test Data for AWS Redshift and AWS EMR

If you need to test out your big data tools, below is a useful set of scripts that I have used in the past for AWS EMR and Redshift:

# Install git
sudo yum install make git -y
# Install the tpch-kit
git clone https://github.com/gregrahn/tpch-kit
cd tpch-kit/dbgen
sudo yum install gcc -y
# Compile the tpch-kit
make OS=LINUX
# Go home
cd ~
# Now make your EMR data
mkdir emrdata
# Tell tpch to use this dir
export DSS_PATH=$HOME/emrdata
cd tpch-kit/dbgen
# Now run dbgen in verbose mode, with the orders table, 10GB data size
./dbgen -v -T o -s 10
# Move the data to an S3 bucket
cd $HOME/emrdata
aws s3api create-bucket --bucket andrewbakerbigdata --region af-south-1 --create-bucket-configuration LocationConstraint=af-south-1
aws s3 cp $HOME/emrdata s3://andrewbakerbigdata/emrdata --recursive
cd $HOME
mkdir redshiftdata
# Tell tpch to use this dir
export DSS_PATH=$HOME/redshiftdata
# Now make your Redshift data
cd tpch-kit/dbgen
# Now run dbgen in verbose mode, with the orders table, 40GB data size
./dbgen -v -T o -s 40
# These are big files, so let's find out how big they are and split them
# Count lines
cd $HOME/redshiftdata
wc -l orders.tbl
# Now split orders into 15m lines per file
split -d -l 15000000 -a 4 orders.tbl orders.tbl.
# Now split line items
wc -l lineitem.tbl
split -d -l 60000000 -a 4 lineitem.tbl lineitem.tbl.
# Now clean up the master files
rm orders.tbl
rm lineitem.tbl
# Move the split data to an S3 bucket
aws s3 cp $HOME/redshiftdata s3://andrewbakerbigdata/redshiftdata --recursive
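Before loading into Redshift, it's worth sanity-checking that the split files cover every row of the originals. The behaviour of split -d -l can be demonstrated on a small sample (the same flags used on the 15m/60m line files above):

```shell
# Demonstrate the split behaviour on a 5-line sample: 2 lines per chunk -> 3 chunks
printf 'r1\nr2\nr3\nr4\nr5\n' > sample.tbl
split -d -l 2 -a 4 sample.tbl sample.tbl.
ls sample.tbl.0*            # sample.tbl.0000 sample.tbl.0001 sample.tbl.0002
# Total lines across the chunks must equal the original row count
cat sample.tbl.0* | wc -l   # 5
```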

Setting up SSH for ec2-user on your WordPress sites

So after getting frustrated (and even recreating my EC2 instances) due to a "Permission denied (publickey)" error, I finally realised that the WordPress build is, by default, set up for SSH using the bitnami account (or at least my build was).

This means each time I login using ec2-user I get:

sudo ssh -i CPT_Default_Key.pem ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com
ec2-user@ec2-13-244-140-33.af-south-1.compute.amazonaws.com: Permission denied (publickey).

Being a limited human being, I will never cope with two user names. Moving over to a standard login name (ec2-user) is relatively simple; just follow the steps below (after logging in using the bitnami account):

sudo useradd -s /bin/bash -o -u $(id -u) -g $(id -g) ec2-user

sudo mkdir ~ec2-user/
sudo cp -rp ~bitnami/.ssh ~ec2-user/
sudo cp -rp ~bitnami/.bashrc ~ec2-user/
sudo cp -rp ~bitnami/.profile ~ec2-user/

Next you need to copy your public key into the authorized keys file:

cat mypublickey.pub >> /home/ec2-user/.ssh/authorized_keys

Next, to allow ec2-user to execute commands as the root user, add the new user account to the bitnami-admins group by executing the following command while logged in as the bitnami user:

sudo usermod -aG bitnami-admins ec2-user