There are a few things that I tweak when I get a new MacBook, one of which is the screenshot format (mainly because the default PNG format doesn't natively render in WhatsApp). So I thought I would share the code snippet that you can run in Terminal to alter the default image type of your screenshots:
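A minimal version (assuming you want JPG; png, pdf and tiff also work):
# Change the default screenshot format to jpg
defaults write com.apple.screencapture type jpg
# Restart the UI server so the change takes effect
killall SystemUIServer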
I have an old website whose hosting costs I want to avoid, so I just wanted to download the site and run it from an AWS S3 bucket, using CloudFront to publish the content. Below are the steps I took to do this:
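Putting the flags described below together, the wget call looks something like this (swap vbusers.com for the site you are archiving, to match the bucket used later):
wget --recursive --no-clobber --page-requisites --html-extension --convert-links --no-check-certificate --domains vbusers.com --no-parent https://vbusers.com/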
Below is a summary of the parameters (including common alternatives):
--recursive: Wget is capable of traversing parts of the Web (or a single HTTP or FTP server), following links and directory structure. We refer to this as recursive retrieval, or recursion.
--no-clobber: If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including '-nc'. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved. When running Wget without '-N', '-nc', or '-r', downloading the same file in the same directory will result in the original copy of the file being preserved and the second copy being named 'file.1'. If that file is downloaded yet again, the third copy will be named 'file.2', and so on. When '-nc' is specified, this behavior is suppressed, and Wget will refuse to download newer copies of 'file'. Therefore, "no-clobber" is actually a misnomer in this mode: it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented. When running Wget with '-r', but without '-N' or '-nc', re-downloading a file will result in the new copy simply overwriting the old. Adding '-nc' will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored. When running Wget with '-N', with or without '-r', the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file (see section Time-Stamping). '-nc' may not be specified at the same time as '-N'. Note that when '-nc' is specified, files with the suffixes '.html' or (yuck) '.htm' will be loaded from the local disk and parsed as if they had been retrieved from the Web.
--page-requisites: This causes wget to download all the files that are necessary to properly display a given HTML page, which includes images, CSS, JS, etc.
--adjust-extension: Preserves proper file extensions for .html, .css, and other assets.
--html-extension: This adds .html after the downloaded filename, to make sure it plays nicely on whatever system you're going to view the archive on.
--convert-links: After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.
--no-check-certificate: Don't check the server certificate against the available certificate authorities. Also don't require the URL host name to match the common name presented by the certificate.
--restrict-file-names: By default, Wget escapes the characters that are not valid or safe as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, perhaps because you are downloading to a non-native partition. So unless you are downloading to a non-native partition you do not need to restrict file names by OS; it's automatic. Additionally: "The values 'unix' and 'windows' are mutually exclusive (one will override the other)".
--domains: Limit spanning to specified domains.
--no-parent: If you don't want wget to ascend to the parent directory, use the -np or --no-parent option. This instructs wget not to ascend to the parent directory when it hits references like ../ in href links.
Upload Files to S3 Bucket
Next upload the files to your S3 bucket. First move into the directory containing the downloaded site, then perform the recursive upload.
$ cd archive.andrewbaker.ninja
$ ls .
$ aws s3 cp . s3://vbusers.com/ --recursive
Create a CloudFront Distribution from an S3 Bucket
Finally, go to CloudFront and create a distribution from the S3 bucket you just created. You can pretty much use the default settings. Note: you will need to wait a few minutes before you browse to the distribution's domain name:
Below is a quick (I am busy) outline of how to automatically stop and start your EC2 instances.
Step 1: Tag your resources
In order to decide which instances to stop and start, you first need to add an auto-start-stop: Yes tag to all the instances you want to be affected by the start/stop functions. Note: You can use "Resource Groups and Tag Editor" to bulk apply these tags to the resources you want to be affected by the lambda functions you are going to create. See below (click the orange button called "Manage tags of Selected Resources").
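If you prefer the CLI, the same tag can be applied with aws ec2 create-tags (the instance ID below is a placeholder):
# Tag an instance so the start/stop lambdas will pick it up
aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=auto-start-stop,Value=Yes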
Step 2: Create a new role for our lambda functions
First we need to create the IAM role to run the Lambda functions. Go to IAM and click the "Create Role" button. Then select "AWS Service" from the "Trusted entity options", and select Lambda from the "Use Cases" options. Then click "Next", followed by "Create Policy". To specify the permissions, simply click the JSON button on the right of the screen and enter the below policy (swapping the region and account id for your region and account id):
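A minimal sketch of such a policy (REGION and ACCOUNT_ID are placeholders; it only allows describing instances, and stopping/starting instances that carry the auto-start-stop: Yes tag):
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DescribeInstances",
            "Effect": "Allow",
            "Action": "ec2:DescribeInstances",
            "Resource": "*"
        },
        {
            "Sid": "StartStopTaggedInstances",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances"
            ],
            "Resource": "arn:aws:ec2:REGION:ACCOUNT_ID:instance/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/auto-start-stop": "Yes"
                }
            }
        }
    ]
}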
Hit next and under “Review and create”, save the above policy as ec2-lambda-start-stop by clicking the “Create Policy” button. Next, search for this newly created policy and select it as per below and hit “Next”.
You will now see the “Name, review, and create” screen. Here you simply need to hit “Create Role” after you enter the role name as ec2-lambda-start-stop-role.
Note the policy is restricted to only have access to EC2 instances that contain the auto-start-stop: Yes tag (least privilege).
If you want to review your role, this is how it should look. You can see I have filled in my region and account number in the policy:
Step 3: Create Lambda Functions To Start/Stop EC2 Instances
In this section we will create two lambda functions, one to start the instances and the other to stop the instances.
Step 3a: Add the Stop EC2 instance function
Go to the Lambda console and click on "Create function".
Create a lambda function with a function name of stop-ec2-instance-lambda, the python3.11 runtime, and the ec2-lambda-start-stop-role (see image below).
Next add the lambda stop function and save it as stop-ec2-instance. Note, you will need to change the value of the region_name parameter accordingly.
import json
import boto3

# Connect to EC2 in the relevant region (change region_name as required)
ec2 = boto3.resource('ec2', region_name='af-south-1')

def lambda_handler(event, context):
    # Find all running instances that carry the auto-start-stop: Yes tag
    instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']}, {'Name': 'tag:auto-start-stop', 'Values': ['Yes']}])
    for instance in instances:
        # Stop each matching instance
        instance.stop()
        print("Stopped instance: " + instance.id)
    return "success"
This is how your Lambda function should look:
Step 3b: Add the Start EC2 instance function
Go to the Lambda console and click on "Create function".
Create a lambda function with a function name of start-ec2-instance-lambda, the python3.11 runtime, and the ec2-lambda-start-stop-role.
Then add the below code and save the function as start-ec2-instance-lambda.
Note, you will need to change the value of the region_name parameter accordingly.
import json
import boto3

# Connect to EC2 in the relevant region (change region_name as required)
ec2 = boto3.resource('ec2', region_name='af-south-1')

def lambda_handler(event, context):
    # Find all stopped instances that carry the auto-start-stop: Yes tag
    instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']}, {'Name': 'tag:auto-start-stop', 'Values': ['Yes']}])
    for instance in instances:
        # Start each matching instance
        instance.start()
        print("Started instance: " + instance.id)
    return "success"
Step 4: Summary
If either of the above lambda functions is triggered, it will start or stop your EC2 instances based on the instance state and the value of the auto-start-stop tag. To automate this you can simply set up cron jobs, Step Functions, Amazon EventBridge, Jenkins, etc.
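For example, an EventBridge schedule for the stop function could be wired up from the CLI along these lines (the rule name, cron expression and ACCOUNT_ID are assumptions; adjust to taste, and repeat for the start function):
# Stop instances at 18:00 UTC on weekdays
aws events put-rule --name stop-ec2-nightly --schedule-expression "cron(0 18 ? * MON-FRI *)"
# Allow EventBridge to invoke the stop Lambda
aws lambda add-permission --function-name stop-ec2-instance-lambda --statement-id eventbridge-stop \
  --action lambda:InvokeFunction --principal events.amazonaws.com \
  --source-arn arn:aws:events:af-south-1:ACCOUNT_ID:rule/stop-ec2-nightly
# Point the rule at the Lambda function
aws events put-targets --rule stop-ec2-nightly \
  --targets "Id"="1","Arn"="arn:aws:lambda:af-south-1:ACCOUNT_ID:function:stop-ec2-instance-lambda"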
If you have just changed ISPs or moved house and your internet suddenly starts misbehaving, the likelihood is that your Maximum Transmission Unit (MTU) is set too high for your ISP. The default internet-facing MTU is 1500 bytes, BUT depending on your setup, this often needs to be set much lower.
Step 1:
First check your current MTU across all your ipv4 interfaces using netsh:
netsh interface ipv4 show subinterfaces
MTU MediaSenseState Bytes In Bytes Out Interface
------ --------------- --------- --------- -------------
4294967295 1 0 0 Loopback Pseudo-Interface 1
1492 1 675 523 Local Area Connection
As you can see, the Local Area Connection interface is set to a 1492 byte MTU. So how do we find out what it should be? We are going to send a fixed-size Echo packet out, and tell the network not to fragment this packet. If somewhere along the line this packet is too big then this request will fail.
Next enter the following (if it fails then you know your MTU is too high; remember -l sets the ICMP payload size, which excludes the 28-byte IP/ICMP header, so -l 1472 corresponds to a full 1500-byte packet):
ping 8.8.8.8 -f -l 1492
Procedure to find optimal MTU:
For PPPoE, your Max MTU should be no more than 1492 to allow space for the 8 byte PPPoE “wrapper”. 1492 + 8 = 1500. The ping test we will be doing does not include the IP/ICMP header of 28 bytes. 1500 – 28 = 1472. Include the 8 byte PPPoE wrapper if your ISP uses PPPoE and you get 1500 – 28 – 8 = 1464.
The best value for MTU is that value just before your packets get fragmented. Add 28 to the largest packet size that does not result in fragmenting the packets (since the ping command specifies the ping packet size, not including the IP/ICMP header of 28 bytes), and this is your Max MTU setting.
The below is an automated ping sweep that tests various packet sizes until it fails (increasing by 10 bytes per iteration):
C:\Windows\system32>for /l %i in (1360,10,1500) do @ping 8.8.8.8 -n 1 -w 1000 -l %i -f
Pinging 8.8.8.8. with 1400 bytes of data:
Reply from 8.8.8.8: bytes=1400 time=6ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 6ms, Maximum = 6ms, Average = 6ms
Pinging 8.8.8.8 with 1401 bytes of data:
Reply from 8.8.8.8: bytes=1401 time<1ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Pinging 8.8.8.8 with 1402 bytes of data:
Reply from 8.8.8.8: bytes=1402 time<1ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Pinging 8.8.8.8 with 1403 bytes of data:
Reply from 8.8.8.8: bytes=1403 time<1ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Once you find the optimal MTU, you can set it as per below:
netsh interface ipv4 set subinterface "Local Area Connection" mtu=1360 store=persistent
Nikto is becoming one of my favourite tools. I like it because of its wide-ranging use cases and its simplicity. So what's an example use case for Nikto? Well, I am bored right now, so I am going to hunt around my local network and see what I can find...
# First install Nikto
brew install nikto
# Now get my IP address range
ifconfig
# Copy my IP address into ipcalc to get my CIDR block
eth0 Link encap:Ethernet HWaddr 00:0B:CD:1C:18:5A
inet addr:172.16.25.126 Bcast:172.16.25.63 Mask:255.255.255.224
inet6 addr: fe80::20b:cdff:fe1c:185a/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2341604 errors:0 dropped:0 overruns:0 frame:0
TX packets:2217673 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:293460932 (279.8 MiB) TX bytes:1042006549 (993.7 MiB)
Interrupt:185 Memory:f7fe0000-f7ff0000
# Get my CIDR range (brew install ipcalc)
cp363412:~ $ ipcalc 172.16.25.126
Address: 172.16.25.126 10101100.00010000.00011001. 01111110
Netmask: 255.255.255.0 = 24 11111111.11111111.11111111. 00000000
Wildcard: 0.0.0.255 00000000.00000000.00000000. 11111111
=>
Network: 172.16.25.0/24 10101100.00010000.00011001. 00000000
HostMin: 172.16.25.1 10101100.00010000.00011001. 00000001
HostMax: 172.16.25.254 10101100.00010000.00011001. 11111110
Broadcast: 172.16.25.255 10101100.00010000.00011001. 11111111
Hosts/Net: 254 Class B, Private Internet
# Our NW range is "Network: 172.16.25.0/24"
Now let's pop across to nmap to get a list of active hosts on my network.
# Now we run a quick nmap scan for ports 80 and 443 across the entire range, looking for any hosts that respond, and dump the results into a grepable file
nmap -p 80,443 172.16.25.0/24 -oG webhosts.txt
# View the list of hosts
$ cat webhosts.txt
# Nmap 7.93 scan initiated Wed Jan 25 20:17:42 2023 as: nmap -p 80,433 -oG webhosts.txt 172.16.25.0/26
Host: 172.16.25.0 () Status: Up
Host: 172.16.25.0 () Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.1 () Status: Up
Host: 172.16.25.1 () Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.2 () Status: Up
Host: 172.16.25.2 () Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.3 () Status: Up
Host: 172.16.25.3 () Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.4 () Status: Up
Host: 172.16.25.4 () Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.5 () Status: Up
Next we want to grep this webhosts file and send all the hosts that responded to the port probe off to Nikto for scanning. To do this we can use some Linux magic. First we cat the output stored in our webhosts.txt document. Next we use awk, a Linux tool that helps search for patterns. In the command below we ask it to look for "Up" (meaning the host is up), then tell it to print $2, which means print the second word in the line on which we found the word "Up", i.e. the IP address. Finally, we send that data to a new file called niktoscan.txt.
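A sketch of that pipeline, plus the Nikto call to consume the resulting file (Nikto accepts a file of hosts via -h):
# Pull the IP (field 2) from every "Up" line and write the list to niktoscan.txt
cat webhosts.txt | awk '/Up/ {print $2}' > niktoscan.txt
# Point Nikto at the list of hosts
nikto -h niktoscan.txt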
DIG is an awesome command line utility to explore DNS. Below is a quick guide to get you started.
Query Specific Name Server
By default, if no name server is specified, dig will use the servers listed in the /etc/resolv.conf file. To view the default server use:
% cat /etc/resolv.conf
#
# macOS Notice
#
# This file is not consulted for DNS hostname resolution, address
# resolution, or the DNS query routing mechanism used by most
# processes on this system.
#
# To view the DNS configuration used by this system, use:
# scutil --dns
#
# SEE ALSO
# dns-sd(1), scutil(8)
#
# This file is automatically generated.
#
nameserver 100.64.0.1
You can override the name server against which the query will be executed by using the @ (at) symbol followed by the name server IP address or hostname.
For example, to query the Google name server (8.8.8.8) for information about andrewbaker.ninja you would use:
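A minimal form of that query is:
% dig @8.8.8.8 andrewbaker.ninja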
To get a short answer to your query, use the +short option:
% dig andrewbaker.ninja +short
13.244.140.33
Query a Record Type
Dig allows you to perform any valid DNS query by appending the record type to the end of the query. In the following section, we will show you examples of how to search for the most common records, such as A (the IP address), CNAME (canonical name), TXT (text record), MX (mail exchanger), and NS (name servers).
Querying A records
To get a list of all the address(es) for a domain name, use the a option:
% dig +nocmd andrewbaker.ninja a +noall +answer
andrewbaker.ninja. 156 IN A 13.244.140.33
Querying CNAME records
To find the alias domain name use the cname option:
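For example (this assumes a www alias exists for the domain; swap in a name you know is a CNAME):
% dig +nocmd www.andrewbaker.ninja cname +noall +answer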
Querying ANY records
Use the any option to get a list of all DNS records for a specific domain:
dig +nocmd andrewbaker.ninja any +noall +answer
andrewbaker.ninja. 300 IN A 13.244.140.33
andrewbaker.ninja. 21600 IN NS ns-1254.awsdns-28.org.
andrewbaker.ninja. 21600 IN NS ns-1514.awsdns-61.org.
andrewbaker.ninja. 21600 IN NS ns-1728.awsdns-24.co.uk.
andrewbaker.ninja. 21600 IN NS ns-1875.awsdns-42.co.uk.
andrewbaker.ninja. 21600 IN NS ns-491.awsdns-61.com.
andrewbaker.ninja. 21600 IN NS ns-496.awsdns-62.com.
andrewbaker.ninja. 21600 IN NS ns-533.awsdns-02.net.
andrewbaker.ninja. 21600 IN NS ns-931.awsdns-52.net.
andrewbaker.ninja. 900 IN SOA ns-1363.awsdns-42.org. awsdns-hostmaster.amazon.com. 1 7200 900 1209600 86400
Tracing DNS Resolution
DNS query resolution follows a simple recursive process outlined below:
You as the DNS client (or stub resolver) query your recursive resolver for www.example.com.
Your recursive resolver queries the root name server for www.example.com.
The root name server refers your recursive resolver to the .com Top-Level Domain (TLD) authoritative server.
Your recursive resolver queries the .com TLD authoritative server for www.example.com.
The .com TLD authoritative server refers your recursive server to the authoritative servers for example.com.
Your recursive resolver queries the authoritative servers for www.example.com, and receives 1.2.3.4 as the answer.
Your recursive resolver caches the answer for the duration of the time to live (TTL) specified on the record, and returns it to you.
Below is an example trace:
% dig +trace andrewbaker.ninja
; <<>> DiG 9.10.6 <<>> +trace andrewbaker.ninja
;; global options: +cmd
. 62163 IN NS g.root-servers.net.
. 62163 IN NS j.root-servers.net.
. 62163 IN NS e.root-servers.net.
. 62163 IN NS l.root-servers.net.
. 62163 IN NS d.root-servers.net.
. 62163 IN NS a.root-servers.net.
. 62163 IN NS b.root-servers.net.
. 62163 IN NS i.root-servers.net.
. 62163 IN NS m.root-servers.net.
. 62163 IN NS h.root-servers.net.
. 62163 IN NS c.root-servers.net.
. 62163 IN NS k.root-servers.net.
. 62163 IN NS f.root-servers.net.
. 62163 IN RRSIG NS 8 0 518400 20221129170000 20221116160000 18733 . MbE0OpdxRbInDK0olZm8n585L4oPq3q8iVbn/O0S7bfelS9wauhHQnnY Ifuj3D6Owp6R7H2Om6utfeB2kjrocJG9ZQPy0UQhWvgcFp9I4KnWRr1L H/yvmSM2EejR7kQHp4OBrb55RBsX4tojvr1UU+fWRuy988prwBVBdKj6 EElNwteQCosJHxVzqP0z6UpP9i5rUkRNGOD7OvdwF8ynBV93F4FpOI9r yuKzz0hdE3YAQJztOY84VuLkXM2DPs51LR6ftibxswUwoeUg04QUS7py gzn1z9en99oUgX+Lic6fLKc5Q0LpeZGhW0qBCY2CB9KEaRth+ZCD6WEU tjOBCw==
;; Received 525 bytes from 8.8.8.8#53(8.8.8.8) in 249 ms
ninja. 172800 IN NS v0n2.nic.ninja.
ninja. 172800 IN NS v2n1.nic.ninja.
ninja. 172800 IN NS v0n0.nic.ninja.
ninja. 172800 IN NS v0n1.nic.ninja.
ninja. 172800 IN NS v2n0.nic.ninja.
ninja. 172800 IN NS v0n3.nic.ninja.
ninja. 86400 IN DS 46082 8 2 C8F816A7A575BDB2F997F682AAB2653BA2CB5EDDB69B036A30742A33 BEFAF141
ninja. 86400 IN RRSIG DS 8 1 86400 20221130050000 20221117040000 18733 . xoEolCAm4d+f6LxulPa/lnCwKuwWLPI8LzlgmOVvMNL7z8J/21FqTWBu 4tZT8KZTciAvcTcRo3TDAg0Qr48QvJI30ld4yYa81HGHpVKVuTSoNCtn FnxvCuZmqDY+aFM/zn9jSTdCcT8EhwLJrsHq/zj/iasymLZ/UvanJo8j X/PRSorGfWJjUeDSSjCOpOITjRLqzHeBcY9+Qpf7O5fDguqtkhzc/8pS qKmjUh2B+yJA4QgDSaoxdv9LRQIvdSL1Iwq9eAXnl9azJy3GbVIUVZCw bA8ZsFYhw9sQbk39ZDi3K4pS717uymh4RBlk4r/5EuqdKBpWFYdOW4ZC EGDBcg==
;; Received 763 bytes from 198.41.0.4#53(a.root-servers.net) in 285 ms
andrewbaker.ninja. 3600 IN NS ns-1363.awsdns-42.org.
andrewbaker.ninja. 3600 IN NS ns-1745.awsdns-26.co.uk.
andrewbaker.ninja. 3600 IN NS ns-462.awsdns-57.com.
andrewbaker.ninja. 3600 IN NS ns-983.awsdns-58.net.
4vnuq0b3phnjevus6h4meuj446b44iqj.ninja. 3600 IN NSEC3 1 1 10 332539EE7F95C32A 4VVVNRI7K3EH48N753IKM6TUI5G921J7 NS SOA RRSIG DNSKEY NSEC3PARAM
4vnuq0b3phnjevus6h4meuj446b44iqj.ninja. 3600 IN RRSIG NSEC3 8 2 3600 20221208121502 20221117111502 22878 ninja. RIuQHRcUrHqMNg1lab6s/oRNmflV4e+8r2553miiZdlGqCl8Q05+e1f5 /AY0enkAaG4DvoXCAlwroL7B7iYgivgrmPXklPTEahnzdeZV76UWimRs 2WjKLI9DSUsSl5yPZBDloqYBxhQlHwY7RPcKxELX2wO7ld8Dk+cSpQIu CQQ=
dg8umbqgrvdemk76n4dtbddckfghtloo.ninja. 3600 IN NSEC3 1 1 10 332539EE7F95C32A DGG261SH46I7K27S1MPEID8CER0BFH07 NS DS RRSIG
dg8umbqgrvdemk76n4dtbddckfghtloo.ninja. 3600 IN RRSIG NSEC3 8 2 3600 20221130155636 20221109145636 22878 ninja. b3g1om7FYmaboSk49ZuQC/wiyuZ0zQXOs/HbfrtDP1wUGyvXMAG1ofik //wSTVEvi7bufrbKUCSkBrxiBweSkRIKokaB/5j90Izpb9znaN0MWmOQ gywML7TQ3etOWb9s8L/oUmiBUUUtBtPGAy/e4hsbuYKQt+awJZVhR4G/ GBM=
;; Received 691 bytes from 65.22.21.4#53(v0n1.nic.ninja) in 892 ms
andrewbaker.ninja. 300 IN A 13.244.140.33
andrewbaker.ninja. 172800 IN NS ns-1254.awsdns-28.org.
andrewbaker.ninja. 172800 IN NS ns-1514.awsdns-61.org.
andrewbaker.ninja. 172800 IN NS ns-1728.awsdns-24.co.uk.
andrewbaker.ninja. 172800 IN NS ns-1875.awsdns-42.co.uk.
andrewbaker.ninja. 172800 IN NS ns-491.awsdns-61.com.
andrewbaker.ninja. 172800 IN NS ns-496.awsdns-62.com.
andrewbaker.ninja. 172800 IN NS ns-533.awsdns-02.net.
andrewbaker.ninja. 172800 IN NS ns-931.awsdns-52.net.
;; Received 328 bytes from 205.251.195.215#53(ns-983.awsdns-58.net) in 53 ms
As you can see above, the first set of results are the NS (nameservers) for the root domain (.), followed by the NS for .ninja, then finally the NS for andrewbaker.ninja (hosted in AWS).
Below is a dump of examples that do pretty much the same thing in different ways. I mostly use netstat and lsof, coupled with some bash scripts.
You can argue that this is overkill, but below is a simple bash function that you can paste into Terminal and call whenever you want to see which applications/process IDs have open ports:
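A sketch of such a function (the name openports is my own choice; it is just an lsof wrapper, and lsof ships with macOS):
# List all TCP listeners with their owning process; pass an optional filter string (a port or app name)
openports() {
  if [ $# -eq 0 ]; then
    sudo lsof -iTCP -sTCP:LISTEN -n -P
  else
    sudo lsof -iTCP -sTCP:LISTEN -n -P | grep -i --color "$1"
  fi
}
# Example usage:
# openports          # show everything that is listening
# openports 8080     # only show listeners matching 8080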
If you're on a zero trust network adapter like Zscaler or Netskope, you will see that traceroute doesn't work as expected. The article below shows how to install mtr (My Traceroute) using brew:
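Assuming Homebrew is already installed, the install itself is a one-liner (the Cellar paths below assume Apple Silicon and mtr 0.95):
brew install mtr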
Next we need to change the owner of the mtr-packet binary and its permissions (otherwise you will need to run it as root every time):
sudo chown root /opt/homebrew/Cellar/mtr/0.95/sbin/mtr-packet
sudo chmod 4755 /opt/homebrew/Cellar/mtr/0.95/sbin/mtr-packet
## Symlink to the new mtr package instead of the default MAC version
ln -s /opt/homebrew/Cellar/mtr/0.95/sbin/mtr /opt/homebrew/bin/
ln -s /opt/homebrew/Cellar/mtr/0.95/sbin/mtr-packet /opt/homebrew/bin/
To run a rolling traceroute with ICMP echoes, use the following:
mtr andrewbaker.ninja
Keys: Help Display mode Restart statistics Order of fields quit
Packets Pings
Host Loss% Snt Last Avg Best Wrst StDev
The issue is that Zscaler will attempt to tunnel this traffic. This can be observed by viewing your current routes:
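One way to view the routing table on macOS is netstat; the Zscaler routes show up against a utun interface:
# Show the IPv4 routing table
netstat -rn -f inet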
As you can see from the above, it lists the routes that are being sent to the Zscaler tunnel interface “utun6” (this is unique to your machine but will look similar). To get around this you can specify the source interface the MTR should run from with the “-I” flag. Below we instruct mtr to use en0 (the lan cable):
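For example (en0 may have a different name on your machine):
mtr andrewbaker.ninja -I en0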
MTR supports TCP, UDP and SCTP based traceroutes. This is useful when testing path latency and packet loss in external or internal networks where QoS is applied to different protocols and ports. Multiple flags are available (man mtr), but for a TCP based MTR use -T (indicates TCP should be used) and -P (port to trace to):
mtr andrewbaker.ninja -T -P 443 -I en0
Ping specifying source interface
Ping supports specifying the source interface you would like to initiate the ping from. The “-S” flag indicates that the following IP is the source IP address the ping should be done from. This is useful if you want to ping using an internal resource bypassing a route manipulator tool such as Zscaler.
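A sketch of that (192.168.1.10 is a placeholder; use the address bound to your en0 interface, which you can find with ifconfig en0):
# -S sets the source address the echo requests are sent from
ping -S 192.168.1.10 andrewbaker.ninja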
I recently managed to explode my WordPress site (whilst trying to upgrade PHP). Anyway, luckily I had created an AMI a month ago, but I had written a few articles since then and so wanted to avoid rewriting them. So below is a method to create a backup of your WordPress MySQL database to S3 and recover it onto a new WordPress server. Note: I actually mounted the corrupt instance as a volume and did this the long way around.
Step 1: Create an S3 bucket to store the backup
$ aws s3api create-bucket \
> --bucket andrewbakerninjabackupdb \
> --region af-south-1 \
> --create-bucket-configuration LocationConstraint=af-south-1
Unable to locate credentials. You can configure credentials by running "aws configure".
$ aws configure
AWS Access Key ID [None]: XXXXX
AWS Secret Access Key [None]: XXXX
Default region name [None]: af-south-1
Default output format [None]:
$ aws s3api create-bucket --bucket andrewbakerninjabackupdb --region af-south-1 --create-bucket-configuration LocationConstraint=af-south-1
{
"Location": "http://andrewbakerninjabackupdb.s3.amazonaws.com/"
}
$
Note: To get your API credentials simply go to IAM, select the Users tab and then select "Create Access Key".
Step 2: Create a backup of your MySQL database and copy it to S3
For full backups follow the below script (note: this won't be restorable across MySQL versions as it will include the system "mysql" db):
# Check mysql is installed and note its version (you cannot restore across versions)
mysql --version
# First get your mysql credentials
sudo cat /home/bitnami/bitnami_credentials
Welcome to the Bitnami WordPress Stack
******************************************************************************
The default username and password is XXXXXXX.
******************************************************************************
You can also use this password to access the databases and any other component the stack includes.
# Now create a backup using this password
$ mysqldump -A -u root -p > backupajb.sql
Enter password:
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami 17 Jun 15 2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami 27 Jun 15 2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami 12 Jun 15 2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami 13 Nov 18 2020 bitnami_application_password
-r-------- 1 bitnami bitnami 424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 3635504 Aug 26 07:24 backupajb.sql
# Next copy the file to your S3 bucket
$ aws s3 cp backupajb.sql s3://andrewbakerninjabackupdb
upload: ./backupajb.sql to s3://andrewbakerninjabackupdb/backupajb.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09 3635504 backupajb.sql
OR for partial backups, follow the below to just back up the Bitnami WordPress database:
# Login to database
mysql -u root -p
show databases;
+--------------------+
| Database |
+--------------------+
| bitnami_wordpress |
| information_schema |
| mysql |
| performance_schema |
| sys |
+--------------------+
exit
$ mysqldump -u root -p --databases bitnami_wordpress > backupajblight.sql
Enter password:
$ ls -ltr
total 3560
lrwxrwxrwx 1 bitnami bitnami 17 Jun 15 2020 apps -> /opt/bitnami/apps
lrwxrwxrwx 1 bitnami bitnami 27 Jun 15 2020 htdocs -> /opt/bitnami/apache2/htdocs
lrwxrwxrwx 1 bitnami bitnami 12 Jun 15 2020 stack -> /opt/bitnami
-rw------- 1 bitnami bitnami 13 Nov 18 2020 bitnami_application_password
-r-------- 1 bitnami bitnami 424 Aug 25 14:08 bitnami_credentials
-rw-r--r-- 1 bitnami bitnami 2635204 Aug 26 07:24 backupajblight.sql
# Next copy the file to your S3 bucket
$ aws s3 cp backupajblight.sql s3://andrewbakerninjabackupdb
upload: ./backupajblight.sql to s3://andrewbakerninjabackupdb/backupajblight.sql
# Check the file is there
$ aws s3 ls s3://andrewbakerninjabackupdb
2022-08-26 07:27:09 2635204 backupajblight.sql
Step 3: Restore the file on your new WordPress server
Note: If you need the password, use the cat command from Step 2.
#Copy the file down from S3
$ aws s3 cp s3://andrewbakerninjabackupdb/backupajbcron.sql restoreajb.sql --region af-south-1
#Restore the db
$ mysql -u root -p < restoreajb.sql
Step 4: Optional – Automate the Backups using Cron and S3 Versioning
This part is unnecessary (and one could credibly argue that AWS Backup is the way to go, but I am not a fan of its clunky UI). Below I enable S3 versioning and create a cron job to back up the database every week. I will also set the S3 lifecycle policy to delete anything older than 90 days.
# Enable bucket versioning
aws s3api put-bucket-versioning --bucket andrewbakerninjabackupdb --versioning-configuration Status=Enabled
# Now set the bucket lifecycle policy
nano lifecycle.json
Now paste the following policy into nano and save it (as lifecycle.json):
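I haven't reproduced my exact policy here, but a minimal sketch that expires current and noncurrent versions after 90 days looks something like this:
{
  "Rules": [
    {
      "ID": "expire-old-backups",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 90 },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 90 }
    }
  ]
}
Then apply it to the bucket:
aws s3api put-bucket-lifecycle-configuration --bucket andrewbakerninjabackupdb --lifecycle-configuration file://lifecycle.json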
## List the cron jobs
crontab -l
## Edit the cron jobs
crontab -e
## Enter these lines.
## Back up the database at 00:01 and 02:01 on Saturdays and copy the dumps to S3 at 03:00 and 04:00 (cron format: min hour day month weekday (Sunday is day zero))
1 0 * * SAT /opt/bitnami/mysql/bin/mysqldump -A -uroot -pPASSWORD > backupajbcron.sql
1 2 * * SAT /opt/bitnami/mysql/bin/mysqldump -u root -pPASSWORD --databases bitnami_wordpress > backupajbcronlight.sql
0 3 * * SAT aws s3 cp backupajbcron.sql s3://andrewbakerninjabackupdb
0 4 * * SAT aws s3 cp backupajbcronlight.sql s3://andrewbakerninjabackupdb