macOS: How to see which processes are using a specific port (e.g. 443)

Below is a useful script for when you want to see which processes are using a specific port.

#!/bin/bash

# Port Monitor Script for macOS
# Usage: ./port_monitor.sh <port_number>
# Check if port number is provided

if [ $# -eq 0 ]; then
echo "Usage: $0 <port_number>"
echo "Example: $0 8080"
exit 1
fi

PORT=$1

# Validate port number

if ! [[ $PORT =~ ^[0-9]+$ ]] || [ $PORT -lt 1 ] || [ $PORT -gt 65535 ]; then
echo "Error: Please provide a valid port number (1-65535)"
exit 1
fi

# Function to display processes using the port

show_port_usage() {
local timestamp=$(date "+%Y-%m-%d %H:%M:%S")

# Clear screen for better readability
clear

echo "=================================="
echo "Port Monitor - Port $PORT"
echo "Last updated: $timestamp"
echo "Press Ctrl+C to exit"
echo "=================================="
echo

# Check for processes using the port with lsof - both TCP and UDP
if lsof -i :$PORT &>/dev/null || netstat -an | grep -E "[:.]$PORT[[:space:]]" &>/dev/null; then
    echo "Processes using port $PORT:"
    echo
    lsof -i :$PORT -P -n | head -1
    echo "--------------------------------------------------------------------------------"
    lsof -i :$PORT -P -n | tail -n +2
    echo
    
    # Also show netstat information for additional context
    echo "Network connections on port $PORT:"
    echo
    printf "%-6s %-30s %-30s %-12s\n" "PROTO" "LOCAL ADDRESS" "FOREIGN ADDRESS" "STATE"
    echo "--------------------------------------------------------------------------------------------"
    
    # Show all connections (LISTEN, ESTABLISHED, etc.)
    # Use netstat -n to show numeric addresses
    netstat -anp tcp | grep -E "\.$PORT[[:space:]]" | while read line; do
        # Extract the relevant fields from netstat output
        proto=$(echo "$line" | awk '{print $1}')
        local_addr=$(echo "$line" | awk '{print $4}')
        foreign_addr=$(echo "$line" | awk '{print $5}')
        state=$(echo "$line" | awk '{print $6}')
        
        # Only print if we have valid data
        if [ -n "$proto" ] && [ -n "$local_addr" ]; then
            printf "%-6s %-30s %-30s %-12s\n" "$proto" "$local_addr" "$foreign_addr" "$state"
        fi
    done
    
    # Also check UDP connections
    netstat -anp udp | grep -E "\.$PORT[[:space:]]" | while read line; do
        proto=$(echo "$line" | awk '{print $1}')
        local_addr=$(echo "$line" | awk '{print $4}')
        foreign_addr=$(echo "$line" | awk '{print $5}')
        printf "%-6s %-30s %-30s %-12s\n" "$proto" "$local_addr" "$foreign_addr" "-"
    done
    
    # Also check for any established connections using lsof
    echo
    echo "Active connections with processes:"
    echo "--------------------------------------------------------------------------------------------"
    lsof -i :$PORT -P -n 2>/dev/null | grep -v LISTEN | tail -n +2 | while read line; do
        if [ -n "$line" ]; then
            echo "$line"
        fi
    done
    
else
    echo "No processes found using port $PORT"
    echo
    
    # Check if the port might be in use but not showing up in lsof
    local netstat_result=$(netstat -anv | grep -E "\.$PORT ")
    if [ -n "$netstat_result" ]; then
        echo "However, netstat shows activity on port $PORT:"
        echo "$netstat_result"
    fi
fi

echo
echo "Refreshing in 20 seconds... (Press Ctrl+C to exit)"
}

# Trap Ctrl+C to exit gracefully

trap 'echo -e "\n\nExiting port monitor..."; exit 0' INT

# Main loop - refresh every 20 seconds

while true; do
show_port_usage
sleep 20
done
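
To use it, save the script as port_monitor.sh (the name assumed in the usage comment above), make it executable and pass the port you want to watch, for example port 443:

$ chmod +x port_monitor.sh
$ ./port_monitor.sh 443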

Windows Server: Polling critical DNS entries for any changes or errors

If you have tier 1 services that are dependent on a few DNS records, then you may want a simple batch job to monitor these DNS records for changes or deletion.

The script below contains an example list of DNS entries (replace these records with the ones you want to monitor).

@echo off
setlocal enabledelayedexpansion

REM ============================================================================
REM DNS Monitor Script for Windows Server
REM Purpose: Monitor DNS entries for changes every 15 minutes
REM Author: Andrew Baker
REM Version: 1.0
REM Date: August 13, 2018
REM ============================================================================

REM Configuration Variables
set "LOG_FILE=dns_monitor.log"
set "PREVIOUS_FILE=dns_previous.tmp"
set "CURRENT_FILE=dns_current.tmp"
set "CHECK_INTERVAL=900"

REM DNS Entries to Monitor (Comma Separated List)
REM Add or modify domains as needed
set "DNS_LIST=google.com,microsoft.com,github.com,stackoverflow.com,amazon.com,facebook.com,twitter.com,linkedin.com,youtube.com,cloudflare.com"

REM Initialize log file with header if it doesn't exist
if not exist "%LOG_FILE%" (
    echo DNS Monitor Log - Started on %DATE% %TIME% > "%LOG_FILE%"
    echo ============================================================================ >> "%LOG_FILE%"
    echo. >> "%LOG_FILE%"
)

:MAIN_LOOP
echo [%DATE% %TIME%] Starting DNS monitoring cycle...
echo [%DATE% %TIME%] INFO: Starting DNS monitoring cycle >> "%LOG_FILE%"

REM Clear current results file
if exist "%CURRENT_FILE%" del "%CURRENT_FILE%"

REM Process each DNS entry
for %%d in (%DNS_LIST%) do (
    call :CHECK_DNS "%%d"
)

REM Compare with previous results if they exist
if exist "%PREVIOUS_FILE%" (
    call :COMPARE_RESULTS
) else (
    echo [%DATE% %TIME%] INFO: First run - establishing baseline >> "%LOG_FILE%"
)

REM Copy current results to previous for next comparison
copy "%CURRENT_FILE%" "%PREVIOUS_FILE%" >nul 2>&1

echo [%DATE% %TIME%] DNS monitoring cycle completed. Next check in 15 minutes...
echo [%DATE% %TIME%] INFO: DNS monitoring cycle completed >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"

REM Wait 15 minutes (900 seconds) before next check
timeout /t %CHECK_INTERVAL% /nobreak >nul

goto MAIN_LOOP

REM ============================================================================
REM Function: CHECK_DNS
REM Purpose: Resolve DNS entry and log results
REM Parameter: %1 = Domain name to check
REM ============================================================================
:CHECK_DNS
set "DOMAIN=%~1"
echo Checking DNS for: %DOMAIN%

REM Perform nslookup and capture results
nslookup "%DOMAIN%" > temp_dns.txt 2>&1

REM Check if nslookup was successful
if %ERRORLEVEL% equ 0 (
    REM Extract IP addresses from nslookup output
    for /f "tokens=2" %%i in ('findstr /c:"Address:" temp_dns.txt ^| findstr /v "#53"') do (
        set "IP_ADDRESS=%%i"
        echo %DOMAIN%,!IP_ADDRESS! >> "%CURRENT_FILE%"
        echo [%DATE% %TIME%] INFO: %DOMAIN% resolves to !IP_ADDRESS! >> "%LOG_FILE%"
    )
    
    REM Handle case where no IP addresses were found in successful lookup
    findstr /c:"Address:" temp_dns.txt | findstr /v "#53" >nul
    if !ERRORLEVEL! neq 0 (
        echo %DOMAIN%,RESOLUTION_ERROR >> "%CURRENT_FILE%"
        echo [%DATE% %TIME%] ERROR: %DOMAIN% - No IP addresses found in DNS response >> "%LOG_FILE%"
        type temp_dns.txt >> "%LOG_FILE%"
        echo. >> "%LOG_FILE%"
    )
) else (
    REM DNS resolution failed
    echo %DOMAIN%,DNS_FAILURE >> "%CURRENT_FILE%"
    echo [%DATE% %TIME%] ERROR: %DOMAIN% - DNS resolution failed >> "%LOG_FILE%"
    type temp_dns.txt >> "%LOG_FILE%"
    echo. >> "%LOG_FILE%"
)

REM Clean up temporary file
if exist temp_dns.txt del temp_dns.txt

goto :EOF

REM ============================================================================
REM Function: COMPARE_RESULTS
REM Purpose: Compare current DNS results with previous results
REM ============================================================================
:COMPARE_RESULTS
echo Comparing DNS results for changes...

REM Read previous results into memory
if exist "%PREVIOUS_FILE%" (
    for /f "tokens=1,2 delims=," %%a in (%PREVIOUS_FILE%) do (
        set "PREV_%%a=%%b"
    )
)

REM Compare current results with previous
for /f "tokens=1,2 delims=," %%a in (%CURRENT_FILE%) do (
    set "CURRENT_DOMAIN=%%a"
    set "CURRENT_IP=%%b"
    
    REM Get previous IP for this domain
    set "PREVIOUS_IP=!PREV_%%a!"
    
    if "!PREVIOUS_IP!"=="" (
        REM New domain added
        echo [%DATE% %TIME%] INFO: New domain added to monitoring: !CURRENT_DOMAIN! = !CURRENT_IP! >> "%LOG_FILE%"
    ) else if "!PREVIOUS_IP!" neq "!CURRENT_IP!" (
        REM DNS change detected
        echo [%DATE% %TIME%] WARNING: DNS change detected for !CURRENT_DOMAIN! >> "%LOG_FILE%"
        echo [%DATE% %TIME%] WARNING: Previous IP: !PREVIOUS_IP! >> "%LOG_FILE%"
        echo [%DATE% %TIME%] WARNING: Current IP:  !CURRENT_IP! >> "%LOG_FILE%"
        echo [%DATE% %TIME%] WARNING: *** INVESTIGATE DNS CHANGE *** >> "%LOG_FILE%"
        echo. >> "%LOG_FILE%"
        
        REM Also display warning on console
        echo.
        echo *** WARNING: DNS CHANGE DETECTED ***
        echo Domain: !CURRENT_DOMAIN!
        echo Previous: !PREVIOUS_IP!
        echo Current:  !CURRENT_IP!
        echo Check log file for details: %LOG_FILE%
        echo.
    )
)

REM Check for domains that disappeared from current results
for /f "tokens=1,2 delims=," %%a in (%PREVIOUS_FILE%) do (
    set "CHECK_DOMAIN=%%a"
    set "FOUND=0"
    
    for /f "tokens=1 delims=," %%c in (%CURRENT_FILE%) do (
        if "%%c"=="!CHECK_DOMAIN!" set "FOUND=1"
    )
    
    if "!FOUND!"=="0" (
        echo [%DATE% %TIME%] WARNING: Domain !CHECK_DOMAIN! no longer resolving or removed from monitoring >> "%LOG_FILE%"
    )
)

goto :EOF

REM ============================================================================
REM End of Script
REM ============================================================================

Mac OSX: Altering the OS route table to re-direct the traffic of a website to a different interface (e.g. re-routing WhatsApp traffic to en0)

This was a hard article to figure out the title for! Put simply, your MacBook has a route table, and if you want to move traffic for a specific IP address or DNS name from one interface to another, follow the steps below:

First find the IP address of the website that you want to re-route the traffic for:

$ nslookup web.whatsapp.com
Server:		100.64.0.1
Address:	100.64.0.1#53

Non-authoritative answer:
web.whatsapp.com	canonical name = mmx-ds.cdn.whatsapp.net.
Name:	mmx-ds.cdn.whatsapp.net
Address: 102.132.99.60

We want to re-route the traffic from 102.132.99.60 to the default interface. So first let's find out which interface this traffic is currently being routed through:

$ route -n get web.whatsapp.com
   route to: 102.132.99.60
destination: 102.132.99.60
    gateway: 100.64.0.1
  interface: utun0
      flags: <UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE,IFREF>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0        34        21         0      1400         0

So this is currently going to a tunnelled interface called utun0 on gateway 100.64.0.1.

OK, so I want to move it off this tunnelled interface. Let's first display the kernel routing table. The -n option forces netstat to print IP addresses; without it, netstat attempts to display host names.

$ netstat -rn | head -n 5
Active Internet connections
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0    126  100.64.0.1.64770       136.226.216.14.https   ESTABLISHED
tcp4       0      0  100.64.0.1.64768       whatsapp-cdn-shv.https ESTABLISHED
tcp4       0      0  100.64.0.1.64766       52.178.17.3.https      ESTABLISHED

Now we want to re-route WhatsApp to the default interface. So let's find the default route and the gateway it uses.

$ netstat -nr | grep default
default            192.168.8.1        UGScg                 en0
default                                 fe80::%utun1                            UGcIg               utun1
default                                 fe80::%utun2                            UGcIg               utun2
default                                 fe80::%utun3                            UGcIg               utun3
default                                 fe80::%utun4                            UGcIg               utun4
default                                 fe80::%utun5                            UGcIg               utun5
default                                 fe80::%utun0                            UGcIg               utun0

We can see that the default route via en0 uses the gateway 192.168.8.1. So let's re-route the traffic from WhatsApp's IP address via this gateway:

$ sudo route add 102.132.99.60 192.168.8.1
route: writing to routing socket: File exists
add host 102.132.99.60: gateway 192.168.8.1: File exists

(The "File exists" message above means a host route for that IP already existed; if you hit this, delete the old route first with sudo route delete and re-add it.) Now let's test that we are routing via the correct gateway:

$ route -n get 102.132.99.60
   route to: 102.132.99.60
destination: 102.132.99.60
    gateway: 192.168.8.1
  interface: utun6
      flags: <UP,GATEWAY,HOST,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0         0         0         0      1400         0

Finally delete the route and recheck the routing:

$ sudo route delete 102.132.99.60
delete host 102.132.99.60

$ route -n get 102.132.99.60
   route to: 102.132.99.60
destination: 102.132.99.60
    gateway: 100.64.0.1
  interface: utun6
      flags: <UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE,IFREF>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0         0         0         0      1400         0
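
If you do this often, the lookup and the route change can be combined into one line. This is a sketch only: it assumes your default gateway on en0 is 192.168.8.1 (check yours with netstat -nr | grep default), and if the name resolves to several addresses you would need a route for each. Also note that routes added this way do not survive a reboot.

$ sudo route add $(dig +short web.whatsapp.com | tail -1) 192.168.8.1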

AWS: Use the AWS CLI to delete snapshots from your account

The Amazon EC2 console allows you to delete up to 50 Amazon Elastic Block Store (Amazon EBS) snapshots at once. To delete more than 50 snapshots, use the AWS Command Line Interface (AWS CLI) or the AWS SDK.

To see all the snapshots that you own in a specific region, run the following. Note, replace af-south-1 with your region:

aws ec2 describe-snapshots --owner-ids self  --query 'Snapshots[]' --region af-south-1

Note: to run the code below, first make sure you're in the correct account (or life will become difficult for you). Next, replace BOTH instances of "af-south-1" with your particular region. Finally, you can use a specific account number in place of --owner-ids=self (e.g. --owner-ids=1234567890).

for SnapshotID in $(aws ec2 --region af-south-1 describe-snapshots --owner-ids=self --query 'Snapshots[*].SnapshotId' --output=text); do
aws ec2 --region af-south-1 delete-snapshot --snapshot-id ${SnapshotID}
done
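
If you only want to remove older snapshots rather than all of them, you can filter on StartTime with a JMESPath query before feeding the IDs into the same loop. This is a sketch only; the cut-off date below is just an example:

aws ec2 --region af-south-1 describe-snapshots --owner-ids self \
  --query "Snapshots[?(StartTime<='2023-01-01')].SnapshotId" --output text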

Macbook OSX: Using Touch ID / fingerprints to enable sudo, and keeping this enabled after macOS updates

Each day that I wake up I try and figure out if I can do less work than yesterday. With this in mind I was playing around to see if there is a way to save me typing my password each time I SUDO. It turns out this is quite a simple change…

Open Terminal and run the following to edit sudo's behaviour:

sudo nano /etc/pam.d/sudo

Next add the following to the top of the file:

auth       sufficient     pam_tid.so

The only issue with this is that /etc/pam.d/sudo is overwritten by every macOS update (major, minor or patch: it is always reset back to its default state).

macOS: Sonoma

In their “What’s new for enterprise in macOS Sonoma” document Apple listed the following in the “Bug fixes and other improvements” section:

Touch ID can be allowed for sudo with a configuration that persists across software updates using /etc/pam.d/sudo_local. See /etc/pam.d/sudo_local.template for details.

So let's copy Apple's template to /etc/pam.d/sudo_local and edit the copy (editing only the template has no effect, since sudo includes sudo_local, not the template):

sudo cp /etc/pam.d/sudo_local.template /etc/pam.d/sudo_local
sudo nano /etc/pam.d/sudo_local

Next uncomment the auth line, as per:

# sudo_local: local config file which survives system update and is included for sudo
# uncomment following line to enable Touch ID for sudo
auth       sufficient     pam_tid.so

This should mean that Touch ID for sudo now survives system updates!

Quick tests:

sudo ls
# invalidate the cached sudo credentials
sudo -k
sudo ls

To enable Touch ID for sudo inside iTerm2, you need to do the following: go to Prefs -> Advanced -> "Allow sessions to survive logging out and back in" and set the value to No. Restart iTerm2 and Touch ID authentication will work in iTerm2.

Macbook OSX: Change the default image type of your screenshots from PNG to JPEG, GIF or PDF

There are a few things that I tweak when I get a new MacBook, one of which is the screenshot format (mainly because PNG doesn't natively render in WhatsApp). So I thought I would share the snippets you can run in Terminal to alter the default image type of your screenshots:

For JPEG use:

$ defaults write com.apple.screencapture type JPG

For GIF use:

$ defaults write com.apple.screencapture type GIF

For PDF use:

$ defaults write com.apple.screencapture type PDF

For PNG use:

$ defaults write com.apple.screencapture type PNG
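
You can check the current value with defaults read, and on older versions of macOS you may need to restart the SystemUIServer process before the new format takes effect:

$ defaults read com.apple.screencapture type
$ killall SystemUIServer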

How to make an offline copy of a static website using wget and hosting on AWS S3 with CloudFront

I have an old website for which I want to avoid the hosting costs, so I just wanted to download the website and run it from an AWS S3 bucket, using CloudFront to publish the content. Below are the steps I took to do this:

First download the website to your laptop

$ wget \
     --recursive \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --convert-links \
     --no-check-certificate \
     --restrict-file-names=unix \
     --domains archive.andrewbaker.ninja \
     --no-parent \
         http://archive.andrewbaker.ninja/
$ cd archive.andrewbaker.ninja
$ ls

Below is a summary of the parameters (including common alternatives):

--recursive: Wget is capable of traversing parts of the Web (or a single HTTP or FTP server), following links and directory structure. This is referred to as recursive retrieval, or recursion.

--no-clobber: If a file is downloaded more than once in the same directory, Wget's behaviour depends on a few options, including -nc. In certain cases the local file will be clobbered (overwritten) upon repeated download; in other cases it will be preserved. When running Wget without -N, -nc or -r, downloading the same file in the same directory results in the original copy being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. When -nc is specified, this behaviour is suppressed and Wget will refuse to download newer copies of the file. Therefore "no-clobber" is actually a misnomer in this mode: it is not clobbering that is prevented (the numeric suffixes were already preventing clobbering), but rather the multiple-version saving. When running Wget with -r but without -N or -nc, re-downloading a file will result in the new copy simply overwriting the old; adding -nc prevents this, instead causing the original version to be preserved and any newer copies on the server to be ignored. When running Wget with -N, with or without -r, the decision whether to download a newer copy of a file depends on the local and remote timestamp and size of the file (see the section on time-stamping). -nc may not be specified at the same time as -N. Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.

--page-requisites: This causes Wget to download all the files that are necessary to properly display a given HTML page, which includes images, CSS, JS, etc.

--html-extension: This adds .html to the downloaded filename, to make sure it plays nicely on whatever system you're going to view the archive on. (Newer versions of Wget call this --adjust-extension, which preserves proper file extensions for .html, .css and other assets.)

--convert-links: After the download is complete, convert the links in the documents to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

--no-check-certificate: Don't check the server certificate against the available certificate authorities. Also don't require the URL host name to match the common name presented by the certificate.

--restrict-file-names: By default, Wget escapes the characters that are not valid or safe as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, for example because you are downloading to a non-native partition; unless you are downloading to a non-native partition, you do not need to restrict file names by OS, as it is automatic. Additionally, the values 'unix' and 'windows' are mutually exclusive (one will override the other).

--domains: Limit spanning to the specified domains.

--no-parent: If you don't want Wget to ascend to the parent directory, use the -np or --no-parent option. This instructs Wget not to ascend to the parent directory when it hits references like ../ in href links.

Upload Files to S3 Bucket

Next, upload the files to your S3 bucket. First move into the relevant directory, then perform the recursive upload.

$ cd archive.andrewbaker.ninja
$ ls .
$ aws s3 cp . s3://vbusers.com/ --recursive
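
If you expect to refresh the archive and re-upload it later, aws s3 sync is a handy alternative to cp --recursive, since it only transfers files that have changed (the bucket name below matches the example above):

$ aws s3 sync . s3://vbusers.com/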

Create a CloudFront Distribution from an S3 Bucket

Finally, go to CloudFront and create a distribution from the S3 bucket you just created. You can pretty much use the default settings. Note: you will need to wait a few minutes before you can browse to the distribution's domain name.
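
If you prefer the CLI, a basic distribution can be created in a single command. This is a minimal sketch; it assumes the bucket used above and serves index.html as the default root object:

$ aws cloudfront create-distribution \
    --origin-domain-name vbusers.com.s3.amazonaws.com \
    --default-root-object index.html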

AWS: Automatically Stop and Start your EC2 Services

Below is a quick (I am busy) outline of how to automatically stop and start your EC2 instances.

Step 1: Tag your resources

In order to decide which instances to stop and start, you first need to add an auto-start-stop: Yes tag to all the instances you want to be affected by the start/stop functions. Note: you can use "Resource Groups and Tag Editor" to bulk-apply these tags to the resources you want the Lambda functions you are going to create to act on. See below (click the orange button called "Manage tags of Selected Resources").
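
If you would rather tag from the command line than the console, something like the following should work (the instance ID here is just a placeholder):

aws ec2 create-tags --resources i-0123456789abcdef0 --tags Key=auto-start-stop,Value=Yes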

Step 2: Create a new role for our lambda functions

First we need to create the IAM role that will run the Lambda functions. Go to IAM and click the "Create Role" button. Then select "AWS Service" from the "Trusted entity options", and select Lambda from the "Use Cases" options. Then click "Next", followed by "Create Policy". To specify the permissions, simply click the JSON button on the right of the screen and enter the below policy (swapping the region and account ID for your own region and account ID):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:DescribeTags",
                "logs:*",
                "ec2:DescribeInstanceTypes",
                "ec2:StopInstances",
                "ec2:DescribeInstanceStatus"
            ],
            "Resource": "arn:aws:ec2:<region>:<accountID>:instance/*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/auto-start-stop": "Yes"
                }
            }
        }
    ]
}

Hit "Next" and, under "Review and create", save the above policy as ec2-lambda-start-stop by clicking the "Create Policy" button. Next, search for this newly created policy, select it as per below and hit "Next".

You will now see the “Name, review, and create” screen. Here you simply need to hit “Create Role” after you enter the role name as ec2-lambda-start-stop-role.

Note the policy is restricted so that it only has access to EC2 instances that carry the auto-start-stop: Yes tag (least privilege).

If you want to review your role, this is how it should look. You can see I have filled in my region and account number in the policy:
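
For reference, the same policy and role can also be created from the CLI. This is a rough sketch and assumes you have saved the JSON policy above locally as ec2-lambda-start-stop.json:

aws iam create-policy --policy-name ec2-lambda-start-stop \
    --policy-document file://ec2-lambda-start-stop.json
aws iam create-role --role-name ec2-lambda-start-stop-role \
    --assume-role-policy-document '{"Version":"2012-10-17","Statement":[{"Effect":"Allow","Principal":{"Service":"lambda.amazonaws.com"},"Action":"sts:AssumeRole"}]}'
aws iam attach-role-policy --role-name ec2-lambda-start-stop-role \
    --policy-arn arn:aws:iam::<accountID>:policy/ec2-lambda-start-stop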

Step 3: Create Lambda Functions To Start/Stop EC2 Instances

In this section we will create two lambda functions, one to start the instances and the other to stop the instances.

Step 3a: Add the Stop EC2 instance function

  • Go to the Lambda console and click on "Create function".
  • Create a Lambda function with the function name stop-ec2-instance-lambda, the Python 3.11 runtime, and the ec2-lambda-start-stop-role role (see image below).

Next add the Lambda stop code below and save the function. Note, you will need to change the value of the region_name parameter accordingly.

import json
import boto3

ec2 = boto3.resource('ec2', region_name='af-south-1')
def lambda_handler(event, context):
   instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['running']},{'Name': 'tag:auto-start-stop','Values':['Yes']}])
   for instance in instances:
       id=instance.id
       ec2.instances.filter(InstanceIds=[id]).stop()
       print("Instance ID is stopped:- "+instance.id)
   return "success"

This is how your Lambda function should look:

Step 3b: Add the Start EC2 instance function

  • Go to the Lambda console and click on "Create function".
  • Create a Lambda function named start-ec2-instance-lambda, with the Python 3.11 runtime and the ec2-lambda-start-stop-role role.
  • Then add the below code and save the function.

Note, you will need to change the value of the region_name parameter accordingly.

import json
import boto3

ec2 = boto3.resource('ec2', region_name='af-south-1')
def lambda_handler(event, context):
   instances = ec2.instances.filter(Filters=[{'Name': 'instance-state-name', 'Values': ['stopped']},{'Name': 'tag:auto-start-stop','Values':['Yes']}])
   for instance in instances:
       id=instance.id
       ec2.instances.filter(InstanceIds=[id]).start()
       print("Instance ID is started:- "+instance.id)
   return "success"

Step 4: Summary

If either of the above Lambda functions is triggered, it will start or stop your EC2 instances based on the instance state and the value of the auto-start-stop tag. To automate this you can simply set up cron jobs, Step Functions, Amazon EventBridge, Jenkins, etc.
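
As an example of the scheduling piece, the sketch below wires the stop function to an EventBridge rule that fires at 18:00 UTC on weekdays. The rule name, region and account ID are placeholders, and you would create a matching rule for the start function:

aws events put-rule --name stop-ec2-nightly \
    --schedule-expression "cron(0 18 ? * MON-FRI *)"
aws lambda add-permission --function-name stop-ec2-instance-lambda \
    --statement-id eventbridge-stop --action lambda:InvokeFunction \
    --principal events.amazonaws.com \
    --source-arn arn:aws:events:<region>:<accountID>:rule/stop-ec2-nightly
aws events put-targets --rule stop-ec2-nightly \
    --targets "Id"="1","Arn"="arn:aws:lambda:<region>:<accountID>:function:stop-ec2-instance-lambda"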

Finding and Setting the Maximum Transmission Unit (MTU) on a Windows Machine

If you have just changed ISPs or moved house and your internet suddenly starts misbehaving, the likelihood is that your Maximum Transmission Unit (MTU) is set too high for your ISP. The default internet-facing MTU is 1500 bytes, BUT depending on your setup, this often needs to be set much lower.

Step 1:

First check your current MTU across all your IPv4 interfaces using netsh:

netsh interface ipv4 show subinterfaces
   MTU  MediaSenseState   Bytes In  Bytes Out  Interface
------  ---------------  ---------  ---------  -------------
4294967295                1          0          0  Loopback Pseudo-Interface 1
  1492                1        675        523  Local Area Connection

As you can see, the Local Area Connection interface is set to an MTU of 1492 bytes. So how do we find out what it should be? We are going to send a fixed-size echo packet out and tell the network not to fragment this packet. If somewhere along the line this packet is too big, then the request will fail.

Next, enter the following. Note that the -l value is the ICMP payload size, which excludes the 28-byte IP/ICMP header, so to test a 1492-byte MTU you send a 1464-byte payload (if it fails, your MTU is set too high):

ping 8.8.8.8 -f -l 1464

Procedure to find optimal MTU:

For PPPoE, your max MTU should be no more than 1492 to allow space for the 8-byte PPPoE "wrapper": 1492 + 8 = 1500. The ping test we will be doing does not include the IP/ICMP header of 28 bytes: 1500 - 28 = 1472. Include the 8-byte PPPoE wrapper if your ISP uses PPPoE and you get 1500 - 28 - 8 = 1464.

The best value for MTU is the value just before your packets get fragmented. Add 28 to the largest packet size that does not result in fragmentation (since the ping command specifies the payload size, not including the 28-byte IP/ICMP header), and this is your maximum MTU setting. For example, if the largest payload that pings cleanly with -f is 1464 bytes, the optimal MTU is 1464 + 28 = 1492.

Below is an automated ping sweep that tests various packet sizes until it fails (increasing by 10 bytes per iteration):

C:\Windows\system32>for /l %i in (1360,10,1500) do @ping 8.8.8.8 -n 1 -w 1000 -l %i -f

Pinging 8.8.8.8 with 1400 bytes of data:
Reply from 8.8.8.8: bytes=1400 time=6ms TTL=64

Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 6ms, Maximum = 6ms, Average = 6ms

Pinging 8.8.8.8 with 1401 bytes of data:
Reply from 8.8.8.8: bytes=1401 time<1ms TTL=64

Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

Pinging 8.8.8.8 with 1402 bytes of data:
Reply from 8.8.8.8: bytes=1402 time<1ms TTL=64

Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

Pinging 8.8.8.8 with 1403 bytes of data:
Reply from 8.8.8.8: bytes=1403 time<1ms TTL=64

Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms 

Once you have found your optimal MTU, you can set it with netsh as per below (run from an elevated command prompt):

netsh interface ipv4 set subinterface "Local Area Connection" mtu=1360 store=persistent

Hacking: Using a Macbook and Nikto to Scan your Local Network

Nikto is becoming one of my favourite tools. I like it because of its wide-ranging use cases and its simplicity. So what's an example use case for Nikto? Well, I am bored right now, so I am going to hunt around my local network and see what I can find…

# First install Nikto
brew install nikto
# Now get my IP address range
ifconfig
# Copy my IP address into ipcalc to get my CIDR block
eth0      Link encap:Ethernet  HWaddr 00:0B:CD:1C:18:5A
          inet addr:172.16.25.126  Bcast:172.16.25.63  Mask:255.255.255.224
          inet6 addr: fe80::20b:cdff:fe1c:185a/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:2341604 errors:0 dropped:0 overruns:0 frame:0
          TX packets:2217673 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:293460932 (279.8 MiB)  TX bytes:1042006549 (993.7 MiB)
          Interrupt:185 Memory:f7fe0000-f7ff0000
# Get my CIDR range (brew install ipcalc)
ipcalc 172.16.25.126
cp363412:~ $ ipcalc 172.16.25.126
Address:   172.16.25.126        10101100.00010000.00011001. 01111110
Netmask:   255.255.255.0 = 24   11111111.11111111.11111111. 00000000
Wildcard:  0.0.0.255            00000000.00000000.00000000. 11111111
=>
Network:   172.16.25.0/24       10101100.00010000.00011001. 00000000
HostMin:   172.16.25.1          10101100.00010000.00011001. 00000001
HostMax:   172.16.25.254        10101100.00010000.00011001. 11111110
Broadcast: 172.16.25.255        10101100.00010000.00011001. 11111111
Hosts/Net: 254                   Class B, Private Internet
# Our NW range is "Network:   172.16.25.0/24"

Now let's pop across to nmap to get a list of active hosts on my network:

# Now we run a quick nmap scan for ports 80 and 443 across the entire range looking for any hosts that respond and dump the results into a grepable file
nmap -p 80,443 172.16.25.0/24 -oG webhosts.txt
# View the list of hosts
cat webhosts.txt
$ cat webhosts.txt
# Nmap 7.93 scan initiated Wed Jan 25 20:17:42 2023 as: nmap -p 80,433 -oG webhosts.txt 172.16.25.0/26
Host: 172.16.25.0 ()	Status: Up
Host: 172.16.25.0 ()	Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.1 ()	Status: Up
Host: 172.16.25.1 ()	Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.2 ()	Status: Up
Host: 172.16.25.2 ()	Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.3 ()	Status: Up
Host: 172.16.25.3 ()	Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.4 ()	Status: Up
Host: 172.16.25.4 ()	Ports: 80/open/tcp//http///, 433/open/tcp//nnsp///
Host: 172.16.25.5 ()	Status: Up

Next we want to grep this webhosts file and send all the hosts that responded to the port probe off to Nikto for scanning. To do this we can use some Linux magic. First we cat to read the output stored in our webhosts.txt document. Next we use awk, a Linux tool that helps search for patterns. In the command below we ask it to look for lines ending in "Up" (meaning the host is up), and then tell it to print $2, which means printing the second word on the line where "Up" was found, i.e. the IP address. Finally, we append that data to a new file called niktoscan.txt.

cat webhosts.txt | awk '/Up$/{print $2}' | cat >> niktoscan.txt
cat niktoscan.txt
$ cat niktoscan.txt
172.16.25.0
172.16.25.1
172.16.25.2
172.16.25.3
172.16.25.4
172.16.25.5
172.16.25.6
172.16.25.7
172.16.25.8
172.16.25.9
172.16.25.10
...

Now let nikto do its stuff:

nikto -h niktoscan.txt -ssl >> niktoresults.txt
# Lets check what came back
cat niktoresults.txt