WindowServer is a core macOS system process that manages everything you see on your display. It acts as the graphics engine powering your Mac’s visual interface.
WindowServer handles:
Drawing windows, menus, and desktop elements
Managing transparency effects and blur
Rendering animations and transitions
Coordinating with the GPU for visual effects
Managing multiple displays
CPU usage varies based on activity:
High usage (10% to 25%): Multiple windows with transparency, active animations, external displays, video playback
Low usage (1% to 5%): Minimal visual effects, few active windows, single display
When WindowServer CPU usage stays high, the battery drains faster: both the CPU and the GPU work harder to composite and render visual effects.
Common Battery Drain Issues
macOS laptops often experience battery drain due to:
Sleep Prevention
Power Nap causing periodic wake events
Handoff keeping devices in constant communication
TCP Keep Alive maintaining network connections
Wake on Magic Packet allowing network wake events
High WindowServer CPU Usage
Transparency and blur effects
Active animations and transitions
Multiple windows updating simultaneously
Suboptimal Power Settings
Long display sleep timers
Extended standby delays
Unnecessary wake triggers
Optimization Solutions
Power Management Settings
Disable features that prevent proper sleep:
sudo pmset -a powernap 0
sudo pmset -a tcpkeepalive 0
sudo pmset -a womp 0
sudo pmset -a displaysleep 5
sudo pmset -a standbydelay 1800
What each setting does:
powernap 0: Disables background updates during sleep. Trade-off: Email/iCloud won't sync while asleep.
tcpkeepalive 0: Disables network connections during sleep. Trade-off: Find My Mac won't work while asleep.
womp 0: Disables wake on network (magic) packet. Trade-off: Can't remotely wake the Mac.
displaysleep 5: Display sleeps after 5 minutes. Trade-off: Earlier screen timeout.
standbydelay 1800: Deep sleep after 30 minutes. Trade-off: Slightly slower wake from hibernation.
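To confirm the new values took effect, review the active power settings:
pmset -g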
Disable Handoff
Handoff prevents sleep by maintaining constant communication with iPhone/iPad.
Via System Settings: System Settings > General > AirDrop & Handoff > Uncheck “Allow Handoff between this Mac and your iCloud devices”
Via command line:
defaults write ~/Library/Preferences/ByHost/com.apple.coreservices.useractivityd.plist ActivityAdvertisingAllowed -bool no
defaults write ~/Library/Preferences/ByHost/com.apple.coreservices.useractivityd.plist ActivityReceivingAllowed -bool no
killall sharingd
Reduce Visual Effects
Lower WindowServer CPU usage by disabling resource-intensive visual effects:
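The supported route is System Settings > Accessibility > Display, where you can enable "Reduce transparency" and "Reduce motion". The commonly cited defaults keys below may also work, but they are not officially documented and can require logging out before they take effect:
# Reduce transparency and motion (undocumented keys; behavior varies by macOS version)
defaults write com.apple.universalaccess reduceTransparency -bool true
defaults write com.apple.Accessibility ReduceMotionEnabled -bool true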
Modern networks are far more complex than the simple point-to-point paths of the early internet. Equal-Cost Multi-Path (ECMP) routing, carrier-grade NAT, and load balancing mean that packets from your machine to a destination might traverse entirely different network paths depending on flow-hashing algorithms. Traditional traceroute tools simply cannot handle this complexity, often producing misleading or incomplete results. Dublin Traceroute solves this problem.
This guide provides a detailed walkthrough of installing Dublin Traceroute on macOS, addressing the common Xcode compatibility issues that plague the build process, and exploring the tool’s advanced capabilities for network path analysis.
1. Understanding Dublin Traceroute
1.1 What is Dublin Traceroute?
Dublin Traceroute is a NAT aware multipath tracerouting tool developed by Andrea Barberio. Unlike traditional traceroute utilities, it uses techniques pioneered by Paris traceroute to enumerate all possible network paths in ECMP environments, while adding novel NAT detection capabilities.
The tool addresses a fundamental limitation of classic traceroute. When multiple equal cost paths exist between source and destination, traditional traceroute cannot distinguish which path each packet belongs to, potentially showing you a composite “ghost path” that no real packet actually traverses.
1.2 How ECMP Breaks Traditional Traceroute
Consider a network topology where packets from host A to host F can take two paths:
A → B → D → F
A → C → E → F
Traditional traceroute sends packets with incrementing TTL values and records the ICMP Time Exceeded responses. However, because ECMP routers hash packets to determine their path (typically based on source IP, destination IP, source port, destination port, and protocol), successive traceroute packets may be routed differently.
The result? Traditional traceroute might show you something like A → B → E → F which is a path that doesn’t actually exist in your network. This phantom path combines hops from two different real paths, making network troubleshooting extremely difficult.
1.3 The Paris Traceroute Innovation
The Paris traceroute team invented a technique that keeps the flow identifier constant across all probe packets. By maintaining consistent values for the fields that routers use for ECMP hashing, all probes follow the same path. Dublin Traceroute implements this technique and extends it.
1.4 Dublin Traceroute’s NAT Detection
Dublin Traceroute introduces a unique NAT detection algorithm. It forges a custom IP ID in outgoing probe packets and tracks these identifiers in ICMP response packets. When a response references an outgoing packet with different source/destination addresses or ports than what was sent, this indicates NAT translation occurred at that hop.
For IPv6, where there is no IP ID field, Dublin Traceroute uses the payload length field to achieve the same tracking capability.
2. Prerequisites and System Requirements
Before installing Dublin Traceroute, ensure your system meets these requirements:
2.1 macOS Version
Dublin Traceroute builds on macOS, though the maintainers note that macOS "breaks at every major release". Currently supported versions include macOS Monterey, Ventura, Sonoma, and Sequoia. Apple Silicon (M1/M2/M3/M4) Macs work correctly with Homebrew's ARM-native builds.
2.2 Xcode Command Line Tools
The Xcode Command Line Tools are mandatory. Verify your installation:
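For example:
# Print the active developer directory; this errors if the CLT are missing
xcode-select -p
# Install the Command Line Tools if needed
xcode-select --install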
2.3 Homebrew
Homebrew is the recommended package manager for installing dependencies. Verify or install:
# Check if Homebrew is installed
which brew
# If not installed, install it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
For Apple Silicon Macs, ensure the Homebrew path is in your shell configuration:
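For the default zsh shell, that means:
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"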
During installation, Homebrew may complain that the installed Xcode is too old:
Warning: Your Xcode (16.1) at /Applications/Xcode.app is too outdated.
Please update to Xcode 26.0 (or delete it).
This is a known Homebrew bug on macOS Tahoe betas, where placeholder version mappings reference non-existent Xcode versions. The workaround:
# Force Homebrew to use the CLT instead
sudo xcode-select --switch /Library/Developer/CommandLineTools
# Or ignore the warning if builds succeed
export HOMEBREW_NO_INSTALLED_DEPENDENTS_CHECK=1
A common build failure occurs when CMake cannot find jsoncpp even though it’s installed:
CMake Error at /usr/local/Cellar/cmake/3.XX.X/share/cmake/Modules/FindPkgConfig.cmake:696 (message):
None of the required 'jsoncpp' found
This happens because jsoncpp’s pkg-config file may not be in the expected location. Fix this by setting the PKG_CONFIG_PATH:
# For Intel Macs
export PKG_CONFIG_PATH="/usr/local/lib/pkgconfig:$PKG_CONFIG_PATH"
# For Apple Silicon Macs
export PKG_CONFIG_PATH="/opt/homebrew/lib/pkgconfig:$PKG_CONFIG_PATH"
Dublin Traceroute provides a Homebrew formula, though it’s not in the official repository:
# Download the formula
wget https://raw.githubusercontent.com/insomniacslk/dublin-traceroute/master/homebrew/dublin-traceroute.rb
# Install using the local formula
brew install ./dublin-traceroute.rb
During the build you may see this message:
-- googletest git submodule is absent. Run `git submodule init && git submodule update` to get it
This is informational only and doesn’t prevent the build. To silence it:
cd dublin-traceroute
git submodule init
git submodule update
5.4 Setting Up Permissions
Dublin Traceroute requires raw socket access. On macOS, this typically means running as root:
sudo dublin-traceroute 8.8.8.8
For convenience, you can set the setuid bit (security implications should be understood):
# Find the installed binary
DTPATH=$(which dublin-traceroute)
# If it's a symlink, get the real path
DTREAL=$(greadlink -f "$DTPATH")
# Set ownership and setuid
sudo chown root:wheel "$DTREAL"
sudo chmod u+s "$DTREAL"
Note: Homebrew’s security model discourages setuid binaries. The recommended approach is to use sudo explicitly.
6. Installing Python Bindings
The Python bindings provide additional features including visualization and statistical analysis.
6.1 Installation
pip3 install dublintraceroute
If the C++ library isn’t found:
# Ensure the library is in the expected location
sudo cp /usr/local/lib/libdublintraceroute* /usr/lib/
# Or set the library path
export DYLD_LIBRARY_PATH="/usr/local/lib:$DYLD_LIBRARY_PATH"
pip3 install dublintraceroute
7.1 Basic Usage
A basic trace with sudo dublin-traceroute 8.8.8.8 produces output like this:
Starting dublin-traceroute
Traceroute from 0.0.0.0:12345 to 8.8.8.8:33434~33453 (probing 20 paths, min TTL is 1, max TTL is 30, delay is 10 ms)
== Flow ID 33434 ==
1 192.168.1.1 (gateway), IP ID: 17503 RTT 2.657 ms ICMP (type=11, code=0) 'TTL expired in transit', NAT ID: 0
2 10.0.0.1, IP ID: 0 RTT 15.234 ms ICMP (type=11, code=0) 'TTL expired in transit', NAT ID: 0
3 72.14.215.85, IP ID: 0 RTT 18.891 ms ICMP (type=11, code=0) 'TTL expired in transit', NAT ID: 0
...
7.2 Command Line Options
dublin-traceroute --help
Dublin Traceroute v0.4.2
Written by Andrea Barberio - https://insomniac.slackware.it
Usage:
dublin-traceroute <target> [options]
Options:
-h --help Show this help
-v --version Print version
-s SRC_PORT --sport=PORT Source port to send packets from
-d DST_PORT --dport=PORT Base destination port
-n NPATHS --npaths=NUM Number of paths to probe (default: 20)
-t MIN_TTL --min-ttl=TTL Minimum TTL to probe (default: 1)
-T MAX_TTL --max-ttl=TTL Maximum TTL to probe (default: 30)
-D DELAY --delay=MS Inter-packet delay in milliseconds
-b --broken-nat Handle broken NAT configurations
-N --no-dns Skip reverse DNS lookups
-o --output-file=FILE Output file name (default: trace.json)
7.3 Controlling Path Enumeration
Probe fewer paths for faster results:
sudo dublin-traceroute -n 5 8.8.8.8
Limit TTL range for local network analysis:
sudo dublin-traceroute -t 1 -T 10 192.168.1.1
7.4 JSON Output
Dublin Traceroute always produces a trace.json file containing structured results:
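The exact schema is best inspected in your own output; the truncated sketch below is inferred from the fields the Python scripts later in this guide read (flows, hops, sent/received, ip.ttl, ip.src). Field names beyond those, and fields such as RTT and NAT ID, are omitted here because their key names vary:
{
  "flows": {
    "33434": {
      "hops": [
        {
          "sent": { "ip": { "src": "192.168.1.10", "dst": "8.8.8.8", "ttl": 1 } },
          "received": { "ip": { "src": "192.168.1.1" } }
        }
      ]
    }
  }
}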
Convert the JSON output to a graphical representation:
# Run the traceroute
sudo dublin-traceroute 8.8.8.8
# Generate the graph
python3 scripts/to_graphviz.py trace.json
# View the image
open trace.json.png
This is useful for quick connectivity tests to verify reachability through multiple paths.
9. Interpreting Results
9.1 Understanding Flow IDs
Each “flow” in Dublin Traceroute output represents a distinct path through the network. The flow ID is derived from the destination port number. With --npaths=20, you’ll see flows numbered 33434 through 33453.
9.2 NAT ID Field
The NAT ID indicates detected NAT translations:
NAT ID: 0 means no NAT detected at this hop
NAT ID: N (where N > 0) indicates the Nth NAT device encountered
9.3 ICMP Codes
Common ICMP responses:
Type 11, Code 0: TTL expired in transit
Type 3, Code 0: Network unreachable
Type 3, Code 1: Host unreachable
Type 3, Code 3: Port unreachable (destination reached)
Type 3, Code 13: Administratively filtered
9.4 Identifying ECMP Paths
When multiple flows show different hops at the same TTL, you’ve discovered ECMP routing:
== Flow 33434 ==
3 router-a.isp.net, RTT 25 ms
== Flow 33435 ==
3 router-b.isp.net, RTT 28 ms
This reveals two distinct paths through the ISP network.
9.5 Recognizing Asymmetric Routing
Different RTT values for the same hop across flows might indicate:
Load balancing with different queue depths
Asymmetric return paths
Different physical path lengths
10. Go Implementation
Dublin Traceroute also has a Go implementation with IPv6 support:
# Install Go if needed
brew install go
# Build the Go version
cd dublin-traceroute/go/dublintraceroute
go build -o dublin-traceroute-go ./cmd/dublin-traceroute
# Run with IPv6 support
sudo ./dublin-traceroute-go -6 2001:4860:4860::8888
The Go implementation provides:
IPv4/UDP probes
IPv6/UDP probes (not available in C++ version)
JSON output compatible with Python visualization tools
DOT output for Graphviz
11. Integration Examples
11.1 Automated Network Monitoring Script
#!/bin/bash
# monitor_paths.sh - Periodic path monitoring
TARGETS=("8.8.8.8" "1.1.1.1" "208.67.222.222")
OUTPUT_DIR="/var/log/dublin-traceroute"
INTERVAL=3600 # 1 hour
mkdir -p "$OUTPUT_DIR"
while true; do
  TIMESTAMP=$(date +%Y%m%d_%H%M%S)
  for target in "${TARGETS[@]}"; do
    OUTPUT_FILE="${OUTPUT_DIR}/${target//\./_}_${TIMESTAMP}.json"
    echo "Tracing $target at $(date)"
    sudo dublin-traceroute -n 10 -o "$OUTPUT_FILE" "$target" > /dev/null 2>&1
    # Generate visualization
    python3 /usr/local/share/dublin-traceroute/to_graphviz.py "$OUTPUT_FILE"
  done
  sleep $INTERVAL
done
11.2 Path Comparison Analysis
#!/usr/bin/env python3
"""Compare network paths between two traceroute runs."""
import json
import sys
from collections import defaultdict
def load_trace(filename):
    with open(filename) as f:
        return json.load(f)

def extract_paths(trace):
    paths = {}
    for flow_id, flow_data in trace['flows'].items():
        path = []
        for hop in sorted(flow_data['hops'], key=lambda x: x['sent']['ip']['ttl']):
            if 'received' in hop:
                path.append(hop['received']['ip']['src'])
            else:
                path.append('*')
        paths[flow_id] = path
    return paths

def compare_traces(trace1_file, trace2_file):
    trace1 = load_trace(trace1_file)
    trace2 = load_trace(trace2_file)
    paths1 = extract_paths(trace1)
    paths2 = extract_paths(trace2)
    print("Path Comparison Report")
    print("=" * 60)
    all_flows = set(paths1.keys()) | set(paths2.keys())
    for flow in sorted(all_flows, key=int):
        p1 = paths1.get(flow, [])
        p2 = paths2.get(flow, [])
        if p1 == p2:
            print(f"Flow {flow}: IDENTICAL")
        else:
            print(f"Flow {flow}: DIFFERENT")
            max_len = max(len(p1), len(p2))
            for i in range(max_len):
                h1 = p1[i] if i < len(p1) else '-'
                h2 = p2[i] if i < len(p2) else '-'
                marker = ' ' if h1 == h2 else '>>'
                print(f" {marker} TTL {i+1}: {h1:20} vs {h2}")

if __name__ == '__main__':
    if len(sys.argv) != 3:
        print(f"Usage: {sys.argv[0]} trace1.json trace2.json")
        sys.exit(1)
    compare_traces(sys.argv[1], sys.argv[2])
11.3 Alerting on Path Changes
#!/usr/bin/env python3
"""Alert when network paths change from baseline."""
import json
import hashlib
import smtplib
from email.mime.text import MIMEText
import subprocess
import sys
BASELINE_FILE = '/etc/dublin-traceroute/baseline.json'
ALERT_EMAIL = 'netops@example.com'
def get_path_hash(trace):
    """Generate a hash of all paths for quick comparison."""
    paths = []
    for flow_id in sorted(trace['flows'].keys(), key=int):
        flow = trace['flows'][flow_id]
        path = []
        for hop in sorted(flow['hops'], key=lambda x: x['sent']['ip']['ttl']):
            if 'received' in hop:
                path.append(hop['received']['ip']['src'])
        paths.append(':'.join(path))
    combined = '|'.join(paths)
    return hashlib.sha256(combined.encode()).hexdigest()

def send_alert(target, old_hash, new_hash, trace_file):
    msg = MIMEText(f"""
Network path change detected!
Target: {target}
Previous hash: {old_hash}
Current hash: {new_hash}
Trace file: {trace_file}
Please investigate the path change.
""")
    msg['Subject'] = f'[ALERT] Network path change to {target}'
    msg['From'] = 'dublin-traceroute@example.com'
    msg['To'] = ALERT_EMAIL
    with smtplib.SMTP('localhost') as s:
        s.send_message(msg)

def main(target):
    # Run traceroute
    trace_file = f'/tmp/trace_{target.replace(".", "_")}.json'
    subprocess.run([
        'sudo', 'dublin-traceroute',
        '-n', '10',
        '-o', trace_file,
        target
    ], capture_output=True)
    # Load results
    with open(trace_file) as f:
        trace = json.load(f)
    current_hash = get_path_hash(trace)
    # Load baseline
    try:
        with open(BASELINE_FILE) as f:
            baseline = json.load(f)
    except FileNotFoundError:
        baseline = {}
    # Compare
    if target in baseline:
        if baseline[target] != current_hash:
            send_alert(target, baseline[target], current_hash, trace_file)
            print(f"ALERT: Path to {target} has changed!")
    # Update baseline
    baseline[target] = current_hash
    with open(BASELINE_FILE, 'w') as f:
        json.dump(baseline, f, indent=2)

if __name__ == '__main__':
    if len(sys.argv) != 2:
        print(f"Usage: {sys.argv[0]} target")
        sys.exit(1)
    main(sys.argv[1])
12. Troubleshooting Common Issues
12.1 Permission Denied
Error: Could not open raw socket: Permission denied
Solution: Run with sudo or configure setuid as described in section 5.4.
12.2 No Response from Hops
If you see many asterisks (*) in output:
Firewall may be blocking ICMP responses
Rate limiting on intermediate routers
Increase the delay between probes:
sudo dublin-traceroute --delay=50 8.8.8.8
12.3 Library Not Found at Runtime
dyld: Library not loaded: @rpath/libdublintraceroute.dylib
Fix:
# Add library path
export DYLD_LIBRARY_PATH="/usr/local/lib:$DYLD_LIBRARY_PATH"
(Creating a symlink in /usr/lib, sometimes suggested as an alternative, fails on modern macOS because System Integrity Protection makes that directory read-only.)
12.4 Python Import Error
ImportError: No module named 'dublintraceroute._dublintraceroute'
The C++ library wasn’t found during Python module installation. Rebuild:
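One approach that usually resolves it, assuming the library is installed under /usr/local/lib as in section 6.1:
export DYLD_LIBRARY_PATH="/usr/local/lib:$DYLD_LIBRARY_PATH"
pip3 install --force-reinstall dublintraceroute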
Dublin Traceroute provides essential visibility into modern network paths that traditional traceroute tools simply cannot offer. The combination of ECMP path enumeration and NAT detection makes it invaluable for troubleshooting complex network issues, validating routing policies, and understanding how your traffic actually traverses the internet.
The installation process on macOS, while occasionally complicated by Xcode version mismatches, is straightforward once dependencies are properly configured. The Python bindings extend the tool’s utility with visualization and analytical capabilities that transform raw traceroute data into actionable network intelligence.
For network engineers dealing with multi homed environments, CDN architectures, or simply trying to understand why packets take the paths they do, Dublin Traceroute deserves a place in your diagnostic toolkit.
15. References
Dublin Traceroute Official Site: https://dublin-traceroute.net
Ever wondered how to adjust the time window before your Mac demands a password again after using Touch ID? Here’s how to configure these settings from the terminal.
Screen Lock Password Delay
The most common scenario is controlling how long after your screen locks before a password is required. This setting determines whether Touch ID alone can unlock your Mac or if you need to type your password.
# Set delay in seconds (0 = immediately, 300 = 5 minutes)
defaults write com.apple.screensaver askForPasswordDelay -int 0
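The delay key works together with askForPassword, which must be enabled for the delay to matter. Note that recent macOS releases may ignore these legacy keys in favor of System Settings > Lock Screen:
# Require a password after sleep or screen saver begins
defaults write com.apple.screensaver askForPassword -int 1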
If you try to disable iCloud Drive syncing for your Desktop and Documents folders using the macOS System Settings interface, you’ll encounter this alarming warning:
If you continue, items will be removed from the Desktop and the Documents folder on this Mac and will remain available in iCloud Drive.
New items added to your Desktop or your Documents folder on this Mac will no longer be stored in iCloud Drive.
This is problematic because clicking “Turn Off” will remove all your Desktop files from your local Mac, leaving them only in iCloud Drive. This is not what most users want when they’re trying to disable iCloud sync.
The Solution: Use Terminal to Download First
The key is to ensure all iCloud files are downloaded locally before you disable the sync. Here’s the safe approach:
Step 1: Download All iCloud Desktop Files
Open Terminal and run:
# Force download all iCloud Desktop files to local storage
brctl download ~/Desktop/
# Check the download status
brctl status ~/Desktop/
Wait for the brctl download command to complete. This ensures every file on your Desktop that’s stored in iCloud is now also stored locally on your Mac.
Step 2: Verify Files Are Local
Check if any files are still cloud-only:
# Look for files that haven't been downloaded yet
find ~/Desktop -type f -exec sh -c 'ls -lO@ "$1" | grep -q "com.apple.fileprovider.status"' _ {} \; -print
If this returns any files, wait a bit longer or run brctl download ~/Desktop/ again.
Step 3: Now Disable iCloud Sync Safely
Once you’ve confirmed all files are downloaded:
Open System Settings
Click your Apple ID
Click iCloud
Click the ⓘ or Options button next to iCloud Drive
Uncheck Desktop & Documents Folders
Click Done
When you see the warning message about files being removed, you can click “Turn Off” with confidence because you’ve already downloaded everything locally.
Why This Happens
Apple’s iCloud Drive uses a feature called “Optimize Mac Storage” which keeps some files in the cloud only (not downloaded locally). When you disable Desktop & Documents sync through the UI, macOS assumes you want to keep files in iCloud and removes the local copies.
The brctl command-line tool (the control utility for the CloudDocs daemon behind iCloud Drive) gives you more control, allowing you to force a full download before disabling sync.
Alternative: Disable Without the GUI
You can try disabling some iCloud behaviors via terminal:
Note: These commands affect iCloud behavior but may not completely disable Desktop & Documents syncing. The GUI method after downloading is still the most reliable approach.
Summary
To safely disable iCloud Desktop sync without losing files:
Run brctl download ~/Desktop/ in Terminal
Wait for all files to download
Use System Settings to disable Desktop & Documents sync
Click “Turn Off” when warned (your files are already local)
This ensures you keep all your files on your Mac while stopping iCloud synchronization.
Have you encountered this issue? The warning message is genuinely scary because it sounds like you’re about to lose your files. Always download first, disable second.
Understanding and testing your server’s maximum concurrent stream configuration is critical for both performance tuning and security hardening against HTTP/2 attacks. This guide provides comprehensive tools and techniques to test the SETTINGS_MAX_CONCURRENT_STREAMS parameter on your web servers.
This article complements our previous guide on Testing Your Website for HTTP/2 Rapid Reset Vulnerabilities from macOS. While that article focuses on the CVE-2023-44487 Rapid Reset attack, this guide helps you verify that your server properly enforces stream limits, which is a critical defense mechanism.
2. Why Test Stream Limits?
The SETTINGS_MAX_CONCURRENT_STREAMS setting determines how many concurrent requests a client can multiplex over a single HTTP/2 connection. Testing this limit is important because:
Security validation: Confirms your server enforces reasonable stream limits
Configuration verification: Ensures your settings match security recommendations (typically 100-128 streams)
Performance tuning: Helps optimize the balance between throughput and resource consumption
Attack surface assessment: Identifies if servers accept dangerously high stream counts
3. Understanding HTTP/2 Stream Limits
When an HTTP/2 connection is established, the server sends a SETTINGS frame that includes:
SETTINGS_MAX_CONCURRENT_STREAMS: 100
This tells the client the maximum number of concurrent streams allowed. A compliant client should respect this limit, but attackers will not.
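As a minimal sketch (not the full tester described later), the following uses the Python h2 library to connect and report the advertised value. The host is a placeholder; a complete tester would also open streams beyond the limit and watch for StreamReset events.
import socket
import ssl

import h2.config
import h2.connection
import h2.events
import h2.settings

def read_advertised_limit(host, port=443):
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2"])
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            if tls.selected_alpn_protocol() != "h2":
                raise RuntimeError("Server did not negotiate HTTP/2")
            conn = h2.connection.H2Connection(h2.config.H2Configuration(client_side=True))
            conn.initiate_connection()
            tls.sendall(conn.data_to_send())
            # Read frames until the server's SETTINGS frame has been processed
            for _ in range(10):
                data = tls.recv(65535)
                if not data:
                    break
                for event in conn.receive_data(data):
                    if isinstance(event, h2.events.RemoteSettingsChanged):
                        changed = event.changed_settings.get(
                            h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS)
                        if changed is not None:
                            return changed.new_value
                tls.sendall(conn.data_to_send())  # e.g. SETTINGS ACK
    return None  # server did not advertise a limit

if __name__ == "__main__":
    print("Advertised SETTINGS_MAX_CONCURRENT_STREAMS:", read_advertised_limit("example.com"))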
3.1. Common Default Values
Web Servers:
Nginx: 128 (configurable via http2_max_concurrent_streams)
Apache: 100 (configurable via H2MaxSessionStreams)
Caddy: 250 (configurable via max_concurrent_streams)
LiteSpeed: 100 (configurable in admin panel)
Reverse Proxies and Load Balancers:
HAProxy: No default limit (should be explicitly configured)
Envoy: 100 (configurable via max_concurrent_streams)
Traefik: 250 (configurable via maxConcurrentStreams)
CDN and Cloud Services:
CloudFlare: 128 (managed automatically)
AWS ALB: 128 (managed automatically)
Azure Front Door: 100 (managed automatically)
4. The Stream Limit Testing Script
The following Python script tests your server’s maximum concurrent streams using the h2 library. This script will:
Connect to your HTTP/2 server
Read the advertised SETTINGS_MAX_CONCURRENT_STREAMS value
Attempt to open more streams than the advertised limit
Verify that the server actually enforces the limit
When the test completes, it reports:
Advertised max streams: What the server claims to support
Successful stream opens: How many streams were successfully created
Failed stream opens: Streams that failed to open
Streams reset by server: Streams terminated by the server (enforcement)
Actual max achieved: The real concurrent stream limit
6.1. Example Output
Testing HTTP/2 Stream Limits:
Target: example.com:443
Max streams to test: 200
Batch size: 10
============================================================
Server advertised limit: 128 concurrent streams
Opening batch of 10 streams (total: 10)...
Opening batch of 10 streams (total: 20)...
Opening batch of 10 streams (total: 130)...
WARNING: 5 stream(s) were reset by server
Stream limit enforcement detected
============================================================
STREAM LIMIT TEST RESULTS
============================================================
Server Configuration:
Advertised max streams: 128
Test Statistics:
Successful stream opens: 130
Failed stream opens: 0
Streams reset by server: 5
Actual max achieved: 125
Test duration: 3.45s
Enforcement:
Stream limit enforcement: DETECTED
============================================================
ASSESSMENT
============================================================
Advertised limit (128) is within recommended range
Server actively enforces stream limits
Stream limit protection is working correctly
============================================================
7. Interpreting Different Scenarios
7.1. Scenario 1: Proper Enforcement
Advertised max streams: 100
Successful stream opens: 105
Streams reset by server: 5
Actual max achieved: 100
Stream limit enforcement: DETECTED
Analysis: Server properly enforces the limit. Configuration is working exactly as expected.
7.2. Scenario 2: No Enforcement
Advertised max streams: 128
Successful stream opens: 200
Streams reset by server: 0
Actual max achieved: 200
Stream limit enforcement: NOT DETECTED
Analysis: Server accepts far more streams than advertised. This is a potential vulnerability that should be investigated.
7.3. Scenario 3: No Advertised Limit
Advertised max streams: Not specified
Successful stream opens: 200
Streams reset by server: 0
Actual max achieved: 200
Stream limit enforcement: NOT DETECTED
Analysis: Server does not advertise or enforce limits. High risk configuration that requires immediate remediation.
7.4. Scenario 4: Conservative Limit
Advertised max streams: 50
Successful stream opens: 55
Streams reset by server: 5
Actual max achieved: 50
Stream limit enforcement: DETECTED
Analysis: Very conservative limit. Good for security but may impact performance for legitimate high-throughput applications.
8. Monitoring During Testing
8.1. Server Side Monitoring
While running tests, monitor your server for resource utilization and connection metrics.
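For example, on a Linux origin server (assuming iproute2 and a typical nginx deployment, adjust process names to your stack):
# Established connections to the HTTPS listener
watch -n 1 "ss -tn state established '( sport = :443 )' | wc -l"
# CPU and memory of the worker processes
top -b -n 1 -p "$(pgrep -d',' -f 'nginx: worker')"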
You can use both the stream limit tester and the Rapid Reset tester together for comprehensive HTTP/2 security assessment:
# Step 1: Test stream limits
python3 http2_stream_limit_tester.py --host example.com
# Step 2: Test rapid reset with IP spoofing
sudo python3 http2rapidresettester_macos.py \
--host example.com \
--cidr 192.168.1.0/24 \
--packets 1000
# Step 3: Re-test stream limits to verify no degradation
python3 http2_stream_limit_tester.py --host example.com
11. Security Best Practices
11.1. Configuration Guidelines
Set explicit limits: Never rely on default values
Use conservative values: 100-128 streams is the recommended range
Monitor enforcement: Regularly verify that limits are actually being enforced
Document settings: Maintain records of your stream limit configuration
Test after changes: Always test after configuration modifications
11.2. Defense in Depth
Stream limits should be one layer in a comprehensive security strategy:
Stream limits: Prevent excessive concurrent streams per connection
Connection limits: Limit total connections per IP address
Request rate limiting: Throttle requests per second
Resource quotas: Set memory and CPU limits
WAF/DDoS protection: Use cloud-based or on-premise DDoS mitigation
11.3. Regular Testing Schedule
Establish a regular testing schedule:
Weekly: Automated basic stream limit tests
Monthly: Comprehensive security testing including Rapid Reset
After changes: Always test after configuration or infrastructure changes
Quarterly: Full security audit including penetration testing
12. Troubleshooting
12.1. Common Errors
Error: “SSL: CERTIFICATE_VERIFY_FAILED”
This occurs when testing against servers with self-signed certificates. For testing purposes only, you can modify the script to skip certificate verification (not recommended for production testing).
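For the standard library ssl module that the connection setup would use, the change looks like this (testing only):
import ssl

ctx = ssl.create_default_context()
ctx.check_hostname = False          # skip hostname verification
ctx.verify_mode = ssl.CERT_NONE     # skip certificate chain verification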
12.2. Streams Not Being Reset
If streams are not being reset despite exceeding the advertised limit:
Server may not be enforcing limits properly
Configuration may not have been applied (restart required)
Server may be using a different enforcement mechanism
Limits may be set at a different layer (load balancer vs web server)
12.3. High Failure Rate
If many streams fail to open:
Network connectivity issues
Firewall blocking requests
Server resource exhaustion
Rate limiting triggering prematurely
13. Understanding the Attack Surface
When testing your infrastructure, consider all HTTP/2 endpoints:
Web servers: Nginx, Apache, IIS
Load balancers: HAProxy, Envoy, ALB
API gateways: Kong, Tyk, AWS API Gateway
CDN endpoints: CloudFlare, Fastly, Akamai
Reverse proxies: Traefik, Caddy
13.1. Testing Strategy
Test at multiple layers:
# Test CDN edge
python3 http2_stream_limit_tester.py --host cdn.example.com
# Test load balancer directly
python3 http2_stream_limit_tester.py --host lb.example.com
# Test origin server
python3 http2_stream_limit_tester.py --host origin.example.com
14. Conclusion
Testing your HTTP/2 maximum concurrent streams configuration is essential for maintaining a secure and performant web infrastructure. This tool allows you to:
Verify that your server advertises appropriate stream limits
Confirm that advertised limits are actually enforced
Identify misconfigurations before they can be exploited
Tune performance while maintaining security
Regular testing, combined with proper configuration and monitoring, will help protect your infrastructure against HTTP/2-based attacks while maintaining optimal performance for legitimate users.
This guide and testing script are provided for educational and defensive security purposes only. Always obtain proper authorization before testing systems you do not own.
This guide walks you through setting up Memgraph with Claude Desktop on your laptop to analyze relationships between mule accounts in banking systems. By the end of this tutorial, you’ll have a working setup where Claude can query and visualize banking transaction patterns to identify potential mule account networks.
Why Graph Databases for Fraud Detection?
Traditional relational databases store data in tables with rows and columns, which works well for structured, hierarchical data. However, fraud detection requires understanding relationships between entities—and this is where graph databases excel.
In fraud investigation, the connections matter more than the entities themselves:
Follow the money: Tracing funds through multiple accounts requires traversing relationships, not joining tables
Multi-hop queries: Finding patterns like “accounts connected within 3 transactions” is natural in graphs but complex in SQL
Pattern matching: Detecting suspicious structures (like a controller account distributing to multiple mules) is intuitive with graph queries
Real-time analysis: Graph databases can quickly identify new connections as transactions occur
Mule account schemes specifically benefit from graph analysis because they form distinct network patterns:
A central controller account receives large deposits
Funds are rapidly distributed to multiple recruited “mule” accounts
Mules quickly withdraw cash or transfer funds, completing the laundering cycle
These patterns create a recognizable “hub-and-spoke” topology in a graph
In a traditional relational database, finding these patterns requires multiple complex JOINs and recursive queries. In a graph database, you simply ask: “show me accounts connected to this one” or “find all paths between these two accounts.”
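For instance, once the dataset built later in this guide is loaded, that multi-hop question translates to a short Cypher query (a hypothetical example over the Account/TRANSACTION schema the setup script creates):
// Accounts reachable from ACC007 within 3 transaction hops
MATCH path = (src:Account {account_id: 'ACC007'})-[:TRANSACTION*1..3]->(dst:Account)
RETURN dst.account_id AS account,
       size(relationships(path)) AS hops,
       [t IN relationships(path) | t.amount] AS amounts
ORDER BY hops;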
Why This Stack?
We’ve chosen a powerful combination of technologies that work seamlessly together:
Memgraph (Graph Database)
Native graph database built for speed and real-time analytics
Uses Cypher query language (intuitive, SQL-like syntax for graphs)
Perfect for fraud detection where you need to explore relationships quickly
Lightweight and runs easily in Docker on your laptop
Open-source with excellent tooling (Memgraph Lab for visualization)
Claude Desktop (AI Interface)
Natural language interface eliminates the need to learn Cypher query syntax
Ask questions in plain English: “Which accounts received money from ACC006?”
Claude translates your questions into optimized graph queries automatically
Provides explanations and insights alongside query results
Dramatically lowers the barrier to entry for graph analysis
MCP (Model Context Protocol)
Connects Claude directly to Memgraph
Enables Claude to execute queries and retrieve real-time data
Secure, local connection—your data never leaves your machine
Extensible architecture allows adding other tools and databases
Why Not PostgreSQL?
While PostgreSQL is excellent for transactional data storage, graph relationships in SQL require:
Complex recursive CTEs (Common Table Expressions) for multi-hop queries
Multiple JOINs that become exponentially slower as relationships deepen
Manual construction of relationship paths
Limited visualization capabilities for network structures
Memgraph’s native graph model represents accounts and transactions as nodes and edges, making relationship queries natural and performant. For fraud detection where you need to quickly explore “who’s connected to whom,” graph databases are the right tool.
What You’ll Build
By following this guide, you’ll create:
The ability to ask natural language questions and get instant graph insights
A local Memgraph database with 57 accounts and 512 transactions
A realistic mule account network hidden among legitimate transactions
An AI-powered analysis interface through Claude Desktop
2. Prerequisites
Before starting, ensure you have:
macOS laptop
Homebrew package manager (we’ll install if needed)
Claude Desktop app installed
Basic terminal knowledge
3. Automated Setup
Below is a massive script. It started life as several separate scripts, but they have since merged into one large, hazardous blob of bash. It ships under the "it works on my laptop" disclaimer!
cat > ~/setup_memgraph_complete.sh << 'EOF'
#!/bin/bash
# Complete automated setup for Memgraph + Claude Desktop
echo "========================================"
echo "Memgraph + Claude Desktop Setup"
echo "========================================"
echo ""
# Step 1: Install Rancher Desktop
echo "Step 1/7: Installing Rancher Desktop..."
# Check if Docker daemon is already running
DOCKER_RUNNING=false
if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
echo "Container runtime is already running!"
DOCKER_RUNNING=true
fi
if [ "$DOCKER_RUNNING" = false ]; then
# Check if Homebrew is installed
if ! command -v brew &> /dev/null; then
echo "Installing Homebrew first..."
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Add Homebrew to PATH for Apple Silicon Macs
if [[ $(uname -m) == 'arm64' ]]; then
echo 'eval "$(/opt/homebrew/bin/brew shellenv)"' >> ~/.zprofile
eval "$(/opt/homebrew/bin/brew shellenv)"
fi
fi
# Check if Rancher Desktop is installed
RANCHER_INSTALLED=false
if brew list --cask rancher 2>/dev/null | grep -q rancher; then
RANCHER_INSTALLED=true
echo "Rancher Desktop is installed via Homebrew."
fi
# If not installed, install it
if [ "$RANCHER_INSTALLED" = false ]; then
echo "Installing Rancher Desktop..."
brew install --cask rancher
sleep 3
fi
echo "Starting Rancher Desktop..."
# Launch Rancher Desktop
if [ -d "/Applications/Rancher Desktop.app" ]; then
echo "Launching Rancher Desktop from /Applications..."
open "/Applications/Rancher Desktop.app"
sleep 5
else
echo ""
echo "Please launch Rancher Desktop manually:"
echo " 1. Press Cmd+Space"
echo " 2. Type 'Rancher Desktop'"
echo " 3. Press Enter"
echo ""
echo "Waiting for you to launch Rancher Desktop..."
echo "Press Enter once you've started Rancher Desktop"
read
fi
# Add Rancher Desktop to PATH
export PATH="$HOME/.rd/bin:$PATH"
echo "Waiting for container runtime to start (this may take 30-60 seconds)..."
# Wait for docker command to become available
for i in {1..60}; do
if command -v docker &> /dev/null && docker info &> /dev/null 2>&1; then
echo ""
echo "Container runtime is running!"
break
fi
echo -n "."
sleep 3
done
if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
echo ""
echo "Rancher Desktop is taking longer than expected. Please:"
echo "1. Wait for Rancher Desktop to fully initialize"
echo "2. Accept any permissions requests"
echo "3. Once you see 'Kubernetes is running' in Rancher Desktop, press Enter"
read
# Try to add Rancher Desktop to PATH
export PATH="$HOME/.rd/bin:$PATH"
# Check one more time
if ! command -v docker &> /dev/null || ! docker info &> /dev/null 2>&1; then
echo "Container runtime still not responding."
echo "Please ensure Rancher Desktop is fully started and try again."
exit 1
fi
fi
fi
# Ensure docker is in PATH for the rest of the script
export PATH="$HOME/.rd/bin:$PATH"
echo ""
echo "Step 2/7: Installing Memgraph container..."
# Stop and remove existing container if it exists
if docker ps -a 2>/dev/null | grep -q memgraph; then
echo "Removing existing Memgraph container..."
docker stop memgraph 2>/dev/null || true
docker rm memgraph 2>/dev/null || true
fi
docker pull memgraph/memgraph-platform || { echo "Failed to pull Memgraph image"; exit 1; }
docker run -d -p 7687:7687 -p 7444:7444 -p 3000:3000 \
--name memgraph \
-v memgraph_data:/var/lib/memgraph \
memgraph/memgraph-platform || { echo "Failed to start Memgraph container"; exit 1; }
echo "Waiting for Memgraph to be ready..."
sleep 10
echo ""
echo "Step 3/7: Installing Python and Memgraph MCP server..."
# Install Python if not present
if ! command -v python3 &> /dev/null; then
echo "Installing Python..."
brew install python3
fi
# Install uv package manager
if ! command -v uv &> /dev/null; then
echo "Installing uv package manager..."
curl -LsSf https://astral.sh/uv/install.sh | sh
export PATH="$HOME/.local/bin:$PATH"
fi
echo "Memgraph MCP will be configured to run via uv..."
echo ""
echo "Step 4/7: Configuring Claude Desktop..."
CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"
mkdir -p "$CONFIG_DIR"
if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
echo "Backing up existing Claude configuration..."
cp "$CONFIG_FILE" "$CONFIG_FILE.backup.$(date +%s)"
fi
# Get the full path to uv
UV_PATH=$(which uv 2>/dev/null || echo "$HOME/.local/bin/uv")
# Merge memgraph config with existing config
if [ -f "$CONFIG_FILE" ] && [ -s "$CONFIG_FILE" ]; then
echo "Merging memgraph config with existing MCP servers..."
# Use Python to merge JSON (more reliable than jq which may not be installed)
python3 << PYTHON_MERGE
import json
import sys
config_file = "$CONFIG_FILE"
uv_path = "${UV_PATH}"
try:
# Read existing config
with open(config_file, 'r') as f:
config = json.load(f)
# Ensure mcpServers exists
if 'mcpServers' not in config:
config['mcpServers'] = {}
# Add/update memgraph server
config['mcpServers']['memgraph'] = {
"command": uv_path,
"args": [
"run",
"--with",
"mcp-memgraph",
"--python",
"3.13",
"mcp-memgraph"
],
"env": {
"MEMGRAPH_HOST": "localhost",
"MEMGRAPH_PORT": "7687"
}
}
# Write merged config
with open(config_file, 'w') as f:
json.dump(config, f, indent=2)
print("Successfully merged memgraph config")
sys.exit(0)
except Exception as e:
print(f"Error merging config: {e}", file=sys.stderr)
sys.exit(1)
PYTHON_MERGE
if [ $? -ne 0 ]; then
echo "Failed to merge config, creating new one..."
cat > "$CONFIG_FILE" << JSON
{
"mcpServers": {
"memgraph": {
"command": "${UV_PATH}",
"args": [
"run",
"--with",
"mcp-memgraph",
"--python",
"3.13",
"mcp-memgraph"
],
"env": {
"MEMGRAPH_HOST": "localhost",
"MEMGRAPH_PORT": "7687"
}
}
}
}
JSON
fi
else
echo "Creating new Claude Desktop configuration..."
cat > "$CONFIG_FILE" << JSON
{
"mcpServers": {
"memgraph": {
"command": "${UV_PATH}",
"args": [
"run",
"--with",
"mcp-memgraph",
"--python",
"3.13",
"mcp-memgraph"
],
"env": {
"MEMGRAPH_HOST": "localhost",
"MEMGRAPH_PORT": "7687"
}
}
}
}
JSON
fi
echo "Claude Desktop configured!"
echo ""
echo "Step 5/7: Setting up mgconsole..."
echo "mgconsole will be used via Docker (included in memgraph/memgraph-platform)"
echo ""
echo "Step 6/7: Setting up database schema..."
sleep 5 # Give Memgraph extra time to be ready
echo "Clearing existing data..."
echo "MATCH (n) DETACH DELETE n;" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
echo "Creating indexes..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE INDEX ON :Account(account_id);
CREATE INDEX ON :Account(account_type);
CREATE INDEX ON :Person(person_id);
CYPHER
echo ""
echo "Step 7/7: Populating test data..."
echo "Loading core mule account data..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
CREATE (p1:Person {person_id: 'P001', name: 'John Smith', age: 45, risk_score: 'low'})
CREATE (a1:Account {account_id: 'ACC001', account_type: 'checking', balance: 15000, opened_date: '2020-01-15', status: 'active'})
CREATE (p1)-[:OWNS {since: '2020-01-15'}]->(a1)
CREATE (p2:Person {person_id: 'P002', name: 'Sarah Johnson', age: 38, risk_score: 'low'})
CREATE (a2:Account {account_id: 'ACC002', account_type: 'savings', balance: 25000, opened_date: '2019-06-10', status: 'active'})
CREATE (p2)-[:OWNS {since: '2019-06-10'}]->(a2)
CREATE (p3:Person {person_id: 'P003', name: 'Michael Brown', age: 22, risk_score: 'high'})
CREATE (a3:Account {account_id: 'ACC003', account_type: 'checking', balance: 500, opened_date: '2024-08-01', status: 'active'})
CREATE (p3)-[:OWNS {since: '2024-08-01'}]->(a3)
CREATE (p4:Person {person_id: 'P004', name: 'Lisa Chen', age: 19, risk_score: 'high'})
CREATE (a4:Account {account_id: 'ACC004', account_type: 'checking', balance: 300, opened_date: '2024-08-05', status: 'active'})
CREATE (p4)-[:OWNS {since: '2024-08-05'}]->(a4)
CREATE (p5:Person {person_id: 'P005', name: 'David Martinez', age: 21, risk_score: 'high'})
CREATE (a5:Account {account_id: 'ACC005', account_type: 'checking', balance: 450, opened_date: '2024-08-03', status: 'active'})
CREATE (p5)-[:OWNS {since: '2024-08-03'}]->(a5)
CREATE (p6:Person {person_id: 'P006', name: 'Robert Wilson', age: 35, risk_score: 'critical'})
CREATE (a6:Account {account_id: 'ACC006', account_type: 'business', balance: 2000, opened_date: '2024-07-15', status: 'active'})
CREATE (p6)-[:OWNS {since: '2024-07-15'}]->(a6)
CREATE (p7:Person {person_id: 'P007', name: 'Unknown Entity', risk_score: 'critical'})
CREATE (a7:Account {account_id: 'ACC007', account_type: 'business', balance: 150000, opened_date: '2024-06-01', status: 'active'})
CREATE (p7)-[:OWNS {since: '2024-06-01'}]->(a7)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN001', amount: 50000, timestamp: '2024-09-01T10:15:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN002', amount: 9500, timestamp: '2024-09-01T14:30:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN003', amount: 9500, timestamp: '2024-09-01T14:32:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN004', amount: 9500, timestamp: '2024-09-01T14:35:00', type: 'transfer', flagged: true}]->(a5)
CREATE (a3)-[:TRANSACTION {transaction_id: 'TXN005', amount: 9000, timestamp: '2024-09-02T09:00:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a4)-[:TRANSACTION {transaction_id: 'TXN006', amount: 9000, timestamp: '2024-09-02T09:15:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a5)-[:TRANSACTION {transaction_id: 'TXN007', amount: 9000, timestamp: '2024-09-02T09:30:00', type: 'cash_withdrawal', flagged: true}]->(a6)
CREATE (a7)-[:TRANSACTION {transaction_id: 'TXN008', amount: 45000, timestamp: '2024-09-15T11:20:00', type: 'wire_transfer', flagged: true}]->(a6)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN009', amount: 9800, timestamp: '2024-09-15T15:00:00', type: 'transfer', flagged: true}]->(a3)
CREATE (a6)-[:TRANSACTION {transaction_id: 'TXN010', amount: 9800, timestamp: '2024-09-15T15:05:00', type: 'transfer', flagged: true}]->(a4)
CREATE (a1)-[:TRANSACTION {transaction_id: 'TXN011', amount: 150, timestamp: '2024-09-10T12:00:00', type: 'debit_card', flagged: false}]->(a2)
CREATE (a2)-[:TRANSACTION {transaction_id: 'TXN012', amount: 1000, timestamp: '2024-09-12T10:00:00', type: 'transfer', flagged: false}]->(a1);
CYPHER
echo "Loading noise data (50 accounts, 500 transactions)..."
cat <<'CYPHER' | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687
UNWIND range(1, 50) AS i
WITH i,
['Alice', 'Bob', 'Carol', 'David', 'Emma', 'Frank', 'Grace', 'Henry', 'Iris', 'Jack',
'Karen', 'Leo', 'Mary', 'Nathan', 'Olivia', 'Peter', 'Quinn', 'Rachel', 'Steve', 'Tina',
'Uma', 'Victor', 'Wendy', 'Xavier', 'Yara', 'Zack', 'Amy', 'Ben', 'Chloe', 'Daniel',
'Eva', 'Fred', 'Gina', 'Hugo', 'Ivy', 'James', 'Kate', 'Luke', 'Mia', 'Noah',
'Opal', 'Paul', 'Rosa', 'Sam', 'Tara', 'Umar', 'Vera', 'Will', 'Xena', 'Yuki'] AS firstNames,
['Anderson', 'Baker', 'Clark', 'Davis', 'Evans', 'Foster', 'Garcia', 'Harris', 'Irwin', 'Jones',
'King', 'Lopez', 'Miller', 'Nelson', 'Owens', 'Parker', 'Quinn', 'Reed', 'Scott', 'Taylor',
'Underwood', 'Vargas', 'White', 'Young', 'Zhao', 'Adams', 'Brooks', 'Collins', 'Duncan', 'Ellis'] AS lastNames,
['checking', 'savings', 'checking', 'savings', 'checking'] AS accountTypes,
['low', 'low', 'low', 'medium', 'low'] AS riskScores,
['2018-03-15', '2018-07-22', '2019-01-10', '2019-05-18', '2019-09-30', '2020-02-14', '2020-06-25', '2020-11-08', '2021-04-17', '2021-08-29', '2022-01-20', '2022-05-12', '2022-10-03', '2023-02-28', '2023-07-15'] AS dates
WITH i,
firstNames[toInteger(rand() * size(firstNames))] + ' ' + lastNames[toInteger(rand() * size(lastNames))] AS fullName,
accountTypes[toInteger(rand() * size(accountTypes))] AS accType,
riskScores[toInteger(rand() * size(riskScores))] AS risk,
toInteger(rand() * 40 + 25) AS age,
toInteger(rand() * 80000 + 1000) AS balance,
dates[toInteger(rand() * size(dates))] AS openDate
CREATE (p:Person {person_id: 'NOISE_P' + toString(i), name: fullName, age: age, risk_score: risk})
CREATE (a:Account {account_id: 'NOISE_ACC' + toString(i), account_type: accType, balance: balance, opened_date: openDate, status: 'active'})
CREATE (p)-[:OWNS {since: openDate}]->(a);
UNWIND range(1, 500) AS i
WITH i,
toInteger(rand() * 50 + 1) AS fromIdx,
toInteger(rand() * 50 + 1) AS toIdx,
['transfer', 'debit_card', 'check', 'atm_withdrawal', 'direct_deposit', 'wire_transfer', 'mobile_payment'] AS txnTypes,
['2024-01-15', '2024-02-20', '2024-03-10', '2024-04-05', '2024-05-18', '2024-06-22', '2024-07-14', '2024-08-09', '2024-09-25', '2024-10-30'] AS dates
WHERE fromIdx <> toIdx
WITH i, fromIdx, toIdx, txnTypes, dates,
txnTypes[toInteger(rand() * size(txnTypes))] AS txnType,
toInteger(rand() * 5000 + 10) AS amount,
(rand() < 0.05) AS shouldFlag,
dates[toInteger(rand() * size(dates))] AS txnDate
MATCH (from:Account {account_id: 'NOISE_ACC' + toString(fromIdx)})
MATCH (to:Account {account_id: 'NOISE_ACC' + toString(toIdx)})
CREATE (from)-[:TRANSACTION {
transaction_id: 'NOISE_TXN' + toString(i),
amount: amount,
timestamp: txnDate + 'T' + toString(toInteger(rand() * 24)) + ':' + toString(toInteger(rand() * 60)) + ':00',
type: txnType,
flagged: shouldFlag
}]->(to);
CYPHER
echo ""
echo "========================================"
echo "Setup Complete!"
echo "========================================"
echo ""
echo "Next steps:"
echo "1. Restart Claude Desktop (Quit and reopen)"
echo "2. Open Memgraph Lab at http://localhost:3000"
echo "3. Start asking Claude questions about the mule account data!"
echo ""
echo "Example query: 'Show me all accounts owned by people with high or critical risk scores in Memgraph'"
echo ""
EOF
chmod +x ~/setup_memgraph_complete.sh
~/setup_memgraph_complete.sh
The script will:
Install Rancher Desktop (if not already installed)
Install Homebrew (if needed)
Pull and start Memgraph container
Install Python, the uv package manager, and the Memgraph MCP server
Configure Claude Desktop automatically
Set up mgconsole (bundled in the Memgraph Docker image)
Set up database schema with indexes
Populate with mule account data and 500+ noise transactions
After the script completes, restart Claude Desktop (quit and reopen) for the MCP configuration to take effect.
4. Verifying the Setup
Verify the setup by accessing Memgraph Lab at http://localhost:3000 or using mgconsole via Docker:
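For example, mirroring the pattern the setup script uses (with the test data loaded, this should report 114 nodes: 57 persons and 57 accounts):
echo "MATCH (n) RETURN count(n) AS nodes;" | docker exec -i memgraph mgconsole --host 127.0.0.1 --port 7687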
Now that everything is set up, you can interact with Claude Desktop to analyze the mule account network. Here are example queries you can try:
Example 1: Find All High-Risk Accounts
Ask Claude:
Show me all accounts owned by people with high or critical risk scores in Memgraph
Claude will query Memgraph and return results showing the suspicious accounts (ACC003, ACC004, ACC005, ACC006, ACC007), filtering out the 50+ noise accounts.
Example 2: Identify Transaction Patterns
Ask Claude:
Find all accounts that received money from ACC006 within a 24-hour period. Show the transaction amounts and timestamps.
Claude will identify the three mule accounts (ACC003, ACC004, ACC005) that received similar amounts in quick succession.
Example 3: Trace Money Flow
Ask Claude:
Trace the flow of money from ACC007 through the network. Show me the complete transaction path.
Claude will visualize the path: ACC007 -> ACC006 -> [ACC003, ACC004, ACC005], revealing the laundering pattern.
Example 4: Calculate Total Funds
Ask Claude:
Calculate the total amount of money that flowed through ACC006 in September 2024
Claude will aggregate all incoming and outgoing transactions for the controller account.
Example 5: Find Rapid Withdrawal Patterns
Ask Claude:
Find accounts where money was withdrawn within 48 hours of being deposited. What are the amounts and account holders?
This reveals the classic mule account behavior of quick cash extraction.
Example 6: Network Analysis
Ask Claude:
Show me all accounts that have transaction relationships with ACC006. Create a visualization of this network.
Claude will generate a graph showing the controller account at the center with connections to both the source and mule accounts.
Example 7: Risk Assessment
Ask Claude:
Which accounts have received flagged transactions totaling more than $15,000? List them by total amount.
This helps identify which mule accounts have processed the most illicit funds.
6. Understanding the Graph Visualization
When Claude displays graph results, you’ll see:
Nodes: Circles representing accounts and persons
Edges: Lines representing transactions or ownership relationships
Properties: Attributes like amounts, timestamps, and risk scores
The graph structure makes it easy to spot:
Central nodes (controllers) with many connections
Similar transaction patterns across multiple accounts
Timing correlations between related transactions
Isolation of legitimate vs. suspicious account clusters
7. Advanced Analysis Queries
Once you’re comfortable with basic queries, try these advanced analyses:
Community Detection
Ask Claude:
Find groups of accounts that frequently transact with each other. Are there separate communities in the network?
Temporal Analysis
Ask Claude:
Show me the timeline of transactions for accounts owned by people under 25 years old. Are there any patterns?
Shortest Path Analysis
Ask Claude:
What's the shortest path of transactions between ACC007 and ACC003? How many hops does it take?
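Under the hood, a query along these lines answers it (one hypothetical form Claude might generate against the schema above):
MATCH p = (a:Account {account_id: 'ACC007'})-[:TRANSACTION*1..5]->(b:Account {account_id: 'ACC003'})
RETURN [n IN nodes(p) | n.account_id] AS route, size(relationships(p)) AS hops
ORDER BY hops
LIMIT 1;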
8. Cleaning Up
When you’re done experimenting, you can stop and remove the Memgraph container:
docker stop memgraph
docker rm memgraph
To remove the data volume completely:
docker volume rm memgraph_data
To restart later with fresh data, just run the setup script again.
9. Troubleshooting
Container Runtime Not Running
If you get errors about the Docker daemon not running, launch Rancher Desktop (or Docker Desktop, if that is what you use):
open -a "Rancher Desktop"
Wait for the container runtime to start, then verify:
docker info
Memgraph Container Won’t Start
Check if ports are already in use:
lsof -i :7687
lsof -i :3000
Kill any conflicting processes or change the port mappings in the docker run command.
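If the conflicting services need to keep running, remap the host-side ports instead; 7688 and 3001 below are arbitrary free ports, and MEMGRAPH_PORT in the Claude Desktop configuration must then be updated to match:
docker run -d -p 7688:7687 -p 7444:7444 -p 3001:3000 \
  --name memgraph \
  -v memgraph_data:/var/lib/memgraph \
  memgraph/memgraph-platform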
Create additional graph algorithms for anomaly detection
Connect to real banking data sources (with proper security)
Build automated alerting for suspicious patterns
Expand the schema to include IP addresses, devices, and locations
The combination of Memgraph’s graph database capabilities and Claude’s natural language interface makes it easy to explore and analyze complex relationship data without writing complex Cypher queries manually.
11. Conclusion
You now have a complete environment for analyzing banking mule accounts using Memgraph and Claude Desktop. The graph database structure naturally represents the relationships between accounts, making it ideal for fraud detection. Claude’s integration through MCP allows you to query and visualize this data using natural language, making sophisticated analysis accessible without deep technical knowledge.
The test dataset demonstrates typical mule account patterns: rapid movement of funds through multiple accounts, young account holders, recently opened accounts, and structured amounts designed to avoid reporting thresholds. These patterns are much easier to spot in a graph database than in traditional relational databases.
Experiment with different queries and explore how graph thinking can reveal hidden patterns in connected data.
NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available for security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes an even more powerful tool, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.
In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.
Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.
Prerequisites
macOS, Linux, or Windows with WSL
Basic understanding of networking concepts
Permission to scan target systems
Claude Desktop installed
Part 1: Installation and Setup
Step 1: Install NMAP
On macOS:
# Using Homebrew
brew install nmap
# Verify installation
nmap --version
On Linux (Ubuntu/Debian):
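The usual route is the distribution package manager:
sudo apt update && sudo apt install -y nmap
nmap --version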
Step 2: Install Node.js (Required for MCP Server)
The NMAP MCP server requires Node.js to run.
macOS:
brew install node
node --version
npm --version
Step 3: Install the NMAP MCP Server
The most popular NMAP MCP server is available on GitHub. We'll clone the repository and build it locally:
cd ~/
rm -rf nmap-mcp-server
git clone https://github.com/PhialsBasement/nmap-mcp-server.git
cd nmap-mcp-server
npm install
npm run build
Step 4: Configure Claude Desktop
Edit the Claude Desktop configuration file to add the NMAP MCP server.
The file lives at ~/Library/Application Support/Claude/claude_desktop_config.json. Back up the existing file first, then add an entry for the nmap server under mcpServers and save.
Step 5: Restart Claude Desktop
Close and reopen Claude Desktop. You should see the NMAP MCP server connected in the bottom-left corner.
Part 2: Understanding NMAP MCP Capabilities
Once configured, Claude can execute NMAP scans through the MCP server. The server typically provides:
Host discovery scans
Port scanning (TCP/UDP)
Service version detection
OS detection
Script scanning (NSE – NMAP Scripting Engine)
Output parsing and interpretation
Part 3: 20 Most Common Vulnerability Checks
For these examples, we’ll use a hypothetical target domain: example-target.com (replace with your authorized target).
1. Basic Host Discovery and Open Ports
Prompt:
Scan example-target.com to discover if the host is up and identify all open ports (1-1000). Use a TCP SYN scan for speed.
What this does: Performs a fast SYN scan on the first 1000 ports to quickly identify open services.
Expected NMAP command:
nmap -sS -p 1-1000 example-target.com
2. Comprehensive Port Scan (All 65535 Ports)
Prompt:
Perform a comprehensive scan of all 65535 TCP ports on example-target.com to identify any services running on non-standard ports.
What this does: Scans every possible TCP port – time-consuming but thorough.
Expected NMAP command:
nmap -p- example-target.com
3. Service Version Detection
Prompt:
Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.
What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.
Expected NMAP command:
nmap -sV example-target.com
4. Operating System Detection
Prompt:
Detect the operating system running on example-target.com using TCP/IP stack fingerprinting. Include OS detection confidence levels.
What this does: Analyzes network responses to guess the target OS.
Expected NMAP command:
nmap -O example-target.com
5. Aggressive Scan (OS + Version + Scripts + Traceroute)
Prompt:
Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.
What this does: Combines multiple detection techniques for maximum information.
Expected NMAP command:
nmap -A example-target.com
6. Vulnerability Scanning with NSE Scripts
Prompt:
Scan example-target.com using NMAP's vulnerability detection scripts to check for known CVEs and security issues in running services.
What this does: Uses NSE scripts from the ‘vuln’ category to detect known vulnerabilities.
Expected NMAP command:
nmap --script vuln example-target.com
7. SSL/TLS Security Analysis
Prompt:
Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.
What this does: Comprehensive SSL/TLS security assessment.
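Expected NMAP command (one likely variant using the stock SSL NSE scripts; Claude may choose a different combination):
nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle example-target.com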
Part 4: Deep Dive Exercises
Deep Dive Exercise 1: Complete Web Application Security Assessment
Scenario: You need to perform a comprehensive security assessment of a web application running at webapp.example-target.com.
Claude Prompt:
I need a complete security assessment of webapp.example-target.com. Please:
1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice
Use timing template T3 (normal) to avoid overwhelming the target.
What Claude will do:
Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:
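A plausible sequence (a sketch; the exact flags and scripts depend on what the MCP server exposes):
# Discovery and service identification
nmap -sS -sV -p- -T3 webapp.example-target.com
# SSL/TLS configuration and certificate checks
nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle webapp.example-target.com
# HTTP security headers, allowed methods, and directory enumeration
nmap -p 80,443 --script http-security-headers,http-methods,http-enum -T3 webapp.example-target.com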
Learning Outcomes:
How to interpret multiple scan results holistically
Prioritization of security findings by severity
Claude’s ability to correlate findings across multiple scans
Deep Dive Exercise 2: Network Perimeter Reconnaissance
Scenario: You’re assessing the security perimeter of an organization with the domain company.example-target.com and a known IP range 198.51.100.0/24.
Claude Prompt:
Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:
1. Discover all live hosts in the IP range
2. For each live host, identify:
- Operating system
- All open ports (full 65535 range)
- Service versions
- Potential vulnerabilities
3. Map the network topology and identify:
- Firewalls and filtering
- DMZ hosts vs internal hosts
- Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
- Open DNS resolvers
- Open mail relays
- Unauthenticated database access
- Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary
Use slow timing (T2) to minimize detection risk and avoid false positives.
What Claude will do:
# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24
# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24
# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24
# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24
# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24
# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]
Learning Outcomes:
Large-scale network scanning strategies
How to handle and analyze results from multiple hosts
Network segmentation analysis
Risk assessment across an entire network perimeter
Understanding firewall and filtering detection
Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting
Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.
Claude Prompt:
I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:
1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability
Run this aggressively (-T4) as we have permission for intensive testing.
What Claude will do (example commands follow the list below):
Cross-reference detected versions with CVE databases
Explain potential exploit chains
Provide PoC (Proof of Concept) suggestions
Recommend remediation priorities
Suggest additional manual testing techniques
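A few commands Claude might start from (a sketch honoring the -T4 requirement; the exact script selection will vary):
nmap -A -T4 -p- secure-server.example-target.com
nmap -sV --version-intensity 9 -T4 secure-server.example-target.com
nmap --script vuln,auth,default -T4 secure-server.example-target.com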
Learning Outcomes:
Advanced NSE scripting capabilities
How to correlate vulnerabilities for exploit chains
Understanding vulnerability severity and exploitability
Version-specific vulnerability research
Claude’s ability to provide context from its training data about specific CVEs
Part 5: Wide-Ranging Reconnaissance Exercises
Exercise 5.1: Subdomain Discovery and Mapping
Prompt:
Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings
Also check for common subdomain patterns like api, dev, staging, admin, etc.
What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.
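One building block Claude can use for the DNS side is the stock dns-brute NSE script (thorough subdomain discovery usually also needs dedicated tooling such as certificate transparency searches):
nmap --script dns-brute example-target.com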
Exercise 5.2: API Security Testing
Prompt:
I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable
Exercise 5.3: Cloud Infrastructure Detection
Prompt:
Scan example-target.com to identify if they're using cloud infrastructure (AWS, Azure, GCP). Look for:
- Cloud-specific IP ranges
- S3 buckets or blob storage
- Cloud-specific services (CloudFront, Azure CDN, etc.)
- Misconfigured cloud resources
- Storage bucket permissions
- Cloud metadata services exposure
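A sketch of how these checks often begin (TLS certificates, HTTP headers, and DNS records frequently name the provider; the dig lookup assumes standard DNS tooling is installed):
# Certificates and headers often reveal the CDN or cloud provider
nmap -p 80,443 --script ssl-cert,http-headers example-target.com
# CNAME records pointing at cloudfront.net, azureedge.net, etc. indicate cloud hosting
dig +short CNAME example-target.com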
Exercise 5.4: IoT and Embedded Device Discovery
Prompt:
Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)
Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces
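A starting point for this kind of sweep (a sketch; the port list covers common camera, printer, and NAS interfaces and is not exhaustive):
# Fingerprint operating systems and service versions across the subnet
nmap -sV -O -T3 192.168.1.0/24
# Probe ports typically exposed by embedded devices and grab page titles
nmap -p 80,443,554,8080,9100,5000 --script http-title 192.168.1.0/24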
Exercise 5.5: Checking for Known Vulnerabilities and Old Software
Prompt:
Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:
1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
- CVSS score
- Exploit availability
- Exposure (internet-facing vs internal)
5. Check for:
- Outdated TLS/SSL versions
- Deprecated cryptographic algorithms
- Unpatched web frameworks
- Old CMS versions (WordPress, Joomla, Drupal)
- Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations
What Claude will provide:
A table of detected software with current versions and latest versions
CVE listings with severity scores
Specific upgrade recommendations
Risk assessment for each finding
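An underlying scan Claude is likely to lean on (a sketch; EOL status and CVE lookups come from Claude’s analysis of the version output rather than from NMAP itself):
nmap -sV --script vuln,ssl-enum-ciphers example-target.com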
Part 6: Advanced Tips and Techniques
6.1 Optimizing Scan Performance
Timing Templates:
-T0 (Paranoid): Extremely slow, for IDS evasion
-T1 (Sneaky): Slow, minimal detection risk
-T2 (Polite): Slower, less bandwidth intensive
-T3 (Normal): Default, balanced approach
-T4 (Aggressive): Faster, assumes good network
-T5 (Insane): Extremely fast, may miss results
Prompt:
Explain when to use each NMAP timing template and demonstrate the difference by scanning example-target.com with T2 and T4 timing.
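The two underlying scans are identical apart from the timing template, for example:
nmap -sV -T2 example-target.com
nmap -sV -T4 example-target.com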
6.2 Evading Firewalls and IDS
Prompt:
Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering
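One way these requests map onto NMAP flags (a sketch; idle scan needs a suitable zombie host and is omitted, and MAC spoofing only applies on the local segment):
# Fragmented SYN scan with random decoys, randomized order, and DNS source port
nmap -sS -f -D RND:10 --randomize-hosts -g 53 -p 1-1000 example-target.com
# On a local network, a random spoofed MAC address can be added
nmap -sS --spoof-mac 0 192.168.1.0/24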
6.3 Custom NSE Script Development
Prompt:
Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.
Claude can help you write Lua scripts for NMAP’s scripting engine!
6.4 Output Parsing and Reporting
Prompt:
Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.
Expected command:
nmap -oA scan_results example-target.com
Claude can then help you parse the XML file programmatically.
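For instance, a quick extraction of the open ports from the XML output (assumes libxml2’s xmllint is available; the severity filtering is then applied to the matching script results):
xmllint --xpath '//port[state/@state="open"]/@portid' scan_results.xml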
Part 7: Responsible Disclosure and Next Steps
After Finding Vulnerabilities
Document everything: Keep detailed records of your findings
Prioritize by risk: Use CVSS scores and business impact
Responsible disclosure: Follow the organization’s security policy
Remediation tracking: Help create an action plan
Verify fixes: Re-test after patches are applied
Using Claude for Post-Scan Analysis
Prompt:
I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output].
Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team
Claude excels at translating technical scan results into actionable business intelligence.
Part 8: Continuous Monitoring with NMAP and Claude
Set up regular scanning routines and use Claude to track changes:
Prompt:
Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected
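A minimal sketch of that routine using ndiff, which ships with NMAP (paths and schedule are examples only):
# One-time baseline
nmap -sV -oX /var/scans/baseline.xml example-target.com
# crontab entry: re-scan every Monday at 03:00 and log any differences
0 3 * * 1 nmap -sV -oX /var/scans/current.xml example-target.com && ndiff /var/scans/baseline.xml /var/scans/current.xml >> /var/scans/changes.log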
Conclusion
Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:
Express complex scanning requirements in natural language
Get intelligent interpretation of scan results
Receive contextual security advice
Automate repetitive reconnaissance tasks
Learn security concepts through interactive exploration
Key Takeaways:
Always get permission before scanning any network or system
Start with gentle scans and progressively get more aggressive
Use timing controls to avoid overwhelming targets or triggering alarms
Correlate multiple scans for a complete security picture
Leverage Claude’s knowledge to interpret results and suggest next steps
Document everything for compliance and knowledge sharing
Keep NMAP updated to benefit from the latest scripts and capabilities
The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.
About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.
Modern sites often block plain curl. Using a real browser engine (Chromium via Playwright) gives you true browser behavior: real TLS/HTTP2 stack, cookies, redirects, and JavaScript execution if needed. This post mirrors the functionality of the original browser_curl.sh wrapper but implemented with Playwright. It also includes an optional Selenium mini-variant at the end.
What this tool does
Sends realistic browser headers (Chrome-like)
Uses Chromium’s real network stack (HTTP/2, compression)
Manages cookies (persist to a file)
Follows redirects by default
Supports JSON and form POSTs
Async mode that returns immediately
--count N to dispatch N async requests for quick load tests
Note: Advanced bot defenses (CAPTCHAs, JS/ML challenges, strict TLS/HTTP2 fingerprinting) may still require full page automation and real user-like behavior. Playwright can do that too by driving real pages.
Setup
Run these once to install Playwright and Chromium:
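A typical one-time install (a sketch; run it in the directory that will hold the browser_playwright.mjs wrapper used below):
npm init -y
npm install playwright
npx playwright install chromium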
Batch URL scraping (uses the browser_playwright.mjs wrapper):
cat > pw_scrape.sh << 'EOF'
#!/usr/bin/env bash
URLS=(
"https://example.com/"
"https://example.com/"
"https://example.com/"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
node browser_playwright.mjs -o "$(echo "$url" | sed 's#[/:]#_#g').html" "$url"
sleep 2
done
EOF
chmod +x pw_scrape.sh
./pw_scrape.sh
Health check monitoring:
cat > pw_health.sh << 'EOF'
#!/usr/bin/env bash
ENDPOINT="${1:-https://httpbin.org/status/200}"
while true; do
if node browser_playwright.mjs "$ENDPOINT" >/dev/null 2>&1; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done
EOF
chmod +x pw_health.sh
./pw_health.sh
Troubleshooting
Hanging or quoting issues: ensure your shell quoting is balanced. Prefer simple commands without complex inline quoting.
Verbose mode too noisy: omit -v in production.
Cookie file format: the script writes Playwright storageState JSON. It’s safe to keep or delete.
403 errors: site uses stronger protections. Drive a real page (Playwright page.goto) and interact, or solve CAPTCHAs where required.
Performance notes
Dispatch time depends on process spawn and Playwright startup. For higher throughput, consider reusing the same Node process to issue many requests (modify the script to loop internally) or use k6/Locust/Artillery for large-scale load testing.
Limitations
This CLI uses Playwright’s HTTP client bound to a Chromium context. It is much closer to real browsers than curl, but some advanced fingerprinting still detects automation.
WebSocket flows, MFA, or complex JS challenges generally require full page automation (which Playwright supports).
When to use what
Use this Playwright CLI when you need realistic browser behavior, cookies, and straightforward HTTP requests with quick async dispatch.
Use full Playwright page automation for dynamic content, complex logins, CAPTCHAs, and JS-heavy sites.
If you prefer Selenium, here’s a minimal GET/headers/redirect/cookie-capable script. Note: issuing cross-origin POST bodies is more ergonomic with Playwright’s request client; Selenium focuses on page automation.
You now have a Playwright-powered CLI that mirrors the original curl-wrapper’s ergonomics but uses a real browser engine, plus a minimal Selenium alternative. Use the CLI for realistic headers, cookies, redirects, JSON/form POSTs, and async dispatch with --count. For tougher sites, scale up to full page automation with Playwright.
Modern websites deploy bot defenses that can block plain curl or naive scripts. In many cases, adding the right browser-like headers, HTTP/2, cookie persistence, and compression gets you past basic filters without needing a full browser.
This post walks through a small shell utility, browser_curl.sh, that wraps curl with realistic browser behavior. It also supports “fire-and-forget” async requests and a --count flag to dispatch many requests at once for quick load tests.
What this script does
Sends browser-like headers (Chrome on macOS)
Uses HTTP/2 and compression
Manages cookies automatically (cookie jar)
Follows redirects by default
Supports JSON and form POSTs
Async mode that returns immediately
--count N to dispatch N async requests in one command
Note: This approach won’t solve advanced bot defenses that require JavaScript execution (e.g., Cloudflare Turnstile/CAPTCHAs or TLS/HTTP2 fingerprinting); for that, use a real browser automation tool like Playwright or Selenium.
The complete script
Save this as browser_curl.sh and make it executable in one command:
cat > browser_curl.sh << 'EOF' && chmod +x browser_curl.sh
#!/bin/bash
# browser_curl.sh - Advanced curl wrapper that mimics browser behavior
# Designed to bypass Cloudflare and other bot protection
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default values
METHOD="GET"
ASYNC=false
COUNT=1
FOLLOW_REDIRECTS=true
SHOW_HEADERS=false
OUTPUT_FILE=""
TIMEOUT=30
DATA=""
CONTENT_TYPE=""
COOKIE_FILE="/tmp/browser_curl_cookies_$$.txt"
VERBOSE=false
# Browser fingerprint (Chrome on macOS)
USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
usage() {
cat << EOH
Usage: $(basename "$0") [OPTIONS] URL
Advanced curl wrapper that mimics browser behavior to bypass bot protection.
OPTIONS:
-X, --method METHOD HTTP method (GET, POST, PUT, DELETE, etc.) [default: GET]
-d, --data DATA POST/PUT data
-H, --header HEADER Add custom header (can be used multiple times)
-o, --output FILE Write output to file
-c, --cookie FILE Use custom cookie file [default: temp file]
-A, --user-agent UA Custom user agent [default: Chrome on macOS]
-t, --timeout SECONDS Request timeout [default: 30]
--async Run request asynchronously in background
--count N Number of async requests to fire [default: 1, requires --async]
--no-redirect Don't follow redirects
--show-headers Show response headers
--json Send data as JSON (sets Content-Type)
--form Send data as form-urlencoded
-v, --verbose Verbose output
-h, --help Show this help message
EXAMPLES:
# Simple GET request
$(basename "$0") https://example.com
# Async GET request
$(basename "$0") --async https://example.com
# POST with JSON data
$(basename "$0") -X POST --json -d '{"username":"test"}' https://api.example.com/login
# POST with form data
$(basename "$0") -X POST --form -d "username=test&password=secret" https://example.com/login
# Multiple async requests (using loop)
for i in {1..10}; do
$(basename "$0") --async https://example.com/api/endpoint
done
# Multiple async requests (using --count)
$(basename "$0") --async --count 10 https://example.com/api/endpoint
EOH
exit 0
}
# Parse arguments
EXTRA_HEADERS=()
URL=""
while [[ $# -gt 0 ]]; do
case $1 in
-X|--method)
METHOD="$2"
shift 2
;;
-d|--data)
DATA="$2"
shift 2
;;
-H|--header)
EXTRA_HEADERS+=("$2")
shift 2
;;
-o|--output)
OUTPUT_FILE="$2"
shift 2
;;
-c|--cookie)
COOKIE_FILE="$2"
shift 2
;;
-A|--user-agent)
USER_AGENT="$2"
shift 2
;;
-t|--timeout)
TIMEOUT="$2"
shift 2
;;
--async)
ASYNC=true
shift
;;
--count)
COUNT="$2"
shift 2
;;
--no-redirect)
FOLLOW_REDIRECTS=false
shift
;;
--show-headers)
SHOW_HEADERS=true
shift
;;
--json)
CONTENT_TYPE="application/json"
shift
;;
--form)
CONTENT_TYPE="application/x-www-form-urlencoded"
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-h|--help)
usage
;;
*)
if [[ -z "$URL" ]]; then
URL="$1"
else
echo -e "${RED}Error: Unknown argument '$1'${NC}" >&2
exit 1
fi
shift
;;
esac
done
# Validate URL
if [[ -z "$URL" ]]; then
echo -e "${RED}Error: URL is required${NC}" >&2
usage
fi
# Validate count (check it is numeric first, then that --async is set when count > 1)
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [[ "$COUNT" -lt 1 ]]; then
echo -e "${RED}Error: --count must be a positive integer${NC}" >&2
exit 1
fi
if [[ "$COUNT" -gt 1 ]] && [[ "$ASYNC" == false ]]; then
echo -e "${RED}Error: --count requires --async${NC}" >&2
exit 1
fi
# Execute curl
execute_curl() {
# Build curl arguments as array instead of string
local -a curl_args=()
# Basic options
curl_args+=("--compressed")
curl_args+=("--max-time" "$TIMEOUT")
curl_args+=("--connect-timeout" "10")
curl_args+=("--http2")
# Cookies (create the file if missing, without truncating previously saved cookies)
[[ -f "$COOKIE_FILE" ]] || : > "$COOKIE_FILE" 2>/dev/null || true
curl_args+=("--cookie" "$COOKIE_FILE")
curl_args+=("--cookie-jar" "$COOKIE_FILE")
# Follow redirects
if [[ "$FOLLOW_REDIRECTS" == true ]]; then
curl_args+=("--location")
fi
# Show headers
if [[ "$SHOW_HEADERS" == true ]]; then
curl_args+=("--include")
fi
# Output file
if [[ -n "$OUTPUT_FILE" ]]; then
curl_args+=("--output" "$OUTPUT_FILE")
fi
# Verbose
if [[ "$VERBOSE" == true ]]; then
curl_args+=("--verbose")
else
curl_args+=("--silent" "--show-error")
fi
# Method
curl_args+=("--request" "$METHOD")
# Browser-like headers
curl_args+=("--header" "User-Agent: $USER_AGENT")
curl_args+=("--header" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
curl_args+=("--header" "Accept-Language: en-US,en;q=0.9")
curl_args+=("--header" "Accept-Encoding: gzip, deflate, br")
curl_args+=("--header" "Connection: keep-alive")
curl_args+=("--header" "Upgrade-Insecure-Requests: 1")
curl_args+=("--header" "Sec-Fetch-Dest: document")
curl_args+=("--header" "Sec-Fetch-Mode: navigate")
curl_args+=("--header" "Sec-Fetch-Site: none")
curl_args+=("--header" "Sec-Fetch-User: ?1")
curl_args+=("--header" "Cache-Control: max-age=0")
# Content-Type for POST/PUT
if [[ -n "$DATA" ]]; then
if [[ -n "$CONTENT_TYPE" ]]; then
curl_args+=("--header" "Content-Type: $CONTENT_TYPE")
fi
curl_args+=("--data" "$DATA")
fi
# Extra headers
for header in "${EXTRA_HEADERS[@]}"; do
curl_args+=("--header" "$header")
done
# URL
curl_args+=("$URL")
if [[ "$ASYNC" == true ]]; then
# Run asynchronously in background
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}[ASYNC] Running $COUNT request(s) in background...${NC}" >&2
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
# Fire multiple requests if count > 1
local pids=()
for ((i=1; i<=COUNT; i++)); do
# Run in background detached, suppress all output
nohup curl "${curl_args[@]}" >/dev/null 2>&1 &
local pid=$!
disown $pid
pids+=("$pid")
done
if [[ "$COUNT" -eq 1 ]]; then
echo -e "${GREEN}[ASYNC] Request started with PID: ${pids[0]}${NC}" >&2
else
echo -e "${GREEN}[ASYNC] $COUNT requests started with PIDs: ${pids[*]}${NC}" >&2
fi
else
# Run synchronously
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
curl "${curl_args[@]}"
local exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo -e "${RED}[ERROR] Request failed with exit code: $exit_code${NC}" >&2
return $exit_code
fi
fi
}
# Cleanup temp cookie file on exit (only if using default temp file)
cleanup() {
if [[ "$COOKIE_FILE" == "/tmp/browser_curl_cookies_$$"* ]] && [[ -f "$COOKIE_FILE" ]]; then
rm -f "$COOKIE_FILE"
fi
}
# Only set cleanup trap for synchronous requests
if [[ "$ASYNC" == false ]]; then
trap cleanup EXIT
fi
# Main execution
execute_curl
# For async requests, exit immediately without waiting
if [[ "$ASYNC" == true ]]; then
exit 0
fi
EOF
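A quick sanity check once the script is in place (the URLs are placeholders; httpbin.org simply echoes the request back):
./browser_curl.sh --show-headers https://example.com
./browser_curl.sh -X POST --json -d '{"ping":"pong"}' https://httpbin.org/post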
COOKIE_FILE="session_cookies.txt"
# Login and save cookies
./browser_curl.sh -c "$COOKIE_FILE" \
-X POST --form \
-d "user=test&pass=secret" \
https://example.com/login
# Authenticated request using saved cookies
./browser_curl.sh -c "$COOKIE_FILE" \
https://example.com/dashboard
Example 2: Batch URL scraping
#!/bin/bash
URLS=(
"https://example.com/page1"
"https://example.com/page2"
"https://example.com/page3"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
./browser_curl.sh -o "$(basename "$url").html" "$url"
sleep 2 # Rate limiting
done
Example 3: Health check monitoring
#!/bin/bash
ENDPOINT="https://api.example.com/health"
while true; do
if ./browser_curl.sh "$ENDPOINT" | grep -q "healthy"; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done
Installing browser_curl to your PATH
If you want browser_curl.sh to be available from anywhere, install it on your PATH:
mkdir -p ~/.local/bin
echo "Installing browser_curl to ~/.local/bin/browser_curl"
install -m 0755 ./browser_curl.sh ~/.local/bin/browser_curl
echo "Ensuring ~/.local/bin is on PATH via ~/.zshrc"
grep -q 'export PATH="$HOME/.local/bin:$PATH"' ~/.zshrc || \
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
echo "Reloading shell config (~/.zshrc)"
source ~/.zshrc
echo "Verifying browser_curl is on PATH"
command -v browser_curl && echo "browser_curl is installed and on PATH" || echo "browser_curl not found on PATH"
Troubleshooting
Issue: Hanging with dquote> prompt
Cause: Shell quoting issue (unbalanced quotes)
Solution: Use simple, direct commands
# Good
./browser_curl.sh --async https://example.com
# Bad (unbalanced quotes)
echo "test && ./browser_curl.sh --async https://example.com && echo "done"
Possible future enhancements:
Response validation – Assert status codes, content patterns
Metrics collection – Timing stats, success rates
Configuration file – Default settings per domain
Conclusion
browser_curl.sh provides a pragmatic middle ground between plain curl and full browser automation. For many APIs and websites with basic bot filters, browser-like headers, proper protocol use, and cookie handling are sufficient.
Key takeaways:
Simple wrapper around curl with realistic browser behavior
Async mode with --count for easy load testing
Works for basic bot detection, not advanced challenges
Combine with Playwright for tough targets
Lightweight and fast for everyday API work
The script is particularly useful for:
API development and testing
Quick load testing during development
Monitoring and health checks
Simple scraping tasks
Learning curl features
For production load testing at scale, consider tools like k6, Locust, or Artillery. For heavy web scraping with anti-bot measures, invest in proper browser automation infrastructure.