Modern websites deploy bot defenses that can block plain curl or naive scripts. In many cases, adding the right browser-like headers, HTTP/2, cookie persistence, and compression gets you past basic filters without needing a full browser.
This post walks through a small shell utility, browser_curl.sh, that wraps curl with realistic browser behavior. It also supports “fire-and-forget” async requests and a --count flag to dispatch many requests at once for quick load tests.
What this script does
- Sends browser-like headers (Chrome on macOS)
- Uses HTTP/2 and compression
- Manages cookies automatically (cookie jar)
- Follows redirects by default
- Supports JSON and form POSTs
- Async mode that returns immediately
- --count N to dispatch N async requests in one command
Note: This approach won’t solve advanced bot defenses that require JavaScript execution (e.g., Cloudflare Turnstile/CAPTCHAs or TLS/HTTP2 fingerprinting); for that, use a real browser automation tool like Playwright or Selenium.
The complete script
Save this as browser_curl.sh and make it executable in one command:
cat > browser_curl.sh << 'EOF' && chmod +x browser_curl.sh
#!/bin/bash
# browser_curl.sh - Advanced curl wrapper that mimics browser behavior
# Helps get past basic bot filters; won't defeat JS challenges or TLS fingerprinting
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default values
METHOD="GET"
ASYNC=false
COUNT=1
FOLLOW_REDIRECTS=true
SHOW_HEADERS=false
OUTPUT_FILE=""
TIMEOUT=30
DATA=""
CONTENT_TYPE=""
COOKIE_FILE="/tmp/browser_curl_cookies_$$.txt"
VERBOSE=false
# Browser fingerprint (Chrome on macOS)
USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
usage() {
cat << EOH
Usage: $(basename "$0") [OPTIONS] URL
Advanced curl wrapper that mimics browser behavior to bypass bot protection.
OPTIONS:
-X, --method METHOD HTTP method (GET, POST, PUT, DELETE, etc.) [default: GET]
-d, --data DATA POST/PUT data
-H, --header HEADER Add custom header (can be used multiple times)
-o, --output FILE Write output to file
-c, --cookie FILE Use custom cookie file [default: temp file]
-A, --user-agent UA Custom user agent [default: Chrome on macOS]
-t, --timeout SECONDS Request timeout [default: 30]
--async Run request asynchronously in background
--count N Number of async requests to fire [default: 1, requires --async]
--no-redirect Don't follow redirects
--show-headers Show response headers
--json Send data as JSON (sets Content-Type)
--form Send data as form-urlencoded
-v, --verbose Verbose output
-h, --help Show this help message
EXAMPLES:
# Simple GET request
$(basename "$0") https://example.com
# Async GET request
$(basename "$0") --async https://example.com
# POST with JSON data
$(basename "$0") -X POST --json -d '{"username":"test"}' https://api.example.com/login
# POST with form data
$(basename "$0") -X POST --form -d "username=test&password=secret" https://example.com/login
# Multiple async requests (using loop)
for i in {1..10}; do
$(basename "$0") --async https://example.com/api/endpoint
done
# Multiple async requests (using --count)
$(basename "$0") --async --count 10 https://example.com/api/endpoint
EOH
exit 0
}
# Parse arguments
EXTRA_HEADERS=()
URL=""
while [[ $# -gt 0 ]]; do
case $1 in
-X|--method)
METHOD="$2"
shift 2
;;
-d|--data)
DATA="$2"
shift 2
;;
-H|--header)
EXTRA_HEADERS+=("$2")
shift 2
;;
-o|--output)
OUTPUT_FILE="$2"
shift 2
;;
-c|--cookie)
COOKIE_FILE="$2"
shift 2
;;
-A|--user-agent)
USER_AGENT="$2"
shift 2
;;
-t|--timeout)
TIMEOUT="$2"
shift 2
;;
--async)
ASYNC=true
shift
;;
--count)
COUNT="$2"
shift 2
;;
--no-redirect)
FOLLOW_REDIRECTS=false
shift
;;
--show-headers)
SHOW_HEADERS=true
shift
;;
--json)
CONTENT_TYPE="application/json"
shift
;;
--form)
CONTENT_TYPE="application/x-www-form-urlencoded"
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-h|--help)
usage
;;
*)
if [[ -z "$URL" ]]; then
URL="$1"
else
echo -e "${RED}Error: Unknown argument '$1'${NC}" >&2
exit 1
fi
shift
;;
esac
done
# Validate URL
if [[ -z "$URL" ]]; then
echo -e "${RED}Error: URL is required${NC}" >&2
usage
fi
# Validate count (check the integer format first so the numeric comparison below is safe)
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [[ "$COUNT" -lt 1 ]]; then
echo -e "${RED}Error: --count must be a positive integer${NC}" >&2
exit 1
fi
if [[ "$COUNT" -gt 1 ]] && [[ "$ASYNC" == false ]]; then
echo -e "${RED}Error: --count requires --async${NC}" >&2
exit 1
fi
# Execute curl
execute_curl() {
# Build curl arguments as array instead of string
local -a curl_args=()
# Basic options
curl_args+=("--compressed")
curl_args+=("--max-time" "$TIMEOUT")
curl_args+=("--connect-timeout" "10")
curl_args+=("--http2")
# Cookies (create the file if missing so curl doesn't warn; never truncate an existing jar)
[[ -f "$COOKIE_FILE" ]] || : > "$COOKIE_FILE" 2>/dev/null || true
curl_args+=("--cookie" "$COOKIE_FILE")
curl_args+=("--cookie-jar" "$COOKIE_FILE")
# Follow redirects
if [[ "$FOLLOW_REDIRECTS" == true ]]; then
curl_args+=("--location")
fi
# Show headers
if [[ "$SHOW_HEADERS" == true ]]; then
curl_args+=("--include")
fi
# Output file
if [[ -n "$OUTPUT_FILE" ]]; then
curl_args+=("--output" "$OUTPUT_FILE")
fi
# Verbose
if [[ "$VERBOSE" == true ]]; then
curl_args+=("--verbose")
else
curl_args+=("--silent" "--show-error")
fi
# Method
curl_args+=("--request" "$METHOD")
# Browser-like headers
curl_args+=("--header" "User-Agent: $USER_AGENT")
curl_args+=("--header" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
curl_args+=("--header" "Accept-Language: en-US,en;q=0.9")
curl_args+=("--header" "Accept-Encoding: gzip, deflate, br")
curl_args+=("--header" "Connection: keep-alive")
curl_args+=("--header" "Upgrade-Insecure-Requests: 1")
curl_args+=("--header" "Sec-Fetch-Dest: document")
curl_args+=("--header" "Sec-Fetch-Mode: navigate")
curl_args+=("--header" "Sec-Fetch-Site: none")
curl_args+=("--header" "Sec-Fetch-User: ?1")
curl_args+=("--header" "Cache-Control: max-age=0")
# Content-Type for POST/PUT
if [[ -n "$DATA" ]]; then
if [[ -n "$CONTENT_TYPE" ]]; then
curl_args+=("--header" "Content-Type: $CONTENT_TYPE")
fi
curl_args+=("--data" "$DATA")
fi
# Extra headers
for header in "${EXTRA_HEADERS[@]}"; do
curl_args+=("--header" "$header")
done
# URL
curl_args+=("$URL")
if [[ "$ASYNC" == true ]]; then
# Run asynchronously in background
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}[ASYNC] Running $COUNT request(s) in background...${NC}" >&2
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
# Fire multiple requests if count > 1
local pids=()
for ((i=1; i<=COUNT; i++)); do
# Run in background detached, suppress all output
nohup curl "${curl_args[@]}" >/dev/null 2>&1 &
local pid=$!
disown $pid
pids+=("$pid")
done
if [[ "$COUNT" -eq 1 ]]; then
echo -e "${GREEN}[ASYNC] Request started with PID: ${pids[0]}${NC}" >&2
else
echo -e "${GREEN}[ASYNC] $COUNT requests started with PIDs: ${pids[*]}${NC}" >&2
fi
else
# Run synchronously
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
curl "${curl_args[@]}"
local exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo -e "${RED}[ERROR] Request failed with exit code: $exit_code${NC}" >&2
return $exit_code
fi
fi
}
# Cleanup temp cookie file on exit (only if using default temp file)
cleanup() {
if [[ "$COOKIE_FILE" == "/tmp/browser_curl_cookies_$$"* ]] && [[ -f "$COOKIE_FILE" ]]; then
rm -f "$COOKIE_FILE"
fi
}
# Only set cleanup trap for synchronous requests
if [[ "$ASYNC" == false ]]; then
trap cleanup EXIT
fi
# Main execution
execute_curl
# For async requests, exit immediately without waiting
if [[ "$ASYNC" == true ]]; then
exit 0
fi
EOF
Optionally, move it to your PATH:
sudo mv browser_curl.sh /usr/local/bin/browser_curl
Quick start
Simple GET request
./browser_curl.sh https://example.com
Async GET (returns immediately)
./browser_curl.sh --async https://example.com
Fire 100 async requests in one command
./browser_curl.sh --async --count 100 https://example.com/api
Common examples
POST JSON
./browser_curl.sh -X POST --json \
-d '{"username":"user","password":"pass"}' \
https://api.example.com/login
POST form data
./browser_curl.sh -X POST --form \
-d "username=user&password=pass" \
https://example.com/login
Include response headers
./browser_curl.sh --show-headers https://example.com
Save response to a file
./browser_curl.sh -o response.json https://api.example.com/data
Custom headers
./browser_curl.sh \
-H "X-API-Key: your-key" \
-H "Authorization: Bearer token" \
https://api.example.com/data
Persistent cookies across requests
COOKIE_FILE="session_cookies.txt"
# Login and save cookies
./browser_curl.sh -c "$COOKIE_FILE" \
-X POST --form \
-d "user=test&pass=secret" \
https://example.com/login
# Authenticated request using saved cookies
./browser_curl.sh -c "$COOKIE_FILE" \
https://example.com/dashboard
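The cookie jar is a plain curl (Netscape-format) file you can inspect directly. A saved entry looks roughly like this, with tab-separated fields for domain, subdomain flag, path, secure flag, expiry, name, and value (illustrative values):
cat session_cookies.txt
# Netscape HTTP Cookie File
# https://curl.se/docs/http-cookies.html
example.com	FALSE	/	TRUE	1735689600	session_id	abc123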
Load testing patterns
Simple load test with --count
The easiest way to fire multiple requests:
./browser_curl.sh --async --count 100 https://example.com/api
Example output:
[ASYNC] 100 requests started with PIDs: 1234 1235 1236 ... 1333
Performance: 100 requests dispatched in approximately 0.09 seconds
Loop-based approach (alternative)
for i in {1..100}; do
./browser_curl.sh --async https://example.com/api
done
Timed load test
Run continuous requests for a specific duration:
#!/bin/bash
URL="https://example.com/api"
DURATION=60 # seconds
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
./browser_curl.sh --async "$URL" > /dev/null 2>&1
((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"
Parameterized load test script
#!/bin/bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
./browser_curl.sh --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"
Usage:
./load_test.sh https://api.example.com/endpoint 200
Options reference
| Option | Description | Default |
|---|---|---|
| -X, --method | HTTP method (GET/POST/PUT/DELETE) | GET |
| -d, --data | Request body (JSON or form) | – |
| -H, --header | Add extra headers (repeatable) | – |
| -o, --output | Write response to a file | stdout |
| -c, --cookie | Cookie file to use (and persist) | temp file |
| -A, --user-agent | Override User-Agent | Chrome/macOS |
| -t, --timeout | Max request time in seconds | 30 |
| --async | Run request(s) in the background | false |
| --count N | Fire N async requests (requires --async) | 1 |
| --no-redirect | Don't follow redirects | follows |
| --show-headers | Include response headers | false |
| --json | Sets Content-Type: application/json | – |
| --form | Sets Content-Type: application/x-www-form-urlencoded | – |
| -v, --verbose | Verbose diagnostics | false |
| -h, --help | Show usage | – |
Validation rules:
- --count requires --async
- --count must be a positive integer
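For example, passing --count without --async fails fast:
$ ./browser_curl.sh --count 5 https://example.com
Error: --count requires --async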
Under the hood: why this works better than plain curl
Browser-like headers
The script automatically adds these headers to mimic Chrome:
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif...
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1
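To sanity-check what actually goes over the wire, point the script at a header-echo service (httpbin.org is assumed to be reachable):
./browser_curl.sh https://httpbin.org/headers | jq .headers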
HTTP/2 + compression
- Uses --http2 for HTTP/2 protocol support
- Enables --compressed for automatic gzip/brotli decompression
- Closer to modern browser behavior
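You can confirm the negotiated protocol and compressed transfer from curl's verbose output; a quick, rough check:
./browser_curl.sh -v https://example.com 2>&1 | grep -Ei 'http/2|content-encoding'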
Cookie jar
- Maintains session cookies across redirects and calls
- Persists cookies to file for reuse
- Automatically created and cleaned up
Redirect handling
- Follows redirects by default with --location
- Critical for login flows, SSO, and OAuth redirects
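To watch a redirect without following it, combine --no-redirect with --show-headers; httpbin.org/redirect/1 is assumed here as a convenient test endpoint:
./browser_curl.sh --no-redirect --show-headers https://httpbin.org/redirect/1
The output should include a 302 status line and a Location header rather than the final page body.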
These features help bypass basic bot detection that blocks obvious non-browser clients.
Real-world examples
Example 1: API authentication flow
Save this as test_auth.sh in the same directory as browser_curl.sh, make it executable with chmod +x test_auth.sh, and run it:
#!/bin/bash
COOKIE_FILE="auth_session.txt"
API_BASE="https://api.example.com"
echo "Logging in..."
./browser_curl.sh -c "$COOKIE_FILE" -X POST --json -d '{"username":"user","password":"pass"}' "$API_BASE/auth/login" > /dev/null
echo "Fetching profile..."
./browser_curl.sh -c "$COOKIE_FILE" "$API_BASE/user/profile" | jq .
echo "Load testing..."
./browser_curl.sh -c "$COOKIE_FILE" --async --count 50 "$API_BASE/api/data"
echo "Done!"
rm -f "$COOKIE_FILE"
Example 2: Scraping with rate limiting
#!/bin/bash
URLS=(
"https://example.com/page1"
"https://example.com/page2"
"https://example.com/page3"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
./browser_curl.sh -o "$(basename "$url").html" "$url"
sleep 2 # Rate limiting
done
Example 3: Health check monitoring
#!/bin/bash
ENDPOINT="https://api.example.com/health"
while true; do
if ./browser_curl.sh "$ENDPOINT" | grep -q "healthy"; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done
Installing browser_curl to your PATH
If you want browser_curl.sh to be available from anywhere, install it on your PATH:
mkdir -p ~/.local/bin
echo "Installing browser_curl to ~/.local/bin/browser_curl"
install -m 0755 ./browser_curl.sh ~/.local/bin/browser_curl
echo "Ensuring ~/.local/bin is on PATH via ~/.zshrc"
grep -q 'export PATH="$HOME/.local/bin:$PATH"' ~/.zshrc || \
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
echo "Reloading shell config (~/.zshrc)"
source ~/.zshrc
echo "Verifying browser_curl is on PATH"
command -v browser_curl && echo "browser_curl is installed and on PATH" || echo "browser_curl not found on PATH"
Troubleshooting
Issue: Hanging with dquote> prompt
Cause: Shell quoting issue (unbalanced quotes)
Solution: Use simple, direct commands
# Good
./browser_curl.sh --async https://example.com
# Bad (unbalanced quotes)
echo "test && ./browser_curl.sh --async https://example.com && echo "done"
For chaining commands:
echo Start; ./browser_curl.sh --async https://example.com; echo Done
Issue: Verbose mode produces too much output
Cause: -v flag prints all curl diagnostics to stderr
Solution: Remove -v for production use:
# Debug mode
./browser_curl.sh -v https://example.com
# Production mode
./browser_curl.sh https://example.com
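If you need the diagnostics but not on your terminal, keep -v and redirect stderr to a file instead:
./browser_curl.sh -v https://example.com 2> debug.log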
Issue: Cookie file warnings
Cause: First-time cookie file creation
Solution: The script creates the cookie file up front when it doesn't already exist, so this warning shouldn't appear; any residual warnings are safe to ignore.
Issue: 403 Forbidden errors
Cause: Site has stronger protections (JavaScript challenges, TLS fingerprinting)
Solution: Consider using real browser automation:
- Playwright (Python/Node.js)
- Selenium
- Puppeteer
Or combine approaches:
- Use Playwright to initialize session and get cookies
- Export cookies to file
- Use browser_curl.sh -c cookies.txt for subsequent requests
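A rough sketch of that bridge, assuming Playwright's storage-state JSON and a jq conversion to curl's Netscape jar format (state.json and cookies.txt are hypothetical file names, and the field mapping is approximate):
# Log in interactively once; Playwright saves cookies + localStorage to state.json
npx playwright codegen --save-storage=state.json https://example.com/login
# Convert the JSON cookies to a curl/Netscape cookie jar (approximate mapping)
{
  echo "# Netscape HTTP Cookie File"
  jq -r '.cookies[] | [.domain, "TRUE", .path, (if .secure then "TRUE" else "FALSE" end), ((.expires // 0) | floor | tostring), .name, .value] | @tsv' state.json
} > cookies.txt
# Reuse the authenticated session without a browser
./browser_curl.sh -c cookies.txt https://example.com/dashboard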
Performance benchmarks
Tests conducted on 2023 MacBook Pro M2, macOS Sonoma:
| Test | Time | Requests/sec |
|---|---|---|
| Single sync request | approximately 0.2s | – |
| 10 async requests (--count) | approximately 0.03s | 333/s |
| 100 async requests (--count) | approximately 0.09s | 1111/s |
| 1000 async requests (--count) | approximately 0.8s | 1250/s |
Note: Dispatch time only; actual HTTP completion depends on target server.
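To reproduce a dispatch-time measurement on your own machine (numbers will vary with hardware and shell), time the command directly:
time ./browser_curl.sh --async --count 100 https://example.com/api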
Limitations
What this script CANNOT do
- JavaScript execution – Can’t solve JS challenges (use Playwright)
- CAPTCHA solving – Requires human intervention or services
- Advanced TLS fingerprinting – Can’t mimic exact browser TLS stack
- HTTP/2 fingerprinting – Can’t perfectly match browser HTTP/2 frames
- WebSocket connections – HTTP only
- Browser API access – No Canvas, WebGL, Web Crypto fingerprints
What this script CAN do
- Basic header spoofing – Pass simple User-Agent checks
- Cookie management – Maintain sessions
- Load testing – Quick async request dispatch
- API testing – POST/PUT/DELETE with JSON/form data
- Simple scraping – Pages without JS requirements
- Health checks – Monitoring endpoints
When to use what
Use browser_curl.sh when:
- Target has basic bot detection (header checks)
- API testing with authentication
- Quick load testing (less than 10k requests)
- Monitoring/health checks
- No JavaScript required
- You want a lightweight tool
Use Playwright/Selenium when:
- Target requires JavaScript execution
- CAPTCHA challenges present
- Advanced fingerprinting detected
- Need to interact with dynamic content
- Heavy scraping with anti-bot measures
- Login flows with MFA/2FA
Hybrid approach:
- Use Playwright to bootstrap session
- Extract cookies
- Use browser_curl.sh for follow-up requests (faster)
Advanced: Combining with other tools
With jq for JSON processing
./browser_curl.sh https://api.example.com/users | jq '.[] | .name'
With parallel for concurrency control
cat urls.txt | parallel -j 10 "./browser_curl.sh -o {#}.html {}"
With watch for monitoring
watch -n 5 "./browser_curl.sh https://api.example.com/health | jq .status"
With xargs for batch processing
cat ids.txt | xargs -I {} ./browser_curl.sh "https://api.example.com/item/{}"
Future enhancements
Potential features to add:
- Rate limiting – Built-in requests/second throttling
- Retry logic – Exponential backoff on failures (see the sketch after this list)
- Output formats – JSON-only, CSV, headers-only modes
- Proxy support – SOCKS5/HTTP proxy options
- Custom TLS – Certificate pinning, client certs
- Response validation – Assert status codes, content patterns
- Metrics collection – Timing stats, success rates
- Configuration file – Default settings per domain
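As a taste of the retry item, here is a minimal wrapper sketch with exponential backoff (retry_curl.sh is a hypothetical companion script, not part of browser_curl.sh):
#!/bin/bash
# retry_curl.sh - hypothetical retry wrapper around browser_curl.sh
MAX_RETRIES=5
DELAY=1
for ((attempt=1; attempt<=MAX_RETRIES; attempt++)); do
  if ./browser_curl.sh "$@"; then
    exit 0  # request succeeded on this attempt
  fi
  echo "Attempt $attempt failed; retrying in ${DELAY}s..." >&2
  sleep "$DELAY"
  DELAY=$((DELAY * 2))  # exponential backoff: 1s, 2s, 4s, ...
done
echo "All $MAX_RETRIES attempts failed" >&2
exit 1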
Conclusion
browser_curl.sh provides a pragmatic middle ground between plain curl and full browser automation. For many APIs and websites with basic bot filters, browser-like headers, proper protocol use, and cookie handling are sufficient.
Key takeaways:
- Simple wrapper around curl with realistic browser behavior
- Async mode with --count for easy load testing
- Works for basic bot detection, not advanced challenges
- Combine with Playwright for tough targets
- Lightweight and fast for everyday API work
The script is particularly useful for:
- API development and testing
- Quick load testing during development
- Monitoring and health checks
- Simple scraping tasks
- Learning curl features
For production load testing at scale, consider tools like k6, Locust, or Artillery. For heavy web scraping with anti-bot measures, invest in proper browser automation infrastructure.