If you have ever sat there manually clicking through a UI, copying error messages, and pasting them into Claude just to get help debugging something, I have good news. There is a better way.
Chrome MCP gives Claude Desktop direct access to your Chrome browser, allowing it to read the page, inspect the DOM, execute JavaScript, monitor network requests, and capture console output without you lifting a finger. For anyone doing software development, QA, or release testing, this changes the game entirely.
Why This Matters
When you are debugging a production issue or validating a new release, the bottleneck is almost never Claude's reasoning ability. It is the friction of getting context into Claude in the first place: copying stack traces, screenshotting UI states, manually describing what you see, and repeating yourself every time something changes. Chrome MCP eliminates that friction, giving Claude direct visibility into what is actually happening in your browser. It can read live page content and DOM state, capture JavaScript errors straight from the console, intercept network requests and API responses in real time, and autonomously navigate and interact with your application while flagging anything that looks wrong.
For senior engineers and CTOs who care about reducing MTTR and shipping with confidence, this is a genuine force multiplier.
Install in One Command
Copy the block below in its entirety and paste it into your terminal. It writes the installer script, makes it executable, and runs it all in one go.
cat > install-chrome-mcp.sh << 'EOF'
#!/bin/bash
set -euo pipefail
echo "Installing Chrome MCP for Claude Desktop..."
CONFIG_DIR="$HOME/Library/Application Support/Claude"
CONFIG_FILE="$CONFIG_DIR/claude_desktop_config.json"
mkdir -p "$CONFIG_DIR"
if [[ -f "$CONFIG_FILE" ]]; then
echo "Existing config found. Merging Chrome MCP entry..."
node -e "
const fs = require('fs');
const config = JSON.parse(fs.readFileSync('$CONFIG_FILE', 'utf8'));
config.mcpServers = config.mcpServers || {};
config.mcpServers['chrome-devtools'] = {
command: 'npx',
args: ['-y', 'chrome-devtools-mcp@latest']
};
fs.writeFileSync('$CONFIG_FILE', JSON.stringify(config, null, 2));
console.log('Config updated successfully.');
"
else
echo "No existing config found. Creating new config..."
cat > "$CONFIG_FILE" << 'JSONEOF'
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}
JSONEOF
echo "Config created at $CONFIG_FILE"
fi
echo ""
echo "Done. Restart Claude Desktop to activate Chrome MCP."
echo "You should see a browser tools indicator in the Claude interface."
EOF
chmod +x install-chrome-mcp.sh
./install-chrome-mcp.sh
One paste and you are done. The script writes itself to disk, becomes executable, and runs immediately without any manual file editing or separate steps. Using chrome-devtools-mcp@latest means you will always pull the current version without needing to reinstall.
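If you want to sanity-check the result by hand, the config the script produces should parse as JSON and contain a chrome-devtools entry. Here is the same structure written to a scratch file and validated, so you can compare it against your real file at ~/Library/Application Support/Claude/claude_desktop_config.json:

```shell
# Recreate the config the installer writes, against a throwaway file.
CONFIG=$(mktemp)
cat > "$CONFIG" << 'JSON'
{
  "mcpServers": {
    "chrome-devtools": {
      "command": "npx",
      "args": ["-y", "chrome-devtools-mcp@latest"]
    }
  }
}
JSON

# json.tool exits non-zero on malformed JSON, so this only prints on success.
python3 -m json.tool "$CONFIG" > /dev/null && result="config parses OK"
echo "$result"
```

If your real file fails the same check, a stray trailing comma from a hand edit is the usual culprit.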
Using It for Debugging
Once Chrome MCP is active, you direct Claude to navigate to any URL and investigate it directly. You might ask it to check the dev console on a page for JavaScript errors, navigate to your staging environment and verify the dashboard loads cleanly, or walk through a specific user flow and report back on anything unexpected. Claude reads the console output, intercepts the network calls, and reports back in plain language with specifics you can act on immediately rather than a vague description you then have to go and verify yourself.
Using It for Release Testing
This is where Chrome MCP really earns its keep. Before pushing a release to production, you can give Claude a test checklist and let it execute the entire regression suite autonomously against your staging URL, navigating through each scenario, capturing screenshots, checking for console errors, and producing a structured pass/fail summary at the end. The alternative is a human doing this manually for an hour before every release, and there is simply no comparison once you have seen what autonomous browser testing looks like in practice.
How It Actually Works
Chrome MCP connects to your browser using the Chrome DevTools Protocol, the same underlying mechanism that powers Chrome’s built-in developer tools. When Claude Desktop has Chrome MCP active, it can issue DevTools commands directly to pages it navigates to, reading the accessibility tree, querying DOM elements, firing JavaScript in the page context, and listening on the network and console streams.
There is no screen recording, no pixel scraping, and no vision model trying to interpret screenshots. Claude is working with structured data, the actual DOM state, actual network payloads, actual console messages, which means it reasons about your application the same way a senior developer would when sitting at the DevTools panel, not the way a junior tester would when eyeballing a screen.
The connection is local. Chrome MCP runs as a process on your machine and communicates with Claude Desktop over a local socket. Nothing leaves your machine except what Claude sends to the Anthropic API as part of normal inference.
One important clarification on scope: chrome-devtools-mcp operates in its own managed browser context, separate from your normal Chrome windows. Claude cannot see or interact with tabs you already have open. It only controls pages it has navigated to itself. This is worth understanding both practically and as a security property. Claude cannot accidentally interact with your AWS console, banking session, or anything else you have open unless you explicitly direct it to navigate there within its own context.
What Claude Will and Will Not Do
Giving an AI agent direct access to a browser raises a fair question about guardrails. Here is how it breaks down in practice.
Claude will not enter passwords or credentials under any circumstances, even if you provide them directly in the chat. It will not touch financial data, will not permanently delete content, and will not modify security permissions or access controls, including sharing documents or changing who can view or edit files. It will not create accounts on your behalf.
For anything irreversible, Claude stops and asks for explicit confirmation before proceeding. Clicking Publish, submitting a form, sending an email, or executing a purchase all require you to say yes in the chat before Claude acts. The instruction to proceed must come from you in the conversation, not from content found on a web page.
That last point matters more than it sounds. If a web page contains hidden instructions telling Claude to take some action, Claude treats that as untrusted data and surfaces it to you rather than following it. This class of attack is called prompt injection and it is a real risk when AI agents interact with arbitrary web content. Chrome MCP is designed to be resistant to it by default.
Things Worth Trying
Once you have it running, here are some concrete starting points.
Debug a broken page in seconds. Direct Claude to navigate to the broken page and check it for JavaScript errors. Claude reads the console, identifies the error, traces it back to the relevant DOM state or network call, and gives you a specific diagnosis rather than a list of things to check.
Validate an API integration. Navigate Claude to a feature that calls your backend and ask it to monitor the network requests while it triggers the action. Claude captures the request payload, the response, the status code, and any timing anomalies, and flags anything that deviates from what you would expect.
Audit a form for accessibility issues. Point Claude at a form and ask it to walk the accessibility tree and identify any inputs missing labels, incorrect ARIA roles, or tab order problems. This takes Claude about ten seconds and would take a human tester considerably longer.
Smoke test a deployment. After pushing to staging, give Claude your critical user journeys as a numbered list and ask it to execute each one, navigate through the steps, and report back with a pass or fail and the reason for any failure. Claude does not get tired, does not skip steps, and does not interpret close enough as a pass.
Compare environments. Ask Claude to open your production and staging URLs in sequence and compare the DOM structure of a specific component across both. Subtle differences in class names, missing elements, or divergent data often show up immediately when you stop looking with your eyes and start looking with structured queries.
The common thread across all of these is that you stop describing your problem to Claude and start showing it directly. That shift in how you interact with the tool is where the real productivity gain lives.
A Note on Security
Chrome MCP runs entirely locally and does not send your browser data to any external service beyond your normal Claude API calls. Even so, be deliberate about what you direct Claude to navigate to while the browser tool is active, and avoid pointing it at authenticated sessions you would not want an automated agent interacting with.
Final Thought
The best debugging tools are the ones that remove the distance between the problem and the person solving it, and Chrome MCP does exactly that by putting Claude in the same browser you are looking at with full visibility into what is actually happening. If you are serious about software quality and not using this yet, you are leaving time on the table.
Andrew Baker is CIO at Capitec Bank and writes about enterprise architecture, cloud infrastructure, and the tools that actually move the needle at andrewbaker.ninja.
You updated a plugin five minutes ago. Maybe it was a security patch. Maybe you were trying a new caching layer. You clicked “Update Now,” saw the progress bar fill, got the green tick, and moved on with your day. Now the site is down. Not partially down. Not slow. Gone. A blank white page. No error message, no admin panel, no way in. Your visitors see nothing. Your contact forms are dead. If you are running WooCommerce, your checkout just stopped processing orders.
If you are running WordPress 5.2 or later, you might not even get a white screen. Instead you get this:
There has been a critical error on this website. Please check your site admin email inbox for instructions.
That is the exact message. No stack trace, no file name, no line number. Just a single sentence telling you to check an email that may or may not arrive. WordPress also sends a notification to the admin email address with the subject line “Your Site Is Experiencing a Technical Issue” containing a recovery mode link. In theory this is helpful. In practice, the email is often slow to arrive, lands in spam, or never arrives at all.
If you are running WordPress older than 5.2, you get nothing. A blank white page. No message at all. That is the original White Screen of Death.
Either way, the question is not whether it will happen to you. The question is what happens in the 60 seconds after it does.
1. Why WordPress Does Not Protect You
WordPress has no runtime health check. There is no circuit breaker, no post activation validation, no automatic rollback. When you activate a plugin, WordPress writes the plugin name into an active_plugins option in the database and then loads that plugin’s PHP file on the next request. If that file throws a fatal error, PHP dies and takes the entire request pipeline with it. Apache or Nginx returns a 500 or a blank page. WordPress never gets far enough into its own boot sequence to realise something is wrong.
There is a recovery mode that was introduced in WordPress 5.2. It catches fatal errors and sends an email to the admin address with a special recovery link. In theory this is helpful. In practice it has three problems. First, the email may take minutes to arrive or may never arrive at all if your site’s mail configuration is itself broken (which it often is on cheap shared hosting). Second, the recovery link expires after a short window. Third, it only pauses the offending plugin for the recovery session. It does not deactivate it permanently. If you log in via the recovery link but forget to deactivate the plugin manually, the next regular visitor request will crash the site again.
The core issue is architectural. WordPress loads every active plugin on every request. There is no sandbox, no isolation, no health gate between plugin activation and the next page load. A single throw or require of a missing file in any active plugin will take down the entire application. The plugin system is cooperative, not defensive.
2. What Recovery Normally Looks Like
If you have SSH access, the fix takes about 30 seconds. You connect to the server, navigate to wp-content/plugins/, and either rename or delete the offending plugin directory. The next request to WordPress skips the missing plugin and the site comes back.
If you do not have SSH, you try FTP. Most hosting providers still offer it. You open FileZilla or whatever client you have configured, navigate to the plugins folder, and do the same thing. This takes longer because FTP clients are slow, and if you do not have your credentials saved, you are now hunting through old emails from your hosting provider.
If you do not have FTP, or you are on a managed host that restricts file access, you file a support ticket. On a good host this gets resolved in minutes. On a bad one it takes hours. On a weekend it takes longer. Your site is down the entire time.
If you have a backup plugin and it stored snapshots externally (S3, Google Drive, Dropbox), you can restore from the last known good state. This works, but it is a sledgehammer for a thumbtack. You are restoring the entire site, including the database, to fix a single bad plugin file. If any content was created between the backup and the crash, it is gone.
Every one of these options assumes technical knowledge, preconfigured access, or a responsive support team. Most WordPress site owners have none of the three.
The Emergency SSH One (Two) Liner(s)
If you do have SSH access and you just need the site back up immediately, two commands. First, see what you are about to kill:
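A plausible reconstruction of those two commands, sketched against a scratch directory so it is safe to run as-is; on a real server you would set WP_PLUGINS to /var/www/html/wp-content/plugins. The directory names and the 60-minute window are illustrative:

```shell
# Scratch stand-in for wp-content/plugins; swap in the real path on a server.
WP_PLUGINS=$(mktemp -d)
mkdir -p "$WP_PLUGINS/just-updated-plugin" "$WP_PLUGINS/old-stable-plugin"
touch -d '3 hours ago' "$WP_PLUGINS/old-stable-plugin"

# Command 1: list plugin directories modified in the last 60 minutes.
find "$WP_PLUGINS" -mindepth 1 -maxdepth 1 -type d -mmin -60

# Command 2: the nuclear option -- delete those same directories.
find "$WP_PLUGINS" -mindepth 1 -maxdepth 1 -type d -mmin -60 -exec rm -rf {} +
```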
Adjust the path if your WordPress installation is not at /var/www/html. On many hosts it will be /home/username/public_html or similar. Change -mmin -60 to -mmin -30 for 30 minutes or -mmin -120 for two hours.
This is the nuclear option. It does not deactivate the plugin cleanly through WordPress. It deletes the files from disk. WordPress will notice the plugin is missing on the next request and remove it from the active plugins list automatically. If you need to be more surgical, use WP-CLI instead:
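One plausible form, again demonstrated against a scratch directory. wp plugin deactivate is a real WP-CLI command, but the surrounding plumbing here is an assumption, and the leading echo keeps it a dry run:

```shell
WP_PLUGINS=$(mktemp -d)   # stand-in for wp-content/plugins on a real host
mkdir -p "$WP_PLUGINS/suspect-plugin"

# Deactivate (not delete) every plugin directory modified in the last hour.
# The `echo` makes this a dry run; remove it to invoke WP-CLI for real.
cmd=$(find "$WP_PLUGINS" -mindepth 1 -maxdepth 1 -type d -mmin -60 \
  -exec basename {} \; | xargs -r echo wp plugin deactivate)
echo "$cmd"
```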
This deactivates recently modified plugins without deleting them, so you can inspect them later.
3. The Watchdog Pattern
The solution is a plugin that watches the site from the inside. Not a monitoring service that pings your URL from an external server and sends you an alert. Not an uptime checker that tells you the site is down (you already know the site is down). A plugin that detects the crash, identifies the cause, and fixes it automatically before you even notice.
The pattern works like this. A lightweight cron job fires every 60 seconds. Each tick does three things.
Probe. The plugin sends an HTTP GET to a dedicated health endpoint on its own site. The endpoint is registered at init priority 1, before themes and most other plugins load. It returns a plain text response: CLOUDSCALE_OK. No HTML, no template, no database queries. The request includes cache busting parameters and no cache headers to ensure CDNs and browsers do not serve a stale 200 when the site is actually dead.
Evaluate. If the probe comes back HTTP 200 with the expected body, the site is healthy. The tick exits and does nothing. No logging, no database writes, no overhead.
Recover. If the probe fails (500 error, timeout, connection refused, unexpected response body), the plugin scans the wp-content/plugins/ directory and identifies the plugin file with the most recent modification time. If that file was modified within the last 10 minutes, the watchdog deactivates it, deletes its files from disk, and lets the next cron tick re-probe to confirm the site is back.
The entire recovery loop takes less than two minutes from crash to restored site. No human intervention. No SSH. No support ticket.
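For concreteness, here is the probe, evaluate, recover loop sketched in shell. This is illustrative only: the actual plugin does this in PHP from inside WP-Cron, and the endpoint, marker string, and paths below are assumptions. The probe is stubbed to fail so the sketch exercises the recovery branch and can run anywhere:

```shell
set -eu

PLUGINS_DIR=$(mktemp -d)   # stand-in for wp-content/plugins
WINDOW_MIN=10              # the 10-minute recovery window

# Two fake plugins: one stable for hours, one modified moments ago.
mkdir -p "$PLUGINS_DIR/stable-plugin" "$PLUGINS_DIR/fresh-plugin"
touch -d '2 hours ago' "$PLUGINS_DIR/stable-plugin"

# 1. Probe. Against a live site this would be roughly:
#    curl -s -H 'Cache-Control: no-cache' "https://site/?health=1&ts=$(date +%s)"
# Here a failed probe is hardcoded so the recovery branch runs.
body="HTTP 500"

if [ "$body" = "CLOUDSCALE_OK" ]; then
  # 2. Evaluate: healthy, so the tick exits without side effects.
  echo "healthy"
else
  # 3. Recover: the newest plugin dir modified inside the window is the suspect.
  candidate=$(find "$PLUGINS_DIR" -mindepth 1 -maxdepth 1 -type d \
    -mmin "-$WINDOW_MIN" -printf '%T@ %p\n' | sort -rn | head -n 1 | cut -d' ' -f2-)
  if [ -n "$candidate" ]; then
    rm -rf "$candidate"
    echo "removed $(basename "$candidate")"
  fi
fi
```

Note that the stable plugin survives: its modification time falls outside the window, so it is never a candidate.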
4. The Recovery Window
The 10 minute window is the most important design decision in the plugin. It defines the boundary between “a plugin that was just installed or updated” and “a plugin that has been sitting on the server for days.”
Without a time window, the watchdog would be dangerous. If the site crashes because the database is down or the disk is full, the watchdog would delete whatever plugin happens to have the newest file, even if that plugin has been stable for months and had nothing to do with the crash. That would be worse than the original problem.
The 10 minute window scopes the blast radius. The watchdog only acts on plugins that were modified in the last 600 seconds. If no plugin was recently modified, the watchdog sees the crash, finds no candidate, and does nothing. This is the correct behaviour. A crash with no recent plugin change is a server problem, not a plugin problem, and the watchdog should not try to fix server problems.
The timing scenarios are worth walking through explicitly.
You install a plugin at 14:00. The site crashes at 14:03. The plugin’s file modification time is 3 minutes ago, well within the window. The watchdog removes it.
You install a plugin at 14:00. The site crashes at 14:15. The plugin’s file modification time is 15 minutes ago, outside the window. The watchdog sees the crash but finds no candidate within the window. It does nothing. This is correct. If the plugin ran fine for 15 minutes and the site only now crashed, the plugin is probably not the cause.
You update two plugins at 14:00 and 14:05. The site crashes at 14:06. The watchdog finds the 14:05 plugin (most recently modified) and removes it. If the site is still down at the next tick 60 seconds later, the 14:00 plugin is now the most recently modified and still within the window. It gets removed next. The watchdog works through the candidates sequentially, most recent first.
5. What It Deletes and What It Leaves Alone
The watchdog targets one plugin per tick: the most recently modified file within the recovery window. It deactivates the plugin first (removes it from the active_plugins list in the database), then deletes the plugin’s files from disk.
It deletes rather than just deactivates. A deactivated plugin still has files on disk that could be autoloaded, could contain vulnerable code, or could conflict with other plugins through file level includes. If the plugin crashed your site, you do not want its files sitting around. You want it gone. You can reinstall it later once you have investigated the issue.
The watchdog never touches itself. It explicitly skips its own plugin file when scanning for candidates. It also never touches themes, mu-plugins, or drop-in plugins. Its scope is strictly the wp-content/plugins/ directory.
It does not act on database corruption. It does not act on PHP version incompatibilities at the server level. It does not act on disk space exhaustion, memory limit errors caused by the WordPress core, or misconfigurations in wp-config.php. It is a single purpose tool with a narrow scope, and that narrowness is what makes it safe.
6. The Design Decisions
Single file, no dependencies. The entire plugin is one PHP file. No Composer packages, no JavaScript assets, no CSS, no database tables, no options. A recovery tool that requires its own infrastructure is a recovery tool that can fail for infrastructure reasons. The fewer moving parts, the more likely it works when everything else is broken.
No configuration UI. There is no settings page. There is nothing to configure. The recovery window is a constant in the code. The probe endpoint is hardcoded. The cron schedule is fixed at 60 seconds. Every configuration option is a potential misconfiguration. A watchdog plugin that requires the user to set it up correctly is a watchdog plugin that will be set up incorrectly on exactly the sites that need it most.
Self probe, not external ping. The plugin probes itself from inside WordPress, not from an external monitoring service. This means it works on localhost development environments, on staging servers behind VPNs, on intranets, and on any host where inbound HTTP is restricted. It also means the probe tests the full WordPress request pipeline, not just whether the server is responding to TCP connections.
SSL verification disabled on the probe. The self probe sets sslverify to false. This is deliberate. Many staging and development environments use self signed certificates. A watchdog that fails because it cannot verify its own SSL certificate is useless in exactly the environments where you are most likely to be testing plugin changes.
Cache busting on every probe. The probe URL includes a timestamp parameter and sends explicit no cache headers. WordPress sites frequently run behind Varnish, Cloudflare, or plugin level page caches. Without cache busting, the probe could receive a cached 200 response from the CDN while the origin server is returning 500 errors. The site would appear healthy when it is actually dead.
7. WordPress Cron: The One Thing You Need to Know
WordPress does not have a real cron system. The built-in “WP-Cron” is triggered by visitor requests. When someone visits your site, WordPress checks whether any scheduled events are due and runs them before serving the page.
This means on a low traffic site, the watchdog might not tick for several minutes or even hours if nobody visits. On a crashed site with zero traffic, it might never tick at all, because the crash happens before WordPress gets far enough into its boot sequence to check the cron schedule.
The fix is a real system cron. One line in your server’s crontab:
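The exact line depends on your host and domain, but it typically looks like this, with example.com as a placeholder; the doing_wp_cron query argument tells WordPress to run any due events:

```
* * * * * curl -s "https://example.com/wp-cron.php?doing_wp_cron" > /dev/null 2>&1
```

If you go this route, many guides also suggest adding define('DISABLE_WP_CRON', true); to wp-config.php so visitor requests do not trigger a second, competing cron run.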
This hits wp-cron.php every 60 seconds regardless of visitor traffic. Combined with the watchdog plugin, it means your site self heals within two minutes of a plugin crash, even if nobody is visiting.
If you are on shared hosting without cron access, services like EasyCron or cron-job.org can make the same request externally. Some managed WordPress hosts (Kinsta, WP Engine, Cloudways) already run system cron for you. Check with your host.
8. Test It Yourself
Confidence in a recovery tool comes from seeing it work. Included with this post is a downloadable test plugin, CloudScale Crash Test, which does exactly one thing: throw a fatal error on every request, immediately white-screening your site. To test the recovery loop end to end:
Install and activate CloudScale Plugin Crash Recovery on your site
Confirm your system cron is running (or that your site has enough traffic to trigger WP-Cron reliably)
Install and activate the CloudScale Crash Test plugin
Your site will immediately show: “There has been a critical error on this website. Please check your site admin email inbox for instructions.”
Wait 60 to 120 seconds
Refresh your site. It should be back online
Check your plugins list. CloudScale Crash Test should be gone
The crash test plugin contains a single throw new \Error() statement at the top level. It is not subtle. It does not simulate a partial failure or an intermittent bug. It kills the site immediately on every request. If the watchdog can recover from this, it can recover from any plugin that fatal errors within the recovery window.
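In sketch form, a plugin like that amounts to the following; the header fields shown here are illustrative, not the published plugin's exact metadata:

```php
<?php
/**
 * Plugin Name: CloudScale Crash Test
 * Description: Throws a fatal error on every request. Recovery testing only.
 */

// Top-level throw: executes the moment WordPress includes this file,
// killing every request before anything else can load.
throw new \Error('CloudScale Crash Test: intentional fatal error.');
```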
Do not install the crash test plugin on a production site without the recovery plugin active. If you do and your site is down, SSH in and run:
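Something like the following, assuming a standard document root:

```shell
# Delete the crash-test plugin's files; WordPress removes it from the
# active plugins list automatically on the next request.
rm -rf /var/www/html/wp-content/plugins/cloudscale-crash-test-plugin
```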
Adjust the path to match your WordPress installation. On most shared hosts this will be /home/username/public_html/wp-content/plugins/cloudscale-crash-test-plugin/. Your site will come back on the next request.
9. When This Does Not Help
No tool solves every problem, and it is worth being explicit about the boundaries.
The watchdog does not help if the crash is caused by a theme. Themes are loaded through a different mechanism and the watchdog only scans the plugins directory. It does not help if the crash is caused by a mu-plugin (must-use plugin), because mu-plugins load before regular plugins and before the cron system has a chance to act. It does not help if the database is down, because WordPress cannot read its own options (including the active plugins list) without a database connection. It does not help if the server’s PHP process is completely dead, because there is no PHP runtime to execute the cron tick.
It also does not help if the crash happens more than 10 minutes after the plugin was installed. If you install a plugin at 09:00 and it causes a crash at 11:00 due to a cron job or a deferred activation hook, the plugin’s file modification time is two hours old and outside the recovery window. The watchdog will see the crash but find no candidate to remove. This is a design tradeoff: a wider window catches more edge cases but increases the risk of removing an innocent plugin.
The watchdog is one layer in a broader defence strategy. It handles the most common failure mode (a recently installed or updated plugin that immediately crashes the site) and handles it automatically. For everything else, you still need backups, monitoring, and access to your server.
10. The Code
The full source code is available on GitHub under GPLv2. It is a single PHP file with no dependencies.
The crash test plugin is available as a zip download attached to this post.
Install the recovery plugin. Set up your system cron. Forget about it until it saves you at 2am on a Saturday when a plugin auto update goes wrong and you are nowhere near a terminal. That is the point.
It is 1972. A group of very serious men in very wide ties are gathered in a very beige conference room. They are about to make decisions that will haunt your change advisory board fifty years from now. The following is a faithful reconstruction of that meeting, because clearly someone needed to write it down.
CHAIRMAN: Gentlemen, we need to computerise the bank. The IBM salesman is outside. He’s been there since Tuesday. Security has tried to remove him twice. He seems to feed on rejection.
HEAD OF TECHNOLOGY (there is only one of him, and he is wearing a short-sleeved shirt, which everyone agrees is suspicious): We need a system that handles everything. Accounts, transactions, interest, fees, reporting. Everything.
CHAIRMAN: Everything?
HEAD OF TECHNOLOGY: Everything. In one place. One machine. One vendor.
CHAIRMAN: Should we perhaps have two vendors? For resilience?
HEAD OF TECHNOLOGY: Absolutely not. We want one vendor. Ideally one who makes hardware that only runs their software, so that if we ever want to leave we have to physically replace the building. That’s what I call commitment.
COMPLIANCE OFFICER: Will this system be easy to change when regulations evolve?
HEAD OF TECHNOLOGY: Change? Why would we change it? We’re going to write it in a language that reads like English was translated into German and then back into English by someone who had only ever read a tax return. That will ensure only a very specific kind of person can maintain it, and that person will be irreplaceable. That’s job security for everyone, really.
COMPLIANCE OFFICER: Visionary.
HEAD OF TECHNOLOGY: We’re going to run everything on a single box. All products. All customers. All transactions. Payments, lending, savings, reporting: one box, all of it, one throat to choke.
OPERATIONS MANAGER: What if the box falls over?
HEAD OF TECHNOLOGY: Then we have a disaster recovery plan.
OPERATIONS MANAGER: How long will recovery take?
HEAD OF TECHNOLOGY: Several hours. Possibly a day. We’re still working on the documentation. The recovery procedure will require a specialist who we will train exactly once and who will subsequently leave for a competitor. His successor will have the manual, which will be wrong by then, but written with such confidence that no one will question it until the actual disaster.
OPERATIONS MANAGER: And we need to test this?
HEAD OF TECHNOLOGY: We will test it once, during the original implementation, and then assume it still works forever. Testing it again would require a change freeze, three committees, a consultant from the vendor, and eight months. So: once.
CHAIRMAN: What about releases? How often will we update this system?
HEAD OF TECHNOLOGY: As rarely as possible. I’m thinking: annually. Maybe biennially if we can get away with it. Every release will be a full programme. Full regression testing across every function. Army of testers. Army of project managers managing the army of testers. A war room. Probably a dedicated floor.
FINANCE DIRECTOR: That sounds expensive.
HEAD OF TECHNOLOGY: It’s not expensive, it’s thorough. The release will take between six and eighteen months. We will begin change freeze approximately four months before the release date, which means the business cannot ship anything new for the better part of a year. This is a feature. It keeps everyone focused.
FINANCE DIRECTOR: Focused on what?
HEAD OF TECHNOLOGY: On not breaking anything. Which is the same as progress, if you think about it correctly.
CHAIRMAN: What do our customers get out of this release?
Silence.
HEAD OF TECHNOLOGY: Better MIS reports.
CHAIRMAN: They won’t see those.
HEAD OF TECHNOLOGY: No, but we will, and they are very clean reports. Very clean. Some of the cleanest reports you’ll ever see. Worth every penny of the hundred million we’re spending.
OPERATIONS MANAGER: How will the operators interact with this system?
HEAD OF TECHNOLOGY: Through a screen. One screen. The screen will have approximately four hundred fields. Many of them will be unlabelled, for security. The operator will learn which combinations of field values correspond to which operations through a combination of formal training, informal knowledge transfer, and trial and error with real money. Experienced operators will develop an almost mystical intuition for it. New operators will occasionally initiate a full principal repayment when they meant to process an interest charge, but that’s a training issue, not a system issue.
COMPLIANCE OFFICER: And there’s no confirmation step?
HEAD OF TECHNOLOGY: There’s a button. The button says OK. It always says OK. It says OK whether you’re creating a savings account or accidentally wiring nine hundred million dollars to the wrong counterparties. We felt a consistent user experience was important.
HEAD OF TECHNOLOGY: Now, about scaling. This system cannot scale horizontally. If we need more capacity we buy a bigger box. When the box reaches its limit we buy the biggest box IBM makes. When we exceed that box, we have a different kind of conversation.
OPERATIONS MANAGER: What kind of conversation?
HEAD OF TECHNOLOGY: The kind where we explain to the board that we need to run batch jobs overnight because we’ve run out of intraday capacity, and that customers cannot see their real balances until morning, and that this is normal and expected and completely fine. The batch run will begin at midnight. If it’s not finished by opening, we delay opening. This will never be a problem because it’s 1972 and banks open at ten.
CHAIRMAN: What happens in fifty years when banks operate around the clock and customers expect real time balances and instant payments from their pocket computers?
Long pause.
HEAD OF TECHNOLOGY: I’m going to stop you there. That is an unreasonable hypothetical and I think you should apologise for raising it.
FINANCE DIRECTOR: How long will implementation take?
HEAD OF TECHNOLOGY: Three years, minimum. Probably five if we want to do it properly.
FINANCE DIRECTOR: And what does ‘doing it properly’ deliver?
HEAD OF TECHNOLOGY: A working system. Same products as before. Same prices as before. Same service model as before. Customers will notice nothing has changed.
FINANCE DIRECTOR: That’s the success case?
HEAD OF TECHNOLOGY: That is the dream. If nobody notices, we’ve done it perfectly. If customers call in to say things are different, something has gone wrong.
FINANCE DIRECTOR: And when will we need to replace this system?
HEAD OF TECHNOLOGY: Never. This is the last system we’ll ever need.
Another long pause.
HEAD OF TECHNOLOGY: Or in about fifteen years, when the business has changed enough that this system can no longer accommodate it, and we’ll need to select a new vendor and begin a new three to five year programme that will produce the same products at the same prices that customers will not notice have changed.
CHAIRMAN: And then?
HEAD OF TECHNOLOGY: And then we’ll do it again. And then again. Each time, we’ll write a requirements document that captures everything the old system did plus everything the business has always wanted, and we’ll select the new vendor who covers the most requirements. And each time, we will have purchased a slightly more modern version of the same architectural mistake.
CHAIRMAN: That sounds like a treadmill.
HEAD OF TECHNOLOGY: I prefer the term upgrade cycle. Much more professional.
COMPLIANCE OFFICER: One final question. Could we instead build separate systems for each domain (payments, lending, identity), each independently deployable, each owning its own data, able to scale on its own terms and change without disrupting everything else?
The room goes very quiet.
HEAD OF TECHNOLOGY: That’s not how banking works.
COMPLIANCE OFFICER: Why not?
HEAD OF TECHNOLOGY: Because banking is complex. And regulated. And the vendors tell us it’s impossible. And frankly if it were possible someone would have done it already.
Forty-five years later, Monzo does exactly this with a team a fraction of the size. But that’s a different meeting.
CHAIRMAN: Very good. Let the IBM man in.
The IBM man has apparently already let himself in. He has been sitting at the head of the table for the last twenty minutes. Nobody is sure when he arrived.
IBM SALESMAN: Gentlemen. I understand you want one vendor, one box, one contract, a language only specialists can read, releases that take eighteen months, a user interface that requires interpretive experience, disaster recovery nobody has tested since 2003, and a licensing model that ensures leaving us is economically indistinguishable from burning the bank to the ground.
He opens his briefcase.
IBM SALESMAN: I have just the thing.
And that, more or less, is how we got here.
The remarkable thing is not that this meeting happened in 1972. The remarkable thing is that some version of it is still happening today, in banks that have had fifty years to notice the pattern, conducted by people clever enough to know better, producing requirements documents that run to hundreds of pages and conclude, with great confidence, that what the bank needs is a newer version of the same decision.
The neobanks walked in, ignored the IBM salesman entirely, and built banks that work. The architecture was never the mystery. The willingness to walk out of the meeting was.
Andrew Baker is Chief Information Officer at Capitec Bank. He writes about enterprise architecture, banking technology, and the infinite patience required to watch the same mistake happen in slow motion at andrewbaker.ninja.
If you publish online, you should periodically search for yourself, not out of ego but out of discipline. The internet is an echo system, and if you do not measure where your ideas travel, you are operating blind. You want to know who is linking to you, who is quoting you, who is criticising you, who is republishing you, and where your arguments are quietly spreading beyond your own domain.
The obvious approach fails immediately. If you Google your own site, Google mostly returns your own site. That tells you nothing. The signal you want is everything except you.
Below are simple search operators that remove the noise and expose what actually matters.
1. Find Mentions of Your Site While Excluding Your Site
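Using this site's own domain as the example (swap in your own), the query looks like this:

```
"andrewbaker.ninja" -site:andrewbaker.ninja
```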
The quotation marks force an exact match, which means Google will only return pages that explicitly reference your domain. The minus site operator removes your own website from the results. What remains is far more interesting. You will see forum discussions, citations, blog references, scraped content, and unexpected backlinks. This single query often reveals more than expensive SEO dashboards because it exposes raw mentions rather than curated metrics.
2. Exclude LinkedIn to Remove Platform Dominance
If you publish heavily on LinkedIn, it will quickly dominate search results. That makes it harder to see independent mentions. To remove that bias, extend the query:
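Again using this site's domain as the example:

```
"andrewbaker.ninja" -site:andrewbaker.ninja -site:linkedin.com
```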
Now Google excludes both your own site and LinkedIn. What remains is third party visibility. This is where genuine amplification lives. It is also where unattributed copying and aggregation frequently hide.
3. Search for Your Name Without Your Domain
Sometimes people reference you without linking your website. To find those mentions, search your name and exclude your domain:
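Using the author of this site as the example:

```
"Andrew Baker" -site:andrewbaker.ninja
```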
This approach surfaces podcast appearances, guest posts, conference listings, scraped biographies, and commentary threads where your ideas are being debated without your direct participation.
4. Detect Scraping by Searching Unique Sentences
If you suspect that an article has been copied, take a distinctive sentence from it and search for that exact phrase in quotation marks:
"Core banking is a terrible idea. It always was." -site:andrewbaker.ninja
If that sentence appears elsewhere, you will find it immediately. This method is brutally effective because scrapers rarely rewrite deeply; they copy verbatim. One well chosen sentence is often enough to expose replication networks.
5. Approximate Backlink Discovery
Google deprecated the link operator years ago, but you can still approximate backlink discovery by searching for full URLs:
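For example, to find pages citing a specific article on this site (the path here is a placeholder; substitute one of your real post URLs):

```
"https://andrewbaker.ninja/your-post-url/" -site:andrewbaker.ninja
```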
This reveals pages that reference that exact article URL. It will not capture everything, but it frequently uncovers discussions and citations that automated tools overlook.
6. Use This as a Weekly Discipline
You do not need specialist monitoring software to understand your footprint. You need quotation marks for precision, the minus site operator for exclusion, and the habit of checking regularly. Once a week is sufficient. The goal is not obsession; it is awareness.
Most creators never perform these searches. As a result, they miss evidence of influence, silent supporters, quiet critics, and outright content theft. A simple set of structured queries changes that dynamic. Google is not merely a discovery engine for information. It is a diagnostic instrument for understanding where you exist and how your work propagates across the web.
If you use Claude Desktop to edit code, write patches, or build plugin files, you have probably hit the same wall I did: Claude runs in a sandboxed Linux container. It cannot read or write files on your Mac. Every session resets. There is no shared folder. You end up copy pasting sed commands or trying to download patch files that never seem to land in your Downloads folder.
The solution is the Model Context Protocol filesystem server. It runs locally on your Mac and gives Claude direct read and write access to a directory you choose. Once set up, Claude can edit your repo files, generate patches, and build outputs directly on your machine.
1. Prerequisites
You need Node.js installed (the filesystem server runs via npx), and you need Claude Desktop installed and updated to the latest version.
2. Create the Configuration File
Claude Desktop reads its MCP server configuration from a JSON file. Run this command in your terminal, replacing the directory path with wherever you want Claude to have access:
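A minimal sketch of that command, assuming the default macOS config location and using a placeholder directory path you should replace with your own. Note this writes a fresh config file; if you already have one, merge instead (see below):

```shell
# Write a Claude Desktop config that registers the filesystem MCP server.
# Replace /Users/yourname/Desktop/github with the directory you want Claude to access.
CONFIG_DIR="$HOME/Library/Application Support/Claude"
mkdir -p "$CONFIG_DIR"
cat > "$CONFIG_DIR/claude_desktop_config.json" << 'EOF'
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/yourname/Desktop/github"
      ]
    }
  }
}
EOF
```

The `npx -y` invocation means the server package is fetched and run on demand, so there is nothing else to install beyond Node.js itself.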
If you already have a claude_desktop_config.json with other MCP servers configured, add the filesystem block inside the existing mcpServers object rather than overwriting the file.
3. Restart Claude Desktop
This is important. You must fully quit Claude Desktop with Cmd+Q (not just close the window) and reopen it. The MCP server configuration is only loaded at startup.
4. What to Say to Claude to Verify and Use the MCP Filesystem
Here is the honest truth about what happened when I first tested this. I opened Claude Desktop and typed:
List the files in my github directory
Claude told me it could not access my MacBook’s filesystem. It gave me instructions on how to use ls in Terminal instead. The MCP filesystem server was running and connected, but Claude defaulted to its standard response about being sandboxed.
I had to nudge it. I replied:
What about the MCP?
That was all it took. Claude checked its available tools, found the MCP filesystem server, called list_allowed_directories to discover the paths, and then listed my files directly. From that point on it worked perfectly for the rest of the conversation.
The lesson is that Claude does not always automatically reach for MCP tools on the first ask. If Claude tells you it cannot access your files, remind it that you have MCP configured. Once it discovers the filesystem tools, it will use them naturally for the rest of the session.
After the initial nudge, everything becomes conversational. You can ask Claude to:
Show me the contents of my README.md file
What is in the config directory?
Read my package.json and tell me what dependencies I have
Claude can also write files directly to your Mac. This is where MCP becomes genuinely powerful compared to the normal sandboxed workflow:
Create a new file called notes.txt in my github directory with a summary of what we discussed
Edit my script.sh and add error handling to the backup function
Write a new Python script called cleanup.py that deletes log files older than 30 days
You do not need special syntax or commands. Claude figures out which MCP tool to call based on what you ask for. But be prepared to remind it on the first message of a new conversation that MCP is available. Once it clicks, it just works.
If Claude still cannot find the filesystem tools after you mention MCP, the server is not connected. Go back to the troubleshooting section and verify your configuration file is valid JSON, Node.js is installed, and you fully restarted Claude Desktop with Cmd+Q.
5. Why This Matters: What I Actually Use This For
I maintain several WordPress plugins across multiple GitHub repos. Before setting up MCP, getting Claude’s changes onto my machine was a nightmare. Here is what I went through before finding this solution.
The Pain Before MCP
Patch files that never download. Claude generates patch files and presents them as downloadable attachments in the chat. The problem is that clicking the download button often does nothing. The file simply does not appear in ~/Downloads. I spent a solid 20 minutes trying ls ~/Downloads/*.patch and find commands looking for files that were never there.
sed commands that break in zsh. When patch files failed, Claude would give me sed one liners to apply changes. Simple ones worked fine. But anything involving special characters, single quotes inside double quotes, or multiline changes would hit zsh parsing errors. One attempt produced zsh: parse error near '}' because the heredoc content contained curly braces that zsh tried to interpret.
Base64 encoding that is too long to paste. When sed failed, we tried base64 encoding the entire patch and piping it through base64 -d. The encoded string was too long for the terminal. zsh split it across lines and broke the decode. We were solving problems that should not exist.
Copy paste heredocs that corrupt patches. Git patches are whitespace sensitive. A single missing space or an extra newline from copy pasting into the terminal will cause git apply to fail silently or corrupt your files. This is not a theoretical risk. It happened.
No shared filesystem. Claude runs in a sandboxed Linux container that resets between sessions. My files are on macOS. There is no mount, no symlink, no shared folder. We tried finding where Claude Desktop stores its output files on the Mac filesystem by searching ~/Library/Application Support/Claude. We found old session directories with empty outputs folders. Nothing bridged the gap.
What I Do Now With MCP
With the filesystem MCP server running, Claude reads and writes files directly in my local git repo. Here is my actual workflow for plugin development:
Direct code editing. I tell Claude to fix a bug or add a feature. It opens the file in my local repo clone at ~/Desktop/github/cloudscale-page-views/repo, makes the edit, and I can see the diff immediately with git diff. No intermediary files, no transfers.
CSS debugging with browser console scripts. Claude gives me JavaScript snippets to paste into the browser DevTools console to diagnose styling issues. We used getComputedStyle to find that two tabs had different font sizes (12px vs 11px) and that macOS subpixel antialiasing was making white on green text render thicker. Claude then fixed the source files directly on my machine.
Version bumping. Every change to the plugin requires bumping CSPV_VERSION in cloudscale-page-views.php. Claude does this automatically as part of each edit.
Git commit and push. After Claude edits the files, I run one command:
git add -A && git commit -m "description" && git push origin main
Zip building and S3 deployment. I have helper scripts that rebuild the plugin zip from the repo and upload it to S3 for WordPress to pull. The whole pipeline from code change to deployed plugin is: Claude edits, I commit, I run two scripts.
The Difference
Before MCP: 45 minutes of fighting file transfers to apply a two line CSS fix.
After MCP: Claude edits the file in 3 seconds, I push in 10 seconds.
If you use Claude Desktop for any kind of development work where the output needs to end up on your local machine, set up the MCP filesystem server. It is not optional. It is the difference between Claude being a helpful coding assistant and Claude being an actual development tool.
6. Security Considerations
The filesystem server only grants access to the directories you explicitly list in the configuration. Claude cannot access anything outside those paths. Each action Claude takes on your filesystem requires your approval through the chat interface before it executes.
That said, only grant access to directories you are comfortable with Claude reading and modifying. Do not point it at your entire home directory.
7. Troubleshooting
The tools icon does not appear after restart: Check that the config file is valid JSON. Run:
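One quick way to check, assuming the default macOS config path, is to pipe the file through Python's JSON parser:

```shell
# Validate the Claude Desktop config file (default macOS path assumed).
CONFIG="$HOME/Library/Application Support/Claude/claude_desktop_config.json"
if python3 -m json.tool < "$CONFIG" > /dev/null 2>&1; then
  echo "Config is valid JSON"
else
  echo "Config is missing or invalid JSON"
fi
```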
npx command not found: Make sure Node.js is installed and the npx binary is in your PATH. Try running npx --version in the terminal.
Server starts but Claude cannot access files: Verify the directory paths in the config are absolute paths (starting with /) and that the directories actually exist.
Permission errors: The MCP server runs with your user account permissions. If you cannot access a file normally, Claude cannot access it either.
8. Practical Workflow Example
Here is the workflow I use for maintaining WordPress plugins with Claude:
Clone the repo to ~/Desktop/github/my-plugin/repo
Ask Claude to make changes (it edits the files directly via MCP)
Run git add -A && git commit -m "description" && git push origin main in the terminal
Build and deploy
No intermediary steps. No file transfer headaches. Claude works on the same files as me.
Summary
The MCP filesystem server bridges the gap between Claude’s sandboxed environment and your local machine. It takes five minutes to configure and eliminates the most frustrating part of using Claude Desktop for real development work. The package name is @modelcontextprotocol/server-filesystem and the documentation lives at modelcontextprotocol.io.
GitHub is not just a code hosting platform. It is your public engineering ledger. It shows how you think, how you structure problems, how you document tradeoffs, and how you ship. If you build software and it never lands on GitHub, as far as the wider technical world is concerned, it does not exist.
This guide walks you from nothing to a clean public repository that is properly licensed, tagged, and released. No clicking around aimlessly. No half configured repos. No “I’ll tidy it later.” We will automate the entire process.
1 Why GitHub Matters
Before the mechanics, understand the leverage. Recruiters, engineers, and contributors can see your work, which gives you visibility you cannot get any other way. Clean commits and structured repos demonstrate discipline, and that builds credibility in a way that talking about your work never will. Tags and releases formalise change through proper versioning, and GitHub Releases turn your repo into a distribution channel. Beyond all of that, issues and pull requests scale development beyond you by opening the door to community contribution.
If you are building WordPress plugins, internal tooling, or AI integrations, publishing them properly is a signal. Discipline in open source hygiene matters.
2 The Manual Way vs The Correct Way
The manual way looks like this: install Git, create a repo in the browser, clone it, copy your files across, add a README, add a LICENSE, commit, push, tag, upload a release, add topics, then go back and fix all the mistakes you made along the way. That is friction. Friction creates inconsistency. Inconsistency creates messy repos.
Instead, automate it once and reuse it.
3 One Shot GitHub Publish Script (macOS)
The script below handles everything in a single pass. It installs Homebrew if needed, then installs Git and GitHub CLI. It authenticates you with GitHub via browser OAuth so you never have to manually create tokens. It then scaffolds a clean project directory with an MIT license, a sensible .gitignore, and a README.md. From there it initialises Git, creates the public GitHub repo, pushes the initial commit, tags a release, and sets repository topics. You edit three variables at the top of the script and the rest takes care of itself.
#!/usr/bin/env bash
# ============================================================================
# github-publish.sh
#
# One shot script to install tools, create a public GitHub repo, and publish
# your project as a clean, properly licensed open source repository.
#
# What it does:
# 1. Installs Homebrew, Git, and GitHub CLI (gh) if not already present
# 2. Authenticates with GitHub via browser OAuth
# 3. Scaffolds LICENSE, .gitignore, and README.md
# 4. Creates the public repo, pushes, tags a release, and sets topics
#
# Usage:
# chmod +x github-publish.sh
# ./github-publish.sh
#
# Prerequisites:
# macOS with admin rights.
# ============================================================================
set -euo pipefail
# ---------- configuration (edit these three lines) ----------
REPO_NAME="my-project"
REPO_DESC="A short description of what your project does."
VERSION="1.0.0"
# ------------------------------------------------------------
echo ""
echo "========================================="
echo " GitHub Open Source Publish"
echo " Project: $REPO_NAME"
echo "========================================="
echo ""
# ── 1. Homebrew ──────────────────────────────────────────────────────────────
if ! command -v brew &>/dev/null; then
echo "[1/7] Installing Homebrew..."
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
if [[ -f /opt/homebrew/bin/brew ]]; then
eval "$(/opt/homebrew/bin/brew shellenv)"
fi
else
echo "[1/7] Homebrew already installed."
fi
# ── 2. Git ───────────────────────────────────────────────────────────────────
if ! command -v git &>/dev/null; then
echo "[2/7] Installing Git..."
brew install git
else
echo "[2/7] Git already installed ($(git --version))."
fi
# ── 3. GitHub CLI ────────────────────────────────────────────────────────────
if ! command -v gh &>/dev/null; then
echo "[3/7] Installing GitHub CLI..."
brew install gh
else
echo "[3/7] GitHub CLI already installed ($(gh --version | head -1))."
fi
# ── 4. GitHub auth ───────────────────────────────────────────────────────────
if ! gh auth status &>/dev/null; then
echo "[4/7] Logging into GitHub..."
echo " A browser window will open. Approve the OAuth request."
gh auth login --web --git-protocol https
else
echo "[4/7] Already authenticated with GitHub."
fi
# ── 5. Scaffold project ─────────────────────────────────────────────────────
echo "[5/7] Scaffolding project directory..."
mkdir -p "$REPO_NAME"
cd "$REPO_NAME"
# MIT license (swap this for GPLv2 or Apache if you prefer)
GH_USER=$(gh api user --jq .login)
YEAR=$(date +%Y)
cat > LICENSE << EOF
MIT License
Copyright (c) $YEAR $GH_USER
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
EOF
# .gitignore
cat > .gitignore << 'GITIGNORE'
# macOS
.DS_Store
._*
# IDE
.idea/
.vscode/
*.swp
*.swo
# Build artifacts
node_modules/
vendor/
dist/
*.zip
*.tar.gz
# Environment
.env
.env.local
GITIGNORE
# README.md
cat > README.md << README
# $REPO_NAME
$REPO_DESC
## Getting Started
Clone the repository and you are ready to go:
\`\`\`bash
git clone https://github.com/$GH_USER/$REPO_NAME.git
cd $REPO_NAME
\`\`\`
## License
MIT License. See [LICENSE](LICENSE) for the full text.
## Author
$GH_USER
README
# Initialise git repo
git init -b main
git add -A
# Set commit identity from GitHub if not already configured globally
if ! git config user.email &>/dev/null; then
GH_EMAIL=$(gh api user --jq '.email // empty')
if [[ -z "$GH_EMAIL" ]]; then
GH_EMAIL="${GH_USER}@users.noreply.github.com"
fi
git config user.name "$GH_USER"
git config user.email "$GH_EMAIL"
fi
git commit -m "Initial commit: $REPO_NAME v${VERSION}"
# ── 6. Create repo and push ─────────────────────────────────────────────────
echo "[6/7] Creating public GitHub repo and pushing..."
gh repo create "$REPO_NAME" \
--public \
--description "$REPO_DESC" \
--source . \
--remote origin \
--push
# ── 7. Tag release and set topics ───────────────────────────────────────────
echo "[7/7] Creating release and setting topics..."
gh release create "v${VERSION}" \
--title "v${VERSION}" \
--notes "Initial open source release of $REPO_NAME."
# Add your own topics here. These help people discover your repo.
gh repo edit \
--add-topic open-source
REPO_URL=$(gh repo view --json url --jq .url)
echo ""
echo "========================================="
echo " Done!"
echo ""
echo " Repository: $REPO_URL"
echo " Release: $REPO_URL/releases/tag/v${VERSION}"
echo "========================================="
When the script completes, you have a public repository with a clean initial commit, a tagged release, and a structured open source project ready for contribution. The whole thing runs in under a minute on a machine that already has Homebrew installed.
4 Anatomy of a Good README
The README is the front door of your project. Most developers either skip it entirely or write something so vague it tells you nothing. A good README answers three questions immediately: what does this project do, how do I use it, and where is the license.
Here is a minimal example that covers the essentials:
# hello-world
A minimal CLI tool that prints a greeting. Built as a reference for clean
GitHub repository structure.
## Getting Started
Clone the repository:
git clone https://github.com/your-username/hello-world.git
cd hello-world
Run the script:
python hello.py
You should see:
Hello, world!
## Usage
Pass a name as an argument to personalise the greeting:
python hello.py Andrew
Hello, Andrew!
## Requirements
Python 3.8 or higher. No external dependencies.
## License
MIT License. See [LICENSE](LICENSE) for the full text.
## Author
[Your Name](https://your-site.com)
That is enough to tell someone everything they need to know in thirty seconds. You can always expand it later with sections for configuration, contributing guidelines, or architecture notes, but this baseline should exist from day one.
5 Final Thought
Most developers overthink GitHub and under invest in automation. The difference between a hobby repo and a professional one is not complexity. It is structure.
Automate structure once. Then focus on shipping. Your code deserves to exist in public properly.
Most WordPress plugin developers eventually hit the same invisible wall: you ship an update, everything looks correct in the zip, the version number changes, the code is cleaner, and yet users report that the old JavaScript is still running. You check the file. It is updated. They clear cache. Still broken. Here is the uncomfortable truth: WordPress plugin uploads do not reliably overwrite existing files inside subdirectories. That single behaviour is responsible for an enormous amount of ghost bugs.
When WordPress installs or upgrades a plugin via zip upload, it extracts the archive into /wp-content/plugins/plugin-name/. It does not reliably purge old files, it may skip overwriting certain files, and it does not clean up removed subdirectories. If your previous version had assets/admin-v7.js and your new version ships assets/admin-v8.js, WordPress will add the new file but it will not remove the old one. Worse, if you reuse the same filename such as assets/admin.js, WordPress may silently skip replacing it depending on extraction behaviour, file permissions, or caching layers.
The result is subtle and destructive: you think v8 is running, but v7 is still executing in production. This is not a caching issue. This is a file lifecycle issue.
The first and most important structural decision is to avoid putting assets inside a subdirectory. Move everything to the plugin root folder. Instead of shipping plugin-name/plugin-name.php and plugin-name/assets/admin.js, ship plugin-name/plugin-name.php, plugin-name/admin.js, plugin-name/admin.css, and plugin-name/README.md. WordPress reliably extracts and overwrites files at the same level as the main plugin PHP file. Subdirectories are where stale files survive. Flattening your structure removes an entire class of upgrade bugs. It is not elegant. It is operationally correct.
Users do not always delete plugins cleanly. Hosting panels fail. Permissions vary. File deletions sometimes partially succeed. So add a safety net. When the plugin is deactivated, wipe asset files manually.
This ensures that when someone performs Deactivate, Delete, Upload, Activate, there are no survivors. Even if WordPress fails to delete a subdirectory, the deactivation hook already removed its contents. This is defensive engineering.
Not everyone deactivates before upgrading. Some upload via FTP, replace files manually, use automated deploy scripts, or install updates without deactivation. So add a version change detector. On admin_init, compare a stored version value with the current plugin version constant. If they differ, run cleanup.
This catches FTP upgrades, manual overwrites, partial deployments, and version mismatches. It also resets OPcache to eliminate stale PHP bytecode. Now your plugin self heals on version change.
Even if the filesystem is clean, browsers are not. When enqueueing scripts or styles, always use the plugin version constant as the ver parameter.
If you forget this step, browsers will continue serving cached assets even if the files are correct. This is standard practice and it is also the most commonly forgotten detail.
When you implement all four protections, your user install process becomes simple and reliable: Deactivate, Delete, Upload new zip, Activate. No SSH. No manual file cleanup. No stale JavaScript ghosts.
Plugin lifecycle management is not glamorous and it does not sell features, but broken upgrades destroy trust. Most plugin bugs blamed on WordPress being weird are actually poor file hygiene decisions. If your plugin changes asset structure over time, moves files between folders, renames scripts, or leaves old files behind, you are building technical debt into every user’s filesystem.
The fix is straightforward: flatten the structure, clean on deactivate, detect version changes, and bust caches correctly. Upgrade reliability is not about clever code. It is about eliminating stale state, because in production the filesystem is part of your architecture.
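If you script that cycle with WP-CLI, it looks something like the sketch below. This assumes WP-CLI is available on the host; "plugin-name" and the zip path are placeholders for your own plugin slug and build output.

```
wp plugin deactivate plugin-name
wp plugin delete plugin-name
wp plugin install /path/to/plugin-name.zip --activate
```

The deactivate step is what triggers the cleanup hook described above, so running it before delete matters.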
Health warning: this article may not make you feel happy, and it may not suit you to read it. I am not even sure I believe everything I am saying here, but I do believe in personally reflecting on the challenging questions it poses, to try to make myself a better leader. The article is simply asking: “what is the value of you, in the context of leading skilled engineering teams?”
There is a particular kind of executive confidence that appears in technology organisations. It usually sounds like this: “I don’t need to understand the tech. I manage outcomes.” It is normally followed by a transformation programme, several reorganisations, collapsing morale, and a very expensive consultancy engagement that promises clarity and delivers polished slideware.
Let’s be direct. Managing technologists without understanding technology is not a neutral handicap; it is an active risk multiplier. The more complex the environment, the more damaging the ignorance becomes.
Consider the keep fit instructor who is visibly overweight and hasn’t exercised in years. They may possess a certification, a job title, and a timetable full of classes. But they cannot teach what they do not know or do not believe in. Their clients sense it immediately. The credibility is gone before a single word is spoken. Technology leadership is no different. You cannot guide people through terrain you have never traversed, and you cannot inspire standards you cannot demonstrate.
So here is the guide you asked for.
1. Start by Accepting You Are Blind
If you do not understand software architecture, distributed systems, infrastructure, security models, delivery pipelines, data structures, and operational constraints, then you are blind to the shape of the terrain. You cannot properly see tradeoffs, shortcuts, fragility, or when someone is bluffing. Technology is not like sales or marketing where outcomes are often decoupled from deep domain mechanics. In technology, the mechanics are the outcome. Architecture decisions made in a whiteboard session today will determine scalability, cost, resilience, and regulatory exposure five years from now.
If you do not understand that dynamic, you are not steering the organisation. You are simply sitting in the passenger seat, pretending to hold the wheel.
2. Stop Pretending Delivery Is Just Project Management
Non technical leaders often default to process pageantry because it is visible and legible. They add more standups, more dashboards, more governance forums, and more colour coded status reports in the belief that visibility equals control. None of these artefacts fix poor architecture, reduce technical debt, compensate for a misaligned data model, or create good engineers.
When you cannot evaluate technical quality directly, you over index on visible artefacts. Documentation begins to look like competence and velocity charts start to look like value, while entropy compounds quietly underneath the surface. The system appears orderly right up until it fails.
3. You Will Optimise for the Wrong Things
If you do not understand technology, you will optimise for what you can easily measure. Feature count, story points, headcount, burn rate, vendor promises, and analyst positioning become proxies for progress because they are tangible and easy to present upward.
Technologists, however, optimise for variables that are less visible but far more consequential: latency, throughput, failure domains, blast radius, observability, coupling, and long term maintainability. These variables are not intuitive if you have never built and operated systems, so you unintentionally pressure teams to move in directions that make the system worse while appearing more productive. You celebrate feature velocity while quietly accumulating architectural collapse.
4. You Will Reward Confidence Over Competence
In technical environments, there are engineers who explain complexity cautiously and engineers who promise simplification confidently. If you cannot evaluate the substance behind those positions, you will reward the confidence because it is easier to understand and more comforting to hear. The loud architect who claims “this is easy” will often outrank the quiet engineer who warns that the proposal will create long term fragility.
Over time, bad decisions institutionalise themselves. Real builders leave because their judgement is repeatedly overruled by narrative. Political performers remain because they are aligned with what leadership can recognise. The technical centre of gravity shifts from engineering to performance and display, and once that shift occurs it is extremely difficult to reverse.
4.1 Architecture Is Not Blockchain. Stop Voting on It.
There is a dangerous instinct among non technical leaders to democratise technology decisions. It feels fair. It feels inclusive. It feels like good governance. It is none of these things.
Technology architecture decisions are not like blockchain. There is no distributed consensus protocol that produces good system design. You cannot put an architecture to a vote and expect the result to be sound. Consensus in architecture does not produce quality. It produces compromise, and compromise in system design is how you end up with a monolith wearing a microservices costume.
Good architecture sits with the few. It always has. The people who can see failure modes before they materialise, who understand how coupling decisions made today will constrain optionality in three years, who can hold the full system topology in their heads while evaluating a proposed change. These people are rare. They are not the majority, and they should never need to be.
If you cannot immediately discern who these people are, that is not an excuse to default to democracy. It is a problem you must solve. You can look at track record. Who built the things that actually work? Who predicted the failures that eventually materialised? Who do the other strong engineers defer to when it matters? You can ask the people who seem to know. Genuine technical talent recognises other genuine technical talent with remarkable consistency. The engineers who understand the system will tell you who else understands the system if you ask them honestly and listen without filtering their answer through your own preferences.
What you cannot do is use “who do I get on with” as a proxy for technical authority. Rapport is not architecture. The person whose company you enjoy at lunch is not necessarily the person who should be making database partitioning decisions. In fact, the odds are reasonable that the person you need to trust with these decisions is someone you find difficult. They may be blunt. They may lack patience for ambiguity. They may not perform enthusiasm on demand or soften their assessments to make the room comfortable. That is not a deficiency. That is frequently what deep technical clarity looks like when it has not been sanded down by corporate socialisation.
Your job is not to find someone who makes architecture decisions and is also easy to manage. Your job is to find the person who makes the right architecture decisions and then do the leadership work around them. That means helping them evolve how they communicate without requiring them to dilute what they communicate. It means cushioning how their assessments land with the people you probably get on better with, the ones who find directness confronting. It means translating their clarity into language the room can absorb without asking them to do the translating themselves, because the moment you make that their job, you have redirected their energy from engineering to diplomacy, and you will get less of both.
The moment you make it the job of those who know to convince those who do not, you have inverted the burden of proof. You are asking the surgeon to justify the incision to the waiting room. The engineer who sees the correct path must now spend their energy selling it to people who lack the context to evaluate it, navigating politics, building slide decks, softening language, and managing egos. Their actual job, building things that work, becomes secondary to the performance of persuasion.
This is how you build an idiocracy. Not through malice, but through process. The smart people do not leave because they are angry. They leave because they are tired. Tired of explaining things that should not need explaining. Tired of watching inferior decisions win because they were presented more palatably. Tired of carrying the cognitive load of the system while simultaneously carrying the emotional load of convincing people who will never understand it.
And when they leave, they do not come back. The institutional knowledge walks out with them. What remains is a leadership structure perfectly optimised for consensus and utterly incapable of producing anything architecturally coherent.
So if you find yourself in a room where architecture is being decided by a show of hands, you have already failed. Your job is not to count votes. Your job is to identify the people who actually know, give them authority, back their judgement, and manage everyone else around that judgement. Not the other way around.
The few who know should be protected and empowered. The many who do not should be managed, guided, and kept from diluting decisions they are not equipped to make. That is not elitism. That is engineering.
5. You Will Torture Your Best People
When a non technical leader takes over a technology team, they will almost always find A players. Their track record is documented, their peers defer to them, their output is measurable, and their understanding of the system is encyclopaedic.
Management culture tells you to grow people, stretch them, and challenge them. That works when you understand the craft. When you do not, it becomes interference dressed up as development.
If you do not understand what your A players do, you have two options. You can support them and back them, which means protecting their time, removing obstacles, trusting their judgement on matters you cannot evaluate, and quietly taking credit while they build things that matter. Or you can second guess their architecture, impose frameworks on their process, redirect their priorities based on something you skimmed in a blog post, require justification for decisions you cannot interrogate, and surround them with governance rituals that treat excellence as a compliance risk. You will not improve them. You will exhaust them.
The strongest leaders recognise that their job with exceptional engineers is not to improve them but to protect them from everything that would prevent them from doing what they are already exceptional at, including protecting them from unnecessary leadership interference.
5.1 The Messi Test
If you were managing Lionel Messi at the peak of his career, would you try to make him a better footballer? Would you sit him down and say, “I think you should score more goals,” or “You should do cooler dances after goals,” or “You should do more of that overhead scissors stuff, it looks great on my Instagram”? Of course not. You would never say this, because you understand exactly what Messi would think: “What planet are you on?”
You would not attempt to coach the best player in the world on how to kick a ball. That would be delusion masquerading as leadership.
But you might help him in other ways. You might shield him from media noise so he can focus on performance. You might connect him with world class tax advisors so he does not learn about compliance through public scandal. You might create an environment where he can speak honestly about pressure, fear, and expectations without reputational risk. You might remove friction from his life so that his talent can compound. That is leadership.
Technologists are your Messis. I don’t mean they are expensive “rock stars” who should be worshipped, but they are highly skilled, highly trained engineers operating at the edge of complexity most executives cannot see, let alone master. The moment you start telling them how to “score more goals” in their domain, you lose credibility. The moment you start removing obstacles, clarifying intent, protecting focus, and supporting their growth as humans, you become useful.
Leadership is not about demonstrating your value. It is about increasing theirs.
6. Learn to See the World Through Their Eyes
A significant proportion of high performing technologists are wired for precision, depth, and pattern recognition in ways that do not always align neatly with corporate culture. Some sit somewhere on the autism spectrum. Many process imperfection as persistent cognitive noise. A brittle workaround in a codebase, a decision that feels architecturally wrong, a governance process quietly ignored, all of it remains present in their thinking. They are not being difficult. They are being accurate.
Corporate ambiguity, political signalling, and performative enthusiasm do not create alignment for these engineers. They create anxiety. Mixed messages do not feel strategic. They feel incoherent.
Good leaders regulate the room. They absorb noise, reduce ambiguity, speak plainly, and provide calm clarity. A simple, credible “we have this” from someone who understands the system can settle a mind that has been carrying too much context alone.
Poor leaders amplify the noise. They respond with more process, more reporting, more governance. The engineer leaves more dysregulated than they arrived.
Your job is not to fix them. It is to connect with them. Be explicit. Mean what you say. Offer precise recognition for precise contributions. Treat their way of thinking as an asset rather than a personality flaw. That is not a programme. It is leadership.
7. The Rub: Management Is Not Parenting
Here is the rub. Every one of us has had a boss. From the day we were born we were trained to be told what to do. Parents, teachers, coaches. We learned compliance before we learned autonomy. So in a strange way, everyone believes they understand management because everyone has been managed.
But there is a structural flaw in that analogy.
When you tell a child what to do, it is because you genuinely know better. You are larger, more experienced, more informed. The power gradient is justified. Authority is protective. Instruction is developmental. The child benefits precisely because the adult has superior context.
When you “manage” a technologist and you do not understand the domain, that gradient disappears. You are not the informed authority in the room. You are, in many ways, naked in the relationship. And naked authority is dangerous. It creates insecurity. Insecurity creates compensating behaviour. Some leaders respond by asserting dominance, prescribing solutions, forcing direction, or manufacturing certainty to soothe their own internal sense of being an imposter. Do not do this.
The moment you compensate for ignorance with control, you infantilise an adult expert. The relationship subtly shifts from adult to adult into adult to child. And technologists can feel it immediately. Respect erodes. Candour drops. Performance follows.
Instead, treat the relationship as adult to adult. That requires humility. Real humility, not performative modesty. Humility that says: “You know more than I do about this domain. My job is not to override you. My job is to create the conditions where your expertise compounds.”
Most corporates inadvertently filter out humble leaders because humility is harder to spot in an interview. It does not posture. It does not dominate airtime. It does not radiate artificial certainty. It can even be misread as weakness. It is not weakness. It is a superpower.
In complex technical environments, humility is the only posture that preserves credibility, unlocks trust, and allows expertise to surface without fear.
8. You Cannot Challenge Risk Without Understanding It
Technology is an infinite game. There is no finished state and no moment when risk disappears. Engineers need to be challenged on the risks they are taking, avoiding, and ignoring.
But you can only challenge meaningfully on risks you understand. Asking whether something is secure is not risk management. Asking what happens to the blast radius if a critical dependency fails before decoupling it is risk management. The difference is fluency.
Technology teams need to be taught more than they need to be managed. The best leaders challenge from credibility.
8.1 Scaling Is Not Shovelling Coal
The reflex when things are going slowly is to hire more people, on the theory that more engineers typing at more keyboards produces more output. This is the coal and steam engine model of technology leadership: if you want more steam, shovel more coal. It is almost entirely wrong.
Almost every meaningful slowdown in a technology organisation is structural rather than headcount related. The system is badly architected, the deployment process is a labyrinth, teams are coupled to each other in ways nobody has fully mapped, and three approvals are required from people who are never available simultaneously. The platform was designed for a company one tenth the current size and nobody has rearchitected it. Adding more engineers to this environment does not accelerate delivery; it adds more coordination surfaces, more communication overhead, and more people who need to understand a system that was never properly documented in the first place.
Fred Brooks established this in 1975 in The Mythical Man Month, observing that adding people to a late software project makes it later. Fifty years on, organisations still have not internalised it.
Almost all of my meaningful productivity gains across a career have come from three activities: simplifying, rearchitecting, and decommissioning. In several engagements I have reduced team sizes by eighty to ninety percent through focused engineering effort, not through redundancy rounds, but because the complexity that justified those team sizes no longer existed. The work evaporated because the waste was removed, not because more people arrived to carry it.
Business leaders rarely reach for any of these three levers, because none of them are visible in the way that hiring is. Simplification produces no announcement. Rearchitecting takes time before it pays off. Decommissioning feels like destroying value even when the system being decommissioned is the thing burning the most of it. Hiring, by contrast, feels decisive and produces a headcount number that rises, a team that grows, and a credible impression of action.
The result is bloat, not in the pejorative sense of laziness or incompetence, but structural bloat. Layers of middle management are added to coordinate the people hired to solve problems that better engineering would have eliminated. Small pools of engineers are assigned to each layer and an elaborate coordination dance begins, with teams attempting to place assets into production across boundaries they did not design, through processes they did not write, requiring sign offs from people who were not part of the original conversation. The system slows further, more managers are added to explain the slowdown, the slides get denser, and the actual engineers spend progressively less time building anything.
There is nothing wrong with the people in this structure. The structure is the problem.
There is also a bias worth naming directly. Ask a business leader whether they would be comfortable reporting into a technologist and watch the nervous laugh. It surfaces something honest: the default assumption in most organisations is that technology is a support function, a delivery vehicle for business ideas, something to be managed rather than something that leads. That assumption shapes every resourcing decision that follows. If you believe technology is an execution arm, you staff it like one. If you understand that technology is the product, the risk surface, the cost structure, and increasingly the competitive differentiator, the entire calculus changes.
The most expensive thing in many technology organisations is not the engineers. It is the coordination overhead constructed to manage them, most of which exists because the underlying architecture was never properly simplified in the first place.
8.2 A Seating Move Is Not Progress
There is one thing I have never seen accompany a push to federate technology into business units, and that is a business case. Not a real one. Not one that commits to reducing the headcount of the central technology function as teams move out, improving product quality in measurable terms, accelerating time to market, or delivering a better client experience. Those outcomes are sometimes gestured at in conversation but they are never written down with numbers attached, never stress tested, and never tracked after the fact.
What actually happens is a long, sustained lobbying effort. Business leaders work on executives over months, sometimes years, making the case in corridors and leadership offsites and one on ones that they just want ownership, that they could move faster if they were not dependent on a central team, that their domain is unique enough to justify its own capability. The argument is almost always framed around autonomy and alignment rather than outcomes, because outcomes would require accountability and accountability would require the business case that nobody wants to write. Eventually the lobbying reaches a threshold and the seating move happens. Org charts are redrawn. Teams are transferred. Announcements are made about empowerment and closer alignment to the business.
Then the outages start. The platform that looked simple from the outside turns out to have dependencies that the embedded team did not fully understand. The shared services that the central function provided quietly and reliably are now either duplicated at significant cost or quietly still consumed while the team claims independence. The senior engineers who did not want to move find reasons to leave. Junior engineers discover that their new reporting line has no meaningful technology leadership above them. The business head who lobbied hardest for the change is notably quiet during the incidents, because the conversation has shifted from strategy to operations and that is not where they are most comfortable.
The people who argued loudest for federation are rarely held accountable when it does not deliver what they promised, partly because they never promised anything specific enough to be held to. A seating move that comes with no business case produces no basis for evaluation, and that is a feature of the approach, not an oversight.
8.3 The Centralised vs Federated Dance
Alongside the headcount reflex sits a structural one that operates on a longer cycle, roughly three to five years in most organisations, and it is just as predictable. It is the oscillation between centralised technology functions and federated ones embedded inside business units, and poorly performing companies do it repeatedly without ever asking why they keep arriving back at the same problems from the opposite direction.
When technology is federated, the symptoms accumulate gradually and then all at once. Headcount expands because each business unit builds its own capability without reference to what anyone else is building. Delivery slows because teams are solving the same problems in parallel and nobody is accountable for the shared infrastructure underneath. Product intellectual property fragments across a dozen slightly different implementations. Outages begin to correlate in ways nobody predicted because the underlying platforms were never properly standardised. Eventually the organisation reaches a pain threshold and a decision is made: centralise. Put technology back together, eliminate duplication, create a shared platform, and impose some coherence on the chaos.
And then, after a few years of that, a different set of symptoms accumulates. The centralised function is accused of being slow, unresponsive, and too far from the business to understand what the business actually needs. Business leaders begin to argue, with genuine conviction, that they just want to own technology themselves so they can build a team aligned to their own priorities, responsive to their own roadmap, and invested in their own outcomes rather than a shared queue managed by someone who does not really understand their domain. The language of empowerment enters the conversation. Autonomy is positioned as the solution. And so the cycle turns again.
What neither state acknowledges is that both of them are wrong, or more precisely, that neither of them is the real problem. The real problem is that technology product teams sitting inside business units are almost never well looked after, well understood, or well led. The business leader who asked for them does not have the technical depth to develop them, challenge them, or protect them from the work that will slowly reduce them to order takers. The senior technologists in those embedded teams typically feel it within a year or two and want to move back to a technology reporting structure where they will be compared against peers, stretched by people who understand the craft, and given a career trajectory that makes sense. The weaker technologists, by contrast, are often quite comfortable in the federated model precisely because the lack of comparison works in their favour, and their performance tends to set their direction eventually regardless of their preference.
The leaders of those embedded teams occupy a particularly comfortable position that is worth examining honestly. Sitting inside a business unit, away from a central technology function, they are largely insulated from scrutiny about what good engineering actually looks like. There is no peer group holding up a mirror. There is no principal architect asking difficult questions about their design decisions. The business head they report to is usually grateful for the relationship and not equipped to push back on the technical substance. That comfort is real, but it comes at a cost that falls mostly on the junior technologists underneath them, who are poorly directed, working in a narrow domain with limited exposure to broader engineering practice, and facing a career runway that shortens the longer they stay.
The honest answer is that technology product teams should sit close to the business, but closeness is not the same as ownership, and ownership is not the same as being well led. The cycle will keep turning until organisations stop treating the reporting line as the variable that needs fixing and start asking the harder question about whether the people leading those teams, wherever they sit, actually understand what they are leading.
9. You Will Build a Human ETL Layer
When leaders cannot understand technology directly, they compensate by inserting translation layers. Middle management expands. Engineers are divided into smaller execution pools overseen by coordinators and programme managers whose primary function is to translate engineering reality into executive language and back again.
You create a human ETL pipeline. Engineers produce signal. Middle management extracts it, transforms it into narratives, and loads it into reporting decks, governance packs, quarterly reviews, and risk registers. The same underlying data is repackaged repeatedly, often at the last minute, into slightly different formats for different audiences.
A status update becomes a slide. The slide becomes a summary. The summary becomes a dashboard. The dashboard becomes a talking point. Each transformation distorts meaning.
Leadership overhead can approach the entirety of an engineer’s day. There are just enough managers to guarantee standstill, but also just enough structure to produce a convincing explanation for why five minute tasks take months. The slides appear dense with activity, yet they are often incoherent. If you trace a single initiative from idea to production, the drywall cracks and the house of cards becomes visible.
Movement replaces progress. Coordination replaces coherence.
10. You Will Reach for Redis
Eventually a performance issue will surface. Without technical depth, the reflex is to add something modern and powerful. Often that something is Redis.
A cache feels decisive. Add it, declare the issue addressed, move on.
Never do this blindly.
In fragile environments layered with historical hacks, adding another cache compounds opacity. Someone likely solved a similar problem years ago with an undocumented optimisation. Now you have multiple layers of state, unclear invalidation logic, and outages that are less frequent but more mysterious.
Performance issues are often structural. Poor data models, missing indexes, excessive coupling, and architectural shortcuts create systemic friction. Caching over structural weakness hides symptoms while deepening fragility.
I am speaking to you from a future world where mankind was destroyed by Redis caches. Not because Redis is flawed, but because leaders layered fixes onto systems they did not understand.
11. The HR Performance Management Trap
The most corrosive pattern appears when ignorance meets rigid HR systems. Deep engineering work is compressed into quarterly objectives as though innovation follows a payroll calendar. Goals are signed off ceremonially. Alignment is declared.
Within weeks, priorities shift. Engineers are told to pivot immediately.
Months later, those same goals reappear in reviews as if nothing changed. Leaders who have not read them in half a year use them as instruments of judgement. Engineers are assessed against objectives invalidated in the first week after signing.
You demand agility in delivery and rigidity in evaluation.
Then comes the request for detailed activity lists, because leadership is not close to the details and needs material to argue its case. Engineers reconstruct narratives to fit templates. Intellectual capital creation is replaced by artefact production.
12. Practical Do and Don’t Guide
If you are not technically fluent, the pattern is predictable. The table below summarises the behaviours that separate responsible leadership from destructive interference.
Do: Learn enough to understand system design, failure modes, and architectural tradeoffs.
Don’t: Announce that you are not technical as if it is neutral.
Why: Ignorance in complex systems leads to misaligned incentives and fragile decisions.

Do: Protect A players by removing noise and shielding their time.
Don’t: “Develop” your best engineers by interfering in work you do not understand.
Why: Elite performers need space and cover, not amateur coaching.

Do: Identify technical authority by track record and peer recognition, then back their judgement.
Don’t: Use personal rapport as a proxy for who should make architecture decisions.
Why: The person you get on with is not necessarily the person who should be making partitioning decisions.

Do: Help your strongest technical minds evolve their communication style and cushion how their directness lands.
Don’t: Make it the job of those who know to convince those who do not.
Why: You redirect engineering energy into diplomacy and create an idiocracy where the smart people die off.

Do: Keep priority changes rare and explicit, and update goals when reality changes.
Don’t: Pivot constantly and then measure people against obsolete objectives.
Why: You cannot demand agility in execution and rigidity in evaluation.

Do: Stay close enough to the work to understand reality.
Don’t: Build a human ETL of middle managers to translate everything into slide decks.
Why: Translation layers create motion without progress and distort truth.

Do: Fix performance problems at the root.
Don’t: Add Redis or another cache reflexively.
Why: Additive fixes on top of structural weakness increase opacity and fragility.

Do: Be explicit, direct, and consistent in communication.
Don’t: Rely on ambiguity and political signalling.
Why: Precision wired engineers interpret ambiguity as incoherence.

Do: Install real technical authority if you lack fluency.
Don’t: Appoint ceremonial technical leaders without power.
Why: Architecture by committee produces incoherent systems.

Do: Give architecture decisions to the few who know and manage everyone else around that judgement.
Don’t: Put architecture to a vote or seek consensus across people who lack the context to evaluate it.
Why: Consensus does not produce good architecture. It produces compromise that compounds into structural incoherence.

Do: Create the conditions for excellence.
Don’t: Mistake intervention for leadership.
Why: In complex systems, unnecessary intervention is usually negative value.
13. Consultants Will Smell You
When leadership cannot interrogate architecture, consultants shape the narrative. Platforms are sold instead of problems solved. Roadmaps are purchased instead of capability built. Without internal fluency, you cannot distinguish elegance from illusion.
14. Culture Will Decay
Technologists do not need their leaders to be the best engineers in the room, but they do need them to recognise quality. When leaders cannot distinguish good from bad engineering, excellence is not protected and mediocrity is not corrected.
High performers disengage first. The rest follow.
15. So What Should You Do
You have three options.
Learn. Build real fluency and challenge from credibility.
Install genuine technical authority and listen to it.
Or do not take the role.
The honest answer to how to manage technologists if you do not understand technology is simple.
You do not.
16. Conclusion: Don’t
Leadership is not domain agnostic. You would not manage surgeons without understanding anatomy or pilots without understanding aviation risk. You would not hire an unfit keep fit instructor and expect the class to improve.
Software runs banks, hospitals, logistics networks, and defence systems. Technology teams do not need more managers. They need leaders who can teach, challenge intelligently, and provide cover for the right risks.
If you do not understand technology and do not intend to learn, the most responsible decision you can make is not to lead technologists.
If you run a WordPress site behind Cloudflare, your page view numbers are lying to you.
Jetpack Stats, WP Statistics, Post Views Counter, and nearly every other WordPress analytics plugin share the same fatal flaw: they count views on the server. When Cloudflare serves a cached HTML page (which is the entire point of using Cloudflare), WordPress never executes. The PHP never runs. The counter never increments. Your stats show a fraction of your actual traffic.
I spent years watching Jetpack report 200 views on posts that Google Analytics showed had 2,000. The gap is not subtle. On a site with a healthy Cloudflare cache hit rate of 85 to 95 percent, server side counters undercount by 5x to 10x. That is not a rounding error. That is a broken measurement system.
CloudScale Page Views fixes this. It is a WordPress plugin I built specifically to solve the CDN counting problem. It counts every single view regardless of whether the page was served from Cloudflare’s cache, your origin server, or anywhere in between.
This post covers how it works, how to install and configure it, how to migrate your existing Jetpack data, and why the architecture makes it fundamentally more accurate than any server side counter.
1. Why Server Side Counting Fails
The typical WordPress page view counter works like this: a visitor requests a page, WordPress processes the request, the counter plugin hooks into the template or content filter, increments a number in the database, and serves the page. Every step happens during the PHP request lifecycle.
Now add Cloudflare. The first visitor hits the page and the origin server processes it normally. Cloudflare caches the response. The next 100 visitors get the cached version directly from Cloudflare’s edge. WordPress never sees those 100 requests. The counter shows 1 view instead of 101.
This is not a Cloudflare bug. This is exactly how CDN caching is supposed to work. The problem is that server side counting was designed for a world where every request hits the origin. That world ended years ago.
Some plugins try to work around this by setting pages as uncacheable, which defeats the purpose of having a CDN. Others use JavaScript trackers that phone home to external servers, which introduces privacy concerns and third party dependencies. Jetpack Stats sends data to WordPress.com servers, which means your analytics depend on Automattic’s infrastructure being available and their data retention policies.
2. How CloudScale Page Views Works
CloudScale takes a different approach. The page loads from cache as normal, giving you the full speed benefit of Cloudflare. Then a lightweight JavaScript beacon fires after the page has loaded and sends a POST request to a WordPress REST API endpoint.
The key insight is that while the HTML page itself is cached, the REST API endpoint is not. The plugin sends explicit no-cache headers on every API response, and you configure a Cloudflare Cache Rule to bypass caching on the API path. The beacon request always reaches the origin server.
Here is the sequence:
Cloudflare serves the cached HTML at edge speed
The browser renders the page and executes the beacon script
The beacon sends a POST to /wp-json/cloudscale-page-views/v1/record/{post_id}
The endpoint bypasses the CDN cache via headers and Cache Rules
WordPress logs the view in a dedicated database table and increments the post meta counter
The page view counter on the page updates live via the API response
The beacon is tiny. It adds negligible load time. The API call happens asynchronously after the page has already rendered, so there is zero impact on perceived performance. Your visitors never notice it.
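The flow above can be sketched in a few lines of JavaScript. This is an illustrative reconstruction, not the plugin's bundled script; only the REST route comes from this post, and the post ID is a made-up example.

```javascript
// Sketch of the beacon flow described above (not the plugin's actual script).
function recordUrl(postId) {
  return "/wp-json/cloudscale-page-views/v1/record/" + postId;
}

function sendBeacon(postId) {
  // Fire-and-forget POST after the page has rendered; the cached HTML
  // never changes, only this uncached API call reaches the origin.
  return fetch(recordUrl(postId), { method: "POST" }).then((res) => res.json());
}

// In a browser, wait for load so the beacon never delays rendering.
if (typeof window !== "undefined" && typeof document !== "undefined") {
  window.addEventListener("load", () => sendBeacon(123)); // 123: example post ID
}
```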
3. Protection Against Gaming
Accurate counting means nothing if someone can inflate numbers by refreshing a page repeatedly or scripting requests to the API. CloudScale handles this at multiple levels.
Session deduplication prevents the same browser session from counting the same post twice. Refresh the page ten times and it still counts as one view. Close the tab, open a new one, and it counts as a new view. This uses sessionStorage, which means it works even with aggressive browser privacy settings that block cookies.
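A minimal sketch of that deduplication logic, with the storage injected so it can run outside a browser. The key name and the in-memory stand-in are illustrative:

```javascript
// Count a view only once per browser session for a given post.
function shouldCountView(storage, postId) {
  const key = "cspv-seen-" + postId;      // key name is illustrative
  if (storage.getItem(key)) return false;  // already counted this session
  storage.setItem(key, "1");
  return true;
}

// Minimal in-memory stand-in for sessionStorage, for testing.
function memoryStorage() {
  const data = new Map();
  return {
    getItem: (k) => (data.has(k) ? data.get(k) : null),
    setItem: (k, v) => data.set(k, String(v)),
  };
}
```

In the browser you would pass `window.sessionStorage`; a new tab gets a fresh session store, which is exactly why a new tab counts as a new view.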
IP throttle protection limits how many views any single IP address can generate within a rolling window. The default is 50 requests per hour. After that threshold, subsequent requests are silently accepted (the attacker gets no signal that they have been blocked) but not recorded. Blocked IPs automatically unblock after one hour. There is no permanent blocklist to manage.
Logged in administrators bypass the throttle entirely, which is useful during development and testing. You can adjust the threshold, window, and enabled state from the IP Throttle tab in the plugin settings.
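The throttle behaviour can be sketched as a rolling window. The real plugin persists state in the database and records an explicit block that expires after an hour; this in-memory version only illustrates the counting logic, using the defaults from this post:

```javascript
// Rolling-window throttle sketch: 50 requests per IP hash per hour.
function makeThrottle(limit = 50, windowMs = 60 * 60 * 1000) {
  const hits = new Map(); // ipHash -> timestamps of recent requests
  return function shouldRecord(ipHash, now = Date.now()) {
    const recent = (hits.get(ipHash) || []).filter((t) => now - t < windowMs);
    recent.push(now);
    hits.set(ipHash, recent);
    // Over the limit: the request is still answered normally (no signal
    // to the caller), but the view is not recorded.
    return recent.length <= limit;
  };
}
```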
4. Installation
In the WordPress admin, go to Plugins, then Add New Plugin, then Upload Plugin. Choose the downloaded zip file and click Install Now. Once installed, click Activate.
The plugin creates its database table automatically on activation. No manual database setup is needed.
Then add the Cloudflare Cache Rule. This is the one required configuration step. In the Cloudflare dashboard, go to Caching, then Cache Rules, then Create Rule:
Field: URI Path
Operator: contains
Value: /wp-json/cloudscale-page-views/
Action: Cache Status: Bypass
Without this rule, Cloudflare may cache the REST API response. The beacon will appear to work (it receives a 200 response from the cached copy) but no new views will be recorded. The plugin sends no-cache headers as a safety net, but the Cache Rule is the primary and most reliable protection.
You can verify the rule is working from the Statistics tab. Visit a post on your site, then check the stats page. The post should appear in the Most Viewed list within a few seconds.
5. The Statistics Tab
The main plugin interface lives at Tools, then CloudScale Page Views. The Statistics tab is the default view and shows everything about your traffic at a glance.
At the top, three summary cards show total views, posts viewed, and average views per day for the selected period. Below them, a chart shows views over time with tabs for 7 Hours, 7 Days, 1 Month, and 6 Months. The chart data comes from the raw view log, so it reflects actual recorded views.
The date range picker lets you zoom in on any period. Quick buttons cover Today, Last 7 Days, Last 30 Days, This Month, Last Month, This Year, and All Time. You can also pick custom start and end dates.
Below the chart, two ranked lists show your Most Viewed posts for the selected period and your top Referrers. The referrer tracking captures the HTTP referer header when available, so you can see whether traffic is coming from Google, social media, direct visits, or other sources.
If you migrated from Jetpack, a dark blue banner at the top of the tab shows your All Time Views and Posts With Views from the imported data. During the first 28 days after migration, the summary cards blend imported totals with new beacon data so the numbers are not misleadingly low while the plugin builds up its own historical data.
6. The Display Tab for Page Views
The Display tab controls how and whether the view counter appears on your posts.
Display Position has four options. Before Post Content places the counter above the post title, aligned to the right. After Post Content appends it below the post body. Both shows it in both positions. Off hides the counter entirely. You can still use template functions to display counts manually in your theme if you choose Off.
Counter Style offers three designs. Badge is a solid gradient background with white text, suitable for sites that want the counter to be prominent. Pill uses a light tinted background with coloured text, for a softer look. Minimal is plain text with no background, for sites that want counts visible but unobtrusive.
Badge Colour lets you choose from five gradient colour schemes: Blue (the default), Pink, Red, Purple, and Grey. The selected colour applies to all three styles. The badge gets the gradient, the pill gets a matching tinted background, and the minimal style uses the solid colour for text.
Customise Text lets you change the icon (default is the eye emoji) and the suffix (default is “views”). You could change the suffix to “reads” or “hits” or leave it empty for just the number.
Show Counter On controls which post types display the counter. By default only Posts are selected. You can also enable Pages or any custom post type registered on your site.
Tracking Filter controls which post types actually record views. This is separate from the display setting. You might want to track views on Pages (so they appear in your stats) but not display a counter badge on them. Or you might want to display counts on Pages but only track Posts. The two settings are independent.
7. The IP Throttle Tab
The IP Throttle tab manages the rate limiting system that prevents view inflation.
The main toggle enables or disables throttle protection globally. When enabled, you can configure the request limit (how many views per IP before blocking) and the time window (how long the counter accumulates before resetting). The default is 50 requests per 1 hour window.
The Blocked IPs section shows any currently blocked IP hashes with their block timestamp and expiry time. You can unblock individual IPs or clear the entire blocklist. All blocks expire automatically after 1 hour, so this section is mostly for monitoring rather than manual management.
The Block Log shows a chronological history of block events, which is useful for identifying patterns of abuse. It retains the last 100 events.
8. The Migrate Jetpack Tab
If you are moving from Jetpack Stats, the Migrate Jetpack tab handles the transition. Click the migration button and the plugin reads the jetpack_post_views meta values from all your posts and writes them into the CloudScale _cspv_view_count post meta field.
This is a one-time operation. The migration copies lifetime totals only, not per-day breakdowns, because Jetpack does not store daily granularity in post meta. After migration, a lock prevents accidental re-runs.
The migration does not backfill the cspv_views log table because there are no timestamps to backfill. The log table is for trending data (which posts are popular right now) while the post meta stores lifetime totals. This is an intentional separation.
During the first 28 days after migration, the plugin runs in transition mode. The Top Posts widget and the summary cards blend imported totals with new beacon data. Posts are ranked by combined score (imported total plus beacon count) so your historically popular posts are not suddenly invisible. After 28 days, ranking switches to pure beacon data, which by then has enough history to be meaningful on its own.
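The transition-mode ranking described above amounts to a one-line scoring rule. A sketch, with field names invented for illustration:

```javascript
// During the 28-day transition, rank by imported lifetime total plus
// new beacon count; afterwards, rank by beacon count alone.
function rankPosts(posts, inTransition) {
  const score = (p) => (inTransition ? p.imported + p.beacon : p.beacon);
  return [...posts].sort((a, b) => score(b) - score(a));
}
```

This is what keeps a post with 5,000 imported Jetpack views ahead of a week-old post with 200 beacon views until the beacon data has enough history to stand on its own.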
9. Advantages Over Jetpack Stats
CloudScale has several architectural advantages over Jetpack Stats beyond the CDN counting issue.
Your data stays on your server. Jetpack sends analytics to WordPress.com where it is processed and stored on Automattic’s infrastructure. CloudScale writes directly to your WordPress database. You own the data, you control retention, and you do not depend on a third party service being available.
No external dependencies. Jetpack Stats requires a WordPress.com account, the Jetpack plugin (which is large and does many other things), and a persistent connection to Automattic’s servers. CloudScale is a single self contained plugin with no external service connections.
CDN aware by design. Jetpack’s counting happens during the PHP request lifecycle and fundamentally cannot count views served from cache. CloudScale’s beacon architecture was built specifically for CDN cached sites.
Privacy by default. CloudScale hashes IP addresses with your site salt before storage. Raw IPs never touch the database. Jetpack’s privacy practices are governed by Automattic’s privacy policy, which you do not control.
Lightweight. The beacon script is a few kilobytes. The REST endpoint does minimal work (one database insert, one meta update). There is no heavyweight JavaScript analytics library, no tracking pixels, no third party scripts.
Real time display. The beacon response includes the updated count, which is injected into the page immediately. Jetpack Stats has a delay before numbers appear in the dashboard.
10. Dashboard Widget
The plugin adds a CloudScale Page Views widget to the WordPress admin dashboard. It shows today’s view count, last 7 days total, a time series chart with tabs for different periods, and a list of the top posts for today with proportional bar charts.
The widget updates via AJAX when you switch between the 7 Hours, 7 Days, 1 Month, and 6 Months tabs. At the bottom are a link to the full statistics page and a shield icon confirming whether IP throttle protection is active.
11. Sidebar Widgets
The plugin registers two sidebar widgets that you can add to any widget area in your theme. Both are configured through the standard WordPress widget interface.
Adding the Widgets
Go to Appearance, then Widgets in the WordPress admin. You will see two new widgets available: CloudScale Top Posts and CloudScale Recent Posts. Drag either widget into your desired sidebar area, or click the widget and select a widget area.
If your theme uses the block based widget editor, click the plus button in your sidebar area, search for “CloudScale”, and add the widget block.
CloudScale Top Posts: Widget Settings
The Top Posts widget displays your most viewed posts ranked by view count. It supports the following settings.
Title controls the heading shown above the widget. The default is “Top Posts”. You could change it to “Most Popular”, “Trending”, or anything else.
Total posts to load sets how many posts are fetched from the database. The default is 20. If you want a deep list with pagination, increase this. The widget only queries once and paginates client side, so a higher number does not cause repeated database queries.
Posts per page controls how many posts are visible at a time before the pagination arrows appear. The default is 5. If your sidebar is narrow, 3 or 4 may work better. The widget shows Previous and Next buttons when there are more posts than this number.
Thumbnail width sets the pixel width of post thumbnails. The default is 150. Set to 0 to hide thumbnails entirely. The height is calculated proportionally from the featured image aspect ratio.
Order by lets you choose between Most Viewed (ranked by view count) and Most Recent (ranked by publication date). Most Viewed is the default and the most useful option for a “popular posts” sidebar.
View window (days) only applies when ordered by Most Viewed. It controls the time range for counting views. The default is 28 days, meaning the widget shows the most viewed posts from the last 28 days. Set to -1 for all time ranking. During the first 28 days after a Jetpack migration, the widget automatically blends imported lifetime totals with beacon data so your historically popular posts stay visible.
On desktop screens wider than 768 pixels, the widget renders in a two column grid layout. On mobile, posts stack into a single column. Each post shows the thumbnail, title (linked to the post), publication date, and view count.
CloudScale Recent Posts: Widget Settings
The Recent Posts widget displays your latest published posts in chronological order. It supports these settings.
Title controls the heading. The default is “Most Recent Posts”.
Number of posts sets how many posts to display. The default is 10.
Show date toggles whether the publication date appears below each post title.
Show views toggles whether the view count badge appears on each post. This is enabled by default and shows the same formatted count from the CloudScale view counter. Useful for showing readers that a recent post is already getting traction.
Both widgets use the same visual style with orange accent pagination controls and clean card layouts with subtle hover effects.
12. Template Functions
For theme developers who want manual control, CloudScale provides template functions.
cspv_the_views() outputs the formatted view counter with icon and suffix. You can pass an array of options to customise the icon and suffix text.
cspv_get_view_count() returns the raw numeric count for the current post or a specified post ID. Use this when you need the number for calculations or custom display logic.
Elements with the CSS class cspv-views-count and a data-cspv-id attribute are automatically updated by the beacon on archive and listing pages. This means your view counts stay fresh even when Cloudflare has cached the listing page HTML.
13. Debugging
If views are not being recorded, check these things in order.
First, verify the Cloudflare Cache Rule is active. On the Statistics tab, the endpoint diagnostic will tell you if the REST API is reachable and not cached.
Second, open your browser console on a post page and look for [CloudScale PV] log messages. If WP_DEBUG is enabled, the beacon logs its activity. You should see “record mode” followed by a successful response with logged: true.
Third, check the IP Throttle tab. If you have been testing heavily, you may have hit the limit of 50 requests per hour. Logged in administrators bypass the throttle in version 2.4.7 and later, but earlier versions do not have this bypass.
Fourth, verify the database table exists. The plugin creates wp_cspv_views on activation. If activation was interrupted, the table may be missing columns. Version 2.4.9 and later auto upgrade the table schema on admin page load.
You can also test the API directly from the browser console:
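The post does not name the diagnostic route, so the path below is a hypothetical stand-in; the pattern is the point: call the endpoint twice and compare server timestamps.

```javascript
// Hypothetical status route; substitute the plugin's real diagnostic
// endpoint. Run checkEndpoint() in the browser console on your site.
function statusUrl() {
  return "/wp-json/cloudscale-page-views/v1/status"; // path is assumed
}

async function checkEndpoint() {
  const res = await fetch(statusUrl(), { cache: "no-store" });
  return res.json(); // should include the plugin version and server time
}
```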
This should return the plugin version and current server time. If it returns a Cloudflare cached response (same timestamp on repeated calls), your Cache Rule is not working.
14. Source Code
CloudScale Page Views is open source under the GPL 2.0 license. The full source is included in the plugin zip and available on the author’s site.
If you run a technical blog on WordPress, you know the pain. You paste a markdown article with fenced code blocks, Gutenberg creates bland core/code blocks with no syntax highlighting, no copy button, no dark mode. You end up wrestling with third party plugins that haven’t been updated in years or manually formatting every code snippet.
I built CloudScale Code Block to solve this once and for all. It’s a lightweight WordPress plugin that gives you proper syntax highlighting with automatic language detection, a one click clipboard copy button, dark and light theme toggle, full width responsive display, and a built in migration tool for converting your existing code blocks. It works as both a Gutenberg block and a shortcode for classic editor users.
In this post I’ll walk through how to install it, how to handle the Gutenberg paste problem, and how to migrate your existing code blocks.
1 What You Get
CloudScale Code Block uses highlight.js 11.11.1 under the hood with support for 28 languages out of the box. When you add a code block in the editor, you get a clean textarea with a toolbar showing the block type, detected language, and optional title (useful for filenames). On the frontend your visitors see beautifully highlighted code with line numbers, a copy to clipboard button, and a toggle to switch between dark (Atom One Dark) and light (Atom One Light) themes.
2 Installation
The plugin requires no build step. No webpack, no npm install, no node modules. Upload it and activate.
That’s it. You’ll see CloudScale Code Block available in the Gutenberg block inserter under the Formatting category. You can also access settings at Settings > CloudScale Code to configure the default theme.
3 The Gutenberg Paste Problem
Here’s something every WordPress developer needs to know. When you paste markdown containing fenced code blocks (the triple backtick syntax), Gutenberg’s built in markdown parser intercepts the paste event before any plugin can touch it. It converts the fenced blocks into core/code blocks, which are WordPress’s default code blocks with no syntax highlighting.
This isn’t a bug in any plugin. It’s how Gutenberg’s paste pipeline works internally. The markdown parser runs synchronously during the paste event, creates the core blocks, and only then gives plugins a chance to respond.
CloudScale Code Block handles this with a practical solution: a floating convert toast.
4 Converting Pasted Code Blocks
When you paste markdown that contains fenced code blocks, Gutenberg will create core/code blocks as described above. CloudScale detects this automatically and shows a floating notification in the bottom right corner of the editor:
⚠️ 2 core code blocks found ⚡ Convert All to CloudScale
Click the Convert All to CloudScale button and every core/code and core/preformatted block in the post is instantly replaced with a CloudScale Code Block. The code content is preserved exactly as it was, and highlight.js will auto detect the language on the frontend.
This is a one click operation. Paste your markdown, click Convert All, done.
5 Migrating Existing Posts
If you have an existing blog with dozens or hundreds of posts using WordPress’s default code blocks or the Code Syntax Block plugin, you don’t want to edit each post manually. CloudScale Code Block includes a built in migration tool that handles this in bulk. Once the plugin is activated, go to Tools > Code Block Migrator in your WordPress admin.
5.1 How the Migrator Works
The migrator handles three types of legacy blocks:
wp:code blocks are the default WordPress code blocks. The migrator extracts the code content, decodes HTML entities, and detects the language from any lang attribute or language-xxx CSS class.
wp:code-syntax-block/code blocks are from the popular Code Syntax Block plugin. The migrator reads the language from the block’s JSON attributes where Code Syntax Block stores it.
wp:preformatted blocks are WordPress preformatted text blocks that some themes and plugins use for code. The migrator converts br tags back to proper newlines and strips any residual HTML formatting.
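The wp:preformatted cleanup step can be sketched as a pure function. The real migrator does this in PHP; the entity list here is a minimal illustration:

```javascript
// Convert preformatted block HTML back to plain code text: <br> tags
// become newlines, residual tags are stripped, and common HTML
// entities are decoded.
function preformattedToCode(html) {
  return html
    .replace(/<br\s*\/?>/gi, "\n")  // <br> back to newlines
    .replace(/<[^>]+>/g, "")         // strip residual tags
    .replace(/&lt;/g, "<")
    .replace(/&gt;/g, ">")
    .replace(/&quot;/g, '"')
    .replace(/&#0?39;/g, "'")
    .replace(/&amp;/g, "&");         // decode &amp; last
}
```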
5.2 Migration Workflow
The process is straightforward:
Click Scan Posts to find every post and page containing legacy code blocks
The scan results show each post with a count of how many code blocks it contains
Click Preview on any post to see a side by side comparison of the original block markup and what CloudScale will produce
Click Migrate This Post to convert a single post, or use Migrate All Remaining to batch convert everything
The migrator writes directly to the database and clears the post cache, so changes take effect immediately. I recommend taking a database backup before running a bulk migration, but in practice the conversion is deterministic and safe. The migrator only touches block comment delimiters and HTML structure. Your actual code content is never modified.
5.3 After Migration
Once migration is complete you can deactivate the Code Syntax Block plugin if you were using it. All your posts will now use CloudScale Code Block format and render with full syntax highlighting on the frontend.
6 Technical Details
For those interested in what’s under the hood:
The plugin registers a single Gutenberg block (cloudscale/code-block) with a PHP render callback. The block stores its data as three attributes: content (the raw code text), language (optional, for explicit language selection), and title (optional, shown above the code). The block uses save: function() { return null; } which means all rendering happens server side via PHP. This makes the block resilient to markup changes and avoids the dreaded “This block contains unexpected or invalid content” error that plagues so many WordPress code plugins.
Frontend assets are loaded on demand. The highlight.js library, theme stylesheets, and the clipboard/toggle JavaScript are only enqueued when a post actually contains a CloudScale Code Block. No unnecessary scripts on pages that don’t need them.
The auto convert watcher uses wp.data.subscribe to monitor the Gutenberg block store for core/code and core/preformatted blocks. When it finds them, it renders a floating toast with a convert button. The conversion calls wp.data.dispatch('core/block-editor').replaceBlock() to swap each core block for a CloudScale block, preserving the code content and extracting any language hints from the original block's attributes.
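A sketch of that conversion step. The wp.data and wp.blocks calls mirror the Gutenberg APIs the post names; the language-extraction helper and the attribute handling are illustrative simplifications:

```javascript
// Extract a language hint from a "language-xxx" CSS class, if present.
function languageFromClassName(className) {
  const m = /language-([\w-]+)/.exec(className || "");
  return m ? m[1] : "";
}

function convertAll() {
  // Only meaningful inside the block editor, where `wp` is a global.
  if (typeof wp === "undefined") return;
  const { select, dispatch } = wp.data;
  // Top-level blocks only, for brevity.
  for (const block of select("core/block-editor").getBlocks()) {
    if (block.name === "core/code" || block.name === "core/preformatted") {
      const replacement = wp.blocks.createBlock("cloudscale/code-block", {
        content: block.attributes.content,
        language: languageFromClassName(block.attributes.className),
      });
      dispatch("core/block-editor").replaceBlock(block.clientId, replacement);
    }
  }
}
```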
7 Configuration
The plugin includes a settings page at Settings > CloudScale Code where you can set the default theme (dark or light) for all code blocks on your site. Individual blocks can override this setting using the Theme dropdown in the block’s sidebar inspector.
You can also set the language explicitly per block if auto detection isn’t picking the right one. The language selector supports 28 languages including Bash, Python, JavaScript, TypeScript, Java, Go, Rust, SQL, YAML, Docker, and more.
8 Shortcode Support
For classic editor users or anywhere you need code highlighting outside of Gutenberg, the plugin provides a shortcode called cs_code. Wrap your code between the opening and closing tags and optionally set the language, title, and theme parameters. The shortcode renders identically to the Gutenberg block on the frontend, complete with syntax highlighting, copy button, and theme toggle.
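For reference, a shortcode example. The post names language, title, and theme as parameters, but the exact attribute names below are assumptions:

```
[cs_code language="python" title="example.py" theme="dark"]
print("hello world")
[/cs_code]
```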
One tip for writing about the shortcode itself: insert a CloudScale Code Block in the editor and type the example there, rather than pasting it as plain content. This avoids WordPress trying to execute the shortcode during paste.
9 Fourteen Color Themes
The original release shipped with Atom One as the only color scheme. That's a fine default, but if you're running a blog with a specific visual identity, you want options. Version 1.7 expands the selection to 14 of the most popular syntax highlighting themes, all loaded directly from the highlight.js CDN with zero local files.
The full theme list: Atom One, GitHub, Monokai, Nord, Dracula, Tokyo Night, VS 2015 / VS Code, Stack Overflow, Night Owl, Gruvbox, Solarized, Panda, Tomorrow Night, and Shades of Purple.
Each theme comes in both a dark and light variant. When you select a theme in the settings, the plugin loads the appropriate dark and light CSS files from the CDN. The frontend toggle button switches between the two variants of your chosen theme. So if you pick Dracula, your visitors see Dracula Dark by default and can toggle to Dracula Light. If you pick Solarized, they get Solarized Dark and Solarized Light.
To change the theme, go to Tools > CloudScale Code and SQL. The Code Block Settings panel at the top of the Code Migrator tab has a Color Theme dropdown. Pick your theme and click Save Settings. The change applies site wide immediately, no page reload required.
Under the hood the theme system uses a registry pattern. Each theme entry defines its CDN filenames, dark background color, dark toolbar color, light background color, and light toolbar color. The frontend CSS uses CSS custom properties for all theme dependent values (background, toolbar, scrollbar, line numbers, hover states). When the page loads, JavaScript reads the theme colors from the registry and sets the custom properties on each code block wrapper. This means any new theme can be added to the registry without touching the CSS or JavaScript.
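The registry-to-custom-properties flow might look like this. The entry shape, the property names, and the light-variant colours are illustrative; only the Dracula dark background is a well-known value:

```javascript
// Illustrative theme registry entry (field names are assumptions).
const themeRegistry = {
  dracula: {
    label: "Dracula",
    darkBg: "#282a36", darkToolbar: "#1e1f29",
    lightBg: "#f8f8f2", lightToolbar: "#e8e8e2",
  },
};

// Map a registry entry to CSS custom properties for the chosen mode.
function themeProperties(slug, mode) {
  const t = themeRegistry[slug];
  return mode === "light"
    ? { "--cs-bg": t.lightBg, "--cs-toolbar": t.lightToolbar }
    : { "--cs-bg": t.darkBg, "--cs-toolbar": t.darkToolbar };
}

// In the browser, set the properties on each code block wrapper.
function applyTheme(wrapper, slug, mode) {
  for (const [prop, value] of Object.entries(themeProperties(slug, mode))) {
    wrapper.style.setProperty(prop, value);
  }
}
```

Because the CSS only ever reads the custom properties, adding a new theme really is just another registry entry.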
10 The Merged Admin Interface
In earlier versions, the Code Block Migrator and the SQL Command tool were separate plugins with separate admin pages. Version 1.6 merged everything into a single plugin with a unified admin interface at Tools > CloudScale Code and SQL.
The admin page uses a tabbed layout with two tabs: Code Migrator and SQL Command. The Code Migrator tab includes the inline settings panel (color theme and default mode) at the top, followed by the scan and migrate controls. The SQL Command tab has the query editor, results table, and quick query buttons.
The styling matches the CloudScale Page Views plugin exactly. You get the same navy gradient banner across the top, the dark tab bar with an orange underline on the active tab, white card panels with colored gradient section headers, and the same button styles, spacing, and typography throughout. If you’re running multiple CloudScale plugins, your Tools menu now has a consistent visual language across all of them.
11 SQL Command Tool
This is the feature I built for myself and use almost daily. If you manage a WordPress site and need to check database health, find bloat, or debug migration issues, you normally have to SSH into the server and run MySQL from the command line, or install phpMyAdmin, or use a separate database client. The SQL Command tool gives you a read only query interface right inside the WordPress admin.
Go to Tools > CloudScale Code and SQL and click the SQL Command tab. You’ll see a dark themed query editor at the top with a Run Query button. Type any SELECT, SHOW, DESCRIBE, or EXPLAIN query and press Enter (or Ctrl+Enter, or click the button). Results appear in a scrollable table below the editor with sticky column headers, striped rows, and hover highlighting.
The tool is strictly read only. All write operations are blocked at the PHP level before the query reaches the database. INSERT, UPDATE, DELETE, DROP, ALTER, TRUNCATE, CREATE, RENAME, REPLACE, LOAD, and GRANT are all rejected. The validation runs server side so it cannot be bypassed from the browser. You also need the manage_options capability, which means only WordPress administrators can access it.
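A sketch of that whitelist validation. The real is_safe_query runs in PHP; this mirrors the behaviour described above: strip comments, then require a read-only leading verb.

```javascript
// Whitelist check: the query must begin with SELECT, SHOW, DESCRIBE,
// or EXPLAIN once comments have been stripped. Everything else is
// rejected before it could reach the database.
function isSafeQuery(sql) {
  const cleaned = sql
    .replace(/\/\*[\s\S]*?\*\//g, " ") // /* ... */ comments
    .replace(/--[^\n]*/g, " ")          // -- comments
    .replace(/#[^\n]*/g, " ")           // # comments
    .trim();
  return /^(SELECT|SHOW|DESCRIBE|EXPLAIN)\b/i.test(cleaned);
}
```

Comment stripping matters: without it, `/* x */ DELETE ...` would slip past a naive prefix check that only looked at the first token.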
11.1 Quick Queries
Below the query editor you’ll find 14 preset queries organized into four groups. Click any button to populate the editor and run the query immediately.
Health and Diagnostics gives you three queries. Database Health Check returns your MySQL version, max connections, wait timeout, max allowed packet size, and current database name. Site Identity Options pulls the six key values from wp_options: site URL, home URL, blog name, description, WordPress version, and database version. Table Sizes and Rows shows every table in your database with its storage engine, row count, data size in megabytes, index size, and total size, sorted largest first.
Content Summary has three queries. Posts by Type and Status gives you a grouped count of every post type and status combination in your database, which is useful for spotting unexpected post types from plugins. Site Stats Summary runs a single query that returns your total published posts, revision count, auto drafts, trashed items, total comments, spam comments, user count, and transient count. Latest 20 Published Posts shows your most recent content with title, publish date, and status.
Bloat and Cleanup Checks has four queries for finding waste. Orphaned Postmeta counts metadata rows where the parent post no longer exists. Expired Transients counts transient timeout entries that have passed their expiry. Revisions, Drafts and Trash shows how many revision posts, auto drafts, and trashed items are sitting in your database. Largest Autoloaded Options lists the 20 biggest entries in wp_options that have autoload set to yes, sorted by value size, which is usually the first place to look when your options table is bloated.
URL and Migration Helpers has four queries for sites that have changed domains or moved to HTTPS. HTTP References finds any wp_options rows still referencing HTTP versions of your domain. Posts with HTTP GUIDs finds posts where the GUID column still uses HTTP. Old IP References checks postmeta for values containing a specific IP address pattern (useful after migrating away from a legacy server). Posts Missing Meta Descriptions finds published posts that don’t have a CloudScale SEO meta description set, which is helpful for working through your SEO backlog.
11.2 Keyboard Shortcuts
Press Enter to run the query. Use Shift+Enter to insert a newline if you need to write a multiline query. Ctrl+Enter (or Cmd+Enter on Mac) also runs the query. The Clear button wipes both the editor and the results table.
12 Updated Configuration
With the merge, the old Settings > CloudScale Code Block page no longer exists. All settings have moved to the inline panel on the Code Migrator tab at Tools > CloudScale Code and SQL. You’ll find two dropdowns: Color Theme (the 14 theme options) and Default Mode (dark or light). Changes save via AJAX with no page reload.
In the Gutenberg editor sidebar, each individual block still has a Theme Override dropdown with Default, Dark, and Light options. Setting it to Default means the block follows the site wide setting. Setting it to Dark or Light forces that mode regardless of the site wide default. The help text in the sidebar now points to the Tools page for site wide theme selection.
The language selector in the editor sidebar has also been expanded. In addition to the original 28 languages, you can now select HCL/Terraform and TOML, bringing the total to 30 supported languages.
13 Plugin Architecture
For developers interested in the internals, the merged plugin is a single PHP class (CS_Code_Block) with all functionality in one file. The admin interface uses inline CSS embedded directly in the page output rather than external stylesheet files. This is the same approach used by the CloudScale Page Views plugin and it eliminates browser caching issues entirely. The styles render correctly on first load every time, regardless of WordPress configuration, caching plugins, or CDN setups.
The theme registry is a static method that returns an associative array keyed by theme slug. Each entry contains the human readable label, dark CSS filename, light CSS filename, and four hex color values for backgrounds and toolbars. Adding a new theme means adding one array entry. The frontend JavaScript reads the active theme’s colors via wp_localize_script and sets CSS custom properties on each code block wrapper at page load.
The SQL query validation uses a whitelist approach. The is_safe_query method strips comments and checks that the query starts with SELECT, SHOW, DESCRIBE, or EXPLAIN. Everything else is rejected before it reaches wpdb. The AJAX handler also verifies a nonce and the manage_options capability on every request.
Quick query buttons are defined as HTML data attributes containing the full SQL string. Clicking a button copies the SQL into the textarea and triggers the run function. This keeps the query definitions in the PHP template where they can reference the WordPress table prefix dynamically, rather than hardcoding table names in JavaScript.
14 Wrapping Up
CloudScale Code Block is purpose built for technical bloggers who want clean, highlighted code on their WordPress sites without fighting the editor. The paste convert workflow means you can write in markdown, paste into Gutenberg, click one button, and publish. The built in migration tool means your existing content gets the same treatment without manual editing.
The plugin is free and open source. Grab it using the link above and let me know how it works for you.