Background
It is increasingly common for business and product heads to seek ownership of technology teams. The argument usually sounds reasonable: closer alignment with commercial outcomes, faster decision making, fewer handoffs. Sometimes it is the right move. Often it is not. And the difference between those two outcomes is almost entirely determined by whether the incoming leader has the technology instincts to make it work.
Before administering this assessment, the organisation should answer three questions honestly. These are not trick questions. They are the questions that nobody asks because the decision has already been made and asking them feels like dissent. But they determine whether the restructure will strengthen the organisation or quietly dismantle its engineering capability over the next eighteen months.
1. How will the organisation benefit from this?
What is the actual business case? Is the goal to cut headcount, deliver faster, or produce better quality products? If the answer is “deliver faster,” the follow up question is: faster than what, and why is the current structure the bottleneck? If the answer cannot be articulated beyond “better alignment” or “simplification,” the restructure may be solving a communication problem by creating a competence problem. Alignment is not a strategy. It is a word people use when they cannot explain what will actually change.
2. How will the technology teams benefit?
This question is almost never asked, and the silence is revealing. When a business head absorbs a technology team, the immediate beneficiary is the leader themselves. They are now, by default, the most senior person in the room on every topic, including the ones they do not understand. But what happens to the people underneath?
Where does the single customer experience engineer’s career go when there is nobody above them who understands their discipline? Who does the lone data scientist learn from? Who reviews the machine learning engineer’s model methodology when nobody in the management chain knows what a validation set is? Who hires the next senior architect, and how does a non technical leader distinguish between a strong candidate and a confident one?
The people who suffer most in these structures are the specialists. They lose their career path, their peer community, and their quality gate in a single reorganisation. The best ones leave quietly. The ones who stay become isolated, unchallenged, and invisible.
3. Who will hire the technology leadership?
This is the question that determines long term sustainability. If the business head cannot evaluate a senior technologist in an interview, they will hire for communication skills, executive presence, and delivery track record. These are not bad qualities. But they are not sufficient for technology leadership, and selecting for them exclusively produces a leadership layer that looks impressive in steering committees and makes catastrophic decisions about architecture, tooling, and technical debt.
A technology organisation that cannot hire its own successors is not sustainable. It is a countdown.
This assessment is designed to surface readiness. The 40 questions that follow test whether a business head has the technology instincts, the technical literacy, and the leadership philosophy to run a technology team without degrading it. There are no right or wrong answers. Each question presents four options that reflect different leadership styles and priorities. Simply select the option that best reflects your natural instinct in each situation.
Select one answer per question. Do not overthink it. Your first instinct is what matters.
1 Leadership Philosophy
Question 1. A major platform decision was approved by the steering committee six months ago. New evidence suggests it may be the wrong choice. What do you do?
A) Revisit the decision with the new evidence and recommend a course correction even if it causes short term disruption
B) Flag the concern but continue execution since the committee already approved it and reversing would delay the programme
C) Raise it informally but keep delivery on track since the timeline commitments to the board cannot slip
D) Continue as planned because reopening approved decisions undermines confidence in the governance process
Question 2. Your team proposes simplifying a system by removing an integration layer. It will reduce complexity but invalidate three months of another team’s work. How do you proceed?
A) Protect the other team’s work and find a compromise that keeps both approaches since we need to respect the investment already made
B) Evaluate the simplification on its technical merits regardless of sunk cost and proceed if the outcome is better for customers
C) Delay the decision until next quarter’s planning cycle so it can be properly socialised across all stakeholders
D) Proceed only if the simplification can be shown to accelerate the current delivery timeline
Question 3. You inherit a technology organisation with seven management layers between the CTO and the engineers writing code. What is your first instinct?
A) Understand why each layer exists and remove any that do not directly contribute to decision quality or delivery outcomes
B) Add a dedicated delivery management function to coordinate across the layers more effectively
C) Maintain the structure but introduce better reporting dashboards so you can see through the layers
D) Restructure the layers around revenue streams so each layer has clear commercial accountability
Question 4. What is the primary purpose of a technology strategy document?
A) To secure budget approval by demonstrating alignment between technology investments and projected revenue growth
B) To reduce uncertainty by clarifying what the organisation will and will not build, and why
C) To provide a roadmap with delivery dates that the business can hold the technology team accountable to
D) To communicate the technology vision to non technical stakeholders in a way they find compelling
2 Architecture and Systems Thinking
Question 5. What does the term “blast radius” mean in the context of systems architecture?
A) The scope of impact when a single component fails, and how far the failure propagates across dependent systems
B) The amount of data lost during a disaster recovery event before backups can be restored
C) The total number of customers affected during a planned maintenance window
D) The financial exposure created by a system outage, measured in lost revenue per minute
Question 6. When designing a critical system, which of the following should be your primary architectural concern?
A) Ensuring the system can scale to meet projected revenue targets for the next three years
B) Designing for graceful failure so the system degrades safely rather than failing catastrophically
C) Selecting the vendor with the strongest enterprise support agreement and SLA guarantees
D) Ensuring the architecture aligns with the approved enterprise reference model and standards
Question 7. What does it mean to design a system assuming breach will happen?
A) Building layered defences, monitoring, and containment so that when a breach occurs the damage is limited and detected quickly
B) Purchasing comprehensive cyber insurance to cover the financial impact of a breach event
C) Conducting annual penetration tests and remediating all critical findings before the next audit cycle
D) Ensuring all systems are compliant with the relevant regulatory frameworks and industry standards
3 Delivery and Process
Question 8. A project is behind schedule. The team suggests reducing scope to meet the deadline. The business stakeholder wants the full scope delivered on time. What do you recommend?
A) Deliver the reduced scope with high quality and iterate, since shipping broken software on time is worse than shipping less software that works
B) Add additional resources to accelerate delivery since the business committed to the date with external partners
C) Negotiate a two week extension with the full scope since the revenue impact of a delayed launch is manageable
D) Split the team to deliver the core features on time and the remaining features two weeks later as a fast follow
Question 9. How should work ideally flow through a well functioning technology team?
A) Through two week sprints with defined ceremonies, backlog grooming, sprint reviews, and retrospectives
B) Through continuous small changes deployed frequently with clear ownership and minimal handoffs
C) Through quarterly planning cycles with monthly milestone reviews and weekly status reporting
D) Through a prioritised backlog managed by a product owner who coordinates with the business on delivery sequencing
Question 10. A team is consistently delivering features on time but production incidents are increasing. What does this tell you?
A) The team is likely cutting corners on quality to meet deadlines and the delivery metric is masking a growing technical debt problem
B) The team needs better production support tooling and a dedicated site reliability function
C) The team is delivering well but the infrastructure team is not scaling the platform to match the increased feature throughput
D) The incident management process needs improvement since faster triage would reduce the apparent incident volume
4 Technical Fundamentals
Question 11. What is the difference between vertical scaling and horizontal scaling?
A) Vertical scaling adds more power to a single machine while horizontal scaling adds more machines to distribute the load
B) Vertical scaling increases storage capacity while horizontal scaling increases network bandwidth
C) Vertical scaling is for databases and horizontal scaling is for application servers
D) Vertical scaling is cheaper at small volumes while horizontal scaling is cheaper at large volumes, which is why you choose based on cost projections
Question 12. What is technical debt?
A) Shortcuts or suboptimal decisions in code and architecture that make future changes harder, slower, or riskier
B) The accumulated cost of software licences and infrastructure that the organisation is contractually committed to paying
C) The gap between the current technology stack and the approved target state architecture
D) Legacy systems that have not yet been migrated to the cloud as part of the digital transformation programme
Question 13. Why is it important that a system can be observed in production?
A) Because without visibility into how the system behaves under real conditions you cannot diagnose problems, understand performance, or detect failures early
B) Because the compliance team requires evidence that systems are being monitored as part of the annual audit
C) Because the business needs real time dashboards showing transaction volumes and revenue metrics
D) Because the vendor SLA requires the organisation to demonstrate monitoring capability to qualify for support credits
5 Cloud Computing
Question 14. What is the primary benefit of using a public cloud provider like AWS or Azure?
A) The ability to provision and scale infrastructure on demand without managing physical hardware, paying only for what you use
B) Guaranteed lower costs compared to on premises infrastructure for all workload types and volumes
C) Automatic compliance with all regulatory requirements since the cloud provider manages the security controls
D) Eliminating the need for a technology team since the cloud provider manages everything end to end
Question 15. What is the shared responsibility model in cloud computing?
A) The cloud provider is responsible for the security of the cloud infrastructure while the customer is responsible for securing what they build and run on it
B) The cloud provider and the customer share the cost of infrastructure equally based on a negotiated commercial agreement
C) Both the cloud provider and the customer have equal responsibility for all aspects of security and neither can delegate
D) The cloud provider assumes full responsibility for everything deployed on their platform as part of the service agreement
Question 16. What is an availability zone in the context of cloud infrastructure?
A) A physically separate data centre within a cloud region, designed so that failures in one zone do not affect others
B) A geographic region where the cloud provider offers services, such as Europe West or US East
C) A virtual network boundary that isolates different customer workloads from each other for security purposes
D) A pricing tier that determines the level of uptime guarantee and support response time for your workloads
Question 17. What is Infrastructure as Code?
A) Defining and managing cloud infrastructure through machine readable configuration files that can be version controlled and reviewed like software
B) A software tool that automatically generates infrastructure diagrams from the live cloud environment
C) A methodology for documenting infrastructure decisions in a shared wiki so the team can track changes over time
D) An approach where infrastructure costs are coded into the project budget as a separate line item from application development
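The idea behind this question can be sketched in a few lines. This is a toy model, not a real provisioning tool: the `desired` dictionary plays the role of a version controlled Terraform or CloudFormation file, and `plan` plays the role of the tool’s diff step. All names here are illustrative.

```python
# Toy model of Infrastructure as Code: infrastructure is described
# declaratively in data that can live in version control, and a tool
# computes the changes needed to make reality match the description.

def plan(desired: dict, actual: dict) -> dict:
    """Diff the declared state against the live state, like a `plan` step."""
    to_create = {k: v for k, v in desired.items() if k not in actual}
    to_delete = {k: v for k, v in actual.items() if k not in desired}
    to_update = {k: desired[k] for k in desired
                 if k in actual and actual[k] != desired[k]}
    return {"create": to_create, "update": to_update, "delete": to_delete}

# The declared state: reviewable, diffable, and auditable like any code.
desired = {
    "web-server": {"size": "medium", "count": 3},
    "database":   {"size": "large", "count": 1},
}
# What is actually running right now.
actual = {
    "web-server":    {"size": "small", "count": 3},
    "old-batch-job": {"size": "small", "count": 1},
}

changes = plan(desired, actual)
# web-server needs updating, database creating, old-batch-job deleting.
```

The point is not the diff logic but the workflow it enables: infrastructure changes go through the same review, history, and rollback mechanisms as any other code change.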
6 Testing Strategy
Question 18. When should testing happen in the development lifecycle?
A) Continuously throughout development, with automated tests running on every code change as part of the build pipeline
B) After development is complete, during a dedicated testing phase before the release is approved for production
C) At key milestones defined in the project plan, with formal sign off required before moving to the next phase
D) Primarily before major releases, with exploratory testing conducted by the QA team in the staging environment
Question 19. A team tells you they have 95% code coverage. How confident should you be in their quality?
A) Coverage alone does not indicate quality because tests can cover code without meaningfully validating behaviour or edge cases
B) Very confident since 95% coverage means almost all of the codebase has been validated by automated tests
C) Moderately confident but you would want to see the coverage broken down by module to check for gaps in critical areas
D) You would need to compare the coverage metric against the industry benchmark for their technology stack to assess it properly
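The gap this question probes can be demonstrated in miniature. The test below executes every line of the function, so a coverage tool reports 100%, yet it asserts nothing about correctness and the bug survives. Function and test names are hypothetical.

```python
def apply_discount(price: float, percent: float) -> float:
    """Intended behaviour: a 10% discount on 100.0 should return 90.0."""
    return price - price * percent  # Bug: expects 0.10, callers pass 10

def test_touches_every_line():
    # Executes 100% of apply_discount's lines and passes,
    # because it never checks the answer.
    apply_discount(100.0, 10)

test_touches_every_line()               # Coverage: 100%. Bug: still there.
result = apply_discount(100.0, 10)      # Intended 90.0; actually -900.0
```

Coverage measures which lines ran, not whether their behaviour was validated. A test suite can reach any coverage figure while asserting nothing meaningful.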
Question 20. What is the purpose of a chaos engineering or game day exercise?
A) To deliberately introduce failures into a system to test how it responds and to build confidence that recovery mechanisms work
B) To simulate peak traffic scenarios to verify the infrastructure can handle projected load during high revenue periods
C) To test the disaster recovery plan by failing over to the secondary site and measuring recovery time against the SLA
D) To stress test the team’s incident management process and identify bottlenecks in the escalation procedures
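A toy game day, as a sketch of the principle rather than any real tool: a simulated dependency fails a deliberate fraction of calls, and the exercise builds evidence that the recovery mechanism (here, a simple retry) actually limits the damage. The failure rate, seed, and names are all assumptions for illustration.

```python
import random

def flaky_dependency(fail_rate: float, rng: random.Random) -> str:
    """Simulated downstream service: fails a given fraction of calls."""
    if rng.random() < fail_rate:
        raise ConnectionError("injected failure")
    return "ok"

def call_with_retry(fn, attempts: int = 3):
    """The recovery mechanism under test: retry on transient failure."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise

# Game day: inject a harsh 30% failure rate and check that retries keep
# the overall success rate high. Seeded so the exercise is repeatable.
rng = random.Random(42)
outcomes = []
for _ in range(1000):
    try:
        outcomes.append(call_with_retry(lambda: flaky_dependency(0.3, rng)))
    except ConnectionError:
        outcomes.append("failed")

successes = outcomes.count("ok")
# With three attempts, only about 0.3 ** 3 ≈ 2.7% of calls should still fail.
```

The value is the confidence gained before a real incident: if the retry logic were broken, this exercise would reveal it in a controlled setting rather than in production.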
7 Data and AI
Question 21. What is the difference between a data warehouse and a data lake?
A) A data warehouse stores structured, curated data optimised for querying and reporting, while a data lake stores raw data in its native format for flexible future use
B) A data warehouse is an on premises solution while a data lake is a cloud native service that replaces the need for traditional databases
C) A data warehouse is owned by the business intelligence team while a data lake is owned by the engineering team, which is why they are governed separately
D) A data warehouse handles historical data for compliance purposes while a data lake handles real time data for operational dashboards
Question 22. Your organisation wants to build a machine learning model to predict customer churn. What is the first question you should ask?
A) Do we have clean, representative data that captures the behaviours and signals that precede churn, and do we understand the biases in that data
B) What is the expected revenue impact of reducing churn by a target percentage, and does it justify the investment in a data science team
C) Which vendor platform offers the best prebuilt churn prediction model so we can deploy quickly without building a team from scratch
D) Can we have a working model within the current quarter so we can demonstrate the value of AI to the executive committee
Question 23. What is the biggest risk of deploying a machine learning model into production without ongoing monitoring?
A) The model will silently degrade as real world data drifts away from the data it was trained on, producing increasingly wrong predictions that nobody notices until damage is done
B) The model will consume increasing amounts of compute resources over time, driving up infrastructure costs beyond the original budget
C) The compliance team may flag the model as a risk because it was deployed without a formal model governance review and sign off process
D) The business will lose confidence in AI if the model produces a visible error, which could jeopardise funding for future AI initiatives
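A minimal sketch of the monitoring this question points at, assuming a single numeric feature and a crude mean shift check. Production systems would use proper statistical tests (population stability index, Kolmogorov–Smirnov) across many features; the threshold and data here are illustrative.

```python
import statistics

def drift_alert(training_sample: list, live_sample: list,
                threshold: float = 2.0) -> bool:
    """Flag when the live feature mean drifts more than `threshold`
    training standard deviations away from the training mean."""
    mu = statistics.mean(training_sample)
    sigma = statistics.stdev(training_sample)
    shift = abs(statistics.mean(live_sample) - mu) / sigma
    return shift > threshold

# The feature as the model saw it at training time: centred around 50.
training = [48, 50, 52, 49, 51, 50, 47, 53, 50, 50]
# The same feature in production later: one stream matches, one has moved.
live_ok = [49, 51, 50, 48, 52]
live_drifted = [70, 72, 69, 71, 73]

drift_alert(training, live_ok)       # False: distribution still matches
drift_alert(training, live_drifted)  # True: investigate or retrain
```

Without a check like this running continuously, the drifted case produces no error, no alert, and no incident. The model simply keeps returning predictions that are quietly wrong.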
Question 24. A business stakeholder asks you to build an AI feature that automates a customer decision. The team warns that the training data contains historical bias. What do you do?
A) Take the bias concern seriously. Deploying a biased model at scale will amplify discrimination, create regulatory exposure, and damage customer trust in ways that are extremely difficult to undo
B) Proceed with the deployment but add a disclaimer that the model’s recommendations should be reviewed by a human before any final decision is made
C) Ask the data science team to quantify the bias impact and present a risk assessment to the steering committee so leadership can make an informed commercial decision
D) Deprioritise the concern for now and launch the feature since the competitive advantage of being first to market outweighs the risk, and the bias can be addressed in a future iteration
Question 25. You have hired one AI engineer and placed them alone in a feature team surrounded by backend and frontend developers. Nobody in the team or its management chain has AI or machine learning experience. The engineer’s work is reviewed by people who do not understand it. How do you evaluate this structure?
A) This is a problem. The engineer has no peers to learn from, no manager who can grow their career, and no quality gate on their work. They will either stagnate, produce unchallenged work of unknown quality, or leave. AI engineers need to sit in or be connected to a community of practice with people who understand their discipline
B) This is fine as long as the engineer has clear deliverables and the feature team has a strong product owner who can validate the business outcomes of the AI work
C) This is efficient. Embedding specialists directly in feature teams ensures their work is aligned with delivery priorities and avoids the overhead of a separate AI team that operates disconnected from the product
D) This is manageable. Provide the engineer with access to external training and conferences so they can maintain their skills, and ensure their performance is measured on delivery milestones like any other team member
Question 26. What does data governance mean in practice?
A) Ensuring the organisation knows what data it has, where it lives, who owns it, how it flows, what quality it is in, and what rules govern its use, so that data is treated as a product rather than an accident
B) A framework of policies and committees that approve data access requests and ensure all data usage complies with the relevant regulatory requirements
C) A set of data classification standards and retention policies that are documented and audited annually to satisfy regulatory obligations
D) A technology platform that enforces role based access controls and encrypts data at rest and in transit across all systems
8 People and Hiring
Question 27. You need to hire a senior engineer. Which quality matters most?
A) Deep curiosity, the ability to reason through unfamiliar problems, and a track record of simplifying complex systems
B) Certifications in the specific technologies your team currently uses, with at least ten years of experience in the industry
C) Strong communication skills and experience presenting to executive stakeholders and steering committees
D) A proven ability to deliver projects on time and within budget, with references from previous programme managers
Question 28. An engineer pushes back on a technical decision you have made, providing evidence you were wrong. What is the ideal response?
A) Thank them, evaluate the evidence, and change the decision if the evidence warrants it because being right matters more than being in charge
B) Acknowledge their input and ask them to document their concerns formally so they can be reviewed in the next architecture review board
C) Listen carefully but explain the broader strategic context they may not be aware of that influenced your original decision
D) Appreciate the initiative but remind them that decisions at your level factor in commercial and timeline considerations beyond the technical merits
Question 29. What is the biggest risk when a non technical leader runs a technology team?
A) They cannot distinguish between genuine technical risk and comfortable excuses, which leads to either missed danger or wasted time
B) They tend to over rely on vendor solutions and consultancies because they cannot evaluate build versus buy decisions independently
C) They struggle to earn the respect of senior engineers, which leads to talent attrition and difficulty recruiting strong replacements
D) They focus on timelines and deliverables rather than the technical foundations that determine whether those deliverables are sustainable
9 Quality and Sustainability
Question 30. A vendor promises to solve a critical problem with their platform. What is your first concern?
A) Whether the solution creates a dependency that will be expensive or impossible to exit, and what happens when the vendor changes direction
B) Whether the vendor is on the approved procurement list and whether the commercial terms fit within the current budget cycle
C) Whether the vendor has case studies from similar organisations and what their Net Promoter Score is among existing customers
D) Whether the vendor can commit to a delivery timeline that aligns with the programme milestones already communicated to the board
Question 31. You are reviewing two architecture proposals. Proposal A is clever and impressive but requires deep expertise to operate. Proposal B is simpler but less elegant. Which do you prefer?
A) Proposal B, because a system that can be understood, operated, and maintained by the team that inherits it is more valuable than one that impresses today
B) Proposal A, because the additional complexity is justified if it delivers significantly better performance metrics
C) Neither until both proposals include detailed cost projections and a total cost of ownership comparison over five years
D) Whichever proposal the lead architect recommends since they have the deepest technical context on the constraints
Question 32. A 97 slide strategy deck is presented to you. What is your reaction?
A) Scepticism, because length often compensates for lack of clarity and a strong strategy should be explainable in a few pages
B) Appreciation, because a thorough strategy deck shows the team has done their due diligence and considered all angles
C) Request an executive summary of no more than five slides that highlights the key investment asks and expected returns
D) Review it in detail because strategic decisions of this magnitude deserve comprehensive analysis and supporting evidence
10 Reporting and Planning
Question 33. A technology team has no weekly status report. They deploy daily, incidents are low, and customers are satisfied. Is this a problem?
A) No. Outcomes are the evidence. If the system works, customers are happy, and the team ships reliably, the absence of a status report means nothing is being hidden
B) Yes. Without a structured weekly report the leadership team has no visibility into what the team is doing and cannot govern effectively
C) It depends. A lightweight status update would be beneficial for alignment even if things are going well, since stakeholders deserve visibility
D) Yes. Consistent reporting is a professional discipline. Even high performing teams need to document their progress for accountability and audit purposes
Question 34. A team starts a complex migration and discovers halfway through that the original plan was based on incorrect assumptions. They adjust and complete the migration successfully but two weeks later than planned. How do you evaluate this?
A) Positively. Learning while doing is an inherent property of complex work. The team adapted to reality and delivered a successful outcome, which is exactly what good engineering looks like
B) As a planning failure. The incorrect assumptions should have been identified during the planning phase. A proper discovery exercise would have prevented the overrun
C) Neutrally. The outcome was acceptable but the team should produce a lessons learned document to prevent similar planning gaps in future projects
D) As a risk management issue. The two week overrun needs to be logged and the planning process needs to include more rigorous assumption validation before execution begins
Question 35. You ask a technology lead how a project is going. They say they do not know yet because the team is still working through some unknowns. How do you respond?
A) Appreciate the honesty. Not knowing is a valid state early in complex work. Ask what they are doing to reduce the unknowns and when they expect to have a clearer picture
B) Ask them to prepare a risk register and preliminary timeline estimate within two days so you have something to report upward
C) Express concern. A technology lead should always be able to articulate the status of their work, even if uncertain, and should present options with probability weightings
D) Escalate the concern. If the lead cannot provide a clear status update, the project may lack adequate governance and oversight
Question 36. What is the most important thing to measure about a technology team’s performance?
A) The business outcomes their work enables, including reliability, customer experience, and the ability to change safely
B) Velocity and throughput, measured by story points completed per sprint across all teams
C) Time to market for new features, measured from business request to production deployment
D) Budget adherence, measured by comparing actual technology spend against the approved annual plan
11 Relationship with Technologists
Question 37. A senior architect strongly disagrees with your proposed approach and presents an alternative in a team meeting. They are blunt and direct. How do you handle this?
A) Welcome it. Blunt disagreement backed by evidence is a sign of a healthy team. Evaluate the alternative on its merits and decide based on what produces the best outcome
B) Thank them for their perspective but ask them to raise concerns through the proper channels rather than challenging your direction in a group setting
C) Acknowledge their passion but remind the team that once a direction is set, the expectation is to commit and execute rather than relitigate decisions
D) Listen but note that architectural decisions need to factor in business timelines and stakeholder commitments, not just technical preferences
Question 38. How do you view the role of engineers in the decision making process?
A) Engineers are domain experts whose knowledge should be actively extracted, challenged, and synthesised into better decisions. The best outcomes come from iterative collaboration, not instruction
B) Engineers should provide technical input and recommendations, but the final decision authority rests with the business leader who owns the commercial outcome
C) Engineers should focus on execution excellence. They are most effective when given clear requirements and the autonomy to choose the implementation approach
D) Engineers should be consulted on technical feasibility, but strategic decisions about what to build and when should be driven by the product and business teams
Question 39. You notice your best engineers have stopped voicing opinions in meetings. What does this tell you?
A) Something is wrong. When strong engineers go quiet, it usually means they have concluded that their input does not matter, which means the organisation is about to lose them or already has in spirit
B) They may be focused on delivery. Not every engineer wants to participate in strategic discussions and some prefer to let their code speak for itself
C) It could indicate that the team has matured and aligned around a shared direction, which reduces the need for debate
D) It suggests the decision making process is working efficiently. Fewer objections means the planning and communication have improved
Question 40. An engineer tells you the proposed deadline is unrealistic and the team will either miss it or ship something that breaks. What do you do?
A) Take the warning seriously. Engineers who raise alarms about deadlines are usually right and ignoring them is how organisations end up with production failures and burnt out teams
B) Acknowledge the concern and ask them to propose an alternative timeline with a clear breakdown of what can be delivered by when
C) Thank them for the flag but explain that the deadline was set based on commercial commitments and the team needs to find a way to make it work
D) Ask them to quantify the risk. If they can show specific technical evidence for why the deadline is unrealistic, you will escalate it. Otherwise the plan stands
Assessor Guide
Everything below this line is for the assessor only. Do not share with the candidate.
Traffic Light Scoring
Each answer is scored using a traffic light system.
Green. Strong technology leadership instinct. The answer demonstrates understanding of systems thinking, quality, sustainability, customer outcomes, or respect for engineering as a discipline.
Amber. Acceptable but surface level. The answer is not wrong but reveals a preference for process, optics, conventional wisdom, or a management lens over a technology leadership lens.
Red. Concerning. The answer reveals a fixation on timelines, revenue projections, reporting, governance ceremony, or a belief that technologists are interchangeable resources who should execute rather than think.
Answer Key
| # | Category | Green | Amber | Red |
|---|---|---|---|---|
| 1 | Leadership | A | B, C | D |
| 2 | Leadership | B | A, C | D |
| 3 | Leadership | A | B, C | D |
| 4 | Leadership | B | C, D | A |
| 5 | Architecture | A | B, C | D |
| 6 | Architecture | B | C, D | A |
| 7 | Architecture | A | B, C | D |
| 8 | Delivery | A | C, D | B |
| 9 | Delivery | B | A, D | C |
| 10 | Delivery | A | B, C | D |
| 11 | Technical | A | B, C | D |
| 12 | Technical | A | B, C | D |
| 13 | Technical | A | B, D | C |
| 14 | Cloud | A | B, C | D |
| 15 | Cloud | A | B, C | D |
| 16 | Cloud | A | B, C | D |
| 17 | Cloud | A | B, C | D |
| 18 | Testing | A | B, D | C |
| 19 | Testing | A | B, C | D |
| 20 | Testing | A | C, D | B |
| 21 | Data and AI | A | B, C | D |
| 22 | Data and AI | A | B, C | D |
| 23 | Data and AI | A | B, C | D |
| 24 | Data and AI | A | B, C | D |
| 25 | Data and AI | A | B, D | C |
| 26 | Data and AI | A | B, C | D |
| 27 | People | A | B, C | D |
| 28 | People | A | B, C | D |
| 29 | People | A | B, C | D |
| 30 | Quality | A | B, C | D |
| 31 | Quality | A | B, D | C |
| 32 | Quality | A | B, C | D |
| 33 | Reporting | A | C, D | B |
| 34 | Reporting | A | C, D | B |
| 35 | Reporting | A | B, C | D |
| 36 | Reporting | A | B, C | D |
| 37 | Technologists | A | B, C | D |
| 38 | Technologists | A | B, D | C |
| 39 | Technologists | A | B, C | D |
| 40 | Technologists | A | B, D | C |
Scoring Thresholds
30 to 40 Green. Strong candidate. Likely to build sustainable technology, retain talented engineers, and make sound architectural decisions.
20 to 29 Green. Moderate. May need coaching on the difference between managing a technology team and leading one. Watch which categories the red answers cluster in.
Below 20 Green. Significant risk. Likely to prioritise optics and timelines over quality, struggle to retain senior technologists, and make hiring decisions based on compliance rather than capability.
10 or more Red. Disqualifying regardless of green count. The candidate consistently gravitates toward answers that would damage engineering culture, product quality, and team retention.
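The thresholds above can be expressed as a small scoring helper. This is an illustrative sketch only; the function name and return labels are not part of the assessment.

```python
def verdict(green: int, amber: int, red: int) -> str:
    """Map answer counts to the scoring thresholds.

    Assumes exactly 40 questions with one answer each.
    """
    if green + amber + red != 40:
        raise ValueError("expected 40 answers in total")
    # Ten or more red answers disqualifies regardless of green count.
    if red >= 10:
        return "Disqualifying"
    if green >= 30:
        return "Strong candidate"
    if green >= 20:
        return "Moderate"
    return "Significant risk"

print(verdict(green=32, amber=6, red=2))   # Strong candidate
print(verdict(green=22, amber=8, red=10))  # Disqualifying
```

Note that the red threshold is checked first: a candidate with 22 green answers but 10 red ones is disqualified, which matches the "regardless of green count" rule above.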
Red Flag Patterns
Beyond the raw count, watch for clustering patterns that reveal specific blind spots.
The Timeline Addict. Red answers cluster in Delivery and Quality. The candidate treats every question as a scheduling problem and evaluates every decision through the lens of "will this delay the programme?"
The Dashboard Governor. Red answers cluster in Reporting and Planning. The candidate believes that better reporting equals better understanding, and that learning while doing is evidence of poor planning rather than an inherent property of complex work.
The Order Taker Factory. Red answers cluster in Relationship with Technologists. The candidate sees engineers as execution resources, gets uncomfortable with opinionated technologists, and interprets pushback as insubordination rather than intellectual rigour.
The Revenue Lens. Red answers cluster across multiple categories but consistently reference commercial outcomes, revenue projections, or stakeholder commitments as the deciding factor. Technology decisions are subordinated to the current quarter’s numbers.
The Process Worshipper. Red answers cluster in Delivery and Leadership. The candidate equates process with progress, ceremonies with delivery, and governance with good judgment.
The AI Tourist. Red answers cluster in Data and AI. The candidate treats AI as a buzzword to be deployed for competitive optics rather than a discipline that requires data quality, monitoring, ethical consideration, and properly supported specialists. They see nothing wrong with isolating a single AI engineer in a team that cannot grow, challenge, or manage them.
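The clustering idea can be sketched as a lookup from patterns to their categories. This is an assumption-laden illustration: the threshold of two red answers per category is invented for the sketch, and the Revenue Lens is omitted because it cuts across categories and needs qualitative review rather than a count.

```python
# Map each red-flag pattern to the categories where its red answers cluster.
# Category names follow the answer key table; the per-category threshold
# below is illustrative, not part of the assessment.
PATTERNS = {
    "Timeline Addict": {"Delivery", "Quality"},
    "Dashboard Governor": {"Reporting"},
    "Order Taker Factory": {"Technologists"},
    "Process Worshipper": {"Delivery", "Leadership"},
    "AI Tourist": {"Data and AI"},
}

def detect_patterns(red_by_category: dict[str, int], threshold: int = 2) -> list[str]:
    """Flag a pattern when every one of its categories has at least
    `threshold` red answers."""
    return [
        name
        for name, cats in PATTERNS.items()
        if all(red_by_category.get(c, 0) >= threshold for c in cats)
    ]

print(detect_patterns({"Delivery": 3, "Quality": 2, "Leadership": 0}))
# ['Timeline Addict']
```

A real assessor would still read the individual answers; the sketch only shows why cluster location matters more than the raw red count.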
A Note on Opinionated Technologists
One of the most revealing dimensions of this assessment is how the candidate responds to questions about engineers who push back, disagree, or hold strong technical opinions. Business heads who have succeeded in environments where teams execute instructions often find opinionated technologists threatening. They interpret technical pushback as resistance, disagreement as disloyalty, and independent thinking as a management problem.
The reality is the opposite. The best technology teams are built from opinionated people who care deeply about the work. The role of the leader is not to suppress those opinions but to create an environment where they can be heard, challenged, and synthesised into better decisions. A leader who cannot tolerate dissent will build a team of compliant executors who ship mediocre products on time and wonder why the customers leave.
A Note on Learning While Doing
Business heads with a strong planning orientation often view learning while doing as evidence of failure. The reasoning goes: if you had planned properly, you would not need to learn anything during execution. This belief is incompatible with technology leadership.
Complex systems cannot be fully understood before they are built. Architecture emerges from contact with reality. Requirements change as users interact with early versions. Performance characteristics only reveal themselves under production load. Security vulnerabilities surface through adversarial testing, not through documentation reviews.
A leader who demands complete certainty before starting will either never start or will force the team to fabricate certainty they do not have, which is worse. The right instinct is to plan enough to reduce the biggest risks, start building, learn from what you discover, and adjust. This is not the absence of planning. It is the only kind of planning that works for complex technology.
A Note on Engineers as Order Takers
The most damaging instinct a business head can carry into a technology organisation is the belief that engineers exist to execute instructions. This mental model treats technology as a cost centre staffed by interchangeable resources whose job is to convert requirements into code on schedule.
In practice, the best engineers carry deep domain knowledge, architectural intuition, and an understanding of how systems behave under stress that cannot be replicated by reading a requirements document. A leader who treats them as order takers will never access this knowledge. They will receive exactly what they ask for, nothing more, and the products they ship will reflect the limits of their own understanding rather than the collective intelligence of the team.
The alternative is to treat every interaction with a technologist as an opportunity to iteratively extract intellectual property. Ask what they think. Ask why they disagree. Ask what they would build if they had the authority. The answers will be better than anything a steering committee can produce.
A Note on the Isolated AI Engineer
Question 25 is one of the most diagnostic questions in this assessment. The pattern it describes is common: an organisation hires a single AI or machine learning engineer, places them in a feature team composed entirely of people from different disciplines, and declares the AI capability embedded.
The candidate who sees nothing wrong with this structure reveals several dangerous blind spots simultaneously.
No quality gate. Machine learning work is unlike conventional software engineering. Model selection, feature engineering, training methodology, bias detection, and evaluation metrics require peer review from people who understand the discipline. An engineer whose work is reviewed only by people who cannot evaluate it is an engineer whose mistakes go undetected.
No career growth. Engineers grow by working alongside people who are better than them, or at least different enough to challenge their assumptions. A single AI engineer in a feature team has no mentor, no sparring partner, and no career path. They will plateau and leave, and the organisation will have to start again.
No management competence. If nobody in the management chain understands what the AI engineer does, nobody can set meaningful objectives, evaluate performance, identify when they are struggling, or advocate for the resources they need. The engineer is simultaneously unsupported and unaccountable.
No intellectual community. AI and machine learning are disciplines where techniques evolve rapidly. An isolated engineer has no internal community of practice, no one to discuss new approaches with, and no one to challenge their methodology. They become a single point of knowledge failure.
The green answer recognises that specialist disciplines need communities of practice. This does not necessarily mean a separate AI team, but it does mean deliberate structures that connect specialists, provide peer review, enable career progression, and ensure management understands the work well enough to support it.
The red answers treat the AI engineer as a fungible delivery resource whose value is measured by output against a timeline, which is the same mistake that drives experienced engineers out of organisations that claim they cannot find talent.
Final Thought
This assessment is not a test of intelligence. It is a test of instinct. Intelligent people can hold damaging instincts. The business head who optimises for reporting, timelines, and compliant teams is not stupid. They are applying a mental model that works in other domains but fails catastrophically in technology.
The purpose of this assessment is to find out which mental model the candidate carries before they are given the keys to a technology organisation and the careers of the people inside it.
Score Sheet
| Field | Value |
|---|---|
| Candidate Name | |
| Assessment Date | |
| Assessor | |

| Score | Count |
|---|---|
| Green | /40 |
| Amber | /40 |
| Red | /40 |

| Category | Green | Amber | Red |
|---|---|---|---|
| Leadership (Q1 to Q4) | /4 | /4 | /4 |
| Architecture (Q5 to Q7) | /3 | /3 | /3 |
| Delivery (Q8 to Q10) | /3 | /3 | /3 |
| Technical (Q11 to Q13) | /3 | /3 | /3 |
| Cloud (Q14 to Q17) | /4 | /4 | /4 |
| Testing (Q18 to Q20) | /3 | /3 | /3 |
| Data and AI (Q21 to Q26) | /6 | /6 | /6 |
| People (Q27 to Q29) | /3 | /3 | /3 |
| Quality (Q30 to Q32) | /3 | /3 | /3 |
| Reporting and Planning (Q33 to Q36) | /4 | /4 | /4 |
| Technologist Relationship (Q37 to Q40) | /4 | /4 | /4 |

| Red Flag Pattern | Detected |
|---|---|
| Timeline Addict | |
| Dashboard Governor | |
| Order Taker Factory | |
| Revenue Lens | |
| Process Worshipper | |
| AI Tourist | |

| Field | Value |
|---|---|
| Overall Assessment | |
| Recommendation | |
| Notes | |
Inspired by *Why Andrew Baker Is the World’s Worst CTO*