AI is changing software engineering at extraordinary speed.
Code generation, agentic workflows, autonomous testing, and AI-assisted development are all shortening the time it takes teams to ship software.
But beneath the excitement sits a growing engineering challenge:
“Code is a liability. The system is the asset.”
That principle has existed for decades.
And in the AI era, it may be more important than ever.
The New Reality: Unlimited Code Generation
Modern AI tooling has fundamentally changed software delivery.
Teams can now generate:
- APIs
- integration layers
- infrastructure
- tests
- workflows
- documentation
- security checks
…in minutes.
The bottleneck is no longer code creation.
The bottleneck is now:
- understanding
- verification
- observability
- maintainability
- security
- operational confidence
Because every line of generated code introduces:
- risk
- maintenance overhead
- security exposure
- operational complexity
And AI dramatically increases the amount of code entering systems.
More Code Does Not Equal More Value
One of the most dangerous misconceptions in the AI era is this:
More code = more progress
It doesn’t.
AI can generate thousands of lines of code extremely quickly.
But outcomes matter more than outputs.
A beautifully generated system with:
- unit tests
- integration tests
- end-to-end tests
- linting
- documentation
…can still deliver zero business value.
As discussed in the transcript:
“You can almost become snow blind because it’ll be world-class software that ticks every box for best practices… but unless it’s actually doing something valuable, then what’s the point?”
This is where many engineering teams will struggle.
AI can optimise software production.
It cannot automatically optimise product thinking.
The Return of Engineering Fundamentals
One of the most interesting patterns emerging in the AI era is the return of foundational engineering principles.
Concepts from:
- Extreme Programming (XP)
- Continuous Delivery
- Serverless-first engineering
- Lean systems thinking
- Observability-driven engineering
…are suddenly becoming highly relevant again.
Why?
Because AI accelerates both:
- good engineering practices
- bad engineering practices
If your organisation lacks:
- testing discipline
- engineering standards
- observability
- verification strategies
- operational maturity
AI will amplify those weaknesses rapidly.
As Michael O’Reilly explains:
“The agentic AI stuff will accelerate you into all your weak points.”
AI Is Creating a Technical Debt Explosion
Technical debt has always existed.
But AI changes the scale.
Historically, developers were constrained by:
- time
- effort
- team capacity
- cognitive load
AI removes many of those constraints.
Now teams can:
- generate systems faster
- experiment more aggressively
- build custom implementations rapidly
But faster creation also means:
- faster accumulation of liabilities
- more unmanaged dependencies
- more unverified code
- more operational risk
This becomes especially dangerous when organisations generate code they don’t fully understand.

Unmanaged Code Is the Real Risk
The conversation introduces an important distinction:
Legacy code is not necessarily bad.
Unmanaged code is.
You can have:
- a 15-year-old system that is carefully maintained and observable
- or code generated this morning that nobody truly understands
The age of the code is not the problem.
The lack of ownership is.
This becomes critical in regulated environments where:
- security vulnerabilities
- compliance obligations
- audit requirements
- operational resilience
…must all be maintained continuously.
Why Verification and Observability Matter More Than Ever
AI-generated systems force engineering teams to rethink validation.
Traditional deterministic testing approaches become harder when:
- probabilistic AI systems are involved
- outputs vary from run to run
- models evolve underneath applications
This is why:
- observability
- evaluation frameworks
- production testing
- rapid feedback loops
…are becoming essential capabilities.
As discussed in the transcript:
“How do you test a probabilistic system?”
You don’t simply test outputs anymore.
You evaluate behaviours, drift, confidence, and operational outcomes.
This is a major shift in software engineering thinking.
The Security Problem Is Accelerating
AI is also changing security dynamics.
Modern models are increasingly capable of:
- identifying vulnerabilities
- scanning codebases
- finding insecure patterns
- analysing dependencies
That’s useful for defenders.
But attackers have access to the same acceleration.
This means organisations carrying unmanaged technical debt may become increasingly vulnerable over time.
Older systems, abandoned libraries, and unmaintained dependencies become high-risk liabilities in an AI-driven threat landscape.
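To make "finding insecure patterns" concrete, here is a deliberately simple pattern scanner in Python. Real scanners (and modern models) analyse far more than regular expressions; the three patterns below are toy assumptions chosen only to illustrate the idea that the same automated scanning is available to defenders and attackers alike.

```python
import re

# Toy patterns a scanner might flag; real tools go far deeper than regexes.
INSECURE_PATTERNS = {
    "hard-coded secret": re.compile(
        r"(password|api_key)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
    ),
    "eval on input": re.compile(r"\beval\("),
    "SQL built by concatenation": re.compile(r"SELECT .*\+"),
}

def scan(source: str) -> list[str]:
    """Return the names of the insecure patterns found in a code snippet."""
    return [name for name, pat in INSECURE_PATTERNS.items() if pat.search(source)]

snippet = 'api_key = "sk-123"\nquery = "SELECT * FROM users WHERE id=" + user_id'
print(scan(snippet))  # → ['hard-coded secret', 'SQL built by concatenation']
```

Unmanaged code is exactly the code nobody is running checks like these against, which is why it accumulates risk fastest.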
Critical Thinking Becomes the Competitive Advantage
Perhaps the most important insight from the discussion is this:
The value of engineers is shifting.
The engineer of the future is not valued simply for:
- writing code
- implementing APIs
- producing features
Increasingly, the role becomes:
- systems thinking
- defining constraints
- verification
- evaluation
- operational judgement
- critical thinking
AI may generate implementation.
But humans still own:
- intent
- accountability
- architecture
- trust
- risk
And ultimately:
Engineers remain responsible for the outcomes of the systems they deploy.
Engineering Excellence Is No Longer Optional
The organisations that benefit most from AI will not necessarily be the ones generating the most code.
They will be the organisations that:
- maintain engineering discipline
- invest in observability
- manage technical debt
- build strong verification systems
- encourage critical thinking
- continuously improve their delivery systems
Engineering excellence is becoming a force multiplier.
And AI is accelerating the gap between mature engineering organisations and struggling ones.
Final Thought
AI is making software generation easier than ever.
But software generation was never the hardest part of engineering.
The difficult part has always been:
- understanding systems
- managing complexity
- reducing risk
- delivering value
- maintaining trust
AI does not remove those responsibilities.
If anything:
It makes them more important than ever.
Key Takeaways
- AI-generated code increases technical debt risk
- More code does not equal more business value
- Observability and verification are now critical
- Engineering excellence becomes a competitive advantage
- Critical thinking matters more in probabilistic systems
- Unmanaged code creates major operational and security risk
- AI amplifies both good and bad engineering practices
