There’s no shortage of opinion about AI in software engineering right now.
Depending on who you listen to, we’re either witnessing the death of the profession — or the start of a golden age.
As with most things in technology, the truth is far more interesting.
We've been reflecting on how AI and GenAI collide with the ideas behind The Value Flywheel Effect, particularly clarity of purpose, sustainable value, and long-term thinking. Out of those conversations emerged six recurring myths we keep hearing from engineers, leaders, and organisations alike.
Myth 1: “Software engineering is dead”
This one shows up everywhere — usually phrased as “AI can write all the code now”.
But writing code has never been the point.
Software engineering has always been about solving problems, not typing syntax. Code is just the mechanism. The real work lives in:
- Understanding user needs
- Navigating constraints
- Making trade-offs explicit
- Designing for operability, cost, and change
AI doesn’t remove that work. It amplifies the consequences of getting it wrong.
In fact, many of the barriers to good engineering are being lowered. Tooling can now help with observability, testing, resilience, and documentation — areas that were previously time-consuming or inconsistently applied.
Engineering isn’t dying.
It’s becoming more visible.
Myth 2: “My skills will become irrelevant”
This fear is understandable — but it’s also misplaced.
Yes, some tasks are being automated:
- Boilerplate code
- Mechanical refactoring
- Mundane documentation updates
But those were never the skills that differentiated great engineers.
What remains essential — and increasingly valuable — is:
- Domain expertise
- Systems thinking
- Decision-making under uncertainty
- The ability to explain why, not just how
If your role is about delivering real value and solving real problems, AI doesn’t replace you — it forces you up the value chain.
The engineers who struggle will be the ones who refuse to evolve, not the ones who learn to leverage new tools thoughtfully.
Myth 3: “The quality isn’t good enough”
You’ll hear this a lot:
“It’s just AI slop.”
Sometimes that’s true.
But it’s also missing the bigger picture.
These tools are evolving at a pace we rarely see in technology. The uncomfortable truth is this:
The AI tools you’re using today are the worst they will ever be.
Quality isn’t just about model output — it’s about:
- Constraints
- Standards
- Context
- Operational requirements
Engineering standards exist for more than aligning individual skill levels. They protect compatibility, compliance, cost control, and long-term ownership. AI-generated software still has to live in the real world, and the real world has failure modes.
Quality doesn’t disappear.
It just becomes your responsibility to enforce it differently.
Myth 4: “The model understands the problem”
This is one of the most dangerous misconceptions.
Large language models don’t understand problems. They predict patterns. They reflect the shape of the input you give them — nothing more.
That makes context critical.
Defining the problem well has always been a core engineering skill, and AI makes that skill even more important. The tools can help surface considerations you might miss — performance, security, scalability — but they can’t decide what actually matters.
That judgement still belongs to humans.
Critical thinking doesn’t go away in an AI world.
It becomes the primary differentiator.
Myth 5: “I’ll be forced to use AI”
Some engineers push back hard on this — often because they’ve seen AI used irresponsibly.
That reaction is rational.
The mistake is treating AI as a workflow rather than a capability. A healthy engineering process should be explainable without mentioning AI at all. AI should support the process, not replace it.
Unbounded agents, excessive privileges, and poorly understood tooling introduce serious security and supply-chain risks. Giving an agent unrestricted access to systems is the modern equivalent of handing over the keys to production — without an audit trail.
You won’t be forced to use AI.
But you will be forced to understand it.
Understanding its risks, costs, and failure modes is now part of being a responsible engineer.
Myth 6: “We’ll need fewer engineers”
This one sounds logical — and history tells us it’s wrong.
Efficiency doesn’t reduce demand. It increases it.
This is classic Jevons Paradox: when something becomes cheaper or easier, usage expands. Jevons' original example was coal, where more efficient steam engines drove total consumption up, not down. AI lowers the barrier to creating software, which means:
- More products
- More experiments
- More features
- More complexity
And complexity always pulls engineering back in.
Someone still has to make systems scalable, reliable, secure, and sustainable. Someone still has to ensure change doesn’t break everything.
The result isn’t fewer engineers — it’s more demand for good ones.
What This Means for Engineers and Leaders
AI doesn’t eliminate software engineering.
It strips away excuses.
You can move faster — but only if you understand where you’re going.
You can automate more — but only if you know what matters.
You can build more — but only if you can sustain it.
The teams that succeed won’t be the ones chasing hype.
They’ll be the ones applying clarity of purpose, constraints, and discipline to a much more powerful set of tools.
That’s not the end of software engineering.
It’s the next escalation.
