
Everything Changes Now

With the advent of AI tooling there's been talk about the end of the junior developer, but over the past three months I've become increasingly concerned that senior developers may be at risk too.

TL;DR

  • AI agents now produce high-quality code, tests, and documentation; teams must redesign workflows to use them effectively.
  • Expect smaller, more agent-driven teams, a large increase in agent-created code, and costly failures where organisations don't adopt proper guardrails.

How my approach to agentic development has changed

This time last year I was using Cursor, Claude, and some rudimentary, self-developed AI agents to build applications. It definitely wasn't perfect — I couldn't leave them to run unattended for more than a few minutes at a time.

My experience and knowledge of how to use these tools were limited; I now realise I was merely "vibe coding" — refining prompts, trying to do too much in a single context window, and watching quality degrade as the context filled up.

I think many engineers still use AI tooling this way: they try to get the tool to do what they want without understanding its underlying principles, limitations, or the areas of knowledge they need to develop.

When I talk with what I call "Developer Classic," they're quick to call out what they term "AI slop." But when I dig into their methodology I often find a naive approach: bolting tools onto existing workflows instead of adapting practices around them.

Let me be blunt: the code, documentation, and tests I can get an AI to produce are superior to what roughly 80% of the teams I've worked with produce. The AI isn't augmenting you — you are augmenting the AI.

In the last three months I went from treating the AI as a very junior engineer that needed constant hand-holding and strict review, to treating it as a senior engineer I can trust. This shift is possible not only because LLMs have improved, but also because of the tooling that supports them (Agents, Claude Code, Opencode, Skills, Oh My Opencode, Superpowers) and methodologies like BMAD that help tease out requirements and constraints. I now spend more time verifying documentation, specifications, and test suites.

I've also started asking the AI to look for edge cases in end-to-end (e2e) tests. Previously I'd begin with the happy path and iterate; now I split each edge case into its own context and assign a sub-agent to work on it — isolated, and not blocking other agents working on the happy path.
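
To make that fan-out concrete, here's a minimal TypeScript sketch. The `runSubAgent` helper is hypothetical — a stand-in for however your tooling actually spawns an agent with a fresh context:

```typescript
// A minimal sketch of the fan-out described above. Each edge case gets its
// own isolated context and its own sub-agent, so no single case blocks the
// others or the happy path. `runSubAgent` is a hypothetical helper: swap in
// however your tooling (Claude Code, Opencode, etc.) actually spawns agents.

type EdgeCase = { name: string; description: string };

async function runSubAgent(task: string): Promise<string> {
  // Placeholder: in practice this would invoke your agent runner with a
  // fresh context containing only `task` and the relevant test scaffolding.
  return `proposed e2e test for: ${task}`;
}

async function coverEdgeCases(cases: EdgeCase[]): Promise<void> {
  // Fan out: one sub-agent per edge case, all isolated, all in parallel.
  const results = await Promise.allSettled(
    cases.map((c) =>
      runSubAgent(`Write an e2e test for edge case "${c.name}": ${c.description}`)
    )
  );

  // A failed case is reported but never blocks the others.
  results.forEach((r, i) => {
    if (r.status === "fulfilled") {
      console.log(`"${cases[i].name}": ${r.value}`);
    } else {
      console.warn(`"${cases[i].name}" failed: ${r.reason}`);
    }
  });
}

coverEdgeCases([
  { name: "empty cart checkout", description: "user submits an order with no items" },
  { name: "expired session", description: "auth token expires mid-checkout" },
]).catch(console.error);
```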

How many projects have we been on where time constraints prevented us from creating the comprehensive tests we wanted?

In the past three months I've:

  • Deployed several non-trivial applications to production.
  • Created several skills I use within Opencode to speed development and daily work.
  • Developed code-review and security agents to catch issues early and prevent larger problems.
  • Hooked agents up to error logs to detect issues, fix bugs automatically, and create PRs for me to review (see the sketch after this list). My next step is enabling verification on staging and, eventually, automatic merging and deployment.
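
Here's a rough sketch of that last bullet's log-to-PR loop. Every function in it (`fetchNewErrors`, `askAgentForFix`, `openPullRequest`) is a hypothetical stand-in for your real log store, agent runner, and Git host:

```typescript
// A rough sketch of the log-to-PR loop. Every function here is a hypothetical
// stand-in: wire them to your real log store, agent runner, and Git host.
// Note the deliberate stopping point: a PR for human review, not an auto-merge.

type ErrorEvent = { id: string; message: string; stack: string };

async function fetchNewErrors(): Promise<ErrorEvent[]> {
  // Stand-in: query whatever holds your production error logs.
  return [];
}

async function askAgentForFix(err: ErrorEvent): Promise<string> {
  // Stand-in: give the agent the stack trace and repo access in a fresh
  // context, and get back the branch containing its proposed fix.
  return `fix/${err.id}`;
}

async function openPullRequest(branch: string, err: ErrorEvent): Promise<void> {
  // Stand-in: call your Git host's API. The PR is the guardrail; a human
  // reviews before anything merges or deploys.
  console.log(`PR opened from ${branch} for error ${err.id}: ${err.message}`);
}

async function logToPrLoop(): Promise<void> {
  for (const err of await fetchNewErrors()) {
    const branch = await askAgentForFix(err);
    await openPullRequest(branch, err);
  }
}

logToPrLoop().catch(console.error);
```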

What I'm seeing matches what other senior engineers who've built agent-centric processes are reporting. This will alarm many engineers: expect significant downsizing as organisations become comfortable with one engineer per repo. Why not more? Coordination becomes a bottleneck: an agent and its sub-agents can generate a rapid stream of changes. If three engineers each drive their own agents, everyone risks being blocked on others' changes or building on shifting assumptions. It isn't going to work well — at least not yet.

I've seen ideas like the agent-mailbox pattern, where each agent listens to topics, but it's unclear how well this will scale.
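
In miniature, the pattern looks something like this toy in-process bus; a real deployment would presumably put an actual broker behind the same interface:

```typescript
// The agent-mailbox idea in miniature: agents never call each other directly.
// They publish to topics, and any agent subscribed to a topic picks messages
// up from its own mailbox whenever it's ready. This is a toy in-process
// version; a real system would put a proper broker behind the same interface.

type Message = { topic: string; from: string; body: string };

class MailboxBus {
  private mailboxes = new Map<string, Message[]>();       // agent -> queued messages
  private subscriptions = new Map<string, Set<string>>(); // topic -> subscribed agents

  subscribe(agent: string, topic: string): void {
    if (!this.subscriptions.has(topic)) this.subscriptions.set(topic, new Set());
    this.subscriptions.get(topic)!.add(agent);
    if (!this.mailboxes.has(agent)) this.mailboxes.set(agent, []);
  }

  publish(msg: Message): void {
    // Deliver to every agent listening on this topic; publishers never block.
    for (const agent of this.subscriptions.get(msg.topic) ?? []) {
      this.mailboxes.get(agent)!.push(msg);
    }
  }

  drain(agent: string): Message[] {
    // An agent empties its mailbox when it is ready to act on the messages.
    const queued = this.mailboxes.get(agent) ?? [];
    this.mailboxes.set(agent, []);
    return queued;
  }
}

const bus = new MailboxBus();
bus.subscribe("review-agent", "schema-changed");
bus.publish({
  topic: "schema-changed",
  from: "migration-agent",
  body: "users table: added column locale",
});
console.log(bus.drain("review-agent")); // the review agent sees the change on its next check-in
```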

Could this force us back to microservice architectures to manage organisational scaling around agents? What does it mean for monoliths and patterns like hexagonal architecture — can they support the same productivity we'd expect with one engineer per repo? It's too early to tell.

What will 2026 look like?

So, to wrap up, here are my thoughts on how 2026 may look for tech:

  • By the end of 2026, most of us won't use an IDE like we used to. Instead we'll use tools such as Claude Code or Opencode, plus interfaces that make reviewing changes, tests, and deployments easier — tools that help define guardrails and verify outcomes.
  • The proportion of agent-created code will rise above 80%.
  • Many engineering orgs will fail because leaders buy subscriptions and tooling without redesigning organisations from first principles for agentic development; the short-term output will often be what we call "AI slop."
  • There will be some very public, very expensive failures by organisations that scale agentic development without appropriate guardrails and processes.