Will Bunting

Developing at the Edge: Rapidly Changing Development Habits in the Face of AI

August 11, 2025

For years, I chased tiny efficiency gains — the 1% boosts you get from shaving milliseconds off a keystroke in NeoVim or tuning a split keyboard layout. People would joke that optimizing your dev setup like this was missing the forest for the trees. But I always believed the compounding payoff was real if you were going to be coding for decades.

Now, with AI agents in the mix, the equation has changed entirely. The right workflow tweaks aren’t worth 1% — they can mean 10–20% speed jumps overnight, if you can orchestrate the agents well.

The tools are evolving so quickly that my environment, which once felt finished and stable, is now in constant flux. New habits form, get replaced, and form again. At the center of this new baseline is what I call the Parallel Agent Workflow — running multiple AI coding agents at once, each on a different slice of work, much like a small engineering team.

The Hosted Agent Flow

The hub of my setup is the Claude Code GitHub agent, connected to Linear. Every Linear issue syncs to GitHub, where the agent listens for instructions.

When I want to start something, I create a Linear task, dictate the description by voice (speech-to-text is so good now that it’s faster than typing — and perfection isn’t required when your audience is an AI prompt), and post it.

Voice has quietly become a core input method for me — a shock after years of training myself to type as fast as possible. The agent picks up the issue, spins up a branch, makes changes, and opens a PR. It’s rarely flawless — maybe a dangling import, maybe a missed refactor — but it’s enough to break the inertia.
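My own flow routes through Linear, but the same shape can be sketched directly against GitHub. This is a hypothetical kickoff, not my exact setup: the issue title and body are invented, and it assumes the GitHub CLI (`gh`) is authenticated and the Claude Code GitHub Action is installed on the repo, listening for `@claude` mentions.

```shell
# File the task as a GitHub issue and mention the agent so it picks the work up.
# Title and body are illustrative placeholders.
gh issue create \
  --title "Paginate the invoices list" \
  --body "@claude Add cursor-based pagination to the invoices endpoint and list view."
```

From there the agent's side is the same as the Linear path: it spins up a branch, makes its changes, and opens a PR for review.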

That’s the real value: you don’t have to start from a blank page anymore. The mental activation energy drops dramatically. It’s like a writer being handed a solid first paragraph — the hardest part is over, and finishing becomes much easier.

Unblocking as Muscle Memory

With multiple agents working, the question becomes: what should I do first? In pre-AI days, coding was serial; you could pick off easy, low-impact tasks without penalty.

Now, the smart move is Agent Unblocking — clearing the bottlenecks that prevent other work from running in parallel. If a schema change will unblock ten backend edits, that’s my first target. It’s the same reflex you build managing humans: remove the blockers early so everyone can move.
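The reflex can even be made mechanical. Here is a minimal sketch, with an invented task graph, that ranks tasks by how many downstream tasks each one transitively unblocks — the schema change wins because everything behind it is waiting:

```python
# Hypothetical dependency map: each task lists the tasks it directly blocks.
# Task names are illustrative, not from a real tracker.
BLOCKS = {
    "schema-change": ["backend-edit", "api-types"],
    "api-types": ["frontend-form"],
    "backend-edit": [],
    "frontend-form": [],
    "copy-tweak": [],
}

def unblocked_count(task, blocks=BLOCKS, seen=None):
    """Count how many downstream tasks this task transitively blocks."""
    seen = set() if seen is None else seen
    total = 0
    for child in blocks.get(task, []):
        if child not in seen:
            seen.add(child)
            total += 1 + unblocked_count(child, blocks, seen)
    return total

def unblocking_order(blocks=BLOCKS):
    """Sort tasks so the biggest blockers come first."""
    return sorted(blocks, key=lambda t: -unblocked_count(t, blocks))
```

With the graph above, `unblocking_order()` puts "schema-change" first (it transitively unblocks three tasks), while the isolated "copy-tweak" sinks to the bottom — exactly the call a manager would make by eye.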

The Frontend Slow Lane

AI has flipped my sense of where development is faster. Backend moves quickly now — automated tests and clear success criteria make it agent-friendly. Frontend work, by contrast, remains frustratingly serial.

It’s inherently visual. You have to see the output to judge it, and intermediate states matter. Agents can’t yet self-correct purely from code context. One experiment I’d like to try: wiring Playwright into Claude Code so it can pull and analyze screenshots mid-run.
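A sketch of that experiment, nothing more: after each agent pass, render the page headlessly and save a screenshot the agent could inspect. It assumes Playwright is installed (`pip install playwright && playwright install chromium`); the import is guarded so the sketch degrades gracefully without it, and the URL, branch names, and `shots/` directory are all placeholders.

```python
try:
    from playwright.sync_api import sync_playwright
    HAVE_PLAYWRIGHT = True
except ImportError:
    HAVE_PLAYWRIGHT = False

def screenshot_path(branch: str, step: int) -> str:
    """Name screenshots per branch and iteration so runs can be compared."""
    return f"shots/{branch}-step{step:02d}.png"

def capture(url: str, branch: str, step: int):
    """Render `url` in headless Chromium and save a full-page screenshot."""
    if not HAVE_PLAYWRIGHT:
        return None
    path = screenshot_path(branch, step)
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        page.screenshot(path=path, full_page=True)
        browser.close()
    return path
```

The interesting part isn’t the capture itself — it’s feeding the image back into the agent’s loop so it can judge intermediate visual states the way a human would.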

Until then, frontend changes often set the pace for the whole project. You can’t “parallel” away a UI bug — it needs iterative, hands-on passes.

Tool Evolution: From Avante to Cursor to Terminal AI

I started folding AI into my workflow in the most basic way: copy-pasting code stubs into ChatGPT, pasting results back.

The first real leap was Avante.nvim — inline completions without leaving NeoVim. Then came Cursor, with better multi-file context and the ability to tag exact code sections for reference.

Today, I’m mostly back in the CLI, leaning on terminal-integrated Claude Code agents and hosted cloud agents. Staying in the terminal keeps me close to NeoVim’s speed and my established muscle memory. That said, Cursor’s new agent is worth watching — it’s good enough to pull me back in for specific use cases.

The New Baseline

Before AI, my environment was stable. Now, the ground shifts weekly. Gains aren’t marginal — they’re double-digit — and the challenge is adapting without drowning in the churn.

The irony is that my old belief in optimizing the development environment — a belief many saw as overkill — feels more justified than ever. In a world where AI can magnify the impact of your setup, the payoff from workflow tuning isn’t just cumulative; it’s multiplicative.

Checklist: Best Practices for Parallel Agent Development

Clear bottlenecks first – unblock dependent tasks before touching peripheral ones.

Divide work for simultaneity – agents thrive on small, independent jobs.

Offload long jobs to hosted agents – keep your local loop free for high-leverage work.

Document for the AI – explain why a solution was chosen so future agents can follow suit.

Aim for “good enough” starts – let agents get you rolling; finish yourself.

Treat frontend as the special case – plan for slower, more iterative cycles there.

Continuously refine orchestration – retire tools and habits that no longer earn their keep.

If you’re building with AI, the tools won’t stop shifting anytime soon. That’s the opportunity — and the challenge. The better you get at orchestrating them, the more those 20% overnight gains stop being the exception and start being your baseline.