AI didn't replace our engineers. It replaced what our engineers do.
A year ago, I spent hours writing code. Now I spend hours reviewing it. The AI writes. I architect, criticize, and catch what it misses.
This isn't a story about AI taking jobs. It's about jobs changing shape. If you're still thinking about AI as a threat or a toy, you're missing the shift.
Here's how we actually use AI at DevDash Labs — and where we don't trust it at all.
We Use Three Tools. That's It.
Cursor for autocomplete. It's the best at predicting what you're about to type. Fast, unobtrusive, rarely wrong.
Claude Code for planning and building. When we need to scaffold a feature or work through a complex refactor, Claude Code handles the heavy lifting. It thinks before it codes.
Codex for criticism. It's faster than Claude, doesn't over-explain, and catches bugs we missed. We use it as a second set of eyes, not a first pair of hands.
We tried others. These three earned their place. Everything else added noise.
What AI Is Bad At
AI can't design systems. It doesn't know your codebase the way you do. It doesn't understand why you split things a certain way or why that pattern exists.
Give it a component to build and it'll dump everything into one file — hundreds of lines, unscannable, unreusable. Or it'll do the opposite: refactor into twelve files when three would do. It optimizes for completion, not maintainability.
AI can't write docs either. We tried. The results were bloated with emojis, redundant explanations, and code blocks nobody asked for. The whole point of our docs is fast scanning for team leads. AI wrote novels instead.
AI doesn't respect your standards. You can give it guidelines, patterns, specs. It'll follow them — mostly. Then it'll quietly drift. Small violations compound. By the time you notice, you're debugging decisions you never made.
How We Actually Work With AI
We break work into phases. Big features get split into steps. AI handles one step at a time. This keeps context tight and mistakes traceable.
We provide context upfront. Before AI touches code, we feed it: official docs, our internal guides, relevant blog posts, package documentation. "Learn this first" is a real prompt we use.
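A "learn this first" prompt might look something like the sketch below. Every path and item here is invented for illustration; it is our guess at the shape of such a prompt, not DevDash's actual material:

```
Learn this first, before proposing any code:
1. The official docs for the framework this feature lives in
2. Our internal conventions guide (hypothetical path: docs/conventions.md)
3. The package documentation for each library this feature touches

When you're done, summarize the constraints back to me.
Then propose a plan. Do not write code yet.
```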
We review every plan before execution. AI proposes, we approve. If the plan has issues, we fix them before a single line gets written. This is the step most teams skip, and it's where compound errors start.
We never accept all changes at once. Step by step. Review each diff. This takes longer upfront and saves hours later.
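The diff-by-diff discipline above can be sketched with plain git. This is a minimal, self-contained example (the repo, file, and content are invented) that simulates an AI edit landing in the working tree and inspects it before anything is staged:

```shell
set -e

# Stand up a throwaway repo with a committed baseline.
tmp=$(mktemp -d)
cd "$tmp"
git init -q
printf 'original line\n' > app.py
git add app.py
git -c user.email=you@example.com -c user.name=you commit -q -m "baseline"

# Simulate an AI edit arriving as an unstaged change.
printf 'ai-generated line\n' >> app.py

# Check the blast radius first, then read the actual hunks.
git diff --stat
git diff app.py
```

In a real session, `git add -p` then lets you stage only the hunks you accept and throw back the rest, instead of accepting every change at once.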
The Compound Error Problem
Here's what happens when you skip review:
AI makes a small architectural choice you didn't catch. That choice shapes the next three files. Those files inform the next feature. By week two, you're maintaining a codebase designed by autocomplete.
The fix isn't "don't use AI." The fix is review early, review often. Catch drift before it compounds.
The New Job Description
The developers who thrive won't be the best coders. They'll be the best critics.
Your job now:
- Architect systems before AI builds them
- Set standards and enforce them relentlessly
- Review plans, not just code
- Test what AI writes — it's confident but not careful
- Understand security — AI doesn't know what it costs when things break in production
You don't need to memorize syntax anymore. You don't need Stack Overflow or hours spent digging through documentation. AI handles that.
What you need is taste. Judgment. The ability to look at AI's work and say: this isn't good enough.
The Shift
AI replaced me as a coder. It made me an architect.
That's not a loss. It's a trade. Less typing, more thinking. Less syntax, more structure. Less doing, more deciding.
The paradigm shifted. The developers who shifted with it are building faster than ever.
The ones who didn't are still arguing about whether AI is coming for their jobs.
It already came. It just didn't take what they expected.