The Boring Infrastructure Behind Shipping Weekly

Jan 27, 2026

3 min read

We ship weekly at DevDash Labs. Not because we're 10x developers — because we automated the stuff that usually slows teams down.

This isn't a post about AI-assisted coding or the latest framework. It's about the boring, invisible infrastructure that actually lets us move fast.


The Problem: It's Never the Code

Every dev has felt this friction:

  • "It works on my machine"
  • Backend changed the API, frontend didn't know
  • Deploys take 10 minutes, so you batch changes and pray
  • Three months later, nobody remembers why we chose X over Y

We fixed these systematically. Here's the stack.


API Contracts: openapi-ts

We run FastAPI on the backend, Next.js on the frontend. Keeping types in sync manually was painful.

FastAPI generates OpenAPI schemas. We pipe those through openapi-ts to generate TypeScript types and API clients automatically.

Result:

  • Type-safe API calls on the frontend
  • Backend changes surface as TypeScript errors immediately
  • AI coding tools (Cursor, Claude Code) actually understand the API contract
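A sketch of what that wiring can look like, assuming the @hey-api/openapi-ts package (the URL and output path are illustrative, not our exact setup):

```typescript
// openapi-ts.config.ts — a minimal sketch, assuming @hey-api/openapi-ts.
import { defineConfig } from "@hey-api/openapi-ts";

export default defineConfig({
  // FastAPI serves its generated OpenAPI schema at /openapi.json by default.
  input: "http://localhost:8000/openapi.json",
  // Generated TypeScript types and the API client land here.
  output: "src/client",
});
```

Wire `npx openapi-ts` into a package.json script and regenerate whenever the backend schema changes; stale frontend calls then fail `tsc`, not production.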

Two Environments: Dev and Main

Every feature deploys first to dev, our staging environment, before it reaches main. We've caught:

  • Auth flow bugs (onboarding broken, OAuth provider misconfigs)
  • AI agent failures (429 rate limits, missing context, no exponential backoff)
  • Silent failures that would have hit production users

Nothing clever here. Just discipline.


Feature Flags: PostHog

AWS Amplify deploys take ~10 minutes. Feature flags take seconds.

We use PostHog for boolean flags. Ship code dark, enable when ready. If something breaks, kill it instantly without redeploying.

This changed how we think about releases. Shipping is no longer an event — it's continuous.
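The "ship dark, enable when ready" pattern boils down to a boolean gate. A minimal sketch (flag and component names are hypothetical; in practice the value comes from posthog-js via `posthog.isFeatureEnabled`):

```typescript
// Both code paths ship in the bundle; a boolean flag decides which one runs.
// Here the flags are a plain object so the pattern is visible without a
// network call — in our app, posthog-js supplies them.
type Flags = Record<string, boolean>;

function renderDashboard(flags: Flags): string {
  // Kill switch: flipping "new-dashboard" off in PostHog reverts behavior
  // in seconds, with no redeploy.
  return flags["new-dashboard"] ? "new-dashboard" : "legacy-dashboard";
}
```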


Linting and Formatting: No Debates

Backend (Python):

  • ruff for linting and formatting
  • uv for package management (faster than pip; dependencies live in pyproject.toml, pinned in uv.lock)
  • Pre-commit hooks enforce standards

Frontend (Next.js):

  • eslint with strict rules (including Tailwind and folder structure conventions)
  • prettier for formatting
  • husky for pre-commit enforcement
  • tsc compilation catches type errors

Code reviews focus on logic, not style. The tools handle style.
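As one illustration, a strict ESLint flat config can be sketched like this, using typescript-eslint's shipped presets (our real config also carries Tailwind and folder-structure rules, which aren't shown):

```typescript
// eslint.config.ts — a minimal sketch of a strict, type-checked setup.
import tseslint from "typescript-eslint";

export default tseslint.config(
  // Type-aware strict rules catch bugs that plain syntax linting misses.
  ...tseslint.configs.strictTypeChecked,
  {
    languageOptions: {
      // Let the linter discover the nearest tsconfig automatically.
      parserOptions: { projectService: true },
    },
  },
);
```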


Spec-Driven Development + ADRs

Before we build, we write:

  • PRDs — what we're building and why
  • Specs — how it should work
  • ADRs — what we decided and what we rejected

These aren't just docs. They're context for AI tools. When Claude Code or Cursor has the spec, it writes better code. When a human reviews, they know the intent.

Six months later, we know why decisions were made. No archaeology required.


What We Don't Automate

Manual code review: AI writes code. Humans review it. Always. AI-assisted development without oversight creates compounding errors. Small mistakes stack up until you're debugging ghosts.

100% test coverage: We test critical paths. Not everything. Full coverage sounds good until you're maintaining 400 tests that break every refactor.


The Result

  • Weekly deploys (sometimes more)
  • No lint errors, type errors, or style debates in production
  • Bugs caught in staging, not by users
  • Faster onboarding — new devs read the ADRs, run the setup, and contribute on day one
  • More time building features, less time fighting infrastructure

None of this is revolutionary. It's just boring infrastructure, consistently applied.

The teams that ship fast aren't working harder. They've just removed the friction that slows everyone else down.


Gopal Khadka