The toolkit that makes Claude Code and Codex feel like Lovable
Every file in the TNDD toolkit explained. What it does, what Lovable feeling it recreates, and why it matters.
By The Non-Developer Developer
A lot of people who move off Lovable say the same thing a few weeks later.
"It's technically better. But it doesn't feel as good. I kind of miss how Lovable works."
I felt this too. For longer than I want to admit.
What I missed wasn't the UI generation. It was the loop - describe what you want, get a clear plan, watch it execute, receive a plain English explanation of what just happened. That feeling of being connected to your own product even when you don't understand every line of code.
Claude Code and Codex don't give you that out of the box. So I built a system that does.
This post walks through every file in the TNDD AI Builder Workflow Toolkit, what Lovable feeling it recreates, and why it's there. By the end you'll understand exactly what you're installing and what problem each piece solves.
What you actually miss about Lovable
When people say they miss Lovable after moving off it, they don't usually mean the UI generation. They mean the loop.
You describe what you want. Lovable makes a plan. It executes cleanly. Then it tells you what it did - in plain English, in terms you can follow, connected to what it means for your actual product.
That loop keeps you in control even when you don't understand every line of code. It's approachable. It's connected. For someone more on the product side than the developer side, that feeling matters as much as the code itself.
But here's what I found: the Lovable experience isn't a feature of Lovable. It's a workflow. And you can install that workflow into any tool.
The toolkit is that workflow - packaged, ready to install, about ten minutes to set up.
The install prompts - start here
- INSTALL-PROMPT-CODEX.md
- INSTALL-PROMPT-CLAUDE-CODE.md
Before anything else - you don't need to manually place the files. The install prompts do it for you.
Copy the contents of whichever file matches your tool. Paste it into Claude Code or Codex at the root of your repo. The agent reads your codebase, installs every file in the right place, fills in your project-specific details, and confirms what it did.
The Claude Code version even reads your package.json and fills in your actual stack and commands so your CLAUDE.md is accurate to your project from day one.
This is how Lovable works - you describe what you want, it handles the setup. Same principle here.
The core files - recreating the Lovable loop
- AGENTS.md (Codex / Cursor / Windsurf)
- CLAUDE.md (Claude Code)
These are the two most important files in the toolkit. They are what recreates the Lovable loop in Claude Code and Codex.
Drop either file in your repo root and it loads automatically at the start of every session. No special prompts, no setup, no remembering.
They recreate the "here's what I'm about to do" feeling Lovable gives you before it builds.
Before writing a single line of code, the agent has to complete five steps and show them to you:
1. Inspect - review the current state of relevant files
2. List files - every file it will read or modify
3. Propose path - the approach in plain English
4. Scope traps - what could silently expand this task
5. Will not touch - explicit declaration of what's out of scope
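As a sketch, here's roughly what that pre-flight block looks like in practice. The task and file names below are made up for illustration - yours will come from your own repo:

```markdown
## Pre-flight: add "archive project" button

1. Inspect - reviewed src/pages/Projects.tsx and src/hooks/useProjects.ts
2. Files - will modify src/hooks/useProjects.ts; will read src/lib/supabase.ts
3. Proposed path - set an archived flag on the update call; hide archived rows in the list
4. Scope traps - tempting to also build an "unarchive" flow; not doing that here
5. Will not touch - database migrations, auth, anything outside src/
```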
You see the plan. You can read it, approve it, push back on it. Nothing moves until you say so. That's exactly what Lovable does - and it's exactly what raw Claude Code and Codex skip unless you remember to ask.
They also recreate the "here's what I just did and why" feeling Lovable gives you after it builds.
The agent can't mark a task complete until it produces all three:
1. Technical summary - what changed, what was checked, what the result was
2. Plain-English explanation - what changed, why it mattered, what it means for your app in terms you can follow
3. What to learn from this - the concept involved, a rule of thumb, how to avoid the issue next time
Here's what that looks like on a real bug - a Supabase query returning empty results even though the user was logged in.
Without the workflow:

```
Modified src/hooks/useProjects.ts
Changed .eq('user_id', user.id) to .eq('user_id', session.user.id)
Ran check:types - passed
```
With the workflow:

```
Plain-English explanation:

The query was looking for your user ID in the wrong place. Your app
stores the logged-in user inside the session object, but the query
was reading from a separate user variable that was sometimes undefined
on first render. That's why you were seeing empty results even when
you were clearly logged in - the data was there, the query just
couldn't find it yet.

What to learn from this:

In Supabase with React, always pull the user ID from session.user.id.
The session is what Supabase keeps in sync. The user variable can lag
behind on first render and silently return undefined.
```
Same fix. Same code. Completely different experience of what just happened.
That's Lovable's plain English output - running in Claude Code and Codex, on your own stack, at $20 a month.
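To see why that fix works, here's a stripped-down TypeScript model of the first-render race. The types and names here are mine, for illustration - this is the shape of the bug, not code from the toolkit:

```typescript
// Minimal model of the bug: a derived `user` variable can lag a render
// behind, while the session object is what Supabase keeps in sync.
type Session = { user: { id: string } };

// Returns the ID each version of the query would filter on.
function userIdForQuery(session: Session, user?: { id: string }) {
  return {
    buggy: user?.id,        // undefined on first render -> query matches nothing
    fixed: session.user.id, // always defined while logged in
  };
}

// First render: the session exists, but the user variable hasn't settled yet.
const session: Session = { user: { id: "abc-123" } };
const ids = userIdForQuery(session, undefined);
console.log(ids.buggy); // -> undefined
console.log(ids.fixed); // -> abc-123
```

Same data, same login state - the only difference is where the ID is read from.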
The additional Codex files - how the system connects
If you're using Codex, you get four additional files. Here's what each one does and why it exists.
.codex/config.toml - the wiring
```toml
project_doc_fallback_filenames = [
  "codex-task-start.md",
  "AI_TASK_TEMPLATE.md",
  "PLANS.md"
]
```
This tells Codex to auto-discover the three instruction files. Without it, you'd need to reference them in every prompt. With it, they load automatically every session - exactly like Lovable's project rules load automatically when you open a project.
PLANS.md - making planning automatic and structured
Both Codex and Lovable can plan before they build - Codex has a plan mode, Lovable uses it by default. The difference is that in Codex, structured planning only happens if you remember to ask for it, and the quality of the plan depends on how much you push for it.
PLANS.md removes the "remember to ask" step and defines exactly what a valid plan must contain: a plain-English explanation, technical steps, a risk check, and a verification plan. AGENTS.md tells the agent it must plan. PLANS.md tells it precisely what the plan needs to include - so you never get a vague one-liner passed off as a plan.
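As an illustration - the wording below is mine, not lifted from the file - a plan that satisfies those four parts might look like:

```markdown
## Plan: fix empty project list

**Plain English:** The list query reads the user ID from the wrong place, so it
comes back empty on first load. I'll point it at the session instead.

**Technical steps:** Update the filter in src/hooks/useProjects.ts to use
session.user.id. No other files change.

**Risk check:** Touches a data query but not auth itself; no schema changes.

**Verification:** Run verify:quick, then load the projects page while logged in.
```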
codex-task-start.md - the operating procedure
Loaded automatically before every task. Defines how every session begins - the pre-flight steps, what counts as a valid plan, what must be declared out of scope, and the full output structure required when a task finishes.
This is what standardises the experience across every task and every session. Consistent and predictable - the way Lovable always feels consistent.
AI_TASK_TEMPLATE.md - the output standard
Defines the exact structure for how every completed task is reported back to you. Technical summary, plain-English explanation, what to learn from this - always in the same format, always complete.
Without this file the format varies session to session. With it, every task output looks the same every time. That consistency is what builds trust in the tool - which is exactly why Lovable feels trustworthy even when you don't know what it's doing under the hood.
The CLAUDE.md modular rules - protecting what matters
Claude Code specific. Three files that load automatically based on context.
database.md - recreates the "don't touch my database without asking" safety Lovable enforces automatically. Never modify existing migration files. Never push to Supabase directly from the agent. All deployments go through CI. Flags anything touching auth or user data for explicit review.
auth.md - the file that stops the agent quietly breaking your login flow. Requires explicit risk flagging before any auth file is touched. Forces the agent to state what will change and what could break - then wait for confirmation before proceeding.
testing.md - enforces the verification order: `verify:quick` → `check:types` → `build` → full test suite only when explicitly asked. Never retry a failing command without changing the code first.
The dev loop scripts - getting the speed back
- scripts/dev-loop.ps1 (Windows)
- scripts/dev-loop.sh (Mac / Linux)
One of the things people miss about Lovable is the speed. You describe something, it builds, you move on.
Out of the box, Claude Code and Codex default to running a full build to verify every change. On a medium-sized project that's 30 to 40 seconds per attempt. The build fails, the agent retries, fails again, burns another 40 seconds. A task that should take five minutes takes forty.
The dev loop scripts fix this by enforcing a smarter verification order. They check what's available in your package.json and run the lightest option first:
- `verify:quick` - if it exists, run it (type check only, seconds)
- `check:types` - if `verify:quick` doesn't exist
- `build` - only if neither of the above exists
Add these two lines to your package.json scripts:
```json
"check:types": "tsc --noEmit",
"verify:quick": "npm run check:types"
```
Type check only. Seconds not minutes. The agent uses this instead of defaulting to a full build on every small change. The speed comes back.
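The selection logic is simple enough to sketch. Here's the idea in Node/TypeScript - this is not the shipped dev-loop.ps1/dev-loop.sh, just an illustration of the fallback order, assuming your scripts live in package.json:

```typescript
// Sketch of the dev-loop fallback: run the lightest verification that exists.
type PackageJson = { scripts?: Record<string, string> };

function pickVerifyCommand(pkg: PackageJson): string {
  const scripts = pkg.scripts ?? {};
  if (scripts["verify:quick"]) return "npm run verify:quick"; // type check only - seconds
  if (scripts["check:types"]) return "npm run check:types";   // next lightest option
  return "npm run build";                                     // nothing lighter exists
}

// A project that defines check:types but not verify:quick:
console.log(pickVerifyCommand({ scripts: { "check:types": "tsc --noEmit" } }));
// -> npm run check:types
```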
The Lovable Handoff Guide - the bridge
LOVABLE-HANDOFF.md
This file is specifically for people moving off Lovable - the bridge between where you are and where you want to be.
The core problem: when you export from Lovable, your schema exists in Supabase but there are no migration files in your repo. The moment you start using Claude Code or Codex to build features, you end up with three sources of truth that disagree - the live Supabase schema, the migrations in your repo, and what your code assumes. The agent breaks things trying to fix a mismatch it can't see.
Four steps before you run the install prompt:
1. Pull your schema as a baseline migration: `supabase db pull`
2. Set up your `.env` file - Lovable injects variables its own way; your local setup needs them explicitly
3. Verify the local dev server actually runs
4. Make one clean baseline commit - your restore point if anything goes wrong
Fifteen minutes. Prevents a day of debugging. Run it before anything else.
The five printable cheat sheets
Moving off Lovable means working in a real development environment for the first time. Terminal. Git. SQL. Chrome DevTools. These feel intimidating until you've used them a few times.
The cheat sheets are one-page reference cards for each:
- Git & GitHub - the commands you'll use every day, the PR workflow explained
- Terminal - npm, Supabase CLI, reading output, common errors decoded
- SQL for Supabase - the debugging queries, the RLS silent failure checklist
- Chrome DevTools - Network tab, Console, status codes, five-step debug workflow
- AI Code Understanding Prompts - eight templates for using Claude to understand your code, not just fix it
Print them. Stick them above your desk. The PDF versions are designed to stay readable printed at A4 - not walls of text, just what you actually need while you're building.
The complete system
Codex / Cursor / Windsurf:
- config.toml → auto-discovers instruction files every session
- codex-task-start.md → loaded first, defines how every task begins
- PLANS.md → defines what a valid plan must contain
- AGENTS.md → defines how the agent behaves throughout
- AI_TASK_TEMPLATE.md → standardises execution and reporting
- dev-loop scripts → lightweight verification, seconds not minutes
Claude Code:
- CLAUDE.md → project rules, auto-loaded every session
- ~/.claude/CLAUDE.md → global defaults across all repos
- .claude/rules/database.md → loads when working with database files
- .claude/rules/auth.md → loads when near authentication code
- .claude/rules/testing.md → enforces verification order
- dev-loop scripts → same as above
What this gives you
Lovable handles the frontend. Claude Code and Codex handle everything else - logic, backend, database, bugs.
And with this toolkit installed, both feel like Lovable to work with.
Clear plan before every task. Clean execution. Plain English explanation after every task. A record of what changed and why. The sense of being in control of your own product.
The things you were worried about losing when you left Lovable - you don't lose them. You install them into the new setup.
The bill for everything outside Lovable stays at $20 a month.
Download the TNDD AI Builder Workflow Toolkit - free, everything described in this post included.