I made Claude Code and Codex feel like Lovable. Here's how.
Working outside Lovable doesn't mean giving up what you love about it. The friendly explanations, the sense of being in control — you don't lose them. You just have to install them into the new setup.
By The Non-Developer Developer
# Site Post — TNDD
**Slug:** made-claude-code-feel-like-lovable
**Category:** Workflow
**CTA:** Download the TNDD AI Builder Workflow Toolkit
---
I still use Lovable. For building frontend fast it's genuinely world class — what would take days to scaffold manually, Lovable does in an afternoon. I'm not here to tell you to stop using it.
But at some point every Lovable project outgrows what Lovable is good at. The logic gets complex. The backend needs proper control. The database needs to be yours. And you start doing some or all of that work outside Lovable — in Claude Code, in Codex, in Cursor.
That's when most people hit the same wall.
Working outside Lovable feels like a step backwards. Not because the tools are worse — they're technically better in almost every way. But because something that felt natural in Lovable suddenly feels cold and disconnected.
This post is about what that something is, why it happens, and exactly how I fixed it.
---
## What you actually love about Lovable
It's not the UI generation. Or not just that.
It's the loop.
You describe what you want. Lovable makes a plan. It executes cleanly. Then it tells you what it did — in plain English, in terms you can follow, connected to what it means for your actual product.
That loop keeps you in control even when you don't understand every line of code. It's why Lovable feels approachable for people who are more on the product side than the developer side. You're not just getting output — you're staying connected to what's happening inside your own app.
When you move to Claude Code or Codex that loop breaks.
You give it a task and wait. It comes back. Something's not quite right, so you send it back. It goes off again. Eventually it finishes and you get a diff, a wall of technical output, or silence. No plan upfront, no explanation after. The task got done, but you have no real sense of what just happened or why.
After a few sessions of this, Lovable's credits start looking reasonable again.
But here's what I figured out after two months of running both:
**The Lovable experience isn't a feature of Lovable. It's a workflow. And you can install that workflow into any tool.**
---
## The two things that make Lovable feel like Lovable
When I broke it down, the Lovable loop has two distinct parts.
**Before the task** — it shows you a plan. You know what it's going to do before it does it. You can approve, redirect, push back. You feel in control.
**After the task** — it explains what happened. In plain English. Not a diff, not a technical summary — an actual explanation you can follow.
Claude Code and Codex don't do either of these by default. They just execute. Fast, capable, but opaque.
The fix was installing both behaviours directly into my repos.
---
## What I installed
Two files. Drop them in the repo root once and they load automatically every session.
`AGENTS.md` for Codex and Cursor. `CLAUDE.md` for Claude Code.
### The before-task fix: a five-step pre-flight
Before writing a single line of code, the agent has to complete all five steps and show them to you:
1. **Inspect** — review the current state of the relevant files
2. **List files** — every file it will read or modify
3. **Propose path** — the approach in plain English
4. **Scope traps** — what could silently expand this task
5. **Will not touch** — explicit declaration of what's out of scope
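As a rough illustration, the pre-flight rule can live as a short section in `CLAUDE.md` or `AGENTS.md`. The wording below is a sketch of mine, not the toolkit's exact file:

```markdown
## Pre-flight (required before any code change)

Before writing any code, post all five steps and wait for approval:

1. **Inspect** — the current state of the relevant files
2. **List files** — every file you will read or modify
3. **Propose path** — the approach, in plain English
4. **Scope traps** — anything that could silently expand this task
5. **Will not touch** — an explicit list of out-of-scope files and features

Do not begin editing until the plan is approved.
```

Because both tools read these files at the start of every session, the rule applies automatically without you having to repeat it in each prompt.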
That fifth step is the one that changed the most for me. The agent commits upfront to its blast radius. You can see exactly what's in play before anything moves. You approve the plan, then it executes — exactly like Lovable.
Tasks that used to drag on while you waited to see what the agent would do now have a clear plan you've signed off on before a file is touched.
### The after-task fix: required teaching output
The agent cannot mark a task complete until it produces all three:
**Technical summary** — what changed, what was checked, what the result was.
**Plain-English explanation** — what changed, why it mattered, what it actually means for your app in terms you can follow without reading the code.
**What to learn from this** — the concept involved, a simple rule of thumb, how to avoid the issue next time.
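The after-task rule can be sketched the same way. Again, this is my illustrative wording, not the toolkit's actual file:

```markdown
## Completion report (required before marking a task done)

1. **Technical summary** — what changed, what was checked, the result
2. **Plain-English explanation** — what it means for the app, no jargon
3. **What to learn from this** — the concept, a rule of thumb, how to
   avoid the issue next time

A task without all three sections is not complete.
```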
This last part is something Lovable doesn't even do. You're not just getting the explanation — you're building genuine understanding of your own codebase over time. After two months of working this way I know my app better than I ever did inside Lovable.
---
## What the difference actually looks like
Here's a real example. I had a Supabase query returning empty results on first load even though the user was logged in.
**Raw output (no workflow):**
```
Modified src/hooks/useProjects.ts
Changed .eq('user_id', user.id) to .eq('user_id', session.user.id)
Ran check:types — passed
```
**With the workflow installed:**
```
Plain-English explanation:
The query was looking for your user ID in the wrong place. Your app
stores the logged-in user inside the session object, but the query
was reading from a separate user variable that was sometimes undefined
on first render. That's why you were seeing empty results even when
you were clearly logged in — the data was there, the query just
couldn't find it yet.
What to learn from this:
In Supabase with React, always pull the user ID from session.user.id.
The session is what Supabase keeps in sync. The user variable can lag
behind on first render and silently return undefined.
```
Same fix. Same code change. Completely different experience of what just happened.
The first version tells you what changed. The second version tells you what it means — and makes sure you won't hit the same thing again.
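That rule of thumb generalises beyond this one bug. Here's a tiny TypeScript sketch of reading the user ID defensively from the session — the helper names are mine for illustration, not from the post's codebase:

```typescript
// On first render the Supabase session may still be null, so treat
// the user ID as optional until the session has actually loaded.
type Session = { user: { id: string } } | null;

// Hypothetical helper: return the user ID only once the session exists.
function userIdFrom(session: Session): string | null {
  return session?.user.id ?? null;
}

// Query guard: skip fetching until there is a real ID to filter on.
function shouldFetch(session: Session): boolean {
  return userIdFrom(session) !== null;
}
```

Gating the query on `shouldFetch` means an early render simply waits instead of silently querying with `undefined` and returning empty results.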
---
## How the full stack works together
This is what a normal working day looks like now:
**Lovable** handles new screens and UI work. If I need a new page, a component redesign, anything visual — Lovable is still the fastest tool for that. I build it there, sync to GitHub, pull locally.
**Claude Code and Codex** handle everything else. Logic, backend, Supabase queries, bug fixes, new features, anything that touches data. With the workflow files installed, both tools now give me a plan before they start and a plain-English explanation when they finish.
**GitHub Actions** is the only thing that touches production. The agent writes code and commits to a branch. I review, merge, CI deploys. The agent never has a direct path to production.
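A minimal sketch of that gate, assuming a `main` branch and a placeholder deploy command — your build and deploy steps will differ:

```yaml
name: Deploy
on:
  push:
    branches: [main]   # only reviewed, merged code reaches this workflow
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm run build
      # Placeholder: swap in your real deploy step
      - run: npm run deploy
```

The agent only ever commits to feature branches, so nothing it writes runs in production until a human has merged it.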
Three tools, clear lanes. Each one doing what it's actually good at.
---
## The workflow files
Everything described in this post is packaged into the TNDD AI Builder Workflow Toolkit — free download, no email required.
It includes:
- `AGENTS.md` — for Codex and Cursor
- `CLAUDE.md` — for Claude Code (project level)
- `claude-global.md` — global defaults for all repos
- `.claude/rules/` — modular rules for database, auth, and testing
- Install prompts for both tools — paste once to set up the workflow in any repo
- Daily prompts — use at the start of every session
- Dev loop scripts for Windows and Mac
- Lovable handoff guide — the steps to take before running the install prompt on a Lovable export
The install prompts do the heavy lifting. Paste the prompt into your tool at the root of any repo, the agent installs the workflow files for you, and you're done. Ten minutes.
---
## If you're just getting started with working outside Lovable
Two earlier posts cover the setup and migration side in detail:
- **[I was spending $400/month on Lovable. Here's how I cut it to $20](#)** — the full stack, why I made the move, and how to set it up from scratch
- **[Already built in Lovable? Here's how to migrate](#)** — step-by-step migration guide including the Supabase schema baseline, environment variables, and the common things that break
This post assumes you're already set up outside Lovable and want to make it feel better to work with. If you're still at the "should I do this" stage, start with the first post.
---
## The thing nobody mentions
Every tutorial about Claude Code or Codex focuses on what they can do technically. How powerful they are. What they're capable of.
Nobody talks about how they feel to use. And for people building on the product side — people who came to Lovable because it made building feel possible — that matters enormously.
The tools are capable. They just need to be told how to talk to you.
Install the workflow. Give them the rules. The capability was already there — now the experience matches it.
---
*Download the TNDD AI Builder Workflow Toolkit — free, includes both Claude Code and Codex versions.*
**[Download the Toolkit]**