I was reconciling last month's tooling spend and noticed the Claude Code line item had been climbing since October. I scrolled back through the billing history expecting three or four months. It was closer to seven. It was my bill, not my memory, that told me I had quietly stopped authoring code sometime last fall.
I did not plan this post. I planned a spreadsheet.
But once I saw the number, I sat with it for a few days. Then I went back through my commits, my PRs, and my terminal history to check whether the billing logs were lying. They were not.
The claim, stated plainly
For roughly six months, I have not opened a file and typed business logic. I do not sit down and write a controller. I do not hand-roll a Vue component. I do not type out a migration.
What I do is decide, specify, review, reject, approve, and paste error output back into a running agent.
The honest caveats: I still tweak a .env occasionally. I still resolve merge conflicts by hand when the diff is small enough that doing it myself is faster than explaining the conflict. I have nudged a line or two when I was mid-review and the fix was one character. Those exceptions are the rounding error, not the pattern.
The pattern is that the keyboard shortcut I use most often during a workday is now the one that approves a tool call.
What this post is not
This is not an "I vibe-coded a SaaS in a weekend" post. I have not built a greenfield project in six months. Everything I have shipped has been on client work or on long-running side projects with real history behind them.
This is not an argument that AI is replacing developers. The opposite. I am arguing that fourteen years of pattern-matching, architectural taste, and knowing where the bodies are buried in a legacy codebase is the thing that makes this workflow function at all. Strip out the senior judgment and the whole setup collapses into a very expensive way to generate plausible-looking bugs.
This is also not a pitch for any particular tool. Claude Code is what I use. The interesting question is not the tool. It is what happens to the work when the typing stops.
The terrain: why legacy is the real test
Most of the "I no longer write code" posts I have seen come from people shipping greenfield. Fresh repo, modern framework, a single developer who owns every decision. That is easy mode for an AI coding agent. There are no hidden invariants, no half-finished migrations, no decisions made in 2017 by someone who no longer works there.
My work is the opposite shape. A typical week looks like a Laravel 8 monolith a client inherited from a previous agency, a Vue 2 dashboard that is partway through a Vue 3 migration that stalled eighteen months ago, and a side project or two where I am the only person who remembers why a particular abstraction exists.
Legacy codebases are the hardest terrain for an AI agent because the code itself lies. The file structure suggests one pattern. Three routes actually use that pattern. The other forty route through a helper someone wrote during a crunch. The models look like standard Eloquent until you notice the global scope that silently filters out soft-deleted rows except in one admin view. There is a service class that looks dead and is called by a cron job defined in a YAML file nobody opens.
Claude, on its own, will cheerfully write code that matches the apparent pattern. That code will also silently break the admin view nobody tested.
This is where senior judgment earns its keep. I know which files lie. I know which abstractions are load-bearing and which are scaffolding that never got torn down. I know that the OrderService is not the place to add logic because the actual business rules moved to a job queue three years ago and nobody updated the class docblock.
My job, six months in, is to carry that knowledge into the loop so Claude's output lands in the right place. I am not doing less thinking. I am doing different thinking, earlier, and writing less of it down in code.
If you want the tactical version of this argument, I wrote it up in my practitioner's guide to Claude Code on legacy codebases and in the legacy modernization post.
What I actually do now
The work has reshaped into three roles I move between, sometimes inside the same hour.
Architect
Before any code gets generated, I decide what is being built and how it fits the existing system. For a recent change on a Laravel codebase I inherited, the client asked for a new report endpoint. Fifteen minutes of reading and thinking got me to: this needs a dedicated query class, not another method on the bloated ReportController. It needs to respect the existing tenant scope. It should cache at the query level, not the response level, because three other reports will reuse the same underlying data.
None of that lives in code yet. It lives in a spec I hand to Claude. The spec names the files to touch, the classes to create, the interfaces to match, and the patterns to avoid. When the spec is right, the generated code usually lands on the first or second pass. When I shortcut the spec, I pay for it in rejected diffs.
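To make the shape concrete, here is a sketch of what that spec might look like. Every file, class, and trait name in it is hypothetical, illustrating the level of detail rather than the client's actual code:

```markdown
<!-- illustrative spec; all names hypothetical -->
## Task: monthly revenue report endpoint

**Create**
- `app/Queries/MonthlyRevenueQuery.php` — a dedicated query class;
  no new methods on `ReportController`
- `GET /api/reports/monthly-revenue` following the existing report
  route pattern

**Constraints**
- Respect the existing tenant scope — never query the orders table
  unscoped
- Cache at the query level with a tenant-aware key, not at the HTTP
  response level; other reports will reuse this data
- Match the return shape of the existing report queries

**Do not**
- Add methods to `ReportController`
- Introduce new dependencies
```

The "Do not" section is not decoration. It is where most of the rejected-diff time gets saved.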
The architect's job used to involve typing. Now it involves writing clearly enough that a literal, context-limited agent can execute without going off the rails.
Reviewer
Every diff Claude produces gets read before it lands. I reject a lot.
The rejections fall into a few buckets. Claude invents a helper that already exists two directories over. Claude uses the modern Laravel idiom in a codebase pinned to an older version. Claude adds a test that passes by mocking the exact thing that needed to be tested. Claude silently changes a return type in a way that will break a consumer I know about and the agent does not.
None of these are catastrophic on their own. All of them are the kind of thing that, compounded across fifty PRs, will degrade a codebase fast. The reviewer role is where six months of doing this has sharpened the most. I read generated code differently than I read human-written code. I am looking for a different set of failure modes.
Orchestrator
For anything non-trivial, I am running more than one agent at a time. One is refactoring. One is writing tests against the refactored surface. One is checking the result against a style guide I keep in a CLAUDE.md at the repo root.
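The style guide the checking agent reads is nothing exotic. A trimmed, hypothetical excerpt of the kind of CLAUDE.md I keep at the repo root:

```markdown
<!-- hypothetical excerpt; rules vary per repo -->
# Project conventions

- New query objects go in `app/Queries/`, one class per report
- Tests use the existing factories; never mock the class under test
- Controllers stay thin: validation plus a single query or service call
- Match the surrounding file's style, even where it is dated
```

Short, declarative, and enforced on every pass. The moment a rule needs a paragraph of explanation, it belongs in the spec instead.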
Managing context across those agents is its own craft. I wrote about the mechanics of running agent teams in parallel and about keeping token usage under control in earlier posts. The short version: the orchestrator's job is to design the workflow so each agent has exactly the context it needs and nothing else. Too little context and the output is wrong. Too much and it is slow, expensive, and drifts.
A day in the loop
Here is a representative Tuesday.
Morning. A client pings me about a bug in a Laravel app I have been maintaining for about a year. Orders with a specific discount combination are calculating tax incorrectly. I open the codebase, re-read the relevant TaxCalculator class for two minutes, then describe the bug to Claude with the reproduction steps and the three files I know are involved. I ask it to propose a fix, not implement one.
Claude comes back with a theory that is close but wrong. It thinks the issue is a rounding error in the discount application. I know from last year that this class already had a rounding fix and the real issue is almost certainly in the order-of-operations between coupon stacking and tax. I tell it that. It comes back with a correct diagnosis and a patch. I read the patch, notice it does not account for a legacy tenant flag that disables coupon stacking entirely, and reject it. Second pass, correct. I run the tests, one fails because a fixture assumed the old behavior. I ask Claude to update the fixture with a comment explaining the change. Merge.
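To make the order-of-operations point concrete with made-up numbers (the client's actual rates and rules are not these):

```text
subtotal                           100.00
20% coupon applied first            80.00
10% tax on the discounted 80.00      8.00   -> total 88.00

Buggy path: tax computed on the pre-discount subtotal
10% tax on 100.00                   10.00   -> total 90.00
```

Two defensible-looking code paths, a two-dollar difference on every discounted order. That is the class of bug a rounding fix will never touch.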
Elapsed time, maybe forty minutes. Lines I typed into a source file, zero.
Afternoon. A side project. I am reshaping how one of my own tools handles authentication. I spend twenty minutes writing a spec that covers the new flow, the endpoints that need to change, and the test coverage I want. I hand it to Claude, go get coffee, come back to a draft PR. I read it, find two things I dislike, reject with reasons. Second draft. I merge the first three commits, ask for changes on the fourth, get the revision, merge that too.
This is the texture. Short bursts of decision. Long stretches where I am reading, thinking, and waiting. The parts that used to be typing are now reading and specifying. The parts that used to be debugging are now usually triage and direction.
I wrote about the day-to-day Laravel workflow in more detail for anyone who wants the concrete commands and config.
Honest limitations
This approach has real failure modes on legacy code. Three I run into regularly.
Undocumented business rules. Claude can read code. It cannot read the Slack thread from 2022 where the product manager and the engineer agreed that expired subscriptions still get access for seven days because of a deal with a single large customer. That rule lives in a conditional that looks arbitrary. Every agent I have tried will "simplify" it unless I explicitly tell it not to. I have learned to flag these in CLAUDE.md the moment I discover one.
Cross-file invariants. The kind of implicit contract where changing a return type in one service silently breaks a consumer three modules away. Claude can usually find these if I ask it to look. It will almost never find them if I do not. On a tightly-coupled legacy codebase, this means every non-trivial change still requires me to think about the blast radius before I prompt.
Framework version drift. Claude's training data has a cutoff, and its default instinct on Laravel code is to suggest the modern idiom. Plenty of codebases I work in are pinned to older versions for reasons ranging from valid to "we are afraid to upgrade." I spend a non-trivial amount of time redirecting the agent away from patterns the codebase cannot support.
None of these are reasons to abandon the approach. They are reasons the senior judgment stays in the loop.
What this means for senior devs and for clients
If you are a senior developer reading this and the claim annoys you, I get it. It annoyed me eight months ago when I saw someone else make a version of it. The thing that changed my mind was not a demo. It was picking a codebase I already knew cold, running Claude Code against it for a week, and watching what happened when I stopped pre-judging the output and started actually reviewing it.
The skill has not disappeared. What it touches has moved up the stack. If you already have the taste, the pattern-matching, and the scar tissue from production incidents, you have almost all of what this workflow needs. What you have to build is the habit of specifying before generating and reviewing before merging. Neither is hard. Both take reps.
If you are a client with a legacy Laravel or Vue codebase that has been stuck in priority limbo for a year or two, the shape of this work is worth understanding. Projects that used to be too expensive to touch, like the half-migrated frontend, the payment flow nobody wants to own, or the reporting module with the 800-line controller, are now tractable in a way they were not in 2023. Not because the agent is magic. Because a senior engineer with the right workflow can now move through them at a pace that makes the work worth doing.
If you want to try this
If you are a skeptical senior dev, run this experiment. Pick a codebase you already know cold. Do not pick a greenfield toy project. Spend a week letting Claude Code do the typing while you do the deciding and reviewing. At the end of the week, decide whether the output was better, worse, or the same as what you would have written. Do not decide before the week is over.
If you have a legacy Laravel or Vue codebase that keeps falling down the stack, this is the kind of work I take on. The services page has the shape of the engagements I run, and the contact form is the quickest way in.

Richard Joseph Porter
Senior Laravel Developer with 14+ years of experience building scalable web applications. Specializing in PHP, Laravel, Vue.js, and AWS cloud infrastructure. Based in Cebu, Philippines, I help businesses modernize legacy systems and build high-performance APIs.