Plan, Review, Execute, Repeat
I once worked for a company where developers would pair-program for a large part of their working hours. It was an exciting (albeit occasionally tiring) experience where I learned a different way of approaching development. Among the things I missed from that time was the habit of discussing an implementation plan with a colleague before even typing the first character in an editor. This exercise of thinking through a problem and explaining it to another person was crucial for assessing how well I understood the challenge I was about to tackle.
Having a person next to me to provide a different perspective was essential to the process: quite often, prompted by their questions, we would refine the plan over a couple of iterations before reaching an agreement.
With the recent advancements in LLMs and the improvements in agentic AI, I feel like a similar experience can now be replicated with AI.
Initial Setup
To achieve the best results, it is important to set some expectations and do a little initial setup.
Vibe coding != Augmented Coding
I’m NOT talking about Vibe Coding here. As Kent Beck correctly said in his Augmented Coding: Beyond the Vibes post [1]:
- In vibe coding you don’t care about the code, just the behavior of the system. If there’s an error, you feed it back into the genie in hopes of a good enough fix. In augmented coding you care about the code, its complexity, the tests, & their coverage.
I’m talking about code that is built to last, with the intention of getting it to production the same way we did before Gen AI coding tools appeared.
That means holding LLM-written code to the same standards I hold “human” code to.
Setting your quality standards
Every Gen AI coding tool has some mechanism to define basic rules and guidance. They each give it a different name, but in practice they work similarly.
This should be the place where you define your expectations, best practices, and general guidelines. For example:
- Use TDD when creating new features or changing existing ones.
- Run the tests after each change checkpoint.
- When Go files are changed, run `go fmt` on the affected files.
- Favor functional solutions, where state management is consolidated in fewer places.
Pro Tip: Encode coding standards and practices in a RULES.md file. Symlink this file to agent-specific rules files such as .cursorrules, .windsurfrules, claude.md, agents.md, etc. [2]
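On a Unix-like system, that is a one-time step. A minimal sketch (the exact file names each tool reads may differ; check your tool’s documentation):

```sh
# One canonical rules file, symlinked to each tool's expected location.
# The target names below are examples from the tip above.
ln -sf RULES.md .cursorrules
ln -sf RULES.md .windsurfrules
ln -sf RULES.md claude.md
ln -sf RULES.md agents.md
```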
Plan
Defining your workflow
There is one extra setup step we need before we start. In the same rules file mentioned previously, define the workflow used for planning the work:
# Planning
- When I ask for a plan, save it in a file called docs/plan.md
- In the plan.md file, make sure each stage has a description and that individual steps are stored as checkboxes.
- Check off the appropriate step as you progress through the plan
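With these rules in place, a freshly generated docs/plan.md tends to look something like the sketch below. The feature, phase names, and `imports` field are illustrative, loosely based on the optishell example later in this post:

```markdown
# Plan: import external optishell.yaml files

## Phase 1: Schema changes
Extend the optishell.yaml schema with an `imports` field.
- [ ] Add parsing for the new field
- [ ] Run the test suite

## Phase 2: Resolving logic
Combine imported files and deduplicate flakes.
- [ ] ...
```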
Stating the current goal
With the setup complete, we are ready to start the development loop. The first step is to provide context [3] for a problem/feature. What I like about this part is that it forces me to describe the problem thoroughly, guaranteeing that I understand and can articulate exactly what the issue is.
For example:
I want to develop a new feature that will allow users to share their optishell configuration with others. At the moment, it is possible to share nix flakes via the optishell.yaml file. I want it to be possible to point to optishell.yaml files from other places and have them combined into a final optishell setup.
There are some challenges with this change. For example, the imported optishell files might have different flakes, and we need to be able to handle that. There needs to be some kind of resolving logic to handle duplications.
We also might have to change the current schema of the optishell.yaml file to allow for this.
Propose a plan for such a feature. This plan can start with a high-level overview of the changes that need to be made and we will improve the details later. One option is to start with a limited set of features in external optishell.yaml files, like only importing environment variables from them.
References:
- The @rebuild.go command is used to apply changes to the current environment.
- @env_common.go is currently in charge of compiling the @flake.nix template into a valid nix flake file, using the inputs from the optishell.yml file.
- @optishell.yml is a valid example of a current yaml file.
Then I ask for a high-level plan of how to tackle the challenge.
Pro Tip: Claude, Cursor, and other similar tools offer an official “plan” mode. Asking the LLM to plan ensures that the available tokens are concentrated on the plan and not spent changing code.
Review
Once the plan is provided, it’s time to review. I try to be really critical here, as if I were working with a colleague who actually expects my input and not someone who only wants me to agree with their ideas.
I can suggest changes and ask for a revised plan, or shift the order of certain intermediate phases.
At this point, the high-level plan is usually already more comprehensive than the plans most humans would write before starting to code, but it should still be high-level enough to fit in a few paragraphs of text.
Refine
Once I’m happy with the overall plan, I ask it to refine the intermediate steps. This part is particularly useful for creating more atomic changes that can be committed to source control along the way.
I would request something like:
The overall plan looks good. Now detail the first phase so I can review the multiple steps needed
This will force the LLM to plan for different scenarios (not only the happy path), create steps that follow the guidelines from the rules (e.g., it will create a step to write unit tests), etc.
This refinement will output a list of checkboxes that will make it easier to follow progress. It will also serve as “memory” for the LLM so you can continue the work in future sessions.
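Continuing the hypothetical plan.md sketch from earlier, the refined first phase might expand into something like this (the step wording is illustrative):

```markdown
## Phase 1: Schema changes (refined)
- [ ] Write a failing test for parsing a single `imports` entry
- [ ] Implement parsing and make the test pass
- [ ] Add tests for edge cases: missing file, malformed yaml, circular imports
- [ ] Run `go fmt` on the affected files and re-run the full test suite
```

Note how the rules file shapes the steps: a test comes first, and formatting and test runs close out the phase.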
Execute
Now it’s time to let the agents work. I use explicit commands like:
“Now implement the items of phase 1, in the order described in the plan. Stop after each step so I can make sure things are progressing as expected.”
Repeat
Once the execution is done, I review again to make sure things turned out the way I expected. It’s also a good time to run the tests (in case you haven’t explicitly asked for that in the rules).
You might review the commits (if they were already made) or commit now.
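For a Go project like the one the rules above assume, that wrap-up might look like the following sequence (the commands and commit message are illustrative):

```sh
go test ./...     # confirm the phase left the test suite green
gofmt -l .        # list any files the agent forgot to format
git add -A && git commit -m "optishell: phase 1 - schema changes"
```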
And then it’s time to repeat the process for the next phases.
1. Kent Beck, 2025. Augmented Coding: Beyond the Vibes. https://tidyfirst.substack.com/p/augmented-coding-beyond-the-vibes ↩︎
2. nilenso, 2025. AI-assisted Coding for Teams That Can’t Get Away With Vibes. https://blog.nilenso.com/blog/2025/05/29/ai-assisted-coding/ ↩︎
3. Context here could mean different things: rules, system prompts, specific descriptions, a Story ticket fetched from a project management tool like Jira via MCP, etc. ↩︎