Tag: AI development

  • The Real Speed Comes After Structure, Not Before It

    There is a point in AI-assisted product development where the whole thing suddenly stops feeling heavy.

    Before that point, the work is still serious. You are defining the product. You are making decisions. You are narrowing scope. You are giving the repository memory. You are setting rules so the coding agent does not walk into the project like an enthusiastic intern with root access and too much confidence.

    Then something changes.

    Once that preparation is in place, the experience shifts. What felt like moving carefully in the early stages starts feeling like climbing into a private jet after spending days building the runway, checking the engine, filing the flight plan, and making sure the pilot actually knows where you want to go.

    That is the part many people misunderstand.

    They expect the speed at the beginning.

    In my experience, the real speed comes later.

    It comes after structure, not before it.

    This is the fourth and last piece in the sequence I use when building products with AI. First, I reject the whole “vibe coding” framing because I think building with AI is serious product work. Then I start with a clear product brief. Then I prepare the repository so the agent has memory, rules, and source-of-truth material to work from. Only after that do I move into what I would call the fast lane.

    That is where Codex planning mode becomes extremely useful.

    The easy part starts after the hard thinking

    People often talk about AI coding tools as if the big advantage is that they let you skip the difficult early work.

    That has not been my experience at all.

    The difficult early work is still there. You still need to think clearly. You still need to decide what version one actually is. You still need to define what the product does, what it does not do, what the constraints are, and what the repository should treat as the source of truth.

    None of that disappeared.

    What changed is what happens after you do it properly.

    Once the product is clear and the workspace is prepared, implementation stops feeling like a slow march through mud. That is the stage where things become surprisingly fast. Not because AI is doing magic, but because the project has already been shaped into something the tool can execute against with much less guessing.

    That distinction matters.

    The speed is not replacing product thinking.

    The speed is the reward for product thinking.

    Why I switch to Planning mode at this stage

    Once I have the product brief in place and the repository already has its working memory, I go to Codex and switch on Planning mode.

    Then I give it a starting instruction that is usually straightforward. Something like this:

Let us start planning for this public WordPress plugin. The PRD is available at @whatever-software-prd-v1.md

    That is where the nature of the work changes.

    Before this stage, I am mostly shaping the product from a business and product perspective. I am deciding what matters, what belongs in version one, what should wait, and how the thing should behave for the user.

    Planning mode takes that clarity and starts examining it from the technical side.

    This is the moment where the developer appears.

    And honestly, that is one of the best parts of the process.

    Because a product can sound very clear from the business side while still hiding a lot of unanswered technical questions. Planning mode is where those hidden questions start coming out into the open.

    The planning conversation gets more real, very quickly

    What I like about this stage is that Codex stops behaving like a tool waiting for a giant prompt and starts behaving more like a technical planner trying to remove ambiguity before implementation.

    The questions become direct.

    What is the API URL?

    What security model are you using?

    What encryption choice do you want here?

    Do you already have the database structure?

    Should I generate the SQL schema?

    What happens when this flow fails?

    What should be stored and what should never be persisted?

    That is important because it exposes something many people miss: product clarity is necessary, but it is not the same thing as technical readiness.

    You need both.

    The PRD gives the project direction. Planning mode starts pressure-testing that direction against technical reality.

    And that is where the process becomes much stronger.

    Instead of discovering those things halfway through implementation, or worse, letting the model invent them on the fly, I get them surfaced early while the project is still in a planning state. That keeps the implementation cleaner and reduces the amount of correction later.

    In other words, the planning phase is not a formality.

    It is where the project becomes executable.

    This is where the speed starts compounding

    Once that planning conversation settles, Codex produces a plan.

    Not a vague summary. A real plan.

    A staged implementation roadmap.

    That is the moment where the whole process starts to feel different.

    Because now I am no longer looking at a product idea, a PRD, and a repository setup and wondering how the implementation will unfold. Now I can see the work broken into stages, with each stage carrying a specific purpose, a limited scope, and a clear next step.

    That changes everything.

    At this point, the project does not feel like driving a family car across a long road while stopping every few kilometers to check whether the map is upside down.

    It feels like getting on a private jet.

    The destination is clear. The route is calculated. The machine is ready. Now the movement becomes fast.

    This is the part where what used to take weeks can start taking hours.

    Not because the work became unserious.

    Because the structure became good enough to support acceleration.

    Why the plan matters more than one giant prompt

    One of the biggest advantages here is that the work no longer depends on trying to squeeze the whole product into one enormous implementation prompt.

    That approach sounds fast until it collapses under its own weight.

    Large language models all have context limits. Even when those limits are large, they are still limits. A real product contains business rules, technical decisions, non-goals, architecture constraints, edge cases, and documentation that keep evolving. Trying to force the entire implementation into one giant leap is not a serious long-term method.

    The staged plan solves that problem elegantly.

    Instead of asking the model to carry the full project in one overloaded burst, the work gets divided into contained steps. One stage handles the baseline. Another handles persistence. Another handles integration. Another handles the backup engine. Another handles scheduling, retention, diagnostics, and hardening.
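A staged plan like that can be sketched as plain data, one bounded stage at a time. The stage names and scope items below are illustrative placeholders of my own, not the actual plan Codex produced:

```python
# Illustrative sketch of a staged implementation plan as data.
# Stage names and scope items are hypothetical, not a real Codex plan.
STAGES = [
    {"name": "Stage 0: Baseline",      "scope": ["plugin skeleton", "settings page"]},
    {"name": "Stage 1: Persistence",   "scope": ["SQL schema", "settings storage"]},
    {"name": "Stage 2: Integration",   "scope": ["Google Drive auth", "upload client"]},
    {"name": "Stage 3: Backup engine", "scope": ["archive build", "direct upload"]},
    {"name": "Stage 4: Hardening",     "scope": ["scheduling", "retention", "diagnostics"]},
]

def next_stage(stages, approved):
    """Return the first stage that has not been approved yet, or None when done."""
    for stage in stages:
        if stage["name"] not in approved:
            return stage
    return None
```

The point of the shape is that each stage is a bounded target: the model only ever works against one stage's scope, which is what keeps context pressure low.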

    That is much easier to execute well.

    The model has a bounded target.

    The scope stays tighter.

    Review becomes easier.

    Drift becomes easier to catch.

    And the project keeps moving without constantly smashing into the invisible wall of context overload.

    That is one of the reasons I trust this workflow more than the “just ask it to build the whole thing” approach.

    It works with the limits of the medium instead of pretending they do not exist.

    Stage-based implementation is what makes fast feel safe

    Another reason this works well is that the execution is not just broken into stages internally. It is also broken into approvals.

    That matters a lot.

    After Codex completes a stage, it reports back with what was done, what files were added or changed, what was deliberately kept out of scope, and what the next gate is.

    Then I give it the next instruction:

    Let us go to Stage 1

    And the project continues.

    I like this because it gives me the speed of AI-assisted implementation without forcing me into blind trust.

    I do not need to stand over every keystroke like a nervous manager in a bad office drama. But I also do not need to throw the entire codebase into a black box and hope that what comes out still resembles the product I intended to build.

    The stage boundary creates a checkpoint.

    That checkpoint does several useful things at once:

    • it keeps implementation scoped,
    • it makes review lighter,
    • it makes mistakes easier to catch earlier,
    • and it preserves a sense of control without slowing everything down.
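The checkpoint loop behind those four points fits in a few lines. Here `implement` and `approve` are stand-ins for Codex executing a stage and for the human review at the gate; this is the shape of the control flow, not any real Codex API:

```python
def run_staged(stages, implement, approve):
    """Execute stages one at a time, stopping at the first gate that fails.

    `implement` stands in for the agent completing a stage and returning a
    report (what was done, files changed, what was kept out of scope).
    `approve` stands in for the human review at the stage boundary.
    """
    completed = []
    for stage in stages:
        report = implement(stage)     # agent does bounded work
        if not approve(report):       # human checkpoint: catch drift early
            return completed, report  # halt here, fix before continuing
        completed.append(stage)
    return completed, None            # all gates passed
```

The useful property is that a bad stage stops the run at its own boundary instead of contaminating everything after it.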

    That is a much healthier way to move quickly.

    Fast is good.

    Fast with gates is better.

    The earlier preparation is what makes this stage feel easy

    This is the point I would emphasize most strongly.

    Planning mode feels powerful because of what happened before it.

    If the product brief is weak, the planning stage gets foggy.

    If the repository has no memory, the planning stage gets unstable.

    If the source-of-truth documents are missing, the planning stage starts leaning on assumptions.

    If the workspace rules are vague, the implementation stage becomes loose and drifty very quickly.

    So when this part starts feeling easy, that is not because the earlier work was unnecessary.

    It is because the earlier work worked.

    That is why I keep saying the speed comes after structure.

    The structure is what made the speed possible.

    This is also why I do not think of planning mode as some magical shortcut that rescues a poorly prepared project. It is much better than that. It is an accelerator for a project that already knows what it is trying to become.

    That is a very different thing.

    What changed for me in practice

    Before working this way, implementation felt heavier.

    Not always because the coding itself was difficult, but because so much hidden uncertainty stayed mixed into the coding phase. Product decisions, technical decisions, missing requirements, unclear boundaries, undocumented rules — all of it stayed tangled together.

    That creates friction.

    It slows down execution even when code is being produced quickly.

    Now the flow is different.

    I think through the product first.

    I create the brief.

    I prepare the repository with memory and working rules.

    I switch to planning mode.

    Codex pressure-tests the product technically, asks sharper questions, gives me a structured plan, and then implements it stage by stage.

    Once that machinery is in motion, the pace becomes very different.

    The work becomes lighter to steer.

    The progress becomes easier to review.

    The output becomes more reliable.

    And yes, the speed becomes dramatic enough that things that used to take weeks can now take hours.

    That is not marketing language. That is the practical effect of reducing ambiguity before implementation starts.

    This is the happy ending, but not a shortcut

    Since this is the last article in the series, I think this is the right place to say it plainly:

    This fast stage is the happy ending.

    But it is only a happy ending because the earlier chapters were not skipped.

    If someone looks only at this part, they may get the wrong idea. They may think the lesson is that AI makes implementation easy if you know which button to click.

    That is not my lesson.

    My lesson is that AI makes implementation fast after you do the serious work of defining the product, preparing the workspace, and giving the agent enough structure to operate well.

    That is why I do not see this workflow as hype.

    I see it as compound leverage.

    First clarity.

    Then memory.

    Then rules.

    Then planning.

    Then staged implementation.

    And then, finally, the private jet.

    That is when the project starts moving with unusual speed.

    Not before the runway exists.

    After it.

  • Why I Start Every AI-Built Product With a PRD

    One of the stranger side effects of AI-assisted development is that it made some people think planning matters less.

    I had the opposite experience.

    The more useful AI became, the more I needed clarity before I asked it to build anything.

That is why I start every AI-built product with a Product Requirements Document (PRD). I will use the term once here, but what I really mean is something simpler: a clear product definition. A brief. A spec. A document that forces me to stop pretending the idea is finished when it is still half-baked and wearing confidence like a costume.

    I am not doing this because I enjoy documentation. I am definitely not doing it because I miss corporate ceremony. I am doing it because I want better output, fewer wrong turns, and a product that behaves like something I actually meant to build.

    In my workflow, this step happens before Codex.

    Always.

    I do not want the coding tool to invent the product for me

    When people talk about building with AI, a lot of the conversation goes straight to prompting the coding tool.

    That makes sense on the surface. If the tool writes code, then the obvious question is how to talk to it.

    But for me, that is already too late.

    Before any implementation starts, I need to know what I am asking for. Not in a vague “I have a direction” way. I mean clearly enough that the system is not quietly making important product decisions on my behalf.

    Because that is what happens when the brief is weak.

    The tool still produces output. Sometimes a lot of it. Sometimes surprisingly good-looking output. But under the surface, it is filling gaps with assumptions. It is choosing scope, making tradeoffs, inventing behavior, smoothing over ambiguity, and guessing what matters.

    That may be acceptable for a quick experiment.

    It is not how I want to build products.

    I do not want Codex, or any AI coding tool, deciding what version one should include, what edge cases matter, which features should wait, what happens when something fails, or how the product should behave under real constraints. Those are product decisions. I want to make them before I get to the implementation stage.

    That is the real reason I start with a product brief.

    The document is not the goal. Clarity is the goal.

    I think this is where people misunderstand the point.

    The goal is not to produce a polished document for its own sake. The goal is to reach a level of clarity that makes implementation reliable.

    Sometimes that clarity ends up in a clean PRD. Sometimes it is a technical spec. Sometimes it is a plain-language product brief with enough structure to remove confusion. I do not care too much what label it gets.

    What I care about is whether it answers the questions that would otherwise get pushed into the coding phase.

    What is the product?

    Who is it for?

    What problem is it solving?

    What does the first version include?

    What is explicitly out?

    What should happen when something fails?

    What constraints matter in the real environment?

    If those answers are still fuzzy, then the coding stage becomes more expensive than it looks. Not always in money, but definitely in time, rework, review effort, and product drift.

    You feel like you are moving fast because code is appearing quickly.

    Then you discover you have been building motion, not clarity.

    Why this matters even more with AI

    With human developers, unclear thinking is already expensive.

    With AI, it gets expensive faster.

    That is the part I learned very quickly.

    If the product definition is weak, AI does not pause and say, “This is still not thought through properly, maybe let us step back.” It usually goes ahead and builds something. That can create a very convincing illusion of progress.

    And honestly, that illusion is dangerous.

    Because now you have screens, flows, database structures, API logic, and implementation details growing around assumptions that were never properly decided. You are not just missing clarity anymore. You are missing clarity with momentum.

    That is why I do not see the product brief as optional overhead. I see it as protection against fast confusion.

    The clearer the product is, the more useful AI becomes.

    It guesses less.

    It drifts less.

    It makes fewer wrong assumptions.

    It becomes easier to review.

    It becomes easier to refine.

    And the gap between what I meant and what gets built becomes much smaller.

    That is a very practical payoff.

    I use ChatGPT first because I need pressure before I need code

    This is also why my workflow starts in ChatGPT, not in Codex.

    At that stage, I am not asking for implementation. I am asking for pressure.

    I want the idea challenged.

    I want missing decisions exposed.

    I want vague language forced into specific language.

    I want the comfortable illusion that “the idea is clear in my head” to be tested before the coding starts.

    That is what ChatGPT is useful for in this part of the process.

    I use it to discuss what already exists in the market. I use it to brainstorm the shape of the product. I use it to narrow scope. I use it to discover what I have not yet decided. And very often, the value is not in the answer itself. The value is in the next question it pushes back at me.

    That interaction helps me move from “I think I want this” to “this is what I am actually building.”

    Only after that do I want the coding tool involved.

    What I put inside my product brief

    I do not treat this like a giant enterprise artifact. I keep it practical.

    At a minimum, I want these things clear before I start implementation:

    1. The purpose
    What the product is supposed to do, in plain language.

    2. The user
    Who it is for and what real need it serves.

    3. The first version
    What version one includes, and just as importantly, what it does not include.

    4. Core flows
    What the user actually does and what the product needs to support.

    5. Constraints
    Technical realities, hosting limits, security concerns, operational conditions, or anything else that changes design decisions.

    6. Failure behavior
    What should happen when something breaks, times out, gets interrupted, or loses access.

    7. Boundaries
    What I am consciously postponing so the first version stays focused.
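As a sketch, that checklist can even be expressed as a structure with a completeness check. The field names are my own shorthand for the seven points above, not a formal template:

```python
from dataclasses import dataclass, field

@dataclass
class ProductBrief:
    """Minimal sketch of the seven-point brief; empty fields are open questions."""
    purpose: str = ""
    user: str = ""
    v1_includes: list = field(default_factory=list)
    v1_excludes: list = field(default_factory=list)  # boundaries: what is refused for now
    core_flows: list = field(default_factory=list)
    constraints: list = field(default_factory=list)
    failure_behavior: str = ""

    def open_questions(self):
        """Names of fields still empty -- cheaper to fix here than in a codebase."""
        return [name for name, value in vars(self).items() if not value]
```

An empty `v1_excludes` is itself a warning sign: a brief that refuses nothing has not really decided anything.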

    That last part matters more than people think.

    A good product brief is not only a description of what I want to build. It is also a written record of what I am refusing to build right now.

    That saves a lot of pain later.

    A real example from my backup plugin

    One public example I can talk about is a WordPress backup plugin I am designing for Google Drive backups.

    If I had started directly in a coding tool, the request could have been one sentence long: build a WordPress plugin that backs up a site to Google Drive.

    That sounds reasonable until you realize how much product thinking is hiding inside that sentence.

    What exactly gets backed up?

    Should it create large temporary ZIP files locally, or should it upload directly?

    Should version one support restore?

    Should it support migration?

    Should it support multiple cloud providers?

    How should it behave on shared hosting with limited resources?

    Without a clear product definition, the coding tool would still start building. But it would be building around guesses.

    In my case, working through the brief changed the shape of the product in important ways.

    For example, I made an explicit decision to keep version one focused on Google Drive only, instead of trying to support multiple storage providers immediately. I also made a clear decision that automated restore would be out of scope for version one. That kept the first release much tighter and much more realistic.

    Those are not small details. They shape architecture, user expectations, support burden, and implementation complexity from the beginning.

    That is exactly why I want those decisions made before code starts, not discovered halfway through it.

    This step saves me from fake progress

    One reason I value this part of the workflow so much is that it protects me from fake progress.

    Fake progress is when implementation starts quickly, output appears quickly, and the whole thing feels productive, but the actual product is still not defined properly.

    That kind of speed is seductive.

    It feels efficient.

    It looks efficient.

    But later it usually turns into revisions, corrections, backtracking, and the awkward realization that the tool did not misunderstand me. I just had not finished understanding the product myself.

    The product brief helps me catch that earlier, when the cost is still low.

    It is easier to fix vagueness in a document than in a growing codebase.

    It is easier to challenge scope in a discussion than after features have already started multiplying.

    It is easier to decide what matters before implementation than while reviewing output shaped by assumptions I never meant to approve.

    I do not see this as documentation. I see it as leverage.

    That is probably the simplest way to put it.

    I am not writing a product brief because I enjoy preparing documents before the “real work” begins.

    For me, this is the real work.

    This is where the product becomes concrete enough to build well.

    This is where ambiguity gets reduced.

    This is where version one becomes realistic.

    This is where the coding tool becomes more useful, because it is no longer being asked to fill in the product thinking for me.

    The better this stage goes, the more value I get from AI later.

    That is why I start here every time.

    Not because it looks organized.

    Not because it sounds professional.

    Because it works.

    And when I am using AI to build real products, “it works” is a much better standard than “it felt fast at the beginning.”

  • Using AI to Build Products Is Serious Work, Not Vibe Coding

    I really dislike the term vibe coding.

    Not because it sounds silly, although it does. And not because people should not have fun with technology. They should. The problem is that the term quietly suggests something bigger: that building with AI is mostly casual, experimental, and a bit unserious. Like you are just throwing prompts at a machine, getting lucky sometimes, and calling it product development.

    That is not how I see it.

    For small businesses, freelancers, and solo founders, AI-assisted development is not a toy. It is not a side show. It is a serious way to build real products faster, with less overhead, and with much more independence than most people realize.

    But there is a catch.

    Using AI to build products only works well when you stop treating it like magic.

    I am not speaking theoretically here. I am speaking from how I work now. I stopped hiring developers in early 2025, and today I rely on Codex as the only solo developer across the projects I am handling. That does not mean I sit down, type a vague idea, and watch perfect software appear like a cooking show reveal. It means I changed the way I think about product development.

    And that is exactly where most of the confusion around “vibe coding” starts.

    Using AI is not the problem. Vagueness is.

The first rule many of us were taught when we started studying computer science in the late 1990s was simple: garbage in, garbage out.

    That rule did not disappear because AI arrived. If anything, it became more important.

    If you give an AI coding tool a half-idea, a blurry goal, and a pile of unmade decisions, it will still produce something. That is part of the danger. It is very easy now to get output that looks impressive before you realize it is built on fuzzy thinking.

    That is what I would call vibe coding.

    Not using AI.

    Not building quickly.

    Not shipping with AI assistance.

    Vibe coding, to me, is when someone gives AI a semi-idea and expects it to somehow fill in the product thinking, business logic, edge cases, user needs, and technical tradeoffs by itself. That may produce demos. It may even produce code that runs. But it usually does not produce a reliable product.

    The issue is not the intelligence of the tool. The issue is the laziness of the input.

    My workflow does not start with Codex

    This is the part that matters most.

    I do not start with Codex.

    I start with ChatGPT.

    That surprises some people because if the goal is to build software, the instinct is to go directly to the coding tool. But in my experience, that is too early. Before implementation, I need clarity. I need pressure. I need questions. I need to discover whether the idea in my head is actually a product or just a direction wearing a confident face.

    So I use ChatGPT first for thinking, not coding.

    I use it to discuss what already exists in the market. I use it to brainstorm approaches. I use it to challenge weak assumptions. I use it to force me into specifics when I am still speaking too generally. Sometimes the most useful thing it does is not answering me. It is asking me the questions I should have asked myself earlier.

    That stage is extremely important because most product ideas are incomplete when they first show up. They sound clear because they are familiar to the person thinking about them. But once you start discussing them properly, all the hidden ambiguity comes out.

    What exactly is the first version?

    What is essential and what is not?

    What should happen when something fails?

    What does the user actually need, not what sounds nice in a feature list?

    What should be postponed even if it looks attractive now?

    That is where the serious work starts.

    The goal is not a better prompt. The goal is a clear product.

    A lot of people talk about prompting as if the secret is finding the perfect sentence to unlock perfect software.

    I do not think that is the real game.

    The goal is not to write a clever prompt. The goal is to create a clear picture of the product before asking an AI coding tool to build it.

    Sometimes that ends up becoming a proper PRD. Sometimes it is a technical specification. Sometimes it is just a very well-structured breakdown of scope, flows, decisions, and constraints. The format matters less than the clarity.

    What matters is that by the time I move into implementation, I am no longer asking the AI tool to invent the product for me. I am asking it to execute against something I have already thought through.

    That changes everything.

    When the product is clear, the AI becomes dramatically more useful. It stops guessing as much. It makes fewer wrong assumptions. It can move faster without dragging the whole project into chaos. Review becomes easier. Iteration becomes sharper. The output becomes more reliable because the target is more reliable.

    This is why I do not see serious AI development as “prompting.” I see it as structured product thinking followed by accelerated execution.

    A small real example

    One public example I can talk about is a WordPress backup plugin I am designing.

    If I had gone straight to a coding tool with the raw idea, the prompt would have sounded something like this: build me a WordPress plugin that backs up a site to Google Drive.

    That sounds fine until you realize how many decisions are hidden inside that one sentence.

    What exactly gets backed up?

    Should it create large local zip files or upload directly?

    Does it support manual backups, scheduled backups, or both?

    What belongs in version one and what should wait?

    How should it behave on limited shared hosting?

    What errors deserve admin alerts?

    Should it support restore? Migration? Multiple cloud providers? Incremental backup?

    If you skip those questions and just start coding, the AI will still build something. But now the product is being shaped by whatever the model guesses, not by your actual priorities.

    That is not a development strategy. That is delegation by wishful thinking.

    Instead, I used ChatGPT first to work through the product properly. The discussion narrowed the scope, removed unnecessary features from the first version, and focused heavily on reliability over feature count. The result was a much clearer first release: what it should do, what it should not do, how it should behave, and what kind of architecture made sense for the real hosting environments it would run on.

    Only after that kind of clarification does a coding tool become truly powerful.

    That is the difference I keep trying to explain when people reduce AI development to “vibes.”

    My tool preference is personal. The method is broader.

    For my own work, I use Codex.

    I have also tried Google AI assistance and Google Firebase Studio, and in my own experience they did not come close to Codex for the way I work. I have not had the chance to try Claude Code yet, so I am not pretending to publish a grand ranking of all AI coding tools from a mountaintop.

    But honestly, that is not the most important part.

    The methodology matters more than the vendor.

    If you start with vague thinking, weak scope, and missing decisions, most AI development tools will give you unreliable results sooner or later. If you start with a clear product picture, a defined first version, and a realistic understanding of the problem, you give any strong AI development tool a much better chance of producing useful work.

    Tool choice matters, yes.

    But thought process matters more.

    AI did not remove the need for product thinking. It increased it.

    This is the part many people still miss.

    AI does not remove the need to think clearly. It increases the cost of not thinking clearly.

    When development becomes faster, confusion also becomes faster. Wrong assumptions spread quicker. Bad scope decisions show up earlier. Weak product thinking gets amplified instead of hidden behind longer timelines.

    That is why I reject the framing behind vibe coding.

    Using AI to build products is not unserious. In many cases, it is the opposite. It demands sharper thinking because the execution layer has become much faster. If you are careless, the tool will happily help you build the wrong thing efficiently. That is not innovation. That is just a quicker route to regret.

    Serious AI development, at least in the way I use it, starts before the first line of code. It starts with clarity. It starts with the discipline to define what you want, what you do not want, and why.

    That is not a vibe.

    That is product work.

    And if you do it properly, it can absolutely produce real products.