Category: AI & Automation

  • The Real Speed Comes After Structure, Not Before It

    There is a point in AI-assisted product development where the whole thing suddenly stops feeling heavy.

    Before that point, the work is still serious. You are defining the product. You are making decisions. You are narrowing scope. You are giving the repository memory. You are setting rules so the coding agent does not walk into the project like an enthusiastic intern with root access and too much confidence.

    Then something changes.

    Once that preparation is in place, the experience shifts. What felt like moving carefully in the early stages starts feeling like climbing into a private jet after spending days building the runway, checking the engine, filing the flight plan, and making sure the pilot actually knows where you want to go.

    That is the part many people misunderstand.

    They expect the speed at the beginning.

    In my experience, the real speed comes later.

    It comes after structure, not before it.

    This is the fourth and last piece in the sequence I use when building products with AI. First, I reject the whole “vibe coding” framing because I think building with AI is serious product work. Then I start with a clear product brief. Then I prepare the repository so the agent has memory, rules, and source-of-truth material to work from. Only after that do I move into what I would call the fast lane.

    That is where Codex planning mode becomes extremely useful.

    The easy part starts after the hard thinking

    People often talk about AI coding tools as if the big advantage is that they let you skip the difficult early work.

    That has not been my experience at all.

    The difficult early work is still there. You still need to think clearly. You still need to decide what version one actually is. You still need to define what the product does, what it does not do, what the constraints are, and what the repository should treat as the source of truth.

    None of that disappeared.

    What changed is what happens after you do it properly.

    Once the product is clear and the workspace is prepared, implementation stops feeling like a slow march through mud. That is the stage where things become surprisingly fast. Not because AI is doing magic, but because the project has already been shaped into something the tool can execute against with much less guessing.

    That distinction matters.

    The speed is not replacing product thinking.

    The speed is the reward for product thinking.

    Why I switch to Planning mode at this stage

    Once I have the product brief in place and the repository already has its working memory, I go to Codex and switch on Planning mode.

    Then I give it a starting instruction that is usually straightforward. Something like this:

Let us start planning for this public WordPress plugin. The PRD is available at @whatever-software-prd-v1.md

    That is where the nature of the work changes.

    Before this stage, I am mostly shaping the product from a business and product perspective. I am deciding what matters, what belongs in version one, what should wait, and how the thing should behave for the user.

    Planning mode takes that clarity and starts examining it from the technical side.

    This is the moment where the developer appears.

    And honestly, that is one of the best parts of the process.

    Because a product can sound very clear from the business side while still hiding a lot of unanswered technical questions. Planning mode is where those hidden questions start coming out into the open.

    The planning conversation gets more real, very quickly

    What I like about this stage is that Codex stops behaving like a tool waiting for a giant prompt and starts behaving more like a technical planner trying to remove ambiguity before implementation.

    The questions become direct.

    What is the API URL?

    What security model are you using?

    What encryption choice do you want here?

    Do you already have the database structure?

    Should I generate the SQL schema?

    What happens when this flow fails?

    What should be stored and what should never be persisted?

    That is important because it exposes something many people miss: product clarity is necessary, but it is not the same thing as technical readiness.

    You need both.

    The PRD gives the project direction. Planning mode starts pressure-testing that direction against technical reality.

    And that is where the process becomes much stronger.

    Instead of discovering those things halfway through implementation, or worse, letting the model invent them on the fly, I get them surfaced early while the project is still in a planning state. That keeps the implementation cleaner and reduces the amount of correction later.

    In other words, the planning phase is not a formality.

    It is where the project becomes executable.

    This is where the speed starts compounding

    Once that planning conversation settles, Codex produces a plan.

    Not a vague summary. A real plan.

    A staged implementation roadmap.

    That is the moment where the whole process starts to feel different.

    Because now I am no longer looking at a product idea, a PRD, and a repository setup and wondering how the implementation will unfold. Now I can see the work broken into stages, with each stage carrying a specific purpose, a limited scope, and a clear next step.

    That changes everything.

    At this point, the project does not feel like driving a family car across a long road while stopping every few kilometers to check whether the map is upside down.

    It feels like getting on a private jet.

    The destination is clear. The route is calculated. The machine is ready. Now the movement becomes fast.

    This is the part where what used to take weeks can start taking hours.

    Not because the work became unserious.

    Because the structure became good enough to support acceleration.

    Why the plan matters more than one giant prompt

    One of the biggest advantages here is that the work no longer depends on trying to squeeze the whole product into one enormous implementation prompt.

    That approach sounds fast until it collapses under its own weight.

    Large language models all have context limits. Even when those limits are large, they are still limits. A real product contains business rules, technical decisions, non-goals, architecture constraints, edge cases, and documentation that keep evolving. Trying to force the entire implementation into one giant leap is not a serious long-term method.

    The staged plan solves that problem elegantly.

    Instead of asking the model to carry the full project in one overloaded burst, the work gets divided into contained steps. One stage handles the baseline. Another handles persistence. Another handles integration. Another handles the backup engine. Another handles scheduling, retention, diagnostics, and hardening.
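Laid out as a roadmap, those contained steps might look like this sketch (the stage names, ordering, and contents are illustrative, not actual Codex output):

```markdown
## Implementation Plan (illustrative)

- Stage 0 — Baseline: plugin skeleton, activation hooks, settings page stub
- Stage 1 — Persistence: options schema, tables, migration on activation
- Stage 2 — Integration: Google Drive auth flow, token storage rules
- Stage 3 — Backup engine: archive creation, chunked upload, failure handling
- Stage 4 — Scheduling, retention, diagnostics, hardening

Each stage ends with a report and waits for explicit approval before the next begins.
```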

    That is much easier to execute well.

    The model has a bounded target.

    The scope stays tighter.

    Review becomes easier.

    Drift becomes easier to catch.

    And the project keeps moving without constantly smashing into the invisible wall of context overload.

    That is one of the reasons I trust this workflow more than the “just ask it to build the whole thing” approach.

    It works with the limits of the medium instead of pretending they do not exist.

    Stage-based implementation is what makes fast feel safe

    Another reason this works well is that the execution is not just broken into stages internally. It is also broken into approvals.

    That matters a lot.

    After Codex completes a stage, it reports back with what was done, what files were added or changed, what was deliberately kept out of scope, and what the next gate is.
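A stage report of that shape might read like the following (the file names and details here are invented for illustration, not taken from a real run):

```markdown
## Stage 0 — Report

**Done:** plugin skeleton, activation hooks, admin settings page stub
**Files added/changed:** my-backup-plugin.php, includes/class-settings.php
**Deliberately out of scope:** persistence layer, Google Drive integration
**Next gate:** approval to start Stage 1 (persistence)
```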

    Then I give it the next instruction:

    Let us go to Stage 1

    And the project continues.

    I like this because it gives me the speed of AI-assisted implementation without forcing me into blind trust.

    I do not need to stand over every keystroke like a nervous manager in a bad office drama. But I also do not need to throw the entire codebase into a black box and hope that what comes out still resembles the product I intended to build.

    The stage boundary creates a checkpoint.

    That checkpoint does several useful things at once:

    • it keeps implementation scoped,
    • it makes review lighter,
    • it makes mistakes easier to catch earlier,
    • and it preserves a sense of control without slowing everything down.

    That is a much healthier way to move quickly.

    Fast is good.

    Fast with gates is better.

    The earlier preparation is what makes this stage feel easy

    This is the point I would emphasize most strongly.

    Planning mode feels powerful because of what happened before it.

    If the product brief is weak, the planning stage gets foggy.

    If the repository has no memory, the planning stage gets unstable.

    If the source-of-truth documents are missing, the planning stage starts leaning on assumptions.

    If the workspace rules are vague, the implementation stage becomes loose and drifty very quickly.

    So when this part starts feeling easy, that is not because the earlier work was unnecessary.

    It is because the earlier work worked.

    That is why I keep saying the speed comes after structure.

    The structure is what made the speed possible.

    This is also why I do not think of planning mode as some magical shortcut that rescues a poorly prepared project. It is much better than that. It is an accelerator for a project that already knows what it is trying to become.

    That is a very different thing.

    What changed for me in practice

    Before working this way, implementation felt heavier.

    Not always because the coding itself was difficult, but because so much hidden uncertainty stayed mixed into the coding phase. Product decisions, technical decisions, missing requirements, unclear boundaries, undocumented rules — all of it stayed tangled together.

    That creates friction.

    It slows down execution even when code is being produced quickly.

    Now the flow is different.

    I think through the product first.

    I create the brief.

    I prepare the repository with memory and working rules.

    I switch to planning mode.

    Codex pressure-tests the product technically, asks sharper questions, gives me a structured plan, and then implements it stage by stage.

    Once that machinery is in motion, the pace becomes very different.

    The work becomes lighter to steer.

    The progress becomes easier to review.

    The output becomes more reliable.

    And yes, the speed becomes dramatic enough that things that used to take weeks can now take hours.

    That is not marketing language. That is the practical effect of reducing ambiguity before implementation starts.

    This is the happy ending, but not a shortcut

    Since this is the last article in the series, I think this is the right place to say it plainly:

    This fast stage is the happy ending.

    But it is only a happy ending because the earlier chapters were not skipped.

    If someone looks only at this part, they may get the wrong idea. They may think the lesson is that AI makes implementation easy if you know which button to click.

    That is not my lesson.

    My lesson is that AI makes implementation fast after you do the serious work of defining the product, preparing the workspace, and giving the agent enough structure to operate well.

    That is why I do not see this workflow as hype.

    I see it as compound leverage.

    First clarity.

    Then memory.

    Then rules.

    Then planning.

    Then staged implementation.

    And then, finally, the private jet.

    That is when the project starts moving with unusual speed.

    Not before the runway exists.

    After it.

  • Before Codex Writes Code, I Give the Repository a Memory

    AI coding agents can make a new workspace feel deceptively easy.

    You open a fresh repository, write a few sentences, attach a file or two, and within minutes the agent is ready to generate code. It feels fast. It feels impressive. It also feels like progress.

    Sometimes it is.

    Sometimes it is just very fast confusion.

    That is the part I think people underestimate.

    The problem is usually not that the coding agent is weak. The problem is that the repository is empty of working memory. The product rules are still floating around in your head. The business logic is scattered across old chats. Scope decisions live in yesterday’s conversation. Important constraints exist, but not in any durable place the agent can reliably use.

    So the agent starts building anyway.

    And that is where trouble starts.

    Most developers do not enjoy writing documentation. I do not either. But when I work with AI coding agents, I treat documentation differently now. I am not writing it mainly for myself to read later. I am writing it so the agent can work properly inside the project.

    That is why I do not start a new workspace by asking Codex to write code.

    I start by giving the repository a memory.

    For me, that usually means two things:

    • an artifacts/ directory
    • an AGENTS.md file

Those two things do far more work than they appear to.
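Bootstrapping that memory takes seconds. A minimal sketch of the skeleton I tend to create (the subfolder names are my own habit, not a requirement):

```shell
# Create the repository's memory: durable source-of-truth folders
# plus the operating manual for the agent
mkdir -p artifacts/business-rules artifacts/api artifacts/design artifacts/release-notes
touch artifacts/prd.md AGENTS.md
```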

    The problem is not only coding. It is continuity.

    One of the easiest mistakes in AI-assisted development is assuming that the agent will somehow “understand the project” after a few good chats.

    It will understand the current conversation, maybe quite well.

    That is not the same thing.

    A real product is not a single prompt. It is a live thing. It keeps changing. Features get added. Earlier decisions create constraints. Business rules pile up quietly in the background. The first version makes tradeoffs. Later improvements depend on those tradeoffs. Edge cases appear. Security rules get added. Non-goals matter just as much as goals.

    You cannot keep carrying all of that in your own head and re-explaining it every time.

    At some point, the conversation starts sounding ridiculous.

    You are effectively saying: I need to add feature C, which is an extension of feature A, which already behaves in these ways, and it also touches feature B, which follows those rules, and do not forget that this old business rule still applies, and also do not break the thing we decided three weeks ago.

    That is not a serious long-term workflow.

    The repository needs to remember things so I do not have to keep reconstructing the project from memory every time I open a new chat.

    artifacts/ is where the repository remembers

    This is why I create an artifacts/ directory early.

    The point is not folder beauty. The point is memory.

    I want a place where the project can store the documents that define how it should behave. Not vague notes. Real source-of-truth material.

    That usually includes things like:

    • product requirements
    • business rules
    • feature documentation
    • API contracts
    • design rules
    • release notes
    • implementation constraints
    • decisions about scope and non-goals

    The exact structure changes depending on the repository, but the principle stays the same: if the code must respect something, that thing should live somewhere durable.

    So instead of hoping I remember every rule about feature A and feature B, I let the documentation remember it.

    Maybe I have:

    • artifacts/business-rules/feature-a.md
    • artifacts/business-rules/feature-b.md
    • artifacts/api/token-flow.md
    • artifacts/design/admin-ui-rules.md
    • artifacts/release-notes/

    Now the project has a memory system.

    And this matters more than many younger developers realize.

    The whole idea of a source of truth is still not deeply built into how a lot of developers think, especially in our region. Many people still work as if the real product knowledge lives in scattered chats, half-memory, old messages, and whoever happens to still remember why a decision was made.

    That may work for a while with small teams and simple projects.

    It does not work well when you are using an AI coding agent that can produce output very quickly and still be missing half the rules that matter.

    AGENTS.md is the operating manual

    If artifacts/ is where the repository remembers, AGENTS.md is how the repository teaches the agent to behave.

    This file is not a random set of notes. It is the operating manual for the workspace.

    It tells the agent things like:

    • what this repository actually is
    • what the product is trying to do
    • what is in scope
    • what is explicitly out of scope
    • where the source-of-truth documents live
    • what to read before implementing
    • how to work in stages
    • when to stop and ask questions
    • when documentation must be updated
    • what “done” means in this repo

    That matters because a coding agent does not just need context. It also needs guardrails.

    I do not want it jumping into implementation because the prompt sounded enthusiastic.

    I want it to understand the workflow first.

    In my setup, AGENTS.md usually bakes in a few recurring rules:

    • read first
    • reach high confidence before implementation
    • ask focused questions if confidence is not high enough
    • do not write code until there is explicit approval
    • treat certain docs inside artifacts/ as canonical
    • update those docs when behavior changes
    • keep changes scoped to the approved stage
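Expressed inside AGENTS.md, those recurring rules might look like this (a sketch in my own phrasing, not a canonical format):

```markdown
## Collaboration Protocol

1. Read artifacts/prd.md and the relevant business-rules docs before implementing.
2. State your confidence. If it is not high, ask focused questions instead of coding.
3. Do not write code until the stage is explicitly approved.
4. Treat documents under artifacts/ as canonical; if chat and docs conflict, the docs win.
5. If a change alters behavior, rules, or contracts, update the matching artifacts/ document in the same stage.
6. Keep all changes scoped to the approved stage. Nothing extra.
```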

    That is not bureaucracy for the sake of it.

    That is how I reduce rework, drift, and fake progress.

    I am not writing documentation for fun. I am reducing agent memory burden.

    This is the part I think many people misunderstand.

    When I talk about artifacts/ and AGENTS.md, some people hear “documentation process” and immediately imagine overhead.

    I get it. Most developers do not enjoy writing documentation. I do not either.

    But in this workflow, the documentation is doing a different job.

    It is not there because I suddenly became emotionally attached to markdown.

    It is there because the agent needs governed context.

    The repository should not depend on my ability to remember every business rule, every scope boundary, every earlier decision, and every exception that accumulated across the product.

    That is too fragile.

    I want the codebase to be surrounded by enough structure that the agent can work with continuity instead of constantly depending on conversational reminders.

    In other words, I am not trying to become more formal.

    I am trying to become less forgetful at the system level.

    The nice part: I do not write most of this manually anymore

    There is also a practical point here that makes this much less painful than it sounds.

    At this point, creating AGENTS.md is barely a task I think about.

    I do not sit there writing it from scratch like it is a literary project.

    My usual setup is simple:

    1. I place the PRD or product brief inside artifacts/
    2. I attach a few AGENTS.md files from other workspaces
    3. I open the first chat in the new Codex workspace and ask it to create a new AGENTS.md for this project by extracting my working methodology from the previous ones and adapting it to the new product context

    That is usually enough.

    In less than a minute, I have a solid first draft of a new AGENTS.md.

    Then I review it, adjust anything that is specific to that repository, and move on.

    So no, I am not manually rebuilding the whole operating system every time.

    The first time requires thought. After that, the method starts reproducing itself.

    That is one of the practical advantages of having a clear methodology. Once it exists, your future workspaces get easier to set up.

    The setup changes from one repository to another

    This part matters a lot.

    I am not using one generic AGENTS.md for everything and pretending all projects have the same needs.

    They do not.

    A public WordPress plugin does not need the same rules as a private broker plugin.

    A mobile app does not need the same workflow notes as an n8n automation repo.

    A data-heavy system does not need the same constraints as a content-focused site.

    The structure is similar, but the actual instructions must adapt to the repository.

    For one project, the important sections may focus on plugin scope, localization rules, shared hosting limitations, and release notes.

    For another, the important sections may focus on API authorization, mobile UI rules, or migration architecture.

    That is exactly why I like this method.

    It is standardized without becoming generic.

    The repo gets a familiar operating model, but the actual rules still reflect the real product.

    What I usually put inside artifacts/

    I try to keep this practical.

    I am not trying to create a corporate document cemetery.

    I want documents that help the agent make better decisions and preserve continuity.

    A simple example might look like this:

    artifacts/
    ├── business-rules/
    │   ├── feature-a.md
    │   ├── feature-b.md
    │   └── retention-policy.md
    ├── api/
    │   ├── auth-flow.md
    │   └── webhook-contract.md
    ├── design/
    │   └── admin-ui-rules.md
    ├── release-notes/
    └── prd.md

    That is enough to be useful.

    The important part is not how impressive the tree looks. The important part is that the project now has durable memory.

    What I usually put inside AGENTS.md

    I also keep this structured and opinionated.

    A simple version usually includes:

    • purpose of the repository
    • collaboration protocol
    • source-of-truth files and folders
    • product scope and non-goals
    • technical environment baseline
    • engineering standards
    • implementation workflow
    • documentation update rules
    • definition of done
    • maintenance rule

    And yes, I usually include stage gates, confidence thresholds, and explicit approval rules.

    Some people may find that strict.

    I find it cheaper than cleaning up after a fast misunderstanding.

    One important rule: let the agent maintain the memory too

    This may be the most useful practical point in the whole method.

    I do not want artifacts/ to become a folder that gets outdated the moment real development starts.

    So I push one more rule into AGENTS.md:

    If a task changes behavior, business rules, API contracts, UX expectations, or implementation constraints, update the relevant document inside artifacts/ as part of the same stage output.

    That changes the role of documentation again.

    Now the docs are not a side activity I keep postponing.

    They become part of the actual workflow.

    That is important because I am not trying to personally write and maintain every document by hand. I want the system to help maintain its own memory as the product evolves.

    That is the real leverage.

    This setup makes AI coding feel much less fragile

    The practical payoff is straightforward.

    When the repository has memory and operating rules:

    • I repeat myself less
    • the agent guesses less
    • business rules are easier to preserve
    • product continuity improves
    • new chats are easier to start
    • stage outputs become cleaner
    • the code is less likely to drift from the real product

    Most importantly, I stop treating each coding session like a fresh act of reconstruction.

    That is a big shift.

    Instead of constantly re-explaining the project, I can point the agent to the repo’s memory and working rules, then focus on the stage in front of me.

    That is a much better use of both my time and the tool.

    Final thought

    I do not prepare a repository this way because I want AI development to feel more formal.

    I do it because I want it to feel more reliable.

    Before Codex writes code, I want the repository to know what it is, what rules matter, where the truth lives, and how work should happen.

    So I give it memory in artifacts/.

    And I give it operating rules in AGENTS.md.

    After that, the coding agent is no longer walking into an empty room.

    It is walking into a workspace that already knows how to think.

    Want a sample?

    I put together a sample AGENTS.md template based on the structure I usually use when setting up a new Codex workspace.

    It includes stage gates, confidence rules, source-of-truth guidance, and the requirement to keep artifacts/ updated as the project evolves.

    Download it here: AGENTS-sample-template-by-Hasan-Halabi.md

  • Why I Start Every AI-Built Product With a PRD

    One of the stranger side effects of AI-assisted development is that it made some people think planning matters less.

    I had the opposite experience.

    The more useful AI became, the more I needed clarity before I asked it to build anything.

That is why I start every AI-built product with a Product Requirements Document (PRD). I will use the term once here, but what I really mean is something simpler: a clear product definition. A brief. A spec. A document that forces me to stop pretending the idea is finished when it is still half-baked and wearing confidence like a costume.

    I am not doing this because I enjoy documentation. I am definitely not doing it because I miss corporate ceremony. I am doing it because I want better output, fewer wrong turns, and a product that behaves like something I actually meant to build.

    In my workflow, this step happens before Codex.

    Always.

    I do not want the coding tool to invent the product for me

    When people talk about building with AI, a lot of the conversation goes straight to prompting the coding tool.

    That makes sense on the surface. If the tool writes code, then the obvious question is how to talk to it.

    But for me, that is already too late.

    Before any implementation starts, I need to know what I am asking for. Not in a vague “I have a direction” way. I mean clearly enough that the system is not quietly making important product decisions on my behalf.

    Because that is what happens when the brief is weak.

    The tool still produces output. Sometimes a lot of it. Sometimes surprisingly good-looking output. But under the surface, it is filling gaps with assumptions. It is choosing scope, making tradeoffs, inventing behavior, smoothing over ambiguity, and guessing what matters.

    That may be acceptable for a quick experiment.

    It is not how I want to build products.

    I do not want Codex, or any AI coding tool, deciding what version one should include, what edge cases matter, which features should wait, what happens when something fails, or how the product should behave under real constraints. Those are product decisions. I want to make them before I get to the implementation stage.

    That is the real reason I start with a product brief.

    The document is not the goal. Clarity is the goal.

    I think this is where people misunderstand the point.

    The goal is not to produce a polished document for its own sake. The goal is to reach a level of clarity that makes implementation reliable.

    Sometimes that clarity ends up in a clean PRD. Sometimes it is a technical spec. Sometimes it is a plain-language product brief with enough structure to remove confusion. I do not care too much what label it gets.

    What I care about is whether it answers the questions that would otherwise get pushed into the coding phase.

    What is the product?

    Who is it for?

    What problem is it solving?

    What does the first version include?

    What is explicitly out?

    What should happen when something fails?

    What constraints matter in the real environment?

    If those answers are still fuzzy, then the coding stage becomes more expensive than it looks. Not always in money, but definitely in time, rework, review effort, and product drift.

    You feel like you are moving fast because code is appearing quickly.

    Then you discover you have been building motion, not clarity.

    Why this matters even more with AI

    With human developers, unclear thinking is already expensive.

    With AI, it gets expensive faster.

    That is the part I learned very quickly.

    If the product definition is weak, AI does not pause and say, “This is still not thought through properly, maybe let us step back.” It usually goes ahead and builds something. That can create a very convincing illusion of progress.

    And honestly, that illusion is dangerous.

    Because now you have screens, flows, database structures, API logic, and implementation details growing around assumptions that were never properly decided. You are not just missing clarity anymore. You are missing clarity with momentum.

    That is why I do not see the product brief as optional overhead. I see it as protection against fast confusion.

    The clearer the product is, the more useful AI becomes.

    It guesses less.

    It drifts less.

    It makes fewer wrong assumptions.

    It becomes easier to review.

    It becomes easier to refine.

    And the gap between what I meant and what gets built becomes much smaller.

    That is a very practical payoff.

    I use ChatGPT first because I need pressure before I need code

    This is also why my workflow starts in ChatGPT, not in Codex.

    At that stage, I am not asking for implementation. I am asking for pressure.

    I want the idea challenged.

    I want missing decisions exposed.

    I want vague language forced into specific language.

    I want the comfortable illusion that “the idea is clear in my head” to be tested before the coding starts.

    That is what ChatGPT is useful for in this part of the process.

    I use it to discuss what already exists in the market. I use it to brainstorm the shape of the product. I use it to narrow scope. I use it to discover what I have not yet decided. And very often, the value is not in the answer itself. The value is in the next question it pushes back at me.

    That interaction helps me move from “I think I want this” to “this is what I am actually building.”

    Only after that do I want the coding tool involved.

    What I put inside my product brief

    I do not treat this like a giant enterprise artifact. I keep it practical.

    At a minimum, I want these things clear before I start implementation:

    1. The purpose
    What the product is supposed to do, in plain language.

    2. The user
    Who it is for and what real need it serves.

    3. The first version
    What version one includes, and just as importantly, what it does not include.

    4. Core flows
    What the user actually does and what the product needs to support.

    5. Constraints
    Technical realities, hosting limits, security concerns, operational conditions, or anything else that changes design decisions.

    6. Failure behavior
    What should happen when something breaks, times out, gets interrupted, or loses access.

    7. Boundaries
    What I am consciously postponing so the first version stays focused.
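Put together, a minimal brief covering those seven points can be as short as this (the headings are my own convention, not a formal PRD standard):

```markdown
# Product Brief — working title

## Purpose
One paragraph: what the product does, in plain language.

## User
Who it is for and the real need it serves.

## Version One
Includes: the smallest coherent feature set.
Explicitly excludes: everything postponed past v1.

## Core Flows
A numbered list of what the user actually does.

## Constraints
Hosting limits, security concerns, operational realities.

## Failure Behavior
What happens on timeout, interruption, or lost access.

## Boundaries
What is consciously postponed, and why.
```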

    That last part matters more than people think.

    A good product brief is not only a description of what I want to build. It is also a written record of what I am refusing to build right now.

    That saves a lot of pain later.

    A real example from my backup plugin

    One public example I can talk about is a WordPress backup plugin I am designing for Google Drive backups.

    If I had started directly in a coding tool, the request could have been one sentence long: build a WordPress plugin that backs up a site to Google Drive.

    That sounds reasonable until you realize how much product thinking is hiding inside that sentence.

    What exactly gets backed up?

    Should it create large temporary ZIP files locally, or should it upload directly?

    Should version one support restore?

    Should it support migration?

    Should it support multiple cloud providers?

    How should it behave on shared hosting with limited resources?

    Without a clear product definition, the coding tool would still start building. But it would be building around guesses.

    In my case, working through the brief changed the shape of the product in important ways.

    For example, I made an explicit decision to keep version one focused on Google Drive only, instead of trying to support multiple storage providers immediately. I also made a clear decision that automated restore would be out of scope for version one. That kept the first release much tighter and much more realistic.

    Those are not small details. They shape architecture, user expectations, support burden, and implementation complexity from the beginning.

    That is exactly why I want those decisions made before code starts, not discovered halfway through it.

    This step saves me from fake progress

    One reason I value this part of the workflow so much is that it protects me from fake progress.

    Fake progress is when implementation starts quickly, output appears quickly, and the whole thing feels productive, but the actual product is still not defined properly.

    That kind of speed is seductive.

    It feels efficient.

    It looks efficient.

    But later it usually turns into revisions, corrections, backtracking, and the awkward realization that the tool did not misunderstand me. I just had not finished understanding the product myself.

    The product brief helps me catch that earlier, when the cost is still low.

    It is easier to fix vagueness in a document than in a growing codebase.

    It is easier to challenge scope in a discussion than after features have already started multiplying.

    It is easier to decide what matters before implementation than while reviewing output shaped by assumptions I never meant to approve.

    I do not see this as documentation. I see it as leverage.

    That is probably the simplest way to put it.

    I am not writing a product brief because I enjoy preparing documents before the “real work” begins.

    For me, this is the real work.

    This is where the product becomes concrete enough to build well.

    This is where ambiguity gets reduced.

    This is where version one becomes realistic.

    This is where the coding tool becomes more useful, because it is no longer being asked to fill in the product thinking for me.

    The better this stage goes, the more value I get from AI later.

    That is why I start here every time.

    Not because it looks organized.

    Not because it sounds professional.

    Because it works.

    And when I am using AI to build real products, “it works” is a much better standard than “it felt fast at the beginning.”

  • Using AI to Build Products Is Serious Work, Not Vibe Coding

    Using AI to Build Products Is Serious Work, Not Vibe Coding

    I really dislike the term vibe coding.

    Not because it sounds silly, although it does. And not because people should not have fun with technology. They should. The problem is that the term quietly suggests something bigger: that building with AI is mostly casual, experimental, and a bit unserious. Like you are just throwing prompts at a machine, getting lucky sometimes, and calling it product development.

    That is not how I see it.

    For small businesses, freelancers, and solo founders, AI-assisted development is not a toy. It is not a sideshow. It is a serious way to build real products faster, with less overhead, and with much more independence than most people realize.

    But there is a catch.

    Using AI to build products only works well when you stop treating it like magic.

    I am not speaking theoretically here. I am speaking from how I work now. I stopped hiring developers in early 2025, and today I rely on Codex as the only solo developer across the projects I am handling. That does not mean I sit down, type a vague idea, and watch perfect software appear like a cooking show reveal. It means I changed the way I think about product development.

    And that is exactly where most of the confusion around “vibe coding” starts.

    Using AI is not the problem. Vagueness is.

    The first rule many of us learned when we started studying computer science in the late 1990s was simple: garbage in, garbage out.

    That rule did not disappear because AI arrived. If anything, it became more important.

    If you give an AI coding tool a half-idea, a blurry goal, and a pile of unmade decisions, it will still produce something. That is part of the danger. It is very easy now to get output that looks impressive before you realize it is built on fuzzy thinking.

    That is what I would call vibe coding.

    Not using AI.

    Not building quickly.

    Not shipping with AI assistance.

    Vibe coding, to me, is when someone gives AI a semi-idea and expects it to somehow fill in the product thinking, business logic, edge cases, user needs, and technical tradeoffs by itself. That may produce demos. It may even produce code that runs. But it usually does not produce a reliable product.

    The issue is not the intelligence of the tool. The issue is the laziness of the input.

    My workflow does not start with Codex

    This is the part that matters most.

    I do not start with Codex.

    I start with ChatGPT.

    That surprises some people because if the goal is to build software, the instinct is to go directly to the coding tool. But in my experience, that is too early. Before implementation, I need clarity. I need pressure. I need questions. I need to discover whether the idea in my head is actually a product or just a direction wearing a confident face.

    So I use ChatGPT first for thinking, not coding.

    I use it to discuss what already exists in the market. I use it to brainstorm approaches. I use it to challenge weak assumptions. I use it to force me into specifics when I am still speaking too generally. Sometimes the most useful thing it does is not answering me. It is asking me the questions I should have asked myself earlier.

    That stage is extremely important because most product ideas are incomplete when they first show up. They sound clear because they are familiar to the person thinking about them. But once you start discussing them properly, all the hidden ambiguity comes out.

    What exactly is the first version?

    What is essential and what is not?

    What should happen when something fails?

    What does the user actually need, not what sounds nice in a feature list?

    What should be postponed even if it looks attractive now?

    That is where the serious work starts.

    The goal is not a better prompt. The goal is a clear product.

    A lot of people talk about prompting as if the secret is finding the perfect sentence to unlock perfect software.

    I do not think that is the real game.

    The goal is not to write a clever prompt. The goal is to create a clear picture of the product before asking an AI coding tool to build it.

    Sometimes that ends up becoming a proper PRD. Sometimes it is a technical specification. Sometimes it is just a very well-structured breakdown of scope, flows, decisions, and constraints. The format matters less than the clarity.

    What matters is that by the time I move into implementation, I am no longer asking the AI tool to invent the product for me. I am asking it to execute against something I have already thought through.

    That changes everything.

    When the product is clear, the AI becomes dramatically more useful. It stops guessing as much. It makes fewer wrong assumptions. It can move faster without dragging the whole project into chaos. Review becomes easier. Iteration becomes sharper. The output becomes more reliable because the target is more reliable.

    This is why I do not see serious AI development as “prompting.” I see it as structured product thinking followed by accelerated execution.

    A small real example

    One public example I can talk about is a WordPress backup plugin I am designing.

    If I had gone straight to a coding tool with the raw idea, the prompt would have sounded something like this: build me a WordPress plugin that backs up a site to Google Drive.

    That sounds fine until you realize how many decisions are hidden inside that one sentence.

    What exactly gets backed up?

    Should it create large local ZIP files or upload directly?

    Does it support manual backups, scheduled backups, or both?

    What belongs in version one and what should wait?

    How should it behave on limited shared hosting?

    What errors deserve admin alerts?

    Should it support restore? Migration? Multiple cloud providers? Incremental backup?

    If you skip those questions and just start coding, the AI will still build something. But now the product is being shaped by whatever the model guesses, not by your actual priorities.

    That is not a development strategy. That is delegation by wishful thinking.

    Instead, I used ChatGPT first to work through the product properly. The discussion narrowed the scope, removed unnecessary features from the first version, and focused heavily on reliability over feature count. The result was a much clearer first release: what it should do, what it should not do, how it should behave, and what kind of architecture made sense for the real hosting environments it would run on.

    Only after that kind of clarification does a coding tool become truly powerful.

    That is the difference I keep trying to explain when people reduce AI development to “vibes.”

    My tool preference is personal. The method is broader.

    For my own work, I use Codex.

    I have also tried Google AI assistance and Google Firebase Studio, and in my own experience they did not come close to Codex for the way I work. I have not had the chance to try Claude Code yet, so I am not pretending to publish a grand ranking of all AI coding tools from a mountaintop.

    But honestly, that is not the most important part.

    The methodology matters more than the vendor.

    If you start with vague thinking, weak scope, and missing decisions, most AI development tools will give you unreliable results sooner or later. If you start with a clear product picture, a defined first version, and a realistic understanding of the problem, you give any strong AI development tool a much better chance of producing useful work.

    Tool choice matters, yes.

    But thought process matters more.

    AI did not remove the need for product thinking. It increased it.

    This is the part many people still miss.

    AI does not remove the need to think clearly. It increases the cost of not thinking clearly.

    When development becomes faster, confusion also becomes faster. Wrong assumptions spread quicker. Bad scope decisions show up earlier. Weak product thinking gets amplified instead of hidden behind longer timelines.

    That is why I reject the framing behind vibe coding.

    Using AI to build products is not unserious. In many cases, it is the opposite. It demands sharper thinking because the execution layer has become much faster. If you are careless, the tool will happily help you build the wrong thing efficiently. That is not innovation. That is just a quicker route to regret.

    Serious AI development, at least in the way I use it, starts before the first line of code. It starts with clarity. It starts with the discipline to define what you want, what you do not want, and why.

    That is not a vibe.

    That is product work.

    And if you do it properly, it can absolutely produce real products.