Category: Clear Thinking

    The Hidden Skill in Using ChatGPT: Turning Ambiguity into Next Actions

    Most advice about using ChatGPT eventually becomes advice about prompts.

    Use this structure. Add this role. Say it in this order. Put this phrase at the end. Sprinkle a little magic dust on the prompt and wait for professional-grade results to arrive.

    I understand why people talk this way. Prompts matter. A clear request is better than a vague one. This is not exactly breaking news.

    But I think the obsession with “prompt engineering” misses the more important skill.

In real work, the hard part is often not writing the perfect prompt. The hard part is that you do not yet know what you are asking for. The situation is messy. The requirement is unclear. The business problem is half-defined. The client request sounds simple until you start touching it. The product idea feels obvious in your head, then collapses the moment someone asks one reasonable question.

    This is where ChatGPT becomes useful in a much deeper way.

    Not as a content generator.

    Not as a magic answer machine.

    Not as a productivity toy wearing a suit.

    Its real value, at least in the way I use it, is helping me turn ambiguity into the next practical action.

    That may sound less exciting than “10 prompts that will change your life,” but it is much more useful. Also, it has the small advantage of being true.

    The real problem is usually not the prompt

    When people say they are bad at using ChatGPT, they often assume the problem is that they are bad at prompting.

    Sometimes that is true. Many people do give it vague, lazy, or incomplete instructions and then act surprised when the output is not useful. That is still the old rule: garbage in, garbage out. AI did not cancel that rule. It just made the garbage arrive faster and with better formatting.

    But in many cases, the deeper problem is not the wording of the prompt.

    The deeper problem is unclear thinking.

    A founder may say, “I need help improving onboarding.”

    A client may say, “We need to automate this process.”

    A manager may say, “The team is not aligned.”

    A consultant may say, “I need to prepare a proposal.”

    A product owner may say, “We need an AI feature.”

    These sound like tasks. They are not tasks yet. They are clouds.

    There may be a task hiding inside them, but it has not been extracted. The real issue may be unclear ownership, missing information, bad workflow design, weak product positioning, unrealistic scope, poor communication, or simply the fact that nobody has agreed what success looks like.

    If you treat that kind of statement as a prompt and ask ChatGPT to “solve it,” you will usually get something that looks helpful and feels slightly hollow.

    The better move is to use ChatGPT to help unpack the situation before asking for output.

    That is the part many people skip.

    Sometimes you do not know what output you need

    One of the most useful lessons I learned from working with ChatGPT is that I do not always know what the right output should be.

    This sounds obvious, but it matters.

    People often approach ChatGPT as if the format is already clear:

    “Write me an email.”

    “Create a checklist.”

    “Summarize this.”

    “Give me a plan.”

    Those are useful requests when you already understand the problem well enough to know what form the next step should take.

    But many real situations are not like that.

    Sometimes you are facing a new kind of project. Sometimes you are entering a business area you do not fully understand. Sometimes a client request touches technical, operational, and political issues at the same time. Sometimes you are dealing with a personal situation you have never encountered before, and the problem does not come with a clean label attached.

    In those moments, asking for “a plan” may be too early.

    You may not need a plan yet.

    You may need questions.

    You may need a map of the situation.

    You may need a list of assumptions.

    You may need to separate facts from opinions.

    You may need to identify what you do not know.

    You may need to define the decision before trying to make it.

    This is where ChatGPT is valuable as a thinking partner. It can help you figure out what kind of output is useful before you waste time producing the wrong one.

    That difference matters.

    A polished checklist for the wrong problem is not progress. It is stationery.

    The best use of ChatGPT is often the questions it asks back

    The single most useful habit I have developed with ChatGPT is very simple:

    I ask it to ask me questions when it does not understand me well enough.

    That is it.

    Not a secret mega-prompt. Not a framework with a dramatic name. Not something I need to sell in a course while standing in front of a rented bookshelf.

    Just this: if the situation is unclear, do not pretend it is clear. Ask me.

    This one habit changed the quality of the output more than most prompt tricks I have seen online.

    The reason is simple. Good questions force better thinking.

    A vague idea can survive inside your head for a long time because nobody is challenging it there. It sounds complete because you are familiar with it. You know what you mean, or at least you think you do.

    Then ChatGPT asks:

    Who is this for?

    What does success look like?

    What is the first version?

    What is out of scope?

    What happens if this fails?

    Who owns the decision?

    What information is missing?

    What assumption are you making here?

    Suddenly, the idea is not as complete as it felt ten minutes ago.

    That is not a bad thing. That is the work.

    The question exposes the missing part. Answering it forces you to think in a direction you may have avoided, ignored, or simply never noticed. Many times, the question is more valuable than the answer because it moves your attention to the right place.

    This is especially useful in unfamiliar territory.

    When you already understand a domain, you know where the traps usually are. You know which questions matter. You know which details are dangerous to ignore.

    But when the territory is new, you do not even know what to look for. You may be confident about the wrong things and blind to the important ones. In that situation, a thinking partner that keeps asking structured questions is extremely useful.

    Not because it replaces your judgment.

    Because it improves the conditions under which your judgment works.

    Ambiguity becomes useful when it turns into an artifact

    A good ChatGPT session should not end with a nice conversation.

    It should end with something useful.

    That does not always mean a finished document. Sometimes the useful output is small. But it should be concrete enough that you can do something with it.

    For example, ambiguity can become:

    • a list of decisions that need to be made
    • a product brief
    • a checklist
    • a set of acceptance criteria
    • a meeting agenda
    • a proposal outline
    • a risk list
    • a set of client questions
    • a first experiment
    • a workflow map
    • a ClickUp task list
    • a draft email
    • a comparison table
    • a clear “not now” list

    This is where the value becomes real.

    The conversation takes something foggy and turns it into an object you can review, edit, share, assign, test, or implement.

    That is the difference between using ChatGPT as entertainment and using it as part of serious work.

    I do not want to leave a session thinking, “That was interesting.”

    Interesting is nice. Actionable is better.

    If I start with a vague idea for a product, I want to leave with a clearer product definition.

    If I start with a confusing client request, I want to leave with a list of questions that will uncover the real requirement.

    If I start with an operational mess, I want to leave with a workflow breakdown and the next few decisions.

    If I start with a new area I do not understand, I want to leave with a learning path, unknowns, risks, and first experiments.

    The point is not that ChatGPT magically solves the whole thing.

    The point is that the fog has been reduced.

    Now there is something to hold.

    This is how I use it before product work

    This is also why I use ChatGPT before I move into implementation work.

    When I am building a product, I do not want the coding tool to invent the product for me. That is not its job. Before I go anywhere near implementation, I need the idea to become clearer.

    So I use ChatGPT to pressure-test the thinking.

    What is the product supposed to do?

    Who is it for?

    What is version one?

    What should wait?

    What are the constraints?

    What would make this fragile?

    What are the dangerous assumptions?

    What happens when something fails?

By the time I move toward a PRD, a product brief, or a technical specification, much of the value has already been created. The document is not just documentation. It is the result of thinking being forced into shape.

    This is why I do not see ChatGPT as something I use only to “generate content.” That is one small use case.

    The better use case is structured thinking.

    It helps me move from “I have an idea” to “this is the product I am actually building.”

    Those are not the same thing.

    An idea can be vague and still sound impressive. A product definition cannot hide as easily. It has to answer questions. It has to make tradeoffs. It has to say what is included and what is not.

    That is where ChatGPT is useful. It helps expose the distance between the idea and the thing that can actually be built.

    This is also how I use it in business work

    The same pattern applies outside product development.

    For example, when work becomes messy across tools, people, deadlines, and priorities, ChatGPT can help me think through the mess before I put structure around it.

    I may start with a rough description of what is happening:

    This project has too many moving parts.

    This client request is unclear.

    This workstream keeps getting delayed.

    I am not sure what the next right step is.

    That is not enough to produce a reliable plan. But it is enough to begin a useful conversation.

    The value comes when ChatGPT starts helping me separate the situation into parts:

    What are the facts?

    What are the assumptions?

    Who is waiting for whom?

    What decision is blocked?

    What is urgent but not important?

    What is important but still undefined?

    What can be turned into a task?

    What needs a conversation before it becomes a task?

    This is where a tool like ClickUp becomes useful after the thinking. ChatGPT helps me clarify, question, and organize. ClickUp helps me store the result in a structured way.

    That sequence matters.

    If I put unclear thinking into a task management system, I do not get clarity. I get organized confusion. Very neat. Very searchable. Still confusion.

    The thinking has to happen first.

    Then the structure becomes useful.

    The problem with prompt engineering culture

    This is why I am not very impressed by the online obsession with prompt engineering.

    Not because prompts are useless. Again, clear language matters.

    But a lot of what gets sold as prompt engineering feels like course-selling theater. It takes a real thing — the importance of clear instruction — and turns it into a performance. Suddenly every normal thinking habit needs a special name, a template, a secret formula, and ideally a checkout page.

    I do not think most people need that.

    Most people need to get better at explaining the situation, identifying what is unclear, answering hard questions, and turning the conversation into a usable next step.

    That is not as marketable as “copy this prompt and become 10x,” but it is far more practical.

    The best ChatGPT users I have seen are not necessarily people with fancy prompts. They are people who can think clearly with the tool.

    They know when to ask for options.

    They know when to ask for questions.

    They know when to challenge assumptions.

    They know when to turn the discussion into a checklist.

    They know when to stop generating and start deciding.

    They know when the answer sounds good but is still not grounded enough.

    This is not prompt engineering in the theatrical sense.

    It is thinking discipline.

    The tool is useful, but you still own the judgment

    There is an important boundary here.

    Using ChatGPT as a thinking partner does not mean outsourcing your judgment to it.

    That would be a mistake.

    The tool can ask useful questions. It can organize information. It can suggest options. It can help you see gaps. It can turn scattered thoughts into a first structure. It can make unfamiliar territory feel less chaotic.

    But it does not live with the consequences.

    You do.

    You still need to decide what is true, what matters, what is safe, what is appropriate, and what should happen next.

    This is especially important in business situations where context matters. A tool may produce a clean plan that ignores the politics of a client relationship. It may suggest an efficient workflow that does not fit the people who actually have to use it. It may make something sound simple because it does not understand the hidden cost of change.

    So I do not use ChatGPT as the decision-maker.

    I use it as the thinking environment.

    That distinction keeps the work grounded.

The tool helps me think better. It does not absolve me of thinking.

    The real skill is moving from fog to next action

    The hidden skill in using ChatGPT is not having a perfect prompt library.

    It is knowing how to work with ambiguity.

    It is being able to start with something unclear and move toward something useful.

    Sometimes that means asking for a draft.

    Sometimes it means asking for a checklist.

    Sometimes it means asking for questions.

    Sometimes it means admitting that the next step is not a plan, but a better understanding of the problem.

This is why ChatGPT can be useful for non-technical founders, technical operators, consultants, and anyone who deals with messy work. It gives you a way to put pressure on your thinking. It helps you slow down the right parts of the process before you speed up the wrong ones.

    That matters because most bad execution does not start as bad execution.

    It starts as unclear thinking that nobody challenged early enough.

    ChatGPT is valuable when it helps you challenge that thinking before it turns into tasks, code, commitments, proposals, or decisions.

    Used well, it does not just help you produce more.

    It helps you see what needs to be produced.

    And sometimes, that is the whole difference.