Are You Using ChatGPT Like Search, or Like a Collaborator?
In 2025, ChatGPT introduced end-of-year recaps, similar to Spotify Wrapped: usage stats, the kinds of tasks you relied on it for, and even an “archetype” you were grouped into. I landed in the Strategist archetype, alongside about 3.6% of users.

That got me thinking. People use ChatGPT in many different ways, and it’s not just what they ask; it’s how they ask it. A lot of the time, it’s treated like a faster search bar. That’s useful, but it’s only one mode. In my work, the bigger impact comes when I treat ChatGPT less like a lookup tool and more like a collaborator: producing drafts, exploring options, pressure-testing ideas, and turning decisions into shippable outputs.
My productivity has changed noticeably over the past few years. I’ve always been a high-output person, but this has taken it to another level. So I asked myself a few questions: Is there a pattern to how I’m using it? Do my prompts follow a structure? Are there specific follow-ups that consistently lead to stronger results?
To find out, I asked ChatGPT to analyze our past conversations and pull out the repeatable pieces: what I tend to include up front, how I sift through options, and the kinds of follow-up prompts I use to stress-test and simplify. What follows is that pattern, generalized so it’s useful across different types of work.
Where This Workflow Applies
This isn’t just for writing. The same structure works anytime the goal is to move from a fuzzy problem to a clear, shippable output. For example:
- Process design: create workflows, handoffs, checklists, and acceptance criteria
- Systems thinking: map dependencies, identify constraints, define guardrails, and reduce complexity
- Product ideation: explore concepts, compare directions, define MVP vs v2, outline requirements
- Decision support: generate options, build a decision matrix, document tradeoffs and risks
- Planning: turn goals into a staged plan with milestones and next actions
- Communication artifacts: drafts for proposals and stakeholder updates
The Collaborator Workflow
1. Start with a structured prompt so ChatGPT can be decisive
Provide a brief that’s structured enough to prevent vague output; the brief gives ChatGPT the context it needs to ground its answers. Remember: vague prompts produce vague answers. A sample brief follows the checklist.
What to include (if applicable):
- Objective: what needs to be achieved by the end of the chat (a decision or deliverable)
- Audience: who it’s for (because it changes tone and constraints)
- Inputs: what’s already true, plus any draft/notes (even messy)
- Constraints: scope (v1 vs later), time/effort limits, tone, tools
- Output format: the shape of the deliverable (outline, checklist, steps, table, copy blocks)
- Rule of engagement: “Be decisive; label assumptions.”
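For example, a brief using this structure might look like the following (the project and numbers are invented for illustration):

```
Objective: Decide how to roll out our new customer-onboarding checklist, and end this chat with a one-page v1 draft.
Audience: Support team leads. Practical tone, no jargon.
Inputs: Onboarding currently takes ~10 business days; my rough notes are pasted below.
Constraints: v1 ships in two weeks using existing tools; keep it to one page.
Output format: An outline with numbered steps plus a short checklist.
Rule of engagement: Be decisive; label assumptions.
```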
2. Generate options first, then use human judgment to converge
Use ChatGPT to generate the options, then do the real work: selecting what’s strategically sound. Apply your judgment to cut weak options, combine ideas, or add your own ideas sparked by ChatGPT’s answers. An example follow-up prompt appears after the criteria below.
What to do:
- Ask for multiple directions, not a single answer.
- Apply human judgment using criteria like:
  - Clarity: will people understand it quickly?
  - Credibility: does it over-claim or oversimplify?
  - Effort: can it realistically ship with the available time and resources?
  - Fit: does it match the audience and context?
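A follow-up in that spirit might read (the phrasing is just one option):

```
Give me 3-5 distinct directions for this rollout, each with a one-line summary and its main tradeoff. Don't recommend one yet; I'll apply my own criteria and choose.
```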
3. Turn the chosen direction into a v1 artifact
Once a direction is selected, shift from “thinking” to “building.” The goal here is a complete first draft that is concrete enough to evaluate. It does not need to be perfect. It needs to be structured enough that it can be pressure-tested, revised, and simplified. An example hand-off prompt follows the list.
What to produce in v1:
- Structure: headings, sections, hierarchy
- Steps: sequence and decision points
- Definition of done: what “finished” means
- Draft text blocks: snippets, scripts, checklists, templates
- Assumptions: so they can be validated or corrected
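The hand-off from choosing to building can be a single prompt, along these lines (details illustrative):

```
Let's go with direction 2. Draft a complete v1: headings, numbered steps with decision points, a definition of done, and draft text for the checklist itself. List every assumption you're making at the end so I can confirm or correct them.
```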
4. Pressure-test the v1 for misalignment and gaps
Once there’s a v1 on the page, the next step is to make it defensible. This is where expectation management happens. The goal is to catch overpromising language, wrong-fit use cases, missing constraints, and anything that will break in execution. An example prompt for this pass follows the list.
Check where the output might:
- imply promises it can’t keep
- attract the wrong audience or scenario
- skip operational realities (tools, people, constraints)
- rely on assumptions that may not hold
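One way to phrase the pressure-test, built from the checks above (tune the categories to your context):

```
Pressure-test this draft. Where does it imply promises we can't keep, attract the wrong audience or scenario, skip operational realities (tools, people, constraints), or rest on assumptions that may not hold? Be blunt, and list the top five risks in order of likely impact.
```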
5. Iterate by simplifying (because adoption beats complexity)
After pressure-testing, simplify. A draft can be “right” and still fail if it’s too complicated to follow. The goal is fewer steps, clearer ownership, and a version someone can use quickly without extra explanation. Example wording follows the list.
Ask for:
- a simpler version
- an “MVP now” vs “v2 later” version
- a checklist version
- a 30-second scannable version
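And the simplification pass might sound like this (wording illustrative):

```
Now simplify. Give me an "MVP now" version with the fewest possible steps and clear ownership for each one, a "v2 later" list for everything we cut, and a 30-second scannable summary at the top.
```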
If there’s one takeaway here, it’s that the quality of the output usually follows the quality of the interaction. A small amount of structure up front, plus a habit of generating options, pressure-testing, and simplifying, can turn ChatGPT from “quick answers” into a practical tool for shipping real work.
I’m curious how others are using it. When you open ChatGPT, what mode are you in most often: looking something up, talking through an idea, or producing a deliverable? What does your prompt usually include, and what follow-ups help you get from a rough draft to something you can actually use?
If you have a pattern that consistently works for you, share it. I’d love to compare workflows and keep refining a set of strategies that help people get better outcomes, faster.


