Writing a Great Prompt

Most “bad prompts” aren’t wrong—they’re just missing parts.
When someone says, “ChatGPT is inconsistent,” what they often mean is: I gave it a one-liner and hoped it would read my mind. (Relatable.)
A great prompt doesn’t rely on mind-reading. It gives the model four ingredients:
- Context — who/what/why this is for
- Instruction — the job to do (and rules)
- Input Data — the raw material to work with
- Output Indicator — what the final answer should look like
If you master this anatomy, you can prompt like a pro whether you’re writing Python, drafting legal language, or sending a sales follow-up.
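To make the anatomy concrete, here's a minimal sketch of how you might assemble the four parts in code. The function name, section labels, and example values are illustrative, not a standard API.

```python
# Minimal sketch: compose the four ingredients into one prompt string.
# The function name and section labels are illustrative, not a standard API.

def build_prompt(context: str, instruction: str, input_data: str, output_indicator: str) -> str:
    """Assemble a prompt from the four ingredients, in a fixed order."""
    return "\n\n".join([
        f"CONTEXT:\n{context}",
        f"INSTRUCTION:\n{instruction}",
        f"INPUT DATA:\n{input_data}",
        f"OUTPUT INDICATOR:\n{output_indicator}",
    ])

prompt = build_prompt(
    context="You write weekly updates for an engineering director.",
    instruction="Summarize the week in under 120 words.",
    input_data="- Shipped the billing fix\n- Reduced p95 latency\n- Hiring still blocked",
    output_indicator="Return 3 bullets: Done, Metrics, Blockers.",
)
```

The order doesn't have to be exactly this; what matters is that all four parts show up every time.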
The 4-part rule
If your output feels off, don’t rewrite the entire prompt. First check: did you provide context, instruction, input data, and an output indicator?
1) Context: Put the Model in the Right “Room”
Context is the background the model needs to behave appropriately.
Examples of context:
- Who the audience is (VP of Sales vs new intern)
- Your goal (inform, persuade, de-escalate, debug)
- Constraints (brand voice, legal risk, company policy)
For AI engineers: context can include the stack, runtime, libraries, and failure mode.
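If you're calling a model from code, context usually travels in the system message. Here's a minimal sketch, assuming an OpenAI-style chat `messages` list; the details are placeholders:

```python
# Sketch: context goes in the system message, the task goes in the user message.
# Assumes an OpenAI-style chat "messages" format; all details are placeholders.
messages = [
    {
        "role": "system",
        "content": (
            "You are a senior Python engineer reviewing a FastAPI service. "
            "Stack: Python 3.11, pydantic v2. "
            "Failure mode: intermittent 500s under load. "
            "Audience: the on-call engineer. Goal: a minimal, low-risk fix."
        ),
    },
    {"role": "user", "content": "Here is the stack trace and the endpoint code: ..."},
]
```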
2) Instruction: Give a Clear Job (Not a Vibe)
Instruction is what you actually want done.
Strong instructions:
- Start with an action verb (“summarize,” “draft,” “refactor,” “compare”)
- Add requirements (“keep it under 120 words,” “include 3 options,” “cite sources”)
- Include “don’ts” sparingly (“don’t invent metrics,” “don’t assume access to databases”)
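A quick before/after, written as plain strings (the wording is just an example):

```python
# Vague: the model has to guess scope, length, and format.
weak_instruction = "Make this email better."

# Specific: action verb, requirements, and one targeted "don't".
strong_instruction = (
    "Rewrite this email for a VP of Sales. "
    "Keep it under 120 words, include 3 subject-line options, "
    "and don't invent metrics that aren't in the input."
)
```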
3) Input Data: Provide the Ingredients
Input data is the text, bullet points, code, table, or transcript you want the model to transform.
This is where many prompts fail. People ask:
“Write my Q4 update”
…but don’t provide:
- accomplishments
- metrics
- blockers
- next steps
No inputs = the model improvises.
Garbage in, confident garbage out
LLMs will happily generate something that sounds right even if your input data is missing. If it matters, include the source material.
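For the Q4 update above, grounding the request means pasting in the actual material. A sketch, where the facts are placeholders you'd replace with your own:

```python
# Sketch: real inputs pasted into the prompt instead of left to the model's imagination.
# The bullet contents below are placeholders.
input_data = """
Accomplishments:
- Closed the SSO migration for 3 enterprise accounts
Metrics:
- Pipeline up 12% quarter over quarter
Blockers:
- Waiting on legal review of the new DPA template
Next steps:
- Kick off the EMEA pilot in January
"""

prompt = (
    "CONTEXT: You write quarterly updates for a revenue team.\n"
    "INSTRUCTION: Draft my Q4 update in under 200 words.\n"
    f"INPUT DATA:\n{input_data}\n"
    "OUTPUT INDICATOR: Return 4 short sections: Wins, Metrics, Blockers, Next steps."
)
```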
4) Output Indicator: Specify the Shape of the Answer
Output indicators are format instructions that force clarity.
Examples:
- “Return a table with columns: Risk, Impact, Mitigation”
- “Output valid JSON with fields: title, summary, action_items”
- “Use exactly 5 bullets, each under 12 words”
This is how you get consistency across teams.
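Output indicators pay off most when you actually check them in code. A minimal sketch, assuming you already have the model's reply as a string:

```python
import json

# Sketch: verify the model followed the JSON output indicator.
# `reply` stands in for the raw model response.
reply = '{"title": "Q4 risks", "summary": "...", "action_items": ["..."]}'

data = json.loads(reply)  # raises JSONDecodeError (a ValueError) if it isn't valid JSON
missing = {"title", "summary", "action_items"} - data.keys()
if missing:
    raise ValueError(f"Model skipped required fields: {missing}")
```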
Example 1: Marketing + Sales (Reliable Copy, Not Random Copy)
Here’s a prompt that uses all four parts.
```
CONTEXT:
You are a B2B copywriter for JoinAISchool. Audience is a VP of Sales who wants their team to use AI responsibly.

INSTRUCTION:
Write a short outbound email (90–110 words) inviting them to a 4-week prompt engineering program.
Tone: confident, helpful, not hype.
Include: subject line + one clear CTA.

INPUT DATA:
* Course: Prompt Engineering for teams (hands-on)
* Outcome: better emails, faster research, safer workflows
* Proof point: “Used by cross-functional teams (engineering + revenue)”

OUTPUT INDICATOR:
Return exactly:
1. Subject line
2. Email body
```
Why it works: the model knows the audience, the task, the ingredients, and the exact format.
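If you'd rather run this programmatically than in a chat window, here's one way, sketched with the OpenAI Python SDK; swap in whatever client and model your team actually uses:

```python
from openai import OpenAI

# Sketch only: requires the `openai` package and an OPENAI_API_KEY in the environment.
# The model name is a placeholder; the prompt is abridged - paste the full four-part
# prompt from the example above.
client = OpenAI()

four_part_prompt = """CONTEXT:
You are a B2B copywriter for JoinAISchool. Audience is a VP of Sales ...

INSTRUCTION:
Write a short outbound email (90-110 words) ...

INPUT DATA:
* Course: Prompt Engineering for teams (hands-on)

OUTPUT INDICATOR:
Return exactly:
1. Subject line
2. Email body
"""

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": four_part_prompt}],
)
print(response.choices[0].message.content)
```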
Example 2: AI Engineer (Debugging With Boundaries)
Now let’s do a technical prompt that’s structured and safe.
```
CONTEXT:
You are a senior Python engineer helping debug a FastAPI endpoint.
Environment: Python 3.11, pytest, pydantic v2.

INSTRUCTION:
Identify the bug and propose the minimal fix. Then add one regression test.
Do not change function signatures unless necessary.

INPUT DATA:
<stack trace here>
<relevant code here>

OUTPUT INDICATOR:
Return:
- Root cause (3 bullets)
- Patch (code block)
- New test (code block)
```
Again: context + instruction + input + format = fewer surprises.
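Because the output indicator names exact sections, you can sanity-check the reply before trusting it. A small sketch, where `reply` stands in for the model's response text:

```python
# Sketch: sanity-check that the debugging reply follows the output indicator.
# `reply` stands in for the model's response; here it's a dummy value.
reply = "Root cause\n- ...\n\nPatch\n<code here>\n\nNew test\n<code here>"

required_sections = ["Root cause", "Patch", "New test"]
missing = [section for section in required_sections if section not in reply]

if missing:
    print(f"Reply is missing sections: {missing}. Re-run or tighten the prompt.")
```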
Takeaway
A great prompt isn’t long—it’s complete.
When your AI output misses the mark, don’t panic and start over. Use the anatomy checklist:
- Context: Who is this for and what’s the situation?
- Instruction: What should it do (and what should it avoid)?
- Input Data: What info should it use to stay grounded?
- Output Indicator: What format makes “good” undeniable?
Get these four right, and your prompts will feel less like gambling—and more like engineering.
