
Formatting LLM Outputs

By Dan Lee
Dec 20, 2025

Let’s be real: the fastest way to hate using an LLM is getting a beautiful answer… in a format you can’t use.

You asked for “action items” and got a wall of text. You asked for a comparison and got a vague paragraph. You asked for something pasteable into Notion, Jira, Confluence, Slack, or GitHub—and the formatting exploded.

Good news: you can fix 80% of that with one prompt habit:

Tell the model exactly what shape the output should take.

A simple rule

If you don’t constrain the output format, the model will pick one for you (and it might be the wrong one).

Why formatting is a prompt engineering skill (not a styling detail)

Formatting is how you get:

  • Consistency across teammates and runs
  • Scannability for busy humans
  • Machine-readability for automation (copy/paste, parsing, checklists)

This matters for everyone:

  • Engineers want pasteable code reviews and bug tables.
  • Sales wants tight follow-ups, not essays.
  • HR wants structured interview rubrics.
  • Support wants sortable triage outputs.

Lists: for speed, clarity, and “what do I do next?”

Use lists when the reader needs quick decisions.

Prompt patterns that work:

  • “Return exactly 7 bullets”
  • “Each bullet under 12 words”
  • “Order bullets by impact”

Example 1: Executive-ready action items (bullet list)

Text
Turn these meeting notes into action items.
Output rules:
- Exactly 6 bullets
- Each bullet starts with a verb
- Add an owner tag in brackets like [Marketing] or [Engineering]
- Keep each bullet under 14 words
Notes: <PASTE HERE>

That “starts with a verb” trick is sneaky-good. It turns summaries into tasks.
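
Those rules are also easy to check by machine. Here’s a minimal sketch (illustrative code, not part of the prompt) that flags a reply breaking the bullet count, owner tag, or word limit before you paste it into a tracker. The “starts with a verb” rule is left out, since checking it reliably takes more than string matching.

Python
# Minimal format check for the action-item prompt above.
# `reply` is assumed to hold the model's raw output; the sample call at the
# bottom uses made-up bullets for illustration.

def check_action_items(reply, expected_bullets=6, max_words=14):
    problems = []
    bullets = [line.strip() for line in reply.splitlines() if line.strip().startswith("-")]
    if len(bullets) != expected_bullets:
        problems.append(f"expected {expected_bullets} bullets, got {len(bullets)}")
    for bullet in bullets:
        text = bullet.lstrip("- ").strip()
        if "[" not in text or not text.endswith("]"):
            problems.append(f"missing owner tag: {text}")
        if len(text.split()) >= max_words:  # prompt asks for under 14 words
            problems.append(f"too long: {text}")
    return problems

# Two well-formed bullets still fail the count rule:
print(check_action_items("- Draft launch email [Marketing]\n- Fix checkout bug [Engineering]"))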

Tables: for comparisons, prioritization, and operational work

Tables are your best friend when you need structure: tradeoffs, QA, planning, evaluation.

Prompt patterns that work:

  • “Return a Markdown table with columns: …”
  • “No extra commentary outside the table”
  • “Include a final row with recommendation”

Example 2: Compare options (Markdown table)

Text
Compare these 3 options for an internal AI chatbot:
1) OpenAI hosted API
2) Anthropic Claude API
3) Self-hosted Llama
Output:
Return a Markdown table with columns:
Option | Best For | Risks | Cost Notes | Recommendation (Yes/No)
Rules:
- Keep each cell under 18 words
- No text outside the table

Now your answer is instantly pasteable into Notion or a decision doc.
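
And because the reply is pure table, it’s trivial to parse. A minimal sketch, assuming the model’s text is stored in reply (the row here is a placeholder, not a real assessment):

Python
# Parse a pipe-delimited Markdown table into a list of dicts so the
# comparison can feed a script or spreadsheet. Sample data is made up.

reply = """\
| Option | Best For | Risks | Cost Notes | Recommendation (Yes/No) |
| --- | --- | --- | --- | --- |
| Option A | Placeholder | Placeholder | Placeholder | Yes |"""

lines = [line.strip().strip("|") for line in reply.splitlines() if line.strip()]
headers = [cell.strip() for cell in lines[0].split("|")]

rows = []
for line in lines[2:]:  # skip the header row and the --- separator
    cells = [cell.strip() for cell in line.split("|")]
    rows.append(dict(zip(headers, cells)))

print(rows[0]["Recommendation (Yes/No)"])  # -> "Yes"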

Markdown: make outputs portable across tools

Markdown is the “universal adapter” for modern teams: GitHub, Slack, Notion, docs, and wikis.

To get clean Markdown:

  • Ask for one H1 title and consistent H2s
  • Request code fences for code only
  • Use short sections and avoid deep nesting
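
Example 3: Notes into a clean Markdown doc

One way to roll those rules into a reusable prompt:

Text
Turn these notes into a Markdown document.
Output rules:
- One H1 title, then H2 section headings only
- Use code fences only for actual code
- Keep each section short; no nesting deeper than one bullet level
- Do not include anything outside the document
Notes: <PASTE HERE>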

Prevent format drift

Add: “Do not include anything outside the requested format.” It’s the easiest way to stop extra paragraphs from sneaking in.
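
If you’re calling the model from code, the same guard doubles as a retry rule. A minimal sketch, assuming a hypothetical call_llm(prompt) helper standing in for whatever SDK you use, and a table-only reply like Example 2:

Python
# Append the guard line and retry if anything outside the table sneaks in.
# call_llm() is a hypothetical helper, not a real SDK function.

GUARD = "Do not include anything outside the requested format."

def ask_for_table(prompt, call_llm, max_tries=2):
    reply = ""
    for _ in range(max_tries):
        reply = call_llm(prompt + "\n" + GUARD)
        lines = [line for line in reply.strip().splitlines() if line.strip()]
        if lines and all(line.lstrip().startswith("|") for line in lines):
            return reply  # every line is a table row: no drift
    return reply  # still drifting after max_tries; let the caller decide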

Quick formatting cheat sheet

  • Need clarity fast → Bullets
  • Need comparisons or decisions → Tables
  • Need a doc you can paste anywhere → Markdown

Takeaway

Formatting isn’t cosmetic—it’s control.

When you specify tables, lists, or Markdown (plus a few tight rules), your LLM outputs become scannable, reusable, and automation-friendly. And that’s when AI stops being a “cool demo” and starts being a real workflow tool.

Dan Lee
DataInterview Founder (Ex-Google)

Dan Lee is an AI tech lead with 10+ years of industry experience across data engineering, machine learning, and applied AI. He founded DataInterview and previously worked as an engineer at Google.