PromptingBasics

Code Generation

By
Dan Lee
Dec 20, 2025

Generating Code (That Doesn’t Break Later)

Let’s be honest: AI-generated code is either a lifesaver or a landmine. When it’s good, you get a working feature in minutes. When it’s bad, you get a confident pile of bugs, missing edge cases, and imports from a fictional universe.

The difference usually isn’t the model. It’s the prompt.

This guide gives you a practical prompting playbook for generating code that’s more likely to compile, pass tests, and survive code review—whether you’re an AI engineer, SWE, data scientist, or a non-technical professional trying to automate work safely.

The Golden Rule

Treat the prompt like a mini-spec. If you wouldn’t hand it to a teammate, don’t hand it to an LLM.

What to Include in a Code Prompt

Great code prompts answer five questions:

  1. What are we building? (feature goal)
  2. Where does it run? (language, framework, environment)
  3. What constraints exist? (performance, security, style, dependencies)
  4. What inputs/outputs are expected? (types, schemas, examples)
  5. How do we know it’s correct? (tests, acceptance criteria)

If you include these, you’ll reduce the top failure modes: wrong assumptions, wrong libraries, missing edge cases, and mismatch with your codebase.
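
If you template prompts programmatically, the five questions map naturally onto a small helper. The function and field names below are illustrative only (a sketch, not part of any library):

Python
def build_code_prompt(goal: str, environment: str, io_contract: str,
                      constraints: list[str], acceptance: list[str]) -> str:
    """Assemble a mini-spec style prompt from the five answers above."""
    constraint_lines = "\n".join(f"- {c}" for c in constraints)
    acceptance_lines = "\n".join(f"- {a}" for a in acceptance)
    return (
        f"Context: {environment}\n"
        f"Instruction: {goal}\n"
        f"Input/Output: {io_contract}\n"
        f"Constraints:\n{constraint_lines}\n"
        f"Acceptance criteria:\n{acceptance_lines}"
    )

prompt = build_code_prompt(
    goal="Add a POST /sentiment endpoint with unit tests.",
    environment="Senior Python engineer working on an existing FastAPI service.",
    io_contract='Request { "text": string } -> Response { "label": str, "score": float }',
    constraints=["Max 2,000 chars of input text", "Type hints and docstrings"],
    acceptance=["Tests cover empty text, too-long text, and normal input"],
)
print(prompt)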

Example 1: Engineer Mode (FastAPI Endpoint + Tests)

Text
Context: You are a senior Python engineer. We have a FastAPI service and want a new endpoint to score text sentiment.
Instruction: Generate production-ready code for the endpoint and unit tests.
Input Data:
- Framework: FastAPI
- Endpoint: POST /sentiment
- Request JSON: { "text": string }
- Response JSON: { "label": "positive"|"neutral"|"negative", "score": float }
- Use a placeholder sentiment function (do not call external APIs)
Constraints:
- Validate empty/too-long text (max 2,000 chars)
- Return 422 for invalid inputs (FastAPI validation is fine)
- Include type hints and docstrings
Output Indicator:
- Provide 2 code blocks:
1) app.py (FastAPI app + endpoint)
2) test_app.py (pytest tests using TestClient)
- Tests must cover: empty text, too long, normal input

Why this works: it pins down framework, IO contracts, validation rules, and tests—so the model can’t “freewheel.”
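
To make the spec concrete, here is one plausible shape of the app.py a model might return for this prompt. It is a rough sketch, not a canonical answer; the sentiment logic is the placeholder the prompt asks for:

Python
"""app.py - sketch of the FastAPI endpoint described in Example 1."""
from typing import Literal

from fastapi import FastAPI
from pydantic import BaseModel, Field

app = FastAPI()


class SentimentRequest(BaseModel):
    # min_length/max_length enforce the prompt's validation rules;
    # FastAPI returns 422 automatically when they fail.
    text: str = Field(..., min_length=1, max_length=2000)


class SentimentResponse(BaseModel):
    label: Literal["positive", "neutral", "negative"]
    score: float


def score_sentiment(text: str) -> tuple[str, float]:
    """Placeholder sentiment function; swap in a real model later."""
    lowered = text.lower()
    if "great" in lowered or "love" in lowered:
        return "positive", 0.9
    if "bad" in lowered or "hate" in lowered:
        return "negative", 0.9
    return "neutral", 0.5


@app.post("/sentiment", response_model=SentimentResponse)
def sentiment(req: SentimentRequest) -> SentimentResponse:
    """Score the sentiment of the submitted text."""
    label, score = score_sentiment(req.text)
    return SentimentResponse(label=label, score=score)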

Example 2: Non-Technical Mode (Sheets → Clean CSV Script)

Non-technical professionals often need lightweight automation. The key is to be explicit about data shape and success criteria.

Text
Context: I’m a marketing manager exporting leads from a spreadsheet. I need a script to clean the file before uploading it to a CRM.
Instruction: Write a Python script that reads leads.csv and outputs leads_clean.csv.
Input Data:
Columns: email, first_name, last_name, company, source, created_at
Rules:
- Drop rows where email is missing or invalid
- Trim whitespace in all string fields
- Standardize source values to: "LinkedIn", "Webinar", "Referral", "Other"
- created_at should be converted to ISO format (YYYY-MM-DD) if possible; otherwise leave blank
Output Indicator:
- Provide one Python file named clean_leads.py
- Include a short usage section as comments at the top
- Use only the Python standard library plus pandas (no other third-party packages)

This is safe automation: clear rules, clear inputs, clear output.
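
One plausible result is sketched below with pandas. Details the prompt leaves open, like the exact email regex, are illustrative choices rather than the only valid ones:

Python
"""clean_leads.py - sketch of the script described in Example 2.

Usage: python clean_leads.py (reads leads.csv, writes leads_clean.csv).
"""
import re

import pandas as pd

EMAIL_RE = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
SOURCE_MAP = {"linkedin": "LinkedIn", "webinar": "Webinar", "referral": "Referral"}


def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Trim whitespace in all string fields.
    for col in df.columns:
        if df[col].dtype == object:
            df[col] = df[col].astype(str).str.strip()

    # Drop rows where email is missing or invalid.
    df = df[df["email"].apply(lambda e: bool(EMAIL_RE.match(e)))].copy()

    # Standardize source values; anything unrecognized becomes "Other".
    df["source"] = df["source"].str.lower().map(SOURCE_MAP).fillna("Other")

    # Convert created_at to ISO format (YYYY-MM-DD) where possible, else leave blank.
    parsed = pd.to_datetime(df["created_at"], errors="coerce")
    df["created_at"] = parsed.dt.strftime("%Y-%m-%d").fillna("")
    return df


if __name__ == "__main__":
    leads = pd.read_csv("leads.csv", dtype=str).fillna("")
    clean(leads).to_csv("leads_clean.csv", index=False)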

Always Ask for Tests

If you want shippable code, require tests or examples. “Include 3 tests” is the cheapest quality boost you can buy.
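
Against the FastAPI prompt in Example 1, "tests must cover empty text, too long, normal input" should yield something like this sketch (assuming the app.py sketch shown earlier):

Python
"""test_app.py - sketch of the three tests the Example 1 prompt requires."""
from fastapi.testclient import TestClient

from app import app

client = TestClient(app)


def test_empty_text_rejected():
    # Empty text fails the min_length validation and returns 422.
    response = client.post("/sentiment", json={"text": ""})
    assert response.status_code == 422


def test_too_long_text_rejected():
    # Text over 2,000 characters also fails validation.
    response = client.post("/sentiment", json={"text": "a" * 2001})
    assert response.status_code == 422


def test_normal_text_scored():
    # A normal input returns a label and a score.
    response = client.post("/sentiment", json={"text": "I love this product"})
    assert response.status_code == 200
    body = response.json()
    assert body["label"] in {"positive", "neutral", "negative"}
    assert 0.0 <= body["score"] <= 1.0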

A Quick “Quality Checklist” for Generated Code

Before you paste it into your repo, check:

  • Does it match your stack and versions?
  • Are imports real and dependencies acceptable?
  • Are inputs validated and errors handled?
  • Are there tests or at least runnable examples?
  • Are secrets and credentials never hardcoded?

If any of those are missing, update the prompt and regenerate.
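
The last checklist item is the easiest to spot and the costliest to miss. A minimal illustration of what to ask for instead of a hardcoded key (the variable name here is made up):

Python
import os

# Don't accept generated code like this:
# api_key = "sk-live-123..."
# Ask for configuration via environment variables instead:
api_key = os.environ.get("CRM_API_KEY")  # CRM_API_KEY is an illustrative name
if not api_key:
    raise RuntimeError("Set the CRM_API_KEY environment variable before running.")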

Takeaway

AI can generate code fast, but reliability comes from structure. Write prompts like mini-specs: define the environment, IO contracts, constraints, and acceptance tests. Then you’re not “asking for code”—you’re commissioning an implementation you can review, run, and ship with confidence.

Dan Lee

DataInterview Founder (Ex-Google)

Dan Lee is an AI tech lead with 10+ years of industry experience across data engineering, machine learning, and applied AI. He founded DataInterview and previously worked as an engineer at Google.