4 exercises — learn to write specific, structured prompts that produce reliable, usable outputs.
Prompt anatomy for technical tasks
Role — "Act as a senior backend engineer…"
Task — "Write / Review / Summarise / Generate…"
Context — what technology, audience, or situation
Constraints — length, tone, format, what to avoid
Output format — JSON, markdown, numbered list, code block
Examples — provide 1–2 examples when format matters ("few-shot")
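The anatomy above can be sketched as a small helper that assembles the components into one prompt string. This is a minimal illustration, not part of any library; the function name `build_prompt` and the component values are hypothetical.

```python
def build_prompt(role, task, context, constraints, output_format, examples=None):
    """Join the prompt-anatomy components into a single prompt string."""
    parts = [
        f"Act as {role}.",
        f"Task: {task}",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Output format: {output_format}",
    ]
    if examples:  # include few-shot examples only when format matters
        parts.append("Examples:\n" + "\n".join(examples))
    return "\n".join(parts)

prompt = build_prompt(
    role="a senior backend engineer",
    task="write a user-facing error message for a failed file upload",
    context="the failure is caused by an unsupported file type",
    constraints="max 20 words, friendly tone, non-technical audience",
    output_format="a single sentence",
)
print(prompt)
```

Keeping each component on its own labelled line makes it easy to spot which part of the anatomy is missing when a prompt underperforms.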
1 / 4
You need ChatGPT to write an error message for a failed file upload. Which prompt will produce the most useful result?
Option B is the well-structured prompt. It includes every component of a clear, specific prompt:
• Task — "write a user-facing error message"
• Constraint — "max 20 words"
• Context — "file upload failure caused by an unsupported file type"
• Audience — "non-technical" users
• Tone — "friendly"
• Data to use — the supported formats list
The prompt anatomy for technical tasks: Role + Task + Context + Constraints + Output format. The more specific the input, the more useful the output. Vague prompts produce vague output that needs several rounds of revision, wasting time.
For example, the output from Option B might be: "Oops! We couldn't upload your file. Please use JPG, PNG, PDF, or DOCX." — which is immediately usable.
2 / 4
You want the LLM to act as a code reviewer. Which system prompt will produce the most useful reviewing behaviour?
Option B is the professional system prompt technique. A good system prompt:
• Assigns an expert role — "senior TypeScript engineer with 10 years of experience" (primes domain knowledge)
• Defines specific behaviours — the numbered list of what to always do
• Specifies output format — "ISSUE / SUGGESTION / PRAISE" makes the response easy to scan
• Sets tone — "concise and direct"
System prompts (instructions given at the start of a conversation, before user messages) shape every subsequent response. A strong system prompt eliminates the need to repeat instructions in every message.
System prompt pattern: You are [role]. When [task], always [behaviour 1], [behaviour 2]. Respond in [format].
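In chat-style LLM APIs, this pattern maps to a system message placed before any user messages. The sketch below uses the common OpenAI-style message format but makes no API call; the specific review behaviours in `SYSTEM_PROMPT` are illustrative, since the original option text is not reproduced here.

```python
# Illustrative system prompt following the pattern:
# You are [role]. When [task], always [behaviours]. Respond in [format].
SYSTEM_PROMPT = (
    "You are a senior TypeScript engineer with 10 years of experience. "
    "When reviewing code, always: 1) check for type-safety issues, "
    "2) flag unhandled errors, 3) suggest simpler alternatives. "
    "Label each point ISSUE / SUGGESTION / PRAISE. Be concise and direct."
)

def make_conversation(user_message):
    """The system message comes first and shapes every later reply."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_message},
    ]

messages = make_conversation("Review this function: ...")
```

Because the system prompt persists for the whole conversation, later user turns can stay short ("Review this one too") without repeating the instructions.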
3 / 4
You want a structured JSON output from an LLM for a list of tech terms. Which prompt is most effective?
Option C is the complete structured-output prompt. Key elements:
• Output type specified — "JSON array"
• Quantity specified — "5 programming terms"
• Schema defined — each field named with its type and constraints
• Constraints per field — "max 15 words" for definition, enum for difficulty
• Clean output instruction — "only the JSON, no additional text or markdown code fences" — this prevents the LLM from wrapping the JSON in prose or ```json blocks
For production use, always add: "Return only valid JSON that can be parsed by JSON.parse()."
This technique is commonly used when piping LLM output into application code. Without specifying the exact schema and "no extra text", you will often receive output that needs manual cleanup before parsing.
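When the schema is pinned down this tightly, the application side can validate the reply before using it. A minimal sketch in Python (using `json.loads`, the equivalent of `JSON.parse()`); the `raw` string stands in for a model reply, and the field names mirror the schema described above.

```python
import json

# Stand-in for an LLM reply that followed the structured-output prompt.
raw = '''[
  {"term": "closure",
   "definition": "A function that remembers its enclosing scope.",
   "difficulty": "intermediate"}
]'''

ALLOWED_DIFFICULTY = {"beginner", "intermediate", "advanced"}

def parse_terms(text):
    """Parse and validate the reply against the prompted schema."""
    data = json.loads(text)  # raises ValueError if extra prose slipped in
    for item in data:
        assert set(item) == {"term", "definition", "difficulty"}
        assert len(item["definition"].split()) <= 15   # per-field constraint
        assert item["difficulty"] in ALLOWED_DIFFICULTY  # enum constraint
    return data

terms = parse_terms(raw)
```

If the model wraps the JSON in prose or a code fence despite the instruction, `json.loads` fails loudly, which is exactly where you want the error to surface.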
4 / 4
Your first prompt got a mediocre result. Which iterative prompting strategy is most effective?
Option C demonstrates targeted iterative prompting — the professional way to refine LLM output.
The pattern:
1. Diagnose — what specifically is wrong? (Too formal? Too long? Missing a key point? Wrong tone?)
2. Constrain — add a specific, measurable constraint that fixes the problem
3. Preserve — explicitly say what to keep ("keep all technical terms")
Common refinement phrases:
• "The response was too long. Shorten to 3 sentences max."
• "The tone is too casual. Rewrite in professional email style."
• "You missed [X]. Include it after the second paragraph."
• "The code has a bug on line 3. Fix only that line."
Saying "better" or "improve it" gives the model no direction and often results in a different problem. Restarting with the same prompt will give you a similar mediocre result.
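The diagnose / constrain / preserve steps can be expressed as a single targeted follow-up turn appended to the conversation, rather than a restart. A minimal sketch; the `refine` helper and the feedback strings are illustrative, not part of any API.

```python
def refine(history, diagnosis, constraint, preserve):
    """Append one targeted refinement turn instead of a vague 'make it better'."""
    feedback = f"{diagnosis} {constraint} {preserve}"
    history.append({"role": "user", "content": feedback})
    return history

history = [{"role": "user", "content": "Summarise this incident report."}]
history = refine(
    history,
    diagnosis="The response was too long.",
    constraint="Shorten to 3 sentences max.",
    preserve="Keep all technical terms.",
)
```

Keeping the earlier turns in the history preserves the context the model already has, so the refinement only has to describe the delta, not the whole task again.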