4 exercises — complete bug reports, feature requests, turning "it doesn't work" into actionable issues, and minimal reproducible examples.
The anatomy of a good issue report
Title — [BUG] / [FEATURE] + specific behaviour + trigger condition
Environment — runtime, library, OS, and version numbers
Steps to reproduce — numbered, exact, minimised
Expected vs. Actual — one sentence each, no interpretation needed
Minimal repro — a separate repo with <50 lines that fails — the single highest-impact addition
Exercise 1 / 4
You found a bug in an open-source library. Which issue report is most likely to get the maintainer's attention and result in a fast fix?
Option C is a complete, actionable bug report. Every element serves a purpose:
Title format — [BUG] + specific error + trigger condition: "TypeError: Cannot read properties of undefined — options.timeout — when timeout is not passed explicitly" tells the maintainer: the class of error (TypeError), the exact property (options.timeout), and the condition (not passed explicitly). They can reproduce this before opening a code editor.
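To see why that title is so informative, here is a hypothetical sketch of the failing pattern it describes; the `Client` class and its options are invented for illustration, and each of the title's three parts maps onto a line of code:

```typescript
interface ClientOptions {
  timeout?: number;
}

class Client {
  private timeout: number;

  // Bug: `options` is optional, but the constructor reads it unconditionally.
  // (This compiles only without strictNullChecks, which is how such bugs ship.)
  constructor(options?: ClientOptions) {
    this.timeout = options.timeout ?? 5000;
  }
}

new Client({ timeout: 1000 }); // works, so tests that always pass options miss it
new Client();                  // throws TypeError: Cannot read properties of
                               // undefined (reading 'timeout')
```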
Environment block: Runtime version (Node 20.11), library version (4.2.1), language version (TypeScript 5.3), OS (macOS 14). Bugs are often version-specific. Without this, the maintainer must ask — adding 2–5 days to the fix cycle.
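Collecting those numbers from the running system rather than from memory avoids reporting the wrong version. A minimal sketch, assuming a Node project; the library's own version comes from your lockfile or `npm ls <library>`:

```typescript
import * as os from "node:os";

// Print the exact runtime and OS for the environment block.
console.log(`Node: ${process.version}`);
console.log(`OS:   ${os.type()} ${os.release()} (${process.arch})`);
```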
Steps to reproduce — numbered, exact, minimal: 3 steps. Anyone can follow them without interpretation. No ambiguity about how you "tried to use it".
Expected vs. Actual: Two sentences. The maintainer doesn't have to guess what you think should happen. This also protects you — sometimes the "bug" is actually using the library incorrectly, and the expected/actual comparison makes that obvious immediately.
Minimal reproducible example: A separate repo with the exact failing code. This is the single highest-impact thing you can add to a bug report. It often cuts fix time in half because the maintainer can run it locally immediately.
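As a sketch, the entire repro for the timeout bug above can be one file; the package name `somelib` is a placeholder:

```typescript
// index.ts: the whole repro. One dependency, one call, one clear failure.
import { Client } from "somelib";

try {
  new Client(); // no explicit timeout: the trigger condition from the title
  console.log("OK: constructed with default timeout");
} catch (err) {
  // Expected: no throw. Actual:
  // TypeError: Cannot read properties of undefined (reading 'timeout')
  console.error("REPRODUCED:", err);
  process.exitCode = 1; // non-zero exit lets the maintainer verify in CI
}
```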
The "line 84 of client.ts" detail: Optional but extremely valuable — you've read the source code, you know where it fails. This signals that you're a serious reporter, not someone who just wants their problem solved by others.
Exercise 2 / 4
You have an idea for a new feature: a `--dry-run` mode for a CLI tool that shows what it would do without actually doing it. How do you write the feature request?
Option B is a well-structured feature request:
Problem-first framing: "When running deploy on a large project, it's not easy to verify..." — starts with the real-world problem, not the solution. This matters because sometimes maintainers know of a better solution to the same problem. If you lead with "add --dry-run", you close that conversation.
Concrete example output: The code block showing what `tool deploy --dry-run` would print is worth more than a paragraph of explanation. The maintainer immediately understands the scope: what it does, what it prints, what the exit message looks like.
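For a sense of the scope such an output example implies, here is a hedged sketch of the plan/execute split that commonly backs a dry-run mode. The `Action` type and the deploy steps are invented for illustration, and none of this needs to appear in the request itself:

```typescript
interface Action {
  description: string;
  run: () => Promise<void>;
}

// Build the list of actions first, without performing any of them.
function planDeploy(): Action[] {
  return [
    { description: "upload 3 changed files", run: async () => { /* ... */ } },
    { description: "invalidate CDN cache", run: async () => { /* ... */ } },
  ];
}

async function deploy(dryRun: boolean): Promise<void> {
  for (const action of planDeploy()) {
    console.log(`${dryRun ? "would: " : ""}${action.description}`);
    if (!dryRun) await action.run(); // dry-run skips only the side effects
  }
  if (dryRun) console.log("Dry run complete. No changes were made.");
}

deploy(process.argv.includes("--dry-run"));
```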
Alternatives considered: "I considered using a dev environment, but..." — shows you thought about it and explains why existing options don't solve the problem. This prevents the maintainer from closing the issue with "just use a test environment".
"Are you willing to implement this?": This is the most underrated line in a feature request. Maintainers are typically overloaded. A feature request that comes with "I'll write the PR if you accept it" is dramatically more likely to be accepted than one that just says "please do this". It changes the request from "add to my backlog" to "review a PR".
Why "many tools have it" is not sufficient justification: Copying features from other tools is not a reason to add complexity to this tool. "Many tools have it" is a signal that others found it useful, but you need to explain why it's valuable specifically here.
Exercise 3 / 4
You want to report this issue: "The library doesn't handle errors properly." Which rewrite is most useful to a maintainer?
Option C demonstrates the transformation of a vague complaint into a precise, actionable report:
What changed from "doesn't handle errors properly":
1. Specific error class — "unhandled promise rejection" — not just "error", but the exact JavaScript mechanism
2. Specific trigger — "when connection fails during initial handshake" — not just "when it errors"
3. Precise expected vs. actual — gives two acceptable behaviours (Promise rejection OR error event) and explains exactly what happens instead (hangs indefinitely)
4. Impact statement — "applications will leak connection objects" — tells the maintainer why this matters beyond one user's inconvenience
The "expected: two acceptable options" pattern: When the API contract is unclear (should this use Promise rejection or events?), listing both acceptable options and letting the maintainer decide is correct. You're reporting the broken behaviour — you shouldn't have to specify which correct behaviour to implement.
Why "crashes my app" is not enough: Every bug crashes someone's app. What the maintainer needs is: what is the input, what should happen, what actually happened, and where in the library it goes wrong. "Crashes my app" tells them none of this.
How to find the specific failure: Wrapping the call in try/catch, attaching error event listeners, and using Node's `unhandledRejection` process event to capture the stack trace — these are the steps that transform "it crashes" into a specific failure description.
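A sketch of that instrumentation, with the library call stubbed to hang the way the report describes (`connect` is a stand-in, not a real API):

```typescript
// Captures any rejection nobody awaited, together with its stack trace.
process.on("unhandledRejection", (reason) => {
  console.error("Unhandled rejection:", reason);
});

// Stand-in for the library call under investigation; this one never settles.
function connect(): Promise<void> {
  return new Promise(() => {});
}

async function main(): Promise<void> {
  const timer = setTimeout(() => {
    console.error("Still pending after 5s: the promise never settles");
  }, 5000);
  try {
    await connect(); // a rejection would be caught below with its stack
    console.log("connected");
  } catch (err) {
    console.error("Rejected with:", err);
  } finally {
    clearTimeout(timer);
  }
}

main();
```

Run against the real library, the message that appears tells you which sentence belongs in the report: a rejection with a stack trace, a caught error, or a confirmed hang.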
Exercise 4 / 4
A maintainer comments on your issue: "Can you provide a minimal reproducible example?" You have a complex project where the bug appears. What is the best response?
Option C is the correct response to a "minimal reproducible example" request:
What makes it effective:
1. Clean new project — "started from a blank Node project, installed only your library and uuid" — eliminates the possibility that something in your complex project is interfering
2. Actually minimal — "reproduced in about 40 lines" — not a dump of your entire codebase
3. Self-contained README — "one-step setup and expected vs. actual output" — the maintainer can clone, install, and see the failure in 60 seconds
4. Isolation confirmation — "confirmed that removing uuid does not change the behaviour" — you've done the scientific work of ruling out confounding factors
Why "I can't share the code" is not a dead end: You don't share your company's code — you create a separate, minimal reproduction that contains none of it. The minimal repro replaces your actual code entirely. It's usually 20–50 lines that trigger the exact same failure.
Why minimal repros matter so much: If you provide a 200-file project, the maintainer has to: (1) set it up, (2) figure out which file is relevant, (3) strip out irrelevant parts. If you provide a 40-line repro, they go straight to the bug. The time investment for a minimal repro is usually 30–60 minutes for the reporter but saves hours for the maintainer — and increases your fix probability dramatically.
The "it's obvious from my description" trap: It never is. The maintainer's environment, configuration, version, and assumptions are different from yours. A reproducible example removes all that ambiguity.