4 exercises — structuring a 5-minute demo, opening with a problem-first hook, handling live failures, and responding to executive pressure after the demo.
0 / 4 completed
Sprint demo structure
Context (30s): Problem solved + acceptance criteria reminder
Demo (3min): User journey, not code implementation
Outcome (45s): Before vs. after metric or user impact
Q&A (45s): Leave buffer; never run over
Plan B: Always have a recording in case staging fails
1 / 4
You have 5 minutes for your sprint demo. Which structure uses the time most effectively?
Option B is the professional sprint demo structure. Here's why each segment matters:
Context (30s): "What problem this solves" — stakeholders need the 'why' before the 'what'. Many attendees aren't in every planning meeting and don't remember the acceptance criteria from two weeks ago.
Demo (3min): Show the user journey, not the code. Stakeholders and POs care about: can a user do this task now? The technical implementation is secondary in a demo.
Outcome (45s): Tie it to reality — did loading time improve? Fewer clicks? A lower error rate? This is what makes the demo feel like value, not just "it works."
Questions (45s): Always leave buffer. Demos that run over their time slot make you look unprepared.
Why A fails: Code walkthroughs in 5-minute demos lose non-technical stakeholders in under a minute.
Why C fails: Starting with apologies creates negativity and wastes 40% of your time on what didn't happen.
Why D fails: "Everyone was at planning" is never true. Always set context.
2 / 4
You're opening a sprint demo. Which 30-second opening is strongest?
Option C is the strongest demo opening because it uses the problem-first pattern:
"Two sprints ago, users reported…" — anchors the demo in a real user problem; every stakeholder immediately knows why this matters
"most-upvoted UX bug" — adds context about priority and user impact
"Today I'm going to show you the fix" — a clear promise of what you'll see
"including edge cases not in the original ticket" — signals thoroughness and sets up a narrative arc
Why A and B fail: "I'm going to show you the work I did" is feature-first, not problem-first. It leads with what was built, not why anyone should care.
Why D fails: Starting with a technical implementation list (API, middleware, React context) loses non-technical stakeholders immediately. They don't know if this is a big deal or a small change.
Demo opening formula: "[User problem or context] → [What you built to solve it] → [Here's the demo]"
This is the same structure used in good product launches: problem → solution → demonstration.
3 / 4
During the live demo, the feature you're demonstrating throws an unexpected error. The screen shows a 500 Internal Server Error in front of 15 stakeholders. What do you say?
Option C is the professional response to a live demo failure — one of the most anxiety-inducing moments in a presenter's life:
Names the error immediately: "That's a 500 — internal server error" — shows technical literacy; you're not panicking, you're diagnosing
Acknowledges the discrepancy: "something differs in staging" — honest, doesn't over-promise a cause
Pivots to Plan B immediately: "I have a recording" — the best presenters always have a fallback
Commits to a follow-up: "I'll investigate and follow up in Slack before end of day" — keeps the commitment alive and gives stakeholders a time-bound resolution
Why A fails: Ignoring a highly visible error destroys trust; the audience saw it.
Why B fails: "Works on my machine" is the most notorious phrase in software development. It signals poor testing practices.
Why D fails: Blaming another team in a demo humiliates your colleagues; even if true, a demo is not the time.
Live demo failure recovery toolkit:
1. Name what happened (technical competence)
2. Switch to the recording or screenshots
3. Commit to an investigation with a specific timeline
4. Stay calm — your reaction matters more than the error itself
4 / 4
After the demo, an executive asks: "Can we ship this to all users today?" The feature is working but hasn't been through full QA. How do you respond?
Option C is the ideal response to executive pressure in a demo context. It demonstrates technical maturity, stakeholder empathy, and risk communication:
Validates the request: "I'd love to get it in front of users quickly" — you're not dismissive of their urgency
Names the specific risk: "we haven't completed full QA... edge cases to all users simultaneously" — a concrete risk, not process for process's sake
Proposes a middle path: feature flags + a 5% canary rollout — a technically sophisticated compromise that shows you've thought about this
Asks for a decision: "Would that timeline work for you?" — puts the executive in the driver's seat on risk tolerance
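The "feature flags + 5% canary" compromise is easy to sketch. Here's a minimal illustration of percentage-based bucketing — the function name and flag names are hypothetical, and real systems typically use a feature-flag service rather than hand-rolled hashing:

```python
import hashlib

def in_canary(user_id: str, feature: str, rollout_pct: float) -> bool:
    """Deterministically bucket a user into a percentage rollout.

    Hashing user_id + feature gives each user a stable value in [0, 100),
    so the same user keeps the same flag state as the percentage grows
    from 5% toward 100%.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) % 10000 / 100  # stable value in [0, 100)
    return bucket < rollout_pct

# Start the rollout at 5% of users, as proposed in the demo response.
if in_canary("user-1234", "new-checkout-flow", 5.0):
    pass  # serve the new feature to this user
```

Because bucketing is deterministic, raising the percentage only adds users; nobody flips back and forth between old and new behavior mid-rollout.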
Why A fails: Agreeing without flagging risks passes ownership of a risk to someone who doesn't understand it.
Why B fails: "Impossible" and "we have a process" invite executives to override processes they see as bureaucratic.
Why D fails: Deflecting without offering a path forward isn't helpful; you just presented the feature, so you should know the deployment considerations.
Risk communication formula: "I'd love to [meet their need]. The risk is [specific concern]. One option: [low-risk path to their goal]. Does that work?"