4 exercises — defer out-of-scope questions, respond to skepticism, recover from live failures, and handle scope requests.
0 / 4 completed
Demo question handling patterns
Out of scope — validate → note visibly → "I'll come back to you this week"
Skeptical stakeholder — diagnose before defending; ask what specifically concerns them
Live failure — acknowledge → separate from production quality → switch to backup → commit to follow-up
Scope request — never commit unscoped; give a date for "estimate or questions to resolve"
Parking lot — a visible list of captured out-of-scope items builds trust
1 / 4
Midway through a sprint demo, a stakeholder asks about a feature that was not on this sprint's agenda. How do you handle it professionally?
Option C is the professional way to defer an out-of-scope question during a demo:
Why this works:
1. Validates the question — "great area to explore" — not dismissive
2. Honest framing — "not part of this sprint's scope" — factual, not evasive
3. Explains why you're deferring — "don't want to do it justice off the cuff" — positions deferral as respect for the question, not avoidance
4. Makes a visible commitment — "let me note it down" + "I'll come back to you this week" — shows it won't be forgotten
5. Protects the meeting structure — "5 minutes at end for questions" — keeps the agenda on track
6. Confirms the stakeholder is OK — "does that work?" — closes the loop
The parking lot technique: Many facilitators keep a real-time "parking lot" doc or whiteboard section where out-of-scope questions are noted visibly. Making the note visible to the room signals the question is genuinely captured, not just dismissed.
What to avoid:
• Ignoring the question entirely
• Going down the rabbit hole (the rest of the room loses their demo time)
• "That's not my area" — even if true, offer to get the right person
2 / 4
A senior stakeholder says during the demo: "I'm not convinced this is the right approach. The old system did this better." How do you respond?
Option C is the professional response to skeptical stakeholder feedback during a demo:
Structure: diagnose before defending
1. Acknowledge the concern — not "you're wrong" but "help me understand better"
2. Ask a diagnostic question — "is it the workflow, performance, or something else?" — turns a vague objection into a specific one you can address
3. Use their answer — distinguish between concerns you designed around (explain the reasoning) and legitimate concerns you haven't fully addressed
4. Commit to analysis, not immediate capitulation — "discuss with the team and come back" — takes the concern seriously without abandoning your design on the spot
5. Honest framing — "wouldn't want to defend a decision without fully thinking through your point" — signals intellectual honesty
The asymmetry of live demos: A stakeholder can critique in 10 seconds what took 2 weeks to design. Don't defend under time pressure. Use "I'll come back to you" freely — it shows maturity, not weakness.
When the old system objection is nostalgic rather than substantive: "The old system did X" sometimes means "I'm uncomfortable with change." Asking specifically what aspect worked better usually separates legitimate technical concerns from familiarity preferences.
3 / 4
During a live demo, the feature you're presenting crashes. The stakeholders are watching. How do you handle this?
Option C is the professional way to recover from a live demo failure:
Recovery formula:
1. Acknowledge clearly — "demo environment hit an unexpected issue" — state what happened without catastrophizing
2. Separate demo failure from product quality — "not behavior in production or staging" — this is critical: stakeholders often can't distinguish "demo crashed" from "product has bugs"
3. Make the decision explicitly — "rather than debugging live" — shows you're in control of the agenda
4. Have a backup ready — recorded walkthrough, screenshots, shared doc — never demo without one
5. Commit to follow-up — "investigate root cause and send a written summary"
6. Reassure about evaluation continuity — "everything you need to evaluate is in the recording"
Prevention:
• Always test your exact demo flow 30 minutes before the call (not just "it works in dev")
• Have a screen recording of the working flow as a backup
• Demo on staging, not local/dev
• If the feature is behind a feature flag, verify the flag is on in the demo environment
What makes Option B harmful: "always happens" trains stakeholders to be skeptical of your team's demo preparation. Even if it's true, saying it publicly damages confidence.
4 / 4
After the demo a stakeholder says: "Can this also do [new feature request]? We'd really want that for the upcoming campaign." You haven't scoped this at all. Which response is best?
Option C handles the scope request during the demo without over-committing or under-delivering:
Why "sure, we can do that!" is dangerous: you just committed, in front of stakeholders, to something you haven't scoped. This creates a verbal contract without a delivery plan. When it takes longer than expected or gets deprioritized, you lose credibility.
What Option C does right:
1. Validates the request — "interesting use case, I can see why" — not dismissive
2. Explains why you can't commit now — "haven't scoped technical complexity" — honest, not obstructive
3. Makes a concrete next step — "team conversation this week, back to you by [day]" — a specific commitment you can keep
4. Defines what the answer will look like — "rough effort estimate OR questions to resolve" — sets appropriate expectations
5. Asks the right question — "what's the campaign date?" — shows initiative and gathers the data needed to prioritize
The "rough estimate or questions to resolve" pattern: sometimes you can scope quickly; sometimes the first step is just defining what you need to know in order to scope. Being explicit about which one you're delivering prevents the "I asked for an estimate and got more questions" frustration.