4 exercises: opening sprint demos, explaining incomplete items, discussing velocity, and defining "done" for stakeholders.
Sprint demo language patterns
• Open with the sprint goal — not just "what we did" but "what we set out to accomplish"
• Be specific about skipped items — name them, give % complete, explain the blocker, state the plan
• Contextualize velocity — use rolling averages, caveat story point comparability
• Make DoD concrete — list the checkboxes; offer to share test results
• No pre-emptive apologies — save caveats for when they're specific and relevant
Exercise 1 of 4
You're opening the sprint demo. The audience includes your PM, a product designer, and two stakeholders from the business side. Which opening is most effective?
Option B is a professional sprint demo opening with all required elements:
Sprint demo opening formula:
1. Sprint number/name — grounds everyone on which iteration this covers
2. Sprint goal — "our goal was to..." — gives context for everything they're about to see
3. Agenda — "three things: X, Y, Z" — sets expectations for the demo scope
4. Time-box — "about 20 minutes" — respects stakeholders' time
5. What you'll do per feature — demo + decisions + Q&A — tells them how to watch
6. Immediate start — screen share first, then jump in
What NOT to open with:
• "Um" and filler words — rehearse the first 30 seconds
• Showing code to non-technical stakeholders
• Pre-emptive apologies ("some bugs") — this undermines confidence in your work. Save caveats for when they're relevant and specific
• Vague descriptions ("some things") — always be specific about what you completed
Sprint goal framing: Starting with the sprint goal tells the story of why these features exist. It makes the demo a narrative ("here's what we set out to do, here's how we did it") rather than a feature inventory.
Exercise 2 of 4
During the sprint demo, you need to explain that two planned items were NOT completed this sprint. How do you communicate this professionally?
Option C models the professional way to explain skipped items in a sprint demo:
Structure for explaining incomplete items:
1. Name the items explicitly — don't let anyone discover silently that something is missing
2. State completion percentage — "80% complete" vs. "not done" is a meaningful difference
3. Give specific reasons per item — different items get different explanations:
   — "blocked by X" (external dependency, specific and blameless)
   — "scope expanded mid-sprint" (honest about scope change, new story created)
4. State the unblocking status — "now unblocked" vs. "still blocked" matters for planning
5. Give the forward path — "first item in Sprint 15" — shows you've already addressed it
6. Relate to the sprint goal — "neither impacts the sprint goal" — addresses the implicit concern
What to avoid:
• Blame framing ("the team was slow") — use factual language about blockers and scope
• Minimizing ("doesn't matter") — if it was in the sprint, it mattered
• Vague promises ("try harder") — give specific forward plans instead
Exercise 3 of 4
A stakeholder asks "What was the team's velocity this sprint compared to last sprint?" You want to give a meaningful answer. Which response is best?
Option B is a nuanced velocity explanation that avoids common misinterpretations:
Why Option C is insufficient: Raw numbers without context invite misinterpretation. The stakeholder might conclude velocity should always increase, or that velocity in points is directly comparable across sprints.
What Option B does well:
1. Answers the direct question — "34 vs 28" — straight data first
2. Adds context that changes interpretation — maintenance work explanation; rolling average
3. Caveats story point comparability — "stories sized differently across sprints" — prevents false precision
4. Asks about the underlying need — "what are you trying to understand?" — offers to give more relevant data than what was asked
Velocity communication principles:
• Use a 3-sprint rolling average for planning, not single-sprint numbers (see the sketch below)
• Points are relative within a team, not comparable between teams or across sizing shifts
• Velocity fluctuates normally — one high sprint isn't a trend
• If stakeholders ask about velocity frequently, consider offering a more accessible capacity metric (e.g., "we completed 8 of 9 planned features")
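To make the rolling average concrete, here is a minimal sketch in Python. The helper name and the oldest sprint total are hypothetical; only the 28 and 34 points come from the example answer above.

```python
# Minimal sketch: 3-sprint rolling velocity average.
# All sprint totals except the quoted 28 and 34 are hypothetical.

def rolling_velocity(points_per_sprint, window=3):
    """Average the completed story points over the last `window` sprints."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

completed_points = [30, 28, 34]  # three most recent sprints, oldest first
print(f"3-sprint rolling average: {rolling_velocity(completed_points):.1f} points")
# Output: 3-sprint rolling average: 30.7 points
```

Quoting the rolling figure ("we average about 31 points per sprint") gives stakeholders a planning number that doesn't swing with a single sprint's noise.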
What to say if you don't track velocity formally: "We don't use story points — we track features and bugs shipped. This sprint: X features shipped, Y bugs resolved, Z tech debt items completed."
Exercise 4 of 4
A stakeholder asks about Definition of Done during the demo: "How do we know this feature is actually finished, not just 'done' by your definition?" Which explanation best answers their concern?
Option C gives a complete Definition of Done explanation that builds stakeholder confidence:
A strong Definition of Done includes:
• Code review (peer-reviewed and approved)
• Test coverage (unit tests, acceptance criteria coverage)
• Environment testing (staging, not just local)
• Acceptance criteria verified (linked to the original ticket)
• Bug status (no open P1/P2 blockers)
• Non-functional requirements (accessibility, performance thresholds)
• Deployment readiness (feature flag, deploy process completed)
Why this matters to stakeholders: Non-technical stakeholders have been burned before by "done" that turned out to mean "coded but not tested" or "works on dev but not production." A clear, specific DoD answers the implicit concern behind their question — they want assurance, not reassurance.
When to offer evidence: "I'm happy to share the ticket and test results" — offering specifics signals you have nothing to hide and turns "trust me" into "verify if you want to." This is especially powerful after previous quality issues.
If your team doesn't have a formal DoD: Now is not the time to discover this. Work with your team to define it before the next sprint. Absence of a DoD is itself something to flag to the PM.