MoSCoW — Must / Should / Could / Won't — prioritization
DORA — DevOps Research and Assessment (the four key metrics)
1 / 5
A team lead says: "Our CI/CD pipeline is broken — no one can merge until it's fixed." What does CI/CD stand for?
CI/CD = Continuous Integration / Continuous Delivery (or Continuous Deployment). CI (Continuous Integration) means developers frequently merge code into a shared branch, and an automated pipeline (build + tests) runs on every push to catch integration problems early. CD (Continuous Delivery) means the pipeline also packages and prepares the software for release — a human triggers the final deployment. Continuous Deployment goes one step further: every passing change is automatically deployed to production with no human gate. In practice, "CI/CD pipeline" refers to the full automated flow from code commit to deployed software. Say it: "C-I / C-D" (letter by letter). You'll see it written as: CI/CD pipeline, GitHub Actions workflow, Jenkins pipeline, GitLab CI.
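At its core, a CI pipeline is just a sequence of steps that must all pass before a change can merge. Real pipelines are defined in YAML for tools like GitHub Actions or GitLab CI, but here is a minimal Python sketch of the logic (the step commands — ruff, pytest — are hypothetical examples, not pulled from the text):

```python
import subprocess
import sys

# Hypothetical CI steps: each is a shell command that must exit 0.
# A real pipeline declares these in YAML, but the logic is identical:
# run every step in order and fail fast on the first error.
STEPS = [
    ("install", "pip install -r requirements.txt"),
    ("lint",    "ruff check ."),
    ("test",    "pytest -q"),
]

def run_pipeline() -> int:
    for name, cmd in STEPS:
        print(f"--- {name}: {cmd}")
        result = subprocess.run(cmd, shell=True)
        if result.returncode != 0:
            print(f"CI failed at step '{name}' — merge blocked.")
            return result.returncode
    print("All steps passed — safe to merge.")
    return 0

if __name__ == "__main__":
    sys.exit(run_pipeline())
```

The fail-fast behavior is what the team lead in the question is relying on: a broken pipeline blocks every merge until it's green again.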
2 / 5
During an incident review, an SRE says: "We've already burned 60% of our error budget this quarter because our SLO specifies 99.9% availability." What is an SLO?
SLO = Service Level Objective. An SLO is a target for a specific reliability metric over a defined period. Example: "99.9% of requests to the payments API return a 2xx or 3xx status in a rolling 30-day window." It's internal — your engineering team sets it to keep the service reliable. Related terms: SLA (Service Level Agreement) = the external contract with customers — what happens if you miss the target (refunds, credits). SLI (Service Level Indicator) = the actual metric being measured (error rate, latency, uptime). Error budget = the allowed unreliability: at 99.9%, you have 0.1% to "spend" — about 43 minutes per 30-day window, or ~8.8 hours of downtime per year. Say it: "S-L-O" (letter by letter). Mnemonic: SLI measures → SLO targets → SLA contracts.
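The error-budget arithmetic is simple enough to sketch. A small illustrative Python helper (not tied to any monitoring tool) that converts an SLO target and a window into allowed downtime:

```python
def error_budget(slo: float, window_days: float) -> float:
    """Return the allowed downtime in minutes for a given SLO over a window."""
    # The budget is everything the SLO does not promise: 1 - SLO.
    allowed_fraction = 1.0 - slo
    window_minutes = window_days * 24 * 60
    return allowed_fraction * window_minutes

# 99.9% over a rolling 30-day window -> 43.2 minutes of downtime
print(f"{error_budget(0.999, 30):.1f} min / 30 days")
# 99.9% over a year -> ~8.76 hours
print(f"{error_budget(0.999, 365) / 60:.2f} h / year")
```

This is why the SRE in the question is worried: at 60% burned, roughly 26 of those 43 monthly minutes are already gone.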
3 / 5
A product manager writes in a planning doc: "Let's define clear OKRs before we start the quarter — last time we shipped features without knowing if they moved the needle." What are OKRs?
OKRs = Objectives and Key Results. A goal-setting framework originally developed at Intel and popularized by Google. An Objective describes what you want to achieve (qualitative, inspiring): "Make the checkout experience the fastest in the market." Key Results are measurable milestones that show you're reaching the objective: "Reduce checkout time from 4.2s to 1.8s", "Achieve 99.9% transaction success rate". OKRs are set quarterly and transparently shared across the company. They encourage ambitious goals — 70% achievement is considered success. Compare with KPIs (Key Performance Indicators) — KPIs track ongoing performance metrics; OKRs drive change. Say it: "O-K-Rs" (letter by letter). In conversations: "What are your Q2 OKRs?", "This initiative maps to our North Star OKR."
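To make the "measurable" part concrete, here is an illustrative Python sketch using a Google-style 0.0–1.0 grading scale (the data and the linear scoring rule are hypothetical, reusing the checkout example from above):

```python
from dataclasses import dataclass

@dataclass
class KeyResult:
    name: str
    start: float    # baseline value at the start of the quarter
    target: float   # goal value
    current: float  # latest measured value

    def score(self) -> float:
        """Linear progress from start toward target, clamped to [0, 1]."""
        progress = (self.current - self.start) / (self.target - self.start)
        return max(0.0, min(1.0, progress))

# Hypothetical key results for the checkout objective from the text.
krs = [
    KeyResult("Checkout time (s)", start=4.2, target=1.8, current=2.6),
    KeyResult("Txn success rate", start=0.985, target=0.999, current=0.996),
]

for kr in krs:
    print(f"{kr.name}: {kr.score():.2f}")

objective_grade = sum(kr.score() for kr in krs) / len(krs)
print(f"Objective grade: {objective_grade:.2f}  (~0.7 counts as success)")
```

Note the scoring works for both directions: "reduce checkout time" has a target below its baseline, and the ratio still lands between 0 and 1.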
4 / 5
In a sprint planning meeting, the team discusses: "Should this feature be MoSCoW priority M or S?" What does MoSCoW stand for in this context?
MoSCoW = Must have, Should have, Could have, Won't have — a prioritization technique used in agile and project management. The lowercase "o"s are padding to make the acronym pronounceable; M, S, C, and W are the real initials. Must have: non-negotiable requirements — without these, the product doesn't ship. Should have: important but not critical — include if possible. Could have: desirable but lower priority — include only if time allows. Won't have: explicitly out of scope for this release. Using MoSCoW helps teams avoid scope creep by forcing explicit conversations about priority. Say it: "Moscow" (/ˈmɒs.kaʊ/) — same as the city, which sometimes causes amusement in meetings. Common usage: "Let's MoSCoW the backlog before planning."
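Mechanically, MoSCoW is just a four-bucket sort of the backlog. A trivial Python sketch with hypothetical backlog items:

```python
from enum import IntEnum

class MoSCoW(IntEnum):
    MUST = 0
    SHOULD = 1
    COULD = 2
    WONT = 3

# Hypothetical backlog items tagged during sprint planning.
backlog = [
    ("Password reset flow", MoSCoW.MUST),
    ("Dark mode",           MoSCoW.COULD),
    ("Audit logging",       MoSCoW.SHOULD),
    ("Custom themes",       MoSCoW.WONT),
]

# Sorting by priority puts the non-negotiable work first;
# WONT items stay in the list as an explicit "out of scope" record.
for item, prio in sorted(backlog, key=lambda pair: pair[1]):
    print(f"{prio.name:<6} {item}")
```

Keeping the Won't-have items visible, rather than deleting them, is the point: the scope decision is recorded, not just implied.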
5 / 5
An engineering manager presents at a review: "Our DORA metrics look strong — deploy frequency is up and MTTR is down." What are DORA metrics?
DORA = DevOps Research and Assessment — a research program (founded as an independent company, acquired by Google Cloud in 2018) that identified the four key metrics of high-performing software delivery teams. The four DORA metrics are: (1) Deployment Frequency — how often code goes to production (elite: multiple times per day). (2) Lead Time for Changes — how long from commit to production (elite: under 1 hour). (3) Change Failure Rate — what percentage of deployments cause incidents (elite: under 5%). (4) Time to Restore Service (often cited as MTTR, mean time to restore) — how long to recover from a failure (elite: under 1 hour). Say it: "DORA" as a word (/ˈdɔːrə/). High DORA scores correlate with both technical excellence and better business outcomes. Commonly discussed in DevOps, platform engineering, and engineering leadership contexts.
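All four metrics fall out of two event streams: deployments and incidents. An illustrative Python sketch with hypothetical data (real teams pull this from their deploy tooling and incident tracker):

```python
from datetime import datetime

# Hypothetical event log: each deploy records its commit time, deploy time,
# whether it caused an incident, and (if so) when service was restored.
deploys = [
    # (committed,                 deployed,                   failed, restored)
    (datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 9, 45),  False, None),
    (datetime(2024, 5, 1, 13, 0), datetime(2024, 5, 1, 13, 50), True,  datetime(2024, 5, 1, 14, 30)),
    (datetime(2024, 5, 2, 10, 0), datetime(2024, 5, 2, 10, 40), False, None),
    (datetime(2024, 5, 3, 11, 0), datetime(2024, 5, 3, 11, 35), False, None),
]
WINDOW_DAYS = 7

def hours(td):
    return td.total_seconds() / 3600

# 1. Deployment Frequency: deploys per day over the window.
deploy_freq = len(deploys) / WINDOW_DAYS

# 2. Lead Time for Changes: commit -> production, averaged.
lead_time_h = sum(hours(dep - com) for com, dep, _, _ in deploys) / len(deploys)

# 3. Change Failure Rate: share of deploys that caused an incident.
failures = [d for d in deploys if d[2]]
cfr = len(failures) / len(deploys)

# 4. Time to Restore Service: deploy -> restored, averaged over failures.
mttr_h = sum(hours(rest - dep) for _, dep, _, rest in failures) / len(failures)

print(f"Deploy frequency:    {deploy_freq:.2f} / day")
print(f"Lead time:           {lead_time_h:.2f} h")
print(f"Change failure rate: {cfr:.0%}")
print(f"Time to restore:     {mttr_h:.2f} h")
```

When the manager in the question says "deploy frequency is up and MTTR is down", they mean metrics (1) and (4) are both moving in the right direction.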