Intermediate · 15 terms

Testing & QA

Professional vocabulary for software testing: test types, methodologies, tooling, and quality assurance practices.

  • Unit Test /ˈjuːnɪt test/

    A test that verifies the behaviour of a single function or class in isolation from external dependencies. Dependencies (database, network, other services) are replaced with mocks or stubs. Fast to run (milliseconds) and numerous in a healthy test suite.

    "The unit test for the calculateTax() function passes 20 different input combinations and verifies the output. It runs in 3ms and has no database dependency — it's pure logic."
  • Integration Test /ˌɪntɪˈɡreɪʃən test/

    A test that verifies that multiple components work correctly together — for example, an API endpoint that reads from a real database. Slower than unit tests but catches issues that unit tests miss: wrong SQL queries, schema mismatches, authentication middleware failures.

    "The integration test spins up a test database, seeds it with fixture data, makes a real HTTP request to the endpoint, and verifies the response body and status code."
  • End-to-End Test (E2E) /end tə end test/

    A test that simulates a complete user journey through the application via a real browser (Playwright, Cypress, Selenium). The slowest test type but provides the highest confidence — it tests the full stack as a user would experience it.

    "We have 12 E2E tests covering critical user journeys: sign up, log in, purchase flow, password reset. They run in 8 minutes on CI — we run unit and integration tests first as they're faster to fail."
  • Mock /mɒk/

    A test double that stands in for a real dependency (database, API, service) during testing. Mocks can be programmed to return specific values and can verify they were called with expected arguments. Enables testing in isolation without real infrastructure.

    "The unit test mocks the payment gateway — we don't want to charge real cards during testing. The mock returns a success response for valid cards and a failure response for test card numbers starting with 4242."
  • Stub /stʌb/

    A test double that provides pre-programmed responses to calls during tests, without verifying call behaviour. A stub always returns the same value; a mock additionally verifies it was called correctly. Often the two terms are used interchangeably in conversation.

    "We stub the weather API in tests — the stub returns a fixed JSON response so our tests don't depend on the external service being available or returning consistent data."
  • Test Coverage /test ˈkʌvərɪdʒ/

    The percentage of production code exercised by tests, measured by line, statement, branch, or function. High coverage doesn't guarantee correctness (tests can be wrong or weak) but low coverage indicates large untested areas. 80% branch coverage is a common team target.

    "Our coverage report shows 73% line coverage — the payment module is only at 41%, which is why we keep finding regressions there. Let's add tests for the critical paths before the next sprint."
  • Test-Driven Development (TDD) /test ˈdrɪvən dɪˈveləpmənt/

    A development approach: write a failing test first, write the minimum code to make it pass, then refactor. The cycle is: Red (failing test) → Green (pass) → Refactor (improve code). TDD tends to produce high test coverage and well-defined interfaces.

    "I'm using TDD for this feature — I wrote the test for the parsing function first, it fails because the function doesn't exist yet, now I'll implement just enough to make it green."
  • Regression /rɪˈɡreʃən/

    A bug introduced when a code change breaks previously working functionality. Regression testing is the practice of running existing tests after changes to catch regressions. Automated test suites prevent regressions from reaching production.

    "The refactor introduced a regression — the login flow broke on Firefox because we changed the session cookie handling. The regression was caught by our E2E tests in CI before it reached production."
  • Flaky Test /ˈfleɪki test/

    A test that produces different results (pass/fail) on the same code without any code changes — intermittently failing. Caused by: timing issues, external dependencies, test isolation failures, shared state. Flaky tests erode trust in the CI system.

    "This test is flaky — it passes 90% of the time but fails randomly due to a race condition in the async setup. We use `await waitFor()` to wait for the async operation to complete before assertions."
  • Test Pyramid /test ˈpɪrəmɪd/

    A model for the ideal proportion of test types: many unit tests (base), fewer integration tests (middle), fewest E2E tests (peak). Tests higher up the pyramid are slower and more expensive to run and maintain. Inverting the pyramid (many E2E, few unit) creates slow, fragile test suites.

    "Our test suite follows the pyramid: 450 unit tests (running in 8s), 80 integration tests (45s), 12 E2E tests (8min). The fast unit tests give us confidence to refactor without waiting for slow E2E runs."
  • Smoke Test /sməʊk test/

    A minimal test that verifies the most critical functionality works after a deployment — is the application up? Can a user log in? Does the main page load? Named after electronics: if you turn on a device and it doesn't smoke, the basic test passes.

    "After every production deployment we run a 2-minute smoke test: hit the health endpoint, load the home page, verify a login succeeds. If smoke tests pass, the on-call engineer gives the all-clear."
  • Load Test /ləʊd test/

    A test that simulates many concurrent users to verify the system performs within acceptable limits under expected production load. Measures: response time at load, throughput (requests/second), error rate, and the load level at which performance degrades.

    "Before the product launch, we load tested the API with 5,000 concurrent users using k6 — the p99 response time was 420ms under load, within our 500ms SLA."
  • Bug Report /bʌɡ rɪˈpɔːt/

    A written record of a software defect that provides enough information for a developer to understand, reproduce, and fix the issue. Components: title, environment, steps to reproduce, expected result, actual result, severity, attachments (logs, screenshots).

    "The bug report was incomplete — it only said 'the button doesn't work'. A good bug report should include: browser version, the exact steps to reproduce, what you expected to see, and what actually happened."
  • Acceptance Testing /əkˈseptəns ˈtestɪŋ/

    Testing conducted to verify that a system meets the agreed acceptance criteria defined in the user story. Performed by the product owner or QA before a story is marked as done. Verifies that the right thing was built, not just that the code works.

    "In acceptance testing, the product owner went through each acceptance criterion: 'user can filter by date' — checked; 'results update without page reload' — checked; 'empty state shows a helpful message' — failed, needs a revision."
  • Continuous Testing /kənˈtɪnjuəs ˈtestɪŋ/

    The practice of running automated tests as part of every code change through CI/CD pipelines, providing immediate feedback. Tests run automatically on every pull request — developers see results within minutes of pushing code rather than discovering issues days later in a manual testing phase.

    "Continuous testing means every PR gets a full test run — all 530 tests complete in under 10 minutes. A developer can't accidentally merge broken code because the merge button is blocked until CI is green."