SOC Analyst Vocabulary: SIEM, Threat Hunting, SOAR, IOC/IOA, and Alert Triage

Master SOC analyst vocabulary for IT security professionals: SIEM, alert triage, threat hunting, IOC, IOA, SOAR playbooks, threat intelligence, escalation language, and shift handoff communication.

The Security Operations Centre (SOC) has its own vocabulary — a dense set of terms used daily by analysts, threat hunters, and blue team engineers. For non-native English speakers working in security, this vocabulary is especially important: misunderstanding a term during an incident response call or a shift handoff can have real consequences.

This article covers the vocabulary you need to work as a SOC analyst, participate in blue team exercises, and discuss threat detection and response clearly in English.


Section 1: The SOC Environment

Security Operations Centre (SOC) A centralised team (or function) responsible for monitoring, detecting, investigating, and responding to security events. SOC analysts work from dashboards, alert queues, and threat intelligence feeds to find and respond to threats.

“Our SOC operates 24/7 across three shifts. Tier 1 analysts handle alert triage, Tier 2 investigates confirmed incidents, and Tier 3 (threat hunters) proactively search for undetected threats.”

Tier 1 / Tier 2 / Tier 3 Analysts The three levels of SOC analyst seniority. Tier 1: monitors the alert queue, performs initial triage, escalates. Tier 2: investigates escalated alerts, analyses incidents, writes incident reports. Tier 3: proactive threat hunting, rule development, advanced incident response.

“I escalated the alert to Tier 2 — it looked like a legitimate login but from an IP in a country the user has never logged in from before. Tier 2 will investigate further.”

Alert Queue (Alert Backlog) The list of security alerts waiting for analyst review. In large organisations, the alert queue can contain hundreds of alerts per day. Effective triage is critical to prioritise high-fidelity alerts over noise.

“Our alert queue was 200 items deep by Monday morning. We triaged by severity and source — only 12 needed hands-on investigation.”
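The severity-and-source prioritisation described above can be sketched in a few lines of Python. The alert fields, severity labels, and the set of "high-fidelity" sources are illustrative assumptions, not taken from any specific SIEM:

```python
# Lower number = reviewed first.
SEVERITY_ORDER = {"critical": 0, "high": 1, "medium": 2, "low": 3}

# Hypothetical sources we treat as high-fidelity during triage.
HIGH_FIDELITY_SOURCES = {"edr", "ueba"}

def triage_order(alerts):
    """Sort the queue: severity first, high-fidelity sources before noisy ones."""
    return sorted(
        alerts,
        key=lambda a: (
            SEVERITY_ORDER.get(a["severity"], 4),
            0 if a["source"] in HIGH_FIDELITY_SOURCES else 1,
        ),
    )

queue = [
    {"id": 1, "severity": "low", "source": "waf"},
    {"id": 2, "severity": "critical", "source": "edr"},
    {"id": 3, "severity": "critical", "source": "waf"},
]
```

In practice the sort key would also weigh asset criticality and alert age, but the two-level ordering is the core idea.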


Section 2: SIEM Vocabulary

SIEM (Security Information and Event Management) The central platform of a SOC — collects logs from across the organisation (firewalls, endpoints, cloud services, applications), applies correlation rules to detect suspicious patterns, and raises alerts for analyst review. Examples: Splunk, Microsoft Sentinel, Elastic SIEM, IBM QRadar.

“We onboarded the new SaaS application’s logs into Sentinel yesterday. Now we can correlate its events with endpoint and network logs in a single query.”

Log Source Onboarding The process of configuring a new system to send its logs to the SIEM. Includes parsing raw log formats, normalising fields, and applying initial detection rules.

“We just onboarded the Kubernetes API server as a log source — we’re now alerting on kubectl exec attempts and privileged container creation.”

Detection Rule (Correlation Rule) A logical rule that the SIEM evaluates against incoming log data to identify suspicious patterns. When a rule matches, it generates an alert. Rules may be threshold-based (10 failed logins in 5 minutes), pattern-based, or ML-based.

“The detection rule for credential stuffing: >100 failed authentication attempts from a single IP within 60 seconds, targeting >50 different user accounts. The rule fired at 2:14am — confirmed credential stuffing attack.”
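As a sketch, the threshold rule quoted above (many failures, one source IP, many distinct accounts, short window) might look like this. The event tuple shape and the default thresholds are illustrative:

```python
from collections import defaultdict

def credential_stuffing_rule(events, window_seconds=60,
                             min_attempts=100, min_accounts=50):
    """Flag source IPs with too many failed logins against too many distinct
    accounts inside a sliding window.

    Each event is (timestamp_seconds, source_ip, username, success) —
    an illustrative shape, not a real log format.
    """
    failures = defaultdict(list)  # ip -> [(ts, user), ...] for failed logins
    for ts, ip, user, success in events:
        if not success:
            failures[ip].append((ts, user))

    flagged = set()
    for ip, attempts in failures.items():
        attempts.sort()
        for i, (start_ts, _) in enumerate(attempts):
            window = [u for t, u in attempts[i:] if t - start_ts <= window_seconds]
            if len(window) >= min_attempts and len(set(window)) >= min_accounts:
                flagged.add(ip)
                break
    return flagged
```

A production SIEM would evaluate this as a streaming query rather than a batch scan, but the threshold logic is the same.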

UEBA (User and Entity Behaviour Analytics) Machine learning-based detection that builds a baseline of normal behaviour for users and entities (devices, applications), and alerts on deviations. Detects insider threats and account takeovers that rule-based detection misses.

“UEBA flagged the CFO’s account — bulk file download in the middle of the night, from a device not registered to the CFO. Turned out to be their assistant using a personal laptop. Still a policy violation.”
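Real UEBA products use far richer models, but the baseline-and-deviation idea can be shown with a toy example. This sketch treats login hour as a linear value (ignoring midnight wrap-around) and flags logins far from the user's historical norm:

```python
import statistics

def build_baseline(login_hours):
    """Mean and standard deviation of a user's historical login hours.
    Toy model: treats the hour as linear and ignores midnight wrap-around."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag a login more than `threshold` standard deviations from the norm.
    The 0.5 floor stops a near-zero stdev from flagging everything."""
    mean, stdev = baseline
    return abs(hour - mean) > threshold * max(stdev, 0.5)
```

A user who always logs in around 9–11am would trip the check on a 3am login, which is exactly the kind of deviation the CFO example describes.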

False Positive An alert that fires but does not represent a real security threat. Excess false positives are a major SOC problem — they create alert fatigue and cause analysts to miss real threats.

“Our web application firewall was generating 500 false positives per day for a legitimate penetration test in progress. We tuned the rule to suppress alerts from the authorised testing IP range.”

False Negative A case where a real security threat occurs but no alert is generated. The most dangerous outcome in security detection.

“The adversary used living-off-the-land techniques — running only built-in Windows tools. Our endpoint rules were focused on malware execution and missed the attack entirely. That’s a false negative we need to fix with new detection rules.”


Section 3: Alert Triage Vocabulary

Triage The process of reviewing an alert to determine whether it represents a real threat, classify its severity, and decide on the next action: dismiss, monitor, escalate, or contain and remediate. Borrowed from medical terminology.

“Triage process for this alert: check the source IP reputation, review the user’s recent activity, correlate with endpoint events. Decision: escalate to Tier 2 — the combination of signs is suspicious.”

Enrich (Alert Enrichment) Add context to an alert to help the analyst make a better decision. Enrichment sources: threat intelligence (is this IP known malicious?), user identity (who is the affected user?), asset inventory (what is this server?), vulnerability data (is this CVE exploitable?).

“Before escalating, I enriched the alert: the source IP is on three threat intel blocklists, the targeted user is in finance, and the endpoint is unpatched (has CVE-2024-1234 open). This is a high-priority investigation.”
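The enrichment step can be sketched as a function that merges lookups into the raw alert. The three lookup tables here stand in for real threat-intel, identity, and asset-inventory services; all field names are illustrative:

```python
def enrich_alert(alert, threat_intel_ips, identity_directory, asset_inventory):
    """Attach triage context to a raw alert from three lookup sources."""
    enriched = dict(alert)
    enriched["ip_blocklisted"] = alert["source_ip"] in threat_intel_ips
    enriched["user_department"] = identity_directory.get(alert["user"], "unknown")
    enriched["open_cves"] = asset_inventory.get(alert["host"], [])
    return enriched
```

A SOAR platform typically runs this kind of enrichment automatically before the alert ever reaches the analyst queue.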

Dismiss / Suppress Mark an alert as not requiring investigation — typically because it has been confirmed as a false positive. Suppression rules prevent the same false positive from generating noise in the future.

“I’m dismissing this alert — it’s the automated backup tool that runs every night at 3am. I’ll create a suppression rule so it doesn’t appear in the queue tomorrow.”
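A suppression rule is essentially a match condition that auto-closes known-benign alerts. This minimal sketch suppresses a hypothetical detection for a backup service account; the rule names and field shapes are illustrative:

```python
import fnmatch

# Hypothetical suppression rules: each names the detection it applies to
# and an account pattern (glob syntax) whose matches should be silenced.
SUPPRESSIONS = [
    {"rule": "after-hours-file-access", "account": "svc-backup*"},
]

def is_suppressed(alert):
    """True if the alert matches a known benign pattern and can be auto-closed."""
    return any(
        alert["rule"] == s["rule"]
        and fnmatch.fnmatch(alert["account"], s["account"])
        for s in SUPPRESSIONS
    )
```

Scoping the suppression narrowly (one rule, one account pattern) matters: an over-broad suppression is how real attacks get silenced along with the noise.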

Escalate Pass an alert or incident to a higher tier or different team for further investigation or action. Escalation should include a summary of findings, the evidence reviewed, and the reason for escalation.

“I’m escalating this to Tier 2 with the following context: failed login burst from 45 unique IPs, targeting the same 3 admin accounts, over a 10-minute window. Matches our credential stuffing detection pattern. Raw logs attached.”


Section 4: Threat Hunting Vocabulary

Threat Hunting Proactive security work where analysts search for threats that have bypassed automated detection — without starting from an alert. Based on hypotheses: “what would it look like if an adversary were already in our network, moving laterally?”

“This week’s hunt hypothesis: assume an adversary has compromised a developer’s credential and is trying to pivot to production. I’m querying for anomalous internal network connections from developer-zone hosts to production databases.”

Hypothesis-Based Hunting Threat hunting that starts from a specific, testable hypothesis derived from threat intelligence, incident analysis, or an attacker TTP. More focused and efficient than open-ended data mining.

“Our hypothesis: APT group uses PowerShell encoded commands to evade signature detection. We’re hunting for Base64-encoded PowerShell executions launched from Office applications across all endpoints.”
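A hunt like the one above often reduces to a filter over process-creation telemetry. This sketch looks for PowerShell launched by an Office parent with an encoded command; the event fields and parent-process set are illustrative assumptions:

```python
OFFICE_PARENTS = {"winword.exe", "excel.exe", "powerpnt.exe", "outlook.exe"}

def hunt_encoded_powershell(process_events):
    """Return process-creation events where PowerShell was launched by an
    Office application with an encoded command line."""
    hits = []
    for ev in process_events:
        cmd = ev["command_line"].lower()
        encoded = ("-encodedcommand" in cmd or " -enc " in cmd or " -e " in cmd)
        if (ev["process"].lower() == "powershell.exe"
                and ev["parent"].lower() in OFFICE_PARENTS
                and encoded):
            hits.append(ev)
    return hits
```

A real hunt would run the equivalent query in the SIEM or EDR console and then manually review each hit — a match here is a lead, not a verdict.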

TTP (Tactics, Techniques, and Procedures) The behaviour patterns of a specific threat actor — how they gain access (tactics), what specific methods they use (techniques), and the detailed steps of those methods (procedures). Described using the MITRE ATT&CK framework.

“This attack matches the TTP of FIN7: spearphishing attachment → VBS macro → Carbanak backdoor. We’ve mapped the indicators to ATT&CK techniques T1566.001, T1059.005, and T1021.002.”

IOC (Indicator of Compromise) A forensic artefact that indicates a system has been compromised. Examples: malicious IP addresses, file hashes of known malware, suspicious domain names, unusual registry keys. Reactive — found after the fact.

“IOCs from the incident: malware hash SHA256:abc123, C2 domain evil-cdn.com, and registry key HKLM\Software\Malware\config. We’ve pushed all three IOCs to the blocklist.”
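IOC matching is a straightforward set lookup against incoming events. This sketch uses the indicators from the incident quote above (the hash is the article's placeholder, not a real SHA-256):

```python
# IOCs from the incident write-up; values are placeholders for illustration.
INDICATORS = {
    "hashes": {"abc123"},
    "domains": {"evil-cdn.com"},
    "registry_keys": {r"HKLM\Software\Malware\config"},
}

def match_iocs(event):
    """Return which IOC categories an event matches."""
    matches = []
    if event.get("sha256") in INDICATORS["hashes"]:
        matches.append("hash")
    if event.get("domain") in INDICATORS["domains"]:
        matches.append("domain")
    if event.get("registry_key") in INDICATORS["registry_keys"]:
        matches.append("registry_key")
    return matches
```

This exact-match simplicity is also the weakness of IOCs: an adversary who rotates infrastructure or recompiles malware changes every value, which is why IOAs complement them.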

IOA (Indicator of Attack) A behavioural pattern indicating that an attack may be in progress — even if no malware is present. Examples: unusual process privilege escalation, lateral movement patterns, abnormal data access patterns. Proactive — detects intent before compromise completes.

“The IOA was a chain of events: scheduled task creation by a non-admin user, followed by a new outbound connection to an IP with no previous history. No malware on disk, but clear attacker behaviour pattern.”
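Unlike an IOC lookup, an IOA detection correlates a sequence of events. This sketch implements the chain from the quote above — a scheduled-task creation by a non-admin user followed, within a time window, by a first-seen outbound connection from the same host. Event shapes and the window are illustrative:

```python
def detect_ioa_chain(events, window_seconds=600):
    """Flag hosts where a non-admin scheduled-task creation is followed
    within the window by an outbound connection to a never-seen IP."""
    task_times = {}  # host -> timestamp of suspicious scheduled-task creation
    hits = []
    for ev in sorted(events, key=lambda e: e["ts"]):
        if ev["type"] == "scheduled_task" and not ev.get("admin", False):
            task_times[ev["host"]] = ev["ts"]
        elif ev["type"] == "outbound_connection" and ev.get("first_seen_ip"):
            t0 = task_times.get(ev["host"])
            if t0 is not None and ev["ts"] - t0 <= window_seconds:
                hits.append(ev["host"])
    return hits
```

Neither event alone is conclusive; it is the ordered combination on one host that encodes attacker behaviour.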


Section 5: SOAR and Playbooks

SOAR (Security Orchestration, Automation, and Response) A platform that automates repetitive SOC tasks — enriching alerts, running playbooks, sending notifications, and orchestrating responses across security tools. Reduces analyst mean time to respond (MTTR).

“Our SOAR automatically enriches every alert with IP reputation, user details, and asset data before it appears in the analyst queue. Tier 1 now has context before they even look at an alert.”

Playbook (SOAR Playbook) A documented, automated workflow that defines the steps to investigate and respond to a specific type of incident. Example: phishing playbook — extract URLs and attachments, scan with threat intel, quarantine the email, notify the user, generate a ticket.

“The ransomware playbook automatically isolates the affected endpoint, blocks all C2 IPs at the firewall, revokes the user’s sessions, notifies the IR team, and preserves forensic evidence — without waiting for a human to do each step.”
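Structurally, a playbook is an ordered list of actions applied to an incident context. The steps below are stubs that only record what they would do; a real SOAR platform would call EDR, firewall, identity, and ticketing APIs. All names are illustrative:

```python
# Stub playbook steps; each records its action instead of calling real APIs.
def isolate_endpoint(ctx):
    ctx["actions"].append(f"isolate:{ctx['host']}")

def block_c2_ips(ctx):
    ctx["actions"].extend(f"block:{ip}" for ip in ctx["c2_ips"])

def revoke_sessions(ctx):
    ctx["actions"].append(f"revoke-sessions:{ctx['user']}")

RANSOMWARE_PLAYBOOK = [isolate_endpoint, block_c2_ips, revoke_sessions]

def run_playbook(steps, ctx):
    """Execute playbook steps in order, recording each action taken."""
    ctx.setdefault("actions", [])
    for step in steps:
        step(ctx)
    return ctx
```

The recorded action list doubles as an audit trail — exactly what an IR team needs when reconstructing what the automation did and when.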

Runbook A manual version of a playbook — step-by-step instructions for a human analyst. Less automated than a SOAR playbook, but documents the procedure so any analyst can follow it consistently.

“We don’t have a SOAR rule for this scenario yet — I’ll follow the manual runbook for lateral movement investigation: check event logs on the source host, query network flow data, check identity logs.”


Section 6: Threat Intelligence Vocabulary

Threat Intelligence Feed A structured data source providing indicators of compromise (IOCs), threat actor profiles, and TTP descriptions. Examples: MISP, VirusTotal, AlienVault OTX, commercial feeds from CrowdStrike, Recorded Future.

“We subscribe to three threat feeds — one commercial feed for financial sector threats, CISA alerts for critical infrastructure, and an open-source MISP community feed. The commercial feed has the lowest false positive rate.”

STIX and TAXII Standards for sharing threat intelligence. STIX (Structured Threat Information eXpression): the data format — defines objects like indicators, campaigns, threat actors, and attack patterns. TAXII (Trusted Automated eXchange of Intelligence Information): the transport protocol for sharing STIX data.

“We ingest threat intel in STIX 2.1 format via TAXII from our intelligence provider. The STIX bundles include IP indicators, domains, hashes, and MITRE ATT&CK technique references.”
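A STIX 2.1 bundle is JSON: a `bundle` object wrapping a list of typed objects. This sketch pulls out just the `pattern` field of each `indicator` object; real bundles carry much more (validity windows, labels, markings), and the ids below are placeholders rather than valid STIX UUIDs:

```python
import json

def extract_indicator_patterns(bundle_json):
    """Return the `pattern` of each `indicator` object in a STIX 2.1 bundle."""
    bundle = json.loads(bundle_json)
    return [obj["pattern"] for obj in bundle.get("objects", [])
            if obj.get("type") == "indicator"]

# Minimal illustrative bundle; ids are placeholders, not real UUIDs.
sample_bundle = json.dumps({
    "type": "bundle",
    "id": "bundle--0001",
    "objects": [
        {"type": "indicator", "id": "indicator--0001",
         "pattern": "[ipv4-addr:value = '203.0.113.5']",
         "pattern_type": "stix"},
        {"type": "malware", "id": "malware--0001", "name": "example-family"},
    ],
})
```

In production you would use a dedicated STIX library rather than raw JSON handling, but the bundle-of-typed-objects structure is the part worth internalising.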

Attribution Identifying who is responsible for an attack — linking it to a specific threat actor group or nation-state. High-confidence attribution requires extensive evidence and is typically done by specialised threat intelligence teams.

“Attribution is difficult. We can say the techniques are consistent with the APT29 profile, but we can’t say with certainty it’s APT29 based only on the TTPs. Attribution confidence: low.”


Section 7: SOC Communication Language

Shift handoff
“Handing off at 06:00. Active investigations: two tickets, both triaged. Alert queue is clean. Ongoing monitoring: suspicious login from the contractor account in ticket 4521 — watch for further activity.”

Escalation to Tier 2
“Escalating ticket 4521. Source: UEBA alert on finance user. Evidence: logon from new country + bulk file download + sensitive folder access, all within 30 minutes. Preliminary assessment: account compromise. Need deeper investigation.”

Reporting a false positive
“Closing as false positive. The alert fired on the automated integration test suite running in staging. Creating a suppression rule for the test-runner service account.”

Threat hunting report
“Hunt hypothesis: Pass-the-Hash from compromised workstation. Searched Kerberos event logs for anomalous TGT requests — no findings consistent with the hypothesis. Documenting null result and moving to next hypothesis.”