
Microsoft Sentinel: The Complete Guide

From Zero to Security Operations Hero


Part 1: Introduction to SIEM and Microsoft Sentinel

What is a SIEM?

SIEM stands for Security Information and Event Management. Think of it as a security operations center (SOC) in a box. A SIEM:

  1. Collects logs from every system in your organization (servers, firewalls, endpoints, applications)

  2. Normalizes data into a common format

  3. Correlates events across different sources

  4. Detects threats using rules and machine learning

  5. Alerts analysts when something suspicious happens

  6. Provides tools for investigation and response

The Problem SIEM Solves

Imagine you manage 500 servers, 1000 workstations, 50 network devices, and 100 applications. Each generates thousands of log entries per day:

  • Without SIEM: You have 1,650 log files to check manually. An attacker compromises a server, pivots to five workstations, exfiltrates data through your firewall, and you find out three months later.

  • With SIEM: All logs flow to one place. A detection rule spots the initial compromise in 5 minutes. An automated alert creates an incident. An analyst investigates using a unified interface. Total time to containment: 30 minutes.

Why Microsoft Sentinel?

Microsoft Sentinel is a cloud-native SIEM built on Azure. Unlike traditional SIEMs that require on-premises servers, Sentinel:

  • Scales automatically - ingest 10 GB/day or 10 TB/day without infrastructure changes

  • Pay-as-you-go - no massive upfront licensing fees

  • Integrates natively with Microsoft ecosystem (Azure, Microsoft 365, Defender, Entra ID)

  • AI/ML built-in - anomaly detection without manual tuning

  • SOAR capabilities - automated response with Logic Apps and Playbooks

Market Position (as of 2026):

  • Used by 60% of Fortune 500 companies

  • Processes over 5 trillion security events per month globally

  • Average detection time: 3.5 minutes from event to alert

Part 2: Understanding Sentinel Architecture

Before we start clicking buttons, you need to understand the data flow. This is the foundation of everything.

The Modern Azure Monitor Stack

Component 1: Azure Monitor Agent (AMA)

What is it?

A small software program (extension) that runs on your VMs, containers, or physical machines. It reads logs from the local system and sends them to Azure.

What it replaces

The legacy Log Analytics Agent (also called MMA - Microsoft Monitoring Agent). Microsoft deprecated MMA in August 2024. If you see documentation mentioning MMA, it's outdated.

Installation

You don't install it manually. When you add a VM to a Data Collection Rule (DCR), Azure automatically:

  1. Deploys the AMA extension to the VM

  2. Configures it with the DCR settings

  3. Starts collecting logs

Installation time: 5-10 minutes typically.

Component 2: Data Collection Rules (DCR)

What is it?

A configuration object in Azure (not on the VM!) that defines:

  1. What to collect: Which logs, which tables, which performance counters

  2. From where: Which VMs, containers, or devices

  3. Filter logic: Collect only Critical and Error events (ignore Info)

  4. Destination: Which Log Analytics Workspace to send to

Why DCR is Revolutionary

Old way (MMA):

  • Agent collects EVERYTHING from the VM

  • Sends EVERYTHING to Azure (expensive!)

  • You pay for 1 GB/day of logs

  • 80% of that data is useless noise

New way (AMA + DCR):

  • DCR says: "Only collect EventID 4625 (failed logins)"

  • Agent filters on the VM itself

  • Sends 200 MB/day to Azure

  • Cost savings: 80%

DCR Configuration Example

Let's say you want to monitor failed RDP logins on 100 Windows servers:

DCR Settings:

  • Name: DCR-RDP-Security

  • Platform: Windows

  • Resources: Add all 100 VMs

  • Data source: Windows Security Events

  • Filter: Only Event IDs 4625, 4624, 4672

  • Destination: Your Log Analytics Workspace → SecurityEvent table

Result: All 100 VMs automatically get the AMA agent installed, configured, and start sending only the filtered events to Sentinel.

Component 3: Log Analytics Workspace (LAW)

What is it?

The database where all your logs are stored. It's a specialized Azure resource optimized for:

  • High-volume ingestion (terabytes per day)

  • Fast queries (queries across billions of rows in seconds)

  • Long-term retention (30 days to 2 years)

It's Not Just for Sentinel

Log Analytics Workspace is a general Azure service. It's used by:

  • Azure Monitor - Performance metrics

  • Application Insights - Application telemetry

  • Microsoft Sentinel - Security logs

  • Azure Automation - Runbook logs

  • Microsoft Defender for Cloud - Security recommendations

This is why Sentinel is called "built on Log Analytics" - it's an application that runs queries against the workspace.

Tables in Log Analytics

Each data source creates a table:

| Table Name | Source | Example Data |
|---|---|---|
| SecurityEvent | Windows Security Events | Event ID 4625 (failed login), 4624 (successful login) |
| Syslog | Linux syslog | SSH logins, sudo commands, application logs |
| SigninLogs | Azure AD / Entra ID | User logins to Microsoft 365, Azure Portal |
| AzureActivity | Azure control plane | Resource creation, deletion, configuration changes |
| CommonSecurityLog | Firewalls, proxies | Network traffic, blocked connections |
| SecurityAlert | Microsoft Defender | Malware detections, suspicious activity |

Workspace ID and Key

Every workspace has:

  • Workspace ID: A GUID like a1b2c3d4-e5f6-7890-abcd-ef1234567890

  • Primary Key: A long secret string

These are used:

  • By agents to authenticate (in legacy scenarios)

  • By API integrations

  • For troubleshooting

Security: Treat the Primary Key like a password. Anyone with it can send data to your workspace.

Component 4: Microsoft Sentinel Layer

What Sentinel Adds to Log Analytics

If Log Analytics is the database, Sentinel is the security application that runs on top of it. Sentinel adds:

  1. Analytics Rules - Automated threat detection queries

  2. Incidents - Organized alerts with workflow (assign, investigate, close)

  3. Investigation Graph - Visual representation of attack chains

  4. Threat Intelligence - IOC feeds integrated into queries

  5. Watchlists - Custom lists (VIP users, known bad IPs, approved software)

  6. Workbooks - Security dashboards with charts and maps

  7. Automation Rules - Auto-assign, auto-close, trigger playbooks

  8. Playbooks - Logic Apps for automated response (block IP, isolate VM, send email)

  9. Hunting - Proactive threat hunting with saved queries

  10. UEBA - User and Entity Behavior Analytics (ML-based anomaly detection)

Sentinel vs Log Analytics - Key Difference

Log Analytics alone:

  • You can write queries: SecurityEvent | where EventID == 4625

  • You can create alerts

  • You can build dashboards

Sentinel adds:

  • Pre-built content (4000+ detection rules from Microsoft and community)

  • Incident management (triaging, assignment, investigation)

  • Entity enrichment (IP → country, user → risk score)

  • MITRE ATT&CK mapping (T1078 - Valid Accounts)

  • Case management features

Pricing Model

Sentinel is billed separately from Log Analytics:

| Tier | Cost | Description |
|---|---|---|
| Pay-as-you-go | $2.76/GB analyzed | You pay for what Sentinel analyzes (not just stores) |
| Commitment Tier | $2.30/GB (100 GB/day) | Cheaper if you have predictable volume |

Example:

  • Log Analytics ingestion: 10 GB/day × $2.76 = $27.60/day

  • Sentinel analysis: 10 GB/day × $2.76 = $27.60/day

  • Total: $55.20/day = ~$1,656/month

Cost optimization tricks:

  1. Use Basic Logs (80% cheaper) for non-security tables

  2. Enable Health Monitoring logs to track Sentinel itself

  3. Use Scheduled Rules (run hourly) instead of real-time for low-priority threats

Part 3: Getting Started - Complete Setup Guide

This section will walk you through setting up Sentinel from scratch. By the end, you'll have a fully functional SIEM ready to detect threats.

Prerequisites

Before starting, ensure you have:

  • ✅ Azure subscription with at least Contributor role

  • ✅ At least $50 of Azure credits (for testing)

  • ✅ Basic understanding of Windows/Linux command line

  • ✅ Access to Azure Portal (portal.azure.com)

Step 1: Create Log Analytics Workspace

1.1 Navigate to Workspace Creation

  1. Sign in to Azure Portal

  2. In the search bar at the top, type: Log Analytics workspaces

  3. Click Log Analytics workspaces (under Services)

  4. Click + Create

1.2 Configure Basic Settings

Basics tab:

Project details:

  • Subscription: Select your subscription from dropdown

  • Resource group: Click Create new → Type rg-sentinel-prod

  • Why new RG: Keeps Sentinel resources organized and makes cleanup easier

Instance details:

  • Name: law-sentinel-prod-01

    • Why this naming: law = Log Analytics Workspace prefix, sentinel = purpose, prod = environment, 01 = instance number

    • Must be globally unique: If taken, try law-sentinel-prod-yourname or add random numbers

  • Region: Choose a region close to your resources

    • Examples: East US, West Europe, Southeast Asia

  • Cost consideration: Log Analytics rates can vary by region; check the Azure pricing page for your region

    • Latency consideration: Data ingestion is faster from the same region

1.3 Configure Retention and Daily Cap

Pricing tier:

  • Should show Pay-as-you-go (this is standard)

  • Commitment tiers appear only after you click into pricing settings

Click Review + create → Create

Deployment time: 2-3 minutes

1.4 Post-Creation Configuration (Important!)

Once created, go to your workspace and configure these critical settings:

1. Set Daily Cap (Cost Protection):

  1. Open your workspace

  2. Left menu → Usage and estimated costs

  3. Click Daily cap

  4. Set to 5 GB (for lab/testing) or 50 GB (for production)

  5. Check the box: Stop data collection when the daily limit is reached

  6. Click OK

Why this matters: Prevents runaway costs if misconfigured. If you accidentally collect too much data (e.g., you enable verbose logging on 1000 VMs), the workspace stops ingesting at the cap limit.

Warning: When the cap is reached, no data is collected until the next day (midnight UTC). Use alerts to notify you if approaching the cap.

2. Configure Data Retention:

  1. Left menu → Usage and estimated costs

  2. Click Data Retention

  3. Slide to 90 days (recommended for security)

  4. Click OK

Retention pricing:

  • Days 1-30: Free (included in ingestion)

  • Days 31-90: Free (as of 2026)

  • Days 91+: $0.10/GB/month

Why 90 days: Most security investigations require 30-60 days of historical data. Compliance frameworks (PCI-DSS, HIPAA) often require 90+ days.

1.5 Understand Workspace Structure

Your workspace now exists. Let's explore what's inside:

  1. Left menu → Logs - This is where you write KQL queries

  2. Left menu → Tables - Shows all tables (SecurityEvent, Syslog, etc.)

  3. Left menu → Agents - (Legacy section, ignore for AMA)

Important concept: Your workspace is currently empty. No tables exist yet. Tables are created automatically when the first data arrives.

Step 2: Enable Microsoft Sentinel

2.1 Add Sentinel to Your Workspace

  1. In Azure Portal search bar, type: Microsoft Sentinel

  2. Click Microsoft Sentinel (under Services)

  3. You'll see: Add Microsoft Sentinel to a workspace

  4. Click + Create

  5. You'll see a list of your Log Analytics workspaces

  6. Select law-sentinel-prod-01 (the one you just created)

  7. Click Add

Provisioning time: 2-3 minutes

What happens during provisioning:

  • Sentinel-specific tables are created (SecurityAlert, SecurityIncident, ThreatIntelligenceIndicator)

  • Default watchlists are initialized

  • Content Hub catalog is loaded

  • Built-in analytics rules are deployed (disabled by default)

2.2 Initial Sentinel Tour

After provisioning, Sentinel opens. You'll see the dashboard:

Top metrics (currently all zeros):

  • Total incidents: 0 (no data yet)

  • New incidents: 0

  • Average time to triage: N/A

  • Average time to close: N/A

Main sections (left menu):

| Section | Purpose |
|---|---|
| General → Overview | Dashboard with metrics |
| General → Logs | KQL query interface |
| Threat management → Incidents | Your incident queue (the "mission control") |
| Threat management → Workbooks | Security dashboards |
| Threat management → Hunting | Proactive threat hunting queries |
| Content management → Content hub | Install pre-built solutions |
| Content management → Repositories | Connect GitHub for custom content |
| Configuration → Data connectors | Configure data sources |
| Configuration → Analytics | Create and manage detection rules |
| Configuration → Automation | Incident automation and playbooks |
| Configuration → Settings | Sentinel-wide configuration |

2.3 Verify Sentinel is Active

Run a test query:

  1. Go to Logs

  2. Close any pop-up tutorials

  3. In the query window, type:
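```kql
// A minimal smoke test. If AzureDiagnostics doesn't resolve yet
// (no resource has ever sent diagnostics), try: print now()
AzureDiagnostics
| take 10
```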

  4. Click Run

Expected result:

  • If query runs (even if it returns 0 results): ✅ Sentinel is working

  • If you get an error: ❌ Something is wrong (unlikely)

Why this works: the AzureDiagnostics table stores Azure resource logs and exists in most workspaces. Even if it's empty, a successful run validates that the KQL engine is operational. (If the table name doesn't resolve because no resource has ever sent diagnostics, print now() is a guaranteed fallback.)

Part 4: Data Ingestion - Connectors and Collection Rules

Now we connect data sources to Sentinel. This is where logs start flowing.

Understanding Content Hub (The New Way)

Solutions vs Connectors vs Data Sources

This confuses everyone, so here's the breakdown:

  • Solution: A package that includes multiple components (connectors + rules + workbooks). Example: "Apache Log4j Vulnerability Detection"

  • Data Connector: The configuration interface for a specific data source. Example: "Syslog via AMA"

  • Data Source: The actual log type collected. Example: Linux syslog facility "auth"

Flow: Install Solution (Content Hub) → Configure Connector (Data connectors) → Define Collection (DCR) → Data arrives (Tables)

Connector 1: Linux Syslog (Essential for Linux VMs)

Use Case

You have Linux VMs (Ubuntu, RHEL, etc.) and want to collect:

  • Authentication logs (successful/failed SSH)

  • Sudo command execution

  • Application logs (web servers, databases)

  • Kernel messages

Installation Steps

Step 1: Install Solution from Content Hub

  1. Go to Microsoft Sentinel → Content management → Content hub

  2. In the search box, type: syslog

  3. You'll see: Syslog solution (by Microsoft)

  4. Click the checkbox next to Syslog

  5. Click Install (bottom right)

  6. Wait for installation (1-2 minutes)

  7. Verify: Go to Content hub → Filter: Installed → You should see Syslog with green checkmark

Step 2: Configure Data Connector

  1. Go to Configuration → Data connectors

  2. Search: syslog via ama

  3. Click on Syslog via AMA

  4. Click Open connector page

Step 3: Create Data Collection Rule

  1. On the connector page, click + Create data collection rule

Basics tab:

  • Rule name: DCR-Linux-Syslog-All

  • Subscription: Your subscription

  • Resource group: Same as your workspace (e.g., rg-sentinel-prod)

  • Region: Same as your workspace

  • Platform Type: Linux

  • Click Next: Resources

Resources tab:

  • Click + Add resources

  • Browse and select your Linux VMs

  • Click Apply

  • Note: If you don't have VMs yet, skip this (leave empty). You can add VMs later.

  • Click Next: Collect

Collect tab (CRITICAL):

You'll see a table with all syslog facilities:

| Facility | Description | Set to |
|---|---|---|
| auth | Authentication/authorization | LOG_DEBUG |
| authpriv | Private authentication | LOG_DEBUG |
| cron | Scheduled tasks | LOG_DEBUG |
| daemon | System daemons | LOG_DEBUG ← container logs appear here! |
| kern | Kernel messages | LOG_DEBUG |
| local0-local7 | Custom applications | LOG_DEBUG (all 8) |
| syslog | Syslog internal | LOG_DEBUG |
| user | User-level messages | LOG_DEBUG |

Set ALL facilities to LOG_DEBUG. Why?

  • LOG_DEBUG captures everything (Debug, Info, Warning, Error, Critical)

  • You can always filter in KQL later

  • Missing a log level means missing attacks

Trade-off: collecting every facility at LOG_DEBUG maximizes ingestion volume (and cost). That's fine for a lab; in production, tighten the facilities and levels once you know which logs matter.

Destination tab:

  • Destination type: Azure Monitor Logs (pre-selected)

  • Subscription: Your subscription

  • Account: Select your workspace (law-sentinel-prod-01)

  • Click Review + create → Create

Verification

After DCR creation:

  1. Go to Monitor → Data Collection Rules

  2. Find DCR-Linux-Syslog-All

  3. Click it → Left menu → Resources

  4. You should see your Linux VMs listed

  5. Left menu → Data sources → Should show: Linux Syslog

  6. Left menu → Destinations → Should show: Your workspace

Timeline:

  • DCR creation: 1 minute

  • AMA installation on VMs: 5-10 minutes

  • First logs appear in Sentinel: 10-15 minutes

Test Query

After 15 minutes:
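A minimal check (standard Syslog schema):

```kql
Syslog
| where TimeGenerated > ago(1h)
| take 10
```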

If you see logs: ✅ Success!

Connector 2: Windows Security Events (Essential for Windows VMs)

Use Case

You have Windows VMs (Windows 10, Server 2019/2022) and want to collect:

  • Logon events (Event ID 4624, 4625)

  • Account management (Event ID 4720 - user created, 4726 - user deleted)

  • Process creation (Event ID 4688)

  • File access auditing (Event ID 4663)

Installation Steps

Step 1: Install Solution from Content Hub

  1. Go to Content hub

  2. Search: Windows Security Events

  3. Select Windows Security Events solution

  4. Click Install

  5. Wait for installation

Step 2: Configure Data Connector

  1. After installation, click Manage OR go to Data connectors

  2. Search: Windows Security Events via AMA

  3. Click Open connector page

  4. Click + Create data collection rule

Basics tab:

  • Rule name: DCR-Windows-AllSecurity

  • Subscription: Your subscription

  • Resource group: Same as workspace

  • Region: Same as workspace

  • Platform Type: Windows

Resources tab:

  • Click + Add resources

  • Select your Windows VMs

  • Click Apply

Collect tab:

You have three options:

| Option | What it collects | Use case | Cost/day (per VM) |
|---|---|---|---|
| All Security Events | Everything (4000+ event types) | Maximum visibility, forensics | $5-10 |
| Common | Most important events (~200 types) | Balanced (recommended) | $2-3 |
| Minimal | Only critical events (~50 types) | Cost-sensitive | $0.50-1 |

Recommendation: Start with Common for production, All for honeypots/investigations.

Destination tab:

  • Select your workspace

  • Click Review + create → Create

Verification

Check agent installation:

  1. Go to your Windows VM → Extensions

  2. Look for AzureMonitorWindowsAgent

  3. Status: Provisioning succeeded

Test query (after 15 minutes):
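One way to check (standard SecurityEvent schema):

```kql
SecurityEvent
| where TimeGenerated > ago(1h)
| summarize Count = count() by EventID
| sort by Count desc
```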

Connector 3: Microsoft Defender for Cloud (Endpoint Protection)

Use Case

You have Microsoft Defender (built-in Windows antivirus) and want Sentinel to:

  • Receive malware detection alerts

  • Trigger incidents for threats

  • Correlate endpoint threats with network attacks

Setup (Easiest Connector!)

This connector doesn't require agents or DCRs. It connects Azure services directly.

  1. Go to Data connectors

  2. Search: Microsoft Defender for Cloud

  3. Click Open connector page

  4. Click Connect (that's it!)

  5. Under Configuration, select:

    • ☑️ Create incidents from alerts (Recommended)

    • Severity: Select High, Medium, Low

That's it! No DCR, no agent installation. Defender alerts now flow to Sentinel automatically.

Test

On a Windows VM:

  1. Create the EICAR test file:

    • Open Notepad

    • Paste: X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*

    • Save as virus.com on Desktop

  2. Defender deletes it immediately

  3. Wait 5 minutes

  4. Check Sentinel → Incidents

  5. You should see: "EICAR test file detected"

Connector 4: Azure Activity (Azure Control Plane Monitoring)

Use Case

Monitor administrative actions on Azure resources:

  • VM created/deleted

  • Storage account access keys regenerated

  • Firewall rules modified

  • Role assignments changed

Setup

  1. Go to Data connectors

  2. Search: Azure Activity

  3. Click Open connector page

  4. Click Launch Azure Policy Assignment wizard

  5. Select subscriptions to monitor

  6. Click Review + create → Create

What this creates: An Azure Policy that automatically connects all Activity Logs from the selected subscriptions.

No agents required: Activity logs are control-plane operations, already stored in Azure.

Test Query
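A representative query (standard AzureActivity columns):

```kql
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue has "Microsoft.Compute/virtualMachines"
| project TimeGenerated, OperationNameValue, Caller, ActivityStatusValue
```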

This shows all VM operations in the last hour (create, start, stop, delete).

Summary: What You've Built So Far

At this point, you have:

  • Log Analytics Workspace - The data warehouse

  • Microsoft Sentinel - The security layer

  • Syslog connector - Linux logs flowing in

  • Windows Security Events - Windows logs flowing in

  • Defender for Cloud - Endpoint protection integrated

  • Azure Activity - Azure administrative actions logged

Data flow is active. Logs are being collected. But no detection is happening yet. That requires Analytics Rules (next section).

Part 5: KQL Masterclass - Query Language Deep Dive

Kusto Query Language (KQL) is the language of Sentinel. It's also used in Azure Data Explorer, Application Insights, and Azure Monitor. Learning KQL is one of the most valuable skills in the Microsoft ecosystem.

Philosophy: Think in Pipelines

KQL is not SQL. It's a functional pipeline language: data flows left to right through operators connected by pipes. Compare it to water filtration - raw water passes through a sediment filter, then a carbon filter, and comes out clean. In KQL, the same staged refinement looks like this:
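```kql
SecurityEvent                                 // raw water: millions of rows
| where EventID == 4625                       // sediment filter: failed logins only
| where TimeGenerated > ago(1h)               // carbon filter: recent only
| project TimeGenerated, Account, IpAddress   // clean water: just what you need
```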

The Pipe Operator: |

Every KQL query starts with a table name, then pipes (|) transform the data:
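```kql
Syslog                                             // 1. start from a table
| where SeverityLevel == "err"                     // 2. filter rows
| project TimeGenerated, Computer, SyslogMessage   // 3. shape columns
| take 20                                          // 4. limit output
```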

Key concept: Each pipe reduces the dataset. You start with millions of rows and progressively filter down to what you need.

Essential Operators

1. where - The Most Used Operator

Filters rows based on conditions.
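```kql
SecurityEvent
| where EventID == 4625                        // equality
| where TimeGenerated > ago(24h)               // time comparison
| where Account has "admin"                    // whole-term string match
| where IpAddress in ("10.0.0.5", "10.0.0.6")  // membership in a list
```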

Performance tip: Filter on indexed columns first (EventID, TimeGenerated, Computer). String searches (contains, matches regex) are slower.

2. project - Select Columns

Choose which columns to display (like SQL SELECT).
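```kql
SecurityEvent
| where EventID == 4625
| project TimeGenerated, Computer, Account, IpAddress
```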

Variant: project-away - Remove specific columns:
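```kql
SecurityEvent
| project-away SourceSystem, TenantId   // keep everything except these columns
```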

3. extend - Create Calculated Columns

Add new columns based on existing data.

Use cases:

  • Parsing strings: extend Domain = split(UserPrincipalName, "@")[1]

  • Calculations: extend DurationMinutes = (EndTime - StartTime) / 1m

  • Enrichment: extend Severity = iff(FailedAttempts > 100, "High", "Low")

4. summarize - Aggregation

Group data and calculate statistics (like SQL GROUP BY).

Count rows:
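```kql
SecurityEvent
| summarize Count = count() by EventID
```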

Output: one row per EventID with its Count.

Multiple aggregations:
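```kql
SecurityEvent
| where EventID == 4625
| summarize
    Attempts = count(),
    UniqueAccounts = dcount(Account),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated)
    by IpAddress
```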

Available aggregation functions:

  • count() - Count rows

  • dcount(column) - Count distinct values

  • sum(column) - Sum values

  • avg(column) - Average

  • min(column), max(column) - Min/max

  • make_set(column) - Create array of unique values

  • make_list(column) - Create array (includes duplicates)

  • percentile(column, 95) - 95th percentile

5. join - Combine Tables

Merge data from two tables.

Example: Correlate failed logins with successful logins
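A sketch of the pattern (the threshold is illustrative):

```kql
let Failed = SecurityEvent
    | where EventID == 4625
    | summarize FailedCount = count() by IpAddress;
let Success = SecurityEvent
    | where EventID == 4624
    | project IpAddress, Account, LogonTime = TimeGenerated;
Failed
| join kind=inner (Success) on IpAddress
| where FailedCount > 5
```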

Join types:

  • inner - Only rows that match in both tables

  • leftouter - All rows from left, nulls for non-matching right

  • rightouter - All rows from right, nulls for non-matching left

  • fullouter - All rows from both

6. let - Variables and Subqueries

Store intermediate results or values.

Example: Define a time range:
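```kql
let lookback = 7d;
SecurityEvent
| where TimeGenerated > ago(lookback)
```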

Example: Reusable list:
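```kql
// IPs below are documentation-range placeholders
let SuspiciousIPs = dynamic(["203.0.113.7", "198.51.100.23"]);
SecurityEvent
| where IpAddress in (SuspiciousIPs)
```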

7. Time Operators

Relative time:

  • ago(1h) - One hour ago

  • ago(1d) - One day ago

  • ago(7d) - Seven days ago

Absolute time:

  • datetime(2026-01-31) - Specific date

  • datetime(2026-01-31T14:30:00) - Specific timestamp

Time parsing:

  • hourofday(TimeGenerated) - 0-23

  • dayofweek(TimeGenerated) - returns a timespan: 0d (Sunday) to 6d (Saturday)

  • dayofmonth(TimeGenerated) - 1-31

  • monthofyear(TimeGenerated) - 1-12

Time binning (for charts):
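```kql
SecurityEvent
| where EventID == 4625
| summarize Count = count() by bin(TimeGenerated, 1h)
| render timechart
```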

Creates hourly buckets: [00:00, 01:00, 02:00, ...]

Practical KQL Examples

Example 1: Find Brute Force Attacks

Goal: Detect IPs with more than 10 failed login attempts in 5 minutes.
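One way to express this (the numbered comments match the explanation below):

```kql
SecurityEvent
| where EventID == 4625                   // 1. failed logins
| where TimeGenerated > ago(5m)           // 2. last 5 minutes
| summarize                               // 3-4. group, count, collect usernames
    FailedAttempts = count(),
    TargetAccounts = make_set(Account),
    FirstAttempt = min(TimeGenerated),
    LastAttempt = max(TimeGenerated)
    by IpAddress, Computer
| where FailedAttempts > 10               // 5. brute-force threshold
| extend DurationSeconds = datetime_diff("second", LastAttempt, FirstAttempt)  // 6.
| sort by FailedAttempts desc             // 7. worst first
```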

Explanation:

  1. Filter to failed login events

  2. Only last 5 minutes (matches rule frequency)

  3. Group by attacker IP and target computer

  4. Count attempts and collect usernames tried

  5. Filter to only "brute force" level (>10 attempts)

  6. Calculate attack duration

  7. Display results sorted by severity

Example 2: Detect Successful Login After Failed Attempts

Goal: Find IPs that failed many times, then succeeded (credential stuffing).
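A sketch (time window and threshold are illustrative):

```kql
let FailedSources = SecurityEvent
    | where TimeGenerated > ago(1h)
    | where EventID == 4625
    | summarize FailedCount = count() by IpAddress, Computer
    | where FailedCount > 10;
SecurityEvent
| where TimeGenerated > ago(1h)
| where EventID == 4624                    // successful logins
| join kind=inner (FailedSources) on IpAddress, Computer
| project TimeGenerated, IpAddress, Computer, Account, FailedCount
```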

Why this is dangerous: Attackers often try 100 passwords, then get lucky. This query finds those successes.

Example 3: Detect Suspicious Process Execution

Goal: Find processes started by accounts that logged in via RDP (potential attacker activity).
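A sketch; note that Event ID 4688 rows only exist if process-creation auditing is enabled:

```kql
let RdpLogons = SecurityEvent
    | where TimeGenerated > ago(24h)
    | where EventID == 4624 and LogonType == 10    // LogonType 10 = RDP
    | project Account, Computer, LogonTime = TimeGenerated;
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4688                            // process creation
| join kind=inner (RdpLogons) on Account, Computer
| where TimeGenerated > LogonTime
| project TimeGenerated, Computer, Account, NewProcessName, CommandLine
```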

What it shows: Every process launched by RDP users. Look for suspicious commands (powershell, wget, certutil, net user).

Example 4: Log4Shell Detection in Syslog

Goal: Find JNDI injection patterns in Linux logs.
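A representative pattern search (the regexes are simplified):

```kql
Syslog
| where TimeGenerated > ago(24h)
| where SyslogMessage has "jndi"
| extend Protocol = extract(@"\$\{jndi:(\w+):", 1, SyslogMessage)
| extend AttackerServer = extract(@"jndi:\w+://([^/\}\s]+)", 1, SyslogMessage)
| where isnotempty(Protocol)
| project TimeGenerated, Computer, Protocol, AttackerServer, SyslogMessage
```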

Explanation:

  • Search for JNDI patterns in syslog messages

  • Extract the protocol (ldap, rmi, dns, etc.)

  • Extract attacker's server IP and port

  • Filter out false positives (where Protocol is empty)

Example 5: Attack Timeline

Goal: Create a timeline of attacks over 7 days.
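```kql
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4625
| summarize FailedLogins = count() by bin(TimeGenerated, 1h)
| render timechart
```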

This creates a line chart showing attacks over time. Useful for:

  • Identifying attack waves

  • Correlating with external events (data breaches leaked your IP)

  • Determining peak attack times (often nights/weekends)

Advanced KQL Techniques

Parsing JSON and XML

Scenario: Your application logs contain JSON data in a text field.
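A sketch, assuming your app writes JSON into SyslogMessage (the process name and JSON fields are hypothetical):

```kql
Syslog
| where ProcessName == "myapp"                        // hypothetical app name
| extend Data = parse_json(SyslogMessage)
| extend User = tostring(Data.user), Action = tostring(Data.action)
| project TimeGenerated, Computer, User, Action
```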

Dynamic Thresholds with Percentiles

Scenario: Detect abnormal login counts (more than usual).
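A sketch of the pattern (the numbered comments match the explanation below):

```kql
let Baseline = toscalar(                      // 1. weekly 95th percentile
    SecurityEvent
    | where EventID == 4624
    | where TimeGenerated between (ago(8d) .. ago(1d))
    | summarize DailyCount = count() by bin(TimeGenerated, 1d)
    | summarize percentile(DailyCount, 95));
SecurityEvent
| where EventID == 4624
| where TimeGenerated > ago(1d)
| summarize TodayCount = count()              // 2. today's logins
| where TodayCount > Baseline                 // 3. flag anomaly
```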

Explanation:

  1. Calculate the 95th percentile of logins over the past week

  2. Count today's logins

  3. If today exceeds the baseline, flag as anomalous

IP Geolocation (Manual Enrichment)

While raw logs don't have geolocation, you can enrich them:
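A sketch, assuming you've imported a custom lookup such as GeoIPDatabase (e.g., as a watchlist or custom table; see the note below):

```kql
// GeoIPDatabase and its Network/Country/City columns are hypothetical
SecurityEvent
| where EventID == 4625
| join kind=leftouter (GeoIPDatabase) on $left.IpAddress == $right.Network
| project TimeGenerated, IpAddress, Country, City
```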

Note: Most Sentinel environments don't have GeoIPDatabase by default. Geolocation is added automatically in Incidents via Entity Mapping.

Part 6: Analytics Rules - The Art of Detection

Analytics Rules are the heart of Sentinel. They automatically run queries on a schedule and create incidents when threats are detected.

Rule Types

| Type | Description | Use Case |
|---|---|---|
| Scheduled | Runs a KQL query on a timer | Most common (95% of rules) |
| Microsoft Security | Imports alerts from other MS services | Defender for Cloud, Microsoft 365 Defender |
| Fusion | ML-based correlation of multiple signals | Advanced persistent threats |
| Anomaly | ML detects unusual behavior | Baselines "normal" then alerts on deviations |
| Threat Intelligence | Matches IOCs from TI feeds | Known bad IPs, domains, file hashes |

We'll focus on Scheduled rules - they're the most flexible and powerful.

Anatomy of a Scheduled Rule

A scheduled rule has 5 tabs:

Tab 1: General (Metadata)

Name: RDP Brute Force Detection

  • Should be descriptive and specific

  • Include the attack type and data source

Description:

  • Explain what it detects

  • Include the threshold logic

  • Reference the EventID if applicable

Tactics (MITRE ATT&CK):

  • Select: Credential Access

  • This maps to MITRE ATT&CK framework tactic

Techniques:

  • Select: T1110.001 - Brute Force: Password Guessing

Severity:

  • Low - Informational, expected behavior

  • Medium - Suspicious, needs investigation

  • High - Likely attack, immediate attention

  • Critical - Active breach, emergency response

For brute force: Choose High

Status: Enabled (or Disabled if you're testing)

Tab 2: Set Rule Logic (The Query)

Rule query:
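A representative query (same pattern as Example 1 in Part 5); its IpAddress and Computer columns feed the entity mapping below:

```kql
SecurityEvent
| where EventID == 4625
| where TimeGenerated > ago(5m)
| summarize FailedAttempts = count(), TargetAccounts = make_set(Account)
    by IpAddress, Computer
| where FailedAttempts > 10
```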

Alert enrichment - Entity mapping (Crucial!):

  1. Click + Add new entity

  2. Configure IP entity:

    • Entity type: Select IP

    • Identifier: Select Address

    • Value: Select IpAddress

  3. Click + Add new entity again

  4. Configure Host entity:

    • Entity type: Select Host

    • Identifier: Select HostName

    • Value: Select Computer

Why map entities:

  • Sentinel automatically enriches IPs with geolocation

  • Investigation graph visualizes relationships

  • You can pivot to other incidents involving the same IP

Query scheduling:

  • Run query every: 5 minutes

  • Lookup data from the last: 5 minutes

Critical rule: These two values should usually match to avoid gaps or duplicates.

Alert threshold:

  • Generate alert when number of query results: Is greater than 0

This means: "If the query returns any rows, create an alert."

Tab 3: Incident Settings

Create incidents from alerts triggered by this analytics rule: Enabled

Why this matters: Without this, you get alerts but no incidents. Incidents are what SOC analysts work with.

Alert grouping:

Option 1: Group all alerts triggered by this rule into a single incident

  • Use case: You want ONE incident for "RDP attacks today" regardless of how many IPs

  • Not recommended for this rule

Option 2: Group related alerts, triggered by this analytics rule, into incidents

  • Group by: Select IP Address (or IpAddress if shown as column)

  • Re-open closed matching incidents: Disabled

  • Limit the group to alerts created within the selected time frame: 5 hours

Why Option 2: If the same IP attacks 20 times over 2 hours, you get ONE incident with 20 alerts. If a different IP attacks, you get a separate incident. This is the right balance.

Tab 4: Automated Response

This is where you trigger Playbooks (Logic Apps) automatically.

Example use case: When a High-severity brute force incident is created, automatically:

  1. Send email to SOC team

  2. Block the IP at the firewall (via API)

  3. Create a ticket in ServiceNow

For now, leave this empty. We'll cover automation later.

Tab 5: Review + Create

Review your configuration and click Create.

The rule starts running immediately (if Status = Enabled).

Testing Your Rule

Option 1: Manual Test (Before Deployment)

In the Set rule logic tab:

  1. After writing your query, click Test with current data

  2. Sentinel runs the query against your existing data and shows a sample of matching results

  3. You see:

    • "Query validation passed" - Syntax is correct

    • "Query failed" - Syntax error (fix and retry)

  4. You can see sample results

Note: This doesn't create incidents, just validates the query.

Option 2: Simulate Attack

  1. Create the rule

  2. Generate failed login events (from Kali, run xfreerdp3 /u:admin /p:WrongPass /v:YOUR_VM /cert:ignore +auth-only about 15 times)

  3. Wait for rule to run (5 minutes)

  4. Check Incidents page

Pre-Built Rules from Content Hub

When you installed solutions (Syslog, Windows Security Events), you also got pre-built rules. They're disabled by default.

Enabling Pre-Built Rules

  1. Go to Analytics → Rule templates tab

  2. Search for rules (e.g., brute force)

  3. Click on a template

  4. Click Create rule

  5. Review the query and settings

  6. Click through tabs → Create

Popular rules to enable:

| Rule Name | What it detects | Severity |
|---|---|---|
| "Rare application consent" | OAuth app abuse | Medium |
| "Malicious Inbox Rule" | Email forwarding to external addresses | High |
| "Multiple Password Reset by user" | Account takeover attempt | Medium |
| "TI map IP entity to AzureActivity" | Known malicious IPs in Azure logs | High |
| "Possible Log4j exploitation" | CVE-2021-44228 | Critical |

Recommendation: Enable 10-20 rules to start. Too many rules = alert fatigue.

Part 7: Incident Management and Investigation

Analytics rules create Alerts. Alerts are grouped into Incidents. Incidents are what SOC analysts investigate.

The Incident Lifecycle

  • New: Incident just created, unassigned, needs triage

  • Active: Assigned to an analyst, investigation in progress

  • Closed: Investigation complete, classified, documented

Incident Page - Your Mission Control

Understanding the Incidents Page

When you open Incidents, you see:

Top metrics:

  • Total incidents: All incidents (ever)

  • New incidents: Unassigned, needs attention

  • Active incidents: Currently being worked on

  • Closed incidents: Resolved

Incident list columns:

| Column | Meaning |
|---|---|
| ID | Unique number (e.g., #12345) |
| Title | From the analytics rule name |
| Severity | Critical / High / Medium / Low / Informational |
| Status | New / Active / Closed |
| Owner | Assigned analyst |
| Created time | When first detected |
| Last updated | When last modified |
| Alerts | Number of alerts in this incident (grouped) |
| Entities | Number of entities involved (IPs, hosts, accounts) |
| Products | Source (Microsoft Sentinel, Defender, etc.) |

Filtering and Searching

Filters (top of page):

  • Time range: Last 24 hours / 7 days / 30 days / Custom

  • Status: New / Active / Closed

  • Severity: Critical / High / Medium / Low

  • Owner: Unassigned / Assigned to me / Assigned to others

Search box: Type incident ID, title keywords, or entity name (IP, hostname).

Incident Investigation Workflow

Step 1: Triage (First 2 Minutes)

Goal: Determine if this is a real threat or noise.

  1. Open the incident (click on it)

  2. Read the title and description - What did the rule detect?

  3. Check severity - Critical/High requires immediate attention

  4. Review entities:

    • What IPs are involved?

    • Which hosts/users are affected?

    • Any known bad actors (Threat Intel)?

  5. Quick decision:

    • Real threat → Assign to yourself, change status to Active

    • False positive → Close immediately with classification "False Positive"

    • Unclear → Keep as New, ask senior analyst

Step 2: Investigation (10-30 Minutes)

Goal: Understand the full scope of the attack.

2.1 Review Alerts

  1. In the incident panel, scroll to Alerts section

  2. Click on the alert to expand

  3. Review:

    • Original query results - What data triggered this?

    • Original alert - When was it first detected?

    • Time generated - Timeline of attack

2.2 Examine Entities

  1. Scroll to Entities section

  2. Click on each entity to see details:

For IP entity:

  • Geolocation (Country, City, ISP)

  • Threat Intelligence: Is this a known bad IP?

  • Related incidents: Has this IP attacked before?

  • WHOIS information

For Host entity:

  • Operating system

  • Azure metadata (resource group, tags)

  • Other incidents on this host

  • Recent security events

For Account entity:

  • Account type (local, domain, Azure AD)

  • Last successful login

  • Recent activities

  • Group memberships

2.3 Investigation Graph

  1. Click Investigate button

  2. You see a visual graph:

How to use it:

  • Click on any entity to see its details

  • Click Insights to run pre-built queries (e.g., "Show all logins from this IP")

  • Click Related alerts to find similar events

  • Expand the graph to see connections you missed

Real example: You investigate a brute force incident. The Investigation Graph shows the same IP also appears in another incident: "Malicious PowerShell execution". This tells you: The attacker succeeded and is now running commands!

Step 3: Add Comments (Documentation)

  1. In the incident panel, scroll to Comments

  2. Click Add comment

  3. Write detailed notes:

Step 4: Classify and Close

When investigation is complete:

  1. Status → Change to Closed

  2. Classification:

    • True Positive - Suspicious Activity - Real attack, malicious intent

    • Benign Positive - Suspicious But Expected - Authorized security testing, pen-test

    • False Positive - Incorrect Alert Logic - Rule error, tune the query

    • False Positive - Inaccurate Data - Bad log data, misconfigured source

    • Undetermined - Not enough information to classify

  3. Comment (required):

    • Summarize findings

    • State the classification reason

    • Note any follow-up actions

Why classification matters:

  • Tracks rule accuracy (if 80% are False Positives, the rule needs tuning)

  • Metrics for SOC performance

  • Compliance documentation

Part 8: Advanced Topics

Workbooks - Security Dashboards

Workbooks are interactive dashboards. Think Power BI, but for security data.

Creating a Workbook

  1. Go to Workbooks → + Add workbook

  2. Click Edit

  3. Click + Add → Add query

Example query for a brute force dashboard:
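```kql
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
| summarize FailedAttempts = count() by Computer
| sort by FailedAttempts desc
```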

  1. Visualization: Select Bar chart

  2. Click Done editing

  3. Click Save

    • Title: Brute Force Attack Dashboard

    • Location: Same resource group as workspace

Advanced workbook features:

  • Parameters: Dropdown to select time range, computer name

  • Tabs: Organize multiple views (Overview, Detailed, Historical)

  • Grids: Clickable tables that update other visualizations

  • Maps: Geolocation of attacks

Example: Real-Time Attack Map
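A sketch using the built-in geo_info_from_ip_address() function; set the visualization to Map with Latitude/Longitude as the location fields:

```kql
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
| extend Geo = geo_info_from_ip_address(IpAddress)
| extend Latitude = toreal(Geo.latitude), Longitude = toreal(Geo.longitude)
| summarize Attacks = count() by IpAddress, Latitude, Longitude
```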

Watchlists - Custom Threat Intelligence

Watchlists are CSV files you upload to Sentinel for reference in queries.

Use Cases

  1. VIP Users: List of executives whose accounts need extra monitoring

  2. Known Bad IPs: IPs you've identified as malicious

  3. Approved Applications: Software allowed in your environment

  4. Service Accounts: Accounts expected to have unusual login patterns

Creating a Watchlist

  1. Go to Configuration → Watchlists

  2. Click + Add new

  3. Name: VIP-Users

  4. Alias: VIPUsers (used in queries)

  5. Upload CSV file:
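A minimal CSV (header row required; the entries are illustrative):

```csv
UserPrincipalName,DisplayName,Department
ceo@company.com,Jane Doe,Executive
cfo@company.com,John Smith,Finance
```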

  6. SearchKey: UserPrincipalName

  7. Click Create

Using Watchlist in Query
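The _GetWatchlist() function retrieves the list by its alias (SigninLogs requires the Entra ID connector):

```kql
let VIPs = _GetWatchlist('VIPUsers') | project UserPrincipalName;
SigninLogs
| where ResultType != "0"              // failed sign-ins
| where UserPrincipalName in (VIPs)
```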

This finds failed logins for VIP users only.

Automation Rules - Incident SOAR

Automation Rules automatically process incidents without human intervention.

Use Case 1: Auto-Assign Based on Severity

Rule: All Critical incidents → Assign to Senior Analyst Team

  1. Go to Automation → + Create → Automation rule

  2. Name: Auto-Assign-Critical-Incidents

  3. Trigger: When incident is created

  4. Conditions:

    • If Severity Equals Critical

  5. Actions:

    • Change status to Active

    • Assign owner to SOC-Senior-Team@company.com

  6. Order: 1 (runs before other rules)

  7. Click Apply

Use Case 2: Auto-Close Known False Positives

Rule: If incident title contains "EICAR" → Auto-close as test

  1. Create automation rule

  2. Trigger: When incident is created

  3. Conditions:

    • If Title Contains EICAR

  4. Actions:

    • Change status to Closed

    • Classification to Benign Positive - Suspicious But Expected

    • Add comment: Automated closure: EICAR test file detection is expected behavior

  5. Click Apply

Playbooks - Automated Response

Playbooks are Logic Apps that perform actions.

Example: Block IP at Azure NSG

Scenario: When a brute force incident is created, automatically block the attacker IP at the firewall.

Playbook logic:
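At a high level (action names here follow the standard Sentinel connector for Logic Apps, but treat this as a sketch):

  1. Trigger: Microsoft Sentinel incident (fires when the rule creates an incident)

  2. Entities - Get IPs: extract the attacker IP entities from the incident

  3. For each IP: add a Deny inbound rule to the NSG via an Azure Resource Manager action

  4. Add comment to incident: document the automated block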

Creating this requires knowledge of Logic Apps and Azure API permissions. That's beyond the scope of this guide, but Sentinel ships with templates you can customize.

Part 9: KQL Quick Reference Cheatsheet
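All snippets below are illustrative sketches against the standard SecurityEvent and Syslog schemas; substitute your own tables, columns, and thresholds.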

Table Operations
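```kql
SecurityEvent | take 10            // preview 10 rows
SecurityEvent | count              // total row count
search "203.0.113.7"               // search every table (slow - use sparingly)
```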

Filtering
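```kql
| where EventID == 4625                      // equals
| where EventID in (4624, 4625, 4672)        // any of a list
| where Account contains "admin"             // substring (case-insensitive)
| where Account has "admin"                  // whole-term match (faster)
| where IpAddress startswith "10."
| where isnotempty(IpAddress)
```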

Time Filters
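```kql
| where TimeGenerated > ago(1h)
| where TimeGenerated between (ago(7d) .. ago(1d))
| where TimeGenerated > datetime(2026-01-31)
```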

Projecting and Extending
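```kql
| project TimeGenerated, Computer, Account            // keep only these
| project-away SourceSystem, TenantId                 // drop these
| extend Domain = tostring(split(Account, @"\")[0])   // computed column
```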

Aggregations
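```kql
| summarize count() by Computer
| summarize dcount(IpAddress) by Account
| summarize Attempts = count(), Targets = make_set(Computer) by IpAddress
```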

Sorting and Limiting
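```kql
| sort by TimeGenerated desc
| top 10 by FailedAttempts desc
| take 100
```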

Joins
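```kql
// same column name on both sides (TableA/TableB are placeholders)
TableA | join kind=inner (TableB) on CommonColumn
// different column names
TableA | join kind=leftouter (TableB) on $left.IpAddress == $right.SourceIP
```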

String Operations
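```kql
| extend Upper = toupper(Account)
| extend Domain = tostring(split(UserPrincipalName, "@")[1])
| extend Code = extract(@"error (\d+)", 1, SyslogMessage)
| where SyslogMessage matches regex @"\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}"
| extend Clean = trim(" ", Account)
```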

Time Binning (for Charts)
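```kql
| summarize Count = count() by bin(TimeGenerated, 1h)
| render timechart
```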

Advanced Aggregations
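```kql
| summarize arg_max(TimeGenerated, *) by Account   // latest row per account
| summarize percentile(DurationMs, 95)             // DurationMs: example column
| make-series Logins = count() on TimeGenerated from ago(7d) to now() step 1h
```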

KQL Best Practices

1. Filter Early

Bad (slow):
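```kql
SecurityEvent
| extend AccountUpper = toupper(Account)   // computed over every row first
| where EventID == 4625                    // filter arrives too late
```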

Good (fast):
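```kql
SecurityEvent
| where EventID == 4625                    // cut the dataset first
| extend AccountUpper = toupper(Account)
```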

Why: KQL optimizes from left to right. Filtering before projecting is faster.

2. Use Indexed Columns

These columns are indexed (fast):

  • TimeGenerated

  • EventID (in SecurityEvent)

  • Computer

  • Type

These are NOT indexed (slower):

  • UserName, IpAddress, Message, SyslogMessage

Strategy: Filter on indexed columns first, then non-indexed.

3. Limit Time Range

Bad:
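```kql
SecurityEvent
| where EventID == 4625    // no time filter - scans everything retained
```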

This queries ALL data (years worth) - very slow and expensive!

Good:
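```kql
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
```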

Rule of thumb: Never query without a time filter unless you specifically need all historical data.

4. Use take for Testing

When developing queries, add | take 10 at the end to preview results quickly:
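```kql
SecurityEvent
| where EventID == 4688
| take 10                  // quick preview while iterating
```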

5. Comment Your Queries
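```kql
// Brute force: >10 failed logins from one IP within 5 minutes
SecurityEvent
| where EventID == 4625            // 4625 = failed logon
| where TimeGenerated > ago(5m)    // match the rule's run frequency
| summarize Attempts = count() by IpAddress
| where Attempts > 10              // tune per environment
```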

Conclusion

You've now learned Microsoft Sentinel from the ground up:

  1. Architecture - LAW, AMA, DCR, Sentinel layers

  2. Setup - Created workspace, enabled Sentinel, connected data sources

  3. Data Ingestion - Configured connectors and Data Collection Rules

  4. KQL - From basic filters to advanced correlations

  5. Detection - Created analytics rules and tested them

  6. Investigation - Triaged incidents, used Investigation Graph

  7. Advanced - Workbooks, watchlists, automation

Next Steps for Mastery

  1. Practice KQL daily - Spend 30 minutes per day writing queries

  2. Enable 20-30 rules - Start with Microsoft's pre-built templates

  3. Read attack reports - Understand real-world TTPs (MITRE ATT&CK)

  4. Build a home lab - Deploy a honeypot, see real attacks

Certification Path

  • AZ-500: Azure Security Engineer Associate (includes Sentinel module)

  • SC-200: Microsoft Security Operations Analyst (Sentinel-focused)

This guide is continuously updated. Last revision: February 2026.

Stay secure!
