
Microsoft Sentinel: The Complete Guide
From Zero to Security Operations Hero
Table of Contents
Part 1: Introduction to SIEM and Microsoft Sentinel
What is a SIEM?
SIEM stands for Security Information and Event Management. Think of it as a security operations center (SOC) in a box. A SIEM:
Collects logs from every system in your organization (servers, firewalls, endpoints, applications)
Normalizes data into a common format
Correlates events across different sources
Detects threats using rules and machine learning
Alerts analysts when something suspicious happens
Provides tools for investigation and response

The Problem SIEM Solves
Imagine you manage 500 servers, 1000 workstations, 50 network devices, and 100 applications. Each generates thousands of log entries per day:
Without SIEM: You have 1,650 log files to check manually. An attacker compromises a server, pivots to five workstations, exfiltrates data through your firewall, and you find out three months later.
With SIEM: All logs flow to one place. A detection rule spots the initial compromise in 5 minutes. An automated alert creates an incident. An analyst investigates using a unified interface. Total time to containment: 30 minutes.
Why Microsoft Sentinel?
Microsoft Sentinel is a cloud-native SIEM built on Azure. Unlike traditional SIEMs that require on-premises servers, Sentinel:
Scales automatically - ingest 10 GB/day or 10 TB/day without infrastructure changes
Pay-as-you-go - no massive upfront licensing fees
Integrates natively with Microsoft ecosystem (Azure, Microsoft 365, Defender, Entra ID)
AI/ML built-in - anomaly detection without manual tuning
SOAR capabilities - automated response with Logic Apps and Playbooks
Market Position (as of 2026):
Used by 60% of Fortune 500 companies
Processes over 5 trillion security events per month globally
Average detection time: 3.5 minutes from event to alert
Part 2: Understanding Sentinel Architecture
Before we start clicking buttons, you need to understand the data flow. This is the foundation of everything.
The Modern Azure Monitor Stack

Component 1: Azure Monitor Agent (AMA)
What is it?
A small software program (extension) that runs on your VMs, containers, or physical machines. It reads logs from the local system and sends them to Azure.
What it replaces
The legacy Log Analytics Agent (also called MMA - Microsoft Monitoring Agent). Microsoft deprecated MMA in August 2024. If you see documentation mentioning MMA, it's outdated.
Installation
You don't install it manually. When you add a VM to a Data Collection Rule (DCR), Azure automatically:
Deploys the AMA extension to the VM
Configures it with the DCR settings
Starts collecting logs
Installation time: 5-10 minutes typically.
Component 2: Data Collection Rules (DCR)
What is it?
A configuration object in Azure (not on the VM!) that defines:
What to collect: Which logs, which tables, which performance counters
From where: Which VMs, containers, or devices
Filter logic: Collect only Critical and Error events (ignore Info)
Destination: Which Log Analytics Workspace to send to
Why DCR is Revolutionary
Old way (MMA):
Agent collects EVERYTHING from the VM
Sends EVERYTHING to Azure (expensive!)
You pay for 1 GB/day of logs
80% of that data is useless noise
New way (AMA + DCR):
DCR says: "Only collect EventID 4625 (failed logins)"
Agent filters on the VM itself
Sends 200 MB/day to Azure
Cost savings: 80%
DCR Configuration Example
Let's say you want to monitor failed RDP logins on 100 Windows servers:
DCR Settings:
Name: DCR-RDP-Security
Platform: Windows
Resources: Add all 100 VMs
Data source: Windows Security Events
Filter: Only Event IDs 4625, 4624, 4672
Destination: Your Log Analytics Workspace → SecurityEvent table
Result: All 100 VMs automatically get the AMA agent installed, configured, and start sending only the filtered events to Sentinel.
Component 3: Log Analytics Workspace (LAW)
What is it?
The database where all your logs are stored. It's a specialized Azure resource optimized for:
High-volume ingestion (terabytes per day)
Fast queries (queries across billions of rows in seconds)
Long-term retention (30 days to 2 years)
It's Not Just for Sentinel
Log Analytics Workspace is a general Azure service. It's used by:
Azure Monitor - Performance metrics
Application Insights - Application telemetry
Microsoft Sentinel - Security logs
Azure Automation - Runbook logs
Microsoft Defender for Cloud - Security recommendations
This is why Sentinel is called "built on Log Analytics" - it's an application that runs queries against the workspace.
Tables in Log Analytics
Each data source creates a table:
SecurityEvent
Windows Security Events
Event ID 4625 (failed login), 4624 (successful login)
Syslog
Linux syslog
SSH logins, sudo commands, application logs
SigninLogs
Azure AD / Entra ID
User logins to Microsoft 365, Azure Portal
AzureActivity
Azure control plane
Resource creation, deletion, configuration changes
CommonSecurityLog
Firewalls, proxies
Network traffic, blocked connections
SecurityAlert
Microsoft Defender
Malware detections, suspicious activity
Workspace ID and Key
Every workspace has:
Workspace ID: A GUID like a1b2c3d4-e5f6-7890-abcd-ef1234567890
Primary Key: A long secret string
These are used:
By agents to authenticate (in legacy scenarios)
By API integrations
For troubleshooting
Security: Treat the Primary Key like a password. Anyone with it can send data to your workspace.
Component 4: Microsoft Sentinel Layer
What Sentinel Adds to Log Analytics
If Log Analytics is the database, Sentinel is the security application that runs on top of it. Sentinel adds:
Analytics Rules - Automated threat detection queries
Incidents - Organized alerts with workflow (assign, investigate, close)
Investigation Graph - Visual representation of attack chains
Threat Intelligence - IOC feeds integrated into queries
Watchlists - Custom lists (VIP users, known bad IPs, approved software)
Workbooks - Security dashboards with charts and maps
Automation Rules - Auto-assign, auto-close, trigger playbooks
Playbooks - Logic Apps for automated response (block IP, isolate VM, send email)
Hunting - Proactive threat hunting with saved queries
UEBA - User and Entity Behavior Analytics (ML-based anomaly detection)
Sentinel vs Log Analytics - Key Difference
Log Analytics alone:
You can write queries: SecurityEvent | where EventID == 4625
You can create alerts
You can build dashboards
Sentinel adds:
Pre-built content (4000+ detection rules from Microsoft and community)
Incident management (triaging, assignment, investigation)
Entity enrichment (IP → country, user → risk score)
MITRE ATT&CK mapping (T1078 - Valid Accounts)
Case management features
Pricing Model
Sentinel is billed separately from Log Analytics:
Pay-as-you-go
$2.76/GB analyzed
You pay for what Sentinel analyzes (not just stores)
Commitment Tier
$2.30/GB (100 GB/day)
Cheaper if you have predictable volume
Example:
Log Analytics ingestion: 10 GB/day × $2.76 = $27.60/day
Sentinel analysis: 10 GB/day × $2.76 = $27.60/day
Total: $55.20/day = ~$1,656/month
Cost optimization tricks:
Use Basic Logs (80% cheaper) for non-security tables
Enable Health Monitoring logs to track Sentinel itself
Use Scheduled Rules (run hourly) instead of real-time for low-priority threats
Part 3: Getting Started - Complete Setup Guide
This section will walk you through setting up Sentinel from scratch. By the end, you'll have a fully functional SIEM ready to detect threats.
Prerequisites
Before starting, ensure you have:
✅ Azure subscription with at least Contributor role
✅ At least $50 of Azure credits (for testing)
✅ Basic understanding of Windows/Linux command line
✅ Access to Azure Portal (portal.azure.com)
Step 1: Create Log Analytics Workspace
1.1 Navigate to Workspace Creation
Sign in to Azure Portal
In the search bar at the top, type: Log Analytics workspaces
Click Log Analytics workspaces (under Services)
Click + Create
1.2 Configure Basic Settings
Basics tab:
Project details:
Subscription: Select your subscription from dropdown
Resource group: Click Create new → Type rg-sentinel-prod
Why new RG: Keeps Sentinel resources organized and makes cleanup easier
Instance details:
Name: law-sentinel-prod-01
Why this naming: law = Log Analytics Workspace prefix, sentinel = purpose, prod = environment, 01 = instance number
Must be unique: If the name is taken, try law-sentinel-prod-yourname or add random numbers
Region: Choose a region close to your resources
Examples: East US, West Europe, Southeast Asia
Cost consideration: Log Analytics pricing can differ by region, so check the pricing page for your chosen region
Latency consideration: Data ingestion is faster from the same region
1.3 Configure Retention and Daily Cap
Pricing tier:
Should show Pay-as-you-go (this is standard)
Commitment tiers appear only after you click into pricing settings
Click Review + create → Create
Deployment time: 2-3 minutes
1.4 Post-Creation Configuration (Important!)
Once created, go to your workspace and configure these critical settings:
1. Set Daily Cap (Cost Protection):
Open your workspace
Left menu → Usage and estimated costs
Click Daily cap
Set to 5 GB (for lab/testing) or 50 GB (for production)
Check the box: Stop data collection when the daily limit is reached
Click OK
Why this matters: Prevents runaway costs if misconfigured. If you accidentally collect too much data (e.g., you enable verbose logging on 1000 VMs), the workspace stops ingesting at the cap limit.
Warning: When the cap is reached, no data is collected until the next day (midnight UTC). Use alerts to notify you if approaching the cap.
2. Configure Data Retention:
Left menu → Usage and estimated costs
Click Data Retention
Slide to 90 days (recommended for security)
Click OK
Retention pricing:
Days 1-30: Free (included in ingestion)
Days 31-90: Free while Microsoft Sentinel is enabled on the workspace
Days 91+: $0.10/GB/month
Why 90 days: Most security investigations require 30-60 days of historical data. Compliance frameworks (PCI-DSS, HIPAA) often require 90+ days.
1.5 Understand Workspace Structure
Your workspace now exists. Let's explore what's inside:
Left menu → Logs - This is where you write KQL queries
Left menu → Tables - Shows all tables (SecurityEvent, Syslog, etc.)
Left menu → Agents - (Legacy section, ignore for AMA)
Important concept: Your workspace is currently empty. No tables exist yet. Tables are created automatically when the first data arrives.
Step 2: Enable Microsoft Sentinel
2.1 Add Sentinel to Your Workspace
In the Azure Portal search bar, type: Microsoft Sentinel
Click Microsoft Sentinel (under Services)
You'll see: Add Microsoft Sentinel to a workspace
Click + Create
You'll see a list of your Log Analytics workspaces
Select law-sentinel-prod-01 (the one you just created)
Click Add
Provisioning time: 2-3 minutes
What happens during provisioning:
Sentinel-specific tables are created (SecurityAlert, SecurityIncident, ThreatIntelligenceIndicator)
Default watchlists are initialized
Content Hub catalog is loaded
Built-in analytics rules are deployed (disabled by default)
2.2 Initial Sentinel Tour
After provisioning, Sentinel opens. You'll see the dashboard:
Top metrics (currently all zeros):
Total incidents: 0 (no data yet)
New incidents: 0
Average time to triage: N/A
Average time to close: N/A
Main sections (left menu):
General
- Overview: Dashboard with metrics
- Logs: KQL query interface
Threat management
- Incidents: Your incident queue (the "mission control")
- Workbooks: Security dashboards
- Hunting: Proactive threat hunting queries
Content management
- Content hub: Install pre-built solutions
- Repositories: Connect GitHub for custom content
Configuration
- Data connectors: Configure data sources
- Analytics: Create and manage detection rules
- Automation: Incident automation and playbooks
- Settings: Sentinel-wide configuration
2.3 Verify Sentinel is Active
Run a test query:
Go to Logs
Close any pop-up tutorials
In the query window, type:
Click Run
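The query to type is missing from this copy; a minimal sketch that serves this validation step (any syntactically valid query works):

```kusto
// Returns up to 10 rows of Azure resource logs; an empty result
// still proves the query engine responded.
AzureDiagnostics
| take 10
```

An empty result set is expected in a brand-new workspace.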
Expected result:
If query runs (even if it returns 0 results): ✅ Sentinel is working
If you get an error: ❌ Something is wrong (unlikely)
Why this works: the AzureDiagnostics table stores Azure resource logs. Even when it is empty, a successful run validates that the KQL engine is operational.
Part 4: Data Ingestion - Connectors and Collection Rules
Now we connect data sources to Sentinel. This is where logs start flowing.
Understanding Content Hub (The New Way)
Solutions vs Connectors vs Data Sources
This confuses everyone, so here's the breakdown:
Solution: A package that includes multiple components (connectors + rules + workbooks). Example: "Apache Log4j Vulnerability Detection"
Data Connector: The configuration interface for a specific data source. Example: "Syslog via AMA"
Data Source: The actual log type collected. Example: Linux syslog facility "auth"
Flow: Install Solution (Content Hub) → Configure Connector (Data connectors) → Define Collection (DCR) → Data arrives (Tables)
Connector 1: Linux Syslog (Essential for Linux VMs)
Use Case
You have Linux VMs (Ubuntu, RHEL, etc.) and want to collect:
Authentication logs (successful/failed SSH)
Sudo command execution
Application logs (web servers, databases)
Kernel messages
Installation Steps
Step 1: Install Solution from Content Hub
Go to Microsoft Sentinel → Content management → Content hub
In the search box, type: syslog
You'll see: Syslog solution (by Microsoft)
Click the checkbox next to Syslog
Click Install (bottom right)
Wait for installation (1-2 minutes)
Verify: Go to Content hub → Filter: Installed → You should see Syslog with green checkmark
Step 2: Configure Data Connector
Go to Configuration → Data connectors
Search: syslog via ama
Click on Syslog via AMA
Click Open connector page
Step 3: Create Data Collection Rule
On the connector page, click + Create data collection rule
Basics tab:
Rule name: DCR-Linux-Syslog-All
Subscription: Your subscription
Resource group: Same as your workspace (e.g., rg-sentinel-prod)
Region: Same as your workspace
Platform Type: Linux
Click Next: Resources
Resources tab:
Click + Add resources
Browse and select your Linux VMs
Click Apply
Note: If you don't have VMs yet, skip this (leave empty). You can add VMs later.
Click Next: Collect
Collect tab (CRITICAL):
You'll see a table with all syslog facilities:
auth: Authentication/authorization (LOG_DEBUG)
authpriv: Private authentication (LOG_DEBUG)
cron: Scheduled tasks (LOG_DEBUG)
daemon: System daemons (LOG_DEBUG) ← Container logs appear here!
kern: Kernel messages (LOG_DEBUG)
local0-local7: Custom applications (LOG_DEBUG, all 8)
syslog: Syslog internal (LOG_DEBUG)
user: User-level messages (LOG_DEBUG)
Set ALL facilities to LOG_DEBUG. Why?
LOG_DEBUG captures everything (Debug, Info, Warning, Error, Critical)
You can always filter in KQL later
Missing a log level means missing attacks
Destination tab:
Destination type: Azure Monitor Logs (pre-selected)
Subscription: Your subscription
Account: Select your workspace (law-sentinel-prod-01)
Click Review + create → Create
Verification
After DCR creation:
Go to Monitor → Data Collection Rules
Find DCR-Linux-Syslog-All
Click it → Left menu → Resources
You should see your Linux VMs listed
Left menu → Data sources → Should show: Linux Syslog
Left menu → Destinations → Should show: Your workspace
Timeline:
DCR creation: 1 minute
AMA installation on VMs: 5-10 minutes
First logs appear in Sentinel: 10-15 minutes
Test Query
After 15 minutes:
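The test query is missing from this copy; a sketch using the standard Syslog table columns:

```kusto
Syslog
| where TimeGenerated > ago(15m)
| project TimeGenerated, Computer, Facility, SeverityLevel, SyslogMessage
| take 20
```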
If you see logs: ✅ Success!
Connector 2: Windows Security Events (Essential for Windows VMs)
Use Case
You have Windows VMs (Windows 10, Server 2019/2022) and want to collect:
Logon events (Event ID 4624, 4625)
Account management (Event ID 4720 - user created, 4726 - user deleted)
Process creation (Event ID 4688)
File access auditing (Event ID 4663)
Installation Steps
Step 1: Install Solution from Content Hub
Go to Content hub
Search: Windows Security Events
Select the Windows Security Events solution
Click Install
Wait for installation
Step 2: Configure Data Connector
After installation, click Manage OR go to Data connectors
Search: Windows Security Events via AMA
Click Open connector page
Click + Create data collection rule
Basics tab:
Rule name: DCR-Windows-AllSecurity
Subscription: Your subscription
Resource group: Same as workspace
Region: Same as workspace
Platform Type: Windows
Resources tab:
Click + Add resources
Select your Windows VMs
Click Apply
Collect tab:
You have three options:
All Security Events: everything (4000+ event types); maximum visibility, forensics; ~$5-10
Common: the most important events (~200 types); balanced (recommended); ~$2-3
Minimal: only critical events (~50 types); cost-sensitive; ~$0.50-1
Recommendation: Start with Common for production, All for honeypots/investigations.
Destination tab:
Select your workspace
Click Review + create → Create
Verification
Check agent installation:
Go to your Windows VM → Extensions
Look for AzureMonitorWindowsAgent
Status: Provisioning succeeded
Test query (after 15 minutes):
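The query body is missing from this copy; a sketch using the standard SecurityEvent table:

```kusto
SecurityEvent
| where TimeGenerated > ago(15m)
| summarize Count = count() by EventID
| order by Count desc
```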
Connector 3: Microsoft Defender for Cloud (Endpoint Protection)
Use Case
You have Microsoft Defender (built-in Windows antivirus) and want Sentinel to:
Receive malware detection alerts
Trigger incidents for threats
Correlate endpoint threats with network attacks
Setup (Easiest Connector!)
This connector doesn't require agents or DCRs. It connects Azure services directly.
Go to Data connectors
Search:
Microsoft Defender for CloudClick Open connector page
Click Connect (that's it!)
Under Configuration, select:
☑️ Create incidents from alerts (Recommended)
Severity: Select High, Medium, Low
That's it! No DCR, no agent installation. Defender alerts now flow to Sentinel automatically.
Test
On a Windows VM:
Create the EICAR test file:
Open Notepad
Paste: X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*
Save as virus.com on the Desktop
Defender deletes it immediately
Wait 5 minutes
Check Sentinel → Incidents
You should see: "EICAR test file detected"
Connector 4: Azure Activity (Azure Control Plane Monitoring)
Use Case
Monitor administrative actions on Azure resources:
VM created/deleted
Storage account access keys regenerated
Firewall rules modified
Role assignments changed
Setup
Go to Data connectors
Search: Azure Activity
Click Open connector page
Click Launch Azure Policy Assignment wizard
Select subscriptions to monitor
Click Review + create → Create
What this creates: An Azure Policy that automatically connects all Activity Logs from the selected subscriptions.
No agents required: Activity logs are control-plane operations, already stored in Azure.
Test Query
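The query itself is missing from this copy; a sketch consistent with the description that follows (standard AzureActivity columns):

```kusto
AzureActivity
| where TimeGenerated > ago(1h)
| where OperationNameValue has "Microsoft.Compute/virtualMachines"
| project TimeGenerated, OperationNameValue, Caller, ResourceGroup, ActivityStatusValue
```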
This shows all VM operations in the last hour (create, start, stop, delete).
Summary: What You've Built So Far
At this point, you have:
Log Analytics Workspace - The data warehouse
Microsoft Sentinel - The security layer
Syslog connector - Linux logs flowing in
Windows Security Events - Windows logs flowing in
Defender for Cloud - Endpoint protection integrated
Azure Activity - Azure administrative actions logged
Data flow is active. Logs are being collected. But no detection is happening yet. That requires Analytics Rules (next section).
Part 5: KQL Masterclass - Query Language Deep Dive
Kusto Query Language (KQL) is the language of Sentinel. It's also used in Azure Data Explorer, Application Insights, and Azure Monitor. Learning KQL is one of the most valuable skills in the Microsoft ecosystem.
Philosophy: Think in Pipelines
KQL is not SQL. It's a functional pipeline language. Data flows left-to-right through operators (pipes):
Compare to water filtration:
In KQL:

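A sketch of the pipeline idea, each stage narrowing the stream like a successive filter:

```kusto
SecurityEvent                               // raw stream (millions of rows)
| where TimeGenerated > ago(1d)             // coarse time filter
| where EventID == 4625                     // finer filter: failed logins only
| summarize Failures = count() by Computer  // aggregate
| order by Failures desc                    // present
```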
The Pipe Operator: |
Every KQL query starts with a table name, then pipes (|) transform the data:
Key concept: Each pipe reduces the dataset. You start with millions of rows and progressively filter down to what you need.
Essential Operators
1. where - The Most Used Operator
Filters rows based on conditions.
Performance tip: Filter on indexed columns first (EventID, TimeGenerated, Computer). String searches (contains, matches regex) are slower.
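A sketch illustrating the tip, with the cheap indexed filters placed first:

```kusto
SecurityEvent
| where TimeGenerated > ago(1h)   // indexed column: evaluated cheaply
| where EventID == 4625           // indexed column
| where Account !has "SYSTEM"     // string search: leave for last
```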
2. project - Select Columns
Choose which columns to display (like SQL SELECT).
Variant: project-away - Remove specific columns:
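A sketch of the positive form:

```kusto
// Keep only the named columns
SecurityEvent
| where EventID == 4625
| project TimeGenerated, Computer, Account, IpAddress
```

To go the other way, `| project-away SourceSystem, TenantId` drops just those two columns and keeps the rest.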
3. extend - Create Calculated Columns
Add new columns based on existing data.
Use cases:
Parsing strings: extend Domain = split(UserPrincipalName, "@")[1]
Calculations: extend DurationMinutes = (EndTime - StartTime) / 1m
Enrichment: extend Severity = iff(FailedAttempts > 100, "High", "Low")
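A sketch combining the three use cases (standard SigninLogs columns assumed):

```kusto
SigninLogs
| extend Domain = tostring(split(UserPrincipalName, "@")[1])  // parsing
| extend Hour = hourofday(TimeGenerated)                      // calculation
| extend OffHours = iff(Hour < 6 or Hour > 22, "Yes", "No")   // enrichment
| project TimeGenerated, UserPrincipalName, Domain, OffHours
```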
4. summarize - Aggregation
Group data and calculate statistics (like SQL GROUP BY).
The simplest form counts rows (summarize count()); you can also compute several aggregations in one pass.
Available aggregation functions:
count() - Count rows
dcount(column) - Count distinct values
sum(column) - Sum values
avg(column) - Average
min(column), max(column) - Min/max
make_set(column) - Create array of unique values
make_list(column) - Create array (includes duplicates)
percentile(column, 95) - 95th percentile
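A sketch computing several aggregations in one pass:

```kusto
SecurityEvent
| where EventID == 4625
| summarize
    Attempts = count(),
    UniqueAccounts = dcount(Account),
    AccountsTried = make_set(Account),
    FirstSeen = min(TimeGenerated),
    LastSeen = max(TimeGenerated)
    by IpAddress
```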
5. join - Combine Tables
Merge data from two tables.
Example: Correlate failed logins with successful logins
Join types:
inner - Only rows that match in both tables
leftouter - All rows from left, nulls for non-matching right
rightouter - All rows from right, nulls for non-matching left
fullouter - All rows from both
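The failed-then-successful correlation mentioned above can be sketched with an inner join:

```kusto
let Failed = SecurityEvent
    | where EventID == 4625
    | summarize FailedCount = count() by IpAddress;
SecurityEvent
| where EventID == 4624
| distinct IpAddress, Account
| join kind=inner (Failed) on IpAddress
| project IpAddress, Account, FailedCount
```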
6. let - Variables and Subqueries
Store intermediate results or values.
Example: Define a time range:
Example: Reusable list:
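Sketches of both examples in one query (the account list is illustrative):

```kusto
// A time-range variable and a reusable list
let lookback = 24h;
let WatchedAccounts = dynamic(["admin", "administrator", "root"]);
SecurityEvent
| where TimeGenerated > ago(lookback)
| where Account in~ (WatchedAccounts)
```

`in~` is the case-insensitive variant of `in`.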
7. Time Operators
Relative time:
ago(1h) - One hour ago
ago(1d) - One day ago
ago(7d) - Seven days ago
Absolute time:
datetime(2026-01-31) - Specific date
datetime(2026-01-31T14:30:00) - Specific timestamp
Time parsing:
hourofday(TimeGenerated) - 0-23
dayofweek(TimeGenerated) - 0 (Sunday) to 6 (Saturday)
dayofmonth(TimeGenerated) - 1-31
monthofyear(TimeGenerated) - 1-12
Time binning (for charts):
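A sketch:

```kusto
SecurityEvent
| where EventID == 4625
| summarize Attempts = count() by bin(TimeGenerated, 1h)
| render timechart
```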
Creates hourly buckets: [00:00, 01:00, 02:00, ...]
Practical KQL Examples
Example 1: Find Brute Force Attacks
Goal: Detect IPs with more than 10 failed login attempts in 5 minutes.
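The query body is missing from this copy; a sketch that matches the explanation below, step for step:

```kusto
SecurityEvent
| where EventID == 4625
| where TimeGenerated > ago(5m)
| summarize
    FailedAttempts = count(),
    AccountsTried = make_set(Account),
    FirstAttempt = min(TimeGenerated),
    LastAttempt = max(TimeGenerated)
    by IpAddress, Computer
| where FailedAttempts > 10
| extend AttackDuration = LastAttempt - FirstAttempt
| order by FailedAttempts desc
```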
Explanation:
Filter to failed login events
Only last 5 minutes (matches rule frequency)
Group by attacker IP and target computer
Count attempts and collect usernames tried
Filter to only "brute force" level (>10 attempts)
Calculate attack duration
Display results sorted by severity
Example 2: Detect Successful Login After Failed Attempts
Goal: Find IPs that failed many times, then succeeded (credential stuffing).
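A sketch of the pattern (the threshold of 20 failures is an illustrative choice):

```kusto
let FailedIPs = SecurityEvent
    | where TimeGenerated > ago(1h)
    | where EventID == 4625
    | summarize FailedCount = count() by IpAddress
    | where FailedCount > 20;
SecurityEvent
| where TimeGenerated > ago(1h)
| where EventID == 4624
| join kind=inner (FailedIPs) on IpAddress
| project TimeGenerated, IpAddress, Account, Computer, FailedCount
```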
Why this is dangerous: Attackers often try 100 passwords, then get lucky. This query finds those successes.
Example 3: Detect Suspicious Process Execution
Goal: Find processes started by accounts that logged in via RDP (potential attacker activity).
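A sketch (LogonType 10 is RemoteInteractive, i.e. RDP; Event ID 4688 requires process-creation auditing to be enabled):

```kusto
let RdpUsers = SecurityEvent
    | where TimeGenerated > ago(1d)
    | where EventID == 4624 and LogonType == 10
    | distinct Account;
SecurityEvent
| where TimeGenerated > ago(1d)
| where EventID == 4688
| where Account in (RdpUsers)
| project TimeGenerated, Computer, Account, NewProcessName, CommandLine
```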
What it shows: Every process launched by RDP users. Look for suspicious commands (powershell, wget, certutil, net user).
Example 4: Log4Shell Detection in Syslog
Goal: Find JNDI injection patterns in Linux logs.
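A sketch matching the explanation below; the regexes are illustrative and catch only the plain `${jndi:...}` form, not obfuscated variants:

```kusto
Syslog
| where SyslogMessage has "jndi:"
| extend Protocol = extract(@"jndi:(\w+):", 1, SyslogMessage)
| extend AttackerServer = extract(@"jndi:\w+://([^/}\s]+)", 1, SyslogMessage)
| where isnotempty(Protocol)
| project TimeGenerated, Computer, Protocol, AttackerServer, SyslogMessage
```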
Explanation:
Search for JNDI patterns in syslog messages
Extract the protocol (ldap, rmi, dns, etc.)
Extract attacker's server IP and port
Filter out false positives (where Protocol is empty)
Example 5: Time Series Analysis - Attack Trends
Goal: Create a timeline of attacks over 7 days.
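A sketch:

```kusto
SecurityEvent
| where TimeGenerated > ago(7d)
| where EventID == 4625
| summarize Attacks = count() by bin(TimeGenerated, 1h)
| render timechart
```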
This creates a line chart showing attacks over time. Useful for:
Identifying attack waves
Correlating with external events (data breaches leaked your IP)
Determining peak attack times (often nights/weekends)
Advanced KQL Techniques
Parsing JSON and XML
Scenario: Your application logs contain JSON data in a text field.
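A sketch; the table and field names here are hypothetical placeholders for your own custom log:

```kusto
// Hypothetical custom table AppLogs_CL whose RawData_s field holds JSON
// such as {"user":"alice","action":"delete"}
AppLogs_CL
| extend Parsed = parse_json(RawData_s)
| extend User = tostring(Parsed.user), Action = tostring(Parsed.action)
| project TimeGenerated, User, Action
```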
Dynamic Thresholds with Percentiles
Scenario: Detect abnormal login counts (more than usual).
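A sketch matching the explanation below:

```kusto
let Baseline = toscalar(
    SigninLogs
    | where TimeGenerated between (ago(8d) .. ago(1d))
    | summarize DailyLogins = count() by bin(TimeGenerated, 1d)
    | summarize percentile(DailyLogins, 95));
SigninLogs
| where TimeGenerated > ago(1d)
| summarize TodayLogins = count()
| where TodayLogins > Baseline
```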
Explanation:
Calculate the 95th percentile of logins over the past week
Count today's logins
If today exceeds the baseline, flag as anomalous
IP Geolocation (Manual Enrichment)
While raw logs don't have geolocation, you can enrich them:
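A sketch of one way to do it; GeoIPDatabase here is a hypothetical lookup you would import yourself, for example as a watchlist whose SearchKey holds the IP and which carries a Country column:

```kusto
SecurityEvent
| where EventID == 4625
| join kind=leftouter (
    _GetWatchlist('GeoIPDatabase')
    | project IpAddress = tostring(SearchKey), Country
  ) on IpAddress
| project TimeGenerated, IpAddress, Country
```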
Note: Most Sentinel environments don't have GeoIPDatabase by default. Geolocation is added automatically in Incidents via Entity Mapping.
Part 6: Analytics Rules - The Art of Detection
Analytics Rules are the heart of Sentinel. They automatically run queries on a schedule and create incidents when threats are detected.
Rule Types
Scheduled
Runs a KQL query on a timer
Most common (95% of rules)
Microsoft Security
Imports alerts from other MS services
Defender for Cloud, Microsoft 365 Defender
Fusion
ML-based correlation of multiple signals
Advanced persistent threats
Anomaly
ML detects unusual behavior
Baselines "normal" then alerts on deviations
Threat Intelligence
Matches IOCs from TI feeds
Known bad IPs, domains, file hashes
We'll focus on Scheduled rules - they're the most flexible and powerful.
Anatomy of a Scheduled Rule
A scheduled rule has 5 tabs:
Tab 1: General (Metadata)
Name: RDP Brute Force Detection
Should be descriptive and specific
Include the attack type and data source
Description:
Explain what it detects
Include the threshold logic
Reference the EventID if applicable
Tactics (MITRE ATT&CK):
Select: Credential Access
This maps to a MITRE ATT&CK framework tactic
Techniques:
Select:
T1110.001 - Brute Force: Password Guessing
Severity:
Low - Informational, expected behavior
Medium - Suspicious, needs investigation
High - Likely attack, immediate attention
Critical - Active breach, emergency response
For brute force: Choose High
Status: Enabled (or Disabled if you're testing)
Tab 2: Set Rule Logic (The Query)
Rule query:
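The query body is missing from this copy; a sketch consistent with the entity mappings and threshold configured below (IpAddress and Computer are standard SecurityEvent columns):

```kusto
SecurityEvent
| where EventID == 4625
| where LogonType == 10        // RemoteInteractive (RDP)
| summarize
    FailedAttempts = count(),
    AccountsTried = make_set(Account)
    by IpAddress, Computer
| where FailedAttempts > 10
```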
Alert enrichment - Entity mapping (Crucial!):
Click + Add new entity
Configure IP entity:
Entity type: Select IP
Identifier: Select Address
Value: Select IpAddress
Click + Add new entity again
Configure Host entity:
Entity type: Select Host
Identifier: Select HostName
Value: Select Computer
Why map entities:
Sentinel automatically enriches IPs with geolocation
Investigation graph visualizes relationships
You can pivot to other incidents involving the same IP
Query scheduling:
Run query every: 5 minutes
Lookup data from the last: 5 minutes
Critical rule: These two values should usually match to avoid gaps or duplicates.
Alert threshold:
Generate alert when number of query results:
Is greater than 0
This means: "If the query returns any rows, create an alert."
Tab 3: Incident Settings
Create incidents from alerts triggered by this analytics rule: Enabled
Why this matters: Without this, you get alerts but no incidents. Incidents are what SOC analysts work with.
Alert grouping:
Option 1: Group all alerts triggered by this rule into a single incident
Use case: You want ONE incident for "RDP attacks today" regardless of how many IPs
Not recommended for this rule
Option 2: Group related alerts, triggered by this analytics rule, into incidents
Group by: Select IP Address (or IpAddress if shown as a column)
Re-open closed matching incidents: Disabled
Limit the group to alerts created within the selected time frame: 5 hours
Why Option 2: If the same IP attacks 20 times over 2 hours, you get ONE incident with 20 alerts. If a different IP attacks, you get a separate incident. This is the right balance.
Tab 4: Automated Response
This is where you trigger Playbooks (Logic Apps) automatically.
Example use case: When a High-severity brute force incident is created, automatically:
Send email to SOC team
Block the IP at the firewall (via API)
Create a ticket in ServiceNow
For now, leave this empty. We'll cover automation later.
Tab 5: Review + Create
Review your configuration and click Create.
The rule starts running immediately (if Status = Enabled).
Testing Your Rule
Option 1: Manual Test (Before Deployment)
In the Set rule logic tab:
After writing your query, click Test with current data
Sentinel runs the query against the last 50 results
You see:
"Query validation passed" - Syntax is correct
"Query failed" - Syntax error (fix and retry)
You can see sample results
Note: This doesn't create incidents, just validates the query.
Option 2: Simulate Attack
Create the rule
Generate failed login events (from Kali: xfreerdp3 /u:admin /p:WrongPass /v:YOUR_VM /cert:ignore +auth-only, run 15 times)
Wait for the rule to run (5 minutes)
Check Incidents page
Pre-Built Rules from Content Hub
When you installed solutions (Syslog, Windows Security Events), you also got pre-built rules. They're disabled by default.
Enabling Pre-Built Rules
Go to Analytics → Rule templates tab
Search for rules (e.g., brute force)
Click on a template
Click Create rule
Review the query and settings
Click through tabs → Create
Popular rules to enable:
"Rare application consent": OAuth app abuse (Medium)
"Malicious Inbox Rule": Email forwarding to external addresses (High)
"Multiple Password Reset by user": Account takeover attempt (Medium)
"TI map IP entity to AzureActivity": Known malicious IPs in Azure logs (High)
"Possible Log4j exploitation": CVE-2021-44228 (Critical)
Recommendation: Enable 10-20 rules to start. Too many rules = alert fatigue.
Part 7: Incident Management and Investigation
Analytics rules create Alerts. Alerts are grouped into Incidents. Incidents are what SOC analysts investigate.
The Incident Lifecycle

New: Incident just created, unassigned, needs triage
Active: Assigned to an analyst, investigation in progress
Closed: Investigation complete, classified, documented
Incident Page - Your Mission Control
Understanding the Incidents Page
When you open Incidents, you see:
Top metrics:
Total incidents: All incidents (ever)
New incidents: Unassigned, needs attention
Active incidents: Currently being worked on
Closed incidents: Resolved
Incident list columns:
ID: Unique number (e.g., #12345)
Title: From the analytics rule name
Severity: Critical / High / Medium / Low / Informational
Status: New / Active / Closed
Owner: Assigned analyst
Created time: When first detected
Last updated: When last modified
Alerts: Number of alerts in this incident (grouped)
Entities: Number of entities involved (IPs, hosts, accounts)
Products: Source (Microsoft Sentinel, Defender, etc.)
Filtering and Searching
Filters (top of page):
Time range: Last 24 hours / 7 days / 30 days / Custom
Status: New / Active / Closed
Severity: Critical / High / Medium / Low
Owner: Unassigned / Assigned to me / Assigned to others
Search box: Type incident ID, title keywords, or entity name (IP, hostname).
Incident Investigation Workflow
Step 1: Triage (First 2 Minutes)
Goal: Determine if this is a real threat or noise.
Open the incident (click on it)
Read the title and description - What did the rule detect?
Check severity - Critical/High requires immediate attention
Review entities:
What IPs are involved?
Which hosts/users are affected?
Any known bad actors (Threat Intel)?
Quick decision:
Real threat → Assign to yourself, change status to Active
False positive → Close immediately with classification "False Positive"
Unclear → Keep as New, ask senior analyst
Step 2: Investigation (10-30 Minutes)
Goal: Understand the full scope of the attack.
2.1 Review Alerts
In the incident panel, scroll to Alerts section
Click on the alert to expand
Review:
Original query results - What data triggered this?
Original alert - When was it first detected?
Time generated - Timeline of attack
2.2 Examine Entities
Scroll to Entities section
Click on each entity to see details:
For IP entity:
Geolocation (Country, City, ISP)
Threat Intelligence: Is this a known bad IP?
Related incidents: Has this IP attacked before?
WHOIS information
For Host entity:
Operating system
Azure metadata (resource group, tags)
Other incidents on this host
Recent security events
For Account entity:
Account type (local, domain, Azure AD)
Last successful login
Recent activities
Group memberships
2.3 Investigation Graph

Click Investigate button
You see a visual graph:
How to use it:
Click on any entity to see its details
Click Insights to run pre-built queries (e.g., "Show all logins from this IP")
Click Related alerts to find similar events
Expand the graph to see connections you missed
Real example: You investigate a brute force incident. The Investigation Graph shows the same IP also appears in another incident: "Malicious PowerShell execution". This tells you: The attacker succeeded and is now running commands!
Step 3: Add Comments (Documentation)
In the incident panel, scroll to Comments
Click Add comment
Write detailed notes:
Step 4: Classify and Close
When investigation is complete:
Status → Change to Closed
Classification:
True Positive - Suspicious Activity - Real attack, malicious intent
Benign Positive - Suspicious But Expected - Authorized security testing, pen-test
False Positive - Incorrect Alert Logic - Rule error, tune the query
False Positive - Inaccurate Data - Bad log data, misconfigured source
Undetermined - Not enough information to classify
Comment (required):
Summarize findings
State the classification reason
Note any follow-up actions
Why classification matters:
Tracks rule accuracy (if 80% are False Positives, the rule needs tuning)
Metrics for SOC performance
Compliance documentation
Part 8: Advanced Topics
Workbooks - Security Dashboards
Workbooks are interactive dashboards. Think Power BI, but for security data.
Creating a Workbook
Go to Workbooks → + Add workbook
Click Edit
Click + Add → Add query
Example query for a brute force dashboard:
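The query itself appears to have been lost in conversion; a sketch of what fits here (assuming the SecurityEvent table, where EventID 4625 is a failed logon):

```kusto
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625                              // failed logon
| summarize FailedLogons = count() by Computer
| sort by FailedLogons desc
```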
Visualization: Select Bar chart
Click Done editing
Click Save
Title: Brute Force Attack Dashboard
Location: Same resource group as workspace
Advanced workbook features:
Parameters: Dropdown to select time range, computer name
Tabs: Organize multiple views (Overview, Detailed, Historical)
Grids: Clickable tables that update other visualizations
Maps: Geolocation of attacks
Example: Real-Time Attack Map
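The map example didn't survive conversion. A minimal sketch: aggregate failed logons by source IP, then choose the Map visualization in the workbook and let it geolocate the IpAddress column.

```kusto
SecurityEvent
| where TimeGenerated > ago(1h)
| where EventID == 4625                     // failed logon
| summarize Attacks = count() by IpAddress  // one point per source IP
```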
Watchlists - Custom Threat Intelligence
Watchlists are CSV files you upload to Sentinel for reference in queries.
Use Cases
VIP Users: List of executives whose accounts need extra monitoring
Known Bad IPs: IPs you've identified as malicious
Approved Applications: Software allowed in your environment
Service Accounts: Accounts expected to have unusual login patterns
Creating a Watchlist
Go to Configuration → Watchlists
Click + Add new
Name: VIP-Users
Alias: VIPUsers (used in queries)
Upload CSV file
SearchKey: UserPrincipalName
Click Create
Using Watchlist in Query
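The query is missing here; a sketch using the built-in `_GetWatchlist()` function and the `VIPUsers` alias defined above (SigninLogs uses ResultType "0" for success):

```kusto
let vips = _GetWatchlist('VIPUsers') | project SearchKey;  // SearchKey = UserPrincipalName
SigninLogs
| where TimeGenerated > ago(7d)
| where ResultType != "0"                  // failed sign-ins only
| where UserPrincipalName in (vips)        // restrict to watchlist members
```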
This finds failed logins for VIP users only.
Automation Rules - Incident SOAR
Automation Rules automatically process incidents without human intervention.
Use Case 1: Auto-Assign Based on Severity
Rule: All Critical incidents → Assign to Senior Analyst Team
Go to Automation → + Create → Automation rule
Name: Auto-Assign-Critical-Incidents
Trigger: When incident is created
Conditions: If Severity Equals Critical
Actions:
Change status to Active
Assign owner to SOC-Senior-Team@company.com
Order: 1 (runs before other rules)
Click Apply
Use Case 2: Auto-Close Known False Positives
Rule: If incident title contains "EICAR" → Auto-close as test
Create automation rule
Trigger: When incident is created
Conditions: If Title Contains EICAR
Actions:
Change status to Closed
Classification to Benign Positive - Suspicious But Expected
Add comment: Automated closure: EICAR test file detection is expected behavior
Click Apply
Playbooks - Automated Response
Playbooks are Azure Logic Apps that perform response actions when triggered by an alert or incident.
Example: Block IP at Azure NSG
Scenario: When a brute force incident is created, automatically block the attacker IP at the firewall.
Playbook logic:
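The flow diagram didn't survive conversion. At a high level (a sketch, not an exact template), the flow looks like:

```
Trigger: Microsoft Sentinel incident created (brute force rule)
Step 1:  Entities - Get IPs              // extract attacker IP(s) from the incident
Step 2:  For each IP:
           call the Azure REST API to add a Deny inbound rule to the NSG
Step 3:  Add a comment to the incident   // e.g. "Attacker IP blocked at NSG"
```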
Creating this requires knowledge of Logic Apps and Azure API permissions. That is beyond the scope of this guide, but Sentinel ships with playbook templates you can customize.
Part 9: KQL Quick Reference Cheatsheet
Table Operations
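The snippets in this cheatsheet appear to have been stripped in conversion; a sketch for this section, using the built-in SecurityEvent table:

```kusto
SecurityEvent          // reference a table by name
| count                // total row count

SecurityEvent
| distinct Computer    // unique values of a column

SecurityEvent
| take 10              // preview 10 arbitrary rows
```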
Filtering
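A sketch of common filter operators (the original snippet is missing):

```kusto
SecurityEvent
| where EventID == 4625                 // equals
| where Account != "SYSTEM"             // not equals
| where Computer contains "srv"         // case-insensitive substring
| where IpAddress startswith "10."      // prefix match

SecurityEvent
| where EventID in (4624, 4625, 4672)   // any of a list
```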
Time Filters
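A sketch of the two usual forms, relative and absolute:

```kusto
SecurityEvent
| where TimeGenerated > ago(1h)                                              // last hour

SecurityEvent
| where TimeGenerated between (datetime(2026-01-01) .. datetime(2026-01-31)) // fixed window
```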
Projecting and Extending
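A sketch: `project` keeps columns, `extend` adds computed ones:

```kusto
SecurityEvent
| project TimeGenerated, Computer, Account                    // keep only these columns
| extend Day = format_datetime(TimeGenerated, "yyyy-MM-dd")   // add a computed column
```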
Aggregations
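A sketch of `summarize` with the two most common aggregation functions:

```kusto
SecurityEvent
| where EventID == 4625
| summarize FailedCount = count() by Account            // row count per group

SecurityEvent
| summarize UniqueIPs = dcount(IpAddress) by Computer   // distinct count per group
```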
Sorting and Limiting
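A sketch showing `sort` + `take`, and the equivalent single `top` operator:

```kusto
SecurityEvent
| summarize Count = count() by Account
| sort by Count desc      // highest first
| take 10                 // keep the first 10 rows

// Equivalent in one operator:
SecurityEvent
| summarize Count = count() by Account
| top 10 by Count desc
```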
Joins
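A sketch of a security-relevant join: failed logons (4625) followed by a success (4624) from the same account and IP, a classic brute-force-succeeded pattern:

```kusto
let Failed = SecurityEvent
    | where EventID == 4625
    | project Account, IpAddress, FailTime = TimeGenerated;
let Success = SecurityEvent
    | where EventID == 4624
    | project Account, IpAddress, SuccessTime = TimeGenerated;
Failed
| join kind=inner Success on Account, IpAddress   // match on both columns
| where SuccessTime > FailTime                    // success came after the failure
```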
String Operations
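A sketch of common string functions:

```kusto
SecurityEvent
| extend Upper = toupper(Account)                    // change case
| extend User = tostring(split(Account, "\\")[1])    // split DOMAIN\user, take the user part
| where Account matches regex @"(?i)admin"           // case-insensitive regex match
| extend NameLength = strlen(Account)                // string length
```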
Time Binning (for Charts)
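A sketch: `bin()` groups timestamps into fixed buckets, which `render` turns into a chart:

```kusto
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
| summarize FailedLogons = count() by bin(TimeGenerated, 1h)   // hourly buckets
| render timechart
```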
Advanced Aggregations
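A sketch combining several aggregations in one `summarize`, profiling each attacking IP:

```kusto
SecurityEvent
| where EventID == 4625
| summarize
    Attempts  = count(),
    Accounts  = make_set(Account, 10),   // up to 10 distinct targeted accounts
    FirstSeen = min(TimeGenerated),
    LastSeen  = max(TimeGenerated)
    by IpAddress
| extend DurationMin = datetime_diff("minute", LastSeen, FirstSeen)
```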
KQL Best Practices
1. Filter Early
Bad (slow):
Good (fast):
Why: each operator processes the output of the previous one, so filtering early reduces the rows every later operator has to touch.
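The two example queries referenced above appear to have been lost in conversion; a sketch of the pattern:

```kusto
// Bad (slow): shape the columns first, filter later
SecurityEvent
| project Account, IpAddress, EventID
| where EventID == 4625

// Good (fast): filter first, then shape the output
SecurityEvent
| where TimeGenerated > ago(1d)
| where EventID == 4625
| project Account, IpAddress
```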
2. Use Indexed Columns
These columns are indexed (fast):
TimeGenerated, EventID (in SecurityEvent), Computer, Type
These are NOT indexed (slower):
UserName, IpAddress, Message, SyslogMessage
Strategy: Filter on indexed columns first, then non-indexed.
3. Limit Time Range
Bad:
This queries ALL data (years' worth) - very slow and expensive!
Good:
Rule of thumb: Never query without a time filter unless you specifically need all historical data.
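A sketch of the bad and good forms described above:

```kusto
// Bad: no time filter - scans everything the workspace retains
SecurityEvent
| where EventID == 4625

// Good: bounded window
SecurityEvent
| where TimeGenerated > ago(24h)
| where EventID == 4625
```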
4. Use take for Testing
When developing queries, add | take 10 at the end to preview results quickly:
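For example:

```kusto
SecurityEvent
| where EventID == 4625
| take 10    // quick preview while developing
```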
5. Comment Your Queries
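The example seems to be missing here; a sketch of a commented detection query:

```kusto
// Brute force detection: 10+ failed logons from one IP in the last hour
SecurityEvent
| where TimeGenerated > ago(1h)              // bound the time range
| where EventID == 4625                      // 4625 = failed logon
| summarize Failures = count() by IpAddress
| where Failures >= 10                       // alert threshold
```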
Conclusion
You've now learned Microsoft Sentinel from the ground up:
Architecture - LAW, AMA, DCR, Sentinel layers
Setup - Created workspace, enabled Sentinel, connected data sources
Data Ingestion - Configured connectors and Data Collection Rules
KQL - From basic filters to advanced correlations
Detection - Created analytics rules and tested them
Investigation - Triaged incidents, used Investigation Graph
Advanced - Workbooks, watchlists, automation
Next Steps for Mastery
Practice KQL daily - Spend 30 minutes per day writing queries
Enable 20-30 rules - Start with Microsoft's pre-built templates
Join the community - Microsoft Sentinel GitHub
Read attack reports - Understand real-world TTPs (MITRE ATT&CK)
Build a home lab - Deploy a honeypot, see real attacks
Recommended Reading
Certification Path
AZ-500: Azure Security Engineer Associate (includes Sentinel module)
SC-200: Microsoft Security Operations Analyst (Sentinel-focused)
This guide is continuously updated. Last revision: February 2026.
Stay secure!