
AZ-500 Lab 04: Configuring and Securing ACR and AKS — Walkthrough
Lab source: Microsoft Learning - AZ500 Lab 04
Estimated time: 60–90 minutes (including reading explanations)
Prerequisites:
An Azure subscription with Owner or Contributor role
Global Administrator role in the associated Microsoft Entra tenant
Basic familiarity with the Azure Portal
Cloud Shell vs Local Terminal - You Choose
The original Microsoft lab uses Azure Cloud Shell (a browser-based terminal inside the Azure Portal). This is convenient because az CLI, kubectl, and other tools are pre-installed. However, you are absolutely not limited to Cloud Shell - you can run every command in this lab from your local terminal (macOS, Linux, or Windows).
Option A: Azure Cloud Shell (no setup needed)
1. Click the terminal icon (`>_`) in the top-right of the Azure Portal
2. Select Bash, choose No storage account required, pick your subscription, click Apply
3. Everything is pre-installed - just start typing commands
Option B: Local Terminal (recommended for real-world practice)
If you prefer working from your own machine, install these tools first:
1. Azure CLI - the `az` command:

```shell
# macOS (Homebrew)
brew install azure-cli

# Linux (Ubuntu/Debian)
curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash

# Windows (winget)
winget install Microsoft.AzureCLI
```

2. kubectl - the Kubernetes CLI:
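The install commands for kubectl are missing in this copy; a sketch using the standard package managers (any one of these is enough):

```shell
# macOS (Homebrew)
brew install kubectl

# Any platform: let the Azure CLI install a matching kubectl
az aks install-cli

# Windows (winget)
winget install Kubernetes.kubectl
```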
3. Log in to Azure:
This opens a browser window for authentication. After signing in, your terminal is connected to your Azure subscription. If you have multiple subscriptions, set the right one:
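The login commands themselves are not shown here; they would be (the subscription placeholder is yours to fill in):

```shell
# Sign in - opens a browser window for authentication
az login

# If you have multiple subscriptions, select the right one
az account set --subscription "<subscription-name-or-id>"
```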
4. For editing YAML files: Use any text editor you prefer - VS Code, nano, vim, or even a graphical editor. When the lab says `code ./nginxexternal.yaml` (Cloud Shell's built-in editor), you can use `nano ./nginxexternal.yaml` or `vim ./nginxexternal.yaml` instead.
Which option should you choose? For learning, either works. Cloud Shell is faster to start (zero setup). Local terminal is closer to how you will work in a real job - Azure engineers use the `az` CLI and `kubectl` from their own machines daily. If you have time, try the local approach. The only commands that differ are the initial login (`az login`) and file editing (use your preferred editor instead of `code`). Every `az` and `kubectl` command works identically in both environments.
What You Will Learn in This Lab
This lab covers two critical Azure services for container workloads that appear frequently on the AZ-500 exam:
Azure Container Registry (ACR) - a managed Docker registry for storing and managing container images
Azure Kubernetes Service (AKS) - a managed Kubernetes cluster for orchestrating containerized applications
By the end of this lab, you will understand:
How to create and use ACR to store container images
How to build images using `az acr build` (without needing Docker locally)
How to deploy and configure an AKS cluster
How AKS authenticates to ACR using managed identities and the AcrPull role
The difference between external (public) and internal (private) Kubernetes services
How to verify connectivity to services running inside the cluster
Key Concepts Before You Start
What is a Container?
A container is a lightweight, standalone package that includes everything needed to run a piece of software - code, runtime, libraries, and configuration. Unlike virtual machines, containers share the host OS kernel, making them much faster to start and more resource-efficient.
What is a Container Registry?
A container registry is a storage and distribution service for container images. Think of it as a "library" where you store your container images and pull them when you need to deploy. Azure Container Registry (ACR) is Microsoft's managed registry service, tightly integrated with Azure services like AKS.
What is Kubernetes?
Kubernetes (K8s) is an open-source platform for automating deployment, scaling, and management of containerized applications. Azure Kubernetes Service (AKS) is a managed Kubernetes offering - Azure handles the control plane (API server, scheduler, etcd), and you only manage the worker nodes.
ACR SKU Tiers
| SKU | Included storage | Webhooks | Geo-replication | Suitable for production | Content trust |
|---|---|---|---|---|---|
| Basic | 10 GB | 2 | No | No | No |
| Standard | 100 GB | 10 | No | Yes | No |
| Premium | 500 GB | 500 | Yes | Yes | Yes |
For this lab, you use the Basic SKU, which is sufficient for learning. In production, Premium is recommended for features like geo-replication (multi-region availability) and content trust (image signing).
🧠 AZ-500 EXAM CONTENT: ACR Roles - Know These!
The exam frequently tests which ACR role to assign in least-privilege scenarios. Memorize this table:
| Role | Pull | Push | Delete | Other |
|---|---|---|---|---|
| AcrPull | Yes | No | No | No |
| AcrPush | Yes | Yes | No | No |
| AcrDelete | No | No | Yes | No |
| AcrImageSigner | No | No | No | Sign images (content trust) |
| Contributor | Yes | Yes | Yes | Yes (full management) |
🚨 Exam trap: `AcrPush` already includes pull. You do NOT need both `AcrPull` and `AcrPush` - `AcrPush` alone is sufficient for push+pull.

Exam scenario (from dumps): You have AKS cluster AKS1 and user-assigned managed identity ID1. AKS1 must only pull images. ID1 must push and pull. Which roles?
AKS1 → AcrPull (least privilege for pull-only)
ID1 → AcrPush (push + pull in one role)

The `Reader` role CANNOT pull images - it only reads registry metadata. This is a common trap.
Task 1: Create an Azure Container Registry
What You Are Doing and Why
You are creating a resource group to organize all lab resources, and then creating an ACR instance where you will store your container images. In Azure, a resource group is a logical container that groups related resources together - it makes cleanup easy (delete the group = delete everything inside it).
Step-by-Step
1. Open your terminal:
Cloud Shell: Sign in to the Azure Portal, click the terminal icon (`>_`) in the top-right toolbar, select Bash, choose No storage account required, select your subscription, click Apply.
Local terminal: Open your terminal app and run `az login` (if you have not already). Make sure the `az` CLI is installed (see the setup section above).
Why Bash? All commands in this lab are written for Bash shell. If you use Cloud Shell, make sure Bash (not PowerShell) is selected in the top-left dropdown. On macOS/Linux local terminal, you are already using Bash or Zsh (both work with these commands). On Windows, use WSL, Git Bash, or the Windows Terminal with Bash.
2. Create the resource group:
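The command block is missing in this copy; per the lab's names (AZ500LAB09, East US), it would be:

```shell
# Create the resource group that will hold all lab resources
az group create --name AZ500LAB09 --location eastus
```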
What this does: Creates a resource group named AZ500LAB09 in the East US region. All resources for this lab will be created inside this group.
3. Verify the resource group was created:
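The verification command is not shown; a sketch matching the JMESPath filter described just below:

```shell
# List resource groups and filter for the lab's group; -o table for readable output
az group list --query "[?name=='AZ500LAB09']" -o table
```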
What this does: Queries all your resource groups and filters for the one named AZ500LAB09. The --query parameter uses JMESPath syntax, and -o table formats the output as a readable table.
4. Register the required Azure resource providers:
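The registration commands are missing here; the providers for ACR and AKS are `Microsoft.ContainerRegistry` and `Microsoft.ContainerService` (the exact set the original lab registers may differ slightly):

```shell
# Register the providers needed for ACR and AKS (idempotent)
az provider register --namespace Microsoft.ContainerRegistry
az provider register --namespace Microsoft.ContainerService

# Optional: check registration state
az provider show --namespace Microsoft.ContainerService --query registrationState -o tsv
```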
Why is this necessary? Azure uses a "resource provider" model. Before you can create a specific type of resource (like a Kubernetes cluster or a container registry), the corresponding resource provider must be registered in your subscription. Some providers are registered by default, but these may not be. Running register is idempotent - if already registered, nothing changes.
5. Create the Azure Container Registry:
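The create command is missing; reconstructed from the flag explanation that follows:

```shell
# ACR names must be globally unique; $RANDOM (0-32767) used twice makes collisions unlikely
az acr create --resource-group AZ500LAB09 --name az500$RANDOM$RANDOM --sku Basic
```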
What this does:
--name az500$RANDOM$RANDOM- The ACR name must be globally unique across all of Azure (because it becomes part of a DNS name:<name>.azurecr.io). The$RANDOMvariable in Bash generates a random number between 0 and 32767, so using it twice makes collisions extremely unlikely.--sku Basic- Selects the Basic pricing tier (cheapest, suitable for learning).
6. Confirm the ACR was created and note the name - you will need it later:
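The listing command is not shown; likely:

```shell
# List registries in the lab resource group; note the NAME value for later
az acr list --resource-group AZ500LAB09 -o table
```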
Write down the value in the NAME column (e.g., az5001234556789). You will use this as <ACRname> throughout the rest of the lab.
🧠 AZ-500 EXAM CONTENT: ACR Security Features
Content Trust: Ensures that pushed images are signed. When enabled, only signed images can be pulled. To push signed images, a user needs both AcrPush AND AcrImageSigner roles. Content trust requires the Premium SKU. Content trust must be enabled on both the registry and the Docker CLI client.
Defender for Containers + ACR scanning:
Scans container images for known vulnerabilities when they are pushed to ACR
Currently scans Linux images only - Windows container images are NOT scanned
Requires Defender for Cloud enhanced features (paid tier) - the free tier does NOT include scanning
Exam question: You upload container images to Registry1 but discover that vulnerability scans were not performed. What should you do? Answer: Modify the Pricing tier settings (upgrade to Defender for Cloud enhanced features/paid tier).
Task 2: Create a Dockerfile, Build a Container, and Push it to ACR
What You Are Doing and Why
You are creating a simple Dockerfile that defines a container image based on Nginx (a popular web server), then building that image and pushing it to your ACR - all without needing Docker installed locally. Azure's az acr build command performs the build remotely on Azure's infrastructure.
Step-by-Step
1. Create a minimal Dockerfile:
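The command is missing in this copy; given the explanation below (a one-line file containing `FROM nginx`), it would be:

```shell
# Write a one-line Dockerfile that uses the official Nginx image as the base
echo "FROM nginx" > Dockerfile
```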
What this does: Creates a file called Dockerfile with a single line: FROM nginx. This tells the container build system to use the official Nginx image from Docker Hub as the base image. The resulting container will run an Nginx web server that displays the default "Welcome to nginx!" page.
What is a Dockerfile? A Dockerfile is a text file containing instructions for building a container image. Each instruction creates a layer in the image. The FROM instruction specifies the base image - every Dockerfile must start with FROM.
2. Build the image and push it to ACR:
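The two-line command block is missing; reconstructed from the explanation that follows (the exact `--query` expression is an assumption - it simply grabs the first registry name):

```shell
# Look up the ACR name and store it in a variable
ACRNAME=$(az acr list --resource-group AZ500LAB09 --query "[0].name" --output tsv)

# Build the image remotely with ACR Tasks and push it as sample/nginx:v1
az acr build --image sample/nginx:v1 --registry $ACRNAME --file Dockerfile .
```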
What this does:
- First line: Retrieves your ACR name from Azure and stores it in the `ACRNAME` variable. The `--query` parameter extracts just the name, and `--output tsv` removes quotes and formatting.
- Second line: Builds the container image remotely using ACR Tasks.
- `--image sample/nginx:v1` - Names the image `sample/nginx` with tag `v1`. The `sample/` part is a repository namespace that organizes images.
- `--registry $ACRNAME` - Specifies which ACR to use.
- `--file Dockerfile .` - Points to the Dockerfile. The trailing `.` (period) is critical - it sets the build context to the current directory.

Why `az acr build` instead of `docker build` + `docker push`? The `az acr build` command offloads the build to Azure. You do not need Docker installed on your local machine or in Cloud Shell. Azure builds the image on its servers and stores it directly in ACR. This is called ACR Tasks - it is faster, more secure (no local Docker daemon needed), and commonly used in CI/CD pipelines.
3. Wait for the build to complete (approximately 2 minutes). You will see build logs streaming in the terminal.
4. Verify the image in the Azure Portal:
Navigate to your resource group AZ500LAB09
Click on your Container Registry resource
In the left menu under Services, click Repositories
You should see `sample/nginx`
Click on it to see the `v1` tag
Click on `v1` to view the image manifest (SHA256 digest, creation date, platform)
What is an image manifest? The manifest is a JSON document that describes the image - its layers, architecture (linux/amd64), and a unique SHA256 digest (hash). The digest is like a fingerprint - it uniquely identifies this exact version of the image. If someone modifies the image, the digest changes. This is how content trust and image verification work.
Task 3: Create an Azure Kubernetes Service Cluster
What You Are Doing and Why
You are deploying a managed Kubernetes cluster that will run your containerized applications. AKS handles the Kubernetes control plane (API server, scheduler, controller manager, etcd) - you only manage the worker nodes where your containers run.
Step-by-Step
1. In the Azure Portal, search for Kubernetes services in the top search bar and select it.
2. Click + Create → Create a Kubernetes cluster.
3. On the Basics tab, configure:
| Setting | Value | Why |
|---|---|---|
| Subscription | Your Azure subscription | Billing boundary |
| Resource group | AZ500LAB09 | Keep all lab resources together |
| Cluster preset configuration | Dev/Test | Lower cost; smaller node sizes |
| Kubernetes cluster name | MyKubernetesCluster | Human-readable identifier |
| Region | (US) East US | Same region as ACR for lower latency |
| Fleet Manager | None | Not needed for this lab |
| Availability zones | None | Dev/Test does not need zone redundancy |
| AKS pricing tier | Free | No SLA, suitable for learning |
| Enable long term support | Unchecked | LTS provides extended K8s version support |
| Kubernetes version | Default | Use the latest stable version |
| Automatic upgrade | Default | Keeps cluster updated with security patches |
| Node security channel type | Default | OS-level security updates for nodes |
| Authentication and Authorization | Local accounts with Kubernetes RBAC | Simplest option for this lab |
🧠 AZ-500 EXAM CONTENT: AKS Authentication Options
The exam tests your understanding of AKS authentication models:
| Option | How it works | When to use |
|---|---|---|
| Local accounts with Kubernetes RBAC | Uses Kubernetes service accounts and kubeconfig tokens. No Azure AD involved. | Dev/test only. NOT recommended for production. |
| Microsoft Entra ID with Kubernetes RBAC | Users authenticate via Entra ID. Authorization uses Kubernetes Roles/ClusterRoles bound to Entra ID users/groups. | Production. Portable role bindings. |
| Microsoft Entra ID with Azure RBAC | Users authenticate via Entra ID. Authorization uses Azure RBAC roles (at cluster, namespace, or resource scope). | Production. Most secure. Centralized management. |
Exam question: AKS1 cannot be accessed using Azure AD accounts. What should you do first? Answer: Recreate AKS1 (with Entra ID integration enabled).
Important update: Newer AKS versions now support enabling Microsoft Entra ID integration on existing clusters (previously, this required recreating the cluster). However, the exam dump answer is still "recreate," so be aware of both the dump answer and the current Azure behavior.
Why does this lab use "Local accounts with Kubernetes RBAC"? For simplicity. In production, you should ALWAYS use Microsoft Entra ID integration for centralized identity management, proper auditing, and security controls like Conditional Access.
4. Click Next to the Node Pools tab:
| Setting | Value |
|---|---|
| Enable node auto-provisioning | Unchecked |
| Enable virtual nodes | Unchecked |
| Other values | Keep defaults |
What is a Node Pool? A node pool is a group of worker VMs with the same configuration. AKS always has at least one system node pool that runs core Kubernetes components (CoreDNS, kube-proxy). You can add user node pools for application workloads with different VM sizes, OS types, or scaling rules.
5. Click Next to Networking. If you get a recommendation pop-up about VM size, accept it.
| Setting | Value | Why |
|---|---|---|
| Enable private cluster | Unchecked | We need public access to the API server for this lab |
| Set authorized IP ranges | Unchecked | Not restricting API access for simplicity |
| Network configuration | Azure CNI Overlay | Default for new clusters in 2025/2026 |
| DNS name prefix | Default | Auto-generated from cluster name |
| Network policy | None | Not configuring pod-level network policies in this lab |
🧠 AZ-500 EXAM CONTENT: AKS Networking - Deep Dive
This is one of the most heavily tested areas on AZ-500. You must understand the networking options:
Network Plugins:
| Plugin | Pod IP assignment | Pod-to-VNet connectivity | VNet IP consumption |
|---|---|---|---|
| Azure CNI (Node Subnet) | Pods get IPs directly from the VNet subnet | Direct - pods are first-class VNet citizens | High (each pod uses a VNet IP) |
| Azure CNI Overlay | Pods get IPs from a private overlay network (10.244.0.0/16) | Via NAT through the node | Low (only nodes use VNet IPs) |
| Kubenet | Pods get IPs from a private range, routed via UDRs | Via UDRs through the node | Low |
Azure CNI Overlay is the default for new clusters as of 2025. It is more IP-efficient than Azure CNI Node Subnet because pod IPs come from a separate, private CIDR and do not consume IPs from your VNet address space.
Exam question: You have a VM with Docker containers and a VNet service endpoint for Storage. Containers cannot access Storage. What should you do? Answer: Install the Container Network Interface (CNI) plug-in. By default, Docker uses a bridge network (172.17.0.0/16) that is invisible to Azure networking. CNI assigns VNet IPs to containers, making them "visible" to the VNet and able to use service endpoints.
Private clusters: Setting "Enable private cluster" to Yes means the AKS API server gets a private IP only. You can only access it from within the VNet (or via VPN/ExpressRoute). In production with strict security requirements, private clusters are recommended.
Network policies: Control pod-to-pod traffic (like NSGs for Kubernetes pods). Options: Azure Network Policy or Calico. This lab uses None, but production clusters should use network policies to enforce micro-segmentation.
6. Click Next to Integrations. Leave all values at default.
Note: In production, you would enable Azure Monitor (Container Insights) for cluster monitoring, Azure Policy for compliance enforcement, and Defender for Containers for runtime threat detection. These are disabled in this lab to simplify the setup.
7. Click Review + Create, then Create.
8. Wait for deployment to complete (approximately 10 minutes).
After Deployment - Explore What Was Created
9. Navigate to Resource groups in the portal. You will see TWO resource groups:
AZ500LAB09 - Your resource group containing the AKS cluster resource and its VNet
MC_AZ500LAB09_MyKubernetesCluster_eastus - The managed resource group automatically created by AKS
What is the MC_ resource group? When AKS creates worker nodes, it places them (and their associated resources like NICs, disks, load balancers, and public IPs) in a separate resource group prefixed with `MC_`. You should NOT manually modify resources in this group - AKS manages them. If you delete or change things here, you can break your cluster.
10. Click on the MC_ resource group and explore its contents. You should see:
Virtual Machine Scale Set (VMSS) - your worker nodes
Network interfaces
A managed identity (for the nodes)
NSG, Route table, VNet (if applicable)
11. In your terminal (Cloud Shell or local), connect to your AKS cluster:
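The command is missing here; per the lab's cluster and group names, it would be:

```shell
# Download the cluster's kubeconfig and merge it into ~/.kube/config
az aks get-credentials --resource-group AZ500LAB09 --name MyKubernetesCluster
```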
What this does: Downloads the cluster's connection credentials (kubeconfig) and merges them into your ~/.kube/config file. This allows kubectl (the Kubernetes CLI) to communicate with your cluster's API server. This works identically in Cloud Shell and on your local machine.
12. Verify the cluster is healthy:
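The verification command is not shown; it is the standard node listing:

```shell
# List worker nodes; STATUS should show Ready
kubectl get nodes
```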
You should see one or more nodes with STATUS: Ready. This means the nodes are healthy and ready to run containers.
What is `kubectl`? It is the command-line tool for interacting with Kubernetes clusters. It sends requests to the Kubernetes API server. Common commands: `kubectl get` (list resources), `kubectl apply` (create/update resources), `kubectl describe` (detailed info), `kubectl logs` (container logs), `kubectl exec` (execute commands in a container).
Task 4: Grant AKS Permissions to Access ACR
What You Are Doing and Why
Your AKS cluster needs to pull container images from your ACR. By default, AKS has no access to your registry. You must explicitly grant permission by assigning the AcrPull role to the AKS cluster's managed identity on the ACR.
This is a fundamental security principle: no implicit trust. Just because both resources are in the same subscription does not mean they can access each other. You must configure explicit RBAC role assignments.
Step-by-Step
1. Attach ACR to AKS (this is the recommended, simple method):
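The attach command is missing in this copy; a sketch using the lab's names (the `--query` lookup assumes your lab ACR is the only registry in the group):

```shell
# Look up the ACR name, then grant the cluster pull access to it
ACRNAME=$(az acr list --resource-group AZ500LAB09 --query "[0].name" --output tsv)
az aks update --name MyKubernetesCluster --resource-group AZ500LAB09 --attach-acr $ACRNAME
```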
What this does behind the scenes:
Looks up the AKS cluster's kubelet managed identity (the identity that worker nodes use to pull images)
Creates an RBAC role assignment: assigns the AcrPull role to that managed identity on your ACR
This means your AKS nodes can now pull images from your ACR - but they CANNOT push or delete images (least privilege)
Wait a few minutes for the command to complete and for the role assignment to propagate.
🧠 AZ-500 EXAM CONTENT: Managed Identity + AcrPull - Critical Concept
When you run `az aks update --attach-acr`, Azure assigns the AcrPull role to the AKS cluster's kubelet managed identity. This is a system-assigned managed identity that AKS creates automatically.

Why managed identity and not a service principal with a client secret?
Managed identity: Azure handles credential creation and rotation automatically. No secrets to store, no expiration to track. This is the recommended, secure approach.
Service principal: You manually create and rotate client secrets. Secrets can expire (default 1 year), get leaked, or be stored insecurely.
🚨 Exam trap: When a question asks for "minimum required privileges" + "minimize administrative effort," the answer almost always involves:
A managed identity (no credentials to manage = less admin effort)
The narrowest RBAC role (AcrPull, not Contributor)
Exam scenario: AKS1 must pull images from Registry1. ID1 (user-assigned managed identity) must push and pull images. Follow least privilege.
AKS1 → AcrPull (pull only)
ID1 → AcrPush (push + pull in one role; NOT AcrPull + AcrPush separately)
2. Grant the AKS cluster the Contributor role on its virtual network.
This step grants the AKS managed identity permissions to manage the VNet where the cluster operates. The original lab hardcodes a VNet name, but the VNet name is different for every deployment. Use these commands to dynamically retrieve it:
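The commands are missing in this copy; a sketch under these assumptions: the managed resource group follows the `MC_<rg>_<cluster>_<region>` pattern stated later in this lab, and the variable names (`MCRG`, `VNETNAME`, `VNETID`, `CLIENTID`) are mine, not the original lab's:

```shell
# The managed resource group name follows MC_<rg>_<cluster>_<region>
MCRG=MC_AZ500LAB09_MyKubernetesCluster_eastus

# Dynamically look up the VNet name (the numeric suffix is random per deployment)
VNETNAME=$(az network vnet list --resource-group $MCRG --query "[0].name" --output tsv)
VNETID=$(az network vnet show --resource-group $MCRG --name $VNETNAME --query id --output tsv)

# Get the cluster's control plane identity and grant it Contributor on the VNet
CLIENTID=$(az aks show --name MyKubernetesCluster --resource-group AZ500LAB09 \
  --query identity.principalId --output tsv)
az role assignment create --assignee $CLIENTID --role "Contributor" --scope $VNETID
```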
Important fix: The original Microsoft lab hardcodes the VNet name as `aks-vnet-30198516`. This will NOT match your deployment - the number suffix is randomly generated. The command `az network vnet list` dynamically retrieves the correct VNet name from the managed resource group. Always use dynamic lookups in scripts instead of hardcoded resource names.
What this does: Grants the AKS cluster's control plane managed identity the Contributor role on its own VNet. This allows AKS to create and manage resources like load balancers and public IPs in the VNet (needed for services of type LoadBalancer).
Why Contributor and not a more specific role? AKS needs to create multiple resource types in the VNet (load balancers, public IPs, route tables, etc.). The Contributor role covers all these operations. In a more locked-down production environment, you could use a custom role with only the specific permissions AKS needs, but Contributor is the standard approach for AKS.
Task 5: Deploy an External Service to AKS
What You Are Doing and Why
You are deploying an Nginx web server container to your AKS cluster and exposing it to the internet via a Kubernetes Service of type LoadBalancer. This creates an Azure Load Balancer with a public IP address that forwards traffic to your Nginx pods.
Understanding the YAML Manifest
In Kubernetes, you define your desired state using YAML files called manifests. The manifest below defines two resources:
A Deployment - tells Kubernetes to run a specific container image and how many replicas (copies) to maintain
A Service - defines how to expose the pods to network traffic
Step-by-Step
1. Check your ACR name if you forgot it:
2. Create the nginxexternal.yaml file using your preferred editor:
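The command is missing here; in Cloud Shell it is the built-in editor (locally, substitute nano/vim/your editor):

```shell
# Open (and create) the manifest file in the Cloud Shell editor
code ./nginxexternal.yaml
```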
Cloud Shell note: If you get a pop-up saying "Switch to Classic Cloud Shell", click Confirm, then re-run the ACRNAME variable assignment above and then re-run the `code` command.
3. Paste the following content into the editor. Replace <ACRname> with your actual ACR name (e.g., az50012345678):
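The manifest body did not survive in this copy; here is a reconstruction consistent with the line-by-line explanation in this task (the container `name` field and exact field ordering are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxexternal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxexternal
  template:
    metadata:
      labels:
        app: nginxexternal
    spec:
      containers:
      - name: nginxexternal
        image: <ACRname>.azurecr.io/sample/nginx:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxexternal
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginxexternal
```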
Line-by-line explanation:
| Manifest line | Meaning |
|---|---|
| `apiVersion: apps/v1` | Specifies the Kubernetes API version for Deployments |
| `kind: Deployment` | This resource is a Deployment (manages pods and replicas) |
| `metadata.name: nginxexternal` | Name of this Deployment |
| `spec.replicas: 1` | Run exactly 1 copy (pod) of this container |
| `spec.selector.matchLabels` | The Deployment manages pods that have the label `app: nginxexternal` |
| `spec.template.metadata.labels` | Labels applied to created pods - must match the selector above |
| `spec.template.spec.containers` | List of containers to run in each pod |
| `image: <ACRname>.azurecr.io/sample/nginx:v1` | Pull the `sample/nginx:v1` image from YOUR ACR |
| `containerPort: 80` | The container listens on port 80 (Nginx default) |
| `---` | YAML document separator - separates the Deployment from the Service |
| `kind: Service` | This resource is a Service (network endpoint for pods) |
| `type: LoadBalancer` | Creates an Azure Load Balancer with a public IP |
| `port: 80` | The Service listens on port 80 |
| `selector: app: nginxexternal` | Routes traffic to pods with the label `app: nginxexternal` |
4. Save the file (Ctrl+S) and close the editor (Ctrl+Q).
5. Apply the manifest to your cluster:
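The apply command is not shown; it is the standard:

```shell
# Send the desired state to the Kubernetes API server
kubectl apply -f nginxexternal.yaml
```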
Expected output:
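The output block is missing; `kubectl apply` confirms each resource it created, so for this manifest it reads:

```
deployment.apps/nginxexternal created
service/nginxexternal created
```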
What kubectl apply does: It sends the YAML definition to the Kubernetes API server, which then ensures the cluster's actual state matches the desired state described in the file. Kubernetes creates the pods and the service.
AZ-500 EXAM CONTENT: Kubernetes Service Types
The exam tests your knowledge of how services expose applications:
| Type | Exposure | Description |
|---|---|---|
| ClusterIP | Internal only (within cluster) | Default type. Assigns an internal IP accessible only from within the cluster. |
| NodePort | External (via node IP + port) | Opens a port (30000-32767) on every node. Traffic to `<nodeIP>:<nodePort>` is forwarded to the service. |
| LoadBalancer | External (via public IP) | Creates an Azure Load Balancer with a public IP. Most common for external-facing services. |
| ExternalName | DNS alias | Maps the service to a DNS name. No proxying. |
Exam question: You need application routing with reverse proxy and TLS termination using a single IP address. What should you implement? Answer: Create an AKS Ingress controller. An Ingress controller is a Layer 7 (HTTP/HTTPS) solution that provides reverse proxy, path-based routing, and TLS termination - all behind a single external IP.
🚨 Exam trap: LoadBalancer = Layer 4 (TCP/UDP) - it does NOT provide TLS termination, path-based routing, or reverse proxy. Ingress controller = Layer 7 (HTTP/HTTPS) - it provides all of those features. If a question mentions "reverse proxy" or "TLS termination," the answer is Ingress, not LoadBalancer.
Task 6: Verify External Service Access
What You Are Doing and Why
You are verifying that the externally exposed Nginx service is reachable from the internet via the public IP assigned by the Azure Load Balancer.
Step-by-Step
1. Get the external IP address of your service:
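The command is missing here; query the service you just created:

```shell
# Show the service until EXTERNAL-IP is populated
kubectl get service nginxexternal
```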
Expected output (example):
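The example output block is missing; illustrative values (your IPs and ports will differ):

```
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
nginxexternal   LoadBalancer   10.0.123.45   20.81.xxx.xxx   80:31234/TCP   2m
```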
CLUSTER-IP (10.0.123.45): Internal IP within the cluster - only accessible from inside
EXTERNAL-IP (20.81.xxx.xxx): Public IP address assigned by Azure Load Balancer - accessible from the internet
PORT(S) (80:31234/TCP): Port 80 externally mapped to NodePort 31234 internally
Note: If EXTERNAL-IP shows `<pending>`, wait 1-2 minutes and run the command again. Azure is provisioning the Load Balancer and public IP.
2. Open a new browser tab and navigate to http://<EXTERNAL-IP> (replace with your actual IP).
3. You should see the "Welcome to nginx!" page. This confirms:
Your container image was successfully pulled from ACR
The AKS cluster is running the container
The LoadBalancer service is correctly routing internet traffic to the pod
What happened behind the scenes?
Kubernetes created a pod running the Nginx container (pulled from your ACR)
The LoadBalancer service triggered Azure to create a public Load Balancer with a public IP
The Load Balancer has a health probe checking port 80 on the pod
When you browse to the public IP, the request flows: Internet → Azure Load Balancer → Node → Pod (Nginx container)
Task 7: Deploy an Internal Service to AKS
What You Are Doing and Why
Now you are deploying a second Nginx service, but this time as an internal service. Internal services use an Azure Internal Load Balancer that assigns a private IP from the VNet. This service is NOT accessible from the internet - only from within the VNet (or from other pods in the cluster).
This pattern is critical in production: public-facing services (like an API gateway) handle external traffic and forward requests to internal services (like databases, backend APIs) that should never be directly exposed to the internet.
Step-by-Step
1. Open the editor to create the internal YAML file:
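The command is not shown; in Cloud Shell (locally, use your own editor):

```shell
# Open (and create) the internal-service manifest
code ./nginxinternal.yaml
```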
2. Paste the following content, replacing <ACRname> with your ACR name:
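The manifest body is missing; a reconstruction mirroring the external manifest, with the internal load balancer annotation this task is about (the `nginxinternal` naming and field ordering are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginxinternal
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginxinternal
  template:
    metadata:
      labels:
        app: nginxinternal
    spec:
      containers:
      - name: nginxinternal
        image: <ACRname>.azurecr.io/sample/nginx:v1
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginxinternal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginxinternal
```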
Key difference from the external service: Notice the annotations section:
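The snippet being referred to is missing; it is the annotation on the Service's metadata (service name assumed to be `nginxinternal`):

```yaml
metadata:
  name: nginxinternal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
```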
This single annotation changes the behavior from an external (public) Load Balancer to an internal (private) Load Balancer. The service type is still LoadBalancer, but the annotation tells Azure to create an internal load balancer with a private IP from the VNet subnet instead of a public IP.
3. Save (Ctrl+S) and close (Ctrl+Q).
4. Apply the manifest:
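The apply command is not shown; same pattern as before:

```shell
# Create the internal Deployment and Service
kubectl apply -f nginxinternal.yaml
```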
Expected output:
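The output block is missing; `kubectl apply` confirms each created resource:

```
deployment.apps/nginxinternal created
service/nginxinternal created
```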
5. Get the internal service details:
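The command is missing here:

```shell
# Show the internal service; EXTERNAL-IP will be a private VNet address
kubectl get service nginxinternal
```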
Expected output (example):
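The example output block is missing; illustrative values (the private EXTERNAL-IP here reuses the `10.224.0.5` example used later in Task 8; yours will differ):

```
NAME            TYPE           CLUSTER-IP    EXTERNAL-IP   PORT(S)        AGE
nginxinternal   LoadBalancer   10.0.200.12   10.224.0.5    80:30456/TCP   2m
```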
Notice that EXTERNAL-IP is a private IP (10.x.x.x range) - NOT a public IP. This service is only reachable from within the VNet.
If EXTERNAL-IP shows `<pending>`, wait 1-2 minutes and re-run the command.
Write down this private IP address - you will need it in the next task.
AZ-500 EXAM CONTENT: Internal vs External Load Balancer
External Load Balancer:
Public IP address
Accessible from the internet
Used for: public-facing web apps, APIs, portals
Internal Load Balancer:
Private IP address from the VNet subnet
Accessible only from within the VNet (or peered VNets, VPN, ExpressRoute)
Used for: backend services, databases, internal APIs
Created by adding the annotation `service.beta.kubernetes.io/azure-load-balancer-internal: "true"`

Production architecture pattern: In a secure architecture, ONLY the API gateway or ingress controller has a public IP. All backend services use internal load balancers and are never directly exposed to the internet.
Task 8: Verify Internal Service Access
What You Are Doing and Why
Since the internal service has a private IP, you cannot access it from your browser (which is on the public internet). To verify it works, you must connect to it from inside the cluster - specifically, from a pod running in the cluster that is on the same network.
You will use kubectl exec to get a shell inside one of the running pods and use curl to make an HTTP request to the internal service's private IP.
Step-by-Step
1. List all pods in the cluster:
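The command is not shown:

```shell
# List all pods in the default namespace
kubectl get pods
```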
Expected output (example):
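The example output block is missing; illustrative names (the first matches the pod-name example in the next step, the second is hypothetical):

```
NAME                            READY   STATUS    RESTARTS   AGE
nginxexternal-7d9b8c6f4-abc12   1/1     Running   0          15m
nginxinternal-6c8f7b5d9-xyz34   1/1     Running   0          5m
```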
2. Copy the name of any pod (e.g., nginxexternal-7d9b8c6f4-abc12).
3. Connect interactively to the pod:
Replace <pod_name> with the actual pod name you copied. For example:
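The example command is missing; using the pod name from the previous step:

```shell
# Open an interactive Bash shell inside the pod
kubectl exec -it nginxexternal-7d9b8c6f4-abc12 -- /bin/bash
```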
What this does:
kubectl exec- Execute a command in a running container-it- Interactive terminal (allocate a TTY and keep STDIN open)-- /bin/bash- Run the Bash shell inside the container
You are now inside the container, at a bash prompt like root@nginxexternal-7d9b8c6f4-abc12:/#.
4. Use curl to access the internal service:
Replace <internal_IP> with the private IP you noted from Task 7 (e.g., 10.224.0.5):
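The example command is missing; using the example IP given in this step:

```shell
# From inside the pod, request the internal service's private IP
curl http://10.224.0.5
```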
5. You should see the HTML source of the "Welcome to nginx!" page:
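The output block is missing; an abridged version of the default Nginx welcome page HTML:

```
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
...
</html>
```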
This confirms that:
The internal service is running and healthy
It is reachable from within the cluster via its private IP
The Azure Internal Load Balancer is correctly routing traffic
6. Exit the container by typing `exit`.
Why test from inside a pod? The internal service has a private IP that is only routable from within the VNet where the AKS cluster operates. Your browser is on the public internet and cannot reach private IPs. By running `curl` from inside a pod (which is on the cluster's network), you can verify the internal service is working correctly.

In production, internal services would be accessed by other services within the same cluster, or from VMs/services in the same VNet or peered VNets.
Clean Up Resources
Always clean up lab resources to avoid unexpected charges.
1. Delete the resource group using either method (both work from Cloud Shell or local terminal):
Option A - Azure CLI (Bash) (recommended if you have been using Bash throughout):
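The Bash command is missing in this copy; per the lab's resource group name, it would be:

```shell
# Delete the resource group and everything inside it, without waiting
az group delete --name AZ500LAB09 --yes --no-wait
```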
Option B - PowerShell (if you prefer PowerShell or are in Cloud Shell with PowerShell selected):
2. Delete the resource group (this deletes everything inside it):
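The PowerShell command is missing; a sketch using the standard Az module cmdlet (matching the `-AsJob` explanation in this step):

```powershell
# Delete the resource group as a background job, without a confirmation prompt
Remove-AzResourceGroup -Name "AZ500LAB09" -Force -AsJob
```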
What -AsJob does: Runs the deletion as a background job so you do not have to wait.
Note: Deletion may take 5-10 minutes. The managed resource group (`MC_AZ500LAB09_MyKubernetesCluster_eastus`) is automatically deleted when the AKS cluster is removed. Both `--no-wait` (Bash) and `-AsJob` (PowerShell) run the deletion in the background - you do not have to wait for it to finish.
AZ-500 Exam Review: Key Takeaways from This Lab
Here is a consolidated summary of everything from this lab that you should know for the AZ-500 exam:
ACR (Azure Container Registry)
ACR name - Must be globally unique (becomes <name>.azurecr.io)
ACR SKUs - Basic, Standard, Premium; content trust and geo-replication require Premium
az acr build - Builds images remotely on Azure (no local Docker needed); used with ACR Tasks
AcrPull role - Pull images only; least privilege for AKS
AcrPush role - Push AND pull images; least privilege for CI/CD pipelines
AcrImageSigner - Required (with AcrPush) for signing images with content trust
Defender for Containers - Scans ACR images for vulnerabilities; Linux images only; requires the paid tier
Content trust - Premium SKU only; ensures only signed images can be pulled
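To make the az acr build entry concrete, a remote build might look like this (the registry name and image tag are placeholders):

```shell
# Build the Dockerfile in the current directory on Azure's build agents
# and push the result to the registry - no local Docker daemon required.
# "myacr123" is a placeholder registry name.
az acr build --registry myacr123 --image sample/nginx:v1 .
```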
AKS (Azure Kubernetes Service)
Managed resource group - MC_<rg>_<cluster>_<region>; do NOT modify it manually
az aks get-credentials - Downloads the kubeconfig so kubectl can connect to the cluster
--attach-acr - Assigns AcrPull on the ACR to the AKS kubelet managed identity
Authentication options - Local accounts (dev/test only), Entra ID + Kubernetes RBAC, Entra ID + Azure RBAC (most secure)
Network plugins - Azure CNI Overlay (default 2025+), Azure CNI Node Subnet, Kubenet
Private cluster - API server gets a private IP only; accessible only from the VNet
Network policies - Control pod-to-pod traffic (Azure or Calico)
Service: LoadBalancer - External = public IP; internal = private IP (with annotation)
Service: ClusterIP - Internal cluster IP only (the default Service type)
Ingress controller - L7 reverse proxy + TLS termination; NGINX or AGIC for production
Defender for Containers - Runtime threat detection on AKS; image scanning on ACR
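A quick sketch of the two AKS commands above, using the lab's resource group and cluster names (the ACR name is a placeholder):

```shell
# Merge the cluster's credentials into ~/.kube/config so kubectl can connect
az aks get-credentials --resource-group AZ500LAB09 --name MyKubernetesCluster

# Grant the cluster's kubelet identity AcrPull on an existing registry
# ("myacr123" is a placeholder ACR name)
az aks update --resource-group AZ500LAB09 --name MyKubernetesCluster --attach-acr myacr123
```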
Managed Identities (used in this lab)
System-assigned - Tied to the resource's lifecycle; deleted when the resource is deleted
User-assigned - Independent lifecycle; can be shared across resources
AKS kubelet identity - System-assigned; used by the nodes to pull images from ACR
--attach-acr - Automatically assigns AcrPull to the kubelet managed identity
vs. service principal - Managed identity = no secrets to manage; service principal = manual secret rotation
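To make the --attach-acr row concrete, this sketch shows the manual role assignment it automates (resource group and cluster names are the lab's; the registry name is a placeholder):

```shell
# Look up the kubelet managed identity's object ID and the ACR resource ID,
# then assign AcrPull - this is what --attach-acr does for you.
KUBELET_ID=$(az aks show --resource-group AZ500LAB09 --name MyKubernetesCluster \
  --query identityProfile.kubeletidentity.objectId --output tsv)
ACR_ID=$(az acr show --name myacr123 --query id --output tsv)
az role assignment create --assignee "$KUBELET_ID" --role AcrPull --scope "$ACR_ID"
```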
RBAC Roles (tested in this lab context)
AcrPull (scope: ACR) - Pull images only
AcrPush (scope: ACR) - Push + pull images
Contributor (scope: VNet) - Full management of VNet resources (load balancer, public IP, etc.)
Network Contributor (scope: VNet) - Alternative to Contributor for VNet-only operations
🧠 Exam-Style Practice Questions
Test yourself with these questions based on the concepts from this lab. Try to answer before looking at the solution.
Q1: ACR Role Assignment for AKS
You have an AKS cluster and an ACR. The AKS cluster must pull images from the ACR. What is the minimum required role to assign to the AKS kubelet managed identity on the ACR?
A. Reader
B. AcrPull
C. AcrPush
D. Contributor
Answer: B - AcrPull. Reader only provides metadata access, not the ability to pull images. AcrPush and Contributor provide more than needed (violates least privilege).
Q2: ACR Vulnerability Scanning
You want to enable vulnerability scanning for container images pushed to your ACR. What should you do?
A. Enable content trust on the ACR
B. Upgrade ACR to Premium SKU
C. Enable Defender for Cloud enhanced features (paid tier)
D. Lock the container images
Answer: C - Container vulnerability scanning requires Defender for Cloud enhanced features (paid tier). Content trust is for image signing, not scanning. Premium SKU enables content trust, not scanning. Locking prevents modification.
Q3: Internal AKS Service
You need to expose an AKS service internally (private IP only, no internet access). What should you add to the Service manifest?
A. type: ClusterIP
B. type: NodePort
C. An annotation service.beta.kubernetes.io/azure-load-balancer-internal: "true" with type: LoadBalancer
D. type: ExternalName
Answer: C - The internal load balancer annotation with LoadBalancer type creates an Azure Internal Load Balancer with a private IP. ClusterIP would also work for internal-only access but without load balancer features.
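For reference, a minimal Service manifest along the lines the answer describes (the service name and pod selector are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginxinternal          # illustrative name
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer           # combined with the annotation above,
  ports:                       # Azure provisions an Internal Load Balancer
  - port: 80
    targetPort: 80
  selector:
    app: nginxinternal         # illustrative pod label
```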
Q4: AKS + Entra ID Integration
You discover that your AKS cluster was created without Microsoft Entra ID integration. Users cannot sign in with Entra ID accounts. What should you do? (Consider: dump answer vs current Azure behavior)
A. Recreate the AKS cluster with Entra ID integration
B. Upgrade the Kubernetes version
C. Enable Azure AD Premium P2
D. Configure User settings in Entra ID
Answer: A (dump answer) - Historically, Entra ID integration could only be configured at cluster creation. Note: As of 2025/2026, newer AKS versions support enabling Entra ID integration on existing clusters, but the exam dump answer remains A. On the exam, follow the dump answer unless a question specifically mentions the newer capability.
Q5: Docker Containers and VNet Service Endpoints
Docker containers on a VM cannot access Azure Storage via a VNet service endpoint. What should you install?
A. Application security group
B. Container Network Interface (CNI) plug-in
C. Azure Firewall
D. VPN Gateway
Answer: B - CNI plug-in. By default, Docker containers use a bridge network (172.17.0.0/16) that is invisible to Azure VNet networking. CNI assigns VNet IP addresses directly to containers, making them "visible" to VNet features like service endpoints and NSGs.
Additional Resources