Azure Storage: Blob, Files & Security Configuration (AZ-104)

Storage is everywhere in Azure. VMs need disks. Applications need blob storage. Users need file shares. Logs need somewhere to go. Understanding Azure Storage is non-negotiable for the AZ-104 – it’s 15-20% of the exam and something you’ll work with in every Azure environment.

From the field: Azure Storage is deceptively complex. Most people start with blob storage and think that is all there is. In practice, I use tables for lightweight data, queues for async processing, and file shares for legacy lift-and-shift migrations. Understanding when to use which type saves both cost and headaches.

This post covers storage accounts, blob storage, Azure Files, and the security features that protect your data. By the end, you’ll know when to use which storage type and how to configure it properly.

Career Impact

Why this matters: Storage costs are one of the biggest line items in cloud bills. Engineers who understand lifecycle management and access tiers can save organizations thousands per month.

Resume value: “Designed storage architecture with lifecycle management reducing costs by 40%” or “Implemented secure storage access patterns using SAS and private endpoints”

[IMAGE: Azure Storage architecture diagram showing storage accounts, blob containers, file shares, and access tiers]

Exam Coverage: Implement and Manage Storage (15-20%) – This includes storage accounts, blob storage, Azure Files, storage security (SAS tokens, encryption), and data protection.

What You’ll Learn

  • Storage account types and when to use each
  • Blob storage: containers, tiers, lifecycle management
  • Azure Files: SMB shares in the cloud
  • Storage security: SAS tokens, encryption, firewalls
  • Replication options and availability

Quick Reference

| Service | Purpose | Access Method |
| --- | --- | --- |
| Blob Storage | Unstructured data (files, images, backups) | REST API, SDKs |
| Azure Files | SMB/NFS file shares | SMB 3.0, NFS |
| Queue Storage | Message queuing | REST API |
| Table Storage | NoSQL key-value | REST API, OData |
| Disk Storage | VM disks | Attached to VMs |

Storage Account Types

Account Types

| Type | Services | Use Case |
| --- | --- | --- |
| Standard general-purpose v2 | All (Blob, File, Queue, Table) | Most workloads |
| Premium block blobs | Blob (block blobs only) | High transaction rates |
| Premium file shares | Files only | High-performance file shares |
| Premium page blobs | Blob (page blobs only) | Unmanaged VM disks |

Default choice: Standard general-purpose v2 (GPv2)

Performance Tiers

  • Standard – HDD-backed, cost-effective
  • Premium – SSD-backed, low latency, high IOPS

Creating a Storage Account

# Azure CLI - Create storage account
az storage account create \
  --name mystorageaccount$RANDOM \
  --resource-group az104-labs \
  --location uksouth \
  --sku Standard_LRS \
  --kind StorageV2 \
  --access-tier Hot
# Azure PowerShell - Create storage account
New-AzStorageAccount -ResourceGroupName "az104-labs" `
  -Name "mystorageaccount$(Get-Random)" `
  -Location "uksouth" `
  -SkuName "Standard_LRS" `
  -Kind "StorageV2" `
  -AccessTier "Hot"

[SCREENSHOT: Azure Portal – Storage account creation form showing performance and redundancy options]

Naming rules:

  • 3-24 characters
  • Lowercase letters and numbers only
  • Globally unique
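These rules are easy to pre-check in a script before calling the CLI. A small sketch — the `valid_storage_name` helper is illustrative, and only Azure itself can confirm global uniqueness (e.g. via `az storage account check-name`):

```shell
# Check a candidate storage account name against the local rules:
# 3-24 characters, lowercase letters and digits only.
valid_storage_name() {
  case "$1" in
    *[!a-z0-9]*) return 1 ;;   # any char outside [a-z0-9] fails
  esac
  [ "${#1}" -ge 3 ] && [ "${#1}" -le 24 ]
}

valid_storage_name "mystorageaccount" && echo "ok"        # passes
valid_storage_name "My-Storage"       || echo "rejected"  # uppercase + hyphen
```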

Replication Options

Redundancy Levels

| Option | Copies | Scope | Use Case |
| --- | --- | --- | --- |
| LRS | 3 | Single datacenter | Dev/test, easily recreated data |
| ZRS | 3 | 3 availability zones | Production with zone redundancy |
| GRS | 6 | Primary + secondary region | DR, read from secondary |
| GZRS | 6 | 3 zones + secondary region | Highest availability |

RA-GRS/RA-GZRS – Read-access variants allow reading from secondary region.
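With the read-access variants, the secondary region is reachable at a predictable hostname — the account name with a `-secondary` suffix. A quick sketch deriving both endpoints:

```shell
# RA-GRS / RA-GZRS expose a read-only secondary endpoint whose
# hostname is simply the account name plus "-secondary".
account="mystorageaccount"

primary="https://${account}.blob.core.windows.net"
secondary="https://${account}-secondary.blob.core.windows.net"

echo "primary:   $primary"
echo "secondary: $secondary"
```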

Choosing Redundancy

Development:       LRS  (cheapest)
Production:        ZRS  (zone failure protection)
Disaster Recovery: GRS  (region failure protection)
Critical:          GZRS (both zone and region protection)

Failover

For GRS/GZRS accounts, you can initiate a failover to the secondary region:

# Azure CLI - Initiate failover
az storage account failover \
  --name mystorageaccount \
  --resource-group az104-labs
# Azure PowerShell - Initiate failover
Invoke-AzStorageAccountFailover -ResourceGroupName "az104-labs" `
  -Name "mystorageaccount" -Force

Warning: Failover discards any writes not yet replicated to the secondary region. The RPO is typically under 15 minutes.

Blob Storage

Container and Blob Hierarchy

Storage Account
└── Container (like a folder)
    └── Blob (the actual file)
        └── Can include virtual directories (blob name: folder/subfolder/file.txt)
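Those "virtual directories" are just prefixes in a flat namespace — standard path tools can split them apart, and listing "inside" a folder is really a prefix filter:

```shell
# A blob name like "folder/subfolder/file.txt" is one flat key;
# the folder structure exists only as a naming convention.
blob_name="folder/subfolder/file.txt"

prefix=$(dirname "$blob_name")     # the "virtual directory"
leaf=$(basename "$blob_name")      # the file itself

echo "prefix: $prefix"   # folder/subfolder
echo "leaf:   $leaf"     # file.txt

# Listing "inside" a virtual directory is a prefix filter:
#   az storage blob list --container-name mycontainer --prefix "folder/subfolder/"
```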

Blob Types

| Type | Purpose | Max Size |
| --- | --- | --- |
| Block blob | Files, images, videos | 190.7 TB |
| Page blob | Random read/write, VHDs | 8 TB |
| Append blob | Logs, append-only data | 195 GB |

Access Tiers

| Tier | Access Cost | Storage Cost | Use Case |
| --- | --- | --- | --- |
| Hot | Low | High | Frequently accessed |
| Cool | Medium | Medium | Infrequently accessed (30+ days) |
| Cold | Higher | Lower | Rarely accessed (90+ days) |
| Archive | Highest | Lowest | Long-term retention |

Archive tier:

  • Offline storage
  • Must rehydrate before access (hours)
  • 180-day minimum storage
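These tier boundaries reduce to a simple age-based decision — the same logic a lifecycle policy automates. A sketch using 30/90/365-day cutoffs (illustrative thresholds matching the lifecycle example later in this post, not Azure defaults):

```shell
# Map a blob's age in days to a target tier, mirroring a typical
# 30-day Cool / 90-day Archive / 365-day delete lifecycle policy.
tier_for_age() {
  if   [ "$1" -ge 365 ]; then echo "Delete"
  elif [ "$1" -ge 90  ]; then echo "Archive"
  elif [ "$1" -ge 30  ]; then echo "Cool"
  else                        echo "Hot"
  fi
}

tier_for_age 10    # → Hot
tier_for_age 45    # → Cool
tier_for_age 120   # → Archive
tier_for_age 400   # → Delete
```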

Setting Blob Tier

# Azure CLI - Upload blob to specific tier
az storage blob upload \
  --account-name mystorageaccount \
  --container-name backups \
  --name archive/backup-2024.zip \
  --file ./backup.zip \
  --tier Cool

# Azure CLI - Change existing blob tier
az storage blob set-tier \
  --account-name mystorageaccount \
  --container-name backups \
  --name archive/old-backup.zip \
  --tier Archive
# Azure PowerShell - Upload blob to specific tier
$Context = (Get-AzStorageAccount -ResourceGroupName "az104-labs" -Name "mystorageaccount").Context
Set-AzStorageBlobContent -File "./backup.zip" -Container "backups" `
  -Blob "archive/backup-2024.zip" -Context $Context -StandardBlobTier Cool

# Azure PowerShell - Change existing blob tier
$Blob = Get-AzStorageBlob -Container "backups" -Blob "archive/old-backup.zip" -Context $Context
$Blob.BlobClient.SetAccessTier("Archive")

Lifecycle Management

Automatically move or delete blobs:

{
  "rules": [
    {
      "name": "moveOldBackups",
      "type": "Lifecycle",
      "definition": {
        "filters": {
          "blobTypes": ["blockBlob"],
          "prefixMatch": ["backups/"]
        },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}

# Azure CLI - Apply lifecycle policy (the JSON above saved as lifecycle-policy.json)
az storage account management-policy create \
  --account-name mystorageaccount \
  --resource-group az104-labs \
  --policy @lifecycle-policy.json
# Azure PowerShell - Apply lifecycle policy (built from cmdlet objects;
# Set-AzStorageAccountManagementPolicy takes rule objects, not raw JSON)
$Action = Add-AzStorageAccountManagementPolicyAction -BaseBlobAction TierToCool -DaysAfterModificationGreaterThan 30
$Action = Add-AzStorageAccountManagementPolicyAction -InputObject $Action -BaseBlobAction TierToArchive -DaysAfterModificationGreaterThan 90
$Action = Add-AzStorageAccountManagementPolicyAction -InputObject $Action -BaseBlobAction Delete -DaysAfterModificationGreaterThan 365
$Rule = New-AzStorageAccountManagementPolicyRule -Name "moveOldBackups" -Action $Action `
  -Filter (New-AzStorageAccountManagementPolicyFilter -PrefixMatch "backups/" -BlobType blockBlob)
Set-AzStorageAccountManagementPolicy -ResourceGroupName "az104-labs" `
  -StorageAccountName "mystorageaccount" -Rule $Rule

[SCREENSHOT: Azure Portal – Storage account lifecycle management blade showing rule configuration]

Azure Files

What Azure Files Does

Cloud-based SMB and NFS file shares:

  • Mount from anywhere (Windows, Linux, macOS)
  • Replace on-premises file servers
  • Lift-and-shift applications
  • Azure File Sync for hybrid scenarios

Creating a File Share

# Azure CLI - Create file share
az storage share create \
  --account-name mystorageaccount \
  --name myshare \
  --quota 100  # GB

# Azure CLI - Upload file
az storage file upload \
  --account-name mystorageaccount \
  --share-name myshare \
  --source ./document.pdf
# Azure PowerShell - Create file share
$Context = (Get-AzStorageAccount -ResourceGroupName "az104-labs" -Name "mystorageaccount").Context
New-AzStorageShare -Name "myshare" -Context $Context -QuotaGiB 100

# Azure PowerShell - Upload file
Set-AzStorageFileContent -ShareName "myshare" -Source "./document.pdf" -Context $Context

Mounting on Windows

# Get storage account key
$storageKey = (Get-AzStorageAccountKey -ResourceGroupName "az104-labs" -Name "mystorageaccount")[0].Value

# Mount as drive
net use Z: \\mystorageaccount.file.core.windows.net\myshare /user:AZURE\mystorageaccount $storageKey

Mounting on Linux

# Install CIFS utilities
sudo apt install cifs-utils

# Create mount point
sudo mkdir /mnt/azure

# Mount
sudo mount -t cifs //mystorageaccount.file.core.windows.net/myshare /mnt/azure \
  -o vers=3.0,username=mystorageaccount,password=<storage-key>,dir_mode=0777,file_mode=0777

Azure File Sync

Sync on-premises file servers with Azure Files:

  • Cache frequently accessed files locally
  • Cloud tiering frees local space
  • Multi-site sync
  • Backup integration

Components:

  • Storage Sync Service
  • Sync Group
  • Registered Server
  • Cloud Endpoint (Azure Files)
  • Server Endpoint (local folder)

Storage Security

Authentication Methods

| Method | Use Case |
| --- | --- |
| Storage Account Key | Full access, admin operations |
| Azure AD | User/app identity, RBAC |
| SAS Token | Delegated, time-limited access |
| Anonymous | Public read access (if enabled) |

Shared Access Signatures (SAS)

Grant limited access without sharing account keys:

Types:

  • Service SAS – Access to specific service (Blob, File, etc.)
  • Account SAS – Access to multiple services
  • User delegation SAS – Azure AD-based, most secure

# Azure CLI - Generate SAS token
end_date=$(date -u -d "1 day" '+%Y-%m-%dT%H:%MZ')

az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name uploads \
  --name document.pdf \
  --permissions r \
  --expiry $end_date \
  --https-only
# Azure PowerShell - Generate SAS token
$Context = (Get-AzStorageAccount -ResourceGroupName "az104-labs" -Name "mystorageaccount").Context
$EndTime = (Get-Date).AddDays(1)

New-AzStorageBlobSASToken -Container "uploads" -Blob "document.pdf" `
  -Context $Context -Permission r -ExpiryTime $EndTime -Protocol HttpsOnly

SAS URL format:

https://mystorageaccount.blob.core.windows.net/container/blob?sv=2021-06-08&se=2024-01-01...
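Each query parameter carries one piece of the grant: `sv` is the service version, `se` the expiry time, `sp` the permissions, and `sig` the HMAC signature. A quick way to break a token apart for inspection (the token values here are made up):

```shell
# Split a SAS query string into its key/value parts.
#   sv = service version    se = expiry (UTC)
#   sp = permissions        spr = allowed protocol
#   sig = HMAC-SHA256 signature (URL-encoded)
sas='sv=2021-06-08&se=2024-01-01T00:00Z&sp=r&spr=https&sig=FAKEsig123'

printf '%s' "$sas" | tr '&' '\n' | while IFS='=' read -r key value; do
  printf '%-4s= %s\n' "$key" "$value"
done
```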

Stored Access Policies

Define SAS parameters in a policy (can be revoked):

# Azure CLI - Create stored access policy
az storage container policy create \
  --account-name mystorageaccount \
  --container-name uploads \
  --name readpolicy \
  --permissions r \
  --expiry 2024-12-31T23:59Z

# Generate SAS using policy
az storage blob generate-sas \
  --account-name mystorageaccount \
  --container-name uploads \
  --name document.pdf \
  --policy-name readpolicy
# Azure PowerShell - Create stored access policy
$Context = (Get-AzStorageAccount -ResourceGroupName "az104-labs" -Name "mystorageaccount").Context
New-AzStorageContainerStoredAccessPolicy -Container "uploads" -Policy "readpolicy" `
  -Context $Context -Permission r -ExpiryTime "2024-12-31"

# Generate SAS using policy
New-AzStorageBlobSASToken -Container "uploads" -Blob "document.pdf" `
  -Context $Context -Policy "readpolicy"

Encryption

At rest:

  • All data encrypted by default (AES-256)
  • Microsoft-managed keys (default)
  • Customer-managed keys (BYOK via Key Vault)

In transit:

  • Require secure transfer (HTTPS)

# Azure CLI - Require HTTPS
az storage account update \
  --name mystorageaccount \
  --resource-group az104-labs \
  --https-only true
# Azure PowerShell - Require HTTPS
Set-AzStorageAccount -ResourceGroupName "az104-labs" -Name "mystorageaccount" `
  -EnableHttpsTrafficOnly $true

Network Security

Firewall rules:

# Azure CLI - Allow only specific VNet
az storage account network-rule add \
  --account-name mystorageaccount \
  --resource-group az104-labs \
  --vnet-name MyVNet \
  --subnet DataSubnet

# Set default action to deny
az storage account update \
  --name mystorageaccount \
  --resource-group az104-labs \
  --default-action Deny
# Azure PowerShell - Allow only specific VNet
$VNet = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "az104-labs"
$Subnet = Get-AzVirtualNetworkSubnetConfig -Name "DataSubnet" -VirtualNetwork $VNet
Add-AzStorageAccountNetworkRule -ResourceGroupName "az104-labs" -Name "mystorageaccount" `
  -VirtualNetworkResourceId $Subnet.Id

# Set default action to deny
Update-AzStorageAccountNetworkRuleSet -ResourceGroupName "az104-labs" `
  -Name "mystorageaccount" -DefaultAction Deny

[SCREENSHOT: Azure Portal – Storage account networking blade showing firewall and virtual network configuration]

Private endpoints:

# Azure CLI - Create private endpoint
az network private-endpoint create \
  --name StorageEndpoint \
  --resource-group az104-labs \
  --vnet-name MyVNet \
  --subnet DataSubnet \
  --private-connection-resource-id /subscriptions/.../storageAccounts/mystorageaccount \
  --group-id blob \
  --connection-name StorageConnection
# Azure PowerShell - Create private endpoint
$StorageAccount = Get-AzStorageAccount -ResourceGroupName "az104-labs" -Name "mystorageaccount"
$VNet = Get-AzVirtualNetwork -Name "MyVNet" -ResourceGroupName "az104-labs"
$Subnet = Get-AzVirtualNetworkSubnetConfig -Name "DataSubnet" -VirtualNetwork $VNet

$Connection = New-AzPrivateLinkServiceConnection -Name "StorageConnection" `
  -PrivateLinkServiceId $StorageAccount.Id -GroupId "blob"

New-AzPrivateEndpoint -Name "StorageEndpoint" -ResourceGroupName "az104-labs" `
  -Location "uksouth" -Subnet $Subnet -PrivateLinkServiceConnection $Connection

Data Protection

Soft Delete

Recover accidentally deleted data:

# Azure CLI - Enable soft delete for blobs
az storage account blob-service-properties update \
  --account-name mystorageaccount \
  --resource-group az104-labs \
  --enable-delete-retention true \
  --delete-retention-days 14
# Azure PowerShell - Enable soft delete for blobs
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName "az104-labs" `
  -StorageAccountName "mystorageaccount" -RetentionDays 14

Versioning

Keep previous versions of blobs:

# Azure CLI - Enable versioning
az storage account blob-service-properties update \
  --account-name mystorageaccount \
  --resource-group az104-labs \
  --enable-versioning true
# Azure PowerShell - Enable versioning
Update-AzStorageBlobServiceProperty -ResourceGroupName "az104-labs" `
  -StorageAccountName "mystorageaccount" -IsVersioningEnabled $true

Immutable Storage

WORM (Write Once, Read Many) for compliance:

  • Time-based retention – Cannot delete until retention expires
  • Legal hold – Cannot delete while hold is active

# Azure CLI - Set retention policy
az storage container immutability-policy create \
  --account-name mystorageaccount \
  --container-name compliance \
  --period 365
# Azure PowerShell - Set retention policy (management-plane cmdlet)
Set-AzRmStorageContainerImmutabilityPolicy -ResourceGroupName "az104-labs" `
  -StorageAccountName "mystorageaccount" -ContainerName "compliance" `
  -ImmutabilityPeriod 365

Practice Lab: Complete Storage Setup

# Azure CLI - Complete storage setup
# Create storage account
STORAGE_NAME="azlabstore$RANDOM"
az storage account create \
  --name $STORAGE_NAME \
  --resource-group az104-labs \
  --location uksouth \
  --sku Standard_ZRS \
  --kind StorageV2 \
  --access-tier Hot \
  --https-only true

# Create containers
az storage container create --account-name $STORAGE_NAME --name uploads --public-access off
az storage container create --account-name $STORAGE_NAME --name backups --public-access off

# Upload test file
echo "Test content" > testfile.txt
az storage blob upload \
  --account-name $STORAGE_NAME \
  --container-name uploads \
  --name testfile.txt \
  --file testfile.txt

# Create file share
az storage share create --account-name $STORAGE_NAME --name documents --quota 50

# Enable soft delete
az storage account blob-service-properties update \
  --account-name $STORAGE_NAME \
  --resource-group az104-labs \
  --enable-delete-retention true \
  --delete-retention-days 7

# Clean up test file
rm testfile.txt
# Azure PowerShell - Complete storage setup
# Create storage account
$StorageName = "azlabstore$(Get-Random)"
$StorageAccount = New-AzStorageAccount -ResourceGroupName "az104-labs" `
  -Name $StorageName -Location "uksouth" -SkuName "Standard_ZRS" `
  -Kind "StorageV2" -AccessTier "Hot" -EnableHttpsTrafficOnly $true

$Context = $StorageAccount.Context

# Create containers
New-AzStorageContainer -Name "uploads" -Context $Context -Permission Off
New-AzStorageContainer -Name "backups" -Context $Context -Permission Off

# Upload test file
"Test content" | Out-File -FilePath "testfile.txt"
Set-AzStorageBlobContent -File "testfile.txt" -Container "uploads" `
  -Blob "testfile.txt" -Context $Context

# Create file share
New-AzStorageShare -Name "documents" -Context $Context -QuotaGiB 50

# Enable soft delete
Enable-AzStorageBlobDeleteRetentionPolicy -ResourceGroupName "az104-labs" `
  -StorageAccountName $StorageName -RetentionDays 7

# Clean up test file
Remove-Item "testfile.txt"

Interview Questions

Q1: “Explain Azure Storage access tiers and when you’d use each.”

Good Answer: “Hot tier for frequently accessed data – low access cost but higher storage cost. Cool tier for data accessed less than once a month – higher access cost but lower storage. Cold tier for 90+ day retention with rare access. Archive tier for long-term retention where you can tolerate rehydration delay. I’d use hot for active application data, cool for recent backups, and archive for compliance archives. The key is lifecycle management to automatically move data through tiers based on age.”

Q2: “How would you secure a storage account?”

Good Answer: “Defense in depth. First, disable public access and use private endpoints to keep traffic on Azure backbone. Second, enable HTTPS-only to encrypt in transit. Third, use Azure AD authentication instead of storage keys where possible, and SAS tokens with short expiry for external access. Fourth, network rules to allow only specific VNets. Fifth, enable soft delete and versioning for accidental deletion recovery. Finally, use customer-managed keys if compliance requires it.”

Q3: “A team needs to share large files with external partners. How would you set this up?”

Good Answer: “I’d create a dedicated container for external sharing. Use SAS tokens with specific permissions – read-only if they just need to download, write if they need to upload. Set appropriate expiry based on business need – hours or days, not months. Use stored access policies so I can revoke access if needed by deleting the policy. Enable logging to track who accessed what. For better UX, consider Azure Blob Storage with a simple web front-end, or use Azure Storage Explorer links.”

Key Exam Points

  • GPv2 is the default – Supports all features
  • Replication abbreviations – LRS, ZRS, GRS, GZRS (know what each provides)
  • Access tiers – Hot, Cool, Cold, Archive (know minimum retention)
  • SAS types – Service, Account, User delegation
  • Soft delete – Container and blob level, separate settings
  • Private endpoints – Replace service endpoints for better security

Career Application

On your resume:

  • “Designed storage architecture with lifecycle management reducing costs by 40%”
  • “Implemented secure storage access patterns using SAS and private endpoints”
  • “Migrated 10TB of file shares to Azure Files with Azure File Sync”

Demonstrate:

  • Cost optimization awareness
  • Security-first thinking
  • Understanding of data lifecycle
  • Hybrid storage knowledge

Next Steps

Next in series: None – this post completes the 4-part Azure Administrator series

Related: Terraform Fundamentals – Automate storage deployment

Lab: Create storage account with lifecycle policy and private endpoint

Storage seems simple until you’re paying $10,000/month for hot tier on data nobody has accessed in a year. Design your storage strategy before you need it.

Related Guides

If you found this useful, these guides continue the journey:

The RTM Essential Stack - Gear I Actually Use
