The Skill That Separates Junior from Senior
“Can you script?” It comes up in every infrastructure interview I’ve sat on either side of. Not “can you code” — scripting and software development are different things. Scripting is automation. It’s taking the thing you just did manually and making sure a computer does it next time, the same way, without you having to remember the steps.
This guide covers both Bash and PowerShell side by side, because the reality of modern infrastructure is that you will touch both. Linux servers running your web stack, Windows servers running Active Directory, cloud platforms that speak both languages. If you only know one, you’re only useful in half the rooms. This isn’t about picking a side — it’s about being the person who can solve the problem regardless of what OS is on the screen.
Career Impact: Scripting is the single highest-leverage skill in infrastructure. Without it, every task is manual — you’re the human cron job, clicking through the same GUI 50 times. With it, you write the solution once and move on to interesting problems. It’s the gateway to DevOps, Cloud Engineering, and Platform roles paying £60-90k+. But more importantly, it’s the difference between going home at 5pm and being stuck at your desk running the same commands on the same 30 servers every Friday afternoon.
Why Scripting Matters (More Than You Think)
Let me paint a picture. You’re a junior sysadmin, three months in. Your manager asks you to check disk space on all 50 servers and report back which ones are above 80% usage. You SSH into the first one, run df -h, note down the numbers, move to the next. Forty minutes later, you’ve got a spreadsheet. Two days later, they ask you to do it again.
A script turns that into a one-liner that runs in seconds. That’s not a marginal improvement — that’s a fundamental shift in what you’re able to do with your time.
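To make that concrete, here is a minimal local sketch of the disk-check half of the job (the 80% threshold is illustrative; wrapping this in an SSH loop over a host list is one extra line):

```shell
#!/bin/bash
# One pass over `df` output, printing every mount above a threshold.
# Looping this over 50 servers via SSH turns a 40-minute job into seconds.
threshold=80
over=$(df -P | awk -v t="$threshold" 'NR > 1 { if ($5 + 0 >= t) print $6, $5 }')

if [ -n "$over" ]; then
    echo "Partitions over ${threshold}%:"
    echo "$over"
else
    echo "All partitions below ${threshold}%"
fi
```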
But it goes beyond saving time. Scripting gives you:
- Consistency — a script doesn’t forget step 4 because it’s Friday and you’re tired. It runs the same way every time.
- Auditability — when something goes wrong at 2am, there’s a log showing exactly what happened, not “I think I ran the right command.”
- Scalability — the difference between managing 5 servers and 500 is a loop. The work doesn’t increase linearly if you’ve scripted it.
- Recoverability — you can rebuild environments from scripts. Try rebuilding from memory after a disaster. I’ve seen it attempted. It’s not pretty.
Scripting is also how you prove your value. Any sysadmin can click through a GUI. The one who says “I automated that, it runs every night, here’s the report it generates” — that’s the one who gets promoted.
What Even Is a Script?
Strip away the mystique and a script is a text file with commands in it. That’s it. The same commands you’d type into a terminal one by one, saved in a file so you can run them all at once. If you’ve ever typed a command into a terminal, you already know the building blocks of scripting.
When you type ls -la to list files, that’s a command. When you put ten of those commands in a file and run the file, that’s a script. The file doesn’t contain magic — it contains the same instructions you’d give manually, in order, with some logic to handle different situations.
Your first scripts will be ugly. They’ll have hardcoded paths, no error handling, and comments that say things like “# not sure why this works but it does”. That’s fine. Everyone starts there. My first script was a Bash one-liner that checked if Apache was running and restarted it if not. It was about 4 lines, it had no logging, and it ran on a cron job for two years without anyone knowing. That’s scripting in the real world.
The two languages you’ll encounter most in infrastructure are Bash (Linux, macOS, WSL) and PowerShell (Windows, increasingly cross-platform). They solve the same problems with different syntax. Think of them as English and French — different grammar, same concepts, and you can say the same things in either one.
Quick Reference
Keep this table handy. When you’re switching between Linux and Windows systems in the same day (and you will), this is the mental translation layer.
| Task | Bash | PowerShell |
|---|---|---|
| List files | `ls -la` | `Get-ChildItem` |
| Read file | `cat file.txt` | `Get-Content file.txt` |
| Find text in files | `grep "text" *.txt` | `Select-String "text" *.txt` |
| Variable | `name="John"` | `$name = "John"` |
| Loop | `for i in 1 2 3; do echo $i; done` | `1..3 \| ForEach-Object { $_ }` |
| Conditional | `if [ $x -eq 1 ]; then` | `if ($x -eq 1) {` |
| Run command | `$(command)` | `$(command)` |
| Pipe | `cmd1 \| cmd2` | `cmd1 \| cmd2` |
Bash Fundamentals
Bash (Bourne Again Shell) is the default shell on most Linux distributions and macOS. If you manage Linux servers — and if you’re in infrastructure, you do — Bash is your daily driver. Every time you open a terminal on a Linux box, you’re already in Bash. Scripts are just a way to save what you type there.
Your First Script
Create a file called first-script.sh. The .sh extension is a convention, not a requirement — Linux doesn’t care about file extensions the way Windows does. But it tells other humans (and your text editor) what they’re looking at.
```bash
#!/bin/bash
# This is a comment
echo "Hello, World!"
```
Make it executable and run it:
```bash
chmod +x first-script.sh
./first-script.sh
```
The chmod +x step is one that trips up every beginner. Linux won’t run a file as a program unless you explicitly mark it as executable. It’s a security feature, not an inconvenience — you don’t want random text files being runnable by accident.
Pro Tip: The `#!/bin/bash` line is called a shebang. It tells the system which interpreter to use. Without it, the system might try to interpret your Bash script with a different shell (like `sh` or `dash`), and things that work in Bash won’t work there. Always include it as the first line. No exceptions.
Variables
A variable is a named container for data. Instead of hardcoding a value everywhere in your script, you store it in a variable and reference it by name. When the value changes — and it will — you change it in one place instead of hunting through 50 lines of code.
Real-world example: you write a script that deploys to a server. The server name appears in 12 places in your script. When you need to deploy to a different server, you change one variable at the top instead of doing find-and-replace and hoping you caught them all.
```bash
# Assigning variables (no spaces around =)
name="John"
count=42
today=$(date +%Y-%m-%d)

# Using variables
echo "Hello, $name"
echo "Today is $today"
echo "Count: $count"
```
Important: No spaces around `=` when assigning variables. `name = "John"` will fail, and the error message won’t tell you why — Bash interprets it as trying to run a command called `name` with `=` and `"John"` as arguments. This is the single most common Bash beginner mistake. You will make it. Probably more than once.
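You can see the failure mode for yourself in a few lines (the redirect just keeps the "command not found" noise out of the way):

```shell
#!/bin/bash
# With spaces, Bash parses the first word as a command name,
# so no assignment ever takes place and the line fails.
name = "John" 2>/dev/null || echo "failed with status $? - 'name' is not a command"

# The correct form, no spaces:
name="John"
echo "Assigned correctly: $name"
```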
Conditionals
Conditionals let your script make decisions. “If this is true, do this thing. Otherwise, do that thing.” Without conditionals, your script is just a list of commands that runs top to bottom with no ability to react to what it finds.
Think about checking if a config file exists before trying to read it, or checking if a service is running before trying to restart it. Without conditionals, your script would blindly try the operation and fail. With them, it checks first and handles the situation gracefully.
```bash
#!/bin/bash

file="/etc/passwd"

if [ -f "$file" ]; then
    echo "File exists"
elif [ -d "$file" ]; then
    echo "It's a directory"
else
    echo "File not found"
fi
```
Common Test Operators
Bash test operators look unusual if you’re used to other languages. The -f, -d, -eq syntax is inherited from the original Unix test command. You get used to it.
| Operator | Meaning |
|---|---|
| `-f` | file exists and is a regular file |
| `-d` | directory exists |
| `-z` | string is empty |
| `-n` | string is not empty |
| `-eq` | numeric equal |
| `-ne` | numeric not equal |
| `-gt` | numeric greater than |
| `-lt` | numeric less than |
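A quick tour of a few of these operators together — a numeric test, a string test, and a file test in one short script:

```shell
#!/bin/bash
# Each test sets a message only when the condition holds.
count=5
text=""

[ "$count" -gt 3 ] && result_num="count is greater than 3"
[ -z "$text" ] && result_str="text is empty"
[ -f /etc/passwd ] && result_file="/etc/passwd is a regular file"

echo "$result_num"
echo "$result_str"
echo "$result_file"
```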
Loops
Loops are where scripting becomes genuinely powerful. A loop says “do this thing for every item in this list.” That list might be three servers or three hundred. The script doesn’t care — it processes each one the same way.
This is the concept that takes you from “I can automate a task on one server” to “I can automate a task across the entire estate.” When someone asks “can you check all the web servers?” — you don’t check them one by one. You write a loop.
```bash
# For loop
for server in web01 web02 web03; do
    echo "Checking $server"
    ping -c 1 $server
done

# While loop
count=1
while [ $count -le 5 ]; do
    echo "Count: $count"
    ((count++))
done

# Loop through files
for file in /var/log/*.log; do
    echo "Processing: $file"
done
```
That last example — looping through files — is something you’ll use constantly. Processing log files, cleaning up old backups, checking config files across directories. The pattern is always the same: for each thing in this collection, do something with it.
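Here is the same pattern applied to backup retention, one of its most common uses. The temp directory and fake backup files make the sketch safe to run; swap in your real backup path and enable the delete once you trust the dry-run output (the `touch -d` trick assumes GNU coreutils):

```shell
#!/bin/bash
# Flag backup files older than 7 days. Echo first, delete later.
backup_dir=$(mktemp -d)
touch -d '10 days ago' "$backup_dir/old-backup.tar.gz"   # fake stale backup
touch "$backup_dir/fresh-backup.tar.gz"                   # fake recent backup

stale=$(find "$backup_dir" -name '*.tar.gz' -mtime +7)
while read -r file; do
    [ -n "$file" ] || continue
    echo "Would delete: $(basename "$file")"
    # rm -- "$file"   # enable after a dry run
done <<< "$stale"

rm -rf "$backup_dir"
```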
Functions
Functions let you name a block of code and reuse it. Instead of copying the same 10 lines in three places (and having to fix bugs in all three when you find one), you write it once as a function and call it by name.
Functions also make your scripts readable. A script that calls check_service nginx is immediately understandable. A script with the same logic pasted inline three times is not.
```bash
#!/bin/bash

# Define function
check_service() {
    local service=$1
    if systemctl is-active --quiet "$service"; then
        echo "$service is running"
        return 0
    else
        echo "$service is NOT running"
        return 1
    fi
}

# Call function
check_service nginx
check_service apache2
```
Note the `local` keyword. Without it, the `service` variable would be global — meaning if you called this function twice, the second call would overwrite the variable from the first call. In small scripts this doesn’t matter. In anything beyond 50 lines, it will cause bugs that are genuinely painful to track down.
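A tiny demonstration of the difference (the variable values are arbitrary):

```shell
#!/bin/bash
# Without `local`, a function silently overwrites the caller's
# variable of the same name.
service="nginx"

set_without_local() { service="apache2"; }      # writes the global
set_with_local()    { local service="mysql"; }  # scoped to the function

set_with_local
after_local="$service"      # untouched by the scoped assignment
set_without_local
after_global="$service"     # clobbered by the global assignment

echo "after local assignment:  $after_local"
echo "after global assignment: $after_global"
```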
Error Handling
By default, Bash does something that surprises most people: if a command fails, the script keeps running. No error, no warning, no stop. It just carries on to the next line as if nothing happened. This is fine when you’re typing commands interactively. In a script, it’s how you end up deleting the wrong directory because the cd command silently failed.
These three flags should be at the top of every script you write. Non-negotiable.
```bash
#!/bin/bash

# Exit on error
set -e

# Exit on undefined variable
set -u

# Fail on pipe errors
set -o pipefail

# Combined (recommended for scripts)
set -euo pipefail
```
The `set -e` flag alone would have prevented half the “script did something unexpected” incidents I’ve seen in my career. Add it. Always.
PowerShell Fundamentals
If Bash is the language of Linux, PowerShell is the language of Windows infrastructure. But it’s more than that — PowerShell is fundamentally different in philosophy. Where Bash pipes text between commands (everything is a string), PowerShell pipes objects. That means when you get output from a PowerShell command, it’s structured data with properties you can access, not just lines of text you need to parse with awk and grep.
This matters less when you’re starting out and more when you’re building complex automation. For now, just know that both languages will get the job done — they just think about data differently.
Your First Script
Create a file called first-script.ps1. Unlike Bash, PowerShell does care about the .ps1 extension — it’s how Windows knows this is a PowerShell script.
```powershell
# This is a comment
Write-Output "Hello, World!"
```
Run it:
```powershell
.\first-script.ps1
```
Note: You’ll likely hit an execution policy error the first time. PowerShell’s default policy blocks script execution for security. Run `Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser` to allow locally-created scripts. This is a one-time setup — you won’t need to do it again on the same machine.
Variables
PowerShell variables work the same way conceptually — named containers for data. The syntax is slightly different: every variable starts with $, and you can have spaces around the = sign (unlike Bash, where spaces break everything). PowerShell is more forgiving about formatting in general.
```powershell
# Assigning variables
$name = "John"
$count = 42
$today = Get-Date -Format "yyyy-MM-dd"

# Using variables
Write-Output "Hello, $name"
Write-Output "Today is $today"
Write-Output "Count: $count"
```
Conditionals
PowerShell conditionals use curly braces instead of then/fi, which will feel more natural if you’ve seen any C-style language. The comparison operators (-eq, -gt, etc.) are the same as Bash — Microsoft adopted the same convention, which makes switching between the two slightly less painful.
```powershell
$file = "C:\Windows\System32\drivers\etc\hosts"

if (Test-Path $file -PathType Leaf) {
    Write-Output "File exists"
} elseif (Test-Path $file -PathType Container) {
    Write-Output "It's a directory"
} else {
    Write-Output "File not found"
}
```
Comparison Operators
PowerShell gives you a few extras beyond what Bash offers natively — -like for wildcard matching and -match for regex are particularly useful when you’re filtering data.
| Operator | Meaning |
|---|---|
| `-eq` | equal |
| `-ne` | not equal |
| `-gt` | greater than |
| `-lt` | less than |
| `-ge` | greater than or equal |
| `-le` | less than or equal |
| `-like` | wildcard match |
| `-match` | regex match |
Loops
PowerShell gives you more loop options than Bash. The foreach loop and the pipeline ForEach-Object look similar but behave differently — foreach loads everything into memory first, while ForEach-Object processes items one at a time as they flow through the pipeline. For 10 servers, it doesn’t matter. For 10,000 AD user objects, it matters a lot.
```powershell
# ForEach loop
$servers = @("web01", "web02", "web03")
foreach ($server in $servers) {
    Write-Output "Checking $server"
    Test-Connection -ComputerName $server -Count 1
}

# For loop
for ($i = 1; $i -le 5; $i++) {
    Write-Output "Count: $i"
}

# While loop
$count = 1
while ($count -le 5) {
    Write-Output "Count: $count"
    $count++
}

# Pipeline ForEach
1..5 | ForEach-Object { Write-Output "Number: $_" }
```
Functions
PowerShell functions are more structured than Bash functions. The `param()` block lets you define parameters with types and validation — which means PowerShell will reject bad input before your code even runs. The `[Parameter(Mandatory=$true)]` attribute means the function will prompt for the value if you forget to provide it, rather than silently using an empty string and doing something unexpected.
```powershell
function Test-ServiceStatus {
    param(
        [Parameter(Mandatory=$true)]
        [string]$ServiceName
    )

    $service = Get-Service -Name $ServiceName -ErrorAction SilentlyContinue

    if ($service -and $service.Status -eq 'Running') {
        Write-Output "$ServiceName is running"
        return $true
    } else {
        Write-Output "$ServiceName is NOT running"
        return $false
    }
}

# Call function
Test-ServiceStatus -ServiceName "Spooler"
Test-ServiceStatus -ServiceName "FakeService"
```
Error Handling
PowerShell uses try/catch, which is the same pattern you’ll find in C#, Python, and most modern languages. The `-ErrorAction Stop` part is important — by default, PowerShell treats many errors as “non-terminating,” meaning they write an error message but keep running. `-ErrorAction Stop` forces the error into the catch block so you can actually handle it.
The finally block runs regardless of whether the try succeeded or failed. Use it for cleanup — closing connections, releasing files, that sort of thing.
```powershell
try {
    # Code that might fail
    Get-Content -Path "C:\nonexistent\file.txt" -ErrorAction Stop
} catch {
    Write-Output "Error: $($_.Exception.Message)"
} finally {
    Write-Output "Cleanup code runs regardless"
}
```
Side-by-Side Comparison
This is where it comes together. Same problem, two solutions. When you’re working in a hybrid environment — and most enterprise environments are hybrid — being able to translate between Bash and PowerShell in your head is a genuine advantage. These examples show the same real-world tasks solved in both languages.
List Files Older Than 7 Days
Log rotation, temp file cleanup, backup retention — you’ll find old files that need dealing with constantly. This is one of the first things you’ll automate.
| Bash | PowerShell |
|---|---|
| `find /var/log -type f -mtime +7 -name "*.log"` | `Get-ChildItem -Path C:\Logs -Filter *.log \| Where-Object { $_.LastWriteTime -lt (Get-Date).AddDays(-7) }` |
Notice how the Bash version is more concise — find is a purpose-built tool. The PowerShell version is more verbose but arguably more readable. Neither is better. They’re different tools for different ecosystems.
Check if Service is Running, Start if Not
This is the script you’ll write in your first week as a sysadmin and still be using variations of five years later. Service goes down, script brings it back up. Simple, effective, and it buys you time to investigate the root cause without users screaming.
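The pattern itself is short: query the service state, then branch on the answer. A minimal Bash sketch, with the service name as a placeholder (the PowerShell twin uses `Get-Service` and `Start-Service` in the same shape):

```shell
#!/bin/bash
# Check-and-restart pattern: query state, act only if needed.
service="nginx"   # placeholder service name

if systemctl is-active --quiet "$service" 2>/dev/null; then
    status="$service is running"
else
    status="$service is not running - attempting restart"
    systemctl start "$service" 2>/dev/null || status="$status (restart failed)"
fi
echo "$status"
```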
Check Multiple Servers
Here’s the loop concept in action. Monday morning, something’s wrong, and you need to know which of your servers are responding. You could ping them one by one, or you could run this and have the answer in seconds. Put it on a scheduled task or cron job and you’ve got basic monitoring without installing a thing.
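A minimal Bash sketch of that fan-out check, with placeholder hostnames (one ping per host, short timeout, a clean UP/DOWN summary; the PowerShell version swaps in `Test-Connection`):

```shell
#!/bin/bash
# Ping each host once and report reachability.
servers=("web01" "web02" "web03")   # placeholder hostnames
results=""

for server in "${servers[@]}"; do
    if ping -c 1 -W 2 "$server" >/dev/null 2>&1; then
        results="$results$server: UP\n"
    else
        results="$results$server: DOWN\n"
    fi
done

printf "%b" "$results"
```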
Practical Scripts
Theory is important, but scripts exist to solve problems. These are the kinds of scripts you’ll actually write in your first year — monitoring disk space so you don’t run out at 3am, and auditing user accounts so stale credentials don’t become a security hole. Each one combines variables, conditionals, loops, and functions into something genuinely useful.
Script 1: Disk Space Alert (Bash)
Disks filling up is one of the most common causes of service outages. Databases crash, logs stop writing, applications fail in unexpected ways. This script checks every partition and sends an email when usage crosses a threshold. Put it on a cron job running every hour and you’ll catch problems before they become incidents.
```bash
#!/bin/bash
# Check disk space and alert if over threshold

threshold=80
email="[email protected]"

df -h | grep -vE '^Filesystem|tmpfs' | awk '{print $5 " " $1}' | while read output; do
    usage=$(echo $output | awk '{print $1}' | tr -d '%')
    partition=$(echo $output | awk '{print $2}')
    if [ $usage -ge $threshold ]; then
        echo "WARNING: $partition at ${usage}% usage" | mail -s "Disk Alert: $(hostname)" $email
    fi
done
```
Look at how this combines everything: a variable for the threshold, a loop to iterate through partitions, a conditional to check each one, and command substitution to get the hostname. This is what scripting looks like in practice — small building blocks assembled into something useful.
Script 2: Disk Space Alert (PowerShell)
Same problem, Windows side. Notice how PowerShell’s object-oriented approach means we don’t need to parse text output — `Get-WmiObject` gives us structured data with named properties we can work with directly. No awk, no grep, no text parsing. (On PowerShell 7+, swap in `Get-CimInstance` — `Get-WmiObject` only exists in Windows PowerShell 5.1.)
```powershell
# Check disk space and alert if over threshold

$threshold = 80
$email = "[email protected]"

Get-WmiObject Win32_LogicalDisk -Filter "DriveType=3" | ForEach-Object {
    $drive = $_.DeviceID
    $freePercent = [math]::Round(($_.FreeSpace / $_.Size) * 100, 2)
    $usedPercent = 100 - $freePercent

    if ($usedPercent -ge $threshold) {
        $body = "WARNING: $drive at $usedPercent% usage on $env:COMPUTERNAME"
        # Send-MailMessage -To $email -Subject "Disk Alert" -Body $body -SmtpServer "smtp.company.com"
        Write-Output $body
    }
}
```
Script 3: User Account Audit (Bash)
Security teams love this one. Stale accounts — users who haven’t logged in for weeks or months — are a risk. They’re either ex-employees whose access was never revoked (it happens more than you’d think) or shared accounts that nobody owns. This script flags them so someone can investigate.
```bash
#!/bin/bash
# Flag accounts that have never logged in
# (extend with date arithmetic to also catch 30-day-stale logins)

echo "Users who have never logged in:"
echo "==============================="

for user in $(cut -d: -f1 /etc/passwd); do
    last_login=$(lastlog -u "$user" 2>/dev/null | tail -1 | awk '{print $4, $5, $6, $9}')
    if [[ "$last_login" == *"Never"* ]] || [[ -z "$last_login" ]]; then
        echo "$user - Never logged in"
    fi
done
```
Script 4: User Account Audit (PowerShell)
The Active Directory version. In a Windows domain environment, this is the one you’ll run. Note the pipeline approach — get users, filter them, select the fields you care about, sort, format. Each step narrows down the data. This is PowerShell at its best: readable, logical, and processing structured objects rather than parsing text.
```powershell
# List AD users who haven't logged in for 30 days

$threshold = (Get-Date).AddDays(-30)

Get-ADUser -Filter {Enabled -eq $true} -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt $threshold -or $null -eq $_.LastLogonDate } |
    Select-Object Name, SamAccountName, LastLogonDate |
    Sort-Object LastLogonDate |
    Format-Table -AutoSize
```
Best Practices (Lessons From Getting It Wrong)
These aren’t theoretical guidelines. Every one of these comes from a real situation where not following the practice caused a real problem. Some of them were my problems.
Both Languages
- Use comments — explain the why, not the what. `# Loop through servers` is useless — I can see it’s a loop. `# Check each web server is responding after the load balancer change` tells me why this exists. Six months from now, you won’t remember why you wrote the script. The comments are for future you.
- Handle errors — don’t assume success. I’ve seen a backup script that ran for 8 months without anyone noticing it had stopped working. It “ran” every night — the cron job fired — but the backup command inside was failing silently. Eight months of no backups.
- Use meaningful variable names. `$serverList` not `$sl`. `$backupPath` not `$bp`. You’ll thank yourself when you’re debugging at midnight.
- Test in safe environments. Never test in production. If you don’t have a test environment, a VM on your laptop counts. A Docker container counts. Anything that isn’t the server running real workloads.
- Version control your scripts. Git is your friend. When the script breaks after a change, `git diff` shows you exactly what changed. When someone asks “who modified the deployment script?” — `git log` has the answer.
- Document parameters. A script that requires 3 arguments but doesn’t tell you what they are is a script nobody else can use. Add a usage message. Future colleagues will appreciate it.
Bash Specific
- Always quote variables — `"$var"` prevents word splitting. An unquoted variable containing a filename with spaces will ruin your day.
- Use shellcheck — it’s a linting tool that catches common mistakes before they bite you. Install it, run it, trust it.
- Start with `set -euo pipefail` — strict mode. Already covered, but it bears repeating. This is not optional.
- Use `[[` instead of `[` — double brackets support more features and have fewer gotchas. Single brackets are legacy syntax.
- Prefer `$(command)` over backticks — backticks work but don’t nest cleanly. `$()` does.
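Two of those gotchas in one runnable example (the filename is arbitrary; the point is the space in it):

```shell
#!/bin/bash
# Word splitting in one example: [[ ]] tolerates the unquoted
# variable, [ ] needs the quotes to see one filename, not two words.
file="my report.txt"
touch "$file"

[[ -f $file ]] && msg1="double brackets cope with the space"
[ -f "$file" ] && msg2="single brackets work when quoted"

echo "$msg1"
echo "$msg2"
rm -- "$file"
```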
PowerShell Specific
- Use Verb-Noun naming — `Get-ServerStatus` not `CheckServers`. PowerShell has a list of approved verbs (`Get-Verb`). Follow the convention and your functions integrate naturally with the ecosystem.
- Use `[CmdletBinding()]` — one line at the top of your function that enables common parameters like `-Verbose` and `-Debug` for free. No reason not to.
- Prefer pipelines — more memory-efficient than loading everything into a collection and looping through it. For large datasets, this is the difference between a script that works and one that exhausts memory.
- Use `-WhatIf` and `-Confirm` — for anything that changes state. Your script should support dry runs. This is how you test destructive operations safely.
- Output objects, not text — let PowerShell handle formatting. If you output objects, users can pipe your function into `Export-Csv`, `Format-Table`, or `ConvertTo-Json` without modifying your code.
Interview Questions You’ll Face
Scripting comes up in almost every infrastructure interview above helpdesk level. Here are the questions I’ve seen most often, with answers that demonstrate experience rather than just knowledge.
“Tell me about a script you’ve written that saved time.”
“I wrote a Bash script to automate server health checks that ran via cron every hour. It checked disk space, memory usage, and key services, then sent a Slack alert if anything was out of threshold. What used to be a 15-minute manual check across 20 servers became automated, and we caught issues before users noticed them.”
The key here is quantifying the impact. “Saved time” is vague. “Reduced a 15-minute manual process across 20 servers to zero manual effort” is specific and memorable.
“How would you approach automating a repetitive task?”
“First, I’d document the manual steps exactly. Then identify which parts are actually repetitive versus which need human judgment. I’d start with a simple script that handles the core repetition, test it thoroughly in a dev environment, add error handling, and finally deploy it with proper logging so I can troubleshoot if something goes wrong.”
This answer shows methodology, not just ability. Anyone can say “I’d write a script.” The approach — document, identify, build, test, harden, deploy — shows you’ve done it before and learned from getting it wrong.
“Write a script to find files modified in the last 24 hours.”
Bash: `find /var/log -type f -mtime -1`

PowerShell: `Get-ChildItem -Path C:\Logs -Recurse | Where-Object { $_.LastWriteTime -gt (Get-Date).AddDays(-1) }`
“Though in practice, I’d add output formatting and potentially export to CSV for reporting — the raw output is useful for you on the command line, but stakeholders want something they can read in a spreadsheet.”
Key Exam Points
If you’re studying for certifications (Linux+, RHCSA, AZ-104, or similar), these are the scripting fundamentals that get tested:
- Bash shebang — always include `#!/bin/bash` and know why
- Variable syntax — Bash: no spaces around `=`, no `$` when assigning. PowerShell: use `$` everywhere
- Error handling — Bash: `set -euo pipefail`; PowerShell: try/catch with `-ErrorAction Stop`
- Testing operators — know the differences between string and numeric tests in both languages
- Pipeline usage — both languages excel at chaining commands, but Bash pipes text while PowerShell pipes objects
Career Application
On your CV/resume: Don’t just list “Bash” and “PowerShell” under skills. That tells a hiring manager nothing. Instead, describe what you did with them. Quantify wherever possible — hours saved, servers managed, processes automated. Here are examples that work:
- “Automated server provisioning with Bash, reducing deployment time from 2 hours to 15 minutes”
- “Developed PowerShell scripts for AD user lifecycle management across 500+ accounts”
- “Created monitoring scripts reducing mean time to detection by 60%”
In your homelab: Every script you write for your homelab is portfolio material. A GitHub repository of well-commented, working scripts demonstrates more than any certification. Disk space monitoring, backup automation, service health checks — these are the same problems you solve in enterprise, just at a smaller scale.
In interviews: When they ask “can you script?” — don’t just say yes. Walk them through a specific example. The problem, your approach, the result. That’s what separates “I’ve read about scripting” from “I’ve used scripting to solve real problems.”
Next Steps
- Next: Cron & Task Scheduler — schedule your scripts to run automatically
- Related: Ansible Fundamentals — configuration management at scale
The best script is the one that runs while you sleep. Start small — automate one thing that annoys you. Then another. Then another. Before long, you’ll wonder how you ever worked without it.

ReadTheManual is run, written and curated by Eric Lonsdale.
Eric has over 20 years of professional experience in IT infrastructure, cloud architecture, and cybersecurity, but started with PCs long before that.
He built his first machine from parts bought off tables at the local college campus, hoping they worked. He learned on BBC Micros and Atari units in the early 90s, and has built almost every PC he’s used between 1995 and now.
From helpdesk to infrastructure architect, Eric has worked across enterprise datacentres, Azure environments, and security operations. He’s managed teams, trained engineers, and spent two decades solving the problems this site teaches you to solve.
ReadTheManual exists because Eric believes the best way to learn IT is to build things, break things, and actually read the manual. Every guide on this site runs on infrastructure he owns and maintains.
