Wednesday, March 25, 2026

PowerShell Remoting for SQL DBAs: WinRM + SSH Guide (Updated 2026)

Originally published in 2023, this post never made it into Google's index, so I'm republishing it with key updates for PowerShell 7+ cross-platform support, HTTPS WinRM best practices (port 5986), and other modern enhancements.

With that said, PowerShell remoting lets you run commands and scripts on remote computers as if you were sitting in front of them. For DBAs and system admins, it's one of the most powerful tools for automating routine management across many servers. 

But I wouldn't call it a core or required skill for DBAs. Essential skills like query tuning, backup/restore, Always On AGs, and HA/DR rank higher. PowerShell remoting matters more for enterprise DBAs managing hundreds of servers, but it's not a must-have for everyone.

This guide introduces PowerShell remoting using both WinRM (classic Windows remoting) and SSH (cross-platform), with examples, security notes, and troubleshooting tips.


What PowerShell Remoting Is

PowerShell remoting is a mechanism that sends PowerShell commands and scripts from your machine to one or more remote machines, executes them there, and returns results to you.

There are two main transports you will see today:

WinRM-based remoting:

  • Uses Windows Remote Management (WinRM), HTTP/HTTPS on ports 5985/5986.
  • Built into Windows PowerShell 2.0+ and supported in PowerShell 7+ on Windows.
SSH-based remoting:
  • Uses OpenSSH (TCP port 22 by default).
  • Cross-platform: Windows, Linux, macOS.
  • Supported natively in PowerShell 7+.

Both approaches allow:

  • 1 to 1 interactive sessions.
  • 1 to many fan-out (run once, execute on many machines).
  • Session reuse for multiple commands, which is important for performance.

Why Even Care About SSH Remoting?

Windows-only SQL Server DBAs usually stick with WinRM + PowerShell ISE, and that works great in domain environments. No argument there.

But here's why PowerShell 7+ SSH deserves a look, even for pure Windows shops:

  • Port 22 sails through firewalls (unlike WinRM's 5985/5986)  
  • Perfect for jump boxes and Azure Arc SQL  
  • No GPO fights over WinRM config  
  • PowerShell 7+ brings faster module loading and true parallel `Invoke-Command`, though VS Code's learning curve can feel steep coming from SSMS's polished GUI.

The reality: SQL Server on Linux is real now. According to some resources, roughly 20-30% of new deployments are Azure SQL, containers, RHEL/Ubuntu, and the like.

Bottom line: Use WinRM/PowerShell 5.1 for classic Windows domains. SSH/PS7+ for everything else. Use what matches your environment.
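To make the "true parallel" point above concrete, here is a minimal PowerShell 7+ sketch that pings a set of servers concurrently with ForEach-Object -Parallel (this parameter does not exist in Windows PowerShell 5.1; the server names are placeholders):

```powershell
# PowerShell 7+ only: -Parallel is not available in Windows PowerShell 5.1.
# SERVER1..SERVER3 are placeholder names - substitute your own.
$servers = 'SERVER1', 'SERVER2', 'SERVER3'

$servers | ForEach-Object -Parallel {
    [pscustomobject]@{
        Server    = $_
        Reachable = Test-Connection -TargetName $_ -Count 1 -Quiet
    }
} -ThrottleLimit 5
```

-ThrottleLimit caps how many servers are probed at once, which matters when your list grows from 3 names to 300.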



Requirements and Versions


For PowerShell versions 5.1 and below

  • Installed by default on Windows Server and Windows 10.
  • Uses WinRM remoting by default.

For PowerShell versions 7 and up

  • Cross-platform and installed separately.
  • Supports both WinRM and SSH remoting.
  • Has its own remoting endpoints and configuration, independent from Windows PowerShell.


Additionally, for WinRM-based remoting to work:
  • WinRM must be enabled and configured on the remote computer.
  • The firewall must allow WinRM (HTTP/HTTPS) traffic.
  • Authentication must be possible (domain trust, certificates, or explicit credentials).
  • Name resolution must work (DNS/hosts), or you must use IP plus TrustedHosts.


Enabling WinRM Remoting on Servers (Windows Server)

On Windows Server versions, WinRM is usually installed and often already configured, especially in domain environments. To explicitly configure:

# Run in an elevated PowerShell session on the server
Enable-PSRemoting -Force

What this does:

  • Starts and configures the WinRM service.
  • Creates the default PowerShell endpoint.
  • Configures HTTP listener (port 5985).
  • Creates firewall rules for WinRM.

If your organization enforces WinRM policies via Group Policy, those settings may override this configuration. In that case, work with your AD/GPO administrators.
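Since this post promises HTTPS WinRM best practices, here is a hedged sketch of setting up an HTTPS listener on port 5986. The self-signed certificate is for lab testing only; production should use a CA-issued certificate, and the firewall rule name is arbitrary:

```powershell
# Lab-only sketch: self-signed certificate for testing.
# Production should use a certificate from your internal CA.
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME `
        -CertStoreLocation Cert:\LocalMachine\My

# Create the HTTPS listener bound to that certificate
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * `
         -CertificateThumbPrint $cert.Thumbprint -Force

# Open port 5986 inbound
New-NetFirewallRule -DisplayName 'WinRM HTTPS (5986)' -Direction Inbound `
                    -Protocol TCP -LocalPort 5986 -Action Allow
```

Clients then connect with `Invoke-Command -ComputerName SERVER1 -UseSSL -Credential $cred -ScriptBlock { ... }`; with a self-signed certificate you may also need a session option that skips CA checks, which is exactly why a real certificate is preferable.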

Enabling WinRM Remoting on Clients (Windows 10/11)

On Windows 10/11, WinRM is present but often disabled by default:

# Run as Administrator
Enable-PSRemoting -Force

If this fails or is restricted, you can also check with:

winrm quickconfig

This tool configures the WinRM service and its HTTP listener, but the PowerShell remoting endpoints themselves still require Enable-PSRemoting.

Quick Test: Ping a Remote Endpoint

Before running real commands, verify that remoting is working on the remote computer:

Test-WsMan -ComputerName SERVER1


Basic WinRM Remoting Examples


Example 1: Run a Simple Command

Invoke-Command -ComputerName SERVER1 -ScriptBlock {
    Get-Service -Name WinRM
}

-ComputerName can accept a single name or an array of names. -ScriptBlock contains the code that runs on the remote machine.

Example 2: Run on Multiple Servers

$servers = 'SERVER1', 'SERVER2', 'SERVER3'

Invoke-Command -ComputerName $servers -ScriptBlock {
    Get-Service -Name Spooler
}

This is one of the biggest wins of remoting: a single command targeting many servers.

Example 3: Use Explicit Credentials

$cred = Get-Credential

Invoke-Command -ComputerName SERVER1 -Credential $cred -ScriptBlock {
    Get-Process
}

This prompts for a username and password, which PowerShell uses to authenticate to the remote system.

Using Persistent Sessions

Creating and reusing a PowerShell session (PSSession) is more efficient than setting up a new connection for every command. It is also necessary for some workflows, like copying files or maintaining state.

Example 4: Create and Use a PSSession

$session = New-PSSession -ComputerName SERVER1

Invoke-Command -Session $session -ScriptBlock {
    Get-ChildItem C:\Temp
}

# Run another command in the same session
Invoke-Command -Session $session -ScriptBlock {
    Get-Date
}

# Always clean up when done
Remove-PSSession $session

A PSSession keeps the remote environment alive across multiple commands.
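One gotcha worth showing with persistent sessions: local variables are not visible inside the remote script block. A minimal sketch using the $using: scope modifier (SERVER1 and the path are placeholders):

```powershell
$path = 'C:\Temp'
$session = New-PSSession -ComputerName SERVER1

# $using: injects the local variable's value into the remote scope;
# plain $path would be undefined on SERVER1.
Invoke-Command -Session $session -ScriptBlock {
    Get-ChildItem -Path $using:path
}

Remove-PSSession $session
```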

Example 5: Copy a File to a Remote Server

$session = New-PSSession -ComputerName SERVER1

Copy-Item -Path "C:\Scripts\Deploy.ps1" `
          -Destination "C:\Admin\Deploy.ps1" `
          -ToSession $session

Remove-PSSession $session

This uses the WinRM channel to transfer the file rather than SMB, which can be useful in locked-down environments.

Example 6: Enter an Interactive Remote Session

Enter-PSSession -ComputerName SERVER1

# Prompt changes to something like:
# [SERVER1]: PS C:\Users\Admin>

# Run commands directly on SERVER1
Get-Process
Get-Service

# Exit the remote session
Exit-PSSession

This is a powerful replacement for RDP in many administrative scenarios.



Example 7: Check Database Backup Status Across SQL Servers

# Shows last full backup date for every database on every SQL Server.
$servers =  @("SQL01\Instance01", "SQL02\Instance02", "SQL03\Instance03")

foreach($server in $servers) {
    $status = Invoke-Sqlcmd -ServerInstance $server -Database "msdb" -Query @"
        SELECT 
            database_name AS DatabaseName,
            MAX(backup_finish_date) AS LastFullBackup
        FROM msdb.dbo.backupset 
        WHERE type = 'D'
        GROUP BY database_name
"@

    $status | Select-Object @{Name="SQLServer";Expression={$server}}, 
              DatabaseName, 
              @{Name="LastFullBackup";Expression={$_.LastFullBackup.ToString("yyyy-MM-dd HH:mm")}}
}


Example 8: Restart Stuck SQL Agent Jobs Remotely

# Finds Agent jobs that failed since yesterday and restarts them automatically.
$SQLServerInstance = "SQL01\Instance01"
$failedJobs = Invoke-Sqlcmd -ServerInstance $SQLServerInstance -Database "msdb" -Query @"
    SELECT j.name AS JobName
    FROM msdb.dbo.sysjobs AS j
    INNER JOIN msdb.dbo.sysjobhistory AS h ON j.job_id = h.job_id
    WHERE h.run_status = 0
    AND h.step_id = 0
    AND h.instance_id = 
         (SELECT MAX(instance_id) FROM msdb.dbo.sysjobhistory
            WHERE job_id = h.job_id)
    AND h.run_date >= CONVERT(INT, CONVERT(VARCHAR(8), GETDATE() - 1, 112));
"@

foreach($job in $failedJobs) {
    Write-Output "Starting Job: $($job.JobName)"
    Invoke-Sqlcmd -ServerInstance $SQLServerInstance -Database "msdb" -Query @"
        EXEC msdb.dbo.sp_start_job @job_name = '$($job.JobName)'
"@
    Write-Output "Started Job: $($job.JobName)"
}



Common Issues and Troubleshooting


1. Access Denied

Symptoms:

  • Access is denied.
  • The client cannot connect to the destination specified in the request.

Checks:

  • Are you running the console as Administrator?
  • Does the account have permissions to connect via WinRM on the remote machine?
  • Are there group policies restricting WinRM or PowerShell endpoints?


2. Network or Name Resolution Problems

Symptoms:

  • WinRM cannot complete the operation.
  • The client cannot connect to the destination specified in the request.

Checks:

  • Test basic reachability:
    Test-Connection SERVER1
    Test-WsMan SERVER1
  • Check DNS: does the hostname resolve correctly?
  • Inspect firewall rules on both client and server.


3. TrustedHosts and Workgroup Machines

On non-domain machines, or when using IP addresses, you may need to configure TrustedHosts:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "SERVER1,SERVER2" -Force

Use wildcards or ranges only when you understand the security implications.
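Note that Set-Item overwrites the existing TrustedHosts list. If you need to add a server without clobbering what is already there, a small sketch (SERVER3 is a placeholder):

```powershell
# -Concatenate appends to the existing TrustedHosts list
# instead of replacing it.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value 'SERVER3' -Concatenate -Force

# Verify the resulting list
Get-Item WSMan:\localhost\Client\TrustedHosts
```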

4. Double-Hop Problem

If a remote session needs to access another network resource (for example, SQL Server or a file share from within the remote script), Kerberos may not delegate credentials by default. Solutions include:

  • Using CredSSP (with caution).
  • Using Kerberos constrained delegation.
  • Using alternate credentials explicitly within the script for downstream access.
  • Running the script from a system that has direct access instead of hopping.
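For the CredSSP option, a hedged sketch of the client/server setup and a second-hop call (server and share names are placeholders; remember this delegates your full credentials, so scope -DelegateComputer as narrowly as possible):

```powershell
# Client side: allow delegating credentials to one specific server only
Enable-WSManCredSSP -Role Client -DelegateComputer 'SERVER1' -Force

# Server side (run on SERVER1, shown here as a comment):
# Enable-WSManCredSSP -Role Server -Force

$cred = Get-Credential
Invoke-Command -ComputerName SERVER1 -Authentication CredSSP -Credential $cred -ScriptBlock {
    # The second hop now works, e.g. reaching a file share from SERVER1
    Test-Path '\\FILESERVER\Backups'
}
```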

Security Considerations

Security is critical when enabling remoting.

Key points:

  • Prefer HTTPS for WinRM (port 5986) in untrusted or internet-facing scenarios.
  • Use Kerberos authentication within a domain whenever possible; it provides mutual authentication and strong security.
  • Limit who can use remoting:
    • Use groups instead of individual user accounts.
    • Use constrained endpoints or Just Enough Administration (JEA) so users only have the rights they need.
  • Be careful with CredSSP: it can solve double-hop issues but introduces credential-delegation risk. Use it only when necessary and within tightly controlled environments.

If you must use HTTP (5985), keep traffic on internal trusted networks and consider IPsec or VPN protections.



PowerShell Remoting Over SSH

While WinRM is excellent for Windows domain environments, SSH-based remoting is often easier across platforms and through firewalls.

Requirements

  • PowerShell 7 or later on both client and server.
  • OpenSSH server configured on the remote host.
  • PowerShell 7 set as a subsystem or command in SSH configuration.

On Windows, install OpenSSH server and enable it, then configure sshd_config to allow a PowerShell endpoint, for example:

# In sshd_config (simplified example)
Subsystem powershell C:/PROGRA~1/PowerShell/7/pwsh.exe -sshs -NoLogo -NoProfile

On Windows, the sshd_config file is located at C:\ProgramData\ssh\sshd_config. Note the 8.3 short name (PROGRA~1): the Windows sshd does not handle spaces in the subsystem path, so "Program Files" must be avoided.
On Linux/Unix/macOS, the file is located at /etc/ssh/sshd_config.

Restart the SSH service after changes:

# Linux/macOS
sudo systemctl restart sshd

# Windows
Restart-Service sshd


Example 9: SSH Remoting with PowerShell 7

$cred = Get-Credential  # user on the SSH server

$session = New-PSSession `
    -HostName "linux01.contoso.com" `
    -UserName $cred.UserName `
    -SSHTransport

Invoke-Command -Session $session -ScriptBlock {
    $PSVersionTable
}

Remove-PSSession $session

On Windows-to-Windows over SSH, the pattern is the same—just change the host.

SSH remoting allows you to use the same PowerShell syntax to manage Linux and Windows, which is especially attractive for hybrid SQL/Oracle shops.
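In practice you will want key-based authentication instead of typing a password per connection. A sketch, assuming the host name and user name are placeholders and you have already added the public key to the server's authorized_keys:

```powershell
# Generate a key pair on the client (one-time); ssh-keygen ships with OpenSSH
ssh-keygen -t ed25519 -f ~/.ssh/id_ed25519

# After copying the public key to the server's authorized_keys,
# connect without a password prompt. -KeyFilePath is optional when
# the key sits in the default location.
$session = New-PSSession -HostName 'linux01.contoso.com' -UserName 'dbadmin' `
                         -KeyFilePath '~/.ssh/id_ed25519' -SSHTransport

Invoke-Command -Session $session -ScriptBlock { uname -a }
Remove-PSSession $session
```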



Best Practices for Modern Environments

  • Standardize on PowerShell 7+ for new automation where possible, especially in mixed OS environments.
  • Use HTTPS WinRM listeners and/or SSH for secure remote management.
  • Use PSSessions and run multiple commands per session instead of single-use remoting calls.
  • Incorporate logging and auditing via transcript (Start-Transcript) and central logging solutions.
  • Start small: test remoting on a lab machine, then roll out to production via GPO or configuration management.




Thursday, January 29, 2026

Fix Slow, Bloated MSDB: Purge Old History And Add Missing Indexes

After tempdb, msdb is often the most abused system database, growing unchecked until it tanks your backup reporting and job monitoring.

I've watched MSDB performance degrade across multiple SQL Server instances. It's not optimized out of the box and doesn't get much care, so as it balloons to 100GB+, metadata queries crawl, showing up as top offenders in Activity Monitor.

Another indicator of MSDB performance problems: missing indexes in MSDB showing at the top of the missing indexes DMV results.

In the past, I'd just find and add missing indexes. But MSDB tuning often gets overlooked, even by the best of us. 



I have my own reasons to keep MSDB lean and fast:
  • I regularly query MSDB to get job status, backup history, and missing backups (manual + automated)
  • Our enterprise backup tool queries the backup history tables about every 5 minutes. These queries have become resource-intensive and slow, significantly hindering performance on several SQL Servers. Imagine that: your backup tool spends more time querying backup metadata than it takes to run the transaction log backups themselves.
  • Bloated MSDBs (50-100GB). While this may concern some, it doesn't bother me per se.


Solution: 1) Index MSDB tables, 2) Purge old history, 3) Configure job history retention.

The missing index DMVs make the first step easy. 

On one of my production servers, the DMVs flagged several of the msdb history tables. Most out-of-the-box MSDBs will show 3-5 missing index recommendations. 


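To pull those recommendations yourself, scoped to msdb, a sketch in the same Invoke-Sqlcmd style used above (the instance name is a placeholder; requires the SqlServer module):

```powershell
# Placeholder instance name; requires the SqlServer module (Invoke-Sqlcmd).
Invoke-Sqlcmd -ServerInstance 'SQL01\Instance01' -Database 'msdb' -Query @"
SELECT  d.statement AS TableName,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g
      ON d.index_handle = g.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s
      ON g.index_group_handle = s.group_handle
WHERE d.database_id = DB_ID('msdb')
ORDER BY s.avg_user_impact DESC;
"@
```

As always with the missing index DMVs, treat the output as suggestions to review, not commands to run blindly.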

Manage MSDB Size Through Purging

I've written a SQL script that complements the indexing strategy by regularly purging old history records from MSDB. It uses SQL Server's built-in system procedures for the heavy lifting, all you specify is @DaysRetention, and older records are automatically removed from key history tables. View/download it from GitHub:

Purge_MSDB_History_Cleanup.sql



Configure job history retention

You can configure automated purging of job history tables through the SSMS: SQL Server Agent → Right-click → Properties.

Go to the History page and tweak the values per your needs:

Click OK at the bottom right to save the changes.
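If you'd rather script the same retention settings than click through SSMS, a sketch using the undocumented msdb.dbo.sp_set_sqlagent_properties procedure (the one SSMS calls under the hood); the instance name and row counts are placeholders, adjust to your needs:

```powershell
# Undocumented but widely used procedure - SSMS itself calls it.
# Row counts shown are illustrative placeholders.
Invoke-Sqlcmd -ServerInstance 'SQL01\Instance01' -Database 'msdb' -Query @"
EXEC msdb.dbo.sp_set_sqlagent_properties
     @jobhistory_max_rows = 20000,
     @jobhistory_max_rows_per_job = 200;
"@
```

Being undocumented, the procedure's behavior could change between versions, so verify the result on the Agent's History page afterward.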

That's it. Add indexes + regular history purging + job retention settings. These three steps transformed my 100GB monsters into snappy, responsive msdb databases.

Questions, comments or tweaks? Drop them in the comments!



Monday, January 26, 2026

Reality (And Limits) of Instant File Initialization for Transaction Logs in SQL Server 2022


If you’ve spent any time tuning SQL Server, you’ve probably heard “Turn on instant file initialization, it’ll speed things up!” That’s mostly true… but when it comes to transaction log files, the story is a bit more complicated. 

In this post, we’ll peek under the hood of instant file initialization (IFI), see why it gives data files all the love and mostly ignores log files, look at what changed in SQL Server 2022, and talk about sane log sizing in the real world.

What instant file initialization actually does

When SQL Server creates or grows a database file, it normally has to zero out the new space on disk for security reasons. That means writing zeros to all the newly allocated bytes so someone can’t read leftover data that used to live on that part of the disk.

Instant file initialization skips that zeroing step for data files (mdf/ndf), as long as:

  • The SQL Server service account has the “Perform volume maintenance tasks” privilege (SE_MANAGE_VOLUME_NAME).
  • Transparent Data Encryption (TDE) is not enabled for that database’s data files.

SQL Server will overwrite whatever’s on disk as real data pages are written later.

With IFI enabled, operations like:

  • Creating a new database,
  • Growing data files, and
  • Restoring databases

can be noticeably faster, especially with large file sizes. Instead of staring at a “frozen” SSMS window while SQL Server writes zeros, you get your file allocations almost instantly.
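Before relying on any of this, it's worth confirming whether IFI is actually enabled. On SQL Server 2016 SP1 and later, a quick check (the instance name is a placeholder):

```powershell
# instant_file_initialization_enabled requires SQL Server 2016 SP1 or later.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
SELECT servicename, instant_file_initialization_enabled
FROM sys.dm_server_services
WHERE servicename LIKE 'SQL Server (%';
"@
```

A value of Y means the database engine's service account holds the volume maintenance privilege and data-file operations can skip zeroing.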

That’s great for the data files. What about the log files?

Why transaction log files are different

Transaction log files (ldf) are a different beast. SQL Server depends on the log being strictly sequential and fully known so it can:

  • Guarantee crash recovery works correctly.
  • Keep log sequence numbers (LSNs) consistent.
  • Preserve the ACID properties you actually care about when the storage subsystem decides to have “a moment.”

To maintain that integrity, SQL Server has historically required that log files be fully zero-initialized whenever:

  • A log file is created, or
  • A log file grows.

There are no shortcuts or IFI magic. That means:

  • IFI helps with data files.
  • IFI does not help with log files under the classic behavior.

So if you create or grow a large log file, say, tens or hundreds of gigabytes, and your storage isn’t blazing, you can see long pauses while SQL Server zeros that new space. This is why DBAs get burned when a log autogrows by a massive chunk in the middle of the day.
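If you want to watch this zeroing happen, trace flags 3004 and 3605 write file initialization messages to the errorlog. A sketch, assuming a placeholder database whose log is currently smaller than the target size:

```powershell
# Trace flag 3004 reports file zeroing operations; 3605 routes the
# output to the SQL Server errorlog. Enable temporarily, grow the
# log, read the errorlog, then turn the flags back off.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
DBCC TRACEON(3004, 3605, -1);

ALTER DATABASE [MyDatabase]
    MODIFY FILE (NAME = MyDatabase_log, SIZE = 20GB);

EXEC sys.xp_readerrorlog 0, 1, N'Zeroing';

DBCC TRACEOFF(3004, 3605, -1);
"@ -QueryTimeout 600
```

On slow storage the ALTER can take minutes, which is precisely the stall this section describes; the -QueryTimeout bump accounts for that.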


SQL Server 2022: the 64 MB twist

Starting with SQL Server 2022, there’s a helpful optimization for transaction logs, under specific conditions.

The change:

  • Log autogrowth events up to 64 MB can benefit from an IFI-like optimization.
  • Autogrowths larger than 64 MB still require full zero-initialization.
  • This only affects log autogrowth, not the initial creation of the log file.

SQL Server 2022 also sets the default log autogrowth for new databases to 64 MB to align with this optimization.

Here’s the 2026 reality check, though:

  • 64 MB is tiny compared to the size of even average databases today.
  • For a 50 GB database, 64 MB is a small bump.
  • For a 1 TB database, 64 MB is basically a rounding error.

So yes, this feature is nice, and it can shave a bit of pain off small log growths, but it’s a convenience, not a design principle. You still need to size your log properly.


Security, TDE, and IFI

IFI interacts with security and encryption in a few key ways.

For data files:

  • IFI requires the SQL Server service account (or service SID) to have the volume maintenance privilege.
  • When you enable TDE, instant file initialization for data files is effectively off for that database; data files must be fully initialized.

For log files:

  • Classic behavior: logs are always fully zero-initialized, regardless of IFI or TDE.
  • In SQL Server 2022+, the ≤ 64 MB log autogrowth optimization does not require the volume maintenance privilege and still works with TDE enabled.

So:

  • Data files: IFI + no TDE + privilege = faster file operations.
  • Log files: still carefully controlled, with a small optimization in SQL Server 2022+ for tiny autogrowths.



Practical sizing: 50 GB database

Let’s start with a small-to-medium sized 50 GB database, something common but not tiny.

Assumptions:

  • Around 50 GB of data.
  • Steady OLTP workload, with some peaks during batch jobs.
  • Regular log backups.


Data file sizing

You don’t want constant data file growth, even if IFI makes it fast.

A reasonable approach:

  • Initial data size: 55–60 GB.
  • Autogrowth: 512 MB or 1 GB, fixed (not percentage).

This keeps growth events relatively infrequent and predictable, and IFI makes them fast when they happen.

Log file sizing

Assume:

  • Peak log usage during heavy windows: 20–30 GB.
  • Log backups every 5–15 minutes.

A practical configuration:

  • Initial log size: 25 GB
  • Autogrowth: 512 MB or 1 GB fixed.

Why this range?

  • 512 MB or 1 GB increments are big enough that you won’t grow constantly, but not so big that each growth is a terrifying event.
  • You’re sizing the log so autogrowth is the exception, not the rule.


Should you use 64 MB autogrowth here?

You could set log autogrowth to 64 MB to ride the SQL Server 2022 optimization, but:

  • If the log needs an extra 8 GB, that means 128 tiny growth events.
  • That’s a lot of metadata work, and it fragments allocation.
  • Overall, it’s usually worse for performance than just sizing correctly with sensible GB-based growth.

Use 64 MB autogrowth only if you really know your workload and have a specific reason. For most 50 GB databases, 512 MB or 1 GB increments are a more practical middle ground.
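Putting the 50 GB guidance into a command, a sketch with placeholder database and logical file names (check sys.database_files for your actual log file name first):

```powershell
# Placeholder names: SalesDB / SalesDB_log. Sizes follow the
# 50 GB guidance above: pre-size the log, fixed 1 GB growth.
Invoke-Sqlcmd -ServerInstance 'SQL01' -Query @"
ALTER DATABASE [SalesDB]
    MODIFY FILE (NAME = SalesDB_log, SIZE = 25GB, FILEGROWTH = 1GB);
"@
```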



Practical sizing: 1 TB database

Now let’s move up to a relatively large 1 TB database, the kind where mistakes in log sizing are not “oops,” they’re “call everyone, we have an incident.”

Assumptions:

  • About 1 TB of data.
  • Busy OLTP or mixed workload.
  • Significant batch work or ETL.
  • Regular log backups.

Data file sizing (1 TB)

You want to avoid data growth during peak hours as much as possible.

A reasonable pattern:

  • Initial data size: 1.1–1.2 TB (some headroom).
  • Autogrowth: 8–16 GB fixed.

Example:

  • Initial size: 1.1 TB.
  • Autogrowth: 8 GB.

Why:

  • Even 1% of a 1 TB file is 10 GB, so percentage-based growth is dangerous.
  • 8 GB increments are large but manageable, and IFI helps these grow faster.
  • You still try to grow mostly during maintenance windows, not at noon on Monday.

Log file sizing (1 TB)

Assume:

  • Peak log usage during big operations: 200–300 GB.
  • Frequent log backups (e.g., every 5 minutes during heavy activity).

Here, you really want to avoid autogrowth under load.

A conservative but realistic configuration:

  • Initial log size: 300–400 GB.
  • Autogrowth: 4–8 GB fixed.

For example:

  • Initial size: 350 GB.
  • Autogrowth: 4–8 GB.

Why this range instead of a higher 16–32 GB?

  • 4–8 GB is still substantial at this scale but not absurdly large.
  • If you ever do hit growth, the stall is noticeable but not catastrophic.
  • You’re still designing so growth is rare; you’re not “using” autogrowth as part of normal operations.

Why 64 MB is basically noise here

For a 1 TB database, a 64 MB log growth is almost nothing:

  • If you need an extra 64 GB of log space, that’s 1024 separate 64 MB growths.
  • That’s an overhead factory and a fragmentation generator.
  • Chasing the 64 MB optimization at this scale is the wrong problem to solve.

So for big systems:

  • Choose multi-GB log growth increments that make sense for your workload.
  • Pre-size the log generously (hundreds of GB if needed).
  • Treat the 64 MB optimization as a nice perk if it ever kicks in, nothing more.


The bottom line

  • Instant file initialization is fantastic, for data files.
  • Transaction logs have stricter rules and still need to be fully initialized, with only a small 64 MB optimization in SQL Server 2022.
  • In 2026, 64 MB is tiny compared to real-world database sizes, so this feature is nice but not game-changing.
  • The real win is still boring and reliable: pre-size your log based on observed usage, use sensible fixed autogrowth increments (in MB or low single-digit GB), and avoid relying on autogrowth during peak load.


SQL Server 2025

There are no changes to these rules or behavior in SQL Server 2025. However, this does not rule out the possibility that a future cumulative update could alter this behavior.



Test Script and Performance Validation

So looks like my recommendation to set transaction log autogrowth to a few gigabytes for terabyte-scale databases has sparked quite the debate. Fair enough. I've created a custom T-SQL test script that enables you to benchmark Instant File Initialization (IFI) and transaction log write performance directly in your environment. 

Unlike synthetic benchmarking tools like DiskSpd (which I love and use regularly), this script follows the exact same code path SQL Server takes during real log autogrowth: synchronously zero-initializing every byte of new log space via ALTER DATABASE MODIFY FILE. The 4GB expansion precisely measures your sustained sequential write throughput under realistic conditions, including VLF management overhead and SQL Server's native IO submission patterns. I validated this using sys.dm_io_virtual_file_stats DMV monitoring, and the results align perfectly with DiskSpd benchmarks, confirming production log drive sizing.

The complete script is available on GitHub (keeping this post manageable). I'd appreciate your test results to help determine whether my guidance holds up or needs adjustment!

Instant File Initialization Test Script