
Thursday, August 31, 2023

Getting list of protocols enabled in a SQL Server instance


There are a number of ways to find out which protocols are enabled in a SQL Server instance. The obvious first one is the SQL Server Configuration Manager tool on the server where the instance is installed. Then you can also look in the SQL Server error log from when the instance last started up. These two, I think, are the ones a DBA would be most familiar with.

Using the SQL Server Configuration Manager tool:

  • Open SQL Server Configuration Manager. You can search for it in the Windows Start menu or run "SQLServerManager16.msc" for SQL Server 2022.
  • In the left pane, expand "SQL Server Network Configuration".
  • Click on "Protocols for <instance name>" (e.g., "Protocols for MSSQLSERVER" for the default instance).
  • In the right pane, you'll see a list of protocols (Shared Memory, Named Pipes, and TCP/IP) along with their status (Enabled or Disabled).

Then, you could view the SQL Server error log, or query it using the sp_readerrorlog T-SQL procedure, or use some text-searching tool to find that information. The query method, however, I would say is not a precise one, simply because the SQL Server error logs get rolled over into new ones and can even get deleted after reaching the maximum number of error log files to keep.
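For example, something like this should pull the relevant startup messages from the current error log (a sketch; the search strings assume the protocol messages SQL Server typically writes at startup):

EXEC sp_readerrorlog 0, 1, N'Server is listening on';        -- TCP/IP endpoints and ports
EXEC sp_readerrorlog 0, 1, N'ready to accept connection';    -- Named Pipes and Shared Memory providers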

But then, it's not as if you really need to look this information up most of the time. After all, the TCP/IP and Named Pipes protocols required to connect remotely are generally enabled by default, as is Shared Memory for local connections, depending on the version of SQL Server and the configuration options you choose during installation; see Default SQL Server Network Protocol Configuration for more information.

The point is, historically and even today, if you are installing a default instance of SQL Server, chances are one or two protocols are already enabled with their default configuration, which in the case of TCP/IP is TCP port 1433 and for Named Pipes is \\.\pipe\sql\query. Both are well known to us humans, and to the client tools and the API libraries those tools use, so all you have to provide to connect to the SQL Server is the server name, and optionally (less often, if not rarely, I might add) the specific protocol you want to use for the connection.

So why should anybody care and go through the trouble of looking up that information? When the need does arise, which sometimes it does, it is usually to troubleshoot connection issues, and maybe for security reasons. However, I also sometimes see old applications that are still not aware that you can now have multiple instances of SQL Server on the same server, installed as named instances, which rely on the SQL Server Browser service to automatically map each instance name to the TCP port that instance is configured for. Such apps need to be given the TCP port number explicitly (e.g., SERVER,50001 instead of SERVER\INSTANCE) in order to connect to any SQL instance not running on the age-old default port 1433.

OK, a caveat is worth mentioning here: support for named SQL Server instances has been around since SQL Server 2000. So I doubt that the developers of such apps, which are often third-party vendors, have never heard of it. It may be that they are not around anymore, or are too small an organization to have the budget to update their code, while their customers are still stuck using it for lack of viable alternatives.

Whatever the case may be, for me it is often a necessity rather than a mere "nice to know", and not only for troubleshooting the occasional issue.


So that being said, I can think of many ways to find the protocol configuration. But to be frank, even if I tried, I couldn't say exactly how many ways there are to get this information, especially if you also throw in the various programming languages, APIs, etc. The method I want to discuss here, though, involves reading the information from the Windows registry using PowerShell, basically relying on its *Item* cmdlets, which have been part of PowerShell from the beginning. So I am hoping that the PowerShell statements here will work on any and all versions. In essence, there is no dependency on the SQLPS or SqlServer modules, or any other for that matter, except if you want to export the results to an Excel file later.

But first, let me share a couple of queries as well. The first one shows the state of the standard protocols, i.e., TCP/IP, Named Pipes, and Shared Memory.

;with cte as
(
	select 
		@@SERVERNAME [sql_server],
		case	when right(registry_key, 2) = 'Np' then 'Named Pipe'
				when right(registry_key, 2) = 'Sm' then 'Shared Memory'
				when right(registry_key, 3) = 'Tcp' then 'TCP Ip'
			end [protocol],
		value_name [property],
		value_data [property_value],
		case value_data when 0 then 'Disabled'
						when 1 then 'Enabled'
			end [status]
		
	from sys.dm_server_registry
	where registry_key like '%SuperSocketNetLib%'
)
select * from cte
where [protocol] in ('Named Pipe', 'Shared Memory', 'TCP Ip')
and [property] = 'Enabled'
;

And to get the TCP Port number:

SELECT TOP 10 *
FROM sys.dm_server_registry
WHERE registry_key like '%SuperSocketNetLib%'
  AND registry_key not like '%AdminConnection%'
  AND value_name in ('TcpDynamicPorts','TcpPort')
  AND value_data IS NOT NULL
  AND value_data != ''
  AND value_data != '0'


You can run both of these queries as a Multi Server Query against all your registered SQL Servers in one go.


Using PowerShell


Caveat: In the PowerShell statements that follow, I am making an educated, informed guess about where the SQL Server registry entries are placed by the SQL Server installer. These locations are well documented per se, but they can change or be different in future versions or in some old versions of SQL Server.


To be fair, I also think using PowerShell is a bit more complicated than what we are all used to as DBAs; I might even say it is a hassle. And if you think about the feature and behavior changes you might encounter among different versions, not to mention the Dependency Hell, it can be a nightmare!

But I do think it is more versatile, which comes in handy when managing a mid-to-large number of SQL Servers. So, let's jump right in.

If you are already logged into the server, you can run the following in PowerShell and it will return the list of SQL Server protocols enabled on a given SQL instance, held in the ProtocolList registry value referenced in the cmdlet:

Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\SQL2022AG01\MSSQLServer\SuperSocketNetLib"  | Select-Object -ExpandProperty ProtocolList


In the following example, the SQL instance has TCP/IP and Named Pipes enabled:
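The expanded ProtocolList value prints one protocol per line, so the output should look something like this:

tcp
np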

And to get the same information remotely, you can use the same command with the Invoke-Command cmdlet:

Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\SQL2022AG01\MSSQLServer\SuperSocketNetLib" | 
    Select-Object -ExpandProperty ProtocolList}




To get the tcp port configured for the instance:

Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\SQL2019AG01\MSSQLServer\SuperSocketNetLib\Tcp" -Name TcpPort} |
    Select-Object TcpPort

And suppose you want to get the tcp port number for all SQL instances installed on a remote computer:


Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\*\MSSQLServer\SuperSocketNetLib\Tcp" `
    -Name TcpPort -ErrorAction SilentlyContinue | 
    select @{n='Instance' ;e={$_.PSPath.split('\')[5]}}, TcpPort, PSPath} | 
    Format-Table -Property PSComputerName, Instance, TcpPort

Now here, things start to get more interesting in PowerShell, because you can pass multiple servers to the -ComputerName parameter above, without having to provide a specific SQL instance name, which is often different on different host servers.
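For example, to query several servers in one go (the server names here are placeholders):

Invoke-Command -ComputerName 'SQLVM01', 'SQLVM02', 'SQLVM03' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\*\MSSQLServer\SuperSocketNetLib\Tcp" `
    -Name TcpPort -ErrorAction SilentlyContinue | 
    select @{n='Instance' ;e={$_.PSPath.split('\')[5]}}, TcpPort, PSPath} | 
    Format-Table -Property PSComputerName, Instance, TcpPort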

You can even pass it your entire server inventory from a plain text file, with no headers and a single server name per row:

Invoke-Command -ComputerName (Get-Content -Path "$env:USERPROFILE\Documents\server_inventory.txt" ) -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\*\MSSQLServer\SuperSocketNetLib\Tcp" `
    -Name TcpPort -ErrorAction SilentlyContinue | 
    select @{n='Instance' ;e={$_.PSPath.split('\')[5]}}, TcpPort, PSPath} | 
    Format-Table -Property PSComputerName, Instance, TcpPort


Here is the final script, as complete a script as I could make it at this moment without adding too much complexity to it. It already contains some explanatory notes, which I hope help:

<#

DESCRIPTION:

The script will return the enabled protocols of a SQL instance on a 
remote server, along with their pertinent properties. You do have to 
give it a server name. You can even provide multiple servers, or a 
text file with a list of all your servers. The SQL instance is an 
optional variable; if omitted, the script will return protocol 
information for all SQL instances installed on the given server(s). 
A nice thing about this script is that it returns this information 
as a PowerShell object (an array, to be specific). That makes it 
easy not only to display results on the screen, but also to pipe 
the results to a Comma Separated Values (CSV) file or even Microsoft 
Excel, if the required module for it, ImportExcel, is available on 
the computer you are running this script from. You can install 
the module from https://github.com/dfinke/ImportExcel. I decided to 
only display a warning if the module is not available, 
rather than throwing an ugly error.

VARIABLES:

1.   $server_name
     A value for this variable is required
There are 3 ways you can assign it a value
     
     a.  A single server name
         $server_name = 'MySQLServer'
     
     b.  Multiple server names as an array
         $server_name = @('MySQLServer', 'MySQLServer2', 'MySQLServer3')

     c.  Import server names from a plain text file
         $server_name = Get-Content -Path "$env:USERPROFILE\Documents\sql_servers.csv"         


2.   $instance_name
     Name of the SQL Server instance. For the default sql instance, 
     the value should be MSSQLSERVER, for example: $instance_name  = 'MSSQLSERVER'

     If $instance_name is omitted or set to $null, the script will return protocols
     information for all installed sql instances

     You cannot specify $instance_name if the $server_name contains multiple servers.
     This limitation can be overcome, like some others, but I decided not to at this point.


3.   $export_to_csv
     This is a $true/$false value. If $true then the script will export the results to 
     a CSV file.

4.   $csv_file_path
     Path and name of the CSV file. 
     Default value is "$env:USERPROFILE\Documents\sql_server_enabled_protocols.csv"


5.   $export_to_excel
     This is a $true/$false value. If $true then the script will export the results to
an Excel file, but only if the Export-Excel cmdlet is available.

6.   $excel_file_path
     Path and name of the Excel file. 
     Default value is "$env:USERPROFILE\Documents\sql_server_enabled_protocols.xlsx"


#>

# Required variables
# Note: $server_name is deliberately not constrained to [string], since it can
# also hold an array of server names or the output of Get-Content (see notes above)
$server_name             = 'SQLMV01'
[string]$instance_name   = $null # 'MSSQLSERVER'

# Export options
[bool]$export_to_csv     = $false
[string]$csv_file_path   = "$env:USERPROFILE\Documents\sql_server_enabled_protocols.csv"

[bool]$export_to_excel   = $true
[string]$excel_file_path = "$env:USERPROFILE\Documents\sql_server_enabled_protocols.xlsx"


                  
Function Get-sql-protocols
{
    Param 
    (
        [string]$instance_name
 
    )

$computer_name = $env:COMPUTERNAME 
$sql_registry_root        = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$installed_sql_instances = (Get-Item 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL').GetValueNames()

if ($instance_name -notin ('', $null))
{
    # VALIDATE THE INSTANCE NAME
    
    $instance_name = $instance_name.ToUpper()
    if($installed_sql_instances.Contains($instance_name))
    {
        $installed_sql_instances = $instance_name
    }
    else
    {
        THROW "Error: SQL instance name $instance_name is invalid."
    }
}
   

$my_custom_object = @()

foreach($installed_sql_instance in $installed_sql_instances)
{
    if($installed_sql_instance -eq 'MSSQLSERVER')
    {
        $sql_instance_registry_path = 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\MSSQLServer'
    }
    else 
    {
        $sql_instance_registry_path = Join-Path -Path $sql_registry_root `
                                       -ChildPath "$installed_sql_instance\MSSQLServer"
    }

    $sql_instance_SuperSocketNetLib_path = "$sql_instance_registry_path\SuperSocketNetLib"
    $protocols = Get-ChildItem $sql_instance_SuperSocketNetLib_path

    foreach ($protocol in $protocols)
    {
        foreach($protocolp in $protocol.GetValueNames())
        {
        
            $my_custom_object += [PSCustomObject]@{
                    computer_name    = $computer_name
                    sql_instance     = $installed_sql_instance
                    protocol_name    = $protocol.PSChildName
                    property_name    = $protocolp
                    property_value   = $protocol.GetValue($protocolp)
                }
        
        }
    }
}

$my_custom_object
}

if($server_name.GetType().Name -ne 'String' -and $instance_name -notin ('', $null))
{
    THROW 'Error: A value of named instance in $instance_name is not compatible with an array for the $server_name'
}
else
{
    $sql_protocols = Invoke-Command -ComputerName $server_name   `
                                    -ScriptBlock ${Function:Get-sql-protocols} `
                                    -ArgumentList $instance_name

    $sql_protocols | Format-Table  computer_name, sql_instance, protocol_name, property_name, property_value
    # Export to a CSV file
    if ($export_to_csv)
    {
        Write-Information 'Exporting to CSV file....'
        $sql_protocols | Select-Object PSComputerName, sql_instance, protocol_name, property_name, property_value | 
                         Export-Csv -Path $csv_file_path -Force -NoTypeInformation
    }

    # Export to Excel file
    if ($export_to_excel)
    {
        
        if (Get-Command -Name Export-Excel -ErrorAction SilentlyContinue)
        {
            Write-Information  'Exporting to Excel file....'
            $sql_protocols | Select-Object PSComputerName, sql_instance, protocol_name, property_name, property_value | 
                             Export-Excel -Path $excel_file_path -WorksheetName "SQLProtocols" `
                             -TableName "SQLProtocols" -TableStyle Light9 -AutoSize -NoNumberConversion '*'
        }
        else
        {
            Write-Warning "Function Export-Excel not found. Skipping export to Excel..."
        }

    }


}


Download this script from GitHub:

Get List of Protocols Enabled in SQL Server


I hope you find this article helpful, and perhaps something you can adapt to your own needs if ever required.

Tuesday, August 29, 2023

Remotely Check Disks Space, Find Large Files and Remove Old Files - PowerShell


For a key client, I manage over 200 SQL Server hosts, all running as VMware virtual machines, hosting more than 500 SQL instances and over 5,000 databases. Aside from about 50 large, business‑critical databases, I have limited visibility into the rest; their purpose, supported applications, workloads, and user groups are often unknown.

This is largely due to the organization’s policy: they “buy” rather than build applications. Most software solutions come bundled with the hardware they purchase, often including multiple interfaces, one for IT administrators, one for employees, and another for customers.

The only reason I’m familiar with the large and critical databases is because they naturally draw attention whenever there’s an incident or outage.

Regarding SQL Server storage configuration, I suspect most DBAs, myself included, leave the default settings: auto‑growth enabled with no maximum file size. While exceptions exist, this configuration is common practice.

One example from my past exemplifies why that's not always ideal. I once supported a highly transactional publisher database in a complex replication topology, serving over a thousand users entering data continuously during business hours. To maintain DML performance, the database had minimal indexing, and all read workloads were offloaded to replicated databases hosted nationwide and internationally.

In this setup, large transactions posed significant risk because they could clog the replication stream and disrupt business operations. To mitigate that, we set a hard 100 MB limit on the transaction log file. Any oversized transaction would immediately fail, a deliberate safeguard for a system generating more than a billion dollars in profit.

In my current environment, such constraints aren’t necessary, so all databases still use the default auto‑growth settings with unlimited size. This works fine, until a disk runs out of space.

We closely monitor SQL Servers for uptime, availability, and performance, including disk usage. Our disk‑space alert rule is straightforward:

% free space falls below 15% and amount of free space is below 100GB


This dual condition minimizes false positives. For example, a 10 TB drive at 5% free space (500 GB remaining) won’t raise an alert, as adequate space remains. This threshold generally provides enough lead time to act before an outage occurs.
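Expressed as a PowerShell predicate, the rule looks like this (a sketch; $disk stands for one Win32_LogicalDisk instance, the same WMI class the disk-space snippet below queries):

# Alert only when both conditions hold
$low_disk_alert = ($disk.FreeSpace / $disk.Size) -lt 0.15 -and
                  ($disk.FreeSpace / 1GB) -lt 100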

On average, I receive about five disk‑space alerts per week, each requiring prompt investigation and resolution. Fortunately, standardized SQL Server configurations and consistent disk layout conventions have helped us maintain overall stability and predictability in this environment:


Drive Letter    Purpose
C:              OS/System drive
D:              SQL Server software/binaries
K:              Data files
L:              Transaction log
O:              SQL Server database backups
T:              SQL Server temp files
If the FILESTREAM feature is enabled, it’s typically placed on a separate drive, since those files tend to grow much faster in our environment.

When a disk space alert email arrives for one or more SQL Servers, I can usually make an educated guess about the cause just by looking at the drive letter. Over time, you develop an instinct for the usual suspects.

This write‑up isn’t a detailed troubleshooting guide, but here are a few examples:

O Drive (Database Backups):

  • Backup jobs may be failing or not deleting old backups as intended.
  • A large, one‑time transaction could have generated an oversized log backup and temporarily increased nightly differential backup sizes until the next full backup.
  • Significant ongoing data changes can inflate both transaction log and differential backup sizes.
  • Databases may have grown substantially, increasing backup sizes.
  • New databases might have been restored or populated with large data loads.

L Drive (Transaction Logs):

  • Transaction log backups may be failing or not running at all.
  • A large, single transaction could have caused excessive log growth. Since logs aren’t automatically reclaimed, that space remains allocated until manually reduced.

K Drive (Data Files):

  • The database may be expanding organically faster than anticipated.
  • New databases could have been added.

In some cases, it helps to reach out to the application teams to understand recent changes or unusual activity.

Before connecting to the SQL Server or logging in via RDP, I typically run a PowerShell script that generates a quick disk space summary and lists the largest files on each drive. The script also identifies file types (data, log, or backup), their corresponding SQL instance, and database. I can usually pinpoint the cause and resolve it within five minutes, all without directly logging into the server.

Common fixes include:

  • Adding more disk space when databases have legitimately grown or new ones have been added.
  • Shrinking a large transaction log back to normal size after a one‑off growth event.
  • Deleting outdated or unnecessary backup files.
  • Repairing or restarting failed backup or maintenance jobs.


Now, about the PowerShell script. Here are its parts:

Get the disk space info:

Remember to change the value for $computer_fqdn

This assumes that your currently logged-on user account has the necessary permissions on the remote computer. If not, you can provide different credentials by adding "-Credential (Get-Credential)" to the Get-WMIObject command.

Get-WMIObject  -ComputerName $computer_fqdn Win32_LogicalDisk -ErrorVariable ErrVar -ErrorAction SilentlyContinue | 
    Where-Object {$_.MediaType -eq 12} |
    Select-Object __SERVER,
                 @{n='DriveLetter';e={$_.Name}}, 
                 VolumeName,   
                 @{n='Capacity (Gb)' ;e={"{0:n0}" -f ($_.size/1gb)}},
                 @{n='FreeSpace (Gb)';e={"{0:n0}" -f ($_.freespace/1gb)}}, 
                 @{n='PercentFree';e={[Math]::round($_.freespace/$_.size*100)}} | 
    Format-Table -AutoSize 

Caveat: The above PowerShell command currently does not support mount points.


Results: the command returns a table with one row per drive, showing the server name, DriveLetter, VolumeName, Capacity (Gb), FreeSpace (Gb), and PercentFree.
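As for the mount point caveat above, a possible workaround (a sketch, assuming the same WMI access on the remote computer) is to query the Win32_Volume class instead, since it also includes volumes mounted to folders:

Get-WMIObject -ComputerName $computer_fqdn Win32_Volume -ErrorAction SilentlyContinue | 
    Where-Object {$_.DriveType -eq 3} |   # 3 = local fixed disk
    Select-Object __SERVER,
                 Name,
                 @{n='Capacity (Gb)' ;e={"{0:n0}" -f ($_.Capacity/1gb)}},
                 @{n='FreeSpace (Gb)';e={"{0:n0}" -f ($_.FreeSpace/1gb)}},
                 @{n='PercentFree';e={[Math]::round($_.FreeSpace/$_.Capacity*100)}} | 
    Format-Table -AutoSize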

Finding Files on a Remote Computer:

More precisely, this script searches for files on a specific disk drive rather than the entire computer. To scan multiple drives, you’ll need to run the script separately for each one. You can also specify a folder or subfolder path if you’d prefer not to search from the root of the drive.

The script is designed to locate the largest files, with the threshold defined by the variable $filter_by_size. You can further refine your search by:

  • Setting a file name wildcard using $filter_by_name
  • Filtering by the last modified date using $filter_by_last_modified_before_days

In the example below, the script searches the O:\ drive for the five largest files whose names contain *FULL*.BAK, are larger than 100 MB, and were last modified more than 8 days ago.

You can adjust any of the filter values to suit your needs. To disable a filter, simply comment it out by placing a hash sign (#) at the beginning of that line. Finally, make sure to update the $computer_name variable with your target server name. You can also include multiple server names, separated by commas, for example:


$computer_name = @("Server1", "Server2", Server3")

# Name of the remote computer
$computer_name = @("SQLServer1")
 
# Drive letter or a subfolder on a drive to search for the large sizes
$remote_folder = 'O:\'   

# Add optional filters
$filter_by_name = '*FULL*.BAK'
$filter_by_size = 100*1024*1024    # 100*1024*1024 = 100MB
$filter_by_last_modified_before_days = -8 # Note the minus (-) sign

$top_n = 5  # Limit the results to the top n files by size
# Make sure the $filter_by_last_modified_before_days is a negative value
if($filter_by_last_modified_before_days -gt 0) {$filter_by_last_modified_before_days = $filter_by_last_modified_before_days * -1}

# Set the filters to default values if not already Set by the caller
if($top_n -eq $null -or $top_n -eq 0 -or $top_n -eq '') {$top_n=50}
if($filter_by_name -eq $null -or $filter_by_name -eq '') {$filter_by_name='*'}
if($filter_by_size -eq $null -or $filter_by_size -eq '') {$filter_by_size=0}
if($filter_by_last_modified_before_days -eq $null -or $filter_by_last_modified_before_days -eq '') 
{$filter_by_last_modified_before_days=0}


# Lets get the fqdn for the remote computer
$computer_fqdn = @()
foreach($computer in $computer_name){$computer_fqdn += @([System.Net.Dns]::GetHostEntry($computer).HostName)}


$large_files = @(Invoke-Command -computername $computer_fqdn -ArgumentList $remote_folder, $filter_by_name, $filter_by_size, $top_n  `
               -scriptBlock {Get-ChildItem $Using:remote_folder -Filter $Using:filter_by_name -Recurse -Force -ErrorAction SilentlyContinue | 
                             where-object {$_.length -gt $Using:filter_by_size} | 
                             Sort-Object length -Descending | 
                             select   fullname, 
                                      @{Name="Size(GB)";Expression={[Math]::round($_.length / 1GB, 2)}}, 
                                      @{Name="Size(MB)";Expression={[Math]::round($_.length / 1MB, 0)}}, 
                                      LastWriteTime `
                             -First $Using:top_n
                            }  
                )
# Display
$large_files | sort -Property 'Size(MB)'  -Descending | ft PSComputerName, 'Size(MB)', 'Size(GB)', LastWriteTime, FullName

#$files_to_delete = $large_files | where-object {$_.LastWriteTime -lt ((Get-date).AddDays($filter_by_last_modified_before_days))} | select FullName

$large_files_count = $large_files.Count
"Number of files: $large_files_count"


Download above PowerShell script from GitHub at:

Find Largest Files on a Remote Server


Deleting Files Remotely:

Finally, let’s cover how to delete files on a remote computer.

To remove a single file, you can use PowerShell’s Invoke-Command with Remove-Item inside the script block:

Invoke-Command -ComputerName '<Computer Name>' -ScriptBlock {Remove-Item '<FileName>'}

To delete multiple files, list them directly in the Remove-Item command: 

Remove-Item 'O:\MSSQL\BACKUPS\FULL\TESTDB1.BAK', 'O:\MSSQL\BACKUPS\FULL\TESTDB2.BAK'


Alternatively, you can pass the file paths as an array, which is often cleaner when handling many files:


Remove-Item @('O:\MSSQL\BACKUPS\FULL\TESTDB1.BAK', 'O:\MSSQL\BACKUPS\FULL\TESTDB2.BAK')


You can also use wildcards, as shown in this Microsoft example: 

Remove-Item * -Include *.doc -Exclude *1*


However, I usually first use the previous PowerShell script to search for the files on the remote computer that meet my criteria for deletion and store the results in the $large_files variable, then use the following PowerShell code to delete the files.

$files_to_delete = $large_files
foreach($file_to_delete in $files_to_delete)
{
    # Dry run first: list each file; the Remove-Item line stays commented out until you have reviewed the output
    Invoke-Command -ComputerName $file_to_delete.PSComputerName -ScriptBlock {Get-Item $Using:file_to_delete.FullName}
#   Invoke-Command -ComputerName $file_to_delete.PSComputerName -ScriptBlock {Remove-Item $Using:file_to_delete.FullName}
}

For safety reasons, I have commented out the line that actually deletes the files, so you have an opportunity to review the output before uncommenting it and re-executing the FOREACH block.


We can’t always delete files just because we think they’re no longer needed. Sometimes it’s unclear whether certain files are truly redundant or still required. In those cases, instead of deleting them, I prefer to compress the files using NTFS file compression. It compresses them in place, transparently, and operates in online mode, meaning there’s no need to stop SQL Server services before applying it.  

As far as I know, enabling NTFS compression through PowerShell is a bit tricky. While the Set-ItemProperty cmdlet can modify file and folder attributes such as ReadOnly or Archive, it doesn’t support toggling the Compressed attribute. Instead, I use the compact.exe command from within PowerShell, like this:

compact /C <File Name>
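And to run it against a remote server, the same call can be wrapped in Invoke-Command (a sketch; the computer name and file path are placeholders):

Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
    compact /C 'O:\MSSQL\BACKUPS\FULL\TESTDB1.BAK'
}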

PowerShell does come with a Compress-Archive cmdlet that compresses files in Zip format:

Invoke-Command -ComputerName '<ComputerName>'  -ScriptBlock {

$compress =
@{
  Path = "O:\dbbackup.bak"
  CompressionLevel = "Fastest"
  DestinationPath = "
O:\dbbackup.bak.zip"
}
Compress-Archive @compress
}

But now you still have the original file taking up disk space. Once you verify that the Zip file was successfully created, you can then manually delete the original file.
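That verify-then-delete step can also be scripted (a minimal sketch; it only checks that the zip file exists, which may or may not be enough verification for your case):

Invoke-Command -ComputerName '<ComputerName>' -ScriptBlock {
    # Delete the original only if the zip archive is there
    if (Test-Path 'O:\dbbackup.bak.zip') { Remove-Item 'O:\dbbackup.bak' }
}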




Thursday, August 17, 2023

Another option is coming to gain extended support for SQL 2012


If you still have SQL 2012 servers, you are probably aware that support for it ended in July 2022. This means there will be no more patches or security updates.

Of course, the obvious solution is to upgrade to a supported version of SQL Server, ideally a long time ago! But that's not always an option, even if you can no longer get any support from the vendor. In that case, there are often steps you can take to mitigate or minimize the risks.

Whatever your reasons for not upgrading yet may be, Microsoft has been offering an option to extend support for SQL 2012 further, on the condition that you migrate the SQL Server to the Azure cloud. It doesn't matter what deployment model you choose, or whether you bring your own SQL Server licenses. As of this point, Microsoft is even offering a free migration service that will not only help you move your servers to Azure, it will even do the work for you, provided you are comfortable giving them the access they would need.

Recently I came to know of another option that Microsoft is going to offer that doesn't require moving your SQL databases to the Azure cloud. That means you can keep your SQL Servers on premises and still keep getting security updates. The program is being referred to as Extended Security Updates enabled by Azure Arc. Here is the blog article that discusses it:

https://cloudblogs.microsoft.com/windowsserver/2023/07/18/new-options-for-windows-server-2012-r2-end-of-support-from-azure/

While that article is about Windows Server 2012/R2, there is a line in it that says "This capability will also be available for SQL Server 2012". Is this official? I am told by my Microsoft contact that it is, so I am going to get the few SQL 2012 servers I have that cannot be moved into the Azure cloud yet enabled for Azure Arc.

For those who are looking for options to get extended support, as I am, I hope this helps.

Monday, August 14, 2023

How to check if powershell is running as administrator with elevated privileges


In case it's not clear from the title, I am referring to launching PowerShell under elevated security privileges (right-clicking it and choosing the "Run as administrator" option) by a user with administrative rights. This allows you to run commands that otherwise wouldn't be allowed, even if you are a member of the administrators group.

And if you are running a non-interactive PowerShell script through a Windows or SQL Server scheduled job, or even from a batch file, you can use PowerShell's "Start-Process -Verb RunAs" or the old PsExec utility to start an elevated session.
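For example, to launch a script in an elevated session (a sketch; the script path is a placeholder):

Start-Process powershell.exe -Verb RunAs -ArgumentList '-File C:\Scripts\MyScript.ps1'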

What I want to show here, though, is how, inside your PowerShell script, you can programmatically determine whether it was launched with the "Run as administrator" option.

Here is the code that I put at the beginning of a script whenever I need to check, before going any further, whether it has the elevated permissions to perform its tasks and/or make whatever configuration changes I wrote the script for:

if(-Not (([System.Security.Principal.WindowsIdentity]::GetCurrent()).Owner -eq "S-1-5-32-544"))
{
    Throw 'Error: Powershell must be launched in elevated privileges mode'
}


Now, if you are curious, here is an explanation:

What is "S-1-5-32-544"?

It is known as one of the well-known, universal SIDs in the Windows world. It belongs to the BUILTIN\Administrators group, and this SID value is the same on every Windows computer. Maybe it would have made more sense to check for the name BUILTIN\Administrators instead of the SID value, however well-known and universal. To achieve that, I could go through the trouble of writing additional lines of code to translate the SID into BUILTIN\Administrators, but per Microsoft's documentation that I have seen, that's not necessary, as it is always the same.
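If you did want to do that translation anyway, it only takes a couple of lines (a sketch using the .NET SecurityIdentifier class):

$sid = [System.Security.Principal.SecurityIdentifier]'S-1-5-32-544'
$sid.Translate([System.Security.Principal.NTAccount]).Value   # returns BUILTIN\Administrators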

This SID value (S-1-5-32-544) has four components:

  • A revision level (1)
  • An identifier authority value (5, NT Authority)
  • A domain identifier (32, Builtin)
  • A relative identifier (544, Administrators)


I get it, but why does it matter?

When you run a process in Windows, your SID is the owner of that process. But when you run that same process with elevated privileges, Windows sets its owner SID to S-1-5-32-544.


What happens to my SID?

Your SID is still stored and tracked by Windows, under the User property:


PS C:\Users> $current_principal = ([System.Security.Principal.WindowsIdentity]::GetCurrent())
PS C:\Users> $current_principal | Format-List -Property Owner, User
 

# Results
 
Owner : S-1-5-32-544
User  : S-1-5-21-0000000000-000000000-0000000000-000000


If PowerShell is not running under elevated privileges, the Owner and User SID values will be the same. Try it and check it out for yourself.


What is this [System.Security.Principal]....?

It is a .NET namespace, or more accurately, part of a hierarchy of namespaces. It contains the WindowsIdentity class, among other classes, which exposes the current user and allows us to interact with it through methods and properties:

PS C:\Users> $current_principal = ([System.Security.Principal.WindowsIdentity]::GetCurrent()) 

PS C:\Users> $current_principal | Get-Member -MemberType All | `
                                  Select-Object -Property Name, MemberType

Name               MemberType
----               ----------
AddClaim               Method
AddClaims              Method
Clone                  Method
Dispose                Method
Equals                 Method
FindAll                Method
FindFirst              Method
GetHashCode            Method
GetObjectData          Method
GetType                Method
HasClaim               Method
Impersonate            Method
OnDeserialization      Method
RemoveClaim            Method
ToString               Method
TryRemoveClaim         Method
WriteTo                Method
AccessToken          Property
Actor                Property
AuthenticationType   Property
BootstrapContext     Property
Claims               Property
DeviceClaims         Property
Groups               Property
ImpersonationLevel   Property
IsAnonymous          Property
IsAuthenticated      Property
IsGuest              Property
IsSystem             Property
Label                Property
Name                 Property
NameClaimType        Property
Owner                Property
RoleClaimType        Property
Token                Property
User                 Property
UserClaims           Property
 



Is there a built-in function in PowerShell Core I can use instead?

Not that I am aware of so far. It would certainly be a welcome addition, and I think it could be included in a future release.
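In the meantime, it is easy enough to wrap the check into a small reusable function (a sketch based on the same owner-SID test used above):

function Test-IsElevated
{
    ([System.Security.Principal.WindowsIdentity]::GetCurrent()).Owner -eq 'S-1-5-32-544'
}

if (-Not (Test-IsElevated))
{
    Throw 'Error: Powershell must be launched in elevated privileges mode'
}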

Please feel free to ask any questions or leave feedback in the comments section.