
Thursday, August 31, 2023

Getting list of protocols enabled in a SQL Server instance


There are a number of ways to find out which protocols are enabled in a SQL Server instance. The obvious first one is SQL Server Configuration Manager on the server where the instance is installed. You can also look in the SQL Server error log, where the enabled protocols are logged when the instance starts up. These two are probably the ones a DBA is most familiar with.

You could also query the error log using the sp_readerrorlog stored procedure, or use some text-searching tool to find that information, though I would not call the query method a precise one.

Then again, it's not as if you truly need to know this information. After all, the TCP/IP and Named Pipes protocols required to connect remotely are "generally" enabled by default, as is Shared Memory for local connections, depending on the version of SQL Server and the configuration options you choose during installation; see Default SQL Server Network Protocol Configuration for more information.

The point is, historically and even today, if you are installing a default instance of SQL Server, chances are one or two protocols are already enabled with their default configuration, which for TCP/IP is TCP port 1433 and for Named Pipes is \\.\pipe\sql\query. Both are well known to us humans, and to the client tools and the API libraries those tools use, so all you have to provide to connect to the SQL Server is the server name and, optionally (and I might add, rarely), the specific protocol you want to use for the connection.

So why should anybody care, and go through the trouble of looking up that information? When the need does arise, which sometimes it does, it is usually to troubleshoot connection issues, and maybe for security reasons. However, I also often see old applications that are still not aware that you can now have multiple instances of SQL Server on the same server, installed as named instances, which rely on the SQL Server Browser service to automatically map each instance name to the port that instance is configured for. Such apps do need to be told the TCP port number in order to connect to any SQL instance not running on the age-old default port 1433.
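
For illustration, here is the difference as a client sees it; the host name, instance name, and port below are made up:

sqlcmd -S "SQLVM01\SQL2022AG01" -Q "SELECT @@SERVERNAME"   # instance name resolved to a port by the SQL Server Browser service
sqlcmd -S "tcp:SQLVM01,50001" -Q "SELECT @@SERVERNAME"     # explicit protocol and port; no Browser service needed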

OK, a caveat is worth mentioning here: support for named SQL Server instances has been around since SQL Server 2000. So I doubt that the developers of such apps, which are often third-party vendors, have never heard of it. It may be that they are now too small an organization to have the budget to rewrite their code, while their customers are still stuck using it for lack of viable alternatives.

Whatever the case may be, for me it is often a necessity rather than a mere "nice to know", and not only for troubleshooting.


That being said, I can think of many ways to find the protocol configuration. To be frank, even if I tried, I couldn't say exactly how many ways there are to get this information, especially if you also throw in the various programming languages, APIs, etc. The method I want to discuss here, though, involves reading the information from the Windows registry using PowerShell, relying on its *Item* cmdlets, which have been part of PowerShell from the beginning. So I am hoping the PowerShell statements here will work on any and all versions. In essence, there is no dependency on the SQLPS or SqlServer modules, or any other for that matter, except if you want to export the results to an Excel file later.

But first, let me share a couple of queries as well. The first one shows the state of the standard protocols, i.e. TCP/IP, Named Pipes and Shared Memory.

;with cte as
(
	select 
		@@SERVERNAME [sql_server],
		case	when right(registry_key, 2) = 'Np' then 'Named Pipes'
				when right(registry_key, 2) = 'Sm' then 'Shared Memory'
				when right(registry_key, 3) = 'Tcp' then 'TCP/IP'
			end [protocol],
		value_name [property],
		value_data [property_value],
		case value_data when 0 then 'Disabled'
						when 1 then 'Enabled'
			end [status]
		
	from sys.dm_server_registry
	where registry_key like '%SuperSocketNetLib%'
)
select * from cte
where [protocol] in ('Named Pipes', 'Shared Memory', 'TCP/IP')
and [property] = 'Enabled'
;

And to get the TCP Port number:

SELECT TOP 10 *
FROM sys.dm_server_registry
WHERE registry_key like '%SuperSocketNetLib%'
  AND registry_key not like '%AdminConnection%'
  AND value_name in ('TcpDynamicPorts','TcpPort')
  AND value_data IS NOT NULL
  AND value_data != ''
  AND value_data != '0'


You can run both of these queries as a multi-server query against all your registered SQL Servers in one go.


Using PowerShell


Caveat: In the PowerShell statements that follow, I am making an educated, informed guess about where the SQL Server registry entries are placed by the SQL Server installer. These locations are well documented per se, but they can change or differ in future versions, or in some older versions of SQL Server.


To be fair, I also think using PowerShell is a bit more complicated than what we DBAs are used to; I might even say it is a hassle. And if you think about the feature and behavior changes you might encounter between versions, not to mention the Dependency Hell, it can be a nightmare!

But I do think it is more versatile, which comes in handy when managing a mid-to-large number of SQL Servers. So, let's jump right in.

If you are already logged on to the server, you can run the following in PowerShell to return the list of protocols enabled for a given SQL instance. Note that the instance-specific registry key is the instance ID, i.e. a version prefix plus the instance name, such as MSSQL16.SQL2022AG01 for a SQL Server 2022 instance named SQL2022AG01:

Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL16.SQL2022AG01\MSSQLServer\SuperSocketNetLib"  | Select-Object -ExpandProperty ProtocolList


On my test instance, for example, the ProtocolList value contained tcp and np, meaning TCP/IP and Named Pipes were enabled.

And to get the same information remotely, you can use the same command with the Invoke-Command cmdlet:

Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL16.SQL2022AG01\MSSQLServer\SuperSocketNetLib" | 
    Select-Object -ExpandProperty ProtocolList}

To get the TCP port configured for the instance (the port values live under the per-IP subkeys, such as IPAll):

Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\MSSQL15.SQL2019AG01\MSSQLServer\SuperSocketNetLib\Tcp\IPAll" -Name TcpPort} |
    Select-Object TcpPort

And suppose you want to get the TCP port number for all SQL instances installed on a remote computer:


Invoke-Command -ComputerName 'SQLVM01' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\*\MSSQLServer\SuperSocketNetLib\Tcp\IPAll" `
    -Name TcpPort -ErrorAction SilentlyContinue | 
    select @{n='Instance' ;e={$_.PSPath.split('\')[5]}}, TcpPort, PSPath} | 
    Format-Table -Property PSComputerName, Instance, TcpPort

Now here is where things start to get more interesting in PowerShell, because you can pass multiple servers to the -ComputerName parameter above without having to worry about specifying a particular SQL instance name, which often differs from one host server to another.
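
For example, something like this (the server names are illustrative) queries several hosts in one call:

Invoke-Command -ComputerName 'SQLVM01', 'SQLVM02', 'SQLVM03' -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\*\MSSQLServer\SuperSocketNetLib\Tcp\IPAll" `
    -Name TcpPort -ErrorAction SilentlyContinue | 
    select @{n='Instance' ;e={$_.PSPath.split('\')[5]}}, TcpPort, PSPath} | 
    Format-Table -Property PSComputerName, Instance, TcpPort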

You can even pass it your entire server inventory from a plain text file, with no header and a single server name per row:

Invoke-Command -ComputerName (Get-Content -Path "$env:USERPROFILE\Documents\server_inventory.txt" ) -ScriptBlock {
Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\*\MSSQLServer\SuperSocketNetLib\Tcp\IPAll" `
    -Name TcpPort -ErrorAction SilentlyContinue | 
    select @{n='Instance' ;e={$_.PSPath.split('\')[5]}}, TcpPort, PSPath} | 
    Format-Table -Property PSComputerName, Instance, TcpPort


Here is the final script, as complete as I could make it at the moment without adding too much complexity. It already contains some explanatory notes, which I hope help:

<#

DESCRIPTION:

The script returns the enabled protocols, and their pertinent
properties, for SQL instances on a remote server. You do have to
give it a server name; you can provide multiple servers, or even a
text file listing all your servers. The SQL instance is an optional
variable; if omitted, the script returns protocol information for
all SQL instances installed on the given server(s). A nice thing
about this script is that it returns the information as a PowerShell
object (an array, to be specific). That makes it easy not only to
display results on the screen, but also to pipe the results to a
Comma Separated Values (CSV) file, or even to Microsoft Excel if
the required module, ImportExcel, is available on the computer you
are running this script from. You can install the module from
https://github.com/dfinke/ImportExcel. I decided to only display
a warning if the module is not available, rather than throwing an
ugly error.

VARIABLES:

1.   $server_name
     A value for this variable is required
     There are 3 ways you can assign it a value
     
     a.  A single server name
         $server_name = 'MySQLServer'
     
     b.  Multiple server names as an array
         $server_name = @('MySQLServer', 'MySQLServer2', 'MySQLServer3')

     c.  Import server names from a plain text file
         $server_name = Get-Content -Path "$env:USERPROFILE\Documents\sql_servers.txt"         


2.   $instance_name
     Name of the SQL Server instance. For the default sql instance, 
     the value should be MSSQLSERVER, for example: $instance_name  = 'MSSQLSERVER'

     If $instance_name is omitted or set to $null, the script will return protocols
     information for all installed sql instances

     You cannot specify $instance_name if the $server_name contains multiple servers.
     This limitation can be overcome, like some others, but I decided not to at this point.


3.   $export_to_csv
     This is a $true/$false value. If $true then the script will export the results to 
     a CSV file.

4.   $csv_file_path
     Path and name of the CSV file. 
     Default value is "$env:USERPROFILE\Documents\sql_server_enabled_protocols.csv"


5.   $export_to_excel
     This is a $true/$false value. If $true then the script will export the results to
     an Excel file, but only if the Export-Excel cmdlet is available.   

6.   $excel_file_path
     Path and name of the Excel file. 
     Default value is "$env:USERPROFILE\Documents\sql_server_enabled_protocols.xlsx"


#>

# Required variables
$server_name             = 'SQLVM01' # a single name, an array of names, or Get-Content (see notes above)
[string]$instance_name   = $null # 'MSSQLSERVER'

# Export options
[bool]$export_to_csv     = $false
[string]$csv_file_path   = "$env:USERPROFILE\Documents\sql_server_enabled_protocols.csv"

[bool]$export_to_excel   = $true
[string]$excel_file_path = "$env:USERPROFILE\Documents\sql_server_enabled_protocols.xlsx"


Function Get-sql-protocols
{
    Param 
    (
        [string]$instance_name
 
    )

$computer_name = $env:COMPUTERNAME 
$sql_registry_root        = 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server'
$installed_sql_instances = (Get-Item 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL').GetValueNames()

if ($instance_name -notin ('', $null))
{
    # VALIDATE THE INSTANCE NAME
    
    $instance_name = $instance_name.ToUpper()
    if($installed_sql_instances.Contains($instance_name))
    {
        $installed_sql_instances = $instance_name
    }
    else
    {
        THROW "Error: SQL instance name $instance_name is invalid."
    }
}
   

$my_custom_object = @()

foreach($installed_sql_instance in $installed_sql_instances)
{
    # The registry key under the SQL Server root is the instance ID
    # (e.g. MSSQL16.SQL2022AG01), which is stored as the value data
    # for each instance name under 'Instance Names\SQL'
    $instance_id = (Get-Item 'HKLM:\SOFTWARE\Microsoft\Microsoft SQL Server\Instance Names\SQL').GetValue($installed_sql_instance)

    $sql_instance_registry_path = Join-Path -Path $sql_registry_root `
                                   -ChildPath "$instance_id\MSSQLServer"

    $sql_instance_SuperSocketNetLib_path = "$sql_instance_registry_path\SuperSocketNetLib"
    $protocols = Get-ChildItem $sql_instance_SuperSocketNetLib_path

    foreach ($protocol in $protocols)
    {
        foreach($protocolp in $protocol.GetValueNames())
        {
        
            $my_custom_object += [PSCustomObject]@{
                    computer_name    = $computer_name
                    sql_instance     = $installed_sql_instance
                    protocol_name    = $protocol.PSChildName
                    property_name    = $protocolp
                    property_value   = $protocol.GetValue($protocolp)
                }
        
        }
    }
}

$my_custom_object
}

if($server_name.GetType().Name -ne 'String' -and $instance_name -notin ('', $null))
{
    THROW 'Error: A named instance in $instance_name cannot be combined with multiple servers in $server_name'
}
else
{
    $sql_protocols = Invoke-Command -ComputerName $server_name   `
                                    -ScriptBlock ${Function:Get-sql-protocols} `
                                    -ArgumentList $instance_name

    $sql_protocols | Format-Table  computer_name, sql_instance, protocol_name, property_name, property_value
    # Export to a CSV file
    if ($export_to_csv)
    {
        Write-Information 'Exporting to CSV file....'
        $sql_protocols | Select-Object PSComputerName, sql_instance, protocol_name, property_name, property_value | 
                         Export-Csv -Path $csv_file_path -Force -NoTypeInformation
    }

    # Export to Excel file
    if ($export_to_excel)
    {
        
        if (Get-Command -Name Export-Excel -ErrorAction SilentlyContinue)
        {
            Write-Information  'Exporting to Excel file....'
            $sql_protocols | Select-Object PSComputerName, sql_instance, protocol_name, property_name, property_value | 
                             Export-Excel -Path $excel_file_path -WorksheetName "SQLProtocols" `
                             -TableName "SQLProtocols" -TableStyle Light9 -AutoSize -NoNumberConversion '*'
        }
        else
        {
            Write-Warning "Function Export-Excel not found. Skipping export to Excel..."
        }

    }


}


Download this script from GitHub:

Get List of Protocols Enabled in SQL Server


I hope you find this article helpful, and perhaps something you can adapt to your own needs.

Tuesday, August 29, 2023

Remotely Check Disks Space, Find Large Files and Remove Old Files - PowerShell


For a very important client, I currently manage over 200 SQL Servers (all VMware VMs), with over 500 SQL instances running on them, hosting more than 5,000 databases.

Except for about 50 large and critical databases, I don't really know what the rest of the databases are used for, what type of applications they support, their workload characteristics, or even which departments or people use them.

Why? Because this organization's policy is to "buy", never "build", the software applications they need. And they have a good reason for it: almost all their apps come with the hardware they purchase, which will often include several applications: 1) a UI for the administrators, 2) a UI for the employees, and 3) a UI for the customers.

The only reason I do know about the databases that are both large and critical is simply that being large and critical tends to make them stand out whenever there is any issue or outage.

Regarding SQL Server storage, it may be presumptuous to say, but I think most SQL DBAs configure databases with the auto-growth setting ON and unlimited maximum size; that is the default for SQL Server out of the box. There can be, and are, exceptions to this. Let me share an anecdote from the past:

Once upon a time, I supported a highly transactional database (the primary publisher database in a distributed replication topology) with more than 1,000 end users whose job was to constantly add new data, 8 hours a day, 5 days a week. To maintain the performance of DML queries, primarily inserts and occasional updates, this database had a very limited number of indexes, and no end-user queries could be run against it. The read/query workload was offloaded to other database servers located across the country. We had set up transactional replication with 3 subscriber SQL Servers in the local data center, which we then configured as publishers (more accurately, as republishers) to SQL Servers located in 3 different regions of the United States and 2 international locations. These regional and international SQL Servers were used by the end-user applications to query the data, and this is where we added any additional indexes those applications required. It was a very carefully planned and configured replication topology.

We could not afford to let anyone run a large transaction on the primary publisher database, as that would clog the global replication pipeline we had so carefully constructed. The stakes were high: this database generated hundreds of millions in profit; after all, this was a very high-margin business, so most of the revenue was profit.

So, to control and limit the flow of replication traffic, one of the things we did was limit the maximum file size of the transaction log to 100MB. Any attempt to run a large transaction would fail as soon as it hit that limit.

In my current environment, however, I have so far had no need to change those two settings, so all databases are configured with auto-growth ON and unlimited maximum size. This is fine until, of course, a disk runs out of free space.

We monitor the SQL Servers for uptime, availability, performance, all the typical stuff. One of the things we monitor is disk space, and the email alert condition for our disk space monitoring is a simple one:

% free space falls below 15% and amount of free space is below 100GB

This has worked best for us in this environment, as it has practically eliminated false alerts. Free space on a 10TB drive can fall below 15%, for example to 500GB, which is only 5% free space, and still not trigger an alert, because the absolute free space is still over 100GB. We think that "usually" gives us just enough of a heads-up before a disk completely fills up and causes an outage.
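
Expressed in PowerShell, the alert predicate is just the two conditions ANDed together; a minimal sketch, where $size and $free (both in bytes) would come from the disk space query shown later in this post:

$freePct = $free / $size * 100
$alert   = ($freePct -lt 15) -and (($free / 1GB) -lt 100)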

I get on average 5 disk space alerts a week. Then begins the troubleshooting, to fix the issue as soon as possible.

Fortunately, we have been successful in standardizing our SQL Server configuration, and for the most part we use the following convention for the disk layout:


Drive Letter    Purpose
------------    -------
C:              OS/system drive
D:              SQL Server software/binaries
K:              Data files
L:              Transaction logs
O:              SQL Server database backups
T:              SQL Server temp files

If the FILESTREAM feature is being used, that data usually goes on a separate drive, simply because FILESTREAM data tends to grow much faster in our environment.


So when an email alert comes through for one or more disks on one or more SQL Servers, just by looking at the drive letter I can instantly run through scenarios in my head for why that particular drive could be getting low on disk space.

Now, this article is not a how-to on troubleshooting disk space issues, but to give some examples:

If it is our O drive (database backups), some of the possible reasons are:

  • The backup job is 1) failing, or 2) failing to automatically delete old backup files as it was designed to!
  • A huge, one-off transaction was run that 1) created a very large transaction log backup, or 2) increased the size of the nightly differential backups until the next weekly full backup
  • A significant amount of data was changed, though not necessarily in one huge transaction; this increases the size of transaction log backups and differential backups
  • A database grew organically and significantly, causing backup sizes to grow
  • New databases were added, either through the restore of a large database or populated immediately through a very large data load (or a series of small but frequent loads)

If the L drive (transaction logs) is out of space:

  • The transaction log is not being backed up, or the backups are failing!
  • A huge, one-off transaction was run. We do not have a job to automatically reclaim the space, so the transaction log keeps its auto-grown size until it is manually shrunk


If the K drive (data files) is out of space:

  • This is usually because a database is growing organically at a much faster pace than we were told to expect
  • New databases were added

Of course, you may want to reach out to the application folks to find out what is going on.

And so on...


Before I look into SQL Server and/or RDP into the server, I have a PowerShell script that gives me a quick summary of disk space and the largest files on a disk; in most cases that also tells me the type of file (backup, transaction log, data file, etc.), which SQL instance it belongs to, and which database. I then connect to the SQL instance in SSMS, do a quick analysis of the cause, fix the problem, and am done in a matter of five minutes or less, all without having to log on to the server.

Often, the solution involves:

  • If the reason is organic growth of a database, or new/restored databases, then it often requires adding more disk space
  • If the transaction log grew because of a one-off transaction, say from 5GB to 5TB, then it may just need to be shrunk back to 5GB
  • Sometimes old files/backups are lying around that are no longer needed and can be safely removed

  • Fixing the failing jobs


Now, about the PowerShell: here are the parts of that script:


Get the disk space info:

Remember to change the value for $computer_fqdn

This assumes that your currently logged-on user account has the necessary permissions on the remote computer. If not, you can provide different credentials by adding -Credential (Get-Credential) to the Get-WMIObject command.
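
For example, this variant prompts you for a user name and password:

Get-WMIObject -ComputerName $computer_fqdn -Credential (Get-Credential) Win32_LogicalDisk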

Get-WMIObject  -ComputerName $computer_fqdn Win32_LogicalDisk -ErrorVariable ErrVar -ErrorAction SilentlyContinue | 
    Where-Object {$_.MediaType -eq 12} |
    Select-Object __SERVER,
                 @{n='DriveLetter';e={$_.Name}}, 
                 VolumeName,   
                 @{n='Capacity (Gb)' ;e={"{0:n0}" -f ($_.size/1gb)}},
                 @{n='FreeSpace (Gb)';e={"{0:n0}" -f ($_.freespace/1gb)}}, 
                 @{n='PercentFree';e={[Math]::round($_.freespace/$_.size*100)}} | 
    Format-Table -AutoSize 

Caveat: The above PowerShell command does not cover volumes mounted as mount points.
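
If you do use mount points, a similar query against the Win32_Volume class should cover them; a sketch, under the same permission assumptions as above:

Get-WMIObject -ComputerName $computer_fqdn Win32_Volume -ErrorAction SilentlyContinue |
    Where-Object {$_.DriveType -eq 3} |    # 3 = local disk; includes mounted volumes
    Select-Object __SERVER,
                 Name,
                 @{n='Capacity (Gb)' ;e={"{0:n0}" -f ($_.Capacity/1gb)}},
                 @{n='FreeSpace (Gb)';e={"{0:n0}" -f ($_.FreeSpace/1gb)}} |
    Format-Table -AutoSize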


Find files on remote computer:

More accurately, the script searches for files on a particular disk drive, not the entire computer; to cover multiple drives, you would run it separately for each one. Optionally, you can also specify a folder/subfolder if you don't want it to start from the root of the drive.

The following script looks for the largest files, where you define what "largest" means through the variable $filter_by_size. You can also specify a wildcard filter for the file names through $filter_by_name, and a last-modified cutoff in $filter_by_last_modified_before_days.

In the example below, I am looking for the top 5 largest files on O:\ with *FULL*.BAK in the file name, larger than 100MB, and last modified more than 8 days ago. You can change the filter values to your needs, and if you don't want to use a filter, just comment it out by putting a hash sign (#) in front of it. And lastly, don't forget to change the value of the $computer_name variable to your server name. You can also enter multiple server names, separated by commas, for example:


$computer_name = @("Server1", "Server2", "Server3")

# Name of the remote computer
$computer_name = @("SQLServer1")
 
# Drive letter or a subfolder on a drive to search for the large sizes
$remote_folder = 'O:\'   

# Add optional filters
$filter_by_name = '*FULL*.BAK'
$filter_by_size = 100*1024*1024    # 100*1024*1024 = 100MB
$filter_by_last_modified_before_days = -8 # Note the minus (-) sign

$top_n = 5  # Limit the results to the top n files by size
# Make sure the $filter_by_last_modified_before_days is a negative value
if($filter_by_last_modified_before_days -gt 0) {$filter_by_last_modified_before_days = $filter_by_last_modified_before_days * -1}

# Set the filters to default values if not already Set by the caller
if($top_n -eq $null -or $top_n -eq 0 -or $top_n -eq '') {$top_n=50}
if($filter_by_name -eq $null -or $filter_by_name -eq '') {$filter_by_name='*'}
if($filter_by_size -eq $null -or $filter_by_size -eq '') {$filter_by_size=0}
if($filter_by_last_modified_before_days -eq $null -or $filter_by_last_modified_before_days -eq '') 
{$filter_by_last_modified_before_days=0}


# Let's get the FQDN for the remote computer
$computer_fqdn = @()
foreach($computer in $computer_name){$computer_fqdn += @([System.Net.Dns]::GetHostEntry($computer).HostName)}


$large_files = @(Invoke-Command -computername $computer_fqdn -ArgumentList $remote_folder, $filter_by_name, $filter_by_size, $top_n  `
               -scriptBlock {Get-ChildItem $Using:remote_folder -Filter $Using:filter_by_name  -recurse -ErrorAction SilentlyContinue -Force | 
                             where-object {$_.length -gt $Using:filter_by_size} | 
                             Sort-Object length -Descending | 
                             select   fullname, 
                                      @{Name="Size(GB)";Expression={[Math]::round($_.length / 1GB, 2)}}, 
                                      @{Name="Size(MB)";Expression={[Math]::round($_.length / 1MB, 0)}}, 
                                      LastWriteTime `
                             -First $Using:top_n
                            }  
                )
# Display
$large_files | sort -Property 'Size(MB)'  -Descending | ft PSComputerName, 'Size(MB)', 'Size(GB)', LastWriteTime, FullName

#$files_to_delete = $large_files | where-object {$_.LastWriteTime -lt ((Get-date).AddDays($filter_by_last_modified_before_days))} | select FullName

$large_files_count = $large_files.Count
"Number of files: $large_files_count"


Download above PowerShell script from GitHub at:

Find Largest Files on a Remote Server


Delete files on remote computer:

Now to the final part: deleting files remotely. If you just need to delete a specific file on the remote computer:

Invoke-Command -ComputerName '<Computer Name>' -ScriptBlock {Remove-Item '<FileName>'}


To remove multiple files: 

Remove-Item 'O:\MSSQL\BACKUPS\FULL\TESTDB1.BAK', 'O:\MSSQL\BACKUPS\FULL\TESTDB2.BAK'


OR, pass the list of files as an array:


Remove-Item @('O:\MSSQL\BACKUPS\FULL\TESTDB1.BAK', 'O:\MSSQL\BACKUPS\FULL\TESTDB2.BAK')


You can also use wildcards, as in this example from Microsoft's documentation:

Remove-Item * -Include *.doc -Exclude *1*
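
Remove-Item also supports the common -WhatIf switch, which is a handy way to preview exactly what would be deleted before committing to it:

Remove-Item * -Include *.doc -Exclude *1* -WhatIf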

However, I usually first use the previous PowerShell script to search for files on the remote computer that meet my criteria for deletion, storing the results in the $large_files variable, and then use the following PowerShell code to delete them.

$files_to_delete = $large_files
foreach($file_to_delete in $files_to_delete)
{
    Invoke-Command -ComputerName $file_to_delete.PSComputerName `
                   -ScriptBlock {Get-Item $Using:file_to_delete.FullName}
#   Invoke-Command -ComputerName $file_to_delete.PSComputerName `
#                  -ScriptBlock {Remove-Item $Using:file_to_delete.FullName}
}

For safety reasons, I have commented out the lines that actually delete the files, so you have an opportunity to review the output before uncommenting them and re-executing the FOREACH block.


Now, we can't always delete the files we "think" we no longer need, can we? Sometimes you may not be sure whether the files are redundant and safe to delete, so you may just want to compress them. In situations like this, I prefer to use the NTFS file compression attribute, as it compresses files in place and transparently.

However, as far as I know, this is a bit tricky to do in PowerShell. Using the Set-ItemProperty cmdlet, you can change other attribute bits of a file/folder, like ReadOnly, Archive, etc., but not the Compressed bit. So instead I use the compact.exe command, still from within PowerShell, like this:

compact /C <File Name>
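
And if the file sits on a remote server, the same command can be wrapped in Invoke-Command; a sketch, with an illustrative file name:

Invoke-Command -ComputerName '<Computer Name>' -ScriptBlock {
    # /C sets the NTFS Compressed attribute on the file; reads decompress transparently
    compact.exe /C 'O:\MSSQL\BACKUPS\FULL\TESTDB1.BAK'
}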


PowerShell does come with a Compress-Archive cmdlet that compresses files in Zip format:

Invoke-Command -ComputerName '<ComputerName>' -ScriptBlock {
    $compress = @{
        Path             = "O:\dbbackup.bak"
        CompressionLevel = "Fastest"
        DestinationPath  = "O:\dbbackup.bak.zip"
    }
    Compress-Archive @compress
}

But now you still have the original file taking up disk space. Once you have verified that the Zip file was created successfully, you can then manually delete the original file.
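
If you would rather script that check than do it by hand, here is a minimal sketch, using the same illustrative paths as above:

Invoke-Command -ComputerName '<ComputerName>' -ScriptBlock {
    # delete the original only if the zip file is actually there
    if (Test-Path 'O:\dbbackup.bak.zip') { Remove-Item 'O:\dbbackup.bak' }
}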




Thursday, August 17, 2023

Another option is coming to gain extended support for SQL 2012


If you still have SQL 2012 servers, you are probably aware that support for them ended in July 2022. This means no more patches or security updates.

Of course, the obvious solution is to upgrade to a supported version of SQL Server, ideally a long time ago! But that's not always an option, even when you can no longer get any support from the vendor. In that case, there are often steps you can take to mitigate or minimize the risks.

Whatever your reasons for not upgrading yet, Microsoft has been offering an option to extend support for SQL 2012 further, on the condition that you migrate the SQL Server to the Azure cloud; it doesn't matter which deployment model you choose, or whether you bring your own SQL Server licenses. As of this writing, Microsoft is even offering a free migration service that will not only help you move your servers to Azure, but will even do the work for you, provided you are comfortable giving them the access they would need.

Recently I learned of another option Microsoft is going to offer that doesn't require moving your SQL databases to the Azure cloud; you can keep your SQL Servers on premises and still keep getting security updates. The program is being referred to as Extended Security Updates enabled by Azure Arc. Here is the blog article that discusses it:

https://cloudblogs.microsoft.com/windowsserver/2023/07/18/new-options-for-windows-server-2012-r2-end-of-support-from-azure/

While that article is about Windows Server 2012/R2, there is a line in it that says "This capability will also be available for SQL Server 2012". Is this official? I am told by my Microsoft contact that it is, so I am going to have Azure Arc enabled on the few SQL 2012 servers I have that cannot be moved into the Azure cloud yet.

For those looking for options to get extended support, as I am, I hope this helps.

Monday, August 14, 2023

How to check if powershell is running as administrator with elevated privileges


In case it's not clear from the title, I am referring to launching PowerShell with the "Run as administrator" option, by a user with administrative rights. This gives the session elevated security privileges, allowing you to run commands that otherwise wouldn't be allowed even if you are a member of the Administrators group.

And if you are running a non-interactive PowerShell script through a Windows or SQL Server scheduled job, or even from a batch file, you can use PowerShell's "Start-Process -Verb RunAs", or the old PsExec utility, to start an elevated session.

What I want to show here, though, is how your PowerShell script can programmatically determine whether it was launched with the "Run as administrator" option.

Here is the code. I put it at the beginning of a script whenever I need to verify, before going any further, that it has the elevated permissions to perform its tasks and/or make whatever configuration changes I wrote the script for:

if(-Not (([System.Security.Principal.WindowsIdentity]::GetCurrent()).Owner -eq "S-1-5-32-544"))
{
    Throw 'Error: Powershell must be launched in elevated privileges mode'
}


Now, in case you are curious about the details:

What is "S-1-5-32-544"?

It is one of the well-known, universal SIDs in the Windows world. It belongs to the BUILTIN\Administrators group, and its value is the same on every Windows computer. Maybe it would have made more sense to check for the name BUILTIN\Administrators instead of the SID value, however well-known and universal. To achieve that, I could go through the trouble of writing additional lines of code to translate the SID into BUILTIN\Administrators, but per the Microsoft documentation I have seen, that's not necessary as it is always the same.
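
For what it's worth, the .NET WindowsPrincipal class can perform that role check for you without dealing with the SID at all; a minimal sketch of the alternative:

$identity  = [System.Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object System.Security.Principal.WindowsPrincipal($identity)

# True only when the current token holds the Administrators role, i.e. the session is elevated
$principal.IsInRole([System.Security.Principal.WindowsBuiltInRole]::Administrator)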

This SID value (S-1-5-32-544) has four components:

  • A revision level (1)
  • An identifier authority value (5, NT Authority)
  • A domain identifier (32, Builtin)
  • A relative identifier (544, Administrators)


I get it, but why does it matter?

When you run a process in Windows, your SID is the owner of that process. But when you run that same process with elevated privileges, Windows sets its owner SID to S-1-5-32-544.


What happens to my SID?

Your SID is still stored and tracked by Windows, under the User property:


PS C:\Users> $current_principal = ([System.Security.Principal.WindowsIdentity]::GetCurrent())
PS C:\Users> $current_principal | Format-List -Property Owner, User
 

# Results
 
Owner : S-1-5-32-544
User  : S-1-5-21-0000000000-000000000-0000000000-000000 


If PowerShell is not running with elevated privileges, the Owner and User SID values will be the same. Try it and check it out for yourself.


What is this [System.Security.Principal]....?

It is a .NET namespace, or more accurately, part of a hierarchy of namespaces. It contains the WindowsIdentity class, among other classes, which exposes the current user and lets us interact with it through methods and properties:

PS C:\Users> $current_principal = ([System.Security.Principal.WindowsIdentity]::GetCurrent()) 

PS C:\Users> $current_principal | Get-Member -MemberType All | `
                                  Select-Object -Property Name, MemberType

Name               MemberType
----               ----------
AddClaim               Method
AddClaims              Method
Clone                  Method
Dispose                Method
Equals                 Method
FindAll                Method
FindFirst              Method
GetHashCode            Method
GetObjectData          Method
GetType                Method
HasClaim               Method
Impersonate            Method
OnDeserialization      Method
RemoveClaim            Method
ToString               Method
TryRemoveClaim         Method
WriteTo                Method
AccessToken          Property
Actor                Property
AuthenticationType   Property
BootstrapContext     Property
Claims               Property
DeviceClaims         Property
Groups               Property
ImpersonationLevel   Property
IsAnonymous          Property
IsAuthenticated      Property
IsGuest              Property
IsSystem             Property
Label                Property
Name                 Property
NameClaimType        Property
Owner                Property
RoleClaimType        Property
Token                Property
User                 Property
UserClaims           Property
 



Is there a built-in function in PowerShell Core I can use instead?

Not that I am aware of so far. It would sure be a welcome addition, and I think it could be included in a future release.

Please feel free to ask any questions or leave feedback in the comments section.