15 Characters or Less: The Challenge of SQL Server Listener Names
There is a hard limit on how long your AlwaysOn listener name can be. That is expected; what may not be expected is that the limit is a mere 15 characters.
Yes, even though the maximum length allowed for an AlwaysOn listener DNS name is 63 characters, in practice you can only specify up to 15 characters. This limit comes from the NetBIOS computer name restriction, which remains 15 characters even in the latest versions of Windows, including Windows Server 2025. Technically, the maximum length of a NetBIOS name is 16 characters, but the 16th character (the NetBIOS suffix) is reserved for identifying the network device's functionality, effectively limiting the name to 15 characters.
The limit is for the name part, not the entire Fully Qualified Domain Name (FQDN), which can be up to 255 characters long.
So if you are using SSMS to create an AlwaysOn Listener, it won't even let you type in more than 15 characters:
When I first encountered this issue a few years ago, that should have been my first clue. Did I pay attention and think for a second? Of course not! I spent close to two hours before zeroing in on what the main issue was.
If you bypass the GUI as I did back then and resort to T-SQL (or PowerShell), you will receive an error similar to this:
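For example, a T-SQL attempt that trips the limit might look like this (the Availability Group name, IP address, and subnet mask below are placeholders for illustration; only the listener name's length matters):

```sql
-- 'AGLISTENER123456' is 16 characters, one over the NetBIOS limit,
-- so bringing the listener resource online will fail.
ALTER AVAILABILITY GROUP [MyAvailabilityGroup]
ADD LISTENER N'AGLISTENER123456'
(
    WITH IP ((N'10.0.0.50', N'255.255.255.0')),
    PORT = 1433
);
```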
Msg 41066, Level 16, State 0, Line 3
Cannot bring the Windows Server Failover Clustering (WSFC) resource (ID '70ce3804-e3ed-4db6-ad42-6936daeeb5aa') online (Error code 5942). The WSFC service may not be running or may not be accessible in its current state, or the WSFC resource may not be in a state that could accept the request. For information about this error code, see "System Error Codes" in the Windows Development documentation.
Msg 19476, Level 16, State 4, Line 3
The attempt to create the network name and IP address for the listener failed. The WSFC service may not be running or may be inaccessible in its current state, or the values provided for the network name and IP address may be incorrect. Check the state of the WSFC cluster and validate the network name and IP address with the network administrator.
The error is ugly and not intuitive! The SQL Server and WSFC logs are not very helpful either.
As a last resort, you could bypass the SQL Server tools completely and create the listener through the Failover Cluster Manager tool, by adding a new resource to your Availability Group of type 'Client Access Point':
On the surface, this will work and complete successfully:
In reality, the newly added Client Access Point will not come online, and your AlwaysOn resource will show as partially failed.
Is there a solution or workaround that works?
Sadly, no. Though justifiably annoying, in the Microsoft Windows world there is no way to get around this limit. The catch-all reason given is, of course, "for backward compatibility with legacy Windows computers and applications that may still need NetBIOS." Even if you do not have any legacy systems, this requirement is still enforced. Windows Failover Clustering still requires that the cluster network name (the virtual computer name that you can connect to) strictly adhere to the NetBIOS naming convention, and this requirement extends to related resources, such as the listener names of Always On Availability Groups. Bottom line: if the Windows Server Failover Clustering (WSFC) service cannot successfully register your chosen name on the network (per NetBIOS standards), the resource will be unable to come online, even if the initial resource creation appears successful.
A potential alternative could be to run your SQL Servers on Linux, without Windows Failover Clustering and Active Directory. I am told you can have a longer AlwaysOn listener name in that environment. However, I have not tested this myself and therefore would not count it as a definitive solution. And even if it works, isn't that too much effort just to give your listener a name longer than 15 characters?
I ended up making a slight change in my naming convention so that the listener name would not exceed 15 characters.
Clearing the SQL Server Management Studio (SSMS) Cache
If you are reading this, it's probably because you too have this rare need to clear the SSMS cache for some kind of troubleshooting.
Generally, when I am troubleshooting a connection issue that I suspect is due to a cached or stale connection, the first things that come to mind are clearing the DNS client cache and, occasionally, the ARP cache. Rarely, if ever, do I consider the SSMS cache. Of course, there are other reasons why you may want to clear the cache.
Since almost everything in the computer world uses some kind of cache, it's no surprise that SQL Server Management Studio (SSMS) also has its own cache. This cache makes our lives easier by remembering things like most recently used (MRU) connections and objects, and enabling features like IntelliSense, auto-completion, syntax highlighting, object names, our preferences, and even idiosyncrasies. The cache is stored in what Microsoft refers to as a cache file located in your user profile directory. The cache files are not human-readable per se (it would be a security risk if anyone got hold of those cache files), nor do I currently have any desire to read them. And I'm not concerned right now with the size of the cache or the exact algorithm it uses to decide when to automatically refresh it. However, you can use the keyboard shortcut CTRL+SHIFT+R to manually refresh it.
And depending on the version of SQL Server Management Studio (SSMS), the cache folder also contains one or more settings and configuration files. For example, in the case of SSMS 20.x, one such file is NewSettings.vssettings (first introduced in SSMS 17.x), which stores your customized preferences and environment settings, basically all the options and settings you see here:
Click on the Tools menu at the top, then Options:
The file is created or saved (if it already exists) whenever you close SQL Server Management Studio (SSMS).
If you have customized your SSMS appearance and experience significantly, you should consider backing up this file. It is located at "%USERPROFILE%\AppData\Local\Microsoft\SQL Server Management Studio\%Version%\Settings\SQL Server Management Studio".
A couple of other configuration files you may want to save are Ssms.exe.config and RegSrvr*.xml.
Clearing the cache per se is simple. First, if you have SQL Server Management Studio (SSMS) running, make sure to close it. Copy the following folders somewhere safe in case you need them, then delete them entirely. Alternatively, you could also move (cut and paste) these folders to another location.
%USERPROFILE%\AppData\Local\Microsoft\SQL Server Management Studio
%USERPROFILE%\AppData\Roaming\Microsoft\SQL Server Management Studio
There is a Microsoft article on clearing everything from the SSMS cache:
Then, when you relaunch SSMS after clearing the cache, you will see the following warning. You can ignore it by clicking OK if you want to start SSMS with a blank slate, or, as I did a moment ago, save the settings files somewhere before deleting the cache files and then copy NewSettings.vssettings back to its original location.
Here is a PowerShell script to back up and then clear the SSMS cache:
# Current user's profile location
$ProfilePath = $env:USERPROFILE

# Backup location
$DateTimeSuffix = Get-Date -Format 'yyyyMMdd_HHmmss'
$BackupPath = Join-Path $ProfilePath "Documents\SQL Server Management Studio - Backup\$DateTimeSuffix"

# Create the backup directory if it doesn't already exist
New-Item -Path $BackupPath -ItemType Directory -Force

# Get the folders where the SSMS cache files are stored
$SSMSFolders = Get-Item "$ProfilePath\AppData\*\Microsoft\SQL Server Management Studio"

# Move the SSMS cache folders to the backup directory
ForEach ($SSMSFolder in $SSMSFolders)
{
    $DestPath = Join-Path $BackupPath $SSMSFolder.Parent.FullName.Replace("$ProfilePath\AppData\", "")
    New-Item -Path $DestPath -ItemType Directory -Force
    Move-Item -Path $SSMSFolder.FullName -Destination $DestPath -Force
}
Database Property LastGoodCheckDbTime to Get the Last Successful DBCC CHECKDB on a Database
As one of the preventive database maintenance tasks, performing database integrity checks on almost every database you have is a best practice; some might say it's imperative. However, there can be exceptions, such as when you have an exact, identical copy of the same database residing on multiple servers, in which case you may not need to run DBCC CHECKDB everywhere for that database.
If you are like me, you might be using Ola Hallengren's SQL Server Maintenance Solution. It also gives you the option to create SQL Agent jobs for each task. Unfortunately, some of my colleagues often forget to add and enable schedules for those jobs, and as a result it may be some time before I find out we are not performing the required database maintenance. This, of course, is an even more serious issue for tasks involving database backups and index maintenance. Hence my quest for a solution.
When Microsoft released SQL Server 2016 SP2, it added a LastGoodCheckDbTime property to the DATABASEPROPERTYEX function. Essentially, this property gives you the date and time of the last integrity check performed using the DBCC CHECKDB command. In other words, if you happen to be using DBCC CHECKTABLE to perform integrity checks on some or even all of the tables, it won't update the timestamp returned by LastGoodCheckDbTime.
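For example, on SQL Server 2016 SP2 or later you can check the property for every database on an instance in one pass (the column aliases are mine):

```sql
-- LastGoodCheckDbTime reports 1900-01-01 for databases that have
-- never had a successful DBCC CHECKDB.
SELECT name AS database_name,
       DATABASEPROPERTYEX(name, 'LastGoodCheckDbTime') AS last_good_checkdb_time
FROM sys.databases
ORDER BY name;
```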
Searching for a Specific Value in All Tables and Columns of a SQL Server Database
Plainly speaking, this article is about searching for a specific value inside all tables and columns of a SQL Server database and returning a list of the tables and columns that contain that value.
That being said, third-party search tools have been around for a long time, as have free custom SQL scripts you can find online, for searching for specific values within a database. Even if they don't meet all your requirements, they at least offer a starting point. However, I realized these solutions weren't sufficient for my specific needs, and I found it easier and quicker to write my own script.
This script uses dynamic SQL queries to search across all user tables and columns. It allows you to specify filter criteria to search within a particular schema, table, or column, or within columns of specific data type groups, for example numeric data only. It also includes commands for exporting table data using PowerShell and the BCP utility, which was one of my requirements.
/*
-- CREATE A TEST TABLE FOR DEMO
IF OBJECT_ID('my_data_search_table', 'U') IS NOT NULL
    DROP TABLE my_data_search_table;
CREATE TABLE my_data_search_table (id INT, name VARCHAR(100));
INSERT INTO my_data_search_table (id, name) VALUES (1996, 'dummy');
*/
SET NOCOUNT ON;
USE <Your DB Name Here>;
GO
-- Drop temp tables if they exist
IF OBJECT_ID('tempdb..#t_table_columns') IS NOT NULL DROP TABLE #t_table_columns;
IF OBJECT_ID('tempdb..#results') IS NOT NULL DROP TABLE #results;
GO
-- Search string
DECLARE @search_value NVARCHAR(4000) = N'dummy'; -- Example search value
/* Limit which column data types to search.
   Here, you can specify whether to search within numeric data types, other data types, or both.
   This helps narrow the search to relevant fields, potentially speeding up the search process
   and reducing the load on the database. */
-- Valid values are numeric, other or both
DECLARE @search_datatype VARCHAR(10);
-- SET @search_datatype = 'numeric'
-- Search mode
DECLARE @exact_match BIT = 1;
-- EXECUTION OPTIONS
DECLARE @execute BIT = 1;           -- 1 to execute the search queries
DECLARE @debug BIT = 0;             -- 1 to print only
DECLARE @show_progress BIT = 1;     -- 1 to print progress messages
DECLARE @progress_interval INT = 100;
-- Schema, Table, and Column Filtering (NULL means no filter)
DECLARE @search_column_name NVARCHAR(1000);
DECLARE @search_table_name NVARCHAR(1000);
DECLARE @search_schema_name NVARCHAR(1000);
-- Dynamic SQL query
DECLARE @SQL NVARCHAR(4000);
-- If a wildcard search is requested, i.e. @exact_match = 0
IF @exact_match = 0 SET @search_value = N'%' + @search_value + N'%';
-- Temp table to store column metadata
SELECT s.name AS [schema_name],
       t.name AS [table_name],
       c.name AS [column_name],
       TYPE_NAME(c.system_type_id) AS [column_type]
INTO #t_table_columns
FROM sys.tables t
JOIN sys.schemas s ON t.schema_id = s.schema_id
JOIN sys.columns c ON t.object_id = c.object_id
WHERE t.is_ms_shipped = 0
  AND TYPE_NAME(c.system_type_id) NOT IN ('image', 'varbinary');
-- Store the list of numeric data types in a table variable
DECLARE @numeric_types TABLE (name VARCHAR(100));
INSERT INTO @numeric_types (name)
VALUES
('bigint'),
('bit'),
('decimal'),
('int'),
('money'),
('numeric'),
('smallint'),
('smallmoney'),
('tinyint'),
('float'),
('real');
IF (@search_datatype = 'numeric' AND ISNUMERIC(@search_value) = 0)
BEGIN
    RAISERROR('Error: Search value (%s) invalid for numeric search.', 16, 1, @search_value)
    GOTO QUIT
END
-- Apply filters
IF @search_schema_name IS NOT NULL AND @search_schema_name <> ''
    DELETE FROM #t_table_columns WHERE [schema_name] <> @search_schema_name;
IF @search_table_name IS NOT NULL AND @search_table_name <> ''
    DELETE FROM #t_table_columns WHERE [table_name] <> @search_table_name;
IF @search_column_name IS NOT NULL AND @search_column_name <> ''
    DELETE FROM #t_table_columns WHERE [column_name] <> @search_column_name;
IF @search_datatype IS NOT NULL AND @search_datatype <> ''
BEGIN
    IF @search_datatype NOT IN ('numeric', 'other', 'both')
    BEGIN
        RAISERROR('Error: Invalid value %s for @search_datatype.', 16, 1, @search_datatype) WITH NOWAIT
        RAISERROR('Valid values are 1) numeric 2) other 3) both.', 16, 1) WITH NOWAIT
        GOTO QUIT
    END
    ELSE IF @search_datatype = 'numeric'
        DELETE #t_table_columns FROM #t_table_columns t
        WHERE NOT EXISTS (SELECT * FROM @numeric_types v WHERE v.name = t.[column_type])
    ELSE IF @search_datatype = 'other'
        DELETE #t_table_columns FROM #t_table_columns t
        INNER JOIN @numeric_types v ON v.name = t.[column_type]
END
-- Placeholder for results
SELECT TOP 0 * INTO #results FROM #t_table_columns;
-- Progress tracking
DECLARE @total_columns INT;
DECLARE @counter INT = 0;
-- Variables for cursor
DECLARE @schema_name SYSNAME,
        @table_name SYSNAME,
        @column_name SYSNAME,
        @column_type NVARCHAR(500);
-- Declare cursor
DECLARE c1 CURSOR STATIC FOR
SELECT * FROM #t_table_columns
ORDER BY [schema_name], [table_name], [column_name];
OPEN c1;
SELECT @total_columns = @@CURSOR_ROWS;
FETCH NEXT FROM c1 INTO
    @schema_name, @table_name, @column_name, @column_type;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @counter = @counter + 1;
    -- Progress message
    IF @counter % @progress_interval = 0 AND @show_progress = 1
        RAISERROR('%i columns of %i processed', 10, 1, @counter, @total_columns) WITH NOWAIT;
    -- Build and execute the search query
    SET @SQL = N'SELECT TOP 1 ''' + @schema_name + N''' AS [schema_name], '''
             + @table_name + N''' AS [table_name], ''' + @column_name
             + N''' AS [column_name], ''' + @column_type
             + N''' AS [column_type] FROM ' + QUOTENAME(@schema_name)
             + N'.' + QUOTENAME(@table_name) + N' WHERE TRY_CAST('
             + QUOTENAME(@column_name) + N' AS VARCHAR(8000)) LIKE '''
             + @search_value + N''';';
    IF @debug = 1 PRINT @SQL;
    -- Insert into results if a match is found
    IF @execute = 1
    BEGIN
        INSERT INTO #results EXEC (@SQL);
        IF @@ROWCOUNT > 0
        BEGIN
            PRINT '';
            RAISERROR('*** match found ***', 10, 1) WITH NOWAIT;
            PRINT @SQL;
            PRINT '';
        END
    END
    FETCH NEXT FROM c1 INTO
        @schema_name, @table_name, @column_name, @column_type;
END
-- Cleanup
CLOSE c1;
DEALLOCATE c1;
-- Display results with export commands
SELECT *,
       'SELECT COUNT(*) [' + [schema_name] + '.' + [table_name]
       + '] FROM ' + QUOTENAME([schema_name]) + '.'
       + QUOTENAME([table_name]) + ' WHERE TRY_CAST('
       + QUOTENAME([column_name]) + ' AS VARCHAR(8000)) LIKE '''
       + @search_value + N''';' AS [SQL],
       'Send-SQLDataToExcel -Connection "Server=' + @@SERVERNAME
       + ';Trusted_Connection=True;" -MsSQLserver -DataBase "'
       + DB_NAME() + '" -SQL "select * from '
       + QUOTENAME([schema_name]) + '.' + QUOTENAME([table_name])
       + '" -Path "$env:USERPROFILE\Documents\' + DB_NAME() + '.'
       + [schema_name] + '.' + [table_name] + '.xlsx"' AS [PS Export],
       'BCP ' + QUOTENAME(DB_NAME()) + '.' + QUOTENAME([schema_name])
       + '.' + QUOTENAME([table_name]) + ' out %USERPROFILE%\Documents\'
       + DB_NAME() + '_' + [schema_name] + '_' + [table_name]
       + '.txt -c -t, -T -S' + @@SERVERNAME AS [BCP Export]
FROM #results
ORDER BY [schema_name], [table_name], [column_name];
QUIT:
Gotchas and workarounds:
The script does not search within columns of certain data types, notably 'IMAGE' and 'VARBINARY'. These data types often store binary data (e.g., files, images, binary large objects (BLOBs)) that require different methods to search and interpret. Searching within such data types would necessitate additional logic for conversion or binary pattern matching, which is not covered by this script. This is by design.
And for columns of type VARCHAR(MAX) and NVARCHAR(MAX) that store large amounts of text, the script's default method of casting the column to VARCHAR(8000) may truncate the data, possibly missing matches in the truncated portion. You can, however, modify the cast to use VARCHAR(MAX).
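To see the truncation risk in action, here is a small self-contained demo (the variable and the 'needle' value are made up for illustration):

```sql
-- Build a MAX-typed value whose match lives past the 8000-character mark.
DECLARE @big NVARCHAR(MAX) =
    REPLICATE(CONVERT(NVARCHAR(MAX), N'x'), 10000) + N'needle';
-- found_8000 returns 0 (the match was truncated away); found_max returns 1.
SELECT CASE WHEN TRY_CAST(@big AS VARCHAR(8000)) LIKE '%needle%' THEN 1 ELSE 0 END AS found_8000,
       CASE WHEN TRY_CAST(@big AS VARCHAR(MAX))  LIKE '%needle%' THEN 1 ELSE 0 END AS found_max;
```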
There are also a couple of performance considerations to keep in mind. The script uses a cursor to iterate over potentially thousands of columns across many tables and executes a dynamic SQL query for each column. This can be resource-intensive and slow, especially in large databases with many tables or columns. As a mitigation, you can set some filters (e.g., specific schemas or tables, numeric data only, etc.).
Similarly, for very large tables (e.g., billions of rows), the script understandably performs poorly, especially if the table has no matching records at all: the script still has to scan all rows for each and every column before discovering there are no matches. You can, however, limit the performance hit by applying the filters mentioned above.
And because it uses dynamic SQL, it also poses a risk of SQL injection, which is why I do not recommend building a UI around it where users can supply arbitrary input values. You can, however, convert the script into a stored procedure and add code to validate and sanitize any inputs passed to it.
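If you do go the stored procedure route, the key change is to stop concatenating the search value into the dynamic SQL string and instead pass it as a parameter via sp_executesql. A minimal sketch, using the demo table from the script:

```sql
-- Parameterized version of one probe query: the search pattern is passed
-- as a parameter, so its contents can never alter the statement itself.
DECLARE @sql NVARCHAR(MAX) =
    N'SELECT TOP 1 1 FROM ' + QUOTENAME('dbo') + N'.' + QUOTENAME('my_data_search_table')
  + N' WHERE TRY_CAST(' + QUOTENAME('name') + N' AS VARCHAR(8000)) LIKE @pattern;';
EXEC sp_executesql @sql, N'@pattern NVARCHAR(4000)', @pattern = N'%dummy%';
```

Object names still have to be concatenated, but wrapping them in QUOTENAME() and validating them against sys.tables and sys.columns keeps that part safe as well.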
I believe the script is a highly customizable and reasonably thorough method for searching data inside a SQL Server database, designed to accommodate some of the more typical search scenarios and preferences. Please modify and adapt it to suit your specific requirements.
Finding the Latest Backup Timestamps for Your Databases
I wanted to find out the date and time each database was last backed up, focusing specifically on full, differential, and transaction log backups, all displayed on a single line for each database for easier viewing and reading. For example:
This provides a high-level overview of when my databases were last backed up, including differential and transaction log backups, if applicable. The query results also show the duration of each backup (not shown in the above screenshot). Here is the query:
USE msdb
GO
;WITH cte_latest_backup AS (
    SELECT
        sd.Name AS DatabaseName,
        bs.type AS backup_type,
        MAX(bs.backup_start_date) AS backup_start_time,
        MAX(bs.backup_finish_date) AS backup_finish_time
    FROM sys.sysdatabases sd
    INNER JOIN msdb.dbo.backupset bs
        ON bs.database_name = sd.name
    GROUP BY sd.Name, bs.type
),
cte_full_backup AS
(
    SELECT * FROM cte_latest_backup
    WHERE backup_type = 'D'
),
cte_diff_backup AS
(
    SELECT * FROM cte_latest_backup
    WHERE backup_type = 'I'
),
cte_tlog_backup AS
(
    SELECT * FROM cte_latest_backup
    WHERE backup_type = 'L'
)
SELECT
d.name AS [Database],
d.recovery_model_desc,
full_backup.backup_start_time AS full_backup_start,
diff_backup.backup_start_time AS diff_backup_start,
tlog_backup.backup_start_time AS tlog_backup_start,
DATEDIFF(MINUTE, full_backup.backup_start_time,
full_backup.backup_finish_time) AS full_backup_elapsed_min,
DATEDIFF(MINUTE, diff_backup.backup_start_time,
diff_backup.backup_finish_time) AS diff_backup_elapsed_min,
DATEDIFF(SECOND, tlog_backup.backup_start_time,
tlog_backup.backup_finish_time) AS tlog_backup_elapsed_sec
FROM master.sys.databases d
LEFT JOIN cte_full_backup full_backup
    ON full_backup.DatabaseName = d.name
LEFT JOIN cte_diff_backup diff_backup
    ON diff_backup.DatabaseName = d.name
LEFT JOIN cte_tlog_backup tlog_backup
    ON tlog_backup.DatabaseName = d.name
WHERE d.name NOT IN ('tempdb')
ORDER BY [Database]
Gotchas:
The query doesn't capture other available backup types, such as file backups, filegroup backups, and partial backups. If your backup strategy includes any of those backup types, please consider adapting the query to meet your needs.
The query also doesn't differentiate whether a full backup was taken in copy-only mode.
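If you need to tell copy-only full backups apart, msdb.dbo.backupset exposes an is_copy_only flag that you could fold into the query; for example:

```sql
-- Most recent full backup per database, excluding copy-only backups
SELECT database_name,
       MAX(backup_finish_date) AS last_full_backup
FROM msdb.dbo.backupset
WHERE type = 'D'
  AND is_copy_only = 0
GROUP BY database_name;
```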
In AlwaysOn AG servers, some of your database backups may be offloaded or distributed across primary and secondary replicas. This query doesn't capture backups performed on other replicas, and therefore the most recent backup date/times displayed by the query may not be accurate for databases that are in an Availability Group.
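Because msdb backup history is not synchronized across replicas, you would need to run the query on each replica to get the full picture. To check whether the current replica is the preferred backup replica for a given database, you can use the built-in function below ('YourAgDatabase' is a placeholder):

```sql
-- Returns 1 if this replica is the preferred backup replica
-- for the database, 0 otherwise.
SELECT sys.fn_hadr_backup_is_preferred_replica(N'YourAgDatabase')
    AS is_preferred_backup_replica;
```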