
Monday, July 31, 2023

A Basic SQL Script to Monitor Long Running SQL Server Agent Jobs

 As a SQL Server DBA, especially a production/operations DBA in a relatively large environment, sooner or later you are going to run into runaway SQL Agent jobs, or jobs that take longer to complete than they should (for example, longer than the SLAs, whether formal or informal) and/or affect other jobs, other queries, applications, users, etc.

Or maybe you have other reasons to know what jobs are running and for how long. Generally you would use the Job Activity Monitor in SQL Server Management Studio. I do too, almost exclusively. But there are certain SQL instances and situations where I need something automated that alerts me by email if any job has been running for more than X amount of time.

If you have a third party monitoring tool that provides the functionality, great.  Or maybe you have a better solution of your own, more suitable for your needs.  If not, below is what I have used from time to time at different organizations I have worked for over the years.

This SQL script is simple and convenient; there is nothing fancy about it, but it gets the job done.

Informally, I have divided the script into two parts: one checks for the active jobs and the other sends out an email. You can run the whole script at once, or run the first part first, review the results on the screen, and then, if you would like it to send the email, highlight the second part and run it on its own (while remaining in the same SSMS window/session, though).


-- PART 1: Check for active/running jobs and their duration so far
SET NOCOUNT ON
IF OBJECT_ID('tempdb..#job_durations') IS NOT NULL DROP TABLE #job_durations
go
;WITH cte_jobs AS
(
	SELECT 
	       sja.session_id [Session ID],
		   sj.Name [Job Name], 
		   r.command [Command],
		   [Status] = CASE
				WHEN sja.start_execution_date IS NULL THEN 'Not running'
				WHEN sja.start_execution_date IS NOT NULL AND sja.stop_execution_date IS NULL THEN 'Running'
				WHEN sja.start_execution_date IS NOT NULL AND sja.stop_execution_date IS NOT NULL THEN 'Not running'
				END,
		   sja.start_execution_date [Job Start Time],

		   -- the total_elapsed_time is in milliseconds
		   r.total_elapsed_time 'Elapsed Milliseconds',
		   r.total_elapsed_time / (1000) 'Elapsed Seconds',
		   r.total_elapsed_time / (1000) / 60 'Elapsed Minutes',
		   r.total_elapsed_time / (1000) / 60 / 60 'Elapsed Hours',
		   sj.job_id [Job ID] 

	FROM msdb..sysjobs sj
	INNER JOIN msdb..sysjobactivity sja ON sj.job_id = sja.job_id
	INNER JOIN (SELECT TRY_CONVERT(binary(30), SUBSTRING(program_name, 30, 34), 1) job_id, *
					FROM sys.dm_exec_sessions 
					WHERE program_name like 'SQLAgent - TSQL JobStep%') s on sj.job_id = s.job_id
	LEFT JOIN sys.dm_exec_requests r on s.session_id = r.session_id
	WHERE sja.session_id = (SELECT MAX(session_id) FROM msdb..sysjobactivity where job_id = sj.job_id)
)

SELECT cte.*,
		-- Convert the duration into words
		[Duration] = 
			CASE WHEN [Elapsed Hours] > 0 THEN CAST([Elapsed Hours] AS VARCHAR(20)) +' Hours' ELSE '' END +

			CASE WHEN [Elapsed Minutes] > 0 AND [Elapsed Hours] = 0 THEN CAST([Elapsed Minutes] AS VARCHAR(20)) +' Minutes' ELSE '' END +
			CASE WHEN [Elapsed Minutes] > 0 AND [Elapsed Hours] > 0 THEN ', ' + CAST([Elapsed Minutes] - ([Elapsed Hours]*60) AS VARCHAR(20)) + 
					' Minutes' ELSE '' END +

			CASE WHEN [Elapsed Seconds] > 0 AND [Elapsed Minutes] = 0 AND [Elapsed Hours] = 0 THEN CAST([Elapsed Seconds] AS VARCHAR(20)) +
					' Seconds' ELSE '' END  +
			CASE WHEN [Elapsed Seconds] > 0 AND [Elapsed Minutes] > 0 THEN ', ' + CAST([Elapsed Seconds] - ([Elapsed Minutes] * 60) AS VARCHAR(20)) +
					' Seconds' ELSE '' END   
		
INTO #job_durations
		
FROM cte_jobs cte
WHERE  1 = 1
AND [Status]	='Running'
AND [Elapsed Minutes] > 30
--    AND [Elapsed Hours] > 1
--    AND [Job Name] = 'DBA - Simulate Long Running Job'
;

SELECT [Session ID], [Job Name], Command, [Job Start Time], Duration, [Elapsed Milliseconds] FROM #job_durations order by [Elapsed Milliseconds] desc;

IF @@ROWCOUNT = 0 GOTO QUIT

-- PART 2: Send email alert

-- variables for html formatted email
declare @html_body NVARCHAR(MAX), @html_table_head VARCHAR(1000),  @html_table_tail VARCHAR(1000)
set @html_table_tail = '</table></body></html>' ;
set @html_table_head = '<html><head>' + '<style>'
	+ 'td {border: solid black;border-width: 1px;padding-left:5px;padding-right:5px;padding-top:1px;padding-bottom:1px;font: 15px arial} '
	+ '</style>' + '</head>' + '<body>'  
	+ '<pre style="font-weight: bold">'
	+ 'SQL Server Instance: ' + @@SERVERNAME

	+ '<br> <br> Report generated on : '+ cast(getdate() as varchar(100)) 
	+ ' <br> <table cellpadding=0 cellspacing=0 border=0>' 
	+ '<br> <tr> ' 
	+ '<td bgcolor=#E6E6FA><b>Job Name</b></td>'
	+ '<td bgcolor=#E6E6FA><b>Duration</b></td>'
	+ '<td bgcolor=#E6E6FA><b>Command</b></td></tr>' ;

set @html_body = ( SELECT    td = left([Job Name], 50), '',
td = Duration, '',
td = [Command], ''
FROM      #job_durations 
order by [Elapsed Milliseconds] desc
FOR   XML RAW('tr'),ELEMENTS);

SELECT  @html_body = @html_table_head + ISNULL(@html_body, '') + @html_table_tail

EXEC msdb.dbo.sp_send_dbmail  
   --    @profile_name = 'DBMail_DBA',  -- If using a named mail profile
		@recipients = '<EmailAddress>',  
		@body = @html_body,  
		@subject = 'Long Running SQL Server Jobs found',
		@body_format = 'HTML' ;

QUIT:


For a demo, I started two sample jobs, doing nothing but running the WAITFOR DELAY command.  Here are the results displayed on the screen:




And here is the sample email I received:




A couple more screenshots for the same:



These next two screenshots are from a real/production job:



I hope you find this useful as a good starting point for what you may need in your environment.  
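By the way, the demo jobs above do nothing but run WAITFOR DELAY. If you want a similar throwaway job to test the alert yourself, below is a minimal sketch using the msdb job procedures. The job name matches the commented-out filter in Part 1 and is only an example; adjust the name, owner and delay to your needs.

-- Minimal sketch: create and start a do-nothing job that sleeps for an hour
USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'DBA - Simulate Long Running Job';

EXEC dbo.sp_add_jobstep
     @job_name  = N'DBA - Simulate Long Running Job',
     @step_name = N'Sleep',
     @subsystem = N'TSQL',
     @command   = N'WAITFOR DELAY ''01:00:00'';';

-- Register the job on the local server, then start it so it shows up as "Running"
EXEC dbo.sp_add_jobserver
     @job_name = N'DBA - Simulate Long Running Job';

EXEC dbo.sp_start_job
     @job_name = N'DBA - Simulate Long Running Job';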


Wednesday, July 19, 2023

Tracking the Inaccurate Cardinality Estimates

When it comes to query performance, cardinality estimates have lately been a hot topic with each new release of SQL Server. They are making the query optimizer smarter, helping it generate more efficient query execution plans and, as a result, making queries run faster.

In fact, one might argue that the SQL Server query optimizer, being 'cost-based,' is fundamentally reliant on cardinality estimation, using per-attribute summary statistics (histograms). I might venture to say that in the real world, cardinality inaccuracies are relatively frequent occurrences that lead to slow queries and unpredictable performance. The following description and warning included with the SSMS 'create Extended Events' wizard should be self-explanatory regarding why cardinality estimates matter.

Inaccurate Cardinality Estimates

Occurs when an operator outputs significantly more rows than estimated by the Query Optimizer. Use this event to identify queries that may be using sub-optimal plans due to cardinality estimate inaccuracy. Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.


Wait a minute, according to that description, it's bad for performance when the optimizer underestimates the row count. Does that mean I don't need to worry about it if it overestimates the row count? Actually, overestimation is also bad for performance, perhaps not as severely, but it's still problematic enough that you should not ignore it. When the optimizer overestimates the row count:

  • The optimizer will allocate too much memory to the query, resulting in wasted memory. If this happens frequently, it can starve the SQL Server of memory, leading to spills to disk, hard paging, and so on.
  • The query will be executed with a parallel plan when a serial plan would not only have been faster but also would have avoided the possibility of the infamous CXPACKET waits.
  • The optimizer is more likely to choose scanning the entire index over seeking.


At a very basic level, a cardinality estimate is an estimate of the row count for each operation in the query, particularly for columns used in filter conditions (the WHERE clause) and columns used in join predicates.

How are these estimates calculated?


While the internal algorithms evolve and keep changing, in general, the Optimizer uses the statistics created on the underlying objects/tables. Statistics are implicitly or automatically created when any index is created, and you can also create them separately, independent of any index. Additionally, SQL Server will automatically create statistics it needs if the AUTO_CREATE_STATISTICS setting is ON for the database. There are three key pieces of information in statistics:

  • The header includes, among other things, the last update date, the number of rows sampled, etc. This information can be obtained from the catalog view sys.stats and the function sys.dm_db_stats_properties.
  • Index density measures the uniqueness of a column. Low density indicates higher uniqueness, and vice versa. This information can be obtained from DBCC SHOW_STATISTICS with the DENSITY_VECTOR option.
  • The histogram shows the number of distinct values in a column and the distribution of the data in it; it can be queried with the function sys.dm_db_stats_histogram. For example, consider the distribution of data for the Code column with 5 distinct values:



If there is an index (and, consequently, statistics on column values) on the Code column, and assuming the stats are up to date and accurate, the Optimizer is (rightfully so) more likely to use a Seek against that index for the following query, because it knows from the statistics that the index contains far fewer rows for Code = 'E'.

SELECT * FROM Blood_Type WHERE Code = 'E';


If the filter condition instead contains Code = 'F', the Optimizer is more likely to not only ignore the index but also perform a full scan on the table/clustered index!
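If you want to see the data the Optimizer is relying on, here is a small sketch for inspecting the relevant histogram. The Blood_Type table and the idx_Blood_Type_Code statistics name are hypothetical placeholders; substitute your own table and statistics names.

-- Classic way: dump just the histogram for the statistics on the Code column
DBCC SHOW_STATISTICS ('dbo.Blood_Type', 'idx_Blood_Type_Code') WITH HISTOGRAM;

-- Or use the DMF referenced later in this article
SELECT h.step_number, h.range_high_key, h.equal_rows, h.range_rows
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) h
WHERE s.object_id = OBJECT_ID('dbo.Blood_Type')
  AND s.name = 'idx_Blood_Type_Code'
ORDER BY h.step_number;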

While there have been significant improvements in how the CEs are calculated, there will still be instances where those estimates differ, sometimes significantly, from reality, i.e., the actual row counts.

Before I get to the Extended Events, let me share a couple of queries that I use to look at the metadata for the statistics and their histograms.

SELECT -- TOP 10
	   SCHEMA_NAME(o.schema_id) [object_owner],
	   o.name [object_name],
	   s.name [stats_name],
	   (sp.rows_sampled * 100)/ sp.rows percent_sampled,
       sp.*, 
       s.auto_created,
	   s.user_created,
	   s.has_filter,
	   s.is_temporary,
	   s.is_incremental
FROM sys.objects o
inner join sys.stats s ON s.object_id = o.object_id
cross apply sys.dm_db_stats_properties(s.object_id, stats_id) sp
WHERE	1=1
	AND o.is_ms_shipped = 0
	AND s.auto_created = 0
--	AND o.name = 'LookupCodes'
order by sp.rows desc;  

The query retrieves information about statistics for user-created objects, excluding system objects, and sorts the result by the number of rows in the statistics in descending order. It also calculates the percentage of rows sampled for each statistic.












Get index/statistics histograms:

SELECT TOP 100 
		[schemas].name [schema_name],
		t.name TABLE_NAME,
		i.name index_name,
		STATS_DATE(i.object_id, i.index_id) stats_update_date,
		INDEX_COL([schemas].name + '.' + t.name, i.index_id, 1) ind_column,
		h.*
FROM sys.tables t
INNER JOIN sys.schemas [schemas] ON t.schema_id = [schemas].schema_id
INNER JOIN sys.indexes i ON i.object_id = t.object_id
INNER JOIN sys.stats s ON i.object_id = s.object_id AND i.index_id = s.stats_id
INNER JOIN sys.stats_columns sc ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id AND sc.stats_column_id = 1
-- INNER JOIN sys.columns c ON c.object_id = s.object_id AND c.column_id = sc.column_id
CROSS APPLY sys.dm_db_stats_histogram(i.object_id, i.index_id) h
WHERE 1=1
  AND t.is_ms_shipped = 0
--  AND i.is_unique = 0
--  AND i.index_id > 1
  AND t.name = 'tblxxx'
  AND i.name = 'idx_nc_tblxxx1';



The query retrieves detailed information about the specified non-clustered index, including its name, schema, last update date for statistics, the name of the first column in the index, and histogram data.











Now, the remainder of the article will focus on leveraging the Extended Events feature to detect these irregularities, often referred to as 'skewed cardinality estimates' in SQL Server documentation. 

And I would like to also emphasize the importance of selective data capture to avoid excessive information gathering.





Creating the extended event

(If you prefer TSQL scripts instead of a GUI wizard, you will find them towards the end of this article.)

To create the extended event, connect to the SQL Server instance, expand Management ---> Extended Events, and right-click on Sessions to launch the New Session Wizard:















Click Next on the Introduction page, if it appears on your screen.

On the next page, enter a name for the XE session; I am calling it xe_inaccurate_cardinality_estimate.


Click Next and on the next page, select Do Not Use a Template.


On the next page, make sure to first click on the Channel drop-down and check Debug. This is important. Otherwise, the "inaccurate_cardinality_estimate" event will not be displayed.












In the Event Library search box, type "inaccurate_cardinality_estimate" and click on the event name to select it. Click the > button to add the event.











Click Next, which will bring up the option to capture Global Fields.  Here, I will check the boxes next to database_id, database_name, sql_text and plan_handle. Click Next.



In the following screen, you can apply filters to refine the results as needed, such as by specifying a filter based on the database_id. I'll be adding a filter that selects events where the actual row count exceeds 1000. Without such a filter, SQL Server might capture events even when the estimated row count is 1 and the actual count is 2. Would you consider this a significant deviation from the estimate? It certainly is from the perspective of Extended Events (XE).

To enhance the effectiveness and practicality of this approach, you may also consider adding an additional filter to capture events where the estimated row count significantly differs from the actual row count. However, for now, I'll stick with filtering based on actual row counts only.

 











Click Next, which brings up the page to select a storage location for the captured events.










Check the box in front of "Save data to a file...", which should automatically be filled in with the name of the event, "xe_inaccurate_cardinality_estimate", as the storage file name. You can enter a complete path to the event file; if not, by default the event files will be stored under \MSSQL\Log within the installation folder of the instance.


Adjust the maximum file size and rollover settings to your preferences, then click Next.

Click Finish to create the event.
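If you would rather script the session than click through the wizard, here is a minimal T-SQL sketch of an equivalent session definition. It assumes the event exposes an actual_rows field for the predicate (the same filter chosen in the wizard above) and writes to the default LOG folder; adjust the session name, filter and file settings to your environment.

-- Minimal sketch of an equivalent Extended Events session (names and values are examples)
CREATE EVENT SESSION [xe_inaccurate_cardinality_estimate] ON SERVER
ADD EVENT sqlserver.inaccurate_cardinality_estimate
(
    ACTION (sqlserver.database_id, sqlserver.database_name,
            sqlserver.sql_text, sqlserver.plan_handle)
    WHERE ([actual_rows] > 1000)    -- only capture when the actual row count exceeds 1000
)
ADD TARGET package0.event_file
(
    SET filename = N'xe_inaccurate_cardinality_estimate',
        max_file_size = (5),        -- MB
        max_rollover_files = (4)
)
WITH (STARTUP_STATE = OFF);
GO

-- Start the session when you are ready to capture
ALTER EVENT SESSION [xe_inaccurate_cardinality_estimate] ON SERVER STATE = START;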

Wednesday, July 12, 2023

A high level view of the most costly queries by CPU

This article is about identifying high CPU usage queries in SQL Server at the batch and stored procedure levels.

Understanding which SQL Server queries consume the most CPU resources is crucial for database performance tuning. However, focusing solely on individual statements might not always provide the complete picture. Instead, examining queries at a batch or stored procedure level offers a broader view, enabling more effective optimizations. This approach aggregates CPU usage across all statements within a batch or stored procedure, facilitating a targeted investigation of high-impact areas.

Overview

The goal here is to aggregate CPU usage by all statements within the same batch or stored procedure, identified by their shared 'plan_handle'. This high-level view helps in pinpointing which queries are most costly in terms of CPU resources. Later on, we can drill down to individual statements with the highest CPU consumption for detailed analysis.


Important Note: CPU time is measured in microseconds, where 1,000,000 microseconds equal one second. For demonstration purposes, this guide focuses on the top 5 queries based on CPU usage, with more than one execution and a cached execution plan. Adjustments may be necessary depending on the activity level and type in your specific SQL instance.


The following query aggregates CPU times for batches or stored procedures, ranking them to identify the top consumers.


IF OBJECT_ID('tempdb..#top_costly_queries') IS NOT NULL
DROP TABLE #top_costly_queries;
GO

;WITH cte AS 
(
    SELECT TOP 5 
            SUM(qs.total_worker_time) AS total_cpu_time,
            SUM(qs.execution_count) AS total_execution_count,
            SUM(qs.total_worker_time) / SUM(qs.execution_count) AS avg_cpu_time,
            SUM(qs.total_elapsed_time) / SUM(qs.execution_count) AS avg_elapsed_time,
            COUNT(*) AS number_of_statements,
            qs.plan_handle
    FROM   sys.dm_exec_query_stats AS qs
    GROUP  BY qs.plan_handle
    HAVING SUM(qs.execution_count) > 1
    ORDER  BY SUM(qs.total_worker_time) DESC  -- without this ORDER BY, TOP 5 would not pick the highest CPU consumers
)
SELECT 
    DB_NAME(qp.dbid) AS db_name,
    COALESCE(QUOTENAME(DB_NAME(qp.dbid)) + N'.' + QUOTENAME(OBJECT_SCHEMA_NAME(qp.objectid, qp.dbid)) + 
             N'.' + QUOTENAME(OBJECT_NAME(qp.objectid, qp.dbid)), '') AS obj_name,
    qp.objectid,
    qp.query_plan,
    cte.*
INTO   #top_costly_queries
FROM   cte
       CROSS APPLY sys.dm_exec_query_plan(cte.plan_handle) AS qp
ORDER  BY total_cpu_time DESC;






Analyzing the Results

After identifying the batches or stored procedures with the highest CPU usage, we focus on the specific statements within them:


SELECT t.*,
       SUBSTRING(st.text, ( qs.statement_start_offset / 2 ) + 1, (
            (CASE qs.statement_end_offset 
                WHEN -1 THEN DATALENGTH(st.text)
                ELSE qs.statement_end_offset 
            END - qs.statement_start_offset ) / 2 ) + 1) AS statement_text,
       qs.total_worker_time / qs.execution_count AS avg_cpu_time_stmt,
       qs.*,
       st.*
FROM   sys.dm_exec_query_stats AS qs
       INNER JOIN #top_costly_queries AS t ON t.plan_handle = qs.plan_handle
       CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER  BY qs.total_worker_time DESC;


Partial Results:












This detailed examination helps pinpoint specific statements within the identified batches or stored procedures that warrant further investigation due to their high CPU usage.

Analyzing the results, we might find outliers indicating inefficient queries. The next step involves reviewing the execution plan of these queries, accessible via the 'query_plan' link in the query results, to identify optimization opportunities.

Conclusion


The effectiveness of SQL Server performance tuning lies in a holistic approach. While this guide focuses on CPU usage, other factors such as I/O wait times and overall server performance also play crucial roles. Regularly reviewing and optimizing high-impact queries ensures efficient resource utilization and optimal database performance. Remember, the impact of running performance tuning queries in a production environment should be carefully considered, as they can be resource-intensive themselves. Always aim for a balanced approach, taking into account the full spectrum of performance metrics.


 



Tuesday, July 11, 2023

Generate SQL Script to Allow or Disallow Access To All Data

I had a requirement to temporarily prevent access, read or write, to a particular database for all users except 2 users. There were hundreds of SQL logins with access to this database, but since 2 users still needed full access, I could not simply take it offline or put it into read-only mode. And keep in mind this restriction was to be temporary. I also could not simply disable their logins (script at the end of this article), as some users have access to other databases that needed to be maintained.

One way to accomplish this is by adding users to the db_denydatareader and db_denydatawriter database-level roles. Users will not be able to read/write the tables directly or through views, even if they own the table or have database owner permission. But they can still read/write if allowed through stored procedures, functions, etc. So to disallow access to data through code, you can deny execute permissions at the schema level:

DENY EXECUTE ON SCHEMA::[dbo] TO [User1];
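For completeness, here is how the deny roles mentioned above can be assigned and later removed; User1 is just a placeholder user name.

-- Block direct reads/writes for a user (User1 is a placeholder)
ALTER ROLE db_denydatareader ADD MEMBER [User1];
ALTER ROLE db_denydatawriter ADD MEMBER [User1];

-- And to undo it later
ALTER ROLE db_denydatareader DROP MEMBER [User1];
ALTER ROLE db_denydatawriter DROP MEMBER [User1];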

For more information on schema permissions:

https://learn.microsoft.com/en-us/sql/t-sql/statements/grant-schema-permissions-transact-sql?view=sql-server-ver16


And that is the approach I ended up taking: denying permissions at the schema level to all objects owned by that schema. I wrote the following script to generate the SQL statements for each user, excluding the two users:

/* 
-- List of valid permissions at the schema level
ALTER
CONTROL
CREATE SEQUENCE
DELETE
EXECUTE
INSERT
REFERENCES
SELECT
TAKE OWNERSHIP
UPDATE
VIEW CHANGE TRACKING
VIEW DEFINITION

Please see: https://learn.microsoft.com/en-us/sql/t-sql/statements/grant-schema-permissions-transact-sql?view=sql-server-ver16

*/
SET NOCOUNT ON
GO
USE <DBName>
DECLARE @schema_owner varchar(100)
DECLARE @schema_permission varchar(100) = 'EXECUTE'
SET @schema_owner = 'dbo'     

declare @valid_permissions table(valid_permission_name varchar(100))
insert into @valid_permissions
values
('ALTER'),
('CONTROL'),
('CREATE SEQUENCE'),
('DELETE'),
('EXECUTE'),
('INSERT'),
('REFERENCES'),
('SELECT'),
('TAKE OWNERSHIP'),
('UPDATE'),
('VIEW CHANGE TRACKING'),
('VIEW DEFINITION')

IF SCHEMA_ID(@schema_owner) is null
BEGIN
		RAISERROR('Error: Schema %s does not exist.', 16, 1, @schema_owner)
		GOTO QUIT
END


if not exists(select * from @valid_permissions where valid_permission_name = @schema_permission)
BEGIN
		RAISERROR('Error: Permission (%s) is not a valid schema permission.', 16, 1, @schema_permission)
		SELECT valid_permission_name FROM @valid_permissions
		GOTO QUIT
END


SELECT 
		name [user_name],
		@schema_owner [schema_name],
		'USE ' + QUOTENAME(db_name()) + ';' + 
		'GRANT ' + @schema_permission + ' ON SCHEMA::' + QUOTENAME(@schema_owner) + ' TO ' + QUOTENAME(name) + ';' [Grant_Schema_Access],

		'USE ' + QUOTENAME(db_name()) + ';' + 
		'DENY ' + @schema_permission + ' ON SCHEMA::' + QUOTENAME(@schema_owner) + ' TO ' + QUOTENAME(name) + ';' [Deny_Schema_Access]

FROM sys.database_principals
WHERE      name not in ('public','dbo','guest','INFORMATION_SCHEMA','sys')
       AND name not in ('AppUser1','AppUser2')
       AND is_fixed_role = 0
ORDER BY name

QUIT:

The results of this script will contain the script to grant as well as to deny the permission, in two separate columns, both of which I will be needing:







There is also a REVOKE option to remove a granted schema permission.
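For example (again with User1 as a placeholder), a previously granted or denied permission can be removed like this:

REVOKE EXECUTE ON SCHEMA::[dbo] FROM [User1];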

Now, a while back SQL Server allowed the 'ALL' keyword to grant or revoke all permissions in one go. That is no longer allowed, so you will need to rerun the script for each permission you want to grant or revoke; in my particular case that would be SELECT, INSERT, UPDATE & EXECUTE. To avoid that I could turn this into a stored procedure, but I didn't want to add the complexity or the dynamic SQL I would have to resort to for that.

If you just need to prevent users from being able to log in altogether, you can use the following to generate the SQL script for it:

use AdminDBA
SELECT sp.name [Login_Name],
       dp.name [DB_User],
       'ALTER LOGIN ' + QUOTENAME(sp.name) +' DISABLE;' [Disable_Logins],
       'ALTER LOGIN ' + QUOTENAME(sp.name) +' ENABLE;' [Enable_Logins]
FROM sys.database_principals dp
INNER JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.name not in ('public','dbo','guest','INFORMATION_SCHEMA','sys')
  AND dp.name not in ('AppUser1', 'AppUser2')
ORDER BY dp.name


Monday, July 10, 2023

A Barebone Script To Generate SQL Script to Create Synonyms

This stored procedure can be used to generate a SQL script to create synonyms for all tables in a source database. Optionally, on the local SQL instance this SP can also create the synonyms for you, provided the target database and schema (defaults to dbo and therefore can be omitted) already exist there.

-- exec spGenerateSynonyms @sourcedb = 'AdminDBA', @targetdb = 'AdminDBA', @sourceschema= 'dbo', @targetschema='newschema', @printOnly = 1
/*
Description: 
Generates script to create synonyms in target db (usually, a blank db) for 
all tables that exist in source db (usually a subscriber db)

Requirements:
Source database and source schema (defaults to dbo so it can be omitted) 
must exist. If parameter @PrintOnly is set to 0, meaning the script would 
also run the CREATE SYNONYM statements, then the target database and 
schema must exist too... 

Caveats:
For now this procedure only generates scripts for tables. 
Synonyms on other types of objects (views, stored procedures etc.) are 
also supported by sql server.


Examples:
exec spGenerateSynonyms @sourcedb = 'EnterpriseSub', @targetdb = 'Enterprise', @printOnly = 1
exec spGenerateSynonyms @sourcedb = 'AdminDBA', @targetdb = 'Enterprise', @sourceschema='dbo',  @targetschema='myapp',  @printOnly = 1

exec spGenerateSynonyms @sourcedb = 'AdminDBA', @targetdb = 'Enterprise', @sourceSchema = 'appuser1', @printOnly = 1
exec spGenerateSynonyms @sourcedb = 'AdminDBA', @targetdb = 'AdminDBA', @targetschema='newschema', @printOnly = 0

exec spGenerateSynonyms @targetdb = 'ProductUserMaster' -- INVALID
exec spGenerateSynonyms @sourcedb = 'AdminDBA', @targetdb = 'AdminDBA'  -- INVALID, SOURCE AND TARGET  DATABASES + SCHEMAS CANNOT BE SAME

*/


set nocount on
go
use AdminDBA   -- Change this to the whatever database you are using to store DBA objects
go
if object_id('spGenerateSynonyms') is null
BEGIN
   PRINT 'Creating stored procedure spGenerateSynonyms'
   exec('create PROCEDURE spGenerateSynonyms as select '''' empty_string')
END
ELSE
BEGIN
   PRINT 'Stored procedure spGenerateSynonyms already exists.'
   PRINT 'Modifying stored procedure spGenerateSynonyms....'
END
GO
alter  PROCEDURE spGenerateSynonyms 
(
   @sourceDB sysname,                 -- aka base database where the objects are located
   @targetDB sysname,                 -- database where the synonyms will be created

   @object_type varchar(100) = 'Tables',   -- type of objects to create synonyms for

   @sourceSchema sysname = 'dbo',     -- base schema name... defaults to dbo
   @targetSchema sysname = 'dbo',     -- defaults to dbo

   @PrintOnly bit = 1                 -- whether to also create the synonyms or only print the SQL statements to create them
)


as
set nocount on

declare @sql varchar(4000), @def nvarchar(max)
declare @sql_string nvarchar(4000)

declare @sourceschema_id table(schema_id int)
declare @targetschema_id table(schema_id int)


select @sourcedb SourceDB, @sourceSchema SourceSchema, @targetdb TargetDB, @targetSchema TargetSchema

-- ************* VALIDATE INPUTS *********************
if @targetdb is null or @sourcedb is null
begin
   raiserror('Error: Source and target database names are required and cannot be null. Exiting...', 16, 1)
   return -1
end

if @targetdb  = @sourcedb and @sourceSchema = @targetSchema
begin
   raiserror('Error: Source and target databases + schemas cannot be the same. Exiting...', 16, 1)
   return -1
end

if  db_id(@sourcedb) is null or  databasepropertyex(@sourcedb, 'Status')!='ONLINE'
begin
   raiserror('Error: Source database %s not found or not ONLINE. Exiting...', 16, 1, @sourcedb)
   return -1
end

-- If @PrintOnly=0 the SP will try to create the synonyms
-- Therefore the target db and the target schema must already exist
if @PrintOnly = 0 
BEGIN

		if  (db_id(@targetDB ) is null or  databasepropertyex(@targetDB , 'Status')!='ONLINE')
		begin
		   raiserror('Error: The target database %s not found or not ONLINE. Exiting...', 16, 1, @targetDB )
		   return -1
		end

		set @sql = 'select schema_id from [' + @targetDB + '].sys.schemas where name = ''' + @targetschema + ''''
		insert into @targetschema_id  exec (@sql)
		if not exists (select * from @targetschema_id)
		BEGIN
		   raiserror('Error: Target schema %s not found in the %s database . Exiting...', 16, 1, @targetSchema, @targetdb)
		   return -1
		END


END




set @sql = 'select schema_id from [' + @sourceDB + '].sys.schemas where name = ''' + @sourceSchema + ''''
insert into @sourceschema_id exec (@sql)
if not exists (select * from @sourceschema_id)
BEGIN
	raiserror('Error: Source schema %s not found in the %s database . Exiting...', 16, 1, @sourceschema, @sourceDB)
	return -1
END



if object_id('tempdb..#t1_synonyms_definitions') is not null
   drop table #t1_synonyms_definitions
create table #t1_synonyms_definitions(definition nvarchar(max))
set @sql = 'select  ''
Use [' + @Targetdb + ']
IF NOT EXISTS (SELECT * FROM sys.synonyms WHERE schema_id = schema_id(''''' + @targetSchema + ''''') and name = N'''''' + NAME + '''''') 
CREATE SYNONYM [' + @targetSchema + '].['' + name + ''] FOR [' + @sourcedb + '].[' + @sourceSchema + '].['' + name  + '']'' from [' + @sourcedb + '].sys.tables'

if @sourceSchema is not null
set @sql = @sql + ' where schema_name(schema_id) = ''' + @sourceSchema + ''''

insert into #t1_synonyms_definitions exec (@sql)
select * from #t1_synonyms_definitions
declare c1 cursor for select * from #t1_synonyms_definitions
open c1
fetch c1 into @def 
while @@fetch_status = 0
begin
      print @def


      if @printOnly = 0 exec (@def)
      fetch c1 into @def 

end
close c1
deallocate c1

But why synonyms? Synonyms can mask/hide the underlying location of the base objects (think of linked server queries, cross database queries etc.),  regardless of who owns them and/or their names.  Or maybe you have two identical copies of the same app/code, one pointing to the live data and the other for the archive/historical data.
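To make that concrete, here is a tiny hypothetical example; ArchiveDB, LiveDB and the Customers table are placeholders. The application keeps querying dbo.Customers, while the synonym silently redirects it to the archive database:

-- Hypothetical: point a local name at a table in another database
CREATE SYNONYM dbo.Customers FOR ArchiveDB.dbo.Customers;

-- The application query is unchanged, but now reads from ArchiveDB
SELECT TOP 10 * FROM dbo.Customers;

-- Repointing later is just a drop and re-create
DROP SYNONYM dbo.Customers;
CREATE SYNONYM dbo.Customers FOR LiveDB.dbo.Customers;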

You might be wondering, well views can achieve the same and I can add some logic to make them even useful. And you would be right, views are more useful and certainly has more use cases. But if you are creating views only to mask the underlying location of tables or their names, then I would say synonyms are better because for one thing, the views would not automatically reflect changes to the underlying tables structures. Of course then there are good use cases where you don't want a view to exactly reflect the underlying table structure.