Monday, July 31, 2023

Monitoring Long-Running SQL Server Agent Jobs with T-SQL (Updated)


As a SQL Server DBA, especially when working in production or operational environments, you’ll eventually run into SQL Agent jobs that run longer than they should. Maybe they exceed your SLAs or start affecting other jobs, applications, or users. Tools like the Job Activity Monitor in SQL Server Management Studio (SSMS) are great for keeping an eye on things manually, but sometimes you need automation, something that can alert you right away when a job runs past a certain threshold.

In this post, I’ll share a simple and effective T-SQL script that helps you track and get alerts for long-running SQL Agent jobs. It’s easy to plug into your environment, doesn’t depend on any third-party tools, and can be tailored to fit your organization’s needs. Even if you already have access to a full suite of monitoring tools, I still find this method incredibly handy because it’s straightforward, flexible, and quick to adapt wherever and whenever you need it.


Script Overview

The script is divided into two main parts:

  • Part 1 identifies active/running jobs whose run durations exceed a set threshold.

  • Part 2 formats the results and sends an email alert with the job details.

You can execute the whole script at once or run each part separately based on your workflow.


Setup Instructions

  1. Configure Database Mail if not already enabled.

  2. Update the variables at the top of the script, especially the runtime threshold (in minutes) and email address.

  3. Schedule this script as a SQL Agent job (a sketch follows this list) or run it manually as required.

  4. Review output in SSMS or wait for the automated email alert.
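
For step 3, here is a rough sketch of what the wrapper SQL Agent job could look like, running the check every 10 minutes. The job, step, and schedule names and the 10-minute cadence are placeholders I made up for illustration; paste the full script (Parts 1 and 2 below) into the step's @command.

-- Sketch only: names and schedule are placeholders, adjust to your standards
USE msdb;
GO
EXEC dbo.sp_add_job
     @job_name = N'DBA - Monitor Long Running Jobs';

EXEC dbo.sp_add_jobstep
     @job_name      = N'DBA - Monitor Long Running Jobs',
     @step_name     = N'Check and alert',
     @subsystem     = N'TSQL',
     @database_name = N'master',
     @command       = N'/* paste the full monitoring script (Parts 1 and 2) here */';

EXEC dbo.sp_add_jobschedule
     @job_name             = N'DBA - Monitor Long Running Jobs',
     @name                 = N'Every 10 minutes',
     @freq_type            = 4,    -- daily
     @freq_interval        = 1,
     @freq_subday_type     = 4,    -- units of minutes
     @freq_subday_interval = 10;   -- every 10 minutes

EXEC dbo.sp_add_jobserver
     @job_name    = N'DBA - Monitor Long Running Jobs',
     @server_name = N'(local)';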

TSQL Script: 

--  TESTED ON SQL SERVER 2016 AND UP ------------

-- Part 1: Identify long-running jobs
SET NOCOUNT ON;

-- Set the alert threshold values below
-- You can set either minutes or seconds or both as a threshold 
-- A maximum of 35791 minutes can be specified; anything larger overflows the INT milliseconds value (arithmetic overflow error)

DECLARE @ThresholdMinutes INT = 30;      -- Change as needed, max up to 35791 minutes
DECLARE @ThresholdSeconds INT = 1;      -- Change as needed


SET @ThresholdMinutes = ISNULL(@ThresholdMinutes, 0);
SET @ThresholdSeconds = ISNULL(@ThresholdSeconds, 0);

DECLARE @RecipientEmail NVARCHAR(400) = '<EmailAddress>'; -- Add actual recipient email

-- Convert Threshold value into Milliseconds
DECLARE @ThresholdMilliseconds INT ;
SET @ThresholdMilliseconds = (@ThresholdSeconds * 1000) + (@ThresholdMinutes * 1000 * 60);

IF OBJECT_ID('tempdb..#job_durations') IS NOT NULL DROP TABLE #job_durations;

;WITH cte_jobs AS (
    SELECT
        sja.session_id AS [Session ID],
        sj.name AS [Job Name],
        r.command AS [Command],
        [Status] = CASE
            WHEN sja.start_execution_date IS NULL THEN 'Not running'
            WHEN sja.start_execution_date IS NOT NULL AND sja.stop_execution_date IS NULL THEN 'Running'
            ELSE 'Not running'
        END,
        sja.start_execution_date AS [Job Start Time],
        r.total_elapsed_time AS [Elapsed Milliseconds],
        r.total_elapsed_time / 1000 AS [Elapsed Seconds],
        r.total_elapsed_time / 1000 / 60 AS [Elapsed Minutes],
        r.total_elapsed_time / 1000 / 60 / 60 AS [Elapsed Hours],
        sj.job_id AS [Job ID]
    FROM msdb..sysjobs sj
    INNER JOIN msdb..sysjobactivity sja ON sj.job_id = sja.job_id
    INNER JOIN (
        SELECT
            TRY_CONVERT(binary(30), SUBSTRING(program_name, 30, 34), 1) AS job_id,
            *
        FROM sys.dm_exec_sessions
        WHERE program_name LIKE 'SQLAgent - TSQL JobStep%'
    ) s ON sj.job_id = s.job_id
    LEFT JOIN sys.dm_exec_requests r ON s.session_id = r.session_id
    WHERE sja.session_id = (SELECT MAX(session_id) FROM msdb..sysjobactivity WHERE job_id = sj.job_id)
)
SELECT
    cte.*,
    [Duration] =
        CASE WHEN [Elapsed Hours] > 0 THEN CAST([Elapsed Hours] AS VARCHAR(20)) + ' Hours' ELSE '' END +
        CASE WHEN [Elapsed Minutes] > 0 AND [Elapsed Hours] = 0 THEN CAST([Elapsed Minutes] AS VARCHAR(20)) + ' Minutes' ELSE '' END +
        CASE WHEN [Elapsed Minutes] > 0 AND [Elapsed Hours] > 0 THEN ', ' + CAST([Elapsed Minutes] - ([Elapsed Hours] * 60) AS VARCHAR(20)) + ' Minutes' ELSE '' END +
        CASE WHEN [Elapsed Seconds] > 0 AND [Elapsed Minutes] = 0 AND [Elapsed Hours] = 0 THEN CAST([Elapsed Seconds] AS VARCHAR(20)) + ' Seconds' ELSE '' END +
        CASE WHEN [Elapsed Seconds] > 0 AND [Elapsed Minutes] > 0 THEN ', ' + CAST([Elapsed Seconds] - ([Elapsed Minutes] * 60) AS VARCHAR(20)) + ' Seconds' ELSE '' END
INTO #job_durations
FROM cte_jobs cte
WHERE 1 = 1
    AND [Status] = 'Running'
    AND [Elapsed Milliseconds] > @ThresholdMilliseconds;

SELECT [Session ID], [Job Name], [Command], [Job Start Time], [Duration], [Elapsed Milliseconds]
FROM #job_durations
ORDER BY [Elapsed Milliseconds] DESC;

IF @@ROWCOUNT = 0 GOTO QUIT;

-- Part 2: Email alerts for long-running jobs
DECLARE @html_body NVARCHAR(MAX), @html_table_head VARCHAR(1000), @html_table_tail VARCHAR(1000);

SET @html_table_head = 
    '<html><head>' +
    '<style>td {border: solid black; border-width: 1px; padding: 5px; font: 15px arial;}</style>' +
    '</head><body>' +
    '<pre style="font-weight: bold">SQL Server Instance: ' + @@SERVERNAME + '</pre>' +
--    '<br>Alert Threshold: ' + CAST(@ThresholdMilliseconds AS VARCHAR(100)) +
    '<br><br>Report generated on: ' + CAST(GETDATE() AS VARCHAR(100)) +
    '<br><table cellpadding="0" cellspacing="0" border="0">' +
    '<tr><td bgcolor="#E6E6FA"><b>Job Name</b></td>' +
    '<td bgcolor="#E6E6FA"><b>Duration</b></td>' +
    '<td bgcolor="#E6E6FA"><b>Command</b></td></tr>';

SET @html_table_tail = '</table></body></html>';

SET @html_body = (
    SELECT
        td = LEFT([Job Name], 50), '',
        td = [Duration], '',
        td = [Command], ''
    FROM #job_durations
    ORDER BY [Elapsed Milliseconds] DESC
    FOR XML RAW('tr'), ELEMENTS
);

SELECT @html_body = @html_table_head + ISNULL(@html_body, '') + @html_table_tail;

EXEC msdb.dbo.sp_send_dbmail
    -- @profile_name = 'DBMail_DBA',  -- Uncomment and edit if using a named mail profile
    @recipients = @RecipientEmail,
    @body = @html_body,
    @subject = 'Long Running SQL Server Jobs Found',
    @body_format = 'HTML';

QUIT:


Example Output

For a demo, I started two sample jobs that do nothing but run the WAITFOR DELAY command. Here are the results displayed on the screen:

If a job exceeds your chosen threshold, you’ll receive an HTML-formatted email alert summarizing job details:

These next two screenshots are from a real/production job:

You can adapt thresholds and recipients to suit your production environment or integrate further enhancements as needed.


Tips

  • Customize thresholds at the top of the script for reuse and clarity.

  • Add exclusions or filters for specific job names as needed (see the example after this list).

  • Provide context for others: update email subjects, include server or environment tags.

  • Enhance with automation: Schedule this script as a recurring SQL Agent job.

  • Extend for more detail: Add columns for the affected database, more job properties, or job step text.
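
For the exclusions tip above, one simple way (the job names are made up for illustration) is to extend the final WHERE clause in Part 1:

-- Example only: skip jobs that are expected to run long (hypothetical names)
WHERE 1 = 1
    AND [Status] = 'Running'
    AND [Elapsed Milliseconds] > @ThresholdMilliseconds
    AND [Job Name] NOT IN (N'Nightly Index Maintenance', N'Weekly Full Backup');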


Conclusion

My hope is that this script gives DBAs a quick, flexible way to keep tabs on long-running SQL Server Agent jobs - no third-party tools required. With just a few tweaks, it can scale nicely for large enterprise setups or work just as well in smaller environments. Feel free to adapt it to your needs, and if you come up with improvements or new ideas, share them in the comments!


Exercise: Extending to Long-Running Query Monitoring

If you want to take things a step further, you can easily tweak this script’s logic to keep an eye on long-running queries instead of just jobs. By querying dynamic management views like sys.dm_exec_requests and sys.dm_exec_sessions, you can spot active queries that go over a set duration. Grab details like the start_time and total_elapsed_time, join in the actual query text with sys.dm_exec_sql_text, and you’ll be able to create a similar alert system for queries that might be dragging down performance. It’s a great way to dive deeper into SQL Server DMVs and start building your own custom monitoring solutions that go beyond just tracking jobs.
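
As a starting point for that exercise, here is a minimal sketch along those lines; the 60-second threshold and the columns selected are just examples, so adjust them to your needs.

-- Sketch: active user requests running longer than a threshold
DECLARE @ThresholdSeconds INT = 60;   -- example threshold

SELECT
    r.session_id,
    s.login_name,
    s.host_name,
    s.program_name,
    DB_NAME(r.database_id)      AS database_name,
    r.status,
    r.command,
    r.start_time,
    r.total_elapsed_time / 1000 AS elapsed_seconds,
    r.wait_type,
    t.text                      AS query_text
FROM sys.dm_exec_requests r
INNER JOIN sys.dm_exec_sessions s ON s.session_id = r.session_id
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE s.is_user_process = 1
  AND r.session_id <> @@SPID                          -- ignore this monitoring query itself
  AND r.total_elapsed_time > @ThresholdSeconds * 1000
ORDER BY r.total_elapsed_time DESC;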



Wednesday, July 19, 2023

Tracking the Inaccurate Cardinality Estimates


When it comes to query performance, lately, Cardinality Estimates seem to have been a hot topic with each new release of SQL Server. They are making the query optimizer smarter, helping it generate more efficient query execution plans, and making queries run faster.

The SQL Server query optimizer, being 'cost-based', fundamentally relies on cardinality estimation, using per-attribute summary statistics (histograms). This principle applies to any Relational Database Management System (RDBMS), such as Oracle, that employs a cost-based optimizer.

The following description and warning included with the SSMS 'create Extended Events' wizard should be self-explanatory regarding why cardinality estimates matter.

Occurs when an operator outputs significantly more rows than estimated by the Query Optimizer. Use this event to identify queries that may be using sub-optimal plans due to cardinality estimate inaccuracy. Using this event can have a significant performance overhead so it should only be used when troubleshooting or monitoring specific problems for brief periods of time.


Wait a minute, according to that description, it's bad for performance when the optimizer underestimates the row count. Does that mean I don't need to worry about it if it overestimates the row count? Actually, overestimation is also bad for performance, perhaps not as severely, but it's still problematic enough that you should not ignore it. When the optimizer overestimates the row count:

  • The optimizer will allocate too much memory to the query, resulting in wasted memory. If this happens frequently, it can starve the SQL Server of memory, leading to spills to disk, hard paging, and so on.
  • The query will be executed with a parallel plan when a serial plan would not only have been faster but also would have avoided the possibility of the infamous CXPACKET waits.
  • The optimizer is more likely to choose scanning the entire index over seeking.


At a very basic level, a cardinality estimate is an estimate of the row count for each operation in the query, particularly for columns used in filter conditions (the WHERE clause) and in join predicates.

How are these estimates calculated?


While the internal algorithms evolve and keep changing, in general, the Optimizer uses the statistics created on the underlying objects/tables. Statistics are implicitly or automatically created when any index is created, and you can also create them separately, independent of any index. Additionally, SQL Server will automatically create statistics it needs if the AUTO_CREATE_STATISTICS setting is ON for the database. There are three key pieces of information in statistics:

  • The header includes, among other things, the last update date and the number of rows sampled. This information can be obtained from the catalog view sys.stats and the function sys.dm_db_stats_properties.
  • Index density measures the uniqueness of a column. Low density indicates higher uniqueness, and vice versa. This information can be obtained from DBCC SHOW_STATISTICS with the DENSITY_VECTOR option.
  • Histogram shows the number of distinct values in a column and the distribution of the data in it. For example, consider the distribution of data for the Code column with 5 distinct values:



If there is an index (and, consequently, statistics on column values) on the Code column, assuming the stats are up-to-date and accurate, the Optimizer is (rightfully so) more likely to use a Seek against that index for the following query, because it knows from the statistics that the index contains far fewer records for Code = 'E'.

SELECT * FROM Blood_Type WHERE Code = 'E';


If the filter condition instead contains Code = 'F', the Optimizer is more likely to not only ignore the index but also perform a full scan on the table/clustered index!
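
If you want to see what the optimizer "knows" in a case like this, here is a quick sketch that reads the histogram for the statistics on the Code column. Blood_Type is just the illustrative table from the example above, and sys.dm_db_stats_histogram requires SQL Server 2016 SP1 CU2 or later.

-- Sketch: histogram steps for all statistics on the (hypothetical) Blood_Type table
SELECT s.name           AS stats_name,
       h.step_number,
       h.range_high_key,                 -- upper bound of the step (a Code value)
       h.equal_rows,                     -- rows equal to range_high_key
       h.range_rows,                     -- rows between this step and the previous one
       h.distinct_range_rows
FROM sys.stats s
CROSS APPLY sys.dm_db_stats_histogram(s.object_id, s.stats_id) h
WHERE s.object_id = OBJECT_ID('dbo.Blood_Type')
ORDER BY s.name, h.step_number;

-- Density information is available via:
-- DBCC SHOW_STATISTICS ('dbo.Blood_Type', <statistics_name>) WITH DENSITY_VECTOR;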

While there have been significant improvements in how cardinality estimates are calculated, there will still be instances where those estimates differ, sometimes significantly, from reality, i.e., the actual row counts.

Before I get to the Extended Events, let me share a couple of queries that I use to look at the metadata for statistics and their histograms.

SELECT -- TOP 10
       SCHEMA_NAME(o.schema_id) AS [object_owner],
       o.name AS [object_name],
       s.name AS [stats_name],
       (sp.rows_sampled * 100) / NULLIF(sp.rows, 0) AS percent_sampled,  -- NULLIF avoids divide-by-zero on empty tables
       sp.*,
       s.auto_created,
       s.user_created,
       s.has_filter,
       s.is_temporary,
       s.is_incremental
FROM sys.objects o
INNER JOIN sys.stats s ON s.object_id = o.object_id
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) sp
WHERE 1 = 1
    AND o.is_ms_shipped = 0
    AND s.auto_created = 0
--  AND o.name = 'LookupCodes'
ORDER BY sp.rows DESC;

The query retrieves information about statistics for user-created objects, excluding system objects, and sorts the result by the number of rows in the statistics in descending order. It also calculates the percentage of rows sampled for each statistic.

Get index/statistics histograms:

SELECT TOP 100
       [schemas].name AS [schema_name],
       t.name AS table_name,
       i.name AS index_name,
       STATS_DATE(i.object_id, i.index_id) AS stats_update_date,
       INDEX_COL([schemas].name + '.' + t.name, i.index_id, 1) AS ind_column,
       h.*
FROM sys.tables t
INNER JOIN sys.schemas [schemas] ON t.schema_id = [schemas].schema_id
INNER JOIN sys.indexes i ON i.object_id = t.object_id
INNER JOIN sys.stats s ON i.object_id = s.object_id AND i.index_id = s.stats_id
INNER JOIN sys.stats_columns sc ON s.object_id = sc.object_id AND s.stats_id = sc.stats_id AND sc.stats_column_id = 1
-- INNER JOIN sys.columns c ON c.object_id = s.object_id AND c.column_id = sc.column_id
CROSS APPLY sys.dm_db_stats_histogram(i.object_id, i.index_id) h
WHERE 1 = 1
    AND t.is_ms_shipped = 0
--  AND i.is_unique = 0
--  AND i.index_id > 1
    AND t.name = 'tblxxx'
    AND i.name = 'idx_nc_tblxxx1';



The query retrieves detailed information about the specified non-clustered index, including its name, schema, last update date for statistics, the name of the first column in the index, and histogram data.

Now, the remainder of the article will focus on leveraging the Extended Events feature to detect these irregularities, often referred to as 'skewed cardinality estimates' in SQL Server documentation. 

I would also like to emphasize the importance of selective data capture to avoid gathering excessive information.





Creating the extended event

(If you prefer T-SQL scripts to the GUI wizard, you will find them towards the end of this article.)

To create the extended event, connect to the SQL Server instance, expand Management ---> Extended Events, and right-click on Sessions to launch the New Session Wizard:


Click Next on the Introduction page, if it appears on your screen.

On the next page, enter a name for the XE session; I am calling it xe_inaccurate_cardinality_estimate.


Click Next and on the next page, select Do Not Use a Template.


On the next page, make sure to first click on the Channel drop-down and check Debug. This is important; otherwise, the "inaccurate_cardinality_estimate" event will not be displayed.


In the Event Library search box, type "inaccurate_cardinality_estimate" and click on the event name to select it. Click the > button to add the event.


Click Next, which brings up the option to capture Global Fields. Here, I will check the boxes next to database_id, database_name, sql_text, and plan_handle. Click Next.



In the following screen, you can apply filters to refine the results as needed, such as by specifying a filter based on the database_id. I'll be adding a filter that selects events where the actual row count exceeds 1000. Without such a filter, SQL Server might capture events even when the estimated row count is 1 and the actual count is 2. Would you consider this a significant deviation from the estimate? It certainly is from the perspective of Extended Events (XE).

To enhance the effectiveness and practicality of this approach, you may also consider adding an additional filter to capture events where the estimated row count significantly differs from the actual row count. However, for now, I'll stick with filtering based on actual row counts only.

 
Click Next, which brings up the page to select the storage location for the captured events.

Check the box in front of "Save data to a file...", which should automatically be filled in with the name of the event session, xe_inaccurate_cardinality_estimate, as the storage file name. You can enter a complete path for the event file; if you don't, the event files are by default stored under \MSSQL\Log within the instance's installation folder.


Adjust the maximum file size and rollover settings to your preferences, then click Next.

Click Finish to create the event session.
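
For reference, the T-SQL below is roughly what the wizard generates for this session. Treat it as a sketch rather than the exact script: the file name, memory and latency settings, and especially the predicate field name (actual_rows) should be verified against the event's field list in your build before you rely on it.

-- Approximate T-SQL equivalent of the wizard steps above (verify before use)
CREATE EVENT SESSION [xe_inaccurate_cardinality_estimate] ON SERVER
ADD EVENT sqlserver.inaccurate_cardinality_estimate
(
    ACTION
    (
        sqlserver.database_id,
        sqlserver.database_name,
        sqlserver.plan_handle,
        sqlserver.sql_text
    )
    WHERE ([actual_rows] > 1000)   -- assumed field name; check the event's field list
)
ADD TARGET package0.event_file
(
    SET filename = N'xe_inaccurate_cardinality_estimate'   -- defaults to \MSSQL\Log
)
WITH (MAX_MEMORY = 4096 KB,
      EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY = 30 SECONDS,
      STARTUP_STATE = OFF);
GO

-- Start the session when you are ready to capture
ALTER EVENT SESSION [xe_inaccurate_cardinality_estimate] ON SERVER STATE = START;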

Wednesday, July 12, 2023

A high level view of the most costly queries by CPU


This article is about identifying high-CPU-usage queries in SQL Server at the batch and stored procedure level rather than at the individual statement level.

Understanding which SQL Server queries consume the most CPU resources is crucial for database performance tuning. However, focusing solely on individual statements might not always provide the complete picture. Instead, examining queries at a batch or stored procedure level offers a broader view, enabling more effective optimizations. This approach aggregates CPU usage across all statements within a batch or stored procedure, facilitating a targeted investigation of high-impact areas.

Overview

The goal here is to aggregate CPU usage by all statements within the same batch or stored procedure, identified by their shared 'plan_handle'. This high-level view helps in pinpointing which queries are most costly in terms of CPU resources. Later on, we can drill down to individual statements with the highest CPU consumption for detailed analysis.


Important Note: CPU time is measured in microseconds, where 1,000,000 microseconds equal one second. For demonstration purposes, this guide focuses on the top 5 queries based on CPU usage, limited to plans with more than one execution that are still in the plan cache. Adjustments may be necessary depending on the activity level and type in your specific SQL instance.


The following query aggregates CPU times for batches or stored procedures, ranking them to identify the top consumers.



IF OBJECT_ID('tempdb..#top_costly_queries') IS NOT NULL
DROP TABLE #top_costly_queries;
GO

;WITH cte AS 
(
    SELECT TOP 5 
            SUM(qs.total_worker_time) AS total_cpu_time,
            SUM(qs.execution_count) AS total_execution_count,
            SUM(qs.total_worker_time) / SUM(qs.execution_count) AS avg_cpu_time,
            SUM(qs.total_elapsed_time) / SUM(qs.execution_count) AS avg_elapsed_time,
            COUNT(*) AS number_of_statements,
            qs.plan_handle
    FROM   sys.dm_exec_query_stats AS qs
    GROUP  BY qs.plan_handle
    HAVING SUM(qs.execution_count) > 1
    ORDER  BY SUM(qs.total_worker_time) DESC  -- needed so TOP 5 picks the highest CPU consumers rather than an arbitrary 5 plans
)
SELECT 
    DB_NAME(qp.dbid) AS db_name,
    COALESCE(QUOTENAME(DB_NAME(qp.dbid)) + N'.' + QUOTENAME(OBJECT_SCHEMA_NAME(qp.objectid, qp.dbid)) + 
             N'.' + QUOTENAME(OBJECT_NAME(qp.objectid, qp.dbid)), '') AS obj_name,
    qp.objectid,
    qp.query_plan,
    cte.*
INTO   #top_costly_queries
FROM   cte
       CROSS APPLY sys.dm_exec_query_plan(cte.plan_handle) AS qp
ORDER  BY total_cpu_time DESC;






Analyzing the Results

After identifying the batches or stored procedures with the highest CPU usage, we focus on the specific statements within them:


SELECT t.*,
       SUBSTRING(st.text, ( qs.statement_start_offset / 2 ) + 1, (
            (CASE qs.statement_end_offset 
                WHEN -1 THEN DATALENGTH(st.text)
                ELSE qs.statement_end_offset 
            END - qs.statement_start_offset ) / 2 ) + 1) AS statement_text,
       qs.total_worker_time / qs.execution_count AS avg_cpu_time_stmt,
       qs.*,
       st.*
FROM   sys.dm_exec_query_stats AS qs
       INNER JOIN #top_costly_queries AS t ON t.plan_handle = qs.plan_handle
       CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER  BY qs.total_worker_time DESC;


Partial Results:

This detailed examination helps pinpoint specific statements within the identified batches or stored procedures that warrant further investigation due to their high CPU usage.

Analyzing the results, we might find outliers indicating inefficient queries. The next step involves reviewing the execution plan of these queries, accessible via the 'query_plan' link in the query results, to identify optimization opportunities.

Conclusion


The effectiveness of SQL Server performance tuning lies in a holistic approach. While this guide focuses on CPU usage, other factors such as I/O wait times and overall server performance also play crucial roles. Regularly reviewing and optimizing high-impact queries ensures efficient resource utilization and optimal database performance. Remember, the impact of running performance tuning queries in a production environment should be carefully considered, as they can be resource-intensive themselves. Always aim for a balanced approach, taking into account the full spectrum of performance metrics.


 



Tuesday, July 11, 2023

Generate SQL Script to Allow or Disallow Access To All Data


I had a requirement to temporarily prevent access, read or write, to a particular database for all users except two. There were hundreds of SQL logins with access to this database, but since those two users still needed full access, I could not simply take the database offline or put it into read-only mode; keep in mind the restriction was to be temporary. I also could not simply disable the logins (script at the end of this article), as some users have access to other databases that needed to be maintained.

One way to accomplish this is by adding users to the db_denydatareader and db_denydatawriter database-level roles. Users will then not be able to read from or write to the tables directly or through views, even if they own the table or have database owner permission. But they can still read/write if allowed through stored procedures, functions, etc. So, to disallow access to data through code, you can deny execute permissions at the schema level:

DENY EXECUTE ON SCHEMA::[dbo] TO [User1];

For more information on schema permissions:

https://learn.microsoft.com/en-us/sql/t-sql/statements/grant-schema-permissions-transact-sql?view=sql-server-ver16
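
For the role-based part of the approach (db_denydatareader and db_denydatawriter), the membership changes are simple; User1 below is just a placeholder.

-- Sketch: block direct reads/writes for one user (User1 is a placeholder)
ALTER ROLE db_denydatareader ADD MEMBER [User1];
ALTER ROLE db_denydatawriter ADD MEMBER [User1];

-- To undo later:
-- ALTER ROLE db_denydatareader DROP MEMBER [User1];
-- ALTER ROLE db_denydatawriter DROP MEMBER [User1];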


That is the approach I ended up taking: denying permissions at the schema level on all objects owned by that schema. I wrote the following script to generate the SQL statements for each user, excluding the two users:

/* 
-- List of valid permissions at the schema level
ALTER
CONTROL
CREATE SEQUENCE
DELETE
EXECUTE
INSERT
REFERENCES
SELECT
TAKE OWNERSHIP
UPDATE
VIEW CHANGE TRACKING
VIEW DEFINITION

Please see: https://learn.microsoft.com/en-us/sql/t-sql/statements/grant-schema-permissions-transact-sql?view=sql-server-ver16

*/
SET NOCOUNT ON
GO
USE <DBName>
DECLARE @schema_owner varchar(100)
DECLARE @schema_permission varchar(100) = 'EXECUTE'
SET @schema_owner = 'dbo'     

declare @valid_permissions table(valid_permission_name varchar(100))
insert into @valid_permissions
values
('ALTER'),
('CONTROL'),
('CREATE SEQUENCE'),
('DELETE'),
('EXECUTE'),
('INSERT'),
('REFERENCES'),
('SELECT'),
('TAKE OWNERSHIP'),
('UPDATE'),
('VIEW CHANGE TRACKING'),
('VIEW DEFINITION')

IF SCHEMA_ID(@schema_owner) is null
BEGIN
		RAISERROR('Error: Schema %s does not exist.', 16, 1, @schema_owner)
		GOTO QUIT
END


if not exists(select * from @valid_permissions where valid_permission_name = @schema_permission)
BEGIN
		RAISERROR('Error: Permission (%s) is not a valid schema permission.', 16, 1, @schema_permission)
		SELECT valid_permission_name FROM @valid_permissions
		GOTO QUIT
END


SELECT 
		name [user_name],
		@schema_owner [schema_name],
		'USE ' + QUOTENAME(db_name()) + ';' + 
		'GRANT ' + @schema_permission + ' ON SCHEMA::' + QUOTENAME(@schema_owner) + ' TO ' + QUOTENAME(name) + ';' [Grant_Schema_Access],

		'USE ' + QUOTENAME(db_name()) + ';' + 
		'DENY ' + @schema_permission + ' ON SCHEMA::' + QUOTENAME(@schema_owner) + ' TO ' + QUOTENAME(name) + ';' [Deny_Schema_Access]

FROM sys.database_principals
WHERE      name not in ('public','dbo','guest','INFORMATION_SCHEMA','sys')
       AND name not in ('AppUser1','AppUser2')
       AND is_fixed_role = 0
ORDER BY name

QUIT:

The results of this script include the GRANT and the DENY statements in two separate columns, both of which I will be needing:

There is also a REVOKE option to remove a granted schema permission.

A while back, SQL Server allowed the ALL keyword to grant or revoke all permissions in one go. That is no longer allowed, so you will need to rerun the script for each permission you want to grant or deny; in my particular case that meant SELECT, INSERT, UPDATE, and EXECUTE. To avoid that, I could turn this into a stored procedure, but I didn't want to add the complexity or the dynamic SQL I would probably have to resort to for that.

If you just need to prevent users from being able to log in altogether, you can use the following to generate the SQL script for it:

USE AdminDBA
SELECT sp.name [Login_Name],
       dp.name [DB_User],
       'ALTER LOGIN ' + QUOTENAME(sp.name) +' DISABLE;' [Disable_Logins],
       'ALTER LOGIN ' + QUOTENAME(sp.name) +' ENABLE;' [Enable_Logins]
FROM sys.database_principals dp
INNER JOIN sys.server_principals sp ON dp.sid = sp.sid
WHERE dp.name not in ('public','dbo','guest','INFORMATION_SCHEMA','sys')
  AND dp.name not in ('AppUser1', 'AppUser2')
ORDER BY dp.name;