Tuesday, November 18, 2025

Writing Better Dynamic SQL

This updated, “sort of” style guide is for SQL developers and, to some extent, DBAs: essentially anyone daring (and sometimes foolish) enough to wrestle with dynamic SQL without losing their mind. Whether you’re writing the code or cleaning up after it, these tips might save you some sleepless nights and a few colorful words. No promises, but at least you’ll have done your best to make someone’s life, and maybe your own, a little less miserable in the future.



Writing code that actually works, runs fast, and doesn’t explode with bugs is great; we all want that. But let’s be honest: writing code that you (or anyone else) can still understand six months later is just as important.

Now, let’s talk about dynamic SQL. Or rather, let’s not. Bring it up in some circles, and you might get the same reaction as announcing you still use tabs instead of spaces (developer humor). And many people (mainly DBAs) will tell you to avoid it like the plague, and they’re not entirely wrong, for two very good reasons:  

  1. Security - hello, SQL injection!  
  2. Performance - or lack thereof.

Think about the last time you had to fix someone else’s “clever” code. Even well-documented scripts can be confusing enough. Add dynamic SQL to the mix, and it starts feeling like a book that keeps shuffling its chapters every time you read it. Back in my younger, supposedly dazzling days, someone once described my dynamic SQL code as “very eloquent.” I took it as a compliment at the time, though, looking back, I’m not entirely sure it was.


So, here are a few tips, or best practices, for writing dynamic SQL, because if we’re going to do something risky, we might as well do it semi-responsibly.

1. Document for Maintainability  


Add some comments around your dynamic SQL masterpiece. Explain why you built it this way, what the parameters are doing, and what sort of cosmic alignment was required for it to work. Dynamic SQL debugging is already a joy, no need to make it a full-blown treasure hunt.


/* EXAMPLE ***************
  Stored procedure to safely query Object ID by Name using dynamic SQL.

  Design considerations:
  - Uses sp_executesql for parameterization to prevent SQL injection.
  - Accepts user input as a parameter rather than string concatenation.
  - Uses QUOTENAME if needing to inject object names to avoid SQL injection.
  - Prints the constructed SQL string for debugging purposes.
*/

IF OBJECT_ID('tempdb..#GetObjectID') IS NOT NULL DROP PROCEDURE #GetObjectID 
GO
CREATE PROCEDURE #GetObjectID
  @ObjectName SYSNAME -- SYSNAME = NVARCHAR(128), the full width of an object name
AS
BEGIN
  -- Declare variable to hold dynamic SQL
  DECLARE @SqlCommand NVARCHAR(MAX);

  -- Build parameterized SQL query to select Object ID, filtering by Object Name
  SET @SqlCommand = N'
    SELECT object_id
    FROM sys.objects 
    WHERE name = @NameParam;
  ';
  -- Debug print of the SQL command string
  PRINT 'Executing SQL: ' + @SqlCommand;

  -- Execute dynamic SQL with parameter safely passed in
  EXEC sp_executesql @SqlCommand,
    N'@NameParam SYSNAME',
    @NameParam = @ObjectName;
END;

GO

-- Test
EXEC #GetObjectID 'sysdbfiles'





2. Prefer Parameterized Execution  

Great advice, but how?

When executing dynamic SQL, always use sp_executesql with parameters instead of EXEC. It lowers your SQL injection risk, makes SQL Server actually reuse execution plans, and lets you have nice things like output variables.  

This example fetches a record from the sys.objects table using proper parameters, which is just a fancy way of saying “without inviting disaster.”

DECLARE @SqlCommand NVARCHAR(MAX) = N'SELECT * FROM sys.objects WHERE name = @ObjectName;';
DECLARE @ObjectName NVARCHAR(50) = 'sysdbfiles';
EXEC sp_executesql @SqlCommand, N'@ObjectName NVARCHAR(50)', @ObjectName;
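
Since output variables came up above, here’s a minimal sketch of sp_executesql returning a value through an OUTPUT parameter (the variable names are just for illustration):

DECLARE @Sql NVARCHAR(MAX) = N'SELECT @Id = object_id FROM sys.objects WHERE name = @ObjectName;';
DECLARE @ObjectId INT;

EXEC sp_executesql @Sql,
    N'@ObjectName NVARCHAR(50), @Id INT OUTPUT',
    @ObjectName = N'sysfiles', @Id = @ObjectId OUTPUT;

-- @ObjectId now holds the object_id (or NULL if nothing matched)
PRINT CONCAT('Returned object_id: ', @ObjectId);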

This next example qualifies as a public service announcement on what *not* to do:

-- BAD - DO NOT USE **************
DECLARE @ObjectName NVARCHAR(50) = 'sysdbfiles';
DECLARE @SqlCommand NVARCHAR(MAX) = N'SELECT * FROM sys.objects WHERE Name = ''' + @ObjectName + N'''';
EXEC(@SqlCommand);



3. Avoid Unnecessary Dynamic SQL  

  • Treat dynamic SQL like hot sauce: a little goes a long way, and too much will set everything on fire. Only use it when table, column, or object names actually require it, not just because EXEC is conveniently powerful.   
  • For filters and conditions, stick with parameterized T‑SQL or stored procedures. They may be boring, but “boring and secure” beats “exciting and hacked” any day.
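
To make the “avoid it when you can” point concrete, here’s a sketch of the boring alternative: when only the filter value changes, plain parameterized T-SQL does the job with no dynamic SQL at all.

DECLARE @ObjectName SYSNAME = N'sysfiles';

-- Static, parameterized query: one cached plan, and no string to inject into
SELECT object_id, name, type_desc
FROM sys.objects
WHERE name = @ObjectName;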


4. Manage String Handling Carefully  


  • When working with command strings, use NVARCHAR(MAX). Otherwise, enjoy the mystery of why half your query disappears into thin air.  
  • Watch out for NULLs in concatenation; ISNULL, COALESCE, or CONCAT are your friends here. Pretend you care now, or you’ll definitely care later when your query returns nothing and you have no idea why.  
  • And yes, use the built-in string functions (LTRIM, RTRIM, TRIM, CHARINDEX, STUFF, REPLACE, TRANSLATE, SUBSTRING, REPLICATE, REVERSE). They exist for a reason — mostly to save you from yourself.  


The following example shows how to handle strings properly in dynamic SQL, avoiding truncation, NULL chaos, and other developer regrets.


/*
  Example: Building a dynamic query with safe string handling.
  - Uses NVARCHAR(MAX) to avoid truncation.
  - Handles NULL variables safely with ISNULL/COALESCE.
  - Demonstrates concatenation with CONCAT and "+" operator.
  - Uses sp_executesql to parameterize input safely.
*/

DECLARE @schema NVARCHAR(50) = NULL;
DECLARE @table NVARCHAR(50) = 'sysfiles';
DECLARE @column NVARCHAR(50) = 'name';
DECLARE @value NVARCHAR(50) = 'master';

-- Demonstrate concatenation with NULLs handled explicitly
DECLARE @sql1 NVARCHAR(MAX);
SET @sql1 = 'SELECT * FROM ' 
    + ISNULL(@schema + '.', '')  -- Prevent NULL schema from breaking string
    + @table
    + ' WHERE ' + @column + ' = @filterValue';

-- Alternative using CONCAT which treats NULL as empty string
DECLARE @sql2 NVARCHAR(MAX);
SET @sql2 = CONCAT(
    'SELECT * FROM ',
    COALESCE(@schema + '.', ''),  -- COALESCE also protects NULL
    @table,
    ' WHERE ', @column, ' = @filterValue'
);

-- Use sp_executesql with parameter to avoid injection and ensure plan reuse
EXEC sp_executesql @sql1,
   N'@filterValue NVARCHAR(50)',
   @filterValue = @value;

-- Output both SQL strings for debugging
PRINT 'SQL using ISNULL concat: ' + @sql1;
PRINT 'SQL using CONCAT: ' + @sql2;


Key Takeaways:

  • Use NVARCHAR(MAX) for large dynamic strings. Otherwise, enjoy the thrill of wondering why half your SQL command just vanished mid‑execution.  
  • Use ISNULL or COALESCE to keep one pesky NULL from turning your entire concatenated string into nothingness, because apparently, NULL doesn’t believe in teamwork.  
  • Use CONCAT to make concatenation cleaner and to automatically treat NULLs like the empty shells they are. Fewer headaches, more functioning code.  
  • Parameterize values with sp_executesql. It keeps your code secure, faster, and less likely to turn into a free‑for‑all SQL injection party.  
  • Add some debug prints of your constructed SQL. It’s the developer equivalent of talking to yourself, slightly weird, but surprisingly effective when things stop making sense.


5. Debug Effectively  

  • Print your dynamic command strings while developing, it’s the SQL equivalent of talking to yourself, but at least this version occasionally answers back.  

  • If your dynamic statements are going to get longer than a toddler’s bedtime story, bump up the text output limit in SQL Server Management Studio (Tools > Options > Query Results > Results to Text). Otherwise, you’ll get half a query and twice the confusion.
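
One wrinkle worth knowing: PRINT itself truncates NVARCHAR output at 4,000 characters, so for very long commands a SELECT is the more reliable debug channel. A small sketch:

DECLARE @SqlCommand NVARCHAR(MAX) = N'SELECT name FROM sys.objects WHERE type = ''U'';';

PRINT @SqlCommand;               -- fine for short strings; truncates past 4,000 characters
SELECT @SqlCommand AS DebugSql;  -- the results grid shows the full string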


6. Object Naming and Injection Safety  


When injecting object names inside dynamic SQL, wrap them with QUOTENAME, because “trusting input” is not a security strategy.

  • Always wrap dynamic object names with QUOTENAME to keep both syntax errors and unwelcome surprises out of your SQL.  
  • Avoid injecting raw table or column names directly, it’s faster to validate them against an allow‑list than to explain later why production went down during peak usage.  

The example below shows how to manage object naming and user input in dynamic SQL the right way, efficient, secure, and refreshingly uneventful.

/*
  Example: Dynamic SQL with safe object naming and injection protection.
  - Uses QUOTENAME to safely delimit schema, table, and column names.
  - Prevents SQL injection via object names with QUOTENAME.
  - Uses sp_executesql with parameters for user inputs.
*/

DECLARE @SchemaName SYSNAME = 'dbo';
DECLARE @TableName SYSNAME = 'sysfiles';
DECLARE @ColumnName SYSNAME = 'name';
DECLARE @FilterValue NVARCHAR(50) = 'master';

DECLARE @Sql NVARCHAR(MAX);

-- Build dynamic SQL with safely quoted object names
SET @Sql = N'SELECT * FROM ' 
    + QUOTENAME(@SchemaName) + N'.' + QUOTENAME(@TableName) 
    + N' WHERE ' + QUOTENAME(@ColumnName) + N' = @FilterParam';

-- Execute with parameter to avoid injection on values
EXEC sp_executesql @Sql, N'@FilterParam NVARCHAR(50)', @FilterParam = @FilterValue;

-- Optional: Print SQL string for debugging
PRINT @Sql;


Remember:  
  • QUOTENAME() wraps object names in delimiters ([]) to prevent injection and syntax errors caused by spaces, special characters, or reserved keywords.

  • Never directly concatenate user input into object names without validation and quoting.

  • Continue using parameterized queries (sp_executesql) to safeguard user-supplied data.

  • Always explicitly specify schema names for clarity and security.


7. Handle Single Quotes Properly  

  • If there’s one thing dynamic SQL loves, it’s breaking because of a single missing quote. Even now, this happens almost every time I write dynamic SQL.  
  • Make sure to double up single quotes inside your SQL literals, yes, two of them. No, not sometimes. Always.  
  • Print your command strings often while debugging; it’s the only way to spot those sneaky quoting errors before they ruin your day (again).  

The example below shows how to do it right, written by someone who learned this lesson the hard way, repeatedly.

/*
  Example: Handling single quotes in dynamic SQL by doubling them.
  - Uses REPLACE to escape single quotes by replacing each single quote with two.
  - Prevents syntax errors caused by unescaped single quotes.
  - Uses sp_executesql with parameters for safer execution when possible.
*/

-- Recreate the O'mighty O'sql table
IF OBJECT_ID('O''mighty O''sql') IS NOT NULL DROP TABLE [O'mighty O'sql];
GO
CREATE TABLE [O'mighty O'sql] (id int);
GO

DECLARE @UserInput NVARCHAR(50) = 'O''mighty O''sql'; -- Input with single quotes

-- Unsafe dynamic SQL by direct concatenation (not recommended):
DECLARE @SqlUnsafe NVARCHAR(MAX);
SET @SqlUnsafe = 'SELECT * FROM sys.objects WHERE name = '''
    + REPLACE(@UserInput, '''', '''''') -- Double single quotes to escape
    + '''';
PRINT 'Unsafe SQL: ' + @SqlUnsafe;
EXEC(@SqlUnsafe);

-- Better: Use sp_executesql with parameters to avoid manual escaping:
DECLARE @SqlSafe NVARCHAR(MAX) = 'SELECT * FROM sys.objects WHERE name = @ObjectName';
PRINT 'Safe SQL: ' + @SqlSafe;
EXEC sp_executesql @SqlSafe, N'@ObjectName NVARCHAR(50)', @ObjectName = @UserInput;

GO

-- Drop table O'mighty O'sql 
IF OBJECT_ID('O''mighty O''sql') IS NOT NULL DROP TABLE [O'mighty O'sql] ;



Remember:  

  • When you’re dynamically concatenating strings that contain single quotes, use REPLACE(value, '''', '''''') to double them up. Yes, it looks ridiculous, and yes, it’s necessary, because SQL doesn’t share your sense of humor.  
  • Better yet, use sp_executesql with parameters and skip the manual quote juggling altogether. It’s cleaner, safer, and saves you from explaining to your team why your code exploded over one punctuation mark.  
  • Print out your SQL command as you go; it’s like holding up a mirror to your mistakes before they hit production.  

This approach keeps single quotes from wrecking your dynamic SQL and, more importantly, keeps you from accidentally inventing new injection vectors in the name of “quick testing.”



Summary & Conclusion


Dynamic SQL is a powerful tool that can be incredibly helpful, until it gives you headaches you didn’t ask for. Treat it with respect: comment generously, use parameters, quote your object names properly, and debug like your sanity depends on it (because it probably does). Most disasters you hear about with dynamic SQL probably happened because someone, likely yourself, ignored these rules. So code defensively, document liberally, and maybe keep a stress ball handy. 


Resources



Microsoft Article on sp_executesql

EXEC and sp_executesql – how are they different

Gotchas to Avoid for Better Dynamic SQL

Why sp_prepare Isn’t as “Good” as sp_executesql for Performance


Monday, September 22, 2025

Set Theory vs. Batch Mode in SQL Server

Not long ago, a colleague of mine was completely shocked when he first heard about SQL Server’s Batch Mode feature. His immediate reaction was: “Wait, but SQL is set-based. Doesn’t that mean it already processes everything as a set instead of row by row?”

That’s actually a really common assumption, and it makes sense at first glance. After all, people often describe relational databases as powerful because they “process sets of data at once.” But the reality is a little more nuanced. Let’s clear this up.

Set-Based SQL Is About How You Express Queries

When we say SQL is a set-based language, we’re talking about how you, the database professional, express what you want. SQL is declarative. You write what result set you need, and you leave it up to SQL Server to figure out how to produce it.

For instance:

  • “Give me the top-selling products this season.”

  • “Show me all customers in the West region.”

You usually don't specify whether SQL Server should use a particular index, scan all rows, or parallelize execution. That’s the optimizer’s job.

But, and here’s where the misconception comes in, asking for a set result doesn’t mean SQL Server executes it in one giant, magical sweep. Under the covers, the engine usually streams rows through operators in a plan, often one at a time, kind of like an assembly line.
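
The contrast is easy to see in code. Both snippets below get at the same information, but the first describes a set while the second dictates row-by-row mechanics (and is almost always slower):

-- Set-based: declare the result you want, the optimizer picks the plan
SELECT type_desc, COUNT(*) AS ObjectCount
FROM sys.objects
GROUP BY type_desc;

-- Row-by-row: a cursor spells out *how*, one fetch at a time
DECLARE @name SYSNAME;
DECLARE obj_cursor CURSOR FOR SELECT name FROM sys.objects;
OPEN obj_cursor;
FETCH NEXT FROM obj_cursor INTO @name;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @name;
    FETCH NEXT FROM obj_cursor INTO @name;
END;
CLOSE obj_cursor;
DEALLOCATE obj_cursor;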

Enter Batch Mode

That’s where Batch Mode changes the game. Instead of passing rows through operators one at a time, Batch Mode allows the engine to group rows together, typically around 900 at a time, and process them as a single unit. Think of it as SQL Server “vectorizing” execution.

Here’s a mental image:

  • Row Mode: You buy 10 apples at the grocery store, and the cashier scans each apple one by one.

  • Batch Mode: The 10 apples come in a bag, and the cashier just scans the bag once.

The result? Less overhead and faster performance, especially for analytic and large-scale queries.

Set Theory vs. Batch Mode

This is why it’s important not to conflate relational set theory with Batch Mode execution. They live in completely different layers:


                        Batch Mode                                   Set Theory / Relational Model
What is it?             Query execution engine optimization          Query writing / formulation approach
How does it work?       Rows handled in chunks (~900) internally     Sets/relations (no row-by-row logic)
Why use it?             Improve efficiency of analytic queries       Promote declarative, efficient operations
Where does it operate?  Physical (implementation/execution)          Logical (query semantics/design)
Example                 Using a vectorized batch operator            SELECT SUM(column) FROM table;


So, SQL being set-based tells us how we describe queries. Batch Mode tells us how the engine might execute them internally. Two different things, both powerful—but they shouldn’t be confused.

Batch Mode Through the Versions

Batch Mode wasn’t always around. It showed up with SQL Server 2012 alongside Columnstore indexes, which were built for data warehousing workloads. For years, Batch Mode was tied entirely to columnstore. Then SQL Server 2019 broadened access by making Batch Mode "available" on regular rowstore tables as well (as long as the database compatibility level is set to 150 or higher).
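
If you want to check where a database stands, compatibility_level is exposed in sys.databases (the database name in the commented ALTER is a placeholder):

-- Check the compatibility level of the current database
SELECT name, compatibility_level
FROM sys.databases
WHERE name = DB_NAME();

-- Raise it to 150 (SQL Server 2019) to make Batch Mode on rowstore available
-- ALTER DATABASE [YourDatabaseName] SET COMPATIBILITY_LEVEL = 150;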

Here’s a quick version history:

SQL Server Version    Batch Mode on Columnstore    Batch Mode on Rowstore
Pre-2012              No                           No
2012                  Yes                          No
2014                  Yes                          No
2016                  Yes                          No
2017                  Yes                          No
2019 and later        Yes                          Yes


Of course, even when Batch Mode is available, it doesn’t mean every query will use it. The optimizer decides case by case which execution path is best. Sometimes, Row Mode is still the winner.

Wrapping Up

SQL’s set-based nature is about your interface with the language. Batch Mode, on the other hand, is an internal engine optimization that decides how the work actually gets done.



Demo


/* ** Demo: Batch Mode Vs Row Mode **

Let’s walk through a quick demo to see Batch Mode in practice. 
This works in SQL Server 2019 or later, since Batch Mode is available for 
rowstore tables (provided the database compatibility level is 150 or higher).

The script performs the following actions:

1. Creates a BatchModeDemoDB database 
2. Creates table BatchModeDemoTbl
3. Loads data into BatchModeDemoTbl table
4. Runs a query that is likely to use the Batch mode
5. Runs a second query against a smaller number of records to demo the row mode
6. Cleans up the BatchModeDemoDB database

*/


USE master;
GO
IF EXISTS(SELECT * FROM sys.databases WHERE name = 'master' AND compatibility_level < 150) -- master's compatibility level tracks the instance version
BEGIN
	RAISERROR('Batch Mode feature is not available on your current version of SQL Server.', 16, 1)
END;
ELSE
BEGIN
	IF DB_ID('BatchModeDemoDB') IS NOT NULL DROP DATABASE BatchModeDemoDB;
	CREATE DATABASE BatchModeDemoDB;
	ALTER DATABASE [BatchModeDemoDB] SET RECOVERY SIMPLE WITH NO_WAIT;
	ALTER DATABASE [BatchModeDemoDB] MODIFY FILE ( NAME = N'BatchModeDemoDB', SIZE = 100MB)
	ALTER DATABASE [BatchModeDemoDB] MODIFY FILE ( NAME = N'BatchModeDemoDB_log', SIZE = 300MB)
END;
GO

USE BatchModeDemoDB;
ALTER AUTHORIZATION ON DATABASE::[BatchModeDemoDB] TO [sa];

-- Create a test table
DROP TABLE IF EXISTS dbo.BatchModeDemoTbl;
CREATE TABLE dbo.BatchModeDemoTbl
(
    Id       INT IDENTITY(1,1) PRIMARY KEY,
    Category INT NOT NULL,
    Amount   DECIMAL(10,2) NOT NULL
);

-- Insert a large amount of sample data, e.g.  1 million rows
INSERT INTO dbo.BatchModeDemoTbl (Category, Amount)
SELECT TOP (1000000)
    (ABS(CHECKSUM(NEWID())) % 10) AS Category,
    CAST(ABS(CHECKSUM(NEWID())) % 1000 AS DECIMAL(10,2))
FROM sys.objects a
CROSS JOIN sys.objects b
CROSS JOIN sys.objects c;

-- Let's run two sample queries
SET STATISTICS XML ON;

-- ** This query should use the Batch mode **
SELECT Category, SUM(Amount) AS TotalAmount
FROM dbo.BatchModeDemoTbl
GROUP BY Category;


/* ** This next query should use the row mode **
SQL Server is more likely to choose Row Mode 
for small result sets, since the overhead of 
batching isn’t worth it.
*/

SELECT Category, SUM(Amount) AS TotalAmount
FROM dbo.BatchModeDemoTbl
WHERE Id < 1000
GROUP BY Category;


SET STATISTICS XML OFF;
USE master;

-- Clean up
IF DB_ID('BatchModeDemoDB') IS NOT NULL DROP DATABASE BatchModeDemoDB;


After you run the above script, inspect the actual execution plans for the two queries:
  • In the execution plan (Ctrl+M in SSMS or via the XML output), look for operators like Hash Match, Clustered Index Scan or Aggregate.

  • Check the Actual Execution Mode property. If Batch Mode was used, you’ll see:

        Actual Execution Mode = Batch
        Estimated Execution Mode = Batch

If the optimizer decided Row Mode was more efficient, you’ll see “Row.”

Next, try some demo queries on your own, compare Row vs. Batch, and look for execution plan changes.

Tip: For wider queries (more columns, larger aggregates), SQL Server is more likely to choose Batch Mode, since batching pays off with vectorized operators.



Resources:


Wikipedia - Set Theory

Wikipedia - Relational Model

SQL Server 2019 - New Feature Batch-mode On Rowstore Data