SQL Server is messing with my Identity

When working with SQL Server, you are going to be using TempDB; that is a certainty.

Using temporary tables, however, isn’t a certainty.
I’m taking it for granted that at one time or another you have used them but it’s not something I can say for certain.

If you aren’t using temporary tables AT ALL, please let me know what you’re using SQL Server for. I’ll either be impressed or shocked! …more than likely shocked…

However, one of the behaviours I’ve found with temporary tables is that, when created with SELECT … INTO, they inherit the IDENTITY property from whatever table they were created from, and more often than not, this is not the identity that you want them to have.

I mean if we wanted them to have that identity, we would have just used the base table, right?

Now I have ways around this that I’ll blog about later on (one common workaround is sketched below).
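Wrapping the identity column in an expression, such as ISNULL or CAST, stops SELECT … INTO from copying the IDENTITY property across. A minimal sketch, using the dbo.A table we’re about to create, and not necessarily the method I’ll blog about:

-- ISNULL (or CAST) turns the column into a plain expression,
-- so the IDENTITY property is not inherited by the temp table
SELECT ISNULL(A_ID, 0) AS A_ID, x1
INTO #noIdentity
FROM dbo.A
WHERE x1 = 1;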

However, while working with these temporary tables and their identities, I found something that I just can’t explain yet. I thought I would blog about it so that when I do figure it out, I can look back at this and call myself all kinds of names for being so stupid.


So first of all, let’s set up our base table:

USE tempdb;
GO

-- Create our base table
CREATE TABLE dbo.A
(A_ID INT IDENTITY(1, 1),
x1 INT,
noise1 int DEFAULT 1,
noise2 char(1) DEFAULT 'S',
noise3 date DEFAULT GETUTCDATE(),
noise4 bit DEFAULT 0);

-- Create random data in the range [0-3]
INSERT INTO dbo.A(x1)
SELECT s1000.n FROM
( SELECT TOP (10) n = 1 FROM sys.columns) AS s10 -- 10
CROSS JOIN
( SELECT TOP (10) n = 1 FROM sys.columns) AS s100 -- 10 * 10
CROSS JOIN
( SELECT TOP (10) n = ABS(CHECKSUM(NEWID())) % 4 FROM sys.columns) AS s1000; -- 100 * 10

SELECT * FROM dbo.A;

Random-ish results, but they should look something like this:

[Image: NoiseTable]

Now, SQL Server has a useful little Database Engine stored procedure called sp_help that, along with a plethora of other useful information, returns a result set describing a table’s identity column.

If we query our table, we would get the following results:

EXECUTE sp_help 'dbo.A';

[Image: IdentityFromSpHelp]
What this is telling us is that:

  • Our Identity column is [A_ID] (identity),
  • Our Identity value starts at 1 (seed), and
  • It goes up by 1 (increment) each time.

Little tip: check out Tools -> Options -> Keyboard -> Query Shortcuts… you can just highlight a table, hit “Alt + F1”, and you have the same results as running the above query…Efficiency for the win!…

Which is great if we want to know what we started off with, but what about looking into the future? What about knowing what’s going to get inserted next?
Well, for the identity column, we can!

DBCC CHECKIDENT(), passing in your table name.

NOTE: I’ve just given you a potentially dangerous command, as it has some optional parameters that can break stuff. Do me a favour and make sure you’re not on Production.

…see the above? How it’s in bold? Do me a favour and double check you’re not on Production? Cheers!…
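For what it’s worth, here’s the look-but-don’t-touch form; the NORESEED option only reports the current identity value and never changes it:

-- NORESEED reports the current identity value without modifying anything
DBCC CHECKIDENT ('dbo.A', NORESEED);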

[Image: CheckIdentResultsA]

So we’ve looked into the future, and with sp_help and DBCC CHECKIDENT we can tell that our future identity will be:

DBCC CHECKIDENT’s current identity value + sp_help’s increment
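If you’d rather skip DBCC altogether, a quick sketch of the same sum using the built-in identity metadata functions:

-- IDENT_CURRENT gives the current identity value, IDENT_INCR the increment,
-- so together they predict the next value to be inserted
SELECT IDENT_CURRENT('dbo.A') + IDENT_INCR('dbo.A') AS [next_identity_value];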

Now say that part of our query is only interested in rows where x1 = 1.
The most basic way to create our temporary table with this information is probably the one that the majority would use:

SELECT * INTO #temp FROM dbo.A WHERE x1 = 1;

SELECT * FROM #temp;

…Nice, simple, easy. 3 things that we want….

[Image: SelectFrom#Temp]

And if we were to Alt + F1 #temp, we would get the following:

[Image: TempIdentity]

Same identity!…

So with the same identity, we have the same current identity value, right?

DBCC CHECKIDENT('#temp')

[Image: CheckIdentResultsTemp]

WRONG!…

Now I like this.
I appreciate this.
I’m thankful for this.

But I’m not sure why exactly this happens, apart from the fact that SQL Server is smart enough to know that it’s not going to need the exact same current identity value for this new table.
Using a (slightly deprecated) command…

SET STATISTICS PROFILE ON;
DROP TABLE #temp; -- drop the earlier #temp first, or the SELECT ... INTO will fail
SELECT * INTO #temp FROM dbo.A WHERE x1 = 1;

[Image: IdentityChangeSQLSERver]

…it seems like SQL Server is changing the identity for you…


Now, just to make this even more confusing, we run the following, which for me shows that the latest row’s x1 value is 2.

SELECT * FROM dbo.A ORDER BY A_ID DESC;

[Image: LatestAx1]

So what I’m expecting is that, if we were to put this into a temporary table, we wouldn’t see that setidentity… -7.

SET STATISTICS PROFILE ON;
SELECT * INTO #temp2 FROM dbo.A WHERE x1 = 2;

[Image: setidentShouldntBethere]

But it’s there!…

Fine, okay, so that just means that our current identity value has been changed!

So we run our DBCC CHECKIDENT and…

DBCC CHECKIDENT ('#temp2')

[Image: takeAwayIdentButRemainTheSame]

No change.


And that’s my problem!
I don’t have an answer for this at the moment but when I figure this out, I’ll let you know, and if anyone has the answer, feel free to share 🙂

But at least I have a partial answer:

Question: When does subtracting 7 from a value result in no change?
Answer: When SQL Server is involved.


Output over cursors

I am not pro-cursor. I am not, however, extremely anti-cursor.

I think that if a cursor is used for a certain problem, it just means we don’t know enough about the problem, the data, or the code to think of a better solution.

But I’ve had an experience with cursors and, for my cases, found a better way.

That is what this blog post is going to be about: a case where we were using a cursor until we realised there was a better solution.


Background:

We have a weird table.

It’s our fault, though, as we’ve made it weird: we haven’t properly normalised it, and it’s basically just a massive catch-all for records. For some strange reason, only 1 in 10 records has a ticket number.

So let’s create it!

-- Create the test table.
CREATE TABLE
    [dbo].[ProductActions]
    (
    [product_action_id]    INT IDENTITY(0, 1) PRIMARY KEY,
    [product_id]           INT,
    [allowed_action_id]    INT,
    [ticket_number]        VARCHAR(20),
    [date_actioned]        DATE
    );

-- Populate it.
WITH
    L0    AS(SELECT 1 AS c UNION ALL SELECT 1),
    L1    AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
    L2    AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
    L3    AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
    L4    AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
    L5    AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B)
INSERT dbo.ProductActions
    (product_id, allowed_action_id, date_actioned)
SELECT TOP (10000000)
    product_id = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) % 5,
    allowed_action_id = (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) + 5) / 5,
    date_actioned = CAST(DATEADD(DAY, (SELECT ABS(CHECKSUM(NEWID())) % 50000), 0) AS DATE)
FROM L5;

UPDATE dbo.ProductActions
SET ticket_number = (CASE
                     WHEN product_action_id % 10 = 0
                     THEN 'TCKT'
                           + REPLACE(QUOTENAME(product_action_id, ''''),'''','')
                     ELSE NULL
                     END);


…that took around 1 minute for 10,000,000 records, including the create and update statements…

Now say something diabolical happens: a bad delete statement, a disgruntled employee who just won the lottery, a chimpanzee riding on a Segway, whatever; for some reason, all the records holding our ticket numbers get deleted.
However, thankfully (not sure if that’s the right word) we have Excel sheets of the records that were dropped.

[Image: ticketNumbersDeleted]
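The actual delete isn’t shown above, but it was something along these lines (an assumption on my part; any delete that wipes the ticketed rows tells the same story):

-- The diabolical delete: every record with a ticket number, gone
DELETE FROM dbo.ProductActions
WHERE ticket_number IS NOT NULL;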

… 9 seconds! 9 seconds for that wealthy employee/chimpanzee to cause havoc…

However, we don’t have everything; all we have are the product_id and the allowed_action_id.
So the business tells us that the dates don’t matter, it just needs those records back and those ticket numbers!


What we did:

Cursors. (I know, I’m sorry.)

However, we had problems.
Our table had a lot of traffic so we weren’t guaranteed an unbroken sequential range of identifiers that we could update.

And we couldn’t remove these foreign inserts from our range, as we had no way of identifying which were our records and which were foreign.

So we created a cursor. Not a performance tuned cursor since we were relatively new to SQL at the time but a run-of-the-mill, Google what it is and grab the template off the internet cursor.

Steps:

  1. Import the records into a holding table (see the sketch after this list)
  2. Give the table a sequential identifier for ordering
  3. Get the first record
  4. Insert it into the original table
  5. Grab the identity of the row we just inserted
  6. Update it to have the correct ticket number
  7. Grab the next record and repeat steps 4 – 6
  8. Finish

Altogether, 8 steps (although steps 3 – 7 are done around 1 million times), but how fast is it?
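Step 1 isn’t shown, as the Excel data was loaded manually, but here’s a minimal sketch of the holding table (an assumption on my part: the sheet only held the two columns we still had):

-- Holding table for the Excel data; load it with the Import Wizard,
-- BULK INSERT, bcp... whatever gets the rows in
CREATE TABLE dbo.DeletedRecordsPaper
    (
    product_id        INT,
    allowed_action_id INT
    );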

Step 2:

ALTER TABLE
dbo.DeletedRecordsPaper
ADD [ID] INT IDENTITY(0, 1);
GO

…around 34 seconds, surprisingly long…sign of things to come?…

Steps 3 – 7:

DECLARE @prodID  INT,
        @allowed INT,
        @entered INT;

-- Step 3 : declare the cursor and grab the first record
DECLARE FillMissingRecords
CURSOR FOR
    SELECT product_id, allowed_action_id
    FROM dbo.DeletedRecordsPaper
    ORDER BY ID ASC;

OPEN FillMissingRecords;

FETCH NEXT
FROM FillMissingRecords
INTO @prodID,
     @allowed;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Step 4 : insert the record into the original table
    INSERT dbo.ProductActions
        (product_id, allowed_action_id, date_actioned)
    SELECT @prodID,
           @allowed,
           GETDATE();

    -- Step 5 : grab the identity of the row we just inserted
    SELECT @entered = SCOPE_IDENTITY();

    -- Step 6 : update it to have the correct ticket number
    UPDATE dbo.ProductActions
    SET ticket_number = 'TCKT' + CAST(@entered AS VARCHAR(10))
    WHERE product_action_id = @entered;

    -- Step 7 : grab the next record and repeat
    FETCH NEXT
    FROM FillMissingRecords
    INTO @prodID,
         @allowed;
END;

CLOSE FillMissingRecords;

DEALLOCATE FillMissingRecords;

How’d it do? Well it worked; 1 million records took 9 minutes and 35 seconds though.

…9 minutes and 35 seconds…9 minutes and 35 seconds?!?…I could have recreated the table 9 and a half times in that time! I knew it was going to be slow, I even went out and got a coffee while I was waiting but C’MON!!!!

So altogether, with the adding of the sequential identifier, it took 10 minutes to run.

Now, I can’t remember exactly why, but this scenario happened a few times, so we had to run this a few times.


What we learned:

1) Eventually monkeys improve at using keyboards, and so too do humans; and 2) it’s SQL Server. Use set theory!

Enter OUTPUT

Looking at this issue, the main problem that we could see was that we needed to know which records we had inserted, so we could update them with a ticket number.

We thought that we couldn’t figure out what we had inserted without using SCOPE_IDENTITY; we didn’t know about the inserted table, we didn’t know about table variables. Essentially, we didn’t know about OUTPUT.

Steps:

  1. Import the records into a holding table
  2. Declare a holding table for the identities
  3. Insert all the records into the original table and output the values to our holding table
  4. Update all the records inserted to have the correct ticket number
  5. Finish

Altogether, 5 steps: already an improvement, but that’s just steps, not performance. Is it an actual performance improvement?

…to be honest, if it was any way faster than 9 minutes, I’d be happy…

-- Step 2 : declare a holding table for the identities
DECLARE @entered TABLE
    (
    entered_id INT
    );

-- Step 3 : insert all the records and output the new identities
INSERT dbo.ProductActions
    (product_id, allowed_action_id, date_actioned)
OUTPUT inserted.product_action_id
INTO @entered
SELECT product_id,
       allowed_action_id,
       GETDATE()
FROM dbo.DeletedRecordsPaper;

-- Step 4 : update all the inserted records with the correct ticket number
UPDATE pa
SET ticket_number = 'TCKT' + CAST(entered_id AS VARCHAR(10))
FROM dbo.ProductActions AS [pa]
JOIN @entered AS [e]
    ON pa.product_action_id = e.entered_id;


29 SECONDS!!!! 1 million rows in 29 seconds!

So we’ve gone from 1 million rows in 9 minutes 35 seconds (~1,730 rows a second) to 1 million in 29 seconds (~34,482 rows a second).

…those results flustered me so much that I forgot maths so if anybody wants to work out that improvement percentage, please let me know. I want to write that one down!…


Conclusion:

The most dangerous phrase in the English language is probably “What does this button do?”
The second most dangerous is “We’ve always done it that way”.
There’s always room for improvement: keep testing (on the right system, i.e. NOT PRODUCTION), and take OUTPUT over cursors any day.


Beautifully SARGable, beautiful Query

This blog post is not what I was intending to write, at least not so early in my list of blog postings, mainly because its main issue is around SARG-ability, which I think is a bit more advanced than normal.

…by ‘a bit more advanced’ I mean any conversation involving me and SARG-ability would carry along these lines…
> Do you know what SARG-ability is?
>> Me? Yeah, totally!
> Great! What is it so?
>>…eh….indexes?

Which isn’t totally wrong but since SARG-ability is a main part of this post, it’s best I give you a definition and an example (BONUS: a lovely video!)

So a query is SARGable if “the DBMS engine can take advantage of an index to speed up the execution of the query.”

This SARG-ability is what we are going to be searching for today because we’re going to be searching a table of 15,000,000 rows.

…why 15,000,000? Well I first wanted to populate as many rows as I could but thanks to the joys that are Cartesian products I ended up crashing my SSMS! So 15,000,000 will do us…


Now, if this was a book and we had to search every single one of its 15,000,000 pages for the page with the result we want, OR if we had the option to go to the back of the book, look up in the index where the result is, and go directly to that page, which do you think would be faster? Easier?

…I’m really hoping you went with the Index option…

Hence, in our example SARG-ability is good and it’s what we want to achieve today.

So with that, on to the actual post…


One of the most common things to search for in SQL Server is dates. Nice and simple, and everyone has (presumably) done it.

SELECT
*
FROM [<my_schema>].[<my_table>]
WHERE [<date_column>] = '<date>';

Fine, grand, no problem…if your table is small. But remember, we’ve got a table with 15,000,000 rows; checking every single row is just not going to cut it here.

Unfortunately that is what happened in this situation.
A requirement was given to Development to get all the rows where the date was this year, and Development were smart enough to know that there is a SQL Server function for that.

DECLARE @Year INT = 2016;
SELECT [Test_Date] FROM [dbo].[DateTest] WHERE YEAR([Test_Date]) = @Year;

So they sent it on to me to approve and apply to the QA environment, and I did what I have come to learn is what most DBAs do to Developers: I said no.

Why? Here’s why.
First, let’s create a test table to run this against…this insert took around 2 minutes on my machine.


-- First, create our table
CREATE TABLE [dbo].[DateTest] ([Date_Test_Id] INT IDENTITY(1, 1), [Test_Date] datetime2(3));

-- Populate it with 15,000,000 random rows
-- from 1st Jan 1900 to 1st Jan 2017.
INSERT INTO [dbo].[DateTest] ([Test_Date])
SELECT
TOP (15000000)
DATEADD(DAY, ABS(CHECKSUM(NEWID())) % 42734, 0)
FROM [sys].[messages] AS [m1]
CROSS JOIN [sys].[messages] AS [m2];

-- Create an index we can use for our tests
CREATE NONCLUSTERED INDEX [nci_DateTest_Test_Date] ON [dbo].[DateTest] ([Test_Date]);

-- Show the data
SELECT * FROM [DateTest];

…who else knew that there were 42734 days from the 1st of Jan 1900 to the 1st of Jan 2017? I didn’t!…
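And if you don’t want to take my word for it either, it’s a one-liner to check:

-- 42734 days from 1900-01-01 up to (but not including) 2017-01-01
SELECT DATEDIFF(DAY, '19000101', '20170101') AS [days_between];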

So now that 1) we have our table and 2) we have an index we can use, we can run the developer’s query and be SARGable, right?

DECLARE @Year INT = 2016;
SELECT [Test_Date] FROM [dbo].[DateTest] WHERE YEAR([Test_Date]) = @Year;
GO

[Image: NonSargable]

Nope! A table scan: it ignores our index and reads all 15M rows (too lazy for all the zeros) to return a measly 127,782! It’s not the slowest, taking around 3.960 seconds, but still, we want SARGable!!!

…your row results may differ, random is random after all…

So being the kind, gentle, caring soul that I am (as well as being on my fourth coffee) I went back with a SARGable method.

DECLARE @Year INT = 2016;
SELECT [Test_Date] FROM [dbo].[DateTest]
WHERE [Test_Date] < DATEADD(YEAR, (@Year - 1900) + 1, 0)    --    Start of Next Year
AND [Test_Date] > DATEADD(YEAR, (@Year - 1900), - 1);    --    End of Last Year
GO

[Image: SARGable_notpretty]

Look at that beautiful, beautiful index seek. 127,782 rows in 1.807 seconds, over twice as fast!

But as beautiful as that index seek is, it is not what the developer saw.
> That’s an ugly query, it’s going to be horrible to troubleshoot that.
And off he went.

What he came back with was pleasantly surprising, and it reminds me that one of the best things about SQL Server is that, for all its restrictive syntax and rules, there is no one way to do anything.
What he came back with was the beauty of DATEFROMPARTS.

DECLARE @Year INT = 2016;
SELECT [Test_Date] FROM [dbo].[DateTest] WHERE [Test_Date] BETWEEN DATEFROMPARTS(@year, 1, 1) AND DATEFROMPARTS(@year, 12, 31);
GO

[Image: SARGable_pretty]

Boom! Index seek of 127,782 rows in 1.807 seconds.

I was happy because it was SARG-able, and he was happy as it was easier to troubleshoot.
So he asked if it could be applied to the QA environment, and I did what I have come to learn is what most DBAs do to Developers: I said no.

In case you’re wondering why, consider two things: the data type of the [Test_Date] column and this example.

SELECT * FROM [dbo].[DateTest] WHERE [Test_Date] = '2016-12-31';

378 row(s) affected.

UPDATE TOP (3) [dbo].[DateTest] SET [Test_Date] = '2016-12-31 12:34:56' WHERE [Test_Date] = '2016-12-31';

3 row(s) affected.

[Image: rows_Different_pretty_nonpretty]

3 rows difference…I’ll let you figure out why.

SET WHERE_WAS_I ON;

Letting SQL Server Management Studio tell you what you’ve just run

There have been multiple times when I’ve been running part of a large query script and someone or something has distracted me.

Next thing I know I look back at my screen and my results pane shows:

(2089 row(s) affected)

Now that might be fine if you’ve run what I just ran:

SELECT TOP (2089) * FROM sys.all_objects;

but what happens when you take another look at the code and what you thought you’d just run is in the middle of the following?

INSERT INTO [very_important_table]
SELECT
[id], [random_string_column], [random_int_column]
FROM [user_input] as [ui]
-- if it's not already in the table, then insert it
WHERE NOT EXISTS
(
 SELECT 1 FROM [very_important_table] as [exists] WHERE [exists].[id] = [ui].[id]
);

SELECT TOP (2089) * FROM sys.all_objects;

DELETE FROM [very_important_table];
--WHERE [important_table_Id] = 1 -- commented out WHERE CLAUSE

Suddenly you’re not sure if you really ran the SELECT statement at all.
Maybe you ran the insert statement and 2089 rows were marked to never be seen again!
Or maybe that other table only had 2089 rows in it and you’ve now deleted every one!!

Now, this blog post is not going to deal with fail-safes for preventing those scenarios, because 1) you should already know how to do that, and 2) if you don’t know, then maybe back away until you research it… It’s only going to deal with a nice little way you can figure out what it was that you just ran.

In SSMS, there are 4 different ways to deal with query results.

  1. Results to Grid, the default (Ctrl + D),
  2. Results to Text, quite handy for copying text (Ctrl + T),
  3. Results to File, cause sometimes you just want a really big file (Ctrl + Shift + F), and
  4. No Results, very useful when you’re trying to performance tune something, as sending results to the client is actually a performance hit!

Now you may have noticed that for that list, I gave you the keyboard shortcuts for each option apart from the final one, and there’s a very important reason for that…

…I don’t actually know if it has a shortcut…

Honestly, I’m not sure if there is one (apart from declaring a variable as shown by Kendra Little).
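For the curious, that variable trick is a quick sketch along these lines; SQL Server still reads every row, it just never sends them to the client:

-- Assigning the column to a variable discards the output,
-- but all the work of reading the rows still happens
DECLARE @discard SYSNAME;
SELECT @discard = [name] FROM sys.all_objects;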
What I do know is that where you go to set No Results to on is also where you go to set WHERE_WAS_I to on.

  • Tools –> Options

[Image: SSMS_Tools_Options]

  • Query Results –> SQL Server –> Results to Grid

[Image: IncludeQueryInResultSet]

…check some of these other options out too, they’re pretty useful…

From here we can already see that I have that magical little checkbox “Include the query in the result set” ticked, and 3 check boxes down from that is the No Results checkbox “Discard results after execution”.

…if you want it to take effect, you’ll need to open a new Query Window…

Now, seeing as I had that checkbox ticked, all I have to do is open the Messages tab in the results pane, and there is the text that I ran: my little WHERE_WAS_I.

[Image: WHERE_WAS_I_MessagePane]

…okay so I cheated, I ran the whole thing and only showed you part of the results pane. Sorry I’m not sorry 🙂 …

The good news is that now we know what was run…EVERYTHING…but seeing as this is a massively contrived example…I mean, who calls a table [very_important_table]?…it works for what we want.
From this we can see that:

  • 3 queries ran, and
  • 3 results were returned.

Like I said, it’s not a fail-safe but just a little nudge to let you know where you are. Hopefully it should help you out with knowing what it was that you just ran…even if what it tells you wasn’t what you wanted to see!

P.S. This is the closest that I’ve found to a shortcut for this. If you click the Query Options button below and tick the checkboxes that way, it will also work, but only for the session window that you are in!

  • Query Options –> Results

[Image: QueryOptions]

T-SQL Tuesday #77: Recursive Common Table Expression (CTE)

tsql2sday150x1501

Favourite SQL Server Feature:

Is there any event that is a better starting-off point for new bloggers than T-SQL Tuesday? It gives you everything you need: a topic, a semblance of choice, a deadline.

This month’s T-SQL Tuesday is hosted by Jens Vestergaard (b/t), and the topic is our favourite SQL Server feature.

Now, I didn’t realise that I had a favourite SQL Server feature until I had to sit down and think about it, and I realised that from the moment I learned how to use CTEs, I’ve been using them non-stop. So much so that, in a previous job, a developer once said he could recognise a stored procedure I had written just because it contained a CTE!

it didn’t help that in that case he was right 🙁 

According to Books Online, a CTE is a “temporary named result set” that is defined within the scope of a single statement. In case you are worried about the ‘single statement’ aspect of that, don’t be; with temp tables, table variables, etc., SQL Server has you covered ;).
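For example, a quick sketch of dodging the single-statement scope by materialising the CTE’s output into a temp table:

-- The CTE only exists for this one statement...
WITH single_use AS
(
    SELECT [object_id], [name] FROM sys.all_objects
)
SELECT [object_id], [name]
INTO #keep
FROM single_use;

-- ...but the temp table carries its results forward
SELECT [object_id], [name] FROM #keep;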

As this is my first blog post, I’ll keep it short and sweet. The main capability of CTEs that I admire is the RECURSIVE element to them.

Recursive CTEs require only 4 components:

  1. An anchor statement
  2. A joining statement e.g. UNION ALL
  3. A recursive statement, and
  4. A terminator clause

Actions Speak Louder Than Words:

With the release of the new STRING_SPLIT function in SQL Server 2016 that everyone is looking forward to, it’s probably fitting that the example of a recursive CTE I’ll be using is splitting a string. Since this post is about recursive CTEs, I’ll be focusing on the architecture of the CTE more than on what is in the script, though!

So for our example, say we are given a variable string with a list of elements in it…


DECLARE @string VARCHAR(100) = 'SELECT,INSERT,UPDATE,DELETE,EXECUTE'

…and we are tasked with splitting this string out into its different parts. Now, there are multiple different ways that this could be accomplished (nearly all of them faster and more efficient; one is sketched below), but we’re going the recursive CTE route!
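For contrast, here’s a sketch of the 2016 built-in doing the same job (assuming a database at compatibility level 130)… but where’s the fun in that?

DECLARE @string VARCHAR(100) = 'SELECT,INSERT,UPDATE,DELETE,EXECUTE';

-- One call, one column named [value], job done
SELECT [value] FROM STRING_SPLIT(@string, ',');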


DECLARE @string VARCHAR(100) = 'SELECT,INSERT,UPDATE,DELETE,EXECUTE'
-- This value is used to split out the string :
      , @delimiter CHAR(1) = ',';

-- Add on a final delimiter to get the last element
SELECT @string = @string + @delimiter;

WITH delimiting_cte
    ( ID, original_text, remaining_text, delimited_text )
AS
(
    -- Anchor stmt :
    SELECT
        CAST( 1 AS SMALLINT ),
        @string,
        RIGHT( @string, ( LEN( @string ) - CHARINDEX( @delimiter, @string ) ) ), -- remaining_text
        SUBSTRING( @string, 1, CHARINDEX( @delimiter, @string ) - 1 )            -- delimited_text
    -- Joining Statement :
    UNION ALL
    -- Recursive stmt : remove each delimited value to put in own row...
    SELECT
        CAST( c.ID + 1 AS SMALLINT ),
        c.original_text,
        RIGHT( c.remaining_text, ( LEN( remaining_text ) - CHARINDEX( @delimiter, c.remaining_text ) ) ), -- remaining_text
        SUBSTRING( c.remaining_text, 0, CHARINDEX( @delimiter, c.remaining_text ) )                       -- delimited_text
    FROM delimiting_cte AS [c]
    WHERE
        -- Terminator clause: Until no delimiter left in the [remaining_text] column...
        remaining_text LIKE '%[' + @delimiter + ']%'
)
SELECT
    ID,
    original_text,
    remaining_text,
    delimited_text
FROM delimiting_cte AS [c];

Anchor Statement:

The anchor statement is static; it doesn’t change. You can take that query out, run it all day long, and you’d get the same results. No changes here: this is what the recursion derives itself from!

-- Anchor stmt :
SELECT
    CAST( 1 AS SMALLINT ),
    @string,
    RIGHT( @string, ( LEN( @string ) - CHARINDEX( @delimiter, @string ) ) ),
    SUBSTRING( @string, 1, CHARINDEX( @delimiter, @string ) - 1 )

However, it has the basic limitation of a CTE in that it requires distinct column names

So we have two options, define them at the start:


WITH delimiting_cte
( ID, original_text, remaining_text, delimited_text )
AS
(
 SELECT ...


or define them inside the anchor statement itself:

WITH delimiting_cte
AS
(
SELECT [ID] = CAST(1 AS SMALLINT),
[original_text] = @string...

Whichever we choose, the only caveat is that each column must have a distinct column name.


Joining Statement:

Nice and simple, we need something to join the anchor and the recursive statement together:

-- Joining Statement :
UNION ALL

(Ever wonder what happens if you change this to UNION? INTERSECT? EXCEPT? Go on, give it a go and find out!)


Recursive Statement:

Now this is, metaphorically, where the magic happens. There are a couple of things here that are worth pointing out.

This is the complete opposite of the Anchor Statement: it will not run on its own! It needs the Anchor Statement to actually execute, because you are SELECTing from the CTE while still defining the CTE!!
I can’t think of another aspect of SQL that has this behaviour, but if anyone knows of one, let me know!
On the same note, we reference the columns here as well, but we don’t have to give them distinct column names. SQL Server is smart enough to take their position and match them up with the column names in the Anchor Statement, much like a regular UNION ALL expression.
However, like a regular UNION ALL expression, the columns in the Recursive Statement need to have the same data types as those in the Anchor Statement, otherwise it throws a slight hissy fit and errors out!

-- Recursive stmt : remove each delimited value to put in own row...
SELECT
    CAST( c.ID + 1 AS SMALLINT ),
    c.original_text,
    RIGHT( c.remaining_text, ( LEN( remaining_text ) - CHARINDEX( @delimiter, c.remaining_text ) ) ), -- remaining_text
    SUBSTRING( c.remaining_text, 0, CHARINDEX( @delimiter, c.remaining_text ) )                       -- delimited_text
FROM delimiting_cte AS [c]
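That CAST on the ID column is earning its keep, by the way. A minimal sketch of what happens without it: the anchor produces a SMALLINT, the recursive part’s n + 1 gets promoted to INT, and SQL Server refuses to run the query with a type-mismatch error.

WITH cte ( n ) AS
(
    SELECT CAST( 1 AS SMALLINT )    -- anchor: SMALLINT
    UNION ALL
    SELECT n + 1                    -- promoted to INT: types no longer match, so this errors
    FROM cte
    WHERE n < 5
)
SELECT n FROM cte;
-- Errors with: "Types don't match between the anchor and the recursive part..."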

Terminator Clause:

SQL Server has a MAXRECURSION setting. Pretty smart when you think about it, unless you’re a fan of infinite loops. Unless specified otherwise, recursive CTEs will stop after 100 recursions (and no, I’m not going to tell you how to increase this limit; it’s right there in Books Online).
However, if we want the CTE to not error out, it’s a good idea to stop it before it hits that limit, and that’s why we have terminator clauses.

WHERE
-- Terminator clause: Until no delimiter left in the [remaining_text] column...
remaining_text like '%['+@delimiter+']%'

Now, we can specify this inside the CTE or in the statement after it, but as already stated, it’s best to have it somewhere (unless you like error messages…).

Conclusion:

And that’s the surface of recursive CTEs, if not scratched then definitely slightly scraped.
There are so many more uses for these guys, and some genius ones have already been blogged about.
Jeff Moden uses CTEs for his Tally Table and even gives you a glimpse into his string splitter, which I definitely recommend checking out.
And this is definitely my favourite SQL Server feature…so far 🙂