Recently I was asked to create a temporary user with SELECT permissions on a database.
So far, not a problem. Taking advantage of the pre-defined roles in SQL Server, I just added this new user to the pre-defined role [db_datareader], which grants SELECT permissions on all tables and views in a database.
Why would I grant SELECT permissions this way and not manually choose the tables and views that this user could connect to?
1. This is a test server, so there is no sensitive information held that I'm worried about blocking access to,
2. I didn't get the exact requirements of the tables this user is to query, so I don't know which tables/views to grant access to and which to deny access to (I consider this a mistake on my part and something I have to act on next time),
3. The test user is only required for 2 days, after which it is getting reviewed and deleted as quickly as I can, and
4. Why grant SELECT on tables individually when I can grant on all of them in one fell swoop?
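Putting that into code, the whole job is a couple of statements. This is just a sketch; the database, login, and user names here are made up:

```sql
-- Create the temporary user and hand it SELECT on every table
-- and view via the fixed database role db_datareader.
USE TestDatabase;          -- assumed database name
CREATE USER TempReader FOR LOGIN TempReader;  -- assumed login name
ALTER ROLE db_datareader ADD MEMBER TempReader;
```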
Kalen Delaney ( blog | twitter ) has an excellent blog post about Windows Fast Startup. I'm not going to repeat what she has said here because, like I already mentioned, it's an excellent post and I encourage you to read it (and maybe give her a lil' subscribe 😉). What I will mention is that I encountered this feature with my new laptop and had it interfering with my SQL Server testing (again, read her post as to possible causes why).
I am pretty sure that if I was a fish, I would not survive long enough to grow old as I would fall for the first piece of bait hanging from a lovely, shiny thing that I could see.
The only defence that I have is that, as I’m still a Junior DBA, I can make these mistakes as long as
a). they are not extremely serious (no dropping production databases for this guy!), and
b). I’m expected to learn from them and not repeat them!
And like most things, it started innocently enough. A simple support ticket coming in with the following error message.
Msg 229, Level 14, State 5, Line 65
The SELECT permission was denied on the object ‘Removable’, database ‘LocalTesting’, schema ‘Superflous’.
What I did:
I saw this error message and immediately thought to myself
AH! No problems, they just need SELECT permissions on that object. 2 second job.
And seeing as the ticket was nice enough to provide the login and user that was receiving the error message (we’ll say it was a user called “NewUser”), I could join that with the error message and grant permissions.
GRANT SELECT ON OBJECT::Superflous.Removable TO NewUser;
Following this was a quick test, impersonating the user to see if it worked:
-- Test 01.
EXECUTE AS USER = 'NewUser';
SELECT USER_NAME(), SUSER_SNAME();
SELECT * FROM dbo.GenericView;
REVERT;
I was happy it worked; the user, once notified, was happy it worked; and I went on my merry way to grab some celebratory coffee.
Until on the way back I bumped into my Senior DBA and told him proudly what I had done…
What I should have done:
The following is a simplified reproduction of that conversation…
>Is that a new View?
>> Is that a new User?
> No…although it’s called New.
>> Could they SELECT from that View before?
> Yeah, as far as I know.
>> Alright, so did anything change before the call?
> eh…I didn’t check
>> Okay, from now on: Check.
It was at that stage that we started getting other tickets in from other users with the same error message. So rather than fixing the underlying problem, I had fixed a symptom for a single user.
The symptom was the User not having permission to select, but the underlying problem was that the View had changed.
At this stage I was still confused: it's a view, so what does it matter if the query creating it has changed? How could this have broken permissions?
Again, jumping the gun, I didn’t check…
Our problem view references objects in two different schemas and, when we check the ownership of those two schemas, we get the following:
-- Who owns what?
SELECT dp.name AS Owner, s.*
FROM sys.schemas AS s
JOIN sys.database_principals AS dp ON s.principal_id = dp.principal_id
WHERE s.name IN ('dbo', 'Superflous');
Originally, our Superflous.Removable table was in a different database, on its dbo schema, which the owner of the view (dbo) had permission to select from.
Since the owner of the view (OV) had permissions on this schema and the OV gave select permissions on the view to the user (NU), the NU inherited the OV’s permissions.
So SQL Server hit the view, saw it was owned by the OV and didn’t need to check permissions for our NU.
The view first hit the table dbo.Foo , saw that it was owned by OV and so didn’t need to check permissions.
Now the view calls across to the other database, sees the owner is not the OV, so it checks permissions.
However, the OV has access permissions on this table, so the NU gets these access permissions; therefore we have no problem!
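That chain can be reproduced in miniature. This is only a sketch (all of the object and principal names here are made up), but it shows the ownership chain breaking at the schema with a different owner:

```sql
-- Two principals, and a schema owned by someone other than dbo.
CREATE USER OwnerUser WITHOUT LOGIN;
CREATE USER TestUser WITHOUT LOGIN;
GO
CREATE SCHEMA OwnedSchema AUTHORIZATION OwnerUser;
GO
CREATE TABLE dbo.ChainA (ID int);
CREATE TABLE OwnedSchema.ChainB (ID int);
GO
CREATE VIEW dbo.ChainView
AS
    SELECT a.ID
    FROM dbo.ChainA AS a
    JOIN OwnedSchema.ChainB AS b ON a.ID = b.ID;
GO
GRANT SELECT ON dbo.ChainView TO TestUser;
GO
-- dbo.ChainA shares an owner with the view: no permission check.
-- OwnedSchema.ChainB has a different owner: TestUser's own
-- permissions are checked, none are found, and Msg 229 is thrown.
EXECUTE AS USER = 'TestUser';
SELECT * FROM dbo.ChainView;
REVERT;
```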
Now we had recently done a change to have the information from the other database brought over to our database via Replication.
This meant a re-write of our view using the new table and schema with its new owner, a schema that neither our NU nor the OV had permissions for.
What this meant was that the same procedure was followed by the SQL Server engine, with the only difference being that, instead of going across to the other database, it went to our new schema Superflous.Removable. It saw the OV did not have access permissions, so it denied access permissions for our NU.
So basically, when NewUser went to select from our view, they hit the new schema, SQL Server realised it needed to check their permissions and, when none were found, access was denied.
All I had done by jumping the gun and fixing the symptom was made it so that when SQL Server traversed down the ownership chain for the view and came to the new schema, it checked permissions, found the SELECT permission for only this user and continued on.
This was the reason that the view worked for the user but no one else!
This MyStuff database principal should not be the owner of our Removable table; in fact, the Superflous schema should not even exist, so it was a simple matter of transferring ownership to dbo.
ALTER AUTHORIZATION ON SCHEMA::Superflous TO dbo;
Now all the users, who have read access on the dbo schema, are able to use this view with no further hassles.
Problem solved! Right?
Stop Jumping the Gun!
All the above is what I did.
Trying to fix the permission error, I granted SELECT permission.
Trying to fix the ownership chain, I transferred ownership.
In trying to fix the problem, I continually jumped the gun.
Which is why I am still a Junior DBA.
What my Senior DBA did was fix the replication script so the new schema wouldn’t get created in the first place, and the table would get created in dbo.
Which is why he’s my Senior DBA.
Jumping the gun isn’t going to give you a head start. It is just going to delay you. Knowing the problems, as well as knowing the solutions, is the answer.
I’m learning the problems…I’ll have the solutions soon, and I aim to share them too.
Another aspect to take from that post was the importance of knowing: if you do not know something, then it is important for you to learn it.
That being said, something that I did not know, but recently learned, was that there is nothing stopping a Primary Key from also being a Foreign Key.
there is nothing stopping a Primary Key from also being a Foreign Key
When you think about it, this lack of knowledge came from incorrect assumptions. You read Primary KEY and Foreign KEY and you think to yourself: well, they are both keys, aren't they? Same thing.
That is the trap that I fell down and the trap is not knowing and making invalid assumptions. So let’s hopefully help you with knowing what the differences between them are.
First let’s create our tables:
-- Create our Foo and Bar tables.
IF OBJECT_ID(N'dbo.Bar', N'U') IS NOT NULL
    DROP TABLE dbo.Bar;
IF OBJECT_ID(N'dbo.Foo', N'U') IS NOT NULL
    DROP TABLE dbo.Foo;

CREATE TABLE dbo.Foo
(
    FooID int IDENTITY(1, 1) NOT NULL
        CONSTRAINT [PK_dbo_Foo] PRIMARY KEY CLUSTERED,
    FooValue varchar(8) NOT NULL
);

CREATE TABLE dbo.Bar
(
    BarID int NOT NULL
        CONSTRAINT [PK_dbo_Bar] PRIMARY KEY CLUSTERED
        CONSTRAINT [FK_dbo_Bar_dbo_Foo] FOREIGN KEY (BarID)
            REFERENCES dbo.Foo (FooID),
    BarValue varchar(8) NOT NULL
);
-- Declare our holding table.
DECLARE @FooIDs TABLE (FooID int);
-- Insert into Foo.
INSERT INTO dbo.Foo (FooValue)
-- Gather the new IDs from Foo.
OUTPUT inserted.FooID INTO @FooIDs (FooID)
-- NEWID() needs an explicit convert before LEFT() will accept it.
SELECT LEFT(CONVERT(varchar(36), NEWID()), 8) FROM sys.all_objects;
-- Insert Foo's ID into the Bar table.
INSERT INTO dbo.Bar (BarID, BarValue)
SELECT FooID, RIGHT(CONVERT(varchar(36), NEWID()), 8) FROM @FooIDs;
-- Select our tables.
SELECT * FROM dbo.Foo;
SELECT * FROM dbo.Bar;
Primary Keys (PK)
a column or combination of columns that contain values that uniquely identify each row in the table
A Primary key is a column or combination of columns that contain values that uniquely identify each row in the table.
That’s it; it just has to uniquely identify the row.
…btw you are going to hear the word “unique” a lot with regard to Primary keys…
Now there are other types of keys that can do the same (Surrogate Keys, Composite Keys, Unique Keys, Alternate Keys, etc) but these are outside the scope of this post.
So if we attempt to insert another record into our Primary Key column/column combo that violates this unique, identifying property, we’re going to have a bad time.
We have to use the IDENTITY_INSERT syntax because I've created the tables using IDENTITY and, if we were to insert a record into the identity column without turning IDENTITY_INSERT on first, then another error pops up before the PK violation error that we want.
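For example, using the dbo.Foo table from above (the duplicate value 1 is assumed to already exist from the earlier inserts):

```sql
-- Allow explicit values into the identity column first, otherwise
-- a different error fires before the PK violation we want to see.
SET IDENTITY_INSERT dbo.Foo ON;

-- FooID 1 already exists, so this fails with a Msg 2627
-- "Violation of PRIMARY KEY constraint" error.
INSERT INTO dbo.Foo (FooID, FooValue)
VALUES (1, 'DupeRow');

SET IDENTITY_INSERT dbo.Foo OFF;
```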
However, if we were to create our table without specifying the Primary Key constraint, then the above insert would work and you would have duplicate entries populating your table, silent but deadly.
Foreign Keys (FK)
a column or combination of columns that is used to establish and enforce a link between the data in two tables to control the data that can be stored in the foreign key table
A Foreign key is a column or combination of columns that is used to establish and enforce a link between the data in two tables to control the data that can be stored in the foreign key table.
That’s it; it just has to establish and enforce a link between data.
If we try to violate this link, SQL Server will throw a different error and not let us.
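For example, trying to insert a child row with no matching parent (a sketch; the value 999999 is assumed not to exist in dbo.Foo):

```sql
-- No FooID of 999999 exists in dbo.Foo, so this fails with a
-- Msg 547 "The INSERT statement conflicted with the FOREIGN KEY
-- constraint" error instead of the PK violation seen earlier.
INSERT INTO dbo.Bar (BarID, BarValue)
VALUES (999999, 'Orphan');
```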
Yet if we were to create our table without specifying our Foreign key, then there would be no real link between our tables. So if our business depends on a record not being in Bar without also being in Foo, and we don't have a constraint specified to that effect, then I'm hard pressed to think of a way you can ensure it.
In fact, I don’t even like the above definition for Foreign keys as it states that two tables are necessary for a Foreign key constraint when only one is needed:
IF OBJECT_ID(N'dbo.HR', N'U') IS NOT NULL
    DROP TABLE dbo.HR;

CREATE TABLE dbo.HR
(
    EmployeeID int IDENTITY(1, 1) NOT NULL
        CONSTRAINT [PK_dbo_HR] PRIMARY KEY CLUSTERED,
    FirstName varchar(20) NOT NULL,
    SurName varchar(20) NOT NULL,
    ManagerID int NULL
        CONSTRAINT [FK_EmployeeID_Manager_ID] FOREIGN KEY
            REFERENCES dbo.HR (EmployeeID)
);
-- Check for foreign key
SELECT * FROM sys.foreign_keys WHERE [parent_object_id] = OBJECT_ID('dbo.HR');
-- Check for primary key
SELECT * FROM sys.key_constraints WHERE [parent_object_id] = OBJECT_ID('dbo.HR');
-- Check for everything.
EXECUTE sp_help 'dbo.HR';
If you check the two definitions for Primary key and Foreign key you’ll see that, even though they are both called keys, they serve two different purposes; namely identifying rows and enforcing links.
And those two purposes are not mutually exclusive!
A column/column combo that identifies a row can also be used to enforce a link back to another table (or itself, as shown above with Foreign keys).
The assumption, that if you were one then you couldn’t be the other, was incorrect. If your business rules call for it, don’t let a column being one type of key stop it from being the other.
Let’s not go to C to get to B
You may be thinking that this is a lot of hassle and that’s not an unfair thought.
Why not just skip declaring any key constraints and let the data fall where it may?
I will admit that it is a fair bit of effort to constantly define and declare the different key constraints when creating tables, especially as Developers are focused on efficiency, but it is worth it!
Now, while the following appears to hold true for any foreign key constraint (I haven’t finished testing yet), I found these while testing the above so I’m going to include them here.
SQL Server loves primary key and foreign key constraints.
A primary key gets a unique index created on it to enforce that it is unique and, since it has an index placed upon it, it can be used to speed up query selection.
A foreign key is special though as it forces a constraint and the query optimiser can use these constraints to take certain shortcuts 🙂
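You can see the first of these for yourself on the dbo.Foo table from earlier; the primary key shows up as a unique (in this case clustered) index:

```sql
-- The PK constraint is enforced by a unique index on the table.
SELECT name, type_desc, is_unique, is_primary_key
FROM sys.indexes
WHERE object_id = OBJECT_ID(N'dbo.Foo', N'U');
```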
Query Optimizer (QO) Examples
-- Join our tables
SELECT F.* FROM dbo.Foo AS [F] JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
SELECT B.* FROM dbo.Foo AS [F] JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
Thanks to our constraint the QO knows that if something were to exist in Bar, it must be in Foo and, since we are not selecting or ordering anything from Foo, it straight up ignores it.
Fewer reads, less IO; in general, all-around better performance.
Does this work with other joins though?
Like above, where for something to exist in Bar it must exist in Foo, see if you can figure out why the QO decides it is safe to ignore some joins.
SELECT F.* FROM dbo.Foo AS [F] LEFT JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
SELECT B.* FROM dbo.Foo AS [F] LEFT JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
SELECT F.* FROM dbo.Foo AS [F] RIGHT JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
SELECT B.* FROM dbo.Foo AS [F] RIGHT JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
SELECT F.* FROM dbo.Foo AS [F] FULL OUTER JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
SELECT B.* FROM dbo.Foo AS [F] FULL OUTER JOIN dbo.Bar AS [B] ON F.FooID = B.BarID;
The “SET operators” (UNION, UNION ALL, INTERSECT, EXCEPT) act a bit differently.
I’ll let you take a look at them yourself though.
There is a lot that I have yet to learn about SQL Server, in fact that is the main reason that I created this blog; so I could read back on these posts sometime in the future and smile at my ignorance.
Hopefully the main aspect that I take from this post though is that it is okay not to know something as long as you have the desire and the initiative to learn.
To make it clear to anyone reading this: this post is about SQL Server, even though I start off talking a bit about PostgreSQL.
…I know, weird right?…
I have a PostgreSQL instance on my home laptop that I haven’t used yet.
I intend to start using it soon as a sort of hobby as I feel that there are things to be learned about databases from it. Like comparing features available to PostgreSQL that are not in SQL Server and vice-versa or the different ways the same tasks are accomplished in both platforms.
However, SQL Server is the platform used in my work, I still have so much to learn with it (especially with 2016 coming out!!!), and I just find it so damn interesting, so I haven't touched PostgreSQL yet.
All that being said, I have signed up to a few newsletters from PostgreSQL (General, Novice, etc.) and they are fascinating.
Unfamiliar words like pglogical and rsync are combined with known words like publisher and subscriber and the community itself is so vast and supportive that it rivals the #SQLFamily (from what I’ve seen and yes. I am being biased to SQL Server 🙂 ).
One of those newsletters was regarding a problem a user was having with creating databases.
When he would create a new database it was not empty as he expected but was filled with user tables, logins, etc.
What was going on?
The explanation was pretty much what you would expect, just called by a different name.
He had basically written to his Model database (called template1 in PostgreSQL) sometime ago without realising it.
PostgreSQL has the following syntax with creating databases:
CREATE DATABASE DatabaseName WITH TEMPLATE TemplateName
The new database settings are created from whatever template is specified using the WITH TEMPLATE syntax (defaults to template1 apparently).
This works the same as in SQL Server, new databases inheriting their settings from the Model system database, but in SQL Server's case it is implicit: there is no call for WITH TEMPLATE Model.
This is perfectly valid syntax.
CREATE DATABASE DatabaseName
The only difference that I can tell at the moment is that PostgreSQL can have multiple different templates while SQL Server has just one; Model.
Is this restriction on database templates a good thing or a limitation? Personally I go with the former but you may feel differently.
This brought me back to the system databases and there was something that I realised.
A lot of new users, and I was included in this list not too long ago, do not think about the system databases.
I'm not sure I can fault them either, as it's probably not a priority. There is so much to learn with regard to DDL statements, DML statements, Deadlocking, General T-SQL, etc., that the system databases are just a little folder under Databases that does not get opened.
However, and I can’t stress these enough, these are important!
Master: logon accounts, sys info, etc.
MSDB: backup history, restore history, job history, etc.
TempDB: temp tables, cursors, ordering, etc.
Model: new databases, creation of TempDB, etc.
And that is just scratching the surface!
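A quick way to eyeball them (the first four database_id values are the visible system databases; the hidden Resource database doesn't show up here):

```sql
-- The four visible system databases.
SELECT database_id, name
FROM sys.databases
WHERE database_id <= 4;
```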
Take care of these databases, do not limit yourself to looking after just the user databases.
They are not the only databases that need to be backed-up and they are not the only databases that can experience corruption.
I'm hoping that you believe me on this but, unfortunately, the best lessons are learned the hard way.
You should have a destructive sandbox SQL Server (NOT PRODUCTION!!!), hopefully a little laptop at home to call your own; basically, something that nobody else would mind you destroying.
Choose a system database, any one will do; delete that database, drop it, whatever you want, just make it unavailable and see how far you can get using SQL Server.
…Hell isn’t it?…
Now imagine that happened unexpectedly and unwanted on a Monday morning because you weren't taking care of your system databases.
Update: Thanks to “kuopaz”, for pointing out that I had forgotten to add the unnamed constraint to the example where I say unnamed constraints will work fine.
Kenneth Fisher (b | t) recently wrote about Re-Evaluating Best Practices and, reading his post, I couldn’t help but agree with him. Especially with regard to:
Times change, knowledge changes so best practices have to change. Don’t rest on your knowledge or the knowledge of others. Make sure you understand not only what the best practice is but why it’s the best practice. And then question it.
Now I'm not going to bring up the Microsoft PLE of 300 advice, as that example has been taken out and waved in front of people's faces so many times that I feel it's lost its impact and, as far as I am aware, it's the only case where the best practice is so widely ineffectual.
However, the statement…
Make sure you understand not only what the best practice is but why it’s the best practice.
… is, for me, the crucial statement in his post and the catalyst for the following post as I’ve fallen for a case where the best practices are not applicable; Naming Constraints.
In this post, we are going to be looking at the best practice of giving logical, descriptive names to constraints in tables.
The following code is going to create a table called dbo.NamingConstraints with a Primary key column, a named constraint column and an unnamed constraint column.
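The original creation script didn't survive the trip to this post, but it would have looked something like this sketch (the column names and CHECK definitions are assumed from the error messages discussed below):

```sql
-- A sketch of dbo.NamingConstraints: a Primary key column, a named
-- CHECK constraint column and an unnamed CHECK constraint column.
IF OBJECT_ID(N'dbo.NamingConstraints', N'U') IS NOT NULL
    DROP TABLE dbo.NamingConstraints;

CREATE TABLE dbo.NamingConstraints
(
    NamingConstraintID int IDENTITY(1, 1) NOT NULL
        CONSTRAINT [PK_NamingConstraints] PRIMARY KEY CLUSTERED,
    NamedConstraint int NULL
        CONSTRAINT [NamedConstraint_GreaterThan0] CHECK (NamedConstraint > 0),
    UnNamedConstraint varchar(50) NULL
        CHECK (UnNamedConstraint <> 'Forbidden')
);
```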
We can check these constraints with the following two queries, the first for the Primary key, and the second for the CHECK constraints, with the results in Figure 1.1.
SELECT kc.name, kc.type_desc
FROM [sys].[key_constraints] AS kc
WHERE kc.[parent_object_id] = OBJECT_ID(N'[dbo].[NamingConstraints]', N'U');

SELECT cc.name, cc.definition
FROM [sys].[check_constraints] AS cc
WHERE cc.[parent_object_id] = OBJECT_ID(N'[dbo].[NamingConstraints]', N'U');
Constraints are best used to ensure referential and data integrity. They commonly come into play when data that violates business logic is inserted into the database and error messages are thrown.
When these error messages occur, they are normally passed into error logs from whatever application is integrated with our database. In these cases it is easier to deal with descriptive names than non-descriptive ones.
Taking our two CHECK constraints as examples, if we were to trigger their error messages…
Looking at the first error message reported (Figure 2.1), we know from the error message that something is wrong in the Table dbo.NamingConstraints and the column is UnNamedConstraint but that is it. If this table had multiple constraints, and we weren’t the one to create this table and the constraints, we would have to do some (potentially lengthy) investigation to figure out what is causing the error and then correct it.
Compare that with the error message for our named constraint (Figure 2.2).
As we have a proper, descriptive constraint name here, straight away we can say that the error occurred as we tried to insert a value that was not greater than 0.
When Naming Constraints is not applicable.
Do not name constraints on temporary tables.
"Why", as in what would a use case for this be? I use this a lot to step through code with different variables, especially with stored procedures.
Two windows, side by side, running them step by step and comparing the results in each.
…I know, fun right?…
Or "why", as in why should you not name constraints on temporary tables? Well, that's going to require a bit more detail.
So if we were troubleshooting a procedure and attempted to pass results into a temporary table…
CREATE TABLE #Temp02
(
    [Col1] int IDENTITY(1, 1) NOT NULL,
    [Col2] varchar(256) CONSTRAINT [Col2_neq_Forbidden] CHECK ([Col2] <> 'Forbidden')
);

INSERT INTO #Temp02 ([Col2])
SELECT name FROM sys.all_objects;
… we should have no problem.
But say we try to do that in two separate windows…
… Big, angry error message telling us that it could not create the constraint and that an object already exists in the database.
Now say that we were to not explicitly name the constraints on these tables?
CREATE TABLE #Temp02
(
    Col1 int IDENTITY(1,1) NOT NULL,
    Col2 varchar(256) CHECK (Col2 <> 'Forbidden')
);
INSERT INTO #Temp02 (Col2)
SELECT name FROM sys.all_objects;
SELECT * FROM #Temp02;
No problem! Since we have not explicitly named the constraint, SQL Server does not violate its rule for identifiers and so does not throw an error message!
Yes, I know that this could be classed as an extreme edge case but that is not the caveat that I’m talking about here.
If you are in the habit of not skipping over the actual SQL code, you may be wondering to yourself:
Well, the temp tables have the same name and they should follow the rules for identifiers as well, no? Why aren't they throwing an error?
Well that’s because temp tables are treated a bit differently than other objects.
Consider the following example where we try to find our temp table in TempDB:
SELECT * FROM tempdb.sys.tables WHERE name = '#Temp02';
Nothing. It doesn't exist. But we didn't drop it and we haven't closed the scope, so it can't have just disappeared!
If we change our select statement to the LIKE operator with an ending %…
SELECT * FROM tempdb.sys.tables WHERE name LIKE '#Temp02%';
SQL Server, knowing that temp tables could get created multiple times concurrently (especially if created in Stored Procedures), gets around the rule for identifiers with temp tables by adding a unique suffix onto each temp table that is created.
Therefore, it doesn't violate the rule, it doesn't error out, and multiple concurrent same-named temp tables can be created.
Why doesn't this unique suffix happen with constraints as well? Is this on purpose? By design? Well, the only answer I can give is: I don't know.
But what I do know is that, in these cases with temp tables, don't name your constraints.
In my last post, I was looking into creating new temporary tables from a SELECT INTO syntax when I ran across an issue that I couldn’t explain.
I realised that a situation like that cannot be allowed to continue on a blog post about SQL Server so I documented what I was experiencing (probably pretty poorly I may add) and said that when I had an answer, I would let you know.
However, I will try as best I can to explain it, mainly for myself though so I can look back on it.
Summary of the problem:
We have a table with an identity value of 1,000. When we select a subset of that table into a new table the identity value of the new table decreases to match the highest identity value of that subset.
From initial investigations, there is no visible evidence to show how this is achieved, so how is this happening?
SELECT * INTO #temp FROM dbo.A WHERE x1 = 1;
SELECT * FROM #temp;
When running a SELECT INTO query, if you were to enable STATISTICS PROFILE beforehand,
SET STATISTICS PROFILE ON;
SELECT A_ID, x1 INTO #Test FROM dbo.A WHERE x1 = 1;
…you will see an Argument column with the following code:
The second phase performs an insert into the table the first phase created. This insert is done with identity_insert semantics, so the identity values from the source table end up in the destination, unchanged. The highest value actually inserted is set as the last value used. You can use IDENT_CURRENT or sys.identity_columns to see it.
So there is no addition/subtraction going on here.
SQL Server is simply going:
> Are we done inserting? We are? Great, what was that last identity value? 998? Great, that’s your new identity value for this table!
When working with SQL Server, you are going to be using TempDB; that is a certainty.
Using temporary tables, however, isn’t a certainty.
I’m taking it for granted that at one time or another you have used them but it’s not something I can say for certain.
…if you aren’t using temporary tables AT ALL, please let me know what you’re using SQL Server for. I’ll either be impressed or shocked!…more than likely shocked…
However, one of the features that I find with temporary tables is that they inherit the identity from whatever table they were created from, and more often than not, this is not the identity that you want them to have.
…I mean if we wanted them to have that identity, we would have just used the base table, right?…
Now I have ways around this that I’ll blog about later on .
However, while working with these temporary tables and their identities, I found something that I just can't explain yet, and thought I would blog about it so that when I do figure it out I can look back at this and call myself all kinds of names for being so stupid.
So first of all, let’s set up our base table:
-- Create our base table
CREATE TABLE dbo.A
(A_ID int IDENTITY(1, 1),
 x1 int, -- the column that drives the filters used later
 noise1 int DEFAULT 1,
 noise2 char(1) DEFAULT 'S',
 noise3 date DEFAULT GETUTCDATE(),
 noise4 bit DEFAULT 0);

-- Create random data between the range of [0-3]
INSERT INTO dbo.A (x1)
SELECT s1000.n
FROM ( SELECT TOP (10) n = 1 FROM sys.columns) AS s10 -- 10
CROSS JOIN ( SELECT TOP (10) n = 1 FROM sys.columns) AS s100 -- 10 * 10
CROSS JOIN ( SELECT TOP (10) n = ABS(CHECKSUM(NEWID())) % 4 FROM sys.columns) AS s1000; -- 100 * 10
SELECT * FROM dbo.A;
Random-ish results but should be something like this:
Now, SQL Server has a useful little Database Engine Stored Procedure called sp_help that, along with a plethora of other useful information, can return a result set on identity.
If we query our table, we would get the following results:
EXECUTE sp_help 'dbo.A';
What this is telling us is that:
Our Identity column is [A_ID] (identity),
Our Identity value starts at 1 (seed), and
It goes up by 1 (increment) each time.
…little tip: check out Tools -> Options -> Keyboard -> Query Shortcuts… you can just highlight a table and hit “Alt + F1” and you have the same results as running the above query…Efficiency for the win!…
Which is great if we wanted to know what we started off with, but what about looking into the future? What about knowing what's going to get inserted next?
Well for the identity column, we can!
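That command is DBCC CHECKIDENT. With NORESEED it only reports; without it (or with RESEED) it can change the current identity value, which is what makes it dangerous:

```sql
-- Reports the current identity value and column for dbo.A
-- without changing anything.
DBCC CHECKIDENT ('dbo.A', NORESEED);
```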
NOTE: I've just given you a potentially dangerous command, as it has some optional parameters that can break stuff. Do me a favour and make sure you're not on Production.
…see the above? How it’s in bold? Do me a favour and double check you’re not on Production? Cheers!…
So we've looked into the future and, between sp_help and DBCC CHECKIDENT, we can tell that our future identity will be:
DBCC CHECKIDENT().current identity value + sp_help.increment
Now say that part of our query is only interested in rows where x1 = 1.
The most basic way to create our temporary table with this information is probably the one that the majority would use:
SELECT * INTO #temp FROM dbo.A WHERE x1 = 1;
SELECT * FROM #temp;
…Nice, simple, easy. 3 things that we want….
And if we were to ALT+ F1 #temp we would get the following:
So with the same identity, we have the same current identity value, right?
Now I like this.
I appreciate this.
I’m thankful for this.
But I’m not sure why exactly this happens, apart from SQL Server is smart enough to know that it’s not going to need the exact same current identity value for this new table.
Using a (slightly deprecated) command…
SET STATISTICS PROFILE ON;
SELECT * INTO #temp FROM dbo.A WHERE x1 = 1;
..it seems like SQL Server is changing the identity for you…
Now just to make this even more confusing, we run the following which for me shows that the last x1 value is 2.
SELECT * FROM dbo.A ORDER BY A_ID DESC;
So what I’m expecting is that if we were to input this into a temporary table, we wouldn’t expect to see that setidentity…-7.
SET STATISTICS PROFILE ON;
SELECT * INTO #temp2 FROM dbo.A WHERE x1 = 2;
…But it’s there!…
Fine, okay so, just means that our current identity value has been changed!
So we run our DBCC CHECKIDENT and…
DBCC CHECKIDENT ('#temp2')
And that’s my problem!
I don’t have an answer for this at the moment but when I figure this out, I’ll let you know, and if anyone has the answer, feel free to share 🙂
But at least I have a partial answer:
Question: When does subtracting 7 from a value equal no change? Answer: When SQL Server is involved.
I am not pro-cursor. I am not, however, extremely anti-cursor.
I think that if there is a cursor used for a certain problem it just means we don’t know enough about the problem, the data or the code to think of a better solution.
But I’ve had an experience with cursors and, for my cases, found a better way.
That is what this blog post is going to be about, a case where we were using a cursor until we realised a better solution for it.
We have a weird table.
It's our fault though: we've made it weird, we haven't properly normalised it, it's basically just a massive catch-all for records and, for some strange reason, only 1 in 10 records has a ticket number.
So let’s create it!
-- Create the test table (the original table name didn't survive this post,
-- so dbo.ProductActions is assumed throughout).
CREATE TABLE dbo.ProductActions
(
    [product_action_id] INT IDENTITY(0, 1) PRIMARY KEY,
    [product_id] INT NOT NULL,
    [allowed_action_id] INT NOT NULL,
    [date_actioned] DATE NOT NULL,
    [ticket_number] VARCHAR(20) NULL
);

-- Populate it.
WITH
L0 AS(SELECT 1 AS c UNION ALL SELECT 1),
L1 AS(SELECT 1 AS c FROM L0 AS A CROSS JOIN L0 AS B),
L2 AS(SELECT 1 AS c FROM L1 AS A CROSS JOIN L1 AS B),
L3 AS(SELECT 1 AS c FROM L2 AS A CROSS JOIN L2 AS B),
L4 AS(SELECT 1 AS c FROM L3 AS A CROSS JOIN L3 AS B),
L5 AS(SELECT 1 AS c FROM L4 AS A CROSS JOIN L4 AS B)
INSERT INTO dbo.ProductActions
(product_id, allowed_action_id, date_actioned)
SELECT TOP (10000000)
    product_id = ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) % 5,
    allowed_action_id = (ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) + 5) / 5,
    date_actioned = CAST(DATEADD(DAY, (SELECT ABS(CHECKSUM(NEWID())) % 50000), 0) AS DATE)
FROM L5;

-- Give every 10th record a ticket number.
UPDATE dbo.ProductActions
SET ticket_number = (CASE WHEN product_action_id % 10 = 0
                          THEN 'TCKT' + REPLACE(QUOTENAME(product_action_id, ''''), '''', '')
                     END);
…that took around 1 minute for 10,000,000 records, including the create and update statements…
Now say something diabolical happens, a bad delete statement, a disgruntled employee who just won the lottery, a chimpanzee riding on a segway, whatever but for some reason all the records holding our ticket numbers get deleted.
However, thankfully (not sure if that’s the right word) we have Excel sheets of the records that were dropped.
… 9 seconds! 9 seconds for that wealthy employee/chimpanzee to cause havoc…
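The delete in question wasn’t recorded, but a sketch of the damage, assuming the test table from the setup above, could be as simple as:

```sql
-- The havoc: wipe out every record that holds a ticket number.
DELETE FROM dbo.ProductActions
WHERE ticket_number IS NOT NULL;
```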
However, we don’t have everything; we only have the product_id and the allowed_action_id.
So the business tells us that the dates don’t matter, it just needs those records back and those ticket numbers!
What we did:
Cursors. (I know, I’m sorry.)
However, we had problems.
Our table had a lot of traffic so we weren’t guaranteed an unbroken sequential range of identifiers that we could update.
And we couldn’t weed these foreign inserts out of our range, as we had no way of identifying which were our records and which were foreign.
So we created a cursor. Not a performance-tuned cursor, since we were relatively new to SQL at the time, but a run-of-the-mill, Google-what-it-is-and-grab-the-template-off-the-internet cursor.
1. Import the records into a holding table
2. Give the table a sequential identifier for ordering
3. Get the first record
4. Insert it into the original table
5. Grab the identity of the row we just inserted
6. Update it to have the correct ticket number
7. Grab the next record and repeat steps 4 – 6
All together, 7 steps (although steps 3 – 7 are done around 1 million times), but how fast is it?
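The import itself (step 1) wasn’t anything fancy. A minimal sketch, assuming a holding table called dbo.HoldingTable (the name is mine, not from the original):

```sql
-- Step 1: a holding table matching what survived in the Excel sheets.
CREATE TABLE dbo.HoldingTable
(
    product_id INT,
    allowed_action_id INT
);
-- (The Excel data would be loaded here, e.g. via the import wizard.)
```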
-- Step 2 (the holding table name here is assumed).
ALTER TABLE dbo.HoldingTable
    ADD [ID] INT IDENTITY(0, 1);
…around 34 seconds, surprisingly long…sign of things to come?…
Steps 3 – 7:
DECLARE @prodID INT,
        @actionID INT,
        @entered INT;
-- Start of Step 3
DECLARE insert_cursor CURSOR FOR
    SELECT product_id, allowed_action_id
    FROM dbo.HoldingTable -- (name assumed)
    ORDER BY ID ASC;
OPEN insert_cursor;
FETCH NEXT FROM insert_cursor INTO @prodID, @actionID;
WHILE @@FETCH_STATUS = 0
BEGIN -- Step 4
    INSERT INTO dbo.ProductActions
        (product_id, allowed_action_id, date_actioned)
    VALUES (@prodID, @actionID, GETDATE());
    -- Step 5
    SELECT @entered = SCOPE_IDENTITY();
    -- Step 6
    UPDATE dbo.ProductActions
    SET ticket_number = 'TCKT' + CAST(@entered AS varchar(10))
    WHERE product_action_id = @entered;
    -- Step 7
    FETCH NEXT FROM insert_cursor INTO @prodID, @actionID;
END
CLOSE insert_cursor; DEALLOCATE insert_cursor;
How’d it do? Well it worked; 1 million records took 9 minutes and 35 seconds though.
…9 minutes and 35 seconds…9 minutes and 35 seconds?!?…I could have recreated the table 9 and a half times in that time! I knew it was going to be slow, I even went out and got a coffee while I was waiting but C’MON!!!!…
So altogether, with the adding of the sequential identifier, it took 10 minutes to run.
I can’t remember exactly why, but this scenario came up more than once, so we had to run this process a few times.
Looking at this issue the main problem that we could see was that we needed to know what records we had inserted to update them with a ticket number.
We thought that we couldn’t figure out what we had inserted without using SCOPE_IDENTITY; we didn’t know about the inserted table, we didn’t know about table variables; essentially, we didn’t know about OUTPUT.
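For anyone in the same boat we were in: OUTPUT lets an INSERT hand back the values it just generated, via the inserted pseudo-table, for every row at once instead of one SCOPE_IDENTITY call per row. A minimal sketch (the table and column names here are made up purely for illustration):

```sql
-- Hypothetical table purely for illustration.
CREATE TABLE dbo.Demo (id INT IDENTITY(1, 1), val CHAR(1));

DECLARE @new_ids TABLE (id INT);

-- OUTPUT captures the identity of every inserted row in one statement.
INSERT INTO dbo.Demo (val)
OUTPUT inserted.id INTO @new_ids
VALUES ('a'), ('b');

SELECT id FROM @new_ids; -- the identities just generated
```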
1. Import the records into a holding table
2. Declare a holding table for the identities
3. Insert all the records into the original table and output the values to our holding table
4. Update all the records inserted to have the correct ticket number
All together, 4 steps, already an improvement, but that’s just steps, not performance. Is it an actual performance improvement?
…to be honest, if it was anyways faster than 9 minutes, I’d be happy…
-- Step 2
DECLARE @entered TABLE (entered_id INT);
-- Step 3
INSERT INTO dbo.ProductActions
    (product_id, allowed_action_id, date_actioned)
OUTPUT inserted.product_action_id INTO @entered
SELECT product_id, allowed_action_id, GETDATE()
FROM dbo.HoldingTable; -- (name assumed)
-- Step 4
UPDATE pa
SET ticket_number = 'TCKT' + CAST(e.entered_id AS varchar(10))
FROM dbo.ProductActions AS [pa]
JOIN @entered AS e
    ON pa.product_action_id = e.entered_id;
29 SECONDS!!!! 1 million rows in 29 seconds!
So we’ve gone from 1 million rows in 9 minutes 35 seconds (~1,739 rows a second) to 1 million rows in 29 seconds (~34,482 rows a second).
…those results flustered me so much that I forgot maths so if anybody wants to work out that improvement percentage, please let me know. I want to write that one down!…
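For the record, the maths is a one-liner you can run anywhere (durations in seconds: 9m35s = 575s for the cursor, 29s for OUTPUT):

```sql
SELECT
    CAST(575.0 / 29.0 AS DECIMAL(5, 1))                   AS times_faster,   -- ~19.8x
    CAST((575.0 - 29.0) / 575.0 * 100.0 AS DECIMAL(5, 1)) AS pct_time_saved  -- ~95.0%
```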
The most dangerous phrase in the English language is probably “what does this button do?”
The second most dangerous is “we’ve always done it that way”.
There’s always room for improvement, so keep testing (on the right system, i.e. NOT PRODUCTION), and take OUTPUT over cursors any day.