Ordering and Choices in PowerShell

Ah! Hash tables!

Ever since I saw a presentation by Anthony Howell (blog), aka PoshWolf, about them, read the blog post by Kevin Marquette (blog) about them, and was enlightened by multiple PowerShell community members (e.g. Chris Dent, Mathias “IISResetMe” Jessen, etc.) about them, I’ve loved hash tables.

However, there is one drawback I have with hash tables: they don’t preserve the order in which their entries are inserted. I’m OK with this since I come from a DB background, and I’m used to order not being enforced unless I specify an ORDER BY. Not everyone is as lenient as we are, though, and the vast majority of the louder masses expect this ordering.

Now, there are ways around this unordered aspect of hash tables. The main way is the [ordered] type accelerator; add it to the front of your hash table literal and the insertion order is preserved.
I use this when I want hash tables but also want guaranteed ordering.
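Here’s a quick illustration (the keys and values are made up for the demo):

$plainHashTable = @{ First = 1; Second = 2; Third = 3 }
$orderedHashTable = [ordered]@{ First = 1; Second = 2; Third = 3 }

<# Key order here is whatever the hashing algorithm fancies... #>
$plainHashTable.Keys
<# ...but here it will always be First, Second, Third #>
$orderedHashTable.Keys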

Well, that was a short blog post. Thanks!

Sacrificing Idiomatic for Changeable

I know about using [ordered], and now you’ve learned about using [ordered], but what about people using my scripts who aren’t part of either knowledge group?
Oh, I could add a comment, and I could train the people, but what happens if I get hit by a bus, er, win the lottery in the morning? There will eventually be a time when I’m not there, and people will have to use my scripts.

So, [ordered] seems a bit too unknown at the moment for the people who regularly use my scripts.

Which is a pity, because the current version uses [ordered] hash tables pretty extensively.
Is there something else that we can use instead?

Yes, Arrays

Ah, the humble array. They’re not that bad once you get to know them.
Once I started thinking about these little data types, I realised that they could be a potential answer to this problem.

$pscustomobjectArray = @(
    [PSCustomObject] @{ ID = 0; Name = 'Test'; Type = 'Dev' }
    [PSCustomObject] @{ ID = 1; Name = 'Test2'; Type = 'Dev' }
    [PSCustomObject] @{ ID = 2; Name = 'PreProd_1'; Type = 'PreProd' }
    [PSCustomObject] @{ ID = 3; Name = 'PreProd_2'; Type = 'PreProd' }
    [PSCustomObject] @{ ID = 4; Name = 'Prod_1'; Type = 'Prod' }
    [PSCustomObject] @{ ID = 5; Name = 'Prod_2'; Type = 'Prod' }
    [PSCustomObject] @{ ID = 6; Name = 'Prod_3'; Type = 'Prod' }
)

for ($i = 0; $i -lt $pscustomobjectArray.Count; $i++) {
    "The $i-th object is $($pscustomobjectArray[$i])"
}
The index matches the ID.

It would just need a bit of a rewrite…

Script

Initialize-Choice_v2.ps1

Old = Left. Rewrite = Right.
The default is because there’s always one person who just runs these things as is…

Thankfully, arrays keep their order, so I can rely on them in the script.
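If you can’t make out the screenshot, the gist of the rewrite looks something like this (a rough sketch of the idea, not the actual script):

function Initialize-ChoiceV2Sketch {
    param (
        <# An array parameter: the order comes out the way it went in #>
        [Parameter(Mandatory)]
        [string[]] $Choices,

        [string] $Title = 'Choose',
        [string] $Caption = 'Select an option',

        <# The default choice, for that one person who just runs these things as is #>
        [int] $Default = 0
    )

    <# Give each choice a numbered shortcut, 1-based for the humans #>
    $descriptions = for ($i = 0; $i -lt $Choices.Count; $i++) {
        [System.Management.Automation.Host.ChoiceDescription]::new("&$($i + 1) $($Choices[$i])")
    }

    <# Returns the zero-based index of the selection #>
    $Host.UI.PromptForChoice($Title, $Caption, $descriptions, $Default)
}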

In Use

What does this mean? This means that you can use the standard input that people are accustomed to in PowerShell, such as PSCustomObject, CSV, JSON, etc.

As long as we can select the property, we can then filter it down to this specific property.

Example

For each of these examples, we’re passing in a list of choices. Then we’re filtering down the list to the option we chose.

We can do this multiple times, if we want.

Here’s our list of choices to… aha… choose from.

1…2…3, a…b…c…

Example PSCustomObject


$ChoicesCustomObject = [PSCustomObject]@{
    Name = 'Test01'
    Value = '1a'
}, [PSCustomObject]@{
    Name = 'Test01'
    Value = '1b'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2a'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2b'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3a'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3b'
}
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesCustomObject | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesCustomObject | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
$ChosenOption
1a. 1b.

Example Csv

$ChoicesCSV = @'
"Name","Value"
"Test01","1a"
"Test01","1b"
"Test02","2a"
"Test02","2b"
"Test03","3a"
"Test03","3b"
'@ | ConvertFrom-Csv
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesCSV | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesCSV | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
$ChosenOption
2a. 2b.

Example JSON

$ChoicesJSON = @"
[
  {
    "Name": "Test01",
    "Value": "1a"
  },
  {
    "Name": "Test01",
    "Value": "1b"
  },
  {
    "Name": "Test02",
    "Value": "2a"
  },
  {
    "Name": "Test02",
    "Value": "2b"
  },
  {
    "Name": "Test03",
    "Value": "3a"
  },
  {
    "Name": "Test03",
    "Value": "3b"
  }
]
"@ | ConvertFrom-Json
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesJSON | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesJSON | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
$ChosenOption
3a. 3b.

Example Multiple

$ChoicesCustomObject = [PSCustomObject]@{
    Name = 'Test01'
    Value = '1a'
}, [PSCustomObject]@{
    Name = 'Test01'
    Value = '1b'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2a'
}, [PSCustomObject]@{
    Name = 'Test02'
    Value = '2b'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3a'
}, [PSCustomObject]@{
    Name = 'Test03'
    Value = '3b'
}
<# Get a unique list of available choices #>
$AvailableChoices = $ChoicesCustomObject | Select-Object -ExpandProperty Name -Unique
<# Get the chosen index from the user #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices 
<# Filter down to the chosen option #>
$ChosenOption = $ChoicesCustomObject | Where-Object Name -eq $AvailableChoices[$ChosenIndex] 
<# Filter down to the value #>
$ChosenValue = Initialize-Choice_v2 -Title "Choose Value" -Caption "Select a value for $($ChosenOption.Name)" -Choices $ChosenOption.Value
<# Return the chosen option #>
$ChosenOption[$ChosenValue]
1b only.

Caveat

Yes, I am aware that the values are off by one; for example, the first value shows 1 on the confirmation screen but returns a 0. I bowed to requests from people who would be using the code but were not comfortable with indexes starting at 0.

Feel free to change that in your script.
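For instance, a hypothetical one-line shift after the call (reusing the names from the examples below) would bring the two back in line:

<# Hypothetical tweak: shift the zero-based return value so it matches the 1-based confirmation screen #>
$ChosenIndex = Initialize-Choice_v2 -Title "Test Get-Choice" -Caption "Select an option" -Choices $AvailableChoices
$DisplayedNumber = $ChosenIndex + 1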

Overall

This is a case of “the exception proving the rule”. I’ve said before that I prefer not to have user prompting in my scripts, since a script that requires human intervention is not something that can be fully automated.
However, whether it’s risk-averse business leaders or novices to scripting, there are times when an expectation of human interaction is present.

If there is no getting away from this, then I can at least make the process easier. Here’s hoping that making it easier and easier to use will lead to more automation… or at least more of it.

Hopefully, this script can help.

How SQL Server’s Compute Scalar Will Impede You

Time to read: ~ 2 minutes

Words: 417

Postgres Blogs

We’re getting more and more Postgres instances at work. To get up to speed on Postgres, I’ve started to expand my RSS feed to include Postgres blogs.

It was one of those blogs, by David Stokes, that captured my attention; namely, “How PostgreSQL’s Aggregate FILTER Will Spoil You”.

It got me thinking, “Huh, does SQL Server not have filters? And, if not, how can we work around that?”

So, cheers, David, for the idea for this post.

The Big Set up

We’re going to use the same setup that David has in his post.

USE tempdb;
GO

CREATE TABLE z (a int, b int, c int);
GO

INSERT INTO z VALUES (1, 10, 100), (2, 20, 200), (3, 30, 300), (4, 40, 400);
GO

SELECT a,b,c FROM z;
GO
I learnt how to increase font size.

Stupid, but does it work

Now, SQL Server doesn’t have the FILTER clause, but we can do some pretty weird things, like a SELECT...WHERE statement with no FROM clause.

SELECT
	a,b,c,
	[filter?] = (SELECT b WHERE b > 11)
FROM z;
GO
Does it B useful though?

So, is that it? Can we just throw that into a COUNT() function and essentially have the same thing as Postgres?

SELECT
	all_rows	= COUNT(*),
	b_gt_11		= COUNT((SELECT b WHERE b > 11))
FROM z;
GO
B better.

Workaround

We can rewrite that strange little filter we have using an OUTER APPLY.

SELECT
	a,b,c,
	[filter?] = bees.bee
FROM z
OUTER APPLY (SELECT b WHERE b > 11) AS bees (bee);
GO
Same, same, but different.

Now, can we throw that in a COUNT() function?

SELECT
	all_rows	= COUNT(*),
	b_gt_11		= COUNT(bees.bee)
FROM z
OUTER APPLY (SELECT b WHERE b > 11) AS bees (bee);
GO
B-utiful!

Things I don’t know

  • Postgres, yet. Still learning this one.
  • Why does the OUTER APPLY work when the subquery doesn’t?
    Let’s review the plans and see if we can identify a culprit.
Does not Compute Scalar

Strangely, the first Compute Scalar (from the left, after the SELECT) is simply a scalar operator applied to the other Compute Scalar.

It’s enough to block aggregation, but I’ll leave it to more specialised people than me to tell me why. I’ll just store this under “errors out” for future reference.

  • Performance Implications
    This is a test on 4 rows. What would happen if this were tens of millions of rows?

    I leave that to the reader.

Initialize-Choice

Dear Host, you don’t have to use Read-Host. There is a choice

Words: 590

Time to read: ~ 3 minutes.

Interactive

I’m an advocate of automation. I don’t think automation that requires user input is the correct way to do automation. However, there are times when user interaction is wanted.

I didn’t say “needed”, I said “wanted”.

It’s either a case of severe risk-aversion, being burnt by bad bots, or skirting the fine line between wanting to be involved with the work and not wanting to do all the work.

Effectively, creating a Big Red Button, with a secondary “are you sure?” screen, followed by a “last chance” pop-up. You know, the “let’s introduce extra options to make sure things won’t go wrong” approach.

Sorry, did I say I was an automation advocate? I think I meant to write automation snob.

The Question

Most of the work I’ve been doing lately has been project-orientated, production DBA work e.g. building new servers, migrating across cloud providers, enabling TDE…

…working with Distributed AGs. I think this is the first SQL Server technology about which I swing from love to hate in equal measure.

So, automation is nearly non-existent right now on my tasks. But, when I got asked if there was an easier way to get user input, taking into account bad answers, shortcuts, default options, etc., rather than using a plethora of Read-Host commands and double-checking the user input, well…

I didn’t know.

But, thankfully, I speak to people who are more experienced and smarter than I am when it comes to PowerShell, and I remembered them mentioning $Host.UI.PromptForChoice() before.

Initialize-Choice.ps1

Here’s the link to the code. I’m going to have to ask you to excuse the formatting. I’ve forgotten how to embed the code in an aesthetic way and have misplaced my desire to research how again.

https://github.com/shaneis/RandomScripts/blob/28e2c96e08de0e863134962f7c120c5ffc97abc0/Initialize-Choice.ps1

What this provides us with is a shorthand way to generate a choice prompt for the user, one that will not accept answers outside of the given list.
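For context, the raw call underneath looks something like this (a minimal sketch; the labels and help text are made up):

$options = @(
    [System.Management.Automation.Host.ChoiceDescription]::new('&1 Option 1', 'This is the first option')
    [System.Management.Automation.Host.ChoiceDescription]::new('&2 Option 2', 'This is the second')
    [System.Management.Automation.Host.ChoiceDescription]::new('&C Cancel', 'Changed your mind?')
)

<# Arguments: caption, message, the choices, and the index of the default choice #>
$result = $Host.UI.PromptForChoice('Test 01', 'Pick an option', $options, 0)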

Pudding

$Choices = [ordered]@{
  "Option 1" = "This is the first option"
  "Option 2" = "This is the second"
  "Option 3" = "The third"
  "Option 4" = "You're not going to believe this..."
}
Initialize-Choice -Title 'Test 01' -Choice $Choices

Nice and simple.

The choices that we laid out have been given a shortcut number, and their ordering has been preserved. Any user input outside of the choices is automatically rejected, we can add descriptions for extra help, and we’ve even thrown in a “Cancel” choice as well.

In fact, ordering is the only thing left to mention about this function. It’s why it’s written the way it is and why it only accepts hashtables with the [ordered] accelerator on them.
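One way to enforce that (a sketch of the idea, assuming a simplified function shape) is to type the parameter as the OrderedDictionary that [ordered] produces; a plain hashtable won’t convert to it, so it gets rejected before the function body even runs:

function Initialize-ChoiceSketch {
    param (
        <# [ordered] literals are System.Collections.Specialized.OrderedDictionary under the hood;
           a plain @{ } hashtable won't convert to this type, so parameter binding rejects it #>
        [Parameter(Mandatory)]
        [System.Collections.Specialized.OrderedDictionary] $Choice,

        [string] $Title
    )

    <# ...build the ChoiceDescriptions and call $Host.UI.PromptForChoice() as above... #>
}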

Here’s how it worked when I first wrote it using plain hash tables.

$Choices = @{
  "Option 1" = "This is the first option"
  "Option 2" = "This is the second"
  "Option 3" = "The third"
  "Option 4" = "You're not going to believe this..."
}
Initialize-ChoiceUnordered -Title 'Test 01' -Choice $Choices

4, 2, 3, 1! Debugging this would sure be fun!

Option 1 is still option 1.
But now Option 3 has gotten jealous and pushed Option 2 down.
While Option 4 is just happy to be included.

Seeing as any code that parses the choice relies on the user picking the right option and the choices being in a determined order, it seemed like a bug to not have the input order preserved.

Sin é

So that’s it.

Feel free to use and abuse it. I would prefer that you use and improve it, but it’s your party; you can do what you want to.

It’s your choice.

T-SQL Tuesday #164: Code That Made You Feel A Way

Time to read: ~ 4 minutes

Words: 899

Welcome to T-SQL Tuesday, the monthly blogging party where we receive a topic to post.

This month we have Erik Darling (blog) asking us to post about “code that makes [us] feel a way”.

A while back, I was tasked to performance tune some code that brought me through all five stages of grief. It’s best to take you through the events from the developer’s viewpoint. There’s less cursing that way.

Denial

Hey, what’s up? You were asked to troubleshoot some code. Not a big deal; these requests come in from time to time. 

You have code that keeps coming up as a high consumer of CPU on our systems? OK?

It had gotten to the stage where it kept appearing in sp_WhoIsActive? What’s that? And should it not be “whom is active”? Well, agree to disagree.

Let’s see the code so. Wow, that’s a small scroll bar! Yeah, that’s one of ours.

No, it is. I swear it is.

I’m not surprised you can’t find it in stored procedures; we dynamically create it. Here’s the procedure. Yeah, all 900 lines! Pretty cool, huh?

What do you mean, why? We had to! We had to react to the different parameters that get passed in.

Anger

Alright, alright! Calm down. I misspoke.

Yeah, that’s it, breathe. There aren’t different parameters. It’s a parameter. Singular.

No, we still need to generate the query dynamically. Cause the XML could be different. Huh? Yeah, the parameter is XML; it could be anything in there. So we pass the XML in as a nvarchar(max) string.

You spilt some of your coffee there when you banged the desk.

Then we use sp_xml_preparedocument at the start, but then we have to use sp_xml_removedocument at the end.

You’ve never heard of those before? I thought you were a DBA?

We use the “prepare” to get the XML into a state that’s easier to consume, and we use the “remove” so we don’t get memory leaks!

Your face is getting a bit red, did you know that?

It’s SQL Server; it uses enough memory it can share some!

Did we read the docs? No, why? It can use one-eighth of the total memory available for SQL Server. Ah, but that’s “can”, not “will”.

Yes, yes, cursors upon cursors upon cursors. Why? We don’t know how many values are in each XML node in the params, so we have to split them out.

We then join them into a comma-delimited string, which is then used in IN clauses… woah! Is that the sound of your teeth grinding? You know that’s not good for your health. What do you mean neither am I?

Anyway, then we parse all of that down to a massive, what do you call it, swiss-army knife, lego-block, dynamic query built based on what’s passed in. You don’t call it that? I thought you did. What do you call it so? Wow, that’s not a word I had heard of before.

It’s not too bad, though! We pass in everything as literal values, so it’s faster that way. We read up on that parameter sniffing issue you said can happen. That isn’t going to catch us out here!

Modern problems require modern solutions, as they say. What’s that? It’s not a modern problem? I’m a what? Are you allowed to call me that?

Bargaining

You want me to re-write it? You’ll help? It’ll take a lot of time. It’s not really a priority.

Plus, it’s not even worth it. How much CPU? Is that an “illion” with an “m” or an “illion” with a “b”? Per run? Is that a lot? Well, I really wouldn’t know, now would I?

Yeah, good point; you’ll also have to check how much memory it uses. Hey! Probably one-eighth of the total memory available for SQL Server. Alright, it wasn’t that bad a joke; everyone’s a critic!

You’ll give me credit? But it’s fine in Dev; there’s no point. I suppose we could remove some of the WHERE OR IS NULL branches since we know if they’re empty when we create the query.

Nah, that would make the procedure a bit too big. Anything over 1000 lines is too much. Well, 985 is still less than 1000!

Depression

Why are you crying? No, we care, we do, we really do. No, not all the code is like this; this is old. Yes, we stopped using that XML parameter pattern. Yes, and the cursors. Well, most of them, at least. 

Your tears are mixing with the coffee you spilt, you know that? 

Here, there’s really no need for the fetal position in the middle of the office. You have yet to see the code for the busier databases.

You know what, why don’t you go home and come back in tomorrow, and I’ll go over that way, OK?

Acceptance

Hey! There’s our favourite DBA. You seem a lot cheerier.

Oh, you don’t have to perf-tune that query anymore? That’s great; see, it wasn’t so bad. What’s that? The latest release failed? It ran fine in Dev.

You’re permitted to set up a QA environment to test performance before code gets released? What kind of code? Any stored procedure?

Ah, that’s OK. No, really, it’s fine. We started using ORMs and embedding SQL into the app a while ago. This won’t affect us.

You’ve started crying again.

T-SQL Tuesday 157 – End of Year Activity

Words: 544

Time to read: ~ 2 minutes

Read-Clipboard

Welcome to T-SQL Tuesday, the monthly blog party where we are given a topic and asked to blog about it. This month we have Garry Bargsley, asking us to discuss end-of-year activities.

Here

My current workplace has a change freeze in place for the end of the year. Usually, I would say that this is risk-averse.
But seeing as the nature of the business (payments) means that the Thanksgiving/Black Friday/Christmas time is the busiest time of the year, I’m willing to cut them some slack.

So, what to do when we cannot deploy to production? Oh, we’ll still be busy! There are always management-approved fixes that get through, annual processes to complete, and project planning that has to be…well, planned.

But, my priority for this end-of-year is documentation.

Docs

We have a few different tools for documentation. Examples are Confluence, Google Sheets, Google Docs, etc.

But most of the time, documentation takes the form of scripts saved to source control.

These scripts are multiprocess and cross-team dependent and can quickly end up like me doing DIY. One hand trying to steady the nail, the other wielding the hammer, and the whole situation collapsing into swear words and tears.

We can’t currently have a “hit-Enter-and-leave-it” bunch of scripts because we have to stop midway for another team’s work or to check the results of what we’ve just run.

Notebooks

If we used Notebooks, this would be so much easier. I could create the code, save a snippet of the results to the notebooks, and then future executors could see what to expect.

No-books

We don’t use Notebooks.

Plain .sql files for me, it is! 

To ease the documentation burden and have some semblance of tidiness, I created a PowerShell tool that spits it out “all pretty like”.

Format-TextTable

Now, with a combination of Read-Clipboard from the ImportExcel module and the Format-TextTable function, I can grab results and turn them into a text table that I can add back into the script.

Simple example: we want to save the database name, create date, and compatibility level of the databases on an instance.

… I said simple example – not a good or useful example.

SELECT 
	database_name = name,
	create_date,
	compatibility_level
FROM sys.databases;
Picture shows an SSMS screen selecting name, create_date, and compatibility level from sys.databases. Returning 5 rows

Now, let’s copy and paste the results into a comment on that script as a form of “documentation”.

Picture shows an SSMS screen selecting name, create_date, and compatibility level from sys.databases. Returning 5 rows. Underneath the query in a comment are the pasted results that all have different spacing and are not aligned

Now, this time I’m going to copy those results, and run the following code in my PowerShell Core window, before pasting into the comment block.

Read-Clipboard | Format-TextTable | Set-Clipboard
Picture shows an SSMS screen selecting name, create_date, and compatibility level from sys.databases. Returning 5 rows. Underneath the query in a comment are the pasted results that are wrapped in a text table format and aligned under their header name

Given the option, I know which one I’d choose.

Code

You can pick up the source-code here.

Feel free to check it out and add/remove from it.
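If you’d rather not click through, here’s the rough shape of the idea (a simplified sketch, not the actual module code): measure the widest value in each column, then pad every cell out to that width.

function Format-TextTableSketch {
    param (
        [Parameter(ValueFromPipeline)]
        $InputObject
    )

    begin { $rows = [System.Collections.Generic.List[object]]::new() }
    process { $rows.Add($InputObject) }
    end {
        $names = $rows[0].PSObject.Properties.Name

        <# Widest value per column, counting the header itself #>
        $widths = @{}
        foreach ($name in $names) {
            $widths[$name] = (@($name.Length) + @($rows | ForEach-Object { "$($_.$name)".Length }) |
                Measure-Object -Maximum).Maximum
        }

        <# Header row first, then one padded line per object #>
        ($names | ForEach-Object { $_.PadRight($widths[$_]) }) -join ' | '
        foreach ($row in $rows) {
            ($names | ForEach-Object { "$($row.$_)".PadRight($widths[$_]) }) -join ' | '
        }
    }
}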

Set-Clipboard

I am aware that I could spend minutes of my time to evenly align the spaces. And, yes, I know that SSMS can do find-and-replace using simple Regex.
But if I manually structure the output every time, for all different types of columns & data types, with different spacings…

I’d be better off arguing with the company to start using Notebooks.

Until then, this works for me. Hopefully, it will work for you.

And hey, maybe it will help improve your documentation in the New Year. That has a higher chance of happening than my one to improve my DIY skills.

T-SQL Tuesday #155 – Write to Read, Not to Run

Words: 861

Time to read: ~ 5 minutes

Welcome to T-SQL Tuesday, the monthly blogging party where we talk about a topic given to us by the host. This month, we have Steve Jones (blog | twitter) asking us about Dynamic SQL.

There are myriad uses for Dynamic SQL – I’ve already read some of the published posts and I’m impressed with the range. (Yeah, I’m writing this late; I’m aware).

I’m aiming for something different. I want to talk about the things I believe Dynamic SQL should have. You can disagree with me if you’d like; I’d welcome it (as long as you can justify it) but here are my thoughts on writing Dynamic SQL.

DECLARE

To make a contrived example, I’ve taken Andy Mallon’s (blog | twitter) script to return SQL Agent Job statuses, and converted it to Dynamic SQL.

DECLARE
	@job_sql AS nvarchar(max),
	@job_name AS nvarchar(128) = N'syspolicy_purge_history', -- = N'No existy',
	@debug_mode_on AS bit = 1; -- 0;
DECLARE
	@nl AS nchar(2),
	@actual_job_name AS nvarchar(128);

/* Newline for formatting */
SET @nl = NCHAR(13) + NCHAR(10);

/* Job existence check */
IF @job_name IS NOT NULL
BEGIN
	SET @actual_job_name = (
		SELECT
			[name]
		FROM msdb.dbo.sysjobs
		WHERE
			[name] = @job_name
	);

	IF @actual_job_name IS NULL
	BEGIN
		DECLARE @err_msg AS nvarchar(max);

		SET @err_msg = FORMATMESSAGE(N'Cannot find any job labelled: %s', @job_name);
		RAISERROR(N'%s', 0, 1, @err_msg) WITH NOWAIT;
		RETURN
	END;
END;

SET @job_sql = N'USE msdb;
SELECT TOP (1)
	is_running = CASE
		WHEN ja.job_id is NOT NULL AND ja.stop_execution_date IS NULL
		THEN 1
		ELSE 0
	END,
	last_run_time = ja.start_execution_date,
	next_run_time = ja.next_scheduled_run_date,
	last_job_step = js.step_name,
	job_outcome = CASE
		WHEN ja.job_id IS NOT NULL AND ja.stop_execution_date IS NULL
		THEN N''Running''
		WHEN jh.run_status = 0
		THEN N''Failed''
		WHEN jh.run_status = 1
		THEN N''Succeeded''
		WHEN jh.run_status = 2
		THEN N''Retry''
		WHEN jh.run_status = 3
		THEN N''Cancelled''
	END
FROM dbo.sysjobs AS j
LEFT JOIN dbo.sysjobactivity AS ja
	ON ja.job_id = j.job_id
	AND ja.run_requested_date IS NOT NULL
	AND ja.start_execution_date IS NOT NULL
LEFT JOIN dbo.sysjobsteps AS js
	ON jh.job_id = js.job_id
	AND js.step_id = ja.last_executed_step_id
LEFT JOIN dbo.sysjobhistory AS jh
	ON jh.job_id AND j.job_id
	AND jh.instance_id = ja.job_history_id;'

/* Add filter: job_name */
IF @actual_job_name IS NOT NULL SET @job_sql = CONCAT(
	@job_sql, @nl,
	N'WHERE
	j.[name] = @ds_job_name'
);

/* Add sorting */
SET @job_sql = CONCAT(
	@job_sql, @nl,
	N'ORDER BY
	ja.start_execution_date DESC;
'
);

IF @debug_mode_on = 1
BEGIN
	RAISERROR(N'%s%s', 0, 1, @job_sql, @nl) WITH NOWAIT;
END;
ELSE
BEGIN
	EXECUTE [master].[sys].sp_executesql
		@stmt = @job_sql,
		@param1 = N'@ds_job_name AS nvarchar(128)',
		@ds_job_name = @actual_job_name;
END;

I was going to apologise for how long and messy it looked but I realised that’s par for the course with Dynamic SQL.

SET

There is this maxim that I’ve heard bandied about regarding code:

Code is read much more often than it is written[…]

https://devblogs.microsoft.com/oldnewthing/20070406-00/?p=27343

There should be an addendum on that quote for DBAs:

DBAs troubleshoot Dynamic SQL more often than they write it

Probably others, but definitely me

With that said, Dynamic SQL should have these items to help with that effort.

Proper Formatting

It’s extremely easy to write Dynamic SQL so that it comes out in one string: a hodge-podge of plus signs, variable assignments, and red text, so dense that sometimes it seems like a foreign coding language.

>++++++++[<+++++++++>-]<.>++++[<+++++++>-]<+.+++++++..+++.>>++++++[<+++++++>-]<++.------------.>++++++[<+++++++++>-]<+.<.+++.------.--------.>>>++++[<++++++++>-]<+.

The above code sample is apparently a working “Hello World” program in one of those languages.

Don’t do this; properly format your Dynamic SQL. It will help when the code shows up in your monitoring toolkits. You have them (and know how to use them), right?

To help check your formatting, Dynamic SQL should include…

A Debug Method

Troubleshooting is so much easier when you know what you are going to run. The amount of Dynamic SQL where you have to build that in your head while reading the code is ridiculous!

If you give a procedure filled with Dynamic SQL to a junior DBA, then you’re going to see how fast fear and resignation fill someone’s face.

It’s straightforward to create a debug method that shoots out the code that is going to be run. A secondary benefit is that it ensures you format your code properly because you can see how it is going to turn out.

IF @debug_mode_on = 1
BEGIN
	RAISERROR(N'%s%s', 0, 1, @job_sql, @nl) WITH NOWAIT;
END;
ELSE
BEGIN
	EXECUTE [master].[sys].sp_executesql
		@stmt = @job_sql,
		@param1 = N'@ds_job_name AS nvarchar(128)',
		@ds_job_name = @actual_job_name;
END;

A.O.B

There are a few other things that I like to add to Dynamic SQL but I will grant are not necessary. I’ll leave them to you to make up your own minds about.

Sanitise inputs

If the user passes in an object, ensure it’s there.

/* Job existence check */
IF @job_name IS NOT NULL
BEGIN
	SET @actual_job_name = (
		SELECT
			[name]
		FROM msdb.dbo.sysjobs
		WHERE
			[name] = @job_name
	);

	IF @actual_job_name IS NULL
	BEGIN
		DECLARE @err_msg AS nvarchar(max);

		SET @err_msg = FORMATMESSAGE(N'Cannot find any job labelled: %s', @job_name);
		RAISERROR(N'%s', 0, 1, @err_msg) WITH NOWAIT;
		RETURN
	END;
END;

Escaping input

Same point really – if you want to raise a warning or error with what is passed in, use something that escapes user input.

I tend to use FORMATMESSAGE for these bits.

EXECUTE

These might appear to be overkill but I have a (cough) contrived (cough) example: the code I took from Andy’s GitHub and converted to Dynamic SQL!

Quick Test with Debug

If we pass in no job name and leave debug mode on, it spits out the code sans any WHERE clause before the ORDER BY to bring us back all jobs:

If we change up the job name to one that does not exist e.g. @job_name = N'No existy'; then we verify that the job doesn’t exist and error out:

Passing in a job that does exist, then adds that job name to the WHERE clause (parameterised, of course):

Let’s Run It!

Turn off debug mode and…

That’s throwing up an error message – but I’d be confident in saying that anyone glancing over the code in the gist would miss the three mistakes.

Yes, three of them – I wrote one on purpose but created two more by accident.

Luckily we can turn debug mode back on, grab the output, and parse it in a new window:

The AND on the JOIN clause was intentional – the terminating semi-colon before the WHERE clause was not, neither was the mistake on the alias. The joys of re-writing code.

Fix these mistakes up in our Dynamic SQL, turn debug mode back off, and ready to re-run? Probably took around 5 minutes going slow, but now when we re-run the code:

Result Set

That’s one example of why I believe Dynamic SQL should have proper formatting, and debugging, and some small others.

I had more examples but they normally come to me around 02:00 in the morning while I’m swearing and sweating on an incident call.

I’d prefer to fix and forget them, after I make sure they don’t happen again.

Attempting SUM() OVER () in PowerShell

Words: 891

Time to read: ~ 5 minutes

Pro-Cras-Tin-Ation!

Like most things in life, this piece of work came about while attempting to complete something else. It’s not a bad thing, I expect it at this stage.

Easy Like Sunday Morning

I find it easy to get the total of a row in SQL. Hell, when it is not particularly important, I’ll even go the easy route and use a calculated column in the table.

CREATE TABLE dbo.PushupsOctober
(
	pushup_date date NOT NULL
		CONSTRAINT PK_PushupsOctober PRIMARY KEY CLUSTERED,
	attempt_01 tinyint NULL,
	attempt_02 tinyint NULL,
	attempt_03 tinyint NULL,
	attempt_04 tinyint NULL,
	attempt_05 tinyint NULL,
	attempt_06 tinyint NULL,
	attempt_07 tinyint NULL,
	attempt_08 tinyint NULL,
	total_pushups_per_day AS (ISNULL(attempt_01, 0) + ISNULL(attempt_02, 0) + ISNULL(attempt_03, 0) + ISNULL(attempt_04, 0) + ISNULL(attempt_05, 0) + ISNULL(attempt_06, 0) + ISNULL(attempt_07, 0) + ISNULL(attempt_08, 0))
);
GO

Then, all I have to do is insert the data and SQL will automatically take care of calculating the total per row for me.

INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/07/2020 00:00:00', 20, 20, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/08/2020 00:00:00', 20, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/09/2020 00:00:00', 20, 20, 25, NULL, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/10/2020 00:00:00', 25, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/11/2020 00:00:00', 0, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/12/2020 00:00:00', 25, 25, 25, 25, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/13/2020 00:00:00', 20, 15, 15, 25, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/14/2020 00:00:00', 30, 30, 20, 20, 25, 20, 20, 20);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/15/2020 00:00:00', 25, 25, 25, 25, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/16/2020 00:00:00', 25, 25, NULL, NULL, NULL, NULL, NULL, NULL);
INSERT INTO dbo.PushupsOctober (pushup_date, attempt_01, attempt_02, attempt_03, attempt_04, attempt_05, attempt_06, attempt_07, attempt_08) Values('10/17/2020 00:00:00', 25, NULL, NULL, NULL, NULL, NULL, NULL, NULL);
GO

SELECT	*
FROM	dbo.PushupsOctober;
GO
Why count when you don’t need to?

Once you have the total per row, you throw in a SUM(that total) OVER () and you have a grand total. Thank you to Kevin Wilkie (blog | twitter) for re-igniting my curiosity about Window Functions again.

SELECT	*,
		SUM(p.total_pushups_per_day) OVER () AS total_so_far 
FROM	dbo.PushupsOctober AS p;
GO
Total total

Easy Like Monday Morning

PowerShell is a different beast. Please don’t get me wrong; I still love the language. I don’t find it easier to get a row total and then a grand total though.

It’s possible! I’m just hoping that there is a better way. Saying all that, here is my attempt at a row total and grand total using PowerShell.

If you have a better way (you choose the conditions that satisfy “better”) please let me know.

Grabbing the Data

First, let’s grab the data from the table in our database.

$data_2 = Invoke-DbaQuery -SqlInstance localhost -Database LocalTesting -Query @'
SELECT * FROM dbo.PushupsOctober;
'@

Removing Unwanted Properties

Here’s where I remembered that I had a calculated column, realised that it would be cheating to use it and decided it needed to go. Thankfully, this also enabled me to get rid of those pesky columns that get returned from Invoke-DbaQuery when you forget the parameter -As PSObject!

$data_2 = $data_2 | Select * -ExcludeProperty RowError, RowState, Table, ItemArray, HasErrors, total_pushups_per_day

Grabbing Property Names

There’s a couple of things that we need here. We need a way to add up all the “attempt” columns so we need a way to select them all.

$props = $data_2[0].PSObject.Properties | Where-Object Name -like 'attempt*' | Select-Object -ExpandProperty Name

There, that should do nicely!

Grabbing the Values for those Properties

Now, we can iterate over them and get all the values.

foreach ($p in $props) { $data_2[0] | Select -ExpandProperty $p }

Potential Problem

See all those empty lines? Yep, that’s a potential problem for Measure-Object.

$hasToBeAnEasierWay = foreach ($p in $props) { $data_2[0] | Select -ExpandProperty $p }
$hasToBeAnEasierWay | Measure-Object -Sum

Removing NULL or WhiteSpace

Thankfully, there’s a way to get rid of those empty lines.

$hasToBeAnEasierWay | Where-Object { -Not [String]::IsNullOrWhiteSpace($_) } | Measure-Object -Sum

Putting it ALL Together

Now that we have the skeleton of a script ready, let’s put it all together.

Row total

$data_2 | ForEach-Object -Begin {
    $props = $data_2[0].PSObject.Properties | Where-Object Name -like 'attempt*' | Select-Object -ExpandProperty Name
} -Process {
    $total = $null
    $hasToBeAnEasierWay = $null

    $hasToBeAnEasierWay = foreach ($prop in $props) {
        $_ | Select-Object -ExpandProperty $prop
    }
    $total = ($hasToBeAnEasierWay | Where-Object { -Not [String]::IsNullOrWhiteSpace($_) } | Measure-Object -Sum).Sum

    $_ | Select-Object -Property *, @{
        Name = 'total_per_day'
        Expression = { $total }
    }, @{
        Name = 'days_left'
        Expression = { ((Get-Date -Date '2020-10-31') - (Get-Date -Date $_.pushup_date)).Days }
    }
} -OutVariable data_3 | Format-Table -Autosize

We now have the row total in our total_per_day property. And, with our use of -OutVariable data_3, we have the results saved into a variable called $data_3.

Grand Total

Once we have a single column that we can sum up to give us our grand total, then PowerShell makes this operation trivial.

I do have to use Format-List here because Format-Table can’t fit all the properties in, so our new property total_so_far won’t show up.

$data_3 | Select-Object -Property *, @{
    Name = 'total_so_far'
    Expression = { ($data_3 | Measure-Object -Property total_per_day -Sum).Sum }
} | Format-List

There We Go!

While I’d argue that it’s not as easy as SQL, it’s completely possible to get row totals and grand totals in PowerShell.

Honestly though, I hope there’s an easier way. Otherwise, I’m going to do it in SQL and then grab it out into PowerShell afterwards.

T-SQL Tuesday #131: Data Analogies, or: Explain Databases Like I’m Five!

Words: 906

Time to read: ~ 5 minutes

Welcome to T-SQL Tuesday! This month’s host is Rob Volk (blog|twitter) and the topic is Data Analogies.

Analogies

Come in! Come in, my boy!
Now, your father sent you to me to explain what I used to do.
Well, I was a DBA, my boy!
What do you mean “what does that do”? I took care of the databases! I made sure that they were looked after, that they had vim and vigour, and that they didn’t go down.

What? No, I don’t mean “down-down”, it’s not a direction! I mean that they didn’t go offline. No, not “off-off”, well not quite… It was my duty to ensure that they were available. Got it? No?

Database Administration

Well, let’s take that whiskey cabinet over there Jas…Jaeysin. Let us say that the whiskey cabinet is our database, and it is our job to see that it is never empty. We do that by lots of ways; by checking on it and making sure that nothing is alarming, like empty bottles. We check that there is space if someone wants to add more whiskey and that anyone who wants something from it, can get it. Like me, hehe.

What? You don’t understand how that’s like being a DBA? Well think about it my boy, I would check on the databases, make sure nothing was alarming, and that the data was always available for whoever wanted it.

Security

What’s that? You want some? Ho ho, my boy, you are one for jests. I tell you what, try opening the cupboard door on the left. Yes, the one where you can see all the sticks and cherries through the glass. Not a problem for you, was it? Put back the cherry, please. And wipe your hands…NOT ON THE GLASS!
Nevermind, my boy, nevermind, I shouldn’t have put the soda water in a dangerous place like that…inside a cupboard…away from the ledge. Try and open the right cupboard door now. Yes, the one with the fancy bottles and the shiny lights. Yes, I’m aware it’s locked. Now see, you cannot open that door but I can because I have permission to open that door.
That was another part of my job, making sure that people had the right permission to get what they wanted, and that people without permission could not.

What’s that? Who decides on the permissions? Well, back then, it was a business decision. Where those higher up in standing would pick, and I would have to follow their instructions. Now, I’m in charge.

What do you mean that’s not what your father says? Right, well, I’ll be having a few words with him, and we’ll see more about this “under his partner’s thumb” business. No, I can’t open it either. Because I don’t have the key. Yeah well, I may be db_owner, but not sysadmin… Nevermind.

Performance

What else did I do? Well, I made sure those who wanted data from the database knew where they could get the data and could get the data promptly.
Well, do you see the whiskey cabinet? Yes, the one on the right. Yes, the one you’re not allowed open. Yes, you’re allowed to look…oi! Quit your cheek, or I’ll add that to the list of things I’ll have to talk to your father about.
Now, if someone were to go to that cabinet wanting a nice Scotch, they only have to reach into the middle shelf, and they would have a whole choice of Scotch to choose from. I know that because I know that my middle shelf is only for Scotch.
Same with the databases; I helped people to create tables to hold their similar data the same way I choose that shelf to have my Scotch.

And see the way that the bottles get fancier as you scan your eyes from left to right. I ordered that shelf, so the most expensive bottles were all the way over there on the right. And, the least expensive bottles are all over to the left.
Same with the databases; I would create indexes so that people could go to what they wanted, whether that be the “expensive bottles” or not.
No more looking through the entire table for what they want, they knew exactly where to go and knew when they could stop looking as well.

What? Indexes, my boy, an index. No, it has nothing to do with your finger! Wait! You may be on to something there. Tell me, have you ever run your index finger down a phone book? A phone book. You’ve never heard of it? Hold on…this thing? Ever seen this? Big yellow book, lots of information? Yes, I know we have Google, nevermind.

Redundancy

Was that it? No lad, that was not it. The world of data is vast and open. So much more than what an analogy using a whiskey cabinet can provide. But I will leave you with one more analogy; Redundancy. It’s always essential to have a redundant copy of your data, a recovery plan if a disaster were ever to strike.
Open that desk drawer for me, my boy. Yes, it’s another bottle of whiskey. A redundant backup, if you would. Now I believe I see your father pull into the drive, so it is time for you to leave. I shall go see if my backup can still be restored.

Goodbye, you little disaster.

Minimum Permissions for Get-DbaDbUser

Words: 806

Time to read: ~ 4 minutes

Update: 2020-07-15 – Thank you Garry Bargsley for being an unofficial editor 🙂

Update: 2020-07-17 – Thanks to Shawn Melton for spot-checking this and letting me know ALL permissions needed!

TL;DR:
All Users:
A user on the database with ALTER ANY USER permission.

Current User and System Users:
To work against all databases for the current user and system users requires CONNECT ANY DATABASE.


Update


2020-07-17

Thanks to Shawn Melton for pointing out that CONNECT ANY DATABASE allows the user to see only themselves and the system users.

To see all users from Get-DbaDbUser, the caller will need a user on the databases and the ALTER ANY USER permission.

CONNECT ANY DATABASE

Get-DbaDbUser -SqlInstance localhost -SqlCredential $Cred -Database __DBA -EnableException -Verbose |
    Group-Object -Property Database

4 records are returned – the user itself and the system users.

User & ALTER ANY USER

USE __DBA;
GO

CREATE USER LimitedPermissions FROM LOGIN LimitedPermissions;
GO

GRANT ALTER ANY USER TO LimitedPermissions;
GO

The LimitedPermissions login now has a user in the database and we’ve granted that user the ALTER ANY USER permission.

Get-DbaDbUser -SqlInstance localhost -SqlCredential $Cred -Database __DBA -EnableException -Verbose |
    Group-Object -Property Database

Now, we can see all the users; the user itself, the system users, and the other user I created on the database.


Original Article


The Backstory

Work is in the process of automating tasks. Part of this automation includes verifying the automation that we’ve done.

Where am I going with this?

Well, when we’ve automated the creation of database users we also want to verify that we’ve created the users that we say we’ve created.

My fellow co-workers have, thankfully, seen the dbatools light and we use the command Get-DbaDbUser to get the users in a database and compare the list against the users we were supposed to create.

If there are any users that should have been created but don’t show up, well then we have a problem.

The Principle of Least Privilege

Works fine for me […] but it looks like […] can’t run it with her “public” access to the db server.

I’m not going to sugarcoat things – the person that sent me the request has more access than they rightly need. The “public” access worker did not need any of that access so I wasn’t going to just give her the same level.

Plus, we’re supposed to be a workforce that has embraced the DevOps spirit and DevOps is nothing if it doesn’t include Security in it.

So, if I could find a way to give the user enough permission to run the command and not a lot more, then all the better.

But, I was surprised how difficult it was to find out what permissions were needed to run Get-DbaDbUser. Even more surprised when I failed and realised I’d have to find out myself.

If anyone else can Google/Bing it and get the answer, please let me know 😐

The Test

Let’s create a new user with no permissions in SQL Server.

USE [master];
GO

CREATE LOGIN LimitedPermissions WITH PASSWORD = N'MorePermissionsMoreProblems!';
GO

Now let’s test it out. I have a database in my instance called __DBA. Can we access the users in that database?

<#
    $Cred
    -----
    Username = LimitedPermissions
    Password = 'MorePermissionsMoreProblems!'
#>
Get-DbaDbUser -SqlInstance localhost -SqlCredential $Cred -Database __DBA -EnableException

It doesn’t work. What’s even more surprising is that it silently doesn’t work. No warnings, no permissions errors, or nothing. And I included the -EnableException switch!

The Investigation

It’s good to know that you can check out the contents of the dbatools (and other) commands from PowerShell. No, I’m not talking about opening the .ps1 files. I’m talking about using the Function:\ psdrive.

Get-ChildItem -Path Function:\Get-DbaDbUser |
    Select-Object -ExpandProperty Definition

See those $server.databases and $db.users? For me, that means that it’s using SMO (Server Management Objects). If there was any hope of me google/binging permissions before this, well it’s gone now.
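In other words, it’s doing something roughly like this under the hood (a simplified sketch, assuming the SMO assemblies are already loaded and using integrated auth):

<# Roughly what the command does via SMO; dbatools wraps the connection and credential handling #>
$server = [Microsoft.SqlServer.Management.Smo.Server]::new('localhost')
$server.Databases['__DBA'].Users |
    Select-Object Name, LoginType, UserType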

The Will is going

To cut a rather long story short, eventually I came around to thinking that maybe it only needs to connect to the database. So let’s try that.

USE __DBA;
GO

CREATE USER LimitedPermissions FROM LOGIN LimitedPermissions;
GO

And now let’s try our Get-DbaDbUser command again.

Get-DbaDbUser -SqlInstance localhost -SqlCredential $Cred -Database __DBA -EnableException -Verbose |
    Select-Object Database, Name, LoginType, UserType

Double-Checking

Let’s try all of the databases on the instance now

Get-DbaDbUser -SqlInstance localhost -SqlCredential $Cred -EnableException -Verbose |
    Group-Object -Property Database
Oh it has the system databases as well now!

Apart from the system databases (excluding model), it only works on __DBA.

Give it all

Now, let’s use the CONNECT ANY DATABASE server permission.

USE [master];
GO
GRANT CONNECT ANY DATABASE TO LimitedPermissions;
GO

And we’ll run against all databases again.

Get-DbaDbUser -SqlInstance localhost -SqlCredential $Cred -EnableException -Verbose |
    Group-Object -Property Database

Sin é

That’s it! The minimum permission that I could find for Get-DbaDbUser is the permission to connect to the database.

Hope that helps!

Retrospective: Speaking at DataScotland

Words: 315

Time to read: ~ 1.5 minutes

Data Scotland

On the 13th of September 2019, I spoke at DataScotland; my first time talking at a data conference.

My quasi-clickbait title was Feel Validated with dbachecks. If you guessed that I was talking about dbachecks then you’re right.

This is a brief retrospective of that time. Thinking back on that time still makes me relive the emotions that I felt. Nervousness, excitement, and panic.

Good times!

Appreciation

This was my first time speaking at a conference as well as my first time attending Data Scotland.

I recommend that you check it out. It’s an amazing conference created by passionate people and staffed by dedicated volunteers.

The Good

What didn’t count as the good would be the easier way to put this!

  • Amazing fellow speakers.
  • Getting to meet other first-time speakers.
  • Talking with the volunteers.
  • Speaking with attendees.
  • Seeing people who I hadn’t seen in a long time.

Thank you to Craig Porteous, Louise Paterson, Paul Broadwith, and Robert French for all your work and encouragement.

Thank you as well to Brent Miller, Andrew Pruski, David Alcock, and John McCormack for help with the presentation.

The Bad

Feeling drained.

I don’t put this down to DataScotland though.
You may not have heard from me for the last month. I felt drained and took time off from public exposure.

A full year of constant working on the day job and personal work.
2 conferences a month on average for the last year.
Spreading myself out on projects about SQL, PowerShell, Containers, Python, AWS, and Azure without rest.
It’s not something I could sustain without factoring in sharpening the axe time.

Overall

I’m easing myself back into things again with the caveat of planning ahead and making sure I don’t overwhelm myself.
First thing on my list, planning for DataScotland next year.

Whether it’s speaking, volunteering, or attending, I’ll be there.