T-SQL Tuesday #150 – Your First Technical Job

Words: 597

Time to read: ~ 3 minutes

Welcome to T-SQL Tuesday 150. This month, Kenneth Fisher (Blog | Twitter) asked us about our first technical job.

I’ve talked before about my first technical job for what feels like too many posts. So I will go back even further to my first job that required domain-specific knowledge.

Today, I will talk about being a DBA and being a beach lifeguard. Well, I’m going to attempt to, anyway.

Are there any similarities between the two?

Pre-conceived notions

First up, we have the notions that come with the jobs.

Baywatch deeply colours the picture that springs to mind when thinking of lifeguards.

Athletic supermodels and super hunks, slow-motion springing across the beach, rescue buoy in hand, ready to save lives!

Have you ever tried to run across beaches in Ireland? 

90% of them are sharp stones! I’ve heard that the same percentage of statistics are made up, but I think you can get the picture. 

It’s not so much sprinting across the beach as speed limping while being pelted with rain.

Ideas of DBA-ness, while not as universal as lifeguarding, also had the same gallant imagery for me. 

Dreams of the heroic DBA, toiling at the keyboard, magically reviving downed servers, recovering data from corrupted DBs against all odds.
A sage wizard in the corner who can safely navigate the waters of performance traps.

Romantic ideas, eh? 

I’ve since redefined that image. Not so much a wizard as a cinema usher.

Rushing around, trying to have things in the proper place before the next deadline.
Cleaning up after people have thoroughly enjoyed themselves. 
Attempting to keep a low profile while keeping things moving along. 
Trying not to gorge on food while you work…no? Just me?

Oddly though, I enjoy(ed) both. 

Lifeguarding, even with its rocks and rain and day-long stretches of staring at an empty sea, and DBA work, even with its constant shift of intense workloads and tight deadlines.


What about the differences though?

Pathways

There is a defined method to becoming a lifeguard, and certifications are required.

There are practical tests, and there are renewals. There are exams on nearly everything! 
Actually not everything; they don’t test if you can run on stones.

These certs are required – “no cert-y, no lifeguard-y”, as a former teacher used to say.

He was friendly, but I don’t think his poetry career ever really took off.

Once you were certified, though, that was it. All done until your next renewal or until CPR techniques got updated.

DBAs, not so much. 

It’s all on you.

This job is the only one I know of where we have a phrase for someone who “fell” into the role. Either due to proximity to the server or from mistakenly looking up when a manager mentions the word “database”.

There are shops where you can get by just ensuring that backups get taken – maybe restoring the odd backup to get data from a deleted table.

There are also places where the constant thrum of projects and daily work and new technologies will have you sprinting to keep up.

Sure, there are certifications out there, but they change as much as the underlying technology changes. 

There is no “definition of done” despite people creating videos, lectures, and courses about it. You learn and learn and continue learning until people stop producing new technologies.

Take a guess what happens then? Here’s a hint, it rhymes with “a mother mew fecknowledgy”…

Sorry, he was an influential teacher….


So now for the million-dollar question: which job do I like best?

It depends.

TSQL2sday #149: Advice you’d give your younger self

Words: 459

Time to read: ~ 3 mins

Welcome to TSQL Tuesday 149! The monthly blogging party where we are given a topic to write a post around.

This month Camila Henrique (blog) asked us about the advice we would give our younger selves.

It’s not that I’m smart; it’s just that I stay with problems longer…mainly cause I’ve caused them.

Albert Einstein, probably

Nobody likes a show-off

Really? There’s no need for a stored procedure because you can write out the syntax to update a business order by memory? Wow, that’s great. </sarcasm>

Are you expecting everyone new to learn the code as well?
How useful is that going to be when you leave?

“Run this procedure and pass in the business order id” versus “So, what you gotta do here is join these tables based on these ids but only when the square root of -1 isn’t i”.
Which do you think is the better option, Einstein?

Also, do you realise how much time you are wasting by manually typing out all the T-SQL code each and every time?!

Simple advice here: do not turn down a good idea because you think you don’t need it. You’re not half as smart as you think you are, and you’re twice as dumb as you feel.

Pride and Prejudice

If you want to go fast, go alone. If you want to go far, go together. Whatever you do, go away!

My sister, I guess

This is another instance of not knowing a good thing when it’s staring you in the face! If programming languages could stare, that is.

You’re soon going to be shown PowerShell as a way to automate some work. Yes, it’s version 2, so it will be rough around the edges. Hell, you’re rough around the edges! You’d like people to take a chance on you, so take a chance with it.

It, like you, will improve. However, it’ll improve at a rate and level that you can only hope to achieve.

Thanks to PowerShell, you will meet people that you never would otherwise. You will learn aspects that will improve every facet of your life. Prejudice ruins a lot of things, don’t let it ruin you!

There are two mistakes one can make along the road to truth… not going all the way, and not starting. No documentation is a close third, though.

Buddha, I’m assuming

To tie these together and link back to a recent post by Ken Fisher (blog | Twitter): there is no such thing as a one-off request.

There is nearly always a request that starts with, “Hey, you know that query you ran for me the last day?”

Learn to document, learn to automate, and learn to use Source Control, ya git.

Only One Join-Path Is Needed

Time to read: ~ 3 minutes

Words: 571

Update: Learning from my mistakes aka Failing Up

Update: Reliably informed that `-AdditionalChildPath` was added after PowerShell 5.1

Join Me For a Moment

There’s a multitude of scripts out in the wild with a chain of Join-Path commands. Initially, when I wanted to create a path safely, a chain of Join-Path cmdlets was also my go-to. However, after reading up on the documentation, I realised there is another way: I only need a single instance of the Join-Path command.

Target Location

My PowerShell console is open in my home folder, and I’ve a test file: /home/soneill/PowerShell/pester-5-groupings/00-run-tests.ps1.

If I wanted to create a variable that goes to the location of that file, one of the safe ways of doing that is to use Join-Path.

Long Form

I mean, I could create the variable myself by concatenating strings, but then I’d have to take the path separator into account depending on whether I’m on Windows or not.

Apparently not…

$var = ".\PowerShell\pester-5-groupings\00-run-tests.ps1"

[PSCustomObject] @{
  Type      = 'Long Form'
  Separator = 'Manual entry: \'
  Variable  = $var
  Path      = try {Get-ChildItem -Path $var -ErrorAction Stop} catch {'Error!'}
}
Forward becomes back

I thought this wouldn’t work but, when running the code samples, it appears that PowerShell doesn’t mind me using a forward-slash (/) or a back-slash (\); it’ll take care of the proper separator for me.

UPDATE: This way works fine from a file, but run the script from a PowerShell terminal and it’s a no-go.

No, you’re not the one for me

UPDATED UPDATE: Thanks to Cory Knox (twitter) and Steven Judd (twitter) for pointing out that this fails because it’s using /bin/ls instead of the Get-ChildItem alias:

Manual Creation

A more explicit, cross-platform method would be to use the [IO.Path]::DirectorySeparatorChar.

$sep = [IO.Path]::DirectorySeparatorChar
$var = ".${sep}PowerShell${sep}pester-5-groupings${sep}00-run-tests.ps1"

[PSCustomObject] @{
  Type      = 'Manual Creation'
  Separator = "[IO.Path]::DirectorySepartorChar: $sep"
  Variable  = $var
  Path      = try {Get-ChildItem -Path $var -ErrorAction Stop} catch {'Error!'}
}
The long way around

This method works fine but creating the path can get very long if I don’t use a variable. Even using a variable, I have to wrap the name in curly braces because of the string expansion method I used. That’s not something that I would expect someone picking up PowerShell for the first time to know.
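To show why the braces matter, here’s a quick sketch: without them, PowerShell reads everything up to the next non-variable character as the variable name.

$sep = [IO.Path]::DirectorySeparatorChar

# Without braces, PowerShell parses $sepPowerShell as one (non-existent) variable
# name, which expands to an empty string.
".$sepPowerShell$sep"       # gives '.\' on Windows - not '.\PowerShell\'

# The braces mark where the variable name ends.
".${sep}PowerShell${sep}"   # gives '.\PowerShell\' as intended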

-f Strings

In case you’re wondering, another string expansion method here would be to use -f strings.

$sep = [IO.Path]::DirectorySeparatorChar
$varf = '.{0}PowerShell{0}pester-5-groupings{0}00-run-tests.ps1' -f $sep

[PSCustomObject] @{
  Type = 'F String'
  Separator = "[IO.Path]::DirectorySepartorChar: $sep"
  Variable  = $varf
  Path      = try {Get-ChildItem -Path $varf -ErrorAction Stop} catch {'Error!'}
}
It’s hard to google for the F word

Many Join-Path Commands

Better yet would be if I didn’t have to account for the separator at all. Here’s where the multiple Join-Path cmdlets come into play.

$var2 = Join-Path -Path . -ChildPath PowerShell | Join-Path -ChildPath pester-5-groupings | Join-Path -ChildPath 00-run-tests.ps1
 
[PSCustomObject] @{
  Type      = 'Many join paths'
  Separator = 'Taken care of: Join-Path'
  Variable  = $var2
  Path      = try {Get-ChildItem -Path $var2 -ErrorAction Stop} catch {'Error!'}
}
One, Two, Many, Lots

Multiple Join-Path commands work fine. There’s no real issue with using this approach, but there is another!

Only One Join-Path Needed

Join-Path has a parameter called -AdditionalChildPath that takes the remaining arguments from the command line and uses them in much the same way as a Join-Path command chain would.

$var3 = Join-Path -Path . -ChildPath PowerShell -AdditionalChildPath 'pester-5-groupings', '00-run-tests.ps1'

[PSCustomObject] @{
  Type = 'AdditionalChildPaths'
  Separator = 'Taken care of: Join-Path'
  Variable = $var3
  Path = try {Get-ChildItem -Path $var3 -ErrorAction Stop} catch {'Error!'}
}
One join to rule them all…
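Since -AdditionalChildPath gathers up the remaining arguments, the call can be trimmed down even further by passing everything positionally. A sketch, assuming a PowerShell version recent enough to have the parameter (see the update at the top of this post); the variable name is mine:

# Everything after the first two positional arguments lands in -AdditionalChildPath
$var4 = Join-Path . PowerShell 'pester-5-groupings' '00-run-tests.ps1'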

More Output than Put Out

So there you go—more than one way to join a path. Use whichever ones work for you. It’s good to know your options, though.

Table Column Differences Part 03 – Compare-SqlTableColumns

Words: 470

Time to read: ~ 2 minutes

Don’t talk to me about it!

Four years ago (I know, where did the time go?), I wrote about Table Column Differences with T-SQL and PowerShell.

A reader, Michal, commented on the post, asking how to get a specific output from his search.

Hi,

Thanks for your sharing. What if I also want to compare case sensitively columns and the order of them (syncwindows). How can I presented it on powershell.

I mean that in the final table I want to show also something like: column_a, column_A –> case sensitive

AND

column_a, column_a –> different order in the table

Thanks in advance

Michal

I confess that I never got around to answering Michal until a few weeks ago when I found myself with some rare free time.

Since then, I’ve written a script, slapped it into a function, and thrown it up on GitHub.

Here’s hoping that it does what you want this time, Michal. Thanks for waiting.

Shall I Compare Thee to Another Table?

The first thing that we need to do is have a couple of SQL tables to compare.

So, I spun up a Docker container and created a couple of tables with nearly the same layout.

(Get-DbaDbTable -SqlInstance localhost -Table 'dbo.DifferenceTable01', 'dbo.DifferenceTable02').Columns |
        Select-Object -Property Parent, Name, ID, DataType |
        Format-Table -GroupBy Parent
I’m liking the new PowerShell formatting

You can see that there are three differences here:

  1. Column orders, e.g. col9 has id 6 in dbo.DifferenceTable01 but id 5 in dbo.DifferenceTable02.
  2. Column case sensitivity, e.g. col7 does not match COL7.
  3. Column presence, e.g. col3 doesn’t exist in dbo.DifferenceTable01 at all.

While Compare-Object has the -CaseSensitive switch, I don’t think that it would be helpful in all these cases. Or else I didn’t want to use that command this time around.

So, I wrote a function to get the output we wanted, and yes, I now include myself among that list of people wishing for that output.

I’m allowed to be biased towards the things that I write 🙂

Compare-SqlTableColumns

Compare-SqlTableColumns -SqlInstance localhost -Table1 'dbo.DifferenceTable01' -Table2 'dbo.DifferenceTable02' |
        Format-Table

I’ve tried to include everything you could want in the function output, i.e. column names, column ids, and statuses.

Something I’ve started to do lately is wrapping a [Diagnostics.Stopwatch] in my verbose statements to see where the potentially slow parts of the function are.
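The pattern itself is only a couple of lines. A minimal sketch of what it can look like inside a function (the function name and messages are illustrative, not the actual Compare-SqlTableColumns internals):

function Measure-Example {
    [CmdletBinding()]
    param ()

    # Start the clock once, then stamp every verbose message with the elapsed time
    $StopWatch = [System.Diagnostics.Stopwatch]::StartNew()

    Write-Verbose -Message "[$($StopWatch.Elapsed)] Gathering columns..."
    Start-Sleep -Milliseconds 100    # stand-in for the real work

    Write-Verbose -Message "[$($StopWatch.Elapsed)] Comparing columns..."
    $StopWatch.Stop()
}

Measure-Example -Verbose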

I’d like to think that 0.2 seconds for this example isn’t too bad.

$x = Compare-SqlTableColumns -SqlInstance localhost -Table1 'dbo.DifferenceTable01' -Table2 'dbo.DifferenceTable02' -Verbose

$x | Format-Table

Thou hast less columns than thine brother…

Feel free to use and abuse this function to your heart’s content. I know that there are a few things that I’d add to it; comparing across different instances is an obvious one that I’d like to put in.

Hopefully though, someone out there will find it helpful.

Here’s looking at you, Michal.

T-SQL Tuesday #143 – Short code examples

Time to read: ~ 2 minutes

Words: 328

Welcome to T-SQL Tuesday, the monthly blog post invitational where we’re given a topic and asked to write about it.

This month we have John McCormack (Blog | Twitter) asking, “What are your go-to handy scripts?”

For this post, I’m going to break these down into different languages.

SQL

I once had the annoyingly complex T-SQL to change MS format time into a human-readable format memorised.

SELECT
    time_MS_format = [TimeMSFormat],
    converted_time = '2021-10-12 ' + 
    STUFF(
        STUFF(
            RIGHT('000000' + X.TimeMSFormat, 6), 3, 0, ':'
        ), 6, 0, ':'
    )
FROM (VALUES
     ('00000')
   , ('00500')
   , ('01000')
   , ('10000')
   , ('10500')
   , ('100000')
   , ('100500')
   , ('115500')
   , ('120000')
) X ([TimeMSFormat]);

Then I read a blog post from Kenneth Fisher (Blog | Twitter) about the in-built msdb database function dbo.agent_datetime.

SELECT
    time_MS_format = [TimeMSFormat],
    new_function = msdb.dbo.agent_datetime(20211012, X.TimeMSFormat)
FROM (VALUES
     ('00000')
   , ('00500')
   , ('01000')
   , ('10000')
   , ('10500')
   , ('100000')
   , ('100500')
   , ('115500')
   , ('120000')
) X ([TimeMSFormat]);

If I run sp_helptext on that function, it reminds me of that Andy Mallon (Blog | Twitter) post.

It would be more performant if I stripped out that function and used its code directly, but the function is too handy for the infrequent times I need it.
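If you want to pull that definition out from PowerShell, something like this should do it (a sketch, assuming dbatools and the same localhost instance used later in this post):

# Read the function's definition out of msdb
Invoke-DbaQuery -SqlInstance localhost -Database msdb -Query "EXEC sp_helptext 'dbo.agent_datetime';" |
    Select-Object -ExpandProperty Text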

PowerShell

I’ve talked about using ConvertTo-SqlSelect in a blog post before, and I still use that function a lot!

Another short piece of code that I use is more for formatting than anything else. You can populate a variable with an array of property names. Select-Object can use this variable to return information.

 $Properties = 'SqlInstance', @{Name = 'DatabaseName'; Expression = {$_.Name}}, 'Status', 'RecoveryModel', 'Owner'

Get-DbaDatabase -SqlInstance localhost -SqlCredential $Cred | Select $Properties

A useful snippet for reporting is to use a combination of Sort-Object and the Format-* commands with the -GroupBy parameter.

Get-DbaDatabase -SqlInstance localhost -SqlCredential $Cred |
    Select $Properties |
    Sort-Object RecoveryModel |
    Format-Table -GroupBy RecoveryModel

Sin é

When I sat down to write this post, I realised that I don’t have a lot of handy scripts. Either I re-write things constantly (that’s likely), or I don’t know enough yet (also likely). I should fix that.

The Surprising Working of TrimEnd

Time to read: ~ 2 minutes

Words: 397

A couple of days ago, I was running some unit tests across a piece of PowerShell code for work and a test was failing where I didn’t expect it to.

After realising that the issue was with the workings of TrimEnd and my thoughts on how TrimEnd works (versus how it actually works), I wondered if it was just me being a bit stupid.

So I put a poll up on Twitter, and I’m not alone! 60% of the people answering the poll had the wrong idea as well.

Let’s have some code to show what we mean.

'Shanes_sqlserver'

Incorrect Ideas

The vast majority of code that I have seen out in the wild has strings as the inner portion of TrimEnd:

'Shanes_sqlserver'.TrimEnd('sqlserver')


The code works how I thought that it would, removing the “sqlserver” portion of the string at the end. Now, let’s try it again and remove the underscore as well.

'Shanes_sqlserver'.TrimEnd('_sqlserver')



See! Where have my “s” and “e” gone?!

Let’s look at the overload definitions for TrimEnd by running the code without the brackets after the method.

'Shanes_sqlserver'.TrimEnd


No overload definition takes a string; they either take a char or an array of chars. Is that what’s happening here?

# Takes an array of chars
'Shanes_sqlserver'.TrimEnd('_', 's', 'q', 'l', 'e', 'r', 'v')

# Turns a string into an array of chars
'Shanes_sqlserver'.TrimEnd('_sqlerv')

# Order doesn't matter either
'Shanes_sqlserver'.TrimEnd('vrelqs_')

A New Way of Thinking

So TrimEnd takes the characters that we provide inside the method and removes them from the end until it reaches the first non-matching character.

This explains why our first example, with TrimEnd('sqlserver'), removes everything up to the underscore.

'Shanes_sqlserver'.TrimEnd('sqlserver')
# -----^ First non-matching character (_)


However, when we include the underscore, the first non-matching character shuffles back.

'Shanes_sqlserver'.TrimEnd('_sqlserver') 
# --^ First non-matching character (n)

Initial Problem

Now that we have a new understanding of how TrimEnd works, how can we remove the “_sqlserver” part of the string?

Split it in two.

'Shanes_sqlserver'.TrimEnd('sqlserver').TrimEnd('_')
# -----^  First non-matching character (_)
# ----^  First non-matching character after first TrimEnd (s)

This rewrite works for us since we have a defined character that acts as a stop-gap. If that stop-gap isn’t possible, then -replace may be our best option.

'Shanes_sqlserver' -replace '_sqlserver'
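One caveat with -replace: it treats the pattern as a regular expression. Our underscore is harmless, but if the text being removed might contain regex metacharacters, it’s safer to escape it first (a quick sketch):

# -replace patterns are regex, so escape anything that could contain metacharacters
'Shanes_sqlserver' -replace [Regex]::Escape('_sqlserver')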

Always good to get a better understanding of PowerShell. If my tests catch more of these misunderstandings that I can learn from, then I’m OK with that!

T-SQL Tuesday #140: What have you been up to with containers?

Time to read: ~ 2 minutes

Words: 335

Kubernetes

So this is a post that will not be educational, but it’s the latest encounter that I’ve had with containers, so it’s the most present in my mind.
Hey, hopefully, it brings a laugh to some people.

I’ve been looking into Kubernetes. I’ve not gotten very far with it, but I managed to set up a replica in Ubuntu WSL2 on my laptop.


Everything was all well and good, apart from being unable to connect to the database from Azure Data Studio. But again, mostly all good.

Fast forward a couple of days to when I was trying to share my screen: my laptop started getting very slow, the fans started getting very loud, and the performance just tanked.

Taking a look at the ol’ Task Manager, I saw a “vmmem” process taking a massive amount of memory. A quick Google search pointed to virtual machines as the culprit.

Here started what I can only describe as a Benny Hill sketch where I tried to remove the pods, only to have Kubernetes create them again!

Remove the pods – check for pods – the same number created a few seconds ago!
Argh!!!

Containers

Eventually, I dropped the pods and managed to get my laptop under control.
Still wanting to have a SQL instance to work with, I managed to spin up a Docker container and have a developer instance of SQL 2019 up and running on my laptop.

Thankfully I know enough about containers to stop the instance when I don’t need it and only start it up again when I do.
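For anyone curious, that workflow looks roughly like this (a sketch; the container name and password here are made up):

# Spin up a SQL Server 2019 container (name and password are illustrative)
docker run -e 'ACCEPT_EULA=Y' -e 'MSSQL_SA_PASSWORD=Y0urStr0ng!Passw0rd' `
    -p 1433:1433 --name sql2019 -d mcr.microsoft.com/mssql/server:2019-latest

docker stop sql2019     # stop the instance when it's not needed
docker start sql2019    # start it back up when it is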

It’s strange to think that the day has arrived where I fall back on my knowledge of containers as the familiar option!
There’s a good lesson in there somewhere; maybe put a backstop into my learning? Just enough to know how to stop things if the situation goes wrong or goes too far.

I still intend to continue researching Kubernetes, but maybe I’ll deepen my knowledge on Containers in the meantime.

ConvertTo-SQLSelect

Words: 651

Time to read: ~ 3 minutes

Update 2021-06-17: It now accepts pipeline input

It’s been a busy month for me so there’s not a lot of outside work research that has been going on.

That being said, there has been quite a gap since I wrote a blog post, so I figured that I might as well write something.

So what do I have to write about?

SELECT Statements

There are times when I want to mess about with data in SQL Server, data that I have obtained in PowerShell. This will require some way to get the information from PowerShell into SQL Server.

I know of a few ways to do this.

dbatools

There is the dbatools module and the Write-DbaDbTableData function.

Get-Help -Name Write-DbaDbTableData -Full

If I wanted to write the properties of 50 modules from PSGallery into SQL Server, I can use the function handy enough.

Find-Module | Select-Object -First 50 | Write-DbaDbTableData -SqlInstance localhost -Database WAT -Table dbatools_Insert -WhatIf

ImportExcel

There is also the ImportExcel module and the ConvertFrom-ExcelToSQLInsert function.

Get-Help -Name ConvertFrom-ExcelToSQLInsert -Full
Find-Module | Select-Object -First 50 | Export-Excel -Path .\Documents\Excel\temp_20210614.xlsx;
ConvertFrom-ExcelToSQLInsert -TableName ImportExcel_Insert -Path .\Documents\Excel\temp_20210614.xlsx -UseMsSqlSyntax

Being Picky

Both of these were a bit too much for me though. I only wanted a quick and easy way to have the data available in a SELECT statement.

I can use ImportExcel and ConvertFrom-ExcelToSQLInsert but that is dependent on the table already existing, never mind having to save the data in an Excel file first.

Don’t get me wrong – I’m aware that you don’t need Excel installed on the computer you’re running these commands from. You still need to save the files somewhere, though. The function doesn’t take data from variables.

I can use dbatools and Write-DbaDbTableData. This function is not dependent on the table having to already exist. It will create the table for you if you tell it to. Thank you -AutoCreateTable; even though I recommend pre-sizing your columns if you want to go with this method.
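For completeness, the auto-create route looks something like this (a sketch reusing the earlier instance and database names; the table name is made up):

# -AutoCreateTable creates the table for you - with generously-sized columns,
# hence the advice to pre-size them yourself
Find-Module | Select-Object -First 50 |
    Write-DbaDbTableData -SqlInstance localhost -Database WAT -Table dbatools_AutoCreate -AutoCreateTable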

However, I don’t want to have to write the data to a table at all.

ConvertTo-SQLSelect

So I wrote a primitive function to have the data available in a SELECT statement that I can run in an SSMS or Azure Data Studio window.

You can find the code for it here on Github:
ConvertTo-SQLSelect

I can pass a bunch of objects into it and it will create the SELECT for me using the good ol’ VALUES clause.

Although I’m pretty sure this is basically what ORMs do under the covers before people who knew what they were doing looked at them…

ConvertTo-SQLSelect -Data (Find-Module | Select-Object -First 50)
… there’s more data here….

Caveats

There are a couple of caveats to be aware of…

  • It doesn’t allow pipeline input.

It probably could have, but that would have required a sit-down to think about how to do it. Like I said: this was a quick and dirty, thrown-together function.

It now accepts pipeline input – although I’m sure it isn’t the best way I could have implemented that…

-999..1000 | ForEach-Object -Process { (Get-Date).AddDays($_) } | ConvertTo-SQLSelect
  • There are no data types.

Everything is a string and gets inserted as a string, but that’s okay for me for a quick playthrough. Any data conversions, I can do once I have the data in an SSMS window.

  • It doesn’t like single quotes

Yeah, I have no real excuse for this one. I should really fix that before I use this function again…

It can handle single quotes now (see the sketch after this list).

  • There are also no help comments for this.

There should be, even though there is only one parameter. There should also be tests! I am filled with good intentions that have yet to come to fruition, though…
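On the single-quotes point: the usual T-SQL trick is doubling the quote before the value lands inside the VALUES clause, which is simple enough in PowerShell (a sketch; not necessarily how the function implements it):

# Double any embedded single quotes so the literal survives inside '...' in T-SQL
$value = "O'Neill"
$escaped = $value -replace "'", "''"    # O''Neill
"SELECT Name = '$escaped';"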

That being said, I’ve had to use it a few times already, which means that writing it has already paid off.

So feel free to use, abuse, and/or improve it as you see fit.

I hope you find it useful.

Pester 5 and Group-Object – Best Friends

Figuring out how to group the output of your Pester tests

Words: 830

Time to read: ~ 4 minutes

I’ve been working with Pester v5 lately.

Pester v5 with PowerShell v5 at work & Pester v5 with PowerShell Core outside of work.

There are quite a few changes from Pester version 3, so it’s almost like learning a new language… except it’s based on slang. I think that I’m speaking eloquently, and then I’ve suddenly insulted someone and Pester no longer wants to play nice with me.

Initial Tests

I’ve got the following data that I’m using to test Pester v5.

BeforeDiscovery -ScriptBlock {
    $Groups = @(
        [PSCustomObject] @{
            Server = 1
            Group = 'A'
            Value = '86b7b0f9-996f-4c19-ac9a-602b8fe4d6f2' -as [guid]
        }, 
        [PSCustomObject] @{
            Server = 1
            Group = 'B'
            Value = 'e02913f7-7dae-4d33-98c9-d05db033bd08' -as [guid]
        },
        [PSCustomObject] @{
            Server = 2
            Group = 'A'
            Value = '96ad0394-8e9e-4406-b17e-e7d47f29f927' -as [guid]
        },
        [PSCustomObject] @{
            Server = 2
            Group = 'B'
            Value = 'f8efa8b6-e21b-4b9c-ae11-834e79768fee' -as [guid]
        }
    )
}
Test data

Usually, I would only use -TestCases to iterate through the data. I know that in Pester v3, I could wrap the It blocks inside a foreach () {}, and it would be okay. Hell, in most of my testing, it was faster. It doesn’t matter; I liked using -TestCases, and the performance difference is negligible to me.

That is still an option with Pester v5. I can run the below code to confirm.

Describe -Name 'Attempt: 01' -Tag '01' -Fixture {
    Context -Name 'Server: <_.Server>' -Fixture {
        It -Name 'should have a guid for its value: <_.Value>' -TestCases $Groups {
            $_.Value | Should -BeOfType [guid]
        }
    }
}
ForEach on the It block

If I look at the data, I can see that I’ve got two different values for Server: 1 and 2. It would be great if I could group the tests by those server values.

For me, Pester has three main blocks: Describe, Context, and It.
I know that Pester v5 has a -ForEach parameter for each of these blocks. I’ve already tried using the -ForEach parameter against the It block, and it didn’t do what I wanted.

Reminder of the ForEach on the It block

I’ll try it against the Context block instead and see if it works.


Describe -Name 'Attempt: 02' -Tag '02' -Fixture {
    Context -Name 'Server: <_.Server>' -Foreach $Groups {
        It -Name 'should have a guid for its value: <_.Value>' -Test {
            $_.Value | Should -BeOfType [guid]
        }
    }
}
ForEach on the Context block

That kind of works but we’ve got the same server in two different groups. Let’s move the groups up to the Describe level.

Describe -Name 'Attempt: 03 - Server: <_.Server>' -Tag '03' -Foreach $Groups {
    Context -Name 'Server: <_.Server>' -Fixture {
        It -Name 'should have a guid for its value: <_.Value>' -Test {
            $_.Value | Should -BeOfType [guid]
        }
    }
}
ForEach on the Describe block

Well, that’s not what I wanted. Instead of one Describe block, we have multiple blocks: one per group.

Grouped Data

Now, I’m going to start using Group-Object. My data by itself doesn’t give me what I want.

$Groups = @(
    [PSCustomObject] @{
        Server = 1
        Group = 'A'
        Value = '86b7b0f9-996f-4c19-ac9a-602b8fe4d6f2' -as [guid]
    }, 
    [PSCustomObject] @{
        Server = 1
        Group = 'B'
        Value = 'e02913f7-7dae-4d33-98c9-d05db033bd08' -as [guid]
    },
    [PSCustomObject] @{
        Server = 2
        Group = 'A'
        Value = '96ad0394-8e9e-4406-b17e-e7d47f29f927' -as [guid]
    },
    [PSCustomObject] @{
        Server = 2
        Group = 'B'
        Value = 'f8efa8b6-e21b-4b9c-ae11-834e79768fee' -as [guid]
    }
)
Results of the Test Data

We can pass that data into Group-Object to group our data by a certain property. In my case, I want to group the data by the Server property.

$Groups | Group-Object -Property Server
Grouped Test Data
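Each result is a GroupInfo object: Name holds the (stringified) value of the Server property, and Group holds the matching original objects. That’s why the later name templates can use <_.Name>. A quick sketch of poking at it:

$Grouped = $Groups | Group-Object -Property Server

$Grouped | Select-Object -Property Count, Name    # two groups: Name '1' and Name '2'
$Grouped[1].Group.Value                           # the two guids for Server 2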

Taking a look at the first group, I only have the data for that single property value.

($Groups | Group-Object -Property Server)[0].Group
Inside the first group of Test Data

Now, I’ll try the Pester code again.

Grouped Tests

First, I’ll try putting the groups into the It blocks and see if that works.

Describe -Name 'Attempt: 05' -Tag '05' -Fixture {
    BeforeDiscovery -ScriptBlock {
        $GroupedGroups = $Groups | Group-Object -Property Server
    }

    Context -Name 'Server: <_.Name>' -Fixture {
        It -Name 'should have a guid for its value: <_.Group.Value>' -ForEach $GroupedGroups {
            $_.Group.Value | Should -BeOfType [guid]
        }
    }
}
Grouped on the It block

It doesn’t fully work. The data is grouped, but the results seem to be concatenating the values. I’d like it better if they were split out into separate tests per value.

This time, I’ll group the data in the context blocks and then pass the groups into the It blocks. I’ll do this by passing the groups into the -ForEach parameter of the It block using $_.Group.

Describe -Name 'Attempt: 04' -Tag '04' -Fixture {
    BeforeDiscovery -ScriptBlock {
        $GroupedGroups = $Groups | Group-Object -Property Server
    }

    Context -Name 'Server: <_.Name>' -ForEach $GroupedGroups {
        It -Name 'should have a guid for its value: <_.Value>' -TestCases $_.Group {
            $_.Value | Should -BeOfType [guid]
        }
    }
}
Grouped on Context and passed to It block

In the previous code blocks, I used the BeforeDiscovery block in the Describe block. If you don’t want to use that, you can pass the Group-Object output to the -ForEach parameter as a subexpression.

Describe -Name 'Attempt: 06 - Server: <_.Name>' -Tag '06' -ForEach ($Groups | Group-Object -Property Server) {
    Context -Name 'Server: <_.Name>' -Fixture {
        It -Name 'should have a guid for its value: <_.Value>' -TestCases $_.Group {
            $_.Value | Should -BeOfType [guid]
        }
    }
}
Without using BeforeDiscovery on the Describe block

Pass or Fail

I’ve encountered this obstacle of grouping objects in tests a couple of times. I’m hoping that by writing this down, I’ll be able to commit the information to memory.

Hey, if it doesn’t, I can always grab the code and figure it out.

T-SQL Tuesday #135: The outstanding tools of the trade that make your job awesome

Welcome to T-SQL Tuesday, the brainchild of Adam Machanic (twitter) and ward of Steve Jones (blog | twitter).
T-SQL Tuesday is a monthly blogging party where a topic gets assigned and all wishing to enter write about the subject.
This month we have Mikey Bronowski (blog | twitter) asking us about the most helpful and useful tools we know of or use.

Tools of the trade are a topic that I enjoy. I have a (sadly unmaintained) list of scripts from various community members on my blog. This list is not what I’m going to talk about, though. I’m going to talk about what to do with any scripts.

I want to talk about you as a person and as a community member. Why? Because you are a master of your craft, and a master of their craft takes care of their tools.

Store Them

If you are using scripts, community-made or self-made, then you should store them properly. By properly, I’m talking source control. Have your tools in a centralised place where those who need them can access them. Have your scripts in a centralised place where everyone gets the same changes applied to them, and where you can roll back unwanted changes.
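If your scripts live in a plain folder today, getting them under version control is a small first step (a sketch; the folder name is illustrative):

# Turn an existing scripts folder into a git repository
git init ./dba-scripts
git -C ./dba-scripts add .
git -C ./dba-scripts commit -m 'Initial import of shared scripts'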

Check out Brett Miller’s (blog | twitter) presentation “GitOps – Git for Ops people”.

Version Them

If you are using community scripts, then more likely than not, they are versioned. That way you’re able to see when you need to update to the newest version. No matter what language you’re using, you can add a version to them.

PowerShell has a ModuleVersion number, Python has __version__, and SQL has extended properties.

Or even take a page out of Bret Wagner’s (blog | twitter) book and try XML comments.
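As a quick sketch of checking those versions from PowerShell (the manifest path and module name here are illustrative):

# Read the version from a module manifest
(Import-PowerShellDataFile -Path .\MyModule.psd1).ModuleVersion

# Or check an installed module directly
(Get-Module -ListAvailable -Name dbatools).Version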

Take Care of Them

If you take care of these tools, if you store them, version them, and make them accessible to those who need them, then they will pay you back a hundredfold.
You’ll no longer need to re-write the wheel or pay the time penalty for composing them. The tools will be easy to share and self-documented for any new hires.
Like the adage says: Take care of your tools and your tools will take care of you.