T-SQL Tuesday #120 – What were you thinking?

Words: 712
Time to read: ~ 3.5 minutes

10 Years

T-SQL Tuesday is upon us once more. T-SQL Tuesday number 120 means something else as well. 120 monthly posts equals 10 years that Adam Machanic’s ( twitter | blog ) blog post party has been going on.

Wayne Sheffield ( twitter | blog ) is hosting this month’s event. Wayne asks us something that I’m sure we’ve all thought at some stage.

What were you thinking?

https://blog.waynesheffield.com/wayne/archive/2019/11/t-sql-tuesday-120-what-were-you-thinking/

In the beginning…

The first exploration of a system leaves a lasting impact. When you first get a chance to delve into the database, you capture a snapshot of what the coding standards are like. You glean the past experiences of the developers.

I’m looking for instances of NOLOCK if I’m being honest.

…there are impressions.

This impression was a What were you thinking? experience.

  1. DEADLOCK_PRIORITY LOW on most procedures.
  2. A lot of hierarchical data types.
  3. VARCHAR(MAX) on most columns
  4. Variables at the start of procedures used in equality WHERE clauses. e.g. DECLARE @Success int; Set @Success = 4; ... WHERE StatusId = @Success.
  5. Functions that return a single, deterministic value.
  6. Multi-statement Table-Valued Functions with WHILE statements.
  7. A plethora of indexes on the tables, all single-column indexes.

I’ve said enough.

If you had seen my face at that moment, you would have laughed. Imagine me staring, horrified, eyes darting around the screen mouthing What the…

A little thinking saves a lot of shouting

Granted, it took getting a coffee and staring in disbelief at the code before I recovered. It took getting another coffee after the first before I could rationalise what I was seeing.

I took what I knew, which was that these developers were smart, and I tried to match that with what I was seeing. And there was an answer.

Theoretical, not Physical

The codebase read like developers who were not used to interacting with a database. Developers who thought of the database as a “place to shove data” and that’s all.

It was clear they had tried to follow the DRY (Don’t Repeat Yourself) approach (#4, #5).

They had read the documentation on hierarchical data types, and Microsoft’s claim that…

The built-in hierarchyid data type makes it easier to store and query hierarchical data

https://docs.microsoft.com/en-us/sql/relational-databases/hierarchical-data-sql-server?view=sql-server-ver15

…instead of parent/child relationship tables. (#2)

They had tried to translate the .NET data type [string] into the database, deciding that varchar(max) was its closest match. (#3)

They had tried to query the data in a row-by-row approach, instead of a set-based method (#6).

And, they had tried to deal with the consequences of these and other decisions. (#1, #7)

Understanding, not blame

It’s hard to stay annoyed at people when you can understand their motives. Understanding their mindset is the most effective deterrent to anger that I can think of. There’s no blame, only understanding. You want to help them improve. And that’s where this ongoing process is now.

To move away from multi-statement Table-Valued Functions with WHILE statements; here are inline Table-Valued Functions with a recursive CTE (Common Table Expression) instead.

To use variables when you have to, but to be aware of the change in statistics that they bring.

The difference it can make to a query and a database when the data types are apt. How memory grants, logical page reads, and more are affected by blobs.

How DEADLOCK_PRIORITY LOW is not an option if every procedure has it! How indexes can be made up of more than a single column. That there is such a thing as an INCLUDE!

Seeing now that the driving force they have is to create features, while the pain they feel is database performance, I can grok their choices and actions at the time.

Still, it didn’t stop me going What were you thinking? at the outset.

I’m no better

I’m trying to learn different languages and frameworks at the moment. If someone more knowledgeable was to come along and see my interactions with Linux. If they were to critique my Python files. Or attempt to suppress a groan at my PromQL. I’d appreciate an air of understanding, not blame at that time.

So well done to the people who dived in and attempted the work even if they didn’t know how at the time. To paraphrase: those whose faces are marred by dust and sweat and blood deserve the credit.

But don’t think I didn’t see those SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED statements that you’re using as NOLOCKs!

Retrospective: Speaking at DataScotland

Words: 315

Time to read: 1.5 minutes

Data Scotland

On the 13th of September 2019, I spoke at DataScotland; my first time talking at a data conference.

My quasi-clickbait title was Feel Validated with dbachecks. If you guessed that I was talking about dbachecks then you’re right.

This is a brief retrospective of that time. Thinking back on it still makes me relive the emotions that I felt: nervousness, excitement, and panic.

Good times!

Appreciation

This was my first time speaking at a conference as well as my first time attending Data Scotland.

I recommend that you check it out. It’s an amazing conference created by passionate people and staffed by dedicated volunteers.

The Good

It would be quicker to ask what didn’t count as the good!

  • Amazing fellow speakers.
  • Getting to meet other first-time speakers.
  • Talking with the volunteers.
  • Speaking with attendees.
  • Seeing people who I hadn’t seen in a long time.

Thank you to Craig Porteous, Louise Paterson, Paul Broadwith, and Robert French for all your work and encouragement.

Thank you as well to Brent Miller, Andrew Pruski, David Alcock, and John McCormack for help with the presentation.

The Bad

Feeling drained.

I don’t put this down to DataScotland though.
You may not have heard from me for the last month. I felt drained and took time off from public exposure.

  • A full year of constant work on the day job and personal work.
  • 2 conferences a month on average for the last year.
  • Spreading myself out on projects about SQL, PowerShell, Containers, Python, AWS, and Azure without rest.

It’s not something I could sustain without factoring in sharpening the axe time.

Overall

I’m easing myself back into things again with the caveat of planning ahead and making sure I don’t overwhelm myself.
First thing on my list, planning for DataScotland next year.

Whether it’s speaking, volunteering, or attending, I’ll be there.

T-SQL Tuesday #118 – Your fantasy SQL feature

Words: 865

Time to read: ~ 5 minutes

T-SQL Tuesday Time

Welcome back to another installment of T-SQL Tuesday, the monthly blog post call. This month we have Kevin Chant ( twitter ) who has asked us for…

[…] a post about a fantasy SQL Server feature you’ve got in mind.

Kevin Chant

It’s hard for me to believe that my last T-SQL Tuesday post was back in May 2019 but, when I look back over the list of my blog posts, that’s the last one.

I can only put it down to “what I want to do” being out of sync with “what I can do with the time I have”.

So, with that major gap in T-SQL Tuesday posts in place, I’d like to start writing these again.

Beginning with this one, and an apology.

An Apology

I’m starting with an apology for this post because, no matter how I phrase this in my head, I cannot make it seem like I am not complaining.

So I ask that you forgive me if this post comes across as me whining about the level of effort that is currently involved with this.

Fantasy SQL Server feature

My fantasy SQL Server feature is…

  • A performance rating.

I’m not talking about TPC benchmark ratings, nor am I talking about SentryOne’s Health Score (although I’ll admit that’s pretty close), nor Brent Ozar ( twitter ) and sp_BlitzFirst.

What I would like is a performance rating, an X out of 100, a Low / Medium / High, a sub-par / on-par / above-par description of how your SQL Server is doing.

Why this?

I’m not whinging about this due to a misguided want to compare my instances against others. Believe me, I know the state of my instances is not up there.

Nor is it a case of wanting to show that my instances are “in the top 10 in Ireland / Europe / the world”. Believe me, I realised a long time ago that, while I enjoy what I do, I do not want to make the sacrifices needed to get to that level.

DevOps is the union of people, process, and products to enable continuous delivery of value to our end users.

Donovan Brown

We are trying to take major steps with DevOps in our company. To be more transparent, to reduce silos, and to share knowledge so we can get releases out to customers faster. So we can get value to our customers out there faster.

So when a Pull Request (PR) gets sent to me and I respond with concerns, suggestions, and pull some data from our instances to show as an example, I’m really not expecting this response.

Thanks for this but we’re not quite sure what you mean. Could you give us a number please? Like, our SQL Server is doing an x out of 100?

Response

It wasn’t until I was asked this, and looked into how you could go about achieving it, that I realised how difficult this is.

First of all, are you talking query performance or SQL Server health?

If it’s the former, how are you going to measure that? Duration? CPU? IO? Sure, Query Store would be a great help…

If it’s the latter, sure, include RPO and RTO. How do you measure HA and DR? Do deadlocks come into play here, or query performance?

Is a failed statistics job going to affect the rating on SQL Server health? Because I know that it’s going to have an effect on query performance!

Fantasy Feature

So that’s my fantasy feature.

I want a performance rating built into SQL Server. One that you can measure against your own servers, or against telemetry gathered from other servers.

Break it down however you wish.

  • Your Rating per Query Duration is way up but your Rating per Memory is down.
  • Your Rating per Deadlock has become nearly nonexistent but your Rating per Dirty / Phantom Reads … I got some bad news there…
  • Your Batch Transactions Rating has gone up since the last version push but that’s because you stopped doing CURSORs and WHILE loops. Go you, we were thinking it was about time!

I don’t have an exact definition

I don’t know if I’d want this as a single rating. SQL Server is more than the sum of its parts.

I don’t know if I’d want this as multiple ratings summed up since I don’t know how you’d weight them. Different companies have different concerns.

I also know that we have tools for this

We have Query Store, we have AGs, we have performance counters, we have sp_Blitz%, we have workload tools, we have Git, and TFS, and Azure DevOps, and AWS CloudFormation, and Docker containers “kubeterised” into a CI pipeline.

I’m fully aware that we have nearly everything at our disposal to make this happen. All we need is time, a plan, and the ability to progressively see this through.

Like I said at the start, I apologise if this comes across as me whining.

But that’s not what this T-SQL Tuesday asked. It asked for your Fantasy Feature.

Well my name is Shane O’Neill and right now, I want to know that my SQL Server instance is doing X out of 100.

You tell me that and I’ll work on improving it.

Splitting Functions from Scripts in bulk

Time to read: 2.5 minutes

Words: 504

Previously on…

I’ve talked before about a couple of topics that this blog post pertains to.

That is the relevant information, so you’re up to speed on where I am.

Bring on the stupid

The stupid thing that I was doing was that I was manually, visually scanning the script, copying out the function definitions, and pasting them into their own function files.

This was long, this was tedious, and this was not an efficient use of my time.

Especially since the scripts were not laid out as logically as I would have liked.

Personally, if I were to have nested functions in a script, I would have them towards the beginning of the file. Together, maybe in a little region that I’ve called “functions”.

Actually, if I have to have a “functions” region, then I have too many functions and I’m going to split them out anyway.
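For illustration, that layout might look something like this (a made-up sketch; the function name is hypothetical, not from the actual build scripts):

#region functions
function Get-Greeting {
    # Nested helper functions grouped together at the top of the script
    'Hello'
}
#endregion functions

# ...the rest of the script, calling the functions defined above...
Get-Greeting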

The scripts I was looking at were not laid out this way.

Sure, there was what appeared to be a functions region, but there were also functions further down the script, created just before they were needed.

Hence, manually scanning the whole script, taking a note and a copy of each function before moving on again.

Long, tedious, wasteful.

There is a way!

Like I mentioned at the start, in the “Previously on…” section, Chris Dent has a function that we have availed of before and that we can use here.

Let’s take a look at what it gives us…

First of all, we get a list of the build scripts.

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1

So we now have a list of the scripts. Each one of these scripts may, or may not, have one or many functions defined within them.

How are we going to get these?

We pipe this list to our Get-FunctionInfo function.

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1 |
    Get-FunctionInfo -ErrorAction SilentlyContinue -IncludeNested

Perfect! Now to automate the final part of the manual process. Can we grab the definition of these functions and split them out to a separate file per function?

First question is can we grab the function definitions?

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1 |
    Get-FunctionInfo -ErrorAction SilentlyContinue -IncludeNested |
    ForEach-Object {
        # The parent of the function body's AST is the function definition itself;
        # Extent.Text gives us its full source text
        $_.Scriptblock.Ast.Parent.Extent.Text
    }

I’m going to ignore that GetCurrentDateFormat function

Final bit

Now that we know that we can grab the function definition, it’s a quick step to output the contents into a file.

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1 |
    Get-FunctionInfo -ErrorAction SilentlyContinue -IncludeNested |
    ForEach-Object {
        # Write each function definition out to its own file, named after the function
        $_.Scriptblock.Ast.Parent.Extent.Text |
            Out-File -FilePath ".\Git\build-scripts\build\$($_.Name).ps1"
    }

And just to double check…
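A listing of the output folder, something like the below, is enough to confirm it (same build path as used above):

Get-ChildItem -Path .\Git\build-scripts\build\ -Filter *.ps1 |
    Select-Object -Property Name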

Lovely!

All the functions are split off into their own .ps1 files where they can be reviewed, have tests created for them, and/or be improved.

It’s nice to push the bottleneck down the pipeline. Now I’m wondering if there’s a way we can bulk introduce Pester tests…
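If I did want to scaffold those in bulk, a rough sketch of it might look like this. The stub contents are my own assumption of a starting point (an empty placeholder test per function, e.g. GetCurrentDateFormat.Tests.ps1), not part of our actual process:

Get-ChildItem -Path .\Git\build-scripts\build\ -Filter *.ps1 |
    ForEach-Object {
        # One .Tests.ps1 file per extracted function
        $TestPath = $_.FullName -replace '\.ps1$', '.Tests.ps1'

        @(
            ". `$PSScriptRoot\$($_.Name)"
            ''
            "Describe '$($_.BaseName)' {"
            "    It 'needs a real test written for it' {"
            '        $true | Should -BeTrue'
            '    }'
            '}'
        ) | Out-File -FilePath $TestPath
    }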

Dot Sourcing with PSScriptRoot

TL;DR: Use . $PSScriptRoot\ instead of . .\ if you’re using the script’s own location as a reference to load other files.

Words: 1033

Time to read: ~ 5 minutes

Update (2019-08-14): Thanks to Cory Knox ( twitter | github | twitch ) for pointing out that $PSScriptRoot is not available in PS2.

I wrote before about our Build Process and how I was in the process of splitting the functions out. Even how, in the course of splitting out the functions and testing them, I found a bug in our current process.

First split

The first split that I did, I consider relatively simple.

I extracted the functions that were defined in the monolithic script into their own .ps1 files.
Then I created a Pester ( github | twitter ) test file for each function.

I did this so I could confirm that the functions worked as they were expected to work.
Also so that I could confirm that the functions still worked as they were expected to work if I made any changes.

And I plan to make changes to them in the future.

It was here that I found the bug in the old build process and it was here that I was able to sell the idea of isolating the function definitions and creating tests for them.

However, as with most relatively simple changes, it created an unforeseen problem that I didn’t have a test for.

You have to put back

The functions that I had isolated out from the script and tested were still being called from the script.

So we had to load them back in.

That seems simple enough, even if it’s not something that I or others have really looked up before. But I’ve had to, so below is my minimal, complete, reproducible example.

Let’s Dot Source them into the script.

Get-Help about_scopes

To add a function to the current scope, type a dot (.) and a space before
the path and name of the function in the function call.

about_scopes

But where

Adding these functions back into the script should be an easy process. The layout of the folders and the scripts for these examples are:

  • The script is in the parent folder
    Blogs\PSScriptRootVersusDot\script.ps1
  • The extracted functions are in the same folder
    Blogs\PSScriptRootVersusDot\<extracted functions>.ps1

So our frame of reference is our script, and we know where our functions to import are based on the location of our script.

Luckily PowerShell has us covered there

Get-Help about_scripts

To run a script in the current directory, type the path to the current
directory, or use a dot to represent the current directory, followed by a
path backslash (.\).
For example, to run the ServicesLog.ps1 script in the local directory,
type:
.\ServicesLog.ps1

about_scripts

So we need to use a dot (.) to add a function into the current scope and we can use a dot (.) to run a script in the current directory? Let’s check it out…

Careful, this is wrong… 😉

Example 01

function Get-Name {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Name
    )

    begin {}

    process {
        if (-not ($PSBoundParameters.ContainsKey('Name'))) {
            $Name = 'there'
        }
        
        [PSCustomObject]@{
            Name = $Name
            Message = "Hello $Name"
        }
    }

    end {}
}

This function doesn’t really do much but it’s vital for the following function.

function ConvertTo-Message {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Receiver
    )

    begin {
        Write-Verbose -Message "[$((Get-Date).TimeOfDay)][$($MyInvocation.MyCommand)] Importing function Get-Name"
        . .\Get-Name.ps1
    }

    process {
        $GetNameParams = @{}

        if ($PSBoundParameters.ContainsKey('Receiver')) {
            $GetNameParams.Add('Name', $Receiver)
            Write-Verbose ($GetNameParams | Out-String)
        }

        $MessageDetails = Get-Name @GetNameParams

        "To $($MessageDetails.Name),`n$($MessageDetails.Message)"
    }
}

Let’s check this out now…

ConvertTo-Message -Verbose

It works!

So my understanding was that if you need to import a function, you only need to use dots: dot source it and dot location it.
In this, as with many things, my understanding was wrong.

What I failed to fully grasp was the words “the current directory“. Now most of my scripts so far don’t use the *-Location cmdlets but one of the build scripts did.

Let’s make a change to our ConvertTo-Message function to change the location and see how that affects us and whether our importing still works…

Example 02

function ConvertTo-Message02 {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Receiver
    )

    begin {
        Push-Location -Path ..\
        Write-Verbose "We had to go back up for some reason to $((Get-Location).Path)"

        Write-Verbose -Message "[$((Get-Date).TimeOfDay)][$($MyInvocation.MyCommand)] Importing function Get-Name"
        . .\Get-Name.ps1
    }

    process {
        $GetNameParams = @{}

        if ($PSBoundParameters.ContainsKey('Receiver')) {
            $GetNameParams.Add('Name', $Receiver)
            Write-Verbose ($GetNameParams | Out-String)
        }

        $MessageDetails = Get-Name @GetNameParams

        "To $($MessageDetails.Name), $($MessageDetails.Message)"
    }

    end {
        Pop-Location
        Write-Verbose "We're back to $((Get-Location).Path)!"
    }
}

ConvertTo-Message02 -Verbose
Hello where?

Explain or I start swinging

The dot used to represent the location is, as I’ve said before, for the current directory. Our ConvertTo-Message02 function changed its location as part of the script.

When we used the “dot source dot location” method, we weren’t using where our function is as a frame of reference to import the other functions. We were using what directory we are currently in.

If we change the location or try and call the function from anywhere that is not the directory where the function is defined, the function is not going to work.

Push-Location C:\
ConvertTo-Message -Verbose
Pop-Location

Anywhere

What we can do is actually use our function as a frame of reference.

PowerShell has a lovely automatic variable that we can use for this called $PSScriptRoot

Get-Help about_automatic_variables

$PSScriptRoot
Contains the full path of the executing script's parent directory. In
PowerShell 2.0, this variable is valid only in script modules (.psm1).
Beginning in PowerShell 3.0, it is valid in all scripts.

about_automatic_variables

Example 03

Let’s try again, shall we?

function ConvertTo-Message03 {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Receiver
    )

    begin {
        Push-Location -Path ..\
        Write-Verbose "We had to go back up for some reason to $((Get-Location).Path)"

        Write-Verbose -Message "[$((Get-Date).TimeOfDay)][$($MyInvocation.MyCommand)] Importing function Get-Name"
        . $PSScriptRoot\Get-Name.ps1
    }

    process {
        $GetNameParams = @{}

        if ($PSBoundParameters.ContainsKey('Receiver')) {
            $GetNameParams.Add('Name', $Receiver)
            Write-Verbose ($GetNameParams | Out-String)
        }

        $MessageDetails = Get-Name @GetNameParams

        "To $($MessageDetails.Name), $($MessageDetails.Message)"
    }

    end {
        Pop-Location
        Write-Verbose "We're back to $((Get-Location).Path)!"
    }
}

Let’s try the hard test first. We’ll move to the root of the C:\ drive and try and run it from there.

Push-Location C:\
ConvertTo-Message03 -Verbose
Pop-Location
Hello THERE!

Push

Now that I know how to properly use the location of a script as a frame of reference, am I going to use it more?

Yes and no.

Yes, it is great for catching these errors and for short, sharp scripts.

But I should really be pushing these up into a module. We use them often enough that there is no reason why we shouldn’t.
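A first pass at that module could be as simple as a .psm1 that dot sources every function file sitting beside it and exports the lot. A rough sketch, assuming the functions live alongside the module file (the BuildFunctions name is made up):

# BuildFunctions.psm1 - lives in the same folder as the extracted function files
Get-ChildItem -Path $PSScriptRoot -Filter *.ps1 |
    Where-Object { $_.Name -notlike '*.Tests.ps1' } |
    ForEach-Object {
        # Dot source each function file into the module's scope
        . $_.FullName
    }

# Export everything that was loaded; tighten this up as the module matures
Export-ModuleMember -Function *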

That’s the next action I guess. At least I have more knowledge than when I started.

That’s what counts.

-ExcludeProperty in PowerShell Core

Words: 183

Time to read: ~ 1 minute

A while ago I talked about an issue that I had in Windows PowerShell when I was trying to use the -ExcludeProperty parameter of Select-Object.

In case you missed it, it was one of my first posts, you can read it here.

Browsing StackOverflow

Checking out other peoples code is a great way to get exposed to different coding styles and ideas so I like to get a daily email of PowerShell questions from StackOverflow.

In the comments of one of these questions, Michael Klement ( twitter ) pointed out something, a little detail that I didn’t know but really appreciate.

There is a difference between Select-Object in Windows PowerShell and PowerShell Core

Difference

Let’s take a basic example

Windows PowerShell

# Doesn't work, and it fails *silently*
[PSCustomObject]@{
    Version = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
} | Select-Object -ExcludeProperty Redundant

# Works
[PSCustomObject]@{
    Version = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
} | Select-Object -ExcludeProperty Redundant -Property *

PowerShell Core

[PSCustomObject]@{
    Version = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
} | Select-Object -ExcludeProperty Redundant

More Intuitive

Sometimes I’m more excited about the little things as I think they are more impactful. I’m excited about this.

The more you know!

Pester showed me a bug in our existing build process. Can you find it?

Words: 729

Time to read: ~ 4 minutes

Continuous Improvement

Working on the goal of continuous improvement of our processes, I got given access to the PowerShell scripts for our Build Process.

Credit where credit is due, these PowerShell scripts were created by people unfamiliar with the language.

They have done a great job with their research to build scripts that do what they want so I’m not going to nit-pick or critique.

I’m just going to improve and show/teach my colleagues why it’s an improvement.

Original State

The current state of the script is monolithic.

We have a single script that defines functions and then calls them later on. All mixed in with different foreach () and Write-Hosts.

Here’s a rough outline of the script.

$param01 = $args[0]
$param02 = $args[1]
$param03 = $args[2] -replace 'randomstring'

... Generic PowerShell commands ...

function 01 {
    function 01 definition
}

function 02 {
    function 02 definition
}

function GetPreviousTag {
    function GetPreviousTag definition
}

... More generic PowerShell commands ...
... that call our GetPreviousTag function ...
... etc ...

That was it.

1 giant script.
0 tests.

Extracting Functions for Tests

Now, scripts are notoriously hard to test. I’ve written about how I’ve done that before but, honestly, if you really want to know then you need to check out Jakub Jares ( blog | twitter ).

Knowing how difficult testing scripts is, the first thing I decided to do was take the functions in the script and split them out. This way they can be abstracted away and tested safely.

I also didn’t want to take on too much at one time, so I chose a random function, GetPreviousTag, and only actioned that one first.

Taking a look at GetPreviousTag

The simplest test that I can think of is a pass/fail test.

What do we expect to happen when it passes, and what do we expect to happen when it fails?

To figure that out we’ll need to see the GetPreviousTag function. So I copied and pasted the code over to its own file, GetPreviousTag.ps1 (sanitised, of course).

function GetPreviousTag {
    # Run the "git describe" command to return the latest tag
    $lastTag = git describe
    # If no tag is present then return false
    if ([string]::IsNullOrEmpty($lastTag)) {
        return $false
    }
    else {
        # If a tag is returned then we need to ensure that its in our expected format:
        # If a commit has taken place but the tag hasn't been bumped then the git describe command will return 
        # refs/tags/1.1.0.a.1-33-gcfsxxxxx, we only want the 1.1.0.a.1 part of the tag so we split off everything after
        # the "-" and trim the "refs/tags/" text.   
        $lastTagTrimmed = $lastTag.Split("-") | Select-Object -First 1
        $lastTagTrimmed = $lastTagTrimmed -replace 'refs/tags/',''
        # Verify that last tag is now in the expected format
        if ([regex]::Match($lastTagTrimmed,'\d+\.\d+\.\d+\.\c\.\d+')) {
            return $lastTagTrimmed
        }
        else {
            return $false
        }
    }
}

It’s nicely commented and glancing through it, we can see what it does.

  • Gets the output of git describe
    • If there’s no output:
      • return $false
    • If there is output:
      • Split on a dash, and get the first split
      • Remove the string ‘refs/tags/’
        • If the remainder matches the regex:
          • Return the remainder
        • If the remainder does not match the regex:
          • return $false

So we have our pass outcome, the remainder, and fail outcome, $false.

More importantly, can you see the bug?

The Pester Test

Here is the Pester test I created for the above function.

It’s relatively simple but even this simple test highlighted the bug that had gone hidden for so long.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path) -replace '\.Tests\.', '.'
. "$here\$sut"

Describe "GetPreviousTag" {
    Context -Name 'Pass' -Fixture {
        Mock -CommandName git -MockWith {
            'refs/tags/1.1.0.a.1-33-gcfsxxxxx'
        }

        It -Name 'returns expected previous tag' -Test {
            GetPreviousTag | Should -BeExactly '1.1.0.a.1'
        }
    }

    Context -Name 'Fail : empty git describe' -Fixture {
        Mock -CommandName git -MockWith {}

        It -Name 'returns false' -Test {
            GetPreviousTag | Should -BeFalse
        }
    }

    Context -Name 'Fail : regex does not match' -Fixture {
        Mock -CommandName git -MockWith {
            'refs/tags/NothingToSeeHere-33-gcfsxxxxx'
        }

        It -Name 'returns false' -Test {
            GetPreviousTag | Should -BeFalse
        }
    }
}

Thanks to the above Pester test, I was able to find the bug, fix it, and also be in a position to improve the function in the future.

If you can’t find the bug, run the above test and it should show you.
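Assuming the test file sits beside GetPreviousTag.ps1 and is named GetPreviousTag.Tests.ps1 (which is what the $here/$sut lines at the top of it expect), running it is a one-liner:

Invoke-Pester .\GetPreviousTag.Tests.ps1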

Finally

If there’s one thing to take away from this post, it is to test your scripts.

I’ve found Pester so useful that I decided to put my money where my mouth is…literally.

It’s more than deserved. Now back to continuous improvement…