T-SQL Tuesday #123: Life hacks to make your day easier

Words: 557

Time to read: ~ 3 minutes

T-SQL Tuesday has come back around! Another month without me getting a separate blog post out. I’ll get a non-T-SQL Tuesday blog post out eventually! This month, Jess Pomfret ( twitter | blog ) asks us about our life hacks.

The Humble Tomato

There is not enough time in the day. Until somebody figures out how to squish a few more hours into a day, that’s something that we’re going to have to accept.

In an effort to have some sort of time management, I try the Pomodoro technique.

It can be summarised by the points in this Wikipedia article:

• Decide on the task to be done.
• Set the pomodoro timer (traditionally to 25 minutes).
• Work on the task.
• End work when the timer rings and put a checkmark on a piece of paper.
• If you have fewer than four checkmarks, take a short break (3–5 minutes), then go to step 2.
• After four pomodoros, take a longer break (15–30 minutes), reset your checkmark count to zero, then go to step 1

Wikipedia

Two Problems

I have two problems with the above bullet points:

  1. It’s a very manual process, and
  2. The first one says I have to decide on the task to be done first.

In the immortal words of Homer Simpson:

Can’t someone else do it?

Homer Simpson

PowerShell

So, in an effort to resolve my problems, I created a PowerShell script.

You can find it here on my GitHub.

This script implements the Pomodoro technique for me, with blocks broken down into ~20–25 minutes of work and 5-minute breaks for reading blogs.

Knowing that there are so many topics to learn and stay on top of, every third work block is scheduled for learning.

After every 5 blocks, I get a longer break to grab a coffee, go for a walk, or interact with the Data/Automation family.

I even get a Toast notification when it’s time for me to change actions!
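The gist of it, if you don’t want to click through to GitHub, is a loop along these lines. This is only a minimal sketch of the idea, assuming the BurntToast module for the notifications; the timings and variable names here are illustrative, not lifted from the real script.

# A minimal sketch of the idea - the full script is on my GitHub.
# Assumes the BurntToast module (Install-Module BurntToast) for the toasts.
[String[]]$Item = 'JavaScript', 'Research', 'PSKoans', 'Blog'

$Block = 0
while ($true) {
    $Block++

    # Every 3rd work block is set aside for learning.
    $Action = if ($Block % 3 -eq 0) { $Item | Get-Random } else { 'Work' }
    New-BurntToastNotification -Text 'Pomodoro', "Next block: $Action"
    Start-Sleep -Seconds (25 * 60)

    # Longer break after every 5 blocks; short break otherwise.
    $Break = if ($Block % 5 -eq 0) { 15 } else { 5 }
    New-BurntToastNotification -Text 'Pomodoro', "Break for $Break minutes"
    Start-Sleep -Seconds ($Break * 60)
}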

Here’s how it’s worked out for me today.

FYI – I missed starting this script when I came in, thanks to back-to-back meetings from 08:00!

This solves the problems that I had with the Pomodoro technique.

  1. It’s no longer manual; it alerts me each time a time block changes, and
  2. I no longer have to decide on what to do next.

The most I have to decide on is the -Random switch, which effectively means: do I want to try to get through all the items in my learning list in order, or just pick a random one each time?

Seeing as my learning list is currently…

[String[]]$Item = 'JavaScript','Research','PSKoans','Blog','Python','PowerShell','C#','dbachecks','DBAFundamentals','Entity Framework','Containers','R'

…with undoubtedly more coming, I should really make it -Random by default!
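In case you’re curious, that decision boils down to something like this. A hypothetical sketch only; $Random matches the switch name, but $Index and $NextTopic are names I’ve made up for illustration.

# Hypothetical sketch of what the -Random switch toggles.
if ($Random) {
    $NextTopic = $Item | Get-Random              # pick any topic each time
}
else {
    $NextTopic = $Item[$Index % $Item.Count]     # cycle through the list in order
    $Index++
}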

In case you’re wondering what the “Mark” column is for, I’m trying to improve the current documentation of the system. So every Work action with an entry in the Mark column means get documenting!

Caveats

This is not a law. 

If I’m working on something and I enter focus mode, I’ll ignore the suggested blocks and keep working. Same can be said for deadlines; I’ll take the 5-minute breaks and then get back to the task.

Also, I can’t think of anyone who’s at their desk all day (I’ve been double-booked for meetings today alone!). But it’s a nice way of coming back, glancing at the PowerShell console, and knowing I’ll be making the most of my time.

T-SQL Tuesday #121: Gifts received for this year

Words: 401

Time to read: ~ 2 minutes.

T-SQL Tuesday, the monthly blogging party created by Adam Machanic, is still going strong. This month’s T-SQL Tuesday is hosted by Malathi Mahadevan ( blog | twitter ), who asks us…

“[…] what are a few things would you consider as gifts, and why?”

Invitation Post

There are many items that I consider gifts. More than I can reasonably write down in a blog post. So, in the interest of brevity, I’m going to break it down into gifts of SQL past, the gifts of SQL present, and the gifts of SQL future.

Gifts of SQL Past

I have been gifted with being mentored by DBAs that were knowledgeable and passionate about the Data Community.

I have been gifted with mentors who encouraged me to go to User Groups, volunteer at conferences and soak up as much data learnings as I could.

I have been gifted with mistakes that I have made in the past that have allowed me to learn from them. (I have definitely been gifted with the knowledge of how important test labs are!)

Gifts of SQL Present

I am gifted with a job that seems to actively repel me from any sort of comfort zone.

I am gifted with an active Data Community on Twitter, Slack, and in person for any issues that I run into.

I am gifted with a PowerShell community on Discord and Slack who seem to be constantly battling for the right to call themselves the “most welcoming community”.

I am gifted with a shrinking of borders so that anyone, anywhere, can be considered a helpful resource for a data question.

Gifts of SQL Future

I’m being gifted with new technologies to help out with old and new problems.

I’m being gifted with a field that is ever-changing: fluid, dynamic, and never going to become stagnant nor boring.

I’m being gifted with putting names to welcome faces and meeting up with new Data Community friends.

Mostly though, I’m being gifted with the opportunity to help others out in the future. Whether that be with blog posts, tweets, talks, examples, or just encouragement.

As good as all the previous gifts are, I believe the best gift is the one that you can give back. So, here’s hoping that the next year sees us all help more than hinder, learn more than laze, and teach more than troll.

Those are what I consider gifts.

Splitting Functions from Scripts in bulk

Time to read: 2.5 minutes

Words: 504

Previously on…

I’ve talked before about a couple of topics that this blog post pertains to.

That is the relevant information so you’re up to speed on where I am.

Bring on the stupid

The stupid thing that I was doing was that I was manually, visually scanning the script, copying out the function definitions, and pasting them into their own function files.

This was long, this was tedious, and this was not an efficient use of my time.

Especially since the scripts were not laid out as logically as I would have liked.

Personally, if I were to have nested functions in a script, I would have them towards the beginning of the file. Together, maybe in a little region that I’ve called “functions”.

Actually, if I have to have a “functions” region, then I have too many functions and I’m going to split them out anyway.

The scripts I was looking at were not laid out this way.

Sure, there was what appeared to be a function region, but there were also functions further down the script, created just before they were needed.

Hence, manually scanning the whole script, taking a note and a copy of each function before moving on again.

Long, tedious, wasteful.

There is a way!

Like I mentioned at the start, in the “pertinent” region, Chris Dent has a function that we have availed of before and that we can use here.

Let’s take a look at what it gives us…

First of all, we get a list of the build scripts.

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1

So we now have a list of the scripts. Each one of these scripts may, or may not, have one or many functions defined within it.

How are we going to get these?

We pipe this list to our Get-FunctionInfo function.

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1 |
    Get-FunctionInfo -ErrorAction SilentlyContinue -IncludeNested

Perfect! Now to automate the final part of the manual process. Can we grab the definition of these functions and split them out to a separate file per function?

First question is can we grab the function definitions?

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1 |
    Get-FunctionInfo -ErrorAction SilentlyContinue -IncludeNested |
    ForEach-Object {
        $_.Scriptblock.Ast.Parent.Extent.Text
    }
I’m going to ignore that GetCurrentDateFormat function

Final bit

Now that we know that we can grab the function definition, it’s a quick step to output the contents into a file.

Get-ChildItem -Path .\Git\build-scripts\ -Filter *.ps1 |
    Get-FunctionInfo -ErrorAction SilentlyContinue -IncludeNested |
    ForEach-Object {
        $_.Scriptblock.Ast.Parent.Extent.Text |
            Out-File -FilePath ".\Git\build-scripts\build\$($_.Name).ps1"
    }

And just to double check…

Lovely!

All the functions are split off into their own .ps1 files where they can be reviewed, have tests created for them, and/or be improved.

It’s nice to push the bottleneck down the pipeline. Now I’m wondering if there’s a way we can bulk introduce Pester tests…
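Thinking out loud, a rough sketch of that idea (my own stub template, nothing more) could scaffold a skipped test per extracted function file:

# Rough sketch: scaffold a skipped Pester test per extracted function file.
Get-ChildItem -Path .\Git\build-scripts\build\ -Filter *.ps1 |
    Where-Object Name -NotLike '*.Tests.ps1' |
    ForEach-Object {
        @"
. "`$PSScriptRoot\$($_.Name)"

Describe '$($_.BaseName)' {
    It 'has real tests written for it' -Skip {
    }
}
"@ | Out-File -FilePath ($_.FullName -replace '\.ps1$', '.Tests.ps1')
    }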

Dot Sourcing with PSScriptRoot

TL;DR: Use . $PSScriptRoot\ instead of . .\ if you’re using the script’s location as a reference to load other files.

Words: 1033

Time to read: ~ 5 minutes

Update (2019-08-14): Thanks to Cory Knox ( twitter | github | twitch ) for pointing out that $PSScriptRoot is not available in PowerShell 2.0.

I wrote before about our Build Process and how I was in the process of splitting its functions out. Even how, in the course of splitting out the functions and testing them, I found a bug in our current process.

First split

The first split that I did, I consider relatively simple.

I extracted the functions that were defined in the monolithic script into their own .ps1 files.
Then I created a Pester ( github | twitter ) file for each function.

I did this so I could confirm that the functions worked as they were expected to work.
Also so that I could confirm that the functions still worked as they were expected to work if I made any changes.

And I plan to make changes to them in the future.

It was here that I found the bug in the old build process and it was here that I was able to sell the idea of isolating the function definitions and creating tests for them.

However, as with most relatively simple changes, it created an unforeseen problem that I didn’t have a test for.

You have to put back

The functions that I had isolated out from the script and tested were still being called from the script.

So we had to load them back in.

That seems simple enough, even if it’s not something that I or others have really looked up before. But I’ve had to, so below is my minimal, complete, reproducible example.

Let’s Dot Source them into the script.

Get-Help about_scopes

To add a function to the current scope, type a dot (.) and a space before
the path and name of the function in the function call.

about_scopes

But where

Adding these functions back into the script should be an easy process. The layout of the folders and the scripts for these examples are:

  • The script is in the parent folder
    Blogs\PSScriptRootVersusDot\script.ps1
  • The extracted functions are in the same folder
    Blogs\PSScriptRootVersusDot\<extracted functions>.ps1

So our frame of reference is our script, and we know where our functions to import are based on the location of our script.

Luckily PowerShell has us covered there

Get-Help about_scripts

To run a script in the current directory, type the path to the current
directory, or use a dot to represent the current directory, followed by a
path backslash (.\).
For example, to run the Get-ServiceLog.ps1 script in the local directory,
type:
.\Get-ServiceLog.ps1

about_scripts

So we need to use a dot (.) to add a function into the current scope and we can use a dot (.) to run a script in the current directory? Let’s check it out…

Careful, this is wrong… 😉

Example 01

function Get-Name {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Name
    )

    begin {}

    process {
        if (-not ($PSBoundParameters.ContainsKey('Name'))) {
            $Name = 'there'
        }
        
        [PSCustomObject]@{
            Name = $Name
            Message = "Hello $Name"
        }
    }

    end {}
}

This function doesn’t really do much but it’s vital for the following function.

function ConvertTo-Message {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Receiver
    )

    begin {
        Write-Verbose -Message "[$((Get-Date).TimeOfDay)][$($MyInvocation.MyCommand)] Importing function Get-Name"
        . .\Get-Name.ps1
    }

    process {
        $GetNameParams = @{}

        if ($PSBoundParameters.ContainsKey('Receiver')) {
            $GetNameParams.Add('Name', $Receiver)
            Write-Verbose ($GetNameParams | Out-String)
        }

        $MessageDetails = Get-Name @GetNameParams

        "To $($MessageDetails.Name),`n$($MessageDetails.Message)"
    }
}

Let’s check this out now…

ConvertTo-Message -Verbose

It works!

So my understanding was that if you need to import a function, you only need to use dots: dot source and dot location it.
In this, as with many things, my understanding was wrong.

What I failed to fully grasp were the words “the current directory”. Now, most of my scripts so far don’t use the *-Location cmdlets, but one of the build scripts did.

Let’s make a change to our ConvertTo-Message function to change the location and see how that affects us and whether our importing still works…

Example 02

function ConvertTo-Message02 {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Receiver
    )

    begin {
        Push-Location -Path ..\
        Write-Verbose "We had to go back up for some reason to $((Get-Location).Path)"

        Write-Verbose -Message "[$((Get-Date).TimeOfDay)][$($MyInvocation.MyCommand)] Importing function Get-Name"
        . .\Get-Name.ps1
    }

    process {
        $GetNameParams = @{}

        if ($PSBoundParameters.ContainsKey('Receiver')) {
            $GetNameParams.Add('Name', $Receiver)
            Write-Verbose ($GetNameParams | Out-String)
        }

        $MessageDetails = Get-Name @GetNameParams

        "To $($MessageDetails.Name), $($MessageDetails.Message)"
    }

    end {
        Pop-Location
        Write-Verbose "We're back to $((Get-Location).Path)!"
    }
}

ConvertTo-Message02 -Verbose
Hello where?

Explain or I start swinging

The dot used to represent the location is, as I’ve said before, for the current location. Our ConvertTo-Message02 function changed its location as part of the script.

When we used the “dot source dot location” method, we weren’t using where our function is as a frame of reference to import the other functions. We were using what directory we are currently in.

If we change the location or try and call the function from anywhere that is not the directory where the function is defined, the function is not going to work.

Push-Location C:\
ConvertTo-Message -Verbose
Pop-Location

Anywhere

What we can do is actually use our function as a frame of reference.

PowerShell has a lovely automatic variable that we can use for this called $PSScriptRoot

Get-Help about_automatic_variables

$PSScriptRoot
Contains the full path of the executing script's parent directory. In
PowerShell 2.0, this variable is valid only in script modules (.psm1).
Beginning in PowerShell 3.0, it is valid in all scripts.

about_automatic_variables
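A quick, hypothetical way to see the two frames of reference side by side (the path is made up):

# Save as C:\Temp\Show-Root.ps1 (a made-up path), then run it from any other directory.
"Current directory : $((Get-Location).Path)"
"Script's directory: $PSScriptRoot"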

Example 03

Let’s try again, shall we?

function ConvertTo-Message03 {
    [CmdletBinding()]
    param (
        [Parameter(Position = 0)]
        [String]
        $Receiver
    )

    begin {
        Push-Location -Path ..\
        Write-Verbose "We had to go back up for some reason to $((Get-Location).Path)"

        Write-Verbose -Message "[$((Get-Date).TimeOfDay)][$($MyInvocation.MyCommand)] Importing function Get-Name"
        . $PSScriptRoot\Get-Name.ps1
    }

    process {
        $GetNameParams = @{}

        if ($PSBoundParameters.ContainsKey('Receiver')) {
            $GetNameParams.Add('Name', $Receiver)
            Write-Verbose ($GetNameParams | Out-String)
        }

        $MessageDetails = Get-Name @GetNameParams

        "To $($MessageDetails.Name), $($MessageDetails.Message)"
    }

    end {
        Pop-Location
        Write-Verbose "We're back to $((Get-Location).Path)!"
    }
}

Let’s try the hard test first. We’ll move to the root of the C:\ drive and try and run it from there.

Push-Location C:\
ConvertTo-Message03 -Verbose
Pop-Location
Hello THERE!

Push

Now that I know how to properly use the location of a script as a frame of reference, am I going to use it more?

Yes and no.

Yes, it is great for catching these errors and for short, sharp scripts.

But I should really be pushing these up into a module. We use them often enough that there is no reason why we shouldn’t.
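As a hedged sketch of what that might look like (BuildFunctions.psm1 is a name I’ve made up), the .psm1 can dot source every function file relative to itself, which is just $PSScriptRoot doing the same job again:

# BuildFunctions.psm1 - a hypothetical module file.
# Dot source every extracted function relative to the module's own location.
Get-ChildItem -Path $PSScriptRoot\*.ps1 |
    ForEach-Object { . $_.FullName }

Export-ModuleMember -Function *

Then Import-Module works from any location; no *-Location juggling required.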

That’s the next action I guess. At least I have more knowledge than when I started.

That’s what counts.

-ExcludeProperty in PowerShell Core

Words: 183

Time to read: ~ 1 minute

A while ago I talked about an issue that I had in Windows PowerShell when I was trying to use the -ExcludeProperty parameter of Select-Object.

In case you missed it (it was one of my first posts), you can read it here.

Browsing StackOverflow

Checking out other people’s code is a great way to get exposed to different coding styles and ideas, so I like to get a daily email of PowerShell questions from StackOverflow.

In the comments of one of these questions, Michael Klement ( twitter ) pointed out something, a little detail that I didn’t know but really appreciate.

There is a difference between Select-Object in Windows PowerShell and PowerShell Core.

Difference

Let’s take a basic example

Windows PowerShell

# Doesn't work - and it fails *silently*
[PSCustomObject]@{
    Version = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
} | Select-Object -ExcludeProperty Redundant

# Works
[PSCustomObject]@{
    Version = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
} | Select-Object -ExcludeProperty Redundant -Property *

PowerShell Core

[PSCustomObject]@{
    Version = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
} | Select-Object -ExcludeProperty Redundant
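A quick way to see the difference for yourself; run this hedged sketch in both shells and compare the surviving property names:

# Compare which properties survive the exclusion in each shell.
$Object = [PSCustomObject]@{
    Version   = $PSVersionTable.PSVersion
    Redundant = [Guid]::NewGuid()
}

($Object | Select-Object -ExcludeProperty Redundant).PSObject.Properties.Name
# Windows PowerShell: Version, Redundant (the exclusion is silently ignored)
# PowerShell Core   : Version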

More Intuitive

Sometimes I’m more excited about the little things as I think they are more impactful. I’m excited about this.

The more you know!

Pester showed me a bug in our existing build process. Can you find it?

Words: 729

Time to read: ~ 4 minutes

Continuous Improvement

Working on the goal of continuous improvement of our processes, I got given access to the PowerShell scripts for our Build Process.

Credit where credit is due, these PowerShell scripts were created by people unfamiliar with the language.

They have done a great job with their research to build scripts that do what they want so I’m not going to nit-pick or critique.

I’m just going to improve and show/teach my colleagues why it’s an improvement.

Original State

The current state of the script is monolithic.

We have a single script that defines functions and then calls them later on. All mixed in with different foreach () and Write-Hosts.

Here’s a rough outline of the script.

$param01 = $args[0]
$param02 = $args[1]
$param03 = $args[2] -replace 'randomstring'

... Generic PowerShell commands ...

function 01 {
    function 01 definition
}

function 02 {
    function 02 definition
}

function GetPreviousTag {
    function GetPreviousTag definition
}

... More generic PowerShell commands ...
... that call our GetPreviousTag function ...
... etc ...

That was it.

1 giant script.
0 tests.

Extracting Functions for Tests

Now, scripts are notoriously hard to test. I’ve written about how I’ve done that before but, honestly, if you really want to know, then you need to check out Jakub Jares ( blog | twitter ).

Knowing how difficult testing scripts can be, the first thing I decided to do was take the functions in the script and split them out. This way, they can be abstracted away and tested safely.

I also didn’t want to take on too much at one time, so I chose a random function, GetPreviousTag, and only actioned that one first.

Taking a look at GetPreviousTag

The simplest test that I can think of is a pass/fail test.

What do we expect to happen when it passes, and what do we expect to happen when it fails?

To figure that out, we’ll need to see the GetPreviousTag function. So I copied and pasted the code over to its own file, GetPreviousTag.ps1 (sanitised, of course).

function GetPreviousTag {
    # Run the "git describe" command to return the latest tag
    $lastTag = git describe
    # If no tag is present then return false
    if ([string]::IsNullOrEmpty($lastTag)) {
        return $false
    }
    else {
        # If a tag is returned then we need to ensure that its in our expected format:
        # If a commit has taken place but the tag hasn't been bumped then the git describe command will return 
        # refs/tags/1.1.0.a.1-33-gcfsxxxxx, we only want the 1.1.0.a.1 part of the tag so we split off everything after
        # the "-" and trim the "refs/tags/" text.   
        $lastTagTrimmed = $lastTag.Split("-") | Select-Object -First 1
        $lastTagTrimmed = $lastTagTrimmed -replace 'refs/tags/',''
        # Verify that last tag is now in the expected format
        if ([regex]::Match($lastTagTrimmed,'\d+\.\d+\.\d+\.\c\.\d+')) {
            return $lastTagTrimmed
        }
        else {
            return $false
        }
    }
}

It’s nicely commented and glancing through it, we can see what it does.

  • Gets the output of git describe
    • If there’s no output:
      • return $false
    • If there is output:
      • Split on a dash, and get the first split
      • Remove the string ‘refs/tags/’
        • If the remainder matches the regex:
          • Return the remainder
        • If the remainder does not match the regex:
          • return $false

So we have our pass outcome (the trimmed tag) and our fail outcome ($false).

More importantly, can you see the bug?

The Pester Test

Here is the Pester test I created for the above function.

It’s relatively simple, but even this simple test highlighted the bug that had stayed hidden for so long.

$here = Split-Path -Parent $MyInvocation.MyCommand.Path
$sut = (Split-Path -Leaf $MyInvocation.MyCommand.Path) -replace '\.Tests\.', '.'
. "$here\$sut"

Describe "GetPreviousTag" {
    Context -Name 'Pass' -Fixture {
        Mock -CommandName git -MockWith {
            'refs/tags/1.1.0.a.1-33-gcfsxxxxx'
        }

        It -Name 'returns expected previous tag' -Test {
            GetPreviousTag | Should -BeExactly '1.1.0.a.1'
        }
    }

    Context -Name 'Fail : empty git describe' -Fixture {
        Mock -CommandName git -MockWith {}

        It -Name 'returns false' -Test {
            GetPreviousTag | Should -BeFalse
        }
    }

    Context -Name 'Fail : regex does not match' -Fixture {
        Mock -CommandName git -MockWith {
            'refs/tags/NothingToSeeHere-33-gcfsxxxxx'
        }

        It -Name 'returns false' -Test {
            GetPreviousTag | Should -BeFalse
        }
    }
}

Thanks to the above Pester test, I was able to find the bug, fix it, and also be in a position to improve the function in the future.

If you can’t find the bug, run the above test and it should show you.

Finally

If there’s one thing to take away from this post, it is this: test your scripts.

I’ve found Pester so useful that I decided to put my money where my mouth is…literally.

It’s more than deserved. Now back to continuous improvement…

Hiding Warnings in dbatools

Words: 726

Time to read: ~ 4 minutes

Warning by default

dbatools warns by default.

From what we’ve seen, beginners to PowerShell do not want to be greeted by a sea of red when they use, misuse, or abuse a dbatools command.

PowerShell errors normally contain useful information on what went wrong. With this information, the “what went wrong” can be fixed.

That being said, if you are writing PowerShell scripts and not raising valid error messages then I highly advise you to go back and fix that.

dbatools raises these error messages as friendly warning messages since we’ve found people will read a warning message quicker than they will read an error message.

Enabling Errors.

dbatools has a common parameter called -EnableException which changes this behaviour. When you use this switch parameter with a dbatools command, it should change the lovely, user-friendly warning to a lovely-but-unfairly-treated error message.

Here is the default process that raises a warning.

# Get the first backup from this path.
$BackupPath = (Get-ChildItem -Path D:\BACKUPS\ -Filter *.bak)[0]

# Restore it to a SQL Server instance that does not exist.
$BackupPath | Restore-DbaDatabase -SqlInstance DoesNotExist -OutputScriptOnly
Works on PowerShell 7.0

Here is the process when we use -EnableException that raises an error.

# Restore to a SQL Server instance that does not exist with errors.
$BackupPath | Restore-DbaDatabase -SqlInstance DoesNotExist -OutputScriptOnly -EnableException
Even shows the line.

Hiding the messages.

Depending on whether you use the default parameters or add in the -EnableException parameter, you have ways to hide these messages.

Hiding default warnings

# Create a splat to save me typing over and over again.
$RestoreParam = @{
    SqlInstance      = 'DoesNotExist'
    OutputScriptOnly = $true
}

# Warnings actions
$WarningToDo = @{
    WarningAction   = 'SilentlyContinue'
    WarningVariable = 'RestoreWarning'
}

# Hide and capture the warning.
$BackupPath | Restore-DbaDatabase @RestoreParam @WarningToDo
Warning! Warning!

Here we can see that the warning message was not output but it was captured to our variable called “RestoreWarning”.

Hiding Errors

Errors are slightly different, and I’ll show you why. Although you can do the same thing, I use a try ... catch ... block instead.

I also don’t try to hide errors; I relay them out instead.

try {
    $BackupPath | Restore-DbaDatabase @RestoreParam -EnableException
} catch {
    "Error happened! Here is the error message"
    $_
}
Try and catch

I was originally worried that the above method meant that you couldn’t reference the error that just occurred, like we could with the warning methods and $RestoreWarning, but I had forgotten about the $Error variable!

# What was the latest error that we had?
$Error[0]
Tada

I highly recommend that you capture that latest error message through $RestoreError = $Error[0], because the next error will be put on that stack and ours will be pushed down!
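Something like this, straight after the catch block:

# Grab a copy immediately; $Error[0] always holds the most recent error,
# so the next failure would push ours down to $Error[1].
$RestoreError = $Error[0]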

Hide the error

However, you can hide the error message that is raised by either an empty catch block or an -ErrorVariable.

Empty catch block

Please don’t do this.

I’m happy that PSScriptAnalyzer has a way to check for empty catch blocks, so there is an incentive not to do this. However, it is possible.

# Empty catch = Boo!
try {
    $BackupPath | Restore-DbaDatabase @RestoreParam -EnableException
} catch {

}
Nothing!

Do I like it? No. It is possible? Yes.

ErrorVariable

I mentioned that capturing errors with -ErrorVariable is a bit different and I’m going to show you why.

First let’s show that using -ErrorVariable can work to capture the error stream and stop it showing on the console.

# Error actions, mirroring the warning splat from earlier.
$ErrorsToDo = @{
    ErrorAction   = 'SilentlyContinue'
    ErrorVariable = 'RestoreError'
}

$BackupPath | Restore-DbaDatabase @RestoreParam @ErrorsToDo -EnableException
Captured!

And now here is the difference between -WarningVariable and -ErrorVariable.

# Count the number of objects in these variables.
$RestoreWarning.Count
$RestoreError.Count
order by int or string, it works…

Let’s take a look at $RestoreError so.

2 different types of objects
All the info

If you are wondering why this is, I believe it’s because, when we use the -WarningVariable method, we are letting dbatools run the commands, bubble up the main error into a single warning, and then we capture that warning message.

When we use the -ErrorVariable method, we are saying “give me everything”: there is no bundling happening, and we see all the errors that occur – including those from the internal functions and cmdlets that dbatools uses.

All in All

I like the way that dbatools shows errors as warnings by default for everyday and beginning users.

I appreciate the fact that dbatools allows us to turn off that behaviour for our scripting and tool-making purposes.

However, if you want nothing, you can have that too.

Finally, like everything, know your tools so you can get the most use out of them!