Figuring out a PowerShell version of Dynamic SQL

Next step is wondering if we can avoid SQL/PowerShell injection…


It’s Been A While But…

…recently, I ran into an issue in PowerShell that, if it had been in SQL, I would have solved quite handily with some Dynamic SQL.

“Alas, this is PowerShell,” I thought to myself. “And there is no way that one knows of to create dynamic commands that build themselves up!”

Now, there are two things that you have to realise about when I’m thinking to myself:

  1. I think more fancy than I am in real life, and
  2. I’m nearly always wrong!

So please see below for my example problem and the “dynamic PowerShell” created to overcome the issue!

TL;DR

Script blocks.
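
If the gist is all you need, here’s a minimal sketch (the Get-Date command is just a stand-in for whatever command text you end up building dynamically):

$command = 'Get-Date -Format "yyyy-MM-dd"' # any command built up as a string
$block = [scriptblock]::Create($command)   # turn the string into a script block
& $block                                   # invoke it with the call operator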

My Example Issue…

…involves a basic json file.


{
    "ServerName": "Mine",
    "Database": {
        "Table": {
            "TableName": "Lime",
            "Fruit": true
        },
        "DatabaseName": "Mime",
        "View": {
            "ViewName": "Rhyme",
            "IsYAVowel": false
        }
    }
}

BasicJson.json

Now, let’s say that we need to find the value of a certain property, say “IsYAVowel”. Easy enough, we can just use something like the following:

$jsonFile = Get-Content -Path .\BasicJson.json | ConvertFrom-Json
$jsonFile.Database.View.IsYAVowel
False = Yes… 

But that doesn’t really help our cause of needing to use dynamic PowerShell…so I’m going to ignore it and use this more “verbose” way instead!


Get-Content -Path .\BasicJson.json |
ConvertFrom-Json |
Select-Object -ExpandProperty Database |
Select-Object -ExpandProperty View |
Select-Object -ExpandProperty IsYAVowel

Like we said, easy enough. We are basically…

  • changing the json into an object,
  • selecting and expanding the Database property,
  • selecting and expanding the View property, and
  • finally selecting and expanding the IsYAVowel property to get its value.
Is Y a vowel? False

What we cannot do is either of the following:


Get-Content -Path .\BasicJson.json |
ConvertFrom-Json |
Select-Object -ExpandProperty Database.View.IsYAVowel

Get-Content -Path .\BasicJson.json |
ConvertFrom-Json |
Select-Object -Property Database.View.IsYAVowel

Bottom result is a great example showing PowerShell expects a single property.

So what are we supposed to do when we are asked to select the value of a property and all we are given is a path like one of these:

  • “Database.View.IsYAVowel”, or
  • “Database.Table.Fruit”, or
  • “Database.DatabaseName”, or even
  • “ServerName”?

That is where a dynamic way of writing PowerShell can come in handy!

Splitting the string…

… seems simple enough.

Richard Siddaway ( blog | twitter ) had a great post recently about “Variable Squeezing”, which we are going to use here while splitting the string and assigning the pieces to a variable.


$propertyPath = 'Database.Table.Fruit'
($propertyPathProperties = $propertyPath -split '\.')

Note the backslash ( \ ) before the dot ( . ) because…regex

Now that we have an ordered list of properties we can place them into our string.


$propertyPathProperties | ForEach-Object -Begin {
$Stmt = "Get-Content -Path .\BasicJson.json |`n`tConvertFrom-Json"
} -Process {
$Stmt += " |`n`tSelect-Object -ExpandProperty $_"
} -End {
$Stmt
}

Formatted for your viewing pleasure!

You know it won’t be as simple as that, but we’re going to try calling the string directly and see what happens.

$Stmt
& $Stmt

 

Has that been treated as a single command?

How can we get PowerShell to treat that as a Script Block?

Well it turns out that the answer is pretty damn simple (once you know what to look for).

PowerShell, like other languages, has types, and one of those types is [scriptblock], which has a lovely method called Create(). If we look at the overload definition (by running the method without the accompanying brackets) we can see that all it needs is a string, so let’s pass our script string in!


[scriptblock]::Create
[scriptblock]::Create($Stmt)
$ActualStmt = [scriptblock]::Create($Stmt)

ActualStmt <> Stmt

 

It looks just like a string now, right? Has that made any difference? Well, let’s see…


[PSCustomObject]@{
    Name = '$Stmt'
    Type = $Stmt.GetType().FullName
}, [PSCustomObject]@{
    Name = '$ActualStmt'
    Type = $ActualStmt.GetType().FullName
}

One of these things is not like the other!

And now, we can execute our $ActualStmt and…


& $ActualStmt

This is a very anti-climactic screenshot…

And we now have “dynamic PowerShell”, or as they’re normally called, Scriptblocks!
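
And just to round things off, here’s one way the whole trick could be wrapped up in a helper function. The name Get-JsonPropertyByPath is purely made up for this sketch, but the body is the same string-building and [scriptblock]::Create() routine from above.

function Get-JsonPropertyByPath {
    param (
        [string] $Path,          # path to the json file, e.g. .\BasicJson.json
        [string] $PropertyPath   # dot-separated property path, e.g. 'Database.View.IsYAVowel'
    )

    $PropertyPath -split '\.' | ForEach-Object -Begin {
        $Stmt = "Get-Content -Path $Path |`n`tConvertFrom-Json"
    } -Process {
        $Stmt += " |`n`tSelect-Object -ExpandProperty $_"
    } -End {
        & ([scriptblock]::Create($Stmt))
    }
}

Get-JsonPropertyByPath -Path '.\BasicJson.json' -PropertyPath 'Database.View.IsYAVowel'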

Author: Shane O'Neill

DBA, T-SQL and PowerShell admirer, Food, Coffee, Whiskey (not necessarily in that order)...

6 thoughts on “Figuring out a PowerShell version of Dynamic SQL”

    1. I told you not to ask 🙂

      We had a really chaotic environment with self hosted customers trying to manage a big internal application that they should have hosted with us and instead decided to “save money” by skipping the more expensive hosted costs.

      Unfortunately, what was communicated over and over again was that hosting yourself meant the cost was discounted because of the additional manpower you would be throwing behind the app, which in 95% of cases never happened.

      Early on the company had allowed each customer to request customizations not only to the usual suspects like reports, but to core business logic and functionality sometimes without involving the development team.

      There were various levels of quality to these effective forks of the database, and internal teams weren’t the only source: oftentimes the client site would have a helpful person who would slowly become the Accidental DBA, but that same person might also be pressed into making helpful changes to the queries, indexes, or whatever, and oftentimes under time pressure might forget to work with our team to manage the excellent change management process we had engineered.

      So, when it came time to upgrade their product, we needed a backup of their database to verify that we had even captured the changes, as they diverged so much. We tested on each customer’s current version of reality (on the order of a few thousand stored procedures) for each upgrade. Woo enterprise software!

      Most of our customers pledged to create a test environment, and in some cases they handled it well and there were no issues. In most cases we had significant delays, and in some cases we dealt with a level of mismanagement of resources that traipsed into negligence by customer IT teams and database resources.

      With the goal in mind of YEARLY upgrades, we had to get creative to be able to simply get the customer’s data to present a faithful change script.

      So what did we always pretty much have?
      * SA on a SQL box, and an active connection.
      * Management sign off that they didn’t care as long as we didn’t break anything.
      * We oftentimes didn’t even have PowerShell 3.0 on the client machines, though we did on the SQL Servers.

      What did we need?
      * All their data and schema, as quickly as possible without direct access to anything else and with the minimum time blocking their work.

      We didn’t have the luxury of database backups, and the client machines we were allowed to use were often incredibly locked down, but the SQL Servers were at the same time amazingly misconfigured and could do all sorts of awful things, oftentimes including … accessing the internet directly.

      So it came down to sanctioned data exfiltration because of poorly managed IT teams and CEOs who wanted it done and weren’t so concerned about how as much as now.

      Connect – dump PowerShell scripts through pipeline of indeterminate size (hence chunking and base64ing) rebuild scriptblocks (including a base64 encoded version of Schemazen and bcp.exe heh) – build queries which just return data to bulk copy down (since most of their data was often stored in one giant documents table which we never needed), use schemazen to script out their current schema state, and then compress that all and either put it on a file share, post it over a secure channel, or even convert it to binary so it can be inserted and selected out 😐

      Really most of it I could have used xp_cmdshell for directly, but it has string limits and my code got large when I started including modules (imagine putting all of dbatools in an xp_cmdshell call) so chunking and executing on the target machine had its benefits.

      I also oftentimes had terribly slow links for the client machines, hence the local bcp dumping of relevant data.

      Was this crazy? Yes. Was it fun? kinda.

  1. Love the post, I had this same type of problem and attacked it the same way.
    I started adding more scriptblocks and ran into the issue of being annoyed at setting up bigger and bigger strings, so I looked into what to do about that, and the blog post I referenced is still up https://techstronghold.com/blogs/scripting/powershell-tip-convert-script-block-to-string-or-string-to-script-block

    It allows you to get the definition of a normal script block, convert it to a string, pass the string, and then convert it back to a scriptblock again.
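
    Roughly, the round trip looks something like this (a minimal sketch rather than the exact code from that post):

    $original = { Get-Date | Select-Object -ExpandProperty DayOfWeek } # a normal script block
    $asString = $original.ToString()             # script block -> its definition as a string
    $rebuilt = [scriptblock]::Create($asString)  # string -> script block again
    & $rebuilt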

    I used it to do horrible things, specifically to allow me to send chunked powershell files in base64 over a remote sql connection in xp_cmdshell to re-assemble and run.

    DONT ASK.
