Normally, I would have skipped this month because I’ve talked about permissions before, and I feel like it’s one of the better posts I’ve written. But I’m trying to write more, so that can’t be an excuse anymore.
So, let’s talk about AGs, DistAGs, and SQL Server authentication login SIDs.
AGs and DistAGs
In work, we have Availability Groups (AGs) and Distributed Availability Groups (DistAGs), depending on the environment level, e.g. Prod, Pre-Prod, QA, Dev, etc.
When we plan failovers for maintenance, we run through a checklist to ensure we can safely complete the failover.
One of these checks is to ensure that the correct logins with the correct SIDs are on each replica. Otherwise, we could get flooded with a lot of "HEY! My application doesn't connect anymore! Why you break my application?" messages when apps try to log in to the new primary and the user SIDs don't match the login SIDs.
While I don’t have a problem with doing a query across registered servers, I threw together a quick script in PowerShell with dbatools that does the job of checking for me. And, like most things that happen in business, this temporary solution became part of our playbook.
Who knows! Maybe this quick, throwaway script could also become part of other people’s playbooks!
I’m not sure how I feel about that…
Scripts
We’re using dbatools for this because I think it’s the best tool for interacting with databases from a PowerShell session regardless of what Carbon Black complains about.
Then, we can use the ever-respected Get-DbaLogin to get a list of logins and their SIDs per replica. If we have a mismatch, we will have an issue after we failover. So best nip that in the bud now (also, I had this phrase SO wrong before I looked it up!).
$InstanceName = 'test'
# $InstanceName = $null # Uncomment this line to get ALL

$Servers = Get-DbaRegServer
if ($InstanceName) {
    # Find the server that ends in our instance name
    $Servers = $Servers | Where-Object ServerName -match ('{0}$' -f $InstanceName)
}

# Get all replicas per instance name that are part of AGs...
$DAGs = Get-DbaAvailabilityGroup -SqlInstance $Servers.ServerName |
    Where-Object AvailabilityGroup -notlike 'DAG*' | # Exclude DAGs, which just show our underlying AG names, per our naming conventions.
    ForEach-Object -Process {
        $instance = $null
        $instance = $_.InstanceName
        foreach ($r in $_.AvailabilityReplicas) { # I'd like 1 replica per line, please.
            [PSCustomObject] @{
                InstanceName = $instance
                Replica      = $r.Name
            }
        }
    } |
    Select-Object -Property InstanceName, Replica -Unique |
    Group-Object -Property InstanceName -AsHashTable

# Get a list of Logins/SIDs that don't match the count of replicas, i.e. someone forgot to create with SIDs...
foreach ($d in $DAGs.Keys) {
    Write-Verbose "Working on instance: $d" -Verbose

    Get-DbaLogin -SqlInstance $DAGs[$d].Replica |
        Group-Object -Property Name, Sid |
        Where-Object Count -ne $DAGs[$d].Count |
        Select-Object -Property @{ Name = 'Instance';    Expression = { $_.Group[0].InstanceName } },
                                @{ Name = 'Occurrences'; Expression = { $_.Count } },
                                @{ Name = 'Login/SID';   Expression = { $_.Name } }
}
I got 99 problems, and Login/SIDs are one! Or I have 5 Login/SID problems.
Cure
They say "prevention is better than cure", and I'd love to get to a stage where we can "shift left" on these issues, catching them before someone creates a login with the same name but a different SID on a replica.
But we're not there yet. At least we can find the issues just before they impact us.
Welcome to T-SQL Tuesday, the monthly blogging party where we are given a topic and have to talk about it. Today, we have Rob Farley ( blog | bluesky ), talking about integrity.
I’ll admit that it’s been a while since I’ve written a blog post. It’s been a combination of either burnout or busyness, but let’s see if I still have the old chops. Plus, I’m currently sick and resting in bed, so I have nothing better to do.
I’m one of the few who haven’t had experiences with corruption in our database. Apart from Steve Stedman’s ( blog ) Database Corruption Challenge, that is.
With that being said, what do I have to say about this topic then? Well, let’s talk about the first version of corruption checking automation that we introduced in work.
Now, this is just the bare bones; there have been many different iterations since then, but the essence is here.
Overview
Like many shops out there, we can't run corruption checking on our main production database instance. So, then, what do we do? We take the backups, restore them to a test instance, and then run corruption checking on those restored databases.
At least this way we also verify that the backups we take can be restored.
But, I don’t want to spend every day manually restoring and corruption checking these databases, so let’s automate this bit…
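The restore-then-check step can be sketched with dbatools' Test-DbaLastBackup, which restores the most recent backups to another instance and runs DBCC CHECKDB against the restored copies. The instance names below are placeholders:

```powershell
# Sketch, assuming dbatools is installed and 'ProdInstance'/'TestInstance' are reachable.
Import-Module dbatools

# Restore the latest backups of every database on ProdInstance to TestInstance,
# run DBCC CHECKDB on each restore, then report anything that didn't succeed.
Test-DbaLastBackup -SqlInstance 'ProdInstance' -Destination 'TestInstance' |
    Where-Object { $_.RestoreResult -ne 'Success' -or $_.DbccResult -ne 'Success' } |
    Select-Object SourceServer, Database, RestoreResult, DbccResult
```

Wrap that in an agent job or scheduled task and the daily manual grind goes away.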
Limitations
A quick scour of the interwebs brought back something extremely close to what I want by Madeira Data Solutions. It had some extras that I didn’t want, though.
More importantly, it used some functions that our dreaded antivirus software still screams false positives about. So, they would stop our script from running if we even tried.
It has been quite a while since I have coded just for fun, so I’m thankful to Reitse for suggesting this. Unfortunately, I don’t have a pre-baked idea for this T-SQL Tuesday, so let’s see what we can come up with.
Echoes
Around December 2021, Wordle hit the virtual scenes. Yeah, nearly two years ago. How do you feel about that?
I got swept up in that wave for a while, in the same way I got swept up in the other trends of my time, like Pokemon, Sudoku, and Angry Birds.
Eventually, I stopped when I found a PowerShell script by Kieran Walsh ( github | twitter ) where you could narrow down to the correct answer by putting in the results of your guess each round.
This hack led to me realising how much time I was spending on Wordle and that I should stop, much like I did with Pokemon, Sudoku, and Angry Birds.
So, what better thing to do than to try and recreate that PowerShell script in T-SQL?
Rules
I must recreate as much of the script as possible in T-SQL in only one hour. Yes, I’m aware that’s more of a rule than rules but Wordle needs five letters dammit, and “rule” was crying out for that prosthetic “s”!
Total (code)
Don’t worry, you just have to fill in the variables on lines 19-26.
Split
A few things need to be taken care of right off the bat.
The potential answers have to be stored somewhere in the database. Thankfully, I had the answers in a text file, so creating a table and then inserting them was easy.
I could do the insert with flat files, but I already have PowerShell open so…
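As a sketch of that PowerShell step, dbatools' Write-DbaDbTableData can bulk-load the answer file straight into a table; the file path, instance, database, and table names here are assumptions:

```powershell
# Sketch: load the Wordle answer list (one word per line) into dbo.WordleAnswers.
# 'localhost', 'Wordle', and the file path are placeholder values.
Import-Module dbatools

$answers = Get-Content -Path '.\wordle-answers.txt' |
    ForEach-Object { [PSCustomObject] @{ wordle_answers = $_ } }

$answers |
    Write-DbaDbTableData -SqlInstance 'localhost' -Database 'Wordle' `
        -Table 'dbo.WordleAnswers' -AutoCreateTable
```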
Next, we need the variables that we can create. If I can finish this before the 1-hour mark, I’ll turn this into a stored procedure with parameters and everything! Until then, it’s script and variable times.
DECLARE
    @known_letters AS varchar(5),
    @excluded_letters AS varchar(26),
    @position1 AS char(1),
    @position2 AS char(1),
    @position3 AS char(1),
    @position4 AS char(1),
    @position5 AS char(1),
    @correct_letters AS xml,
    @all_answers_sql AS nvarchar(MAX);

/* region Enter Variables here */
SET @known_letters = '';
SET @excluded_letters = '%[]%';
SET @position1 = NULL;
SET @position2 = NULL;
SET @position3 = NULL;
SET @position4 = NULL;
SET @position5 = NULL;
/* endregion Enter Variables here */
The PowerShell code has known_letters, excluded_letters, positions, and wrong_positions.
I can do all these easily enough, except for wrong_positions. I can’t think of a way to do hashtables in SQL that doesn’t equal a secondary table or user-table type, etc. I’ll leave that to the end if I have time.
known_letters is an array of strings. I haven’t updated the SQL Server version on my laptop in a while, so there is no string_split for me. Let’s do the XML way so.
/* region KnownLetters */
SELECT @correct_letters = CONCAT(
    '<known_letters>',
    REPLACE(@known_letters, ',', '</known_letters><known_letters>'),
    '</known_letters>'
);

SELECT
    [known] = [l].[y].value('.', 'char(1)')
INTO #KnownLetters
FROM
(
    VALUES
        (@correct_letters)
) AS [x] ([kl])
CROSS APPLY [kl].nodes('/known_letters') AS [l] (y);
/* endregion KnownLetters */
excluded_letters I can get away with by using some LIKE jiggery-pokery, where it will search for any characters between the square brackets.
positions I can split out into individual variables. I can more easily deal with them then, and it only ends up as an extra five variables this way.
Creating the table would have been handier if I had made a column for each character, but I didn’t, so it’s some SUBSTRING logic for me to get the individual characters out.
If we do know the positions of some of the letters, then I can strip out a lot of the potential answers straight away. I’m not a fan of Swiss-army-knife WHERE clauses, so I’ll do the dynamic SQL.
I’m also not a fan of WHERE 1=1 in my dynamic code, but I’m running low on time here, and it’s faster to add that in first and start everything else with an AND than it is to check if this is the first clause in the WHERE section or not.
Plus, I’m less against WHERE 1=1 than I am against Swiss-army-knife WHERE clauses.
/* region Known Positions */
CREATE TABLE #AllAnswers
(
    [wordle_answers] char(5),
    [char1] char(1),
    [char2] char(1),
    [char3] char(1),
    [char4] char(1),
    [char5] char(1)
);

SET @all_answers_sql = N'SELECT
    [wa].[wordle_answers],
    [g].[char1],
    [g].[char2],
    [g].[char3],
    [g].[char4],
    [g].[char5]
FROM [dbo].[WordleAnswers] AS [wa]
CROSS APPLY (
    VALUES (
        (SUBSTRING([wa].[wordle_answers], 1, 1)),
        (SUBSTRING([wa].[wordle_answers], 2, 1)),
        (SUBSTRING([wa].[wordle_answers], 3, 1)),
        (SUBSTRING([wa].[wordle_answers], 4, 1)),
        (SUBSTRING([wa].[wordle_answers], 5, 1))
    )
) AS [g] ([char1], [char2], [char3], [char4], [char5])
WHERE 1=1';

IF @position1 IS NOT NULL SET @all_answers_sql = CONCAT(
    @all_answers_sql,
    N'
AND [g].[char1] = ',
    QUOTENAME(@position1, '''')
);

IF @position2 IS NOT NULL SET @all_answers_sql = CONCAT(
    @all_answers_sql,
    N'
AND [g].[char2] = ',
    QUOTENAME(@position2, '''')
);

IF @position3 IS NOT NULL SET @all_answers_sql = CONCAT(
    @all_answers_sql,
    N'
AND [g].[char3] = ',
    QUOTENAME(@position3, '''')
);

IF @position4 IS NOT NULL SET @all_answers_sql = CONCAT(
    @all_answers_sql,
    N'
AND [g].[char4] = ',
    QUOTENAME(@position4, '''')
);

IF @position5 IS NOT NULL SET @all_answers_sql = CONCAT(
    @all_answers_sql,
    N'
AND [g].[char5] = ',
    QUOTENAME(@position5, '''')
);

SET @all_answers_sql = CONCAT(@all_answers_sql, N';');

PRINT @all_answers_sql;

INSERT INTO #AllAnswers
EXECUTE [sys].[sp_executesql] @stmt = @all_answers_sql;
/* endregion Known Positions */
Finally, we can UNPIVOT the individual characters for the words and join them with the known_letters to single down to those answers. As well as excluding characters that we know aren’t in the word.
Or else just return everything we have, minus excluded characters.
IF LEN(@known_letters) > 0
BEGIN
    SELECT
        *
    FROM #AllAnswers AS [w]
    UNPIVOT
    (
        [chars] FOR [chr2] IN ([w].[char1], [w].[char2], [w].[char3], [w].[char4], [w].[char5])
    ) AS [unpvt]
    JOIN #KnownLetters AS [kl]
        ON [unpvt].[chars] = [kl].[known]
    WHERE
        [unpvt].[wordle_answers] NOT LIKE @excluded_letters
END
ELSE
BEGIN
    SELECT
        *
    FROM #AllAnswers AS [a]
    WHERE [a].[wordle_answers] NOT LIKE @excluded_letters;
END;
Guilt
In the PowerShell script, you can add characters in the excluded_letters parameter that exist in the known_letters parameter, and it will correctly ignore them.
Alas, tempus fugit, and I didn't get to do the same for this T-SQL version. Maybe that has something to do with translating "time flies" into Latin and then looking up other sayings in Latin, but we can't say for sure. Mea culpa!
However, it’s been around 50 minutes with minor troubleshooting here and there, so time to test this bad boy.
I’ll take the first answer returned each time unless it is the answer we chose previously.
PowerShell
I'm not going to use the wrong_positions parameter here since I didn't re-create that in T-SQL. Still, I got lucky and got the correct answer on the third guess.
T-SQL
The T-SQL method doesn’t show each iteration as well as the PowerShell does. And, there’s more human brain power required to make sure you don’t enter the same letter in the known_letters and the excluded_letters variables.
Overall though, well done with a respectable four guesses.
Point
I’m not going to say that there is no point to these exercises.
Fun is as valid a point as any other. In a work world filled with more demands on our time than the number of Pokemon (there's still only 150, right?), more technologies to learn than combinations in Sudoku puzzles, and more people demanding the seemingly impossible out of you so that you want to yeet them at solid objects … something something Angry Birds, it's a welcome change to do something just for fun once in a while.
I have created a module cause nobody wants to do timesheets no more; they want PowerShell to do it for ya. Well, if this is what you need, then this is what I’ll give ya. (Ahem, apologies about that, songs get stuck in my head sometimes).
I Confess
I’ve worked with PowerShell for years but have never published a module before. I’ve helped write changes to a few, e.g. dbatools, dbachecks, and a few internal ones.
But actually creating and publishing one needed adding to my experience list.
There was a lot of gnashing of the teeth, wailing of the cries, and reading of the documentation.
There were a few things that I wanted to do before I published.
Creating tests against all the functions; done. Creating documentation for all the functions; done.
These were the easy sections; publishing the module was where I encountered the speedbumps.
So here’s a quick list of the aspects that threw me for a loop.
.PSD1 vs .PSM1 Files
I’m aware that the auto-loading of PowerShell modules boils down to a combination of the PSModulePath environment variable ($ENV:PSModulePath) and the .psm1 file for the module. But is there a default way to auto-generate that file?
I thought it was using the New-ModuleManifest command, but nope, that creates the .psd1 file. At least I don't have to worry about that.
The best practice is not to auto-load everything into the .psm1 file. It’s supposed to be more performant to re-create the functions’ definitions there. That’s not what I did.
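For illustration, the dot-source-everything approach I ended up with looks roughly like this; the public/private folder layout is an assumption about project structure, not a requirement:

```powershell
# Sketch of a .psm1 that dot-sources each function file, then exports the public ones.
# This is the convenient-but-slower approach; best practice inlines the definitions.
$public  = Get-ChildItem -Path (Join-Path $PSScriptRoot 'public')  -Filter '*.ps1'
$private = Get-ChildItem -Path (Join-Path $PSScriptRoot 'private') -Filter '*.ps1'

foreach ($file in @($public) + @($private)) {
    . $file.FullName
}

# Only the public function names get exported to the caller's session.
Export-ModuleMember -Function $public.BaseName
```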
Publishing
First of all, yes, anyone can publish to the PSGallery; you just need an account.
Did I know that you needed an account? Hell no. Did I find out? Hell yeah.
To be fair, they say as much when you try to publish the module, asking you for a NuGetApiKey. Pop open your profile in PSGallery, and you can generate it from there.
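The publish step itself is a one-liner once the key exists; the module name and key value below are placeholders:

```powershell
# Sketch: push a module to the PSGallery using the API key from your profile page.
# 'PSTimeSheets' and the key are placeholder values.
Publish-Module -Name 'PSTimeSheets' -Repository PSGallery -NuGetApiKey 'oy2a...your-key-here'
```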
Missing Values in the .PSD1 File
Remember a few paragraphs ago when I said I didn’t have to worry about the .psd1 file? Yeah, I was wrong. The command New-ModuleManifest is excellent. But, a few key features get missed from the default options.
The Description field doesn’t have an entry, yet it’s a required key to publish a module. Simple enough to open a text editor and update the values there; simple, if annoying.
This next bit is on me: after you have filled out the Description field and tried to publish the module, you will get the same error message. That's because the Description field, starting off empty, is also commented out. Re-open the editor, remove the hash/pound/octothorpe that makes the field a comment, save, and you should be good to go.
NodeJS, I Think?
There were other tangles with the Publish-Module command that pushes to the PSGallery. I’ve chalked them down to a sinister combination.
The Linux knowledge needed for troubleshooting vs the amount of Linux knowledge I had.
I switched out of my WSL and tried to publish from my Windows Desktop. It went as smooth as… a very smooth thing.
Return 0
Overall, it was a simple process made more difficult due to lack of experience. Easy enough for anyone to pick up, annoying but manageable. Would I do it again?
Well, I’ve got improvements to make to PSTimeSheets, so… yeah!
Update: Reliably informed that `-AdditionalChildPath` was added after 5.1
Join Me For a Moment
There's a multitude of scripts out in the wild with a chain of Join-Path commands. Initially, when I wanted to create a path safely, a chain of Join-Path cmdlets was also my go-to. However, after reading up on the documentation, I realised there was another way: I only need a single Join-Path command.
Target Location
My PowerShell console is open in my home folder, and I’ve a test file: /home/soneill/PowerShell/pester-5-groupings/00-run-tests.ps1.
If I wanted to create a variable that goes to the location of that file, one of the safe ways of doing that is to use Join-Path.
Long Form
I mean, I could create the variable myself by concatenating strings, but then I’d have to take the path separator into account depending if I’m on Windows or not.
I thought this wouldn’t work but, when running the code samples, it appears that PowerShell doesn’t mind me using a forward-slash (/) or a back-slash (\); it’ll take care of the proper separator for me.
UPDATE: This way works fine from a file but run the script from a PowerShell terminal and it’s a no-go.
No, you’re not the one for me
UPDATED UPDATE: Thanks to Cory Knox (twitter) and Steven Judd (twitter) for pointing out that this fails because it's using /bin/ls instead of the Get-ChildItem alias:
Manual Creation
A more explicit, cross-platform method would be to use the [IO.Path]::DirectorySeparatorChar.
This method works fine but creating the path can get very long if I don’t use a variable. Even using a variable, I have to wrap the name in curly braces because of the string expansion method I used. That’s not something that I would expect someone picking up PowerShell for the first time to know.
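A sketch of that manual approach, using the test file's path from earlier; note the curly braces needed to stop string expansion swallowing the following text:

```powershell
# Sketch: build the path by hand with the platform's separator character.
$sep  = [IO.Path]::DirectorySeparatorChar
$base = $HOME

# Without ${}, PowerShell would try to expand a variable named '$sepPowerShell'.
$path = "${base}${sep}PowerShell${sep}pester-5-groupings${sep}00-run-tests.ps1"
```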
-f Strings
In case you’re wondering, another string expansion method here would be to use -f strings.
Multiple Join-Path commands work fine. No real issue with people using this way, but there is another!
Only One Join-Path Needed
Join-Path has a parameter called -AdditionalChildPath that takes the remaining arguments from the command line and uses them in much the same way as a Join-Path command chain would.
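A sketch, assuming a PowerShell version where -AdditionalChildPath is available; every argument after the first two child paths gets folded into the final path:

```powershell
# Sketch: one Join-Path call replaces a whole chain of them.
$path = Join-Path -Path $HOME -ChildPath 'PowerShell' `
    -AdditionalChildPath 'pester-5-groupings', '00-run-tests.ps1'
```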
A reader, Michal, commented on the post, asking how to get a specific output from his search.
Hi,
Thanks for your sharing. What if I also want to compare case sensitively columns and the order of them (syncwindows). How can I presented it on powershell.
I mean that in the final table I want to show also something like: column_a, column_A –> case sensitive
AND
column_a, column_a –> different order in the table
Thanks in advance
Michal
I confess that I never got around to answering Michal until a few weeks ago when I found myself with some rare free time.
Since then, I've written a script, slapped it into a function, and threw it up on GitHub.
Here’s hoping that it does what you want this time Michal, thanks for waiting.
Shall I Compare Thee to Another Table?
The first thing that we need to do is have a couple of SQL tables to compare.
So, I threw up a Docker container and created a couple of tables with nearly the same layout.
You can see that there are three differences here:
Column orders, e.g. col9 has id 6 in dbo.DifferenceTable01 but id 5 in dbo.DifferenceTable02.
Column case sensitivity, e.g. col7 does not match COL7.
Column presence, e.g. col3 doesn’t exist in dbo.DifferenceTable01 at all.
While Compare-Object has the -CaseSensitive switch, I don’t think that it would be helpful in all these cases. Or else I didn’t want to use that command this time around.
So, I wrote a function to get the output we wanted, and yes, I now include myself among that list of people wishing for that output.
I’m allowed to be biased towards the things that I write 🙂
I’ve tried to include everything you could want in the function output, i.e. column names, column ids, and statuses.
Something I’ve started to do lately is wrapping a [Diagnostics.StopWatch] in my verbose statement to see where potential slow parts of the function are.
I’d like to think that 0.2 seconds for this example aren’t too bad.
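A minimal sketch of that stopwatch-in-verbose pattern, with stand-in work instead of the real comparison logic:

```powershell
# Sketch: timestamp each stage of a function with one shared stopwatch.
function Measure-Stages {
    [CmdletBinding()]
    param ()

    $stopwatch = [Diagnostics.Stopwatch]::StartNew()

    Write-Verbose "[$($stopwatch.Elapsed)] Gathering input..."
    Start-Sleep -Milliseconds 100   # stand-in for real work

    Write-Verbose "[$($stopwatch.Elapsed)] Comparing columns..."
    Start-Sleep -Milliseconds 100   # stand-in for real work

    Write-Verbose "[$($stopwatch.Elapsed)] Done."
    $stopwatch.Stop()
}

Measure-Stages -Verbose
```

Each verbose line now carries the elapsed time, so the slow stage points itself out.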
Feel free to use and abuse this function to your heart's content. I know that there are a few things that I'd add to it, comparing across different instances being an obvious one that I'd like to put in.
Hopefully though, someone out there will find it helpful.
Welcome to T-SQL Tuesday, the brainchild of Adam Machanic ( twitter ) and ward of Steve Jones ( blog | twitter ). T-SQL Tuesday is a monthly blogging party where a topic gets assigned and all wishing to enter write about the subject. This month we have Mikey Bronowski ( blog | twitter ) asking us about the most helpful and useful tools we know of or use.
Tools of the trade are a topic that I enjoy. I have a (sadly unmaintained) list of scripts from various community members on my blog. This list is not what I'm going to talk about, though. I'm going to talk about what to do with your scripts.
I want to talk about you as a person and as a community member. Why? Because you are the master of your craft and a master of their craft takes care of their tools.
Store Them
If you are using scripts, community-made or self-made, then you should store them properly. By properly, I’m talking source control. Have your tools in a centralised place where those who need it can access it. Have your scripts in a centralised place where everyone gets the same changes applied to them, where you can roll back unwanted changes.
If you are using community scripts, then more likely than not, they are versioned. That way you’re able to see when you need to update to the newest version. No matter what language you’re using, you can add a version to them.
PowerShell has a ModuleVersion number, Python has __version__, and SQL has extended properties.
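For the SQL side, a sketch of versioning via an extended property; the property name 'Version' and the number are just conventions I'm assuming here:

```sql
/* Sketch: stamp a database with a version number via an extended property. */
EXEC [sys].[sp_addextendedproperty]
    @name = N'Version',
    @value = N'1.2.0';

/* Read it back later to see which version is deployed. */
SELECT [name], [value]
FROM [sys].[extended_properties]
WHERE [name] = N'Version';
```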
If you take care of these tools, if you store them, version them, and make them accessible to those who need them, then they will pay you back a hundredfold. You’ll no longer need to re-write the wheel or pay the time penalty for composing them. The tools will be easy to share and self-documented for any new hires. Like the adage says: Take care of your tools and your tools will take care of you.
Now, we can see all the users: the user itself, the system users, and the other user I created on the database.
Original Article
The Backstory
Work is in the process of automating tasks. Part of this automation includes verifying the automation that we’ve done.
Where am I going with this?
Well, when we’ve automated the creation of database users we also want to verify that we’ve created the users that we say we’ve created.
My fellow co-workers have, thankfully, seen the dbatools light and we use the command Get-DbaDbUser to get the users in a database and compare the list against the users we were supposed to create.
If there are any users that should have been created but don’t show up, well then we have a problem.
The Principle of Least Privilege
Works fine for me […] but it looks like […] can’t run it with her “public” access to the db server.
I’m not going to sugarcoat things – the person that sent me the request has more access than they rightly need. The “public” access worker did not need any of that access so I wasn’t going to just give her the same level.
Plus, we’re supposed to be a workforce that has embraced the DevOps spirit and DevOps is nothing if it doesn’t include Security in it.
So, the closer I could get to giving the user just enough permission to run the command and not a lot more, the happier I would be.
But I was surprised at how difficult it was to find out what permissions were needed to run Get-DbaDbUser. Even more surprised when I failed and realised I'd have to find out myself.
If anyone else can Google/Bing it and get the answer, please let me know 😐
The Test
Let’s create a new user with no permissions in SQL Server.
USE [master];
GO
CREATE LOGIN LimitedPermissions WITH PASSWORD = N'MorePermissionsMoreProblems!';
GO
Now let’s test it out. I have a database in my instance called __DBA. Can we access the users in that database?
It doesn't work. What's even more surprising is that it silently doesn't work. No warnings, no permission errors, nothing. And I included the -EnableException switch!
The Investigation
It’s good to know that you can check out the contents of the dbatools (and other) commands from PowerShell. No, I’m not talking about opening the .ps1 files. I’m talking about using the Function:\ psdrive.
See those $server.databases and $db.users? For me, that means that it’s using SMO (Server Management Objects). If there was any hope of me google/binging permissions before this, well it’s gone now.
The Will is going
To cut a rather long story short, eventually I came to the idea of thinking that maybe it only needs to connect to the database. So let’s try that.
USE __DBA;
GO
CREATE USER LimitedPermissions FROM LOGIN LimitedPermissions;
GO
And now let’s try our Get-DbaDbUser command again.
I will confess to only starting this post late. So my tips and tricks will not be well thought out or planned. They will involve PowerShell though, something that I think about daily.
What we know
I consider it to be common knowledge that you can open up PowerShell from the explorer.
By default, my PowerShell opens up to “C:\Users\Shane”.
But by typing “PowerShell” into the location bar of an explorer, you can open a PowerShell session.
The PowerShell session will open to the location the explorer was open.
Et Viola
Reverse it
Did you know that you can drag and drop onto a PowerShell console?
Let’s create an empty text file.
New-Item -Name TestEmptyFile.txt -ItemType File
And we can see that it shows up in the open explorer location.
If we were to drag and drop the file into our PowerShell console window, it will return the full path to that file
Learn from History
If you spend a lot of time in a PowerShell console, it’s not rash to presume that you’re going to be running some of the same commands over and over again.
That’s where PowerShell’s history comes into play.
By using the command Get-History, or even its alias h, you can see the commands that you've run before:
#Hashtag
Claudio Silva ( blog | twitter ) mentions in his T-SQL Tuesday post about using PSReadline’s HistorySearchBackward and HistorySearchForward.
I’ve fallen into the habit of using #.
Get-History returns an Id that we can use with our #. On our PowerShell console, if we want to run the 2nd command in our history, we only need to type #2 and then press Tab.
If we don’t know the Id but know a word, phrase, or substring of the command we can use #<word | phrase | substring of the command> to look through our history for the command.
So to find the command Get-History that we ran, we can use #Hist and then press Tab.
If it’s still not the right command, we can keep pressing Tab until we find the previous command that we’re looking for.
..but Sweet
I’m pretty sure I haven’t blown your socks off in amazement with these tips and tricks. But they work, they’re semi-useful, and they should be helpful.
In that story, I talked about Constrained Endpoints and how, by using them, I could take the best bits of automation & delegation and not have to worry about unlocking James anymore.
Well, I was wrong
A while after I created that Constrained Endpoint, I was greeted one day by James saying he was receiving a weird error when he tried to unlock his account.
Connecting to remote server ‘server_name’ failed with the following error message : The creation of a new Shell failed. Verify that the RunAsPassword value is correctly configured and that the Group Policy setting “Disallow WinRM from storing RunAs credentials” is Disabled or Not Configured. To enable WinRM to store RunAs credentials, change this Group Policy setting to Disabled. For more information, see the about_Remote_Troubleshooting Help topic.
The fact that this occurrence came the day after I had reset my password, and the fact that the error message contained the words “[v]erify that the RunAsPassword value is correctly configured” was not something that was lost on me.
Luckily, PowerShell is fabulously easy to explore with its Get-Help command, so it was a simple case to look for commands around Session Configurations – Get-Command -Name *Session*Configuration* – and look at the help contents of the Set-PSSessionConfiguration cmdlet.
Make sure you include proper help in your functions, it’ll help you immensely when you come back to it after some time.
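A sketch of the fix, assuming an endpoint named 'UnlockEndpoint' (the name and username here are placeholders):

```powershell
# Sketch: refresh the stored RunAs credential on a constrained endpoint
# after a password change, so the "creation of a new Shell failed" error goes away.
$cred = Get-Credential -UserName 'DOMAIN\shane' -Message 'Enter the new password'
Set-PSSessionConfiguration -Name 'UnlockEndpoint' -RunAsCredential $cred -Force
```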
I threw in my username and my new password, did a quick test to see if the endpoint was available for me ( it was ), asked James to test that it was available for him ( it was ), and I closed off the ticket.
Aesop Out
Constrained Endpoints are not a technology that I am familiar with yet. It’s nice to know that I can take a look at the error messages, use some troubleshooting processes – check out the book “How to Find a Wolf in Siberia” by Don Jones ( blog | twitter ) – and figure it out.
Then again, the technology world is filled with new technologies and if you have a job where you know everything about your technology stack then congratulations to you.
For everyone else, get used to not knowing. Network, Search, Learn. You’ll be obliviously proficient in no time!