Start-IncreasingPushback

Words: 498

Time to read: ~3 minutes

Intro

If you look back over some of the posts that I wrote in October this year, you may have noticed a recurring motif.

I used a homebrew pushup tracker as a data source for a couple of blog posts. A group of friends and I were attempting to “push out” (excuse the pun) 3,000 pushups over the month.

Spoilers: We didn’t reach the target. 

Try Again

I’m okay with failure. If you learn from your failures, then I don’t even consider them failures at all. This scenario didn’t seem to fit that case at first, though. The only reasons I could think of for not reaching the target were:

  1. I started when nearly a week of the month had already passed, and
  2. I tried to do too much, too fast, in as few rounds as possible per day.

So, with these lessons under my belt, I decided to try again.

Smarter

I figured that my first mistake was simple enough to fix: I’d start on the first day of the month this time.

The second mistake also seemed simple enough to fix. Rather than attempting to do as many pushups as I could in as few rounds as possible, I’d do ten sets a day and that was it. I figured that focusing on the process rather than the goal would get me over the line eventually.

Challenge 01

If I did a set every half hour, I’d have the ten completed in 5 hours. I mean, hey, we’re in lockdown. I have 5 hours to spare.

But I didn’t.

Work, meetings, calls, focus and flow all sapped the time away from me.

So I tried again.

I’ve started getting up early in the mornings to do research and blog posts (like this one, for example), so I’d try and get them done then.

A set every 5 minutes should have the ten completed in just under an hour; more than enough time to spare.

Challenge 02

Pushups are hard! Even when I’m not trying to rep out as many as I can, they still take a toll on the body. Soon a five-minute break is not enough, and I’m taking longer and longer rests.

Fine, if that’s the way we’re going to do this, then I’m going to go with the flow.

Scripting

Seeing as I needed a little extra rest each round, I decided to create a PowerShell script that would help calculate that rest for me.

https://github.com/shaneis/RandomScripts/blob/master/Start-IncreasingBackoff.ps1
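
I won’t walk through the script line by line here; it lives at the link above. As a rough sketch of the idea, though (the function shape and parameter names below are illustrative guesses, not necessarily what the linked script actually uses):

function Start-IncreasingBackoff {
    # Illustrative sketch only - see the linked script for the real implementation.
    [CmdletBinding()]
    param (
        [int] $Rounds = 10,              # number of sets to do
        [int] $InitialRestSeconds = 300, # rest after the first set
        [int] $RestIncreaseSeconds = 30  # extra rest added each round
    )

    foreach ($Round in 1..$Rounds) {
        Write-Host "Round $Round of $Rounds - time for a set!"
        Read-Host -Prompt 'Press Enter when the set is done'

        if ($Round -lt $Rounds) {
            # Each rest is a little longer than the last
            $Rest = $InitialRestSeconds + (($Round - 1) * $RestIncreaseSeconds)
            Write-Host "Resting for $Rest seconds..."
            Start-Sleep -Seconds $Rest
        }
    }
}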

Using the Script

For once, I’ve added comment-based help to my script, so I can run

Get-Help -Name Start-IncreasingBackoff -Examples

and get examples of what the script does!

Now, I can run this script and get a timer that will let me know when to start my activities and that will give me more and more rest each time!

Now to see what else I can use this for!

Uncommon SQL

Words: 612

Time to read: ~ 3 minutes

Intro

Recently the DBA Team Lead and I were reviewing some SQL code, and we came across some SQL that neither of us had frequently encountered before. This led to a brief watercooler moment where we shared some uncommon SQL that we had seen. Perfect blog post material, I think.

ODBC Date Functions

In a previous post, I talked about ODBC date functions.

I’m using AdventureWorks2014 here.

/* The 10 employees who have been the longest at the company */
SET NOCOUNT ON;

SELECT TOP (10) WITH TIES
        HE.JobTitle,
        HE.HireDate,
        {d '2006-06-30'} AS start_of_company,
        DATEDIFF(DAY, {d '2006-06-30'}, HE.HireDate) AS days_since_company_start
FROM    HumanResources.Employee AS HE
ORDER BY    days_since_company_start;
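
For reference, the same ODBC escape syntax also covers times and timestamps, not just dates (a quick sketch of my own, not from the original post):

/* ODBC escape sequences for date, time, and timestamp */
SELECT  {d '2006-06-30'}           AS odbc_date,
        {t '09:30:00'}             AS odbc_time,
        {ts '2006-06-30 09:30:00'} AS odbc_timestamp;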

INSERT Alias

An unexpected item that we found recently was that INSERT INTO statements only care about correct column names. That’s all, though; nothing else seems to faze them.
This means that you can add the most ridiculous aliases or part names to the columns and SQL Server won’t care. As far as I can tell, it will just ignore them.

/* Prefixes get prefixed */
SET NOCOUNT ON;

IF OBJECT_ID(N'dbo.Hires', N'U') IS NOT NULL
BEGIN
    DROP TABLE dbo.Hires;
END;

CREATE TABLE dbo.Hires (
    hire_id int IDENTITY(1, 1) NOT NULL
        CONSTRAINT [PK dbo.Hires hire_id] PRIMARY KEY CLUSTERED,
    job_title varchar(50) NOT NULL,
    hire_date datetime2(7) NOT NULL,
    is_on_salary bit NULL
        CONSTRAINT [DF dbo.Hires is_on_salary] DEFAULT (0)
);

TRUNCATE TABLE dbo.Hires;

WITH OldestHires AS (
SELECT TOP (10) WITH TIES
        HE.JobTitle AS job_title,
        HE.HireDate AS hire_date,
        ROW_NUMBER() OVER (ORDER BY HE.HireDate) AS rn
FROM    HumanResources.Employee AS HE
ORDER BY    HE.HireDate
)
INSERT INTO dbo.Hires (
    [0].[1].[2].[3].[4].[5].[6].[7].[8].[9].job_title,
    a.b.c.d.e.f.g.h.i.j.k.l.m.n.o.p.q.r.s.t.u.v.w.x.y.z.hire_date,
    [1/0].[%].[OUT_OF_BOUNDS].[   ].is_on_salary
)
SELECT  OH.job_title,
        OH.hire_date,
        CASE 
            WHEN OH.rn % 3 = 0 THEN NULL
            ELSE 1
        END AS is_on_salary
FROM    OldestHires AS OH;

SELECT  *
FROM    dbo.Hires;
GO

Default Option

Let’s contrive an example. Say that we have a table called dbo.Hires and we’ve added a column called is_on_salary.
Since most of the hires are salaried, we have added a new default constraint setting the value to 0.
Unfortunately, it looks like the default constraint hasn’t been applied to the existing rows yet…

/* Our dbo.Hires table */
SET NOCOUNT ON;

SELECT  *
FROM    dbo.Hires;

= DEFAULT

Recently, my DBA Team Lead pointed me to a piece of code where the syntax was: UPDATE T SET COLUMN = DEFAULT

Now, I had never seen this before, and I wasn’t quite sure that this method would work. I wasn’t wholly surprised, though, when a quick test proved that it does.

/* UPDATE DEFAULT */
SET NOCOUNT ON;

UPDATE  dbo.Hires
SET     is_on_salary = DEFAULT
WHERE   is_on_salary IS NULL;

SELECT  *
FROM    dbo.Hires;

What about with no default?

Okay, so that seems to apply the column’s default value. What about when there is no default constraint defined on the column? Will it error out then?

/* Removing our default constraint */
ALTER TABLE dbo.Hires
    DROP CONSTRAINT [DF dbo.Hires is_on_salary];

SELECT  'Pre update' AS [status],
        *
FROM    dbo.Hires;

UPDATE  dbo.Hires
SET     is_on_salary = DEFAULT
WHERE   is_on_salary = 0;

SELECT  'Post update' AS [status],
        *
FROM    dbo.Hires;

Nope! As mentioned in the docs – if there is no default, and the column can become NULL, then NULL will be inserted.

CURRENT

Finally, we have CURRENT.
While the vast majority of scripts manually define the database context for commands, such as ALTER DATABASE AdventureWorks, etc., you can tell SQL Server:
Hey! Use the current database context!

/* CURRENT Database Context */
SET NOCOUNT ON;

ALTER DATABASE AdventureWorks2014 SET PAGE_VERIFY NONE;

SELECT 'Pre change' AS [status], [name], page_verify_option_desc FROM [sys].[databases] WHERE [name] = N'AdventureWorks2014';

ALTER DATABASE CURRENT SET PAGE_VERIFY CHECKSUM;

SELECT 'Post change' AS [status], [name], page_verify_option_desc FROM [sys].[databases] WHERE [name] = N'AdventureWorks2014';

And so forth

There’s probably a lot more, but these are the ones that we talked about. If you have any uncommon SQL, let me know!

Using PowerShell for Lazy Maths

Words: 317
Time to Read: ~ 2 minutes

Pre-Coffee Question

I was asked today if I could figure out how many ways I could arrange the following expression to get the number 18.

// Replacing the "?"s with either "+", "-", "*", or "/" how many ways can you get 18?

2 ? 2 ? 2 ? 2 ? 2

Now I’m sure with enough time and effort that I probably could. But if you ask me this before I’ve had any coffee, the best you’re going to get is a dirty look.

Again, that’s the best that you’re going to get…

So I got lazy.

I know PowerShell.

I know that I can create the formula in PowerShell.

I know that I can invoke the formula in PowerShell to get the result.

So, with an icy glare at the offending message and a sip of my scalding coffee, I created the script.

The Script

I’m not saying that this script is any good. Like I said, I wrote the script before the caffeine had hidden the tiredness from my brain.

It works, and that was the main thing for me at the time.

$operators = '+', '-', '*', '/'

$Permutations = foreach ($op1 in $operators) {
  foreach ($op2 in $operators) {
    foreach ($op3 in $operators) {
      foreach ($op4 in $operators) {
        $MatsIzHard = [scriptblock]::Create("2 $op1 2 $op2 2 $op3 2 $op4 2")

        [PSCustomObject]@{
          Formula = $MatsIzHard.ToString()
          Result = $MatsIzHard.InvokeReturnAsIs()
        }
      }
    }
  }
}
Defining the script and then calling the $Permutations variable to get the results.
Like I said, seems to work…

Now that I have the results in the $Permutations variable, I can look for any results where the Result property is 18.

$Permutations.Where({ $_.Result -eq 18 })
2 rows returned from the $Permutations variable showing 18 as the result
Yes, I can answer your question.

Yes, I can get 18 from that expression 2 ways…

On to my Actual Job

Overall, that took around 3 minutes to complete, which was the perfect amount of time for the caffeine to kick in and for me to be ready to start my day.

Now, that PowerShell was not pretty, dynamic, or efficient.
It was probably only fast because the number of iterations was so low.

But I’m happy since even though I can’t do maths before coffee, I can at least write PowerShell.

Pluralsight’s VS Code Extension

Words: 276

Time to read: ~ 1 minute

I’m a fan

I’m open about this – I like Pluralsight.

I admire the array of learning materials that they have for different topics. I find it reassuring seeing the names of Pluralsight authors and recognising them independently as knowledge leaders in their field.

I even have a personal subscription, since I was devastated when I moved jobs and was removed from the old job’s plan!

So, even though it’s been out for a while now, when I saw that they now have an extension for VS Code, it was enough to make me install it on my machine.

The Pluralsight extension page in VS Code

I’m currently going through some C# Fundamentals by Scott Allen ( twitter | blog ) on Pluralsight with an aim to work my way into Entity Framework.

With the Pluralsight extension in VS Code, I can open up a sample code file and Pluralsight will recommend some clips based on that file.

List of Pluralsight clips based on the open file in VS Code.

Clicking on one of those clips will open a new tab in VS Code like below:

A tab in VS Code with a title, course description, and a button saying "Watch Clip"

Clicking the “Watch Clip” button will bring up a dialog from VS Code asking if you meant to go to an external website.

A dialog box from VS Code asking permission if you want to open an external website.

Opening the link (or copying it and navigating there yourself) will open up the recommended Pluralsight video segment.

Insert Dad joke response here

Give it a go and see how it works for you. I use Azure Data Studio for my SQL code but I’m tempted to open up the .sql files in VS Code and see what Pluralsight recommends for them.

What do I think it will say? Who wrote this? Urgh! Kill it, kill it with fire!

… I never said I wrote good SQL…

Talking Data Bits

Words: 160

Time to read: ~ 1 minute.

Yes, I am aware that these are normally posted directly after the podcast.

I had the absolute pleasure of talking with Kevin Hill ( Blog | Twitter | Podcast ) on the first episode of his new podcast “Data Bits”.

You can check out Kevin’s blog post announcing it here and the actual podcast here.

It was an energising experience reliving past times, talking data tales, and cringing over shared scars received from working with data.

Next, Kevin had Andy Levy ( Blog | Twitter ) as his second guest on the show. It was uplifting hearing them talk shop and realising that maybe I should get more “aggressively friendly”.

You can check out the episode with Andy here.

I’m currently catching up with the next few episodes including Ginger Grant ( Blog | Twitter ), Jules Behrens ( Twitter ), and Louis Davidson ( Blog | Twitter ).

Thankfully, I’ve been able to scrounge some more time to listen to podcasts in this lock-down. There’s always an upside 😁!

T-SQL Tuesday #122 – Imposter Syndrome

Imposter syndrome.

If there were ever a phrase that brings about the dreaded Writer’s Block, it would be Imposter Syndrome. I’m going to fall back to what I know I can talk about while the back part of my brain ruminates on what to write.

T-SQL Tuesday

T-SQL Tuesday is the brainchild of Adam Machanic ( Twitter | Blog ). It is a monthly blogging call by the Data community. Each month a topic for the community is offered up and a blog post is written by all wishing to take part.

This month we have John Shaulis ( Twitter | Blog ) as the host and John has given the topic of Imposter Syndrome.

I want to read your stories about when you’ve experienced, seen, or overcome imposter syndrome!

John Shaulis

You can find the invitation to this month’s topic at John’s blog post here.

Imposter Syndrome

I’ve gone through 2 cups of coffee staring at a blank page while waiting for my brain to dredge up some relatable Imposter Syndrome moment. It’s taken this long for me to realise that it’s a futile exercise. Imposter Syndrome is, by the definition that John has kindly given us, a personal experience. 

One’s success […] result of one’s […] efforts or skills.

Imposter Syndrome Definition

So, with a third cup of coffee in hand, and a deep refusal to believe that I have a caffeine dependency, allow me to tell you about the times that I feel the Imposter Syndrome the most.

Unsurprisingly

Normal working days are not the days that I feel Imposter Syndrome.

Fielding questions at work doesn’t bother me.

There’s a sense of investigation, an aim to teach, and a feeling of joy that people are taking an active interest in SQL Server.

Investigating issues and researching requests don’t trigger it either.

There are set goals here: either fix the issue, or work out whether the request can be done. Plus, there is an amazing community out there that can be called upon on Twitter, Slack, etc. for answers or guidance.

Unsurprisingly, the days when I am the sole person interested in SQL Server are not the days when I feel Imposter Syndrome.

Surrounded but Isolated

Imposter Syndrome hits like a brick when I’m not alone. Conferences, User Groups, and meetups are the main culprits.

Probably stemming from being a Junior DBA going to my first User Group and realising the extent of the topics I didn’t even know that I didn’t know!

Trying to learn the basics of administering databases and going to a meeting where people are arguing about single-threaded CPU performance and NUMA nodes…

Talking to people who are dealing with columnstore partitioning problems and massive memory grants because they’re bulk-loading 3 trillion rows at once.

All the while I’m there thinking my main problem was telling developers “set-based = good, row-by-row = bad”.

The same is true for conferences. 

Can’t Stop, Won’t Stop

Please don’t get me wrong. I love going to conferences, I do! 

There are few other places where you can get curated knowledge to that extent! Where you can talk to people who get it when you say “Can you believe that they wanted sysadmin privileges just to take backups!?!”.

I love seeing what people are doing, what new technologies are coming up, and catching up with people who experience the same problems that I do.

And there’s no way I’m going to stop, the “fors” don’t just outweigh the “againsts”, they completely smother them!

But it fires off my Imposter Syndrome like nothing else does!

T-SQL Tuesday #121: Gifts received for this year

Words: 401

Time to read: ~ 2 minutes.

T-SQL Tuesday, the monthly blogging party created by Adam Machanic, is still going strong. This month’s T-SQL Tuesday is hosted by Malathi Mahadevan ( blog | twitter ) who asks us…

“[…] what are a few things would you consider as gifts, and why?”

Invitation Post

There are many items that I consider gifts. More than I can reasonably write down in a blog post. So, in the interest of brevity, I’m going to break it down into gifts of SQL past, the gifts of SQL present, and the gifts of SQL future.

Gifts of SQL Past

I have been gifted with being mentored by DBAs that were knowledgeable and passionate about the Data Community.

I have been gifted with mentors who encouraged me to go to User Groups, volunteer at conferences, and soak up as much data learning as I could.

I have been gifted with mistakes that I have made in the past that have allowed me to learn from them. (I have definitely been gifted with the knowledge of how important test labs are!)

Gifts of SQL Present

I am gifted with a job that seems to actively repel me from any sort of comfort zone.

I am gifted with an active Data Community on Twitter, Slack, and in person for any issues that I run into.

I am gifted with a PowerShell community on Discord and Slack who seem to be constantly battling for the right to call themselves the “most welcoming community”.

I am gifted with a shrinking of borders so that anyone anywhere can be considered a helpful resource to a data question.

Gifts of SQL Future

I’m being gifted with new technologies to help out with old and new problems.

I’m being gifted with a field that is ever-changing. Fluid, dynamic, and will never become stagnant nor boring.

I’m being gifted with putting names to welcome faces and meeting up with new Data Community friends.

Mostly though, I’m being gifted with the opportunity to help others out in the future. Whether that be with blog posts, tweets, talks, examples, or just encouragement.

As good as all the previous gifts are, I believe the best gifts are the ones that you can give back. So, here’s hoping that the next year sees us all help more than hinder, learn more than laze, and teach more than troll.

Those are what I consider gifts.

T-SQL Tuesday #120 – What were you thinking?

Words: 712
Time to read: ~ 3.5 minutes

10 Years

T-SQL Tuesday is upon us once more. T-SQL Tuesday number 120 means something else as well. 120 monthly posts equals 10 years that Adam Machanic’s ( twitter | blog ) blog post party has been going on.

Wayne Sheffield ( twitter | blog ) is hosting this month’s event. Wayne asks us something that I’m sure we’ve all thought at some stage.

What were you thinking?

https://blog.waynesheffield.com/wayne/archive/2019/11/t-sql-tuesday-120-what-were-you-thinking/

In the beginning…

The first exploration of a system leaves a lasting impact. When you first get a chance to delve into the database, you capture a snapshot of what the coding standards are like. You glean the past experiences of the developers.

I’m looking for instances of NOLOCK if I’m being honest.

…there are impressions.

This impression was a What were you thinking? experience.

  1. DEADLOCK_PRIORITY LOW on most procedures.
  2. A lot of hierarchical data types.
  3. VARCHAR(MAX) on most columns.
  4. Variables at the start of procedures used in equality WHERE clauses, e.g. DECLARE @Success int; SET @Success = 4; ... WHERE StatusId = @Success.
  5. Functions that return a single, deterministic value.
  6. Multi-statement Table-Valued Functions with WHILE statements.
  7. A plethora of indexes on the tables, all single-column indexes.

I’ve said enough.

If you had seen my face at that moment, you would have laughed. Imagine me staring, horrified, eyes darting around the screen mouthing What the…

A little thinking saves a lot of shouting

Granted, it took getting a coffee and staring in disbelief at the code before I recovered. It took another coffee after the first before I could rationalise what I was seeing.

I took what I knew, which was that these developers were smart, and tried to match it with what I was seeing. And there was an answer.

Theoretical, not Physical

The codebase read like it was written by developers who were not used to interacting with a database. Developers who thought of the database as a “place to shove data” and that’s all.

It was clear they had tried to follow the DRY (Don’t Repeat Yourself) approach (#4, #5).

They had read the documentation on hierarchical data types and Microsoft saying:…

The built-in hierarchyid data type makes it easier to store and query hierarchical data

https://docs.microsoft.com/en-us/sql/relational-databases/hierarchical-data-sql-server?view=sql-server-ver15

…instead of parent/child relationship tables. (#2)

They had tried to translate the .Net data type [string] into the database, deciding that varchar(max) was its closest match. (#3)

They had tried to query the data in a row-by-row approach, instead of a set-based method (#6).

And, they had tried to deal with the consequences of these and other decisions. (#1, #7)

Understanding, not blame

It’s hard to stay annoyed at people when you can understand their motives. Understanding their mindset is the most effective deterrent to anger I can think of. There’s no blame, only understanding. You want to help them improve. And that’s where this on-going process is now.

To move away from multi-statement Table-Valued Functions with WHILE statements: here are inline Table-Valued Functions with a recursive CTE (Common Table Expression) instead.
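
As a minimal sketch of what that looks like (the function, dates, and names below are made up for illustration; they’re not from the actual codebase):

/* Hypothetical example: an inline TVF built on a recursive CTE,
   standing in for a multi-statement TVF that looped with WHILE */
CREATE OR ALTER FUNCTION dbo.GetDateRange (
    @StartDate date,
    @EndDate date
)
RETURNS TABLE
AS
RETURN
(
    WITH DateRange AS (
        SELECT  @StartDate AS date_value
        UNION ALL
        SELECT  DATEADD(DAY, 1, DR.date_value)
        FROM    DateRange AS DR
        WHERE   DR.date_value < @EndDate
    )
    SELECT  DR.date_value
    FROM    DateRange AS DR
);
GO

/* The MAXRECURSION hint can't go inside the function;
   it belongs on the query that calls it */
SELECT  GDR.date_value
FROM    dbo.GetDateRange('2019-11-01', '2019-12-31') AS GDR
OPTION  (MAXRECURSION 0);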

To use variables when you have to, but be aware of the change in statistics that they bring.
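
To illustrate that statistics point (dbo.SomeTable and StatusId here are hypothetical stand-ins): with a literal, the optimiser can use the histogram for that exact value; with a local variable, the value isn’t known at compile time, so the estimate falls back to the density vector unless you add OPTION (RECOMPILE).

/* Hypothetical table and column, just to show the difference */
SELECT  ST.StatusId
FROM    dbo.SomeTable AS ST
WHERE   ST.StatusId = 4;        /* literal: histogram-based estimate */

DECLARE @Success int = 4;

SELECT  ST.StatusId
FROM    dbo.SomeTable AS ST
WHERE   ST.StatusId = @Success; /* variable: density-vector estimate */

/* OPTION (RECOMPILE) lets the optimiser sniff the variable's value */
SELECT  ST.StatusId
FROM    dbo.SomeTable AS ST
WHERE   ST.StatusId = @Success
OPTION  (RECOMPILE);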

The difference it can make to a query and a database when the data types are apt. How memory grants, logical page reads, and more are affected by blobs.

How DEADLOCK_PRIORITY LOW is not an option if every procedure has it! How indexes can cover more than a single column. That there is such a thing as an INCLUDE!

Seeing now that the driving force they have is to create features, but the pain they feel is database performance, I can grok their choices and actions at the time.

Still, it didn’t stop me going What were you thinking? at the outset.

I’m no better

I’m trying to learn different languages and frameworks at the moment. If someone more knowledgeable were to come along and see my interactions with Linux, critique my Python files, or attempt to suppress a groan at my PromQL, I’d appreciate an air of understanding, not blame.

So well done to the people who dived in and attempted the work even if they didn’t know how at the time. To paraphrase: those whose faces are marred by dust and sweat and blood deserve the credit.

But don’t think I didn’t see those SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED statements that you’re using as NOLOCKs!

Retrospective: Speaking at DataScotland

Words: 315

Time to read: ~1.5 minutes

Data Scotland

On the 13th of September 2019, I spoke at DataScotland: my first time speaking at a data conference.

My quasi-clickbait title was Feel Validated with dbachecks. If you guessed that I was talking about dbachecks then you’re right.

This is a brief retrospective of that time. Thinking back on it still makes me relive the emotions that I felt: nervousness, excitement, and panic.

Good times!

Appreciation

This was my first time speaking at a conference as well as my first time attending Data Scotland.

I recommend that you check it out. It’s an amazing conference created by passionate people and staffed by dedicated volunteers.

The Good

A better question might be: what didn’t count as the good?

  • Amazing fellow speakers.
  • Getting to meet other first-time speakers.
  • Talking with the volunteers.
  • Speaking with attendees.
  • Seeing people who I hadn’t seen in a long time.

Thank you to Craig Porteous, Louise Paterson, Paul Broadwith, and Robert French for all your work and encouragement.

Thank you as well to Brent Miller, Andrew Pruski, David Alcock, and John McCormack for help with the presentation.

The Bad

Feeling drained.

I don’t put this down to DataScotland though.
You may not have heard from me for the last month. I felt drained and took time off from public exposure.

A full year of constant work on the day job and personal projects.
2 conferences a month on average for the last year.
Spreading myself across projects on SQL, PowerShell, Containers, Python, AWS, and Azure without rest.
It’s not something I could sustain without factoring in “sharpening the axe” time.

Overall

I’m easing myself back into things again with the caveat of planning ahead and making sure I don’t overwhelm myself.
First thing on my list: planning for DataScotland next year.

Whether it’s speaking, volunteering, or attending, I’ll be there.