YouTube Keyboard Shortcuts

As SQL Server related as YouTube is… so it depends on what you watch, I guess

By the time this goes to post, 24 Hours of PASS will have just finished (it was great!) and PASS ( twitter | blog ) will be busy converting all the recordings of the sessions and putting them up so we can watch them again.

This made me realize how many resources are available up on YouTube (I use it a lot for learning but there is a difference between doing and realizing). It’s not just 24 Hours of PASS; the Local Groups, the Virtual Groups, and even some individuals have started to use it as well (Robert Cain, Bert Wagner, Rob Sewell, etc.)

Here are a few of the YouTube shortcuts that I use while watching these videos:

Go To Percentage of the Video:

Press a number to go to that percentage of the video…
1 = 10%,
2 = 20%,
3 = 30%,
etc.

Funnily enough, I found this by pressing the keys at the top of my keyboard, sequentially going up and getting further into the video. Pressing 8 brought me to 80%, 9 brought me to 90%, and 0, which I thought would end the video, brought me back to the start again!

So 0 = 0%!

“Home” and “End” go to the start and end of the video respectively.

Pause the video…
“K” or “Spacebar”.

Say that you want more control though, and don’t want to go to a certain percentage of the video, but just skip a few sections…

Forwards and backwards….
“Left arrow” = Back 5 seconds,
“J” = Back 10 seconds…
“Right arrow” = Forward 5 seconds…
“L” = Forward 10 seconds.

Video too loud for you?

Volume Control…
“Up arrow” = Increase volume,
“Down arrow” = Decrease volume.

Not enough time in the day?

Speed up the playback
“Shift” + “.”

Then, when you’ve experienced it at full speed and (if you’re like me) your Imposter Syndrome kicks in and you start freaking out at how eloquently the presenters are speaking and how fluidly they can type, let’s slow it back down again.

Slow down the playback
“Shift” + “,”

Having trouble hearing?

Turn on Subtitles/Closed Captions
“C”

Subtitles too small? You can adjust them.

“+” = Increase the subtitles font
“-” = Decrease the subtitle font

That’s it! Check these out and up your YouTube-watching game.
Also check out those links; there are more informative videos coming every day!

Exporting Special Characters out of SQL Server using PowerShell.

PowerShell is ußer-useful!

So I’ve talked before about keeping new lines when copying results to a different window in SQL Server and about copying new lines out of SQL Server into reports.

These topics have come about as they are both issues that I’ve had to deal with. Well, another of those issues is dealing with exporting special characters out of SQL Server using PowerShell.

The Layout.

We already have our table called “dbo.NewLineNotes” from before when we were trying to copy new lines out of SQL Server so we’re going to add another row.
Now personal experience for me centered around the German Eszett (“ß”) but you may encounter this with other characters.

-- Insert some special characters...
INSERT INTO dbo.NewLineNotes (Notes)
VALUES (N'This is a ß')

Now if you were to use the code from the keeping new lines post…
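(If you don’t have that post to hand, here’s a rough sketch of it; the instance, database, and file path are placeholders, so swap in your own.)

$NewLineQuery = 'SELECT Notes FROM dbo.NewLineNotes'

# In Windows PowerShell, Export-Csv writes ASCII by default - which is where our ß gets mangled.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'PlaygroundDB' -Query $NewLineQuery |
    Select-Object -Property Notes |
    Export-Csv -Path 'C:\Temp\NewLineNotes.csv' -NoTypeInformation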

and open up the csv file we would get…

My German is non-existent but I know that’s wrong!

“What do we do when we fall down?”…

Well with SQL Server, I normally break things down into the smallest parts and slowly build it up until it breaks. For this, it breaks when we get to Export-CSV as everything before it works!

What we want…

PowerShell is even easier to troubleshoot because, as we’ve talked about before, Get-Member and Get-Help are there to help us!

We know that it’s Export-CSV that is somehow screwing up our special character so the obvious next step…

help Export-CSV -Full;

And we can see a parameter just shine at us!

Looks like ASCII is not for me!

So we have to define an “Encoding” do we? I used “UTF8” and modified my query…
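Something along these lines (same placeholder names as before):

# -Encoding UTF8 is the only change; the ß survives the trip to the csv file now.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'PlaygroundDB' -Query $NewLineQuery |
    Select-Object -Property Notes |
    Export-Csv -Path 'C:\Temp\NewLineNotes.csv' -NoTypeInformation -Encoding UTF8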

Eszett? More like EZ-zett!

And special characters are no longer an issue for us 🙂


Multiple Inline Constraints

SQL New Blogger:

Time to investigate: 10 mins 
Time to test: 10 mins
Time to write: 10 mins

While creating a script for some new tables I came across a few columns that were designated to have both CHECK constraints and DEFAULT constraints.

Now this isn’t a problem in itself; it can be easily achieved by using a CREATE TABLE statement and then 2 ALTER TABLE statements to create the constraints.

Old Style:
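It looks something like this (table, column, and constraint names are just stand-ins for the real ones):

-- Create the table first...
CREATE TABLE dbo.InlineConstraintsTest
(
    ConstraintTestId int IDENTITY(1, 1) NOT NULL,
    SomeStatus       varchar(10)        NOT NULL
);

-- ...then bolt the constraints on afterwards with 2 ALTER TABLE statements.
ALTER TABLE dbo.InlineConstraintsTest
    ADD CONSTRAINT DF_InlineConstraintsTest_SomeStatus
        DEFAULT ('Unknown') FOR SomeStatus;

ALTER TABLE dbo.InlineConstraintsTest
    ADD CONSTRAINT CK_InlineConstraintsTest_SomeStatus
        CHECK (SomeStatus IN ('Unknown', 'Active', 'Inactive'));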

The problem I had with this was that, so far, I had been creating these tables and columns with the constraints defined inline, and it galled me to have to break that flow and create these constraints as ALTER statements.

Checking the new Microsoft Docs, I couldn’t find any examples of creating both constraints together on the same column, so I experimented and found out that you can!

Here’s how…

New Style:
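Using the same stand-in names as above, the inline version looks like this:

CREATE TABLE dbo.InlineConstraintsTest
(
    ConstraintTestId int IDENTITY(1, 1) NOT NULL,
    SomeStatus       varchar(10)        NOT NULL
        -- no FOR <column name> needed on the default...
        CONSTRAINT DF_InlineConstraintsTest_SomeStatus
            DEFAULT ('Unknown')
        -- ...and no comma before the second constraint.
        CONSTRAINT CK_InlineConstraintsTest_SomeStatus
            CHECK (SomeStatus IN ('Unknown', 'Active', 'Inactive'))
);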

Notice 2 things here:

  1. There is no need to specify a FOR <column name> on the default constraint because SQL Server can tell the constraint is to work on the column it is currently defining.
  2. There is no comma separating the two constraints. That would break the inline property of these statements and SQL Server would think you’ve messed up the syntax of a constraint (this got me for a sec).

Great, I can keep my constraints inline!

That’s a wrap

Documentation is useful but it does not cover every situation. Have a test environment; hypothesize, test, and verify. You never know what you’ll find.

Copying New Line Data out of SQL Server


A lot of the time, DBAs are asked to run ad-hoc reports for various business people and, more often than not, the expected medium for these reports is Excel.

Now for the most part this seems simple enough…

  • Run the T-SQL report
  • Highlight the results
  • Copy the results
  • Paste into an Excel worksheet

Simples!…right?

How do you deal with carriage returns though? New line feeds? Tabs? Commas when you’re trying to comma delimit?

Try and copy them into an Excel worksheet and what you’re going to get is confusion, alarm, and vexation.

Not exactly the clear reporting that the business people are hoping for.

So what can we do? Panic? Grab another coffee? Roll your “r’s”?

Yes, yes, and not yet…


Karaoke…

I have mentioned before that we can use CHAR(10) and CHAR(13) for new lines and carriage returns in SQL Server, so I’ll leave it as an exercise for the reader to create a table with these “troublesome” bits of information in it (plus, if you came here from Google, I assume you already have a table with them in it).

For me, I’ve just created a single table dbo.NewLineNotes that has a single entry with a new line in it.
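If you do want something to play along with, a minimal set-up could look like this (the ID column and the Notes length are my guesses, not gospel):

CREATE TABLE dbo.NewLineNotes
(
    NewLineNoteId int IDENTITY(1, 1) NOT NULL,
    Notes         nvarchar(1000)     NOT NULL
);

-- One entry with a carriage return + line feed in the middle of it.
INSERT INTO dbo.NewLineNotes (Notes)
VALUES (N'This is the first line' + CHAR(13) + CHAR(10) + N'and this is the second line');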

SQL Server is left, Report is right

So a straight-up copy and paste isn’t going to cut it here. If we have more than 1 row, we’re not going to get the one-entry-to-one-row report that we are looking for. How do people deal with this?

1 Way:

Well, depending on what tool you have, the answer could be as simple as a right-click and selecting “Open in Excel”

Intact but on 1 line 🙂

Or Another:

Let’s proceed with the impression that you do not have Redgate tools (cough free trial cough) and cannot avail of the right-click righteousness. What do you do then?

Well…have you thought about PowerShell?

Hear me out on this: you probably already have your query but found the new lines are screwing up the report. So let’s throw that query into a variable…

$NewLineQuery = 'SELECT Notes FROM dbo.NewLineNotes'

Then what we have to do is somehow connect to the SQL Server instance and database.

Let’s go with the very basics here as that’s all we really need. Invoke-SqlCmd, and yes I know it has problems. I’ve linked and talked about them before. It works for us in this situation though.

Yup, that’s good old VS code!
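If you’re following along at home, the call looks something like this (instance and database names are placeholders):

$NewLineQuery = 'SELECT Notes FROM dbo.NewLineNotes'

# Expand the Notes property out to the raw string so the new lines survive.
Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'PlaygroundDB' -Query $NewLineQuery |
    Select-Object -ExpandProperty Notes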
Now the more code-centered readers among you may have spotted this and asked why I used -ExpandProperty and not just -Property, or even why I included it at all.
Well, apart from the thought that code online should be like code in scripts (legible with no aliases), we’re dealing with new lines here!
If we don’t specify -Expand then what we actually get is…
comma delimited or ellipses delimited?

How does that help us with Reports?

If you work with PowerShell for the smallest amount of time, then I hope you’ve run into the command Export-CSV. See help for details…

help Export-Csv -Full

This will output a delimited file (defaults to comma but we can change that if we want) to wherever we want. We can then open it up in Excel or whatever other tool you use.

Let’s see if that splits our information into a new line!
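Here’s a sketch of it, placeholder names and all (note that I’ve gone back to -Property here, since Export-Csv wants objects with a Notes property rather than bare strings):

Invoke-Sqlcmd -ServerInstance 'localhost' -Database 'PlaygroundDB' -Query $NewLineQuery |
    Select-Object -Property Notes |
    Export-Csv -Path 'C:\Temp\NewLineNotes.csv' -NoTypeInformation -Delimiter ','   # comma is the default anyway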

IT’S ALIVE!!! ugh I mean…IT WORKS!!!

Another another…

There are tons of different ways to do this but this is what I used.

Quick, dirty, and effective.

In the short term, I’m okay with that!


T-SQL Tuesday #92 – Lessons Learned the Hard Way…

That’s T-SQL Tuesday #92, not 92 lessons learned the hard way

T-SQL Tuesday this month is hosted by Raul Gonzalez ( blog | twitter ) and the topic this month is “Lessons learned the hard way”.

I make no effort to hide the fact that I am not the biggest fan of GUIs, and I’ve been fortunate enough to turn that dislike into an admiration of command line tools. I said “an admiration”, not that I’m any good at them yet! I have been fortunate enough to provide a function for dbatools.io (have you helped them out yet?) but it just goes to show that anyone can help out, regardless of skill level.

In case you ever wondered where this dislike came from, let me tell you a hypothetical story about…my friend that I used to work with.

Now my friend wasn’t a DBA then, he wasn’t even an Accidental DBA, he was more a “that guy is good with databases, ask him” kind of guy. In short, my friend knew just enough to be dangerous without knowing that he could be.

Back in the SQL Server 2012 days…

…which was either today or 5 years ago, depending on what version of SQL Server you’re running but we’ll say 5 years ago, my friend was working as a SQL Support Engineer for a software provider.

The provider didn’t handle backups, that was all taken care of by 3rd parties. In case something went wrong, these 3rd parties provided the backups and either the software provider, or the in-house I.T. would restore them. (FYI, I’m very cautious of 3rd party backup tools as well).

One Friday, we did a release…

…and eventually a bug was discovered in the release that could have potentially had some data impact (no particular reason to say Friday, I just don’t think you should release on one).

So a plan was made to request a 2-week-old backup and compare its data against the current production database.

GUI Time…

My friend goes to the Object Explorer, opens the “Databases” node, and sees that there are two databases there: Live ([TheEarlyBird]) and a disused copy of Live ([TheEarlyBird2]) that is a day old and can be overwritten.

Not knowing any better, my friend right-clicks the old copy, clicks “Tasks”, then “Restore”, then “Database…”, and a lovely GUI pops up.

Now my friend doesn’t know any better, he thinks that the GUI is here to help him and in most of the cases it is. What my friend failed to realize is that there is a difference between helping him and doing the work for him…

Setting Up…

The 3rd party backup file has not yet been retrieved but that stops my friend not! This is an urgent case, so my friend forges ahead, thinking that he can get everything set up and ready, and then all he would have to do is select the file when it became available.

Files Page:

  • My friend would be overwriting the disused database so this would not need to be changed.

Options Page:

  • Checked the box “Overwrite the existing database (WITH REPLACE)” as we are overwriting the disused database

File is now available…

So my friend goes back to the General Page, clicks the “Device” radio button, and selects the backup file…

Can you figure out what went wrong here?

…and clicks “OK” to start the restore!

Errors! Errors galore…

My friend encounters errors:

Exclusive access could not be obtained because the database is in use.

This confuses my friend as this is a disused copy of the database, the only person who should be on it is himself.

Does my friend go and maybe check out EXEC sp_Who2; to see who else could be on this database? No, remember that my friend knows just enough to be dangerous. My friend goes back to “Tasks”, “Restore”, “Database…”, goes to the Options Page, and checks the box labelled “Close existing connections to destination database”….

If you figured out the above, you know that this is even worse…

With that, my friend clicks the “OK” to restore the database and continues on his merry way…the dumb fool that he is.

SQL Server 2012 GUIs…

…have this little “optimization” technique where the restore GUI reads the database name from the backup file and matches the destination database up to it.

Now what this actually meant was the moment that my friend clicked the “Device” button, all his work was gone and his destination database reverted to the Live Database!

The first time my friend clicked “OK” to restore wasn’t a problem: there were still open connections, so the restore failed and the Live database wasn’t affected.
But then my friend goes back and clicks “Close existing connections to destination database”…just enough knowledge to be dangerous…

So in summary, what my friend had done was kick every single connection off of Live and then effectively wiped 2 weeks worth of data.

Thank goodness for tail-log backups!

GUIs are good for….

…discovery.

They give you the option to script out the configurations you have chosen. If my friend had chosen to script out the restore, rather than clicking “OK” to run it, maybe he would have caught this mistake when reviewing it – rather than overwriting the Live database with 2-week-old data and spending a weekend in the office with 3 colleagues fixing it.

Plus if you ever want to ensure that you know something, try and script it out from scratch.
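To put a shape on that, a scripted-out restore reads something like the below (the file names and paths here are made up); the point is that the database being overwritten is spelled out on the very first line, rather than hiding behind a GUI dropdown that can quietly change on you.

-- The destination database is right there in black and white.
RESTORE DATABASE [TheEarlyBird2]
    FROM DISK = N'C:\Backups\TheEarlyBird_TwoWeeksAgo.bak'
    WITH REPLACE,
         MOVE N'TheEarlyBird_Data' TO N'D:\Data\TheEarlyBird2.mdf',
         MOVE N'TheEarlyBird_Log'  TO N'L:\Logs\TheEarlyBird2_log.ldf',
         STATS = 10;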

Failures or Learning Experiences?

There is this saying that…

…there is no such thing as failure

I guess it’s a personal experience but I say that it is thanks to “my friend” that I was able to do 2 side-by-side WITH STOPAT database restores today.

Oh and FYI SQL Server 2012 Enterprise Core Mainstream Support ends today.
I’m very upset about that… 😐

Comparing Column Values in the Same Table

The Set-Up:

This is yet another time that a blog post has come about from a question by a developer. They’re good guys, I guess, they keep me on my toes.

This time it was with change logging. We didn’t have Change Data Capture (CDC), or Temporal Tables enabled (have you seen the YouTube videos by Bert Wagner ( blog | twitter ) on these?). What we did have was “manual logging” and no, I’m not even talking about Triggers.

What we had was INSERT statements, directly after a MERGE statement, that inserted into a table variable a hard-coded name of the column, the old value, and the new value.

Is that what I would do? Doesn’t matter, it was there before I got there, seems to work, and is low down on the list of priorities to change.

The question was that every time they needed to add a column to a table, and change-log it, they had to add multiple lines to the change tracking procedure, and the procedure was getting gross and hard to maintain.

Something to do with DRYness?

Create Table:

You know the drill by now: I quite like to play along, so let us facilitate that (from now on I’m going to use Gist; formatting with native WordPress is starting to annoy me).

This will create our table and, luckily, all of its columns are important enough to warrant capturing when they get changed.
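The Gist itself isn’t shown here, so here’s my best guess at its shape, going off the column names that show up later in the post (names and types are placeholders):

CREATE TABLE dbo.ChangeLoggingDemo
(
    TableID int IDENTITY(1, 1) NOT NULL PRIMARY KEY,
    Column1 nvarchar(255) NULL,
    Column2 nvarchar(255) NULL,
    Column3 nvarchar(255) NULL,
    Column4 nvarchar(255) NULL,
    Column5 nvarchar(255) NULL,
    Column6 nvarchar(255) NULL
);

-- A few "important" values to play with.
INSERT INTO dbo.ChangeLoggingDemo (Column1, Column2, Column3, Column4, Column5, Column6)
VALUES (N'a', N'b', N'c', N'd', N'e', N'f'),
       (N'u', N'v', N'w', N'x', N'y', N'z');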

Despite their looks, these values are “important”

Old Way WHERE old = way

Let’s take a look at the code that they were using, shall we?
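The real thing lives in a Gist, but the pattern it follows is roughly the below: the MERGE dumps the before and after values (with those Old/New prefixes) into a holding table, and then every tracked column gets its own hard-coded INSERT. All of these names are stand-ins.

-- The change log the developers were filling, column by column.
DECLARE @ChangeLogTemp TABLE
(
    TableID    int,
    ColumnName sysname,
    OldValue   nvarchar(255),
    NewValue   nvarchar(255)
);

-- Captured from the MERGE's OUTPUT clause: old and new values side by side.
DECLARE @MergeOutput TABLE
(
    TableID    int,
    OldColumn1 nvarchar(255), NewColumn1 nvarchar(255),
    OldColumn2 nvarchar(255), NewColumn2 nvarchar(255)
    -- ...and so on for every column being tracked.
);

-- Then one hard-coded block per column...
INSERT INTO @ChangeLogTemp (TableID, ColumnName, OldValue, NewValue)
SELECT  TableID, N'Column1', OldColumn1, NewColumn1
FROM    @MergeOutput
WHERE   ISNULL(OldColumn1, N'') <> ISNULL(NewColumn1, N'');

-- ...and another for the next column, and the next, and the next...
INSERT INTO @ChangeLogTemp (TableID, ColumnName, OldValue, NewValue)
SELECT  TableID, N'Column2', OldColumn2, NewColumn2
FROM    @MergeOutput
WHERE   ISNULL(OldColumn2, N'') <> ISNULL(NewColumn2, N'');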

And the results?

XML anyone?

You can probably see the problem here.

Hey! It’s legacy code, let’s focus on just 1 problem at a time!

The main issue that I was asked about was that every time a column was deemed important and needed to be added to the list, they had to insert another INSERT INTO @ChangeLogTemp... and they thought that it wasn’t sustainable in the long run.

Hmmm, it also comes across as very RBAR, doesn’t it? Every time we want to include another column in the change tracking, we have to add it row by agonizing row. The script is already big enough; if we keep adding more, it will get massive!

Set-based is the right way to go 90% of the time, but how do we do set-based solutions on the same table?

New JOIN Way ON new = way

The first thing I do is to change that table variable into a temp table. Stats, indexes (if necessary), and I can query the results as we go along. Much better!

Temp > Variable?

The second thing is that, whether by luck or by design, the legacy code has the same naming convention for the columns: new column values have the prefix “New%” in the column name and old column values have the “Old%” prefix.
This works for us because we can now split the columns into 2 derived tables, New and Old, and compare them to find the differences.

Potential problem here…

Have you ever tried to find the differences between two consecutive rows of data? It’s fiendishly difficult. WHERE Column1 on row1 != Column1 on row2 apparently just does not work, le sigh.

I’ve talked before about PIVOT but now I’m going to introduce you to its little brother, UNPIVOT, which “rotat[es] columns of a table-valued expression into column values”.

I say “little brother” because the whole document talks about PIVOT, with only brief mentions of UNPIVOT in the notes.

If you’re writing documentation like this, please stop.

With UNPIVOT we can create a table of our rows around our ID and Column names…
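For the New side (sticking with the stand-in #MergeOutput name for the table-variable-turned-temp-table), it looks roughly like this:

SELECT  unpvt.TableID,
        unpvt.ColumnName,
        unpvt.ColumnValue AS NewValue
FROM
(
    -- The CAST(ISNULL(...)) aliases strip the "New" prefix and keep UNPIVOT happy
    -- (it wants matching types and it skips NULLs).
    SELECT  TableID,
            CAST(ISNULL(NewColumn1, N'') AS nvarchar(255)) AS [Column1],
            CAST(ISNULL(NewColumn2, N'') AS nvarchar(255)) AS [Column2],
            CAST(ISNULL(NewColumn3, N'') AS nvarchar(255)) AS [Column3],
            CAST(ISNULL(NewColumn4, N'') AS nvarchar(255)) AS [Column4],
            CAST(ISNULL(NewColumn5, N'') AS nvarchar(255)) AS [Column5],
            CAST(ISNULL(NewColumn6, N'') AS nvarchar(255)) AS [Column6]
    FROM    #MergeOutput
) AS NewValues
UNPIVOT ( ColumnValue FOR ColumnName IN ([Column1], [Column2], [Column3], [Column4], [Column5], [Column6]) ) AS unpvt;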

Potential problem averted!

… and with this, we can join on our ID and Column names and get to our more intuitive WHERE OldValue != NewValue.

Bringing it all together!
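Stitched together it looks roughly like this (same stand-in names as before; @ChangeLogTemp is the change log from the earlier sketch):

INSERT INTO @ChangeLogTemp (TableID, ColumnName, OldValue, NewValue)
SELECT  NewVals.TableID,
        NewVals.ColumnName,
        OldVals.ColumnValue AS OldValue,
        NewVals.ColumnValue AS NewValue
FROM
(
    SELECT  TableID, ColumnName, ColumnValue
    FROM    ( SELECT TableID,
                     CAST(ISNULL(NewColumn1, N'') AS nvarchar(255)) AS [Column1],
                     CAST(ISNULL(NewColumn2, N'') AS nvarchar(255)) AS [Column2],
                     CAST(ISNULL(NewColumn3, N'') AS nvarchar(255)) AS [Column3],
                     CAST(ISNULL(NewColumn4, N'') AS nvarchar(255)) AS [Column4],
                     CAST(ISNULL(NewColumn5, N'') AS nvarchar(255)) AS [Column5],
                     CAST(ISNULL(NewColumn6, N'') AS nvarchar(255)) AS [Column6]
              FROM   #MergeOutput ) AS src
    UNPIVOT ( ColumnValue FOR ColumnName IN ([Column1], [Column2], [Column3], [Column4], [Column5], [Column6]) ) AS unpvt
) AS NewVals
INNER JOIN
(
    SELECT  TableID, ColumnName, ColumnValue
    FROM    ( SELECT TableID,
                     CAST(ISNULL(OldColumn1, N'') AS nvarchar(255)) AS [Column1],
                     CAST(ISNULL(OldColumn2, N'') AS nvarchar(255)) AS [Column2],
                     CAST(ISNULL(OldColumn3, N'') AS nvarchar(255)) AS [Column3],
                     CAST(ISNULL(OldColumn4, N'') AS nvarchar(255)) AS [Column4],
                     CAST(ISNULL(OldColumn5, N'') AS nvarchar(255)) AS [Column5],
                     CAST(ISNULL(OldColumn6, N'') AS nvarchar(255)) AS [Column6]
              FROM   #MergeOutput ) AS src
    UNPIVOT ( ColumnValue FOR ColumnName IN ([Column1], [Column2], [Column3], [Column4], [Column5], [Column6]) ) AS unpvt
) AS OldVals
    ON  OldVals.TableID    = NewVals.TableID
    AND OldVals.ColumnName = NewVals.ColumnName
WHERE   OldVals.ColumnValue <> NewVals.ColumnValue;   -- the intuitive OldValue != NewValue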

And it works!

wasn’t this replaced by JSON?

It’s not great though.

The whole thing was supposed to reduce the amount of changes required when they need to include or exclude columns. All in all though, it’s just 6 lines less. Not exactly the great return that you’d expect.
True, with the old way every column we want to add means an extra 6 lines, while the new way adds 2.

That means for 1,024 columns:

  • The old way could have at least 6,144 lines per table. (1024 * 6)
  • The new way could have at least 2,048 lines per table (not explaining this calculation >:( )

So, is there anything else that we can do?

Dynamically?

I’ve talked before about T-SQL automation with Dynamic SQL and this should be a good candidate for that.

What can we make dynamic here though? How about…

  1. The new and old columns bit?
  2. The FOR ColumnName IN([Column1], [Column2], [Column3], [Column4], [Column5], [Column6]) bit?
  3. The CAST(ISNULL([Old/NewColumn], '') AS nvarchar bit?

Explain it to me.

  1. The new and old columns.

Well, temp tables exist in the tempdb database, they just get a suffix of a lot of underscores and a hex value.

So to get our column names, we can just query the sys.tables and sys.columns catalog views in [tempdb] and we should have what we need.

We can add a filter clause too
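Something like this (still with the stand-in #MergeOutput name):

-- Find the temp table in tempdb and pull back just the Old%/New% columns.
SELECT  tc.name
FROM    tempdb.sys.tables     AS tt
INNER JOIN tempdb.sys.columns AS tc
    ON  tc.object_id = tt.object_id
WHERE   tt.name LIKE N'#MergeOutput%'                      -- the underscores + hex suffix live here
AND     (tc.name LIKE N'New%' OR tc.name LIKE N'Old%');    -- the filter clause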

2. The FOR ColumnName IN (

I’ve talked before about concatenating values so we can use that to generate this part of the script.

LEN(tc.name) - 3 to remove the “old”/”new” prefix
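One way of doing that concatenation (my sketch, not the Gist):

DECLARE @UnpivotColumns nvarchar(MAX) = N'';

-- Build up "[Column1], [Column2], ..." by lopping off the 3-character prefix.
SELECT  @UnpivotColumns = @UnpivotColumns
            + CASE WHEN @UnpivotColumns = N'' THEN N'' ELSE N', ' END
            + QUOTENAME(RIGHT(tc.name, LEN(tc.name) - 3))
FROM    tempdb.sys.tables     AS tt
INNER JOIN tempdb.sys.columns AS tc
    ON  tc.object_id = tt.object_id
WHERE   tt.name LIKE N'#MergeOutput%'
AND     tc.name LIKE N'New%';   -- one prefix is enough; the Old% side gives the same names

SELECT @UnpivotColumns AS UnpivotColumnList;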

3. The CAST(ISNULL(...

This is basically the same as the above. Don’t be put off by needing to add CAST(ISNULL( before the column names, it’s not as complex as you’d think.

STUFF just doesn’t look as pretty… 🙁
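Same trick again, just with more string wrapped around each column name; this builds the New side, and the Old side is the same with the filter swapped:

DECLARE @NewSelectList nvarchar(MAX) = N'';

-- Builds "CAST(ISNULL([NewColumn1], N'') AS nvarchar(255)) AS [Column1], ..."
SELECT  @NewSelectList = @NewSelectList
            + CASE WHEN @NewSelectList = N'' THEN N'' ELSE N', ' END
            + N'CAST(ISNULL(' + QUOTENAME(tc.name) + N', N'''') AS nvarchar(255)) AS '
            + QUOTENAME(RIGHT(tc.name, LEN(tc.name) - 3))
FROM    tempdb.sys.tables     AS tt
INNER JOIN tempdb.sys.columns AS tc
    ON  tc.object_id = tt.object_id
WHERE   tt.name LIKE N'#MergeOutput%'
AND     tc.name LIKE N'New%';

SELECT @NewSelectList AS NewSelectList;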

Now that we have our dynamic bits, let’s create the full statements.

Full Dynamic Script
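Here’s a compressed sketch of how those pieces could be glued together; it assumes @UnpivotColumns, @NewSelectList, and an equivalent @OldSelectList were built as above, and it’s a stand-in for the real Gist rather than a copy of it.

DECLARE @DynamicSql nvarchar(MAX);

SET @DynamicSql = N'
SELECT  NewVals.TableID,
        NewVals.ColumnName,
        OldVals.ColumnValue AS OldValue,
        NewVals.ColumnValue AS NewValue
FROM
(
    SELECT  TableID, ColumnName, ColumnValue
    FROM    ( SELECT TableID, ' + @NewSelectList + N' FROM #MergeOutput ) AS src
    UNPIVOT ( ColumnValue FOR ColumnName IN (' + @UnpivotColumns + N') ) AS unpvt
) AS NewVals
INNER JOIN
(
    SELECT  TableID, ColumnName, ColumnValue
    FROM    ( SELECT TableID, ' + @OldSelectList + N' FROM #MergeOutput ) AS src
    UNPIVOT ( ColumnValue FOR ColumnName IN (' + @UnpivotColumns + N') ) AS unpvt
) AS OldVals
    ON  OldVals.TableID    = NewVals.TableID
    AND OldVals.ColumnName = NewVals.ColumnName
WHERE   OldVals.ColumnValue <> NewVals.ColumnValue;';

-- Temp tables are visible inside sp_executesql in the same session (table variables are not),
-- which is another point in favour of swapping the table variable for #MergeOutput earlier on.
EXEC sys.sp_executesql @DynamicSql;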

Results are good!

We’ve seen this before

Overall, the script is longer at nearly double the lines but where it shines is when adding new columns.
To include new columns, just add them to the table; to exclude them, just add in a filter clause.

So, potentially, if every column in this table is to be tracked and we add columns all the way up to 1,024 columns, this code will not increase.
Old way: at least 6,144.
New way: at least 2,048.
Dynamic: no change

Summary:

Like the script, this was a massive post. Back at the start, I said that a developer came to me because they wanted to get more DRY (?) and stop needing to add more content to the stored procedure.

Do you think the developer used this?

Nope!

I can’t say that I blame them, it’s slightly ugly and unwieldy, and I wrote it so I should love it.
Yet if something was to go wrong and the need was there to open the procedure and troubleshoot it, the first person to open this up is going to let out a groan of despair!

So this request turned into a proof of concept and nothing more. No skin off my back, I have a growing list of tasks to accomplish by 5 minutes ago. Better get back to them.

DBA Fundamentals July 2017

What’s On?

July is a pretty busy month for the DBA Fundamentals Virtual Group, with 3 separate sessions being made available for the SQL Community, as well as discount codes being given away for the PASS Summit.

If you haven’t considered going before, now may be the time to do so. Nearly every review of the summit has people saying that they consider it to be the start of their careers, which is pretty high praise!

Use our discount code VC15GBQ6 for $150.00 off the cost of the PASS Summit; it is currently $1895.00 until the 23rd of July, so with the discount code it will be $1745.00.
Also, if you use our code, you will be entered in a drawing for one winner to get a $500.00 Amazon Gift Card.

The next big date for the PASS Summit price is the 23rd of July as the cost goes up another $300-400 after that!

Sessions.

DevOps and the DBA

Hamish Watson ( blog | twitter ), 11th July, 12:30 – 13:30 Brisbane (10th July, 02:30 – 03:30 UTC)

Register: dbafun.org

You may have heard the word “DevOps” and wondered whether it is just another buzzword and/or what it can do for you.

In this session I will demystify the concepts of DevOps and we will look at two aspects of DevOps – Continuous Integration & Continuous Delivery.

Continuous Integration is the practice in which software developers frequently integrate their work with that of other members of the development team. It also involves automating tests around the integrated work.

Continuous Delivery is the next step after Continuous Integration in the deployment pipeline and is the process of automating the deployment of software to test, staging, and production environments.

Database migrations/changes are an area that may not be typically automated or utilise Continuous Delivery.

Through the use of a comprehensive live demo against a running production database, the audience will learn the benefits of Continuous Delivery and how to implement it in their database systems deployment pipeline.

Hamish Watson is a Systems Management Specialist with a passion for efficient application deployment using DevOps methodologies.

He has 19 years’ IT experience in managing large-scale databases on JADE & SQL Server technologies. He has been managing SQL Server since SQL Server 2000, and pragmatic architectural design is his main focus at Jade Software.

Educating and helping others learn is a driver for Hamish and he is a PASS Chapter Leader, International speaker and a repeat guest lecturer at a local university. Follow him at @TheHybridDBA or at https://hybriddbablog.com

SQL Server Performance Tuning Made Easy

Pinal Dave ( blog | twitter ), 11th July, 11:00 – 12:00 (11th July, 16:00 – 17:00 UTC)

Register: dbafun.org

SQL Server Performance Tuning is still a mystery to many. Quite often, even an experienced SQL Server DBA gets confused as to how to figure out where to start with this entire process. In this module we are going to learn how to get started with SQL Server Performance Tuning. We will go over some very important scripts which will help us to get started with the SQL Server Performance Tuning exercise. At the end of the session the author will share three important scripts which he uses at his customer sites all the time.

Pinal Dave has been a part of the industry for more than eleven years. During his career he has worked both in India and the US, mostly with SQL Server technology – right from version 6.5 to its latest form. Pinal has worked on many performance tuning and optimization projects for high-transaction systems. He received his Master of Science from the University of Southern California and a Bachelor of Engineering from Gujarat University. Additionally, he holds many Microsoft certifications. He has been a regular speaker at many international events like TechEd, SQL PASS, MSDN, TechNet, and countless user groups.

Pinal writes frequently on his blog http://blog.sqlauthority.com on various subjects regarding SQL Server technology and Business Intelligence. His passion for the community drives him to share his training and knowledge. His previous experience includes Technology Evangelist at Microsoft and Sr. Consultant at SolidQ. Prior to joining Microsoft he was awarded the Microsoft MVP award for three consecutive years for his outstanding community service and evangelizing SQL Server technology. He was also awarded the Community Impact Award – Individual Contributor.

Extending DevOps to SQL Server

Grant Fritchey ( blog | twitter ), 18th July, 11:00 – 12:00 (18th July, 16:00 – 17:00 UTC)

Register: dbafun.org

Most organizations are under pressure to speed up the software delivery cycle, whether that’s to respond more quickly to the needs of the business, the needs of your customers or just to keep up with the competition. Unfortunately the database is commonly considered a bottleneck. Without the right processes in place, database change management can slow things down, adding risk, uncertainty, and getting in the way of development and operations working together to deliver. Any organization that wants to fully benefit from a DevOps approach is going to have to overcome some specific challenges presented by the database. This session will teach you how to take DevOps principles and practices and apply them to SQL Server so that you can speed up the database delivery cycle at the same time you protect the information contained within.

Grant Fritchey, Microsoft Data Platform MVP, has more than twenty years’ experience in IT. That time was spent in technical support, development and database administration. Grant currently works as a Product Evangelist at Red Gate Software. Grant writes articles for publication at SQL Server Central and Simple-Talk. He has published several books including, “SQL Server Execution Plans” and “SQL Server Query Performance Tuning.” Grant Fritchey currently serves on the Board of Directors of the PASS organization, the leading source of educational content and training on the Microsoft Data Platform, as the Executive Vice President in charge of governance and finance. Grant teaches and presents at events, large and small, all over the world.

Don’t Forget!

Any questions, hit us up on Slack (joining instructions here 🙂 ), or Twitter
