Being Lazy, or is it Efficient?

My main client decided the other week to update an ETL I had created for them over a year ago and add another couple of imports to it. This basically meant writing some PowerShell code, plus the SQL code and schema, to import the data.

I had started to write stuff out by hand when I suddenly remembered a PowerShell script I had previously written and later wrote about for Red-Gate’s Simpletalk.

I won’t rewrite the post here, but basically, if I give it the name of a CSV file, it’ll attempt to create the table schema, an import stored procedure, and some of the object information for PowerShell. I usually still have to make some edits, but this saves me a lot of work.
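To give a rough idea, the generated output looks something like the following. (The table name, column types, and file path here are all invented for illustration; the real script infers the schema from the CSV itself, and its actual output may differ.)

create table dbo.Employees_Import
(
    Emp_Num    int         not null,
    First_Name varchar(50) null,
    Last_Name  varchar(50) null,
    City       varchar(50) null
);
go

create procedure dbo.Import_Employees
as
begin
    -- truncate-and-load pattern; illustrative only
    truncate table dbo.Employees_Import;

    bulk insert dbo.Employees_Import
    from 'C:\Imports\Employees.csv'  -- illustrative path
    with (firstrow = 2, fieldterminator = ',', rowterminator = '\n');
end
go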

And it’s a good thing, because after I created and tested the two imports they requested last week, on Friday they scrapped that approach and decided to combine the data into a single file, which meant I needed to ditch all that work and start from scratch.

I wrote all the necessary PowerShell, SQL, and did the testing in less than 30 minutes. It was fairly trivial and fairly efficient. Or perhaps I was just being lazy.

But the time it took me to write the original script (the one that creates the scripts) has definitely saved me time since, and as a result saved my customer money. So in the long run, it’s well worth it!

And that’s all for this week!

Production Code for your SQL database

I realized after writing my earlier post that today was T-SQL Tuesday. I wasn’t going to contribute, but after seeing some posts, I thought I’d give a very quick shot at it. This month, Tom Zika (t | b) asks us to talk about what makes code “production grade”. You can find his full invitation here.

There are some great columns there, but I’ve noticed something that many developers assume (and honestly, it’s a good thing): that they work at a company with good source control and a decent release procedure. Sadly, with my clients, that’s rarely the case. Often I’m inheriting code that’s only available on the production server itself, or there are 20 different contributors (OK, I’m exaggerating, but not by much), each with their own stash of code.

Ultimately this means the production server is really the only single source of truth. And that leads me to my first item.

Select * and other shortcuts

It should be obvious, but while I may often use Select * while developing code, I’d never put it into production. Even if it works, it’s messy. But I’d go a step further: I prefer to fully qualify all my columns. For example:

select Emp_Num, First_Name, Last_Name, City from Employee_Table

vs

select E.Emp_Num, E.First_Name, E.Last_Name, E.City from Employee_Table E

Now, the above is an extremely artificial example. But imagine I want to join it to, say, a table of phone numbers (because the original developer was smart enough to realize an employee could have multiple phone numbers, and didn’t simply add columns to the Employee_Table).

So now someone comes along and rewrites the first as:

select Emp_Num, First_Name, Last_Name, City, Phone_Num from Employee_Table E
inner join Employee_Phones EP on EP.Emp_Num = E.Emp_Num

Now, they’re of course deploying to production as they go, and suddenly the above code breaks with an “Ambiguous column name” error: Emp_Num exists in both tables, so SQL Server can’t tell which one the unqualified reference means. Fortunately, they’re fairly smart, realize the fix, and go in and edit it to:

select E.Emp_Num, E.First_Name, E.Last_Name, E.City, EP.Phone_Num  
from Employee_Table E  
inner join Employee_Phones EP on EP.Emp_Num = E.Emp_Num 

So it’s a simple thing, but by making a habit of fully qualifying your column names, you can avoid future errors.

Failing Gracefully

When I’m writing quick and dirty code, while I try to avoid errors of course, I’m not overly worried about leaving the system in an unstable state. By this I mean: if I’m debugging code with a cursor in it and it breaks, and I have to manually close and deallocate the cursor, that’s fine. Same thing with transactions. Yeah, I might block someone else’s work, but I’ll pretty quickly realize what I’ve done and be able to commit or roll back my transaction.

In production code, without going into details on TRY/CATCH blocks and all that, I would argue that any code that contains a cursor, a transaction, or anything else that could potentially block processing absolutely needs robust error handling. I’ll ignore the debate about the best way to handle it, in part because sometimes rolling back is the right answer, sometimes trying again is, and sometimes even finishing the transaction and cleaning up the data later is. The point is, you can’t afford to fail in an ungraceful way and leave your system in an unknown state.
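As a minimal sketch of what I mean (the update itself is hypothetical, reusing the table from the earlier example), the skeleton looks something like this:

begin try
    begin transaction;

    -- hypothetical work that could fail partway through
    update dbo.Employee_Table
    set City = 'Albany'
    where Emp_Num = 42;

    commit transaction;
end try
begin catch
    -- roll back anything still open so we don't leave locks behind
    if @@TRANCOUNT > 0
        rollback transaction;

    -- rethrow so the caller (or the Agent job) knows something failed
    throw;
end catch;

Whether the catch block rolls back, retries, or logs and moves on is the debate I’m sidestepping; the point is that it does something deliberate rather than leaving an open transaction behind.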

Alerting

I didn’t have this on my mind when I started this post, but the last bit reminded me of it. It’s not code per se; it’s more about jobs and the like. Generally, I’m a huge fan of alerts. If something has failed, I want an alert! But I realized a long time ago that alerts have to be actionable. This means the person receiving the alert has to be able to act on it, and it actually needs to be acted upon. If something fails and needs no action (and the action can be as simple as noting it for future reference), then don’t bother alerting. Log it, or at the very least retry before you send an alert.

Years ago, a client had a job that would fail about once every 100 days. It ran once in the morning. It had an alert that met the above criteria: I or another DBA could react to it, and in this case the reaction was simply “retry the job.” I finally analyzed it and realized that, given the failure mode, simply waiting a minute and retrying was a far better solution than alerting us. I did some math on the failure mode and realized that with this new setup, a failure on the second attempt (which would then send us an alert) should happen only about once every 10,000 days. So the initial alert was sort of pointless when there was a better way of handling it.
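For what it’s worth, SQL Agent can do this kind of retry natively. Something like the following (the job name is made up, and this assumes the step’s failure is what raises the alert):

exec msdb.dbo.sp_update_jobstep
    @job_name       = N'Morning Import',  -- hypothetical job name
    @step_id        = 1,
    @retry_attempts = 1,   -- try once more before the step actually fails
    @retry_interval = 1;   -- wait one minute between attempts

With that in place, the alert only fires when the retry also fails, which is exactly the behavior I wanted.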

Conclusion

So, to sum things up: avoid errors; if you do have errors, handle them gracefully; and if you have to alert, make sure the alert is actionable.

Change My Mind… T-SQL Tuesday #146

Andy Yun (t | b) is this month’s host. Our challenge this month is:

 …to think about something you’ve learned, that subsequently changed your opinion/viewpoint/etc. on something.

I’m going to give a twofer for this one, since he’s also allowing non-technical answers.

8K Blocks

“SQL Server writes everything in 8K blocks.” I recall learning this probably close to two decades ago. And it makes sense, on a lot of levels. It was “confirmed” when we reformatted the disks on one of our production servers into 64K blocks so SQL Server could read/write 8 blocks at a time: performance definitely improved.

But then I learned from Argenis Fernandez that this isn’t necessarily true. SQL Server will write what it wants to write. And if you think about it, that makes sense. If you update one record, and it’s a single value you’re updating, SQL Server isn’t going to simply sit there and not write your 1-byte change to disk. And it’s not going to make up 8,191 random bytes just so it can satisfy this rule of 8K. Yes, SQL Server will try to consolidate disk I/O and be efficient about it, but even then, it may not matter.

Gone are the days when we wrote data to bare metal (some of us are old enough to recall an option in early versions of SQL Server where one could create a database on a “raw” partition to get a performance boost). No, today we’re probably writing through multiple layers, more than we realize. For one client, for example, a disk write from SQL Server passes through an encryption layer, then to the OS, which passes it through a disk driver layer, which then passes it on to the VM, which has its own layers. At that point, even if SQL Server were trying to write only 8K blocks, it’s quite possible every other layer has its own rules.
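If you’re curious what your own instance actually does, the I/O DMVs give a rough picture. A quick illustrative query for the average write size per database file:

select
    db_name(vfs.database_id) as database_name,
    mf.name as file_name,
    vfs.num_of_writes,
    vfs.num_of_bytes_written / nullif(vfs.num_of_writes, 0) as avg_write_bytes
from sys.dm_io_virtual_file_stats(null, null) as vfs
inner join sys.master_files as mf
    on mf.database_id = vfs.database_id
    and mf.file_id = vfs.file_id
order by database_name, file_name;

If everything really were written in 8K blocks, avg_write_bytes would sit at a constant 8,192; in practice you’ll almost certainly see it vary, especially for the log files.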

Yes, changing our disk formatting from 8K blocks to 64K blocks helped. It helped us. But your requirements and situation may be different, and ultimately you may end up writing more or less than 8K at a time. (And I hope I’ve summed up Argenis’s position accurately.)

Toss the Rope Down

As many know, I’m a caver. I’ve been caving for decades. Early in my caving career I learned vertical caving and back then we still used what was known as a “3-knot” system or “prussiks”. That hardware has improved and changed. But one habit took longer. It was (and unfortunately still is) common to tie one end of the rope to a tree or other rigging point, and drop the rest down the pit. Sure, you ended up with a pile of rope at the bottom, but no one really cared, as long as you didn’t step on it (which is another myth for another time). This helped guarantee that your rope actually reached the bottom. The only thing that sucks more than rappelling down a pit and reaching the knot at the end of the rope 50′ from the bottom of the pit is rappelling down a pit and realizing 50′ from the bottom of the pit that you forgot to put a knot in your rope.

But somewhere along the line, folks started to realize, “hey, that rope at the bottom of the pit is useless. It’s FAR more useful if we can leave it at the top of the pit.” As the depths of most pits are known, it’s actually not that hard to measure out the rope you think you need (plus a bit extra) and then rig the rope so that you have the extra at the top. Why is this important? Some call this “rigging for rescue” (or, more accurately, one part of that bigger concept).

Imagine the scenario where you’re ascending the rope and have an equipment failure. You can’t go up and can’t go down. If all the extra rope is below you, it doesn’t do you any good. You can’t do anything with it. But, if that extra 10′ or 20′ (or more) is at the top and you’ve rigged correctly, someone at the top can, without too much effort, safely change the rigging (with you still on the rope) to a 3:1 or if nothing else, a 2:1 haul system. Suddenly that extra rope sitting at the top of the pit becomes useful.

Beginners will still often toss the extra rope to the bottom of the pit, but more experienced cavers will rig it to stay at the top and this may literally save lives.

Conclusion

Stop and think about the practices you follow now: some of what you learned may be wrong or no longer applicable. And more importantly, do those bad practices interfere with doing something that’s better or safer? With SQL Server, a lot has changed and improved over the past few decades, but are you still doing something you were taught two decades ago because “well, that’s the way things are done”? A lot has changed in two decades. Make sure your knowledge is still valid!

What T-SQL Do You Find Difficult?

It’s been a while since I’ve focused exclusively on SQL Server or even T-SQL, and perhaps this would make a great question for someone to ask on a T-SQL Tuesday, but for now I’ll ask.

An open question to any of my readers who work with T-SQL: “What command or construct do you find difficult?”

Let’s be honest, we all have something about T-SQL that trips us up. I’m hoping the basic SELECT statement doesn’t. Though once you start to throw in stuff like an outer join and a correlated subquery, it can get complicated pretty quickly. But even those I can generally get written without too much effort.
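For instance, this is the sort of combination I mean, reusing the tables from the earlier section (the query itself is made up):

select E.Emp_Num,
       E.Last_Name,
       EP.Phone_Num,
       (select count(*)
        from Employee_Phones P
        where P.Emp_Num = E.Emp_Num) as Phone_Count  -- correlated subquery
from Employee_Table E
left outer join Employee_Phones EP  -- outer join keeps phoneless employees
    on EP.Emp_Num = E.Emp_Num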

The one construct I encounter infrequently, but often enough that I feel I should fully grok it by now, is a DELETE with a join. It’s actually not all that complicated, but I often have to look up the syntax to remind myself.
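As a reminder to future me as much as to anyone, it looks something like this (the orphaned-phone-rows scenario is invented, again reusing the earlier example tables):

delete EP
from Employee_Phones EP
left outer join Employee_Table E
    on E.Emp_Num = EP.Emp_Num
where E.Emp_Num is null  -- remove phone rows with no matching employee

The bit I always forget is that the alias immediately after DELETE is what determines which side of the join actually loses rows.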

But the one that gets me every single time: PIVOT. I’ve probably only written a half-dozen pivots in my life, and that’s six too many in my opinion. I’m not sure what it is about them, but my mind still doesn’t fully grasp the syntax in an easy way.
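For reference, here’s about the most minimal PIVOT I can manage (the Monthly_Sales table and its month values are entirely hypothetical):

select Emp_Num, [Jan], [Feb], [Mar]
from (select Emp_Num, Sale_Month, Sale_Amount
      from Monthly_Sales) as src  -- hypothetical source table
pivot (sum(Sale_Amount)
       for Sale_Month in ([Jan], [Feb], [Mar])) as p

Note the pivoted values end up listed twice, once in the outer SELECT and once in the FOR … IN clause, which is part of what makes the syntax feel clunky to me.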

Thinking about it, I think one reason is that when I’ve had to use one, it has never been a trivial case like that.

Fortunately, since my primary focus is on the operations side of being a DBA, and most of the code I write is to support that, I don’t come across the need for a pivot very often. I suppose though if I were writing a lot more code, I’d use it more and understand it better.

So, what’s your T-SQL nemesis?