Production Code for your SQL database

I realized after writing my earlier post that today was T-SQL Tuesday. I wasn’t going to contribute, but after seeing some posts, I thought I’d give a very quick shot at it. This month, Tom Zika (t | b) asks us to talk about what makes code “production grade”. You can find his full invitation here.

There are some great columns there, but I've noticed an assumption many developers make (and honestly, it's a good thing to be able to assume): that they work in a company with good source control and a decent release procedure. Sadly, with my clients, that's rarely the case. Often I'm inheriting code that's only available on the production server itself, or there are 20 different contributors (ok, I'm exaggerating, but not by much) and each has their own stash of code.

Ultimately this means the production server really is the only source of truth. So that leads me to my first item.

Select * and other shortcuts

It should be obvious, but while I may often use Select * while developing code, I’d never put it into production. Even if it works, it’s messy. But I’d go a step further. I prefer to fully qualify all my columns. For example

select Emp_Num, First_Name, Last_Name, City from Employee_Table

vs

select E.Emp_Num, E.First_Name, E.Last_Name, E.City from Employee_Table E

Now the above is an extremely artificial example. But now imagine I want to join it to, say, a table of phone numbers (because the original developer was smart enough to realize an employee could have multiple phone numbers and didn't simply add columns to the Employee_Table).

So now someone comes along and rewrites the first as:

select Emp_Num, First_Name, Last_Name, City, Phone_Num from Employee_Table E
inner join Employee_Phones EP on EP.Emp_Num = E.Emp_Num

Now, they're of course deploying to production as they go, and suddenly the above code breaks with an "Ambiguous column name 'Emp_Num'" error, since both tables have that column. Fortunately, they're fairly smart, realize the fix, and go in and edit it to

select E.Emp_Num, E.First_Name, E.Last_Name, E.City, EP.Phone_Num  
from Employee_Table E  
inner join Employee_Phones EP on EP.Emp_Num = E.Emp_Num 

So it’s a simple thing, but by making a habit of fully qualifying your column names, you can avoid future errors.

Failing Gracefully

When I'm writing quick and dirty code, while I try to avoid errors of course, I'm not overly worried about leaving the system in an unstable state. By this I mean, if I'm debugging code with a cursor in it and it breaks and I have to manually close and deallocate the cursor, that's fine. Same thing with transactions. Yeah, I might block someone else's work, but I'll pretty quickly realize what I've done and be able to commit or roll back my transaction.

In production code, without going into details on TRY/CATCH blocks and all that, I would argue that any code that contains a cursor, a transaction, or anything else that could potentially block processing absolutely needs robust error handling. I'll ignore the debate about the best way to handle errors, in part because sometimes rolling back is the right answer, sometimes retrying is the right answer, and sometimes even finishing the transaction and cleaning up the data later is. The point is, you can't afford to fail in an ungraceful way and leave your system in an unknown state.
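To make that concrete, here's a minimal sketch of the shape I mean, using TRY/CATCH. The table and the update are hypothetical placeholders, and THROW requires SQL 2012 or later:

-- Minimal sketch of failing gracefully; dbo.Employee_Table and the update
-- itself are placeholders for whatever your production code actually does.
BEGIN TRY
    BEGIN TRANSACTION;

    UPDATE dbo.Employee_Table
    SET City = 'Albany'
    WHERE Emp_Num = 42;

    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- Don't leave an open transaction blocking everyone else.
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;

    -- Re-raise so the caller (or the Agent job) knows something went wrong.
    THROW;
END CATCH;

Whether you roll back, retry, or log and clean up later is your call; the point is that the CATCH block, not luck, decides what state the system is left in.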

Alerting

I didn't have this on my mind when I started this post, but the last bit reminded me of it. It's not code per se, but more jobs and the like. Generally, I'm a huge fan of alerts. If something has failed, I want an alert! But I realized a long time ago that alerts have to be actionable. This means the person receiving the alert has to be able to act on it, and it actually has to need acting upon. If a failure requires no real action (or the "action" is as simple as noting it for future reference), then don't bother alerting. Log it, or at the very least retry before you send an alert. Years ago a client had a job that ran once a day in the morning and failed about once every 100 days. It had an alert that met the above criteria: I or another DBA could react to it, and in this case the reaction was simply "retry the job". I finally analyzed it and realized that, given the failure mode, simply waiting a minute and retrying was a far better solution than alerting us. I did some math on the failure mode and realized that the new setup should fail on the second attempt (and only then send us an alert) about once every 10,000 days. So the initial alert was pointless when there was a better way of handling it.
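As it happens, SQL Server Agent has this sort of retry built into job steps. A sketch, with a hypothetical job and step:

-- Sketch: let the Agent job retry once after a minute before it fails
-- (and therefore before it alerts). The job and step names are hypothetical.
EXEC msdb.dbo.sp_update_jobstep
    @job_name       = N'Morning Data Load',
    @step_id        = 1,
    @retry_attempts = 1,    -- try again once
    @retry_interval = 1;    -- wait 1 minute between attempts

Only if the retry also fails does the job fail and the alert fire, which is exactly the behavior I wanted.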

Conclusion

So, to sum things up: avoid errors; if you do have errors, handle them gracefully; and if you have to alert, make sure the alert is actionable.

T-SQL Tuesday – The Conference That Changed Everything For Me

My faithful readers get a double dose today, only because when I wrote my earlier post I had not yet seen the invite for this month's T-SQL Tuesday. Otherwise I would have started with this post (and perhaps written a better version of it; this one will be a bit hurried).

Like many, I'm picking PASS Summit. No, not very creative, but true and accurate. I should note my first conference was SQL Connections, back in I believe 2006 or 2007 in Orlando, and that had a fairly important impact on me too. But my first PASS Summit, in 2015, had a bigger one. I managed to go in place of our SQL Server User Group organizer, provided I attended the User Group update the day before and also represented us officially in that capacity. I of course did both.

But I also had an ulterior motive for going. Two of my best friends from college lived in Seattle and I had not seen them in years, in fact in well over a decade. So it was a good chance to catch up with them. (Let me just say, flying from the east coast to the west coast and trying to go to bed at 1:00 AM West coast time, but waking up at 7:00 AM doesn’t work well!)

That said, the real reason this conference was so important was that I met Kathi Kellenberger @AuntKathi. She gave a presentation on how to get published. For years I had given thought to writing a book, and with the recent death of my father, who had always wanted to write the Great American Novel, this seemed like an interesting session to attend. She of course gave a great presentation. I spoke briefly with her afterwards and then went on to the next session. But her session stayed in my mind. Later that day I tracked her down and asked further questions, and before I knew it I was introduced to her rep at Apress.com. Very quickly I was discussing my idea with him; he expressed an interest and suggested I submit a more formal proposal via email. Within a few weeks of the conference I did so, and my idea was accepted. That was the easy part. Translating my thoughts to paper was a bit harder. But a year later, by the 2016 PASS Summit, I was a published author. My dad wasn't around to see it, but the book was dedicated to him.

It wasn't the Great American Novel and honestly, sales never lived up to even my more pessimistic expectations, but that doesn't matter. Someone paid me for my writings! And you can still buy a copy of IT Disaster Response: Lessons Learned in the Field, my take on combining IT disaster response with thoughts on plane crashes and cave rescues. It's not the most technical book, nor was it intended to be; it was meant to be a different, more holistic way of looking at responding to disasters. Instead of talking about "do backups like this", it talks about using ICS (Incident Command System) and CRM (Crew Resource Management) techniques to help respond to your disaster.

I'm not here to sell you on my book, but to talk about how that one conference and that one chance encounter with the right person changed my life. But I won't stop you from buying it. It's a quick and, I think, fun read! And you might even learn something.

I've enjoyed all my PASS Summits, including 2020 when I finally had a chance to present (albeit remotely), and SQL Saturdays (where I've learned a LOT and owe a great deal of thanks to more people than I can name for all they've taught me), but that first Summit was the one that probably had the most impact.

Thanks for reading.

T-SQL Tuesday #152 – My Rant

Thanks to Deb Melkin for hosting this month's T-SQL Tuesday and developing the topic. Instead of calling this a rant, perhaps I should call it an "I told you so." There's a common refrain among DBAs of "It depends" when asked a question. And that's generally accurate. But this is a case of me saying "it doesn't depend, do as I say" and being ignored.

Ironically, when I took my database class in college, it was long enough ago that the idea of a "Sequel Database" (that alone should tell you how long ago this was) was basically described as an up-and-coming technology that perhaps had a future. Talk about a bold and accurate prediction! That said, one of the things that fascinated me then, and still does, is that SQL (in general, not the product SQL Server) is based on work done by Edgar F. Codd and has a fairly strict mathematical basis. (Which is another reason I rail against those who claim that RDBMSs and SQL will eventually be replaced. That's like saying algebra will be replaced. There may be other branches of mathematics developed that are far better for their specific domains, but the validity and usability of algebra will never go away.)

In any event, one of the key concepts Codd developed was that of "a table". A table has several particular parts to its definition; the one critical for this blog is that a table itself has no implicit order. Now, many folks will run a query multiple times and get the same results every time. But that's more a factor of how SQL Server happens to handle reads. At my last full-time job, I was easily able to prove to the programmers that a query on the UAT box would return a different order than on prod because of the number of CPUs and disks. But that's not what I'm here to talk about.

My "I told you so" moment goes back further, to a table that was about as simple as you can get: it had a single row. Now, I think we can all agree that a single row will always come back in the same order, right? I can't recall exactly why the developer felt that this table was the solution to his problems, but I pushed back. I asked that, at the very least, he put in a WHERE clause. He felt that would impact performance too much and besides, with one row, it would always return his results. I of course asked, "What happens if eventually the table has two rows?" "Oh, well my row will return first anyway." "No it won't." Well, he wouldn't budge and I had bigger fish to fry. At the time there really was no reason to expect this table to grow. But I tucked it away in the back of my mind.

Sure enough, about a year later, which was three months after the developer left, we started to get really weird results on the webpage that relied on that table. It seems another developer had realized this table was a perfect place to store the data he needed (I'm assuming it was some sort of config data, but it was honestly so long ago I can't recall), so he added a row. Now HE was smart enough to add a WHERE clause to his query. But the original "don't worry about it" query still had no WHERE clause, and sure enough, it was sometimes returning the new row instead of the original. Fortunately this was a 5 minute fix. But I can only imagine how long it would have taken to find the problem if I hadn't remembered it in the first place.
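To make the failure mode concrete, here's a hypothetical reconstruction of the scenario (all the names are made up):

-- Hypothetical reconstruction; table, columns and values are invented.
CREATE TABLE dbo.App_Config (Owner varchar(20), Config_Value varchar(100));
INSERT INTO dbo.App_Config VALUES ('WebPage', 'original value');

-- The original query: with a single row it "always" returned the right value.
SELECT Config_Value FROM dbo.App_Config;

-- A year later another developer adds his own row...
INSERT INTO dbo.App_Config VALUES ('OtherApp', 'someone else''s value');

-- ...and now the unfiltered query returns both rows in no guaranteed order,
-- so which one the webpage picks up is anyone's guess. The five-minute fix:
SELECT Config_Value FROM dbo.App_Config WHERE Owner = 'WebPage';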

So, while as a DBA I will often say "it depends", I will always be adamant that tables are unordered by definition and you absolutely need a WHERE clause, or an ORDER BY, if you want to guarantee specific results. Though I suppose it depends: if you don't care about the order or don't need a specific set of data, you can ignore my rant. There are cases where that's valid too.

Thus ends my TED talk.

T-SQL Tuesday

I wasn’t sure what I was going to write about today and then saw it was T-SQL Tuesday, this month’s question from Kenneth Fisher. The topic is “Your First Technical Job”.

I had to actually stop and think about this one for a second. The requirement was to not talk about one's first DBA job, but one's first technical job.

I entered RPI, my alma mater, in the fall of 1985. This means I should have graduated in 1989. However, I graduated in 1990, and that made all the difference in my career. One thing that RPI had then, and still has, is a very robust Co-Op program. These were (and I believe still are) an opportunity to work for a company in your field for a summer and at least one semester. Unlike some similar college programs, RPI's Co-Op office made sure that you were doing actual work in your field, not relegated to, say, filing papers in an office or getting coffee for the full-time employees. You also were paid, just like an employee. Given my financial status as a student, this was a huge help.

Through a series of connections, I found myself working at a company in Troy called Laboratory MicroSystems. It's long gone, but it was a good company. It was founded by a pair of RPI alumni, one of whom had also been an Outing Club member and, for a while, tended to hire Outing Club members. It was our own networking group.

That said, my housemate and best friend had just completed his Co-Op there, so there was an opening. After a quick interview, I was hired and started working a few weeks later.

Let me say that this was a great opportunity for me. I had learned a lot in the classroom (even if my grades didn’t always show it!) But, I had never actually applied such skills in the real world. And to further complicate things, my manager was on vacation my first week. Not only was I thrown into the deep end, I didn’t even have a lifeguard.

My friend, when he was there, had taken the PC version of a package they had, called Thermal Analysis and ported it over to a mini-computer (I believe some sort of HP model). To give you an idea of how long ago this was, all the work was done in Fortran.

My job, ironically enough, was to take the HP version of the software, which had since been updated and improved, and back-port it BACK to the PC. Up until then all my projects in school had been 1, maybe 2 files, so any linking (yeah, kids these days are wondering what I mean there) could be done by hand, and I had never created a make file (go google it, kids). So very quickly in my first week I learned how to actually compile and link large groups of files in a complex, real-world environment.

Within a few weeks I was deep into the code and really starting to understand how it worked and how to apply my classroom skills. It was a great experience. After about 8 months on the job, it was time to go back to the classroom. But I wasn't through yet. After another semester of school, I went back for another Co-Op and worked another semester and summer. By this point my manager had moved on to another company and the rest of the company had moved on to a new product and project (one that ultimately made the company valuable enough to be sold and for the founders to move on). I found myself in the unusual position of being 20 years old and the lead developer on a product, with no teammates other than a 20-something sales manager who spent more time on the phone talking about his supposed sexual exploits than actually making sales. But he's a story for another time.

During this time there are several things that stand out as I think greatly helping me in my career.

The first was adding pop-up help to the Thermal Analysis program. One has to remember, this was back in the days of purely DOS programs, so one normally didn't have overlapping windows and the like. But a few customers wanted some sort of help system added. It took some work, but I finally found a way to do it. Fortran was great for the computational aspects, and the 3rd-party library helped us display plots on the screen. However, Fortran wasn't great for developing a UI. After reading and research, I realized I could do what I wanted with some C and assembler code. This is also when I learned that their call stacks were reversed from Fortran's. So I ended up having Fortran call C code, using the keyword pascal to ensure the stack would read in the right direction (don't ask me why it used that for the keyword), and then the C code called ASM to invoke the DOS interrupts that allowed a pop-up to be displayed, after which the C code populated the pop-up with the help text. Learning how to do this really helped me with my problem-solving skills and taught me that "solutions exist"; it's just a matter of finding them.

Another project was one completely unrelated to Thermal Analysis. I can't recall the name we gave it, but it was software we wrote for GE Plastics that basically involved setting up a material on a tensile test machine (a machine that would pull stuff apart at whatever speed you told it to; in this case, the test took a year to run!). GE had provided the hardware and the hardware-specific language to write this in, and I was off. This had to handle reboots, power outages and the like. Fortunately we were using an external A/D (analog/digital) converter box made by Perkin-Elmer that could store a certain number of data points. This meant we could reboot the PC and re-read the data to see whether it was new or not. The software was the first I'd seen that had any sort of try/catch (though I believe they called it something else). So I was able to learn and develop good error-handling techniques, something I'm still working on and improving to this day.

But ultimately, this job really led me to where I am today, through a convoluted series of steps. The office network was a Novell server with Thinwire Ethernet. For those who don't recall what Thinwire is (or was), the key detail was that rather than every network cable running back to a central switch, the cable ran in a serial line from computer to computer, and any break in it resulted in a network outage. There were many ways this could fail.

Well between my software work, I started to be the go-to guy for diagnosing network issues. And then for issues with the Novell Server itself.

After college, for various reasons (including I suspect because both my parents had worked for themselves) I wasn’t eager to work for another company right away. So I went into consulting and my first client was, you guessed it, Laboratory MicroSystems. I started handling all their IT needs, which continued to grow. When I had started, they barely filled one floor of the building they were in. By now they were spread across 3 floors and still growing.

And that product everyone else had started working on during my Co-Op? Well by now they were porting it to use SQL Server 4.21a, and they realized that they could use a specialist to go to customer sites and install SQL Server before they sent out their specialists to install the actual software.

And that, folks, is how I started out programming in Fortran and ended up as a SQL Server DBA.

Someday I'll tell you how the Streets of Laredo figures into this story, or how the last I heard about the now former sales manager involved him and a radio contest, or even how, after I stopped consulting, I did one last gig for them as a 1-day Oracle consultant, or the time I saved them (after being bought) from moving into a new office with only 2 outlets for 24 people.

But in the meantime, now back to studying for a test for my next change of career to hopefully become a Physicians Assistant!

T-SQL Tuesday #148 – Advice on Running a User Group

Today’s T-SQL Tuesday edition comes from Rie Merrit and she is asking about “Advice on Running a User Group.”

Fortunately she’s only asking for 1-2 ideas, not an entire book (though there’s at least one book out there on the topic, which I’ll admit I’ve skimmed but not read cover to cover).

It Starts at the Door

This is actually an area I’ve not done as well in as I’d like, but I’m going to continue to work on. For your in-person meetings (we remember what those were like, right?) find one of your more outgoing, sociable members, ideally someone who is good with names and details, and position them by the door to greet people. When someone new comes in, this person should make sure they get their name, ask them if they have any particular interests, and then introduce them to others, ideally with similar interests.

It can be very intimidating to walk into a new User Group meeting where you know no one, everyone already there is happily chatting away, and you end up feeling like an outsider.

By assigning someone to the role of greeter, any new person can instantly be made to feel welcome. Besides simply introducing them, the greeter can explain how things work in terms of schedule, where the bathrooms are, where the food is, etc. This keeps newcomers from feeling lost and left out.

On the flip side of this advice, the greeter has to make sure they’re not too enthusiastic either. If the newcomer indicates they’d rather just sit in the corner and listen and leave, that’s fine too. The goal isn’t to force everyone to socialize. The goal is to make it easier for those who wish to.

I can guarantee that if you make people feel welcome, they’re more likely to come back.

It Pays to Have Sponsors

Or more accurately, it's sponsors that make it possible to pay for food and other costs. Several years ago, at a User Group Leader meeting at PASS Summit, I listened as a speaker talked about looking for sponsors you might not normally consider, i.e. going outside of getting sponsorship from technical companies. This has worked really well for me in the past. But before you even go that far, you need to get some data. And since we're DBAs, we should be good with data. I recommend collecting data about your group once a year, with questions such as:

  • How many people receive your weekly or monthly emails? You don't need an exact number, but is it 100, 300, 500, 1000?
  • How many people typically attend your meetings? (and now ask in-person versus on-line if you’re doing hybrid)
  • Where are they coming from?
  • How many years have they been in the industry?
  • Do you have a breakdown by age range?

You’re trying to get a sense of demographics. This will come in handy when you look for sponsors that are non-technical (for technical sponsors you will want different demographics). But with the data from my group, I have approached a number of different sponsors such as banks, insurance agencies and the like. My sales pitch is generally along the lines of:

I can put your name in front of 400 people via email and 20-30 people in person that are in your demographic (generally 40-50 years of age, higher income) that are probably in the market for your services (such as life insurance, investment opportunities, etc).

I've had a lot of luck with this approach. Sometimes I've gotten a check right there; sometimes they've had to go up their chain of command, but now they have data to sell the idea to their boss. And sometimes you find out a prospect is not a good match. This happened to me when I approached a contact at the local, then-new casino. It turns out their target demographic was older, retired women. Apparently they spend a lot of time and money at the casino. In contrast, mid-life professional DBAs don't gamble much!

The other key detail when approaching a sponsor is being clear on what you're selling them. You probably recognize this without really realizing it: at any conference you've been to, you've seen Platinum Sponsors, Gold Sponsors, etc. The more someone is willing to pay, the more mentions they get, the bigger their logo is featured, and so on. This works for user groups too. My advice here is to not overdo the number of sponsorships and to deliver on what you promise. For my group, pre-Covid, I would typically try to have no more than 3-4 sponsors at a time, and perhaps 6 or so total over a year. Some sponsors would sponsor for 3 meetings, some for the entire year. There were discounts for an annual sponsor as opposed to a quarterly sponsor. If you were a quarterly or greater sponsor, besides having your logo in emails and being mentioned from time to time, you were given the opportunity once a quarter or so to give a 5 minute pitch before the group. Some took advantage of that, some didn't. But I have to say those who did, I think, made a better impact when they could introduce themselves, point to the food, and say they were glad to sponsor our group.

I’ll close with one final comment on sponsors: not all need to provide a direct financial contribution. We have a local hotel that has provided us 1-2 free room nights a year. We typically use one to put up a speaker who is coming in from out of town, and the second as part of our annual holiday raffle. We also had the local garbage collection company provide a free year’s service as a prize for our annual raffle. That was surprisingly one of our more popular prizes. In SQL Server you don’t have to worry about garbage collection and for a year neither did one of our DBAs!

In Conclusion

I can't speak for other user groups, but I do know we're probably very close to going back to in-person meetings, so I'll be dusting off the playbook and doing the above, as well as other things, in order to build up our successful in-person attendance again.

I look forward to seeing what other group leaders advise!

SQL Upgrades

Fear

Today’s another T-SQL Tuesday, thanks to Steve Jones. This month he’s asked us to talk about SQL upgrades.

I thought I’d write about a SQL Upgrade that wasn’t.

As a DBA I’ve worked with SQL Server since 4.21a. Yes. I’m that old. And a common experience I’ve had, which I suspect I share with many DBAs, is that every company I’ve worked with, either as an employee, or as a consultant has been reluctant to upgrade.

And often it's understandable. Depending on their licensing situation and finances, an upgrade can be costly, especially for a smaller company. So companies often soldier on with older versions. In fact, just the other day on Twitter, I saw a post from someone asking about tuning queries on SQL 2000. In any event, especially as a consultant, I'm not alone in seeing resistance to upgrades.

But one client really stands out. I started working with them about a decade ago. They were running SQL 2005. Now, if you do the math, you’ll realize that by this time SQL Server 2008 and SQL 2012 had come out.

Technically I was brought in for a separate project, but when they became aware of my skills, they asked me to help tune their current database. There were some easy lifts (such as removing a duplicate index on a particular table that did nothing to help select performance but obviously hurt insert performance). But by far one of their worst-performing procedures was one that was handed an XML string which it then had to parse. I made several recommendations, but realized that as long as they were depending on this stored procedure to parse the XML and were using SQL 2005, this would be a huge bottleneck. There was a pretty common, known issue with the way SQL 2005 parsed XML.
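For context, the work involved was the kind of XML shredding sketched below; this is just an illustration with a hypothetical document shape and element names, not the client's actual procedure:

-- Sketch of XML shredding with nodes()/value(); the XML shape is hypothetical.
DECLARE @EmployeeXml xml = N'
<Employees>
  <Employee><Emp_Num>1</Emp_Num><Last_Name>Smith</Last_Name></Employee>
  <Employee><Emp_Num>2</Emp_Num><Last_Name>Jones</Last_Name></Employee>
</Employees>';

SELECT  e.value('(Emp_Num/text())[1]',   'int')         AS Emp_Num,
        e.value('(Last_Name/text())[1]', 'varchar(50)') AS Last_Name
FROM    @EmployeeXml.nodes('/Employees/Employee') AS t(e);

Harmless at small sizes, but on SQL 2005, with large documents and heavy call volume, this pattern was the bottleneck.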

The simple and obvious fix was to upgrade, ideally to SQL 2012. The answer was a firm "no". It wasn't even the cost; it was fear. It seems they, like many folks who had done the upgrade from SQL 2000 to SQL 2005, had hit several performance issues due to changes in the query optimizer, and were afraid they'd have as bad or worse issues upgrading to SQL 2008 or SQL 2012. They were far more comfortable with the devil they knew than the devil they didn't.

Now, I can fully appreciate this, but it was frustrating. It was more frustrating because their business was fairly seasonal in nature, which meant that they could have done the upgrade soon after their busy season and had 8-9 months to debug performance issues. They would have also gained other benefits from the upgrade.

I'd love to say I finally showed them enough proof of the advantages, showed them how low the risk would be, and showed them how to mitigate such risks. Alas, I didn't. They decided (and to be fair, there were other reasons completely unrelated to their SQL Server concerns) to migrate to a full LAMP stack. SQL Server was out, MySQL was in. Of course they had other issues there, but those weren't my problem.

Upgrading SQL Server Today

I knew then, and still know now, that the upgrade would have been far less of an issue than they feared. But I couldn't, and still can't, completely dismiss their fears. They were far from the only company that had been somewhat burned by the SQL 2000 to 2005 upgrade. I've heard of other companies that resisted upgrades during that era for similar reasons, and I still sometimes hear echoes of such fears. And let's be honest, a database server is often absolutely essential to the core of a company. Unlike front-end code that might be easily rolled back, SQL upgrades are often very hard to roll back. (In reality it's not as bad as some think, but it's often non-trivial.)

I will give Microsoft credit for listening. Since then, I know they've taken a lot of steps to mitigate such fears and have made backward compatibility a far higher priority than it used to be. Other than licensing costs, honestly, at this point, I think companies should be on a regular upgrade cycle. There are almost always performance benefits, and these days the drawbacks are pretty minimal. And honestly, if you're on Azure, you're being upgraded without even knowing it.
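One of the simplest of those mitigations is the database compatibility level: you can move to a newer engine while keeping the older optimizer behavior, then flip the switch once you've tested. A sketch, with a hypothetical database name:

-- Sketch: upgrade the instance but keep the database behaving like the old
-- version until you've validated workloads. The database name is hypothetical.
ALTER DATABASE [SalesDb] SET COMPATIBILITY_LEVEL = 110;  -- SQL 2012 behavior

-- Query Store (SQL 2016+) helps compare plans before and after the switch.
ALTER DATABASE [SalesDb] SET QUERY_STORE = ON;

ALTER DATABASE [SalesDb] SET COMPATIBILITY_LEVEL = 150;  -- SQL 2019 behavior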

My advice: don't fear the upgrade anymore.

Change My Mind… T-SQL Tuesday #146

Andy Yun (t | b) is this month’s host. Our challenge this month is:

 …to think about something you’ve learned, that subsequently changed your opinion/viewpoint/etc. on something.

I’m going to give a twofer for this one, since he’s also allowing non-technical answers.

8K Blocks

"SQL Server writes everything in 8K blocks." I recall learning this probably close to 2 decades ago. And, it makes sense, at a lot of levels. And it was "confirmed" when we reformatted the disks on one of our production servers into 64K blocks so SQL Server could read/write 8 blocks at a time. Performance definitely improved.

But, then I learned from Argenis Fernandez that this isn't necessarily true. SQL Server will write what it wants to write. And if you think about it, that makes sense. If you update one record and it's a single value you're updating, SQL Server isn't going to simply sit there and not write your 1 byte change to disk. And it's not going to make up random 8191 bytes just so it can satisfy this rule of 8K. Yes, SQL Server will try to consolidate disk I/O and be efficient about it, but even then, it may not matter.

Gone are the days where we're writing data to bare metal (some of us are old enough to recall an option in early versions of SQL Server where one could create a database on a "raw" partition to get a performance boost). No, today we're probably writing through multiple layers, more than we realize. For one client, for example, a disk write from SQL Server will pass through an encryption layer, then to the OS, which will pass it through a disk driver layer that will then pass it on to the VM, which will have its own layers. At that point, even if SQL Server were trying to only write 8K blocks, it's quite possible every other layer has its own rules.

Yes, changing our disk formatting from 8K blocks to 64K blocks helped. It helped us. But, your requirements and situation may be different and ultimately you may end up writing more or less than 8K blocks all the time. (and I hope I summed up Argenis’s position accurately.)

Toss the Rope Down

As many know, I’m a caver. I’ve been caving for decades. Early in my caving career I learned vertical caving and back then we still used what was known as a “3-knot” system or “prussiks”. That hardware has improved and changed. But one habit took longer. It was (and unfortunately still is) common to tie one end of the rope to a tree or other rigging point, and drop the rest down the pit. Sure, you ended up with a pile of rope at the bottom, but no one really cared, as long as you didn’t step on it (which is another myth for another time). This helped guarantee that your rope actually reached the bottom. The only thing that sucks more than rappelling down a pit and reaching the knot at the end of the rope 50′ from the bottom of the pit is rappelling down a pit and realizing 50′ from the bottom of the pit that you forgot to put a knot in your rope.

But somewhere along the line, folks started to realize, "hey, that rope at the bottom of the pit is useless. It's FAR more useful if we can leave it at the top of the pit." As the depth of most pits is known, it's actually not that hard to measure out the rope you think you need (plus a bit extra) and then rig the rope so that you have the extra at the top. Why is this important? Some call this "rigging for rescue" (or more accurately, one part of that bigger concept).

Imagine the scenario where you’re ascending the rope and have an equipment failure. You can’t go up and can’t go down. If all the extra rope is below you, it doesn’t do you any good. You can’t do anything with it. But, if that extra 10′ or 20′ (or more) is at the top and you’ve rigged correctly, someone at the top can, without too much effort, safely change the rigging (with you still on the rope) to a 3:1 or if nothing else, a 2:1 haul system. Suddenly that extra rope sitting at the top of the pit becomes useful.

Beginners will still often toss the extra rope to the bottom of the pit, but more experienced cavers will rig it to stay at the top and this may literally save lives.

Conclusion

Stop and think about the practices you follow now that you may have learned long ago and that could be wrong or no longer applicable. And more importantly, do those bad practices interfere with doing something that's better or safer? With SQL Server, a lot has changed and improved over the past few decades, but are you still doing something you were taught 2 decades ago because "well, that's the way things are done"? A lot has changed in 2 decades. Make sure your knowledge is still valid!

What T-SQL Do You Find Difficult?

It's been a while since I've focused exclusively on SQL Server or even T-SQL, and perhaps this would make a great question for someone to ask on a T-SQL Tuesday, but for now I'll ask.

An open question to any of my readers who work with T-SQL: “What command or construct do you find difficult?”

Let's be honest, we all have something about T-SQL that trips us up. I'm hoping the basic SELECT statement doesn't. Though once you start to throw in stuff like an outer join and a correlated subquery, it can get complicated pretty quickly. But even those I can generally get written without too much effort.

The one construct I encounter infrequently, but often enough that I feel I should fully grok it by now, is a DELETE with a join. It's actually not all that complicated, but I often have to look up the syntax to remind myself.
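For the record (and for future me), here's the basic pattern, reusing the hypothetical employee tables from the earlier post:

-- DELETE with a join: remove phone rows that no longer have a matching
-- employee. The tables are the hypothetical ones from the earlier example.
DELETE EP
FROM Employee_Phones AS EP
LEFT JOIN Employee_Table AS E
    ON E.Emp_Num = EP.Emp_Num
WHERE E.Emp_Num IS NULL;

The part that always trips me up is that you delete via the alias (EP) while the FROM/JOIN clause supplies the join logic.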

But the one that gets me every single time: PIVOT. I’ve probably only written a half-dozen pivots in my life and that’s 6 too many in my opinion. I’m not sure what it is about them, but my mind still doesn’t just fully grasp the syntax in an easy way.
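For anyone else in the same boat, here's a minimal sketch of the syntax, again against that hypothetical employee table; the city values are made up too:

-- Minimal PIVOT sketch: a count of employees per city, one column per city.
SELECT [Albany], [Troy], [Schenectady]
FROM (
    SELECT City, Emp_Num
    FROM Employee_Table
) AS src
PIVOT (
    COUNT(Emp_Num) FOR City IN ([Albany], [Troy], [Schenectady])
) AS pvt;

Note that the IN list has to be hard-coded; if you need the columns to be dynamic, you're into dynamic SQL, which is part of why my real-world pivots have never been trivial.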

Thinking about it, I think one reason is because when I’ve had to use them, it has never been a trivial case.

Fortunately, since my primary focus is on the operations side of being a DBA, and most of the code I write is to support that, I don’t come across the need for a pivot very often. I suppose though if I were writing a lot more code, I’d use it more and understand it better.

So, what’s your T-SQL nemesis?

And Fun Was Had By All

I want to give shout-outs to Rob Farley, Peter Shore, Deborah Melkin, and Rob Sewell for making the final Capital Area SQL Server User Group meeting of the year a rousing success. And this being T-SQL Tuesday, I'm going to try to very loosely tie this into the topic this month, "How much do you love meeting in person, where would you like for your next event to take place, and why Costa Rica?", as invited by Xavier Morera.


First, let me get his questions out of the way:

  1. Favorite Conference: The easy answer has been SQL Pass, but honestly, at this point, anywhere I get to see folks in person!
  2. Best Venue: Ignoring Pass at Seattle, I have to say Manchester UK was nice, simply because it was my first overseas SQL Saturday, or perhaps Virginia Beach SQL Saturday, because Monica Rathbun and her group provided a nice charcuterie board!
  3. Best Presenter: Oh, this is a tough one. I’m going to take a pass. But then cheat and answer below. Sort of.
  4. Next event and why it’ll be Costa Rica: I’m suspecting sort of a bias in this question, but to be honest, I’d love to go. I think 2022 will be a bit too busy for me to visit, but perhaps 2023 or 2024. Maybe I can work in some caving then too!

That all said, I want to get back to my shout-outs above and tie that into this T-SQL Tuesday.

As the coordinator for the Capital Area SQL Server User Group, one of my primary roles, in fact perhaps the most important, is finding speakers to present. I've tried over the past few years to bring a good variety of speakers and topics. We haven't really missed a meeting since the pandemic started, but we have been virtual for well over a year now. This has presented both drawbacks and opportunities. The biggest drawback of course is the lack of actual in-person interaction and the feeling of connectedness it brought. On a personal note, it also means not only have I not gotten out of cooking dinner the night of meetings, but often I'm juggling getting something together for dinner and getting the session started (though last night my wonderful wife did take care of dinner for me).

On the flip side, being virtual has allowed me to invite speakers who might not otherwise be willing or able to travel in person to Albany NY and for attendees from across the country to show up. It has also given me the opportunity to experiment a bit more with formats.

Last year, instead of our traditional in-person holiday party format, we did a version of “Bluff the Listener” where I asked various presenters to tell their worst IT/SQL horror stories, but one was lying. It was a success and a lot of fun.

Not wanting to repeat that, this year I decided to ask the above 4 presenters to present lightning rounds. That's not so bad, except I added a twist: they didn't get to choose their topics. They were given them: 10 minutes before they were scheduled to present. (And yes, some may say I stole this idea from Buck Woody; I'd like to say I was inspired.)

I’ll admit I was very nervous about this idea. It seemed a bit gimmicky and it could have been a complete disaster with lesser speakers. Fortunately, all four brought their A-Game.

Rob Farley, presenting from the future, in I believe a public workspace, managed to give one of the best talks on columnstore indices I've seen. Given he had only 10 minutes of prep, I was impressed. His presentation included the use of PowerPoint in sort of a "green screen" mode so he could draw on his screen and we could see what he was drawing.

Peter Shore followed up talking about Tips in Advancing a Career in Data. Again, off-the-cuff with limited prep time, he did very well with this topic. I think in some ways this was almost harder than the more technical topics because you can’t fall back on a demo or graphics.

Deborah Melkin followed, talking about the Best new SQL Server features (2019, 2022, Azure). I had announced previously that the best speaker would be awarded a prize. By I think unanimous declaration, even before Rob Sewell finished out the night with his presentation, the other speakers decided Deborah was the winner. She included some demos in her presentation, which, given the lead time, really impressed folks.

Closing out the evening, Rob Sewell entertained us with a demo of SQL Injection. Not surprisingly, he made use of PowerShell and Notebooks.

As I said, it was an entertaining and educational evening. I purposely set expectations low and made sure folks understood that the entertainment value was as much, if not more important than actual educational value. But I was very pleased with how educational it turned out to be. It was a nice way to end the year and honestly, I think a decent way to get a break from the bad news that seems to have surrounded us lately.

I do have a theory, though, about why the educational part turned out as well as it did. In general I've always enjoyed lightning talks, and I honestly think they're among the hardest type of talk to give. Sometimes people promote them as a good introduction for novice speakers, but I'm not so sure. To give a successful lightning talk, one really has to strip a presentation to the bare essentials and focus on just one or two key concepts. This can be difficult. But done well, I think it really makes those concepts stick.

Now, combine that with topics only being given out 10 minutes in advance, and I think that forces a presenter to focus on key concepts even more. I wouldn't give an inexperienced presenter a random topic, and even with an experienced presenter, I'd give them a chance to decline a topic if they feel it's completely outside their wheelhouse. But otherwise, give them a chance to see what they can do. It might surprise you. Heck, it might surprise them.

So, to go back and answer a question from above: Best Presenter... at least last night, Deborah Melkin, who if nothing else proved her Google-fu was impressive.

And I think if I can find volunteers, I will definitely try to do an in-person version of this at a future SQL Saturday or Data Saturday or other conference.

Thanks to all who participated and joined us. It was a blast. But honestly, next year, I hope to see you all in person at our holiday party!

Changing Technologies – T-SQL Tuesday

Select <columns> from Some_Table where Condition='Some Value'


The above statement is pretty much the basis of what started my current career. Of course it actually goes back further than that. I have a Computer Science Degree from RPI. So I’ve done programming, learned hardware and more. I even took an Intro to Databases course while at RPI. I still recall the professor talking about IBM and something called Structured Query Language. The book had a line that went something like “while not the most popular database technology, its use may grow in the future.” Boy did it.

When I first started working with SQL Server, it was 4.21 and I was at a startup. I had a lot to learn. Back then, a lot of it was learned by experience. Sometimes I made mistakes. But I learned.

When I started at that startup, if one could write basic queries and back up and restore a database, one was a half-way decent DBA. Knowing how to tune indices was a definite bonus, as was knowing things like how to set up log-shipping and replication.

Back then, besides experience, I learned new stuff two ways: SQL Server Magazine and the SQL Connections conference. Work paid for both. It was worth it. But honestly, there wasn't too much to learn. But there also weren't nearly as many resources as there are today.

Fast forward 30+ years and here I’ve written a book, worked for several startups, regularly write about databases and database related topics, and often presented at User Groups, SQL Saturdays and at the now defunct PASS Summit. Today as a consultant I regularly touch the SQL Server Query Engine, SSAS, SSRS, SSIS, use PowerShell, write the occasional C# and VB.Net, sometimes do work on a Linux machine or VM and more. A lot has changed.

Obviously the technology has changed. So how have I responded? By doing what I said above. This may sound like a tautology or even circular reasoning, but it's true. When I would go to a SQL Saturday, I'd often attend 3-5 different sessions. I'd learn something. But then I started presenting. And that forced me to learn. As much as I may like to think I know about a topic, when I go to present on it, I like to ensure I know even more. That forces me to read white papers and other articles, and perhaps attend other sessions.

When I’ve written an article, I’ve often had to do a lot of research for it.

So, strangely, I would say a big part of keeping my skills up to date is not just learning from others, but teaching. Teaching forces me to keep my skills up.

In summation, I've responded by learning from others, but also by forcing myself to teach myself before I taught others. It's a feedback loop: the more technology changes, the more I reach out and learn, and the more I learn, the more outreach I do.

The impetus for this week’s blog was Andy Leonard’s call for a T-SQL Tuesday topic.