Thanks to Deb Melkin for hosting this month’s T-SQL Tuesday and developing the topic. Instead of calling this a rant, perhaps I should call it an “I told you so.” There’s a common refrain among DBAs of “It depends” when asked a question. And that’s generally accurate. But this is a case of me saying “it doesn’t depend, do as I say” and being ignored.
Ironically, when I took my Database class in college, it was long enough ago that the idea of a “Sequel Database” (that alone should tell you how long ago this was) was basically described as an up-and-coming technology that perhaps had a future. Talk about a bold and accurate prediction! That said, one of the things that fascinated me then, and still does, is that SQL (in general, not the product SQL Server) is based on work done by Edgar F. Codd and has a fairly strict mathematical basis. (Which is another reason I rail against those who claim that RDBMSs and SQL will eventually be replaced. That’s like saying algebra will be replaced. There may be other branches of mathematics developed that are far better for their specific domains, but the validity and usability of algebra will never go away.)
In any event, one of the key concepts that Codd developed was that of “a table”. A table has several particular parts to its overall definition. The one critical for this blog is that a table itself has no implicit order. Now, many folks will run a query multiple times and get the same results in the same order every time. But that’s more a factor of how SQL Server happens to handle reads than any guarantee. At my last full-time job, I was easily able to prove to the programmers that a query on the UAT box would return a different order than on Prod because of the number of CPUs and disks. But that’s not what I’m here to talk about.
My “I told you so” moment goes back further, to a table that was about as simple as you can get. It had a single row. Now, I think we can all agree that a single row will always return in the same order, right? I can’t recall exactly why the developer felt that this table was the solution to his problems, but I pushed back. I asked that, at the very least, he put in a WHERE clause. He felt that would impact performance too much and, besides, with one row it would always return his results. I of course asked, “What happens if eventually the table has two rows?” “Oh, well my row will return first anyway.” “No it won’t.” Well, he wouldn’t budge, and I had bigger fish to fry. At the time there really was no reason to expect this table to grow. But I tucked it away in the back of my mind.
Sure enough, about a year later, which was 3 months after the developer left, we started to get really weird results on the webpage that relied on that table. It seems that another developer realized this table was a perfect place to store the data that he needed (I’m assuming it was some sort of config data, but it was honestly so long ago I can’t recall), so he added a row. Now HE was smart enough to add a WHERE clause to his query. But the original “don’t worry about it” query still had no WHERE clause. And sure enough, it was sometimes returning the new row instead of the original. Fortunately this was a 5-minute fix. But I can only imagine how long it would have taken to find the problem if I hadn’t remembered it in the first place.
So, while as a DBA I will often say “it depends,” I will always be adamant in saying that tables are unordered by definition and you absolutely need a WHERE clause or an ORDER BY if you want to guarantee specific results. Though I suppose it does depend: if you don’t care about the order or don’t need a specific set of data, you can ignore my rant. There are cases where that’s valid too.
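To make this concrete, here’s a minimal sketch of the situation; the table and column names are invented, not the ones from that actual system:

-- A tiny, hypothetical config table that starts life with a single row.
CREATE TABLE dbo.AppConfig (ConfigName varchar(50), ConfigValue varchar(100));
INSERT INTO dbo.AppConfig VALUES ('OriginalSetting', 'A');  -- the original lone row
INSERT INTO dbo.AppConfig VALUES ('NewSetting', 'B');       -- the row added a year later

-- The "don't worry about it" query: with no WHERE or ORDER BY,
-- SQL Server is free to return these rows in any order it chooses.
SELECT ConfigValue FROM dbo.AppConfig;

-- Guaranteeing the intended row takes a WHERE clause (or an ORDER BY):
SELECT ConfigValue FROM dbo.AppConfig WHERE ConfigName = 'OriginalSetting';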
I wasn’t sure what I was going to write about today, and then I saw it was T-SQL Tuesday, with this month’s question coming from Kenneth Fisher. The topic is “Your First Technical Job”.
I had to actually stop and think about this one for a second. The requirement was to not talk about one’s first DBA job, but one’s first technical job.
I entered RPI, my alma mater, in the fall of 1985. This means I should have graduated in 1989. However, I graduated in 1990, and that made all the difference in my career. One thing that RPI had then, and still has, is a very robust Co-Op program. These were (and I believe still are) an opportunity to work for a company in your field for a summer and at least one semester. Unlike some college programs like this, RPI’s Co-Op office made sure that you were doing actual work in your field, not relegated to, say, filing papers in an office or getting coffee for the full-time employees. You also were paid, just like an employee. Given my financial status as a student, this was a huge help.
Through a series of connections, I found myself working at a company in Troy called Laboratory MicroSystems. It’s long gone, but it was a good company. It was founded by a pair of RPI alumni, one of whom had also been an Outing Club member and who, for a while, tended to hire Outing Club members. It was our own networking group.
That said, my housemate and best friend had just completed his Co-Op there, so there was an opening. After a quick interview, I was hired and started working a few weeks later.
Let me say that this was a great opportunity for me. I had learned a lot in the classroom (even if my grades didn’t always show it!). But I had never actually applied such skills in the real world. And to further complicate things, my manager was on vacation my first week. Not only was I thrown into the deep end, I didn’t even have a lifeguard.
My friend, when he was there, had taken the PC version of a package they had called Thermal Analysis and ported it over to a minicomputer (I believe some sort of HP model). To give you an idea of how long ago this was, all the work was done in Fortran.
My job, ironically enough, was to take the HP version of the software, which had since been updated and improved, and back-port it to the PC. Up until then, all my projects in school had been one, maybe two, files. So any linking (yes, kids these days are wondering what I mean there) could be done by hand, and I had never created a makefile (go Google it, kids). So very quickly in my first week I learned how to actually compile and link large groups of files in a complex, real-world environment.
Within a few weeks I was deep into the code and really starting to understand how it worked and how to apply my classroom skills. It was a great experience. After about 8 months on the job, it was time to go back to the classroom. But I wasn’t through yet. After another semester of school, I went back for another Co-Op and worked another semester and summer. By this point my manager had moved on to another company and the rest of the company had moved on to a new product and project (one that ultimately made the company valuable enough to be sold and for the founders to move on). I found myself in the unusual position of being 20 years old and the lead developer on a product, with no teammates other than a 20-something sales manager who spent more time on the phone talking about his supposed sexual exploits than actually making sales. But he’s a story for another time.
Several things from this time stand out as having greatly helped me in my career.
The first was adding pop-up help to the Thermal Analysis program. One has to remember, this was back in the days of purely DOS programs, so one normally didn’t have overlapping windows and the like. But a few customers had wanted some sort of help system added. It took some work, but I finally found a way to do it. Fortran was great for the computational aspects, and the 3rd-party library helped us display plots on the screen. However, Fortran wasn’t great for developing a UI. After reading and research, I realized I could do what I wanted with some C and assembler code. This is also when I learned that their call stacks were reversed from Fortran’s. So I ended up having Fortran call C code, using the pascal keyword to ensure the stack would read in the right direction (don’t ask me why that was the keyword), and then the C code called ASM to invoke the DOS interrupts that allowed a pop-up to be displayed, after which the C code populated the pop-up with the help text. Learning how to do this really helped my problem-solving skills and taught me that “solutions exist”; it’s just a matter of finding them.
Another project was one completely unrelated to Thermal Analysis. I can’t recall the name we gave the project, but it was software we wrote for GE Plastics that basically involved setting up a material on a tensile test machine (a machine that would pull stuff apart at whatever speed you told it to; in this case, the test took a year to run!). GE had provided the hardware and the hardware-specific language to write this in, and I was off. This had to handle reboots, power outages and the like. Fortunately we were using an external A/D (analog/digital) converter box made by Perkin-Elmer that could store a certain number of data points. This meant we could reboot the PC and re-read the data to see if it was new or not. The software was the first I’d seen that had any sort of try/catch (though I believe they called it something else). So I was able to learn and develop good error-handling techniques. Something I’m still working on and improving to this day.
But, ultimately, this job really led me to where I am today through a convoluted series of steps. The office network was a Novell server with Thinwire Ethernet. For those who don’t recall what Thinwire is (or was), the key detail was that rather than every network cable running back to a central switch, the cable ran in a serial line from computer to computer, and any break in it resulted in a network outage. There were many ways this could fail.
Well, between my software work, I started to become the go-to guy for diagnosing network issues, and then for issues with the Novell server itself.
After college, for various reasons (including I suspect because both my parents had worked for themselves) I wasn’t eager to work for another company right away. So I went into consulting and my first client was, you guessed it, Laboratory MicroSystems. I started handling all their IT needs, which continued to grow. When I had started, they barely filled one floor of the building they were in. By now they were spread across 3 floors and still growing.
And that product everyone else had started working on during my Co-Op? Well by now they were porting it to use SQL Server 4.21a, and they realized that they could use a specialist to go to customer sites and install SQL Server before they sent out their specialists to install the actual software.
And that, folks, is how I started out programming in Fortran and ended up as a SQL Server DBA.
Someday I’ll tell you how the Streets of Laredo figures into this story, or how the last thing I heard about the now-former sales manager involved him and a radio contest, or how, after I stopped consulting, I did one last gig for them as a 1-day Oracle consultant, or the time I saved them (after they had been bought) from moving into a new office with only 2 outlets for 24 people.
But in the meantime, it’s back to studying for a test for my next career change, to hopefully become a Physician’s Assistant!
“Now, in the ’60s, there were only two other cars made in America that had Positraction, and independent rear suspension, and enough power to make these marks.” – Mona Lisa Vito
In my first stint as a computer consultant, I was visiting a potential client and noticed a magazine called, I believe, The American Bee Journal. I was a little surprised at first that such a magazine even existed, but then it dawned on me that it made perfect sense and that probably every specialty had a trade magazine or the like of its own.
In the world of SQL Server we’ll talk about query plans and clustered indexes vs non-clustered indexes and use other words specific to our trade.
In caving we’ll talk about speleothems and karst and other words that the average person might not recognize.
And mechanics can talk about Positraction and skid marks.
Knowing the language of a particular specialty can be important when it comes to understanding it.
I’ve been reflecting upon this lately as I continue to study so I can apply to a Physician’s Assistant program. I’m about to finish up my first semester of classes, and one of the classes I’ve been taking and really enjoying is Anatomy and Physiology I. I still have a second class to take, but I’ve been loving things so far. It is, to me, absolutely fascinating to learn how the body works. For example, learning the physiology of muscle contractions is, in three words, absolutely fucking cool. And any caver who has vertical experience would realize it’s not much different from how we ascend a rope.
Part of what I’m learning, too, is the language. In fact, one of the first lectures and labs was simply on the language used to describe where things are. To someone not familiar with the language, it may sound like gibberish to say that the fibula is lateral to the tibia and the lateral malleolus is at its distal end, but such a description can help someone who knows the language orient themselves as to its location. Similarly, if someone says they have a sore sternocleidomastoid muscle, I’d know where it is, based simply on the name. (I’d also honestly wonder why they didn’t simply say they had a sore muscle in their neck.) In that case, the name of the muscle basically describes its origin (the sternum and the clavicle, or “cleido”) and insertion (the mastoid process). (If you’re curious, if you turn your head to the left, you can see the right sternocleidomastoid sort of bulging from the right side of your neck.)
Honestly, at times I feel like I’m at a Broadway play and the orchestra is playing the overture and the curtain is slowly being drawn back to reveal what’s behind it. I’m excited by what I’ve learned and seen so far and excited to see what more I’ll see as the curtain continues to be drawn back!
Or, another week in the life of a DBA and other duties as assigned
So a few weeks ago a developer at a client of mine reported that some recently deployed code wasn’t working. I tried it and of course it worked for me. That isn’t unusual, since I have sysadmin rights on that box. I tried EXECUTE AS using her ID and it failed. That’s not uncommon; sometimes in production, permissions don’t get promoted the way they should. So I checked her permissions and the permissions of the users actively trying to call the stored procedure. Strangely, they all had the proper permissions, at least as far as I could tell. Then I had that lightbulb insight and realized I had been misinterpreting the error message EXECUTE AS was giving me: “The server principal "DOMAIN\UserXYZ" is not able to access the database "ImportantDB" under the current security context.”
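For anyone who hasn’t used the impersonation trick, it looks roughly like this; the login is the one from the error message, but the procedure name is made up for illustration:

-- Impersonate the developer's login to see exactly what she sees
-- (requires the IMPERSONATE permission, or sysadmin).
EXECUTE AS LOGIN = 'DOMAIN\UserXYZ';
EXEC ImportantDB.dbo.usp_NewlyDeployedProc;  -- hypothetical procedure name
REVERT;  -- switch back to my own security context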
I realized I had been troubleshooting the wrong problem. It wasn’t that the person didn’t have permission to execute; it was that no one other than sysadmins had the right to CONNECT to the database. A simple:
GRANT CONNECT TO [DOMAIN\UserXYZ];
And all was good to go! Or was it? It took us some digging to figure out why this had happened on a production database. Apparently, when the database was designed in Dev, the developers had the rights they needed to connect, so they never thought about who else might need to connect, and they created the database with very limited CONNECT grants. When the database was moved to production, in this case with a backup and restore, the same lack of rights moved upstream with it.
Now, in the opposite direction, a vendor wanted to send a file to my client in their UAT environment. I fired up the PowerShell script to retrieve the file and decrypt it. The decryption failed. It took me a while to figure out the problem. The client has a rule that every 2 years we must upgrade our RSA public key with them. Ironically, I had just completed the most recent update last month and moved it into production. Apparently though, their rule doesn’t apply to their UAT environment, which came as sort of a shock to me, since they’re always so insistent on us following their security requirements. Of course, beyond the irony of them not following their own rules, the file they had asked us to download wasn’t there.
The PM contacted them and they assured him the file would be there on Friday. Well here it is Tuesday and we’ll see if this time the file is there.
In any case, this time, the problem wasn’t promoting a change from UAT to PROD, but the client’s failure to move a change from PROD to UAT.
Such is life.
So sometimes I’m up the creek without the paddle and sometimes I’m down the creek…
I think I speak for many DBAs when I say we’ve all had a programmer tell us, “The database is too slow, we need faster hardware.” Then, as the DBA, you go in, look at their query, and realize they’re running a table scan across a huge 100+ GB table without any indexes. You analyze the data and overall usage further, decide to put in an index, and suddenly their query runs in sub-second times instead of minutes.
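The fix in that sort of case is often as unglamorous as a single nonclustered index that gives the optimizer something to seek on. Here’s a hedged sketch, with made-up table and column names:

-- Hypothetical names; the point is giving the optimizer a seek target
-- instead of forcing a scan of the whole 100+ GB table.
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
    ON dbo.Orders (CustomerID, OrderDate)
    INCLUDE (TotalDue);  -- covers the SELECT list so no key lookups are needed

-- The query that used to scan can now seek:
SELECT OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;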
Or, in a case I was working on with a client years ago, they had a stored procedure that would sometimes suddenly get slow. Their solution was to occasionally call it WITH RECOMPILE and things would get better. It worked, but it was hardly ideal. After analyzing the stored procedure and its usage, I realized it was suffering from a form of parameter sniffing, and with a bit of work I was able to recommend a rewrite that solved their problem in a better way.
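I won’t pretend this was the exact rewrite, but one common pattern for taming parameter sniffing looks something like this (all names invented):

-- Recompile just the problematic statement on every execution, so the plan
-- is built for the actual parameter value instead of a previously sniffed one.
CREATE PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SET NOCOUNT ON;
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID
    OPTION (RECOMPILE);  -- statement-level, cheaper than WITH RECOMPILE on the whole procedure
END;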
What is critical here is understanding exactly HOW the data is stored, how the query engine works, and how we can take advantage of that to improve things.
And honestly, for me, I’ve always loved understanding “how things work.” I’ve marveled at seeing Bob Ward open up the debugger, live on stage, and show how a particular piece of code in the SQL engine works.
And now, I’m starting to undergo the same journey in the human body.
Let me caveat things by saying I am only halfway through my Anatomy & Physiology I class (and the same with my Bio I class), and I greatly recognize how much more I have yet to learn (and that’s only out of what we know; we’re still learning more!).
However, I continue to be blown away with “how things work.” Mother Nature has had millions and millions of years to make things work. What strikes me as interesting at times is how both conservative and ingenious things can be. We all start from a single cell that divides and forms a sphere of cells. This soon forms a neural tube and then starts to specialize further. Somehow from that single cell we end up with our feet at one end and our skull at the other, completely different, but from the same source.
But even then, when you dig deeper, you realize that at some level hands are really feet, but with an opposable joint and far more flexibility and dexterity. The internal structures are very similar. Then you move up to the arms and legs and realize even the limbs start out fairly similar: the upper part of each limb has a single main bone and the lower part has two bones. Of course the joints start to differ; a knee joint is VERY different from an elbow joint. Nature is conservative.
This week we started to study skeletal muscles and how they work. This builds on earlier lessons in both the parts of a cell and histology. A generic animal cell has sort of a skeleton made up of fibers the cell produces. These fibers can come in one of several forms. In most cases they’re simply there to give a certain structure to the cell, or perhaps via the use of other proteins, connect to other cells. They often may look haphazard in design and function, but they get the job done.
Until you look at skeletal muscle. Then you see two of the fibers are laid out a bit differently. One, the thin “actin” fibers, are laid out radially around larger fibers, the myosin. You may see these fibers in other cells, but in muscle cells their layout is different. And that makes all the difference. The myosin fibers have “heads” on them that can, in the right circumstances, both attach to the actin fiber and essentially pull on it by flipping the head from a “forward” to a “backwards” position. The head can then release, flop back to its original position, grab the actin again, and again flip back to the tail-facing position. I’ve left out a lot of details, but if you think of yourself using both hands to pull on a rope, you get the same basic idea. Or, for the cavers among my readers, it’s like a pair of handled ascenders; it’s basically the Texas system!
To me, this is really an ingenious re-use of existing structures inside the cell. The more I learn about the human body (or biology in general) the more I’m amazed.
I’m loving “seeing under the covers” and this is part of it!
I wrote last year about how my job title seems to be DBA and other Duties as assigned. A little incident yesterday got me thinking about that again. This time though, it got me thinking more about the DBA side of that title.
I’ve mentioned before that I’ve written a number of ETLs for various clients. Some are fairly complex and some are fairly simple PowerShell scripts. In most cases, at the end of the script I send an email with a success or failure status and some additional information if appropriate.
Over the last two weeks I had noticed that the email for one particular ETL was coming in later and later. I finally found some time to investigate. I looked at the destination tables and noticed that some of them had extremely large numbers of rows. Now, my first inclination was that the source data had increased (which would be a good thing; it would mean the client was basically selling more widgets). But the increase seemed too dramatic. My next thought was that perhaps the export itself was simply giving me more data.
So I decided to look more closely at the data, and I noticed something interesting. Picking one of the tables at random, I simply ran:
select * from RandomTable order by GoodIndex
SSMS returned approximately 24,000 rows. But something stood out. There appeared to be duplicate rows. Lots of them. This raised a lot of suspicions.
I then ran the query on my UAT box, which in this case actually loads a copy of the production data. I got back about 1,600 rows. VERY curious.
Back on production, I then ran:
select distinct * from RandomTable order by GoodIndex
This time I got back the same number as UAT. Extremely curious.
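If you’d rather confirm that sort of suspicion than eyeball it, a quick grouping query makes the duplication obvious (this assumes GoodIndex identifies what should be a unique row):

-- Count how many copies of each supposedly unique value exist;
-- anything over 1 is a duplicate from repeated loads.
SELECT GoodIndex, COUNT(*) AS Copies
FROM RandomTable
GROUP BY GoodIndex
HAVING COUNT(*) > 1
ORDER BY Copies DESC;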
Then it dawned on me: about 2 weeks ago, we had made a change to where the script looked for the source data, a number of CSV files. UAT had not changed, but Prod had. The reason for this change was to get the ETL PowerShell script to run on the proper production server (there were security issues reading from the original CSV file location).
So my first thought was that the team that had updated their export had somehow left out the command to delete the old file and was simply appending. I was about to write a pointed email when I stopped myself.
I looked at my code and realized that when I had set up the script on the new production box, I had properly handled all the permissions except one: the step that truncated the destination table. Sure enough, my code was no longer truncating the table before I inserted new information. Fortunately, I realized my mistake before sending that pointed email.
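For what it’s worth, the overlooked piece was as small as a single grant. The gotcha is that TRUNCATE TABLE requires at least ALTER permission on the table, which is exactly the kind of right that’s easy to miss when moving a script to a new box (the account name below is a placeholder):

-- The load step that silently stopped working:
TRUNCATE TABLE RandomTable;  -- clear the prior load before inserting the new extract

-- TRUNCATE TABLE needs at least ALTER on the table, so the ETL's
-- service account needs something along the lines of:
GRANT ALTER ON RandomTable TO [EtlServiceAccount];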
So was this a DBA issue or an “other duties as assigned” issue? I don’t know, but I’d say as a DBA I should not have created the problem in the first place. Thankfully, due to proper logging and emails, I was able to catch it and solve it fairly quickly.
Moral of the story: Pay attention to details. Notice when things start to drift. They can be the sign of a larger issue.
Today’s T-SQL Tuesday edition comes from Rie Merritt, and she is asking about “Advice on Running a User Group.”
Fortunately she’s only asking for 1-2 ideas, not an entire book (though there’s at least one book out there on the topic, which I’ll admit I’ve skimmed but not read cover to cover).
It Starts at the Door
This is actually an area I’ve not done as well in as I’d like, but one I’m going to continue to work on. For your in-person meetings (we remember what those were like, right?), find one of your more outgoing, sociable members, ideally someone who is good with names and details, and position them by the door to greet people. When someone new comes in, this person should make sure they get their name, ask them if they have any particular interests, and then introduce them to others, ideally ones with similar interests.
It can be very intimidating to walk into a new User Group meeting where you know no one, everyone already there is happily chatting away, and you end up feeling like an outsider.
By assigning someone to the role of greeter, any new person can instantly be made to feel welcome. Besides simply introducing them, the greeter can explain how things work in terms of the schedule, where the bathrooms are, where the food is, etc. This keeps newcomers from feeling lost and left out.
On the flip side of this advice, the greeter has to make sure they’re not too enthusiastic either. If the newcomer indicates they’d rather just sit in the corner and listen and leave, that’s fine too. The goal isn’t to force everyone to socialize. The goal is to make it easier for those who wish to.
I can guarantee that if you make people feel welcome, they’re more likely to come back.
It Pays to Have Sponsors
Or, more accurately, it’s sponsors that make it possible to pay for food and other costs. Several years ago, at a User Group Leader meeting at PASS Summit, I listened as a speaker talked about looking for sponsors you might not normally consider, i.e., going outside of getting sponsorship from technical companies. This has worked really well for me in the past. But before you even go that far, you need to get some data. And since we’re DBAs, we should be good with data. I recommend collecting data about your group once a year, with some questions such as:
How many people receive your weekly or monthly emails? You don’t need an exact number, but is it 100, 300, 500, 1,000?
How many people typically attend your meetings? (and now ask in-person versus on-line if you’re doing hybrid)
Where are they coming from?
How many years have they been in the industry?
Do you have a breakdown by age range?
You’re trying to get a sense of demographics. This will come in handy when you look for sponsors that are non-technical (for technical sponsors you will want different demographics). But with the data from my group, I have approached a number of different sponsors such as banks, insurance agencies and the like. My sales pitch is generally along the lines of:
I can put your name in front of 400 people via email and 20-30 people in person who are in your demographic (generally 40-50 years of age, higher income) and who are probably in the market for your services (such as life insurance, investment opportunities, etc.).
I’ve had a lot of luck with this approach. Sometimes I’ve gotten a check right there; sometimes they’ve had to go up their chain of command, but now they have data to sell the idea to their boss. And sometimes you find out a prospect is not a good match. This happened to me when I approached a contact at the local, then-new, casino. Turns out their target demographic was older, retired women. Apparently they spend a lot of time and money at the casino. In contrast, mid-life professional DBAs don’t gamble much!
The other key detail when approaching a sponsor is being clear on what you’re selling them. You probably recognize this without really realizing it. At any conference you’ve been to, you’ve seen Platinum Sponsors, Gold Sponsors, etc. The more someone is willing to pay, the more mention they get, the bigger their logo may be featured, etc. This works for user groups too. My advice here is to not overdo the number of sponsorships and to deliver on what you promise. For my group, pre-Covid, I would typically try to have no more than 3-4 sponsors at a time, and in total over a year, perhaps 6 or so. Some sponsors would sponsor for 3 meetings, some for the entire year. There were discounts for an annual sponsor as opposed to a quarterly sponsor. If you were a quarterly or greater sponsor, besides having your logo in emails and being mentioned from time to time, you were given the opportunity once a quarter or so to give a 5-minute pitch before the group. Some took advantage of that, some didn’t. But I have to say, those who did made, I think, a better impact, since they could introduce themselves, point to the food, and say they were glad to sponsor our group.
I’ll close with one final comment on sponsors: not all need to provide a direct financial contribution. We have a local hotel that has provided us 1-2 free room nights a year. We typically use one to put up a speaker who is coming in from out of town, and the second as part of our annual holiday raffle. We also had the local garbage collection company provide a free year’s service as a prize for our annual raffle. That was surprisingly one of our more popular prizes. In SQL Server you don’t have to worry about garbage collection and for a year neither did one of our DBAs!
In Conclusion
I can’t speak for other user groups, but I do know we’re probably very close to going back to in-person meetings, so I’ll be dusting off the playbook and doing the above, as well as other things, in order to build our in-person attendance back up.
I look forward to seeing what other group leaders advise!
As I’ve previously talked about, I’m enjoying taking classes as I prepare to apply to Physician’s Assistant School. I’m a month in and still loving it.
As a toddler I apparently had a habit of turning light switches on and off. I have a vague memory of going with my parents to visit a friend’s apartment (which means it was probably in New Haven) and flipping a switch without knowing what it did. Whether it turned on a light in a closet or in the room at the end of the hall wasn’t immediately obvious to my 2-3 year old mind, but I was committed to flipping it until I found out.
I’ve always had a curiosity about how things work. For some, simply knowing “it works” or “this is how I do something” is good enough. And to be honest, I often apply that in my own life. But I have a strong bias that understanding the fundamentals goes a long way toward improving one’s ability to do one’s job. In the world of SQL Server, for example, I know many programmers who know enough to write a SELECT statement but have no clue how that’s executed under the covers in SQL Server. And for them, that’s fine. But as a DBA I’ve always wanted to know more. It’s a reason why I’ve read books such as The Database Relational Model by C.J. Date and others. I want to understand better how things work.
And so, I’m loving my current classes, especially Anatomy and Physiology I. We’re still in the introductory phase, but starting to dive deeper. In yesterday afternoon’s class, for example, we started to dive into things like carrier-mediated transport, chemically gated ion channels, and more.
Did you know that the inner wall of the plasma membrane of a cell is typically at a -70mV potential compared to the outside, and that this is due mainly to the cell’s ability, nay, requirement, to transport Na+ ions out to the extracellular medium? Well, now you do.
“But why would one care?” you might ask. Well, from this we get to the point where, in certain cells, by properly manipulating this potential we can cause cells to contract and then un-contract (I hesitate to use the word expand here). And by doing that, when we have large groups of specialized cells, we have muscle fibers, and with muscle fibers we can build muscles, such as the heart.
And this all starts with a miniscule voltage difference between the inside and outside of a cell.
Honestly, the more I learn about just the basics of how a typical cell works and the amount of chemical activity in it and across its plasma membrane, the more I sit back and just say “Wow, that’s just so incredibly cool.”
So yeah, I’m over a month in now and still loving it.
Today’s another T-SQL Tuesday, thanks to Steve Jones. This month he’s asked us to talk about SQL upgrades.
I thought I’d write about a SQL Upgrade that wasn’t.
As a DBA I’ve worked with SQL Server since 4.21a. Yes, I’m that old. And a common experience I’ve had, which I suspect I share with many DBAs, is that every company I’ve worked with, either as an employee or as a consultant, has been reluctant to upgrade.
And often it’s understandable. Depending on their licensing situation and finances, an upgrade can be costly, especially for a smaller company. So companies often soldier on with older versions. In fact, just the other day on Twitter, I saw a post from someone asking about tuning queries on SQL 2000. In any event, especially as a consultant, I’m not alone in seeing resistance to upgrades.
But one client really stands out. I started working with them about a decade ago. They were running SQL 2005. Now, if you do the math, you’ll realize that by this time SQL Server 2008 and SQL 2012 had come out.
Technically I was brought in for a separate project, but when they became aware of my skills, they asked me to help tune their current database. There were some easy lifts (such as removing a duplicate index on a particular table that did nothing to help select performance, but obviously hurt insert performance). But by far one of their worst performing procedures was one that was handed an XML string which it then had to parse. I made several recommendations but realized that as long as they were depending on this stored procedure to parse the XML and were using SQL 2005, this would be a huge bottleneck. There was a pretty common and known issue with the way that SQL 2005 parsed XML.
The simple and obvious solution was to upgrade, ideally to SQL 2012. The answer was a firm “no.” It wasn’t even the cost; it was fear. It seems they, like many folks who had done the upgrade from SQL 2000 to SQL 2005, had hit several performance issues due to changes in the query optimizer and were afraid they’d have as bad or worse issues upgrading to SQL 2008 or SQL 2012. They were far more comfortable with the devil they knew than the devil they didn’t.
Now, I can fully appreciate this, but it was frustrating. It was more frustrating because their business was fairly seasonal in nature, which meant that they could have done the upgrade soon after their busy season and had 8-9 months to debug performance issues. They would have also gained other benefits from the upgrade.
I’d love to say I finally showed them enough proof of the advantages, how low the risk would be, and how to mitigate such risks. Alas, I didn’t. They decided (and to be fair there were other reasons completely unrelated to their SQL Server concerns) to migrate to a full LAMP stack. SQL Server was out, MySQL was in. Of course they had other issues there, but those weren’t my problem.
Upgrading SQL Server Today
I knew then, and still know now, that the upgrade would have been far less of an issue than they feared. But I couldn’t, and still can’t, completely dismiss their fears. They were far from the only company that had been somewhat burned by the SQL 2000 to 2005 upgrades. I’ve heard of other companies that resisted upgrades during that era for similar reasons, and I still sometimes hear echoes of such fears. And let’s be honest, often a database server is absolutely essential to the core of a company. Unlike front-end code that might be easily rolled back, SQL upgrades are often very hard to roll back. (In reality it’s not as bad as some think, but it’s often non-trivial too.)
I will give Microsoft credit for listening. Since then, I know they’ve taken a lot of steps to mitigate such fears and have made backwards compatibility a far higher priority than it used to be. Other than licensing costs, honestly, at this point I think companies should be on a regular upgrade cycle. There are almost always performance benefits, and these days the drawbacks are pretty minimal. And honestly, if you’re on Azure, you’re being upgraded without even knowing it.
It’s been a while since I’ve focused exclusively on SQL Server or even T-SQL, and perhaps this would make a great question for someone to ask on a T-SQL Tuesday, but for now I’ll ask it here.
An open question to any of my readers who work with T-SQL: “What command or construct do you find difficult?”
Let’s be honest, we all have something about T-SQL that trips us up. I’m hoping the basic SELECT statement doesn’t. Though start to throw in stuff like an outer join and a correlated subquery, and it can get complicated pretty quickly. But even those I can generally get written without too much effort.
The one construct I encounter infrequently, but often enough that I feel I should fully grok it by now, is a DELETE with a join. It’s actually not all that complicated, but I often have to look up the syntax to remind myself.
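Since I always end up looking it up anyway, here’s the reminder sketch, with invented table names. The trick is that the alias after DELETE says what to remove, while the FROM/JOIN says how to find it:

-- Delete the orders that have a matching row in a staging table of cancellations.
DELETE o
FROM Orders AS o
INNER JOIN CancelledOrders AS c
    ON c.OrderID = o.OrderID;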
But the one that gets me every single time: PIVOT. I’ve probably only written a half-dozen pivots in my life and that’s 6 too many in my opinion. I’m not sure what it is about them, but my mind still doesn’t just fully grasp the syntax in an easy way.
Thinking about it, one reason may be that when I’ve had to use them, it has never been a trivial case.
Fortunately, since my primary focus is on the operations side of being a DBA, and most of the code I write is to support that, I don’t come across the need for a pivot very often. I suppose though if I were writing a lot more code, I’d use it more and understand it better.
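For my own future reference as much as anyone else’s, here’s about the simplest PIVOT I can manage; the table, columns, and quarter values are all invented:

-- Turn one row per (Product, SalesQuarter, Amount) into one row per Product
-- with a column per quarter. Assumes SalesQuarter holds the literal values 'Q1'-'Q4'.
SELECT Product, [Q1], [Q2], [Q3], [Q4]
FROM (
    SELECT Product, SalesQuarter, Amount
    FROM Sales
) AS src
PIVOT (
    SUM(Amount) FOR SalesQuarter IN ([Q1], [Q2], [Q3], [Q4])
) AS p;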