Moving the Needle – Hard

One of the things I enjoy is problem solving, or “debugging”.  I don’t necessarily mean debugging code, though I’ve done plenty of that.  One particular class of problems I like solving is when something isn’t working “right”.  I’m currently involved in one such issue.

Just before the holidays, the lead developer at one of my clients put me in touch with a team in another division to help them solve some performance issues they were having with their SQL Server. This is the sort of issue I generally like to sink my teeth into.

I started poking around and asking questions. I was a bit crushed when, in the initial review, they listed all the things they had tried and I had to nod my head sagely (which, being a remote worker, went unnoticed by them) because they had tried all the basic things. They had, fortunately for them, ruled out a lot of the easy fixes.

So now it came down to some digging. I won’t go into too many details, but will cover some of the things we uncovered and tried. For one thing, they have 44 SQL jobs that run every 20 seconds and basically poll a database to see if there’s any work to be done. So, every 20 seconds, 44 SQL jobs would fire up, do a quick select, and then go back to sleep.  On their new server, they were taking on average 6 seconds apiece.  In addition, the CPU would spike to 100% for about 5-6 seconds and then drop back down. We were also seeing a lot of wait states of the MSQL_XP variety, accounting for about half the time the system spends waiting and averaging about 61.1 ms each time. (Thanks to Brent Ozar’s script here!)
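If you want a quick look at those waits yourself, here’s a rough sketch (this is not Brent Ozar’s script, which samples over a window and is far more useful) that just reads the raw counters from sys.dm_os_wait_stats:

```sql
-- Raw MSQL_XP (extended stored procedure) wait counters since the last
-- restart or stats clear. A sampled view over an interval is what you
-- really want; this just shows where the numbers live.
SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       wait_time_ms * 1.0 / NULLIF(waiting_tasks_count, 0) AS avg_wait_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = N'MSQL_XP';
```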

We tried three things: two helped, one didn’t.

First, I asked them to spread the jobs out. So now, basically 2-3 jobs are started every second. This means that over a 20-second period all 44 jobs are run, but not all at once.  This had an immediate impact: the jobs were now taking about 2-3 seconds. A small victory.
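For the curious, that staggering can be scripted against the Agent schedules. Here’s a rough sketch (the schedule names are made up, and in practice you’d loop over all 44 jobs): each schedule keeps its every-20-seconds frequency but gets a different start-time offset.

```sql
-- Sketch: keep the 20-second frequency but offset the start times,
-- so only 2-3 jobs fire in any given second. Schedule names are hypothetical.
EXEC msdb.dbo.sp_update_schedule
    @name = N'PollJob01_Schedule',
    @active_start_time = 000000;   -- fires at 0, 20, 40 seconds past the minute

EXEC msdb.dbo.sp_update_schedule
    @name = N'PollJob03_Schedule',
    @active_start_time = 000001;   -- fires at 1, 21, 41 seconds past the minute
```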

Secondly, we changed the MAXDOP setting from 0 to 4.  This appeared to have no impact on the jobs. In retrospect, that makes a lot of sense. Each job is a separate task and basically single-threaded, so SQL Agent won’t care about the MAXDOP.

For those who aren’t familiar with SQL Server, MAXDOP is short for “Maximum Degree of Parallelism”. This controls how much SQL Server will try to spread out a task among its CPUs. Say, for example, you had 100 tests to grade and sort into alphabetical order, and you had 1 person to grade them. That one person would have to do all the work. You might decide that having 100 people is 100 times faster, since every person can grade a test at the same time. But then you have to hand out the 100 tests, collect them, and re-sort them back into alphabetical order, and this takes longer than you think.  So by playing around, you realize it’s actually faster to have only 10 people grade and sort them.  In other words, sometimes the effort of spreading out the work takes longer than the time saved by spreading it out.
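For reference, the server-wide setting is changed with sp_configure; it’s an advanced option, so it has to be exposed first. Something along these lines (this is the standard mechanism, not necessarily the exact script we ran):

```sql
-- Change the server-wide MAXDOP from 0 (use all CPUs) to 4.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;
```

Individual queries can also override this with an OPTION (MAXDOP n) hint, but in this case we wanted the server-wide behavior to change.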

One thing that didn’t change, though, was the CPU spike. But since the poll jobs were now twice as fast, we were happy with that improvement.

However, the real goal of the poll jobs was to wake up ETL jobs to handle large amounts of data. These were running about half as fast as they’d have liked or expected.

Here, MAXDOP does seem to have changed things.  In most cases, the ETL jobs are running close to twice as fast.

But here’s the funny thing: I didn’t really care. Yes, that was our goal, but I’d have been content if they had run twice as slow. Why? Because at the point we changed the MAXDOP setting, my goal wasn’t to improve performance, it was simply to move the needle, hard.  What I meant by that was, by changing the MAXDOP from 0 (use all 32 CPUs) to 4, I was fairly confident, for a variety of reasons, that I’d impact performance.  And I did in fact expect performance to improve.  But there were really 3 possible outcomes:

  1. It improved. Great, we know we’re on the right track, let’s tweak it some more.
  2. It got worse. Great, this is probably NOT the solution, but let’s try it the other way: instead of 4 CPUs, try say 16 or an even larger value. At least we’d know that MAXDOP is having an impact.
  3. Nothing changed. In this case, we can pretty much rule out parallelization as a factor at all.

In other words, by forcing SQL Server to use only 4 CPUs instead of all 32, I expected a change. If I didn’t see a change, one way or the other, I could mostly rule out parallelization.

Finally, once we saw that a MAXDOP of 4 helped, we started to play with the cost threshold for parallelism. In this case we ended up with option 3 above: we tried a fairly small value (5) and a fairly large value (100) and haven’t seen much of a difference. So the cost threshold doesn’t seem to have much of an impact.
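Like MAXDOP, the cost threshold is a server-level sp_configure setting, so the experiment was just a matter of flipping it between two extremes and re-measuring. Roughly:

```sql
-- Try a very low threshold (more queries qualify for parallel plans)...
-- ('show advanced options' is already on from the MAXDOP change.)
EXEC sp_configure 'cost threshold for parallelism', 5;
RECONFIGURE;

-- ...then a fairly high one (fewer queries qualify), measuring in between.
EXEC sp_configure 'cost threshold for parallelism', 100;
RECONFIGURE;
```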

So, we’re not fully there yet; there are a number of other factors we need to consider.  But sometimes when you’re approaching a problem, don’t be afraid to move the needle, in any direction, hard. It can tell you whether you should continue with that approach. In this case, with MAXDOP, it indicated we were on the right track; with the cost threshold, we’re probably not.

We’ve got a lot more to do, including seeing if we can eliminate or speed up the MSQL_XP wait states, but we’re on our way. (For the record, I don’t expect much change on this one; it’s really SQL Server saying, “hey, I called out to an external procedure and am waiting to hear back,” so we can’t tweak the query or do other things that would make much of a difference.)
