White (K)nights

I apologize for skipping two weeks of blog posts, but I was a bit busy; for about 11 days my family and I were visiting Europe for the first time. It was a wonderful trip, and it started in Manchester, UK, with a SQL Saturday event.

I had sort of forgotten just how far north we were until it dawned on me how early dawn was. Actually, we had noticed the night before, as we walked back from the amazingly wonderful speakers’ dinner, how light it was despite how late it was. When I woke up at around 4:30 AM (a bit of jetlag there), I noticed, despite the blackout curtains, how bright it was around their edges. I later looked it up, and it appears that technically it never reached “night” there, only astronomical twilight.

Ever since seeing the movie “White Nights” my wife has always wanted to experience the white nights of Russia. This wasn’t that, but it was close.

This trip came on the heels of the amazingly successful Thai cave rescue that I had previously commented on. As long-term readers know, I’m a caver who also teaches cave rescue and serves as the Northeast Coordinator of the National Cave Rescue Commission. During the 18-day saga, I and others were called upon by various media outlets to give our insight and perspective. I was fortunate: I did only a little under a dozen media events. Our National Coordinator, Anmar Mirza, did well over 100, most of those in about a 5-day period. A link to one of my media events is here: The Takeaway.

I don’t want to talk about the operation itself; I want to talk about white knights. We love our white knights: the term often refers to a character who rides into town and single-handedly solves the town’s problems. The truth is, white knights rarely if ever exist, and most problems require a lot more effort to solve.

We’ve seen this in politics, and we saw this with this cave rescue. Let me start by saying I think the work Elon Musk has done with SpaceX is amazing. SpaceX has in fact single-handedly revolutionized the space launch market.

It was perhaps inevitable that Musk’s name would show up in relation to this cave rescue. Musk had previously gotten attention for attempting to help with the power outage crisis in Puerto Rico, and now for his vow to help the people of Flint (both, by the way, causes I consider worthy, and I wish him, and more importantly the people he’s trying to help, well).

But here’s the thing: a cave rescue isn’t solved by a white knight. It’s solved by a lot of effort and planning by a lot of people with a variety of skills and experience. There’s rarely a breakthrough that magically makes things easier.

And I’ll be blunt: his “submarine” idea, while interesting, was at best a PR distraction and at worst may have caused real problems.

“But Greg, he was trying to help; how could this make things worse?” I actually disengaged from an online debate with some Musk fanbois who couldn’t see why Musk’s offer was problematic. To them, he was the white knight who could do no wrong.

Here’s the thing: I know for a fact that several of us, myself included, had to take part of our allotted airtime or written coverage to address why Musk’s idea probably wouldn’t work. This meant less time or room for useful information to be passed on to the audience. Part of my role as regional coordinator is to educate people about cave rescue, and I can’t do this effectively when I’m asked to discuss distractions.

“But so what, that didn’t impact the rescue.” No, it didn’t. But it appears, from the Twitter fights I’ve seen and other information, that at least some resources on the ground were tasked with dealing with Musk and the attendant publicity. Those resources couldn’t be spent elsewhere. At least one report from Musk (which honestly I question) suggests he actually entered the cave during the rescue operations. If so, resources had to be spent ensuring his safety, and his presence possibly prevented another person who could have helped in other ways (even if simply acting as a sherpa) from entering.

And apparently, there’s now a useless “submarine” sitting outside the cave.  I’ll leave discussion of why I had problems with the submarine itself for another post.

But here’s one final reason I have a problem with Musk bringing so much attention to himself and his idea: it could have led to second-guessing.

Let’s be clear: even the cave divers themselves felt that they would most likely lose some of the kids; that’s how dangerous the rescue was. This is coming from the folks who best knew the cave and best understood the risks and issues. Some of the best cave divers in the world, with rescue experience, who were on-site, thought that some kids would die in the attempt to rescue them. And, if reports are true, they were aware of Musk’s offer and obviously rejected it (in fact, one later suggested that Musk do something anatomically impossible with it).

Had the rescuers’ worst fears come true, Musk fanbois would have second-guessed every decision. In other words, people would have put more faith in their favorite white knight, who had zero practical experience in the ongoing operations, than in the very people who were there and actively involved. I saw the comments before and during the operations from his fans, and all of them were upset that their favorite white knight wasn’t being called in to save the day. I can only imagine how bad it would have been had something tragic occurred.

This is why I’m against white knights. They rarely if ever solve the problem, and worse, when they do ride into town, they take time and energy away from those who are actually working on the problems. Leave the white knights on a chessboard.

“Today is D-Day”

As I’m writing this, word has rocketed around the world that the 12 soccer players and their coach have been safely rescued from Tham Luang cave. We are awaiting word on the rescuers themselves, including one of the doctors who had spent time with the boys since they were found, who are still on their way out.

Unfortunately, one former Thai SEAL diver, Saman Kunan, who had rejoined his former teammates to help in the rescue, lost his life. This tragic outcome should not be forgotten, nor should it cast too large a shadow on the amazing success.

What I want to talk about, though, is not the cave or the rescue operations, but the decision-making process. The title of this post comes from Narongsak Osottanakorn’s statement several days ago when they began the evacuation operations.


The term D-Day actually predates the famous Normandy landings that everyone associates it with. However, success of the Normandy landings and their importance in the ultimate outcome of WWII has forever cemented that phrase in history.

One of the hardest parts of any large-scale operation like this is making the decision on whether to act. During the Apollo program, they called these GO/NO GO decisions. Famously, you can see this in the movie Apollo 13, where Gene Kranz goes around the room asking for a Go/No Go for launch. (It was pointed out in a Tindallgram before the Apollo 11 landing that the call after the Eagle landed should be changed to Stay/No Stay, so there was no confusion about whether they were “go to stay” or “go to leave.”)

While I’ve never been Flight Director for a lunar mission, nor Supreme Allied Commander for a European invasion, I have had to make life-or-death decisions on much smaller operations. A huge issue is not knowing the outcome. It’s like walking into a casino: if you knew you were always going to win, deciding how to bet would be easy. But obviously that’s not possible. The best you can do is gather as much information as you can, gather the best people you can around you, trust them, and then make the decision.

What compounds the decision-making process in many cases, and especially in cave rescue, is the lack of communication and lack of information. It can be very frustrating to send rescuers into the cave and not know, sometimes for hours, what is going on. Compound this with what is sometimes intense media scrutiny (certainly present here, with the entire world watching), and one can feel compelled to rush the decision-making process. It is hard, but generally necessary, to resist this. In an incident I’m familiar with, I recall a photograph of the cave rescue expert advising rescue operations, standing in the rain near the cave entrance, waiting for the waters to come down so they could send search teams in. Social media was blowing up with comments like, “They need to get divers in there now!” and “Why aren’t the authorities doing anything?” The fact is, the authorities were doing exactly what the cave rescue expert recommended: waiting for it to be safe enough to act. Once the waters came down, they could send people in and find the trapped cavers.

The incident in Thailand is a perfect example of the confluence of these factors:

  • There was media pressure from around the world, with people asking why they were taking so long to begin rescuing the boys and, once they did start, why it took three days. Offers and suggestions flowed in from around the world and varied from the absurd (one suggestion we received at the NCRC was the use of dolphins) to the unfortunately impractical (let’s just say Mr. Musk wasn’t the only one, nor the first, to suggest some sort of submarine or sealed bag).
  • There was always a lack of information. Even after the boys had been found, it could take hours to get information to the surface, or from the surface back to the boys. This hinders the decision-making process.
  • Finally of course are the unknowns:
    • When is the rain coming?
    • How much rain?
    • How will the boys react to being submerged?
    • What can they eat in their condition?

And finally, there is, in the back of the minds of the folks making the decisions, the knowledge that if the outcome turned tragic, everyone would second-guess them.

Narongsak Osottanakorn and others had to weigh all of the above with the facts they had, knowing they couldn’t have as much information as they might want, and make life-impacting decisions. For that, I have a great deal of respect for them, and I don’t envy them.

Fortunately, in this case, the decisions led to a successful outcome, which is a huge relief to the families and the world.

For any operation, especially a complex one such as this rescue, a moon landing, or an invasion of the beaches of Normandy, the planning and decision-making process is critically important and often overshadowed by the folks executing the operation. As important as Neil Armstrong, Buzz Aldrin, and Michael Collins (who all too often gets overlooked, despite writing one of the better autobiographies of the Apollo program) were to Apollo 11, without the support of Gene Kranz, Steve Bales, and hundreds of others on the ground, they would very likely have had to abort their landing.

So, let’s not forget the people behind the scenes making the decisions.


The Thai Cave Rescue

“When does a cave rescue become a recovery?” That was the question a friend of mine asked me online about a week ago. This was before the boys and their coach had been found in the Thai cave.

Before I continue, let me add a huge caveat: this is an ongoing dynamic situation and many of the details I mention here may already be based on inaccurate or outdated information. But that’s also part of the point I ultimately hope to make: plans have to evolve as more data is gathered.

My somewhat flippant answer was, “When they’re dead.” That’s a bit of dark humor, but there was actually some reasoning behind it. Before I go on, let me say that at that point I still had a lot of hope, and reason to believe they were alive. I’m very glad that they were in fact found alive and relatively safe.

There’s a truth about cave rescue: caves are a black hole of information. Until you find the people you’re searching for, you have very little to go on. Sometimes it may be as little as, “They went into this cave and haven’t come out yet.” (Actually, sometimes it can be even less than that: “We think they went into one of these caves, but we’re not even sure about that.”)

So when it comes to rescue, two of the things we try to teach cave rescue students are to look for clues and to try to establish communications. A clue might be a footprint or a food wrapper. It might be the smell of a sweaty caver wafting in a certain direction. A clue might be the sound of someone calling for help. And the ultimate clue, of course, is the caver themselves. But there are other clues we might look for: What equipment do we think they have? What experience do they have? What are the characteristics of the cave? These can all drive how we search and what decisions we make.

Going back to the Thai cave situation: based on the media reports (which should always be taken with a huge grain of salt), it appeared that the coach and boys probably knew enough to get above the flood level, and that the cave temperatures were in the 80s (Fahrenheit). These are two reasons I was hopeful. Honestly, had they not gotten above the flood zone, we’d almost certainly be talking about a tragedy instead. Had the cave been a typical northeast cave, where the temperatures are in the 40s (F), I would have had a lot less hope.

Given those details, it was reasonable to believe the boys were still alive and to continue treating the situation as a search, and eventually a rescue. And fortunately, that’s the way it turned out. What happens next is still open to speculation, but don’t be surprised if they bring in gear and people and bivouac in place for weeks or even months until the water levels come down.

During the search process, a lot of phone line was apparently laid into parts of the cave so that communications with the surface would be easier. Now that they have found the cavers, I’d be shocked if some sort of real-time communication isn’t set up in short order. This will allow the incident commander to make better-informed decisions with the most accurate and up-to-date data.

So, let me relate this to IT and disasters. Typically a disaster will start with, “the server has crashed” or something similar. We have an idea of the problem, but again, we’re really in a black-hole of information at that moment. Did the server crash because a hard drive failed, or because someone kicked the power cord or something else?

The first thing we need to do is get more information. And we may need to establish communications. We often take that for granted, but the truth is, when a major disaster occurs, the first thing to go is often good communications. Imagine that the crashed server is in a datacenter across the country. How can you find out what’s going on? Perhaps you call for hands-on support. But what if the reason the server crashed is that the datacenter is on fire? You may not be able to reach anyone! You might need to call a friend in the same city and have them go over there. Or you might even turn on the news to see if there’s anything worth noting.

But the point is, you can’t react until you have more information. Once you start to have information, you can start to develop a reaction plan. But let’s take the above situation and imagine you find your datacenter has in fact burned down. You might start to panic and think you need to order a new server. You’re calling up your CFO to ask her to let you buy some new hardware when suddenly you get a call from your tech at the remote site: “Yeah, the building burned down, but we got real lucky. Our server was in an area that was undamaged, and I’ve got it in the trunk of my car. What do you want me to do with it?”

Now your previous data has been invalidated and you have new information and have to develop a new plan.

This is the situation in Thailand right now. They’re continually getting new information and updating their plans as they go. And this is the way you need to handle your disasters: establish communications, gather data, create a plan, and update that plan as the data changes. And don’t give up hope until you absolutely have to.
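That loop can be sketched in a few lines of code. This is purely an illustration in Python; the facts, plans, and the `make_plan` function are all invented to mirror the burned-datacenter story above, not any real incident-response tool:

```python
def make_plan(facts):
    """Pick a response based only on what we currently know.
    Earlier plans get thrown away whenever new facts invalidate them."""
    if not facts.get("comms_established"):
        return "establish communications"       # first priority: escape the information black hole
    if facts.get("server_recovered"):
        return "restore service from recovered hardware"
    if facts.get("datacenter_destroyed"):
        return "order replacement hardware"
    return "gather more information"

# Facts arrive over time; each batch can invalidate the previous plan.
updates = [
    {"comms_established": True},
    {"datacenter_destroyed": True},   # first plan: buy new servers...
    {"server_recovered": True},       # ...invalidated by the tech's phone call
]

facts = {}
plans = []
for update in updates:
    facts.update(update)
    plans.append(make_plan(facts))

print(plans[-1])  # restore service from recovered hardware
```

The point of the sketch is that the plan is recomputed from the facts every time, rather than committed to once: the “order replacement hardware” plan lives only until the call about the server in the trunk comes in.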

A Lost Sked

Not much time to write this week. I’m off in Alabama crawling around in the bowels of the Earth teaching cave rescue to a bunch of enthusiastic students. The level I teach focuses on teamwork. And sometimes you find teams forming in the most interesting ways.

Yesterday our focus was on some activities in a cave (this one known as Pettyjohn’s) that included a type of litter known as a Sked. When packaged, it’s about 9″ in diameter and 4′ tall, in a bright orange carrier. It’s hard to miss.

And yet, at dinner, the students were a bit frantic; they could not account for the Sked. After some discussion they determined it was most likely left in the cave.

As an instructor, I wasn’t overly concerned; I figured it would be found, and if not, it’s part of the reason our organization has a budget for lost or broken equipment, even the expensive pieces.

That said, what was quite reassuring was that the students completely gelled as a team. There was no finger pointing, no casting blame. Instead, they figured out a plan, determined who would go back to look for it and when. In the end, the Sked was found and everyone was happy.

The moral is, an incident like this can turn a group into individuals blaming each other, or it can turn the group into a team where everyone shares responsibility. In this case it was the latter, and I’m quite pleased.

RCA or “get it running!”

How often have any of us resorted to fixing a server issue by simply rebooting the server?  Yes, we’re all friends here, you can raise your hands. Don’t be shy. We all know we’ve done it at some point.

I ask the question because of a recent tweet I saw with the hashtag #sqlhelp where Allan Hirt made a great comment:

Finding root cause is nice, but my goal first and foremost is to get back up and running quickly. Uptime > root cause more often than not.

This got me thinking, when is this true versus when is it not? And I think the answer ends up being the classic DBA answer, “it depends”.

I’m going to pick two well studied disasters that we’re probably all familiar with. But we need some criteria.  In my book IT Disaster Response: Lessons Learned in the Field I used the definition:

Disaster: An unplanned interruption in business that has an adverse impact on finances or other resources.

Let’s go with that.  It’s pretty broad, but it’s a starting point. Now let’s ignore minor disasters like I mention in the book, like the check printer running out of toner or paper on payroll day. Let’s stick with the big ones; the ones that bring production to a halt and cost us real money.  And we’re not going to restrict ourselves to IT or databases, but we’ll come back to that.

The first example I’m going to use is the Challenger disaster. I would highly recommend folks read Diane Vaughan’s seminal work: The Challenger Launch Decision: Risky Technology, Culture, and Deviance at NASA. That said, we all know that when this occurred, NASA did a complete stand-down of all shuttle flights until a full RCA was complete and many changes were made to the program.

On the other hand, after the famous Miracle on the Hudson, airlines did not stop flying. But that doesn’t mean an RCA wasn’t done. It was; just well after the incident.

So, back to making that decision. Here, it was an easy one. Shuttle flights were occurring only every few months, and other than delaying some satellite launches (which ironically may have led to issues with the Galileo probe’s antenna), there wasn’t much reason to fly immediately afterwards. Also, while the largest points were known, i.e., something caused a burn-through of the SRB, it took months to determine all the details. So, in this case, NASA could and did stand down for as long as it took to rectify the issues.

In the case of the Miracle on the Hudson, the cause was known immediately. That said, an RCA was still done to determine the degree of the damage, whether Sullenberger and Skiles had done the right thing, and what procedural changes needed to be made. For example, one item that came out of the post-landing analysis was that the engine restart checklist wasn’t really designed for low-altitude failures such as they experienced.

Doing a full RCA of the bird strike on US Airways 1549 while stopping all other flights would have been an economic catastrophe. But it was more than simply that. It was clear, based on the millions of flights per year, that this was a very isolated incident; the exact scenario was unlikely to happen again. With Challenger, there had been only 24 previous flights, and ALL of them had experienced various issues, including blow-by of the primary O-rings and other issues with the SRBs.

So back to our servers. When can we just “get it running,” versus taking downtime to do a complete RCA, versus other options?

I’d suggest one criterion is, “How often has this happened, compared to our uptime?”

If we’ve just brought a database online and within the first week it has crashed, I’m probably going to want an immediate RCA. If it’s been running for years and this is the first time the issue has come up, I’m probably going to just get it running again and not be as adamant about an immediate RCA. I will most likely try to do an RCA afterwards, but I may not push for it as hard.

If the problem starts to repeat itself, I’m more likely to push for some sort of immediate RCA the next time the problem occurs.

What about the seriousness of the problem? If I have a server that consistently runs at 20% CPU and every once in a while leaps to 100% for a few seconds before dropping back to 20%, will I respond the same way as if it crashes and takes me 10 minutes to get back up? Maybe. Is it a web server for cat videos that I make a few hundred dollars off of every month? Probably not. Is it a stock-trading server where those few seconds cost me thousands of dollars? Then yes, I almost certainly will attempt an RCA of some sort.

Another factor would be, what’s involved in an RCA? Is it just a matter of copying some logs to someplace for later analysis and that will simply take a few seconds or minutes, or am I going to have to run a bunch of queries, collect data and do other items that may keep the server off-line for 30 minutes or more?

Ultimately, in most cases, it’s going to come down to balancing money and, in the most extreme cases, lives. Determining the root cause now may save money later, but costs money now. On the other hand, skipping the RCA now might save money now, but cost money later. Some of it is a judgment call; some of it depends on the factors you use to make your decision.
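For what it’s worth, the criteria above could be roughed out as a simple scoring function. This is only a sketch; the function name, thresholds, and dollar figures are all invented for illustration, and any real version would use your own business’s numbers:

```python
def immediate_rca_recommended(failures_in_period, uptime_days,
                              cost_per_minute_down, rca_minutes_offline):
    """Weigh failure frequency against the cost of staying down
    long enough to do a root-cause analysis right now."""
    # A brand-new system failing, or a repeat offender, argues for RCA now.
    frequent = failures_in_period > 1 or uptime_days < 7
    # If the RCA itself costs more downtime than the business can bear,
    # get running first and analyze afterwards.
    rca_cost = cost_per_minute_down * rca_minutes_offline
    affordable = rca_cost < 1000    # hypothetical pain threshold, in dollars
    return frequent and affordable

# New database, crashed in its first week, cheap to keep offline: RCA now.
print(immediate_rca_recommended(1, 3, 5, 30))      # True
# Years of uptime, first crash, expensive downtime: get it running first.
print(immediate_rca_recommended(1, 900, 500, 30))  # False
```

The real decision is rarely this mechanical, of course, but writing the criteria down, even crudely, forces you to decide in advance what “too often” and “too expensive” mean for your systems, rather than arguing about it mid-outage.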

And yes, before anyone objects: I’m only briefly touching on the fact that an RCA can often still be done after getting things working again. My focus here is the cases where it has to be done immediately or evidence may be lost.

So, what are your criteria for doing an RCA immediately versus getting things running as soon as you can? I’d love to hear them.


Crane Operators

Talking online with friends the other day, someone brought up that crane operators in NYC can make $400-$500K a year. Yes, a year. I figured I’d confirm that before writing this post and it appears to be accurate.

At first glance one may think this is outrageous, or perhaps that they chose the wrong field. I mean, I enjoy being a DBA and a disaster geek, but I can’t say I’ve ever made $400K in one year! And for what? You lift things up and put them down. Right?

Let me come back to that.

So, last night, I got paid quite a tidy bundle (though not nearly that much) for literally logging into a client computer, opening VisualCron, clicking on a task, and saying, “disable task.” On one hand, it seemed ridiculous: not just because of what they were paying me, but because this process was the result of several meetings, more than one email, and a review process. All to say, “Stop copying this file.”

But this file was part of a key backup process for a core part of the client’s business. I had initially set up an entire process to ensure that a backup was being copied from an AIX server in one datacenter to a local NAS and then to the remote datacenter. It’s a bit more complex than it sounds. But it worked. And the loss of a timely backup would impact their ability to recover by hours if not days. This could potentially cost them hundreds of thousands of dollars, if not millions.

So the meetings and phone calls and emails weren’t just about “which button should Greg click,” but covered questions like, “Do we have the backups we think we have?” “Are they getting to the right place(s)?” “Are they getting there in a timely fashion?” And even, “When we uncheck this, we need to make sure the process for the day is complete so we don’t break it.”

So me unchecking that button after hours, as much as it cost the company, was really the end of a complex chain of events designed to make sure they didn’t risk losing a LOT of money if things went wrong. Call it an insurance payment if you will.

Those crane operators in NYC? They’re not simply lifting a beam here and there and randomly placing it someplace. They’re maneuvering complex systems in tight spaces with heavy loads, where sudden gusts can set things swaying or spinning and a single mistake can do thousands of dollars in damage or even kill people.

It’s not so much what they’re being paid to do as how much they’re being paid to avoid the cost of a mistake. I wasn’t paid just to unclick a button. I was paid (as were the others in the meetings) to make sure it was the right button, at the right time, and that it wouldn’t cost even more.

Sometimes we’re not paid for what we do, as much as we’re paid for what we’re not doing.


Mistakes were made

I generally avoid Reddit, I figure I have enough things in my life sucking my time. But from time to time one link comes across my screen that I find interesting. This is one of them.

The user accidentally deleted a production database. Now, I think we can all agree that deleting a database in production is a “bad thing”. But, whose fault is this really?

Yes, one could argue the employee should have been more careful, but let’s back up.

The respondents in the thread raise several good points.

  • Why were the values in the documentation ones that pointed to a LIVE production database? Why not point to a dev copy or, better yet, one that doesn’t really exist? They expect the person to update the parameters anyway, so worst case, if they leave the fake ones in, nothing happens.
  • Why didn’t Production have backups? This is a completely separate question, but a very important one!
  • Why fire him? As many pointed out, he had just learned a VERY valuable lesson, and taught the company a very valuable lesson too!
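To illustrate the first point: documentation examples can use values that are guaranteed never to touch anything real. The config below is hypothetical (the field names and the `validate_config` guard are mine, not from the Reddit thread); `db.invalid` relies on the reserved `.invalid` top-level domain, which can never resolve:

```python
# Hypothetical placeholder values for setup documentation. Running the
# docs verbatim can't touch a live database: ".invalid" is a reserved
# TLD (RFC 2606) that is guaranteed never to resolve in DNS.
EXAMPLE_CONFIG = {
    "host": "db.invalid",
    "database": "example_dev",   # clearly not production
    "user": "CHANGE_ME",
    "password": "CHANGE_ME",
}

def validate_config(config):
    """Refuse to proceed until the operator replaces the placeholders."""
    if config["host"].endswith(".invalid") or "CHANGE_ME" in config.values():
        raise ValueError("Replace the placeholder values before running setup.")
    return True
```

With a guard like this, the worst case of copy-pasting the documentation is an error message, not a deleted production database.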

I’ll admit, I’d done something similar in my career at one of my employers. Except I wasn’t an intern; I was the Director of IT, and my goal in fact WAS to do something on the live database. The mistake I made was a minor one in execution (I reversed the direction of an arrow in the GUI before hitting the Execute button) but disastrous in terms of impact. And of course there wasn’t a recent enough backup.

I wasn’t fired for that. I did develop and enforce our change-control documents after that, and always ensured, as much as possible, that we had adequate backups. (Later in my career, a larger, bigger muckup did get me… “given the opportunity to apply my skills elsewhere,” but there were other factors involved, and I arguably wasn’t fired JUST for the initial issue.)

As the Director of IT, I made a point of telling my employees that story. And I explained that I expected them to make mistakes; if they didn’t, they probably weren’t trying hard enough. But I told them the two things I wouldn’t accept were lying about a mistake (trying to cover it up, blaming others, etc.) and repeatedly making the same mistake.

I wrote in an earlier post that mistakes were avoidable. But as I pointed out, it’s actually more complex than that. Some mistakes are avoidable. Or, at least, they can be managed. For example, it is unfortunately likely that at some point, someone, somewhere, will munge production data. Perhaps they won’t delete it all, or perhaps they’ll make a White Ford Taurus-type mistake, but it will happen. So you have safeguards in place. First, limit the number of people in a position to make such a mistake. Second, have adequate backups. There are probably other steps you can take to reduce the chance of error and mitigate it when it does eventually happen. Work on those.

But don’t fire the person who just learned a valuable lesson. They’re the one least likely to make that mistake again. Me, I’d probably fire the CTO for not having backups, for having production values in the documentation like that, AND for firing the new guy.