SQL Data Partners Podcast

I’ve been keeping mum about this for a few weeks, but I’ve been excited about it. A couple of months ago, Carlos L Chacon from SQL Data Partners reached out to me about the possibility of being interviewed for their podcast. I immediately said yes. I mean, hey, it’s free marketing, right?  More seriously, I said yes because when a member of my #SQLFamily asks for help or to help, my immediate response is to say yes.  And of course it sounded like fun.  And boy was I right!

What had apparently caught Carlos’s attention was my book: IT Disaster Response: Lessons Learned in the Field. (Quick, go order a copy now... that’s what Amazon Prime is for, right? I’ll wait.)

Ok, back? Great. Anyway, the book is sort of a mash-up (to use the common lingo these days) of my interests in IT and cave rescue and plane crashes. I try to combine the skills, lessons learned, and tools from one area and apply them to other areas. I’ve been told it’s a good read. I like to think so, but I’ll let you judge for yourself. Anyway, back to the podcast.

So we recorded the podcast back in January. Carlos and his partner Steve Stedman were on their end and I on mine. And I can tell you, it was a LOT of fun. You can (and should) listen to it here. I just re-listened to it myself to remind myself of what we covered. What I found remarkable was that as much as I was trying to tie it back to databases, Carlos and Steve seemed as interested, if not more so, in cave rescue itself. I was ok with that. I personally think we covered a lot of ground in the 30 or so minutes we talked. And it was great because this is exactly the sort of presentation, combined with my airplane crash talk and others, that I’m looking to build into a full-day onsite consult.

One detail I had forgotten about in the podcast was the #SQLFamily questions at the end. I still think I’d love to fly because it’s cool, but teleportation would be useful too.

So, Carlos and Steve, a huge thank you for asking me to participate and for letting me ramble on about one of my interests. As I understand it, my friend Ray Kim has a similar podcast with them coming up in the near future.

So the thought for the day is: think about how skills you learn elsewhere can be applied to your current responsibilities. It might surprise you, and you might do a better job.


Hours for the week

Like I say, I don’t generally post SQL-specific stuff because, well, there are so many blogs out there that do. But what the heck.

Had a problem the other day. I needed to return the hours worked per day over a date range for a specific employee, and if they worked no hours on a given day, return 0. So basically I had to deal with gaps in the data.

There are lots of solutions out there; this is mine:

Alter procedure GetEmployeeHoursByDate @startdate date, @enddate date, @userID varchar(25)
as

-- Usage: exec GetEmployeeHoursByDate '2018-01-07', '2018-01-13', 'gmoore'

-- Author: Greg D. Moore
-- Date: 2018-02-12
-- Version: 1.0

-- Get the totals for the days in question

set NOCOUNT on

-- First let's create a simple table that just has the range of dates we want

; WITH daterange AS (
SELECT @startdate AS WorkDate
UNION ALL
SELECT DATEADD(dd, 1, WorkDate)
FROM daterange
WHERE DATEADD(dd, 1, WorkDate) <= @enddate)

select dr.WorkDate as WorkDate, coalesce(a.DailyHours, 0) as DailyHours from
(
-- Here we get the hours worked and sum them up for that person.
select ph.WorkDate, sum(ph.Hours) as DailyHours from ProjectHours ph
where ph.UserID = @userID
and ph.WorkDate >= @startdate and ph.WorkDate <= @enddate
group by ph.WorkDate
) as a
-- Now join our table of dates to our hours and put in 0 for dates we don't have hours for
right outer join daterange dr on dr.WorkDate = a.WorkDate
order by dr.WorkDate
-- Recursive CTEs default to a 100-level limit; lift it so ranges over 100 days work
option (MAXRECURSION 0)

GO

There are probably better ways, but this worked for me. What’s your solution?
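For what it’s worth, one alternative (a sketch only; it assumes the same ProjectHours table and parameter values as the procedure above) is to skip the recursion entirely and generate the dates from a numbers table, which sidesteps the MAXRECURSION limit:

```sql
-- A non-recursive way to build the date range, using master..spt_values
-- as a ready-made numbers table (type 'P' gives 0-2047, so roughly 5 years of days).
declare @startdate date = '2018-01-07', @enddate date = '2018-01-13', @userID varchar(25) = 'gmoore'

select dates.WorkDate, coalesce(sum(ph.Hours), 0) as DailyHours
from (
    select DATEADD(dd, v.number, @startdate) as WorkDate
    from master..spt_values v
    where v.type = 'P'
      and v.number <= DATEDIFF(dd, @startdate, @enddate)
) as dates
left outer join ProjectHours ph
    on ph.WorkDate = dates.WorkDate
   and ph.UserID = @userID
group by dates.WorkDate
order by dates.WorkDate
```

Same result, and the optimizer tends to handle the derived table of dates a bit more gracefully than a deep recursion.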

The Basics

Last night at our local SQL Server User Group meeting we had the pleasure of hearing Deborah Melkin speak. I first met Deborah at our Albany SQL Saturday event last year, where she gave Back to the Basics: T-SQL 101. Because of the title I couldn’t help but attend. It wasn’t the 101 part by itself that caught my eye; it was the “Back to the Basics”. While geared to beginners, I thought the idea of going back to the basics of something I take for granted was a great idea. She was also a first-time speaker, so I’ll admit, I was curious how she would do.

It was well worth my time. While I’d say most of it was review, I was reminded of a thing or two I had forgotten and taught a thing or two new. But also, very importantly, she had a great ability to break down the subject into a clearly understandable talk. This is actually harder than many people realize. I’ve heard some brilliant speakers who simply can’t convey their message, especially on basic items of knowledge, in a way that beginners can understand.

So, after the talk last summer, I cornered her at the Speaker’s Dinner and insisted she come up with a follow-up, a 201 talk if you will. Last night she obliged, with “Beyond the Select”. What struck me about it was that, other than a great tip about SSMS 17.4 (highlighting a table alias will show you what the base table is), again nothing was really new to me. She talked about UDFs; I’ve attended entire sessions on UDFs. She talked about CTEs; I’ve read extensively about them. She discussed windowing functions; we’ve had one of our presenters present on them locally. The same was true of some of the other items she brought up.

Now, this is NOT a slight at all, but really a compliment. Both as an attendee and as the guy in charge of selecting speakers, it was great to have a broad-reaching topic. Rather than a deep dive, this was a bit of everything: it gave the audience a chance to learn a bit of everything if they hadn’t seen it before (and based on the reactions and feedback I know many learned new stuff) and to compare different methods of doing things. For example, what’s the advantage of a CTE vs. a derived table vs. a temp table? Well, the answer is of course the DBA’s favorite answer: “it depends.”

As a DBA with decades of experience and as an organizer, it’s tempting to have a Bob Ward type talk every month. I enjoyed his talk last month. But, honestly, sometimes we need to go back and review the basics. We’ll probably learn something new or relearn something we had forgotten. And with talks like Deborah’s, we get to see the big picture, which is also very valuable.

So my final thought this week is that in any subject, not only should we be doing the deep dives that extend our knowledge, but we should review our basics. As DBAs, we do a SELECT every day. We take it for granted, but how many people can really tell you clearly its logical order of operations? Review the basics once in a while. You may learn something.
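As a quick refresher (the Employees table and its columns here are made up purely for illustration), the logical processing order of a SELECT is not the order we write it:

```sql
-- Logical processing order (not the written order):
--   1. FROM/JOIN   2. WHERE      3. GROUP BY   4. HAVING
--   5. SELECT      6. DISTINCT   7. ORDER BY   8. TOP
select top (5) Department, count(*) as Headcount  -- steps 5 and 8
from Employees                                    -- step 1
where Active = 1                                  -- step 2
group by Department                               -- step 3
having count(*) > 10                              -- step 4
order by Headcount desc                           -- step 7
```

That ordering is why, for instance, you can’t reference the Headcount alias in the WHERE clause: WHERE is evaluated before the SELECT list exists.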

And that’s why I selected this topic for this week’s blog.

The Streisand Effect

I had originally planned on a slightly different topic for this week’s blog, but an email I received from my alma mater last night changed my mind.  First, a little background. I’m a 1990 graduate of RPI in Troy NY, a fact I’m quite proud of. Second, lately there has been a growing controversy over the shape and direction of the school administration, led by Dr. Shirley Jackson.  Let me say that I find Dr. Jackson’s credentials impressive and many of her initiatives have led RPI into the right direction for the 21st Century.

But (and you knew that was coming), all is not rosy in Troy (especially today as I write, it’s a dreary, cloudy day).

So let’s back up a bit though and discuss the Streisand Effect. Originally and mainly the effect refers to bringing unwanted attention to something by trying to suppress access to information in the first place. A similar reaction can be had by telling someone, “Don’t think about a pink elephant.”  Ok, how many of you were thinking of a pink elephant before I told you not to?  Now how many are thinking about one?  But I’m serious, please stop thinking about a pink elephant. There’s no such thing as a pink elephant. Ok, now I’m just being cruel about the whole pink elephant thing.  I’ll stop.

So, back to RPI. As I mentioned, not everything is as pink and rosy as it might be. Generally in cases like this, you have one of three choices: hire a PR firm that advises you on a course of action, admit the issues and work to solve them, or shut up and hope things blow over and people stop talking about it.

In this case the Alumni office at RPI decided to take the 4th option. They decided to send a letter to all alumni, including many who probably had no inkling of the ongoing controversies or, if they did, didn’t care. The letter was written by an RPI professor in response to a set of well-written and researched articles and a website set up by a bunch of upset alumni. Like good RPI graduates, the alumni backed up their criticisms with research and data. (For example, the website notes how RPI’s credit rating has tanked over the years, an easily verifiable fact.)

The letter unfortunately did not address any of the data (except for one) and instead included highlights such as:

Could it be that the residual racism and sexism (no to mention heightism) that sits in the backs of the minds of the white male majority of our alumni makes it just a bit easier to see Dr Jackson as outside of her league, … out of her place?

Yes, somehow pointing out ongoing critical financial issues results in the Alumni office calling all alumni racists and sexists. Based on the reaction on several social media forums I’m on, after this letter, several alumni who were giving or thinking about giving have changed their minds.  I obviously know only a small subset of the thousands of alumni/ae who must have received this email.  But not a single one I know was convinced by this email to START donating to RPI. And at least one person who wasn’t aware of the majority of the issues said they were made aware as a result of this email and would stop donating.

So effectively, the RPI Alumni office has not only seriously insulted its donor base, it has brought attention to issues that many of the donor base apparently were not even aware of. Streisand Effect is now in full force!

I want to toss in one aside here to be clear: I am not ignorant of the fact, nor do I deny the fact, that Dr. Jackson certainly has faced some pushback because of her identity. I’ve seen comments about her skin color and gender and have pushed back against such comments. They’re not relevant to the issues at hand. She has faced and overcome a great deal of discrimination and dislike simply because of who she is. But that does not make her or the Board of Trustees immune to criticism based on their actual actions, ones that are backed up by data. The drop in RPI’s credit rating is not due to who she is but rather the actions she and the Board of Trustees have taken over the past 18 years.

In closing, as I step off my soapbox here, I realize this blog post is a bit off-topic from my usual fare, but it’s not really. It comes down to how we approach problems. Trying to ignore them doesn’t necessarily make them go away, but shaming a wider audience doesn’t help either; it only brings more attention to the issue. If in 2003 Barbra Streisand had decided to simply drop the issue of the photographs of her home, the issue would have faded into the woodwork and most people wouldn’t have cared.

In 2018, if the RPI alumni office hadn’t blasted an insulting and condescending email, devoid of facts, to its entire alumni base, fewer alumni would know about the issues. But I can guarantee that now many alumni who weren’t aware, or didn’t care, do.

Think about this when trying to do damage control at your company.

Too Secure 2

A quick followup to my blog post from the other day.

So, today I tried to update a service at the client. But of course, with IE locked down and cookies not allowed, I can’t update the service. Hmm. Tell me how that’s more secure?

And my wife came home from work last night talking about how she’s no longer able to get to a website critical for her job because the firewall rules changed. All this in the name of security.

Yes, we can be too secure!

Too Secure

There’s an old joke in IT that the Security Office’s job isn’t done until you can’t do yours.

There’s unfortunately at times some truth to that.  And it can be a bigger problem than you might initially think.

A recent example comes to mind. I have one client that has set up fairly strict security precautions. I’m generally in favor of most of them, even if at times they’re inconvenient. But recently, they made some changes that were frustrating, to say the least, and potentially problematic. Let me explain.

Basically, at times I have to transfer a file created on a secured VM I control to one of their servers (that in theory is a sandbox in their environment that I can play in). Now, I obviously can’t just cut and paste it. Or perhaps that’s not so obvious, but yeah, for various reasons, through their VDI, they have C&P disabled. I’m ok with that. It does lessen the chance of someone accidentally cutting and pasting the wrong file to the wrong machine.

So what I previously did was something that seemed strange, but worked. I’d email the file to myself and then open a browser session on said machine and download the file there. Not ideal, and I’ll admit there are security implications, but it does cause the file to get virus scanned in at least two places, and I’m very unlikely to send myself a dangerous file.

Now, for my webclient on this machine, I tended to use Firefox. It was kept up to date and as far as I know, up until recently, on their approved list of programs.  Great. This worked for well over a year.

Then, one day last week, I go to the server in question and there’s no Firefox. I realized this was related to an email I had seen earlier in the week about their security team removing Firefox from a different server, “for security reasons”. Now arguably that server didn’t need Firefox, but still, my server was technically MY sandbox. So I was stuck with IE. Yes, their security team thinks IE is more secure than Firefox. Ok, no problem, I’ll use IE.

I go ahead and enter my userid and supersecret password. Nothing happens. I try a few times, since maybe I got the password wrong. Nope. Nothing. So I tried something different to confirm my theory and got the dreaded “Your browser does not support cookies” error. Aha, now I’m on to something.

I jump into the settings and try several different things to enable cookies completely. I figure I can return things to the way they want after I get my file. No joy. Despite enabling every applicable option, it wouldn’t override the domain settings, and cookies remained disabled. ARGH.

So, next I figured I’d re-download FF and use that. It’s my box after all (in theory).

I get the install downloaded, click on it, and it starts to install. Great! What was supposed to be a 5-minute problem of getting the file I needed to the server is about done. It’s only taken me an hour or two, but I can smell success.

Well, it turns out what I was smelling was more frustration. Halfway through the install it locks up. I kill the process and go back to the file I downloaded and try again. BUT, the file isn’t there. I realize after some digging that their security software is automatically deleting certain downloads, such as the Firefox installer.

So I’m back to dead in the water.

I know, I’ll try to use Dropbox or OneDrive. But… both require cookies to get set up. So much for that.

I’ve now spent close to 3 hours trying to get this file to their server. I was at a loss as to how to solve this. So I did what I often do in situations like this: I jumped in the shower to think.

Now, I finally DID manage to find a way, but I’m actually not going to mention it here. The how isn’t important (though keeping the details private is probably at least a bit important).

Anyway, here’s the thing. I agree with trying to make servers secure. We in IT have too many data breaches as it is. BUT, there is definitely a problem with making things TOO secure. Actually, two problems. The first is the old joke about how a computer encased in cement at the bottom of the ocean is extremely secure, but also unusable. Their security measures almost got us to the state of an extremely secure but useless computer.

But the other problem is more subtle. If you make things too secure, your users are going to do what they can to bypass your security in order to get their job done. They’re not trying to be malicious, but they may end up making things MORE risky by enabling services that shouldn’t be installed or by installing software you didn’t authorize, thus leaving you in an unknown security state (for the record, I didn’t do either of the above.)

Also, I find it frustrating when steps like the above are taken, but some of the servers in their environment don’t have the latest service packs or security fixes. So, they’re fixing surface issues but ignoring deeper problems. While I was “nice” in what I did (i.e., I technically didn’t violate any of their security measures in the end), I did work to bypass them. A true hacker most likely isn’t going to be nice. They’re going to go for the gold and go through one of at least a dozen unpatched security holes to gain control of the system in question. So as much as I can live with their security precautions of locking down certain software, I’d also like to see them actually patch the machines.

So, security is important, but let’s not make it so tight that people go to extremes to bypass it.


She’s smart and good looking.

Now, if you work from home like I do, this exercise won’t really work, but if you work in an office, look around at your coworkers and start to notice what gender they present as. Most likely you’ll notice a lot of men and a few women.

Sexism is alive and well in the tech world. Unfortunately.

We hear a lot about efforts (which I support by the way) like Girls and Data and Girls Who Code. These are great attempts at addressing some of the gender issues in the industry.  We’ve probably all heard about the “Google Manifesto” (and no, I’m not linking to it, since most of the “science” in it is complete crap and I don’t want to give it any more viewership than it has had. But here’s a link to the problems with it.)

We know that grammar school and middle school girls have a strong interest in STEM fields. And yet, by the time college graduation rolls around, we have a disproportionately smaller number of them in computer science, for example. So the above attempts to keep them interested help, but honestly they only address part of the problem.

The other side is us men. Yes, us. We can tell our daughters all day long, “You’re smart, you can program,” “You too can be a DBA!” and more. But what do we tell our sons? We need to tell them that women can program. We should be telling them about Ada Lovelace and Admiral Grace Hopper. We should be making sure they realize that boys aren’t inherently better at STEM than girls. We should be making sure they recognize that their own language and actions have an impact.

What do we do ourselves when it comes to the office environment? Do we talk too much? Evidence suggests we do.

Do we subconsciously ignore the suggestions of our female coworkers, or perhaps subconsciously give more support or credence to the suggestions of our male coworkers? While I can’t find a citation right now, again evidence suggests we do.

Who is represented at meetings?  Are they a good ol’ boys network?  Who do we lunch with, both at work and when we network?

If you’re a member of a user group that has speakers, what does the ratio of speakers look like to you? Do they reflect the group’s ratio? Do they reflect the ratio of the industry?

I think it’s great that we have programs such as Girls who Code and Girls and Data, but we as men have to work on ourselves and work on our actions and reactions.

Some suggestions: “Sometimes, simply shut up.” I’ve started to do this more, especially if I’m in a group of women. LISTEN. And you know what, if you’re thinking right now, “well duh… because women talk so much I’d never get a word in anyway” you’re falling victim to the cliches and perpetuating the problem.

Support the women you work with. If they have a good idea, make sure it gets the same discussion as other ideas. And if one of your coworkers tries to co-opt it as their own, call them on it.  If you have a coworker (and I’ve had these) that is continually cutting off women in meetings, call them on it.

Seek out women speakers for your user groups. I’d suggest for example Rie Irish and her talk “Let her Finish”.  I asked Rie to speak at our local user group. Partly because of serendipity (I contacted one of our women members to let her know about the talk) we got the local Women in Technology group to advertise our meeting and ended up with a number of new members.

And finally, the title. Watch your language. Unless you’re working at a modeling agency or similar, you probably should never be introducing a coworker as “She’s smart and good looking.” Think about it: would you ever introduce a male coworker as “He’s a great DBA and handsome to boot”? Your coworkers, male or female, are just that, coworkers in a professional setting; treat them as such.

Two final thoughts:

  1. If somehow this blog post has impacted you more than the brilliant posts of Rie Irish, Mindy Curnutt, or others who have spoken on sexism in the industry, I’d suggest you examine your biases, not give credit to my writing.
  2. If you have suggestions for women speakers for my local user group, especially local ones who can make the second Monday of the month, please let me know.


Comfort Zone

Humans are, by nature, creatures of habit and familiarity. We’ll often go to the same restaurant time after time, not necessarily because it’s the best, but because we’re most familiar with it. One reason why McDonald’s is so popular is NOT because they serve the best hamburgers, but because you’re pretty comfortable, no matter where you go, knowing that you’ll get exactly the same hamburger every time.

However, if you never have anything other than McDonald’s you can miss out on some wonderful food.

I often try to get out of my comfort zone. Sometimes we have to do so to grow. Of course everyone’s comfort zone is different. I love to crawl through holes in the ground (and please, keep it simple, we call it caving, not spelunking.) To me, that’s a comfortable environment.

But recently I’ve been doing something outside of my comfort zone; I’ve been taking a sales training class. The truth is, being a consultant, as much as I love the tech side, I really need to sell myself. Sales IS part of what I need to do. And I’m not comfortable doing it.

But, to expand I have to learn how. And I have to admit, I’ve learned a lot. It’s been worth it.

Another thing I’m doing to step a wee bit out of my comfort zone is to schedule a weekly blog post. Rather than do it hit or miss, I’m going to try to make it more formal.

So, what have you done to step out of your comfort zone lately?  How has it worked for you?

Oh and if you’re ever in upstate New York and want to go caving, let me know.

Riffing on a theme

As I’ve mentioned in the past, I often write these as inspiration strikes and today it did.

Specifically I’m inspired by Grant Fritchey’s latest blog post THERE IS A MAGIC BUTTON, A RANT.

First, I couldn’t resist doing this simply because I could put a pun in the title. (Read his entire article to see what I mean.)

But more so, because it’s part of a theme I’ve heard for decades. “This new technology will put people out of jobs!”

The truth is, sometimes it’s true.  I mean, how many buggy-whip manufacturers do you see these days? How many do you think existed before Ford started rolling the Model T off of the assembly line? How many after?

Yes, the Model T put many buggy-whip makers out of jobs. BUT, it created far more jobs than it eliminated.

Automated elevators put elevator operators out of a job. But you know what, for the most part, that’s a good thing. Let’s use our human capital in a better, wiser way.

For decades I’ve heard that one technology or another will make SQL useless or pointless. At one point it was object-oriented databases. Now it’s NoSQL. But you know what, SQL is not only still here, it’s thriving, and these days when folks say NoSQL, they often mean not “NO SQL” but shorthand for “Not Only SQL.”

So, performance tuning automation? Great. I love it. Bring it on. It WILL in fact mean less work I have to do on that front in many cases. But you know what, it won’t fix the situation I’m in right now, where a customer has a server they use for “database conversions”. The problems include databases still in FULL recovery mode but with no log backups, DBCC checks not having been run in weeks or months on some of them, and the tempdb log filling up the drive on Saturday while I was sitting in a talk at the Albany SQL Saturday.
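A quick sanity check for the first of those problems might look something like this (a sketch; it flags databases in FULL recovery whose last log backup is missing or over a day old):

```sql
-- Find FULL-recovery databases with no recent transaction log backup.
select d.name, d.recovery_model_desc,
       max(b.backup_finish_date) as LastLogBackup
from sys.databases d
left join msdb.dbo.backupset b
    on b.database_name = d.name
   and b.type = 'L'           -- 'L' = transaction log backup
where d.recovery_model_desc = 'FULL'
group by d.name, d.recovery_model_desc
having max(b.backup_finish_date) is null
    or max(b.backup_finish_date) < DATEADD(hh, -24, GETDATE())
```

The 24-hour threshold is just an example; pick whatever your recovery point objective actually demands.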

Yes, at some point most of those issues will probably be handled automatically, but until they do, I’ll be busy. And when they DO automate those, I’ll have moved on to new issues.

There’s always a place for trained people.

Since I like to link my topics to other ideas like plane crashes, I’ll point out that autopilots on modern commercial airliners are amazing things. They really can pretty much handle any part of normal flight operations including take-offs and landings. But, what they can’t handle is the unexpected. And this is actually an issue in two ways. First of course, is the fact that it took human ingenuity to safely land the Miracle on the Hudson.  There’s no autopilot out there that could make that decision and pull it off.

The second, and this is actually the bigger issue self-driving cars are facing, is that with an autopilot or self-driving car, 99.99% of the time operations become SO mundane that the pilot or “driver” ends up out of the loop. They end up reading, falling asleep, or simply not paying attention. This means that when they ARE required to interact, there can be a several-second delay before they’re fully aware of the situation and can react. In a plane this may or may not be an issue, depending on the altitude at which the situation occurs.

In a self-driving car, we’re already seeing situations where the “driver” can’t get back into the control loop fast enough and an accident occurs.

So, while automation can eliminate a lot of the drudgery and “take away jobs”, we still need humans in the loop, and there is no foreseeable end to the jobs we’ll be needed for.

So don’t despair about automation.


Don’t Break the Chain!

If one backup is good, two is better right?

Not always.

Let me start by saying I’ve often been very skeptical of SQL Server backups done by 3rd party tools. There are really two reasons. For one, many years ago (when I first started working with SQL Server) they often simply weren’t good. They had issues with consistency and the like. Over time, and with the advent of services like VSS, that issue is now moot (though I’ll admit old habits die hard).

The second reason was I hate to rely on things that I don’t have complete control over. As a DBA, I feel it’s my responsibility to make sure backups are done correctly AND are usable. If I’m not completely in the loop, I get nervous.

Recently, a friend had a problem that brought this issue to light. He was asked to go through their SQL Server backups to find the time period when a particular record was deleted so they could develop a plan for restoring the data deleted in the primary table and in the subsequent cascaded deletes. Nothing too out of the ordinary. A bit tedious, but nothing too terrible.

So, he did what any DBA would do, he restored the full backup of the database for the date in question. Then he found the first transaction log and restored that.  Then he tried to restore the second transaction log.

The log in this backup set begins at LSN 90800000000023300001, which is too recent to apply to the database. An earlier log backup that includes LSN 90800000000016600001 can be restored.
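In T-SQL terms, the sequence he was attempting looked roughly like this (the database and file names are illustrative, not the real ones):

```sql
restore database HoursDB from disk = 'D:\Backups\HoursDB_full.bak'
    with norecovery   -- leave the database ready to accept log restores
restore log HoursDB from disk = 'D:\Backups\HoursDB_log1.trn'
    with norecovery
-- The second log restore is where the error above fired: its first_lsn
-- didn't pick up where the previous log backup's last_lsn left off.
restore log HoursDB from disk = 'D:\Backups\HoursDB_log2.trn'
    with recovery     -- bring the database online after the final log
```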

Huh? Yeah, apparently there’s a missing log.  He looks at his scheduled tasks. Nope, nothing scheduled. He looks at the filesystem.  Nope, no files there.

He tries a couple of different things, but nope, there’s definitely a missing file. Anyone who knows anything about SQL Server backups knows that you can’t break the chain. If you do, you can’t restore past the break. This can work both ways. I once heard of a situation where the FULL backups weren’t recoverable, but they were able to create a new empty database and apply five years’ worth of transaction logs. Yes, 5 years’ worth.

This was the opposite case. They had the full backup they wanted, but couldn’t restore even 5 hours’ worth of logs.

So where was that missing transaction log backup?

My friend did some more digging in the backup history tables in msdb and found this tidbit:

backup_start_date  backup_finish_date  first_lsn             last_lsn              physical_device_name
11/9/2016 0:34     11/9/2016 0:34      90800000000016600000  90800000000023300000  NUL

There was the missing transaction log backup. It was taken a few minutes after the full backup, and it was definitely not part of the scheduled backups he had set up. The best he can figure is that the sysadmin had set the SAN snapshot software to take a full backup at midnight and then, for some reason, a transaction log backup just minutes later.

That would have been fine, except for one critical detail. See that rightmost column? Yes, physical_device_name. It’s set to NUL. So the missing backup wasn’t made to tape or another spot on the disk or anyplace like that. It was sent to the great bit-bucket in the sky. In other words, my friend was SOL, simply out of luck.
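If you want to check your own servers for this, a query along these lines (a sketch) will surface any backups whose destination was the NUL device:

```sql
-- Hunt for backups written to NUL in the msdb backup history.
select bs.database_name, bs.backup_start_date, bs.type,
       bmf.physical_device_name
from msdb.dbo.backupset bs
join msdb.dbo.backupmediafamily bmf
    on bmf.media_set_id = bs.media_set_id
where bmf.physical_device_name = 'NUL'
order by bs.backup_start_date
```

Any rows returned mean someone, or some tool, has been throwing backups away.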

Now, fortunately, the original incident, while a problem for his office, wasn’t a major business-stopping incident. And while he can’t fix the original problem he was facing, he discovered the issues with his backup procedures long before a major incident occurred.

I’m writing about this incident for a couple of reasons. For one, it emphasizes why I feel so strongly about realistic DR tests. Don’t just write your plan down. Do it once in a while. Make it as realistic as it can be.

BTW, one of my favorite tricks, which I use for multiple reasons, is to set up log shipping to a 2nd server. Even if the 2nd server can never be used for production because it may lack the performance, you’ll know very quickly if your chain is broken.

Also, I thought this was a great example of where doing things twice doesn’t necessarily make things less resistant to disaster. Yes, had this been setup properly it would have resulted in two separate, full backups being taken, in two separate places. That would have been better. But because of a very simple mistake, the setup was worse than if only one backup had been written.

I’d like to plug my book: IT Disaster Response due out in a bit over a month. Pre-order now!