Crane Operators

Talking online with friends the other day, someone brought up that crane operators in NYC can make $400-$500K a year. Yes, a year. I figured I’d confirm that before writing this post and it appears to be accurate.

At first glance one may think this is outrageous, or perhaps that they chose the wrong field. I mean, I enjoy being a DBA and a disaster geek, but I can’t say I’ve ever made $400K in one year!  And for what? I mean, you lift things up and put them down. Right?

Let me come back to that.

So, last night, I got paid quite a tidy bundle (but not nearly that much) for literally logging into a client computer, opening up VisualCron and clicking on a task and saying, “disable task”. On one hand, it seemed ridiculous;  not just because of what they were paying me, but because this process was the result of several meetings, more than one email and a review process.  All to say, “stop copying this file.”

But, this file was part of a key backup process for a core part of the client’s business. I had initially set up an entire process to ensure that a backup was being copied from an AIX server in one datacenter to a local NAS and then to the remote datacenter.  It is a bit more complex than it sounds.  But it worked. And the loss of a timely backup would impact their ability to recover by hours if not days. This could potentially cost them hundreds of thousands of dollars, if not millions.

So the meetings and phone calls and emails weren’t just “which button should Greg click” but covered questions like, “do we have the backups we think we have?” “Are they getting to the right place(s)?” “Are they getting there in a timely fashion?”  And even, “when we uncheck this, we need to make sure the process for the day is complete and we don’t break it.”

So, me unchecking that button after hours, as much as it cost the company, was really the end of a complex chain of events designed to make sure that they didn’t risk losing a LOT of money if things went wrong. Call it an insurance payment if you will.

Those crane operators in NYC? They’re not simply lifting up a beam here and there and randomly placing it someplace. They’re maneuvering complex systems in tight spaces with heavy loads, where sudden gusts can set things swaying or spinning and a single mistake can do thousands of dollars in damage or even kill people.

It’s not so much what they’re being paid to do, as much as how much they are being paid to avoid the cost of a mistake. I wasn’t paid just to unclick a button. I was paid (as were the others in the meetings) to make sure it was the right button, at the right time, and that it wouldn’t cost even more.

Sometimes we’re not paid for what we do, as much as we’re paid for what we’re not doing.


Starting off the New Year… wrong

Ok, I had plans for a great blog post on time and how we measure it and how it really is arbitrary. But… that will have to wait. (I suppose that brings its own irony about time and waiting and all that. Hmm, now I may need a post about that!)

But, circumstances suggested a different post.

This one started with an email from my boss last year (who will remain nameless, for his sake 🙂 ): “Greg, I hacked together this web page so my team members can track their hours. I need you to make a few tweaks to it. And don’t worry, we’ll replace it by the end of the year.”  Yes, that sound you hear is your head hitting the desk like mine, because you KNOW what is coming next. Here we are in January of 2018 and guess what, the web page is still in place.

And of course, it’s not just HIS (now former, he’s moved on) team that is using it. Apparently his boss liked what it could do, so his boss (my boss^2) declared that ALL his teams would use it.  So instead of just 4-5 people using it, there are close to 30-40 people using it.

So, remember the first rule of development: “Oh don’t worry about this, we’ll replace it soon” never happens.  His original code made a lot of assumptions (which at the time I suppose seemed reasonable). For example, since many schedules are formed around work-weeks, he was originally storing hours by the week (and then the day of the week) they were worked in. So if someone worked for 6 hours on January 5th of last year, the table stored that as “WEEK 1, day 5”.

Now, before anyone completely thinks this is nuts, do keep in mind that many places, including my last job at GE, did everything by Week of the Year.  There are some advantages to this (that I won’t go into here).

For the most part, the code worked and I didn’t care. But, at one point I was asked to do a report based not on Work Weeks, but on an actual calendar month. So, I had to figure out for example that February 1st occurred during Work Week 5, on the 4th day. And go from there.
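The conversion itself isn’t hard; the annoyance is having to do it everywhere. Here’s a minimal sketch of the math, assuming (as was true in 2017) that Week 1, Day 1 is a January 1st falling on a Sunday; the variable names are just for illustration:

-- A minimal sketch of the work-week math. It assumes, as in 2017,
-- that Week 1, Day 1 is January 1st falling on a Sunday.
DECLARE @YearStart date = '2017-01-01';

-- Work Week 5, Day 4 --> calendar date (should be 2017-02-01)
DECLARE @WorkWeek int = 5, @WorkDay int = 4;
SELECT DATEADD(DAY, (@WorkWeek - 1) * 7 + (@WorkDay - 1), @YearStart) AS CalendarDate;

-- And the reverse: a calendar date --> Work Week and Day
DECLARE @SomeDate date = '2017-02-01';
SELECT DATEDIFF(DAY, @YearStart, @SomeDate) / 7 + 1 AS WorkWeek,
       DATEDIFF(DAY, @YearStart, @SomeDate) % 7 + 1 AS WorkDay;

That Sunday assumption is the part to keep an eye on, as you’ll see in a moment.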

So, I hacked that together.  Then there was a request for another report and another. Pretty soon it was getting to be a pain to keep track of all this. And I realized another problem. My boss had gotten lucky in 2017. January 1st occurred on a Sunday, which meant that Work Week 1, Day 1 was a natural starting point.

I started to worry about what would happen when 2018 rolled around. His code would probably break.  So I finally took the plunge and started to refactor the code.  I also added new features as they were requested.  Things were going great.

Until yesterday.  So now we get to another good rule of development: “Use Version Control.” So simple. Even for a one-person shop like me it’s a good idea. I had put it off for a while, but finally downloaded the git plug-in for Visual Studio and started to use it.

So yesterday, a user reported that they couldn’t enter hours for last week. I pulled up the app and realized that yes, in fact it was coded in such a way you couldn’t go back into a previous year to enter hours. No problem, I can fix that!

Well, let me add a rule I had sort of missed: “Understand HOW to use Version Control.” I won’t go into details, but let’s just say I wasn’t committing like I should have been, or thought I was. So, in an attempt to get a clean base and all, I merged things… “poorly”. I had the branches I wanted, but had not properly committed stuff to them.

The work of the last few weeks was GONE. I know, you’re saying, “just go to your backups Greg, because of course a person who writes about DR has proper backups, right?”  Yeah, right! Let’s not go there!

Anyway, I spent 4 hours last night and 2 this morning recreating the code. Fortunately, it dawned on me that, being .NET code, it was really just IL under the hood, and that perhaps with a good decompiler I could get most of the code back without too much effort. I want to give props to RedGate’s .NET Reflector. Of the two decompilers I tried, it was clearly the better one. While I lost my variable names and the decompiled code isn’t quite what I’d call human-written, it was close enough that I could in short order recreate hours and hours of work I had done.

In the meantime, I also talked to some of my programming buddies on an RPI chat server (ask me about it sometime) about git and better procedures.

And here’s where I realized my fundamental mistake. It wasn’t just misunderstanding how Visual Studio was interacting with git and the branches I was creating, it was that somewhere in the back of my head I had decided that commits should be used only when major steps were completed. It was almost like I was afraid I’d run out; or perhaps complicate the history with too many of them.  But, you know what? Commits aren’t scarce resources. And I’m the only one reading the history. I don’t really care if I’ve got 10 commits or 100. The more the better in many ways. Had I been making commits much more frequently, I’d have saved myself a lot of work. (And I can discuss having a remote store at some point, etc.)

So really, the point of this hastily written, sleep-deprived blog post isn’t so much to talk about dates and apps that never get replaced, but about a much deeper problem I was having. It wasn’t just failing to fully understand git commands; it was failing to understand how I should be using git in the first place.

So, to riff on an old phrase, “Commit Early, Commit often”.

Oh and I did hack together a way for that user to enter hours for last year. It’s good enough. I’m sure I won’t need it a year from now. The code will be all replaced or refactored by then I’m sure!


Maps

This is actually about a bit more than simply maps. It’s really about what a map is. It’s an abstraction. This may sound obvious, but it’s an important concept.

“I have a map of the United States… Actual size. It says, ‘Scale: 1 mile = 1 mile.’ I spent last summer folding it. I hardly ever unroll it. People ask me where I live, and I say, ‘E6.’” – Steven Wright

Forget the impracticality of such a map; it makes a fine joke, but it’s still just a map. We don’t live on maps.  Piaget and others have done a lot of research into the development of the minds of young children and it’s fascinating. At first, I think one can argue they live in a very solipsistic world. Nothing exists except what they can see or touch. Soon, though, concepts like object permanence come into being. But they’re still not quite at the concept of true abstraction. At very young ages, if you hold up a picture of a room to a small enough child, they’ll try to grab stuff off the picture. And why not? They can grab things in the physical world.  Soon they start to realize the picture is just that: not the actual room, but a flattened, portable version of it.

Around age 5 (I’m going on memory here) we start to ask children to draw maps. It might be a map of their room, or a map of how to get to school. They’re crude and often, to our adult minds, wildly inaccurate, but they’re starting to achieve a truer concept of abstraction.  Their maps aren’t even miniaturized pictures of the real world; they’re completely new, abstract representations of the world.

Eventually, around 8th grade or so, they’re probably introduced to Algebra. Up until then much of their math might have been things like 8 * ___ = 56. They’ll dutifully fill in the 7. It’s a blank, but it’s not really abstract. Then they start to solve problems like “56/x = 8, solve for x”.  Again they’ll write down 7, but now they’ve learned that a letter can represent a number and be used that way later on.

What’s interesting about the above is, many people never stop to realize that * or / or + or many other mathematical symbols are really just abstractions. Yes, numbers are in a sense an abstraction. There’s nothing inherent about the symbol 5 that means it MUST represent five of an object. It’s just a convenience (and it’s shorter than writing ///// hash marks). But the mathematical symbols are even more of an abstraction. If I wrote 8♒︎__ = 56, most of us could probably figure out what ♒︎ represents there.

Anyway, that aside, why am I writing about maps and ultimately abstraction? Because, it is so vitally important to my job and perhaps the jobs of most of my readers and it’s been on my mind lately.

I’ve been working on an SSIS package for a client. The original spec was “Import these 20 or so (out of 240) CSV files out of this ZIP file.”  I thought about it and after teaching myself a bit more about SSIS (I don’t usually do much in it) I realized I could create a container that did most of the work. And I could assign variables to the container. The variables include things like what database connection to use and where the files might be located. And, most importantly, a variable for the NAME of the file/table (by using a table with the same basename as the file, I can use a single variable and line stuff up nicely.) And part of what I had to (re)-learn was the scope of variables within SSIS packages and how to expose the ones I wanted as parameters.

Now, honestly, it probably took me more time to set up this level of abstraction and make it workable than if I had hard-coded everything for the 20 or so tables.

BUT, I’ve got enough experience to guess at what was coming next; and I’m going to guess you know what’s coming next.

“Hey, this is great, but you know what, we think we want all 240 files in that ZIP file loaded. Can you do that?”

“Sure!”  And at this point, the problem isn’t the amount of work, it’s the tediousness of it: create the table schema, create a new file connector, copy the SSIS container, update the variable for the table, reconnect the file connector to the new table, test.  It’s a LOT less work than if I had to hardcode everything by hand.  The container name is even updated automatically based on the table variable. This has an unintended, positive side effect: if for some reason I start to duplicate a container (say I forgot I had already imported table Customer_Val), the container name won’t update from the variable, since you can’t have duplicate container names. I caught this very problem the other day.

And of course, once I move the final code to their UAT environment and then the Prod environment, it’s simply a matter of updating 3-4 variables in the deploy (for the database server, file location and where to send emails) and I’m done.

By taking the time to abstract out as much as I could, I’ve made my life easier. Up front it was definitely a bit more work, but in the long run, has saved me a lot of effort.

Nothing revolutionary about the concept to us developers. BUT, stop and try to think of an environment where variables or such layers of abstraction don’t exist. Such environments exist, and there are some valid reasons for them, but ultimately, being able to abstract stuff can make our lives MUCH easier and our code that much more powerful.

And it appears that abstraction at this level is perhaps the ONE real intellectual advantage we have over other intelligence on this planet. And I’m not even so sure about that!

So I’ll end in pointing out that even the written word is really just a level of abstraction. So this column is about itself. Or something like that.


Don’t Break the Chain!

If one backup is good, two is better right?

Not always.

Let me start by saying I’ve often been very skeptical of SQL Server backups done by 3rd party tools. There are really two reasons. For one, many years ago (when I first started working with SQL Server) they often simply weren’t good. They had issues with consistency and the like. Over time and with the advent of services like VSS, that issue is now moot (though, I’ll admit, old habits die hard).

The second reason was I hate to rely on things that I don’t have complete control over. As a DBA, I feel it’s my responsibility to make sure backups are done correctly AND are usable. If I’m not completely in the loop, I get nervous.

Recently, a friend had a problem that brought this issue to light. He was asked to go through their SQL Server backups to find the time period when a particular record was deleted so they could develop a plan for restoring the data deleted in the primary table and in the subsequent cascaded deletes. Nothing too out of the ordinary. A bit tedious, but nothing too terrible.

So, he did what any DBA would do, he restored the full backup of the database for the date in question. Then he found the first transaction log and restored that.  Then he tried to restore the second transaction log.
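In T-SQL terms, the sequence looked roughly like this (the database name and file paths here are placeholders, not the client’s actual names):

-- Restore the full backup, leaving the database ready to accept logs
RESTORE DATABASE YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_Full.bak'
    WITH NORECOVERY, REPLACE;

-- The first transaction log restores fine
RESTORE LOG YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_Log_1.trn'
    WITH NORECOVERY;

-- The second one is where things fell apart
RESTORE LOG YourDatabase
    FROM DISK = N'D:\Backups\YourDatabase_Log_2.trn'
    WITH NORECOVERY;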

The log in this backup set begins at LSN 90800000000023300001,  which is too recent to apply to the database. An earlier log backup that  includes LSN 90800000000016600001 can be restored.

Huh? Yeah, apparently there’s a missing log.  He looks at his scheduled tasks. Nope, nothing scheduled. He looks at the filesystem.  Nope, no files there.

He tries a couple of different things, but nope, there’s definitely a missing file.  Anyone who knows anything about SQL Server backups knows that you can’t break the chain. If you do, you can’t restore past the break. This can work both ways. I once heard of a situation where the FULL backups weren’t recoverable, but they were able to create a new empty database and apply five years’ worth of transaction logs. Yes, 5 years’ worth.

This was the opposite case. They had the full backup they wanted, but couldn’t restore even 5 hours worth of logs.

So where was that missing transaction log backup?

My friend did some more digging in the backup history tables in msdb and found this tidbit:

backup_start_date:    11/9/2016 0:34
backup_finish_date:   11/9/2016 0:34
first_lsn:            90800000000016600000
last_lsn:             90800000000023300000
physical_device_name: NUL

There was the missing transaction log backup.  It was taken a few minutes after the full backup, and it was definitely not part of the scheduled backups he had set up.  The best he can figure is that the sysadmin had set the SAN snapshot software to take a full backup at midnight and then, for some reason, a transaction log backup just minutes later.

That would have been fine, except for one critical detail. See that last entry, physical_device_name? It’s set to NUL.  So the missing backup wasn’t made to tape or another spot on the disk or anyplace like that. It was sent to the great bit-bucket in the sky. In other words, my friend was SOL, simply out of luck.
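If you want to check your own servers for the same kind of surprise, a query along these lines against the backup history in msdb will list every backup for a database and where it was written (a sketch; swap in your own database name):

-- Every backup msdb knows about for a database and where it went.
-- A row with physical_device_name = 'NUL' was discarded on arrival,
-- but a log backup to NUL still advances the log chain.
SELECT  bs.database_name,
        bs.type,                    -- D = full, I = differential, L = log
        bs.backup_start_date,
        bs.backup_finish_date,
        bs.first_lsn,
        bs.last_lsn,
        bmf.physical_device_name
FROM    msdb.dbo.backupset AS bs
JOIN    msdb.dbo.backupmediafamily AS bmf
        ON bmf.media_set_id = bs.media_set_id
WHERE   bs.database_name = 'YourDatabase'
ORDER BY bs.backup_start_date;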

Now, fortunately, the original incident, while a problem for his office, wasn’t a major business-stopping incident. And while he can’t fix the original problem he was facing, he discovered the issues with his backup procedures long before a major incident did occur.

I’m writing about this incident for a couple of reasons.  For one, it emphasizes why I feel so strongly about realistic DR tests.  Don’t just write your plan down. Actually run through it once in a while. Make it as realistic as it can be.

BTW, one of my favorite tricks, which I use for multiple reasons, is to set up log shipping to a 2nd server.  Even if the 2nd server can never be used for production because it may lack the performance, you’ll know very quickly if your chain is broken.
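The restore history on the secondary also gives you something concrete to check. Something along these lines (again a sketch, with a placeholder database name), run on the secondary, tells you when the last log was successfully applied:

-- Most recent transaction log restore on the log-shipping secondary.
-- If this date is older than expected, the chain is probably broken.
SELECT TOP (1)
        rh.destination_database_name,
        rh.restore_date,
        rh.restore_type             -- L = transaction log restore
FROM    msdb.dbo.restorehistory AS rh
WHERE   rh.destination_database_name = 'YourDatabase'
  AND   rh.restore_type = 'L'
ORDER BY rh.restore_date DESC;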

Also, I thought this was a great example of where doing things twice doesn’t necessarily make things more resistant to disaster. Yes, had this been set up properly it would have resulted in two separate, full backups being taken, in two separate places. That would have been better. But because of a very simple mistake, the setup was worse than if only one backup had been written.

I’d like to plug my book, IT Disaster Response, due out in a bit over a month. Pre-order now!

Small Disasters

Today was an interesting confluence of events. I was exchanging emails with an associate who is in the middle of getting a Master’s in Disaster Management and we were talking about scale and scope of disasters.

At about the same time I was monitoring email from one of my clients. The thread started out with a fairly minor report: Viewpoint Drive – Water Main Break. Not a huge, earth-shattering disaster. Simply a notice that there was a water main break in a nearby road, asking people to let management know if they noticed issues.

Within an hour there was a follow-up email stating that there was no longer adequate water pressure in the building and that folks should go home and finish their workday there. Furthermore, employees were told that for the next day the company was securing water bottles for drinking water and would be bringing in portable toilets.

Now, when people think about disasters, often they think about fires and other things that might destroy a building. But, that’s pretty rare.  It’s the other things that companies don’t necessarily plan for. Your company may have adequate backups of all its servers (but are you sure?) but does it have a plan for not having water?

I’ve worked with managers who have basically said, “eh, we can work around that.” Truth is, legally in most cases they can’t. If the building doesn’t have potable water and working sanitation facilities many municipalities won’t allow it to be occupied.

So does your company have a plan? Are the people who can authorize expenditures in the loop? Who is going to declare a disaster and put the plan into motion? Who will sign for the porta-potties when they show up?  These are some of the things you have to think about.

So disasters are about more than just a good set of backups. Sometimes it’s about the toilets. Think about that.


Testing

This ties in with the concept of experimentation. Thomas Grohser related a story the other night of a case of “yeah, the database failed and we tried to do a restore and found out we couldn’t.”

Apparently their system could somehow make backups, but couldn’t restore them. BIG OOPS.  (They did manage to create an empty database and replay 4.5 years of transaction logs to recover their data. That’s impressive in its own right.)

This is not the first time I’ve worked with a client or heard of a company whose disaster recovery plans didn’t hold up the first time they were actually needed. It may sound obvious, but companies need to test their DR plans. I’m in fact working with a partner on a new business to help companies think about their DR plans. Note, we’re NOT writing or creating DR plans for companies; we’re going to focus on how companies go about actually implementing and testing their DR plans.

Fortunately, right now I’m working with a client that had an uncommon use case. They wanted a restore of the previous night’s backup to a different server every day.

They also wanted to log-ship the database in question to another location.

This wasn’t hard to implement.

But what is very nice about this setup is that every 15 minutes we have a built-in automatic test of their log backups.  If for any reason log backups stop working or a log gets corrupted, we’ll know in fairly short order.

And, with the database copy, we’ll know within a day if their backups fail.  They’re in a position where they’ll never find out 4.5 years later that their backups don’t work.

This client’s DR plan needs a lot of work; they actually have nothing formal written down. However, they know for a fact their data is safe. This is a huge improvement over companies that have a DR plan, but have no idea if their data is safe.

Moral of the story: I’d rather know my data is safe and my DR plan needs work than have a DR plan but not have safe data.

Documentation

Do it, it’s important.

Ok, I suppose I should expand a bit upon that and in this case add an actual example.

So last night, I again attended the local SQL Server User Group meeting. The talk this month was by Ray Kim and was on Documentation for Techies.  While we all agree that documentation is good, it’s sort of interesting how rarely most techs actually do it. Ray’s talk covered some of this and further talked about exactly how valuable it is. In addition, several audience members spoke about how proper documentation saved their company a great deal of money simply by letting their tech support people answer questions far faster.

I got thinking about some of the clients I’ve worked for and how I’ve wanted to document stuff, but often they have very little actually set up in the way of procedures to handle documentation. This is unfortunate, because it can cost them money. For example, for a client right now I’m working on automating a task.  It turns out that there’s not much documentation, so I’m basically struggling to figure things out as I go.

One thing you hear tech folks talk about a lot is “oh, the code is self-documenting.” And sometimes it is.  Since I work in SQL, often, but not always, it’s clear what the code is doing. For example:

Select firstname, lastname from Clients where ClientID=@ClientID

probably doesn’t need a comment saying what it does.  It’s pretty clear.  But a more complex query might need some commenting, or it may need some explanation as to why a particular approach was taken. For example, I was recently writing a stored procedure where the WHERE clause was not quite what one would expect if one were to naively write it in the most obvious manner.  However, the obvious manner would have resulted in a table scan of a very large table. By writing what I did, I could ensure a seek would occur.
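As a purely hypothetical illustration (not the client’s actual query), here’s the kind of rewrite I mean, along with the sort of comment I’d want to leave behind:

-- The "obvious" version wraps the column in a function, which prevents
-- an index seek and forces a scan of a very large table:
SELECT OrderID, OrderDate, Total
FROM   dbo.Orders
WHERE  YEAR(OrderDate) = 2016;

-- Rewritten as a range on the bare column, the optimizer can seek on an
-- index over OrderDate. Same results, very different plan. A comment
-- explaining WHY it is written this way keeps the next developer from
-- "simplifying" it back.
SELECT OrderID, OrderDate, Total
FROM   dbo.Orders
WHERE  OrderDate >= '2016-01-01'
  AND  OrderDate <  '2017-01-01';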

I also had a habit which, after thinking about it last night and testing it today, I’m going to modify a bit. Often I’d write procedures such as:

-- Usage: Exec FOO
-- Author: Greg D. Moore
-- Date: 2016-03-15
-- Version: 1.0
-- This simply returns bar when executed
if OBJECT_ID('foo', 'p') is not null drop procedure foo
go
create procedure foo
as
select 'bar'
go

Now, note that technically this is a T-SQL script that will drop and then create the procedure, so it’s more than just the procedure itself. But it’s useful for me because I can ensure I’m running the latest and greatest, dropping the old version if it exists before creating the new one.

But, last night got me thinking: what happens if 3 years down the road someone comes along and needs to edit my code? Let’s say the client didn’t do a good job of keeping track of source code and they have to extract the scripts to create the procedures from SQL Server itself using, say, SSMS.

The results end up looking much more like this:

USE [Baz]
GO
/****** Object:  StoredProcedure [dbo].[foo]    Script Date: 03/15/2016 10:47:22 ******/
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[foo]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[foo]
GO
USE [Baz]
GO
/****** Object:  StoredProcedure [dbo].[foo]    Script Date: 03/15/2016 10:47:22 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create procedure [dbo].[foo]
as
select 'bar'
GO

Ignore the extra USE statements and the SSMS-generated comments and SET statements. Notice my comments are gone.  This actually makes sense: in the first script, the comments sit before a GO statement, so they belong to the batch that drops the procedure rather than to the CREATE PROCEDURE batch, and only the text of the CREATE PROCEDURE batch is stored with the object.  All my useful comments are now history.

BUT, there’s a simple solution. Move the comments to after the first GO statement.

if OBJECT_ID('foo', 'p') is not null drop procedure foo
 
go
 
-- Usage: Exec FOO
-- Author: Greg D. Moore
-- Date: 2016-03-15
-- Version: 1.0
-- This simply returns bar when executed
-- Version: 1.1
-- Comments moved below GO statement
 
create procedure foo
as
 
select 'bar'
go

Now if I use SSMS to generate my script I get:

USE [Baz]
GO

/****** Object: StoredProcedure [dbo].[foo] Script Date: 03/15/2016 10:48:53 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[foo]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[foo]
GO

USE [Baz]
GO

/****** Object: StoredProcedure [dbo].[foo] Script Date: 03/15/2016 10:48:53 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- Usage: Exec FOO
-- Author: Greg D. Moore
-- Date: 2016-03-15
-- Version: 1.0
-- This simply returns bar when executed
-- Version: 1.1
-- Comments moved below GO statement

create procedure [dbo].[foo]
as

select 'bar'

GO

Now my great documentation is preserved. This is a small thing but down the road could save the next developer a lot of trouble.

So, stop and think about not only documentation, but how to make sure it’s preserved and useful in the future.