Too Secure

There’s an old joke in IT that the Security Office’s job isn’t done until you can’t do yours.

There’s unfortunately at times some truth to that.  And it can be a bigger problem than you might initially think.

A recent example comes to mind. I have one client that has set up fairly strict security precautions. I’m generally in favor of most of them, even if at times they’re inconvenient. But recently, they made some changes that were frustrating to say the least, and potentially problematic. Let me explain.

Basically, at times I have to transfer a file created on a secured VM I control to one of their servers (that in theory is a sandbox in their environment that I can play in). Now, I obviously can’t just cut and paste it. Or perhaps that’s not so obvious, but yeah, for various reasons, through their VDI, they have C&P disabled. I’m ok with that. It does lessen the chance of someone accidentally cutting and pasting the wrong file to the wrong machine.

So what I previously did was something that seemed strange, but worked. I’d email the file to myself and then open a browser session on said machine and retrieve the file there. Not ideal, and I’ll admit there are security implications, but it does cause the file to get virus scanned in at least two places, and I’m very unlikely to send myself a dangerous file.

Now, for my web client on this machine, I tended to use Firefox. It was kept up to date and, as far as I know, was until recently on their approved list of programs. Great. This worked for well over a year.

Then, one day last week, I go to the server in question and there’s no Firefox. I realized this was related to an email I had seen earlier in the week about their security team removing Firefox from a different server, “for security reasons”. Now arguably that server didn’t need Firefox, but still, my server was technically MY sandbox. So, I was stuck with IE. Yes, their security team thinks IE is more secure than Firefox.  Ok, no problem I’ll use IE.

I go ahead, enter my userid and supersecret password. Nothing happens. Try a few times since maybe I got the password wrong. Nope. Nothing.  So I tried something different to confirm my theory and get the dreaded, “Your browser does not support cookies” error. Aha, now I’m on to something.

I jump into the settings and try several different things to enable cookies completely. I figure I can return things to the way they want after I get my file. No joy. Despite enabling every applicable option, it wouldn’t override the domain settings and cookies remained disabled. ARGH.

So, next I figured I’d re-download FF and use that. It’s my box after all (in theory).

I get the install downloaded, click on it and it starts to install. Great! What was supposed to be a 5-minute task of getting the file I needed to the server is about done. It’s only taken me an hour or two, but I can smell success.

Well, turns out what I was smelling was more frustration. Half-way through the install it locks up. I kill the process and go back to the file I downloaded and try again. BUT, the file isn’t there. I realize after some digging that their security software is automatically deleting certain downloads, such as the Firefox install.

So I’m back to dead in the water.

I know, I’ll try to use Dropbox or OneDrive. But… both require cookies to get set up. So much for that.

I’ve now spent close to 3 hours trying to get this file to their server. I was at a loss as to how to solve this. So I did what I often do in situations like this. I jumped in the shower to think.

Now, I finally DID manage to find a way, but I’m actually not going to mention it here. The how isn’t important (though keeping the details private is probably at least a bit important).

Anyway, here’s the thing. I agree with trying to make servers secure. We in IT have too many data breaches as it is. BUT, there is definitely a problem with making things TOO secure. Actually, two problems. The first is the old joke about how a computer encased in cement at the bottom of the ocean is extremely secure. But also unusable. So, their security measures almost got us to the state of making an extremely secure but useless computer.

But the other problem is more subtle. If you make things too secure, your users are going to do what they can to bypass your security in order to get their job done. They’re not trying to be malicious, but they may end up making things MORE risky by enabling services that shouldn’t be installed or by installing software you didn’t authorize, thus leaving you in an unknown security state (for the record, I didn’t do either of the above.)

Also, I find it frustrating when steps like the above are taken, but some of the servers in their environment don’t have the latest service packs or security fixes. So, they’re fixing surface issues, but ignoring deeper problems. While I was “nice” in what I did (i.e., I technically didn’t violate any of their security measures in the end), I did work to bypass them. A true hacker most likely isn’t going to be nice. They’re going to go for the gold and go through one of at least a dozen unpatched security holes to gain control of the system in question. So as much as I can live with their security precautions of locking down certain software, I’d also like to see them actually patch the machines.

So, security is important, but let’s not make it so tight people go to extremes to bypass it.

 

When Life hands you Lemons

You make lemonade! Right? Ok, but how?

Ok, this is the 21st Century, now we use mixes. That makes it even easier, right?

But, I’ve given this some thought, and like many procedures there’s not necessarily a right way to do it. That said, I may change the procedure I use.

Ok, so I use one of those little pouches that make a lemon-flavored drink. I’m hesitant to call it actual lemonade, but let’s go with it.

Typically my process is to take the container, fill a drinking glass, and if the container is empty, or has only a little bit left in it, make more. (Obviously if there’s a lot left, I just put the container back in the refrigerator. 🙂)

So still pretty simple, right? Or is it?

Basically you put the powder in the container and then add water.

Or do you put the water in and then add the powder?

You may ask, “What difference does it make?”

Ultimately, it shouldn’t; in either case you end up with a lemon-flavored drink as your final product.

All along I’ve been going the route of putting the powder in first then adding the water. There was a rational reason for this: the turbulence of the water entering the container would help mix it and it would require less shaking. I thought this was pretty clever.

But then one night as I was filling the container with water (it was sitting in the sink) I got distracted, and by the time I returned my attention to it, I had overfilled the container and water was flowing over the top. Or rather, somewhat diluted lemon-flavored drink was flowing over the top. I had no idea how long this had been going on, but I knew I had an overfilled container that had to have a bit more liquid poured off before I could put it away. It also meant the lemon-flavored drink was going to be diluted by an unknown amount. That is less than optimal.

So the simple solution I figured was to change my procedure. Add the water first and then add the flavoring. That way if there was too much water in the container, I could just pour off the extra and then add the proper amount of powder and have an undiluted lemon-flavored drink.

That worked fine until one day as I was pouring the package, it slipped through my fingers into a half-filled container.  Now I had to find a way to fish it out. Ironically, the easiest way to do it was to overfill it so the package would float to the top. Of course now I was back to diluted lemon-flavored drink. And who knows what was on the outside of the powder package that was now inside the water.

Each procedure has its failure modes. Both, when successful, get me to the final solution.

So, which one is better?

I put in the powder first and then put in the water. I could say I have a rational reason like preferring slightly diluted lemon-flavored drink over a possibly contaminated lemon-flavored drink from a dropped in packet.

But the truth is, it really doesn’t matter which order I do the steps in. Neither failure is completely fatal, and in fact they’re about equivalent in their seriousness.

Old habits die hard, so I stick with my original method.

But, the point is that even in a process as simple as making lemon-flavored drink, there’s more than one way to do it, and either way may be workable. Just make sure you can justify your reasoning.

Testing

This ties in with the concept of experimentation. Thomas Grohser related a story the other night of a case of “yeah, the database failed and we tried to do a restore and found out we couldn’t.”

Apparently their system could somehow make backups, but couldn’t restore them. BIG OOPS. (They did manage to create an empty database and replay 4.5 years of transaction logs and recover their data. That’s impressive in its own right.)

This is not the first time I’ve worked with a client or heard of a company whose disaster recovery plans didn’t survive the first time they were actually needed. It may sound obvious, but companies need to test their DR plans. I’m in fact working with a partner on a new business to help companies think about their DR plans. Note, we’re NOT writing or creating DR plans for companies; we’re going to focus on how companies go about actually implementing and testing their DR plans.

Fortunately, right now I’m working with a client that had an uncommon use case. They wanted a restore of the previous night’s backup to a different server every day.

They also wanted to log-ship the database in question to another location.

This wasn’t hard to implement.

But what is very nice about this setup is that every 15 minutes we have a built-in automatic test of their log backups. If for any reason log backups stop working or a log gets corrupted, we’ll know in fairly short order.

And, with the database copy, we’ll know within a day if their backups fail.  They’re in a position where they’ll never find out 4.5 years later that their backups don’t work.
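To give a flavor of what that daily check amounts to, here’s a minimal sketch in T-SQL. The database name, file paths and logical file names are all hypothetical, not the client’s actual setup:

-- Hypothetical names and paths throughout. Restore last night's full backup to a
-- test server under a different name, so a bad backup surfaces within a day.
DECLARE @BackupFile nvarchar(260) =
    N'M:\Backups\Sales_FULL_' + CONVERT(nvarchar(8), DATEADD(day, -1, GETDATE()), 112) + N'.bak'

-- First confirm the backup file is readable and internally consistent
RESTORE VERIFYONLY FROM DISK = @BackupFile

-- Then actually restore it; a restore that completes is the real test
RESTORE DATABASE Sales_RestoreTest
FROM DISK = @BackupFile
WITH MOVE N'Sales' TO N'D:\Data\Sales_RestoreTest.mdf',
     MOVE N'Sales_log' TO N'L:\Logs\Sales_RestoreTest.ldf',
     REPLACE

-- Finally, a quick sanity check that the restored copy is usable
DBCC CHECKDB (Sales_RestoreTest) WITH NO_INFOMSGS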

This client’s DR plan needs a lot of work; they actually have nothing formal written down. However, they know for a fact their data is safe. This is a huge improvement over companies that have a DR plan, but have no idea if their data is safe.

Moral of the story: I’d rather know my data is safe and my DR plan needs work than have a DR plan but not have safe data.

Documentation

Do it, it’s important.

Ok, I suppose I should expand a bit upon that and in this case add an actual example.

So last night, I again attended the local SQL Server User Group meeting. The talk this month was by Ray Kim and was on Documentation for Techies. While we all agree that documentation is good, it’s sort of interesting how rarely most techs actually do it. Ray’s talk covered some of this and further talked about exactly how valuable it is. In addition, several audience members spoke about how proper documentation saved their company a great deal of money simply by giving their tech support people the ability to answer questions far faster.

I got thinking about some of the clients I’ve worked for and how I’ve wanted to document stuff, but often they have very little actually set up in the way of procedures to handle documentation. This is unfortunate, because it can cost them money. For example, for a client right now I’m working on automating a task. It turns out that there’s not much documentation, so I’m basically struggling to figure things out as I go.

One thing you hear tech folks talk about a lot is “oh, the code is self-documenting”. And sometimes it is. Since I work in SQL, often, but not always, it’s clear what the code is doing. For example:

Select firstname, lastname from Clients where ClientID=@ClientID

probably doesn’t need a comment saying what it does. It’s pretty clear. But a more complex query might need some commenting, or it may need some explanation as to why a particular approach was taken. For example, I was recently writing a stored procedure where the WHERE clause was not quite what one would expect if one were to naively write it in the most obvious manner. However, the obvious manner would have resulted in a table scan of a very large table. By writing what I did, I could ensure a seek would occur.
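To illustrate the kind of thing I mean (the table and column names here are made up, not the client’s actual code), the comment-worthy difference looks something like this:

-- The "obvious" predicate wraps the column in a function, which isn't SARGable
-- and forces a scan of a large table:
--     WHERE YEAR(OrderDate) = 2016
-- Written as a range instead, an index on OrderDate can be used for a seek:
SELECT OrderID, ClientID, OrderDate
FROM Orders
WHERE OrderDate >= '20160101'
  AND OrderDate < '20170101'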

I also had a habit which, after thinking about it last night and testing today, I’m going to modify a bit. Often I’d write procedures such as:

-- Usage: Exec FOO
-- Author: Greg D. Moore
-- Date: 2016-03-15
-- Version: 1.0
-- This simply returns bar when executed
if OBJECT_ID('foo', 'p') is not null drop procedure foo
go
create procedure foo
as
select 'bar'
go

Now, note that technically this is a script (T-SQL) that will drop and then create the procedure, so it’s more than just the stored procedure itself. But it’s useful for me because I can ensure I’m running the latest and greatest, and drop the old one if it exists before running it.

But, last night got me thinking. What happens if 3 years down the road someone comes along and needs to edit my code? Let’s say the client didn’t do a good job of keeping track of source code and they have to extract the scripts to create the procedures from SQL Server itself using, say, SSMS.

The results end up looking much more like this:

USE [Baz]
GO
/****** Object:  StoredProcedure [dbo].[foo]    Script Date: 03/15/2016 10:47:22 ******/
IF  EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[foo]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[foo]
GO
USE [Baz]
GO
/****** Object:  StoredProcedure [dbo].[foo]    Script Date: 03/15/2016 10:47:22 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
create procedure [dbo].[foo]
as
select 'bar'
GO

Ignore the extra USE statements and the SSMS generated comments and SET statements. Notice my comments are gone.  This actually makes sense because in the first script, the comments occur before a GO statement so the SQL engine interprets them as completely separate from the statements to create the actual stored proc.  All my useful comments are now history.

BUT, there’s a simple solution. Move the comments to after the first GO statement.

if OBJECT_ID('foo', 'p') is not null drop procedure foo
 
go
 
-- Usage: Exec FOO
-- Author: Greg D. Moore
-- Date: 2016-03-15
-- Version: 1.0
-- This simply returns bar when executed
-- Version: 1.1
-- Comments moved below GO statement
 
create procedure foo
as
 
select 'bar'
go

Now if I use SSMS to generate my script I get:

USE [Baz]
GO

/****** Object: StoredProcedure [dbo].[foo] Script Date: 03/15/2016 10:48:53 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[foo]') AND type in (N'P', N'PC'))
DROP PROCEDURE [dbo].[foo]
GO

USE [Baz]
GO

/****** Object: StoredProcedure [dbo].[foo] Script Date: 03/15/2016 10:48:53 ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO

-- Usage: Exec FOO
-- Author: Greg D. Moore
-- Date: 2016-03-15
-- Version: 1.0
-- This simply returns bar when executed
-- Version: 1.1
-- Comments moved below GO statement

create procedure [dbo].[foo]
as

select 'bar'

GO

Now my great documentation is preserved. This is a small thing but down the road could save the next developer a lot of trouble.

So, stop and think about not only documentation, but how to make sure it’s preserved and useful in the future.

GIGO

A huge tenet of programming is GIGO: Garbage In/Garbage Out.

Years ago, when I was rehearsing for a play (Night of January the 16th by Ayn Rand), I was the bailiff. At one point in the play I’m handed a copy of a check that is evidence. I’m supposed to “read” what’s on the check. Of course, since it’s a play, I have my lines memorized.

But during this dress rehearsal I’m given a piece of paper with actual writing on it. Unfortunately it was just some random writing. But my brain went into segfault and I stopped. Part of my brain wanted to read what’s on the piece of paper.  Part of my brain wanted to say my lines, but it could no longer remember them.

It was a perfect example of how easy it is to scramble the input for our brains.  In the actual performances we made sure the piece of paper was actually blank.

I was reminded of this the other night when Steve Harvey made his gaffe on live television. I was curious how he could make such a mistake, but I had my suspicions. And I was right. The cue card apparently was VERY poorly designed and his visual input system (i.e. his eyes and brain) screwed up. Read here for more details. Bad input led to bad output.

These are humorous examples, but in the software world, these can be very dangerous.

At one point during the shuttle program, they found an error where the arm thought it had rotated more than 360 degrees, a physical impossibility. This link has some details (though in my recollection the issue was not a rounding error but that the code went from 0-360 instead of 0-359 or 1-360). Garbage in could have led to potentially bad garbage out.

Much more recently however, here’s an example of intentional “garbage” in. This is part of the encryption software used on many firewalls. Your bank or other financial institution for example may be using this code.

Ironically true garbage, as in a purely random number, might be better. But here it seems someone poisoned the input with their specific number and then set it up to use the results in a dangerous manner. I say dangerous because the 3rd party using this code may not realize that they’re completely vulnerable to having all their data seen.  About the only thing worse than unencrypted data is data you think is encrypted but isn’t.  In the former, I’m probably going to pay far more attention to who has access.  I’ll add too that some of us suspect the NSA had a hand in this.

This, by the way, is why I highly recommend folks don’t write their own encryption. Unless you’re an expert, you’re liable to screw it up.

Moral: So be careful of your inputs; they definitely influence your outputs, both in code and in your brain.

Rolling in the Deep

I was at SQL Saturday in Boston this past weekend and I sat in on a session given by Paresh Motiwala: “Why do we shun using tools for DBA job?”

It’s a decent question and sometimes rolling your own (hence the title here and yes I’m listening to Adele right now) is the right answer.

But often, it’s not.

A case in point is DBAs who avoid using the built-in SQL Server maintenance plans, especially for simple tasks such as backups.

Now, I’ll start out by saying straight up, sometimes they’re not the optimal solution. (You may want to back up certain partitions on a different rotation schedule and the like, as one example.)

Below is a recent situation I came across.

But, if you do decide to roll your own, please do NOT do the following:

  1. First write a sproc that has to be placed into each database as it’s created in order to run the backup. (If you DO decide to go this route, please make this a scripted part of your release procedure so it’s not missed.)
  2. If you DO in fact ignore suggestion #1, I will point out that it doesn’t do much good to have the first step of your sproc check whether the database exists. BIG HINT: If the database doesn’t exist, the sproc won’t run in the first place! Yes, I know you’re saying, “Obviously,” but this is the situation I just came across.
  3. If you DO in fact ignore suggestion #2, you probably don’t need to call your own special “alert sproc”.
  4. But if you DO in fact ignore suggestion #3, make sure your “alert sproc” does more than call a “email sproc” and pass it a few parameters.
  5. And if you DO in fact ignore suggestion #4, please make sure your “email sproc” does a bit more than build a message and call the built-in SQL Server stored proc to send an email.

Now granted, there may be reasons to do some of the above. Perhaps your alert sproc also calls some sort of 3rd party monitoring tool. Or your email needs are very specific.

In the case I just rectified, none of that was true.  So there was a lot of additional complexity (which really didn’t work anyway) for no good reason.

Another problem this roll-your-own backup setup had was that it used the same filename every time for its backups. That is, on Monday the backup name was M:\Backups\FOO_FULL.BAK; on Tuesday it was M:\Backups\FOO_FULL.BAK, etc.

In theory (and generally in practice) each of these would be backed up to a 3rd party, so it was in theory possible to find the backup for a specific day, but that was an added complexity, and probably not one you want in an actual DR situation. Also, if for some reason the backup to the 3rd party failed (but the local backups continued), they’d definitely lose the ability to restore specific days of backups.

In addition, the person who built this procedure set up differential backups for MOST databases to run every 15 minutes. Now, I wouldn’t necessarily call that a terrible idea, but in this case it was almost certainly not the best approach, in my opinion. However, again, the same file name was used each time.

This means that in a DR event, the company could restore the previous night’s backup and, if they wanted, the most recent diff backup, and that was it. If they wanted to restore to a point in time in between, that was impossible. And in my experience that is far more common than most other restore needs.

Finally, the developer who wrote all this clearly did not understand what the CHECKPOINT command does. He had scheduled a checkpoint job to run every 30 minutes. Again, in general, that’s not only unnecessary, but probably a bad idea. However, in this case it was not only unnecessary, the reason given in the job comments was completely wrong. He seemed to think it would keep the transaction logs from growing.

This is of course NOT what it does, and sure enough, on the one database still using the FULL recovery model, the transaction log was far larger than the actual database. (Fortunately it was a lightly used database, or the disk might have filled up years ago.)
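For the record, here’s a rough sketch (reusing the made-up FOO name from the backup example above) of what actually lets log space be reused under the FULL recovery model; it’s a log backup, not a checkpoint:

-- CHECKPOINT just flushes dirty data pages to disk; under FULL recovery it does
-- nothing to free up log space. A log backup is what marks log space as reusable:
BACKUP LOG FOO TO DISK = N'M:\Backups\FOO_LOG.trn'
-- Or, if point-in-time recovery truly isn't needed for a given database:
-- ALTER DATABASE FOO SET RECOVERY SIMPLE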

Since discovering all this, I’ve gone and replaced all this complexity with a set of maintenance jobs. These will guarantee each system database is backed up weekly (for their needs this should be fine) with unique names. User databases will be backed up nightly and retained for 4 nights (and perhaps longer once we determine fully how much disk space we want to set aside for this). Transaction log backups will be performed every 15 minutes. These too will have unique names.
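As a minimal sketch of what “unique names” means in practice (the path and the exact naming convention here are illustrative, not the precise jobs I set up):

-- Build a timestamped file name so every backup is kept separately and any
-- night (or any 15-minute log backup) can be restored on its own.
DECLARE @FileName nvarchar(260) =
    N'M:\Backups\FOO_FULL_' + FORMAT(GETDATE(), 'yyyyMMdd_HHmmss') + N'.bak'

BACKUP DATABASE FOO
TO DISK = @FileName
WITH CHECKSUM, INIT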

Now the customer can restore to any point in time in the last 4 days (if they go to their 3rd party backup, even further back), up to 15 minutes before a failure (and in some cases, if the log is still available and they can back up the tail of the log, up to the instant before the failure).
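To make that concrete, here’s a hedged sketch of what a point-in-time restore under this plan might look like; the file names and the STOPAT time are hypothetical:

-- 1. Back up the tail of the log so nothing after the last log backup is lost
BACKUP LOG FOO TO DISK = N'M:\Backups\FOO_LOG_tail.trn' WITH NO_TRUNCATE, NORECOVERY

-- 2. Restore the most recent full backup, leaving the database ready for log restores
RESTORE DATABASE FOO FROM DISK = N'M:\Backups\FOO_FULL_20160314_230000.bak'
    WITH NORECOVERY, REPLACE

-- 3. Roll forward the 15-minute log backups, then stop just before the failure
RESTORE LOG FOO FROM DISK = N'M:\Backups\FOO_LOG_20160315_091500.trn' WITH NORECOVERY
RESTORE LOG FOO FROM DISK = N'M:\Backups\FOO_LOG_tail.trn'
    WITH STOPAT = '2016-03-15 09:27:00', RECOVERY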

If they add additional databases, they don’t have to worry about remembering to put in 3 separate sprocs for each database added and adding new jobs to the SQL Server Agent.

Now they not only have a far more robust backup plan, they have one that is far easier to maintain.  Oh and one that will actually send an email if there’s a problem with the backup.

The moral is: Don’t make things more complex unless you absolutely need to, and if you do, make sure you actually achieve the goals you’re trying to achieve.

Getting the right answer by suggesting the wrong one

I’m a participant on a CMC called Lily. It is based out of my alma mater, RPI. At some point, someone created a rule (which I’ve seen elsewhere, so it’s hardly unique) that sometimes the fastest way to get the right answer to a question is to post the wrong answer.

There is truth to that.  I think in part it can be summed up with this XKCD cartoon.  Many of us who are involved in technology seem to have an incessant need to be “right”.  So when we see something wrong, we’re compelled to correct the mistake.

But, to be wrong, it has to be clearly wrong.  To go back to my cave rescue experience, if I recommend a 3:1 haul system and you recommend a 2:1, neither of us is necessarily wrong. We might be optimizing for different factors.  On the other hand, if you recommend we use 11mm rope for the haul line and I whip out some clothesline I’ve had in my car for a few years and suggest it should be good enough, after all it’s only Bill we’re rescuing, I’m clearly going to be wrong and need to be corrected.

These thoughts about being wrong and trying to find the right answer were prompted by a coding problem that has consumed far too much of my time. I finally came up with an answer that worked, but not one that I liked.

Essentially I’m building a ComboBox (loading it from a DataTable) in VB.NET.

It has key/value pairs; let’s call them (“Test1”, “A”), (“Test2”, “B”) and so forth.
(Note: VB.NET appears to call these a DisplayMember/ValueMember pair, and they can be loaded from a dictionary type. In my mind, what they call the ValueMember is what I’d consider the lookup key, and that illustrates my misunderstanding of the issue.)

However, once I load the record in question, I want the selected value in the dropdown to reflect the value in the record (which of course is stored as “A” or “B” etc.)

There appears to be no way in VB.Net to easily say something like:

cbxResource.SelectedValue = Itemrecord.Value

Then I tried:

cbxResource.SelectedItem = Itemrecord.Item just to see if it would work. It doesn’t.

Googling suggests something like:

cbxResource.SelectedIndex = cbxResource.FindString(Itemrecord.Item)

That does indeed work, if I know the DisplayMember name. But that’s what I want to display, not what I store in Itemrecord, which means I don’t know it.

It strangely seems I cannot set the index based on the ValueMember, just the DisplayMember. To me this is strange since, coming from a DB world, the ValueMember would appear to be the key I’d want to look up to select the DisplayMember to be displayed.

I finally settled on a hack.  What if I switched the two?

cbxResources.DisplayMember = "Resource"
cbxResources.ValueMember = "Description"

cbxResources.SelectedIndex = cbxResources.FindStringExact(Itemrecord.Item)

cbxResources.DisplayMember = "Description"
cbxResources.ValueMember = "Resource"

I’m not sure I like this answer. It seems to me it should be far simpler. Or that I’m fundamentally misunderstanding how the control should be set up and used. But for now it’s the hack that’s going into my code.

So why publish here?  Well either it’s a great work-around and I can save other folks the hours of fruitless searching I experienced, or someone can say, “It’s on the Internet and it’s wrong; I have to correct it!”

I’ll take either answer.

Moral: Sometimes being wrong is the right thing to do.