SQL Pass 2018

Next week I'm off to the SQL Pass conference in Seattle. This will be my 4th peregrination to Seattle in 4 years; it has become an annual trip for me. There's one very obvious reason for going and then a second, also important, reason. SQL Pass is one of the top events for folks who work with SQL Server. It's a 3-day conference (plus up to 2 days of pre-con events, including at least one meeting I'll be attending as our local group leader) full of technical sessions covering a wide range of topics around SQL Server and its related technologies.

Four years ago, when I first attended, I was a newbie and wasn't sure what to expect. My father had recently passed and I wasn't entirely sure I still wanted to make the trip. But the tickets had been bought and the price to attend had been paid, so I decided to go. One of the first (perhaps the first) sessions I attended was one by Kathi Kellenberger on how to get published as an author. I had for years toyed with an idea for a book and figured it couldn't hurt to attend and perhaps learn something. Her session was quite helpful, and when I approached her afterwards for more input she introduced me to one of the editors at Apress. I pitched my idea and a few months later the contracts were signed. All I had to do now was actually write the thing. So, I ended up writing IT Disaster Response: Lessons Learned in the Field. (BTW, I do obviously recommend it; it covers IT disasters, plane crashes, and cave rescues. It's not your standard cut-and-dried boring book on disasters.)

A friend of mine who owns a book shop once said, "anyone can write a book; it's harder to actually publish a book." I had now done both. It was a bit bittersweet because my dad had been an English major and had always wanted to write a book and be published. Now, admittedly, he wanted to write fiction, which I think is far harder, and in his day the idea of "print on demand," like what Apress tends to do, didn't really exist. And to be honest, at the end of the day, as Kathi warned me, if I was in it for the money I'd be better off, in terms of hours spent, getting a job at McDonald's.

But, I digress. That book ended up being my first foray into actually getting paid to write. As I mentioned in an earlier blog post, I've now contributed to Red Gate's Simple Talk program with my post on an Intro to PowerShell. My second post has been submitted and accepted and should hopefully go up in a few weeks or so.

So, to say my first PASS event changed my life would probably be accurate.

Beyond that one session four years ago, I've attended many other sessions, gained a wealth of knowledge, and leveraged that in my job and in finding speakers for the local SQL Server User Group I now lead. One of my favorite speakers in the last year was Bob Ward, who did a remote presentation for us about SQL Server on Linux. And this despite me being a Patriots fan and him being a *cough* Cowboys fan.

So again, I look forward to seeing a lot of my #sqlfamily out in Seattle next week. But I still won’t be doing karaoke, sorry Aunt Kathi!

But I also mentioned a second reason for visiting: my non-sqlfamily, what I might call my #rocfamily. The Rensselaer Outing Club has a number of alumni who live in the area, and we've started a yearly tradition of getting together for take-out Thai food at the house I stay in. ROC in its own way changed my life, among other things teaching me how to be a leader and an effective decision maker.

In addition to all my fellow ROCcers, there’s at least one from my days on sci.space.* on Usenet (where I can still be found btw) and a few other friends I’ve made over the years. I’m quite looking forward to seeing them all.

So see you all next week in Seattle!

The Color Purple

Ok, I'll admit this post was inspired by a political article about a state turning purple. But it's not about politics.

It’s about wavelengths and human perception.

What is purple anyway? Historically it was the color of royalty, in part because the method of making a dye of that color was a closely guarded secret, and expensive to boot.

Technically, it's a combination of red and blue, but when seen as a spectral color (e.g. by splitting white light with a prism) it has a wavelength between 380 and 420 nm and is called violet. So we can see violet as a distinct wavelength, or we can see purple as the combination of two colors. In fact, we can't really create violet on a computer screen. Any purple you see in this post is a combination of red and blue (in RGB space, 127, 0, 255, btw).
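
For the curious, here's a tiny illustrative sketch (Python, nothing authoritative) of that RGB purple and the hex form you'd use on the web:

```python
# The RGB purple mentioned above: red and blue with no green at all.
r, g, b = 127, 0, 255

# Convert to the hex notation commonly used for web/CSS colors.
hex_color = f"#{r:02X}{g:02X}{b:02X}"
print(hex_color)  # -> #7F00FF
```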

But what are we really seeing? That’s the part that fascinates me.

It's tempting to think our senses accurately perceive the world, but the truth is that, at best, our brains form an approximation of the world, and color is part of that approximation. For example, most of us have only three types of light-sensitive cells in our eyes, called cones, each sensitive over a different range of wavelengths.

[Graph showing the wavelengths the S, M, and L cones of the human eye are sensitive to.]

Looking at this graph (courtesy of Wikimedia Commons), you'll see that pretty much only the blue (S) cones are reacting to a violet wavelength. But if you mix red and blue, the cones react a bit differently, and we see purple.
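
To make that concrete, here's a toy sketch in Python. The Gaussian curves below are rough placeholders, not the measured human cone fundamentals; the point is only that a single violet wavelength and a red-plus-blue mix excite the three cone types in different proportions, even though we lump the resulting perceptions together as "purple":

```python
import math

# Toy model of S, M, and L cone sensitivities as Gaussians.
# (peak wavelength in nm, width in nm) -- placeholder values, not real data.
CONES = {"S": (445, 30), "M": (540, 45), "L": (565, 50)}

def cone_response(wavelengths):
    """Sum each cone type's response over a list of wavelengths (nm)."""
    return {
        name: round(sum(math.exp(-((w - peak) / width) ** 2) for w in wavelengths), 3)
        for name, (peak, width) in CONES.items()
    }

print("Spectral violet (400 nm):      ", cone_response([400]))
print("Red + blue mix (650 + 450 nm): ", cone_response([650, 450]))
```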

So, we’re seeing a color that arguably isn’t what we might think: i.e. violet is not purple, even if we normally equate them and for all intents and purposes they can look the same to us.

Even then, we're missing out. For example, in most people the lens of the eye blocks UV light (which is a good thing in the long run). But some people who have had their lens replaced without UV-blocking material can see into the UV. And of course some people, mainly men (because two of the cone genes are on the X chromosome), are colorblind. But even more rare (at this time there appears to be a single identified individual) is tetrachromacy, where there's a 4th cone that is most sensitive between the green and red cones above.

Even more weird are "impossible colors" such as Bluish-Yellow or Reddish-Green. I'll let you Google those, but they're pretty cool.

So, next time you see the color purple, stop and think about what your brain is really seeing, or not seeing. Are you actually seeing violet or purple?

P.S.: We've got nothing on the mantis shrimp, which has between 12 and 16 different types of cones! That said, it doesn't seem to be much better at picking out colors than you or I. But I do have to wonder what it would be like to have a higher cone count.

P.P.S.: A story I once read, and have never been able to verify, is that during WWII the English experimented with lighting some airfields with lights in the near-UV range (i.e. just outside of normal human vision) because they had discovered some folks could see into the near UV. The idea was that by using such fields at night, without any normal light, they could safely operate without German bombers seeing them. Apparently the idea fell apart when further research discovered that the people most likely to be able to see into the near UV were blond-haired, blue-eyed people of German descent. I'd normally write this off as a conflation of a number of myths, including the one about carrots making your eyesight better, but I've seen elsewhere that blue-eyed people apparently ARE more likely to see into the UV range (from what I've read, it appears some UV may leak in through the iris). I'd love to find more details on this particular idea (until I do, I will consider it an urban legend). That said, I've got to say, I've found my night vision is far better than most people I know. I guess that makes up for my normal vision, for which I need glasses!

The Soyuz Abort

Many of you are probably aware of the Soyuz abort last week. It reminded me of discussions I’ve had in the past with other space fans like myself and prompted some thoughts.

Let's start with the question of whether Soyuz is safe. Yes, but…

When Columbia was lost on re-entry, a lot of folks came out of the woodwork to proclaim that Soyuz was obviously so much safer, since no crew had died since the ill-fated Soyuz 11 flight in 1971. The problem with this line of thought was that at the time of Columbia, Soyuz had flown successfully only 77 times since then, vs. 89 successful Shuttle flights since the Challenger disaster. So which one was safer? If you're going strictly on the number of successful flights, the Space Shuttle. Of course the question isn't as simple as that. Note I haven't even mentioned Soyuz 1, which happened before Soyuz 11 and was also a fatal flight.

Some people tried to argue that the Space Shuttle was far less safe because over its program life it had killed 14 people vs. 4 for Soyuz. I always thought this was a weird metric since it all comes down to the number of people on board. Had Columbia and Challenger flown with only 2 on each mission, would the same folks argue they were just as safe as Soyuz?

But we can't stop there. If we want to be fair, we have to include Soyuz 18a. This flight was aborted at high altitude (so technically the crew passed the Kármán line and is credited with attaining space). Then in 1983, Soyuz T-10a also suffered an abort, this time on the pad.

So at this point I'm going to draw a somewhat arbitrary line as to what I consider a successful mission: the crew attains an orbit sufficient to carry out a majority of their planned mission and returns safely. All the incidents above, Soyuz and Space Shuttle alike, are failed missions. For example, while Soyuz 11 and Columbia attained orbit and carried out their primary missions, they failed on the key requirement of returning their crews safely.

Using that definition, the shuttle was far more successful. There was one shuttle flight that did undershoot the runway at Edwards, but given the size of the lakebed, landed successfully.  We’ll come back to that in a few.

Now let me add a few more issues with the Soyuz.

  • Soyuz 5 – failure of the service module to separate; the capsule entered the atmosphere upside-down and the hatch nearly burned through. The parachute lines also tangled, resulting in a very hard landing.
  • TMA-1 – technical difficulties resulted in the capsule making a ballistic re-entry.
  • TMA-10 – failure of the service module to separate caused the capsule to re-enter in an improper orientation (which could have led to the loss of the crew and vehicle) and ultimately forced it into a ballistic re-entry as well. The Russians initially did not tell the US.
  • TMA-11 – a similar issue to TMA-10, with abnormal damage to the hatch and antenna.

And there have been others of varying degrees. I'm also ignoring the slew of Progress failures, including the 3 more recent ones launched on a rocket very similar to the current Soyuz-FG.

So, what's safer, the Soyuz or the Space Shuttle? Honestly, I think it's a bit of a trick question. As one of my old comrades on the Usenet sci.space.* hierarchy once said, "any time a single failure can make a significant change in the statistics, you really don't have enough data." (I'm paraphrasing.)
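
To make that point concrete, here's a rough sketch (Python; the flight and failure counts are approximations based on the figures above, not an official tally) showing how wide the statistical uncertainty is at these sample sizes, and how much a single additional failure moves the estimate:

```python
import math

def wilson_interval(failures, flights, z=1.96):
    """95% Wilson score interval for the per-flight failure probability."""
    p = failures / flights
    denom = 1 + z**2 / flights
    centre = (p + z**2 / (2 * flights)) / denom
    half = z * math.sqrt(p * (1 - p) / flights + z**2 / (4 * flights**2)) / denom
    return centre - half, centre + half

# Approximate counts from the post; the exact numbers matter less than
# how much one extra failure changes the picture at ~80-90 flights.
cases = [
    ("Soyuz:   1 fatal failure in ~78 flights", 1, 78),
    ("Soyuz:   2 fatal failures in ~78 flights", 2, 78),
    ("Shuttle: 1 loss in ~90 post-Challenger flights", 1, 90),
    ("Shuttle: 2 losses in ~90 post-Challenger flights", 2, 90),
]
for label, failures, flights in cases:
    lo, hi = wilson_interval(failures, flights)
    print(f"{label}: point estimate {failures/flights:.3f}, 95% CI ({lo:.3f}, {hi:.3f})")
```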

My personal bias is that both programs had programmatic issues (and I think the Russians are getting a bit sloppier when it comes to safety) and design issues (even a perfectly run Shuttle program had risks that could not have been engineered away, even if better management might have prevented both Challenger and Columbia). However, I think the Russian Soyuz is ultimately more robust. It appears a bit more prone to failures, but it has survived most of them. That still doesn't make it 100% safe. Nor does it need to be 100% safe. To open the new frontier we need to take some risks. It's a matter of degree.

“A ship in harbor is safe, but that is not what ships are built for.” – John A. Shedd.

A spacecraft is safe on the ground, but that’s not what it’s built for.

In the meantime, there's a lot of, in my opinion naive, talk about de-crewing the ISS. I suspect the Russians will fly the Soyuz MS-11 flight as scheduled. There's a slight chance it might fly uncrewed and simply serve to replace the current Soyuz MS-09 capsule, but it will fly.


Crying Wolf

We all know the story of the boy who cried wolf. Last week we had a nationwide example of that.

I'm about to break an unwritten rule I have for this blog, in that I try to avoid politics as much as possible. But here I'm going to steer away from any particular partisan position and discuss the impact of both certain policies and the resulting reactions.

So, to be upfront, I am not a fan of President Trump, nor do I subscribe to his brand or style of politics. That said, let’s carry on.

So, at approximately 2:20 PM EDT on Thursday of last week, millions of Americans had their phones buzz, beep, play some sort of tune, etc.  By the build up and reaction, you would have thought it was the end of the world. Ironically, the system MIGHT someday be used to actually alert us to the end of the world.  Hopefully not.

The event I’m referring to of course was a test of a new system that many phones classify as a “Presidential Alert”.  It’s really the latest in a series of systems the US has had over the years to alert citizens to potential dangers or crises.

Some of my readers may be old enough to recall AM radios that had two markings on them: small triangles with a CD (Civil Defense) logo in them. These were for the CONELRAD alert system that was in place from 1953-1963. It was designed to be used strictly in the event of a nuclear attack and was never intended for, nor used in, the event of a natural disaster.

It was replaced by the Emergency Broadcast System. This system was actually used to alert local and regional areas to extreme weather events and other natural disasters.  In 1997 it was replaced by the Emergency Alert System. The EAS was designed to take advantage of the expanding ways of reaching people. This ultimately included the ability to send text alerts to phones in the US.

There are, and have been from day one of the design for phones, three types of alerts: the "Presidential Alert", alerts for extreme weather or other events, and Amber alerts. Phones have had the ability to receive these alerts for close to a decade now and, importantly, the ability to shut off the latter two types. Phones can NOT turn off the Presidential Alert. This is by design and has been a feature of the system from day 1. In other words, despite what many on social media seemed to believe, this feature was baked in long before President Trump took office.

So enough history, let's get to the wolf cry. Both before and after the test, I saw people all over Facebook and other media proclaiming how bothered and upset they were that the President had the ability to text them directly. He (or ultimately she) can't.

Ok, that's not quite true. My understanding is that the President can issue a statement through the White House Communications Director, which gets passed on to the appropriate people, who would activate the EAS and the WEA, and the statement would go out. But the idea of President Trump, or any President, sitting at their desk, picking up their phone, and texting all of America is not true. It's a myth and an image built up by folks who are, quite frankly, paranoid. This does not mean that the system can't be abused. However, there are enough checks in the system that I'm extremely doubtful such non-emergency use would ever intentionally occur.

But, the fact that people apparently feel so strongly about the risks troubles me. There’s no doubt that this President uses social media in ways unlike any previous President. This President is far more likely to say what’s on his mind without much filter. Some people love him for that, some vilify him.

BUT, this man is the President, NOT the Office of the President nor the entire Executive Branch. This is an important distinction and one to keep in mind. Regardless of how one feels about the State of the Union, there are still checks on the actual authority he can wield. And ultimately, if the system did get abused, one would hope that someone along the chain would say "no", or, if it got beyond that, that Congress would enact additional safeguards.

For a system like the EAS and the WEA to work, we need to test it. And we need to have faith it is properly used. Yes, sometimes mistakes happen: an unscheduled test goes out, or, worse, a test mistakenly sends out a message that a real event is transpiring. These mistakes NEED to be avoided and minimized so that people don't panic (which can cause harm, including death in some cases). But the testing needs to happen to make sure the system DOES work when needed. We need to have a general faith in it, though perhaps tempered with SOME caution about abuse of the system. (BTW, I do realize there's some controversy over exactly what transpired in the Hawaii incident, and that it might actually illustrate an abuse of the system by an individual.)

But we should not let the partisan social media actions of one particular President make us never believe the boy who cried wolf. Someday the cry may be real.

As long as the national level tests like the one that occurred last Thursday remain infrequent, with a clear purpose, and are clearly tests, I will continue to advocate for them.

P.S. Oh, one more addendum: anything you see about John McAfee concerning the test, or the E911 capability of your phone, should basically be ignored.

P.P.S. One of the eeriest experiences of my life was walking into my apartment and catching a rebroadcast of the movie Countdown to Looking Glass. It made me better understand how folks could have fallen for the Orson Welles broadcast of The War of the Worlds. Now, I would never advocate searching for a bootleg copy of the movie on YouTube, but if you can find a copy it's worth watching, in my opinion, and honestly, the last minute or so still sort of freaks me out.


Safety Third

This is actually the name of an episode of Dirty Jobs. But it’s a title that has stuck with me because it’s near and dear to the sort of things I like to think about. Mike Rowe has a good follow-up article here. The title and show ruffled feathers, but he’s right, it’s an important concept to discuss.

You'll often hear the mantra "Safety First". In workplaces this often means things like wearing fall protection when working at height, wearing a life vest when working around water, using ear protection, or other safety measures. The idea being that, above all else, we have to be safe.

I got thinking about this while reading Rand Simberg's book, Safe Is Not an Option. He argues that trying to make safety the highest priority of spaceflight is holding us back. I tend to agree. And I'd like to point out that despite NASA talking about safety in public announcements, the truth is NASA hasn't always been upfront about it, and it has made decisions where safety wasn't first (and I would argue that in some cases those decisions were justified).

Now I know at least a few of my readers have read the Rogers Commission Report on the Challenger disaster. It's worth the read, especially Dr. Feynman's appendix. One of the issues that came up during the investigation was exactly how safe the Shuttle was. (Here I'm referring to the entire system: the orbiter, SRBs, and ET.) Some at NASA were claiming that the Shuttle had a 1 in 100,000 chance of the loss of an orbiter. (The loss of an ET or SRB, as long as it didn't impact the orbiter, wasn't really a concern, as all ETs were expended at the end of each ascent and at least 2 SRBs were lost due to other issues.) As Feynman pointed out, this meant you could fly the Shuttle every day for 300 years and only have one accident. What was the reasoning behind such a figure? Honestly, nothing more than wishful thinking. As we know, the Shuttle was far less safe: roughly 1 loss in 67.5 flights. That's a hugely different number.
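
As a quick back-of-the-envelope check on those two figures (my own arithmetic, not anything from the report):

```python
# NASA management's claimed risk: 1 orbiter loss per 100,000 flights.
# At one flight per day, that's roughly this many years per expected loss:
print(100_000 / 365.25)   # ~274 years, the ballpark of Feynman's "300 years"

# The actual record: 2 orbiters lost (Challenger and Columbia) in 135 missions.
print(135 / 2)            # 67.5, i.e. roughly 1 loss in 67.5 flights
```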

There were many reasons that led to each accident and I won't delve into them here, though I would highly recommend The Challenger Launch Decision by Diane Vaughan as a comprehensive analysis of the decision-making that helped lead up to the Challenger disaster.

But let's talk a bit about how things could have been made safer, and how NASA correctly decided NOT to go down that route. One early iteration of the shuttle design had additional SRBs mounted to the orbiter that would have been used to abort during an additional 30 seconds of the flight envelope [1]. I can't determine whether those 30 seconds would have overlapped with the critical 30 seconds of Challenger's final mission, but let's assume they did. This would have added $300 million to the development cost of the program and reduced the payload capacity of the orbiter [2].

In a system already beset with cost and payload constraints, this might have meant the program never got off the ground, literally. Or, if it did, it would have failed to meet its payload goals. All this for 30 more seconds of abort coverage. Would that have been worth it? Arguably not.

Another design decision was to eliminate thrust termination for the SRBs. Thrust termination is something that would arguably have made the ascent portion of the flight safer, in theory. The theory being that since you can't normally shut down the SRBs, you can't perform an orbiter separation, which means the orbiter can't detach during the first 2 minutes of the flight and hence can't perform a return-to-launch-site abort.

But again, adding that safety feature wouldn't necessarily have made things better. For one thing, it really would only have been useful above a certain altitude, since below that altitude all the orbiter could have done is detach from the stack and fall into the sea, with too little time to get into a glide and make it back to a runway.

But there was a bigger issue: thrust termination was determined to be violent enough that it would probably have damaged the orbiter if used. This could have been mitigated by beefing up the orbiter structure, but that would have imposed an 8,000 lb payload penalty. Since the shuttle was already having trouble reaching its 65,000 lb payload goal, this was determined to be unacceptable [3].

So, NASA could have made the decision of "safety first" and ended up with a shuttle system that never would have flown. And given the political calculus at the time, it's unlikely NASA could have come up with a better solution or gotten Congress to fund one. The shuttle was an unfortunate compromise brought about by a host of factors. But it did fly.

I like to tie this back to some of my other interests, so what about caving and cave rescue? I mentioned in a previous post how we've moved away from treating one line in the system strictly as a belay line. But what if I told you we often use only one line! There are many places in caving and cave rescue where we do not have a belay line. A good example is a caver ascending or descending a rope. This is called Single Rope Technique, or SRT. There are some who come to caving from other activities and ask, "Where's your belay? You have to have a belay!"

But a belay line (here used in the sense of catching a caver from a potentially dangerous fall if their mainline fails) is actually far less safe. I'll give an example. First, let's start with some possible failure modes:

  1. Main rope being cut or damaged to the point of failure
  2. The point the rope is rigged to (the anchor point) failing
  3. Your ascent or descent system failing

So the idea is, if one of those 3 things happens, the belay line will catch you. But there are issues with that theory. One major issue is that large drops in caves are often accompanied by air movement and waterfalls. The air movement, or even simple movements by the caver (influenced by the rope in some cases), can cause a twisting motion. This means that before you know it, your belay line has been twisted around your mainline and you can no longer ascend or descend. You're stuck. Now combine this with being in a waterfall and you've become a high-risk candidate for hypothermia, drowning, and harness hang syndrome. In other words, your belay line has now increased your chances of dying. So much for the "safety first" attitude.

Even if you avoid those issues, you haven't really solved the possible failure modes I listed. If you think about it, anything that's going to damage your mainline can also damage your belay line. There are some differences: your belay line, because it's moving, is far less likely to wear through in a single spot the way a mainline might from being bounced on during an ascent. On the other hand, it's more likely to suffer a shock load over a sharp edge if it's not attended to well.

If your mainline anchor point fails, you’re relying on your belay anchor point to be stronger. If it’s stronger, why not use it for your mainline? (there are reasons not to, but this is a question that should cross your mind.)

Finally, for equipment failure: catastrophic failure is rare (honestly, only seen in movies), and other failures are better mitigated by proper inspection of your equipment and close attention to proper technique.

Of course, the safest thing to do, if we were really putting safety first, would be to never go caving. But where's the fun in that?

We can insist on safety first in much of what we do, but if we do, we inhibit ourselves from actually accomplishing the activity, and in some cases we can actually make things LESS safe by trying to add more safety. And safety is more than simply adding additional pieces to a system; it's often proper procedure. Rather than adding a belay line, focus on better rigging and climbing technique, for example. Or even simply accept that sometimes things can go sideways and people may be injured or die. We live in a dangerous world, and while we can make things safer and often should, we should be willing to balance our desire for safety with practicality and the desirability of the goal.

I'm going to end with two quotes from an engineer I respected greatly, Mary Shafer, who worked at NASA at what was then the Dryden Flight Research Center (now the Armstrong Flight Research Center) at Edwards Air Force Base.

Insisting on absolute safety is for people who don’t have the balls to live in the real world.

and

There’s no way to make life perfectly safe; you can’t get out of it alive.

For a more complete record of Mary’s thoughts, I direct you to this post.

Footnotes

    1. Space Shuttle – The First Hundred Missions. Dennis Jenkins, 2001. Page 192
    2. Ibid.
    3. Ibid.