Oil Change Time and a Rubber Ducky

Sometimes, the inspiration for this blog comes from the strangest places. This time… it was an oil change.

I had been putting off changing my oil for far too long and finally took advantage of some free time last Friday to get it changed. I used to change it myself, but for some reason, in this new car (well, a new used car, but that’s a story for another day) I’ve always paid to get it changed. (And actually, why I stopped changing it myself is also a blog post for another day.)

Anyway, I’ve twice now gone to the local Valvoline. This isn’t really an ad for Valvoline specifically but more a comment on what I found interesting there.

So, most places where I’ve had my oil changed, you park, go in, give them your name and car keys and wait. Not here, they actually have you drive the car into the bay itself and you sit in the car the entire time. I think this is a bit more efficient, but since, instead of lifting the car, they have a pit under the car, I suppose they do risk someone driving their car into the pit (yes, it’s guarded by a low rail on either side, but you know there are drivers just that bad out there).

So, while sitting there I observed them doing two things I’m a huge fan of: using a checklist and calling out.

As I’ve talked about in my book and here in my own blog, I love checklists (I recommend the book The Checklist Manifesto); they help reduce errors. And while changing oil is fairly simple, mistakes do happen: the wrong oil gets put in, the drain plug isn’t properly tightened, too much gets put in, etc.

So hearing them call out and seeing them check off each step on the computer helped instill confidence. Now, I’m sure most, if not all, oil change places do this, but if you’re sitting in the waiting room, you don’t get to see it.

But they also did something else which I found particularly interesting: they did a version of Pointing and Calling. This is a very common practice in the Japanese railway system; one study showed it reduced accidents by almost 85%. So while changing my oil, the guy above would call out what he was doing. It was tough to hear everything he was calling out, but I know at one point the call was “4.5 Maxlife.” He then proceeded to put what I presume was 4.5 quarts of the semi-synthetic oil into my engine (I know it was the right oil because I could see which nozzle he selected). I didn’t count the clicks, but I believe there were 9. Now, other than the feedback of the 9 clicks, the guy in the pit couldn’t know for sure that it was the right oil and amount, but I’m going to guess he had a computer terminal of his own, and had his screen said “4 quarts standard” he’d have spoken up. But even if he didn’t have a way of confirming the call, by speaking it out loud the guy above was engaging more of his brain in the task, which made him less likely to make a mistake.

I left the oil change with a high confidence that they had done it right. And I was glad to know they actually were taking active steps to ensure that.

So, what about the rubber duck?

Well, a while back I started to pick up the habit of rubber duck debugging. Working at home, alone, it’s often hard to show another developer my code and ask, “Why isn’t this working?” But if I encounter a problem and can’t seem to figure out why it’s not working, I now pull out a rubber duck and start working through the code line by line. It’s amazing how well this works. I suspect that, like pointing and calling, slowing down to process the information and engaging more of my brain (now the verbal and auditory portions) helps bring more of my limited brain power to bear on the problem. And if that doesn’t work, I still have my extended brain.

PS As a reminder, this coming Saturday I’m speaking at SQL Saturday Philadelphia. Don’t miss it!

Hours for the week

Like I say, I don’t generally post SQL-specific stuff because, well, there are so many blogs out there that do. But what the heck.

I had a problem the other day. I needed to return the hours worked per day over a date range for a specific employee, and if they worked no hours on a given day, return 0. So basically I had to deal with gaps.

There are lots of solutions out there; this is mine:

Alter procedure GetEmployeeHoursByDate @startdate date, @enddate date, @userID varchar(25)
as

-- Usage: exec GetEmployeeHoursByDate '2018-01-07', '2018-01-13', 'gmoore'

-- Author: Greg D. Moore
-- Date: 2018-02-12
-- Version: 1.0

-- Get the totals for the days in question

set NOCOUNT on

-- First let's create a simple table that just has the range of dates we want
; WITH daterange AS (
SELECT @startdate AS WorkDate
UNION ALL
SELECT DATEADD(dd, 1, WorkDate)
FROM daterange s
WHERE DATEADD(dd, 1, WorkDate) <= @enddate)

select dr.workdate as workdate, coalesce(a.dailyhours, 0) as DailyHours from
(
-- Here we get the hours worked and sum them up for that person.
select ph.WorkDate, sum(ph.Hours) as DailyHours from ProjectHours ph
where ph.UserID = @userid
and ph.workdate >= @startdate and ph.workdate <= @enddate
group by ph.workdate
) as a
-- Now join our table of dates to our hours and put in 0 for dates we don't have hours for.
right outer join daterange dr on dr.WorkDate = a.WorkDate
order by workdate
-- Allow date ranges longer than the default recursion limit of 100 days.
option (maxrecursion 0)

GO

There are probably better ways, but this worked for me. What’s your solution?

The Basics

Last night at our local SQL Server User Group meeting we had the pleasure of having Deborah Melkin speak. I first met Deborah at our Albany SQL Saturday event last year, where she gave “Back to the Basics: T-SQL 101.” Because of the title I couldn’t help but attend. It wasn’t the 101 part by itself that caught my eye; it was the “Back to the Basics.” While geared to beginners, I thought the idea of going back to the basics of something I take for granted was a great idea. She was also a first-time speaker, so I’ll admit I was curious how she would do.

It was well worth my time. While I’d say most of it was review, I was reminded of a thing or two I had forgotten and was taught a thing or two. Just as importantly, she had a great ability to break the subject down into a clearly understandable talk. This is actually harder than many people realize. I’ve heard some brilliant speakers who simply can’t convey their message, especially on basic items of knowledge, in a way that beginners can understand.

So, after the talk last summer, I cornered her at the speaker’s dinner and insisted she come up with a follow-up, a 201 talk if you will. Last night she obliged, with “Beyond the Select.” What struck me about it again was that, other than a great tip about SSMS 17.4 (highlighting a table alias will show you what the base table is), nothing was really new to me. She talked about UDFs; I’ve attended entire sessions on UDFs. She talked about CTEs; I’ve read extensively about them. She discussed windowing functions; we’ve had one of our presenters present on them locally. The same was true of some of the other items she brought up.

Now, this is NOT a slight at all, but really a compliment. Both as an attendee and as the guy in charge of selecting speakers, it was great to have a broad-reaching topic. Rather than a deep dive, this was a bit of everything, giving the audience a chance to learn things they hadn’t seen before (and based on the reactions and feedback, I know many learned new stuff) and to compare different methods of doing things. For example, what’s the advantage of a CTE vs. a derived table vs. a temp table? Well, the answer is of course the DBA’s favorite answer: “it depends.”

As a DBA with decades of experience and as an organizer, it’s tempting to have a Bob Ward type talk every month. I enjoyed his talk last month. But, honestly, sometimes we need to go back and review the basics. We’ll probably learn something new or relearn something we had forgotten. And with talks like Deborah’s, we get to see the big picture, which is also very valuable.

So my final thought this week is that in any subject, not only should we be doing the deep dives that extend our knowledge, but we should also review the basics. As DBAs, we write a SELECT every day. We take it for granted, but how many people can really tell you clearly its order of operations? Review the basics once in a while. You may learn something.

And that’s why I selected this topic for this week’s blog.

Debugging – Powerhell (sic)

Generally I don’t plan to talk too much about specific programming problems. There are too many blogs out there about how to solve particular problems, and most of them are generally good and, even better, accurate.

I’m going to bore most of you, so if you want to jump to the summary, that’s fine. I won’t be insulted (heck I won’t even know!)

That said, last night I spent a lot of time debugging a particular PowerShell script I had written for a client.

I want to talk a bit about what was going on and how I approached the problem.

So, first the setup. It’s really rather simple (a rough sketch of the whole flow follows the list):

  1. Log into an SFTP site
  2. Download a zip file
  3. Expand the zip file
  4. Run an SSIS package to import the files from the zip file into a SQL Server
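
At a high level the script looks roughly like this. To be clear, this is just a sketch: the paths, file names, and package location are made up for illustration, and the interesting part (the credentials) is what the rest of this post is about.

# Rough sketch only. Paths and file names below are hypothetical, and Posh-SSH cmdlet
# names vary by version (newer releases use Get-SFTPItem; older ones have Get-SFTPFile).
# $sftpURL and $Credential get set up as shown later in this post.
$session = New-SFTPSession -ComputerName $sftpURL -Credential $Credential
Get-SFTPFile -SessionId $session.SessionId -RemoteFile "/outbound/hours.zip" -LocalPath "D:\foo"
Remove-SFTPSession -SessionId $session.SessionId | Out-Null

# Unzip the download
Expand-Archive -Path "D:\foo\hours.zip" -DestinationPath "D:\foo\extracted" -Force

# Kick off the (file-deployed, in this sketch) SSIS package with dtexec
& "dtexec.exe" /File "D:\foo\packages\ImportHours.dtsx"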

I had written the script several months ago and it’s been in production since then.  It’s worked great except for one detail. When it was put into production, we didn’t yet have the service account to run it under, so I set it up to run as a scheduled task using my account information. Not ideal, but workable for now.

Since then, we received the service account. So I had made several attempts to move to the service account, but it wasn’t working. Yesterday was the first time in a while I had time to work on it, so I started looking at it in more detail.

Permissions Perhaps?

So, the first problem I had was, I’d run the scheduled task and nothing would happen. Ok, that’s not entirely accurate. The script would start, but never run to completion. It just sort of sat there. And it certainly was NOT downloading the zip file.

My first thought was it might be a file permissions or other security issue. So the first thing I did was write a really simple script.

get-date | out-file "D:\foo\timetest.txt"

Then I set up a scheduled task and ran it manually. And sure enough, timetest.txt showed up exactly where it was supposed to.

So much for it being an OS security issue.

out-file to the Rescue!

So, my next step was probably one of the oldest debugging techniques ever: I put statements in the code to write to a file as I hit various steps. Not quite as cool as a debugger, but hey, it’s simple and it works.

It was simple stuff like:

"Path $SftpPath Set" | out-file ftp_log.txt -append

BTW, if you’re not familiar with PowerShell, one of the nice things is that in the above statement it will automatically expand the variable $SftpPath inside the double-quoted string it’s printing out.
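
As a quick illustration (the values here are just for show): double-quoted strings expand variables, single-quoted strings don’t, and for anything fancier than a plain variable you need the $( ) subexpression syntax.

$SftpPath = "/outbound/daily"
$files = @("a.zip", "b.zip")

"Path $SftpPath has $($files.Count) files" | out-file ftp_log.txt -append    # Path /outbound/daily has 2 files
'Path $SftpPath has $($files.Count) files' | out-file ftp_log.txt -append    # written out literally, no expansion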

This started to work like a charm. And then… jackpot!

"Setting Session using $sftpURL and Password: $password and credentials: $Credential" | out-file ftp_log.txt -append

When I ran it, I’d get a line in the log like:

Setting Session using secureftp.example.com and Password: System.Security.SecureString and Credentials: System.Management.Automation.PSCredential

Not the most informative, but useful. And it’s nice to know that one can’t easily just print out the password!

But, when I ran it as the service I was getting something VERY different.

Setting Session using secureftp.example.com and Password:  and Credentials:

HUGE difference. What the heck was happening to the password and credentials?

That’s when the first lightbulb hit.

Secure credentials in PowerShell

Let’s take a slight side trip here. We all know (right, of course you do) that we should never store passwords in cleartext in our scripts. So I did some digging and realized that PowerShell actually has a nice way to handle this. You can read the password in as a secure string and write an encrypted version of it out to a file. Then when you need credentials, you read the file back in, convert it to a secure string, and hand it to whatever needs it.

$password = get-content $LocalFilePath\cred.txt | ConvertTo-SecureString

Of course first you have to get the password INTO that cred.txt file.

That’s easy!

if (-not (test-path $LocalFilePath\cred.txt))
{
    read-host -AsSecureString | ConvertFrom-SecureString | Out-File $LocalFilePath\cred.txt
}

So, the first time the program is run, if the cred.txt file isn’t found, the user is prompted for it, they enter the password, it gets put into a secure string and written out. From then on, the cred.txt file is used and the password is nice and secure.

And I confirmed that the service account could actually see the cred.txt file.

So what was happening?

The Key!

It took me a few minutes, and when I hit a problem like this I do what I often do: step away from the keyboard. I had to stop and think for a bit. Then it dawned on me: how does PowerShell know how to encrypt and decrypt the password in a secure fashion? I mean, at SOME point it needs to have a clear-text version of it to pass to the various cmdlets. It wouldn’t be very secure if just anyone could decrypt it. That’s when it struck me: the encryption key (and decryption key) has to be based on the USER running the cmdlet!

Let me show some more code:

$password = get-content $LocalFilePath\cred.txt | ConvertTo-SecureString
$Credential = New-Object System.Management.Automation.PSCredential ('ftp_example', $Password)

$sftpURL = 'secureftp.example.com'

$session = New-SFTPSession -ComputerName $sftpURL -Credential $Credential

So first I’d get the password out of my cred.txt file and then create a Credential Object with a username (ftp_example) and the aforementioned password.

Then I’d try to open a session on the SFTP server using that credential. And that’s exactly where it was hanging when I ran this under the service account. Now I was on to something. Obviously the $password and $Credential weren’t getting set, as we saw from my debug statements. The service account wasn’t able to decrypt cred.txt!

Great, now I just need to have the service create its own cred.txt. But I can’t use the same technique where, the first time the service runs, it prompts the user for the password. This bothers me. I still don’t have a perfect solution.
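
One option I’m still mulling over (just a sketch, not what’s in production): ConvertTo-SecureString and ConvertFrom-SecureString both accept a -Key parameter, so instead of relying on the key Windows derives from the current user, you can supply your own AES key. Then any account that can read both the key file and the cred file can decrypt the password, which of course just moves the problem to protecting the key file. The file names here are hypothetical.

$keyFile  = "$LocalFilePath\cred.key"
$credFile = "$LocalFilePath\cred.txt"

# One-time: generate a random 256-bit key and save it (lock this file down with NTFS permissions)
if (-not (test-path $keyFile))
{
    $key = New-Object byte[] 32
    [Security.Cryptography.RandomNumberGenerator]::Create().GetBytes($key)
    [IO.File]::WriteAllBytes($keyFile, $key)
}
$key = [IO.File]::ReadAllBytes($keyFile)

# One-time, run interactively: prompt for the password and store it encrypted with our key
if (-not (test-path $credFile))
{
    read-host -AsSecureString | ConvertFrom-SecureString -Key $key | Out-File $credFile
}

# Every run: any account that can read both files can now rebuild the secure string
$password = get-content $credFile | ConvertTo-SecureString -Key $key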

For now I fell back on:

if (-not (test-path $LocalFilePath\cred_$env:UserName.txt))
{
# after running once, remove password here!
“password_here” | ConvertTo-SecureString -AsPlainText -Force | ConvertFrom-SecureString | Out-File $LocalFilePath\cred_$env:UserName.txt
}

Note I changed the name of the cred.txt file to include the username, so I could have one file when I ran it and another when the service account ran it. This makes testing far easier AND avoids conflicts when debugging (i.e., “which credentials are currently being used?”).

So for now the documentation will be “after running the first time, remove the password.” I think more likely I’ll write a one-time script that runs to set up a few things and then deletes itself. We’ll see.

Anyway, this worked much better. Now my debug lines were getting the password and all. BUT… things were STILL hanging. Talk about frustrating. I finally tracked it down to the line:

$session = New-SFTPSession -ComputerName $sftpURL -Credential $Credential

Caving Blind

As you know, I love caving. And for caving, a headlamp is a critical piece of equipment. Without it you have no idea where you’re going.  That’s how I felt here. Things were simply hanging and I was getting NO feedback.

I tried

$session = New-SFTPSession -ComputerName $sftpURL -Credential $Credential | out-file ftp_log.txt -append

But I still wasn’t getting anything. I tried a few other things, including setting parameters on how New-SFTPSession would handle errors and warnings, but still nothing. It was like caving blind. The cmdlet was being called, but I couldn’t see ANY errors. Of course, when I ran it as myself, it ran fine. But when I ran it as the service, it simply hung.

I was getting frustrated I couldn’t get any feedback. I figured once I had feedback, the rest would be simple. I was mostly right.

I needed my headlamp! I needed to see what was going on!

Finally it dawned on me: wrap it in a try/catch and grab the exception. So now the code became:

try
{
    $session = New-SFTPSession -ComputerName $sftpURL -Credential $Credential
}
catch
{
    $exception = $_.Exception.Message
    "Error in New-SFTPSession - $exception - " | out-file ftp_log.txt -append
}

Now THIS got me somewhere. I was getting a timeout message!  Ok, that confused me. I mean I knew I had the right username and password, and it never timed out when I ran it. Why was it timing out when the service ran it?

So again, I did what I do in cases like this: got up, went to the fridge, got some semi-dark chocolate chips, munched on them, and ran through my head, “why was the secure FTP session timing out in one case, but not the other?” At times like this I feel like Jack Ryan in The Hunt for Red October: “How do you get a crew to want to get off a nuclear sub…”

Eureka!

DOH, it’s SECURE FTP. The first time it connects it needs to do a key exchange, and the client needs to accept the server’s host key! At first I panicked, though. How was I supposed to get the service to accept the key? Fortunately it turned out to be trivial. There’s actually a parameter, -AcceptKey. I added that… and everything ran perfectly.
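
For the record, the call ended up looking roughly like this (same try/catch as above, just with the -AcceptKey switch added so Posh-SSH accepts and stores the host key on that first connection):

try
{
    $session = New-SFTPSession -ComputerName $sftpURL -Credential $Credential -AcceptKey
}
catch
{
    $exception = $_.Exception.Message
    "Error in New-SFTPSession - $exception - " | out-file ftp_log.txt -append
}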

Except the SSIS package

That’s a separate issue and again probably related to security and that’s my problem to debug today. But, I had made progress!

Summary

Now, quite honestly, the above is probably more detail than most care to read. But let me highlight a couple of things I think are important when debugging or troubleshooting in general.

First, simplify the problem and eliminate the obvious. This is what I did with my first script. I wanted to make sure it wasn’t something simple and obvious. Once I knew I could write to a file using the service account that ruled out a whole line of questions. So I could move on to the next step.

Often I use an example from Sesame Street: “one of these things is not like the others.” In this case, I had to keep figuring out what was different between when I ran the script and when the service account ran it. I knew up front that anything requiring keyboard input would be a problem, but I thought I had that covered. Of course, comparing and contrasting the results of decrypting the cred.txt file showed that there was a problem there. And the initial acceptance of the SFTP host key was another problem.

So, gather information and compare what works to what doesn’t work. Change one thing at a time until you narrow down the problem.

The other issue is being able to get accurate and useful information. Using debugging statements is often considered old school, and I know some developers look down on them, but often quick and dirty works, so I had no problem using them. BUT, there are still cases where they won’t work: for example, if a cmdlet hangs or doesn’t send output to standard output. Catching the exception definitely solved this.

The biggest problem I really ran into here is that I’m still a beginner with PowerShell. I’m loving it, but I often still have to look things up. But the basic troubleshooting skills are ones I’ve developed and honed over the years.

And quite seriously, sometimes don’t be afraid to walk away and do something else for a bit. It almost always brings a fresh perspective. I’ve been known to take a shower to think about a problem. Working from home, that is easier for me than it might be for someone in an office. But even then, go take a walk, or do something. Approach the problem with a fresh mind. All too often we start down the wrong path and just keep blindly going without turning around and reevaluating our position.

When we teach crack and crevice rescue in cave rescue we tell our students, “If you’re not making progress after 15 minutes, let someone else try. The fresh approach will generally help.”

Looking back on this, the problems seem pretty obvious. But it took a bit of work to get there. And honestly, I love the process of troubleshooting and/or debugging. It’s like being a detective, compiling one clue after another until you get the full picture.

So now, on to commenting some of the code and then figuring out why the SSIS package isn’t executing (I have ideas; we’ll see where I get to!).

Starting off the New Year… wrong

Ok, I had plans for a great blog post on time and how we measure it and how it really is arbitrary. But… that will have to wait. (I suppose that brings its own irony about time and waiting and all that. Hmm, now I may need a post about that!)

But, circumstances suggested a different post.

This one started with an email from my boss last year (who will remain nameless, for his sake 🙂): “Greg, I hacked together this web page so my team members can track their hours. I need you to make a few tweaks to it. And don’t worry, we’ll replace it by the end of the year.” Yes, that sound you hear is your head hitting the desk like mine, because you KNOW what is coming next. Here we are in January of 2018 and guess what, the web page is still in place.

And of course, it’s not just HIS team (now his former team; he’s moved on) that is using it. Apparently his boss liked what it could do, so his boss (my boss^2) declared that ALL his teams would use it. So instead of just 4-5 people using it, there are close to 30-40 people using it.

So, remember the first rule of development: “Oh, don’t worry about this, we’ll replace it soon” never happens. His original code made a lot of assumptions (which at the time I suppose seemed reasonable). For example, since many schedules are built around work weeks, he was originally storing hours by the week (and then the day of the week) they were worked in. For example, if someone worked for 6 hours on January 5th of last year, the table stored that as “WEEK 1, day 5”.

Now, before anyone thinks this is completely nuts, do keep in mind that many places, including my last job at GE, did everything by week of the year. There are some advantages to this (that I won’t go into here).

For the most part, the code worked and I didn’t care. But, at one point I was asked to do a report based not on Work Weeks, but on an actual calendar month. So, I had to figure out for example that February 1st occurred during Work Week 5, on the 4th day. And go from there.

So, I hacked that together. Then there was a request for another report, and another. Pretty soon it was getting to be a pain to keep track of all this. And I realized another problem: my boss had gotten lucky in 2017. January 1st occurred on a Sunday, which meant that Work Week 1, Day 1 was a natural starting point.

I started to worry about what would happen when 2018 rolled around. His code would probably break.  So I finally took the plunge and started to refactor the code.  I also added new features as they were requested.  Things were going great.

Until yesterday. So now we get to another good rule of development: “Use version control.” So simple. Even for a one-person shop like me it’s a good idea. I had put it off for a while, but finally downloaded the git plug-in for Visual Studio and started to use it.

So yesterday, a user reported that they couldn’t enter hours for last week. I pulled up the app and realized that yes, in fact it was coded in such a way you couldn’t go back into a previous year to enter hours. No problem, I can fix that!

Well, let me add a rule I had sort of missed: “Understand HOW to use version control.” I won’t go into details, but let’s just say I wasn’t committing like I should have been, or like I thought I was. So, in an attempt to get a clean base and all, I merged things… “poorly.” I had the branches I wanted, but had not properly committed stuff to them.

The work of the last few weeks was GONE. I know, you’re saying, “just go to your backups Greg, because of course a person who writes about DR has proper backups, right?”  Yeah, right! Let’s not go there!

Anyway, I spent 4 hours last night and 2 this morning recreating the code. Fortunately, it dawned on me that, being .NET code, it compiles down to IL, and perhaps with a good decompiler I could get most of the code back without too much effort. I want to give props to Redgate’s .NET Reflector. Of the two decompilers I tried, it was clearly the better one. While I lost my variable names and the decompiled code isn’t quite what I’d call human-written, it was close enough that I could in short order recreate hours and hours of work I had done.

In the meantime, I also talked to some of my programming buddies on an RPI chat server (ask me about it sometime) about git and better procedures.

And here’s where I realized my fundamental mistake. It wasn’t just misunderstanding how Visual Studio was interacting with git and the branches I was creating, it was that somewhere in the back of my head I had decided that commits should be used only when major steps were completed. It was almost like I was afraid I’d run out; or perhaps complicate the history with too many of them.  But, you know what? Commits aren’t scarce resources. And I’m the only one reading the history. I don’t really care if I’ve got 10 commits or 100. The more the better in many ways. Had I been making commits much more frequently, I’d have saved myself a lot of work. (And I can discuss having a remote store at some point, etc.)

So really, the point of this hastily, sleep-deprived written blog isn’t so much to talk about dates and apps that never get replaced, but about a much deeper problem I was having. It wasn’t just failing to fully understand git commands, it was failing to understand how I should be using git in the first place.

So, to riff on an old phrase, “Commit Early, Commit often”.
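
In practice that just means something like this from the project directory, every time a small change works (the commit messages here are made up):

git add -A
git commit -m "Convert report queries from work weeks to calendar dates"

# ...next small, working change...
git add -A
git commit -m "Allow hours entry across a year boundary"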

Oh, and I did hack together a way for that user to enter hours for last year. It’s good enough. I’m sure I won’t need it a year from now; the code will all be replaced or refactored by then, I’m sure!