Fixing a “Simple” Leak

I was reminded of the 80/20 rule yesterday as I was briefing my team on a project I needed them to work on.  Ok, actually I was telling my kids about this, but it was still for a project.

In this case, I was pointing out that 80% of the effort on a project often goes into completing only 20% of it. Researching this a bit further, there’s an actual name for this rule: the Pareto Principle. We’ve probably all come across this rule, or some variation of it, in our lives.

While not exactly the situation here, the point I was trying to make with them is that sometimes when fixing a simple problem, the issues and work that spin off easily take 10 times as much time and effort as the original problem.

In this specific case, I was asking them to do what was essentially an annoying, dirty job: cleaning up some edges of sheetrock before I finally re-enclosed a portion of the basement ceiling. They started later in the day than I would have preferred, but honestly, as this project has been sitting on hold for a few months, a couple more hours didn’t really matter. That said, I think the actual work took them longer than they thought it would.

So what prompted yesterday’s work was a small leak that I fixed over a year ago.

Over the years, we had noticed a slight leak in the downstairs bathroom. It wasn’t always apparent, but it was slowly getting worse. Essentially water was soaking into the sheetrock and walls of the finished portion of our basement.

Now, I’m a fairly handy guy; I spent several summers in high school and a bit of college working for my father in the construction trade. While he’d point out my finish work needed more work, in general, if it’s involved in residential construction, I’ve done it and I feel comfortable doing it. But besides learning how to use the tools, I also learned an important lesson: no project is ever as simple as it appears. This is actually one reason I hate starting home-improvement projects. I know it’s going to turn into a lot more work than it originally looks like. It isn’t exactly the 80/20 rule, but I’m reminded of it. So let me dive a bit into what was, and still is, involved in fixing this simple leak.

First, I had to identify where it was. From lots of inspection, guesswork and experience, I guessed it was in the wall behind the tiles. Ok, so that means ripping out the tiles and the plaster and lath underneath. That’s simple enough, and honestly fun, albeit dusty.

Second, once the leak was found, replace the plumbing. With modern PEX plumbing, that’s actually the quickest part of the work. So yes, actually FIXING the leak took maybe an hour. I will note I also took the opportunity to move the showerhead up by about 9″. One thing I hate is low showerheads!

[Photo: New PEX plumbing to showerhead]

But, hey, while I’m in here, I might as well run the wiring for a fan for the bathroom, because it’s always needed one. And given a weird quirk of construction in this house, the easiest way is to run it into the long wall of the bathtub/shower, then over to the outside wall, and then up to the approximate location of where the fan will go. So there’s another hour or two for a project that was started to fix a leak.

Oh and while I’m at it, let me take some photos with a tape measure in them of where the pipes are for future reference. So there’s a bit more work.

Great, the leak is fixed.

Except, obviously, the shower can’t be used as is with open walls. So now it’s a matter of getting backerboard, putting that in, and sealing it. What I used is waterproof as it is, so we left it at that. And I say we, because for much of this project both the kids were helping with it. And quite honestly, that’s where this part of the project sat for months. The shower was usable, though a bit ugly.

But, we still had the basement to deal with. That meant ripping out the damaged sheetrock and studs that had rotted. That was fun. Not! For that I actually used a full respirator, body suit, and sprayed anti-fungal stuff liberally. Some of the water damage here was actually older than the bathroom leak and was due to poor grading and then more recently, runoff from the roof of my addition (a problem that gutters finally solved).

But hey, if we’re putting in new walls, we might as well put in better insulation, see if we can seal the old concrete (hint: that went poorly), and oh, put in a couple of dimmable lights for a work area and some network jacks.

[Photo: Basement wall in progress]

Once that was all roughed in, it was time to at least sheetrock the walls.

[Photo: Sheetrocked walls]

And that is basically where the basement project sat until yesterday. I’ll come back to that in a minute.

As for the bathroom, there was only so long I wanted to look at the backerboard. It was time to finally tile it. Oh, but before I could do that, I had to put that fan in. Between the framing for the 2nd floor, the window, and other vagaries, there was just enough room in the outside wall; it just fit. Of course that was only 1/2 the battle. The other 1/2 was then wiring up the switch. Oh, and while I’m at it, might as well run a circuit for a GFCI outlet, since the bathroom was lacking one. Once all that was done, THEN I could tile.

[Photo: Tiled and grouted]

And yes, you may note the window does intrude into the shower space. Hey, I didn’t build the house!  What you can’t see is the replaced trim on the top edge and left edge of the tile that my daughter literally spent hours sanding and resealing. It looks great.

Oh and I still have to find replacement cones for behind the handles! So that’s another thing to do for the project.

But back to the basement. The area in front of the wall has become my son’s de facto computer space when he’s home from college. It was ‘good enough’.  But the ceiling still needed its sheetrock replaced and I needed to tape and paint the new wall.  This has waited until now.

The problem with the ceiling is that when I pulled down the old stuff, I didn’t have nice clean edges to butt the new sheetrock against. It was ragged where it had broken, or had broken at awkward places, so I couldn’t easily put in new sheetrock.

But I also took advantage of this time to reroute all my network drops so they will be hidden in the ceiling and come out nicely to my rack.

So yesterday, the kids did the dirty work of trimming the edges, cleaning stuff up, etc. It looks great and will make my job of sheetrocking much easier.

[Photo: Open basement ceiling]

By the way, I should note that the board you see sticking out is what I had used in the past when I had to slide in here to do some wiring or other work. This was sort of my own private Jefferies Tube. This should now be relatively easy to sheetrock, right?

Well, except for one small detail.

[Photo: Houston, we have a problem.]

Yes, that is a piece of electrical cable that was run OUTSIDE the studs, essentially at the joint where two pieces of sheetrock met at an inside corner. I absolutely HAVE to move this before I can sheetrock.

So that’s going to be a few more hours of work before I can even start to sheetrock. I have to identify which circuit this is, cut power, cut the wire, reroute it, put the ends in a junction box (which code says can’t be hidden!) and then make sure it’s safe.

After all that work, I can finally get around to sheetrocking the ceiling. Then I’ll have to mud and tape all the joints, sand, mud again, prime and then finally get the walls and ceiling painted.

But the good news is, the leak is fixed. That was the easy part!

Does this “simple” project of fixing a leak remind you of any projects at work? It does for me!

Crossing the Threshold…

So it’s the usual story. You need to upgrade a machine, the IT group says, “no problem, we can virtualize it, it’ll be better! Don’t worry, yes, it’ll be fewer CPUs, but they’ll be much faster!”

So, you move forward with the upgrade. Twenty-three meetings later, 3 late nights, one OS upgrade, and two new machines forming one new cluster, you’re good. Things go live.  And then Monday happens. Monday of course is the first full day of business and just so happens to be the busiest day of the week.

Users are complaining. You look at the CPU and it’s hitting 100% routinely. Things are NOT distinctly better.

You look at the CPUs and you notice something striking:

[Image: CPU 8 is showing a problem]

4 of the CPUs (several are missing from this graphic) are showing virtually no utilization while the other 8 are going like gangbusters. Then it hits you: the way the IT group set up the virtual CPUs was not what you needed. They set up 6 sockets with 2 cores each for a total of 12 cores. This shouldn’t be a problem, except that SQL Server Standard Edition is limited to the lesser of 4 sockets or 24 cores. Because your VM has 6 sockets, SQL Server refuses to use two of them.

You confirm the problem by running the following query:

SELECT scheduler_id, cpu_id, status, is_online FROM sys.dm_os_schedulers

This shows that only 8 of your 12 CPUs have a status of VISIBLE ONLINE.
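If you’d rather see the split at a glance, a slight variation on the same DMV (just a sketch) groups the schedulers by status; in the scenario above you’d expect 8 VISIBLE ONLINE and 4 VISIBLE OFFLINE:

SELECT status, COUNT(*) AS scheduler_count
FROM sys.dm_os_schedulers
WHERE status IN ('VISIBLE ONLINE', 'VISIBLE OFFLINE')  -- only the schedulers available for user work
GROUP BY status;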

This is fortunately an easy fix. A quick outage and your VM is reconfigured to 2 sockets with 6 cores apiece. Your CPU graphs now look like:

[Image: A better CPU distribution]

This is closer to what you want to see, but of course since you’re doing your work at night, you’re not seeing a full load. But you’re happier.

Then Monday happens again.  Things are better, but you’re still not happy. The CPUs are running on average at about 80% utilization. This is definitely better than 100%. But your client’s product manager knows they’ll need more processing power in coming months and running at 80% doesn’t give you much growth potential. The product manager would rather not have to buy more licenses.

So, you go to work. And since I’m tired of writing in the 2nd person, I’ll start writing in 1st person moving forward.

There are a lot of ways to approach a problem like this, but often when I see heavy CPU usage, I want to see what sort of wait stats I’m dealing with. They may not always give me the best answer, but I find them useful.

Here are the results of one quick query.

Fortunately, this being a new box, it was running SQL Server 2016 with the latest service pack and CU. This meant that I had some more useful data.
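The screenshot below doesn’t show the query itself; the one I ran was more elaborate and included suggested actions, but a minimal sketch of the kind of wait-stats query it’s based on looks like this:

-- Top waits since the last restart (or since the wait stats were last cleared)
SELECT TOP (10)
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'SQLTRACE_BUFFER_FLUSH')  -- a tiny sample of the usual benign waits to exclude
ORDER BY wait_time_ms DESC;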

[Image: CXPackets and CXConsumer telling the tale]

Note one of the suggestions: Changing the default Cost Threshold for Parallelism based on observed query cost for your entire workload.

Given the load I had observed, I guessed the Cost Threshold for Parallelism was way too low. It was in fact set to 10. With that setting, during testing I saw a CPU graph that looked like this:

[Image: 43.5% CPU at Cost Threshold of 10]

I decided to change the Cost Threshold to 100 and the graph quickly became:

[Image: 25% CPU at Cost Threshold of 100]

Dropping from 43.5% to 25.6%. That’s a savings you can take to the bank!
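For reference, the change itself is trivial; a sketch using sp_configure with the values from this post (it’s an advanced option, so ‘show advanced options’ has to be on to set it this way):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'cost threshold for parallelism', 100;
RECONFIGURE;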

Of course that could have been a fluke, so I ran several 5-minute snapshots: I would set the threshold to 10 and collect data for 5 minutes, then set it to 100 and collect data for another 5 minutes.
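The collection itself was nothing fancy; here is a sketch of one way to take a 5-minute delta of the CX* waits (not necessarily the exact script I used):

-- Snapshot the CX* waits, wait 5 minutes, then diff against the current values
SELECT wait_type, waiting_tasks_count, wait_time_ms
INTO #before
FROM sys.dm_os_wait_stats
WHERE wait_type IN ('CXPACKET', 'CXCONSUMER');

WAITFOR DELAY '00:05:00';

SELECT w.wait_type,
       w.waiting_tasks_count - b.waiting_tasks_count AS waits_in_window,
       w.wait_time_ms - b.wait_time_ms AS wait_time_ms_in_window
FROM sys.dm_os_wait_stats w
JOIN #before b ON b.wait_type = w.wait_type;

DROP TABLE #before;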

Cost Threshold = 10:

Run   CXPacket waits   CXPacket wait time (ms)   CXConsumer waits   CXConsumer wait time (ms)
1     635533           5611743                   563830             3551016
2     684578           4093190                   595943             2661527
3     674500           4428671                   588635             2853673

Cost Threshold = 100:

Run   CXPacket waits   CXPacket wait time (ms)   CXConsumer waits   CXConsumer wait time (ms)
1     0                0                         0                  0
2     41               22                        13                 29443
3     1159             8156                      847                4328

You can see that over 3 runs, a threshold of 10 versus 100 made a dramatic difference in the total time spent waiting within each 5-minute window.

The other setting that can play a role in how parallelism impacts performance is MAXDOP. In this case, testing didn’t show any real performance difference from changing that value.
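If you just want to see where both of these knobs currently sit, a quick sanity check (again, just a sketch) against sys.configurations works:

SELECT name, value_in_use
FROM sys.configurations
WHERE name IN ('cost threshold for parallelism', 'max degree of parallelism');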

At the end of the day though, I call this a good day. A few hours of my consulting time saved the client thousands of dollars by avoiding the wrong and expensive road of adding more CPUs and SQL Server licenses. There’s still room for improvement, but going from a box where only 8 of the 12 CPUs were being used, and were running at 100%, to a box where the average CPU usage is close to 25% is a good start.

What’s your tuning success story?

Small Victories

Ask most DBAs and they’ll probably tell you they’re not huge fans of triggers. They can be useful, but they’re hard to debug. Events last week reminded me of that. Fortunately, a little debugging made a huge difference.

Let me set the scene. Unfortunately, since this was work for a client, I can’t really use many screenshots (or rather, sanitizing them would take far too long in the time I allocate to write my weekly blog posts).

The gist is, my client is working on a process to take data from one system and insert it into their Salesforce system.  To do so, we’re using a 3rd party tool called Pentaho. It’s similar to SSIS in some ways, but based on Java.

Anyway, the process I was debugging was fairly simple. Take account information from the source and upsert it into Salesforce. If the account already existed in Salesforce, great, simply perform an update. If it’s new data, perform an insert.  At the end of the process Pentaho returns a record that contains the original account information and the Salesforce ID.

So far so good. Now, the original author of the system had set up a trigger so that when these records are returned, it can update the original source account record with the Salesforce ID if it didn’t exist previously. I should note that updating the accounts is just one of many possible transformations the entire process runs.

After working on the Pentaho ETL (extract, transform, load) for a bit and getting it stable, I decided to focus on performance. There appeared to be two main areas of slowness, the upsert to Salesforce and the handling of the returned records. Now, I had no insight into the Salesforce side of things, so I decided to focus on handling the returned records.

The problem, of course, was that Pentaho was sort of hiding what it was doing. I had to get some insight there. I knew it was doing an insert into a master table of successful records, with a trigger then updating the original account.

Now, being a 21st-century DBA and taking into account Grant Fritchey’s blog post on Extended Events, I had previously set up an Extended Events session on this database. I had to tweak it a bit, but I got what I wanted in short order.

CREATE EVENT SESSION [Pentaho Trace SalesForceData] ON SERVER
ADD EVENT sqlserver.existing_connection(
    ACTION(sqlserver.session_id)
    WHERE ([sqlserver].[username]=N'TempPentaho')),
ADD EVENT sqlserver.login(SET collect_options_text=(1)
    ACTION(sqlserver.session_id)
    WHERE ([sqlserver].[username]=N'TempPentaho')),
ADD EVENT sqlserver.logout(
    ACTION(sqlserver.session_id)
    WHERE ([sqlserver].[username]=N'TempPentaho')),
ADD EVENT sqlserver.rpc_starting(
    ACTION(sqlserver.session_id)
    WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id],(4)) AND [package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[username]=N'TempPentaho')),
ADD EVENT sqlserver.sql_batch_completed(
    ACTION(sqlserver.session_id)
    WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id],(4)) AND [package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[username]=N'TempPentaho')),
ADD EVENT sqlserver.sql_batch_starting(
    ACTION(sqlserver.session_id)
    WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id],(4)) AND [package0].[equal_boolean]([sqlserver].[is_system],(0)) AND [sqlserver].[username]=N'TempPentaho'))
ADD TARGET package0.ring_buffer(SET max_memory=(1024000))
WITH (MAX_MEMORY=4096 KB,EVENT_RETENTION_MODE=ALLOW_SINGLE_EVENT_LOSS,MAX_DISPATCH_LATENCY=30 SECONDS,MAX_EVENT_SIZE=0 KB,MEMORY_PARTITION_MODE=NONE,TRACK_CAUSALITY=ON,STARTUP_STATE=OFF)
GO

It’s not much, but it lets me watch incoming transactions.
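Once the session is started, one way (a sketch, not necessarily how I watched it; SSMS’s “Watch Live Data” works too) to pull what has landed in the ring_buffer target:

-- Start the session (it was created with STARTUP_STATE=OFF)
ALTER EVENT SESSION [Pentaho Trace SalesForceData] ON SERVER STATE = START;

-- Grab the raw ring_buffer contents as XML
SELECT CAST(t.target_data AS XML) AS ring_buffer_xml
FROM sys.dm_xe_sessions s
JOIN sys.dm_xe_session_targets t
  ON t.event_session_address = s.address
WHERE s.name = N'Pentaho Trace SalesForceData'
  AND t.target_name = N'ring_buffer';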

I could then fire off the ETL in question and capture some live data. A typical returned result looked like:

exec sp_execute 1,N'SourceData',N'GQF',N'Account',N'1962062',N'a6W4O00000064zbUAA','2019-10-11 13:07:22.8270000',N'neALaRggAlD/Y/T4ign0vOA==L',N'Upsert Success'

Now that’s not much, but I knew what the insert statement looked like, so I could build an insert statement wrapped in a BEGIN TRAN/ROLLBACK so I could test the insert without actually changing my data. I then tossed in SET STATISTICS IO ON and enabled Include Actual Execution Plan so I could see what was happening.
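Something along these lines, where the table name is a made-up stand-in (the real one belongs to the client) but the columns come from the trigger below and the values from the captured call above:

SET STATISTICS IO ON;

BEGIN TRAN;

    -- dbo.SalesforceResults is hypothetical; the trigger still fires here,
    -- so its cost shows up in STATISTICS IO and the actual execution plan
    INSERT INTO dbo.SalesforceResults (Transformation, External_Id__c, Id)
    VALUES (N'Account', N'1962062', N'a6W4O00000064zbUAA');

ROLLBACK;  -- nothing is permanently changed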

“Wait, what’s this? What’s this 300K rows read? And why is it doing a clustered index scan on this table?”  This was disconcerting. The field I was comparing on was the clustered index key; it should be a seek!

So I looked more closely at the trigger. There were two changes I ended up making.

       -- Link Accounts
       --MERGE INTO GQF_AccountMaster T
       --USING Inserted S
       --ON (CAST(T.ClientId AS VARCHAR(255)) = S.External_Id__c
       --AND S.Transformation in ('Account'))
       --WHEN MATCHED THEN UPDATE
       --SET T.SFID = S.Id
       --;
       
       IF (SELECT transformation FROM Inserted) = 'Account'
       BEGIN
              MERGE INTO GQF_AccountMaster T
              USING Inserted S
              ON T.ClientId = S.External_Id__c
              WHEN MATCHED THEN UPDATE
              SET T.SFID = S.Id;
       END

An astute DBA will notice that CAST in there. Given the design, the Inserted table field External_Id__c is sort of a catch-all for all sorts of IDs, and some in fact could be up to 255 characters. However, in the case of an Account it’s a varchar(10).

The original developer probably put the CAST in there because they didn’t want to blow up the MERGE statement if it compared a transformation other than an Account. (From what I can tell, T-SQL does not guarantee short-circuit evaluation; if I’m wrong, please let me know and point me to definitive documentation.) However, the minute you wrap that column in a CAST, you lose the ability to seek on the index; you have to use a scan.

So I rewrote the commented section into an IF to guarantee we were only dealing with Account transformations and then I stripped out the cast.

Then I reran and watched. My index scan of 300K rows was down to a seek of 3 rows. The trigger now performed in subsecond time. Not bad for an hour or so of work. That and some other improvements meant we could now handle a few thousand inserts and updates in the time it was previously taking to do 10 or so. It’s one of those days where I like to think my client got their money’s worth out of me.

Slight note: Next week I will be at PASS Summit so not sure if/when I’ll be blogging. But follow me on Twitter @stridergdm.

IIS FTP 530 error

To my usual readers, you can ignore this. This is simply my way of making sure that the next time I have to google this issue, or someone else does, there’s a better chance of finding a solution. Note, there are other places posting the same solution.

But simply put, over the weekend a client’s IT group rebooted a server and an FTP process started to fail. It took a lot of digging to solve it.

It appears that two things happened:

The IUSR permissions on the FTP root directory got munged (it’s not 100% clear, and this may not be a necessary step, but things did NOT work for me until I did this; they then continued to work when I removed the IUSR permission to see if I could recreate the problem).

The part that was 100% necessary was this: go to Step 8 of https://manage.accuwebhosting.com/knowledgebase/941/FTP-Error-530-User-cannot-log-in-home-directory-inaccessible.html and add all users. Why this setting was removed, I don’t know. But adding it after re-adding the IUSR permission seems to have solved the issue.

That said, before anyone asks, “why in the world are you using FTP in the 21st century?” I won’t disagree other than to say this is purely an internal process that simply moves some non-PII data to a 2nd server.  Not a huge justification, but there it is.

Call 911, If You Can

Also known as “things have changed”

For one of my clients, I monitor and maintain some of the jobs that run on their various servers. One of them had started to fail about two weeks ago. The goal of the job was basically to download a file from one server, transfer it to another, and upload it. Easy-peasy. However, sometimes the job fails because there’s no file to transfer (which really shouldn’t be a failure, just a warning). So, despite the fact that it had failed multiple days in a row, I hadn’t looked at it. And of course no one was complaining (though that’s not always a good reason to ignore a job failure!).

So yesterday I took a look and realized the error message was in fact incorrect. It wasn’t failing because of a lack of a new file, but because it could no longer log into the primary server. A quick test showed the password had been changed. This didn’t really surprise me, as this client is going through and updating a number of accounts and passwords. This was obviously simply one we had missed. (Yes, this is where better documentation would obviously be a good idea. We’re working on that.)

So, I figured the fix would be easy: simply email the right person, get the new password, and update the process. I was also taking the time to update the script so that the password would be encrypted moving forward (right now it’s in plain text) and so it would give the correct error in the event of a login failure.

Well, the person who should have the password wasn’t even aware of this process. As we exchanged emails, and the lead developer chimed in, the conclusion was that this process probably shouldn’t be using this account, and that perhaps even then, this process may no longer be necessary.

So, now my job is to track down the person who did or does rely on this process, find out if they still are and then finish updating the password.  Of course if they’re not, we’ll stop this process. In some ways that’s preferable since it’s one less place to worry about a password and one less place to maintain.

Now, the above details are somewhat specific to this particular job, but, I’m sure all of us have found a job running on a server someplace and wondered, “What is this doing?” Sometimes we find out it’s still important. Sometimes we discover that it’s no longer necessary. In a perfect world, our documentation would always be up to date and our procedures would be such that we immediately remove unnecessary jobs.

But the real world is far messier unfortunately.

(and since the full photo got cropped in the header, here it is again)

[Photo: Call 911. If you can]

Apparently guest rooms are not the only thing that cannot be called from this phone.

And as a reminder, if you enjoy my posts, please make sure to subscribe.

 

Marshmallows Part II

I’ll admit, I can rarely tell in advance when one of my posts will hit all the buttons and generate views and when it’ll fall flat. But since I don’t always write for my audience (sometimes I write for my own reasons), I can live with that.

So, how do you follow up on a post that didn’t receive many views? Write a follow-up post. You can call me a slow learner.

Actually, it’s about learning. Last time I wrote about my microwave and doing a quick experiment with marshmallows to prove it was really dead.  After 2 days without a microwave it was time to get a new one. Of course I couldn’t get what I wanted because the space it had to fit into was limited in size.  That could have been resolved, but would have meant redoing the cabinet space it had to fit into. And if I were going to redo the cabinet space there, I might as well redo the rest of the cabinets. And if I’m going to redo the cabinets, I really need to redo the counters. And very quickly a replacement $100 microwave I can get in an hour would become a 3-week $10,000 kitchen remodel. I opted for the $100 microwave over the one I really wanted.

And the results are shown at the top of the post (and below, in case the top doesn’t appear).

[Photo: 10 seconds of marshmallows in the microwave]

It’s quite interesting to me. The best heating was beyond the area of the rotating plate. But this also shows the value of the rotating plate: there are a few dead spots, and if I put something in to heat and everything stayed stationary, it would take forever to heat, since there’s little to no microwave energy in those spots. (This can get complex because of the size of the wave and the height of the material, etc.)

Now, I’d have done more experiments, but it seems a certain someone in the house enjoys marshmallows more than I do and had eaten a bunch and this was all I had.

But, I have a working microwave and I’ve proven how important the rotating plate can be (not that I had much doubt).

And that’s science to me; doing experiments and learning.

Oh, and about the SQL query I was updating: it’s going into production this week, hopefully. I was able to eke out about a 10-20% improvement. Beyond that, there’s not much I could do, because it really ends up scanning an entire table, on purpose. Only so much you can do there.

One last thing: there may not be a post next week because I’ll be teaching at the NCRC weeklong cave training class in Indiana and will have limited internet and time.

Marshmallows

Though I attended RPI, which is generally considered an engineering school, my degree is a BS in Computer Science. I say that because I consider myself more of a scientist than an engineer at times. And honestly, we all start out as scientists, but many of us lose that along the way.

Anyone who has had a small child has observed a scientist in action. No, they’re not in a lab full of test tubes and beakers and flasks giving off noxious smells. But they are in the biggest lab there is, the world. They also don’t necessarily realize it. Nor do parents. But every time they drop a Cheerio, they’re testing gravity.  Fortunately (or unfortunately depending on your point of view) so far every time they’ve managed to prove that gravity works. This is the most obvious example, but when you stop to think about it, much of the first few years of life is all about experimenting. Most of the time it goes well, but sometimes, as a burnt hand will attest, the experiment has a less than ideal outcome.

And it’s the fear of burned hands that leads parents to utter that common refrain, “Don’t touch that!” or the variation “Don’t do that!” Soon, over time, our experimentation starts to get reined in until we do very little of it. This can be inhibiting.

Years ago I used to teach an “Introduction to Windows” adult education class. It was, I believe, a 6-week class, and I taught several over the course of a couple of years. It didn’t take me long to realize the biggest constraint on the students’ ability to succeed in the class was that they had internalized “Don’t do that, you might break something.” Once I realized that, half my teaching pedagogy simply became, “Touch that, you won’t break it, and if you do, it’s not a big deal, and if it is, we’ll fix it anyway.” Seriously, more than anything else, I had to encourage most of my students to experiment with the computer.

More recently I realized I had stopped doing as many experiments in my life as I should be doing. About a week and a half ago I attended a Wilderness Medicine Conference a friend of mine had told me about. At the end of the very wet, cold, rainy day, a bunch of us went outside and tried to start a fire. Starting a fire, let alone in such conditions, was something most of the students had never done. I had, but not in years. With some effort, and experimentation, including using the outside box of a single-serving-size package of Fruit Loops, we finally managed to get the fire going.

But this got me thinking. When I go hiking, I carry a tiny ziplock bag in my jacket with some fire-starting materials. They’re there in case of an emergency. But the thing is, I had never actually tried them, and I realized that if I didn’t know how well they worked in practice, I couldn’t rely on them in an emergency. So, I went outside and started a fire. And I learned that yes, my materials ARE adequate, but the dryer lint needed to be pulled apart more than I realized. I tried again later in the week, and added the use of a toilet paper roll to form sort of a chimney so the starting fire would draft better. This, and pulling the lint apart better, worked even better, and a single match was sufficient this time. This gave me more confidence that in an emergency, in less than ideal conditions, I could get an actual fire going.

But, I wasn’t done! Our microwave broke this weekend. But before I wrote it off, I wanted to make sure it wasn’t a fluke or something else. So, in this case, I decided to get a bag of marshmallows and lay them out inside the microwave to see if I was getting ANY energy out of the magnetron. Turns out: nope, nada, nothing. So, today or tomorrow I will be buying a new microwave. But it was a fun, and later tasty, experiment.

Without delving deep into the scientific method here, I’ll say at a simple level, science is about having a hypothesis and testing it. The testing it is important.

To bring this back to SQL. First, you have a hypothesis that your backups will work. Have you tested that hypothesis? If not, do so immediately. Even if they do, you might learn something now that will be important when you have to do it for real. Perhaps you learn the volume your backups are on only has write access. Or perhaps you learn you need to retrieve your encryption keys and the person who controls access to them is on vacation. Or perhaps your RPO is 4 hours and the restore takes 6 hours.  So, experiment.
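A sketch of what actually testing that hypothesis can look like (the paths and logical file names here are made up); VERIFYONLY is a start, but an actual restore to a scratch database proves far more:

-- Quick integrity check of the backup file
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDatabase.bak';

-- The real test: restore it somewhere harmless and time it against your RPO/RTO
RESTORE DATABASE MyDatabase_RestoreTest
FROM DISK = N'D:\Backups\MyDatabase.bak'
WITH MOVE N'MyDatabase' TO N'D:\Data\MyDatabase_RestoreTest.mdf',
     MOVE N'MyDatabase_log' TO N'D:\Logs\MyDatabase_RestoreTest.ldf',
     STATS = 10;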

[Image: Capture of a random query plan]

Recently for one client I’ve spent some time experimenting with various changes to help improve the performance of some queries. Not everything I tried worked, but some things did. So, again experiment.

I’m curious what recent experiments you may have done, SQL or otherwise. What were their outcomes?