Ask most DBAs and they’ll probably tell you they’re not huge fans of triggers. Triggers can be useful, but they’re hard to debug. Events last week reminded me of that. Fortunately, a little debugging made a huge difference.
Let me set the scene. Unfortunately, since this was work for a client, I can’t really use many screenshots (or rather, sanitizing them would take far too long in the time I allocate to write my weekly blog posts).
The gist is, my client is working on a process to take data from one system and insert it into their Salesforce system. To do so, we’re using a third-party tool called Pentaho. It’s similar to SSIS in some ways, but based on Java.
Anyway, the process I was debugging was fairly simple. Take account information from the source and upsert it into Salesforce. If the account already existed in Salesforce, great, simply perform an update. If it’s new data, perform an insert. At the end of the process Pentaho returns a record that contains the original account information and the Salesforce ID.
So far so good. Now, the original author of the system had set up a trigger so that when these records are returned, it can update the original source account record with the Salesforce ID if it didn’t exist previously. I should note that updating the accounts is just one of many possible transformations the entire process runs.
After working on the Pentaho ETL (extract, transform, load) for a bit and getting it stable, I decided to focus on performance. There appeared to be two main areas of slowness, the upsert to Salesforce and the handling of the returned records. Now, I had no insight into the Salesforce side of things, so I decided to focus on handling the returned records.
The problem of course was that Pentaho was sort of hiding what it was doing. I had to get some insight there. I knew it was doing an Insert into a master table of successful records and then a trigger to update the original account.
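The general shape of that mechanism can be sketched as an AFTER INSERT trigger on the table of returned records. This is a hedged sketch, not the client’s actual code: the table name `dbo.ReturnedRecords` and trigger name are hypothetical, though `GQF_AccountMaster`, `External_Id__c`, and `SFID` are the real identifiers that come up later in the post.

```sql
-- Hypothetical sketch of the mechanism: Pentaho inserts a "success" record,
-- and the trigger copies the new Salesforce ID back to the source account.
CREATE TRIGGER trg_ReturnedRecords_LinkAccounts
ON dbo.ReturnedRecords   -- hypothetical name for the master table of successes
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE T
    SET    T.SFID = S.Id                    -- write the Salesforce ID back
    FROM   dbo.GQF_AccountMaster AS T
    JOIN   Inserted AS S
      ON   T.ClientId = S.External_Id__c    -- match on the source system's ID
    WHERE  S.Transformation = 'Account';    -- only the Account transformation
END;
```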
Now, being a 21st Century DBA, and taking into account Grant Fritchey’s blog post on Extended Events, I had previously set up an Extended Events session on this database. I had to tweak it a bit, but I got what I wanted in short order:
```sql
CREATE EVENT SESSION [Pentaho Trace SalesForceData] ON SERVER
ADD EVENT sqlserver.existing_connection (
    ACTION (sqlserver.session_id)
    WHERE ([sqlserver].[username] = N'TempPentaho')),
ADD EVENT sqlserver.login (SET collect_options_text = (1)
    ACTION (sqlserver.session_id)
    WHERE ([sqlserver].[username] = N'TempPentaho')),
ADD EVENT sqlserver.logout (
    ACTION (sqlserver.session_id)
    WHERE ([sqlserver].[username] = N'TempPentaho')),
ADD EVENT sqlserver.rpc_starting (
    ACTION (sqlserver.session_id)
    WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id], (4))
       AND [package0].[equal_boolean]([sqlserver].[is_system], (0))
       AND [sqlserver].[username] = N'TempPentaho')),
ADD EVENT sqlserver.sql_batch_completed (
    ACTION (sqlserver.session_id)
    WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id], (4))
       AND [package0].[equal_boolean]([sqlserver].[is_system], (0))
       AND [sqlserver].[username] = N'TempPentaho')),
ADD EVENT sqlserver.sql_batch_starting (
    ACTION (sqlserver.session_id)
    WHERE ([package0].[greater_than_uint64]([sqlserver].[database_id], (4))
       AND [package0].[equal_boolean]([sqlserver].[is_system], (0))
       AND [sqlserver].[username] = N'TempPentaho'))
ADD TARGET package0.ring_buffer (SET max_memory = (1024000))
WITH (MAX_MEMORY = 4096 KB,
      EVENT_RETENTION_MODE = ALLOW_SINGLE_EVENT_LOSS,
      MAX_DISPATCH_LATENCY = 30 SECONDS,
      MAX_EVENT_SIZE = 0 KB,
      MEMORY_PARTITION_MODE = NONE,
      TRACK_CAUSALITY = ON,
      STARTUP_STATE = OFF);
GO
```
It’s not much, but it lets me watch incoming transactions.
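For reference, events captured by a ring_buffer target can be read back with the Extended Events DMVs. This is a minimal sketch assuming the session above is running; the XML shredding follows the standard ring_buffer format, and `batch_text` applies to the batch events (the rpc events carry a `statement` field instead).

```sql
-- Pull the captured events out of the ring_buffer target as rows.
SELECT
    evt.value('@name', 'varchar(100)')       AS event_name,
    evt.value('@timestamp', 'varchar(30)')   AS event_time,
    evt.value('(data[@name="batch_text"]/value)[1]', 'nvarchar(max)') AS batch_text
FROM (
    SELECT CAST(t.target_data AS XML) AS target_xml
    FROM sys.dm_xe_sessions s
    JOIN sys.dm_xe_session_targets t
      ON s.address = t.event_session_address
    WHERE s.name = N'Pentaho Trace SalesForceData'
      AND t.target_name = N'ring_buffer'
) AS src
CROSS APPLY src.target_xml.nodes('/RingBufferTarget/event') AS x(evt);
```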
I could then fire off the ETL in question and capture some live data. A typical returned result looked like this:
```sql
exec sp_execute 1, N'SourceData', N'GQF', N'Account', N'1962062',
    N'a6W4O00000064zbUAA', '2019-10-11 13:07:22.8270000',
    N'neALaRggAlD/Y/T4ign0vOA==L', N'Upsert Success'
```
Now, that’s not much, but I knew what the INSERT statement looked like, so I could build one wrapped in a BEGIN TRAN/ROLLBACK so I could test the insert without actually changing my data. I then tossed in SET STATISTICS IO ON and enabled Include Actual Execution Plan so I could see what was happening.
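The harness looked roughly like this. It’s a hedged sketch: the table and column names are hypothetical stand-ins (the real insert came from the captured sp_execute call above), but the pattern of wrapping the statement in a transaction that always rolls back is the point.

```sql
SET STATISTICS IO ON;   -- report logical reads per table

BEGIN TRAN;

-- Hypothetical table/columns; the insert still fires the trigger,
-- so its cost shows up in STATISTICS IO and the actual plan.
INSERT INTO dbo.ReturnedRecords
    (SourceTable, Client, Transformation, ClientId,
     SFID, ReturnedAt, RowHash, Status)
VALUES
    (N'SourceData', N'GQF', N'Account', N'1962062',
     N'a6W4O00000064zbUAA', '2019-10-11 13:07:22.827',
     N'neALaRggAlD/Y/T4ign0vOA==L', N'Upsert Success');

ROLLBACK;   -- nothing persists, but the diagnostics were already captured
```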
“Wait, what’s this? What’s this 300K rows read? And why is it doing a clustered index scan on this table?” This was disconcerting. The column I was comparing on was the clustered index key; it should be a seek!
So I looked more closely at the trigger. There were two changes I ended up making.
```sql
-- Link Accounts
--MERGE INTO GQF_AccountMaster T
--USING Inserted S
--ON (CAST(T.ClientId AS VARCHAR(255)) = S.External_Id__c
--AND S.Transformation IN ('Account'))
--WHEN MATCHED THEN UPDATE
--SET T.SFID = S.Id
--;
IF (SELECT Transformation FROM Inserted) = 'Account'
BEGIN
    MERGE INTO GQF_AccountMaster T
    USING Inserted S
    ON T.ClientId = S.External_Id__c
    WHEN MATCHED THEN UPDATE
    SET T.SFID = S.Id;
END
```
An astute DBA will notice the CAST in there. Given the design, the Inserted table’s External_Id__c column is sort of a catch-all for all sorts of various types of IDs, and some in fact can be up to 255 characters. However, in the case of an Account it’s a varchar(10).
The original developer probably put the CAST in there because they didn’t want to blow up the MERGE statement if it compared a transformation other than an Account. (From what I can tell, T-SQL does not guarantee short-circuit evaluation; if I’m wrong, please let me know and point me to definitive documentation.) However, the minute you wrap that column in a CAST, you lose the ability to seek using the index; you have to use a scan.
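You can see the sargability problem in miniature with a throwaway table. This is an illustrative sketch (the temp table and values are made up): the first query’s predicate compares the indexed column directly and can seek; the second wraps the column in a CAST, which hides it from the index and forces a scan of every row.

```sql
-- Hypothetical repro: ClientId is the clustered index key.
CREATE TABLE #AccountMaster (
    ClientId varchar(10) NOT NULL PRIMARY KEY CLUSTERED,
    SFID     varchar(18) NULL
);

-- Seek: the bare column is compared, so the index can be used.
SELECT SFID
FROM #AccountMaster
WHERE ClientId = '1962062';

-- Scan: CAST on the column side makes the predicate non-sargable.
SELECT SFID
FROM #AccountMaster
WHERE CAST(ClientId AS VARCHAR(255)) = '1962062';

DROP TABLE #AccountMaster;
```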
So I rewrote the commented-out section into an IF to guarantee we were only dealing with Account transformations, and then I stripped out the CAST.
Then I reran and watched. My index scan of 300K rows was down to a seek of 3 rows. The trigger now performed in subsecond time. Not bad for an hour or so of work. That and some other improvements meant we could now handle a few thousand inserts and updates in the time it previously took to do 10 or so. It’s one of those days where I like to think my client got their money’s worth out of me.
Slight note: Next week I will be at PASS Summit so not sure if/when I’ll be blogging. But follow me on Twitter @stridergdm.