[Salesforce] Detecting Trigger -> Workflow -> Trigger

There is a lot of information out there on preventing trigger recursion. I have a related question I would like to pose, but it has some key differences. I was not able to find an existing answer that covers this case, but if I overlooked one, feel free to point it out.

Does anyone know a surefire way to detect the second execution of a trigger caused by a workflow field update, subject to the following criteria:

  • Trigger logic still works when there are no field updates – meaning the logic must not rely on a second run happening.
  • We can detect what the workflow actually changed – by default only Trigger.new reflects the workflow's field updates, but we are interested in the difference between Trigger.new before the workflow and Trigger.new after the workflow, because we have already made some calculations and may need to adjust or roll them back.
  • A subsequent update to the same record in the same execution context (via Apex) is treated as a new trigger run and is not mistaken for a workflow field update, even if there are no workflows on the object.
  • (Flexible) Third-party Apex code does not need to call global utility methods to reset the cache used by this trigger pattern.

The closest I have come so far is to cache a static "Trigger.mid" collection at the end of a trigger run, keyed by the Trigger.old items. When a trigger runs, it checks whether the Trigger.mid cache has an entry for its Trigger.old key; if it does, it pulls that entry and uses it in place of Trigger.old. This gives us the actual changes that were made by the workflows. However, I have not figured out how to correctly clear the cache when there are no workflows, so a subsequent Apex update to the exact same record list causes a problem: it's seen as a second run caused by a workflow field update. Any thoughts on other approaches, or on how to tweak this approach to meet the requirements?

I think this is really more of a theoretical question, so I have kept code to a minimum, but here is a rough sketch of the caching idea to make it concrete.
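(All names below are illustrative, the key derivation is simplified to sorted record Ids, and the sketch deliberately still exhibits the cache-clearing problem described above.)

public class MidCache {
    // Maps a key derived from Trigger.old to the "mid" state:
    // Trigger.new as it looked at the end of the first run
    private static Map<String, Map<Id, SObject>> mids = new Map<String, Map<Id, SObject>>();

    // Build a reproducible key from the record Ids
    private static String keyFor(Map<Id, SObject> records) {
        List<String> ids = new List<String>();
        for (Id recordId : records.keySet()) {
            ids.add(String.valueOf(recordId));
        }
        ids.sort();
        return String.join(ids, ';');
    }

    // Call at the end of a trigger run: remember Trigger.new under
    // the key built from Trigger.old
    public static void store(Map<Id, SObject> oldMap, Map<Id, SObject> newMap) {
        mids.put(keyFor(oldMap), newMap);
    }

    // Call at the start of a trigger run: if this Trigger.old key has
    // been seen before, assume a workflow re-run and return the cached
    // "mid" map in place of Trigger.old
    public static Map<Id, SObject> oldOrMid(Map<Id, SObject> oldMap) {
        String key = keyFor(oldMap);
        return mids.containsKey(key) ? mids.remove(key) : oldMap;
    }
}

The false positive shows up exactly where described: with no workflows on the object, the entry written by store is never consumed by a genuine re-run, so the next Apex update with the same Ids wrongly hits the cache.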

Best Answer

Here is my attempt at solving this problem - adapted from my code here.

public class MyObjectServices {
    public static Set<Id> recordsProcessed = new Set<Id>();
    public static Map<Id, MyObject__c> cachedObjects = new Map<Id, MyObject__c>();

    public static Boolean myTriggerMethod(Map<Id, MyObject__c> oldMap, Map<Id, MyObject__c> newMap){
        Boolean isFirstRun = true;

        //The number of records processed in all "batches" to this point
        Integer sizeBefore = recordsProcessed.size();

        //Add the Ids of all records being processed this "batch". If they've
        //already been processed, the set will prevent duplicates from being added
        recordsProcessed.addAll(newMap.keySet());

        //If the set did not grow, every record in the current "batch" of up
        //to 200 has been processed before, so this is the workflow re-run
        if (recordsProcessed.size() == sizeBefore){
            isFirstRun = false;
            //Remove them from the set so they can be re-processed in the same transaction
            recordsProcessed.removeAll(newMap.keySet());
        } else {
            //First run: remember the pre-workflow versions of the records
            cachedObjects = newMap;
        }
        return isFirstRun;
    }
}

After this method runs, if you are in a first run, isFirstRun will be true and you use oldMap and newMap as before. If it runs after a workflow field update, isFirstRun will be false, and cachedObjects will hold the versions of the records from before the workflow (the newMap captured on the first run).
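For example, a trigger could consume this as follows (a sketch; the trigger name is illustrative and it assumes the Boolean return shown above):

trigger MyObjectTrigger on MyObject__c (after update) {
    Boolean isFirstRun = MyObjectServices.myTriggerMethod(
        (Map<Id, MyObject__c>) Trigger.oldMap,
        (Map<Id, MyObject__c>) Trigger.newMap
    );
    if (isFirstRun) {
        // Normal pass: diff Trigger.oldMap against Trigger.newMap
    } else {
        // Workflow re-run: diff MyObjectServices.cachedObjects
        // (pre-workflow values) against Trigger.newMap (post-workflow values)
    }
}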

Now, after the workflow re-run, the Ids will have been removed from the set, allowing subsequent updates on the same records to behave as usual. The only shortcoming I see is that this will not work if the trigger itself issues an update on the same records (Apex blocks that directly, though workarounds exist).
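To illustrate the reset, consider two updates on the same record in one transaction (an anonymous Apex sketch, assuming a workflow field update fires on the first DML):

MyObject__c rec = [SELECT Id, Name FROM MyObject__c LIMIT 1];

// First run adds the Id to recordsProcessed; the workflow re-run then
// finds it already present, reports isFirstRun = false, and removes it
update rec;

// Because the Id was removed during the re-run, this DML is correctly
// treated as a fresh first run
rec.Name = 'Changed again';
update rec;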