Here is my attempt at solving this problem, adapted from my code here.
public class MyObjectServices {
    public static Set<Id> recordsProcessed = new Set<Id>();
    public static Map<Id, MyObject__c> cachedObjects = new Map<Id, MyObject__c>();

    // Returns true on the first run for this "batch" of records
    public static Boolean myTriggerMethod(Map<Id, MyObject__c> oldMap, Map<Id, MyObject__c> newMap){
        Boolean isFirstRun = true;
        // The number of records processed in all "batches" to this point
        Integer sizeBefore = recordsProcessed.size();
        // Add the Ids of all records being processed this "batch". If they've already
        // been processed, the set will prevent duplicates from being added
        recordsProcessed.addAll(newMap.keySet());
        // Determine whether the records included in the current "batch" of 200
        // have been processed before
        if (recordsProcessed.size() == sizeBefore){
            isFirstRun = false;
            // Remove them from the set so they can be re-processed in the same transaction
            recordsProcessed.removeAll(newMap.keySet());
        } else {
            cachedObjects = newMap;
        }
        return isFirstRun;
    }
}
After evaluating this logic, if you are in a first run, isFirstRun will be true and you will use the oldMap and newMap as before. If you are evaluating this code after a workflow, isFirstRun will be false, and cachedObjects will hold the versions of the objects from before the workflow.
Now, after the workflow run, the Ids will be removed from the set, allowing subsequent updates on the same records to interact as usual. The only shortcoming I see is that this will not work if the trigger itself calls an update on the same records (which Apex prevents, though workarounds exist).
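The set-size comparison at the heart of this approach can be modeled outside of Salesforce. This Java sketch (class and method names are my own, not part of the Apex above) shows how adding a batch's Ids to a static set and comparing sizes before and after distinguishes a first run from a workflow re-fire:

```java
import java.util.HashSet;
import java.util.Set;

public class RecursionGuard {
    // Ids processed by earlier "batches" in this transaction
    private static final Set<String> recordsProcessed = new HashSet<>();

    // Returns true the first time this batch of Ids is seen;
    // returns false (and un-marks the Ids) on a repeat run
    public static boolean isFirstRun(Set<String> batchIds) {
        int sizeBefore = recordsProcessed.size();
        recordsProcessed.addAll(batchIds); // duplicates are silently ignored
        if (recordsProcessed.size() == sizeBefore) {
            // No new Ids were added, so every record was seen before: a re-run.
            // Remove them so a later, legitimate update is treated as fresh.
            recordsProcessed.removeAll(batchIds);
            return false;
        }
        return true;
    }

    public static void main(String[] args) {
        Set<String> batch = Set.of("001A", "001B");
        System.out.println(isFirstRun(batch)); // true  (first run)
        System.out.println(isFirstRun(batch)); // false (workflow re-fire)
        System.out.println(isFirstRun(batch)); // true  (fresh again after removal)
    }
}
```

Note that the size check treats a batch as a first run if it contains even one unseen Id, which mirrors the all-or-nothing behavior of the Apex version.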
Recent conversation on another question prompted me to take a look into this.
The documentation has changed since this question was originally asked, but a key observation here is that when allOrNone = false, any DML retrying that Salesforce does will not contain the record(s) that previously failed.
That fact, combined with the fact that we are unable to add or remove records from trigger context variables (TCVs from here on) gives us another method to detect recursion.
In normal operation, the TCVs for each trigger chunk will have absolutely no overlap with one another. By storing the Ids given in the TCVs in a separate static Set<Id>, if we detect any overlap, we can say with confidence that we're working in an allOrNone = false context, and use that to remove records previously marked as having been handled.
I threw together a class to act as a proof of concept:
public class AntiRecursion {
    // An extension of the normal "static Set<Id>" that allows finer granularity.
    // With this, we say "this Id has been processed by classes with these names"
    private static Map<Id, Set<String>> recIdToWorkDone = new Map<Id, Set<String>>();
    private static Set<Id> bulkIds = new Set<Id>();

    public static void track(Map<Id, SObject> sobjMap){
        Set<Id> localBulkIds = bulkIds.clone();
        localBulkIds.retainAll(sobjMap.keySet());
        // Any overlap in the bulkIds means we're in allOrNone = false territory
        // and there was at least one failure.
        // Remove the Ids from our encountered Ids collection so we don't think
        // we're recursing
        if(!localBulkIds.isEmpty()){
            recIdToWorkDone.keySet().removeAll(sobjMap.keySet());
        }
        bulkIds.addAll(sobjMap.keySet());
    }

    public static Boolean check(Id sobjId){
        return check(sobjId, null);
    }

    // Returns true if the sobjId has not been encountered yet
    // Returns false if the sobjId has been encountered (i.e. we're in recursive execution)
    public static Boolean check(Id sobjId, String className){
        if(sobjId == null){
            throw new System.IllegalArgumentException('Id parameter cannot be null');
        }
        Set<String> result = recIdToWorkDone.get(sobjId);
        Boolean wasNull = result == null;
        // Capture this before adding className below, or the contains() test
        // would always succeed
        Boolean seenByClass = !wasNull && result.contains(className);
        if(wasNull){
            recIdToWorkDone.put(sobjId, new Set<String>{className});
        } else {
            result.add(className);
        }
        if(String.isBlank(className)){
            return wasNull;
        } else {
            return wasNull || !seenByClass;
        }
    }
}
Example usage:
trigger testTrigger on Physical_Inventory__c (before update) {
    AntiRecursion.track(Trigger.newMap);
    Integer i = 0;
    for(Physical_Inventory__c pi : Trigger.new){
        System.debug(pi);
        if(AntiRecursion.check(pi.Id)){
            System.debug('record has not undergone recursion');
        } else {
            System.debug('record has recursed');
        }
        if(i == 0){
            pi.addError('just unlucky, I guess');
            i++;
        }
    }
}
...and the anonymous Apex to test it:
List<Physical_Inventory__c> piList = new List<Physical_Inventory__c>();
for(Integer i = 0; i < 10; i++){
    // In my org Physical_Inventory__c is a custom object with an autonumber name, so
    // all I need to do is add a blank instance
    piList.add(new Physical_Inventory__c());
}
List<Database.SaveResult> srList;
srList = Database.insert(piList, true);
// Test the behavior by changing this between allOrNone = true and allOrNone = false
srList = Database.update(piList, true);
// clean up after ourselves
delete piList;
Specifying allOrNone = false and calling AntiRecursion.track() in the trigger results in us removing the previously encountered Ids from recIdToWorkDone (the desired behavior).
If the call to AntiRecursion.track() is commented out, then it's equivalent to the simple static Set<Id> recursion prevention mechanism (which results in us mistakenly marking rolled-back and retried records as having been previously handled when allOrNone = false).
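The overlap test that track() relies on can also be sketched in Java. This is a simplified analogue of my own devising (no per-class granularity, and the names are mine, not the Apex class's): consecutive trigger chunks normally share no Ids, so any overlap signals an allOrNone = false retry, and the retried Ids get un-marked:

```java
import java.util.HashSet;
import java.util.Set;

public class AntiRecursionSketch {
    private static final Set<String> handled = new HashSet<>(); // Ids already processed
    private static final Set<String> bulkIds = new HashSet<>(); // every Id seen in any chunk

    // Call once per trigger invocation with that chunk's Ids
    public static void track(Set<String> chunkIds) {
        Set<String> overlap = new HashSet<>(bulkIds);
        overlap.retainAll(chunkIds);
        if (!overlap.isEmpty()) {
            // Overlap means a partial-failure retry: un-mark these Ids so the
            // retried records are not mistaken for recursion
            handled.removeAll(chunkIds);
        }
        bulkIds.addAll(chunkIds);
    }

    // True if this Id has not been handled yet in the transaction
    public static boolean check(String id) {
        return handled.add(id); // Set.add() returns false if already present
    }
}
```

Calling track() with a chunk that repeats an earlier Id clears that Id's "handled" mark, so the next check() for it returns true again, exactly the retry behavior described above.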
Benefits of this approach:
- No queries required
- No abuse of Queueable
- No need for a new custom object
- No cleanup
- Relatively simple extension to an established approach
Drawbacks of this approach:
- Takes up more heap space than other approaches
- Can't be used with insert triggers
On that last point, I learned that partial failures on inserts cause entirely new record Ids to be assigned on the retries (meaning we'd never mark them as being part of a re-entrant case). Re-entrancy is mostly an issue with updates though, so it might not matter too much.
Best Answer
You wouldn't be able to use a trigger for this either. Just like workflows, triggers only fire when a record is inserted, edited, or deleted. If you have a job that you want to run on a scheduled basis that goes through records and updates them, then you should use Scheduled Apex.