Great question. In opening this can of worms, I would like to noodle the premise if you don't mind :-) just to learn if we're solving performance in the right quadrant. What motivates your question exactly?
easy problem          ^          hard problem
easy solution         |          easy solution
                      |
[start here]          |          [graduate to here]
                      |
<---------------------+--------------------->
                      |
[dragons be here]     |          [data scientists]
                      |
easy problem          |          hard problem
hard solution         |          hard solution
Performance is definitely important, but the Force.com platform is pretty good at keeping you within reasonable boundaries. You don't have to worry about nginx vs IIS vs Apache serving XYZ requests per second. Float above that stuff: Salesforce throws smarts and hardware at those problems so we don't have to.
As a service layer developer, err on the side of inspecting:
- the performance of callouts (web services, third-party hooks hanging off your controllers, or crazy stuff living in JavaScript code facing your users via custom buttons, etc.),
- if you're an ISV developing a real product, keeping swathes of DML out of tests for your patience/sanity's sake, and checking that large inserts and large deletes aren't anywhere near governor limits,
- your use of asynchronous tools / set-and-forget methods (like @future, batches, schedules) to detour any heavy lifting away from execution contexts invoked by user interfaces.
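That async detour can be as small as an @future hop. A minimal sketch — the class and method names here are hypothetical, not platform APIs:

```apex
public with sharing class HeavyLiftingService
{
    // Runs later, in its own execution context with its own (higher) async limits,
    // so the user-facing request returns without waiting on this work.
    @future(callout=true)
    public static void doHeavyLifting(Set<Id> recordIds)
    {
        // callouts, big recalculations, mass DML, etc. go here
    }
}
```

From a trigger handler or controller you just call `HeavyLiftingService.doHeavyLifting(recordIds)` and move on.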
Rather than doing legwork for the sake of the Apex runtime, optimize for you the architect, us the developers, them the future maintainers. The Apex runtime will get faster and smarter, you don't need to do it any favours. Principle of least astonishment and semantics wins over tricks every time.
Governor limits are the thoughtful and useful straitjacket that gives us a gentle slap in the face as course correction if code falls outside those reasonable boundaries.
As a client-side developer, invest your valuable time:
- taking advantage of speedy (JavaScript Remoting) and reactive (Streaming API) features to offer the snappiness (or perceived snappiness) your users expect, decoupled from Apex performance,
- checking the expires attributes of pages holding JavaScript clients, and the cache-control attributes of static resources (zips of course, with concatenated CSS/JS courtesy of a non-overkill build script),
- profiling first, shooting later!
Here's my opinion on what "best practice" is for trigger handlers.
- Your Trigger should only answer when questions.
- Your Handler should only answer what/which questions.
  - What operation(s) to perform?
  - Which records to act on (criteria)?
- Your Service should only answer how questions.
  - How to perform each action?
  - How to apply each filter (criterion)?
Each layer will pass stateful information (the trigger records) down as needed. In my observation, the term Helper class is not used as rigorously as the terms above.
In your scenario, all your logic can be written in a CaseService class. Something like the following:
public with sharing class CaseService
{
    public static void updateChildAccounts(List<Case> records)
    {
        Set<Id> accountIds = new Set<Id>();
        for (Case record : records)
        {
            if (record.AccountId != null) accountIds.add(record.AccountId);
        }

        List<SObject> recordsToUpdate = new List<SObject>();
        for (Account child : [
            SELECT Id, (SELECT Id FROM ABCs__r)
            FROM Account
            WHERE Id IN :accountIds
        ])
        {
            // set account fields
            recordsToUpdate.add(child);
            for (ABC__c grandchild : child.ABCs__r)
            {
                // set ABC__c fields
                recordsToUpdate.add(grandchild);
            }
        }

        // sort() groups the records by sObject type, minimizing DML "chunks"
        recordsToUpdate.sort();
        update recordsToUpdate;
        // error handling strongly recommended, but omitted here for brevity
    }
}
There's a lot to unpack about how the above was written to consume just one query and one DML operation. About the query limits: you did consume a second query by getting the children, but this type of sub-query counts against a separate governor limit (the "aggregate query" limit), which is usually not one you have to worry about too much.
The next thing to understand is that you can Create Records for Different Object Types in a single DML statement. You can insert up to ten different types, but if you alternate back and forth between Account and ABC__c, each switch starts a new chunk, and each chunk counts towards that maximum. That's where sort comes in, and thankfully the first step in the sort sequence is to compare the sObject type. So after you call sort, you're back down to two chunks and you're good to go!
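To make the chunking concrete, here's a hypothetical sketch (assume acct1, abc1, etc. are records already in memory):

```apex
// Alternating types: Account, ABC__c, Account, ABC__c = one chunk per switch.
// With longer alternating lists, this exceeds the ten-chunk ceiling and the DML fails.
List<SObject> recordsToUpdate = new List<SObject>{ acct1, abc1, acct2, abc2 };

recordsToUpdate.sort();  // groups by type: Account, Account, ABC__c, ABC__c
update recordsToUpdate;  // now just two chunks, comfortably under the limit
```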
As for exception handling, you should read up on how to best handle a DmlException
. I'm having a surprisingly hard time finding any good resources to link at the moment, but I'll try to come back and add it in if I find one.
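In the meantime, one common pattern (a sketch, not the only way) is to use the partial-success Database methods instead of a bare update statement, and inspect each result:

```apex
// allOrNone = false: failures don't throw, they come back as results
Database.SaveResult[] results = Database.update(recordsToUpdate, false);
for (Database.SaveResult result : results)
{
    if (!result.isSuccess())
    {
        for (Database.Error error : result.getErrors())
        {
            // log it, surface it to the user, or collect the failures for retry
            System.debug(error.getStatusCode() + ': ' + error.getMessage());
        }
    }
}
```

With a bare `update` you'd catch `DmlException` instead; the Database methods just give you finer-grained, per-record control.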
My basic pattern for the rest would look like:
Trigger
trigger CaseTrigger on Case (before insert) // add more events here as you need them
{
    CaseTriggerHandler handler = new CaseTriggerHandler(Trigger.new, Trigger.oldMap);
    if (Trigger.isBefore)
    {
        if (Trigger.isInsert) handler.beforeInsert();
        if (Trigger.isUpdate) handler.beforeUpdate(); // if you needed it ('before update' must be in the event list)
    }
    if (Trigger.isAfter) // if you needed it (an 'after' event must be in the event list)
    {
        // etc.
    }
}
Handler
public with sharing class CaseTriggerHandler
{
    @TestVisible static Boolean bypassTrigger = false;

    final List<Case> newRecords;
    final Map<Id, Case> oldMap;

    public CaseTriggerHandler(List<Case> newRecords, Map<Id, Case> oldMap)
    {
        this.newRecords = newRecords;
        this.oldMap = oldMap;
    }

    public void beforeInsert()
    {
        if (bypassTrigger) return;
        CaseService.updateChildAccounts(newRecords);
    }

    public void afterInsert() { /* if needed */ }
    public void beforeUpdate() { /* if needed */ }
    // etc.
}
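That @TestVisible bypass flag is what lets tests set up data without dragging the whole trigger along. A sketch of how a test might lean on it:

```apex
@IsTest
static void setsUpDataWithoutRunningTheTrigger()
{
    CaseTriggerHandler.bypassTrigger = true; // beforeInsert() short-circuits
    insert new Case(Subject = 'Test data only');
    CaseTriggerHandler.bypassTrigger = false;
    // ...then exercise CaseService methods directly against hand-crafted data
}
```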
Best Answer
Good question, but there are MANY possible answers, so I will just throw in my 2 cents.
The first and easiest way to 'BULKIFY' is to leverage collections in order to save yourself SOQL calls and DML statements.
Here's an older, but still great resource by Jeff Douglas on utilizing collections in Salesforce.
http://blog.jeffdouglas.com/2011/01/06/fun-with-salesforce-collections/
IMO, leveraging collections is the first and best place to start in trying to optimize and bulkify your triggers. I will now show a few examples of how leveraging collections can save you many governor limit headaches.
This code uses one DML statement for each Account in trigger.new
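The code block itself seems to have gone missing here, but it would have looked something like this (object and field names are illustrative):

```apex
trigger AccountTrigger on Account (after insert)
{
    for (Account acct : Trigger.new)
    {
        Custom_Object__c obj = new Custom_Object__c(Account__c = acct.Id);
        insert obj; // one DML statement per Account
    }
}
```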
The example above makes a DML call for every account in trigger.new. If this is a mass insert, you will run into governor limit issues.
This code now uses one DML statement total, regardless of the size of trigger.new
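Again the snippet itself appears to be missing; the bulkified version would look roughly like this (same illustrative names):

```apex
trigger AccountTrigger on Account (after insert)
{
    List<Custom_Object__c> objectsToInsert = new List<Custom_Object__c>();
    for (Account acct : Trigger.new)
    {
        objectsToInsert.add(new Custom_Object__c(Account__c = acct.Id));
    }
    insert objectsToInsert; // one DML statement total
}
```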
This example moves the DML outside of the loop. Instead you add a new custom object to the list inside of the loop. Once you have gone through the entire list of trigger.new, you insert the list of custom objects.
This code uses one SOQL query for each Contact in trigger.new
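The original example is missing here too; it would have been shaped something like this (a Contact trigger looking up its parent Account, names illustrative):

```apex
trigger ContactTrigger on Contact (after insert)
{
    for (Contact con : Trigger.new)
    {
        Account acct = [SELECT Id, Name FROM Account WHERE Id = :con.AccountId];
        // ...do something with acct - one query per Contact
    }
}
```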
The example above makes a SOQL query for every contact in trigger.new. If this is a mass insert, you will run into governor limit issues.
This code now uses one SOQL query total, regardless of the size of trigger.new
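Once more the snippet didn't survive; the map-based version would look roughly like this (same illustrative names):

```apex
trigger ContactTrigger on Contact (after insert)
{
    Set<Id> accountIds = new Set<Id>();
    for (Contact con : Trigger.new)
    {
        if (con.AccountId != null) accountIds.add(con.AccountId);
    }

    // one query gathers every Account up front, keyed by Id
    Map<Id, Account> accountsById = new Map<Id, Account>(
        [SELECT Id, Name FROM Account WHERE Id IN :accountIds]
    );

    for (Contact con : Trigger.new)
    {
        Account acct = accountsById.get(con.AccountId); // no query inside the loop
        // ...do something with acct
    }
}
```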
The example above utilizes a map to store all accounts related to the contacts in trigger.new. The advantage here is that one single SOQL query gathers all the accounts. You can then get each account easily within the loop without having to query the database. You now have the same trigger with a single SOQL query regardless of the size of trigger.new.
I believe this is one of the best practices to optimize your triggers for bulk operations.
To take it a step further, there are a few more things that we can do to optimize our triggers. One of the best practices is to only use one trigger per object.
Let's assume you have two specific pieces of business logic that you need to apply after an account is created. The easy way to accomplish this would be to create two triggers on the Account object.
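The snippet here was apparently lost; it presumably looked something like this (names illustrative):

```apex
trigger AccountTrigger1 on Account (after insert)
{
    // business logic piece 1
}

trigger AccountTrigger2 on Account (after insert)
{
    // business logic piece 2 - order relative to AccountTrigger1 is not guaranteed
}
```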
This could work well depending on your situation. But what if you have logic in trigger2 that depends on the outcome of trigger1? There is no guarantee of the order in which your triggers will run, so in some cases trigger1 will run first and in others trigger2 will.
A simple approach to solving this is to combine the logic into a single trigger.
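That missing snippet would be shaped roughly like this (names illustrative):

```apex
trigger AccountTrigger on Account (after insert)
{
    // business logic piece 1 runs first...

    // ...then business logic piece 2 - the order is now explicit
}
```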
This works technically, as you can now control the order of the operations, and it is a best practice to have only one trigger per object, but it can still be improved a bit. Let's say, for argument's sake, this is a fairly large trigger with a few different pieces of complex logic.
There are a few things that jump out as potential problems: the logic is locked inside the trigger, so it can't be reused elsewhere, and it can't be tested in isolation without performing DML to fire the whole trigger.
So how do we fix that?
We would want to move the logic from the trigger itself into a utility or handler class.
Handler
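The handler code didn't survive extraction here; based on the description that follows, it would be shaped roughly like this, with the trigger shrinking to a one-line delegation (class and method names are illustrative):

```apex
trigger AccountTrigger on Account (after insert)
{
    AccountTriggerHandler.handleAfterInsert(Trigger.new);
}

public with sharing class AccountTriggerHandler
{
    public static void handleAfterInsert(List<Account> newAccounts)
    {
        doBusinessLogicOne(newAccounts);
        doBusinessLogicTwo(newAccounts);
    }

    public static void doBusinessLogicOne(List<Account> newAccounts)
    {
        // first piece of business logic
    }

    public static void doBusinessLogicTwo(List<Account> newAccounts)
    {
        // second piece of business logic
    }
}
```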
This solves both of the problems mentioned above. You can now reuse your code: these public static methods can be called from other places. You can also segment your testing and test individual smaller methods; you no longer have to make a DML call and run the whole trigger, you can just test each method directly.
Hopefully this answers some of your bulkification/best practices questions. There is actually quite a bit further you can go with optimizing, getting into trigger frameworks and interfaces, but I think this is a decent start to the best practices for writing your triggers.
P.S. On a side note, this might be the kick I needed to actually start a blog as this turned out to be much lengthier than I originally planned.