[Salesforce] Is using Database.executeBatch from a trigger an anti-pattern?

In a managed package, we have a trigger on Account that propagates a custom field's status value from the Account to a large number of related custom objects. We wrote the propagation logic in a batchable class to avoid hitting governor limits.
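
Roughly, the setup looks like this (a minimal sketch; the names AccountStatusTrigger, StatusPropagationBatch, and Status__c are hypothetical):

    trigger AccountStatusTrigger on Account (after update) {
        // Collect only the Accounts whose status actually changed.
        Set<Id> changedIds = new Set<Id>();
        for (Account acc : Trigger.new) {
            if (acc.Status__c != Trigger.oldMap.get(acc.Id).Status__c) {
                changedIds.add(acc.Id);
            }
        }
        if (!changedIds.isEmpty()) {
            // The problematic call: this throws System.AsyncException when the
            // trigger itself fires from batch start/execute or a future method.
            Database.executeBatch(new StatusPropagationBatch(changedIds));
        }
    }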

But we recently hit this error:

    System.AsyncException: Database.executeBatch cannot be called from a
    batch start, batch execute, or future method

because a second managed package uses a batchable to update the Account.

This seems like a pretty nasty sort of coupling: if a trigger consumes the one and only layer of batch nesting available, then any other batch job whose work causes that trigger to run will break.
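
To illustrate the coupling (a hedged sketch; OtherPackageBatch stands in for the second package's job), any batch whose execute method updates Account fires the trigger above in batch context, so the nested Database.executeBatch call throws:

    public class OtherPackageBatch implements Database.Batchable<SObject> {
        public Database.QueryLocator start(Database.BatchableContext bc) {
            return Database.getQueryLocator('SELECT Id FROM Account');
        }
        public void execute(Database.BatchableContext bc, List<Account> scope) {
            // This DML fires the Account trigger, which then attempts
            // Database.executeBatch and hits the System.AsyncException.
            update scope;
        }
        public void finish(Database.BatchableContext bc) {}
    }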

So is using Database.executeBatch from a trigger an anti-pattern? Is it a better trade-off to schedule such updates instead?

Best Answer

I would certainly regard it as an anti-pattern, yes. The Salesforce documentation here states this:

Use extreme care if you are planning to invoke a batch job from a trigger. You must be able to guarantee that the trigger will not add more batch jobs than the five that are allowed. In particular, consider API bulk updates, import wizards, mass record changes through the user interface, and all cases where more than one record can be updated at a time.

I personally think this statement does not go far enough to discourage the practice, as it essentially breaks bulkification, which as you know is a big no-no. Unless the users of the given object fully accept and understand the implications (and it is often hard to be confident they do, to be honest), I would never use it.

Instead, as you suggest, create a scheduled job and have the trigger insert records into some kind of work-queue custom object so the work is processed in the background. Then send a Chatter post or email notification for any failed work items back to the user, with a means to re-queue the work once they have resolved the issue.
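
A minimal sketch of that pattern, assuming a hypothetical Work_Item__c custom object (fields Account__c, New_Status__c, Status__c); all class and field names here are illustrative:

    // The trigger only inserts lightweight work records; no async calls,
    // so it is safe regardless of the context that fired it.
    trigger AccountStatusTrigger on Account (after update) {
        List<Work_Item__c> work = new List<Work_Item__c>();
        for (Account acc : Trigger.new) {
            if (acc.Status__c != Trigger.oldMap.get(acc.Id).Status__c) {
                work.add(new Work_Item__c(
                    Account__c = acc.Id,
                    New_Status__c = acc.Status__c,
                    Status__c = 'Pending'));
            }
        }
        insert work;
    }

    // A scheduled job drains the queue; scheduled context may safely call
    // Database.executeBatch.
    public class WorkItemProcessor implements Schedulable, Database.Batchable<SObject> {
        public void execute(SchedulableContext sc) {
            Database.executeBatch(this);
        }
        public Database.QueryLocator start(Database.BatchableContext bc) {
            return Database.getQueryLocator(
                'SELECT Id, Account__c, New_Status__c FROM Work_Item__c ' +
                'WHERE Status__c = \'Pending\'');
        }
        public void execute(Database.BatchableContext bc, List<Work_Item__c> scope) {
            // ... propagate New_Status__c to the related custom objects here ...
            for (Work_Item__c item : scope) {
                item.Status__c = 'Done'; // set 'Failed' on error so it can be re-queued
            }
            update scope;
        }
        public void finish(Database.BatchableContext bc) {
            // Post to Chatter or email the user about any 'Failed' items here.
        }
    }

Schedule it once, e.g. System.schedule('Work item processor', '0 0 * * * ?', new WorkItemProcessor()); to run hourly.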

There is, of course, the most welcome Batch Apex flex queue on the horizon. Note, however, that it has its own limit of 100 queued items:

If the Apex flex queue has the maximum number (100) of jobs, this method returns an error and doesn’t place the job in the queue.
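
If you do end up enqueueing from user-driven paths, one defensive option (my own suggestion, not from the docs) is to check how full the flex queue is before calling Database.executeBatch, and fall back to leaving the work items pending for the scheduled job:

    // Jobs waiting in the flex queue have Status = 'Holding' in AsyncApexJob.
    Integer holding = [SELECT COUNT() FROM AsyncApexJob WHERE Status = 'Holding'];
    if (holding < 95) { // leave headroom below the 100-job flex queue cap
        Database.executeBatch(new WorkItemProcessor());
    } else {
        // Defer: the Work_Item__c records stay 'Pending' until the next
        // scheduled WorkItemProcessor run picks them up.
    }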

So even with this in place, my standard advice is to avoid linking batch jobs or @future work to granular user or API actions, as you can rapidly flood the platform, resulting in failures to enqueue or in the platform slowing overall processing down to compensate. Instead, consider a more robust way to group such work, with a solid user-notification and error-recovery solution in place.
