[Salesforce] Chaining Queueables: Clarification & Practical Usage

I am trying to understand what I can and can't do when chaining Queueables. I have read several tutorials on this, but it seems contradictory that I can't create more than one child job, yet I can create up to 50 jobs in a transaction.

Trailhead says the following, which is commonly referred to verbatim in various blogs on the subject. Emphasis mine.

You can add up to 50 jobs to the queue with System.enqueueJob in a single transaction.

When chaining jobs, you can add only one job from an executing job with System.enqueueJob, which means that only one child job can exist for each parent queueable job. **Starting multiple child jobs from the same queueable job is a no-no**.

**No limit is enforced on the depth of chained jobs**, which means that you can chain one job to another job and repeat this process with each new child job to link it to a new child job. However, for Developer Edition and Trial orgs, the maximum stack depth for chained jobs is 5, which means that you can chain jobs four times and the maximum number of jobs in the chain is 5, including the initial parent queueable job.

I find it contradictory that you can add up to 50 jobs in one transaction, yet you cannot start more than one job from another job.

I am trying to implement a pattern where the things that truly need to be synchronous stay synchronous, and everything else runs in a Queueable. This results in a pattern where I do something like the following:

In the Trigger

    if (Trigger.isBefore && Trigger.isUpdate) {
        ContactTriggerHandler.fireMyProcess(Trigger.oldMap, Trigger.newMap);
    }

In the Trigger Handler

    public static void fireMyProcess(Map<Id,Contact> oldMap, Map<Id,Contact> newMap)
    {
        List<Contact> contacts = new List<Contact>();
        for (Contact c : newMap.values())
            if (should be processed asynchronously) // placeholder condition
                contacts.add(c);

        Id jobId = System.enqueueJob(new ContactTriggerHandlerQueueable(contacts));
    }

Then, a Queueable process that actually implements my logic.

    public class ContactTriggerHandlerQueueable implements Queueable {

        private List<Contact> contacts;

        public ContactTriggerHandlerQueueable(List<Contact> contacts)
        {
            // Save off my Contacts into a class variable
            this.contacts = contacts;
        }

        public void execute(QueueableContext context) {
            // My business logic
        }
    }

Assume here that I have a similar pattern for Account triggers. There are no callouts.

The problem with this approach is that when my business logic updates an Account, the Account trigger fires and enqueues another Queueable, and I get the error: Too many queueable jobs added to the queue: 2

So this violates the chaining rule: my Contact Queueable ends up creating an Account Queueable, which is considered illegal chaining.

I then tried a similar pattern where I queued up a Queueable inside of a Queueable:

    public ContactTriggerHandlerQueueable(List<Contact> contacts)
    {
        // Save off my Contacts into a class variable
    }

    public void execute(QueueableContext context) {
        Id jobId = System.enqueueJob(new SomeOtherTriggerHandlerQueueable(contacts));
    }

This is far less preferable than the first pattern because (1) it requires every trigger to be aware of whether it was called from a Queueable (using Limits.getQueueableJobs()), and (2) the preceding logic must explicitly invoke the downstream Queueable, when normally the downstream trigger would pick it up on its own.

This too fails, perhaps because there is a trigger that's creating a Queueable downstream.
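
For reference, the kind of context-awareness I mean in (1) would have to look roughly like this in every handler (an untested sketch; AccountTriggerHandlerQueueable is just a stand-in name):

    public static void enqueueIfAllowed(List<Account> accounts) {
        if (!System.isQueueable()) {
            // Synchronous context: free to enqueue (up to 50 jobs per transaction).
            System.enqueueJob(new AccountTriggerHandlerQueueable(accounts));
        } else if (Limits.getQueueableJobs() < 1) {
            // Already asynchronous, but the one allowed child job hasn't been used yet.
            System.enqueueJob(new AccountTriggerHandlerQueueable(accounts));
        } else {
            // No capacity left: fall back to running the logic synchronously.
            new AccountTriggerHandlerQueueable(accounts).execute(null);
        }
    }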

So, how do I practically make use of the ability to add up to 50 jobs in a transaction while never starting multiple jobs from the same job? I am really struggling to reconcile these two concepts since they seem contradictory.

Best Answer

There are two modes of operation in Salesforce: synchronous and asynchronous. The rule is that if you're synchronous, you get 50 jobs for that transaction; once you go asynchronous, you're allowed only one child job. This is done to prevent "rabbits." A rabbit is code that reproduces until it consumes all available resources. In a Queueable, this would be a rabbit:

    public class Rabbit implements Queueable {
        public void execute(QueueableContext context) {
            // Without governor limits, each run would double the number of pending jobs.
            System.enqueueJob(this);
            System.enqueueJob(this);
        }
    }

As you can see, one job would spawn 2, those would spawn 4, and those would spawn 8... you end up with exponential growth. Without governor limits, doing such a thing could bring the servers down, which is why we have limits in place, so that this can't happen.

While this is an explicit example, the same condition is much harder to spot when you have multiple triggers that may each enqueue a job, potentially indefinitely.
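
To make that concrete, here's a hypothetical sketch (the class names are invented) of the same rabbit hiding across two triggers: neither class looks dangerous on its own, but together they would re-enqueue each other forever.

    public class ContactJob implements Queueable {
        public void execute(QueueableContext context) {
            // Updates Accounts here; the Account trigger reacts by enqueuing an AccountJob.
        }
    }

    public class AccountJob implements Queueable {
        public void execute(QueueableContext context) {
            // Updates Contacts here; the Contact trigger reacts by enqueuing a ContactJob.
        }
    }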

So, generally speaking, you'll need to make a design choice in order to fix the problem. One potential solution would be to write a utility class, perhaps something like the following:

    public class QueueableUtil implements Queueable {
        Queueable[] payload;
        static QueueableUtil self;

        // Private constructor; callers go through the static enqueueJob method.
        QueueableUtil(Queueable item) {
            payload = new Queueable[] { item };
        }

        public static Id enqueueJob(Queueable item) {
            if(!System.isQueueable()) {
                // Synchronous context: start a new chain.
                return System.enqueueJob(new QueueableUtil(item));
            } else {
                // Already inside the chain: append to the running instance's payload.
                self.payload.add(item);
                return null;
            }
        }

        public void execute(QueueableContext context) {
            self = this;
            // Run the next item; anything it adds via enqueueJob lands back in payload.
            payload.remove(0).execute(context);
            if(!payload.isEmpty()) {
                // Chain one (and only one) child job to work through the rest.
                System.enqueueJob(this);
            }
        }
    }

Now, you can enqueue your jobs using the new method:

    QueueableUtil.enqueueJob(new ContactTriggerHandlerQueueable(Trigger.new));

If you're in a Queueable context, your job is added to the utility's internal list and executed later in the chain; if you're not, it's enqueued normally.

I haven't tested this, but it looks like it should work. Please keep in mind that you may hit heap limits if your queue gets too big, so try to design your Queueable jobs to have as small a footprint as possible.

Keep in mind that this requires discipline; you'll need to use this utility everywhere for it to be useful, because it only enqueues a new job from a non-queueable context, so as not to violate the chaining rules.
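
For example, the handler from the question might route through the utility like this (an untested sketch; shouldProcessAsync() is just a stand-in for the placeholder condition above):

    public class ContactTriggerHandler {
        public static void fireMyProcess(Map<Id,Contact> oldMap, Map<Id,Contact> newMap) {
            List<Contact> contacts = new List<Contact>();
            for (Contact c : newMap.values()) {
                if (shouldProcessAsync(c)) {
                    contacts.add(c);
                }
            }
            // The utility decides whether this starts a new chain or rides along with
            // the one already running, so the handler never calls System.enqueueJob itself.
            QueueableUtil.enqueueJob(new ContactTriggerHandlerQueueable(contacts));
        }

        private static Boolean shouldProcessAsync(Contact c) {
            return true; // placeholder for whatever the real criterion is
        }
    }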

You may also need to consider adding additional logic if you need to worry about scheduled, future, and batchable contexts, which have similar restrictions (only one call per transaction). I wouldn't consider this a full solution, but it's better than nothing.
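
A rough, untested sketch of that extra logic, reusing the self and payload members from the class above (System.isBatch(), System.isFuture(), and System.isScheduled() are the standard context checks):

    public static Id enqueueJob(Queueable item) {
        if (System.isQueueable() && self != null) {
            // Inside our own chain: just append to the running instance's payload.
            self.payload.add(item);
            return null;
        }
        if (System.isBatch() || System.isFuture() || System.isScheduled()) {
            // Other async contexts have no chain to append to, so spend the one
            // enqueueJob call they allow.
            return System.enqueueJob(item);
        }
        // Synchronous context: start a new chain.
        return System.enqueueJob(new QueueableUtil(item));
    }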

Keep in mind that if you use a method like this, it is entirely possible to chain even more than 50 jobs in a single transaction. You may want to add some safeguards to prevent runaway processes.
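
One possible safeguard, sketched as a tweak to the QueueableUtil above (the cap value is arbitrary and purely illustrative): carry a counter on the instance and stop re-enqueuing once it's hit.

    Integer chainDepth = 0;                     // instance field, so it survives serialization
    static final Integer MAX_CHAIN_DEPTH = 50;  // illustrative cap, not a platform limit

    public void execute(QueueableContext context) {
        self = this;
        payload.remove(0).execute(context);
        if (!payload.isEmpty() && chainDepth < MAX_CHAIN_DEPTH) {
            chainDepth++;
            System.enqueueJob(this);            // re-enqueue only while under the cap
        }
        // If the cap is hit with work still pending, real code should log or alert here.
    }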
