[Salesforce] Is it possible to change the size of the auto-chunked batches in Bulk API

We are implementing a Bulk API V2 client in order to insert hundreds of thousands of records.
Our current implementation is the following (a rough sketch of the calls is shown after this list):

  1. first we create a job
  2. we upload a single batch containing all the records to insert and attach it to the created job
  3. we close the job so Salesforce can start processing the records.
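
For context, here is a minimal sketch of that flow against the Bulk API 2.0 ingest endpoints, using Python and `requests`. The instance URL, API version, access token, object name, and CSV payload are placeholders for illustration, not values from our actual client:

```python
import requests

# Placeholders (hypothetical): obtain these from your own org / OAuth flow.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
API_VERSION = "v52.0"
ACCESS_TOKEN = "<access token from OAuth>"

JSON_HEADERS = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}
BASE = f"{INSTANCE_URL}/services/data/{API_VERSION}/jobs/ingest"

# 1. Create the ingest job.
job = requests.post(BASE, headers=JSON_HEADERS, json={
    "object": "Account",        # hypothetical target object
    "operation": "insert",
    "contentType": "CSV",
    "lineEnding": "LF",
}).json()
job_id = job["id"]

# 2. Upload all records as one CSV batch attached to the job.
csv_data = "Name\nAcme 1\nAcme 2\n"   # hypothetical payload
requests.put(
    f"{BASE}/{job_id}/batches",
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}", "Content-Type": "text/csv"},
    data=csv_data.encode("utf-8"),
)

# 3. Close the job so Salesforce starts processing (and auto-chunking) the data.
requests.patch(f"{BASE}/{job_id}", headers=JSON_HEADERS, json={"state": "UploadComplete"})
```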

Salesforce automatically creates chunks of 200 records from the uploaded data:

https://developer.salesforce.com/docs/atlas.en-us.222.0.api_asynch.meta/api_asynch/asynch_api_concepts_limits.htm

The problem is that our triggers and workflows are quite complicated, and inserting 200 records in a single Apex transaction hits governor limits (CPU time).

Is it possible to reduce the chunk size to, for example, 50 records? I see no option in the Bulk API that allows this.

Or do we need to go back to Bulk API V1 to do that ?

Thanks for your help.

Best Answer

You cannot configure the chunk size in either Bulk API version. Unless you're using API version 20.0 or earlier (which should definitely not be an option), chunks will be 200 records in size. You can configure the batch size, but that is more about managing the Bulk API's limits on the byte size of a batch than anything else, and it does not alter the chunk size. Because the Bulk API also limits the number of batches processed per 24 hours, using a very small batch size (e.g., fewer than 200 records) is not an effective end-run.

Since your triggers and workflows prevent you from inserting records in chunks of 200, you should consider using the REST API or SOAP API instead. For example, you could use the sObject Collections resource to insert batches of 50 records at a time (sketched below). Depending on your specific use case, you may also want to implement your own parallelism to replicate how the Bulk API processes batches in parallel.
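
To make that concrete, here is a minimal sketch of inserting records 50 at a time through the sObject Collections REST resource, again in Python with `requests`. The instance URL, API version, token, object, and record data are assumptions for illustration only:

```python
import requests

# Placeholders (hypothetical): supply your own instance URL, version, and token.
INSTANCE_URL = "https://yourInstance.my.salesforce.com"
API_VERSION = "v52.0"
ACCESS_TOKEN = "<access token from OAuth>"

URL = f"{INSTANCE_URL}/services/data/{API_VERSION}/composite/sobjects"
HEADERS = {
    "Authorization": f"Bearer {ACCESS_TOKEN}",
    "Content-Type": "application/json",
}

def insert_in_batches(records, batch_size=50):
    """Insert records via sObject Collections, batch_size records per call,
    so each call (and thus each Apex transaction) handles only a small chunk."""
    results = []
    for i in range(0, len(records), batch_size):
        batch = records[i:i + batch_size]
        resp = requests.post(URL, headers=HEADERS, json={
            "allOrNone": False,   # let individual records fail without rolling back the batch
            "records": batch,
        })
        results.extend(resp.json())
    return results

# Example usage with hypothetical data.
accounts = [
    {"attributes": {"type": "Account"}, "Name": f"Test Account {n}"}
    for n in range(1, 501)
]
insert_in_batches(accounts, batch_size=50)
```

With this approach, each call runs in its own transaction, so per-transaction CPU time is driven by the 50 records in the batch rather than 200. If throughput matters, you could issue these calls from a thread pool to approximate the Bulk API's parallel batch processing.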
