I've done jqGrid before in Apex with @RemoteActions
and it's definitely the way to go. I think with the Ajax Toolkit and SOAP you'll have too much XML bloat compared to JSON for large sets of records.
As for loading all 78,000 records, I would say you should stick to the limit of loading 2,000 records at a time and then break those into further subsets of 100 records per page on the client side.
For pagination with large record sets, I would recommend using the StandardSetController. Just make sure to include an ORDER BY clause so the records come back in a consistent order. Then load the next 2,000 records in each subsequent call, and reduce the number of records you return from the server to speed things up between 'set loads' as you get to the end of each set and need more pages.
You can also count the number of records and return it in a complex object from the @RemoteAction to keep your pagination accurate:
public class returnType {
    public List<SObject> objects { get; set; }
    public Integer totalRecords { get; set; }
    public Integer nextPage { get; set; }
    public Integer previousPage { get; set; }
    public Integer lastPage { get; set; }
}
UPDATE
OK, here is how to use the SSC to maintain pagination (at least this is the way I would do it with jqGrid):
@RemoteAction
global static returnType pagination(Integer pageNumber){
    // sObject is a placeholder here; substitute your concrete object
    ApexPages.StandardSetController ssc = new ApexPages.StandardSetController(
        Database.getQueryLocator([SELECT Id FROM sObject ORDER BY Id LIMIT 50000]));
    ssc.setPageSize(2000);
    ssc.setPageNumber(pageNumber);
returnType retVal = new returnType();
retVal.objects = ssc.getRecords();
retVal.nextPage = pageNumber + 1;
retVal.previousPage = pageNumber - 1;
return retVal;
}
I don't have my original code in front of me, but generally the SSC should be re-instantiated with each call. As for getting past 50k records to your total set, the query locator will continue through to the end of the set regardless of size, returning the later records as you work through the [0..n-1] pages.
In practice I would use an ORDER BY clause to ensure your records are consistently returned in the same order, so that you're getting all results.
Why not design your own query cursor in Apex?
Theory: Salesforce Id values are assigned in increasing order as records are created and sort predictably, so you can use less-than and greater-than operators on them.
So if you send the lastRecordId to your REST endpoint, you can build your own query cursor.
eg.
Select Id from Opportunity where Id > '00658000003rJFwAAM' order by Id limit 2000
is perfectly valid SOQL and will return the next records in Id order after 00658000003rJFwAAM. This can be used for the pagination logic in your stateless REST endpoint.
Always store the last record Id and use it to get the next set of records; there is no need to use OFFSET anymore, provided you design a stateful client and a stateless server.
This method is used for querying large data volumes when a normal SOQL query with a WHERE clause on other fields would time out.
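Putting the cursor idea together, here is a minimal sketch of the loop in anonymous Apex. The object (Opportunity), the batch size of 2000, and the processing step are just for illustration:

```apex
// Walk the whole table in 2000-record chunks, using the Id as a cursor.
Id lastId = null;
Boolean more = true;
while (more) {
    List<Opportunity> batch;
    if (lastId == null) {
        // First page: no lower bound yet.
        batch = [SELECT Id FROM Opportunity ORDER BY Id LIMIT 2000];
    } else {
        // Subsequent pages: everything after the last Id we saw.
        batch = [SELECT Id FROM Opportunity WHERE Id > :lastId ORDER BY Id LIMIT 2000];
    }
    if (batch.isEmpty()) {
        more = false;
    } else {
        // ... process the batch here ...
        lastId = batch[batch.size() - 1].Id;
    }
}
```

In a real org you would spread these queries across separate transactions (for example, one REST call per batch) rather than one loop, to stay within governor limits.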
Source: https://developer.salesforce.com/blogs/developer-relations/2015/11/pk-chunking-techniques-massive-orgs.html
Best Answer
There is a way to do it through the 2nd approach, but not with offset. Here is what I suggest:

1. Add order by id to your first request, retrieve the first batch of 2000 records (for example), and store the last id retrieved.
2. Query the next batch with order by id and limit 2000, but also with the where-clause condition where id > 'last_retrieved_id'.

This way you'll be able to retrieve all records just like with the offset mechanism, and separate them into different transactions (for example, with the help of @RestResource) to avoid CPU TIME OUT.

Example: