In order to investigate this further, I created a new dev org to experiment in, installed my package, created a specialist implementation of my AdaptiveBatch API, and tried out various types of processing.
I stumbled upon the answer to the issue because I started receiving Developer Script Exception emails; I was not a recipient of these on the sandbox where we first saw the problem.
The logs confirm that the start and finish method async executions for the batch were being triggered and processed fine. The processing was failing with the strange internal error only during the initialization of the execute method async execution.
The start method returns a query locator (SOQL). Where this references one of our package's custom fields, the namespace prefix was not specified. The processing seems to handle this fine, as it always has historically across our code base, with log entries like:
08:17:01.0 (183716704)|SOQL_EXECUTE_BEGIN|[188]|Aggregations:0|SELECT firstname, lastname, payment_account__c, id FROM Contact
08:17:01.0 (219117528)|SOQL_EXECUTE_END|[188]|Rows:2
08:17:01.0 (219380886)|METHOD_EXIT|[20]|01p4J000003gjQd|sirenum.AdaptiveBatch.start(Database.BatchableContext)
You can see from this that (in my test org) there are two rows selected, and that the "payment_account__c" field is included in the query. That field actually comes from our package, but is named without the prefix.
However, when execute is to be invoked the log simply contains something like:
08:17:01.0 (378428)|CODE_UNIT_STARTED|[EXTERNAL]|01p4J000003gkWc|MyAdaptiveBatch
08:17:01.0 (3495717)|HEAP_ALLOCATE|[72]|Bytes:3
08:17:01.0 (3577878)|HEAP_ALLOCATE|[77]|Bytes:152
08:17:01.0 (3594977)|HEAP_ALLOCATE|[342]|Bytes:408
08:17:01.0 (3608691)|HEAP_ALLOCATE|[355]|Bytes:408
08:17:01.0 (3622224)|HEAP_ALLOCATE|[467]|Bytes:48
08:17:01.0 (3650375)|HEAP_ALLOCATE|[139]|Bytes:6
08:17:01.0 (4766801)|HEAP_ALLOCATE|[EXTERNAL]|Bytes:578
08:17:01.0 (36070965)|FATAL_ERROR|Internal Salesforce.com Error
08:17:01.36 (36112562)|CUMULATIVE_LIMIT_USAGE
This then appears to relate to the developer script exception email:
Developer script exception from XXX:
'MyAdaptiveBatch':
SELECT firstname, lastname, payment_account__c, id FROM Contact
^ ERROR at Row:1:Column:29 No such column 'payment_account__c' on entity 'Contact'.
If you are attempting to use a custom field, be sure to append the '__c' after
the custom field name. Please reference your WSDL or the describe call for the
appropriate names
It seems, therefore, that this is not due to the Batchable methods being public (rather than global), which is a really good thing. At least I can address it behind the API by making sure namespace prefixes are applied explicitly to the fields in the SOQL query.
It is strange that it trips up in this specific context and not otherwise. Our other batches, fully implemented in the package, don't require the namespace prefix; yet this batch, whose execute method is fully implemented in the package but whose queued class lives outside the package, fails, and only in execute, not in start. This inconsistent start/execute behaviour is very strange, and the fact that it is reported as an internal error with no useful log detail doesn't help.
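A minimal sketch of the fix, assuming the usual Database.Batchable start signature: the field and the "sirenum" namespace are taken from the log entries above, but the surrounding class and query are illustrative only.

global Database.QueryLocator start(Database.BatchableContext context) {
    // Qualify the packaged custom field with the namespace prefix so the
    // execute-phase query parse does not reject it as an unknown column.
    return Database.getQueryLocator(
        'SELECT firstname, lastname, sirenum__payment_account__c, id FROM Contact'
    );
}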
As the rule says, you have too many parameters. Consider passing in an entire record:
@AuraEnabled
public static void updateStudentProgramDiscount(Students_Programs__c studentRecord) {
    try {
        update studentRecord;
    } catch (Exception e) {
        throw new AuraHandledException(e.getMessage());
    }
}
Note: you should use Security.stripInaccessible to prevent invalid field access. Note also that throwing an AuraHandledException triggers the "catch"/"error" handler on the client.
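A sketch of that check, reusing the Students_Programs__c record from the example above; the method and object names come from that example, and the rest is standard Security.stripInaccessible usage.

@AuraEnabled
public static void updateStudentProgramDiscount(Students_Programs__c studentRecord) {
    try {
        // Drop any fields the running user cannot update before the DML.
        SObjectAccessDecision decision = Security.stripInaccessible(
            AccessType.UPDATABLE,
            new List<Students_Programs__c>{ studentRecord }
        );
        update decision.getRecords();
    } catch (Exception e) {
        // Rethrow so the client-side "catch"/"error" handler fires.
        throw new AuraHandledException(e.getMessage());
    }
}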
Best Answer
Add the following annotation to your code:
Example:
This will disable the warning only for that method.
Note that public should generally be used for interface methods, unless you're building a managed package that needs the method to be global. Very few classes need global, and they should not be marked as such unless necessary (e.g. WebService methods, @RemoteAction methods for iframe Visualforce pages, etc.).