I'm not sure you will be able to solve that issue in an interactive context. You might have to use something like Batch Apex or Future methods to get around it. I have had a similar issue with Mixed DML and overcame it using Batch Apex.
So, firstly, I created a simple interface:
public interface IAction
{
void process(BatchJob job);
}
Then, I created a couple of work classes to do the work for each type of action:
public class FolderCreator implements IAction
{
public void process( BatchJob job )
{
// inject some state here and do the business
}
}
public class CustomSettingUpdater implements IAction
{
public void process( BatchJob job )
{
// inject some state here and do the business
}
}
Then, in your start method of your Batch Apex, build up a list of IActions, for example
List<IAction> actions = new List<IAction>();
actions.add( new FolderCreator() );
actions.add( new CustomSettingUpdater() );
Then, set your batch size to 1 and iterate over the actions in your execute method. Because each execute invocation is its own transaction, a batch size of 1 isolates each action from the others, so the mixed DML operations never share a transaction, e.g.
for( IAction action : (List<IAction>)scope )
{
action.process(job);
}
It's not ideal having to use Batch Apex when I really didn't want to, but it worked for me.
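To tie the pieces together, here's a sketch of what the full batch class might look like. The class name ActionBatch is my own, and I'm assuming BatchJob is a simple state-holder class you define to pass data into the actions:

public class ActionBatch implements Database.Batchable<IAction>
{
    private BatchJob job;  // assumed state-holder class used by IAction.process

    public ActionBatch( BatchJob job )
    {
        this.job = job;
    }

    public Iterable<IAction> start( Database.BatchableContext context )
    {
        List<IAction> actions = new List<IAction>();
        actions.add( new FolderCreator() );
        actions.add( new CustomSettingUpdater() );
        return actions;
    }

    public void execute( Database.BatchableContext context, List<IAction> scope )
    {
        // With a batch size of 1, each action runs in its own transaction
        for( IAction action : scope )
        {
            action.process( job );
        }
    }

    public void finish( Database.BatchableContext context ) {}
}

You'd kick it off with a scope size of 1, e.g. Database.executeBatch( new ActionBatch( new BatchJob() ), 1 );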
I would recommend you use a pattern discussed in the 3rd Edition of Dan Appleman's Book Advanced Apex Programming. You mention using a custom object. I'm presuming you're saving articles to the custom object when you want to send them out for translation. That would work especially well for this pattern.
The pattern would have you use a trigger on the custom object. The trigger would save a record to a 2nd custom object with the recordId, plus a few other details you'd want for error handling as files are processed by your class and sent to and retrieved from your web services. Let's call the 2nd object DataMonitor.
When the record is saved to the DataMonitor object, an After Insert trigger fires that calls a Queueable, provided there are sufficient limits available (fewer than 5 queued at the time, no more than 100 jobs on hold in the flex queue, and under the org limits for async Apex). If there aren't, it checks whether it can request a Schedulable job instead (limit of 100 in the org at a time, plus under the org limits for async Apex). If it can call the Queueable, the Queueable queries for records in DataMonitor. If it finds any, it retrieves what it can process (only one if that's all it can handle, more if that's possible). The Schedulable would do likewise.
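A sketch of that trigger might look like the following. DataMonitor__c, DataMonitorQueueable, and DataMonitorSchedulable are assumed names for your object and classes; adjust the limit checks to match whatever thresholds you settle on:

trigger DataMonitorTrigger on DataMonitor__c ( after insert )
{
    // Only enqueue if this transaction still has queueable headroom
    if( Limits.getQueueableJobs() < Limits.getLimitQueueableJobs() )
    {
        System.enqueueJob( new DataMonitorQueueable() );
    }
    else
    {
        // Fall back to a one-off scheduled job a few minutes out
        Datetime runAt = System.now().addMinutes( 5 );
        String cron = runAt.second() + ' ' + runAt.minute() + ' ' + runAt.hour()
            + ' ' + runAt.day() + ' ' + runAt.month() + ' ? ' + runAt.year();
        System.schedule( 'DataMonitor retry ' + runAt.getTime(), cron,
            new DataMonitorSchedulable() );
    }
}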
From there, it will make the web service callouts. If it successfully initiates the callouts, it will write back a success message to the records it sent, marking them as having been sent. If you wish, it can also send you an email with the results (check email limits first). Finally, if there are more records remaining and limits available, it can enqueue another Queueable to process them. If there isn't room in the queue, it could instead schedule another Schedulable to handle them at a later time, depending on the kind of limits available.
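Here's a rough sketch of that Queueable. The field names (Status__c, Record_Id__c) and the Named Credential are assumptions standing in for your own schema and endpoint:

public class DataMonitorQueueable implements Queueable, Database.AllowsCallouts
{
    public void execute( QueueableContext context )
    {
        // Retrieve what we can process; 100 is the per-transaction callout cap
        List<DataMonitor__c> pending = [ SELECT Id, Record_Id__c FROM DataMonitor__c
                                         WHERE Status__c = 'Pending' LIMIT 100 ];
        if( pending.isEmpty() ) return;

        for( DataMonitor__c monitor : pending )
        {
            HttpRequest request = new HttpRequest();
            request.setEndpoint( 'callout:Translation_Service' );  // assumed Named Credential
            request.setMethod( 'POST' );
            request.setBody( monitor.Record_Id__c );
            HttpResponse response = new Http().send( request );
            monitor.Status__c = response.getStatusCode() == 200 ? 'Sent' : 'Error';
        }
        // Write back after all callouts: no callouts are allowed after DML
        update pending;

        // Chain another job if more work remains and limits allow
        if( [ SELECT COUNT() FROM DataMonitor__c WHERE Status__c = 'Pending' ] > 0
            && Limits.getQueueableJobs() < Limits.getLimitQueueableJobs() )
        {
            System.enqueueJob( new DataMonitorQueueable() );
        }
    }
}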
This is something of an "industrial strength" pattern in that it provides the ability to retry sending records if the first attempt isn't successful. If the send still fails after a couple of attempts, or after receiving a recognizable error type, you can choose to write back the error to the DataMonitor record and also send an email to the Admin with some kind of special notation on it, so they'll know they need to take action. I believe this pattern is discussed in Chapter 7 of Dan's book.
Best Answer
You can combine the callout and the DML in the same method if you want to; the only restriction is that no callouts are allowed after a DML operation in the same transaction. Each call to start, execute, and finish is a separate transaction, so there's really no need to defer your DML to the finish method; you can perform it in the execute method. Notably, if you try to accumulate the results of more than 10,000 callouts before finally writing to the database, you'll exceed the 10,000-row DML limit.
Your current design would certainly work, assuming the OAuth token doesn't expire until the end of the batch. Personally, I'd recommend moving the OAuth check into the execute method so that if you lose your token half-way through (say, because it's revoked), your batch can recover. You may also want to increase your scope size from 1 to a larger number, depending on how much callout time you think you'll need.
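To illustrate, an execute method along those lines might look like this sketch. Article__c, the field names, and the token helpers (tokenIsValid, refreshToken) are placeholders for your own objects and OAuth handling:

public void execute( Database.BatchableContext context, List<Article__c> scope )
{
    // Re-check the token each transaction so the batch recovers if it's revoked
    if( !tokenIsValid() )
    {
        this.token = refreshToken();
    }

    for( Article__c article : scope )
    {
        HttpRequest request = new HttpRequest();
        request.setEndpoint( 'https://api.example.com/translate' );  // assumed endpoint
        request.setMethod( 'POST' );
        request.setHeader( 'Authorization', 'Bearer ' + this.token );
        request.setBody( article.Body__c );
        HttpResponse response = new Http().send( request );
        article.Translation_Status__c = String.valueOf( response.getStatusCode() );
    }

    // DML last: callouts are not allowed after DML in the same transaction
    update scope;
}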
To calculate how many callouts you can do, figure out how much time each callout needs and divide that into the maximum cumulative callout time of 120 seconds per transaction. For example, if your callout takes an average of 5 seconds, your limit would be about 24 callouts in a transaction. And regardless of timing, you can't exceed the governor limit of 100 callouts per transaction, so that's your maximum value.
Finally, Batchable Apex isn't Scheduled Apex, even though both are forms of asynchronous Apex. You won't need to worry about that error, even if you're chaining, and even if you use scheduleBatch to insert a delay in between.