Yes, a Queueable can be started from a Database.Batchable class. You must go through System.enqueueJob for this to work:
public class BatchClass implements Database.Batchable<SObject> {
    public Database.QueryLocator start(Database.BatchableContext context) {
        return Database.getQueryLocator([SELECT Id FROM Account]);
    }
    public void execute(Database.BatchableContext context, Account[] records) {
        System.enqueueJob(new QueueClass(new Map<Id, Account>(records).keySet()));
    }
    public void finish(Database.BatchableContext context) {
    }
}
public class QueueClass implements Queueable {
    Set<Id> recordIds;
    public QueueClass(Set<Id> recordIds) {
        this.recordIds = recordIds;
    }
    public void execute(QueueableContext context) {
        // Do something here
    }
}
Note that you're limited to just one enqueueJob call per execute in the Database.Batchable class. This is to prevent explosive execution (a so-called "Rabbit Virus" effect).
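If you want to be defensive about that limit rather than risk an uncatchable LimitException, you can consult the Limits class before enqueueing; a minimal sketch of the batch's execute method:

```apex
public void execute(Database.BatchableContext context, Account[] records) {
    // In a batch transaction the Queueable limit is 1, so only enqueue
    // if we haven't already used up the allowance for this transaction.
    if (Limits.getQueueableJobs() < Limits.getLimitQueueableJobs()) {
        System.enqueueJob(new QueueClass(new Map<Id, Account>(records).keySet()));
    }
}
```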
You can combine the callout and the DML in the same method if you want to; the only restriction is that no callouts are allowed after a DML operation in the same transaction. Each call to start, execute, and finish is a separate transaction. There's really no need to defer your DML to the finish method, as you can do so in the execute method. Notably, if you hold all of your results until the end and try to write more than 10,000 records in a single transaction, you'll exceed the DML row limit.
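To illustrate that ordering, a sketch of a Queueable that does all of its callouts first and a single DML statement last (the `Example_Service` named credential and the use of the Description field are illustrative assumptions, not part of the original code):

```apex
public class CalloutQueueable implements Queueable, Database.AllowsCallouts {
    Set<Id> recordIds;
    public CalloutQueueable(Set<Id> recordIds) {
        this.recordIds = recordIds;
    }
    public void execute(QueueableContext context) {
        List<Account> updates = new List<Account>();
        for (Account record : [SELECT Id FROM Account WHERE Id IN :recordIds]) {
            HttpRequest req = new HttpRequest();
            req.setEndpoint('callout:Example_Service'); // hypothetical named credential
            req.setMethod('GET');
            HttpResponse res = new Http().send(req);    // callouts happen first...
            record.Description = res.getBody();
            updates.add(record);
        }
        update updates;                                 // ...the DML comes last
    }
}
```

Note the Database.AllowsCallouts marker interface, which is required for a Queueable to make callouts at all.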
Your current design would certainly work, assuming the OAuth token doesn't expire until the end of the batch. Personally, I'd recommend moving the OAuth check into the execute method so that if you lose your token half-way through (say, because it's revoked), your batch can recover. You may also want to increase your scope size from 1 to a larger number, depending on how much callout time you think you'll need.
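A sketch of what that might look like in the batch's execute method; OAuthService.getValidToken is a hypothetical helper, and the batch class would need to implement Database.AllowsCallouts if that helper performs a token-refresh callout:

```apex
public void execute(Database.BatchableContext context, Account[] records) {
    // Hypothetical helper: returns the cached token, refreshing it if it
    // has expired or been revoked, so a mid-batch revocation doesn't
    // fail every remaining execution.
    String token = OAuthService.getValidToken();
    System.enqueueJob(new QueueClass(new Map<Id, Account>(records).keySet(), token));
}
```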
To calculate how many callouts you can make, figure out how long each callout takes, and divide that into the maximum cumulative callout time of 120 seconds. For example, if your callouts average 2 seconds each, your limit would be 60 callouts in a transaction. Of course, you can't go over the governor limit of 100 callouts per transaction, so that'd be your maximum value either way.
Finally, Batch Apex isn't scheduled Apex, even though both are forms of "asynchronous" Apex. You won't need to worry about that error, even if you're chaining, and even if you use System.scheduleBatch to insert a delay in between.
Best Answer
By "stack depth" I presume you mean "chain length"; each start/execute/finish runs in its own transaction and so has its own stack.
I know of no absolute limit on chain length: chaining is very kind to the platform because jobs run in sequence rather than in parallel. But it's worth doing some approximate calculations to see whether you'll run anywhere near the 24-hour asynchronous execution limit, recognizing that each start, execute, and finish call counts as one of these executions.
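As a rough worked example (the record count and scope size here are made up; the commonly documented daily cap is 250,000 asynchronous executions, or user licenses × 200 if greater, so check your org's actual limit):

```
1,000,000 records / scope size of 200 = 5,000 execute calls
+ 1 start + 1 finish                  = 5,002 asynchronous executions per batch run
```

Even a long chain of such batches in a day stays well under a 250,000 cap.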
PS
I'd forgotten about the limit in tests that sfdcfox comments about. One way around that is shown in Testing a Database.Batchable implementation.