As per the link in your question, the following are not separate limits for the managed package; rather, they count toward the whole org:
- The total heap size
- The maximum CPU time
- The maximum transaction execution time
Having said that, the DMLs inside your batch will have a separate limit for the managed package namespace, though the maximum CPU time, for example, stays the same for all Apex in your org, regardless of whether the code executes within the namespace of the managed package or in your org in general.
As for method executions, this link says that the maximum number of asynchronous Apex method executions falls under the Force.com Platform Apex Limits:
"The limits in this table aren’t specific to an Apex transaction and are enforced by the Force.com platform."
This means the governor limit applies to the managed package as well.
As mentioned, describe limits have been removed entirely. Just for completeness, a few remaining limits are affected by these describe calls, but the consumption is negligible. These limits are Heap Space and CPU Time. The numbers below are from my org, so mileage will vary. I doubt the numbers would ever get high enough to be a concern.
Methodology
You need separate approaches (though a single execution of each is sufficient) to derive the measures that follow. All of this was run in Execute Anonymous with logging levels set to NONE except for Apex, which was at DEBUG.
// CPU time: average the cost over 100 iterations
Long start = Datetime.now().getTime();
for (Integer i = 0; i < 100; i++) {
    Map<String, SObjectType> describe = Schema.getGlobalDescribe();
}
System.debug(Datetime.now().getTime() - start);
// divide the above number by 100 (or however many iterations you use) to get ms per call

// Heap space: measure a single call
Integer heapBefore = Limits.getHeapSize();
Map<String, SObjectType> describe = Schema.getGlobalDescribe();
System.debug(Limits.getHeapSize() - heapBefore);
// divide the above by Limits.getLimitHeapSize() for the percentage of heap used
Consumption
Schema.getGlobalDescribe()
- CPU Time: ~7ms
- Heap Space: ~0.3%
Schema.SObjectType.getDescribe()
- CPU Time: ~0.002ms
- Heap Space: ~0.001%
Schema.SObjectField.getDescribe()
- CPU Time: ~0.38ms
- Heap Space: ~0.001%
Schema.DescribeSObjectResult.getRecordTypeInfosByName()
- CPU Time: ~0.2ms
- Heap Space: ~0.0001%
Analysis
If you are getting the global describe more than once, cache it. It should be quite simple to cache, and the heap consumption is likely worth the tradeoff even if the number of calls is <50.
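A minimal sketch of such a cache, lazily populated once per transaction (the class and method names here are my own, purely illustrative):

```apex
// Lazy cache for the global describe; pays the ~7ms / ~0.3% heap cost at most once
public class DescribeCache {
    private static Map<String, SObjectType> globalDescribe;

    public static Map<String, SObjectType> getGlobalDescribe() {
        if (globalDescribe == null) {
            globalDescribe = Schema.getGlobalDescribe();
        }
        return globalDescribe;
    }
}
```

Static state in Apex lives for the duration of the transaction, so every caller in the same transaction shares the single describe result.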
For object describes, every 1000 calls (in a single transaction) consume only 2ms (out of 10s), so you don't need to pay much attention unless you are expecting >100K calls. Caching 1000 fields will only consume 1% of your heap space, so that's not a huge concern either. I recommend you use whatever makes your code cleanest.
For field describes, you only get three calls per millisecond, so the calculus is a little different. Now 1000 calls consume an amount of time that is perceptible by humans. However, the cache rate is the same. If you are describing the same field many times, consider caching.
For record type describes, the CPU situation is much the same as with fields. You get five calls per millisecond, so you don't necessarily want to call it thousands of times. But caching it is very inexpensive. You should be able to cache the describes for every single RecordType without consuming 1% of your heap.
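Caching the record type infos per object can follow the same pattern; a sketch, again with hypothetical names:

```apex
// Cache record type infos keyed by object name (illustrative sketch)
public class RecordTypeCache {
    private static Map<String, Map<String, Schema.RecordTypeInfo>> cache =
        new Map<String, Map<String, Schema.RecordTypeInfo>>();

    public static Map<String, Schema.RecordTypeInfo> forObject(SObjectType objType) {
        String key = String.valueOf(objType);
        if (!cache.containsKey(key)) {
            cache.put(key, objType.getDescribe().getRecordTypeInfosByName());
        }
        return cache.get(key);
    }
}
```

With that in place, repeated lookups like `RecordTypeCache.forObject(Account.SObjectType)` only pay the ~0.2ms describe cost once per object per transaction.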
Best Answer
The short answer is that a managed package can be given its own set of governor limits that are separate from the governor limits of other managed packages and unmanaged code in the org. So they are not "better" in the sense of being higher; they are just counted separately. If each managed package adds a trigger to Account, there is less risk of the work done in each managed package adding up to a governor limit exception on Account.
A managed package does not automatically operate that way; you could start further research on that subject with, say, What is aloha app and what is process to make app as aloha.
The managed/unmanaged choice involves many factors. If you believe you are developing a product that you have a stable design for, that many customers will install, that will go through a series of enhancements, and that you want to promote via the AppExchange, then managed is the right choice. But going the managed route has a learning curve (and cost), and you should do plenty of reading on the subject before making any decision.