The number one rule here is do not, under any circumstances, perform load testing of any salesforce.com service or feature without formal consent. They may revoke your access and/or charge you service fees for the increased usage. Their own internal testing is sufficient to prove that the system is stable and working at optimal levels. In fact, Salesforce Trust contains all of the relevant stats you'd probably want to know, including the number of transactions per day and average server response time. Outages and performance degradations are also reported there.
That said, if you want to load test the speed of your application with an automated tool, submit a case first. Salesforce will negotiate a set of parameters you can use (number of simultaneous connections, number of users, number of test runs, duration, and so on). This lets an organization get a feel for the system's behavior under "average load." It's important to note that during peak load on a given server, all users are affected equally. For this reason, if your app is somehow performing slowly because of salesforce.com's hardware, an alarm will almost certainly have gone off already, since a large number of other customers would also be affected (at least, in most cases).
A better test is to test your network's performance. Run a bandwidth simulation test on your firewalls, routers, and other corporate infrastructure to make sure it can handle the load. This is more significant for a given organization's performance than testing the salesforce.com hardware, which is monitored and tested regularly.
You should note that salesforce.com handles nearly 1,000,000,000 transactions every weekday, which averages out to more than 11,000 transactions per second over a 24-hour day. The system is proven capable of handling normal transaction volumes. However, if Salesforce allowed performance testing without scheduling it, a large number of organizations testing all at once could amount to a DDoS-style attack. This is why coordination is paramount, and those who violate it will be sanctioned.
An IT department that insists on testing will go through the appropriate channels. A smarter IT department knows that a proven, reliable system that is already constantly monitored and tested doesn't need the external testing. This is the future of the cloud: managed hardware that performs well without needing to be babysat by each individual organization using the service. This is a stark contrast to traditional corporate resources, where the IT department has to monitor and repair any outages themselves. This is a feature of salesforce.com (and other platforms like Azure, S3, and so on).
Through the API you can check whether your Apex classes have compiled bytecode stored in the database via the `ApexClass` object's `IsValid` flag. If this is false, there is no bytecode available for the class, and the first time it's referenced it will need to be compiled before it begins executing. This can take upwards of 15 seconds for highly complex classes and their dependent classes (although that's an extreme case; I'd say 5 seconds is much more likely). The exact mechanics of when compiled bytecode is removed have varied over time and are an implementation detail Salesforce doesn't disclose, so depending on what you're doing in your org this could potentially be happening quite often during development and testing.
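As a sketch, you can spot affected classes from anonymous Apex with an ordinary SOQL query against `ApexClass` (the query and debug output here are just one way to surface the flag):

```apex
// List classes that currently have no stored bytecode and would
// therefore incur a compile on first reference.
List<ApexClass> stale = [
    SELECT Name, NamespacePrefix
    FROM ApexClass
    WHERE IsValid = false
];
for (ApexClass c : stale) {
    System.debug('Will recompile on first use: '
        + (c.NamespacePrefix == null ? '' : c.NamespacePrefix + '.')
        + c.Name);
}
```

The same query works through the SOAP or REST query endpoints if you'd rather check from outside the org.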
Beyond that, Visualforce pages are served from a separate domain which operates with a unique, lower-privileged session ID. The first time in a session you invoke a Visualforce page, a separate redirect is needed to generate a session and set the relevant cookies for the Visualforce domain; this can also contribute to a delay here.
Beyond that, the exact mechanics of servlet.integration are a mystery, and I'm not sure what it does that couldn't be accomplished by iframing the /apex/namespace__page URL instead of using a separate servlet. I'd wager that legacy reasons are in play for its continued existence.
Best Answer
I think profiling is most useful when you are isolating small chunks of functionality. If you want to know whether it is faster to use `List.isEmpty()` or `List.size() > 0`, that sort of question is possible to answer definitively using profiling.

If you take a look at my LimitsProfiler tool, it may give you some ideas. I definitely won't go so far as to say best practice, but I have walked down a similar path and my experience may benefit you.
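As a minimal sketch of that kind of micro-profiling without any library, using only the built-in `Limits` class (the iteration count is an arbitrary assumption; tune it to stay under your org's CPU limit):

```apex
// Compare List.isEmpty() vs. List.size() > 0 using CPU time deltas.
List<Integer> data = new List<Integer>();
Integer iterations = 100000;
Boolean result;

Integer start = Limits.getCpuTime();
for (Integer i = 0; i < iterations; i++) {
    result = data.isEmpty();
}
System.debug('isEmpty():  ' + (Limits.getCpuTime() - start) + ' ms CPU');

start = Limits.getCpuTime();
for (Integer i = 0; i < iterations; i++) {
    result = data.size() > 0;
}
System.debug('size() > 0: ' + (Limits.getCpuTime() - start) + ' ms CPU');
```

Run it a few times and in both orders; the loop overhead dominates at small counts, so only consistent deltas are meaningful.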
I profiled `LimitsSnapshot.getInstance()`, a method which caches all the information given to you by the `Limits` class. If memory serves, at the time I wrote it, I was able to cache millions of these snapshot instances within a one-second interval. So if you wanted to use my library, you ought to be able to manage a static `List<LimitsSnapshot>`, and then at the end you can save them, debug them, or whatever you wish.
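A sketch of that static-list pattern might look like the following. Note that `LimitsSnapshot` comes from the LimitsProfiler library mentioned above; everything here other than `getInstance()` (the class name `ProfilingRun` and its method names) is my own hypothetical scaffolding:

```apex
// Accumulate cheap snapshots at interesting points in a transaction,
// then inspect them all at the end.
public class ProfilingRun {
    private static List<LimitsSnapshot> snapshots = new List<LimitsSnapshot>();

    // Call this before/after the code you want to measure.
    public static void mark() {
        snapshots.add(LimitsSnapshot.getInstance());
    }

    // Call this once at the end of the transaction.
    public static void dump() {
        for (LimitsSnapshot s : snapshots) {
            System.debug(s); // or persist to a custom object instead
        }
    }
}
```

Because the snapshots are cheap to take, you can sprinkle `mark()` calls liberally and diff adjacent snapshots afterward to see where the limits were consumed.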