As I have gotten familiar with others' implementations and best practices over time, I tend to use both in different situations.
Interfaces:
Interfaces are absolutely fantastic for generalized APIs that are re-used in many different contexts. For example, take Apex-Lang's ArrayUtils.qsort. I have made it a requirement in the past that all sorting outside of SOQL be implemented using this interface. Why? Because with the interface you can sort any object by its properties in a completely predictable manner, every time. No multiple-dictionary-mutation funny business. It cleanly provides a clear, re-usable approach to sorting objects, much like the Comparator interface in Java.
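To make the idea concrete, here is a rough sketch of the pattern rather than apex-lang's actual types; SortUtil, ElementComparator, and AccountNameComparator are names made up for illustration:

public class SortUtil {
    // Comparator contract: negative if a sorts before b, zero if equal,
    // positive if a sorts after b.
    public interface ElementComparator {
        Integer compare(Object a, Object b);
    }

    // Example implementation: order Accounts by Name, ascending.
    public class AccountNameComparator implements ElementComparator {
        public Integer compare(Object a, Object b) {
            return ((Account) a).Name.compareTo(((Account) b).Name);
        }
    }

    // A simple insertion sort that only knows about the interface, so the same
    // routine works for any element type the comparator understands.
    public static void sortInPlace(List<Object> items, ElementComparator c) {
        for (Integer i = 1; i < items.size(); i++) {
            Object current = items[i];
            Integer j = i - 1;
            while (j >= 0 && c.compare(items[j], current) > 0) {
                items[j + 1] = items[j];
                j--;
            }
            items[j + 1] = current;
        }
    }
}

Loading records into a List<Object> and calling SortUtil.sortInPlace(records, new SortUtil.AccountNameComparator()) then gives the same predictable ordering everywhere it is used, which is the same idea ArrayUtils.qsort builds on.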
What are interfaces not good at? I have never had good luck implementing an interface as a controller for a Visualforce page. Why? Because Visualforce requires getter/setter methods in order to access properties, and having to declare each and every getter/setter on the interface and then implement it in every controller defeats the purpose of re-usability.
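To see why that hurts, here is a hypothetical example; PageController and its members are made up purely to illustrate the objection:

// Illustrative only: an interface-as-controller forces every property the page
// binds to into an explicit getter declaration...
public interface PageController {
    String getTitle();
    List<SelectOption> getOptions();
    PageReference save();
}

// ...and every implementing controller still has to write all of them out again.
public class InvoicePageController implements PageController {
    public String getTitle() { return 'Invoices'; }
    public List<SelectOption> getOptions() {
        return new List<SelectOption>{ new SelectOption('open', 'Open') };
    }
    public PageReference save() { return null; }
}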
This brings us to abstract/virtual classes.
Abstract/Virtual Classes:
In Apex, abstract classes are great for creating re-usable patterns for implementing controllers (among other things). If you ever implement a Visualforce page in a Site, you'll find that any uncaught runtime error leaves the user looking at a standard permission exception page (hell to debug).
Instead of try/catching every single constructor or initialize/action method, you can write an abstract class that implements the various constructors and then use a one-line call in the child class (pseudo code):
public abstract class B {
    // Record handed in by the standard controller; protected so children can use it.
    protected sObject sobj;

    public B() {}

    public B(ApexPages.StandardController r) {
        try {
            this.sobj = r.getRecord();
            // do something standard
        } catch (Exception e) {
            // Surface the failure as a page message instead of an unhandled Sites error.
            ApexPages.addMessage(
                new ApexPages.Message(ApexPages.Severity.ERROR, e.getMessage()));
        }
    }
}

public class A extends B {
    public A(ApexPages.StandardController r) {
        super(r);
    }
}
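The same idea extends beyond constructors. As a sketch (the method names here are illustrative, not part of the original code), the base class can also wrap page actions so a child only supplies the risky logic:

public abstract class B {
    // ...constructors from above unchanged...

    // Template method: wraps the child's doAction() so any exception becomes a
    // page message rather than the generic Sites error screen.
    public PageReference safeAction() {
        try {
            return doAction();
        } catch (Exception e) {
            ApexPages.addMessage(
                new ApexPages.Message(ApexPages.Severity.ERROR, e.getMessage()));
            return null;
        }
    }

    // Children override this with the real work.
    protected virtual PageReference doAction() {
        return null;
    }
}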
The number one rule here is: do not, under any circumstances, perform load testing of any salesforce.com service or feature without formal consent. They may revoke your access and/or charge you service fees for the increased usage. Their own internal testing is sufficient to prove that the system is stable and performing at optimal levels. In fact, Salesforce Trust contains all of the relevant stats you'd probably want to know, including the number of daily transactions and average server response time. Outages and performance degradations are also reported there.
That being said, submit a case if you want to perform load testing to measure the speed of your application using an automated tool. They will negotiate a set of parameters you can use (including the number of simultaneous connections, number of users, number of tests, duration, etc.). This can allow an organization to get a feel for the "average load" of the system. It's important to note that during peak load on a given server, all users are affected equally. For this reason, if your app is somehow performing slowly on salesforce.com's hardware, an alarm will already have gone off, since a large number of clients would also be affected (at least, in most cases).
A better use of your time is to test your own network's performance. Run a bandwidth simulation against your firewalls, routers, and other corporate infrastructure to make sure they can handle the load. This matters more for a given organization's performance than testing the salesforce.com hardware, which is monitored and tested regularly.
You should note that salesforce.com handles nearly 1,000,000,000 transactions every weekday, or roughly 11,000 transactions per second. The system is proven capable of handling normal transaction volumes. However, if they allowed performance testing without scheduling it, a large number of entities testing all at once could amount to a DDoS-style attack. This is why coordination is paramount, and those who violate it will be sanctioned.
An IT department that insists on testing will go through the appropriate channels. A smarter IT department knows that a proven, reliable system that is already constantly monitored and tested doesn't need the external testing. This is the future of the cloud: managed hardware that performs well without needing babysitting by each individual organization using the service. It's a stark contrast to traditional corporate resources, where the IT department has to monitor and repair any outages itself. This is a feature of salesforce.com (and of other services like Azure, S3, and so on).
Great question. In opening this can of worms, I would like to noodle the premise if you don't mind :-) just to learn if we're solving performance in the right quadrant. What motivates your question exactly?
Definitely performance is important, but the Force.com platform is pretty good at keeping you within reasonable boundaries. You don't have to worry about nginx vs. IIS vs. Apache serving XYZ requests per second. Float above that stuff. Salesforce throws smarts and hardware at those problems so we don't have to.
As a service-layer developer, err on the side of inspecting your own work: rather than doing legwork for the sake of the Apex runtime, optimize for you the architect, us the developers, them the future maintainers. The Apex runtime will get faster and smarter; you don't need to do it any favours. Principle of least astonishment and semantics win over tricks every time.
Governor limits are the thoughtful and useful straitjacket that gives us a gentle slap in the face as a course correction when code falls outside those reasonable boundaries.
As a client-side developer, invest your valuable time in:
taking advantage of speedy (JavaScript Remoting) and reactive (Streaming API) features to offer the snappiness (or perceived snappiness) your users expect, decoupled from Apex performance (a sketch follows this list),
checking the expires attributes of pages holding JavaScript clients and the cache-control attributes of static resources (zips of course, concatenated CSS/JS courtesy of a non-overkill build script),
profiling first, shooting later!
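On the JavaScript Remoting point, here is a minimal sketch; QuickLookupController and findAccounts are made-up names, and this is just one way to wire it. An @RemoteAction method lets the page fetch data from JavaScript without a full Visualforce postback, so perceived snappiness no longer hangs on server-side page rebuilds.

// Hypothetical remoting controller for a lookup-style widget.
global with sharing class QuickLookupController {
    // Called from the page via Visualforce JavaScript Remoting; keep the
    // payload small and the query selective.
    @RemoteAction
    global static List<Account> findAccounts(String nameFragment) {
        String pattern = '%' + nameFragment + '%';
        return [SELECT Id, Name FROM Account WHERE Name LIKE :pattern LIMIT 20];
    }
}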