The system context is not just about sharing. From the docs:
In system context, Apex code has access to all objects and fields—object permissions, field-level security, and sharing rules aren't applied for the current user.
Setting a class to run "with sharing" tells Apex to apply the current user's sharing rules, but field-level security, object permissions, etc. still don't get applied. The class doesn't run in user mode, though; only standard controllers and code run via Execute Anonymous run in user mode.
If a controller class declared without sharing causes a trigger to fire, the trigger will still run in the system context (i.e. no sharing, FLS, or object security). If you need sharing to be respected, your trigger will need to delegate to a class declared as 'with sharing'.
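As a minimal sketch of that delegation pattern (the trigger, object, and handler names here are hypothetical, and the trigger and class would live in separate files in practice):

```apex
// Hypothetical trigger that delegates its logic to a 'with sharing' class
// so the current user's sharing rules are enforced inside the handler.
trigger AccountAudit on Account (before update) {
    AccountAuditHandler.handle(Trigger.new);
}

// Declared 'with sharing': SOQL and DML inside honour the running user's
// record-level sharing. FLS and object permissions are still NOT enforced.
public with sharing class AccountAuditHandler {
    public static void handle(List<Account> accounts) {
        // Only Contacts the current user can see are returned here.
        List<Contact> visible = [SELECT Id FROM Contact
                                 WHERE AccountId IN :accounts];
        // ... work with the visible records ...
    }
}
```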
There's a blog post from Abhinav Gupta that covers some of this, although it's coming from the other side, explaining how delegating to a 'with sharing' class takes a trigger out of full system context:
http://www.tgerm.com/2011/03/trigger-insufficient-access-cross.html
The guest site user is a slightly different situation, in that some of its ability to access objects is constrained by the user license. I've hit the issue where I was trying to update something that the guest user profile should only have read/create permission on, so I thought I'd get around it using a custom controller (running in the system context). This worked for contacts, but for another standard object I received an error that the license didn't support the operation. The license issue persisted even when I executed the update from an @future method. So no, I wouldn't expect it to have as much clout as the system administrator, although actions that should be prohibited by the license may sometimes work.
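For illustration, a hypothetical sketch of the @future attempt described above (the object and method names are invented); the point is that the asynchronous context does not escape the guest user's license restrictions:

```apex
public class GuestUpdater {
    // Even though @future methods run asynchronously and without sharing,
    // the operation is still constrained by the calling user's license,
    // so a DML the license forbids can still fail here.
    @future
    public static void updateRecords(Set<Id> recordIds) {
        List<Account> recs = [SELECT Id, Description FROM Account
                              WHERE Id IN :recordIds];
        for (Account a : recs) {
            a.Description = 'Updated from guest context';
        }
        update recs; // may still throw a license-related error
    }
}
```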
Spring '15 is the friendly name for less technical human beings.
There are usually three releases per year: Spring, Summer, and Winter. The year is incremented at the Winter release, so what we get in late 2015 will become Winter '16.
33.0 is the API version, typically incremented by one each release, so Summer '15 will end up as 34.0.
Both names can be used to address a release.
There is even a third, internal version number you might hear in serious and tricky support cases. It was something like 188.0 the last time I had to deal with it. As far as I know, you can't see this number in Salesforce or in the documentation, so I was a bit puzzled at first. But under the hood, this seems to be the real version.
Best Answer
Contrariwise, I will provide an argument for upgrading, although the other answers state that complacency is acceptable.
There are two factors at play when you speak of API versions: features and consistency. Both of these issues introduce two distinct needs, namely stability and flexibility.
First, let's address consistency.
You should keep all your classes, pages, and triggers at the same API version. This is important to avoid bugs like this one. Note that this bug only occurs when some classes are below v28 and others are above it. Each class appears to run against a versioned runtime, compiled so that different versions are binary compatible but expose different features. Every once in a while, a newer feature can bleed into an older class, resulting in an error. You need consistency for stability. While the language is very good at what it does, it is not perfect, and you would do well to keep all your code within the same version.
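Concretely, the API version of each class is pinned in its accompanying -meta.xml file when deploying via the Metadata API; keeping that value aligned across all classes, pages, and triggers is what the consistency argument boils down to (version 33.0 shown here as an example):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<ApexClass xmlns="http://soap.sforce.com/2006/04/metadata">
    <apiVersion>33.0</apiVersion>
    <status>Active</status>
</ApexClass>
```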
Next, let's address features.
Inevitably, some CEO/CFO/VP/etc will want to implement a feature. You tell them that it's no big deal, it can be done in a week. Now, at this point, you're going to be at one of two places.
If you're in the habit of not upgrading, you'll write a new class with a new version, try to tie it in to everything else, and everything will break. Not necessarily in big ways, but big enough to be noticeable. You've ignored the first aspect, consistency. So you start upgrading all your other classes in the second week. By the third week, they've all been upgraded, but now you're hunting down really obscure bugs, and you have no idea why. A month has gone by, and nobody's happy, but you swear you're almost there. Conversely, if you're regularly upgrading, you'll simply spend the week writing the new feature, and presto, it'll be ready.
In other words, by not upgrading, you are not saving time. You are kicking the can down the street, and when you need to upgrade, you'll be between a rock and a hard place.
I'm not advocating that you jump from v18 to v30 overnight. There are so many changes, so many small nuances you have to consider, that it would be a risky move. I tried something similar to that on a project, and we ultimately got set back months by way of troubleshooting, hunting bugs, etc. But, I'm also not saying that you should kick the can too far down the street. You need to address version upgrades in a controlled manner. There's nothing wrong with going from v18 to v20, then doing other things and going on to v22, for example.
There are a few versions that will be tricky to upgrade to, because they were significant upgrades. When they introduced "SeeAllData", for example, most test methods had to be completely rewritten, or at least modified to use the new attribute. Then, they forced the split between test methods and live code. Who knows what other game-breaking changes will happen in the future? You want to avoid the last-minute upgrade, because it will put you in a position you don't want to be in.
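To make the SeeAllData change concrete: as of API v24, test methods no longer see org data by default, so pre-v24 tests that relied on existing records must either create their own test data or opt out with the SeeAllData=true attribute. A minimal sketch (the class and method names here are hypothetical):

```apex
// Since API v24, tests run in isolation from org data by default.
// SeeAllData=true restores the old behaviour for a legacy test that
// queries real records; creating dedicated test data is the better fix.
@isTest(SeeAllData=true)
private class LegacyReportTest {
    static testMethod void testAgainstOrgData() {
        // Queries real org records: brittle, but sometimes the only
        // short-term option when upgrading old tests.
        Integer count = [SELECT COUNT() FROM Account];
        System.assert(count >= 0);
    }
}
```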
So, if you don't have the resources to stay on the bleeding edge (and many organizations don't), at least avoid being left in the dust: you will have to invest the time sooner or later, and the longer you wait, the more of it you will have to commit at once.