Custom settings are cached in a special application layer cache which allows a developer to access this data without performing a query, similar to how a CPU's cache allows it to access frequently accessed information without requesting the data from main system memory.
All custom settings are cached, and each record in the cache counts toward the 10MB/1MB-per-user-license limit. Unlike regular data, which normally counts as 2KB per record (see exceptions in Help & Training), custom setting records are charged based on the actual size of the data contained in each row, shown in the custom setting's "record size" attribute. The more fields you have, and the larger they are, the more data each row requires.
Let's say that you have just one user license and are therefore limited to 1MB of cached data. This is equal to 1,048,576 bytes of data-- the extra 48,576 bytes come from the traditional method of counting memory, where a kilobyte is 1,024 bytes and a megabyte is 1,024 kilobytes. If each row is 384 bytes, you could store 2,730 rows of data (the remaining 256 bytes are not usable for this custom setting). Conversely, if your record size is 3,112 bytes, the same 1MB limit would yield just 336 rows of data.
The record size attribute tells you how large each row is, and the total size attribute is simply the record size times the number of rows stored. Since every byte matters, it is important that you allocate only enough size to each field to store the required data. For example, if you are storing relational ID values, make sure the field is only 18 characters long, and not 255 characters long (which wastes 1,317% of the required space). Similarly, store a number in a number field, not a text field, since numbers require less memory to store (a full 18-digit number stores in about 8 bytes of data, instead of 18 bytes of text).
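The capacity arithmetic above can be sketched in anonymous Apex. The record sizes here are the hypothetical figures from the text, not values taken from any real org:

```apex
// Rough capacity math for a 1MB cache limit (1,024 * 1,024 bytes).
Integer cacheLimitBytes = 1048576;

// Small records: integer division truncates to whole rows.
Integer smallRecordSize = 384;
Integer rowsAvailable = cacheLimitBytes / smallRecordSize;  // 2730 rows, 256 bytes left over

// Large records: the same limit holds far fewer rows.
Integer largeRecordSize = 3112;
Integer fewerRows = cacheLimitBytes / largeRecordSize;      // 336 rows

// Field-sizing waste: an 18-character ID stored in a Text(255) field.
Decimal wastePercent = ((255 - 18) / 18.0) * 100;           // ~1,316.7%

System.debug(rowsAvailable + ' / ' + fewerRows + ' / ' + wastePercent.setScale(1));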
There is also additional overhead per field, so keep this in mind as well; using more custom settings with fewer fields will generally yield less cache usage than a few custom settings with many fields.
Custom settings should only contain data that needs to be protected or tucked away from regular users. There are no "sharing settings" on these records, and when they are protected in a managed package they are accessible only to code in that package. This makes them ideal for storing passwords, security tokens, encryption keys, and other values that users shouldn't have easy access to. Even unprotected custom settings are acceptable for storing small amounts of data that needs to be accessed frequently, such as user preferences and configuration settings.
I'd say it depends on how you are using the Map<String, CustomSetting__c> that getAll() returns and how many list records there are.
If you are looping over the map's values() searching for a single record based on a field other than the Name, then SOQL would probably serve you better, as you could just grab the required records.
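As a sketch of the contrast (MySetting__c and its Endpoint__c field are hypothetical names for illustration):

```apex
// Option A: scan the whole cache looking for one record by a non-Name field.
MySetting__c match;
for (MySetting__c s : MySetting__c.getAll().values()) {
    if (s.Endpoint__c == 'https://example.com') {
        match = s;
        break;
    }
}

// Option B: let SOQL do the filtering and return only the row you need
// (costs one query against the synchronous limit of 100).
MySetting__c viaSoql = [SELECT Name, Endpoint__c
                        FROM MySetting__c
                        WHERE Endpoint__c = 'https://example.com'
                        LIMIT 1];
```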
One thing to note with the SOQL approach: it will cost you one of your 100 synchronous SOQL queries under the governor limit, whereas the cached getAll() is free from that but may not have the absolute latest data. I assume that the caching is only for the duration of the transaction (TODO: confirm this is actually the case). See the comment from @ca_peterson that the underlying implementation is based on memcached.
If you can access the Map by the Name key, then the Map has some good advantages over SOQL when you need to pick out several values. Even better would be pulling out individual CustomSetting__c records by name using getInstance(dataset_name).
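Both cached access patterns look something like this (again, MySetting__c and the dataset names are placeholders):

```apex
// Pull several records straight from the cache by Name -- no SOQL consumed.
Map<String, MySetting__c> allSettings = MySetting__c.getAll();
MySetting__c prod = allSettings.get('Production');

// Or fetch a single record directly by dataset name.
// Returns null if no row with that name exists.
MySetting__c sandbox = MySetting__c.getInstance('Sandbox');
```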
I'm going to go out on a limb here and say that the cache will be much faster than a SOQL query in getting a raw list of all the possible list values. Of course, it's hard to say what other caching Salesforce has got going on internally.
I did a quick test with a list custom setting to get a list of all values:
- SOQL query for all fields: 19:22:33.249 to 19:22:33.251 = ~2 ms
- .getAll().values(): 19:22:33.251 to 19:22:33.252 = ~1 ms
There is a slight advantage to the cache here, but that doesn't take into account how much extra processing you will need to do on the results.
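The quick test above was roughly the following anonymous Apex; the millisecond figures come from debug log timestamps, so treat them as ballpark numbers only:

```apex
// Compare fetching all list custom setting rows via SOQL vs. the cache.
// MySetting__c is a placeholder list custom setting.
List<MySetting__c> viaQuery = [SELECT Id, Name FROM MySetting__c];  // ~2 ms in the log
System.debug('query rows: ' + viaQuery.size());

List<MySetting__c> viaCache = MySetting__c.getAll().values();       // ~1 ms in the log
System.debug('cache rows: ' + viaCache.size());
```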
So, in conclusion, it depends on how many of those list custom setting values you want and if you can access them by name. You will need to try both approaches on your data to see which works best while taking into account if you can spare the extra SOQL query.
Best Answer
Just as with normal SObjects, your test context doesn't have access to the custom setting records already in the database.
Preferred solution
You can simply insert a new custom setting record in your test context, as you would normally do with an SObject, and then your function should return the newly created test setting.
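A minimal sketch of that approach, assuming a hypothetical list custom setting MySetting__c with an Endpoint__c field:

```apex
@isTest
private class MySettingServiceTest {
    @isTest
    static void returnsInsertedSetting() {
        // The test context starts with no custom setting rows, so create one.
        MySetting__c setting = new MySetting__c(
            Name = 'Default',
            Endpoint__c = 'https://example.com'
        );
        insert setting;

        // The cache now reflects the row inserted in this test context.
        MySetting__c fetched = MySetting__c.getInstance('Default');
        System.assertEquals('https://example.com', fetched.Endpoint__c);
    }
}
```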
Alternative solution
Declare your method with @isTest(SeeAllData=true); that way your test method has visibility over the data in your database, outside of the test context. However, these test methods might fail in orgs where the custom setting contains no data.
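The alternative looks like this (MySetting__c is again a placeholder), and the assertion illustrates why it is fragile in an org with no rows:

```apex
@isTest
private class MySettingSeeAllDataTest {
    // SeeAllData=true exposes org data to this method, including existing
    // custom setting rows -- but the test fails if the org has none.
    @isTest(SeeAllData=true)
    static void readsExistingOrgSetting() {
        Map<String, MySetting__c> allSettings = MySetting__c.getAll();
        System.assert(!allSettings.isEmpty(), 'Org has no MySetting__c rows');
    }
}
```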