The values() list is not guaranteed to be in any particular order, because the map's hashing algorithm, not insertion order, determines where elements land. I'm not sure I would classify this as a bug, but I would never depend on the values() list having a particular order; Visualforce has a hard time dealing with maps in general.
What I would do is instead create a wrapper class that keeps the list in a predefined order. I created this technique after I ran across a bug where Visualforce would die unexpectedly when a key was missing from a map, but I'm sure it has other uses as well. Here's the design pattern:
public class Controller {
    // Unordered storage for the records, keyed by Id
    Map<Id, SObject> objects;
    // Separate list that fixes the display order
    public ObjectWrapper[] objectList { get; set; }

    public class ObjectWrapper {
        Controller controller;
        Id key;
        public ObjectWrapper(Controller controller, Id key) {
            this.controller = controller;
            this.key = key;
        }
        // Reads and writes go through the map via the stored key
        public SObject record {
            get { return controller.objects.get(key); }
            set { controller.objects.put(key, value); }
        }
    }

    public Controller() {
        objects = new Map<Id, SObject>();
        objectList = new ObjectWrapper[0];
        for(SObject record: [...]) {
            objects.put(record.Id, record);
            objectList.add(new ObjectWrapper(this, record.Id));
        }
    }
}
The basic premise is that objectList provides a stable ordering of records, so the data won't be scrambled when the view state is serialized and deserialized. You can add values to objectList, and you can even list the same key twice so a record appears more than once (but then both entries read and write the same underlying map entry, so one can clobber the other).
Best of all, you could implement Comparable to allow automatic sorting of the objectList while leaving the map alone. Since the map is naturally unordered, storing the order as a separate list keeps things straight. As a bonus, this pattern also sidesteps the "missing map key" error.
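Apex won't run outside the platform, so here's the same idea sketched in Java (the ids and values are invented for the demo): an unordered HashMap holds the records while a separate key list pins the order, and sorting the list leaves the map untouched, mirroring the objectList pattern above.

```java
import java.util.*;

public class OrderedView {
    // Builds a map plus a separate ordered key list, sorts the list,
    // and returns the records viewed through it. The map itself is
    // never re-ordered; only the view list is.
    static List<String> orderedValues() {
        Map<String, String> records = new HashMap<>(); // unordered storage
        List<String> order = new ArrayList<>();        // like objectList

        for (String id : new String[] { "003C", "001A", "002B" }) {
            records.put(id, "record-" + id);
            order.add(id);
        }

        Collections.sort(order); // sort the view, not the map

        List<String> view = new ArrayList<>();
        for (String id : order) {
            view.add(records.get(id));
        }
        return view;
    }

    public static void main(String[] args) {
        // Prints the records in sorted-key order
        System.out.println(orderedValues());
    }
}
```

Because iteration always goes through the list, the rendered order is deterministic no matter how the map hashes its keys internally.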
Update
As an example of the faulty logic, imagine we have the following code:
Opportunity[] records = somemap.values().deepClone(true, true, true);
String[] fields = new String[] { 'Name', 'CloseDate', 'Id' };
for(Opportunity record: records) {
    record.CloseDate = System.today() + (1000 * Math.random()).intValue();
}
for(Integer fdx = 0; fdx < fields.size(); fdx++) {
    for(Integer idx = 0; idx < records.size(); idx++) {
        somemap.values()[idx].put(fields[fdx], records[idx].get(fields[fdx]));
    }
}
This code would be perfectly sane if we replaced somemap.values() with someList instead. That's because Map.values() returns an unordered list, which means the results may re-order themselves arbitrarily between calls, presumably due to internal hashing. Since somemap.values() is not guaranteed to return the same order each time, the fields could easily become scrambled; you may as well be calling:

somemap.values()[(somemap.size()*Math.random()).intValue()].put(fields[fdx], records[idx].get(fields[fdx]));

Hopefully this illustrates the problem better. It has nothing to do with serializing or deserializing; it has to do with the values() list not necessarily retaining its order, since it is, by definition, an unordered list.
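The same hazard exists in any language whose hash maps define no iteration order. Since Apex can't run standalone here, a minimal Java sketch (keys and values invented for the demo) contrasting a hash-ordered map with an insertion-ordered one:

```java
import java.util.*;

public class ValuesOrderDemo {
    // Inserts three entries and returns the order in which values()
    // iterates them, for whatever Map implementation is passed in.
    static List<String> valuesOrder(Map<String, String> map) {
        map.put("record-b", "Beta");
        map.put("record-a", "Alpha");
        map.put("record-c", "Gamma");
        return new ArrayList<>(map.values());
    }

    public static void main(String[] args) {
        // HashMap: iteration order follows internal hashing and is
        // unspecified -- it need not match insertion order
        System.out.println(valuesOrder(new HashMap<>()));
        // LinkedHashMap: iteration order is exactly insertion order
        System.out.println(valuesOrder(new LinkedHashMap<>()));
    }
}
```

The lesson carries straight back to Apex: if your logic indexes into values() expecting a fixed order, it is relying on behavior the map never promised; snapshot the order into a list you control instead.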
Best Answer
I've come across a similar issue before in Using an sObject as a Map key.
The relevant documentation is in the Map Considerations and sObject Map Considerations sections of the Apex developer guide.
Something interesting I found: as you say, the assertions pass when I run your sample code as anonymous Apex with the Apex Code logging level set to DEBUG. However, with exactly the same code, if you set the Apex Code logging level to FINEST, the assertions fail!