It looks like it is a "feature".
Anonymous Apex (note the premature closing brace right after the "v3" value of the "k3" key):
System.debug(JSON.deserializeUntyped('{"k1": "efcc1129","k2": "v2", "k3": "v3"},"beingIgnored": "ignored"}'));
Output
18:23:58.39 (39602600)|CODE_UNIT_STARTED|[EXTERNAL]|execute_anonymous_apex
18:23:58.39 (40572794)|USER_DEBUG|[1]|DEBUG|{k1=efcc1129, k2=v2, k3=v3}
Node js
JSON.parse('{"k1": "efcc1129","k2": "v2", "k3": "v3"},"beingIgnored": "ignored"}')
Uncaught SyntaxError: Unexpected non-whitespace character after JSON at position 42
From visual inspection it is obvious that the JSON is malformed. Apex appears to ignore the rest of the string and stop gracefully at the error position. Note that the last "element" was dropped, presumably because of the premature closing brace:
{"k1": "efcc1129","k2": "v2", "k3": "v3"},"beingIgnored": "ignored"}
Best Answer
Apex uses a custom JSON parser. This parser stops as soon as it has parsed a valid JSON object, even though more of the string remains.
This is certainly a bug from the standpoint of strict JSON deserialization. However, this behavior has been around for as long as I can remember; I think I discovered it shortly after Apex gained JSON support. In a sense, it is technically an optimization: the parser returns as soon as a complete, valid object has been parsed.
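To make the "return as soon as a valid object is parsed" behavior concrete, here is a sketch in Node (whose own JSON.parse is strict, as shown above). The lenientParseObject helper is hypothetical, not an Apex or Node API: it scans for the first balanced top-level object, parses only that slice, and ignores whatever follows, which reproduces the output the question observed.

```javascript
// Hypothetical helper mimicking Apex's stop-at-first-value behavior:
// find the first balanced top-level {...} and parse only that slice.
function lenientParseObject(s) {
  let depth = 0;
  let inString = false;
  let escaped = false;
  for (let i = 0; i < s.length; i++) {
    const c = s[i];
    if (inString) {
      // Track escapes so an escaped quote doesn't end the string.
      if (escaped) escaped = false;
      else if (c === '\\') escaped = true;
      else if (c === '"') inString = false;
    } else if (c === '"') {
      inString = true;
    } else if (c === '{') {
      depth++;
    } else if (c === '}') {
      depth--;
      // Depth back to zero: first complete object ends here.
      if (depth === 0) return JSON.parse(s.slice(0, i + 1));
    }
  }
  throw new SyntaxError('No complete top-level object found');
}
```

Running it against the question's malformed input yields the same three-key map Apex logged, with the trailing `,"beingIgnored": "ignored"}` silently discarded.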
I don't think this will be fixed, but we can at least report it to salesforce.com and see what they say.
If you're concerned about having strict JSON, you could in theory validate the string with a regular expression before parsing, but that is overkill for most use cases: the input is almost always valid JSON to begin with, so the extra check is rarely necessary.
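For comparison, no regex is needed on the Node side: JSON.parse is already strict and rejects trailing characters, so a try/catch wrapper doubles as a validity check. This is a minimal sketch of that idea (isStrictJson is a made-up helper name); an Apex-side equivalent would need its own validation, since Apex's parser is lenient as described above.

```javascript
// Minimal strict-validity check: Node's JSON.parse throws on any
// non-whitespace characters after the first complete value, so a
// failed parse means the string is not strictly valid JSON.
function isStrictJson(s) {
  try {
    JSON.parse(s);
    return true;
  } catch (e) {
    return false;
  }
}
```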