It has been reported on multiple occasions (e.g. http://salesforce.stackexchange.com/questions/18244/how-does-sf-calculate-the-cpu-time) that the CPU time reported for the same task can vary significantly, leaving developers with little confidence that a complex request will consistently execute rather than fail by exceeding the CPU time limit.
I have noticed that calls to Limits.getHeapSize() can significantly increase a request's CPU usage. My assumption is that, in order for Limits.getHeapSize() to report the heap size accurately, a garbage collection is run before the result is returned. This assumption is consistent with the observed variation in both wall-clock and CPU times: running two requests that are identical except that one calls Limits.getHeapSize() consistently takes different amounts of time, whether measured by stopwatch or by Limits.getCpuTime().
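For reference, here is a minimal anonymous-Apex sketch of the kind of comparison described above. The workload (the loop body and iteration count) is an arbitrary illustration, not from any specific benchmark; the point is only that the two variants differ solely in the Limits.getHeapSize() call:

```apex
// Measure CPU time consumed by a simple workload.
// Run once as-is, then again with the Limits.getHeapSize() call
// commented out, and compare the debug output of the two runs.
Integer startCpu = Limits.getCpuTime();

List<String> data = new List<String>();
for (Integer i = 0; i < 10000; i++) {
    data.add('row-' + i);
    if (Math.mod(i, 1000) == 0) {
        // Suspected to trigger a garbage collection pass before
        // returning, inflating the measured CPU time:
        Integer heap = Limits.getHeapSize();
    }
}

System.debug('CPU time used: ' + (Limits.getCpuTime() - startCpu) + ' ms');
```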
This idea is about excluding garbage collection time from a request's CPU time, whether the collection is explicitly triggered via Limits.getHeapSize() or occurs automatically.
Doing so would produce more consistent Limits.getCpuTime() measurements and give developers more confidence that complex requests will execute successfully in both development and production environments.
Garbage collection will happen in the underlying platform regardless, so charging these largely uncontrollable events against a user request's CPU time is also not quite fair.