During our testing of the GeForce GTX 680, we expressed some reservations about the behavior of GPU Boost, Nvidia's turbo. It is non-deterministic in the sense that, to decide whether the GPU can raise its frequency, it relies on the card's actual power consumption rather than on an estimate that would be identical for all GPUs. This approach has the advantage of maximizing the performance of the samples that consume the least, for example because they suffer from lower leakage currents, but it introduces a degree of variability in the performance of two otherwise identical samples. Nvidia is tight-lipped on the subject, even though we pressed at length for more details on the extent of this variability. The manufacturer is content to announce a GPU Boost frequency, which is the minimum guaranteed frequency the GPU can reach; its engineers say they were amazed to watch the GPU go higher, and in many cases refused to say more. In reality this GPU Boost frequency is a facade specification: its presence in the BIOS or the drivers has no effect whatsoever except to allow monitoring tools to report it.
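To make the distinction concrete, here is a minimal Python sketch of the idea, not Nvidia's actual algorithm: the same boost loop fed a fixed per-model power estimate returns the same clock for every card, whereas fed the sample's real measured draw it lets low-leakage chips climb higher. The power figures, the linear scaling, and the power limit are assumptions for illustration only.

```python
# Conceptual sketch (not Nvidia's actual algorithm) of why a turbo driven by
# measured power is non-deterministic across samples. All power figures and
# the linear power/clock scaling are illustrative assumptions.

POWER_LIMIT_W = 195           # hypothetical board power limit
BASE_MHZ, BIN_MHZ = 1006, 13  # GTX 680 base clock and observed bin size

def boost_clock(power_at_base_w: float) -> int:
    """Raise the clock in 13 MHz bins as long as the (linearly scaled)
    power stays under the limit."""
    clock = BASE_MHZ
    while power_at_base_w * (clock + BIN_MHZ) / BASE_MHZ <= POWER_LIMIT_W:
        clock += BIN_MHZ
    return clock

# Deterministic approach: every sample is fed the same worst-case estimate,
# so every sample ends up at the same clock.
print(boost_clock(175))

# GPU Boost approach: each sample is fed its own measured consumption,
# so a low-leakage sample climbs higher than a leakier one.
print(boost_clock(160))  # low-leakage sample: boosts higher
print(boost_clock(175))  # leakier sample: stops earlier
```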
So what is the true maximum GPU Boost frequency? By observing the behavior of a few retail cards, we began to understand why Nvidia is embarrassed by questions on this point: not all GK104-400 chips (the version of the GPU used in the GeForce GTX 680) are qualified at the same maximum frequency! For example, the sample Nvidia provided us for the test is qualified at 1110 MHz, whereas a Gigabyte card we bought at retail carries a GPU qualified at "only" 1084 MHz. Others sit at 1071 MHz, others at 1097 MHz, and so on. This means that, on top of the non-deterministic operation of GPU Boost, the frequency increase is capped differently depending on the sample, even when GPU temperature and power consumption remain well under their limits.

Confronted with these findings, Nvidia still refuses to answer, arguing, after much insistence, that it wants to keep its qualification procedure secret to prevent the competition from drawing inspiration from it. In general terms, and simplifying, the first batches of GPUs are tested and common specifications are defined that allow a certain production volume with a certain performance level and a certain thermal envelope. In the case of the GK104-400, probably because the GeForce GTX 680 is neck and neck with the Radeon HD 7970, Nvidia seems to want to take advantage of every last point of performance available... even if reaching it is incompatible with a sufficient production volume. In other words, Nvidia wants both the performance of a GPU at 1110 MHz and the production volume of a GPU at 1058 MHz.

How does this work in practice? We can assume that each GPU records the gap between the base frequency and its maximum turbo frequency as a number of 13 MHz bins (notches), just as each GPU also reports an operating voltage of its own. Our press sample is thus a GK104-400 qualified at 1006 MHz + 8 bins (1110 MHz), while the Gigabyte sample has to make do with a GK104-400 qualified at 1006 MHz + 6 bins (1084 MHz). In practice, we observed that the performance variation between two cards comes more from the use of GPUs qualified at different maximum frequencies than from the non-deterministic operation of GPU Boost, since most games remain well below the power consumption limit. So what is the performance gap between these two samples?
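For the record, the bin arithmetic we are assuming here works out as follows; this is only a quick check using the base clock and the bin counts observed on our two samples.

```python
# Quick check of the assumed bin model: qualified maximum = base clock plus a
# per-sample number of 13 MHz bins. Bin counts are those observed above.

BASE_MHZ, BIN_MHZ = 1006, 13

def qualified_max(bins: int) -> int:
    return BASE_MHZ + bins * BIN_MHZ

press_sample = qualified_max(8)    # 1006 + 8 * 13 = 1110 MHz
gigabyte_card = qualified_max(6)   # 1006 + 6 * 13 = 1084 MHz

# Spec gap between the two samples: 26 MHz, i.e. roughly 2%
print(press_sample, gigabyte_card,
      round((press_sample / gigabyte_card - 1) * 100, 1))
```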
This 2% difference in the actual specifications of these two GeForce GTX 680s translates into a practical difference of about 1.5%, which can rise to 5% in the case of Anno 2070, a very demanding title whose performance is also influenced by GPU power consumption, obviously lower on the press sample.
All that for this?
Why worry about this detail? After all, a difference of 2% or less in games does not make a big difference, as Nvidia does not fail to point out... while not denying itself the benefit! We assume, in fact, that the GPU designer took this approach in order to glean a single extra point of performance against the competition. Moreover, this is only an example we happened upon, without looking for the largest possible gap. Some GK104-400s may be validated at a turbo frequency higher or lower than those we observed, producing larger performance gaps. Since Nvidia categorically refuses to communicate on these differences, probably because it would be embarrassing to admit a variation in specifications, we cannot know the range of variation. This also raises more fundamental questions with respect to component specifications, which we are used to seeing fixed for a given product model. Is it acceptable that they are not fixed? Is less than 1% acceptable? 2%? 3%? 5%? 10%? At what point would it become an abuse? Could you imagine an Intel Turbo that could be either 3.9 GHz or 4 GHz depending on the sample? A 128 GB SSD, or 130 GB? It will be interesting to see whether this margin increases or not on future Nvidia products, although it is generally difficult to determine, especially at launch, when we have to make do with a press sample that in all likelihood was not chosen at random. How can we tell whether the performance of the GeForce GTX 690 we will review tomorrow will be fully representative of retail cards?