nsane.forums Posted July 31, 2013

Anandtech digs deep on a forum allegation and finds depressing results.

Benchmarking is not the be-all, end-all for potential gadget buyers, but these tests are good for showing a device's raw, potential power. Sites like Ars rely on them to help paint a more thorough picture of anything we review. Manufacturers know this and, apparently, some even take steps to boost performance within popular benchmark tests.

Anandtech recently dusted off a pair of international-model Galaxy S 4s equipped with first-gen Exynos 5 Octa (5410) SoCs to investigate a Beyond3D forum rumor that Samsung was allegedly exposing its 533MHz GPU clock only to specific benchmarking tests while limiting other apps and games to 480MHz. Anandtech's initial results had the team wondering about CPU performance as well, and its final research showed that three tests—AnTuTu, GLBenchmark 2.5.1, and Quadrant—"get fixed CPU frequencies and a 532MHz max GPU clock" while others did not. The Anandtech team next examined the specific .apk responsible for this behavior and found more possible benchmark exceptions hard-coded within (like Linpack and BenchmarkPi). It seems as if Samsung intentionally let its device perform better on certain benchmarks than it does in genuine day-to-day use.

Ultimately, Anandtech currently uses different benchmarks when evaluating devices (and, full disclosure, when Ars reviewed the S 4, we used a US model and ran the Geekbench 2.3.5, GLBenchmark 2.7, Google Octane, Kraken 1.1, and Sunspider 0.9.1 tests). It's also worth noting that this isn't the first time devices and drivers have specifically aimed to boost benchmark performance. But the lack of transparency is alarming to tech enthusiasts. Anandtech finished its update by calling for honesty from Samsung, suggesting the company either open up the stronger settings for use across the board or abandon them altogether.

"The risk of doing nothing is that we end up in an arms race between all of the SoC and device makers where non-insignificant amounts of time and engineering effort [are] spent on gaming the benchmarks rather than improving user experience," wrote Brian Klug and Anand Lal Shimpi. "Optimizing for user experience is all that's necessary, good benchmarks benefit indirectly—those that don't will eventually become irrelevant."

View: Original Article
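For readers curious what this kind of app-specific boosting looks like in practice, here is a minimal sketch in plain Java of a hard-coded whitelist that picks a GPU clock cap based on the foreground package name. The package identifiers and the gpuCapFor() helper are illustrative assumptions, not Samsung's actual code; only the 532MHz/480MHz figures come from the report.

```java
import java.util.Set;

/**
 * Minimal sketch of the detection logic AnandTech describes: a
 * hard-coded whitelist of benchmark package names that, when matched,
 * unlocks a higher GPU clock cap. The package names and this helper
 * are illustrative assumptions, not Samsung's actual implementation.
 */
public class DvfsBoostSketch {

    // Hypothetical package identifiers for the benchmarks named in
    // the article (AnTuTu, GLBenchmark, Quadrant, Linpack, BenchmarkPi).
    private static final Set<String> BOOSTED_APPS = Set.of(
            "com.antutu.ABenchMark",
            "com.glbenchmark.glbenchmark25",
            "com.aurorasoftworks.quadrant.ui.standard",
            "com.greenecomputing.linpack",
            "gr.androiddev.BenchmarkPi"
    );

    // Clock figures reported by AnandTech: 532MHz for whitelisted
    // benchmarks, 480MHz for everything else (values in kHz).
    private static final int BOOSTED_GPU_KHZ = 532_000;
    private static final int DEFAULT_GPU_KHZ = 480_000;

    /** Pick the GPU clock cap for whatever app is in the foreground. */
    static int gpuCapFor(String foregroundPackage) {
        return BOOSTED_APPS.contains(foregroundPackage)
                ? BOOSTED_GPU_KHZ
                : DEFAULT_GPU_KHZ;
    }

    public static void main(String[] args) {
        System.out.println(gpuCapFor("com.antutu.ABenchMark")); // 532000
        System.out.println(gpuCapFor("com.some.game"));         // 480000
    }
}
```

The point of the sketch is how brittle and opaque the trick is: any app not on the list, including real games with identical workloads, gets the lower cap, which is exactly why AnandTech's renamed test binaries exposed the behavior.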
nIGHT Posted August 6, 2013

That's called cheating. <_<