Category: Benchmark metrics

The XPRT Women Code-a-Thon

As Justin explained last week, we’ve resolved the issue we found with the TouchXPRT CP. I’m happy to say that the testing went well and that we released CP3 this week.

It’s been only three weeks since we announced the XPRT Weekly Tech Spotlight, and we already have another big announcement! Principled Technologies has joined with ChickTech Seattle to host the first-ever XPRT Women Code-a-Thon! In this two-day event, participants will compete to create the best new candidate workload for WebXPRT or MobileXPRT. The workloads can’t duplicate existing workloads, so we are looking forward to seeing the new ideas.

Judges will study all the workloads and award prizes to the top three: $2,500 for first place, $1,500 for second place, and $1,000 for third place. Anyone interested can register here.

PT and the BenchmarkXPRT Development Community are committed to promoting the advancement of women in STEM, but we also win by doing good. As with the NCSU senior project, the BenchmarkXPRT Development Community will get some fresh perspectives and some new experimental test tools. Everyone wins!

So much has happened in 2016 and January isn’t even over yet. The year is off to a great start!

Eric

Nebula Wolf

A couple of months ago, we talked about the senior project we sponsored with North Carolina State University. We asked a small team of students to take a crack at implementing a game that we could use as the basis of a benchmark test.

Last Friday, the project culminated with a presentation at the annual Posters and Pies event.

The team gave a great presentation, and I was impressed by how much they accomplished in three months. They implemented a space-themed rail shooter they called Nebula Wolf (the mascot for NC State is a wolf). You can play the game yourself, or click a button to run a script for benchmarking purposes. In scripted mode, Nebula Wolf unlocks the frame rate so the device can run at full speed.

Over the next couple of weeks, we’re going to be testing Nebula Wolf, digging into the code and getting a deeper understanding of what the team did. We’re hoping to make the game available on our web site soon.

Tomorrow, AJ, Brien, and Rachel will present one last time, here at PT. It’s been a real pleasure working with them. I wish them all good luck as they finish college and start their careers.

Eric

The TouchXPRT 2016 CP arrives tomorrow!

As we said a couple of weeks ago, we wanted to test on Windows 10 Threshold 2 before releasing the TouchXPRT 2016 community preview. Well, Threshold 2 is out and the testing has been going very well.

We’ll release the TouchXPRT 2016 CP to the community tomorrow. Because community previews are not available to the general public, members will need to download it from our site.

The installation procedure is fairly straightforward. First, put the device in developer mode. Then, on a tablet or PC, run a PowerShell script, as you did for TouchXPRT 2014. On a phone or other mobile device, copy the bundle to the device and install it.

If you have any problems, please let us know.

We’re looking forward to seeing the results from phones and, as we have done with MobileXPRT, to comparing results across different-sized devices.

Enjoy!

Eric

More than the sum of its parts

There was a recent article in Bloomberg about phone maker ZTE’s increasing market share in the US. The article singled out one phone, the ZTE Maven, which costs about $60 (US).

This phrase jumped out at me: “a processor with capabilities somewhere between the iPhone 5 and 6.” The iPhone 5s could also fit that description. The ZTE Maven uses a 64-bit ARM Cortex-A53 processor running at 1.2 GHz. The Apple iPhone 5s uses the Apple A7, whose 64-bit Cyclone cores run at 1.3 GHz.

We decided to put that statement to the test. We ran WebXPRT 2015 on the ZTE Maven and its score was 47. The iPhone 5s scored 100. The Maven was not even close.

As we’ve said before, the performance of a device depends on more than the GHz of its processor. For example, the ZTE Maven uses the Snapdragon 410 SoC, which was aimed at mid-level devices. The iPhone 5s uses the Apple A7, which was intended for higher-end devices. You can find side-by-side specs here.
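A quick back-of-the-envelope sketch, using only the WebXPRT 2015 scores and clock rates quoted above, shows how little the clock speeds explain:

```python
# WebXPRT 2015 scores and clock rates quoted in this post.
devices = {
    "ZTE Maven (Snapdragon 410)": {"score": 47, "ghz": 1.2},
    "Apple iPhone 5s (A7)":       {"score": 100, "ghz": 1.3},
}

# Normalize each score by clock rate to remove GHz from the comparison.
for name, d in devices.items():
    per_ghz = d["score"] / d["ghz"]
    print(f"{name}: {d['score']} points, {per_ghz:.0f} points per GHz")
```

Even normalized per GHz (roughly 39 points/GHz for the Maven versus 77 for the iPhone 5s), the A7 delivers nearly double the score, so the difference has to come from the rest of the SoC, not the clock rate.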

Be wary when you see unsupported performance claims. As this example shows, specs can appear comparable even when the actual performance of the devices differs considerably. A good benchmark can provide insights into performance that specs alone can’t.

Eric

Question we get a lot

“How come your benchmark ranks devices differently than [insert other benchmark here]?” It’s a fair question, and the reason is that each benchmark has its own emphasis and tests different things. When you think about it, it would be unusual if all benchmarks did agree.

To illustrate the phenomenon, consider the results of a recent browser shootout in VentureBeat, in which the same browsers ranked differently from one benchmark to the next.

[Table of browser benchmark results from the VentureBeat shootout]

While this looks very confusing, the simple explanation is that the different benchmarks are testing different things. To begin with, SunSpider, Octane, JetStream, PeaceKeeper, and Kraken all measure JavaScript performance. Oort Online measures WebGL performance. WebXPRT measures both JavaScript and HTML5 performance. HTML5Test measures HTML5 compliance.

Even with benchmarks that test the same aspect of browser performance, the tests differ. Kraken and SunSpider both test the speed of JavaScript math, string, and graphics operations in isolation, but run different sets of tests to do so. PeaceKeeper profiles the JavaScript from sites such as YouTube and Facebook.

WebXPRT, like the other XPRTs, uses scenarios that model the types of work people do with their devices.

It’s no surprise that the order changes depending on which aspect of the Web experience you emphasize, in much the same way that the most fuel-efficient cars might not be the ones with the best acceleration.
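The reshuffling is easy to reproduce. Here is a small sketch with invented scores for three hypothetical browsers in two test categories (the names and numbers are made up for illustration, not real results); changing which category a benchmark weights flips the ranking:

```python
# Hypothetical normalized scores (higher is better) in two test categories.
scores = {
    "Browser A": {"javascript": 95, "webgl": 60},
    "Browser B": {"javascript": 70, "webgl": 90},
    "Browser C": {"javascript": 85, "webgl": 80},
}

def rank(weights):
    """Order browsers best-first by weighted score under the given weights."""
    totals = {
        name: sum(weights[cat] * val for cat, val in cats.items())
        for name, cats in scores.items()
    }
    return sorted(totals, key=totals.get, reverse=True)

print(rank({"javascript": 1.0, "webgl": 0.0}))  # a JavaScript-only benchmark
print(rank({"javascript": 0.0, "webgl": 1.0}))  # a WebGL-only benchmark
print(rank({"javascript": 0.5, "webgl": 0.5}))  # a balanced benchmark
```

Each of the three weightings produces a different winner, even though no browser’s scores changed; all that changed was which aspect of performance the “benchmark” emphasized.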

This is a bigger topic than we can deal with in a single blog post, and we’ll examine it more in the future.

Eric

What’s in a name?

A couple of weeks ago, the Notebookcheck German site published a review of the Huawei P8lite. We were pleased to see they used WebXPRT 2015, and the P8lite got an overall score of 47. This week, AnandTech published their review of the Huawei P8lite. In their review, the P8lite got an overall score of 59!

Those scores are very different, but it was not difficult to figure out why. The P8lite comes in two versions, depending on your market. The version Notebookcheck tested uses HiSilicon’s Kirin 620 SoC, while the version AnandTech tested uses Qualcomm’s Snapdragon 615. It’s also worth noting that the phone Notebookcheck tested was running Android 5.0, while the phone AnandTech tested was running Android 4.4. With different hardware and different operating systems, it’s no surprise that the results were different.

One consequence of the XPRTs being used across the world is that it is not uncommon to see results from devices sold in different markets. As we’ve said before, many things can influence benchmark results, so don’t assume that two devices with the same name are identical.

Kudos to both AnandTech and Notebookcheck for their care in presenting the system information for the devices in their reviews. The AnandTech review even included a brief description of the two models of the P8lite. This type of information is essential for helping people make informed decisions.

In other news, Windows 10 launched yesterday. We’re looking forward to seeing the TouchXPRT and WebXPRT results!

Eric
