
Author Archives: Eric Hale

Something old, something new

Last week, we talked about porting TouchXPRT 2014 to be a Windows 10 universal app. This will let it run on devices running Windows 10 and those running Windows 10 Mobile.

We won’t be retiring TouchXPRT 2014 when we release the Windows 10 universal app version. Windows 8 doesn’t support Windows 10 universal apps, but Windows 10 will be able to run Windows 8 applications. This means you’ll continue to be able to use TouchXPRT 2014 to test Windows 8-based systems, as well as to compare Windows 8 and Windows 10 performance.

The results from TouchXPRT 2014 and the universal app version of the benchmark won’t be comparable. Even though the test scenarios will be the same, the porting process means that we have to change the APIs the benchmark uses and rebuild it with different tools.

We’re currently debating changing the way we version the benchmarks. As the number of versions of each benchmark increases, it may make sense to move away from year-based versioning. This will obviously affect what we call the new Windows 10 version of TouchXPRT. If you have any thoughts on this, please let us know!

Eric

Hoping for a perfect 10

As many of you know by now, the release date for Windows 10 is July 29. As we’ve said before, we are hard at work getting TouchXPRT ready for Windows 10. We’ve succeeded in building TouchXPRT as a universal app, and it’s now running on Windows 10. We haven’t successfully run it on Windows 10 Mobile yet, but we’re working on that.

Unfortunately, I can’t share any performance data. The EULA for the current build of Windows 10 (build 10143 as I’m writing this) forbids publishing benchmark results without prior written approval from Microsoft.

We’ll continue testing and refining the porting of TouchXPRT to Windows 10. Our goal is to release it as a universal app to the community in July.

What are your experiences testing Windows 10? We’d love to hear about them!

Eric

Bit by bit

We’ve been working to internationalize the XPRTs. Our initial efforts have focused on China.

Both BatteryXPRT and WebXPRT have Chinese UI options. We expect to have a version of MobileXPRT with a Chinese UI option available in a couple of weeks.

We’ve also been working to make the benchmarks more accessible in China. WebXPRT has a mirror host site in Singapore. We’re also getting the XPRTs into Chinese app stores. MobileXPRT is available in two Chinese app stores: Xiaomi (http://app.mi.com/detail/90862) and Zhushou 360 (http://zhushou.360.cn/detail/index/soft_id/2984653). We aim to have BatteryXPRT and the Chinese version of MobileXPRT available in those stores as quickly as possible.

Obviously, we will continue to work to improve our localization. This is an area where we can use the help of the community. If you have translation skills and want to contribute UI strings for your language, let us know.

Eric

Rolling with the changes

While WebXPRT 2015 has been running fine on earlier beta versions of Windows 10, we have found a problem on some recent versions. Starting with build 10122, the Local Notes test hangs when using the Microsoft Edge browser. (Note: This browser still identifies itself as Spartan in the builds we have seen.) Chrome and Firefox on Windows 10 have successfully run WebXPRT 2015, so the problem appears to be restricted to Edge/Spartan.

Because WebXPRT ran successfully on earlier builds of Windows 10, we are hoping that upcoming builds will resolve the problem. However, we have been investigating the issue in case there is something we can address. The problem is that the encrypted strings that the test is trying to write to LocalStorage are not being written correctly. Non-encrypted strings are being written correctly.
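A quick way to see whether a storage write survived intact is a round-trip check: write the string, read it back, and compare. The sketch below illustrates that kind of diagnostic; it is not WebXPRT’s actual code. The in-memory `storage` object stands in for the browser’s `localStorage` (so the sketch runs outside a browser), and `encrypt` is a placeholder for whatever encryption the test applies.

```javascript
// Minimal in-memory stand-in for the browser's localStorage,
// so this sketch runs outside a browser.
const storage = {
  data: new Map(),
  setItem(key, value) { this.data.set(key, String(value)); },
  getItem(key) { return this.data.has(key) ? this.data.get(key) : null; },
};

// Placeholder "encryption": base64-encode the string. The real test
// uses actual encryption; this only illustrates the round-trip check.
function encrypt(plaintext) {
  return Buffer.from(plaintext, "utf8").toString("base64");
}

// Write a value, read it back, and report whether it survived intact.
function roundTripOk(key, value) {
  storage.setItem(key, value);
  return storage.getItem(key) === value;
}

const note = encrypt("Local Notes test string");
console.log(roundTripOk("note-1", note)); // true with this shim
```

In a browser, replacing `storage` with the real `localStorage` turns this into the kind of check that would surface a write failure like the one described above.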

As soon as the problem gets resolved, we’ll let you know.

In other news, we’ve been looking at Android M. There are a lot of interesting changes coming, such as the new way Android M manages app permissions. We’ve decided to delay releasing the design document for the next version of MobileXPRT so that we can make sure the design handles these changes appropriately.

Eric

What makes a good benchmark?

As we discussed recently, we’re working on the design document for the next version of MobileXPRT, and we’re really interested in any ideas you may have. However, we haven’t talked much about what makes for a good benchmark test.

The things we measure need to be quantifiable. A reviewer can talk about the realism of game play, or the innovative look of a game, and those are valid observations. However, it is difficult to convert those kinds of subjective impressions to numbers.

The things we measure must also be repeatable. For example, the response time for an online service may depend on the time of day, number of people using the service at the time, network load, and other factors that change over time. You can measure the responsiveness of such services, but doing so requires repeating the test enough times under enough different circumstances to get a representative sample.
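One common way to handle that kind of variability is to repeat the measurement many times and report a robust summary such as the median along with the spread. The sketch below is a generic illustration of that approach, not the XPRT harness; `measureOnce` is a stand-in for whatever operation you are timing.

```javascript
// Summarize repeated samples so one noisy run doesn't
// dominate the reported result.
function summarize(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  const median = sorted.length % 2
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
  return { median, min: sorted[0], max: sorted[sorted.length - 1] };
}

// Stand-in for a timed operation (e.g., one response-time probe).
function measureOnce() {
  const start = Date.now();
  for (let i = 0; i < 1e5; i++) {} // placeholder work
  return Date.now() - start;
}

// Collect enough samples to get a representative picture.
const samples = Array.from({ length: 25 }, measureOnce);
console.log(summarize(samples));
```

Reporting the median with the min/max (or a percentile range) makes it clear how stable the measurement is, which is exactly what repeatability demands.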

The possible things we can measure go beyond the speed of the device to include battery life, compatibility with standards, and even fidelity or quality, such as that of photos or video. BatteryXPRT and CrXPRT test battery life, while the HTML5 tests in WebXPRT are among those that test compatibility. We are currently looking into quality metrics for possible future tools.

I hope this has given you some ideas. If so, let us know!

Eric

It’s not the same

We sometimes get questions about comparing results from older versions of benchmarks to the current version. Unfortunately, it’s never safe to compare the results from different versions of benchmarks. This principle has been around much longer than the XPRTs. A major update will use different workloads and test data, and will probably be built with updated or different tools.

To avoid confusion, we rescale the results every time we release a new version of an existing benchmark. By making the results significantly different, we hope to reduce the likelihood that results from two different versions will get mixed together.
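As a hypothetical illustration of that kind of rescaling (not the actual XPRT scoring formula), a new version can multiply its raw scores by a calibration factor chosen so that a reference device lands on a deliberately different target score than the previous version reported. All the numbers below are made up.

```javascript
// Hypothetical rescaling: pick a calibration factor so a reference
// device lands on a chosen target score, then apply that factor to
// every result. The numbers here are made up for illustration.
function makeScaler(referenceRawScore, targetScore) {
  const factor = targetScore / referenceRawScore;
  return (rawScore) => Math.round(rawScore * factor);
}

// Suppose the calibration device scores 1500 raw, and we want the
// new version to report 100 for it.
const scale = makeScaler(1500, 100);
console.log(scale(1500)); // 100
console.log(scale(3000)); // 200
```

Because every result passes through the same factor, relative comparisons within a version are preserved, while the absolute numbers sit in a range that is hard to confuse with the previous version’s.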

As an example, we scaled the results from WebXPRT 2015 to be significantly lower than those from WebXPRT 2013. Here are some scores from the published results for WebXPRT 2013 and WebXPRT 2015.

WebXPRT 2013 vs. 2015 results

Please note that the results above are not necessarily from the same device configurations, and are meant only to illustrate the difference in results between the two versions of WebXPRT.

If you have any questions, please let us know.

Eric
