TouchXPRT’s future

If you’ve been following the blog, you know that we’ve been reviewing each part of the XPRT portfolio. If you missed our discussions of HDXPRT, BatteryXPRT, WebXPRT, and CrXPRT, we encourage you to check them out and send us any thoughts you may have. This week, we continue that series by discussing the state of TouchXPRT and what we see down the road for it in 2017.

We released TouchXPRT 2016, an app for evaluating the performance of Windows 10 and Windows 10 Mobile devices, last February. We built the app by porting TouchXPRT 2014 performance workloads to the Universal Windows Platform (UWP) app format, which allows a single app package to run on PCs, phones, tablets, and even consoles.

TouchXPRT 2016 installation is quick and easy, and the test completes in under 15 minutes on most devices. The app runs tests based on five everyday tasks (Beautify Photos, Blend Photos, Convert Videos for Sharing, Create Music Podcast, and Create Slideshow from Photos). It measures how long your device takes to complete each task, produces results for each scenario, and gives you an overall score.

As we think about the path forward for TouchXPRT, we’re aware that many expect 2017 to be a year of significant change in the Windows world, with two updates scheduled for release. Microsoft is slated to release the Windows 10 Creators Update (version 1703) in April, and a subsequent version of Windows codenamed Redstone 3 may arrive this fall. Many tech observers believe that the Creators Update will introduce new creativity and gaming features, along with a UI upgrade named Project NEON. Major foundational shifts in the OS’s structure are more likely to appear with Redstone 3. At this point, quite a lot is still up in the air, but we’ll be following developments closely.

As we learn more about upcoming changes, we’ll have the opportunity to reevaluate TouchXPRT workloads and determine the best way to incorporate new technologies. Virtual reality, 3D, and 4K are especially exciting, but it’s too soon to know how we might incorporate them in a future version of TouchXPRT.

Because TouchXPRT 2016 continues to run well on a wide range of Windows 10 devices, we think it’s best to keep supporting the current version until we get a better idea of what’s in store for Windows.

If you have any thoughts on the future of Windows performance testing, please let us know!

Bill

They’re coming!

We’ve been hearing for a while about Google’s plan to bring Android apps to Chrome. Google recently published a video on the Google Developers channel that gives us some idea of what running Android apps on a Chromebook would look like.

Because I’m very interested in performance, the claim “Android apps can run full speed, with no overhead and no performance penalties” got my attention. You can bet we’ll be using the XPRTs to check that out! We’re using a Google developer tool called ARC Welder to do some experiments. However, it’s not fair or valid to publish performance results based on a developer tool, so we’ll have to wait until the official release to see what the performance is really like.

Obviously, the use cases for Chrome will be changing. The demos in the video are for workloads we associate with PCs. MobileXPRT-type workloads might be more appropriate, or, assuming the scripting tools are available, perhaps HDXPRT-type workloads. We’ll be watching these developments closely to see how they will affect our future cross-platform benchmark.

Eric

Mobile World Congress 2016 and the need for more

Nothing shows you how much more bandwidth we need than a techie trade show like Mobile World Congress 2016. No matter how much the show’s organizers made available, the attendees swamped it with data from their many devices—phones, tablets, and PCs.

This show also demonstrated that we’re going to need a lot more of something else: device performance.

Some people like to say that our current devices are fast enough, but those people either weren’t at MWC or weren’t paying attention. New, demanding workloads were on display everywhere. High-end graphics. Support for an ever-growing range of wearables. Virtual reality. Augmented reality. The ability to act as a hub for all sorts of home automation devices. These and other new capabilities place ever-increasing demands on devices—and they’re all just getting started.

As I walked all of the MWC buildings—and I did at least walk by every single exhibit—I was struck again and again by how many cool technologies are on the cusp of being ready for prime time. They’ll bring nifty features to our everyday lives, and they’ll place heavy demands on our devices to support them and enable them to run well.

Some devices will handle these demands better than others, but we won’t be able to tell the winners from the losers just by looking at them. We’ll need reliable, relevant, real-world benchmarks to sort the winners from the posers—and that means we’ll need the XPRTs. We’ll need the XPRTs we have today, and we’ll need new XPRTs and/or new XPRT workloads for the future. We’ll need help from everyone—members of the BenchmarkXPRT Development Community and vendors yet to join it—to create these new tools, so that buyers everywhere can make smart purchase decisions.

It’s an exciting time. The future for tech, for devices, and for the XPRTs is bright. Let’s get busy creating it.

Comparing apples and oranges?

My first day at CES, I had breakfast with Brett Howse from AnandTech. It was a great opportunity to get the perspective of a savvy tech journalist and frequent user of the XPRTs.

During our conversation, Brett raised concerns about comparing mobile devices to PCs. As mobile devices get more powerful, the performance and capability gaps between them and PCs are narrowing. That makes it more common to compare upper-end mobile devices to PCs.

People have long used different versions of benchmarks when comparing these two classes of devices. For example, the images for benchmarking a phone might be smaller than those for benchmarking a PC. Also, because of processor differences, the benchmarks might be built differently, say a 32-bit executable for a mobile device and a 64-bit version for a PC. That was fine when no one was comparing the devices directly, but it can be a problem now.

This issue is more complicated than it sounds. For those cases where a benchmark uses a dumbed-down version of the workload for mobile devices, comparing the results is clearly not valid. However, let’s assume that the workload stays the same, and that you run a 32-bit benchmark on a tablet and a 64-bit version on a PC. Is the comparison valid? It may be, if you are talking about the day-to-day performance a user is likely to encounter. However, it may not be valid if you are making a statement about the potential performance of the device itself.
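One way to keep such comparisons honest is to record the build and platform details alongside every score, so that a 32-bit mobile result is never silently compared with a 64-bit PC result. Here is a minimal sketch in Python; the record fields and the `comparable` helper are illustrative assumptions, not part of any XPRT tool:

```python
import platform
import struct

def result_record(benchmark, score):
    """Tag a benchmark score with the details needed to judge comparability."""
    return {
        "benchmark": benchmark,
        "score": score,
        # Pointer width of the running build: 32 or 64 bits.
        "bits": struct.calcsize("P") * 8,
        "machine": platform.machine(),
        "system": platform.system(),
    }

def comparable(a, b):
    """Two results are directly comparable only when the benchmark and the
    build bitness match; anything else deserves at least a caveat."""
    return a["benchmark"] == b["benchmark"] and a["bits"] == b["bits"]
```

With metadata like this attached to every published score, a reviewer can flag a tablet-versus-PC comparison automatically instead of relying on readers to notice the difference in build type.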

Brett would like the benchmarking community to take charge of this issue and provide guidance about how to compare mobile devices and PCs. What are your thoughts?

Eric
