
They’re coming!

We’ve been hearing for a while about Google’s plan to bring Android apps to Chrome. Google recently published a video on the Google Developers channel that gives us some idea of what running Android apps on a Chromebook would look like.

Because I’m very interested in performance, the claim “Android apps can run full speed, with no overhead and no performance penalties” got my attention. You can bet we’ll be using the XPRTs to check that out! We’re using a Google developer tool called ARC Welder to do some experiments. However, it’s not fair or valid to print performance results based on a developer tool, so we’ll have to wait until the official release to see what the performance is really like.

Obviously, the use cases for Chrome will be changing. The demos in the video are for workloads we associate with PCs. MobileXPRT-type workloads might be more appropriate, or, assuming the scripting tools are available, perhaps HDXPRT-type workloads. We’ll be watching these developments closely to see how they will affect our future cross-platform benchmark.

Eric

Mobile World Congress 2016 and the need for more

Nothing shows you how much more bandwidth we need than a techie trade show like Mobile World Congress 2016. No matter how much the show’s organizers made available, the attendees swamped it with data from their many devices—phones, tablets, and PCs.

This show also demonstrated that we’re going to need a lot more of something else: device performance.

Some people like to say that our current devices are fast enough, but those people either weren’t at MWC or weren’t paying attention. New, demanding workloads were on display everywhere. High-end graphics. Support for an ever-growing range of wearables. Virtual reality. Augmented reality. The ability to act as a hub for all sorts of home automation devices. These and other new capabilities place ever-increasing demands on devices—and they’re all just getting started.

As I walked all of the MWC buildings—and I did at least walk by every single exhibit—I was struck again and again by how many cool technologies are on the cusp of being ready for prime time. They’ll bring nifty features to our everyday lives, and they’ll place heavy demands on our devices to support them and enable them to run well.

Some devices will handle these demands better than others, but we won’t be able to tell the winners from the losers just by looking at them. We’ll need reliable, relevant, real-world benchmarks to sort the winners from the posers—and that means we’ll need the XPRTs. We’ll need the XPRTs we have today, and we’ll need new XPRTs and/or new XPRT workloads for the future. We’ll need help from everyone—members of the BenchmarkXPRT Development Community and vendors yet to join it—to create these new tools, so that buyers everywhere can make smart purchase decisions.

It’s an exciting time. The future for tech, for devices, and for the XPRTs is bright. Let’s get busy creating it.

Comparing apples and oranges?

My first day at CES, I had breakfast with Brett Howse from AnandTech. It was a great opportunity to get the perspective of a savvy tech journalist and frequent user of the XPRTs.

During our conversation, Brett raised concerns about comparing mobile devices to PCs. As mobile devices get more powerful, the performance and capability gaps between them and PCs are narrowing. That makes it more common to compare upper-end mobile devices to PCs.

People have long used different versions of benchmarks when comparing these two classes of devices. For example, the images for benchmarking a phone might be smaller than those for benchmarking a PC. Also, because of processor differences, the benchmarks might be built differently, say a 32-bit executable for a mobile device and a 64-bit version for a PC. That was fine when no one was comparing the devices directly, but it can be a problem now.

This issue is more complicated than it sounds. For those cases where a benchmark uses a dumbed-down version of the workload for mobile devices, comparing the results is clearly not valid. However, let’s assume that the workload stays the same, and that you run a 32-bit benchmark on a tablet and a 64-bit version on a PC. Is the comparison valid? It may be, if you are talking about the day-to-day performance a user is likely to encounter. However, it may not be valid if you are making a statement about the potential performance of the device itself.
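One way to keep such comparisons honest is to publish the build and environment details alongside each score, so readers can judge for themselves whether a phone-versus-PC comparison supports the claim being made. Below is a minimal Java sketch of that idea. It is not part of any XPRT benchmark, and ResultRecord, runWorkload, and the workload label are hypothetical names used only for illustration.

// A minimal sketch, not part of any XPRT benchmark: record the environment
// details that matter when comparing scores across device classes.
public class ResultRecord {

    public static void main(String[] args) {
        // CPU architecture as the JVM sees it, e.g. "aarch64" or "amd64".
        String arch = System.getProperty("os.arch", "unknown");

        // "sun.arch.data.model" reports "32" or "64" on HotSpot-based JVMs;
        // other runtimes may not define it, so fall back to "unknown".
        String bitness = System.getProperty("sun.arch.data.model", "unknown");

        // Hypothetical workload label; a real harness would also record the
        // input sizes (such as photo resolution) used for the run.
        String workload = "photo-edit-small";

        double score = runWorkload(); // placeholder for the timed test

        // Reporting the context with the score lets readers judge whether a
        // phone-vs-PC comparison is valid for the claim being made.
        System.out.printf("workload=%s arch=%s bitness=%s score=%.1f%n",
                workload, arch, bitness, score);
    }

    private static double runWorkload() {
        return 123.4; // stand-in value; a real harness times the actual workload
    }
}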

Brett would like the benchmarking community to take charge of this issue and provide guidance about how to compare mobile devices and PCs. What are your thoughts?

Eric
