BenchmarkXPRT Blog

Category: Benchmarking

An exciting milestone for WebXPRT!

If you’re familiar with the run counter on WebXPRT.com, you may have noticed that WebXPRT recently passed a pretty significant milestone. Since the release of WebXPRT 2013, users have successfully completed over 100,000 runs of WebXPRT 2013 and WebXPRT 2015!

We’re thrilled about WebXPRT’s ongoing popularity, and we think that it’s due to the benchmark’s unique combination of characteristics: it’s easy to run, it runs quickly and on a wide variety of platforms, and it evaluates device performance using real-world tasks. Manufacturers, developers, consumers, and media outlets in more than 358 cities, from Aberdeen to Zevenaar, and 57 countries, from Argentina to Vietnam, have used WebXPRT’s easy-to-understand results to compare how well devices handle everyday tasks. WebXPRT has definitely earned its reputation as a “go-to” benchmark.

If you haven’t run WebXPRT yet, give it a try. The test is free and runs in almost any browser.

We’re grateful to everyone who’s helped us reach this milestone. Here’s to another 100,000 runs!

Justin

Rebalancing our portfolio

We’ve written recently about the many new ways people are using their devices, the growing breadth of device types, and how application environments are also changing. We’ve been thinking a lot about the ways benchmarks need to adapt and what new tests we should be developing.

As part of this process, we’re reviewing the XPRT portfolio. An example we wrote about recently was Google’s statement that they are bringing Android apps to Chrome OS and moving away from Chrome apps. Assuming the plan comes to fruition, it has big implications for CrXPRT, and possibly for WebXPRT as well. Another example is that once upon a time, HDXPRT included video playback tests. The increasing importance of 4K video might mean we should bring them back.

As always, we’re interested in your thoughts. Which tests do you see as the most useful going forward? Which ones do you think might be past their prime? What new areas would you like to see us start to address? Let us know!

Over the coming weeks, we’ll share our conclusions based on these market forces and your feedback. We’re excited about the possibilities and hope you are as well.

Bill

Doing things a little differently

I enjoyed watching the Apple Event live yesterday. There were some very impressive announcements. (And a few that were not so impressive – the Breathe app would get on my nerves really fast!)

One thing that I was very impressed by was the ability of the iPhone 7 Plus camera to create depth-of-field effects. Some of the photos demonstrated how the phone used machine learning to identify people in the shot and keep them in focus while blurring the background, creating a shallow depth of field. This causes the subjects in a photo to really stand out. The way we take photos is not the only thing that’s changing. There was a mention of machine learning being part of Apple’s QuickType keyboard, to help with “contextual prediction.”

This is only one product announcement, but it’s a reminder that we need to be constantly examining every part of the XPRTs. Recently, we talked a bit about how people will be using their devices in new ways in the coming months, and we need to be developing tests for these new applications. However, we must also stay focused on keeping existing tests fresh. People will keep taking photos, but today’s photo editing tests may not be relevant a year or two from now.

Were there any announcements yesterday that got you excited? Let us know!

Eric

A Chrome-plated example

A couple of weeks ago, we talked about how benchmarks have to evolve to keep up with the changing ways people use their devices. One area where we are expecting a lot of change in the next few months is Chromebooks.

These web-based devices have become very popular, even outselling Macs for the first time in Q1 of this year. Chromebooks run Google Apps and a variety of third-party Chrome apps that also run on Windows, Mac, and Linux systems.

Back in May, Google announced that Android apps would be coming to Chromebooks. This exciting development will bring a lot more applications to the platform. Now, Google has announced that they will be “moving away” from the Chrome apps platform and will be phasing out Chrome app support on other platforms within the next two years.

Clearly, the uses of Chromebooks are likely to change a lot in coming months. Interestingly, part of the rationale Google gives for this decision is the development of powerful new Web APIs, which will have implications for WebXPRT as well.

As we’ve said before, we’ll be watching and adapting as the applications change.

Eric

Apples to apples?

PCMag published a great review of the Opera browser this week. In addition to looking at the many features Opera offers, the review included performance data from multiple benchmarks, which look at areas such as hardware graphics acceleration, WebGL performance, memory consumption, and battery life.

Three of the benchmarks have a significant, though not exclusive, focus on JavaScript performance: Google Octane 2.0, JetStream 1.1, and WebXPRT 2015. The three benchmarks did not rank the browsers the same way, and in the past, we’ve discussed some of the reasons why this happens. In addition to the differences in tests, there are also sometimes differences in approaches that are worth considering.

For example, consider the test descriptions for JetStream 1.1. You’ll immediately notice that the tests are much lower level than the ones in WebXPRT. However, consider these phrases from a few of the test descriptions:

  • code-first-load “…This test attempts to defeat the browser’s caching capabilities…”
  • splay-latency “Tests the worst-case performance…”
  • zlib “…modified to restrict code caching opportunities…”

While the XPRTs test typical performance for higher-level applications, the tests in JetStream are tweaked to stress devices in very specific ways, some of which are not typical. The information these tests provide can be very useful for engineers and developers, but it may not be as meaningful to the typical user.
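The contrast is easier to see in code. Here’s a minimal sketch in plain JavaScript – this is hypothetical illustration, not code from either benchmark: microBenchmark stands in for a low-level, JetStream-style loop that hammers a single operation, while applicationWorkload stands in for a WebXPRT-style task that mimics something a user would actually do, like filtering and sorting a list in a web app.

```javascript
// Tiny timing helper: run a function once and report elapsed milliseconds.
function timeMs(fn) {
  const start = Date.now();
  fn();
  return Date.now() - start;
}

// Low-level style: isolate one operation and repeat it in a tight loop.
// Real suites like JetStream go further, e.g. defeating caches on purpose.
function microBenchmark() {
  let sum = 0;
  for (let i = 0; i < 1e6; i++) {
    sum += Math.sqrt(i);
  }
  return sum;
}

// Application-level style: simulate a user-visible task end to end.
// Here: build a data set, then filter, sort, and take the top results,
// the way a web app might render a leaderboard or search results.
function applicationWorkload() {
  const records = Array.from({ length: 1e5 }, (_, i) => ({
    id: i,
    score: (i * 2654435761) % 1000, // cheap hash for varied scores
  }));
  return records
    .filter((r) => r.score > 500)
    .sort((a, b) => b.score - a.score)
    .slice(0, 10);
}

console.log(`micro: ${timeMs(microBenchmark)} ms`);
console.log(`app:   ${timeMs(applicationWorkload)} ms`);
```

The first timing tells you how fast the engine executes one narrow operation; the second tells you how long a complete, user-visible task takes. Both numbers are legitimate, but they answer different questions.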

I have to stress that both approaches are valid, but they are doing somewhat different things. There’s a cliché about comparing apples to apples, but not all apples are the same. If you’re making a pie, a Granny Smith would be a good choice, but for snacking, you might be better off with a Red Delicious. Knowing a benchmark’s purpose will help you find the results that are most meaningful to you.

Eric

The things we do now

We mentioned a couple of weeks ago that the Microsoft Store added an option to indicate holographic support, which we selected for TouchXPRT. So, it was no surprise to see Microsoft announce that next year, they will release an update to Windows 10 that enables mainstream PCs to run the Windows Holographic shell. They also announced that they’re working with Intel to develop a reference architecture for mixed-reality-ready PCs. Mixed-reality applications, which combine the real world with virtual reality, demand sophisticated computer vision and the ability to learn about the world around them.

As we’ve said before, we are constantly watching how people use their devices. One of the most basic principles of the XPRT benchmarks is to test devices using the same kinds of work that people do in the real world. As people find new ways to use their devices, the workloads in the benchmarks should evolve as well. Virtual reality, computer vision, and machine learning are among the technologies we are looking at.

What sorts of things are you doing today that you weren’t a year ago? (Other than Pokémon GO – we know about that one.) Would you like to see those sorts of workloads in the XPRTs? Let us know!

Eric
