BenchmarkXPRT Blog

Category: WebXPRT

Another pronunciation lesson

How to say the terms we read online can be a bit of a mystery. For example, it’s been 30 years since CompuServe created the GIF format, and people are still arguing about how to say it.

A couple of months ago, we talked about how to pronounce WebXPRT. In the video we pointed to, the narrator openly says he’s confused about how to say “XPRT.” For the record, it’s pronounced “expert.”

Recently, we came across another video, this one referring to CrXPRT. The narrator pronounced it “Chrome expert.” The “expert” part is correct, but the “Chrome” part is not. It’s an understandable mistake, because Cr is the chemical symbol for chromium; that’s why we chose it! However, we pronounce the C and R individually, so the name is pronounced “C-R expert.”

All that being said, it was great to see CrXPRT in the classroom! When we created CrXPRT, the education market was a big consideration, as you can tell from this CrXPRT video. We love seeing the XPRTs in the real world!

Eric

Thinking ahead to WebXPRT 2017

A few months ago, Bill discussed our intention to update WebXPRT this year. Today, we want to share some initial ideas for WebXPRT 2017 and ask for your input.

Updates to the workloads provide an opportunity to increase the relevance and value of WebXPRT in the years to come. Here are a few of the ideas we’re considering:

  • For the Photo Enhancement workload, we can increase the sizes of the photos it processes. We can also experiment with additional types of photo enhancement, such as background/foreground subtraction, collage creation, or panoramic/360-degree image viewing.
  • For the Organize Album workload, we can explore machine learning by incorporating open-source JavaScript libraries into web-based inferencing tests (see the sketch after this list).
  • For the Local Notes workload, we’re investigating the possibility of leveraging natural-brain libraries for language processing functions.
  • For a new workload, we’re investigating the possibility of using online 3D modeling applications such as Tinkercad.
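
To give a sense of what the web-based inferencing idea might look like, here’s a minimal sketch in TypeScript. This is not WebXPRT code: we’re assuming brain.js as the open-source JavaScript library, and the tiny XOR network and training data exist only for illustration. A real workload would use a larger model and dataset, but the basic structure, an untimed setup phase followed by a timed inference phase, would be similar.

    // A minimal sketch of a browser-based inferencing test. Assumes the
    // open-source brain.js library; the network shape and XOR data are
    // illustrative only, not part of any WebXPRT workload.
    import { NeuralNetwork } from "brain.js";

    const net = new NeuralNetwork({ hiddenLayers: [3] });

    // Setup phase: train a tiny network once, outside the timed region.
    net.train([
      { input: [0, 0], output: [0] },
      { input: [0, 1], output: [1] },
      { input: [1, 0], output: [1] },
      { input: [1, 1], output: [0] },
    ]);

    // Timed phase: measure only inference, using the browser's
    // standard performance.now() timer.
    const start = performance.now();
    const result = net.run([1, 0]); // expect a value near 1 for XOR
    const elapsed = performance.now() - start;
    console.log(`inference returned ${result} in ${elapsed.toFixed(3)} ms`);

Timing only the run() call keeps the measurement focused on inference rather than training, which is how we would want a benchmark workload of this kind to behave.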

 
For the UI, we’re considering improvements to features like the in-test progress bars and individual subtest selection. We’re also planning to update the UI to make it visually distinct from older versions.

Throughout this process, we want to be careful to maintain the features that have made WebXPRT our most popular tool, with more than 141,000 runs to date. We’re committed to making sure that it runs quickly and simply in most browsers and produces results that are useful for comparing web browsing performance across a wide variety of devices.

Do you have feedback on these ideas or suggestions for browser technologies or test scenarios that we should consider for WebXPRT 2017? Are there existing features we should ditch? Are there elements of the UI that you find especially useful or would like to see improved? Please let us know. We want to hear from you and make sure that we’re crafting a performance tool that continues to meet your needs.

Justin

Running Android-oriented XPRTs on Chrome OS

Since last summer, we’ve been following Google’s progress in bringing Android apps and the Google Play store to Chromebooks, along with its plan to gradually phase out support for Chrome apps over the next few years. Because we currently offer apps that assess battery life and performance for Android devices (BatteryXPRT and MobileXPRT) and Chromebooks (CrXPRT), the way this situation unfolds could affect the makeup of the XPRT portfolio in the years to come.

For now, we’re experimenting to see how well the Android app/Chrome OS merger is working with the devices in our lab. One test case is the Samsung Chromebook Plus, which we featured in the XPRT Weekly Tech Spotlight a few weeks ago. Normally, we would publish only CrXPRT and WebXPRT results for a Chromebook, but installing and running MobileXPRT 2015 from the Google Play store was such a smooth and error-free process that we decided to publish the first MobileXPRT score for a device running Chrome OS.

We also tried running BatteryXPRT on the Chromebook Plus. Even though installation was quick and easy and the test kicked off without a hitch, we could not generate a valid result. Typically, the test would complete several iterations successfully but then terminate before producing a result. We’re investigating the problem and will keep the community up to date on what we find.

In the meantime, we continue to recommend that Chromebook testers use CrXPRT for performance and battery life assessment. While we haven’t encountered any issues running MobileXPRT 2015 on Chromebooks, CrXPRT has a proven track record.

If you have any questions about running Android-oriented XPRTs on Chrome OS, or insights that you’d like to share, please let us know.

Justin

Digging deeper

From time to time, we like to revisit the fundamentals of the XPRT approach to benchmark development. Today, we’re discussing the need for testers and benchmark developers to consider the multiple factors that influence benchmark results. For every device we test, all of its hardware and software components have the potential to affect performance, and changing the configuration of those components can significantly change results.

For example, we frequently see significant performance differences between different browsers on the same system. In our recent recap of the XPRT Weekly Tech Spotlight’s first year, we highlighted an example of how testing the same device with the same benchmark can produce different results, depending on the software stack under test. In that instance, the Alienware Steam Machine entry included a WebXPRT 2015 score for each of the two browsers that consumers were likely to use. The first score (356) represented the SteamOS browser app in the SteamOS environment, and the second (441), about 24 percent higher, represented the Iceweasel browser (a Firefox variant) in the Linux-based desktop environment. Including only the first score would have given readers an incomplete picture of the Steam Machine’s web-browsing capabilities, so we thought it was important to include both.

We also see performance differences between different versions of the same browser, a fact that is especially relevant to people who use frequently updated browsers, such as Chrome. Keep in mind, too, that even benchmarks measuring the same general area of performance (web browsing, for example) usually test very different things.

OS updates can also have an impact on performance. Consumers might base a purchase on performance or battery life scores and end up with a device that behaves much differently when updated to a new version of Android or iOS, for example.

Other important factors in the software stack include pre-installed software, commonly referred to as bloatware, and the proliferation of apps that sap performance and battery life.

This is a much larger topic than we can cover in a single blog post. Let the examples we’ve mentioned remind you to think critically about, and dig deeper into, benchmark results. If we see published XPRT scores that differ significantly from our own results, our first question is always “What’s different between the two devices?” Most of the time, the answer becomes clear as we compare hardware and software from top to bottom.

Justin

WebXPRT in 2017

Over the last few weeks, we’ve discussed the future of HDXPRT and BatteryXPRT. This week, we’re discussing what’s in store for WebXPRT in 2017.

WebXPRT is our most popular tool. Manufacturers, developers, consumers, and media outlets in more than 350 cities and 57 countries have run WebXPRT over 113,000 times to date. The benchmark runs quickly and simply in most browsers and produces easy-to-understand results that are useful for comparing web browsing performance across a wide variety of devices and browsers. People love the fact that WebXPRT runs on almost any platform that has a web browser, from PCs to phones to game consoles.

More people are using WebXPRT in more places and in more ways than ever before. It’s an unquestioned success, but we think this is a good time to make it even better by beginning work on WebXPRT 2017. Any time change comes to a popular product, there’s a risk that faithful fans will lose the features and functionality they’ve grown to love. Relevant workloads, ease of use, and extensive compatibility have always been the core components of WebXPRT’s success, so we want to reassure users that we’re committed to maintaining all of those in future versions.

Some steps in the WebXPRT 2017 process are straightforward, such as the need to reassess the existing workload lineup and update content to reflect advances in commonly used technologies. Other steps, such as introducing new workloads to test emerging browser technologies, may be tricky to implement, but could offer tremendous value in the months and years ahead.

Are there test scenarios or browser technologies you would like to see in WebXPRT 2017, or tests you think we should get rid of? Please let us know. We want to hear from you and make sure that we’re crafting a performance tool that continues to meet your needs.

Bill

How do you say that?

I recently saw this video and heard something I had never imagined: “Next we tested with what I assume is pronounced web-export.” I’ve had people ask if it was an acronym, but I’ve never heard it pronounced “export.”

How do we pronounce XPRT? The same way we pronounce “expert.” So, it’s “Benchmark expert,” “Web expert,” “Touch expert,” and so on. CrXPRT is pronounced “C-R expert,” and HDXPRT is pronounced “H-D expert.”

When I was working in Australia, I got teased about my accent quite a bit, and my case-hardened American R was a particular target. So, when I say the letters out loud, it comes out something like “eks-pee-arrr-tee” (arrr as a pirate would say it), and “expert” is the closest match. This is true for most Americans. However, in many other accents, it’s more like “eks-pee-ah-tee,” and “ex-paht” is much closer to “export.”

Yes, I think way too much about this stuff.

Eric
