
We want your thoughts about experimental WebXPRT 4 workloads

Two weeks ago, we discussed how users can automate WebXPRT 4 testing by appending several parameters and values to the benchmark’s URL. One of those parameters lets you enable any available experimental workloads during the test run. While we don’t currently offer any experimental workloads for WebXPRT 4, we’re seeking suggestions for possible future workload scenarios, as well as specific web technologies that you’d like to be able to test with an experimental workload.

The main purpose of optional, experimental workloads would be to test cutting-edge browser technologies or new use cases, even if the experimental workload doesn’t work on all browsers or devices. The individual scores for the experimental workloads would stand alone and would not factor into the WebXPRT 4 overall score. WebXPRT 4 testers would be able to run the experimental workloads in one of two ways: by adjusting a value in the WebXPRT 4 automation scripts, as mentioned above, or by manually selecting them on the benchmark’s home screen.

Testers would benefit from experimental workloads by learning how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.

Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.

Justin

Looking forward to an important WebXPRT milestone

February 28, 2013, was a momentous day for the BenchmarkXPRT Development Community. On that day, we published a press release announcing the official launch of the first version of the WebXPRT benchmark, WebXPRT 2013. As difficult as it is for us to believe, the 10-year anniversary of the initial WebXPRT launch is in just a few short months!

We introduced WebXPRT as a truly unique browser performance benchmark in a field that was already crowded with measurement tools. Since those early days, WebXPRT’s market presence has grown from a small foothold into a worldwide industry standard. Over the years, hundreds of tech press publications have used WebXPRT in thousands of articles and reviews, and the WebXPRT completed-runs counter has rolled past the 1,000,000-run mark.

New web technologies are continually changing the way we use the web, and browser-performance benchmarks should evaluate how well new devices handle the web of today, not the web of several years ago. While some organizations have stopped development for other browser performance benchmarks, we’ve had the opportunity to continue updating and refining WebXPRT. We can look back at each of the four major iterations of the benchmark—WebXPRT 2013, WebXPRT 2015, WebXPRT 3, and WebXPRT 4—and see a consistent philosophy and shared technical lineage contributing to a product that has steadily improved.

As we get closer to the 10-year anniversary of WebXPRT next year, we’ll be sharing more insights about its reach and impact on the industry, discussing possible future plans for the benchmark, and announcing some fun anniversary-related opportunities for WebXPRT users. We think 2023 will be the best year yet for WebXPRT!

Justin

How to automate WebXPRT 4 testing

As the number of WebXPRT runs continues to grow, we realize that many new WebXPRT users may be unfamiliar with all of the benchmark’s features and capabilities. To inform users about features that might facilitate their testing, we’ve decided to highlight a few of them here in the blog. A few weeks ago, we discussed the multiple language options available in the WebXPRT 4 UI. This week, we look at WebXPRT 4 test automation.

WebXPRT 4 supports scripted, automated test runs. You can control the execution of WebXPRT 4 by appending parameters and values to the benchmark’s URL. Three parameters are available: testtype, tests, and result. Below, you’ll find a description of each parameter and instructions for using the automation feature.

Test type

The WebXPRT automation framework accounts for two test types: (1) the six core workloads and (2) any experimental workloads we might add in future builds. There are currently no experimental tests in WebXPRT 4, so always set the testtype parameter to 1.

  • Core tests: 1

Test scenario

This parameter lets you specify which tests to run by using the following codes:

  • Photo enhancement: 1
  • Organize album using AI: 2
  • Stock option pricing: 4
  • Encrypt notes and OCR scan using WASM: 8
  • Sales graphs: 16
  • Online homework: 32

To run a single test, use its code. To run multiple tests, use the sum of their codes. For example, to run Stocks (4) and Notes (8), use 12. To run all six core tests, use 63, the sum of all the individual test codes (1 + 2 + 4 + 8 + 16 + 32 = 63).
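
Because the codes are powers of two, the tests value works like a bitmask: every combination of workloads has a unique sum. As a quick illustration, here is how you might compute the value in Python (this snippet and its shorthand workload names are ours, not part of WebXPRT):

    # Workload codes from the list above; each is a power of two,
    # so every combination of workloads has a unique sum (a bitmask).
    CODES = {
        "photo_enhancement": 1,
        "organize_album_ai": 2,
        "stock_option_pricing": 4,
        "encrypt_notes_ocr_wasm": 8,
        "sales_graphs": 16,
        "online_homework": 32,
    }

    def tests_value(*workloads: str) -> int:
        """Return the value of the 'tests' parameter for the selected workloads."""
        return sum(CODES[w] for w in workloads)

    print(tests_value("stock_option_pricing", "encrypt_notes_ocr_wasm"))  # 12
    print(tests_value(*CODES))  # 63 (all six core tests)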

Results format

This parameter lets you select the format of the results:

  • Display the result as an HTML table: 1
  • Display the result as XML: 2
  • Display the result as CSV: 3
  • Download the result as CSV: 4

To use the automation feature, start with the URL http://www.principledtechnologies.com/benchmarkxprt/webxprt/2021/wx4_build_3_7_3/auto.php, append a question mark (?), and add the parameters and values separated by ampersands (&). For example, to run all the core tests and download the results as a CSV file, you would use the following URL: http://www.principledtechnologies.com/benchmarkxprt/webxprt/2021/wx4_build_3_7_3/auto.php?testtype=1&tests=63&result=4
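
If you script your testing, you can also build the URL programmatically. Below is a minimal sketch that uses only Python’s standard library; it assembles the same example URL and opens it in your default browser. (WebXPRT runs inside the browser, so simply fetching the URL over HTTP would not execute the workloads.)

    import webbrowser
    from urllib.parse import urlencode

    BASE = ("http://www.principledtechnologies.com"
            "/benchmarkxprt/webxprt/2021/wx4_build_3_7_3/auto.php")

    # Run all six core tests (tests=63) and download the results as CSV (result=4).
    params = {"testtype": 1, "tests": 63, "result": 4}
    url = f"{BASE}?{urlencode(params)}"
    print(url)

    # Open the URL in the system's default browser; the automated
    # WebXPRT 4 run then proceeds in the browser tab.
    webbrowser.open(url)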

We hope the WebXPRT automation features will make testing easier for you. If you have any questions about WebXPRT or the automation process, please feel free to ask!

Justin

How to use the WebXPRT language options

In September, the Chinese tech review site KoolCenter published a review of the ASUS Mini PC PN51 that included a screenshot of the device’s WebXPRT 4 test result screen. The screenshot showed that the testers had enabled the WebXPRT Simplified Chinese UI. Users can choose from three language options in the WebXPRT 4 UI: Simplified Chinese, German, and English. We included Simplified Chinese and German because of the large number of test runs we see from China and Central Europe. We wanted to make testing a little easier for users who prefer those languages, and are glad to see people using the feature.

Changing languages in the UI is straightforward. Locate the Change Language? prompt under the WebXPRT 4 logo at the top of the Start screen, and click or tap the arrow beside it. When the drop-down menu appears, select the language you want. The Start screen then appears in the language you selected, as do the in-test workload headers and the results screen.

The screenshots below my sig show the Change Language? drop-down menu and how the Start screen appears when you select Simplified Chinese or German. Be aware that if you have a translation extension installed in your browser, the extension may override the WebXPRT UI by reverting the language to the default, English. You can avoid this conflict by temporarily disabling the translation extension for the duration of your WebXPRT testing.

If you have any questions about WebXPRT’s language options, please let us know!

Justin

The CloudXPRT v1.2 update package is now available!

We’re happy to announce that the CloudXPRT v1.2 update package is now available! The update prevents potential installation failures on Google Cloud Platform and Microsoft Azure, and it ensures that the web microservices workload works on Ubuntu 22.04. It also moves to newer software components, such as Kubernetes v1.23.7, Kubespray v2.18.1, and Kubernetes Metrics Server v1, and incorporates some additional minor script changes.

The CloudXPRT v1.2 web microservices workload installation package is available at the CloudXPRT.com download page and the BenchmarkXPRT GitHub repository.

Before you get started with v1.2, please note the following updated system requirements:

  • Ubuntu 20.04.2 or 22.04 for on-premises testing
  • Ubuntu 18.04, 20.04.2, or 22.04 for CSP (AWS/Azure/GCP) testing

Because CloudXPRT is designed to run on high-end servers, physical nodes or VMs under test must meet the following minimum specifications:

  • 16 logical or virtual CPUs
  • 8 GB of RAM
  • 10 GB of available disk space (50 GB for the data analytics workload)
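
If you want to confirm that a node meets these minimums before you install CloudXPRT, a quick self-check is easy to script. The sketch below is our own illustration, not part of the CloudXPRT package; it uses only Python’s standard library and reads the RAM total from /proc/meminfo, so it assumes an Ubuntu (or other Linux) node:

    import os
    import shutil

    # CloudXPRT v1.2 minimums from the list above
    MIN_CPUS, MIN_RAM_GB, MIN_DISK_GB = 16, 8, 10  # use 50 for data analytics

    def ram_gb() -> float:
        # MemTotal in /proc/meminfo is reported in kB
        with open("/proc/meminfo") as f:
            for line in f:
                if line.startswith("MemTotal:"):
                    return int(line.split()[1]) / 1024 ** 2
        return 0.0

    cpus = os.cpu_count() or 0
    ram = ram_gb()
    disk = shutil.disk_usage("/").free / 1024 ** 3

    print(f"Logical CPUs: {cpus} (need {MIN_CPUS})")
    print(f"RAM: {ram:.1f} GB (need {MIN_RAM_GB})")
    print(f"Free disk: {disk:.1f} GB (need {MIN_DISK_GB})")

    if cpus >= MIN_CPUS and ram >= MIN_RAM_GB and disk >= MIN_DISK_GB:
        print("This node meets the CloudXPRT minimums.")
    else:
        print("This node is below the CloudXPRT minimums.")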

The update package includes only the updated v1.2 test harness and the updated web microservices workload; it does not include the data analytics workload. As we stated in a previous blog post, now that we’ve published the web microservices package, we will assess the level of interest users express in a possible refresh of the v1.1 data analytics workload. For now, the v1.1 data analytics workload will remain available via CloudXPRT.com for some time to serve as a reference resource for users who have worked with the package in the past.

Please let us know if you have any questions about the CloudXPRT v1.2 test package. Happy testing!

Justin

The versatility of XPRT benchmarks

We’ve designed each of the XPRT benchmarks to assess the performance of specific types of devices in scenarios that mirror the ways consumers typically use those devices. While most XPRT benchmark users are interested in producing official overall scores, some members of the tech press have been using the XPRTs in unconventional, creative ways.

One example is the use of WebXPRT by Tweakers, a popular tech review site based in the Netherlands. (The site is in Dutch, so the Google Translate extension in Chrome was helpful for me.) In addition to using WebXPRT to evaluate all kinds of consumer hardware, Tweakers measures the sound output of each device while the benchmark runs. The site then publishes the LAeq metric for each device, giving readers a sense of how loud a system is, on average, while it performs common browser tasks.

If you’re interested in seeing Tweakers’ use of WebXPRT for sound output testing firsthand, check out their Apple MacBook Pro M2, HP Envy 34 All-in-One, and Samsung Galaxy Book 2 Pro reviews.

Other labs and tech publications have also used the XPRTs in unusual ways, such as automating the benchmarks to run during screen burn-in tests or custom battery-life rundowns. If you’ve used any of the XPRT benchmarks in creative ways, please let us know! We’re interested in learning more about your tests, and your experiences may provide helpful information that we can share with other XPRT users.

Justin
