
Author Archives: Justin Greene

The WebXPRT 4 tech press feedback survey

Device reviews in publications such as AnandTech, Notebookcheck, and PCMag, among many others, often feature WebXPRT test results, and we appreciate the many members of the tech press who use WebXPRT. As we move forward with the WebXPRT 4 development process, we’re especially interested in learning what longtime users would like to see in a new version of the benchmark.

In previous posts, we’ve asked people to weigh in on the potential addition of a WebAssembly workload or a battery life test. We’d also like to ask experienced testers some other test-related questions. To that end, this week we’ll be sending a WebXPRT 4 survey directly to members of the tech press who frequently publish WebXPRT test results.

Regardless of whether you are a member of the tech press, we invite you to participate by sending your answers to any or all of the questions below to benchmarkxprtsupport@principledtechnologies.com. We ask that you do so by the end of May.

  • Do you think WebXPRT 3’s selection of workload scenarios is representative of modern web tasks?
  • How do you think WebXPRT compares to other common browser-based benchmarks, such as JetStream, Speedometer, and Octane?
  • Are there web technologies that you’d like us to include in additional workloads?
  • Are you happy with the WebXPRT 3 user interface? If not, what UI changes would you like to see?
  • Are there any aspects of WebXPRT 2015 that we changed in WebXPRT 3 that you’d like to see us change back?
  • Have you ever experienced significant connection issues when testing with WebXPRT?
  • Given the array of workloads, do you think the WebXPRT runtime is reasonable? Would you mind if the average runtime were a bit longer?
  • Are there any other aspects of WebXPRT 3 that you’d like to see us change?

If you’d like to discuss any topics that we did not cover in the questions above, please feel free to include additional comments in your response. We look forward to hearing your thoughts!

Justin

WebXPRT passes the 750,000-run milestone!

We’re excited to see that users have successfully completed over 750,000 WebXPRT runs! If you’ve run WebXPRT in any of the more than 654 cities and 68 countries from which we’ve received complete test data—including newcomers Belize, Cambodia, Croatia, and Pakistan—we’re grateful for your help. We could not have reached this milestone without you!

As the chart below illustrates, WebXPRT use has grown steadily over the years. We now record, on average, almost twice as many WebXPRT runs in one month as we recorded in the entirety of our first year. In addition, with over 82,000 runs to date in 2021, there are no signs that growth is slowing.

Developing a new benchmark is never easy, and the obstacles multiply when you attempt to create a cross-platform benchmark, such as WebXPRT, that will run on a wide variety of devices. Establishing trust with the benchmarking community is another challenge. Transparency, consistency, and technical competency on our part are critical factors in building that trust, but the people who take time out of their busy schedules to run the benchmark for the first time also play a role. We thank all of the manufacturers, OEM labs, and members of the tech press who decided to give WebXPRT a try, and we look forward to your input as we continue to improve WebXPRT in the years to come. 

If you have any questions or comments about WebXPRT, we’d love to hear from you!

Justin

Default requirements for CloudXPRT results submissions

Over the past few weeks, we’ve received questions about whether we require specific test configuration settings for official CloudXPRT results submissions. Currently, testers have the option to edit up to 12 configuration options for the web microservices workload and three configuration options for the data analytics workload. Not all configuration options have an impact on testing and results, but a few of them can drastically affect key results metrics and how long it takes to complete a test. Because new CloudXPRT testers may not anticipate those outcomes, and so many configuration permutations are possible, we’ve come up with a set of requirements for all future results submissions to our site. Please note that testers are still free to adjust all available configuration options—and define service level agreement (SLA) settings—as they see fit for their own purposes. The requirements below apply only to results testers want to submit for publication consideration on our site, and to any resulting comparisons.


Web microservices results submission requirement

Starting with the May results submission cycle, all web microservices results submissions must have the workload.cpurequests value, which lets the user designate the number of CPU cores the workload assigns to each pod, set to 4. Currently, the benchmark supports values of 1, 2, and 4, with a default of 4. While 1 or 2 CPU cores per pod may be more appropriate for relatively low-end systems or configurations with few vCPUs, a value of 4 is appropriate for most datacenter processors, and it often enables CSP instances to operate within the benchmark’s default maximum 95th-percentile latency SLA of 3,000 milliseconds.
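
For reference, this setting lives in the benchmark’s config.json file. The fragment below is only an illustrative sketch; the actual file contains many other options (omitted here), and the exact structure may differ between CloudXPRT releases.

```json
{
  "workload.cpurequests": 4
}
```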

In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.


Data analytics results submission requirement

Starting with the May results submission cycle, all data analytics results submissions must have the best reported performance (throughput_jobs/min) correspond to a 95th percentile SLA latency of 90 seconds or less. We have received submissions where the throughput was extremely high, but the 95th percentile SLA latency was up to 10 times the 90 seconds that we recommend in CloudXPRT documentation. High latency values may be acceptable for the unique purposes of individual testers, but they do not provide a good basis for comparison between clusters under test. For more information about configuration options with the data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
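
To make the criterion concrete, here’s one way to compute a 95th-percentile latency from a list of per-job latencies and check it against the 90-second ceiling. This is our own illustrative JavaScript using the nearest-rank method, not CloudXPRT code, and the names are hypothetical.

```javascript
// Illustrative sketch (not CloudXPRT code): check whether the 95th percentile
// of per-job latencies, in seconds, meets the 90-second submission ceiling.
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  // Nearest-rank method: smallest value at or above the p-th percentile rank
  const index = Math.ceil((p / 100) * sorted.length) - 1;
  return sorted[Math.max(0, index)];
}

const latencies = [42.1, 55.7, 61.3, 78.9, 84.2, 88.0, 95.4]; // example data
const p95 = percentile(latencies, 95);
console.log(`95th-percentile latency: ${p95}s`,
  p95 <= 90 ? '(meets the requirement)' : '(exceeds the requirement)');
```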

We will update CloudXPRT documentation to make sure that testers know to use the default configuration settings if they plan to submit results for publication. If you have any questions about CloudXPRT or the CloudXPRT results submission process, please let us know.

Justin

Considering a battery life test for WebXPRT 4

A few weeks ago, we discussed the beginnings of a WebXPRT 4 development plan and asked for reader feedback about potential workload changes. So far, the two most common feedback topics have been the possible addition of a WebAssembly workload and the feasibility of including a browser-based battery life test. Today, we discuss what a WebXPRT 4 battery life test might look like, and some of the challenges we’d have to overcome to make it a reality.

Battery life tests fall into two primary categories: simple rundown tests and performance-weighted tests. Simple rundown tests measure battery life during extended idle periods, looped video playback, and similar scenarios, but they do not reflect the wide-ranging mix of activities that characterizes a typical day for most users. While they can be useful for very specific apples-to-apples comparisons, these tests have limited value when it comes to giving consumers a realistic estimate of the battery life they would experience during everyday use.

In contrast, performance-weighted battery life tests, such as the one in CrXPRT 2, attempt to reflect real-world usage. The CrXPRT battery life test simulates common daily usage patterns for Chromebooks by including all the productivity workloads from the performance test, plus video playback, audio playback, and gaming scenarios. It also includes periods of wait/idle time. We believe this mixture of diverse activity and idle time better represents typical real-life behavior patterns. This makes the resulting estimated battery life much more helpful for consumers who are trying to match a device’s capabilities with their real-world needs.

From a technical standpoint, WebXPRT’s cross-platform nature presents us with several challenges that we did not face while developing the CrXPRT battery life test for Chrome OS. While the WebXPRT performance tests run in almost any browser, cross-browser differences in battery life reporting may restrict the battery life test to a single browser. For instance, Mozilla has deprecated the battery status API for Firefox, and we’re not yet sure if there’s another approach that would work. If a WebXPRT 4 battery life test supported only a single browser, such as Chrome or Safari, would you still be interested in using it? Please let us know.
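
For context, here’s roughly what reading battery data through the Battery Status API looks like in a browser that still supports it. This is a generic sketch rather than WebXPRT code; Chromium-based browsers expose navigator.getBattery(), while Firefox, as noted above, has deprecated it.

```javascript
// Generic sketch of the Battery Status API (not WebXPRT code).
// Supported in Chromium-based browsers; deprecated in Firefox.
if ('getBattery' in navigator) {
  navigator.getBattery().then((battery) => {
    console.log(`Battery level: ${(battery.level * 100).toFixed(0)}%`);
    console.log(`Charging: ${battery.charging}`);
    // A test could sample the level at the start and end of the workload loop
    battery.addEventListener('levelchange', () => {
      console.log(`Level changed: ${(battery.level * 100).toFixed(0)}%`);
    });
  });
} else {
  console.log('Battery Status API is not available in this browser.');
}
```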

A browser-based battery life workflow also presents other challenges that we do not face in native client applications such as CrXPRT:

  • A browser-based battery life test would require the user to check the starting and ending battery capacities, with no way for the app to independently verify data accuracy.
  • The battery life test could require more babysitting in the event of network issues. We can catch network failures and try to handle them by reporting periods of network disconnection (see the sketch after this list), but those interruptions could influence the battery life duration.
  • The factors above could make it difficult to achieve repeatability. One way to address that problem would be to run the test in a standardized lab environment with a steady internet connection, but a long list of standardized environmental requirements would make the battery life test less attractive and less accessible to many testers.
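
To illustrate the network-monitoring point in the list above, a test harness could log disconnection periods with the standard online and offline window events. This is a sketch of one possible approach, not a committed design:

```javascript
// Sketch: record periods of network disconnection during a long-running test.
// Uses the standard window online/offline events; not WebXPRT code.
const outages = [];
let offlineSince = null;

window.addEventListener('offline', () => {
  offlineSince = Date.now();
});

window.addEventListener('online', () => {
  if (offlineSince !== null) {
    outages.push({ start: offlineSince, durationMs: Date.now() - offlineSince });
    offlineSince = null;
  }
});

// At the end of a run, the outage log could be reported alongside the battery
// result so reviewers can judge whether disconnections skewed the duration.
```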

Our intention with today’s blog is not to make a WebXPRT 4 battery life test seem like an impossibility. Rather, we want to share our perspective on what the test might look like, and describe some of the challenges and considerations in play. If you have thoughts about battery life testing, or experience with battery life APIs in one or more of the major browsers, we’d love to hear from you!

Justin

Considering WebAssembly for WebXPRT 4

Earlier this month, we discussed a few of our ideas for possible changes in WebXPRT 4, including new web technologies that may work well in a browser benchmark. Today, we’re going to focus on one of those technologies, WebAssembly, in more detail.

WebAssembly (WASM) is a binary instruction format that works across all modern browsers. WASM provides a sandboxed environment that operates at near-native speed and takes advantage of common hardware capabilities across platforms. WASM’s capabilities offer web developers a great deal of flexibility for running complex client applications in the browser. That level of flexibility may enable workload scenario options for WebXPRT 4 such as gaming, video editing, VR, virtual machines, and image recognition. We’re excited about those possibilities, but it remains to be seen which WASM use cases will meet the criteria we look for when considering new WebXPRT workloads, such as relevance to real life, consistency and replicability, and the broadest possible level of cross-browser support.
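
For readers less familiar with WASM, the sketch below shows the basic pattern for loading and running a compiled module in the browser. The module file name and its exported run() function are hypothetical placeholders:

```javascript
// Hypothetical sketch: load a WebAssembly module and call an exported function.
// 'workload.wasm' and the exported run() function are placeholders.
async function runWasmWorkload() {
  const { instance } = await WebAssembly.instantiateStreaming(
    fetch('workload.wasm'),
    {} // any imports the module needs would go here
  );
  // Exported functions execute as sandboxed, near-native-speed code
  const result = instance.exports.run();
  console.log('WASM workload result:', result);
}

runWasmWorkload();
```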

One WASM workload that we’re investigating is a web-based machine learning workload that uses TensorFlow for JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of tasks, including image classification, object detection, sentence encoding, and natural language processing. TensorFlow.js originally used a WebGL back end, but it’s now also possible to run models on a WASM back end. We could also use this technology to enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album using AI or Encrypt Notes and OCR Scan.
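
To give a sense of what such a workload might involve, the sketch below switches TensorFlow.js to its WASM back end and classifies an image with a pre-trained MobileNet model. This is exploratory code based on the public TensorFlow.js packages, not a committed WebXPRT 4 workload:

```javascript
// Exploratory sketch: image classification with TensorFlow.js on the WASM
// back end. Not a committed WebXPRT 4 workload.
import * as tf from '@tensorflow/tfjs';
import '@tensorflow/tfjs-backend-wasm'; // registers the 'wasm' backend
import * as mobilenet from '@tensorflow-models/mobilenet';

async function classifyImage(imgElement) {
  await tf.setBackend('wasm'); // use WebAssembly instead of WebGL
  await tf.ready();

  const model = await mobilenet.load(); // pre-trained model
  const predictions = await model.classify(imgElement);
  predictions.forEach((p) =>
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`));
}

classifyImage(document.getElementById('photo'));
```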

We can’t yet say that a WASM workload will definitely appear in WebXPRT 4, but the technology is promising. Do you have any experience with WASM, or ideas for WASM workloads? There’s still time for you to help shape the future of WebXPRT 4, so let us know what you think!

Justin

CrXPRT support through 2022

CrXPRT testers may remember that around the time we began the CrXPRT 2 development process, the Chrome team announced that they were phasing out support for Portable Native Client (PNaCl) in favor of WebAssembly (WASM). As a first step, they changed the Chrome OS setting that enabled PNaCl by default. At the time, this caused problems with the Photo Collage workload in CrXPRT 2015, and even though we identified a workaround, details in the Chrome team’s announcement led us to conclude that the workaround might stop working in June 2021. Because of this change, we decided that the best course of action was to remove the workload from CrXPRT 2 and to keep existing CrXPRT 2015 testers informed of any changes to the workaround.

In 2020, the Chrome team also announced that they would be phasing out support for Chrome Apps altogether starting in June 2021, and would shift their focus to Chrome extensions. This change would have required us to reassess the viability of CrXPRT in anything like its current form.

We’re happy to report that the Chrome team has extended support for PNaCl and existing Chrome Apps through June 2022. Barring further changes, this means that CrXPRT 2015 (with the workaround) and CrXPRT 2 should continue to serve as reliable Chrome OS evaluation tools for some time.

If you have any questions about CrXPRT 2, please let us know!

Justin
