

An early preview of the new WebXPRT 4 results viewer!

Last week, we shared some new details about the changes we’re likely to make in WebXPRT 4, and a rough target date for publishing a preview build. This week, we’re excited to share an early preview of the new results viewer tool that we plan to release in conjunction with WebXPRT 4. We hope the tool will help testers and analysts access the wealth of WebXPRT test results in our database in an efficient, productive, and enjoyable way. We’re still ironing out many of the details, so some aspects of what we’re showing today might change, but we’d like to give you an idea of what to expect.

The screenshot below shows the tool’s default display. In this example, the viewer displays over 650 sample results, from a wide range of device types, that we’re currently using as placeholder data. The viewer will include several sorting and filtering options, such as device type, browser type, processor vendor, and the source of the result.

Each vertical bar in the graph represents the overall score of a single test result, and the graph presents the scores in order from lowest to highest. To view an individual result in detail, the user simply hovers over and selects the bar representing the result. The bar turns dark blue, and a dark blue banner at the bottom of the viewer displays details about that result.

In the example above, the banner shows the overall score (250) and the score’s percentile rank (85th) among the scores in the current display. In the final version of the viewer, the banner will also display the device name of the test system, along with basic hardware disclosure information. Selecting the Run details button will let users see more about the run’s individual workload scores.
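
To illustrate how a percentile rank like the one above can be computed from the set of displayed scores, here is a minimal JavaScript sketch. The function name and the sample data are our own illustration, not code from the actual viewer.

```js
// Percentile rank of one score within the displayed scores:
// the percentage of scores that fall at or below the selected score.
function percentileRank(scores, selected) {
  const atOrBelow = scores.filter((s) => s <= selected).length;
  return Math.round((atOrBelow / scores.length) * 100);
}

// Hypothetical sample: overall scores currently shown in the viewer.
const displayedScores = [98, 120, 155, 170, 201, 223, 250, 274];

// The viewer presents bars in order from lowest to highest score.
const sorted = [...displayedScores].sort((a, b) => a - b);

console.log(percentileRank(sorted, 250)); // 88 for this small sample
```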

We’re still working on a way for users to pin or save specific runs. This would let users easily find the results that interest them, or possibly select multiple runs for a side-by-side comparison.

We’re excited about this new tool, and we look forward to sharing more details here in the blog as we get closer to taking it live. If you have any questions or comments about the results viewer, please feel free to contact us!

Justin

A clearer picture of WebXPRT 4

The WebXPRT 4 development process is far enough along that we’d like to share more about changes we are likely to make and a rough target date for publishing a preview build. While some of the details below will probably change, this post should give readers a good sense of what to expect.

General changes

Some of the non-workload changes in WebXPRT 4 relate to our typical benchmark update process, and a few result directly from feedback we received from the WebXPRT tech press survey.

  • We will update the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We do not anticipate significantly changing the flow of the UI.
  • We will update content in some of the workloads to reflect changes in everyday technology. For instance, we will upgrade most of the photos in the photo processing workloads to higher resolutions.
  • In response to a request from tech press survey respondents, we are considering adding a looping function to the automation scripts.
  • We are investigating the possibility of shortening the benchmark by reducing the default number of iterations from seven to five. We will only make this change if we can ensure that five iterations produce consistently low score variance (one way to quantify that variance is sketched below).
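
As one illustration of how run-to-run variance could be quantified, the JavaScript sketch below computes the coefficient of variation (standard deviation divided by mean) across a set of iteration scores. The sample scores and the acceptance threshold are hypothetical, not WebXPRT internals.

```js
// Coefficient of variation (CV) across iteration scores:
// a low CV indicates consistent results across iterations.
function coefficientOfVariation(scores) {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const variance =
    scores.reduce((sum, s) => sum + (s - mean) ** 2, 0) / scores.length;
  return Math.sqrt(variance) / mean;
}

// Hypothetical overall scores from five iterations.
const fiveIterations = [251, 249, 253, 248, 252];

// A hypothetical acceptance threshold of 2 percent.
console.log(coefficientOfVariation(fiveIterations) < 0.02); // true here
```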

Changes to existing workloads

  • Photo Enhancement. This workload applies three effects to two photos each (six photos total). It tests HTML5 Canvas, Canvas 2D, and JavaScript performance. The only change we are considering is adding higher-resolution photos.
  • Organize Album Using AI. This workload currently uses the ConvNetJS neural network library to complete two tasks: (1) organizing five images and (2) classifying the five images in an album. We are planning to replace ConvNetJS with WebAssembly (WASM) for both tasks and are considering upgrading the images to higher resolutions.
  • Stock Option Pricing. This workload calculates and displays graphic views of a stock portfolio using Canvas, SVG, and dygraph.js. The only change we are considering is combining it with the Sales Graphs workload (below).
  • Sales Graphs. This workload provides a web-based application displaying multiple views of sales data. Sales Graphs exercises HTML5 Canvas and SVG performance. The only change we are considering is combining it with the Stock Option Pricing workload (above).
  • Encrypt Notes and OCR Scan. This workload uses ASM.js to sync notes, extract text from a scanned receipt using optical character recognition (OCR), and add the scanned text to a spending report. We are planning to replace ASM.js with WASM for the Notes task and with WASM-based Tesseract for the OCR task (a rough sketch of the OCR step appears after this list).
  • Online Homework. This workload uses regex, arrays, strings, and Web Workers to review DNA sequences and spell-check an essay. We are not planning to change this workload.
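
As a rough illustration of what a WASM-based OCR step can look like in the browser, here is a sketch using the open-source tesseract.js library, which wraps a WebAssembly build of Tesseract. This is our own example of the general technique, not WebXPRT 4 code, and the image path is a placeholder.

```js
// Sketch: extract text from a scanned receipt with tesseract.js,
// a JavaScript wrapper around a WebAssembly build of Tesseract.
import Tesseract from 'tesseract.js';

// 'receipt.png' is a placeholder path to a scanned receipt image.
Tesseract.recognize('receipt.png', 'eng')
  .then(({ data }) => {
    // The recognized text could then feed a spending-report step.
    console.log(data.text);
  })
  .catch((err) => console.error('OCR failed:', err));
```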

Possible new workloads

  • Natural Language Processing (NLP). We are considering the addition of an NLP workload using ONNX Runtime and/or TensorFlowJS. The workload would use Bidirectional Encoder Representations from Transformers (BERT) to answer questions about a given text (a rough sketch of this approach appears after this list). Similar use cases are becoming more prevalent in conversational bot systems, domain-specific document search tools, and various other applications.
  • Message Scrolling. We are considering developing a new workload that would use Angular or React to scroll through hundreds of messages. We’ll share more about this possible workload as we firm up the details.
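
For a sense of what browser-based BERT question answering can look like, here is a sketch using the TensorFlow.js question-and-answer model (a MobileBERT variant). This is our own illustration of the technique under consideration, not WebXPRT 4 code; the passage and question are made up.

```js
// Sketch: BERT-style question answering in the browser with
// the TensorFlow.js QnA model (a MobileBERT variant).
import '@tensorflow/tfjs';
import * as qna from '@tensorflow-models/qna';

const passage =
  'WebXPRT is a browser benchmark that compares the performance of ' +
  'almost any web-enabled device.';
const question = 'What does WebXPRT compare?';

qna.load().then(async (model) => {
  const answers = await model.findAnswers(question, passage);
  // Each answer includes the extracted text span and a score.
  console.log(answers);
});
```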

The release timeline

We hope to publish a WebXPRT 4 preview build in the second half of November, with a general release before the end of the year. If it looks as though that timeline will change significantly, we’ll provide an update here in the blog as soon as possible.

We’re very grateful for all the input we received during the WebXPRT 4 planning process. If you have any questions about the details we’ve shared above, please feel free to ask!

Justin

How to submit WebXPRT results for publication

It’s been a while since we last discussed the process for submitting WebXPRT results to be considered for publication in the WebXPRT results browser and the WebXPRT Processor Comparison Chart, so we thought we’d offer a refresher.

Unlike sites that publish all results they receive, we hand-select results from internal lab testing, user submissions, and reliable tech media sources. In each case, we evaluate whether the score is consistent with general expectations. For sources outside of our lab, that evaluation includes confirming that there is enough detailed system information to help us determine whether the score makes sense. We do this for every score on the WebXPRT results page and the general XPRT results page. All WebXPRT results we publish automatically appear in the processor comparison chart as well.

Submitting your score is quick and easy. At the end of the WebXPRT test run, click the Submit your results button below the overall score, complete the short submission form, and click Submit again. The screenshot below shows how the form would look if I submitted a score at the end of a WebXPRT 3 run on my personal system.

After you submit your score, we’ll contact you to confirm how we should display the source. You can choose one of the following:

  • Your first and last name
  • “Independent tester” (for those who wish to remain anonymous)
  • Your company’s name, provided that you have permission to submit the result in its name. To use a company name, we ask that you provide a valid company email address.


We will not publish any additional information about you or your company without your permission.

We look forward to seeing your score submissions, and if you have suggestions for the processor chart or any other aspect of the XPRTs, let us know!

Justin

Publishing CloudXPRT results from testing on pre-production gear

We recently received questions about whether we accept CloudXPRT results submissions from testing on pre-production gear, and how we would handle any differences between results from pre-production and production-level tests.  

To answer the first question, we are not opposed to pre-production results submissions. We realize that vendors often want to include benchmark results in launch-oriented marketing materials they release before their hardware or software is publicly available. To help them do so, we’re happy to consider pre-production submissions on a case-by-case basis. All such submissions must follow the normal CloudXPRT results submission process and undergo vetting by the CloudXPRT Results Review Group according to the standard review and publication schedule. If we decide to publish pre-production results on our site, we will clearly note their pre-production status.

In response to the second question, the CloudXPRT Results Review Group will handle any challenges to published results or perceived discrepancies between pre-production and production-level results on a case-by-case basis. We do not currently have a formal process for challenges; anyone who would like to initiate a challenge or express comments or concerns about a result should address the review group via benchmarkxprtsupport@principledtechnologies.com. Our primary concern is always to ensure that published results accurately reflect the performance characteristics of production-level hardware and software. If it becomes necessary to develop more policies in the future, we’ll do so, but we want to keep things as simple as possible.

If you have any questions about the CloudXPRT results submission process, please let us know!

Justin

WebXPRT passes the 750,000-run milestone!

We’re excited to see that users have successfully completed over 750,000 WebXPRT runs! If you’ve run WebXPRT in any of the more than 654 cities and 68 countries from which we’ve received complete test data—including newcomers Belize, Cambodia, Croatia, and Pakistan—we’re grateful for your help. We could not have reached this milestone without you!

As the chart below illustrates, WebXPRT use has grown steadily over the years. We now record, on average, almost twice as many WebXPRT runs in one month as we recorded in the entirety of our first year. In addition, with over 82,000 runs to date in 2021, there are no signs that growth is slowing.

Developing a new benchmark is never easy, and the obstacles multiply when you attempt to create a cross-platform benchmark, such as WebXPRT, that will run on a wide variety of devices. Establishing trust with the benchmarking community is another challenge. Transparency, consistency, and technical competency on our part are critical factors in building that trust, but the people who take time out of their busy schedules to run the benchmark for the first time also play a role. We thank all of the manufacturers, OEM labs, and members of the tech press who decided to give WebXPRT a try, and we look forward to your input as we continue to improve WebXPRT in the years to come. 

If you have any questions or comments about WebXPRT, we’d love to hear from you!

Justin

Considering a battery life test for WebXPRT 4

A few weeks ago, we discussed the beginnings of a WebXPRT 4 development plan, and asked for reader feedback about potential workload changes. So far, the two most common feedback topics have been the possible addition of a WebAssembly workload, and the feasibility of including a browser-based battery life test. Today, we discuss what a WebXPRT 4 battery life test would look like, and some of the challenges we’d have to overcome to make it a reality.

Battery life tests fall into two primary categories: simple rundown tests and performance-weighted tests. Simple rundown tests measure battery life during a single repeated activity, such as an extended idle period or a looped video playback, but they do not reflect the wide-ranging mix of activities that characterizes a typical day for most users. While these tests can be useful for very specific apples-to-apples comparisons, they have limited value when it comes to giving consumers a realistic estimate of the battery life they would experience during everyday use.

In contrast, performance-weighted battery life tests, such as the one in CrXPRT 2, attempt to reflect real-world usage. The CrXPRT battery life test simulates common daily usage patterns for Chromebooks by including all the productivity workloads from the performance test, plus video playback, audio playback, and gaming scenarios. It also includes periods of wait/idle time. We believe this mixture of diverse activity and idle time better represents typical real-life behavior patterns. This makes the resulting estimated battery life much more helpful for consumers who are trying to match a device’s capabilities with their real-world needs.

From a technical standpoint, WebXPRT’s cross-platform nature presents us with several challenges that we did not face while developing the CrXPRT battery life test for Chrome OS. While the WebXPRT performance tests run in almost any browser, cross-browser differences in battery life reporting may restrict the battery life test to a single browser. For instance, Mozilla has deprecated the battery status API for Firefox, and we’re not yet sure if there’s another approach that would work. If a WebXPRT 4 battery life test supported only a single browser, such as Chrome or Safari, would you still be interested in using it? Please let us know.
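
For context, the sketch below shows the kind of feature detection a browser-based battery life test would rely on. It uses the Battery Status API (navigator.getBattery), which Chromium-based browsers expose but Firefox no longer does; the logging is our own illustration, not test code.

```js
// Sketch: detect and read the Battery Status API, which is
// available in Chromium-based browsers but not in Firefox.
if ('getBattery' in navigator) {
  navigator.getBattery().then((battery) => {
    // battery.level is a fraction from 0 to 1.
    console.log(`Battery level: ${Math.round(battery.level * 100)}%`);
    console.log(`Charging: ${battery.charging}`);

    // A test harness could record level changes over time.
    battery.addEventListener('levelchange', () => {
      console.log(`Level changed: ${battery.level}`);
    });
  });
} else {
  console.log('Battery Status API unavailable in this browser.');
}
```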

A browser-based battery life workflow also presents other challenges that we do not face in native client applications such as CrXPRT:

  • A browser-based battery life test would require the user to check the starting and ending battery capacities, with no way for the app to independently verify data accuracy.
  • The battery life test could require more babysitting in the event of network issues. We can catch network failures and try to handle them by reporting periods of network disconnection (the sketch after this list shows one way to detect them), but those interruptions could influence the battery life duration.
  • The factors above could make it difficult to achieve repeatability. One way to address that problem would be to run the test in a standardized lab environment with a steady internet connection, but a long list of standardized environmental requirements would make the battery life test less attractive and less accessible to many testers.
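
To show how a test could at least detect and log the disconnections mentioned above, here is a minimal sketch using the browser’s online/offline events. The bookkeeping variables and report function are our own illustration.

```js
// Sketch: log periods of network disconnection during a long test
// run using the browser's online/offline events.
const outages = [];
let offlineSince = null;

window.addEventListener('offline', () => {
  offlineSince = Date.now();
});

window.addEventListener('online', () => {
  if (offlineSince !== null) {
    outages.push({
      start: offlineSince,
      durationMs: Date.now() - offlineSince,
    });
    offlineSince = null;
  }
});

// At the end of the run, the outage log could be attached to the
// result so reviewers can judge whether interruptions affected it.
function outageReport() {
  return outages;
}
```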

Our intention with today’s blog is not to make a WebXPRT 4 battery life test seem like an impossibility. Rather, we want to share our perspective on what the test might look like, and describe some of the challenges and considerations in play. If you have thoughts about battery life testing, or experience with battery life APIs in one or more of the major browsers, we’d love to hear from you!

Justin
