

Here’s what to expect in the WebXPRT 4 Preview

A few months ago, we shared detailed information about the changes we expected to make in WebXPRT 4. We are currently doing internal testing of the WebXPRT 4 Preview build in preparation for releasing it to the public. We want to let our readers know what to expect.

We’ve made some changes since our last update and some of the details we present below could still change before the preview release. However, we are much closer to the final product. Once we release the WebXPRT 4 Preview, testers will be able to publish scores from Preview build testing. We will limit any changes that we make between the Preview and the final release to the UI or features that are not expected to affect test scores.

General changes

Some of the non-workload changes we’ve made in WebXPRT 4 relate to our typical benchmark update process.

  • We have updated the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We did not significantly change the flow of the UI.
  • We have updated content in some of the workloads to reflect changes in everyday technology, such as upgrading most of the photos in the photo processing workloads to higher resolutions.
  • We have not yet added a looping function to the automation scripts, but are still considering it for the future.
  • We investigated the possibility of shortening the benchmark by reducing the default number of iterations from seven to five, but we have decided to stick with seven iterations to ensure that score variability remains acceptable across all platforms. (The sketch after this list shows one simple way to gauge that variability.)
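To illustrate, here is a minimal sketch, with hypothetical scores and not WebXPRT’s actual methodology, of how run-to-run variability can be summarized with the coefficient of variation (standard deviation divided by the mean); the lower the value, the more consistent the results.

```javascript
// Minimal sketch (hypothetical scores, not WebXPRT's actual methodology):
// summarize variability with the coefficient of variation.
function coefficientOfVariation(scores) {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const variance =
    scores.reduce((sum, s) => sum + (s - mean) ** 2, 0) / scores.length;
  return Math.sqrt(variance) / mean;
}

// Hypothetical overall scores from repeated runs on one device.
const runScores = [212, 208, 215, 209, 213, 211, 210];
console.log(`CoV: ${(coefficientOfVariation(runScores) * 100).toFixed(2)}%`);
```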

Workload changes

  • Photo Enhancement. We increased the efficiency of the workload’s Canvas object creation function, and replaced the existing photos with new, higher-resolution photos.
  • Organize Album Using AI. We replaced ConvNetJS with WebAssembly (WASM)-based OpenCV.js for both the face detection and image classification tasks. We changed the images for the image classification tasks to images from the ImageNet dataset.
  • Stock Option Pricing. We updated the dygraph.js library.
  • Sales Graphs. We made no changes to this workload.
  • Encrypt Notes and OCR Scan. We replaced ASM.js with WASM for the Notes task and updated the WASM-based Tesseract version for the OCR task.
  • Online Homework. In addition to the existing scenario, which uses four Web Workers, we have added a scenario with two Web Workers. The workload now covers a wider range of Web Worker performance, and we calculate the score using the combined run time of both scenarios. We also updated the typo.js library. (The sketch after this list illustrates the general timing approach.)
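For readers unfamiliar with the mechanics involved, here is a minimal sketch, not WebXPRT’s actual code, of timing a task split across a configurable number of Web Workers and combining the run times of a two-worker and a four-worker scenario; the placeholder math inside the worker stands in for real work such as spell checking.

```javascript
// Minimal sketch (not WebXPRT code): time a task split across N Web Workers,
// then combine the run times of a two-worker and a four-worker scenario.
const workerSource = `
  self.onmessage = (e) => {
    // Placeholder for real work (e.g., spell-checking a chunk of text).
    let sum = 0;
    for (let i = 0; i < e.data.iterations; i++) sum += Math.sqrt(i);
    self.postMessage(sum);
  };
`;
const workerUrl = URL.createObjectURL(
  new Blob([workerSource], { type: "text/javascript" })
);

// Run one scenario with the given number of workers; resolve with elapsed ms.
function runScenario(workerCount, iterationsPerWorker) {
  return new Promise((resolve) => {
    const start = performance.now();
    let finished = 0;
    for (let i = 0; i < workerCount; i++) {
      const worker = new Worker(workerUrl);
      worker.onmessage = () => {
        worker.terminate();
        if (++finished === workerCount) resolve(performance.now() - start);
      };
      worker.postMessage({ iterations: iterationsPerWorker });
    }
  });
}

// Combine the run times of the two scenarios, as the post describes.
(async () => {
  const twoWorkerTime = await runScenario(2, 5_000_000);
  const fourWorkerTime = await runScenario(4, 5_000_000);
  console.log(`Combined run time: ${(twoWorkerTime + fourWorkerTime).toFixed(1)} ms`);
})();
```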

Experimental workloads

As part of the WebXPRT 4 development process, we researched the possibility of including two new workloads: a natural language processing (NLP) workload and an Angular-based message scrolling workload. After much testing and discussion, we have decided not to include these two workloads in WebXPRT 4. They will be good candidates for us to add as experimental WebXPRT 4 workloads in 2022.

The release timeline

Our goal is to publish the WebXPRT 4 Preview build by December 15th, which will allow testers to publish scores in the weeks leading up to the Consumer Electronics Show (CES) in Las Vegas in January 2022. We will provide more detailed information about the general availability (GA) timeline here in the blog as soon as possible.

If you have any questions about the details we’ve shared above, please feel free to ask!

Justin

Thinking about experimental WebXPRT workloads in 2022

As the WebXPRT 4 development process has progressed, we’ve started to discuss the possibility of offering experimental WebXPRT 4 workloads in 2022. These would be optional workloads that test cutting-edge browser technologies or new use cases. The individual scores for the experimental workloads would stand alone, and would not factor in the WebXPRT 4 overall score.

WebXPRT testers would be able to run the experimental workloads one of two ways: by manually selecting them on the benchmark’s home screen, or by adjusting a value in the WebXPRT 4 automation scripts.
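As a purely hypothetical illustration (the WebXPRT 4 automation interface and any experimental-workload setting had not been defined at this point), a page script could expose the choice as a simple query-string flag; the parameter name and experimental workload names below are invented for this sketch.

```javascript
// Purely hypothetical: read an invented "experimental" flag from the query
// string to decide whether optional experimental workloads join the run list.
const params = new URLSearchParams(window.location.search);
const runExperimental = params.get("experimental") === "1";

const coreWorkloads = ["Photo Enhancement", "Organize Album Using AI", "Encrypt Notes and OCR Scan"];
const experimentalWorkloads = ["NLP question answering", "Message scrolling"];

const runList = runExperimental
  ? [...coreWorkloads, ...experimentalWorkloads]
  : coreWorkloads;
console.log("Workloads to run:", runList);
```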

Testers would benefit from experimental workloads by being able to compare how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.

Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.

Justin

A clearer picture of WebXPRT 4

The WebXPRT 4 development process is far enough along that we’d like to share more about changes we are likely to make and a rough target date for publishing a preview build. While some of the details below will probably change, this post should give readers a good sense of what to expect.

General changes

Some of the non-workload changes in WebXPRT 4 relate to our typical benchmark update process, and a few result directly from feedback we received from the WebXPRT tech press survey.

  • We will update the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We do not anticipate significantly changing the flow of the UI.
  • We will update content in some of the workloads to reflect changes in everyday technology. For instance, we will upgrade most of the photos in the photo processing workloads to higher resolutions.
  • In response to a request from tech press survey respondents, we are considering adding a looping function to the automation scripts.
  • We are investigating the possibility of shortening the benchmark by reducing the default number of iterations from seven to five. We will only make this change if we can ensure that five iterations produce consistently low score variance.

Changes to existing workloads

  • Photo Enhancement. This workload applies three effects to two photos each (six photos total). It tests HTML5 Canvas, Canvas 2D, and JavaScript performance. The only change we are considering is adding higher-resolution photos. (The sketch after this list shows the kind of Canvas 2D pixel work involved.)
  • Organize Album Using AI. This workload currently uses the ConvNetJS neural network library to complete two tasks: (1) organizing five images and (2) classifying the five images in an album. We are planning to replace ConvNetJS with WebAssembly (WASM) for both tasks and are considering upgrading the images to higher resolutions.
  • Stock Option Pricing. This workload calculates and displays graphic views of a stock portfolio using Canvas, SVG, and dygraph.js. The only change we are considering is combining it with the Sales Graphs workload (below).
  • Sales Graphs. This workload provides a web-based application displaying multiple views of sales data. Sales Graphs exercises HTML5 Canvas and SVG performance. The only change we are considering is combining it with the Stock Option Pricing workload (above).
  • Encrypt Notes and OCR Scan. This workload uses ASM.js to sync notes, extract text from a scanned receipt using optical character recognition (OCR), and add the scanned text to a spending report. We are planning to replace ASM.js with WASM for the Notes task and with WASM-based Tesseract for the OCR task.
  • Online Homework. This workload uses regex, arrays, strings, and Web Workers to review DNA and spell-check an essay. We are not planning to change this workload.
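As noted in the Photo Enhancement item above, here is a minimal sketch of the kind of Canvas 2D pixel work that workload exercises; the image file name and the simple brightness effect are placeholders, not WebXPRT’s actual enhancement code.

```javascript
// Minimal sketch (placeholder image and effect, not WebXPRT code): draw a
// photo to a canvas, adjust its pixels, and time the operation. Larger source
// photos mean more pixels to process per pass.
function applyBrightness(img, amount) {
  const canvas = document.createElement("canvas");
  canvas.width = img.naturalWidth;
  canvas.height = img.naturalHeight;
  const ctx = canvas.getContext("2d");
  ctx.drawImage(img, 0, 0);

  const frame = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const data = frame.data;
  for (let i = 0; i < data.length; i += 4) {
    data[i] = Math.min(255, data[i] + amount);         // red
    data[i + 1] = Math.min(255, data[i + 1] + amount); // green
    data[i + 2] = Math.min(255, data[i + 2] + amount); // blue
  }
  ctx.putImageData(frame, 0, 0);
  return canvas;
}

const photo = new Image();
photo.src = "example-photo.jpg"; // hypothetical local test image
photo.onload = () => {
  const start = performance.now();
  applyBrightness(photo, 40);
  console.log(`Enhancement took ${(performance.now() - start).toFixed(1)} ms`);
};
```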

Possible new workloads

  • Natural Language Processing (NLP). We are considering the addition of an NLP workload using ONNX Runtime and/or TensorFlowJS. The workload would use Bidirectional Encoder Representations from Transformers (BERT) to answer questions about a given text. Similar use cases are becoming more prevalent in conversational bot systems, domain-specific document search tools, and various educational applications. (The sketch after this list shows one browser-based approach to the use case.)
  • Message Scrolling. We are considering developing a new workload that would use Angular or React.js to scroll through hundreds of messages. We’ll share more about this possible workload as we firm up the details.
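For the NLP idea above, here is a minimal sketch of in-browser BERT-style question answering using TensorFlow.js and the published @tensorflow-models/qna package (a MobileBERT model fine-tuned for question answering); this illustrates the use case only and is not a WebXPRT workload design.

```javascript
// Minimal sketch (not a WebXPRT workload): in-browser question answering with
// TensorFlow.js and the @tensorflow-models/qna MobileBERT model.
import "@tensorflow/tfjs-backend-cpu";
import "@tensorflow/tfjs-backend-webgl";
import * as qna from "@tensorflow-models/qna";

async function answerQuestion() {
  const passage =
    "WebXPRT is a browser benchmark that compares the performance of " +
    "almost any web-enabled device using workloads based on everyday tasks.";
  const question = "What does WebXPRT compare?";

  const start = performance.now();
  const model = await qna.load();                 // downloads the model
  const answers = await model.findAnswers(question, passage);
  console.log(answers);                           // ranked answer spans with scores
  console.log(`Q&A run time: ${(performance.now() - start).toFixed(1)} ms`);
}

answerQuestion();
```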

The release timeline

We hope to publish a WebXPRT 4 preview build in the second half of November, with a general release before the end of the year. If it looks as though that timeline will change significantly, we’ll provide an update here in the blog as soon as possible.

We’re very grateful for all the input we received during the WebXPRT 4 planning process. If you have any questions about the details we’ve shared above, please feel free to ask!

Justin

Investigating the possibility of WebXPRT user accounts

One of our goals during the ongoing WebXPRT 4 development process is to be as responsive as possible to user feedback, and we want to emphasize that it’s not too late to send us your ideas. Until we finalize the details for each workload and complete the code work for the preview build, we still have quite a bit of flexibility around adding new features.

Just this week, a community member raised the possibility of a WebXPRT 4 feature that would enable user-specific test ID numbers or accounts. One possible implementation of the idea would allow a user to sign up for a WebXPRT test account as an individual or on behalf of their organization. The test accounts would be both free and optional; you could continue to run the benchmark without an account, but running it with an account would let you save and view your test history. Another implementation option we are considering would let users generate a permanent user ID number for themselves or their organization. They could then use that number to tag and search for their automated test runs in our database, without having to log into an account.
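As a purely hypothetical sketch of the second option (nothing here reflects an actual WebXPRT interface), a browser could generate a permanent ID once, keep it locally, and attach it to automated runs as a tag; the storage key and the query parameter below are invented for illustration.

```javascript
// Purely hypothetical: create a permanent test ID once and reuse it to tag
// automated runs. The storage key and "tag" parameter are invented.
function getOrCreateTestId() {
  const key = "webxprt-test-id"; // hypothetical storage key
  let id = localStorage.getItem(key);
  if (!id) {
    id = crypto.randomUUID(); // stable, shareable identifier
    localStorage.setItem(key, id);
  }
  return id;
}

const testId = getOrCreateTestId();
// Hypothetical automation URL; the real parameter names were not defined at the time.
const automationUrl = `https://www.webxprt.com/?tag=${encodeURIComponent(testId)}`;
console.log("Tagged automation URL:", automationUrl);
```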

Our biggest question at the moment is whether our user base would be interested in WebXPRT user accounts or test IDs. If this concept piques your interest, or you have suggestions for implementation, please let us know!

Justin

Improving WebXPRT-related tools and resources

As we move forward with the WebXPRT 4 development process, we’re also working on ways to enhance the value of WebXPRT beyond simply updating the benchmark. Our primary goal is to expand and improve the WebXPRT-related tools and resources we offer at WebXPRT.com, starting with a new results viewer.

Currently, users can view WebXPRT results on our site in two primary ways, each of which has advantages and limitations.

The first way is the WebXPRT results viewer, which includes hundreds of PT-curated performance scores from a wide range of trusted sources and devices. Users can sort entries by device type, device name, device model, overall score, date of publication, and source. The viewer also includes a free-form filter for quick, targeted searches. While the results viewer contains a wealth of information, it does not offer graphs or charts for viewing and comparing multiple results at once, and it provides no easy way to reach the additional test-device details and subtest scores that we have for many entries.

The second way to view WebXPRT results on our site is the WebXPRT Processor Comparison Chart. The chart uses horizontal bar graphs to compare test scores from the hundreds of published results in our database, grouped by processor type. Users can click the average score for a processor to view all the WebXPRT results we currently have for that processor. The visual aspect of the chart and its automated “group by processor type” feature are very useful, but it lacks the sorting and filtering capabilities of the viewer, and navigating to the details of individual tests takes multiple clicks.
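To make the chart’s “group by processor type” idea concrete, here is a minimal sketch that buckets hypothetical result objects (not the live WebXPRT database schema) by processor and averages each bucket’s overall score.

```javascript
// Minimal sketch (hypothetical records, not the live database schema): group
// published results by processor and compute each group's average score.
const results = [
  { device: "Laptop A", processor: "Intel Core i5-1135G7", score: 210 },
  { device: "Laptop B", processor: "Intel Core i5-1135G7", score: 198 },
  { device: "Phone C",  processor: "Snapdragon 888",        score: 142 },
];

function averageByProcessor(rows) {
  const groups = new Map();
  for (const row of rows) {
    const bucket = groups.get(row.processor) ?? { total: 0, count: 0 };
    bucket.total += row.score;
    bucket.count += 1;
    groups.set(row.processor, bucket);
  }
  return [...groups].map(([processor, { total, count }]) => ({
    processor,
    averageScore: total / count,
    runs: count,
  }));
}

console.table(averageByProcessor(results));
```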

In the coming months, we’ll be working to combine the best features of the results viewer and the comparison chart into a single powerful WebXPRT results database tool. We’ll also be investigating ways to add new visual aids, navigation controls, and data-handling capabilities to that tool. We want to provide a tool that helps testers and analysts access the wealth of WebXPRT test information in our database in an efficient, productive, and enjoyable way. If you have ideas or comments about what you’d like to see in a new WebXPRT results viewing tool, please let us know!

Justin

Round 2 of the WebXPRT 4 survey is now open

In May, we surveyed longtime WebXPRT users regarding the types of changes they would like to see in a future WebXPRT 4. We sent the survey to journalists at several tech press outlets and invited our blog readers to participate as well. We received some very helpful feedback. As we explore new possibilities for WebXPRT 4, we’ve decided to open an updated version of the survey. We’ve adjusted the questions a bit based on previous feedback and added some new ones, so we invite you to respond even if you participated in the original survey.

To do so, please send your answers to the following questions to benchmarkxprtsupport@principledtechnologies.com before July 31.

  • Do you think WebXPRT 3’s selection of workload scenarios is representative of modern web tasks?
  • How do you think WebXPRT compares to other common browser-based benchmarks, such as JetStream, Speedometer, and Octane?
  • Would you like to see a workload based on WebAssembly (WASM) in WebXPRT 4? Why or why not?
  • Would you like to see a workload based on Single Page Application (SPA) technology in WebXPRT 4? Why or why not?
  • Would you like to see a workload based on Motion UI in WebXPRT 4? Why or why not?
  • Would you like to see us include any other web technologies in additional workloads?
  • Are you happy with the WebXPRT 3 user interface? If not, what UI changes would you like to see?
  • Have you ever experienced significant connection issues when testing with WebXPRT?
  • Given its array of workloads, do you think the WebXPRT runtime is reasonable? Would you mind if the average runtime increased slightly?
  • Would you like to see us change any other aspects of WebXPRT 3?


If you would like to share your thoughts on any topics that the questions above do not cover, please include those in your response. We look forward to hearing from you!

Justin
