We’re excited to announce that the WebXPRT 4 Preview is now available! Testers can access the Preview at www.WebXPRT4.com or through a link on WebXPRT.com. The Preview is available to everyone, and testers can now publish scores from Preview build testing. We may still tweak a few things, but we will limit any changes that we make between the Preview and the final release to the UI and to features we do not expect to affect test scores.
Longtime WebXPRT users will notice that the WebXPRT 4 Preview has a new, but familiar, UI. The general process for kicking off both manual and automated tests is the same as with WebXPRT 3, so the transition from WebXPRT 3 to WebXPRT 4 testing should be straightforward. We encourage everyone to visit the XPRT blog for more details about what’s new in this Preview release.
In addition, keep your eye on the blog for more details about the all-new WebXPRT 4 results viewer, which we expect to publish in the very near future. We think WebXPRT testers will enjoy using the viewer to explore our WebXPRT 4 test data! After you try the WebXPRT 4 Preview, please send us your comments. Thanks and happy testing!
Last week, we provided readers with an overview of what to expect in the WebXPRT 4 Preview, as well as an update on the Preview’s release schedule. Since then, we’ve been working on UI adjustments and bug fixes, additional technical tweaks, and follow-up testing. We’re very close, but won’t be able to meet our original goal of publishing the Preview today. We believe it will be ready for release early next week.
As a reminder, once we release the WebXPRT 4 Preview, testers will be able to publish scores from Preview build testing. We will limit any changes that we make between the Preview and the final release to the UI or to features we do not expect to affect test scores.
If you have any questions about WebXPRT 4 or the Preview build, please let us know!
A few months ago, we shared detailed information about the changes we expected to make in WebXPRT 4. We are currently doing internal testing of the WebXPRT 4 Preview build in preparation for releasing it to the public. We want to let our readers know what to expect.
We’ve made some changes since our last update, and some of the details we present below could still change before the Preview release. However, we are much closer to the final product. Once we release the WebXPRT 4 Preview, testers will be able to publish scores from Preview build testing. We will limit any changes that we make between the Preview and the final release to the UI or to features we do not expect to affect test scores.
General changes
Some of the non-workload changes we’ve made in WebXPRT 4 relate to our typical benchmark update process.
We have updated the aesthetics of the WebXPRT UI to make WebXPRT 4 visually distinct from older versions. We did not significantly change the flow of the UI.
We have updated content in some of the workloads to reflect changes in everyday technology, such as upgrading most of the photos in the photo processing workloads to higher resolutions.
We have not yet added a looping function to the automation scripts, but are still considering it for the future.
We investigated the possibility of shortening the benchmark by reducing the default number of iterations from seven to five, but have decided to stick with seven iterations to ensure that score variability remains acceptable across all platforms.
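For readers curious about how iteration count relates to score variability, the TypeScript sketch below illustrates one common way to gauge run-to-run consistency: computing the coefficient of variation (CV) across a set of iteration scores. The scores in the sketch are hypothetical placeholders, and this shows the general statistical idea rather than WebXPRT’s actual scoring code.

```typescript
// Illustrative only: gauging run-to-run variability across benchmark iterations.
// The scores below are hypothetical placeholders, not real WebXPRT data.
const iterationScores: number[] = [248, 252, 250, 247, 253, 251, 249]; // seven iterations

const mean = iterationScores.reduce((sum, s) => sum + s, 0) / iterationScores.length;

// Sample standard deviation (n - 1 in the denominator).
const variance =
  iterationScores.reduce((sum, s) => sum + (s - mean) ** 2, 0) /
  (iterationScores.length - 1);
const stdDev = Math.sqrt(variance);

// Coefficient of variation: lower values mean more consistent scores.
const cv = (stdDev / mean) * 100;
console.log(`mean: ${mean.toFixed(1)}, CV: ${cv.toFixed(2)}%`);
```

With more iterations, a single outlier run has less influence on measures like this, which is part of why we opted to keep the default at seven.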
Workload changes
Photo Enhancement. We increased the efficiency of the workload’s Canvas object creation function, and replaced the existing photos with new, higher-resolution photos.
Organize Album Using AI. We replaced ConvNetJS with WebAssembly (WASM)-based OpenCV.js for both the face detection and image classification tasks. We also replaced the images in the image classification task with images from the ImageNet dataset.
Stock Option Pricing. We updated the dygraph.js library.
Sales Graphs. We made no changes to this workload.
Encrypt Notes and OCR Scan. We replaced ASM.js with WASM for the Notes task and updated to a newer WASM-based version of Tesseract for the OCR task.
Online Homework. In addition to the existing scenario, which uses four Web Workers, we added a scenario that uses two Web Workers. The workload now covers a wider range of Web Worker performance, and we calculate the score using the combined run time of both scenarios. We also updated the typo.js library.
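To show the general pattern behind a multi-scenario Web Worker workload like this one, here is a minimal, hedged TypeScript sketch that spins up a pool of workers, waits for all of them to finish, and records the elapsed time. The worker script name (homework-task.js) and message shape are hypothetical stand-ins; the actual WebXPRT implementation differs.

```typescript
// Minimal sketch: time a scenario that splits work across N Web Workers.
// "homework-task.js" is a hypothetical worker script, not WebXPRT's.
function runScenario(workerCount: number): Promise<number> {
  const start = performance.now();
  const tasks = Array.from({ length: workerCount }, (_, i) => {
    return new Promise<void>((resolve, reject) => {
      const worker = new Worker("homework-task.js");
      worker.onmessage = () => { worker.terminate(); resolve(); };
      worker.onerror = (e) => { worker.terminate(); reject(e); };
      worker.postMessage({ chunk: i }); // tell the worker which slice to process
    });
  });
  // Resolve with the scenario's elapsed time once every worker reports back.
  return Promise.all(tasks).then(() => performance.now() - start);
}

// Combine the run times of the four-worker and two-worker scenarios,
// mirroring the idea of scoring on the scenarios' combined run time.
async function runWorkload(): Promise<number> {
  const timeFour = await runScenario(4);
  const timeTwo = await runScenario(2);
  return timeFour + timeTwo;
}
```

Running both a four-worker and a two-worker scenario exercises the browser under different degrees of parallelism, which is why combining the two run times gives a broader picture of Web Worker performance.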
Experimental workloads
As part of the WebXPRT 4 development process, we researched the possibility of including two new workloads: a natural language processing (NLP) workload and an Angular-based message-scrolling workload. After much testing and discussion, we have decided not to include these two workloads in WebXPRT 4. They will be good candidates to add as experimental WebXPRT 4 workloads in 2022.
The release timeline
Our goal is to publish the WebXPRT 4 Preview build by December 15, which will allow testers to publish scores in the weeks leading up to the Consumer Electronics Show in Las Vegas in January 2022. We will provide more detailed information about the general availability (GA) timeline here in the blog as soon as possible.
If you have any questions about the details we’ve shared above, please feel free to ask!
As the WebXPRT 4 development process has progressed, we’ve started to discuss the possibility of offering experimental WebXPRT 4 workloads in 2022. These would be optional workloads that test cutting-edge browser technologies or new use cases. The individual scores for the experimental workloads would stand alone, and would not factor in the WebXPRT 4 overall score.
WebXPRT testers would be able to run the experimental workloads one of two ways: by manually selecting them on the benchmark’s home screen, or by adjusting a value in the WebXPRT 4 automation scripts.
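As a purely hypothetical illustration of the second option, opting in to experimental workloads through an automation script might look something like the TypeScript sketch below. The parameter names and URL structure are invented for this example; the real WebXPRT 4 automation syntax may differ.

```typescript
// Hypothetical sketch: building an automation URL that opts in to
// experimental workloads. The parameter names are invented, not WebXPRT's.
interface AutomationOptions {
  experimental: boolean; // opt in to experimental workloads
  iterations: number;    // number of benchmark iterations
}

function buildAutomationUrl(base: string, opts: AutomationOptions): string {
  const params = new URLSearchParams({
    experimental: opts.experimental ? "1" : "0",
    iterations: String(opts.iterations),
  });
  return `${base}?${params.toString()}`;
}

// Placeholder base URL for illustration only.
console.log(buildAutomationUrl("https://example.com/webxprt", {
  experimental: true,
  iterations: 7,
}));
```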
Testers would benefit from experimental workloads by being able to compare how well certain browsers or systems handle new tasks (e.g., new web apps or AI capabilities). We would benefit from fielding workloads for large-scale testing and user feedback before we commit to including them as core WebXPRT workloads.
Do you have any general thoughts about experimental workloads for browser performance testing, or any specific workloads that you’d like us to consider? Please let us know.
People choose a default web browser based on several factors. Speed is sometimes the deciding factor, but privacy settings, memory load, ecosystem integration, and web app capabilities can also come into play. Regardless of the motivations behind a person’s go-to browser choice, the dominance of software-as-a-service (SaaS) computing means that new updates are always right around the corner. In previous blog posts, we’ve talked about how browser speed can increase or decrease significantly after an update, only to swing back in the other direction shortly thereafter. OS-specific optimizations can also affect performance, such as with Microsoft Edge on Windows and Google Chrome on Chrome OS.
Windows 11 began rolling out earlier this month, and tech press outlets such as AnandTech and PCWorld have used WebXPRT 3 to evaluate the impact of the new OS (or specific settings in the OS) on browser performance. Our own in-house tests, which we discuss below, show a negligible impact on browser performance when updating our test system from Windows 10 to Windows 11. It’s important to note that, depending on a system’s hardware setup, the impact might be more significant in certain scenarios. For more information about such scenarios, we encourage you to read the PCWorld article discussing how the Windows 11 default virtualization-based security (VBS) settings affect browser performance in some instances.
In our comparison tests, we used a Dell XPS 13 7390 with an Intel Core i3-10110U processor and 4 GB of RAM. For the Windows 10 tests, we used a clean Windows 10 Home image updated to version 20H2 (19042.1165). For the Windows 11 tests, we updated the system to Windows 11 Home version 21H2 (22000.282). On each OS version, we ran WebXPRT 3 three times on the latest versions of five browsers: Brave, Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. For each browser, the score we post below is the median of the three test runs.
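For clarity, here is a tiny TypeScript sketch of the median calculation this methodology implies: sort the run scores and take the middle value (for three runs, the second-highest). The sample scores are placeholders, not our actual results.

```typescript
// Median of an odd number of runs: sort ascending, take the middle element.
// (With three runs, this is simply the middle score.)
function median(scores: number[]): number {
  const sorted = [...scores].sort((a, b) => a - b);
  return sorted[Math.floor(sorted.length / 2)];
}

console.log(median([231, 238, 235])); // placeholder scores -> 235
```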
In our last round of tests on Windows 10, Firefox was the clear winner. Three of the Chromium-based browsers (Chrome, Edge, and Opera) produced very close scores, and the performance of Brave lagged by about 7 percent. In this round of Windows 10 testing, performance on every browser improved slightly, with Google Chrome taking a slight lead over Firefox.
In our Windows 11 testing, we were interested to find that, without exception, browser scores were slightly lower than in Windows 10 testing. However, none of the decreases were statistically significant, and most users performing daily tasks are unlikely to notice that degree of difference.
Have you observed any significant differences in WebXPRT 3 scores after upgrading to Windows 11? If so, let us know!
Last week, we shared some new details about the changes we’re likely to make in WebXPRT 4, and a rough target date for publishing a preview build. This week, we’re excited to share an early preview of the new results viewer tool that we plan to release in conjunction with WebXPRT 4. We hope the tool will help testers and analysts access the wealth of WebXPRT test results in our database in an efficient, productive, and enjoyable way. We’re still ironing out many of the details, so some aspects of what we’re showing today might change, but we’d like to give you an idea of what to expect.
The screenshot below shows the tool’s default display. In this example, the viewer displays over 650 sample results (from a wide range of device types) that we’re currently using as placeholder data. The viewer will include several sorting and filtering options, such as device type, browser type, processor vendor, and the source of the result.
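To give a feel for the kind of filtering and sorting the viewer will support, here is a hedged TypeScript sketch over a made-up result record type. The field names (deviceType, browser, processorVendor, source) are illustrative assumptions for this sketch, not the viewer’s actual schema.

```typescript
// Illustrative result record; the field names are assumptions for this sketch.
interface TestResult {
  score: number;
  deviceType: string;       // e.g., "laptop", "phone"
  browser: string;          // e.g., "Chrome", "Firefox"
  processorVendor: string;  // e.g., "Intel", "AMD"
  source: string;           // e.g., "in-house", "user-submitted"
}

// Filter to one device type and browser, then sort scores lowest to highest,
// matching the viewer's default low-to-high ordering.
function filterAndSort(
  results: TestResult[],
  deviceType: string,
  browser: string
): TestResult[] {
  return results
    .filter((r) => r.deviceType === deviceType && r.browser === browser)
    .sort((a, b) => a.score - b.score);
}
```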
Each vertical bar in the graph represents the overall score of a single test result, and the graph presents the scores in order from lowest to highest. To view an individual result in detail, the user simply hovers over and selects the bar representing that result. The bar turns dark blue, and the dark blue banner at the bottom of the viewer displays details about that result.
In the example above, the banner shows the overall score (250) and the score’s percentile rank (85th) among the scores in the current display. In the final version of the viewer, the banner will also display the device name of the test system, along with basic hardware disclosure information. Selecting the Run details button will let users see more about the run’s individual workload scores.
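As an aside for readers curious about the percentile figure, one common way to compute a score’s percentile rank among the displayed scores appears in the small TypeScript sketch below. We can’t confirm this matches the viewer’s exact method, so treat it as one reasonable definition rather than the tool’s implementation.

```typescript
// One common definition of percentile rank: the percentage of scores
// in the current display that fall at or below the selected score.
function percentileRank(scores: number[], selected: number): number {
  const atOrBelow = scores.filter((s) => s <= selected).length;
  return Math.round((atOrBelow / scores.length) * 100);
}

// With ~650 displayed scores, a score of 250 ranking in the 85th percentile
// would mean roughly 85% of the displayed scores are at or below 250.
```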
We’re still working on a way for users to pin or save specific runs. This would let users easily find the results that interest them, or possibly select multiple runs for a side-by-side comparison.
We’re excited about this new tool, and we look forward to sharing more details here in the blog as we get closer to taking it live. If you have any questions or comments about the results viewer, please feel free to contact us!