
Tag Archives: WebXPRT 4

Accessing the WebXPRT 4 source code

If you’re new to the XPRTs, you may not be aware that we provide free access to XPRT benchmark source code. Publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. By allowing interested parties to access and review our source code, we’re encouraging openness and honesty in the benchmarking industry. We’re also inviting constructive feedback that can help ensure that the XPRTs continue to improve and contribute to a level playing field for all the types of products they measure.

While we do offer free access to the XPRT source code, we’ve decided to offer the code upon request instead of using a permanent download link. This approach prevents bots or other malicious actors from downloading the code. It also has the benefit of allowing us to interact with users who are interested in the source code and answer any questions they may have. We’re always keen to learn more about what others are thinking about the XPRTs and the types of work they measure.

We recently received some questions about accessing the WebXPRT 4 source code, which made us realize that we needed to provide a clearer way for people to request the code. In response, we added a “Request WebXPRT 4 source code” link to the gray Helpful Info box on WebXPRT.com. Clicking the link lets you email the BenchmarkXPRT Support team directly to request the code.

After we receive your request, we’ll send you a secure link to the current WebXPRT 4 build package. For those users who wish to set up a local instance of WebXPRT 4 for their own internal testbeds, the package will contain all the necessary files and installation instructions. We allow folks to set up their own instances for purposes of review, internal testing, or experimentation, but we ask that users publish only test results from the official WebXPRT 4 site.
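
If you want a quick way to confirm that an unpacked build package serves correctly before you follow the included installation instructions, any static web server will do. Below is a minimal sketch using Python’s built-in http.server module; the webxprt4 folder name is an assumption for illustration only, and the instructions that ship with the package take precedence over this example.

  # Minimal sketch: serve an unpacked WebXPRT 4 build package locally for a
  # quick smoke test. The "webxprt4" folder name is an assumption; the
  # installation instructions included in the package take precedence.
  import http.server
  import socketserver

  PORT = 8080
  DIRECTORY = "webxprt4"  # assumed location of the unpacked build package

  class Handler(http.server.SimpleHTTPRequestHandler):
      def __init__(self, *args, **kwargs):
          super().__init__(*args, directory=DIRECTORY, **kwargs)

  with socketserver.TCPServer(("", PORT), Handler) as httpd:
      print(f"Serving a local WebXPRT 4 instance at http://localhost:{PORT}/")
      httpd.serve_forever()

Once the server is running, you can point a browser on your testbed at that address. Remember, though, that we ask users to publish only test results from the official WebXPRT 4 site.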

While we offer free access to XPRT source code, our approach to derivative works differs from some traditional open-source models that encourage developers to change products and even take them in different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

If you have any questions about accessing the WebXPRT 4 source code, let us know!

Justin

WebXPRT in PT reports

We don’t just make WebXPRT—we use it, too. If you normally come straight to BenchmarkXPRT.com or WebXPRT.com, you may not even realize that Principled Technologies (PT) does a lot more than just manage and administer the BenchmarkXPRT Development Community. We’re also the tech world’s leading provider of hands-on testing and related fact-based marketing services. As part of that work, we’re frequent WebXPRT users.

We use the benchmark when we test devices such as Chromebooks, desktops, mobile workstations, and consumer laptops for our clients. (You can see a lot of that work and many of our clients on our public marketing portfolio page.) We run the benchmark for the same reasons that others do—it’s a reliable and easy-to-use tool for measuring how well devices handle web browsing and other web work.

We also sometimes use WebXPRT simply because our clients request it. They request it for the same reason the rest of us like and use it: it’s a great tool. Regardless of job titles and descriptions, most laptop and tablet users surf the web and access web-based applications every day. Because WebXPRT is a browser benchmark, a device that earns higher scores is likely to provide a better online experience.

Here are just a few of the recent PT reports that used WebXPRT:

  • In a project for Dell, we compared the performance of a Dell Latitude 7340 Ultralight to that of a 13-inch Apple MacBook Air (2022).
  • In a study for HP, we compared the performance of an HP ZBook Firefly G10, an HP ZBook Power G10, and an HP ZBook Fury G10.
  • Finally, in a set of comparisons for Lenovo, we evaluated the system performance and end-user experience of eight Lenovo ThinkBook, ThinkCentre, and ThinkPad systems along with their Apple counterparts.

All these projects, and many more, show how a variety of companies rely on PT—and on WebXPRT—to help buyers make informed decisions.

P.S. If we publish scores from a client-commissioned study in the WebXPRT 4 results viewer, we will list the source as “PT”, because we did the testing.

By Mark L. Van Name and Justin Greene

WebXPRT benchmarking tips from the XPRT lab

Occasionally, we receive inquiries from XPRT users asking for help determining why two systems with the same hardware configuration are producing significantly different WebXPRT scores. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behaviors and environments. While some degree of variability is normal, these types of questions provide us with an opportunity to talk about some of the basic benchmarking practices we follow in the XPRT lab to produce the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. Most of them relate to evaluating browser performance with WebXPRT, but several of these practices apply to other benchmarks as well.

  • Hardware is not the only important factor: Most people know that different browsers produce different performance scores on the same system. Testers are not, however, always aware of shifts in performance between different versions of the same browser. While most updates don’t have a large impact on performance, a few updates have increased (or even decreased) browser performance by a significant amount. For this reason, it’s always important to record and disclose the extended browser version number for each test run. The same principle applies to any other relevant software.
  • Keep a thorough record of system information: We record detailed information about a test system’s key hardware and software components, including full model and version numbers. This information is not only important for later disclosure if we choose to publish a result, but it can also help to pinpoint system differences that explain why two seemingly identical devices are producing very different scores. We also want people to be able to reproduce our results as closely as possible, so we record and disclose more detail than you’ll find in some tech articles and product reviews.
  • Test with clean images: We typically use an out-of-box (OOB) method for testing new devices in the XPRT lab. OOB testing means that, apart from running the initial OS and browser updates that users are likely to apply after first turning on a device, we change as little as possible before testing. This gives us an accurate picture of the performance retail buyers will see when they first purchase the device, before they install additional software. That said, the OOB method is not appropriate for certain types of testing, such as when you want to compare system images that are as close to identical as possible, or when you want to remove as much pre-loaded software as possible.
  • Turn off automatic updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always double-check update settings before testing.
  • Get a baseline for system processes: Depending on the system and the OS, a significant amount of system-level activity can go on in the background after you first boot a device. As much as possible, we like to wait for a stable baseline (idle time) of system activity before kicking off a test (see the idle-check sketch after this list). If we start testing immediately after booting the system, we often see higher variance in the first run before the scores start to tighten up.
  • Use more than one data point: Because of natural variance, our standard practice in the XPRT lab is to publish a score that represents the median of three to five runs, if not more (see the logging sketch after this list). If you run a benchmark only once and the score differs significantly from other published scores, your result could be an outlier that you would not see again under stable testing conditions or over the course of multiple runs.
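
Here is a quick idle-check sketch showing one way to wait for that stable baseline before starting a run. It uses the third-party psutil package, and the 5 percent threshold and 30-second quiet window are arbitrary assumptions, not XPRT lab requirements; adjust them for your own environment.

  # Minimal sketch: wait for background CPU activity to settle before starting
  # a run. Requires the third-party psutil package; the threshold and quiet
  # window are arbitrary assumptions.
  import time
  import psutil

  THRESHOLD = 5.0      # percent CPU usage we treat as "idle"
  QUIET_SECONDS = 30   # how long usage must stay below the threshold

  quiet_start = None
  while True:
      usage = psutil.cpu_percent(interval=1)
      if usage < THRESHOLD:
          quiet_start = quiet_start or time.time()
          if time.time() - quiet_start >= QUIET_SECONDS:
              print("Background activity looks stable; start the benchmark run.")
              break
      else:
          quiet_start = None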
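
Here is a simple logging sketch that ties together two of the points above: recording the extended browser version and reporting the median of several runs. The browser string, scores, and file name are hypothetical; WebXPRT reports its scores in the browser, so you would capture them by hand or with your own tooling.

  # Minimal sketch: log the extended browser version alongside each run's
  # score, then report the median. All values below are hypothetical.
  import json
  import statistics

  runs = {
      "browser": "Example Browser 121.0.6167.85",  # extended version string (hypothetical)
      "scores": [251, 247, 255],                   # individual WebXPRT 4 run scores (hypothetical)
  }

  median_score = statistics.median(runs["scores"])
  print(f"{runs['browser']}: median of {len(runs['scores'])} runs = {median_score}")

  # Keep the raw data for later disclosure or troubleshooting.
  with open("webxprt4_results.json", "w") as f:
      json.dump({**runs, "median": median_score}, f, indent=2)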


We hope these tips will help make your testing more accurate. If you have any questions about WebXPRT, the other XPRTs, or benchmarking in general, feel free to ask!

Justin

Comparing the WebXPRT 4 performance of five popular browsers

Every so often, we like to refresh a series of in-house WebXPRT comparison tests to see if recent updates have changed the performance rankings of popular web browsers. We published our most recent comparison last February, when we used WebXPRT 4 to compare the performance of five browsers on the same system.

For this round of tests, we used the same Dell XPS 13 7390 laptop as last time, which features an Intel Core i3-10110U processor and 4 GB of RAM, running Windows 11 Home updated to version 23H2 (22631.307). We installed all current Windows updates, and updated each of the browsers under test: Brave, Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera.

After the update process completed, we turned off updates to prevent them from interfering with test runs. We ran WebXPRT 4 three times on each of the five browsers. The score we post for each browser is the median of the three test runs.

In our last round of tests, the range between high and low scores was tight, with an overall difference of only 4.3 percent. Edge squeaked out a win, with a 2.1 percent performance advantage over Chrome. Firefox came in last, but was only one overall score point behind the tied score of Brave and Opera.

In this round of testing, the rank order did not change, but we saw more differentiation in the range of scores. While the performance of each browser improved, the range between high and low scores widened to a 19.1 percent difference between first-place Edge and last-place Firefox. The scores of the four Chromium-based browsers (Brave, Opera, Chrome, and Edge) all improved by at least 21 points, while the Firefox score only improved by one point. 
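
For reference, we express that kind of gap as the difference between the highest and lowest scores divided by the lowest score. The scores in the quick sketch below are hypothetical, chosen only to illustrate the math; they are not our actual results.

  # Illustrative only: how we express the gap between the highest- and
  # lowest-scoring browsers as a percentage. These scores are hypothetical.
  high_score = 268   # hypothetical first-place score
  low_score = 225    # hypothetical last-place score

  gap_percent = (high_score - low_score) / low_score * 100
  print(f"Gap between first and last place: {gap_percent:.1f}%")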

Do these results mean that Microsoft Edge will always provide a faster web experience, or Firefox will always be slower than the others? Not necessarily. It’s true that a device with a higher WebXPRT score will probably feel faster during daily web activities than one with a much lower score, but your experience depends in part on the types of things you do on the web, along with your system’s privacy settings, memory load, ecosystem integration, extension activity, and web app capabilities.

In addition, browser speed can noticeably increase or decrease after an update, and OS-specific optimizations can affect performance, such as with Edge on Windows 11 and Chrome on Chrome OS. All these variables are important to keep in mind when considering how WebXPRT results may translate to your everyday experience.

Have you used WebXPRT 4 to compare browser performance on the same system? Let us know how it turned out!

Justin

Looking back on 2023 with the XPRTs

Around the beginning of each new year, we like to take the opportunity to look back and summarize the XPRT highlights from the previous year. Readers of our newsletter are familiar with the stats and updates we include each month, but for our blog readers who don’t receive the newsletter, we’ve compiled highlights from 2023 below.

Benchmarks
In March, we celebrated the 10-year anniversary of WebXPRT! WebXPRT 4 has now taken the lead as the most commonly used version of WebXPRT, even as the overall number of runs has continued to grow.

XPRTs in the media
Journalists, advertisers, and analysts referenced the XPRTs thousands of times in 2023. It’s always rewarding to know that the XPRTs have proven to be useful and reliable assessment tools for technology publications around the world. Media sites that used the XPRTs in 2023 include 3DNews (Russia), AnandTech, Benchlife.info (China), CHIP.pl (Poland), ComputerBase (Germany), eTeknix, Expert Reviews, Gadgetrip (Japan), Gadgets 360, Gizmodo, Hardware.info, IT168.com (China), ITC.ua (Ukraine), ITWorld (Korea), iXBT.com (Russia), Lyd & Bilde (Norway), Notebookcheck, Onchrome (Germany), PCMag, PCWorld, QQ.com (China), Tech Advisor, TechPowerUp, TechRadar, Tom’s Guide, TweakTown, Yesky.com (China), and ZDNet.

Downloads and confirmed runs
In 2023, we had more than 16,800 benchmark downloads and 296,800 confirmed runs. Users have run our most popular benchmark, WebXPRT, more than 1,376,500 times since its debut in 2013! WebXPRT continues to be a go-to, industry-standard performance benchmark for OEM labs, vendors, and leading tech press outlets around the globe.

Trade shows
In January, Justin attended the 2023 Consumer Electronics Show (CES) in Las Vegas. In March, Mark attended Mobile World Congress (MWC) 2023 in Barcelona. You can view Justin’s recap of CES here and Mark’s thoughts from MWC here.

We’re thankful for everyone who used the XPRTs and sent questions and suggestions throughout 2023. We’re excited to see what’s in store for the XPRTs in 2024!

Justin

Local AI and new frontiers for performance evaluation

Recently, we discussed some ways the PC market may evolve in 2024, and how new Windows on Arm PCs could present the XPRTs with many opportunities for benchmarking. In addition to a potential market shakeup from Arm-based PCs in the coming years, there’s a much broader emerging trend that could eventually revolutionize almost everything about the way we interact with our personal devices—the development of local, dedicated AI processing units for consumer-oriented tech.

AI already impacts daily life for many consumers through technologies such as predictive text, computer vision, adaptive workflow apps, voice recognition, smart assistants, and much more. Generative AI-based technologies are rapidly establishing a permanent, society-altering presence across a wide range of industries. Aside from some localized inference tasks that the CPU and/or GPU typically handle, the bulk of the heavy compute power that fuels those technologies has been in the cloud or in on-prem servers. Now, several major chipmakers are working to roll out their own versions of AI-optimized neural processing units (NPUs) that will enable local devices to take on a larger share of the AI load.

Examples of dedicated AI hardware in recently released or upcoming consumer devices include Intel’s new Meteor Lake NPU, Apple’s Neural Engine for M-series SoCs, Qualcomm’s Hexagon NPU, and AMD’s XDNA 2 architecture. The potential benefits of localized, NPU-facilitated AI are straightforward. On-device AI could reduce power consumption and extend battery life by offloading those tasks from the CPU and GPU. It could alleviate certain cloud-related privacy and security concerns. Without the delays inherent in cloud queries, localized AI could execute inference tasks that operate much closer to real time. NPU-powered devices could fine-tune applications around your habits and preferences, even while offline. You could pull and utilize relevant data from cloud-based datasets without pushing private data in return. Theoretically, your device could know a great deal about you and enhance many areas of your daily life without passing all that data to another party.

Will localized AI play out that way? Some tech companies envision a role for on-device AI that enhances the abilities of existing cloud-based subscription services without decoupling personal data. We’ll likely see a wide variety of capabilities and services on offer, with application-specific and SaaS-determined privacy options.

Regardless of the way on-device AI technology evolves in the coming years, it presents an exciting new frontier for benchmarking. Not all NPUs will be created equal, and that’s something buyers will need to understand. Some vendors will optimize their hardware more for computer vision, or large language models, or AI-based graphics rendering, and so on. It won’t be enough for businesses and consumers to simply know that a new system has dedicated AI processing abilities. They’ll need to know whether that system performs well while handling the types of AI-related tasks that they do every day.

Here at the XPRTs, we specialize in creating benchmarks built around real-world scenarios that mirror the tasks people do in their daily lives. That approach means that when people use XPRT scores to compare device performance, they’re using a metric that can help them make a buying decision that will benefit them every day. We look forward to exploring ways that we can bring XPRT benchmarking expertise to the world of on-device AI.

Do you have ideas for future localized AI workloads? Let us know!

Justin
