Category: Browser-based benchmarks

Shopping for back-to-school tech? The XPRTs can help!

For many students, the first day of school is just around the corner, and it’s now time to shop for new tech devices that can help set them up for success in the coming year. The tech marketplace can be confusing, however, with so many brands, options, and competing claims to sort through.

Fortunately, the XPRTs are here to help!

Whether you’re shopping for a new phone, tablet, Chromebook, laptop, or desktop, the XPRTs can provide industry-trusted performance scores that can give you confidence that you’re making a smart purchasing decision.

The WebXPRT 4 results viewer is a good place to start looking for device scores. The viewer displays WebXPRT 4 scores from over 700 devices—including many of the latest releases—and we’re adding new scores all the time. To learn more about the viewer’s capabilities and how you can use it to compare devices, check out this blog post.

Another resource we offer is the XPRT results browser. The browser is the most efficient way to access the XPRT results database, which currently holds more than 3,700 test results from over 150 sources, including major tech review publications around the world, manufacturers, and independent testers. It offers a wealth of current and historical performance data across all the XPRT benchmarks and hundreds of devices. You can read more about how to use the results browser here.

Also, if you’re considering a popular device, there’s a good chance that a recent tech review includes an XPRT score for that device. There are two quick ways to find these reviews: You can either (1) search for “XPRT” on your preferred tech review site or (2) use a search engine and input the device name and XPRT name, such as “Dell XPS” and “WebXPRT.”

Here are a few recent tech reviews that use one of the XPRTs to evaluate a popular device:

Lastly, here at Principled Technologies, we frequently publish reports that evaluate the performance of hot new consumer devices, and many of those reports include WebXPRT scores. For example, check out our extensive testing of HP ZBook G10 mobile workstations or our detailed comparison of Lenovo ThinkPad, ThinkBook, and ThinkCentre devices to their Apple Mac counterparts.

The XPRTs can help anyone stuck in the back-to-school shopping blues make better-informed and more confident tech purchases. As this new school year begins, we hope you’ll find the data you need on our site or in an XPRT-related tech review. If you have any questions about the XPRTs, XPRT scores, or the results database, please feel free to ask!

Justin

Putting together a good WebXPRT workload proposal

Recently, we announced that we’re moving forward with the development of a new AI-focused WebXPRT 4 workload. It will be an auxiliary workload, which means that it will run as a separate, optional test, and it won’t affect existing WebXPRT 4 tests or scores. Although the inspiration for this new workload came from internal WebXPRT discussions—and, let’s face it, from the huge increase in importance of AI—we wanted to remind you that we’re always open to hearing your WebXPRT workload ideas. If you’d like to submit proposals for new workloads, you don’t have to follow a formal process. Just contact us, and we’ll start the conversation.

If you do decide to send us a workload proposal, it will be helpful to know the types of parameters that we keep in mind. Below, we discuss some of the key questions we ask when we evaluate new WebXPRT workload ideas.

Will it be relevant and interesting to real users, lab testers, and tech reviewers?

When considering a WebXPRT workload proposal, the first two criteria are simple: is it relevant in real life, and are people interested in the workload? We created WebXPRT to evaluate device performance using web-based tasks that consumers are likely to experience daily, so real-life relevance has always been an essential requirement for us throughout development. There are many technologies, functions, and use cases that we could test in a web environment, but only some are relevant to common applications or usage patterns and are likely to draw the interest of real users, lab testers, and technical reviewers.

Will it have cross-platform support?

Currently, WebXPRT runs on almost any web browser and almost every device that supports a web browser. We would like to keep that level of cross-platform support when we introduce new workloads. However, technical differences in how various browsers execute tasks make it challenging to include certain scenarios without undermining our cross-platform ideal. When considering any workload proposal, one of the first questions we ask is, “Will it work on all the major browsers and operating systems?”

There are special exceptions to this guideline. For instance, we’re still in the early days of browser-based AI, and it’s unlikely that a new browser-based AI workload will run on every major browser. If it’s a particularly compelling idea, such as the AI scenario we’re currently working on, we may consider including it as an auxiliary test.
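To give a concrete sense of what that looks like in practice, here’s a rough TypeScript sketch of the kind of capability check an auxiliary AI workload might perform before it runs. The function and structure are hypothetical and not part of WebXPRT; only the underlying navigator.gpu (WebGPU) and navigator.ml (WebNN) checks correspond to real, though unevenly supported, browser APIs.

```typescript
// Hypothetical capability check for an auxiliary, browser-based AI workload.
// Support for these APIs varies by browser and OS, which is why such a
// workload may not run everywhere.
type AiSupport = { webgpu: boolean; webnn: boolean };

function detectAiSupport(): AiSupport {
  return {
    webgpu: typeof navigator !== 'undefined' && 'gpu' in navigator, // WebGPU
    webnn: typeof navigator !== 'undefined' && 'ml' in navigator,   // WebNN
  };
}

const support = detectAiSupport();
if (!support.webgpu && !support.webnn) {
  console.log('Skipping the auxiliary AI workload: no supported backend found.');
}
```

A check like this lets the main WebXPRT 4 tests run unchanged on every platform while the optional workload simply skips itself where the browser can’t support it.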

Will it differentiate performance between different types of devices?

XPRT benchmarks provide users with accurate measures for evaluating how well target systems or technologies perform specific tasks. With a broadly targeted benchmark like WebXPRT, if the workloads are so heavy that most devices can’t handle them, or so light that most devices complete them without being taxed, the results will be of little use to buyers evaluating systems and making purchasing decisions, to OEM labs, and to the tech press.

That’s why, with any new WebXPRT workload, we look for a sweet spot with respect to how computationally demanding it will be. We want it to run on a wide range of devices—from low-end devices that are several years old to brand-new high-end devices, and everything in between. We also want users to see a wide range of workload scores and resulting overall scores that accurately reflect the experiences those systems deliver, so they can easily grasp the different performance capabilities of the devices under test.

Will results be consistent and easily replicated?

Finally, WebXPRT workloads should produce scores that consistently fall within an acceptable margin of error and are easy to replicate in additional test runs or on comparable gear. Some web technologies are very sensitive to uncontrollable or unpredictable variables, such as internet speed. A workload that measures one of those technologies would be unlikely to produce results that are consistent and easily replicated.
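To illustrate what we mean by consistency, here’s a minimal TypeScript sketch of the kind of check a tester might apply to repeated scores from the same system. The function name, the 3 percent threshold, and the sample scores are all illustrative assumptions, not a WebXPRT requirement.

```typescript
// Hypothetical consistency check: do repeated scores from the same system
// stay within an acceptable run-to-run margin? The 3 percent threshold is
// illustrative only.
function isConsistent(scores: number[], maxVariationPct = 3): boolean {
  const mean = scores.reduce((sum, s) => sum + s, 0) / scores.length;
  const worstDeviationPct = Math.max(
    ...scores.map((s) => (Math.abs(s - mean) / mean) * 100)
  );
  return worstDeviationPct <= maxVariationPct;
}

console.log(isConsistent([251, 248, 253])); // true: roughly a 1 percent spread
console.log(isConsistent([251, 212, 265])); // false: too much run-to-run variance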

We hope this post will be useful if you’re thinking about potential new workloads that you’d like to see in WebXPRT. If you have any general thoughts about browser performance testing or specific workload ideas that you’d like us to consider, please let us know.

Justin

Contribute to WebXPRT’s AI capabilities with your NPU-equipped gear

A few weeks ago, we announced that we’re developing a new auxiliary WebXPRT 4 workload focused on local, browser-based AI technology. This is an exciting project for us, and as we work to determine the best approach from the perspective of frameworks, APIs, inference models, and test scenarios, we’re also thinking ahead to the testing process. To best understand how the new workload will impact system performance, we’re going to need to test it on hardware equipped with the latest generation of neural processing units (NPUs).

NPUs are not new, but the technology is advancing rapidly, and a growing number of PC and laptop manufacturers are releasing NPU-equipped systems. Several vendors have announced plans to release systems equipped with all-new NPUs in the latter half of this year. As is often the case with bleeding-edge technology, however, official release dates do not always coincide with widespread availability.

We want to evaluate new AI-focused WebXPRT workloads on the widest possible range of new systems, but getting a wide selection of gear equipped with the latest NPUs may take quite a while through normal channels. For that reason, we’ve decided to ask our readers for help to expedite the process.

If you’re an OEM or vendor representative with access to the latest generation of NPU-equipped gear and want to contribute to WebXPRT’s evolution, consider sending us any PCs, white boxes, laptops, 2-in-1s, or tablets (on loan) that would be suitable for NPU-focused testing. We have decades of experience serving as trusted testers of confidential and pre-release gear, so we’re well-acquainted with concerns about confidentiality that may come into play, and we won’t publish any information about the systems or related test results without your permission.

We will, though, be happy to share with you our test results on your systems, and we’d love to hear any guidance or other feedback from you on this new workload.

We’re open to any suitable gear, but we’re especially interested in AMD Ryzen AI, Apple M4, Intel Lunar Lake and Arrow Lake, and Qualcomm Snapdragon X Elite systems.

If you’re interested in sending us gear for WebXPRT development testing, please contact us. We’ll work out all the necessary details. Thanks in advance for your help!

Justin

Updating our WebXPRT 4 browser performance comparisons with new gear

Once or twice per year, we refresh an ongoing series of WebXPRT comparison tests to see if recent updates have reordered the performance rankings of popular web browsers. We published our most recent comparison in January, when we used WebXPRT 4 to compare the performance of five browsers on the same system.

This time, we’re publishing an updated set of comparison scores sooner than we normally would because we chose to move our testing to a newer reference laptop. The previous system—a Dell XPS 13 7390 with an Intel Core i3-10110U processor and 4 GB of RAM—is now several years old, and we wanted to transition to a system that is more in line with current mid-range laptops. By testing on a capable mid-tier laptop, we’re more likely to produce comparison scores that fall within the range a typical user would see today.

Our new reference system is a Lenovo ThinkPad T14s Gen 3 with an Intel Core i7-1270P processor and 16 GB of RAM. It’s running Windows 11 Home, updated to version 23H2 (22631.3527). Before testing, we installed all current Windows updates on a clean system image. After the update process was complete, we turned off updates to prevent them from interfering with test runs. We ran WebXPRT 4 three times on each of five browsers: Brave, Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. In Figure 1 below, each browser’s score is the median of its three test runs.
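For readers who want to summarize their own runs the same way, here’s a small TypeScript sketch of the median calculation we describe above. The browser names and run values are placeholders, not our actual data.

```typescript
// Report each browser's score as the median of its individual runs.
function median(runs: number[]): number {
  const sorted = [...runs].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 !== 0
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2;
}

const runsByBrowser: Record<string, number[]> = {
  Brave: [240, 244, 242],  // placeholder run scores
  Chrome: [249, 251, 250], // placeholder run scores
};

for (const [browser, runs] of Object.entries(runsByBrowser)) {
  console.log(`${browser}: WebXPRT 4 score ${median(runs)}`);
}
```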

In our last round of tests—on the Dell XPS 13—the four Chromium-based browsers (Brave, Chrome, Edge, and Opera) produced close scores, with Edge taking a small lead among the four. Each of the Chromium browsers significantly outperformed Firefox, with the slowest of the Chromium browsers (Brave) outperforming Firefox by 13.5 percent.

In this round of tests—on the Lenovo ThinkPad T14s—the scores were very tight, with a difference of only 4 percent between the last-place browser (Brave) and the winner (Chrome). Interestingly, Firefox no longer trailed the four Chromium browsers—it was squarely in the middle of the pack.

Figure 1: The median scores from running WebXPRT 4 three times with each browser on the Lenovo ThinkPad T14s.

Unlike previous rounds, which showed a higher degree of performance differentiation between the browsers, the scores from this round of tests are close enough that most users wouldn’t notice a difference. Even if the difference between the highest and lowest scores were more substantial, the quality of your browsing experience would often depend on other factors, such as the types of things you do on the web (e.g., gaming, media consumption, or multi-tab browsing), the impact of extensions on performance, and how frequently the browsers issue updates and integrate new technologies. It’s important to keep such variables in mind when thinking about how browser performance comparison results may translate to your everyday web experience.

Have you tried using WebXPRT 4 to test the speed of different browsers on the same system? If so, we’d love for you to tell us about it! Also, please tell us what other WebXPRT data you’d like to see!

Justin

Want to see your WebXPRT 4 results on WebXPRT.com? Here’s how to submit them for review

In a recent post, we discussed some key features that the WebXPRT 4 results viewer tool has to offer. In today’s post, we’ll cover the straightforward process of submitting your WebXPRT 4 test results for possible publication in the viewer.

Unlike sites that publish all submissions, we publish only results that meet our evaluation criteria. Those results can come from OEM labs, third-party labs, reliable tech media sources, or independent user submissions. What’s important to us is that the scores are consistent with general expectations and, for sources outside of our labs and data centers, that they include enough detailed system information for us to determine whether the score makes sense. That said, if your scores are a little different from what you see in our database, please don’t hesitate to send them to us for consideration. It costs you nothing.

The actual result submission process is quick and easy. At the end of the WebXPRT test run, click the Submit your results button below the overall score, complete the short submission form, and click Submit again. Please be as specific as possible when filling in the system information fields. Detailed device information helps us assess whether individual scores represent valid test runs.

Figure 1 below shows how the form would look if I submitted a score at the end of a recent WebXPRT 4 run on one of the test systems here in our lab.

Figure 1: A screenshot of the WebXPRT 4 end-of-test results submission screen.

After you submit your score, we’ll contact you to confirm how we should display the source of the result in our database. You can choose one of the following:

  • Your first and last name
  • “Independent tester” (for users who wish to remain anonymous)
  • Your company’s name, if you have permission to submit the result in its name. If you want to use a company name, please provide a valid company email address that corresponds with that name.

As always, we will not publish any additional information about you or your company without your permission.

We look forward to seeing your scores! If you have questions about WebXPRT 4 testing or results submission, please let us know!

Justin

Up next for WebXPRT 4: A new AI-focused workload!

We’re always thinking about ways to improve WebXPRT. In the past, we’ve discussed the potential benefits of auxiliary workloads and the role that such workloads might play in future WebXPRT updates and versions. Today, we’re very excited to announce that we’ve decided to move forward with the development of a new WebXPRT 4 workload focused on browser-side AI technology!

WebXPRT 4 already includes timed AI tasks in two of its workloads: the Organize Album using AI workload and the Encrypt Notes and OCR Scan workload. These two workloads reflect the types of light browser-side inference tasks that have been available for a while now, but most heavy-duty inference on the web has historically happened on on-premises servers or in the cloud. Now, localized AI technology is growing by leaps and bounds, and the integration of new AI capabilities with browser-based tasks is poised to advance rapidly.

Because of this growth, we believe now is the time to start work on giving WebXPRT 4 the ability to evaluate new browser-based AI capabilities—capabilities that are likely to become a part of everyday life in the next few years. We haven’t yet decided on a test scenario or software stack for the new workload, but we’ll be working to refine our plan in the coming months. There seems to be some initial promise in emerging frameworks such as ONNX Runtime Web, which lets developers run and deploy web-based machine learning models by using JavaScript APIs and libraries. In addition, new web APIs such as WebGPU (currently supported in Edge and Chrome, and available as a technology preview in Safari) and WebNN (still in development) may soon help facilitate new browser-side AI workloads.
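To give a flavor of what browser-side inference with ONNX Runtime Web can look like, here’s a minimal TypeScript sketch. The model URL, input name, and tensor shape are hypothetical placeholders; this is not the workload we plan to ship, just an illustration of the general pattern.

```typescript
// A minimal sketch of local, browser-based inference with ONNX Runtime Web.
import * as ort from 'onnxruntime-web';

async function runLocalInference(imageData: Float32Array): Promise<ort.Tensor> {
  // Prefer the WebGPU execution provider where the browser supports it,
  // and fall back to WebAssembly elsewhere.
  const session = await ort.InferenceSession.create('/models/classifier.onnx', {
    executionProviders: ['webgpu', 'wasm'],
  });

  // The input name ('input') and shape [1, 3, 224, 224] are assumptions about
  // the model; a real model defines its own input names and dimensions.
  const input = new ort.Tensor('float32', imageData, [1, 3, 224, 224]);
  const results = await session.run({ input });

  return results[session.outputNames[0]];
}
```

Because everything here runs in the browser itself, a pattern like this is a natural fit for an auxiliary workload that exercises local AI performance without sending data to a server.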

We know that many longtime WebXPRT 4 users will have questions about how this new workload may affect their tests. We want to assure you that the workload will be an optional bonus workload and will not run by default during normal WebXPRT 4 tests. As you consider possibilities for the new workload, here are a few points to keep in mind:

  • The workload will be optional for users to run.
  • It will not affect the main WebXPRT 4 subtest or overall scores in any way.
  • It will run separately from the main test and will produce its own score(s).
  • Current and future WebXPRT 4 results will still be comparable to one another, so users who’ve already built a database of WebXPRT 4 scores will not have to retest their devices.
  • Because many of the available frameworks don’t currently run on all browsers, the workload may not run on every platform.

As we research available technologies and explore our options, we would love to hear from you. If you have ideas for an AI workload scenario that you think would be useful or thoughts on how we should implement it, please let us know! We’re excited about adding new technologies and new value to WebXPRT 4, and we look forward to sharing more information here in the blog as we make progress.

Justin
