

Using WebXPRT 3 to compare the performance of popular browsers

Microsoft recently released a new Chromium-based version of the Edge browser, and several tech press outlets have released reviews and results from head-to-head browser performance comparison tests. Because WebXPRT is a go-to benchmark for evaluating browser performance, PCMag, PCWorld, and VentureBeat, among others, used WebXPRT 3 scores as part of the evaluation criteria for their reviews.

We thought we would try a quick experiment of our own, so we grabbed a recent laptop from our Spotlight testbed: a Dell XPS 13 7390 running Windows 10 Home 1909 (18363.628) with an Intel Core i3-10110U processor and 4 GB of RAM. We tested on a clean system image after installing all current Windows updates, and after the update process completed, we turned off updates to prevent them from interfering with test runs. We ran WebXPRT 3 three times on six browsers: a new browser called Brave, Google Chrome, the legacy version of Microsoft Edge, the new version of Microsoft Edge, Mozilla Firefox, and Opera. The posted score for each browser is the median of the three test runs.
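To make the scoring approach concrete, here’s a minimal sketch of the median-of-three calculation described above; the function name and sample values are ours for illustration only and are not actual WebXPRT results:

    # Minimal sketch: the posted score for each browser is the median of its runs.
    from statistics import median

    def posted_score(run_scores):
        # run_scores: the individual WebXPRT 3 scores from repeated runs on one browser
        return median(run_scores)

    # Arbitrary placeholder scores, not real results:
    print(posted_score([101.0, 103.5, 102.3]))  # prints 102.3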

As you can see in the chart below, five of the browsers (legacy Edge, Brave, Opera, Chrome, and new Edge) produced scores that were nearly identical. Mozilla Firefox was the only browser that produced a significantly different score. The parity among Brave, Chrome, Opera, and the new Edge is not that surprising, considering they are all Chromium-based browsers. The rank order and relative scaling of these results are similar to the results published by the tech outlets mentioned above.

Do these results mean that Mozilla Firefox will provide you with a speedier web experience? Generally, a device with a higher WebXPRT score is probably going to feel faster to you during daily use than one with a lower score. For comparisons on the same system, however, the answer depends in part on the types of things you do on the web, how the extensions you’ve installed affect performance, how frequently the browsers issue updates and incorporate new web technologies, and how accurately the browsers’ default installation settings reflect how you would set up the same browsers for your daily workflow.

In addition, browser speed can increase or decrease significantly after an update, only to swing back in the other direction shortly thereafter. OS-specific optimizations can also affect performance, such as with Edge on Windows 10 and Chrome on Chrome OS. All of these variables are important to keep in mind when considering how browser performance comparison results translate to your everyday experience. In such a competitive market, and with so many variables to consider, we’re happy that WebXPRT can help consumers by providing reliable, objective results.

What are your thoughts on today’s competitive browser market? We’d love to hear from you.

Justin

A preview of the new CrXPRT 2 UI

As we get closer to the CrXPRT 2 Community Preview (CP), we want to provide readers with a glimpse of the new CrXPRT 2 UI. In line with the functional and aesthetic themes we used for the latest versions of WebXPRT, MobileXPRT, and HDXPRT, we’re implementing a clean, bright look with a focus on intuitive navigation. The screenshots below show how we’ve used that approach to rework the home, battery life test, performance test, and battery life test results screens. (We’re still tweaking the UI, so the screens you see in the CP may differ slightly.)

On the home screen, we kept the performance test and battery life test buttons, but made it clearer that you can choose only one. We also added a link to the user manual to the bottom ribbon for quick access.

If you choose to run a battery life test and click Next, the screen below appears. The CrXPRT 2 battery life test requires a full rundown, so you’ll need to charge your device to 100 percent before you can start the test. Once you’ve done that, enter a name for the test run, unplug the system, and click Start. (Note that you no longer need to enter values for screen brightness and audio levels.)

The CrXPRT 2 performance test includes updated versions of six of the seven workloads in CrXPRT 2015. (As we discussed in a previous blog post, newer versions of Chrome can’t run the Photo Collage workload without a workaround, so we removed it from CrXPRT 2.) To run the performance test, enter a name for the test run, customize the workloads if you wish, and click Start.

For the results screens, we wanted to highlight the most important end-of-test information while still offering clear paths for options such as getting additional details on the test, submitting results, and running the test again. Below, we show the results screen from a battery life test. Note the “Main menu” link in the upper-left corner, which we added to all screens to give users a quick way to navigate back to the home screen.

CrXPRT 2 development and testing are still underway. We don’t yet have an exact release date for the CP, but once we do, we’ll announce it here in the blog.

What do you think about the new CrXPRT 2 UI? Let us know!

Justin

The XPRT activity we have planned for the first half of 2020

Today, we want to let readers know what to expect from the XPRTs over the next several months. Timelines and details can always change, but we’re confident that the first half of 2020 will bring a CloudXPRT Community Preview (CP), an updated AIXPRT, and a CrXPRT 2 release.

CloudXPRT

Last week, Bill shared some details about our new datacenter-oriented benchmark, CloudXPRT. If you missed that post, we encourage you to check it out and learn more about the need for a new kind of cloud benchmark, and our plans for the benchmark’s structure and metrics. We’re already testing preliminary builds, and aim to release a CloudXPRT CP in late March, followed by a version for general availability roughly two months later.

AIXPRT

About a month ago, we explained how the number of moving parts in AIXPRT will necessitate a different development approach than we’ve used for other XPRTs. AIXPRT will require more frequent updating than our other benchmarks, and we anticipate releasing the second version of AIXPRT by mid-year. We’re still finalizing the details, but it’s likely to include the latest versions of ResNet-50 and SSD-MobileNet, selected SDK updates, ease-of-use improvements for the harness, and improved installation scripts. We’ll share more detailed information about the release timeline here in the blog as soon as possible.

CrXPRT 2

As we mentioned in December, we’re working on CrXPRT 2, the next version of our benchmark that evaluates the performance and battery life of Chromebooks. You can find out more about how CrXPRT works both here in the blog and at CrXPRT.com.

We’re currently testing an alpha version of CrXPRT 2. Testing is going well, but we’re tweaking a few items and refining the new UI. We should start testing a CP candidate in the next few weeks, and will have firmer information for community members about a CP release date very soon.

We’re excited about these new developments and the prospect of extending the XPRTs into new areas. If you have any questions about CloudXPRT, AIXPRT, or CrXPRT 2, please feel free to ask!

Justin

CloudXPRT is on the way

A few months ago, we wrote about the possibility of creating a datacenter XPRT. In the intervening time, we’ve discussed the idea with folks both in and outside of the XPRT Community. We’ve heard from vendors of datacenter products, hosting/cloud providers, and IT professionals who use those products and services.

The common thread that emerged was the need for a cloud benchmark that can accurately measure the performance of modern, cloud-first applications deployed on modern infrastructure as a service (IaaS) platforms, whether those platforms are on-premises, hosted elsewhere, or some combination of the two (hybrid clouds). Regardless of where clouds reside, applications are increasingly using them in latency-critical, highly available, and high-compute scenarios.

Existing datacenter benchmarks do not give a clear indication of how applications will perform on a given IaaS infrastructure, so the benchmark should use cloud-native components on the actual stacks used for on-prem and public cloud management.

We are planning to call the benchmark CloudXPRT. Our goal is for CloudXPRT to address the needs described above while also including the elements that have made the other XPRTs successful. We plan for CloudXPRT to

  • Be relevant to on-prem (datacenter), private, and public cloud deployments
  • Run on top of cloud platform software such as Kubernetes
  • Include multiple workloads that address common scenarios like web applications, AI, and media analytics
  • Support multi-tier workloads
  • Report relevant metrics, including both critical latency for responsiveness-driven applications and maximum throughput for applications dependent on batch processing

CloudXPRT’s workloads will use cloud-native components on an actual stack to provide end-to-end performance metrics that allow users to choose the best IaaS configuration for their business.
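To illustrate the difference between those two kinds of metrics, here’s a rough Python sketch; the function, the sample data, and the choice of the 95th percentile are hypothetical examples of ours, not CloudXPRT code:

    # Illustrative sketch: throughput and tail latency from per-request timings.
    from statistics import quantiles

    def summarize(latencies_s, wall_time_s):
        # latencies_s: per-request latencies in seconds (hypothetical data)
        # wall_time_s: total elapsed wall-clock time for the run
        throughput = len(latencies_s) / wall_time_s      # requests per second
        p95 = quantiles(latencies_s, n=100)[94]          # 95th-percentile latency
        return throughput, p95

    # Example with made-up numbers:
    tput, p95 = summarize([0.020, 0.025, 0.031, 0.022, 0.040] * 200, wall_time_s=10.0)
    print(f"throughput: {tput:.1f} req/s, p95 latency: {p95 * 1000:.1f} ms")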

We’ve been building and testing preliminary versions of CloudXPRT for the last few months. Based on the progress so far, we are shooting to have a Community Preview of CloudXPRT ready in mid- to late March, with a version for general availability ready about two months later.

Over the coming weeks, we’ll be working on getting out more information about CloudXPRT and continuing to talk with interested parties about how they can help. We’d love to hear what workflows would be of most interest to you and what you would most like to see in a datacenter/cloud benchmark. Please feel free to contact us!

Bill

The XPRTs in 2019: Looking back on an exciting and productive year

2019 is winding down, and we want to take this opportunity to review another exciting and productive year for the BenchmarkXPRT Development Community. Readers of our newsletter are familiar with the stats and updates we post in each month’s mailing, but we know that not all our blog readers receive the newsletter, so we’ve compiled the highlights below.

Trade shows
Earlier this year, Justin attended CES in Las Vegas and Mark traveled to MWC Barcelona. These shows help us keep up with the latest industry trends and gather insights that help lay the groundwork for XPRT development in the years ahead.

Benchmarks
In the past year, we released MobileXPRT 3, HDXPRT 4, and AIXPRT, our new AI benchmark tool that helps you evaluate a system’s machine learning inference performance. There’s much more to come in 2020 with AIXPRT and several other projects, so expect more news about benchmark development early in the year.

Web mentions
In 2019 so far, journalists, advertisers, and analysts have referenced the XPRTs over 5,000 times, including mentions in more than 190 articles and 1,350 device reviews. This represents a more than 50% increase over 2018.

Downloads and confirmed runs
To date, we’ve had more than 24,800 benchmark downloads and 153,000 confirmed runs in 2019, increases of more than 8% and 10%, respectively, over 2018. Within the last month, our most popular benchmark, WebXPRT, passed the 500,000-run milestone! WebXPRT continues to be an industry-standard performance benchmark upon which OEM labs, vendors, and leading tech press outlets rely.

XPRT Tech Spotlight
We put 47 new devices in the XPRT Tech Spotlight throughout the year and published updated back-to-school, Black Friday, and holiday showcases to help buyers compare devices.

Media and interactive tools
We published a new “XPRTs around the world” infographic and an interactive AIXPRT installation package selector tool. We’ve received a lot of positive feedback about the tool. We encourage you to give it a try if you’re curious about AIXPRT but aren’t sure how to get started.

We’re thankful for everyone who used the XPRTs, joined the community, and sent questions and suggestions throughout 2019. This will be our last blog post for 2019, but there’s much more to come in 2020, including some exciting new developments. Stay tuned in early January for updates!

Justin

AIXPRT’s unique development path

With four separate machine learning toolkits on their own development schedules, three workloads, and a wide range of possible configurations and use cases, AIXPRT has more moving parts than any of the XPRT benchmark tools to date. Because there are so many components, and because we want AIXPRT to provide consistently relevant evaluation data in the rapidly evolving AI and machine learning space, we anticipate updating AIXPRT more frequently than we have other XPRTs in the past. With that expectation in mind, we want to let AIXPRT testers know that when we release an AIXPRT update, they can expect minimized disruption, consideration for their testing needs, and clear communication.

Minimized disruption

Each AIXPRT toolkit (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and we won’t always have a lot of advance notice when new versions are on the way. Hypothetically, a new version of OpenVINO could release one month, and a new version of TensorRT just two months later. Thankfully, the modular nature of AIXPRT’s installation packages ensures that we won’t need to revise the entire AIXPRT suite every time a toolkit update goes live. Instead, we’ll update each package individually when necessary. This means that if you only test with a single AIXPRT package, updates to the other packages won’t affect your testing. For us to maintain AIXPRT’s relevance, there’s unfortunately no way to avoid all disruption, but we’ll work to keep it to a minimum.

Consideration for testers

As we move forward, when software compatibility issues force us to update an AIXPRT package, we may discover that the update has a significant effect on results. If we find that results from the new package are no longer comparable to those from previous tests, we’ll share the differences that we’re seeing in our lab. As always, we will use documentation and versioning to make sure that testers know what to expect and that there’s no confusion about which package to use.

Clear communication

When we update any package, we’ll make sure to communicate any updates in the new build as clearly as possible. We’ll document all changes thoroughly in the package readmes, and we’ll talk through significant updates here in the blog. We’re also available to answer questions about AIXPRT and any other XPRT-related topic, so feel free to ask!

Justin
