

The AIXPRT source code is now public

This week, we have good news for AIXPRT testers: the AIXPRT source code is now available to the public via GitHub. As we’ve discussed in the past, publishing XPRT source code is part of our commitment to making the XPRT development process as transparent as possible. With other XPRT benchmarks, we’ve made the source code available only to community members. With AIXPRT, we have released the source code more widely. By allowing all interested parties, not just community members, to download and review our source code, we’re taking tangible steps to improve openness and honesty in the benchmarking industry, and we’re encouraging the kind of constructive feedback that helps ensure that the XPRTs continue to contribute to a level playing field.

Traditional open-source models encourage developers to change products and even take them in new and different directions. Because benchmarking requires a product that remains static to enable valid comparisons over time, we allow people to download the source code and submit potential workloads for future consideration, but we reserve the right to control derivative works. This discourages a situation where someone publishes an unauthorized version of the benchmark and calls it an “XPRT.”

We encourage you to download and review the source and send us any feedback you may have. Your questions and suggestions may influence future versions of AIXPRT. If you have any questions about AIXPRT or accessing the source code, please feel free to ask! Please also let us know if you think we should take this approach to releasing the source code with other XPRT benchmarks.

Justin

Using WebXPRT 3 to compare the performance of popular browsers

Microsoft recently released a new Chromium-based version of the Edge browser, and several tech press outlets have released reviews and results from head-to-head browser performance comparison tests. Because WebXPRT is a go-to benchmark for evaluating browser performance, PCMag, PCWorld, and VentureBeat, among others, used WebXPRT 3 scores as part of the evaluation criteria for their reviews.

We thought we would try a quick experiment of our own, so we grabbed a recent laptop from our Spotlight testbed: a Dell XPS 13 7390 running Windows 10 Home 1909 (18363.628) with an Intel Core i3-10110U processor and 4 GB of RAM. We tested on a clean system image after installing all current Windows updates, and after the update process completed, we turned off updates to prevent them from interfering with test runs. We ran WebXPRT 3 three times on six browsers: a new browser called Brave, Google Chrome, the legacy version of Microsoft Edge, the new version of Microsoft Edge, Mozilla Firefox, and Opera. The posted score for each browser is the median of the three test runs.
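If you’d like to run a similar comparison yourself, the median calculation is simple to script. Here is a minimal sketch in Python; the browser labels and scores are placeholders for illustration only, not our measured results.

    # Minimal sketch: compute the median WebXPRT 3 score from three runs per browser.
    # The labels and scores below are placeholders, not our measured results.
    from statistics import median

    runs = {
        "Browser A": [180, 183, 181],
        "Browser B": [190, 188, 192],
    }

    for browser, scores in runs.items():
        print(f"{browser}: median WebXPRT 3 score = {median(scores)}")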

As you can see in the chart below, five of the browsers (legacy Edge, Brave, Opera, Chrome, and new Edge) produced scores that were nearly identical. Mozilla Firefox was the only browser that produced a significantly different score. The parity among Brave, Chrome, Opera, and the new Edge is not that surprising, considering they are all Chromium-based browsers. The rank order and relative scaling of these results are similar to the results published by the tech outlets mentioned above.

Do these results mean that Mozilla Firefox will provide you with a speedier web experience? Generally, a device with a higher WebXPRT score is probably going to feel faster to you during daily use than one with a lower score. For comparisons on the same system, however, the answer depends in part on the types of things you do on the web, how the extensions you’ve installed affect performance, how frequently the browsers issue updates and incorporate new web technologies, and how accurately the browsers’ default installation settings reflect how you would set up the same browsers for your daily workflow.

In addition, browser speed can increase or decrease significantly after an update, only to swing back in the other direction shortly thereafter. OS-specific optimizations can also affect performance, such as with Edge on Windows 10 and Chrome on Chrome OS. All of these variables are important to keep in mind when considering how browser performance comparison results translate to your everyday experience. In such a competitive market, and with so many variables to consider, we’re happy that WebXPRT can help consumers by providing reliable, objective results.

What are your thoughts on today’s competitive browser market? We’d love to hear from you.

Justin

AIXPRT’s unique development path

With four separate machine learning toolkits, each on its own development schedule, three workloads, and a wide range of possible configurations and use cases, AIXPRT has more moving parts than any XPRT benchmark tool to date. Because there are so many different components, and because we want AIXPRT to provide consistently relevant evaluation data in the rapidly evolving AI and machine learning space, we anticipate updating AIXPRT more frequently than we have updated other XPRTs in the past. With that expectation in mind, we want AIXPRT testers to know that when we release an AIXPRT update, they can expect minimized disruption, consideration for their testing needs, and clear communication.

Minimized disruption

Each AIXPRT toolkit (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and we won’t always have much advance notice when new versions are on the way. Hypothetically, a new version of OpenVINO could arrive one month, and a new version of TensorRT just two months later. Thankfully, the modular nature of AIXPRT’s installation packages ensures that we won’t need to revise the entire AIXPRT suite every time a toolkit update goes live. Instead, we’ll update each package individually as necessary. This means that if you test with only a single AIXPRT package, updates to the other packages won’t affect your testing. For us to maintain AIXPRT’s relevance, there’s unfortunately no way to avoid all disruption, but we’ll work to keep it to a minimum.

Consideration for testers

As we move forward, when software compatibility issues force us to update an AIXPRT package, we may discover that the update has a significant effect on results. If we find that results from the new package are no longer comparable to those from previous tests, we’ll share the differences that we’re seeing in our lab. As always, we will use documentation and versioning to make sure that testers know what to expect and that there’s no confusion about which package to use.

Clear communication

When we update a package, we’ll communicate the changes in the new build as clearly as possible. We’ll document all changes thoroughly in the package readmes, and we’ll talk through significant updates here in the blog. We’re also available to answer questions about AIXPRT and any other XPRT-related topic, so feel free to ask!

Justin

The XPRT Spotlight Black Friday Showcase helps you shop with confidence

Black Friday and Cyber Monday are almost here, and you may be feeling overwhelmed by the sea of tech gifts to choose from. The XPRTs are here to help. We’ve gathered the product specs and performance facts for some of the hottest tech devices in one convenient place—the XPRT Spotlight Black Friday Showcase. The Showcase is a free shopping tool that provides side-by-side comparisons of some of the season’s most popular smartphones, laptops, Chromebooks, tablets, and PCs. It helps you make informed buying decisions so you can shop with confidence this holiday season.

Want to know how the Google Pixel 4 stacks up against the Apple iPhone 11 or Samsung Galaxy Note10 in web browsing performance or screen size? Simply select any two devices in the Showcase and click Compare. You can also search by device type if you’re interested in a specific form factor such as consoles or tablets.

The Showcase doesn’t go away after Black Friday. We’ll rename it the XPRT Holiday Showcase and continue to add devices such as the Microsoft Surface Pro X throughout the shopping season. Be sure to check back in and see how your tech gifts measure up.

If this is the first you’ve heard about the XPRT Tech Spotlight, here’s a little background. Our hands-on testing process equips consumers with accurate information about how devices function in the real world. We test devices using our industry-standard BenchmarkXPRT tools: WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT. In addition to benchmark results, we include photographs, specs, and prices for all products. New devices come online weekly, and you can browse the full list of almost 200 that we’ve featured to date on the Spotlight page.

If you represent a device vendor and want us to feature your product in the XPRT Tech Spotlight, please visit the website for more details.

Justin

Understanding AIXPRT’s default number of requests

A few weeks ago, we discussed how AIXPRT testers can adjust the key variables of batch size, levels of precision, and number of concurrent instances by editing the JSON test configuration file in the AIXPRT/Config directory. In addition to those key variables, there is another variable in the config file called “total_requests” that has a different default setting depending on the AIXPRT test package you choose. This setting can significantly affect a test run, so it’s important for testers to know how it works.

The total_requests variable specifies how many inference requests AIXPRT will send to a network (e.g., ResNet-50) during one test iteration at a given batch size (e.g., Batch 1, 2, 4, etc.). This simulates the inference demand that end users place on the system. Because we designed AIXPRT to run on different types of hardware, it makes sense to set the default number of requests for each test package to suit the most likely hardware environment for that package.

For example, testing with OpenVINO on Windows aligns more closely with a consumer (i.e., desktop or laptop) scenario than testing with OpenVINO on Ubuntu, which is more typical of server/datacenter testing. A desktop scenario involves much lower inference demand than a server scenario, so the default total_requests settings for the two packages reflect that. The default for the OpenVINO/Windows package is 500, while the default for the OpenVINO/Ubuntu package is 5,000.

Also, setting the number of requests so low that a system finishes each workload in less than 1 second can produce high run-to-run variation, so our default settings represent a lower boundary that will work well for common test scenarios.

Below, we provide the current default total_requests setting for each AIXPRT test package:

  • MXNet: 1,000
  • OpenVINO Ubuntu: 5,000
  • OpenVINO Windows: 500
  • TensorFlow Ubuntu: 100
  • TensorFlow Windows: 10
  • TensorRT Ubuntu: 5,000
  • TensorRT Windows: 500


Testers can adjust these variables in the config file according to their own needs. Finding the optimal combination of machine learning variables for each scenario is often a matter of trial and error, and the default settings represent what we think is a reasonable starting point for each test package.

To adjust the total_requests setting, start by locating and opening the JSON test configuration file in the AIXPRT/Config directory. Below, we show a section of the default config file (CPU_INT8.json) for the OpenVINO/Windows test package (AIXPRT_1.0_OpenVINO_Windows.zip). For each batch size, the total_requests setting appears at the bottom of the list of configurable variables. In this case, the default setting is 500. Change the total_requests numerical value for each batch size in the config file, save your changes, and close the file.

[Screenshot: the total_requests entries in the default OpenVINO/Windows config file, CPU_INT8.json]
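If you’d rather script the change than edit the file by hand, the short Python sketch below shows one way to do it. It assumes the file location and the total_requests key name described above; because the exact nesting of the JSON differs between packages, the sketch simply walks the whole structure and updates every total_requests entry it finds. The new value shown is hypothetical.

    # Minimal sketch: set every total_requests entry in an AIXPRT config file to a new value.
    # Assumes the file path and the "total_requests" key name described in this post;
    # the surrounding JSON structure varies by package, so we walk the whole tree.
    import json

    CONFIG_PATH = "AIXPRT/Config/CPU_INT8.json"  # default OpenVINO/Windows config
    NEW_VALUE = 1000                             # hypothetical new request count

    def set_total_requests(node, value):
        """Recursively update every 'total_requests' key in the parsed JSON."""
        if isinstance(node, dict):
            for key, child in node.items():
                if key == "total_requests":
                    node[key] = value
                else:
                    set_total_requests(child, value)
        elif isinstance(node, list):
            for child in node:
                set_total_requests(child, value)

    with open(CONFIG_PATH) as f:
        config = json.load(f)

    set_total_requests(config, NEW_VALUE)

    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=4)

After running a script like this, spot-check the file to confirm that each batch size now shows the value you intended.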

Note that if you are running multiple concurrent instances, OpenVINO and TensorRT automatically distribute the number of requests among the instances. MXNet and TensorFlow users must manually allocate the instances in the config file. You can find an example of how to structure manual allocation here. We hope to make this process automatic for all toolkits in a future update.
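As a simple illustration of the kind of split that OpenVINO and TensorRT perform automatically, and that MXNet and TensorFlow testers currently need to mirror by hand, the sketch below divides a total request count as evenly as possible across a set of concurrent instances. The numbers are placeholders, and where the per-instance values go in the config file depends on your package.

    # Minimal sketch: divide a total request count as evenly as possible among
    # concurrent instances. The values below are placeholders for illustration.
    def split_requests(total_requests, instances):
        base, remainder = divmod(total_requests, instances)
        # The first 'remainder' instances each handle one extra request.
        return [base + 1 if i < remainder else base for i in range(instances)]

    print(split_requests(1000, 4))  # -> [250, 250, 250, 250]
    print(split_requests(500, 3))   # -> [167, 167, 166]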

We hope this information helps you understand the total_requests setting, and why the default values differ from one test package to another. If you have any questions or comments about this or other aspects of AIXPRT, please let us know.

Justin

AIXPRT is here!

We’re happy to announce that AIXPRT is now available to the public! AIXPRT includes support for the Intel OpenVINO, TensorFlow, and NVIDIA TensorRT toolkits to run image-classification and object-detection workloads with the ResNet-50 and SSD-MobileNet v1 networks, as well as a Wide and Deep recommender system workload with the Apache MXNet toolkit. The test reports results at FP32, FP16, and INT8 levels of precision.

To access AIXPRT, visit the AIXPRT download page. There, a download table displays the AIXPRT test packages. Locate the operating system and toolkit you wish to test and click the corresponding Download link. For detailed installation instructions and information on hardware and software requirements for each package, click the package’s Readme link. If you’re not sure which AIXPRT package to choose, the AIXPRT package selector tool will help to guide you through the selection process.

In addition, the Helpful Info box on AIXPRT.com contains links to a repository of AIXPRT resources, as well as links to XPRT blog discussions about key AIXPRT test configuration settings such as batch size and precision.

We hope AIXPRT will prove to be a valuable tool for you, and we’re thankful for all the input we received during the preview period! If you have any questions about AIXPRT, please let us know.
