

The XPRT Spotlight Black Friday Showcase helps you shop with confidence

Black Friday and Cyber Monday are almost here, and you may be feeling overwhelmed by the sea of tech gifts to choose from. The XPRTs are here to help. We’ve gathered the product specs and performance facts for some of the hottest tech devices in one convenient place—the XPRT Spotlight Black Friday Showcase. The Showcase is a free shopping tool that provides side-by-side comparisons of some of the season’s most popular smartphones, laptops, Chromebooks, tablets, and PCs. It helps you make informed buying decisions so you can shop with confidence this holiday season.

Want to know how the Google Pixel 4 stacks up against the Apple iPhone 11 or Samsung Galaxy Note10 in web browsing performance or screen size? Simply select any two devices in the Showcase and click Compare. You can also search by device type if you’re interested in a specific form factor such as consoles or tablets.

The Showcase doesn’t go away after Black Friday. We’ll rename it the XPRT Holiday Showcase and continue to add devices such as the Microsoft Surface Pro X throughout the shopping season. Be sure to check back in and see how your tech gifts measure up.

If this is the first you’ve heard about the XPRT Tech Spotlight, here’s a little background. Our hands-on testing process equips consumers with accurate information about how devices function in the real world. We test devices using our industry-standard BenchmarkXPRT tools: WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT. In addition to benchmark results, we include photographs, specs, and prices for all products. New devices come online weekly, and you can browse the full list of almost 200 that we’ve featured to date on the Spotlight page.

If you represent a device vendor and want us to feature your product in the XPRT Tech Spotlight, please visit the website for more details.

Justin

Understanding AIXPRT’s default number of requests

A few weeks ago, we discussed how AIXPRT testers can adjust the key variables of batch size, level of precision, and number of concurrent instances by editing the JSON test configuration file in the AIXPRT/Config directory. In addition to those key variables, the config file contains another variable called “total_requests” that has a different default setting depending on the AIXPRT test package you choose. This setting can significantly affect a test run, so it’s important for testers to know how it works.

The total_requests variable specifies how many inference requests AIXPRT will send to a network (e.g., ResNet-50) during one test iteration at a given batch size (e.g., batch 1, 2, or 4). This simulates the inference demand that end users place on the system. Because we designed AIXPRT to run on different types of hardware, it makes sense to set the default number of requests for each test package to suit the most likely hardware environment for that package.

For example, testing with OpenVINO on Windows aligns more closely with a consumer (i.e., desktop or laptop) scenario than testing with OpenVINO on Ubuntu, which is more typical of server/datacenter testing. Desktop testers require a much lower inference demand than server testers, so the default total_requests settings for the two packages reflect that. The default for the OpenVINO/Windows package is 500, while the default for the OpenVINO/Ubuntu package is 5,000.

Also, setting the number of requests so low that a system finishes each workload in less than 1 second can produce high run-to-run variation, so our default settings represent a lower boundary that will work well for common test scenarios.
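
For a rough sense of the math, here’s a quick sketch in Python. The throughput figure is a made-up example rather than a measured AIXPRT result, and the sketch assumes each request submits a single batch:

    # Back-of-the-envelope estimate of how long one test iteration takes.
    total_requests = 500                 # default for the OpenVINO/Windows package
    batch_size = 1
    images_per_second = 2000.0           # hypothetical system speed at batch 1

    run_seconds = (total_requests * batch_size) / images_per_second
    print(f"Estimated iteration time: {run_seconds:.2f} seconds")
    # A fast system could burn through 500 requests in a fraction of a second,
    # which leaves little time to average out run-to-run noise.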

Below, we provide the current default total_requests setting for each AIXPRT test package:

  • MXNet: 1,000
  • OpenVINO Ubuntu: 5,000
  • OpenVINO Windows: 500
  • TensorFlow Ubuntu: 100
  • TensorFlow Windows: 10
  • TensorRT Ubuntu: 5,000
  • TensorRT Windows: 500


Testers can adjust these variables in the config file according to their own needs. Finding the optimal combination of machine learning variables for each scenario is often a matter of trial and error, and the default settings represent what we think is a reasonable starting point for each test package.

To adjust the total_requests setting, start by locating and opening the JSON test configuration file in the AIXPRT/Config directory. Below, we show a section of the default config file (CPU_INT8.json) for the OpenVINO-Windows test package (AIXPRT_1.0_OpenVINO_Windows.zip). For each batch size, the total_requests setting appears at the bottom of the list of configurable variables. In this case, the default setting is 500. Change the total_requests numerical value for each batch size in the config file, save your changes, and close the file.

[Screenshot: the total_requests entries in the CPU_INT8.json config file]
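
If you’d rather script the change than edit each entry by hand, a short helper like the one below can update every total_requests entry in one pass. This is only a sketch: the path is the default location for the OpenVINO/Windows package, the new value is just an example, and the recursive walk deliberately avoids assuming anything about the exact nesting of the JSON, which can differ between packages.

    import json

    CONFIG_PATH = "AIXPRT/Config/CPU_INT8.json"   # default config for OpenVINO on Windows
    NEW_TOTAL_REQUESTS = 1000                     # example value; choose what fits your scenario

    def set_total_requests(node, value):
        # Recursively replace every "total_requests" entry in the parsed JSON.
        if isinstance(node, dict):
            for key, child in node.items():
                if key == "total_requests":
                    node[key] = value
                else:
                    set_total_requests(child, value)
        elif isinstance(node, list):
            for child in node:
                set_total_requests(child, value)

    with open(CONFIG_PATH) as f:
        config = json.load(f)
    set_total_requests(config, NEW_TOTAL_REQUESTS)
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=4)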

Note that if you are running multiple concurrent instances, OpenVINO and TensorRT automatically distribute the number of requests among the instances. MXNet and TensorFlow users must manually allocate the instances in the config file. You can find an example of how to structure manual allocation here. We hope to make this process automatic for all toolkits in a future update.

We hope this information helps you understand the total_requests setting, and why the default values differ from one test package to another. If you have any questions or comments about this or other aspects of AIXPRT, please let us know.

Justin

AIXPRT is here!

We’re happy to announce that AIXPRT is now available to the public! AIXPRT includes support for the Intel OpenVINO, TensorFlow, and NVIDIA TensorRT toolkits to run image-classification and object-detection workloads with the ResNet-50 and SSD-MobileNet v1 networks, as well as a Wide and Deep recommender system workload with the Apache MXNet toolkit. The test reports FP32, FP16, and INT8 levels of precision.

To access AIXPRT, visit the AIXPRT download page. There, a download table displays the AIXPRT test packages. Locate the operating system and toolkit you wish to test and click the corresponding Download link. For detailed installation instructions and information on hardware and software requirements for each package, click the package’s Readme link. If you’re not sure which AIXPRT package to choose, the AIXPRT package selector tool will help to guide you through the selection process.

In addition, the Helpful Info box on AIXPRT.com contains links to a repository of AIXPRT resources, as well as links to XPRT blog discussions about key AIXPRT test configuration settings such as batch size and precision.

We hope AIXPRT will prove to be a valuable tool for you, and we’re thankful for all the input we received during the preview period! If you have any questions about AIXPRT, please let us know.

Understanding concurrent instances in AIXPRT

Over the past few weeks, we’ve discussed several of the key configuration variables in AIXPRT, such as batch size and level of precision. Today, we’re discussing another key variable: number of concurrent instances. In the context of machine learning inference, this refers to how many instances of the network model (ResNet-50, SSD-MobileNet, etc.) the benchmark runs simultaneously.

By default, the toolkits in AIXPRT run one instance at a time and distribute the compute load according to the characteristics of the CPU or GPU under test, as well as any relevant optimizations or accelerators in the toolkit’s reference library. By setting the number of concurrent instances to a number greater than one, a tester can use multiple CPUs or GPUs to run multiple instances of a model at the same time, usually to increase throughput.

With multiple concurrent instances, a tester can leverage additional compute resources to potentially achieve higher throughput without sacrificing latency goals.

In the current version of AIXPRT, testers can run multiple concurrent instances in the OpenVINO, TensorFlow, and TensorRT toolkits. When AIXPRT Community Preview 3 becomes available, this option will extend to the MXNet toolkit. OpenVINO and TensorRT automatically allocate hardware for each instance and don’t let users make manual adjustments. TensorFlow and MXNet require users to manually bind instances to specific hardware. (Manual hardware allocation for multiple instances is more complicated than we can cover today, so we may devote a future blog entry to that topic.)

Setting the number of concurrent instances in AIXPRT

The screenshot below shows part of a sample config file (the same one we used when we discussed batch size and precision). The value in the “concurrent instances” row indicates how many concurrent instances will be operating during the test. In this example, the number is one. To change that value, a tester simply replaces it with the desired number and saves the changes.

[Screenshot: the concurrent instances entry in the sample config file]
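
For testers who prefer to script the change, here’s a minimal sketch. The key name ("concurrent_instances") is an assumption based on the row visible in the screenshot, so check your own config file for the exact spelling, and adjust the path and value to match your test package and scenario.

    import json

    CONFIG_PATH = "AIXPRT/Config/CPU_INT8.json"   # adjust for your test package
    KEY_NAME = "concurrent_instances"             # assumed key name; verify it in your config file
    NEW_VALUE = 2                                 # run two instances at the same time

    def replace_key(node, key, value):
        # Walk the parsed JSON and overwrite every matching key, wherever it appears.
        if isinstance(node, dict):
            for k, child in node.items():
                if k == key:
                    node[k] = value
                else:
                    replace_key(child, key, value)
        elif isinstance(node, list):
            for child in node:
                replace_key(child, key, value)

    with open(CONFIG_PATH) as f:
        config = json.load(f)
    replace_key(config, KEY_NAME, NEW_VALUE)
    with open(CONFIG_PATH, "w") as f:
        json.dump(config, f, indent=4)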

If you have any questions or comments (about concurrent instances or anything else), please feel free to contact us.

Justin

Understanding AIXPRT results

Last week, we discussed the changes we made to the AIXPRT Community Preview 2 (CP2) download page as part of our ongoing effort to make AIXPRT easier to use. This week, we want to discuss the basics of understanding AIXPRT results by talking about the numbers that really matter and how to access and read the actual results files.

To understand AIXPRT results at a high level, it’s important to revisit the core purpose of the benchmark. AIXPRT’s bundled toolkits measure inference latency (the speed of image processing) and throughput (the number of images processed in a given time period) for image recognition (ResNet-50) and object detection (SSD-MobileNet v1) tasks. Testers have the option of adjusting variables such as batch size (the number of input samples to process simultaneously) to try to achieve higher levels of throughput, but higher throughput can come at the expense of increased latency per task. In real-time or near real-time use cases such as performing image recognition on individual photos being captured by a camera, lower latency is important because it improves the user experience. In other cases, such as performing image recognition on a large library of photos, achieving higher throughput might be preferable; designating larger batch sizes or running concurrent instances might allow the overall workload to complete more quickly.

The dynamics of these performance tradeoffs ensure that there is no single good score for all machine learning scenarios. Some testers might prefer lower latency, while others would sacrifice latency to achieve the higher level of throughput that their use case demands.

Testers can find latency and throughput numbers for each completed run in a JSON results file in the AIXPRT/Results folder. The test also generates CSV results files in the same folder. The raw results files report values for each AI task configuration (e.g., ResNet-50, Batch 1, on CPU). Parsing and consolidating the raw data can take some time, so we’re developing a results file parsing tool to make the job much easier.
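
In the meantime, if you want to pull numbers out of the raw files yourself, a small script like the one below can scan the results JSON for throughput and latency fields. The field-name matching is an assumption on our part; adjust the search strings to whatever keys you actually see in your results files.

    import glob
    import json

    def find_metrics(node, path=""):
        # Yield (path, value) pairs for any field whose name mentions throughput or latency.
        if isinstance(node, dict):
            for key, child in node.items():
                child_path = f"{path}/{key}"
                if any(word in key.lower() for word in ("throughput", "latency")):
                    yield child_path, child
                yield from find_metrics(child, child_path)
        elif isinstance(node, list):
            for i, child in enumerate(node):
                yield from find_metrics(child, f"{path}[{i}]")

    for results_file in glob.glob("AIXPRT/Results/*.json"):
        print(results_file)
        with open(results_file) as f:
            for metric_path, value in find_metrics(json.load(f)):
                print(f"  {metric_path}: {value}")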

The results parsing tool is currently available in the AIXPRT CP2 OpenVINO – Windows package, and we hope to make it available for more packages soon. Using the tool is as simple as running a single command, and detailed instructions for how to do so are in the AIXPRT OpenVINO on Windows user guide. The tool produces a summary (example below) that makes it easier to quickly identify relevant comparison points such as maximum throughput and minimum latency.

[Screenshot: example AIXPRT results summary]

In addition to the summary, the tool displays the throughput and latency results for each AI task configuration tested by the benchmark. AIXPRT runs each AI task multiple times and reports the average inference throughput and corresponding latency percentiles.

[Screenshot: example AIXPRT results details]
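
To illustrate what that reporting involves, here’s a small sketch that averages throughput across iterations and computes nearest-rank latency percentiles from a set of per-request latency samples. The numbers and the percentile choices are invented for the example and aren’t taken from an actual AIXPRT results file.

    import math
    import statistics

    # Hypothetical per-iteration throughput (images/sec) and per-request latency (ms) samples.
    throughput_samples = [412.0, 405.3, 418.7, 409.9, 414.2]
    latency_samples_ms = [2.2, 2.4, 2.3, 2.9, 2.5, 3.1, 2.4, 2.6, 2.8, 2.3]

    def percentile(samples, pct):
        # Nearest-rank percentile: the smallest sample at or above pct percent of the data.
        ordered = sorted(samples)
        rank = math.ceil(pct / 100 * len(ordered))
        return ordered[max(0, rank - 1)]

    print(f"Average throughput: {statistics.mean(throughput_samples):.1f} images/sec")
    for pct in (50, 90, 95, 99):
        print(f"{pct}th percentile latency: {percentile(latency_samples_ms, pct):.1f} ms")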

We hope that this information helps to make it easier to understand AIXPRT results. If you have any questions or comments, please feel free to contact us.

Justin

WebXPRT: What would you like to see?

At over 412,000 runs and counting, WebXPRT is our most popular benchmark. From the first release in 2013, it’s been popular with device manufacturers, developers, tech journalists, and consumers because it’s easy to run, it runs on almost anything with a web browser, and it evaluates device performance using the types of web-based tasks that people are likely to encounter on a daily basis.

With each new version of WebXPRT, we analyze browser development trends to make sure the test’s underlying web technologies and workload scenarios adequately reflect the ways people are using their browsers to work and play. BenchmarkXPRT Development Community members can play an important part in that process by sending us feedback on existing tests and suggestions for new workloads to include.

For example, when we released WebXPRT 3, we updated the photo workloads with new images and a deep learning task used for image classification. We also added an optical character recognition task in the Encrypt Notes and OCR scan workload, and combined part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in an all-new Online Homework workload.

Consider for a moment what an ideal future version of WebXPRT would look like for you. Are there new web technologies or workload scenarios that you would like to see? Would you be interested in an associated battery life test? Should we include experimental tests? We’re interested in what you have to say, so please feel free to contact us with your thoughts or questions.

If you’re just now learning about WebXPRT, we offer several resources to help you better understand the benchmark and its range of uses. For a general overview of why WebXPRT matters, watch our video titled What is WebXPRT and why should I care? To read more about the details of the benchmark’s development and structure, check out the Exploring WebXPRT 3 white paper. To see WebXPRT 2015 and WebXPRT 3 scores from a wide range of processors, visit the WebXPRT 3 Processor Comparison Chart.

We look forward to hearing from you!

Justin
