
Understanding AIXPRT results

Last week, we discussed the changes we made to the AIXPRT Community Preview 2 (CP2) download page as part of our ongoing effort to make AIXPRT easier to use. This week, we want to cover the basics of understanding AIXPRT results: which numbers really matter, and how to access and read the actual results files.

To understand AIXPRT results at a high level, it’s important to revisit the core purpose of the benchmark. AIXPRT’s bundled toolkits measure inference latency (the speed of image processing) and throughput (the number of images processed in a given time period) for image recognition (ResNet-50) and object detection (SSD-MobileNet v1) tasks. Testers have the option of adjusting variables such as batch size (the number of input samples to process simultaneously) to try to achieve higher levels of throughput, but higher throughput can come at the expense of increased latency per task. In real-time or near real-time use cases such as performing image recognition on individual photos being captured by a camera, lower latency is important because it improves the user experience. In other cases, such as performing image recognition on a large library of photos, achieving higher throughput might be preferable; designating larger batch sizes or running concurrent instances might allow the overall workload to complete more quickly.
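
To make the tradeoff concrete, consider a toy model (not AIXPRT code) of an accelerator with a fixed per-call overhead plus a per-image compute cost. The timing constants below are invented purely for illustration:

    # Toy model only: fixed per-call overhead plus per-image compute cost.
    FIXED_OVERHEAD_MS = 5.0   # assumed overhead per inference call
    PER_IMAGE_MS = 2.0        # assumed compute time per image

    for batch_size in (1, 2, 4, 8, 16, 32):
        latency_ms = FIXED_OVERHEAD_MS + PER_IMAGE_MS * batch_size
        throughput = batch_size / (latency_ms / 1000.0)  # images per second
        print(f"batch {batch_size:2d}: latency {latency_ms:5.1f} ms, "
              f"throughput {throughput:6.1f} images/s")

In this model, batch 1 finishes a call in 7 ms but delivers only about 143 images per second, while batch 32 pushes throughput past 460 images per second at the cost of 69 ms per call. Real hardware behaves less neatly, but the direction of the tradeoff is the same.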

The dynamics of these performance tradeoffs mean that there is no single “good” score for all machine learning scenarios. Some testers might prioritize lower latency, while others would accept higher latency to achieve the level of throughput that their use case demands.

Testers can find latency and throughput numbers for each completed run in a JSON results file in the AIXPRT/Results folder. The test also generates CSV results files in the same folder. The raw results files report values for each AI task configuration (e.g., ResNet-50, Batch1, on CPU). Parsing and consolidating the raw data can take some time, so we’re developing a results file parsing tool to make the job much easier.
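
In the meantime, a few lines of Python can pull the key values out of a JSON results file. The file name and layout in this sketch are assumptions rather than AIXPRT’s actual schema, so inspect a real file in your AIXPRT/Results folder and adjust the keys to match:

    import json

    # Hypothetical file name and schema; check a real results file
    # in AIXPRT/Results and adjust the keys accordingly.
    with open("AIXPRT/Results/example_results.json") as f:
        data = json.load(f)

    for entry in data["results"]:  # assumed top-level key
        # entry["workload"] might be something like "ResNet-50, Batch1, CPU"
        print(entry["workload"],
              "throughput:", entry["throughput"],
              "latency:", entry["latency"])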

The results parsing tool is currently available in the AIXPRT CP2 OpenVINO – Windows package, and we hope to make it available for more packages soon. Using the tool is as simple as running a single command, and detailed instructions for how to do so are in the AIXPRT OpenVINO on Windows user guide. The tool produces a summary (example below) that makes it easier to quickly identify relevant comparison points such as maximum throughput and minimum latency.

[Image: AIXPRT results summary]

In addition to the summary, the tool displays the throughput and latency results for each AI task configuration tested by the benchmark. AIXPRT runs each AI task multiple times and reports the average inference throughput and corresponding latency percentiles.
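
The aggregation itself is simple in principle. The sketch below shows one way to compute an average throughput and latency percentiles from repeated runs; the numbers are made up, and the percentile choices are ours rather than AIXPRT’s:

    import statistics

    throughputs = [410.2, 405.5, 418.2, 409.9]     # images/s per run (made up)
    latencies_ms = [7.1, 7.4, 6.9, 7.8, 7.2, 9.5]  # per-task latencies (made up)

    avg_throughput = statistics.mean(throughputs)
    cuts = statistics.quantiles(latencies_ms, n=100)  # 99 percentile cut points
    print(f"average throughput: {avg_throughput:.1f} images/s")
    print(f"p50 latency: {cuts[49]:.1f} ms")
    print(f"p90 latency: {cuts[89]:.1f} ms")
    print(f"p99 latency: {cuts[98]:.1f} ms")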

[Image: AIXPRT results details]

We hope this information makes it easier to understand AIXPRT results. If you have any questions or comments, please feel free to contact us.

Justin

Navigating the AIXPRT Community Preview download page just got easier

AIXPRT Community Preview 2 (CP2) has been generating quite a bit of interest among the BenchmarkXPRT Development Community and members of the tech press. We’re excited that the tool has piqued curiosity and that people are recognizing its value for technical analysis. When talking with testers about test setup and configuration, we keep hearing the same questions:

  • How do I find the exact toolkit or package that I need?
  • How do I find the instructions for a specific toolkit?
  • What test configuration variables are most important for producing consistent, relevant results?
  • How do I know which values to choose when configuring options such as iterations, concurrent instances, and batch size?


In the coming weeks, we’ll be working to provide detailed answers to questions about test configuration. In response to the confusion about finding specific packages and instructions, we’ve redesigned the CP2 download page to make it easier for you to find what you need. Below, we show a snapshot from the new CP2 download table. Instead of having to download the entire CP2 package that includes the OpenVINO, TensorFlow, and TensorRT in TensorFlow test packages, you can now download one package at a time. In the Documentation column, we’ve posted package-specific instructions, so you won’t have to wade through the entire installation guide to find the instructions you need.

[Image: AIXPRT Community Preview download table]

We hope these changes make it easier for people to experiment with AIXPRT. As always, please feel free to contact us with any questions or comments you may have.

Justin

A new HDXPRT 4 build is available!

A few weeks ago, we announced that a new HDXPRT 4 build, v1.1, was on the way. This past Monday, we published the build on HDXPRT.com.

The new build includes an updated version of HandBrake, the commercial application that HDXPRT uses for certain video conversion tasks. HandBrake 1.2.2 supports hardware acceleration with AMD Video Coding Engine (VCE), Intel Quick Sync Video (QSV), and the NVIDIA video encoder (NVENC). By default, HDXPRT 4 v1.1 uses the encoder available through a system’s integrated graphics, but testers can target discrete graphics by changing a configuration file flag before running the benchmark. HDXPRT will then use the encoder provided by the discrete graphics hardware. This configuration setting takes effect only when more than one of the supported encoders (VCE, QSV, or NVENC) is present on the system.

As we mentioned before, in all other respects, the benchmark has not changed. That means that, apart from a scenario where a tester changes the targeted graphics hardware, scores from previous HDXPRT 4 builds will be comparable to those from the new build.

The updated HDXPRT 4 User Manual contains additional information and instructions for changing the configuration file flag. Please contact us if you have any questions about the new build. Happy testing!

Justin

WebXPRT: What would you like to see?

At over 412,000 runs and counting, WebXPRT is our most popular benchmark. Since its first release in 2013, it’s been a favorite of device manufacturers, developers, tech journalists, and consumers because it’s easy to run, it runs on almost anything with a web browser, and it evaluates device performance using the types of web-based tasks that people are likely to encounter on a daily basis.

With each new version of WebXPRT, we analyze browser development trends to make sure the test’s underlying web technologies and workload scenarios adequately reflect the ways people are using their browsers to work and play. BenchmarkXPRT Development Community members can play an important part in that process by sending us feedback on existing tests and suggestions for new workloads to include.

For example, when we released WebXPRT 3, we updated the photo workloads with new images and a deep learning task used for image classification. We also added an optical character recognition task in the Encrypt Notes and OCR scan workload, and combined part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in an all-new Online Homework workload.

Consider for a moment what an ideal future version of WebXPRT would look like for you. Are there new web technologies or workload scenarios that you would like to see? Would you be interested in an associated battery life test? Should we include experimental tests? We’re interested in what you have to say, so please feel free to contact us with your thoughts or questions.

If you’re just now learning about WebXPRT, we offer several resources to help you better understand the benchmark and its range of uses. For a general overview of why WebXPRT matters, watch our video titled What is WebXPRT and why should I care? To read more about the details of the benchmark’s development and structure, check out the Exploring WebXPRT 3 white paper. To see WebXPRT 2015 and WebXPRT 3 scores from a wide range of processors, visit the WebXPRT 3 Processor Comparison Chart.

We look forward to hearing from you!

Justin

An updated HDXPRT 4 build is on the way

HandBrake recently released a new version, v1.2.2, of its video conversion software. Among other improvements, the new version includes support for certain AMD (VCE) and NVIDIA (NVENC) hardware-accelerated video encoders. Because we include HandBrake as one of the commercial applications in the HDXPRT installer package, and because we want to keep HDXPRT 4 up to date for testers, we’ve put together a new HDXPRT 4 build: v1.1. It includes HandBrake 1.2.2’s new capabilities, and we’re currently testing it in the lab.

With the new build, testers will be able to choose whether HDXPRT’s HandBrake tasks target a system’s integrated or discrete graphics by changing a flag called “UseIntegrated” in the configuration file. In HDXPRT 4 v1.1, the flag is set to “true” by default, directing HandBrake to use the encoder provided by the system’s integrated graphics hardware. If a system has both integrated and discrete graphics available and a tester sets the flag to “false,” HandBrake will use the encoder provided by the discrete graphics hardware.
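
As a rough sketch, the edit might look something like the following. The UseIntegrated flag name is the one we describe above, but the file’s exact name and syntax are assumptions here, so follow the HDXPRT 4 User Manual for the real details:

    ; Hypothetical excerpt from the HDXPRT 4 configuration file.
    ; Only the UseIntegrated flag name comes from this post; the
    ; surrounding syntax is an assumption.
    UseIntegrated=false   ; false = use the discrete graphics encoder
                          ; true (default) = use integrated graphics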

This update allows testers to compare the video conversion performance of different video encoders on the same system. In all other respects, the benchmark has not changed, so apart from a scenario where a tester changes the targeted graphics hardware, scores from previous HDXPRT 4 builds will be comparable to those from the new build.

We’ll let the community know as soon as the new build is available, and we’ll update the HDXPRT 4 User Manual to reflect the changes.

If you have any questions about the upcoming HDXPRT 4 build, please let us know!

Justin

The MobileXPRT 3 source code is now available

We’re excited to announce that the MobileXPRT 3 source code is now available to BenchmarkXPRT Development Community members!

Download the MobileXPRT 3 source here (login required).

We’ve also posted a download link on the MobileXPRT tab in the Members’ Area, where you will find instructions for setting up and configuring a local instance of MobileXPRT 3.

As part of our community model for software development, source code for each of the XPRTs is available to anyone who joins the community. If you’d like to review XPRT source code, but haven’t yet joined the community, we encourage you to join! Registration is quick and easy, and if you work for a company or organization with an interest in benchmarking, you can join the community for free. Simply fill out the form with your company e-mail address and select the option to be considered for a free membership. We’ll contact you to verify the address and then activate your membership.

If you have any other questions about community membership or XPRT source code, feel free to contact us. We look forward to hearing from you!

Justin
