
Category: Machine learning

The AIXPRT learning tool is now live (and a CloudXPRT version is on the way)!

We’re happy to announce that the AIXPRT learning tool is now live! We designed the tool to serve as an information hub for common AIXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT find the answers they need in as little time as possible.

The tool features four primary areas of content:

  • The Q&A section provides quick answers to the questions we receive most from testers and the tech press.
  • The AIXPRT: the basics section describes specific topics such as the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
  • The testing and results section covers the testing process, metrics, and how to publish results.
  • The AI/ML primer provides brief, easy-to-understand definitions of key AI and ML terms and concepts for those who want to learn more about the subject.

The first screenshot below shows the home screen. To show how some of the popup information sections appear, the second screenshot shows the Inference tasks (workloads) entry in the AI/ML Primer section. 

We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.

If you have any questions about the tool, please let us know!

Justin

A first look at the upcoming AIXPRT learning tool

Last month, we announced that we’re working on a new AIXPRT learning tool. Because we want tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be able to find the answers they need in as little time as possible, we’re designing this tool to serve as an information hub for common AIXPRT topics and questions.

We’re still finalizing aspects of the tool’s content and design, so some details may change, but we can now share a sneak peek of the main landing page. In the screenshot below, you can see that the tool will feature four primary areas of content:

  • The FAQ section will provide quick answers to the questions we receive most from testers and the tech press.
  • The AIXPRT basics section will describe specific topics such as the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
  • The testing and results section will cover the testing process, the metrics the benchmark produces, and how to publish results.
  • The AI/ML primer will provide brief, easy-to-understand definitions of key AI and ML terms and concepts for those who want to learn more about the subject.

We’re excited about the new AIXPRT learning tool, and will share more information here in the blog as we get closer to a release date. If you have any questions about the tool, please let us know!

Justin

Following up

This week, we’re sharing news on two topics that we’ve discussed here in the blog over the past several months: CloudXPRT v1.01 and a potential AIXPRT OpenVINO update.

CloudXPRT v1.01

Last week, we announced that we were very close to releasing an updated CloudXPRT build (v1.01) with two minor bug fixes, an improved post-test results processing script, and an adjustment to one of our test configuration recommendations. Our testing and prep are complete, and the new version is live in the CloudXPRT GitHub repository and on our site!

None of the v1.01 changes affect performance or test results, so scores from the new build are comparable to those from previous CloudXPRT builds. If you’d like to know more about the changes, take a look at last week’s blog post.

The AIXPRT OpenVINO update

In late July, we discussed our plans to update the AIXPRT OpenVINO packages with OpenVINO 2020.3 Long-Term Support (LTS). While there are no known problems with the existing AIXPRT OpenVINO package, the LTS version targets environments that benefit from maximum stability and don’t require a constant stream of new tools and feature changes, so we thought it would be well suited for a benchmark like AIXPRT.

We initially believed that the update process would be relatively simple, and we’d be able to release a new AIXPRT OpenVINO package in September. However, we’ve discovered that the process is involved enough to require substantial low-level recoding. At this time, it’s difficult to estimate when the updated build will be ready for release. For any testers looking forward to the update, we apologize for the delay.

If you have any questions or comments about these or any other XPRT-related topics, please let us know!

Justin

We’re working on an AIXPRT learning tool

For anyone interested in learning more about AIXPRT, the Introduction to AIXPRT white paper provides detailed information about its toolkits, workloads, system requirements, installation, test parameters, and results. However, for AIXPRT.com visitors who want to find the answers to specific AIXPRT-related questions quickly, a white paper can be daunting.

Because we want tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be able to find the answers they need in as little time as possible, we’ve decided to develop a new learning tool that will serve as an information hub for common AIXPRT topics and questions.

The new learning tool will be available online through our site. It will offer quick bites of information about the fundamentals of AIXPRT, why the benchmark matters, the benefits of AIXPRT testing and results, machine learning concepts, key terms, and practical testing concerns.

We’re still working on the tool’s content and design. Because we’re designing this tool for you, we’d love to hear the topics and questions you think we should include. If you have any suggestions, please let us know!

Justin

Potential web technology additions for WebXPRT 4

A few months ago, we invited readers to send in their thoughts and ideas about web technologies and workload scenarios that may be a good fit for the next WebXPRT. We’d like to share a few of those ideas today, and we invite you to continue to send your feedback. We’re approaching the time when we need to begin firming up plans for a WebXPRT 4 development cycle in 2021, but there’s still plenty of time for you to help shape the future of the benchmark.

One of the most promising ideas for WebXPRT 4 is the potential addition of one or more WebAssembly (WASM) workloads. WASM is a low-level, binary instruction format that works across all modern browsers. It offers web developers a great deal of flexibility and provides the speed and efficiency necessary for running complex client applications in the browser. WASM enables a variety of workload scenario options, including gaming, video editing, VR, virtual machines, image recognition, and interactive educational content.
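
To make that more concrete, here is a minimal, purely illustrative sketch (not WebXPRT code) of how a browser compiles and calls a WASM module from TypeScript. The module name simulate.wasm and its exported add() function are hypothetical placeholders.

  // Minimal sketch: loading and calling a WASM module in the browser.
  // "simulate.wasm" and its exported add() function are hypothetical.
  async function runWasm(): Promise<void> {
    // instantiateStreaming compiles the binary while it downloads
    const { instance } = await WebAssembly.instantiateStreaming(fetch("simulate.wasm"), {});
    const add = instance.exports.add as (a: number, b: number) => number;
    console.log(add(2, 3)); // runs precompiled WASM code: prints 5
  }
  runWasm();

Because modules like this run precompiled code at close to native speed, they make the heavier scenarios listed above practical inside a browser-based benchmark.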

In addition, the Chrome team is dropping Portable Native Client (PNaCl) support in favor of WASM, which is why we had to remove a PNaCl workload when updating CrXPRT 2015 to CrXPRT 2. We generally model CrXPRT workloads on existing WebXPRT workloads, so familiarizing ourselves with WASM could ultimately benefit more than one XPRT benchmark.

We are also considering adding a web-based machine learning workload with TensorFlow for JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of tasks including image classification, object detection, sentence encoding, natural language processing, and more. We could also use this technology to enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album using AI or Encrypt Notes and OCR Scan.
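
As a rough illustration of what such a workload might involve (again, not WebXPRT code), the sketch below classifies an image on the page with the published pre-trained MobileNet model for TensorFlow.js. The "photo" element ID is a hypothetical placeholder.

  // Minimal sketch: in-browser image classification with a pre-trained
  // TensorFlow.js model. The "photo" element ID is hypothetical.
  import "@tensorflow/tfjs";
  import * as mobilenet from "@tensorflow-models/mobilenet";

  async function classifyPhoto(): Promise<void> {
    const img = document.getElementById("photo") as HTMLImageElement;
    const model = await mobilenet.load();          // downloads pre-trained weights; no training needed
    const predictions = await model.classify(img); // e.g. [{ className: "tabby", probability: 0.82 }, ...]
    console.log(predictions);
  }
  classifyPhoto();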

Other ideas include using a WebGL-based workload to target GPUs and investigating ways to incorporate a battery life test. What do you think? Let us know!

Justin

The CloudXPRT Preview is almost here

We’re happy to announce that we’re planning to release the CloudXPRT Preview next week! After we take the CloudXPRT Preview installation and source code packages live, they will be freely available to the public via CloudXPRT.com and the BenchmarkXPRT GitHub repository. All interested parties will be able to publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We’ll share more information about that process and the corresponding dates here in the blog in the coming weeks.

We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We hope to add that workload to the CloudXPRT suite in the near future, and are still conducting testing to make sure we get it right.

If you missed the earlier workload-related post, here are the details about the two workloads that will be in the preview build:

  • In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds. (A conceptual sketch of this kind of simulation appears after this list.)
  • The data analytics workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
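
To give a feel for the kind of per-request computation the web microservices workload performs, here is a conceptual sketch (not CloudXPRT code) of a Monte Carlo estimate of a stock’s future price using geometric Brownian motion. The drift, volatility, and path-count values are illustrative, not CloudXPRT parameters.

  // Conceptual sketch only: Monte Carlo estimate of a stock's price after a
  // given number of years under geometric Brownian motion.
  function monteCarloFinalPrice(
    startPrice: number,  // current price
    drift: number,       // expected annual return, e.g. 0.05
    volatility: number,  // annual volatility, e.g. 0.20
    years: number,
    paths: number
  ): number {
    let sum = 0;
    for (let i = 0; i < paths; i++) {
      // Box-Muller transform: two uniform samples -> one standard normal sample
      const u1 = 1 - Math.random();
      const u2 = Math.random();
      const z = Math.sqrt(-2 * Math.log(u1)) * Math.cos(2 * Math.PI * u2);
      sum += startPrice * Math.exp(
        (drift - 0.5 * volatility * volatility) * years +
        volatility * Math.sqrt(years) * z
      );
    }
    return sum / paths; // mean simulated final price across all paths
  }

  console.log(monteCarloFinalPrice(100, 0.05, 0.2, 1, 10_000));

In the actual workload, many such requests run concurrently, and the transactions-per-second metric reflects how many of them the stack under test can complete while staying within the SLA threshold.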

The CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers with an opportunity to work with CloudXPRT directly and shape the future of the benchmark with their feedback. We hope that testers will take this opportunity to explore the tool and send us their thoughts on its structure, workload concepts and execution, ease of use, and documentation. That feedback will help us improve the relevance and accessibility of CloudXPRT testing and results for years to come.

If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.

Justin
