
Category: AI

We’re working on an update for the AIXPRT OpenVINO workload

Shortly after the initial AIXPRT release, we noted that each of the toolkits AIXPRT uses (Intel OpenVINO, TensorFlow, NVIDIA TensorRT, and Apache MXNet) is on its own development schedule, and new versions will sometimes appear with little warning. When this happens, we’ll have to respond by updating specific AIXPRT installation packages, giving AIXPRT testers relatively short notice.

This is one of those times! Intel recently released OpenVINO 2020.3 Long-Term Support (LTS), and we’re planning to update the AIXPRT OpenVINO packages accordingly. The LTS version targets environments that benefit from maximum stability and don’t require a constant stream of new tools and feature changes. In other words, it’s well suited for a benchmark, and we think it’s a good fit for AIXPRT moving forward.
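For context, AIXPRT’s OpenVINO workloads run image-classification and object-detection inference through the OpenVINO Inference Engine. As a rough illustration, here is a minimal sketch of a single inference call using the 2020.x-era Inference Engine Python API; the model files and the random input tensor are placeholder assumptions, not AIXPRT’s actual harness code.

```python
# Minimal sketch: an image-classification inference call with the
# OpenVINO Inference Engine Python API (2020.x era). The model files
# and random input below are placeholders, not AIXPRT's harness code.
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
# Hypothetical Intermediate Representation (IR) files for a classifier.
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# Look up the network's input layout (batch, channels, height, width).
input_blob = next(iter(net.input_info))
_, c, h, w = net.input_info[input_blob].input_data.shape

# A random tensor stands in for a preprocessed image batch.
image = np.random.rand(1, c, h, w).astype(np.float32)
result = exec_net.infer(inputs={input_blob: image})
print({name: out.shape for name, out in result.items()})
```

Any performance differences between OpenVINO versions would surface in calls like these, which is why we’re benchmarking the new packages across a variety of platforms before release.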

We don’t yet know what impact the new version will have on AIXPRT OpenVINO test results. A substantial part of the development process will involve testing the new packages on a variety of platforms to see how performance changes. We’ll communicate our findings here in the blog, so AIXPRT testers will know what to expect.

Thankfully, the modular nature of the AIXPRT installation packages ensures that we don’t need to revise the entire AIXPRT suite every time a toolkit update goes live. If you test with only TensorFlow, TensorRT, or MXNet, or a combination of those toolkits, this update won’t affect your testing.

We’re not ready to commit to a release date for the new build, but we anticipate releasing it in September.

If you have any questions about AIXPRT or OpenVINO, please let us know!

Justin

Make confident choices about your company’s future tech with the XPRTs

Durham, NC, April 23, 2020 — Principled Technologies and the BenchmarkXPRT Development Community have released a video on the benefits of consulting the XPRTs before committing to new technology purchases.

AIXPRT, part of the XPRT battery of benchmark tools, runs image-classification and object-detection workloads to determine how well tech handles AI and machine learning.

CloudXPRT, another XPRT tool, accurately measures the end-to-end performance of modern, cloud-first applications deployed on infrastructure as a service (IaaS) platforms – allowing corporate decision-makers to select the best configuration for every objective.

All of the XPRTs give companies the real-world information necessary to determine which prospective future tech purchases will deliver – and which will disappoint.

According to the video, “The XPRTs don’t just look at specs and features; they gauge a technology solution’s real-world performance and capabilities. So you know whether switching environments is worth the investment. How well solutions support machine learning and other AI capabilities. If next-gen releases beat their rivals or fall behind the curve.”

Watch the video at facts.pt/pyt88k5. To learn more about how AIXPRT, CloudXPRT, WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, and HDXPRT can help IT decision-makers make confident choices about future purchases, go to www.BenchmarkXPRT.com.

About Principled Technologies, Inc.
Principled Technologies, Inc. is the leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.principledtechnologies.com.

Company Contact
Justin Greene
BenchmarkXPRT Development Community
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
BenchmarkXPRTsupport@PrincipledTechnologies.com

Adapting to a changing tech landscape

The BenchmarkXPRT Development Community started almost 10 years ago with the development of the High Definition Experience & Performance Ratings Test, also known as HDXPRT. Back then, we distributed the benchmark to interested parties by mailing out physical DVDs. We’ve come a long way since then, as testers now freely and easily access six XPRT benchmarks from our site and major app stores.

Developers, hardware manufacturers, and tech journalists—the core group of XPRT testers—work within a constantly changing tech landscape. Because of our commitment to providing those testers with what they need, the XPRTs grew as we developed additional benchmarks to expand the reach of our tools from PCs to servers and all types of notebooks, Chromebooks, and mobile devices.

As today’s tech landscape evolves at a rapid pace, our desire to play an active role in emerging markets drives us to expand our testing capabilities into areas like machine learning (AIXPRT) and cloud-first applications (CloudXPRT). While these new technologies carry the potential to increase efficiency, improve quality, and boost the bottom line for companies around the world, it’s often difficult to decide where and how to invest in new hardware or services. The ever-present need for relevant and reliable data is the reason many organizations use the XPRTs to help make confident choices about their company’s future tech.

We just released a new video that helps to explain what the XPRTs provide and how they can play an important role in a company’s tech purchasing decisions. We hope you’ll check it out!

We’re excited about the continued growth of the XPRTs, and we’re eager to meet the challenges of adapting to the changing tech landscape. If you have any questions about the XPRTs or suggestions for future benchmarks, please let us know!

Justin

News about the CloudXPRT source code

For much of the BenchmarkXPRT Development Community’s history, we offered community members exclusive access to XPRT benchmark source code. Back in February, we started to experiment with a different approach when we made the AIXPRT source code publicly available on GitHub. By allowing anyone who is interested in AIXPRT to download and review the source code, we reinforced our commitment to making the XPRT development process as transparent as possible. We also want the XPRTs to continue to contribute to fair practices in the benchmarking world, and we believe that expanded access to the source code encourages the kind of constructive feedback that furthers this goal.

The feedback we received after publishing the AIXPRT source code was very positive; thank you to all who reached out. Because of that feedback and our desire to increase openness, we’ve decided to use standard open source licenses to make the CloudXPRT source code available to the public when we release the first build, or shortly thereafter. As with AIXPRT, folks will be able to download the CloudXPRT source code and submit potential workloads for future consideration, but we reserve the right to control derivative works.

We’ll share more information about the first CloudXPRT release and its source code in the coming weeks. If you have any questions about XPRT source code, please ask. We also welcome any thoughts about using this approach to release the source code of other XPRT benchmarks. As always, feel free to comment below or reach out by email.

Justin

More details about CloudXPRT’s workloads

About a month ago, we posted an update on the CloudXPRT development process. Today, we want to provide more details about the three workloads we plan to offer in the initial preview build:

  • In the web-tier microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
  • The machine learning (ML) training workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. In the context of CloudXPRT, the workload evaluates how well an IaaS stack enables XGBoost to speed and optimize model training, and it reports latency and throughput rates (see the sketch after this list). As with the web-tier microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
  • The AI-themed container scaling workload starts up a container and uses a version of the AIXPRT harness to launch Wide and Deep recommender system inference tasks in the container. Each container represents a fixed amount of work, and as the number of Wide and Deep jobs increases, CloudXPRT launches more containers in parallel to handle the load. The workload reports both the startup time for the containers and the Wide and Deep throughput results. Testers can use this workload to compare container startup time between IaaS stacks; optimize the balance between resource allocation, capacity, and throughput on a given stack; and confirm whether a given stack is suitable for specific SLAs.
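To make the ML training workload’s metrics concrete, here is a minimal sketch of timing an XGBoost training run and deriving a simple throughput figure. The synthetic dataset, training parameters, and reported numbers are illustrative assumptions, not CloudXPRT’s actual harness code.

```python
# Minimal sketch: measure XGBoost training time, in the spirit of the
# CloudXPRT ML training workload. Everything here (data, parameters,
# metrics) is illustrative, not CloudXPRT's harness code.
import time

import numpy as np
import xgboost as xgb

# Synthetic binary-classification data standing in for a real dataset.
rng = np.random.default_rng(42)
X = rng.standard_normal((100_000, 50))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "max_depth": 6, "eta": 0.3}

start = time.perf_counter()
booster = xgb.train(params, dtrain, num_boost_round=100)
elapsed = time.perf_counter() - start

# Report training latency and a simple throughput figure
# (training rows processed per second).
print(f"Training time: {elapsed:.2f} s")
print(f"Throughput:    {X.shape[0] / elapsed:,.0f} rows/s")
```

The interesting comparison in CloudXPRT is how the same training job’s latency and throughput change from one IaaS stack to another.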

We’re continuing to move forward with CloudXPRT development and testing and hope to add more workloads in subsequent builds. Like most organizations, we’ve adjusted our work patterns to adapt to the COVID-19 situation. While this has slowed our progress a bit, we still hope to release the CloudXPRT preview build in April. If anything changes, we’ll let folks know as soon as possible here in the blog.

If you have any thoughts or comments about CloudXPRT workloads, please feel free to contact us.

Justin

The Introduction to AIXPRT white paper is now available!

Today, we published the Introduction to AIXPRT white paper. The paper serves as an overview of the benchmark and a consolidation of AIXPRT-related information that we’ve published in the XPRT blog over the past several months. For folks who are completely new to AIXPRT and veteran testers who need to brush up on pre-test configuration procedures, we hope this paper will be a quick, one-stop reference that helps reduce the learning curve.

The paper describes the AIXPRT toolkits and workloads, and it explains how to adjust key test parameters (batch size, level of precision, number of concurrent instances, and default number of requests), use alternate test configuration files, understand and submit results, and access the source code.
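To give a sense of what those parameters control, here is a purely hypothetical configuration snippet; the key names and values below are illustrative assumptions, not AIXPRT’s actual configuration-file schema, which the paper documents.

```python
# Hypothetical example of the kinds of test parameters the paper covers.
# These key names are assumptions for illustration; consult the
# Introduction to AIXPRT paper for the real configuration-file format.
example_test_config = {
    "batch_sizes": [1, 2, 4, 8],   # inputs processed per inference pass
    "precision": "fp32",           # numeric precision, e.g., fp32 or int8
    "concurrent_instances": 2,     # inference instances running in parallel
    "total_requests": 100,         # default number of inference requests
}
```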

We hope that Introduction to AIXPRT will prove to be a valuable resource. Moving forward, readers will be able to access the paper from the Helpful Info box on AIXPRT.com and the AIXPRT section of our XPRT white papers page. If you have any questions about AIXPRT, please let us know!

Justin
