Tag Archives: benchmark

The XPRTs in 2020: a year to remember

As 2020 comes to a close, we want to take this opportunity to review another productive year for the XPRTs. Readers of our newsletter are familiar with the stats and updates we include each month, but for our blog readers who don’t receive the newsletter, we’ve compiled some highlights below.

Benchmarks
In the past year, we released CrXPRT 2 and updated MobileXPRT 3 for testing on Android 11 phones. The biggest XPRT benchmark news was the release of CloudXPRT v1.0 and v1.01. CloudXPRT, our newest benchmark, can accurately measure the performance of cloud applications deployed on modern infrastructure-as-a-service (IaaS) platforms, whether those platforms are paired with on-premises, private cloud, or public cloud deployments.

XPRTs in the media
Journalists, advertisers, and analysts referenced the XPRTs thousands of times in 2020, and it’s always rewarding to know that the XPRTs have proven to be useful and reliable assessment tools for technology publications such as AnandTech, ArsTechnica, Computer Base, Gizmodo, HardwareZone, Laptop Mag, Legit Reviews, Notebookcheck, PCMag, PCWorld, Popular Science, TechPowerUp, Tom’s Hardware, VentureBeat, and ZDNet.

Downloads and confirmed runs
So far in 2020, we’ve had more than 24,200 benchmark downloads and 164,600 confirmed runs. Our most popular benchmark, WebXPRT, just passed 675,000 runs since its debut in 2013! WebXPRT continues to be a go-to, industry-standard performance benchmark for OEM labs, vendors, and leading tech press outlets around the globe.

Media, publications, and interactive tools
Part of our mission with the XPRTs is to produce materials that help testers better understand the ins and outs of benchmarking in general and the XPRTs in particular. To help achieve this goal, we published a range of new media, publications, and interactive tools in 2020, including white papers and the AIXPRT learning tool described below.

We’re thankful for everyone who has used the XPRTs, joined the community, and sent questions and suggestions throughout 2020. This will be our last blog post of the year, but there’s much more to come in 2021. Stay tuned in early January for updates!

Justin

The AIXPRT learning tool is now live (and a CloudXPRT version is on the way)!

We’re happy to announce that the AIXPRT learning tool is now live! We designed the tool to serve as an information hub for common AIXPRT topics and questions, and to help tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT find the answers they need in as little time as possible.

The tool features four primary areas of content:

  • The Q&A section provides quick answers to the questions we receive most from testers and the tech press.
  • The AIXPRT: the basics section describes specific topics such as the benchmark’s toolkits, networks, workloads, and hardware and software requirements.
  • The testing and results section covers the testing process, metrics, and how to publish results.
  • The AI/ML primer provides brief, easy-to-understand definitions of key AI and ML terms and concepts for those who want to learn more about the subject.

The screenshots below show the tool's home screen and, as an example of how the popup information sections appear, the Inference tasks (workloads) entry in the AI/ML primer section.

We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.

If you have any questions about the tool, please let us know!

Justin

Thinking ahead to the next HDXPRT

We’re currently formulating our 2021 development roadmap for the XPRTs. In addition to planning CloudXPRT and WebXPRT updates, we’re discussing the possibility of releasing HDXPRT 5 in 2021. It’s hard for me to believe, but it’s been about two and a half years since we started work on HDXPRT 4, and February 2021 will mark two years since the first HDXPRT 4 release. Windows PCs are more powerful than ever, so it’s a good time to talk about how we can enhance the benchmark’s ability to measure how well the latest systems handle real-world media technologies and applications.

When we plan a new version of an XPRT benchmark, one of our first steps is updating the benchmark’s workloads so that they will remain relevant in years to come. We almost always update application content, such as photos and videos, to contemporary file resolutions and sizes. For example, we added both higher-resolution photos and a 4K video conversion task in HDXPRT 4. Are there specific types of media files that you think would be especially relevant to high-performance media tasks over the next few years?

Next, we will assess the suitability of the real-world trial applications that the photo editing, music editing, and video conversion test scenarios use. Currently, these are Adobe Photoshop Elements, Audacity, CyberLink MediaEspresso, and HandBrake. Can you think of other applications that belong in a high-performance media processing benchmark?

In HDXPRT 4, we gave testers the option to target a system’s discrete graphics card during the video conversion workload. Has this proven useful in your testing? Do you have suggestions for new graphics-oriented workloads?

We’ll also strive to make the UI more intuitive, to simplify installation, and to reduce the size of the installation package. What elements of the current UI do you find especially useful or think we could improve? 

We welcome your answers to these questions and any additional suggestions or comments on HDXPRT 5. Send them our way!

Justin

We’re working on an AIXPRT learning tool

For anyone interested in learning more about AIXPRT, the Introduction to AIXPRT white paper provides detailed information about its toolkits, workloads, system requirements, installation, test parameters, and results. However, for AIXPRT.com visitors who want to find the answers to specific AIXPRT-related questions quickly, a white paper can be daunting.

Because we want tech journalists, OEM lab engineers, and everyone who is interested in AIXPRT to be able to find the answers they need in as little time as possible, we’ve decided to develop a new learning tool that will serve as an information hub for common AIXPRT topics and questions.

The new learning tool will be available online through our site. It will offer quick bites of information about the fundamentals of AIXPRT, why the benchmark matters, the benefits of AIXPRT testing and results, machine learning concepts, key terms, and practical testing concerns.

We’re still working on the tool’s content and design. Because we’re designing this tool for you, we’d love to hear the topics and questions you think we should include. If you have any suggestions, please let us know!

Justin

Coming soon: a white paper about the CloudXPRT web microservices workload

Soon, we’ll be expanding our portfolio of CloudXPRT resources with a white paper that focuses on the benchmark’s web microservices workload. While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper will discuss the workload in much greater detail.

In addition to providing practical information about the web microservices installation packages and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.

As we’ve noted, CloudXPRT is one of the more complex tools in the XPRT family, with no shortage of topics to explore further. We plan to publish a companion overview for the data analytics workload, and possible future topics include the impact of adjusting specific test configuration options, recommendations for results reporting, and methods for analysis.

We hope that the upcoming Overview of the CloudXPRT Web Microservices Workload paper will serve as a go-to resource for CloudXPRT testers, and will answer any questions you have about the workload. Once it goes live, we’ll provide links in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.

If you have any questions, please let us know!

Justin

Potential web technology additions for WebXPRT 4

A few months ago, we invited readers to send in their thoughts and ideas about web technologies and workload scenarios that may be a good fit for the next WebXPRT. We’d like to share a few of those ideas today, and we invite you to continue to send your feedback. We’re approaching the time when we need to begin firming up plans for a WebXPRT 4 development cycle in 2021, but there’s still plenty of time for you to help shape the future of the benchmark.

One of the most promising ideas for WebXPRT 4 is the potential addition of one or more WebAssembly (WASM) workloads. WASM is a low-level, binary instruction format that works across all modern browsers. It offers web developers a great deal of flexibility and provides the speed and efficiency necessary for running complex client applications in the browser. WASM enables a variety of workload scenario options, including gaming, video editing, VR, virtual machines, image recognition, and interactive educational content.
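
To make the idea more concrete, here is a minimal sketch of how a browser-based benchmark might load and time a WebAssembly module. The module path (workloads/photo-filter.wasm) and the exported sharpenImage() function are hypothetical placeholders for illustration, not part of any existing or planned WebXPRT workload.

```ts
// Minimal sketch: loading and timing a hypothetical WebAssembly workload module.
// The module URL and the exported sharpenImage() function are illustrative only.
async function runWasmWorkload(): Promise<number> {
  const response = await fetch("workloads/photo-filter.wasm"); // hypothetical module
  const bytes = await response.arrayBuffer();

  // Compile and instantiate the module; the second argument is the (empty) import object.
  const { instance } = await WebAssembly.instantiate(bytes, {});
  const sharpen = instance.exports.sharpenImage as () => void;

  const start = performance.now();
  sharpen();                          // run the compiled workload function
  return performance.now() - start;   // elapsed time in milliseconds
}

runWasmWorkload().then((ms) =>
  console.log(`WASM workload completed in ${ms.toFixed(1)} ms`)
);
```

A real workload would run the timed section several times and report a median, but the basic pattern of fetching, instantiating, and calling into a compiled module would look much like this.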

In addition, the Chrome team is dropping Portable Native Client (PNaCl) support in favor of WASM, which is why we had to remove a PNaCl workload when updating CrXPRT 2015 to CrXPRT 2. We generally model CrXPRT workloads on existing WebXPRT workloads, so familiarizing ourselves with WASM could ultimately benefit more than one XPRT benchmark.

We are also considering adding a web-based machine learning workload with TensorFlow for JavaScript (TensorFlow.js). TensorFlow.js offers pre-trained models for a wide variety of tasks including image classification, object detection, sentence encoding, natural language processing, and more. We could also use this technology to enhance one of WebXPRT’s existing AI-themed workloads, such as Organize Album using AI or Encrypt Notes and OCR Scan.
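
As a rough sketch of what such a workload could look like, the example below uses TensorFlow.js and its pre-trained MobileNet model to classify an image in the browser. The image element ID ("album-photo") is a hypothetical placeholder, and nothing here reflects an actual WebXPRT design decision.

```ts
// Minimal sketch: browser-based image classification with TensorFlow.js and the
// pre-trained MobileNet model. The "album-photo" element ID is hypothetical.
import "@tensorflow/tfjs";
import * as mobilenet from "@tensorflow-models/mobilenet";

async function classifyAlbumPhoto(): Promise<void> {
  const img = document.getElementById("album-photo") as HTMLImageElement;

  const model = await mobilenet.load();          // download the pre-trained model

  const start = performance.now();
  const predictions = await model.classify(img); // top classes with probabilities
  const elapsed = performance.now() - start;

  console.log(`Inference took ${elapsed.toFixed(1)} ms`);
  predictions.forEach((p) =>
    console.log(`${p.className}: ${(p.probability * 100).toFixed(1)}%`)
  );
}

classifyAlbumPhoto();
```

Timing only the classify() call, as above, separates model download from inference; a benchmark workload would also likely warm up the model before measuring.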

Other ideas include using a WebGL-based workload to target GPUs and investigating ways to incorporate a battery life test. What do you think? Let us know!

Justin
