
CloudXPRT version 1.0 is here!

The CloudXPRT Preview period has ended, and CloudXPRT version 1.0 installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! Like the Preview build, CloudXPRT version 1.0 includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.

Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. On CloudXPRT.com, the Helpful Info box contains resources such as links to the Introduction to CloudXPRT white paper, the CloudXPRT master readme, and the CloudXPRT GitHub repository.

The GitHub repository also contains the CloudXPRT source code, which is freely available for testers to download and review.

Performance results from this release are comparable to performance results from the CloudXPRT Preview build. Testers who wish to publish results on CloudXPRT.com can find more information about the results submission and review process in the blog. We post the monthly results cycle schedule on the results submission page.

We’re thankful for all the input we received during the CloudXPRT development process and Preview period. If you have any questions about CloudXPRT, please let us know.

Justin

Improving the CloudXPRT results viewer

This week, we made some changes to the CloudXPRT results viewer that we think will simplify the results-browsing experience and allow visitors to more quickly and easily find important data.

The first set of changes involves how we present test system information in the main results table and on the individual results details pages. We realized that there was potential for confusion around the “CPU” and “Number of nodes” categories. We removed those and created the following new fields: “Cluster components,” “Nodes (work + control plane),” and “vCPUs (work + control plane).” These new categories better describe test configurations and clarify how many CPUs engage with the workload.

The second set of changes involves the number of data points that we list in the table for each web microservices test run. Previously, we published a unique entry for each level of concurrency that a test run recorded. If a run scaled to 32 concurrent instances, we presented the data for each concurrency level in its own row. This approach showed the performance curve as the workload scaled up during a single test, but it made it harder for visitors to identify the best throughput results from an individual run. We now consolidate the results from a complete test run into a single row, highlighting only the maximum number of successful requests (throughput). All the raw data from each run remains available for download on the details page for each result, but visitors no longer have to wade through that data to find the configuration’s main “score.”
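For readers who want to see what that consolidation looks like in practice, here’s a minimal Python sketch. The column names and data are hypothetical and do not reflect the actual CloudXPRT results file format; the sketch simply shows one way to collapse per-concurrency-level rows into a single best-throughput row per run.

```python
import pandas as pd

# Hypothetical raw data: one row per concurrency level in a single test run.
# The real CloudXPRT results files may use different field names and structure.
raw = pd.DataFrame({
    "run_id":               ["run-01"] * 4,
    "concurrent_instances": [4, 8, 16, 32],
    "successful_requests":  [1200, 2300, 4100, 3900],
})

# Keep only the row with the highest throughput for each run, mirroring how the
# results viewer now shows one consolidated row per complete test run.
best = raw.loc[raw.groupby("run_id")["successful_requests"].idxmax()].reset_index(drop=True)
print(best)
```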

We view the development of the CloudXPRT results viewer as an ongoing process. As we add results and receive feedback from testers about the data presentation formats that work best for them, we’ll continue to add more features and tweak existing ones to make them as useful as possible. If you have any questions about CloudXPRT results or the results viewer, please let us know!

Justin

The CloudXPRT results viewer is live

We’re happy to announce that the CloudXPRT results viewer is now live with results from the first few rounds of CloudXPRT Preview testing we conducted in our lab. Here are some tips to help you to navigate the viewer more efficiently:

  • Click the tabs at the top of the table to switch from Data analytics workload results to Web microservices workload results.
  • Click the header of any column to sort the data on that variable. Click once to sort A to Z, or double-click to sort Z to A.
  • Click the link in the Source/details column to visit a detailed page for that result, where you’ll find additional test configuration and system hardware information and the option to download results files.
  • By default, the viewer displays eight results per page, which you can change to 16, 48, or Show all.
  • The free-form search field above the table lets you filter for variables such as cloud service or processor.

We’ll be adding more features, including expanded filtering and sorting mechanisms, to the results viewer in the near future. We’re also investigating ways to present multiple data points in a graph format, which will allow visitors to examine performance behavior curves in conjunction with factors such as concurrency and resource utilization.

We welcome your CloudXPRT results submissions! To learn about the new submission and review process we’ll be using, take a look at last week’s blog.

If you have any questions or suggestions for ways that we can improve the results viewer, please let us know!

Justin

The CloudXPRT Preview results submission schedule

A few weeks ago, we shared the general framework of the periodic results publication process we will use for CloudXPRT. Now that the CloudXPRT Preview is live, we’re ready to share more details about the results review group; the submission, review, and publication cycles; and the schedule for the first three months.

The results review group
The CloudXPRT results review group will serve as a sanity check and a forum for comments on each month’s submissions. All registered BenchmarkXPRT Development Community members who wish to participate in the review process can join the group by contacting us via email. We’ll confirm receipt of your request and add you to the review group mailing list. If you’re not yet a community member and would like to join the review group, contact us and we’ll help you become a member.

The submission, review, and publication cycle
We will update the CloudXPRT results database once a month on a published schedule. Testers can submit results through the CloudXPRT results submission page at any time, but we will close submissions for each review cycle two weeks prior to that cycle’s publication date. One week prior to each publication date, we will email details of that month’s submissions to the results review group, along with the deadline for sending post-publication feedback.

Schedule for the first three publication cycles
We will publish results to the database on the last business day of each month and will close the submission window at 11:59 PM on the business day that falls two weeks earlier (with occasional adjustments for holidays). The schedule will be available at least six months in advance on CloudXPRT.com.
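For illustration, here’s a minimal Python sketch of that date arithmetic, assuming only the simple rule described above (the last business day of the month for publication, and the business day two weeks earlier for the submission deadline). It ignores the 11:59 PM cutoff and the occasional holiday adjustments, and it isn’t part of any official CloudXPRT tooling.

```python
import calendar
from datetime import date, timedelta

def last_business_day(year: int, month: int) -> date:
    # Start at the last calendar day of the month and step back past weekends.
    d = date(year, month, calendar.monthrange(year, month)[1])
    while d.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        d -= timedelta(days=1)
    return d

def submission_deadline(publication: date) -> date:
    # Two weeks before the publication date, stepping back if that lands on a weekend.
    d = publication - timedelta(days=14)
    while d.weekday() >= 5:
        d -= timedelta(days=1)
    return d

for month in (7, 8, 9):
    pub = last_business_day(2020, month)
    sub = submission_deadline(pub)
    print(f"{pub:%B}: submit by {sub:%A %m/%d/%y}, published {pub:%A %m/%d/%y}")
```

Running this reproduces the July, August, and September dates listed below.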

The schedule for the first three cycles is as follows:

July
Submission deadline: Friday 7/17/20
Publication date: Friday 7/31/20
August
Submission deadline: Monday 8/17/20
Publication date: Monday 8/31/20
September
Submission deadline: Wednesday 9/16/20
Publication date: Wednesday 9/30/20

As a reminder, members of the tech press, vendors, and other testers are free to publish CloudXPRT results at any time. We may choose to add such results to our database on the monthly publication date, after first vetting them.

We look forward to reviewing the first batch of results! If you have any questions about CloudXPRT or the results submission or review process, let us know!

Justin

The CloudXPRT Preview is here!

The CloudXPRT Preview installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! The CloudXPRT Preview includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.

Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and hardware and software requirements for each, click the package’s readme link. The Helpful Info box on CloudXPRT.com also contains resources such as links to the CloudXPRT master readme and the CloudXPRT GitHub repository. Soon, we will add a link to the CloudXPRT Preview source code, which will be freely available for testers to download and review.

All interested parties may now publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We anticipate adding the first set of those within the coming week.

We’re thankful for all the input we received during the initial CloudXPRT development process, and we welcome feedback on the CloudXPRT Preview. If you have any questions about CloudXPRT, or would like to share your comments and suggestions, please let us know.

Justin

The CloudXPRT Preview is almost here

We’re happy to announce that we’re planning to release the CloudXPRT Preview next week! After we take the CloudXPRT Preview installation and source code packages live, they will be freely available to the public via CloudXPRT.com and the BenchmarkXPRT GitHub repository. All interested parties will be able to publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We’ll share more information about that process and the corresponding dates here in the blog in the coming weeks.

We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We hope to add that workload to the CloudXPRT suite in the near future, and are still conducting testing to make sure we get it right.

If you missed the earlier workload-related post, here are the details about the two workloads that will be in the preview build:

  • In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
  • The data analytics workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds. (Illustrative sketches of both workload concepts follow this list.)
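To make the web microservices description a little more concrete, here’s a minimal, hypothetical Python sketch of the kind of Monte Carlo stock-price simulation that workload performs. It is not CloudXPRT’s actual implementation; the model, parameter names, and values are all illustrative assumptions.

```python
import numpy as np

def simulate_final_prices(spot, annual_return, volatility, days=252, paths=10_000, seed=0):
    """Simulate end-of-horizon stock prices with a simple geometric Brownian motion model."""
    rng = np.random.default_rng(seed)
    dt = 1.0 / days
    # Draw daily log returns for every path and every trading day, then compound them.
    log_returns = rng.normal((annual_return - 0.5 * volatility**2) * dt,
                             volatility * np.sqrt(dt),
                             size=(paths, days))
    return spot * np.exp(log_returns.sum(axis=1))

# Hypothetical candidates: ticker -> (spot price, expected annual return, volatility).
candidates = {"AAA": (100, 0.08, 0.20), "BBB": (50, 0.12, 0.35), "CCC": (75, 0.05, 0.15)}
expected = {t: simulate_final_prices(*p).mean() for t, p in candidates.items()}
for ticker, price in sorted(expected.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{ticker}: simulated mean price {price:.2f}")
```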
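Along the same lines, here’s a minimal sketch of timing XGBoost model training in Python. The synthetic dataset, parameters, and timing approach are illustrative assumptions, not the data analytics workload’s actual code.

```python
import time

import numpy as np
import xgboost as xgb

# Illustrative synthetic regression data; the real workload uses its own dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 20))
y = X @ rng.normal(size=20) + rng.normal(scale=0.1, size=100_000)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "reg:squarederror", "max_depth": 6, "tree_method": "hist"}

start = time.perf_counter()
booster = xgb.train(params, dtrain, num_boost_round=100)
elapsed = time.perf_counter() - start
print(f"Training time: {elapsed:.2f} s")
```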

The CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers with an opportunity to work with CloudXPRT directly and shape the future of the benchmark with their feedback. We hope that testers will take this opportunity to explore the tool and send us their thoughts on its structure, workload concepts and execution, ease of use, and documentation. That feedback will help us improve the relevance and accessibility of CloudXPRT testing and results for years to come.

If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.

Justin
