Today, we published the Introduction to CloudXPRT white paper. The paper provides an overview of our latest benchmark and consolidates CloudXPRT-related information that we’ve published in the XPRT blog over the past several months. It describes the CloudXPRT workloads, explains how to choose and download installation packages, walks through submitting CloudXPRT results for publication, and outlines possibilities for additional development in the coming months.
CloudXPRT is one of the most complex tools in the XPRT family, and there are more CloudXPRT-related topics to discuss than we could fit in this first paper. In future white papers, we will discuss in greater detail each of the benchmark workloads, the range of test configuration options, results reporting, and methods for analysis.
We hope that Introduction to CloudXPRT will give interested testers a solid foundation of understanding on which they can build. Moving forward, we will provide links to the paper in the Helpful Info box on CloudXPRT.com and in the CloudXPRT section of our XPRT white papers page.
If you have any questions about CloudXPRT, please let us know!
A few weeks ago, we shared the general framework of the periodic results publication process we will use for CloudXPRT. Now that the CloudXPRT Preview is live, we’re ready to share more details about the results review group; the submission, review, and publication cycles; and the schedule for the first three months.
The results review group

The CloudXPRT results review group will serve as a sanity check and a forum for comments on each month’s submissions. All registered BenchmarkXPRT Development Community members who wish to participate in the review process can join the group by contacting us via email. We’ll confirm receipt of your request and add you to the review group mailing list. If you’re not yet a community member but would like to join the review group, contact us and we’ll help you become a member.
The submission, review, and publication cycle

We will update the CloudXPRT results database once a month on a published schedule. Testers can submit results through the CloudXPRT results submission page at any time, but we will close submissions for each review cycle two weeks prior to that cycle’s publication date. One week prior to each publication date, we will email details of that month’s submissions to the results review group, along with the deadline for sending post-publication feedback.
Schedule for the first three publication cycles

We will publish results to the database on the last business day of each month and will close the submission window at 11:59 PM on the business day that falls two weeks earlier (with occasional adjustments for holidays). The schedule will be available at least six months in advance on CloudXPRT.com.
The schedule for the first three cycles is as follows:
July
Submission deadline: Friday 7/17/20
Publication date: Friday 7/31/20

August
Submission deadline: Monday 8/17/20
Publication date: Monday 8/31/20

September
Submission deadline: Wednesday 9/16/20
Publication date: Wednesday 9/30/20
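For testers who want to estimate future dates before we post them, here is a minimal Python sketch of the scheduling rule described above (publication on the last business day of the month, submission cutoff on the business day two weeks earlier). It is purely illustrative and does not account for the occasional holiday adjustments we mentioned.

# Illustrative sketch of the CloudXPRT results schedule rule: publication on the
# last business day of the month, submission cutoff 14 days earlier. Holiday
# adjustments are not handled.
import calendar
from datetime import date, timedelta

def last_business_day(year, month):
    # Start at the last calendar day and step back past any weekend days.
    day = date(year, month, calendar.monthrange(year, month)[1])
    while day.weekday() >= 5:  # 5 = Saturday, 6 = Sunday
        day -= timedelta(days=1)
    return day

def cycle_dates(year, month):
    publication = last_business_day(year, month)
    submission_deadline = publication - timedelta(days=14)
    return submission_deadline, publication

for month in (7, 8, 9):
    deadline, publication = cycle_dates(2020, month)
    print(f"{calendar.month_name[month]}: submit by {deadline:%a %m/%d/%y}, published {publication:%a %m/%d/%y}")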
As a reminder, members of the tech press, vendors, and other testers are free to publish CloudXPRT results at any time. We may choose to add such results to our database on the monthly publication date, after first vetting them.
We look forward to reviewing the first batch of results! If you have any questions about CloudXPRT or the results submission or review process, let us know!
The CloudXPRT Preview installation packages are now available on CloudXPRT.com and the BenchmarkXPRT GitHub repository! The CloudXPRT Preview includes two workloads: web microservices and data analytics (you can find more details about the workloads here). Testers can use metrics from the workloads to compare IaaS stack (both hardware and software) performance and to evaluate whether any given stack is capable of meeting SLA thresholds. You can configure CloudXPRT to run on local datacenter, Amazon Web Services, Google Cloud Platform, or Microsoft Azure deployments.
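As a rough illustration of how a tester might check results against an SLA threshold (this is a generic sketch with invented numbers, not CloudXPRT’s own reporting code), the following Python snippet finds the highest load level that stays within a latency limit:

# Generic illustration of checking benchmark results against an SLA threshold.
# The measurements below are invented; real CloudXPRT reports supply the actual
# throughput and latency figures for each load level.
SLA_LATENCY_MS = 3000  # hypothetical limit on 95th-percentile response time

# (concurrent requests, throughput in transactions/s, p95 latency in ms)
measurements = [
    (8, 110, 850),
    (16, 205, 1400),
    (32, 380, 2600),
    (64, 410, 4900),
]

best = None
for load, throughput, latency in measurements:
    if latency <= SLA_LATENCY_MS:
        best = (load, throughput)

if best:
    print(f"Highest load meeting the SLA: {best[0]} concurrent requests at {best[1]} transactions/s")
else:
    print("No measured load level met the SLA")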
Several different test packages are available for download from the CloudXPRT download page. For detailed installation instructions and the hardware and software requirements for each, click the package’s readme link. The Helpful Info box on CloudXPRT.com also contains resources such as links to the CloudXPRT master readme and the CloudXPRT GitHub repository. Soon, we will add a link to the CloudXPRT Preview source code, which will be freely available for testers to download and review.
All interested parties may now publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We anticipate adding the first set of those within the coming week.
We’re thankful for all the input we received during the initial CloudXPRT development process, and we welcome feedback on the CloudXPRT Preview. If you have any questions about CloudXPRT, or would like to share your comments and suggestions, please let us know.
We’re happy to announce that we’re planning to release the CloudXPRT Preview next week! After we take the CloudXPRT Preview installation and source code packages live, they will be freely available to the public via CloudXPRT.com and the BenchmarkXPRT GitHub repository.
All interested parties will be able to publish CloudXPRT results. However, until we begin the formal results submission and review process in July, we will publish only results we produce in our own lab. We’ll share more information about that process and the corresponding dates here in the blog in the coming weeks.
We do have one change to report regarding the CloudXPRT workloads we announced in a previous blog post. The Preview will include the web microservices and data analytics workloads (described below), but will not include the AI-themed container scaling workload. We are still conducting testing to make sure we get that workload right, and we hope to add it to the CloudXPRT suite in the near future.
If you missed the earlier workload-related post, here are the details about the two workloads that will be in the preview build:
In the web microservices workload, a simulated user logs in to a web application that does three things: provides a selection of stock options, performs Monte Carlo simulations with those stocks, and presents the user with options that may be of interest. The workload reports performance in transactions per second, which testers can use to directly compare IaaS stacks and to evaluate whether any given stack is capable of meeting service-level agreement (SLA) thresholds.
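To give a sense of the kind of computation the Monte Carlo step represents, here is a generic Python sketch of simulating a stock’s possible price paths; it is illustrative only and is not the workload’s actual code.

# Generic Monte Carlo illustration of the kind of stock-price simulation the
# web microservices workload performs; this is not CloudXPRT's implementation.
import math
import random

def expected_final_price(start_price, drift, volatility, days, paths=10000):
    # Average the ending price over many simulated random-walk price paths.
    total = 0.0
    for _ in range(paths):
        price = start_price
        for _ in range(days):
            # Daily return drawn as a lognormal-style random step.
            price *= math.exp(drift + volatility * random.gauss(0, 1))
        total += price
    return total / paths

estimate = expected_final_price(start_price=100.0, drift=0.0002, volatility=0.01, days=60)
print(f"Expected price after 60 trading days: {estimate:.2f}")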
The data analytics workload calculates XGBoost model training time. XGBoost is a gradient-boosting framework that data scientists often use for ML-based regression and classification problems. The purpose of the workload in the context of CloudXPRT is to evaluate how well an IaaS stack enables XGBoost to speed and optimize model training. The workload reports latency and throughput rates. As with the web microservices workload, testers can use this workload’s metrics to compare IaaS stack performance and to evaluate whether any given stack is capable of meeting SLA thresholds.
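For readers unfamiliar with XGBoost, the short Python example below shows one way to time a model-training run. The dataset and parameters are arbitrary stand-ins; CloudXPRT’s workload uses its own data and settings.

# Minimal illustration of measuring XGBoost training time. The synthetic dataset
# and parameters are placeholders, not those CloudXPRT uses.
import time
import numpy as np
import xgboost as xgb

# Small synthetic binary-classification dataset.
rng = np.random.default_rng(0)
X = rng.random((50000, 20))
y = (X[:, 0] + X[:, 1] > 1.0).astype(int)

dtrain = xgb.DMatrix(X, label=y)
params = {"objective": "binary:logistic", "max_depth": 6, "eta": 0.3}

start = time.perf_counter()
xgb.train(params, dtrain, num_boost_round=100)
elapsed = time.perf_counter() - start
print(f"Training time: {elapsed:.2f} seconds")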
The CloudXPRT Preview provides OEMs, the tech press, vendors, and other testers with an opportunity to work with CloudXPRT directly and shape the future of the benchmark with their feedback. We hope that testers will take this opportunity to explore the tool and send us their thoughts on its structure, workload concepts and execution, ease of use, and documentation. That feedback will help us improve the relevance and accessibility of CloudXPRT testing and results for years to come.
If you have any questions about the upcoming CloudXPRT Preview, please feel free to contact us.
A few weeks ago, we discussed error messages that a tester received when starting up CrXPRT 2 after a battery life test. CrXPRT 2 battery life tests require a full battery rundown, after which the tester plugs in the Chromebook, turns it on, opens the CrXPRT 2 app, and sees the test results. In the reported cases, the tester opened the app after a battery life test that seemed successful, but saw “N/A” or “test error” messages instead of the results they expected.
During discussions about the end-of-test system environment, we realized that some testers might be unclear about how to tell that the battery has fully run down. During the system idle portion of CrXPRT 2 battery life test iterations, the Chromebook screen turns black and a small cursor appears somewhere on the screen to let testers know the test is still in progress. We suspect that some testers, seeing the black screen but not the cursor, assume the system has shut down. Restarting CrXPRT 2 before the battery life test is complete could explain some of the “N/A” or “test error” messages users have reported.
If you see a black screen without a cursor, you can check whether the test is complete by looking for the small system power indicator light on the side or top of most Chromebooks. These lights are usually red, orange, or green, but regardless of color, if the light is lit, the test is still underway. When the light goes out, the test has ended, and you can plug the system in and power it on to see the results.
Note that some Chromebooks provide low-battery warnings onscreen. During CrXPRT 2 battery life runs, testers should ignore these.
We hope this clears up any confusion about how to know when a CrXPRT 2 battery life test has ended. If you’ve received repeated “N/A” or “test error” messages during your CrXPRT 2 testing and the information above does not help, please let us know!
Testers who have started using the XPRT benchmarks recently may not know about one of the free resources we offer. The XPRT results database currently holds more than 2,400 test results from over 90 sources, including major tech review publications around the world, OEMs, and independent testers. It offers a wealth of current and historical performance data across all the XPRT benchmarks and hundreds of devices.
We update the results database several times a week, adding selected results from our own internal lab testing, end-of-test user submissions, and reliable tech media sources. (After you run one of the XPRTs, you can choose to submit the results, but they don’t automatically appear in the database.)
Before adding a result, we evaluate whether the score makes sense and is consistent with general expectations, which we can do only when we have sufficient system information. For that reason, we encourage testers to disclose as much hardware and software information as possible when publishing or submitting a result.
We encourage visitors to our site to explore the XPRT results database. There are three primary ways to do so. The first is by visiting the main BenchmarkXPRT results browser, which displays results entries for all of the XPRT benchmarks in chronological order (see the screenshot below). Users can narrow the results by selecting a benchmark from the drop-down menu and can type values, such as vendor or the name of a tech publication, into the free-form filter field. For results we produced in our lab, clicking “PT” in the Source column takes you to a page with additional disclosure information for the test system. For sources outside our lab, clicking the source name takes you to the original article or review that contains the result.
The second way to access our published results is by visiting the results page for each individual XPRT benchmark. Go to the page of the benchmark you’re interested in, and look for the blue View Results button. Clicking it takes you to a page that displays results for only that benchmark. You can use the free-form filter on the page to filter those results, and you can use the Benchmarks drop-down menu to jump to the other individual XPRT results pages.
The third way to view information in our results database is with the WebXPRT Processor Comparison Chart. When we publish a new WebXPRT result, the score automatically appears in the processor comparison chart as well. For each processor, the chart shows a bar representing the average score. Mousing over the bar displays a popup indicating the number of WebXPRT results we currently have for that processor, and clicking the bar lets you view those results. You can change the number of results the chart displays on each page, and you can use the drop-down menu to toggle back and forth between the WebXPRT 3 and WebXPRT 2015 charts.
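Under the hood, the chart’s basic aggregation is simple: group the published results by processor, then report the average score and the result count for each. The Python sketch below illustrates that idea with made-up scores rather than actual database contents.

# Illustration of the grouping behind the processor comparison chart: average
# score and result count per processor. The scores below are made up.
from collections import defaultdict
from statistics import mean

results = [
    ("Processor A", 178), ("Processor A", 185), ("Processor A", 181),
    ("Processor B", 244), ("Processor B", 252),
]

scores_by_processor = defaultdict(list)
for processor, score in results:
    scores_by_processor[processor].append(score)

for processor, scores in scores_by_processor.items():
    print(f"{processor}: average {mean(scores):.1f} across {len(scores)} results")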
We hope you’ll take some time to browse the information in our results database. We welcome your feedback about what you’d like to see in the future and suggestions for improvement. Our database contains the XPRT scores that we’ve gathered, but we publish them as a resource for you. Let us know what you think!