Over the past few weeks, we've received questions about whether we require specific test configuration settings for official CloudXPRT results submissions. Currently, testers have the option to edit up to 12 configuration options for the web microservices workload and three configuration options for the data analytics workload. Not all of these options affect testing and results, but a few can drastically change key results metrics and how long it takes to complete a test. Because new CloudXPRT testers may not anticipate those outcomes, and because so many configuration permutations are possible, we've established a set of requirements for all future results submissions to our site. Please note that testers are still free to adjust all available configuration options, and to define service level agreement (SLA) settings, as they see fit for their own purposes. The requirements below apply only to results that testers want to submit for publication consideration on our site, and to any resulting comparisons.
Web microservices results submission requirement
Starting with the May results submission cycle, all web microservices results submissions must have the workload.cpurequests value, which designates the number of CPU cores the workload assigns to each pod, set to 4. Currently, the benchmark supports values of 1, 2, and 4, with a default of 4. While 1 or 2 CPU cores per pod may be more appropriate for relatively low-end systems or configurations with few vCPUs, a value of 4 is appropriate for most datacenter processors, and it often enables CSP instances to operate within the benchmark's default maximum 95th percentile latency SLA of 3,000 milliseconds.
In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
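As a quick sanity check before submitting, a short script along these lines can confirm that the setting meets the requirement. This is only a sketch: it assumes the value appears in config.json under a flat key named workload.cpurequests, and the exact file location and key structure may differ in your installation, so adjust it to match your setup.

import json

CONFIG_PATH = "config.json"  # adjust to the path of your web microservices config file
REQUIRED_CPU_REQUESTS = 4    # required value for results submitted for publication

with open(CONFIG_PATH) as f:
    config = json.load(f)

# Assumption: the setting is stored as a flat "workload.cpurequests" key.
# If your config nests it (for example, config["workload"]["cpurequests"]), adjust accordingly.
cpu_requests = config.get("workload.cpurequests")

if cpu_requests == REQUIRED_CPU_REQUESTS:
    print("workload.cpurequests is 4 -- OK to submit for publication consideration.")
else:
    print(f"workload.cpurequests is {cpu_requests}; set it to {REQUIRED_CPU_REQUESTS} before submitting.")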
Data analytics results submission requirement
Starting with the May results submission cycle, all data analytics results submissions must have the best reported performance (throughput_jobs/min) correspond to a 95th percentile SLA latency of 90 seconds or less. We have received submissions in which the throughput was extremely high, but the 95th percentile SLA latency was up to 10 times the 90 seconds that we recommend in the CloudXPRT documentation. High latency values may be acceptable for the unique purposes of individual testers, but they do not provide a good basis for comparison between clusters under test. For more information about configuration options for the data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
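When deciding which data analytics run to submit, a simple filter like the one below illustrates the rule: keep only runs whose 95th percentile latency is 90 seconds or less, then report the highest throughput among them. The result records here are hypothetical placeholders, not CloudXPRT output; substitute the throughput and latency values parsed from your own results files.

# Sketch: pick the best submittable data analytics result under the 90-second rule.
# The records below are hypothetical; replace them with values from your CloudXPRT output.
runs = [
    {"throughput_jobs_per_min": 52.0, "p95_latency_s": 75.0},
    {"throughput_jobs_per_min": 61.0, "p95_latency_s": 88.0},
    {"throughput_jobs_per_min": 70.0, "p95_latency_s": 240.0},  # exceeds the latency limit
]

MAX_P95_LATENCY_S = 90.0  # submissions must meet a 95th percentile latency of 90 seconds or less

eligible = [r for r in runs if r["p95_latency_s"] <= MAX_P95_LATENCY_S]
if eligible:
    best = max(eligible, key=lambda r: r["throughput_jobs_per_min"])
    print(f"Best submittable throughput: {best['throughput_jobs_per_min']} jobs/min "
          f"(95th percentile latency {best['p95_latency_s']} s)")
else:
    print("No run meets the 90-second 95th percentile latency requirement.")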
We will update the CloudXPRT documentation to make sure that testers know to use the default configuration settings if they plan to submit results for publication. If you have any questions about CloudXPRT or the CloudXPRT results submission process, please let us know.
Justin