Improving WebXPRT-related tools and resources

As we move forward with the WebXPRT 4 development process, we’re also working on ways to enhance the value of WebXPRT beyond simply updating the benchmark. Our primary goal is to expand and improve the WebXPRT-related tools and resources we offer at WebXPRT.com, starting with a new results viewer.

Currently, users can view WebXPRT results on our site in two primary ways, each of which has advantages and limitations.

The first way is the WebXPRT results viewer, which includes hundreds of PT-curated performance scores from a wide range of trusted sources and devices. Users can sort entries by device type, device name, device model, overall score, date of publication, and source. The viewer also includes a free-form filter for quick, targeted searches. While the results viewer contains a wealth of information, it does not let users view and compare multiple results at once in graphs or charts. It also offers no easy way to access the additional test-device details and subtest scores that we have for many entries.

The second way to view WebXPRT results on our site is the WebXPRT Processor Comparison Chart. The chart uses horizontal bar graphs to compare test scores from the hundreds of published results in our database, grouped by processor type. Users can click the average score for a processor to view all the WebXPRT results we currently have for that processor. The visual aspect of the chart and its automated “group by processor type” feature are very useful, but the chart lacks the sorting and filtering capabilities of the viewer, and navigating to the details of individual tests takes multiple clicks.
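For readers who want to perform a similar aggregation on their own collection of scores, here is a minimal Python sketch of the group-and-average step the chart automates. The records, field names, and scores are hypothetical placeholders, not the actual structure or contents of our results database.

```python
from collections import defaultdict

# Hypothetical result records; field names and scores are placeholders only.
results = [
    {"processor": "Processor A", "overall_score": 231},
    {"processor": "Processor A", "overall_score": 245},
    {"processor": "Processor B", "overall_score": 198},
]

def average_scores_by_processor(records):
    """Group results by processor type and average the overall scores."""
    groups = defaultdict(list)
    for record in records:
        groups[record["processor"]].append(record["overall_score"])
    return {proc: sum(scores) / len(scores) for proc, scores in groups.items()}

# Print processors from highest to lowest average, as a bar chart would order them.
for proc, avg in sorted(average_scores_by_processor(results).items(),
                        key=lambda item: item[1], reverse=True):
    print(f"{proc}: {avg:.1f}")
```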

In the coming months, we’ll be working to combine the best features of the results viewer and the comparison chart into a single powerful WebXPRT results database tool. We’ll also be investigating ways to add new visual aids, navigation controls, and data-handling capabilities to that tool. We want to provide a tool that helps testers and analysts access the wealth of WebXPRT test information in our database in an efficient, productive, and enjoyable way. If you have ideas or comments about what you’d like to see in a new WebXPRT results viewing tool, please let us know!

Justin

We welcome your CloudXPRT results!

We recently published a set of CloudXPRT Data Analytics and Web Microservices workload test results submitted by Quanta Computer, Inc. The Quanta submission is the first set of CloudXPRT results that we’ve published using the formal results submission and approval process. We’re grateful to the Quanta team for carefully following the submission guidelines, enabling us to complete the review process without a hitch.

If you are unfamiliar with the process, you can find general information about how we review submissions in a previous blog post. Detailed, step-by-step instructions are available on the results submission page. As a reminder for testers who are considering submitting results for July, the submission deadline is tomorrow, Friday July 16, and the publication date is Friday July 30. We list the submission and publication dates for the rest of 2021 below. Please note that we do not plan to review submissions in December, so if we receive results submissions after November 30, we may not publish them until the end of January 2022.

August
  • Submission deadline: Tuesday 8/17/21
  • Publication date: Tuesday 8/31/21

September
  • Submission deadline: Thursday 9/16/21
  • Publication date: Thursday 9/30/21

October
  • Submission deadline: Friday 10/15/21
  • Publication date: Friday 10/29/21

November
  • Submission deadline: Tuesday 11/16/21
  • Publication date: Tuesday 11/30/21

December
  • Submission deadline: N/A
  • Publication date: N/A

If you have any questions about the CloudXPRT results submission, review, or publication process, please let us know!

Justin

Feedback from the WebXPRT 4 tech press survey

In early May, we sent a survey to members of the tech press who regularly use WebXPRT in articles and reviews. We asked for their thoughts on several aspects of WebXPRT, as well as what they’d like to see in the upcoming fourth version of the benchmark. We also published the survey questions here in the blog, and invited experienced WebXPRT testers to send their feedback as well. We received some good responses to the survey, and for the benefit of our readers, we’ve summarized some of the key comments and suggestions below.

  • One respondent stated that WebXPRT is demanding enough to test performance, but if we want to simulate modern web usage, we should find the most up-to-date studies on common browser tasks and web technologies. This suggestion lines up with our intention to study the feasibility of adding a WebAssembly workload.
  • One respondent liked the fact that, unlike many other browser benchmarks, WebXPRT tests more than just JavaScript calculation speed.
  • One respondent suggested that we include a link to a WebXPRT white paper within the UI, or at least a guide describing what happens during each workload.
  • One respondent stated that they would like WebXPRT to automatically produce a results file on the local test system.
  • One respondent said that WebXPRT has a relatively long runtime for a browser benchmark, and they would prefer that the runtime not increase in WebXPRT 4.
  • We received no direct calls for a battery life test, because many testers already have scripts and/or methodologies in place for battery testing. However, one tester suggested adding the ability to loop the test so users can measure performance over varying lengths of time.
  • There were no requests to bring back any aspects of WebXPRT 2015 that we removed in WebXPRT 3.
  • There were no reports of significant connection issues when testing with WebXPRT.

We greatly appreciate the members of the tech press who responded to the survey. We’re still in the planning stages of WebXPRT 4, so there’s still time for anyone to send comments or ideas to benchmarkxprtsupport@principledtechnologies.com. We look forward to hearing from you!

Justin

The WebXPRT 4 tech press feedback survey

Device reviews in publications such as AnandTech, Notebookcheck, and PCMag, among many others, often feature WebXPRT test results, and we appreciate the many members of the tech press who use WebXPRT. As we move forward with the WebXPRT 4 development process, we’re especially interested in learning what longtime users would like to see in a new version of the benchmark.

In previous posts, we’ve asked people to weigh in on the potential addition of a WebAssembly workload or a battery life test. We’d also like to ask experienced testers some other test-related questions. To that end, this week we’ll be sending a WebXPRT 4 survey directly to members of the tech press who frequently publish WebXPRT test results.

Regardless of whether you are a member of the tech press, we invite you to participate by sending your answers to any or all of the questions below to benchmarkxprtsupport@principledtechnologies.com by the end of May.

  • Do you think WebXPRT 3’s selection of workload scenarios is representative of modern web tasks?
  • How do you think WebXPRT compares to other common browser-based benchmarks, such as JetStream, Speedometer, and Octane?
  • Are there web technologies that you’d like us to include in additional workloads?
  • Are you happy with the WebXPRT 3 user interface? If not, what UI changes would you like to see?
  • Are there any aspects of WebXPRT 2015 that we changed in WebXPRT 3 that you’d like to see us change back?
  • Have you ever experienced significant connection issues when testing with WebXPRT?
  • Given the array of workloads, do you think the WebXPRT runtime is reasonable? Would you mind if the average runtime were a bit longer?
  • Are there any other aspects of WebXPRT 3 that you’d like to see us change?

If you’d like to discuss any topics that we did not cover in the questions above, please feel free to include additional comments in your response. We look forward to hearing your thoughts!

Justin

WebXPRT passes the 750,000-run milestone!

We’re excited to see that users have successfully completed over 750,000 WebXPRT runs! If you’ve run WebXPRT in any of the more than 654 cities and 68 countries from which we’ve received complete test data—including newcomers Belize, Cambodia, Croatia, and Pakistan—we’re grateful for your help. We could not have reached this milestone without you!

As the chart below illustrates, WebXPRT use has grown steadily over the years. We now record, on average, almost twice as many WebXPRT runs in one month as we recorded in the entirety of our first year. In addition, with over 82,000 runs to date in 2021, there are no signs that growth is slowing.

Developing a new benchmark is never easy, and the obstacles multiply when you attempt to create a cross-platform benchmark, such as WebXPRT, that will run on a wide variety of devices. Establishing trust with the benchmarking community is another challenge. Transparency, consistency, and technical competency on our part are critical factors in building that trust, but the people who take time out of their busy schedules to run the benchmark for the first time also play a role. We thank all of the manufacturers, OEM labs, and members of the tech press who decided to give WebXPRT a try, and we look forward to your input as we continue to improve WebXPRT in the years to come. 

If you have any questions or comments about WebXPRT, we’d love to hear from you!

Justin

Default requirements for CloudXPRT results submissions

Over the past few weeks, we’ve received questions about whether we require specific test configuration settings for official CloudXPRT results submissions. Currently, testers can edit up to 12 configuration options for the web microservices workload and three configuration options for the data analytics workload. Not all configuration options affect testing and results, but a few of them can drastically change key results metrics and how long it takes to complete a test. Because new CloudXPRT testers may not anticipate those outcomes, and because so many configuration permutations are possible, we’ve established a set of requirements for all future results submissions to our site. Please note that testers are still free to adjust all available configuration options—and define service level agreement (SLA) settings—as they see fit for their own purposes. The requirements below apply only to results that testers want to submit for publication consideration on our site, and to any resulting comparisons.


Web microservices results submission requirement

Starting with the May results submission cycle, all web microservices results submissions must have the workload.cpurequests value, which designates the number of CPU cores the workload assigns to each pod, set to 4. Currently, the benchmark supports values of 1, 2, and 4, with 4 as the default. While 1 or 2 CPU cores per pod may be more appropriate for relatively low-end systems or configurations with few vCPUs, a value of 4 is appropriate for most datacenter processors, and it often enables CSP instances to operate within the benchmark’s default maximum 95th percentile latency SLA of 3,000 milliseconds.

In future CloudXPRT releases, we may remove the option to change the workload.cpurequests value from the config.json file and simply fix the value in the benchmark’s code to promote test predictability and reasonable comparisons. For more information about configuration options for the web microservices workload, please consult the Overview of the CloudXPRT Web Microservices Workload white paper.
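For testers who want a quick pre-submission sanity check, a short script along the lines of the sketch below can confirm the setting. This is only an illustration: CloudXPRT does not ship such a script, and the exact layout of config.json may differ from the lookups the sketch assumes.

```python
import json

# Sketch: confirm that config.json requests 4 CPU cores per pod before submitting
# web microservices results. The key layout below is an assumption; adjust the
# lookups to match the actual structure of your config.json.
with open("config.json") as f:
    config = json.load(f)

# Accept either a flat "workload.cpurequests" key or a nested "workload" object.
value = config.get("workload.cpurequests")
if value is None and isinstance(config.get("workload"), dict):
    value = config["workload"].get("cpurequests")

if value != 4:
    raise SystemExit(f"workload.cpurequests is {value}; submissions to our site require 4.")
print("workload.cpurequests is 4; the configuration meets the submission requirement.")
```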


Data analytics results submission requirement

Starting with the May results submission cycle, all data analytics results submissions must have the best reported performance (throughput_jobs/min) correspond to a 95th percentile SLA latency of 90 seconds or less. We have received submissions where the throughput was extremely high, but the 95th percentile SLA latency was up to 10 times the 90 seconds that we recommend in CloudXPRT documentation. High latency values may be acceptable for the unique purposes of individual testers, but they do not provide a good basis for comparison between clusters under test. For more information about configuration options for the data analytics workload, please consult the Overview of the CloudXPRT Data Analytics Workload white paper.
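For testers who want to check this before submitting, the sketch below shows one way to compute a 95th percentile latency from per-job latencies and compare it to the 90-second limit. The latency values and the nearest-rank percentile method are illustrative assumptions, not CloudXPRT output or its exact calculation.

```python
# Sketch: check whether a run's 95th percentile latency meets the 90-second limit.
def percentile(values, pct):
    """Nearest-rank percentile: the value at the requested rank in sorted order."""
    ordered = sorted(values)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

job_latencies_sec = [41.2, 55.8, 60.3, 72.9, 88.4, 93.1]  # placeholder data

p95 = percentile(job_latencies_sec, 95)
if p95 <= 90:
    print(f"95th percentile latency is {p95:.1f} s; the result qualifies for submission.")
else:
    print(f"95th percentile latency is {p95:.1f} s, which exceeds the 90 s limit.")
```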

We will update CloudXPRT documentation to make sure that testers know to use the default configuration settings if they plan to submit results for publication. If you have any questions about CloudXPRT or the CloudXPRT results submission process, please let us know.

Justin
