We’ve designed each of the XPRT benchmarks to assess the performance of specific types of devices in scenarios that mirror the ways consumers typically use those devices. While most XPRT benchmark users are interested in producing official overall scores, some members of the tech press have been using the XPRTs in unconventional, creative ways.
One example is the use of WebXPRT by Tweakers, a popular tech review site based in the Netherlands. (The site is in Dutch, so the Google Translate extension in Chrome was helpful for me.) When Tweakers uses WebXPRT to evaluate consumer hardware, they also measure each device’s sound output. Tweakers then publishes the LAeq metric for each device, giving readers a sense of how loud a system is, on average, while it performs common browser tasks.
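For readers unfamiliar with the metric, LAeq is the equivalent continuous A-weighted sound level: it condenses a fluctuating noise measurement over a test period T into the single steady level that would carry the same acoustic energy. In standard acoustics notation:

$$ L_{Aeq,T} = 10 \log_{10}\!\left( \frac{1}{T} \int_{0}^{T} \frac{p_{A}^{2}(t)}{p_{0}^{2}} \, dt \right) \ \text{dB} $$

where \(p_A(t)\) is the A-weighted sound pressure and \(p_0 = 20\,\mu\text{Pa}\) is the standard reference pressure. In practical terms, a higher LAeq over a WebXPRT run simply means the device was louder, on average, while it worked.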
Other labs and tech publications have also used the XPRTs in unusual ways, such as automating the benchmarks to run during screen burn-in tests or custom battery-life rundowns. If you’ve used any of the XPRT benchmarks in creative ways, please let us know! We are interested in learning more about your tests, and your experiences may provide helpful information that we can share with other XPRT users.
We developed our first cloud benchmark, CloudXPRT, to measure the performance of cloud applications deployed on modern infrastructure-as-a-service (IaaS) platforms. When we first released CloudXPRT in February of 2021, the benchmark included two test packages: a web microservices workload and a data analytics workload. Both supported on-premises and cloud service provider (CSP) testing with Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure.
CloudXPRT is our most complex benchmark, requiring sustained compatibility between many software components across multiple independent test environments. As vendors roll out updates for some components and stop supporting others, it’s inevitable that something will break. Since CloudXPRT’s launch, we’ve become aware of installation failures during CloudXPRT setup on Ubuntu virtual machines with GCP and Microsoft Azure. Additionally, while the web microservices workload continues to run in most instances with a few configuration tweaks and workarounds, the data analytics workload fails consistently due to compatibility issues with MinIO, Prometheus, and Kafka within the Kubernetes environment.
In response, we’re working to fix problems with the web microservices workload and bring all necessary components up to date. We’re developing an updated test package that will work on Ubuntu 22.04, using Kubernetes v1.23.7 and Kubespray v2.18.1. We’re also updating the Kubernetes Metrics Server API from v1beta1 to v1 and will incorporate some minor script changes. Our goal is to ensure successful installation and testing with the on-premises and CSP platforms that we supported when we first launched CloudXPRT.
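To give a sense of the scale of the Metrics Server change: when a Kubernetes API graduates from v1beta1 to v1, client code typically needs only its requested version string updated. The sketch below is illustrative, not CloudXPRT’s actual scripts; it assumes the official kubernetes Python client and a kubeconfig pointing at a cluster whose Metrics Server serves the v1 metrics API described above.

```python
# Illustrative sketch (not CloudXPRT code) of the v1beta1-to-v1 change:
# reading node metrics through the Kubernetes custom-objects API.
# Assumes the official `kubernetes` Python client and a cluster whose
# Metrics Server exposes metrics.k8s.io/v1.
from kubernetes import client, config

config.load_kube_config()            # use the current kubeconfig context
api = client.CustomObjectsApi()

# Before the update, scripts requested the beta version:
#   api.list_cluster_custom_object("metrics.k8s.io", "v1beta1", "nodes")
# After the update, only the version string changes:
metrics = api.list_cluster_custom_object("metrics.k8s.io", "v1", "nodes")

# Print per-node CPU and memory usage as reported by Metrics Server.
for node in metrics["items"]:
    print(node["metadata"]["name"],
          node["usage"]["cpu"],
          node["usage"]["memory"])
```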
We are currently focusing on the web microservices workload for two reasons. First, more users have downloaded it than the data analytics workload. Second, we think we have a clear path to success. Our plan is to publish the updated web microservices test package and see what feedback and interest we receive from users about a possible data analytics refresh. The existing data analytics workload will remain available via CloudXPRT.com for the time being to serve as a reference resource.
We apologize for the inconvenience that these issues have caused. We’ll provide more information about a release timeline and final test package details here in the blog as we get closer to publication. If you have any questions about the future of CloudXPRT, please feel free to contact us!
The new school year is upon us, and learners of all ages are looking for tech devices that have the capabilities they will need in the coming year. The tech marketplace can be confusing, and competing claims can be hard to navigate. The XPRTs are here to help! Whether you’re shopping for a new phone, tablet, Chromebook, laptop, or desktop, the XPRTs can provide reliable, industry-trusted performance scores that can cut through all the noise.
A good place to start looking for scores is the WebXPRT 4 results viewer. The viewer displays WebXPRT 4 scores from over 175 devices—including many hot new releases—and we’re adding new scores all the time. To learn more about the viewer’s capabilities and how you can use it to compare devices, check out this blog post.
Another resource we offer is the XPRT results browser. The browser is the most efficient way to access the XPRT results database, which currently holds more than 3,000 test results from over 120 sources, including major tech review publications around the world, OEMs, and independent testers. It offers a wealth of current and historical performance data across all of the XPRT benchmarks and hundreds of devices. You can read more about how to use the results browser here.
Also, if you’re considering a popular device, chances are good that a recent tech review includes an XPRT score for that device. Two quick ways to find these reviews: (1) go to your favorite tech review site and search for “XPRT,” or (2) go to a search engine and enter the device name and an XPRT name (e.g., “Apple MacBook Air” and “WebXPRT”).
The XPRTs can help consumers make better-informed and more confident tech purchases. As this school year begins, we hope you’ll find the data you need on our site or in an XPRT-related tech review. If you have any questions about the XPRTs, XPRT scores, or the results database, please feel free to ask!
We’re excited to see that users have successfully completed over 1,000,000 WebXPRT runs! If you’ve run WebXPRT in any of the 924 cities and 81 countries from which we’ve received complete test data—including newcomers Bahrain, Bangladesh, Mauritius, the Philippines, and South Korea—we’re grateful for your help. We could not have reached this milestone without you!
As the chart below illustrates, WebXPRT use has grown steadily since the debut of WebXPRT 2013. On average, we now record more WebXPRT runs in one month than we recorded in the entirety of our first year. With over 104,000 runs so far in 2022, that growth is continuing.
For us, this moment represents more than a numerical milestone. Developing and maintaining a benchmark is never easy, and a cross-platform benchmark that will run on a wide variety of devices poses an additional set of challenges. For such a benchmark to succeed, developers need not only technical competency but also the trust and support of the benchmarking community. WebXPRT is now in its ninth year, and its consistent year-over-year growth tells us that the benchmark continues to hold value for manufacturers, OEM labs, the tech press, and end users like you. We see it as a sign of trust that folks repeatedly return to the benchmark for reliable performance metrics. We’re grateful for that trust, and for everyone who’s contributed to the WebXPRT development process throughout the years.
We’ll have more to share related to this exciting milestone in the weeks to come, so stay tuned to the blog. If you have any questions or comments about WebXPRT, we’d love to hear from you!
One of the core principles of the BenchmarkXPRT Development Community is a commitment to valuing the feedback of both community members and the larger group of testers who use the XPRTs on a regular basis. That feedback helps us ensure that as the XPRTs continue to grow and evolve, the resources we offer will continue to meet the needs of those who use them.
In the past, user feedback has influenced specific aspects of our benchmarks such as the length of test runs, user interface features, results presentation, and the removal or inclusion of specific workloads. More broadly, we have also received suggestions for entirely new XPRTs and ways we might target emerging technologies or industry use cases.
As we approach the second half of 2022 and begin planning for 2023, we’re asking for your ideas about new XPRTs, or new features for existing XPRTs. Are you aware of hardware form factors, software platforms, or prominent applications that are difficult or impossible to evaluate using existing performance benchmarks? Are there new technologies we should incorporate into existing XPRTs via new workloads? Can you recommend ways to improve any of the XPRTs or XPRT-related tools, such as the results viewers?
We are interested in your answers to these questions and any other ideas you have, so please feel free to contact us. We look forward to hearing your thoughts!
Testers new to the XPRT benchmarks may not know about one of the free resources we offer. The XPRT results database currently holds more than 3,000 test results from over 120 sources, including major tech review publications around the world, OEMs, and independent testers. It offers a wealth of current and historical performance data across all the XPRT benchmarks and hundreds of devices.
We update the results database several times a week, adding selected results from our own internal lab testing, reliable tech media sources, and end-of-test user submissions. (After you run one of the XPRTs, you can choose to submit the results, but they don’t automatically appear in the database.) Before adding a result, we evaluate whether the score makes sense and is consistent with general expectations, which we can do only when we have sufficient system information. For that reason, we ask testers to disclose as much hardware and software information as possible when publishing or submitting a result.
We encourage visitors to our site to explore the XPRT results database. There are three primary ways to do so. The first is by visiting the main BenchmarkXPRT results browser, which displays results entries for all of the XPRT benchmarks in chronological order (see the screenshot below). You can narrow the results by selecting a benchmark from the drop-down menu and can type values, such as vendor or the name of a tech publication, into the free-form filter field. For results we’ve produced in our lab, clicking “PT” in the Source column takes you to a page with additional disclosure information for the test system. For sources outside our lab, clicking the source name takes you to the original article or review that contains the result.
The second way to access our published results is by visiting the results page for an individual XPRT benchmark. Go to the page of the benchmark that interests you, and look for the blue View Results button. Clicking it takes you to a page that displays results for only that benchmark. You can use the free-form filter on the page to filter those results, and you can use the Benchmarks drop-down menu to jump to the other individual XPRT results pages.
The third way to view information in our results database is with the WebXPRT 4 results viewer. The viewer provides an information-packed, interactive environment in which users can explore data from the curated set of WebXPRT 4 results we’ve published on our site. To learn more about the viewer’s capabilities and features, check out this blog post from March.
We hope you’ll take some time to browse the information in our results database. We welcome your feedback about what you’d like to see in the future and suggestions for improvement. Our database contains the XPRT scores that we’ve gathered, but we publish them as a resource for you. Let us know what you think!