This week, we published the Exploring WebXPRT 4 white paper. It describes the design and structure of WebXPRT 4, including detailed information about the benchmark’s harness, HTML5 and WebAssembly (WASM) capability checks, and changes we’ve made to the structure of the performance test workloads. We explain the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication. The white paper also includes information about the third-party functions and libraries that WebXPRT 4 uses during the HTML5 and WASM capability checks and performance workloads.
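The white paper covers test automation in detail. As a rough illustration of the idea, a scripted WebXPRT run can be kicked off by opening the benchmark with automation parameters appended to its URL. The minimal Python sketch below is hypothetical: the parameter names and values (testtype, tests, result) are placeholders we made up for illustration, not the documented interface, so please consult the white paper for the actual automation syntax.

```python
import webbrowser

# Minimal sketch of URL-based benchmark automation. The parameter names
# below are illustrative placeholders, NOT WebXPRT 4's documented syntax;
# see the Exploring WebXPRT 4 white paper for the real automation details.
BASE_URL = "https://www.principledtechnologies.com/benchmarkxprt/webxprt/"
AUTO_PARAMS = "?testtype=1&tests=63&result=1"  # hypothetical: run all workloads, show score

# Open the benchmark in the default browser with the parameters appended.
webbrowser.open(BASE_URL + AUTO_PARAMS)
```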
The Exploring WebXPRT 4 white paper promotes
the high level of transparency and disclosure that is a core value of the
BenchmarkXPRT Development Community. We’ve always believed that transparency
builds trust, and trust is essential for a healthy benchmarking community.
That’s why we involve community members in the benchmark development process
and disclose how we build our benchmarks and how they work.
In March, we discussed the Chrome OS team’s plan to end support for Chrome apps in June and focus its efforts instead on Chrome extensions and Progressive Web Apps. After receiving feedback on the published timeline, the Chrome OS team decided to extend Chrome app support for Enterprise and Education account customers through January 2025. Because we publish our Chrome app (CrXPRT) through a private BenchmarkXPRT developer account, and because we have not seen any further updates to the support timeline, we have no reason to believe that the support extension will apply to CrXPRT.
Since June has come and gone, and the support extension probably does not apply to our account, we do not expect to be able to publish any future fixes or updates for CrXPRT. As of now, and up through Chrome 105, the CrXPRT 2 performance and battery life tests are still working without a hitch. We will continue to run the benchmark on a regular basis to monitor functionality, and we will disclose any future issues here in the blog and on CrXPRT.com. We hope the app will continue to run both performance and battery life tests well into the future. However, given the frequency of Chrome updates, it’s difficult for us to predict how long the benchmark will remain viable.
As we mentioned back in March, we hope to begin development of an all-new Chrome OS XPRT benchmark by the end of this year. We’ll discuss that prospect in more detail in future blog posts, but if you have ideas about the types of features or workloads you’d like to see in a new Chrome OS benchmark, please let us know!
Recently, a member of the tech
press asked us about the status of AIXPRT,
our benchmark that measures machine learning inference performance. We want to
share our answer here in the blog for the benefit of other readers. The writer observed that we had not updated AIXPRT in a long time and wondered whether we had any immediate plans to do so.
It’s
true that we haven’t updated AIXPRT in quite some time. Unfortunately, while a
few tech press publications and OEM labs began experimenting with AIXPRT
testing, the benchmark never got the traction we hoped for, and we’ve decided
to invest our resources elsewhere for the time being. The AIXPRT installation
packages are still available for people to use or reference as they wish, but
we have not updated the benchmark to work with the latest platform versions
(OpenVINO, TensorFlow, etc.). It’s likely that several components in each
package are out of date.
If you
are interested in AIXPRT and would like us to bring it up to date, please let us know.
We can’t promise that we’ll revive the benchmark, but your feedback could be a
valuable contribution as we try to gauge the benchmarking community’s interest.
One of the core principles of the BenchmarkXPRT Development Community is a commitment to valuing the feedback of both community members and the larger group of testers who use the XPRTs on a regular basis. That feedback helps us to ensure that as the XPRTs continue to grow and evolve, the resources we offer will continue to meet the needs of those who use them.
In the past, user feedback has influenced specific aspects of our benchmarks such as the length of test runs, user interface features, results presentation, and the removal or inclusion of specific workloads. More broadly, we have also received suggestions for entirely new XPRTs and ways we might target emerging technologies or industry use cases.
As we approach the second half of 2022 and begin planning for 2023, we’d like to hear your ideas for new XPRTs, or for new features in existing XPRTs. Are you aware of hardware form factors, software platforms, or prominent applications that are difficult or impossible to evaluate using existing performance benchmarks? Are there new technologies we should incorporate into existing XPRTs via new workloads? Can you recommend ways to improve any of the XPRTs or XPRT-related tools, such as our results viewers?
We are interested in your answers to these questions and any other ideas you have, so please feel free to contact us. We look forward to hearing your thoughts!
Testers
new to the XPRT benchmarks may not know about one of the free resources we
offer. The XPRT results database currently holds more than 3,000 test results
from over 120 sources, including major tech review publications around the
world, OEMs, and independent testers. It offers a wealth of current and
historical performance data across all the XPRT benchmarks and hundreds of
devices.
We update the results database several times a week, adding selected results from our own internal lab testing, reliable tech media sources, and end-of-test user submissions. (After you run one of the XPRTs, you can choose to submit your results, but they don’t automatically appear in the database.) Before adding a result, we evaluate whether the score makes sense and is consistent with general expectations, which we can do only when we have sufficient system information. For that reason, we ask testers to disclose as much hardware and software information as possible when publishing or submitting a result.
We encourage visitors to our site to explore the XPRT results database. There are three primary ways to do so. The first is by visiting the main BenchmarkXPRT results browser, which displays results entries for all of the XPRT benchmarks in chronological order (see the screenshot below). You can narrow the results by selecting a benchmark from the drop-down menu, and you can type values, such as a vendor name or the name of a tech publication, into the free-form filter field. For results we’ve produced in our lab, clicking “PT” in the Source column takes you to a page with additional disclosure information about the test system. For sources outside our lab, clicking the source name takes you to the original article or review that contains the result.
The second way to access our published results is by visiting the results page for an individual XPRT benchmark. Go to the page of the benchmark that interests you, and look for the blue View Results button. Clicking it takes you to a page that displays results for only that benchmark. You can use the free-form field to filter those results, and you can use the Benchmarks drop-down menu to jump to the other individual XPRT results pages.
The third way to view
information in our results database is with the WebXPRT 4 results viewer.
The viewer provides an information-packed, interactive environment in which
users can explore data from the curated set of WebXPRT 4 results we’ve
published on our site. To learn more about the viewer’s capabilities and
features, check out this blog post
from March.
We hope you’ll take
some time to browse the information in our results database. We welcome your feedback
about what you’d like to see in the future and suggestions for improvement. Our
database contains the XPRT scores that we’ve gathered, but we publish them as a
resource for you. Let us know
what you think!
Many of our blog
readers first encountered the XPRTs when reading about a specific benchmark,
such as WebXPRT, in a device
review. Because these folks might be unfamiliar with our other benchmarks, we
like to occasionally “reintroduce” individual XPRTs. This week, we invite you
to get to know HDXPRT.
HDXPRT, which stands for High-Definition Experience & Performance Ratings Test, was the first benchmark published by the HDXPRT Development Community, which later became the BenchmarkXPRT Development Community. HDXPRT 4, the latest version, evaluates the performance of Windows 10 and Windows 11 devices while handling real-world media tasks such as photo editing, video conversion, and music editing. HDXPRT uses real commercial applications, such as Photoshop and MediaEspresso, to complete its workloads. The benchmark then produces easy-to-understand results that are relevant to buyers shopping for new Windows systems.
The HDXPRT 4
setup process takes about 30 minutes on most systems. The length of the test
can vary significantly depending on the speed of the system, but for most PCs that
are less than a few years old, a full three-iteration test cycle takes under two
hours.
HDXPRT is a useful
tool for anyone who wants to evaluate the real-world, content-creation
capabilities of a Windows PC. To see test scores from a variety of Windows
devices, go to HDXPRT.com and
click View Results.
Want to run HDXPRT?
Download HDXPRT from
HDXPRT.com. The HDXPRT user manual provides information
on minimum system requirements, as well as step-by-step instructions for
configuring your system and kicking off a test.
Want to dig into the details?
The HDXPRT
source code is available upon request. If you’d like to access the source code,
please send your request to benchmarkxprtsupport@principledtechnologies.com. Build
instructions are also available.
If you haven’t used HDXPRT before, give it a shot and let us know what you think!