Today, we expand our portfolio
of CloudXPRT resources with a paper on the benchmark’s data analytics workload.
While we summarized the workload in the Introduction to CloudXPRT white paper, the new paper goes into much
greater detail.
In addition to providing practical information about the data analytics installation package and minimum system requirements, the paper describes the workload’s test configuration variables, structural components, task workflows, and test metrics. It also discusses interpreting test results and the process for submitting results for publication.
CloudXPRT is the most
complex tool in the XPRT family, and the new paper is part of our effort to create more—and better—CloudXPRT documentation. We plan to
publish additional CloudXPRT white papers in the coming months, with possible
future topics including the impact of adjusting specific test configuration
options, recommendations for results reporting, and methods for analysis.
We hope that the Overview of the CloudXPRT Data Analytics Workload paper will serve as a go-to resource for CloudXPRT testers, and will answer any questions you have about the workload. You can find links to the paper and other resources in the Helpful Info box on CloudXPRT.com and the CloudXPRT section of our XPRT white papers page.
CloudXPRT is undoubtedly
the most complex tool in the XPRT family of benchmarks. To run the cloud-native
benchmark’s multiple workloads across different hardware and software platforms,
testers need two things: (1) at least a passing familiarity with a wide range
of cloud-related toolkits, and (2) an understanding that changing even one test
configuration variable can affect test results. While the complexity of CloudXPRT
makes it a powerful and flexible tool for measuring application performance on
real-world IaaS stacks, it also creates a steep learning curve for new users.
Benchmark setup and
configuration can involve a number of complex steps, and the corresponding
instructions should be thorough, unambiguous, and intuitive to follow. For all
of the XPRT tools, we strive to publish documentation that provides quick,
easy-to-find answers to the questions users might have. Community members have asked
us to improve the clarity and readability of the CloudXPRT setup,
configuration, and individual workload documentation. In response, we are
working to create more—and better—CloudXPRT documentation.
If the benchmark’s complexity intimidates you, know that helping you is one of our highest priorities. In
the coming weeks and months, we’ll be evaluating all of our CloudXPRT
documentation, particularly from the perspective of new users, and will release
more information about the new documentation as it becomes available.
We also want to remind
you of some of the existing CloudXPRT resources. We encourage everyone to check
out the Introduction to CloudXPRT and Overview of the CloudXPRT Web Microservices Workload white papers. (Note
that we’ll soon be publishing a paper on the benchmark’s data analytics
workload.) Also, a couple of weeks ago, we published the CloudXPRT learning tool, which we designed to serve as an information
hub for common CloudXPRT topics and questions, and to help tech journalists,
OEM lab engineers, and everyone who is interested in CloudXPRT find the answers
they need as quickly as possible.
Thanks to all who let us know that there was room for improvement in the CloudXPRT documentation. We rely on that kind of feedback and always welcome it. If you have any questions or suggestions regarding CloudXPRT or any of the other XPRTs, please let us know!
We’re happy to
announce that the AIXPRT learning tool is now live! We
designed the tool to serve as an information hub for common AIXPRT topics and
questions, and to help tech journalists, OEM lab engineers, and everyone who is
interested in AIXPRT find the answers they need in as little time as possible.
The tool features four
primary areas of content:
The Q&A section provides quick answers to the questions we
receive most from testers and the tech press.
The AIXPRT: the basics section describes specific topics such as
the benchmark’s toolkits, networks, workloads, and hardware and software
requirements.
The testing and results section covers the testing process,
metrics, and how to publish results.
The AI/ML primer provides brief, easy-to-understand definitions of
key AI and ML terms and concepts for those who want to learn more about the
subject.
The first screenshot below shows the home screen. To illustrate how some of the pop-up information sections appear, the second screenshot shows the Inference tasks (workloads) entry in the AI/ML primer section.
We’re excited about the new AIXPRT learning tool, and we’re also happy to report that we’re working on a version of the tool for CloudXPRT. We hope to make the CloudXPRT tool available early next year, and we’ll post more information in the blog as we get closer to taking it live.
If you have any questions about the tool, please let us know!
The biggest shopping
days of the year are fast approaching, and if you’re researching phones,
tablets, Chromebooks, or laptops in preparation for Black Friday and Cyber
Monday sales, the XPRTs can help! One of the core functions of the XPRTs is to
help cut through all the marketing noise by providing objective, reliable
measures of a device’s performance. For example, instead of trying to guess
whether a new Chromebook is fast enough to handle the demands of remote
learning, you can use its CrXPRT and WebXPRT performance scores to see how it stacks up against the
competition when handling everyday tasks.
A good place to start your
search for scores is our XPRT results browser. The browser is the most efficient way to access the XPRT
results database, which currently holds more than 2,600 test results from over 100
sources, including major tech review publications around the world, OEMs, and
independent testers. It offers a wealth of current and historical performance
data across all the XPRT benchmarks and hundreds of devices. You can read more
about how to use the results browser here.
Also, if you’re considering
a popular device, chances are good that someone has already published an XPRT
score for that device in a recent tech review. The quickest way to find these
reviews is by searching for “XPRT” within your favorite tech review site, or by
entering the device name and XPRT name (e.g. “Apple iPad” and “WebXPRT”) in a
search engine. Here are a few recent tech reviews that use one or more of the
XPRTs to evaluate a popular device:
LaptopMag
used WebXPRT in a Best student Chromebooks for back to school 2020 review. This article can
still be helpful if you’ve discovered that your child’s existing
Chromebook isn’t handling the demands of remote learning well.
The XPRTs can help consumers make better-informed and more confident tech purchases this holiday season, and we hope you’ll find the data you need on our site or in an XPRT-related tech review. If you have any questions about the XPRTs, XPRT scores, or the results database please feel free to ask!
It’s been nine months since we last published a WebXPRT 3 browser performance comparison, so we decided to put the newest versions of popular browsers through their paces to see whether the performance rankings have changed since our last round of tests.
We used the same laptop as last time: a Dell XPS 13 7390 with an Intel Core i3-10110U processor and 4 GB of RAM
running Windows 10 Home, updated to version 1909 (18363.1139). We installed all
current Windows updates and tested on a clean system image. After the update
process completed, we turned off updates to prevent them from interfering with
test runs. We ran WebXPRT 3 three times on five browsers: Brave, Google Chrome,
Microsoft Edge, Mozilla Firefox, and Opera. The posted score for each browser
is the median of the three test runs.
In our last round of tests, the four Chromium-based browsers (Brave, Chrome, Edge, and Opera)
produced scores that were nearly identical. Only Mozilla Firefox produced a
significantly different (and better) score. The parity of the Chromium-based
browsers was not surprising, considering they have the same underlying foundation.
In this round of testing, the Chromium-based browsers again produced very close scores, although Brave’s performance lagged by about 4 percent. Firefox again separated itself from the pack with a higher score. With the exception of Chrome, which matched its score from last time, every browser’s score was slightly lower than before. There are many possible reasons for this, including increased overhead in the browsers or changes in Windows, and the slowdown in each browser will probably be unnoticeable to most users during everyday tasks.
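If you’d like to run a similar comparison on your own hardware, the short Python sketch below shows one way to tabulate results the same way we report them: the median of three WebXPRT 3 runs per browser, plus each browser’s percentage gap relative to the fastest median. The run scores in the example are placeholder values for illustration only, not our published results.

from statistics import median

# Hypothetical WebXPRT 3 scores from three runs per browser.
# These are placeholder numbers; replace them with your own run data.
runs = {
    "Brave":   [100, 102, 101],
    "Chrome":  [105, 104, 106],
    "Edge":    [104, 105, 103],
    "Firefox": [110, 108, 109],
    "Opera":   [103, 104, 102],
}

# The posted score for each browser is the median of its three runs.
medians = {browser: median(scores) for browser, scores in runs.items()}

# Express each browser's median as a percentage gap behind the fastest median.
best = max(medians.values())
for browser, score in sorted(medians.items(), key=lambda kv: kv[1], reverse=True):
    gap = (best - score) / best * 100
    print(f"{browser}: median {score} ({gap:.1f}% behind the leader)")

Running this with your own three-run scores makes it easy to see whether the gaps you measure resemble the roughly 4 percent difference we observed between Brave and the other Chromium-based browsers.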
Do these results mean that Mozilla Firefox will provide you with a speedier web experience? As we noted in the last comparison, a device with a higher WebXPRT score will probably feel faster during daily use than one with a lower score. For comparisons on the same system, however, the answer depends in part on the types of things you do on the web, how the extensions you’ve installed affect performance, how frequently the browsers issue updates and incorporate new web technologies, and how closely each browser’s default installation settings reflect the way you would set it up for your daily workflow.
In
addition, browser speed can increase or decrease significantly after an update,
only to swing back in the other direction shortly thereafter. OS-specific
optimizations can also affect performance, such as with Edge on Windows 10 and
Chrome on Chrome OS. All of these variables are important to keep in mind when
considering how browser performance comparison results translate to your
everyday experience.
What are your thoughts on browser performance? Let us know!
Last month, we announced that we’re working on
a new AIXPRT learning tool. Because we want tech journalists, OEM lab
engineers, and everyone who is interested in AIXPRT to be able to find the
answers they need in as little time as possible, we’re designing this tool to serve
as an information hub for common AIXPRT topics and questions.
We’re still finalizing
aspects of the tool’s content and design, so some details may change, but we
can now share a sneak peek of the main landing page. In the screenshot below,
you can see that the tool will feature four primary areas of content:
The FAQ section will provide quick answers to the questions we
receive most from testers and the tech press.
The AIXPRT basics section will describe specific topics such as the
benchmark’s toolkits, networks, workloads, and hardware and software
requirements.
The testing and results section will cover the testing process,
the metrics the benchmark produces, and how to publish results.
The AI/ML primer will provide brief, easy-to-understand definitions
of key AI and ML terms and concepts for those who want to learn more about the
subject.
We’re excited about the new AIXPRT learning tool, and will share more information here in the blog as we get closer to a release date. If you have any questions about the tool, please let us know!