WebXPRT passes another milestone!

We’re excited to see that users have successfully completed over 250,000 WebXPRT runs! From the original WebXPRT 2013 to the most recent version, WebXPRT 3, this tool has been popular with manufacturers, developers, consumers, and media outlets around the world because it’s easy to run, it runs quickly and on a wide variety of platforms, and it evaluates device performance using real-world tasks.

If you’ve run WebXPRT in any of the more than 458 cities and 64 countries from which we’ve received complete test data—including newcomers Lithuania, Luxembourg, Sweden, and Uruguay—we’re grateful for your help in reaching this milestone. Here’s to another quarter-million runs!

If you haven’t yet transitioned your browser testing to WebXPRT 3, now is a great time to give it a try! WebXPRT 3 includes updated photo workloads with new images and a deep learning task used for image classification. It also adds an optical character recognition task to the Encrypt Notes and OCR Scan workload, and combines part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in the new Online Homework workload. Users carry out tasks like these in their browsers every day, making these workloads very effective for assessing how well a device will perform in the real world.
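
For a sense of the kind of browser work the Encrypt Notes and OCR Scan workload exercises, here’s a minimal sketch of encrypting a note with the standard Web Crypto API before keeping it in local storage. It’s purely illustrative; this isn’t WebXPRT’s actual code, and the key handling is simplified.

```js
// Illustrative only: encrypt a note with AES-GCM via the standard Web
// Crypto API and keep the ciphertext in localStorage. Not WebXPRT code.
async function encryptAndStoreNote(key, note) {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per note
  const ciphertext = await crypto.subtle.encrypt(
    { name: 'AES-GCM', iv },
    key,
    new TextEncoder().encode(note)
  );
  localStorage.setItem('note', JSON.stringify({
    iv: Array.from(iv),
    data: Array.from(new Uint8Array(ciphertext)),
  }));
}

// Generate a session key and store an example note.
crypto.subtle
  .generateKey({ name: 'AES-GCM', length: 256 }, false, ['encrypt', 'decrypt'])
  .then((key) => encryptAndStoreNote(key, 'Buy milk'));
```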

Happy testing to everyone, and if you have any questions about WebXPRT 3 or the XPRTs in general, feel free to ask!

Justin

More on the way for the XPRT Weekly Tech Spotlight

In the coming months, we’ll continue to add more devices and helpful features to the XPRT Weekly Tech Spotlight. We’re especially interested in adding data points and visual aids that make it easier to quickly understand the context of each device’s test scores. For instance, those of us who are familiar with WebXPRT 3 scores know that an overall score of 250 is pretty high, but site visitors who are unfamiliar with WebXPRT probably won’t know how that score compares to scores for other devices.

We designed Spotlight to be a source of objective data, in contrast to sites that provide subjective ratings for devices. As we pursue our goal of helping users make sense of scores, we want to maintain this objectivity and avoid presenting information in ways that could be misleading.

Introducing comparison aids to the site is forcing us to make some tricky decisions. Because we value input from XPRT community members, we’d love to hear your thoughts on one of the questions we’re facing: How should our default view present a device’s score?

We see three options:

1) Present the device’s score in relation to the overall high and low scores for that benchmark across all devices.
2) Present the device’s score in relation to the overall high and low scores for that benchmark across the broad category of devices to which that device belongs (e.g., phones).
3) Present the device’s score in relation to the overall high and low scores for that benchmark across a narrower sub-category of devices to which that device belongs (e.g., high-end flagship phones).

To think this through, consider WebXPRT, which runs on desktops, laptops, phones, tablets, and other devices. Typically, the WebXPRT scores for phones and tablets are lower than scores for desktop and laptop systems. The first approach helps to show just how fast high-end desktops and laptops handle the WebXPRT workloads, but it could make a phone or tablet look slow, even if its score was good for its category. The second approach would prevent unfair default comparisons between different device types but would still present comparisons between devices that are not true competitors (e.g., flagship phones vs. budget phones). The third approach is the most careful, but would introduce an element of subjectivity because determining the sub-category in which a device belongs is not always clear cut.
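
To make the trade-off concrete, here’s a minimal JavaScript sketch of how a default view might place a score within a comparison pool’s high-low range. The device names, categories, and scores below are hypothetical.

```js
// Hypothetical results; real Spotlight data differs.
const results = [
  { device: 'Desktop X', category: 'desktop', score: 250 },
  { device: 'Laptop Y',  category: 'laptop',  score: 180 },
  { device: 'Phone Z',   category: 'phone',   score: 75 },
  { device: 'Phone Q',   category: 'phone',   score: 42 },
];

// Where does `score` fall between the pool's low and high scores?
// Returns a 0-1 fraction suitable for a horizontal bar or gauge.
function relativePosition(score, pool) {
  const scores = pool.map((r) => r.score);
  const low = Math.min(...scores);
  const high = Math.max(...scores);
  return high === low ? 1 : (score - low) / (high - low);
}

const phoneZ = results[2];

// Option 1: compare against all devices -- the phone looks slow.
console.log(relativePosition(phoneZ.score, results).toFixed(2)); // 0.16

// Option 2: compare within the device's broad category -- fairer.
const phones = results.filter((r) => r.category === 'phone');
console.log(relativePosition(phoneZ.score, phones).toFixed(2)); // 1.00
```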

Do you have thoughts on this subject, or recommendations for Spotlight in general? If so, let us know.

Justin

Just before showtime

In case you missed the announcement, WebXPRT 3 is now live! Please try it out, submit your test results, and feel free to send us your questions or comments.

During the final push toward launch day, it occurred to us that not all of our readers are aware of the steps involved in preparing a benchmark for general availability (GA). Here’s a quick overview of what we did over the last several weeks to prepare for the WebXPRT 3 release, a process that follows the same approach we use for all new XPRTs.

After releasing the community preview (CP), we started on the final build. During this time, we incorporated features that we were not able to include in the CP and fixed a few outstanding issues. Because we always try to make sure that CP results are comparable to eventual GA results, these issues rarely involve the workloads themselves or anything that affects scoring. In the case of WebXPRT 3, the end-of-test results submission form was not fully functional in the CP, so we finished making it ready for prime time.

The period between CP and GA releases is also a time to incorporate any feedback we get from the community during initial testing. One of the benefits of membership in the BenchmarkXPRT Development Community is access to pre-release versions of new benchmarks, along with an opportunity to make your voice heard during the development process.

When the GA candidate build is ready, we begin two types of extensive testing. First, our quality assurance (QA) team performs a thorough review, running the build on numerous devices; in the case of WebXPRT, this review also involves testing with multiple browsers. Throughout, the QA team keeps a sharp eye out for formatting problems and bugs.

The second type of testing involves comparing the current version of the benchmark with prior versions. We tested WebXPRT 3 on almost 100 devices. While WebXPRT 2015 and WebXPRT 3 scores are not directly comparable, we normalize scores for both sets of results and check that device performance is scaling in the same way. If it isn’t, we need to determine why not.
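
For readers curious what that scaling check looks like in practice, here’s a minimal sketch with made-up device names and scores; our actual QA tooling and data differ.

```js
// Hypothetical scores illustrating the cross-version scaling check.
const webxprt2015 = { deviceA: 520, deviceB: 260, deviceC: 130 };
const webxprt3    = { deviceA: 250, deviceB: 128, deviceC: 40 }; // deviceC scales differently

// Normalize each result set to the same baseline device, so the two
// versions can be compared even though raw scores aren't comparable.
function normalize(scores, baseline) {
  const normalized = {};
  for (const [device, score] of Object.entries(scores)) {
    normalized[device] = score / scores[baseline];
  }
  return normalized;
}

const n2015 = normalize(webxprt2015, 'deviceA');
const n3 = normalize(webxprt3, 'deviceA');

// Flag devices whose relative performance shifts by more than 10%.
for (const device of Object.keys(n2015)) {
  const drift = Math.abs(n3[device] - n2015[device]) / n2015[device];
  if (drift > 0.1) {
    console.log(`${device}: scaling differs by ${(drift * 100).toFixed(1)}% -- investigate`);
  }
}
```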

Finally, after testing is complete and the new build is ready, we finalize all related documentation and tie the various pieces together on the website. This involves updating the main benchmark page and graphics, the FAQ page, the results tables, and the members’ area.

That’s just a brief summary of what we’ve been up to with WebXPRT in the last few weeks. If you have any questions about the XPRTs or the development community, feel free to ask!

Justin

WebXPRT 3 is here!

We’re excited to announce that WebXPRT 3 is now available to the public. BenchmarkXPRT Development Community members have been using a community preview for several weeks, but now anyone can run WebXPRT 3 and publish their results.

As we mentioned on the blog, WebXPRT 3 has a completely new UI, updated workloads, and new test content. We carried over several features from WebXPRT 2015, including automation, the option to run individual workloads, and language options for English, German, and Simplified Chinese.

We believe WebXPRT 3 will be as relevant and reliable as WebXPRT 2013 and 2015. After trying it out, please submit your scores and feel free to let us know what you think. We look forward to seeing new results submissions!

Principled Technologies and the BenchmarkXPRT Development Community release WebXPRT 3, a free online performance evaluation tool for web-enabled devices

Durham, NC — Principled Technologies and the BenchmarkXPRT Development Community have released WebXPRT 3, a free online tool that gives objective information about how well a laptop, tablet, smartphone, or any other web-enabled device handles common web tasks. Anyone can go to WebXPRT.com and compare existing performance evaluation results on a variety of devices or run a simple evaluation test on their own.

WebXPRT 3 contains six HTML5- and JavaScript-based scenarios created to mirror common web browser tasks: Photo Enhancement, Organize Album Using AI, Stock Option Pricing, Encrypt Notes and OCR Scan, Sales Graphs, and Online Homework.

“WebXPRT is a popular, easy-to-use benchmark run by manufacturers, tech journalists, and consumers all around the world,” said Bill Catchings, co-founder of Principled Technologies, which administers the BenchmarkXPRT Development Community. “We believe that WebXPRT 3 is a great addition to WebXPRT’s legacy of providing relevant and reliable performance data for a wide range of devices.”

WebXPRT is one of the BenchmarkXPRT suite of performance evaluation tools. Other tools include MobileXPRT, TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT. The XPRTs help users get the facts before they buy, use, or evaluate tech products such as computers, tablets, and phones.

To learn more about and join the BenchmarkXPRT Development Community, go to www.BenchmarkXPRT.com.

About Principled Technologies, Inc.
Principled Technologies, Inc. is a leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.PrincipledTechnologies.com.

Company Contact
Justin Greene
BenchmarkXPRT Development Community
Principled Technologies, Inc.
1007 Slater Road, Ste. 300
Durham, NC 27703
BenchmarkXPRTsupport@PrincipledTechnologies.com

WebXPRT 3 arrives next week!

After much development work and testing, we’re happy to report that we’ll be releasing WebXPRT 3 early next week!

Here are the final workload names and descriptions.

1) Photo Enhancement: Applies three effects, each using Canvas, to two photos.
2) Organize Album Using AI: Detects faces and classifies images using the ConvNetJS neural network library.
3) Stock Option Pricing: Calculates and displays graphic views of a stock portfolio using Canvas, SVG, and dygraphs.js.
4) Encrypt Notes and OCR Scan: Encrypts notes in local storage and scans a receipt using optical character recognition.
5) Sales Graphs: Calculates and displays multiple views of sales data using InfoVis and d3.js.
6) Online Homework: Performs science and English homework using Web Workers and Typo.js spell check.
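
To give a flavor of what these workloads do under the hood, here’s a minimal sketch of a Canvas-based image effect in the spirit of the Photo Enhancement workload. It’s illustrative only, not WebXPRT’s actual code.

```js
// Illustrative sketch of a Canvas photo effect (grayscale conversion).
// Not WebXPRT's actual workload code.
function applyGrayscale(canvas) {
  const ctx = canvas.getContext('2d');
  const image = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = image.data; // flat RGBA byte array
  for (let i = 0; i < px.length; i += 4) {
    // Luminance-weighted average of the R, G, and B channels.
    const gray = 0.299 * px[i] + 0.587 * px[i + 1] + 0.114 * px[i + 2];
    px[i] = px[i + 1] = px[i + 2] = gray;
  }
  ctx.putImageData(image, 0, 0);
}
```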

As we mentioned in an earlier blog post, the updated photo workloads contain new images and a deep learning task. We also added an optical character recognition task to the Local Notes workload (now Encrypt Notes and OCR Scan) and combined part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in the new Online Homework workload.

Longtime WebXPRT users will immediately notice a completely new UI. We worked to improve the UI’s appearance on smaller devices such as phones, and we think testers will find it easier to navigate.

Testers can still choose to run individual workloads, and we’re once again offering English, German, and Simplified Chinese language options.

Below my sig, I’ve included pictures of WebXPRT 3’s start test and results pages, as well as an in-test screen capture.

We’re thankful for all the interest in WebXPRT 3 so far and believe the new version of WebXPRT will be as relevant and reliable as WebXPRT 2013 and 2015—and easier to use. We look forward to seeing new results submissions next week!

Justin

WebXPRT 3 start page

WebXPRT 3 in test

Results page
