
Category: Benchmarking

Decisions, decisions

Back in April, we shared some of our initial ideas for a new version of WebXPRT, and work on the new benchmark is underway. Any time we begin the process of updating one of the XPRT benchmarks, one of the first decisions we face is how to improve workload content so it better reflects the types of technology average consumers use every day. Since benchmarks typically have a life cycle of two to four years, we want the benchmark to be relevant for at least the next couple of years.

For example, WebXPRT contains two photo-related workloads, Photo Effects and Organize Album. Photo Effects applies a series of effects to a set of photos, and Organize Album uses facial recognition technology to analyze a set of photos. In both cases, we want the photos to represent the most relevant combination of image size, resolution, and data footprint. Ideally, the resulting image sizes and resolutions should differentiate processing speed on the latest systems, but not at the expense of running reasonably on most current devices. We also have to confirm that the photos aren’t so large that they unnecessarily inflate page load times.

In practice, this strategy means researching hardware and operating system market share. Because phones are the cameras most people use, we look to them to define photo characteristics. In 2017, the most widespread mobile OS is Android, and while reports vary depending on the metric used, the Samsung Galaxy S5 and Galaxy S7 are at or near the top of global mobile market share. For our purposes, the data tells us that choosing photo sizes and resolutions that mirror those of the Galaxy line is a good start: a good chunk of Android users are either already using S7-generation technology or will shift to new phones with that technology in the coming year. So, for the next version of WebXPRT, we’ll likely use photos that represent the real-life environment of an S7 user.
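
As a concrete illustration, here’s a minimal Python sketch, not part of WebXPRT’s actual tooling, of how one might normalize a set of source photos to S7-like dimensions when preparing workload content. It assumes the Pillow imaging library, and the folder names are hypothetical; the 4032 x 3024 target matches the Galaxy S7’s 12-megapixel rear-camera output.

```python
# Minimal sketch: normalize source photos to Galaxy S7-like dimensions.
# Assumes the Pillow library; folder names are hypothetical examples.
from pathlib import Path
from PIL import Image

TARGET = (4032, 3024)  # 12 MP, 4:3 -- typical Galaxy S7 rear-camera output

def prepare_photo(src: Path, dst: Path, quality: int = 90) -> None:
    """Resize one source image to the target resolution and re-encode as JPEG."""
    with Image.open(src) as img:
        img = img.convert("RGB")                 # drop alpha/palette modes
        img = img.resize(TARGET, Image.LANCZOS)  # high-quality downsampling
        img.save(dst, "JPEG", quality=quality)

if __name__ == "__main__":
    out_dir = Path("workload_photos")
    out_dir.mkdir(exist_ok=True)
    for src in Path("source_photos").glob("*.jpg"):
        prepare_photo(src, out_dir / src.name)
```

Striking the balance described above then becomes a matter of adjusting the target resolution and JPEG quality until the workload differentiates fast systems without overwhelming slower ones.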

I hope that provides a brief glimpse into the strategies we use to evaluate workload content in the XPRT benchmarks. Of course, since the BenchmarkXPRT Development Community is an open development community, we’d love to hear your comments or suggestions!

Justin

Keeping up with the latest Android news

Ars Technica recently published a deep-dive review of Android 8.0 (Oreo) that contains several interesting tidbits about what the author called “Android’s biggest re-architecture, ever.” After reading the details, it’s hard to argue with that assessment.

The article’s thorough analysis includes a list of the changes Oreo is bringing to the UI, notification settings, location services settings, and more. In addition to the types of updates that we usually see, a few key points stand out.

  • Project Treble, a complete reworking of Android’s foundational structure intended to increase the speed and efficiency of update delivery
  • A serious commitment to eliminating silent background services, giving users more control over their phone’s resources, and potentially enabling significant gains in battery life
  • Increased machine learning/neural network integration for text selection and recognition
  • A potential neural network API that allows third-party plugins
  • Android Go, a scaled-down version of Android tuned for budget phones in developing markets


There’s too much information on each of these points to cover here, so I encourage anyone interested in Android development to check out the article. Just be warned that when we say “thorough,” we mean it; it’s not exactly a quick read.

Right now, Oreo is available only on the Google Pixel and Pixel XL phones, and it likely won’t reach most users until sometime next year. Even though widespread adoption is a way off, the sheer scale of the expected changes requires us to adopt a long-term development perspective.

We’ll continue to track developments in the Android world and keep the community informed about any impact that those changes may have on MobileXPRT and BatteryXPRT. If you have any questions or suggestions for future XPRT/Android applications, let us know!

Justin

Introducing the XPRT Selector

We’re proud of all the XPRT tools, each of which serves a different purpose for the people who rely on them. But for those new to the XPRTs, we wanted a way to make it easy to tell which tool will best suit each person’s specific requirements. To that end, today we’re excited to announce the XPRT Selector, an interactive web tool that helps consumers, developers, manufacturers, and reviewers zero in on exactly which XPRT tool is the right match for their needs.

Using the XPRT Selector is easy. Simply spin the dials on the wheel to choose the categories that best describe you, the devices and operating systems you’re working with, and the topic that interests you. Once you’ve aligned the dials, click Get results, and the Selector will present all the free XPRT tools and resources available to you. Along with identifying the best tools for your needs, the Selector explains each tool’s purpose and capabilities.

To see the Selector in action, check out the short video below. You can take the XPRT Selector for a spin at http://www.principledtechnologies.com/benchmarkxprt/the-xprt-selector/.

Video: The XPRT Selector

All the XPRT tools have one thing in common: They help take the guesswork out of device evaluation and comparison, making them invaluable for anyone using, making, or writing about tech products. We think the XPRT Selector is a great addition to the fold!

Justin

The XPRT Spotlight Back-to-School Roundup

Today, we’re pleased to announce our second annual XPRT Spotlight Back-to-School Roundup, a free shopping tool that provides side-by-side comparisons of this school year’s most popular Chromebooks, laptops, tablets, and convertibles. We designed the Roundup to help buyers who are choosing devices for education, such as college students picking out a laptop or school administrators selecting devices for an entire grade. The Roundup makes those decisions easier by gathering the product and performance facts these buyers need in one convenient place.

We tested the Roundup devices in our lab using the XPRT suite of benchmark tools. In addition to benchmark results, we provide photographs, device specs, and prices.

If you haven’t yet visited the XPRT Weekly Tech Spotlight page, check it out. Every week, the Spotlight highlights a new device, making it easier for consumers to shop for a new laptop, smartphone, tablet, or PC. Recent devices in the spotlight include the Samsung Chromebook Pro, Microsoft Surface Laptop, Microsoft Surface Pro, OnePlus 5, and Apple iPad Pro 10.5”.

Vendors interested in having their devices featured in the XPRT Weekly Tech Spotlight or next year’s Roundup can visit the website for more details.

We’re always working on ways to make the Spotlight an even more powerful tool for helping with buying decisions. If you have any ideas for the page or suggestions for devices you’d like to see, let us know!

Justin

Best practices

Recently, a tester wrote in and asked for help determining why they were seeing different WebXPRT scores on two tablets with the same hardware configuration. The scores differed by approximately 7.5 percent. This can happen for many reasons, including different software stacks, but score variability can also result from different testing behavior and environments. While some degree of variability is natural, the question provides us with a great opportunity to talk about the basic benchmarking practices we follow in the XPRT lab, practices that contribute to the most consistent and reliable scores.

Below, we list a few basic best practices you might find useful in your testing. While we frame them in the context of WebXPRT’s focus on evaluating browser performance, several of these practices apply to other benchmarks as well.

  • Test with clean images: We use an out-of-box (OOB) method for testing XPRT Spotlight devices. OOB testing means that, other than the initial OS and browser updates that users are likely to run after first turning on the device, we change as little as possible before testing. We want to assess the performance buyers are likely to see when they first purchase the device, before they install additional apps and utilities. While OOB isn’t appropriate for every type of testing, the key is to avoid testing a device that’s bogged down with programs that unnecessarily influence results.
  • Turn off updates: We do our best to eliminate or minimize app and system updates after initial setup. Some vendors are making it more difficult to turn off updates completely, but you should always account for update settings.
  • Get a feel for system processes: Depending on the system and the OS, quite a lot of system-level activity can go on in the background after you turn on a device. As much as possible, we like to wait for system activity to settle to a stable, idle baseline before kicking off a test. If we start testing immediately after booting the system, we often see higher variability in the first run before the scores tighten up.
  • Disclosure is not just about hardware: Most people know that different browsers produce different performance scores on the same system. However, testers aren’t always aware of shifts in performance between versions of the same browser. While most updates don’t have a large impact on performance, a few have changed browser performance by a significant amount in either direction. For this reason, it’s always worthwhile to record and disclose the full browser version number for each test run. The same principle applies to any other relevant software.
  • Use more than one data point: Because of natural variability, our standard practice in the XPRT lab is to publish a score that represents the median of at least three to five runs. If you run a benchmark only once and the score differs significantly from other published scores, your result could be an outlier that you wouldn’t see again under stable testing conditions. The sketch after this list shows one simple way to aggregate repeated runs.
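
To make the last two tips concrete, here’s a minimal Python sketch, assuming you record each run’s score and browser version by hand. The version string and scores below are hypothetical example values, not real results.

```python
# Minimal sketch: record repeated benchmark runs with the browser version,
# then report the median and the spread. All values are made-up examples.
from statistics import median

runs = [
    {"browser": "Chrome 61.0.3163.100", "score": 212},  # hypothetical values
    {"browser": "Chrome 61.0.3163.100", "score": 218},
    {"browser": "Chrome 61.0.3163.100", "score": 215},
]

scores = sorted(run["score"] for run in runs)
med = median(scores)
spread_pct = (scores[-1] - scores[0]) / med * 100

print(f"Scores: {scores}")
print(f"Median: {med}")
print(f"Spread: {spread_pct:.1f}% of the median")
```

If the spread looks large, one run may have caught background activity; adding runs and rechecking the median is more trustworthy than discarding the result you don’t like.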


We hope those tips will make testing a little easier for you. If you have any questions about the XPRTs, or about benchmarking in general, feel free to ask!

Justin

MobileXPRT: evaluate the performance of your Android device

We recently discussed the capabilities and benefits of TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT. This week, we’re focusing on MobileXPRT, an app that evaluates how well an Android device handles everyday tasks. Like the other XPRT family benchmarks, MobileXPRT is easy to use. It takes less than 15 minutes to run on most devices, runs relatable workloads, and delivers reliable, objective, and easy-to-understand results.

MobileXPRT includes five performance scenarios (Apply Photo Effects, Create Photo Collages, Create Slideshow, Encrypt Personal Content, and Detect Faces to Organize Photos). By default, the benchmark runs all five tasks and reports individual workload scores and an overall performance score.
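
To illustrate how per-workload results can roll up into a single number, here’s a hedged Python sketch using a geometric mean, a common way for benchmarks to combine subscores. This is not necessarily MobileXPRT’s actual formula, which the Exploring MobileXPRT 2015 white paper (mentioned below) documents, and the scores here are made-up examples.

```python
# Hedged sketch: combine per-workload scores with a geometric mean -- a
# common benchmark approach, not a statement of MobileXPRT's exact method.
# The workload names match MobileXPRT 2015; the scores are made up.
from math import prod

workload_scores = {
    "Apply Photo Effects": 105.0,
    "Create Photo Collages": 98.0,
    "Create Slideshow": 112.0,
    "Encrypt Personal Content": 101.0,
    "Detect Faces to Organize Photos": 95.0,
}

# Geometric mean: the nth root of the product of n scores.
overall = prod(workload_scores.values()) ** (1 / len(workload_scores))
print(f"Overall score: {overall:.1f}")
```

A geometric mean keeps any single workload from dominating the overall result, which is why many benchmarks favor it over a simple average.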

MobileXPRT 2015 is the latest version of the app, supporting both 32-bit and 64-bit hardware running Android 4.4 or higher. To test systems running older versions of Android, or to test 32-bit performance on a 64-bit system, you can use MobileXPRT 2013. The results of the two versions are comparable.

MobileXPRT is a useful tool for anyone who wants to compare the performance capabilities of Android phones or tablets. To see test results from a variety of systems, go to MobileXPRT.com and click View Results, where you’ll find scores from many different Android devices.

If you’d like to run MobileXPRT:

Simply download MobileXPRT from MobileXPRT.com or the Google Play Store. The full installer package on MobileXPRT.com, which contains both the app and the test data, is 243 MB. You can also download the 18 MB MobileXPRT app file, which downloads the test data during installation. The MobileXPRT user manual provides instructions for configuring your device and kicking off a test.

If you’d like to dig into the details:

Check out the Exploring MobileXPRT 2015 white paper. In it, we discuss the MobileXPRT development process and details of the individual performance scenarios. We also explain exactly how the benchmark calculates results.

If you’d like to dig even deeper, the MobileXPRT source code is available to members of the BenchmarkXPRT Development Community, so consider joining today. Membership is free for members of any company or organization with an interest in benchmarks, and there are no obligations after joining.

If you haven’t used MobileXPRT before, give it a shot and let us know what you think!

Justin
