

A BatteryXPRT bug fix is on the way

Some time ago, we started to see unusual BatteryXPRT battery life estimates and high variance on some devices when running tests at the default length of 5.25 hours (seven 45-minute iterations). We suspected that the problem resulted from changes in how newer OS versions report battery levels on certain devices (e.g., continuing to charge past a reported level of 100 percent). In addition, advances in battery technology mean that the average phone battery lasts much longer than it did a few years ago. Together, these factors sometimes led to BatteryXPRT runs in which the OS reported little to no battery decrease during the first few iterations of a test. We concluded that 5.25 hours wasn’t long enough to produce an accurate battery life estimate.

After extensive experimentation and testing, we’ve decided to release a new build that increases the default BatteryXPRT test length from 5.25 hours (seven iterations) to 45 hours (60 iterations) to allow enough time for a full rundown on most phones. Based on our testing, we consider full rundown tests to be the most accurate and will use those exclusively in our Spotlight testing and elsewhere. Testers will still have the option of choosing shorter test durations, but BatteryXPRT will flag the results with a qualifier that recommends performing a full rundown.

We plan to release the updated build by the end of next week and will update the BatteryXPRT documentation to reflect the changes. We have not changed any of the workloads, and both performance results and full-rundown battery life estimates will be comparable to results from earlier builds.

BatteryXPRT continues to be a useful tool for gauging the performance and expected battery life of Android devices while simulating real-world tasks. If you have any questions about BatteryXPRT, please feel free to ask!

Justin

Updates on HDXPRT 4 and MobileXPRT 3

There’s a lot going on with the XPRTs, so we want to offer a quick update.

On the HDXPRT 4 front, we’re currently testing community preview candidate builds across a variety of laptops and desktops. Testing is going well, but as is often the case prior to a release, we’re still tweaking the code as necessary when we run into bugs. We’re excited about HDXPRT 4 and look forward to the community seeing how much faster and easier to use it is than previous versions. You can read more about what’s to come in HDXPRT 4 here.

On the MobileXPRT 3 front, proof-of-concept testing for the new and updated workloads went well, and we’re now working to implement the new UI. Below, you can see a mockup of the new MobileXPRT 3 start screen for phones. The aesthetic is completely different from that of MobileXPRT 2015, and it is in line with the clean, bright look we used for WebXPRT 3 and HDXPRT 4. We’ve made it easy to select and deselect individual workloads by tapping the workload name (deselected workloads are grayed out; a minimal sketch of that toggle behavior appears after this paragraph), and we’ve consolidated common menu items into an Android-style taskbar at the bottom of the screen. Please note that this is an early view, and some aspects of the screen will change. For instance, we’re certain that the final receipt-scanning workload won’t be called “Optical character recognition.”
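For illustration only, here is a minimal Java sketch of the kind of tap-to-toggle behavior described above. The class name and alpha values are hypothetical assumptions, not taken from the actual MobileXPRT 3 code.

```java
import android.view.View;
import android.widget.TextView;

// Hypothetical sketch: tapping a workload's name flips its selected
// state and grays the label out when deselected. Values are illustrative.
public class WorkloadToggleSketch {
    public static void bind(final TextView workloadLabel, final boolean[] selected) {
        workloadLabel.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                selected[0] = !selected[0];
                // Dim the label when the workload is deselected.
                workloadLabel.setAlpha(selected[0] ? 1.0f : 0.4f);
            }
        });
    }
}
```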

We’ll share more information about HDXPRT 4 and MobileXPRT 3 in the coming weeks. If you have any questions about HDXPRT or MobileXPRT, or would like to share your ideas, please get in touch!

Justin


News from the MobileXPRT 3 team

A few months ago, we shared some of our thoughts during the early planning stages of MobileXPRT 3 development. Since then, we’ve started building the new benchmark with Android Studio SDK 27. We’re now at a place where we can share more details about what to expect in MobileXPRT 3. In a nutshell, one of the five workloads in the previous version, MobileXPRT 2015, is getting a major overhaul, the remaining four workloads are getting updated test content, and we’re adding one completely new workload.

One of the first challenges we tackled was completely rebuilding the Create Slideshow workload. In MobileXPRT 2015, the workload uses FFmpeg to convert photos into video. FFmpeg runs as a natively compiled executable, so it must be built separately for each architecture (x86, x86-64, arm32, arm64, and so on). With each new Android version, the task of maintaining FFmpeg compatibility with numerous architectures and Android versions becomes more complex. MobileXPRT 2015 still works well on most Android devices, but we wanted a more future-proof solution. In MobileXPRT 3, the Create Slideshow workload will use the Android MediaCodec API instead of FFmpeg. This change enables the workload to run successfully on devices that could not complete the workload in MobileXPRT 2015.
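To give a sense of what the MediaCodec approach involves, here is a minimal Java sketch that configures a hardware H.264 encoder. This is not the actual MobileXPRT 3 implementation; the class name, resolution, bit rate, and frame rate are illustrative assumptions.

```java
import android.media.MediaCodec;
import android.media.MediaCodecInfo;
import android.media.MediaFormat;
import android.view.Surface;

// Hypothetical sketch: set up a platform H.264 encoder with the Android
// MediaCodec API. Format parameters below are illustrative only.
public class SlideshowEncoderSketch {
    public static MediaCodec createEncoder() throws Exception {
        MediaFormat format = MediaFormat.createVideoFormat("video/avc", 1280, 720);
        format.setInteger(MediaFormat.KEY_COLOR_FORMAT,
                MediaCodecInfo.CodecCapabilities.COLOR_FormatSurface);
        format.setInteger(MediaFormat.KEY_BIT_RATE, 4_000_000);  // 4 Mbps
        format.setInteger(MediaFormat.KEY_FRAME_RATE, 30);
        format.setInteger(MediaFormat.KEY_I_FRAME_INTERVAL, 1);  // one keyframe per second

        MediaCodec encoder = MediaCodec.createEncoderByType("video/avc");
        encoder.configure(format, null, null, MediaCodec.CONFIGURE_FLAG_ENCODE);
        return encoder;
    }

    public static Surface startEncoder(MediaCodec encoder) {
        // Slide bitmaps can be drawn onto this surface with a Canvas or
        // OpenGL; the compressed output frames can then be written into
        // an MP4 container with a MediaMuxer.
        Surface inputSurface = encoder.createInputSurface();
        encoder.start();
        return inputSurface;
    }
}
```

Because MediaCodec ships with the platform, the same Java code runs on any supported architecture, which avoids exactly the per-architecture build burden the FFmpeg approach imposed.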

We are updating the test content of the following workloads: Apply Photo Effects, Create Photo Collages, Encrypt Personal Content, and Detect Faces to Organize Photos. Where applicable, we will replace assets such as photos and videos with files of more contemporary resolutions and sizes.

In the mobile device market, artificial intelligence and machine learning capabilities are rapidly moving from novelty to integration into many daily tasks, so we wanted to include an AI or ML element in MobileXPRT 3. Our new workload uses Google’s Mobile Vision API to perform optical character recognition (OCR) tasks that involve scanning receipts for personal records or an expense report. The scenario is similar to the OCR receipt-scanning task in WebXPRT 3, though the two workloads are based on different text-recognition technologies.
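As a rough illustration of what a Mobile Vision OCR call looks like, here is a minimal Java sketch that extracts text from a receipt photo. The class and method names are hypothetical; this is not the actual MobileXPRT 3 workload code.

```java
import android.content.Context;
import android.graphics.Bitmap;
import android.util.SparseArray;

import com.google.android.gms.vision.Frame;
import com.google.android.gms.vision.text.TextBlock;
import com.google.android.gms.vision.text.TextRecognizer;

// Hypothetical sketch: run OCR on a receipt photo with the Mobile Vision
// text-recognition API and concatenate the detected text blocks.
public class ReceiptOcrSketch {
    public static String scanReceipt(Context context, Bitmap receiptPhoto) {
        TextRecognizer recognizer = new TextRecognizer.Builder(context).build();
        try {
            if (!recognizer.isOperational()) {
                return "";  // detector dependencies not yet downloaded
            }
            Frame frame = new Frame.Builder().setBitmap(receiptPhoto).build();
            SparseArray<TextBlock> blocks = recognizer.detect(frame);

            StringBuilder text = new StringBuilder();
            for (int i = 0; i < blocks.size(); i++) {
                text.append(blocks.valueAt(i).getValue()).append('\n');
            }
            return text.toString();
        } finally {
            recognizer.release();  // free native detector resources
        }
    }
}
```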

Finally, we’re updating the MobileXPRT UI to improve the look of the benchmark and make it easier to use. We’ll share a sneak peek of the new UI here in the blog around the time of the community preview. If you have any questions about MobileXPRT 2015 or MobileXPRT 3, please let us know!

Justin

XPRTs in the datacenter

The XPRTs have been very successful on desktops, notebooks, tablets, and phones. People have run WebXPRT over 295,000 times, and it, along with benchmarks such as MobileXPRT, HDXPRT, and CrXPRT, has become an important tool around the world for evaluating device performance on consumer and business client platforms.

We’ve begun branching out with tests for edge devices with AIXPRT, our new artificial intelligence benchmark. While typical consumers won’t be able to run AIXPRT on their devices initially, we feel that it is important for the XPRTs to play an active role in a critical emerging market. (We’ll have some updates on the AIXPRT front in the next few weeks.)

Recently, both community members and others have asked about the possibility of the XPRTs moving into the datacenter. Folks face challenges in evaluating the performance and suitability to task of such datacenter mainstays as servers, storage, networking infrastructure, clusters, and converged solutions. These challenges include the lack of easy-to-run benchmarks, the complexity and cost of the equipment (multi-tier servers, large amounts of storage, and fast networks) necessary to run tests, and confusion about best testing practices.

PT has a lot of expertise in measuring datacenter performance, as you can tell from the hundreds of datacenter-focused test reports on our website. We see great potential in working with the BenchmarkXPRT Development Community to help in this area. It is very possible that, as with AIXPRT, our approach to datacenter benchmarks would differ from the approach we’ve taken with previous benchmarks. While we have ideas for useful benchmarks we might develop down the road, more immediate steps could include drafting white papers, developing testing guidelines, or working with vendors to set up a lab.

Right now, we’re trying to gauge the level of interest in having such tools and in helping us carry out these initiatives. What are the biggest challenges you face in datacenter-focused performance and suitability to task evaluations? Would you be willing to work with us in this area? We’d love to hear from you and will be reaching out to members of the community over the coming weeks.

As always, thanks for your help!

Bill

Planning the next version of MobileXPRT

We’re in the early planning stages for the next version of MobileXPRT, and invite you to send us any suggestions you may have. What do you like or not like about MobileXPRT? What features would you like to see in a new version?

When we begin work on a new version of any XPRT, one of the first steps we take is to assess the benchmark’s workloads to determine whether they will provide value during the years ahead. This step almost always involves updating test content such as photos and videos to more contemporary file resolutions and sizes, and it can also involve removing workloads or adding completely new scenarios. MobileXPRT currently includes five performance scenarios (Apply Photo Effects, Create Photo Collages, Create Slideshow, Encrypt Personal Content, and Detect Faces to Organize Photos). Should we stick with these five or investigate other use cases? What do you think?

As we did with WebXPRT 3 and the upcoming HDXPRT 4, we’re also planning to update the MobileXPRT UI to improve the look of the benchmark and make it easier to use.

Crucially, we’ll also build the app using the most current Android Studio SDK. Android development has changed significantly since we released MobileXPRT 2015, and apps must now conform to stricter standards that require explicit user permission for many tasks (a minimal sketch of such a runtime-permission check appears below). Navigating these changes shouldn’t be too difficult, but it’s always possible that we’ll encounter unforeseen challenges at some point during the process.
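For readers unfamiliar with the newer permission model, here is a minimal Java sketch of the kind of runtime check modern Android requires before an app reads shared storage. The class name, the specific permission, and the request code are illustrative assumptions, not MobileXPRT code.

```java
import android.Manifest;
import android.app.Activity;
import android.content.pm.PackageManager;

import androidx.core.app.ActivityCompat;
import androidx.core.content.ContextCompat;

// Hypothetical sketch: request storage access at runtime instead of
// relying on install-time permissions, as newer Android versions require.
public class PermissionSketch {
    private static final int REQUEST_STORAGE = 42;  // arbitrary request code

    public static void ensureStoragePermission(Activity activity) {
        if (ContextCompat.checkSelfPermission(activity,
                Manifest.permission.READ_EXTERNAL_STORAGE)
                != PackageManager.PERMISSION_GRANTED) {
            // Shows the system permission dialog; the user's answer arrives
            // in the activity's onRequestPermissionsResult() callback.
            ActivityCompat.requestPermissions(activity,
                    new String[]{Manifest.permission.READ_EXTERNAL_STORAGE},
                    REQUEST_STORAGE);
        }
    }
}
```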

Do you have suggestions for test scenarios that we should consider for MobileXPRT? Are there existing features we should remove? Are there elements of the UI that you find especially useful, or do you have ideas for improving it? Please let us know. We want to hear from you and make sure that MobileXPRT continues to meet your needs.

Justin

Which browser is the fastest? It’s complicated.

PCWorld recently published the results of a head-to-head browser performance comparison between Google Chrome, Microsoft Edge, Mozilla Firefox, and Opera. As we’ve noted about similar comparisons, no single browser was the fastest in every test. Browser speed sounds like a straightforward metric, but the reality is complex.

For the comparison, PCWorld used three JavaScript-centric test suites (JetStream, SunSpider, and Octane), one benchmark that simulates user actions (Speedometer), a few in-house tests of its own design, and one benchmark that simulates real-world web applications (WebXPRT). Edge came out on top in JetStream and SunSpider, Opera won in Octane and WebXPRT, and Chrome had the best results in Speedometer and PCWorld’s custom workloads.

The reason the benchmarks rank the browsers so differently is that each one has a unique emphasis and tests a specific set of workloads and technologies. Some focus on very low-level JavaScript tasks, some test additional technologies such as HTML5, and some are designed to identify strengths or weaknesses by stressing devices in unusual ways. These approaches are all valid, but it’s important to understand exactly what a given score represents. Some scores reflect a very broad set of metrics, while others assess a very narrow set of tasks. Some scores help you understand the performance you can expect from a device in everyday use, and others measure performance in scenarios you’re unlikely to encounter. For example, when Eric discussed a similar topic in the past, he said the tests in JetStream 1.1 provided information that “can be very useful for engineers and developers, but may not be as meaningful to the typical user.”

As we do with all the XPRTs, we designed WebXPRT to test how devices handle the types of real-world tasks consumers perform every day. While lab techs, manufacturers, and tech journalists can all glean detailed data from WebXPRT, the test’s real-world focus means that the overall score is relevant to the average consumer. Simply put, a device with a higher WebXPRT score is probably going to feel faster to you during daily use than one with a lower score. In today’s crowded tech marketplace, that piece of information provides a great deal of value to many people.

What are your thoughts on browser testing? We’d love to hear from you.

Justin
