Category: WebXPRT

Investigating a possible issue with WebXPRT 4 in iOS 17

Yesterday, Apple revealed the iPhone 15 and iPhone 15 Pro at its annual fall event, along with a new version of the iOS mobile operating system (iOS 17). The official iOS 17 launch will take place on September 18th, but before then, users of newer iPhones can install the OS via the Apple Beta Software Program.

Today, a tech journalist informed us that during their testing of iPhone 15 and iPhone 15 Pro models running the iOS 17 Beta, WebXPRT 4 has been freezing during the Encrypt Notes and OCR Scan workload in the Safari 17 browser. Here in the lab, we were able to immediately replicate the issue on an iPhone 12 Pro running the iOS 17 Beta.

Our initial troubleshooting confirmed that WebXPRT 3 successfully runs to completion on iOS 17 Beta, so it appears that the problem is specific to WebXPRT 4. We also confirmed that WebXPRT 4 freezes at the same place when running in the Google Chrome browser on iOS 17 Beta, so we know that the problem does not occur only in Safari.

We’re currently investigating the issue and will publish our findings here in the blog as soon as we feel confident that we’ve identified both the root cause and a workable solution, if a solution is necessary. A solution on our end would not be necessary if, for example, the issue turns out to be a bug on the iOS 17 Beta side that Apple resolves before the official launch.

We apologize for any inconvenience this issue might cause for tech reviewers and iPhone users, and we appreciate your patience while we figure out what’s going on. If you have any questions about WebXPRT 4, please don’t hesitate to ask!

Justin

A note about CrXPRT 2

Recent visitors to CrXPRT.com may have seen a notice that encourages them to use WebXPRT 4 instead of CrXPRT 2 for performance testing on high-end Chromebooks. The notice reads as follows:

NOTE: Chromebook technology has progressed rapidly since we released CrXPRT 2, and we’ve received reports that some CrXPRT 2 workloads may not stress top-bin Chromebook processors enough to give the necessary accuracy for users to compare their performance. So, for the latest test to compare the performance of high-end Chromebooks, we recommend using WebXPRT 4.

We made this recommendation because of the evident limitations of the CrXPRT 2 performance workloads when testing newer high-end hardware. CrXPRT 2 itself is not that old (2020), but when we created the CrXPRT 2 performance workloads, we started with a core framework of CrXPRT 2015 performance workloads. In a similar way, we built the CrXPRT 2015 workloads on a foundation of WebXPRT 2015 workloads. At the time, the harness and workload structures we used to ensure WebXPRT 2015’s cross-browser capabilities provided an excellent foundation that we could adapt for our new ChromeOS benchmark. Consequently, CrXPRT 2 is a close developmental descendant of WebXPRT 2015. Some of the legacy WebXPRT 2015/CrXPRT 2 workloads do not stress current high-end processors enough to differentiate their performance effectively, and they do not engage the latest web technologies.

In the past, the Chromebook market skewed heavily toward low-cost devices with down-bin, inexpensive processors, making this limitation less of an issue. Now, however, more Chromebooks offer top-bin processors on par with traditional laptops and workstations. Because of the limitations of the CrXPRT 2 workloads, we now recommend WebXPRT 4 for both cross-browser and ChromeOS performance testing on the latest high-end Chromebooks. WebXPRT 4 includes updated test content, newer JavaScript tools and libraries, modern WebAssembly workloads, and additional Web Workers tasks that cover a wide range of performance requirements.
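As a side note for readers less familiar with the Web Workers pattern that modern browser workloads rely on, here is a minimal, generic sketch of offloading a compute-heavy task to a worker thread and timing it. It is purely illustrative and is not WebXPRT 4 workload code; the inline task and numbers are invented for the example.

```typescript
// Generic illustration of offloading work to a Web Worker.
// This is not WebXPRT 4 source code; the task and values are hypothetical.

const workerSource = `
  self.onmessage = (event) => {
    const n = event.data;
    // Simple compute-heavy stand-in task: sum of square roots.
    let total = 0;
    for (let i = 1; i <= n; i++) {
      total += Math.sqrt(i);
    }
    self.postMessage(total);
  };
`;

const workerUrl = URL.createObjectURL(
  new Blob([workerSource], { type: "application/javascript" })
);
const worker = new Worker(workerUrl);

const start = performance.now();
worker.onmessage = (event) => {
  const elapsedMs = performance.now() - start;
  console.log(`Worker result: ${event.data}, elapsed: ${elapsedMs.toFixed(1)} ms`);
  worker.terminate();
  URL.revokeObjectURL(workerUrl);
};
worker.postMessage(10_000_000);
```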

While CrXPRT 2 continues to function as a capable performance and battery life comparison test for many ChromeOS devices, WebXPRT 4 is a more appropriate tool to use with new high-end devices. If you haven’t yet used WebXPRT 4 for Chromebook comparison testing, we encourage you to give it a try!

If you have any questions or concerns about CrXPRT 2 or WebXPRT 4, please don’t hesitate to ask!

Justin

A bit of house cleaning at BenchmarkXPRT.com

When we’ve released a new version of an XPRT benchmark app, it’s been our practice for many years to maintain links to previous versions on the benchmark’s main page. For example, visitors can start on the WebXPRT 4 homepage at WebXPRT.com and follow links to access WebXPRT 3, WebXPRT 2015, and WebXPRT 2013. Historically, we’ve maintained these links because labs and tech reviewers usually take a while to introduce a new benchmark into their testing suites. Continued access to the older benchmarks also allows users to quickly compare new devices to old devices without retesting everything.

That being said, several of the XPRT pages currently contain links to benchmarks that we no longer actively support. Some of those benchmarks still function correctly, and testers occasionally use them, but a few no longer work on the latest versions of the operating systems or browsers that we designed them to test. While we want to continue to provide a way for longtime XPRT users to access legacy XPRTs, we also want to avoid potential confusion for new users. We believe our best way forward is to archive older tests in a separate part of the site.

In the coming weeks, we’ll be moving several legacy XPRT benchmarks to an archive section of the site. Once the new section is ready, we’ll link to it from the Extras drop-down menu at the top of BenchmarkXPRT.com. The benchmarks will still be available in the archive, but we will not actively support them or directly link to them from the homepages of active XPRTs.

During this process, we’ll move the following benchmarks to the archive section:

  • WebXPRT 2015 and 2013
  • CrXPRT 2015
  • HDXPRT 2014
  • TouchXPRT 2014
  • MobileXPRT 2015 and 2013

If you have any questions or concerns about the archive process or access to legacy XPRTs, please let us know.

Justin

Exploring the XPRT white paper library

As part of our commitment to publishing reliable, unbiased benchmarks, we strive to make the XPRT development process as transparent as possible. In the technology assessment industry, it’s not unusual for people to claim that any given benchmark contains hidden biases, so we take preemptive steps to address this issue by publishing XPRT benchmark source code, detailed system disclosures and test methodologies, and in-depth white papers. Today, we’re focusing on the XPRT white paper library.

The XPRT white paper library currently contains 21 white papers that we’ve published over the last 12 years. We started publishing white papers to provide XPRT users with more information about how we design our benchmarks, why we make certain development decisions, and how the benchmarks work. If you have questions about any aspect of one of the XPRT benchmarks, the white paper library is a great place to find some answers.

For example, the Exploring WebXPRT 4 white paper describes the design and structure of WebXPRT 4, including detailed information about the benchmark’s harness, HTML5 and WebAssembly (WASM) capability checks, and the structure of the performance test workloads. It also includes explanations of the benchmark’s scoring methodology, how to automate tests, and how to submit results for publication.
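To give a flavor of what browser capability checks of this kind can look like (a minimal, generic sketch, not the actual checks the white paper documents), a test page can probe for the relevant APIs before scheduling any workloads:

```typescript
// Minimal, generic feature-detection sketch. The specific checks WebXPRT 4
// performs are described in the white paper; these are illustrative only.

interface CapabilityReport {
  canvas2d: boolean;
  webWorkers: boolean;
  indexedDb: boolean;
  webAssembly: boolean;
}

function checkCapabilities(): CapabilityReport {
  const canvas = document.createElement("canvas");
  return {
    canvas2d: typeof canvas.getContext === "function" && canvas.getContext("2d") !== null,
    webWorkers: typeof Worker !== "undefined",
    indexedDb: typeof indexedDB !== "undefined",
    webAssembly: typeof WebAssembly === "object" &&
                 typeof WebAssembly.instantiate === "function",
  };
}

console.log("Capability report:", checkCapabilities());
```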

The companion WebXPRT 4 results calculation white paper explains the formulas that WebXPRT 4 uses to calculate the individual workload scenario scores and overall score, provides an overview of the statistical techniques WebXPRT uses to translate raw timings into scores, and explains the benchmark’s confidence interval and how it differs from typical benchmark variability. To supplement the white paper’s discussion of the results calculation process, we published a results calculation spreadsheet that shows the raw data from a sample test run and reproduces the exact calculations WebXPRT uses to produce test scores.
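For readers who want a rough feel for the kind of arithmetic involved before opening the paper or the spreadsheet, here is a deliberately simplified sketch. The baseline times, workload names, scaling, and normal-approximation confidence interval below are all hypothetical illustrations, not WebXPRT 4’s published formulas:

```typescript
// Deliberately simplified scoring sketch. Baselines, scaling, and statistics
// are hypothetical; see the results calculation white paper and spreadsheet
// for the formulas WebXPRT 4 actually uses.

// Hypothetical baseline times (ms) for each workload on a reference device.
const baselineMs: Record<string, number> = {
  "Workload A": 1200,
  "Workload B": 1800,
  "Workload C": 900,
};

// Raw timings (ms) from one run on the device under test.
const runMs: Record<string, number> = {
  "Workload A": 950,
  "Workload B": 1500,
  "Workload C": 700,
};

// Running faster than the baseline yields a score above 100.
const workloadScores = Object.keys(baselineMs).map(
  (name) => (baselineMs[name] / runMs[name]) * 100
);

// Combine workload scores with a geometric mean so no single workload dominates.
const geometricMean = (values: number[]): number =>
  Math.exp(values.reduce((sum, v) => sum + Math.log(v), 0) / values.length);

const overallScore = geometricMean(workloadScores);

// A simple 95% confidence interval across repeated overall scores
// (normal approximation, illustrative only).
function confidenceInterval95(scores: number[]): number {
  const n = scores.length;
  const mean = scores.reduce((a, b) => a + b, 0) / n;
  const variance = scores.reduce((a, b) => a + (b - mean) ** 2, 0) / (n - 1);
  return 1.96 * Math.sqrt(variance / n);
}

console.log("Workload scores:", workloadScores.map((s) => s.toFixed(1)));
console.log("Overall score:", overallScore.toFixed(1));
console.log("95% CI half-width:", confidenceInterval95([245, 240, 248, 243]).toFixed(1));
```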

We hope that the XPRT white paper library will prove to be a useful resource for you. If you have questions about any of our white papers, or suggestions for topics that you’d like us to cover in possible future white papers, please let us know!

Justin

Check out the WebXPRT 4 results viewer

New visitors to our site may not be aware of the WebXPRT 4 results viewer and how to use it. The viewer provides WebXPRT 4 users with an interactive, information-packed way to browse test results that is not available for earlier versions of the benchmark. With the viewer, users can explore all of the PT-curated results that we’ve published on WebXPRT.com, find more detailed information about those results, and compare results from different devices. The viewer currently displays over 460 results, and we add new entries each week.

The screenshot below shows the tool’s default display. Each vertical bar in the graph represents the overall score of a single test result, with bars arranged from lowest to highest. To view a single result in detail, the user hovers over a bar until it turns white and a small popup window displays the basic details of the result. If the user clicks to select the highlighted bar, the bar turns dark blue, and the dark blue banner at the bottom of the viewer displays additional details about that result.

In the example above, the banner shows the overall score (227), the score’s percentile rank (66th) among the scores in the current display, the name of the test device, and basic hardware disclosure information. If the source of the result is PT, users can click the Run info button to see the run’s individual workload scores. If the source is an external publisher, users can click the Source link to navigate to the original site.

The viewer includes a drop-down menu that lets users quickly filter results by major device type categories, and a tab with additional filtering options, such as browser type, processor vendor, and result source. The screenshot below shows the viewer after I used the device type drop-down filter to select only desktops.

The screenshot below shows the viewer as I use the filter tab to explore additional filter options, such as processor vendor.

The viewer also lets users pin multiple specific runs, which is helpful for making side-by-side comparisons. The screenshot below shows the viewer after I pinned four runs and viewed them on the Pinned runs screen.

The screenshot below shows the viewer after I clicked the Compare runs button. The overall and individual workload scores of the pinned runs appear in a table.
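Under the hood, views like these simply narrow or group the same set of result records. As a purely hypothetical sketch of that idea (the record fields and sample data below are invented for illustration and are not the viewer’s actual code or data):

```typescript
// Hypothetical result records and filter logic, for illustration only;
// the fields and values are not taken from the actual results viewer.

interface ViewerResult {
  device: string;
  deviceType: "desktop" | "laptop" | "phone" | "tablet";
  browser: string;
  processorVendor: string;
  source: string;
  overallScore: number;
}

const results: ViewerResult[] = [
  { device: "Example desktop", deviceType: "desktop", browser: "Chrome",
    processorVendor: "Intel", source: "PT", overallScore: 312 },
  { device: "Example phone", deviceType: "phone", browser: "Safari",
    processorVendor: "Apple", source: "PT", overallScore: 198 },
];

// Keep only the results that match every selected filter value.
function filterResults(
  all: ViewerResult[],
  criteria: Partial<Pick<ViewerResult, "deviceType" | "browser" | "processorVendor" | "source">>
): ViewerResult[] {
  return all.filter((result) =>
    Object.entries(criteria).every(
      ([key, value]) => result[key as keyof ViewerResult] === value
    )
  );
}

// Roughly what selecting only desktops in the device type drop-down does.
console.log(filterResults(results, { deviceType: "desktop" }));
```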

We’re excited about the WebXPRT 4 results viewer, and we want to hear your feedback. Are there features you’d really like to see, or ways we can improve the viewer? Please let us know, and send us your latest test results!

Justin

The role of potential WebXPRT 4 auxiliary workloads

As we mentioned in our most recent blog post, we’re seeking suggestions for ways to improve WebXPRT 4. We’re open to the prospect of adding both non-workload features and new auxiliary tests, e.g., a battery life or WebGPU-based graphics test scenario.
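As background on the WebGPU possibility, any WebGPU-based workload would first need the browser to expose the API at all. Here is a minimal sketch of that kind of availability check (illustrative only; not a committed feature or actual WebXPRT code):

```typescript
// Minimal WebGPU availability check (illustrative sketch only).
// navigator.gpu is the standard WebGPU entry point; it is undefined in
// browsers that don't support WebGPU.

async function webGpuAvailable(): Promise<boolean> {
  const gpu = (navigator as any).gpu;
  if (!gpu) {
    return false; // Browser exposes no WebGPU API.
  }
  try {
    const adapter = await gpu.requestAdapter();
    return adapter !== null; // Adapter may be null if no suitable GPU exists.
  } catch {
    return false;
  }
}

webGpuAvailable().then((supported) => {
  console.log(`WebGPU supported: ${supported}`);
});
```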

To prevent any confusion among WebXPRT 4 testers, we want to reiterate that any auxiliary workloads we might add will not affect existing WebXPRT 4 subtest or overall scores in any way. Auxiliary tests would be experimental or targeted workloads that run separately from the main test and produce their own scores. Current and future WebXPRT 4 results will be comparable to one another, so users who’ve already built a database of WebXPRT 4 scores will not have to retest their devices. Any new tests will be add-ons that allow us to continue expanding the rapidly growing body of published WebXPRT 4 test results while making the benchmark even more valuable to users overall.

If you have any thoughts about potential browser performance workloads, or any specific web technologies that you’d like to test, please let us know.

Justin
