
Thoughts from MWC Shanghai 2018

Ni hao from Shanghai! It’s amazing how much can change in a year. This year’s MWC Shanghai, like last year’s, took up about half of the Shanghai New International Expo Centre (SNIEC). “5G +” is the major theme, and unlike last year, 5G is not something in the distant future: it is now assumed to be in progress.

The biggest of the pluses was AI, with a number of booths explicitly sporting 5G + AI signage. There were also 5G plus robots, cars, and cloud services. Many of those are really about AI as well. The show makes it feel like 5G is everywhere and will make everything better (or at least a lot faster). And Asia is leading the way.

[Image: 5G + robotics at MWCS 18.]

As they did last year, most of the booths touted their 5G support, but rather than talking about the future, they presented their 5G as happening now, claiming their products were in real-world tests and citing anticipated deployment schedules. One of the keynote speakers talked about 1.2 billion 5G connections by 2025, with more than half of those in Asia. The purported scale and speed of the transition to 5G are staggering.

[Image: The keynote stage, displaying some big numbers.]

The last two halls I visited showed that the world is not all 5G and AI. One looked at fun current applications of mobile technology, and the other at companies developing technologies for the near future. MWC allowed children into the first of these halls, where they (and we adults) could fly drones and experience VR technology. I watched in some amusement as people crashed drones, rode bikes with VR gear to simulate horses, were 3D scanned, and generally tried out new tech that didn’t always work.

The second hall held small booths from new companies working on future technologies that might be ready “4 years from now” (4YFN). These companies did not have much to show yet, but each booth displayed the company name and a short phrase summing up its future tech. That led to gems such as “Deepscent Labs is a smart scent data company,” “ChineSpain is a marketplace of experiences for Chinese tourists in Spain,” and “Juice is a tech-based music contents startup that creates an ecosystem of music.” The mind boggles!

The XPRTs’ foray into AI with AIXPRT seems well timed based on this show. Other areas from this show that may be worth considering for the XPRTs are 5G and the cloud. We would love to hear your thoughts on those areas. We know they are important, but do you need the XPRTs and their emphasis on real-world benchmarks and workloads in those areas? Drop us a line and let us know!

Bill

A note about a recent CrXPRT update

A tester from Acer recently contacted us about an issue where CrXPRT was freezing indefinitely during the Photo Effects workload. We initially thought the problem was limited to a specific hardware platform or Chrome OS version, but soon discovered the issue was affecting all CrXPRT tests, regardless of the system.

After quite a bit of troubleshooting, we were able to find and fix what turned out to be a simple bug. The problem started with a change we made to increase security and strengthen compliance with GDPR by moving all our web pages to HTTPS. Specifically, we added a redirect that forced principledtechnologies.com to www.principledtechnologies.com. Chrome apps have a manifest property that defines which websites can connect to the application. Because we hadn’t reconfigured the CrXPRT path permissions to account for the new redirect, the test failed. We made the necessary edits to the manifest, tested the fix, and uploaded the updated package (build number 1.0.2.1) to the Chrome Web Store.
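
For readers curious about what this kind of change involves, here is a minimal sketch of the host permissions in a Chrome app manifest. This is not CrXPRT’s actual manifest (JSON doesn’t allow comments, so treat the entire snippet as illustrative); the key point is simply that both the bare and www. forms of the domain need to be listed:

  {
    "name": "CrXPRT",
    "version": "1.0.2.1",
    "manifest_version": 2,
    "permissions": [
      "https://principledtechnologies.com/*",
      "https://www.principledtechnologies.com/*"
    ]
  }

A match pattern that covers only one form of the hostname will cause requests to fail once a redirect sends traffic to the other form.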

If you’re still encountering this problem during testing, check to be sure the app has updated on your system. The changes we made do not affect performance, and all completed CrXPRT test scores from before and after the update are valid and comparable.

We’re grateful whenever community members report issues! If you ever have any problems, questions, or comments regarding any of the XPRTs, please feel free to contact us.

Justin

WebXPRT passes another milestone!

We’re excited to see that users have successfully completed over 250,000 WebXPRT runs! From the original WebXPRT 2013 to the most recent version, WebXPRT 3, this tool has been popular with manufacturers, developers, consumers, and media outlets around the world because it’s easy to run, it runs quickly and on a wide variety of platforms, and it evaluates device performance using real-world tasks.

If you’ve run WebXPRT in any of the more than 458 cities and 64 countries from which we’ve received complete test data—including newcomers Lithuania, Luxembourg, Sweden, and Uruguay—we’re grateful for your help in reaching this milestone. Here’s to another quarter-million runs!

If you haven’t yet transitioned your browser testing to WebXPRT 3, now is a great time to give it a try! WebXPRT 3 includes updated photo workloads with new images and a deep learning task used for image classification. It also uses an optical character recognition task in the Encrypt Notes and OCR scan workload, and it combines part of the DNA Sequence Analysis scenario with a writing sample/spell check scenario to simulate online homework in the new Online Homework workload. Users carry out tasks like these in their browsers daily, making these workloads very effective for assessing how well a device will perform in the real world.

Happy testing to everyone, and if you have any questions about WebXPRT 3 or the XPRTs in general, feel free to ask!

Justin

CrXPRT helps to navigate the changing Chromebook market

Some people envision Chromebooks as low-end, plastic-shelled laptops that large organizations buy in bulk because they’re inexpensive and easy to manage. While many sub-$200 Chromebooks are still available, the platform is no longer limited to budget chipsets and little memory. Consumers can now choose systems with up to 16 GB of RAM and 8th-generation Intel Core CPUs, including Core i7 configurations for those willing to pay around $1,600. In addition, some Chromebooks can now run Android apps, Microsoft Office mobile apps, Linux apps, and even Windows apps. While Chromebooks still depend heavily on connectivity and cloud storage, an increasing number of Chrome apps let you perform substantial productivity tasks offline. The Chrome OS landscape has changed so much that, for certain use cases, the practical hardware gap between Chromebooks and traditional laptops is narrowing.

More consumers might be interested in Chromebooks than was the case a few years ago, but how do they make sense of all the devices on the market? CrXPRT can help by providing objective data on Chromebook performance and battery life. Steven J. Vaughan-Nichols offered a great example of the value CrXPRT can provide in his recent ZDNet article on the new Core i7-based Google Pixelbook. The Pixelbook’s CrXPRT score of 226 showed that it performs everyday tasks faster than any of the Chromebooks in our results database. When trying to decide whether it’s worth spending a few hundred or even a thousand dollars more on a new Chromebook, having the right data in hand can transform guesses into well-informed decisions.

You don’t have to be a tech journalist or even a techie to use CrXPRT. If you’d like to learn more about CrXPRT, we encourage you to read the CrXPRT feature here in the blog or visit CrXPRT.com.

Justin

The WebXPRT 3 results calculation white paper is now available

As we’ve discussed in prior blog posts, transparency is a core value of our open development community. A key part of being transparent is explaining how we design our benchmarks, why we make certain development decisions, and how the benchmarks actually work. This week, to help WebXPRT 3 testers understand how the benchmark calculates results, we published the WebXPRT 3 results calculation and confidence interval white paper.

The white paper explains what the WebXPRT 3 confidence interval is, how it differs from typical benchmark variability, and how the benchmark calculates the individual workload scenario and overall scores. The paper also provides an overview of the statistical techniques WebXPRT uses to translate raw times into scores.

To supplement the white paper’s overview of the results calculation process, we’ve also published a spreadsheet that shows the raw data from a sample test run and reproduces the calculations WebXPRT uses.
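
As a rough illustration of the general approach (not WebXPRT’s exact formulas, which the white paper and spreadsheet document), benchmarks like this typically convert each raw completion time into a score against a fixed calibration time, combine the subscores with a geometric mean, and report a confidence interval over repeated runs. Here is a minimal Python sketch with invented values:

  import math
  import statistics

  def workload_score(raw_time_ms, calibration_time_ms, scale=100):
      # Faster (lower) raw times yield higher scores; the calibration
      # time anchors the scale. These values are illustrative, not
      # WebXPRT's actual constants.
      return scale * calibration_time_ms / raw_time_ms

  def overall_score(workload_scores):
      # A geometric mean keeps one outlier workload from dominating.
      return math.exp(statistics.fmean(math.log(s) for s in workload_scores))

  def confidence_interval_95(run_scores):
      # Rough 95% CI on the mean of repeated runs (assumes normality);
      # the white paper defines WebXPRT's actual confidence interval.
      mean = statistics.fmean(run_scores)
      half = 1.96 * statistics.stdev(run_scores) / math.sqrt(len(run_scores))
      return (mean - half, mean + half)

Again, the actual constants and interval definition are in the white paper; this sketch only shows the general shape of such a calculation.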

The paper and spreadsheet are both available on WebXPRT.com and on our XPRT white papers page. If you have any questions about the WebXPRT results calculation process, please let us know, and be sure to check out our other XPRT white papers.

Justin

More on the way for the XPRT Weekly Tech Spotlight

In the coming months, we’ll continue to add more devices and helpful features to the XPRT Weekly Tech Spotlight. We’re especially interested in adding data points and visual aids that make it easier to quickly understand the context of each device’s test scores. For instance, those of us who are familiar with WebXPRT 3 scores know that an overall score of 250 is pretty high, but site visitors who are unfamiliar with WebXPRT probably won’t know how that score compares to scores for other devices.

We designed Spotlight to be a source of objective data, in contrast to sites that provide subjective ratings for devices. As we pursue our goal of helping users make sense of scores, we want to maintain this objectivity and avoid presenting information in ways that could be misleading.

Introducing comparison aids to the site is forcing us to make some tricky decisions. Because we value input from XPRT community members, we’d love to hear your thoughts on one of the questions we’re facing: How should our default view present a device’s score?

We see three options:

1) Present the device’s score in relation to the overall high and low scores for that benchmark across all devices.
2) Present the device’s score in relation to the overall high and low scores for that benchmark across the broad category of devices to which that device belongs (e.g., phones).
3) Present the device’s score in relation to the overall high and low scores for that benchmark across a narrower sub-category of devices to which that device belongs (e.g., high-end flagship phones).

To think this through, consider WebXPRT, which runs on desktops, laptops, phones, tablets, and other devices. Typically, the WebXPRT scores for phones and tablets are lower than scores for desktop and laptop systems. The first approach helps to show just how fast high-end desktops and laptops handle the WebXPRT workloads, but it could make a phone or tablet look slow, even if its score was good for its category. The second approach would prevent unfair default comparisons between different device types but would still present comparisons between devices that are not true competitors (e.g., flagship phones vs. budget phones). The third approach is the most careful, but would introduce an element of subjectivity because determining the sub-category in which a device belongs is not always clear cut.
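
To make the trade-offs concrete, here is a small Python sketch; every score below is invented for illustration. The same score of 250 sits near the bottom of an all-devices range but at the top of its own category:

  def relative_position(score, group_scores):
      # Place a score within a comparison group's low-to-high range,
      # returning a 0.0-1.0 position suitable for a simple bar or gauge.
      low, high = min(group_scores), max(group_scores)
      if high == low:
          return 1.0
      return (score - low) / (high - low)

  # Hypothetical WebXPRT-style scores for three comparison groups:
  all_devices = [85, 120, 250, 410, 530]   # option 1: every device tested
  phones      = [85, 120, 175, 210, 250]   # option 2: all phones
  flagships   = [210, 230, 250]            # option 3: flagship phones only

  for group in (all_devices, phones, flagships):
      print(round(relative_position(250, group), 2))
  # Prints 0.37, 1.0, 1.0: the phone looks slow against desktops but
  # leads both phone groupings.

A default view built on the first option would render that 0.37 as a short bar even though the device tops its category, which is exactly the kind of potentially misleading presentation we want to avoid.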

Do you have thoughts on this subject, or recommendations for Spotlight in general? If so, let us know.

Justin
