
Running Android-oriented XPRTs on Chrome OS

Since last summer, we’ve been following Google’s progress in bringing Android apps and the Google Play store to Chromebooks, along with its plan to gradually phase out support for Chrome apps over the next few years. Because we currently offer apps that assess battery life and performance for Android devices (BatteryXPRT and MobileXPRT) and Chromebooks (CrXPRT), the way this situation unfolds could affect the makeup of the XPRT portfolio in the years to come.

For now, we’re experimenting to see how well the Android app/Chrome OS merger is working with the devices in our lab. One test case is the Samsung Chromebook Plus, which we featured in the XPRT Weekly Tech Spotlight a few weeks ago. Normally, we would publish only CrXPRT and WebXPRT results for a Chromebook, but installing and running MobileXPRT 2015 from the Google Play store was such a smooth and error-free process that we decided to publish the first MobileXPRT score for a device running Chrome OS.

We also tried running BatteryXPRT on the Chromebook Plus, but even though installation was quick and easy and the test kicked off without a hitch, we could not generate a valid result. Typically, the test would complete several iterations successfully but terminate before producing a result. We’re investigating the problem and will keep the community up to date on what we find.

In the meantime, we continue to recommend that Chromebook testers use CrXPRT for performance and battery life assessment. While we haven’t encountered any issues running MobileXPRT 2015 on Chromebooks, CrXPRT has a proven track record.

If you have any questions about running Android-oriented XPRTs on Chrome OS, or insights that you’d like to share, please let us know.

Justin

Digging deeper

From time to time, we like to revisit the fundamentals of the XPRT approach to benchmark development. Today, we’re discussing the need for testers and benchmark developers to consider the multiple factors that influence benchmark results. For every device we test, all of its hardware and software components have the potential to affect performance, and changing the configuration of those components can significantly change results.

For example, we frequently see significant performance differences between different browsers on the same system. In our recent recap of the XPRT Weekly Tech Spotlight’s first year, we highlighted an example of how testing the same device with the same benchmark can produce different results, depending on the software stack under test. In that instance, the Alienware Steam Machine entry included a WebXPRT 2015 score for each of the two browsers that consumers were likely to use. The first score (356) represented the SteamOS browser app in the SteamOS environment, and the second (441) represented the Iceweasel browser (a Firefox variant) in the Linux-based desktop environment. Including only the first score would have given readers an incomplete picture of the Steam Machine’s web-browsing capabilities, so we thought it was important to include both.
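To put that gap in concrete terms, here’s a minimal Python sketch comparing the two scores from the Steam Machine entry (WebXPRT scores are higher-is-better, so we express the gap as a ratio):

```python
# Compare two WebXPRT 2015 scores for the same device (higher is better).
# The values are the Alienware Steam Machine results cited above.
steamos_score = 356    # SteamOS browser app, SteamOS environment
iceweasel_score = 441  # Iceweasel (Firefox variant), Linux desktop environment

ratio = iceweasel_score / steamos_score
print(f"Iceweasel scored {ratio:.2f}x the SteamOS browser's score, "
      f"a {(ratio - 1) * 100:.0f}% difference.")
```

Running that shows Iceweasel’s score is roughly 24 percent higher, which is exactly the kind of gap a single-score entry would hide.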

We also see performance differences between different versions of the same browser, a fact that’s especially relevant for users of frequently updated browsers such as Chrome. It’s also worth remembering that benchmarks measuring the same general area of performance, such as web browsing, usually test very different things under the hood.

OS updates can also have an impact on performance. Consumers might base a purchase on performance or battery life scores and end up with a device that behaves very differently after an update to a new version of Android or iOS, for example.

Other important factors in the software stack include pre-installed software (commonly referred to as bloatware) and the gradual accumulation of apps that sap performance and battery life.

This is a much larger topic than we can cover in the blog. Let the examples we’ve mentioned remind you to think critically about, and dig deeper into, benchmark results. If we see published XPRT scores that differ significantly from our own results, our first question is always “What’s different between the two devices?” Most of the time, the answer becomes clear as we compare hardware and software from top to bottom.
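When we do that comparison, we’re essentially diffing two configurations field by field. Here’s a minimal Python sketch of the idea; the configuration fields and values below are hypothetical, just to illustrate the kind of top-to-bottom comparison we mean:

```python
# A sketch of the "what's different between the two devices?" check.
# The configuration fields and values are hypothetical examples.

def config_diff(ours: dict, theirs: dict) -> dict:
    """Return each key whose value differs between two device configs."""
    return {key: (ours.get(key), theirs.get(key))
            for key in sorted(ours.keys() | theirs.keys())
            if ours.get(key) != theirs.get(key)}

our_device = {"os": "Android 7.0", "browser": "Chrome 56", "ram_gb": 4}
their_device = {"os": "Android 7.1.1", "browser": "Chrome 55", "ram_gb": 4}

for component, (mine, other) in config_diff(our_device, their_device).items():
    print(f"{component}: ours = {mine!r}, theirs = {other!r}")
```

In practice, the comparison covers far more than three fields, but the principle is the same: walk the stack until the difference that explains the score shows up.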

Justin

Celebrating one year of the XPRT Weekly Tech Spotlight

It’s been just over a year since we launched the XPRT Weekly Tech Spotlight by featuring our first device, the Google Pixel C. Spotlight has since become one of the most popular items at BenchmarkXPRT.com, and we thought now would be a good time to recap the past year, offer more insight into the choices we make behind the scenes, and look at what’s ahead for Spotlight.

The goal of Spotlight is to provide PT-verified specs and test results that can help consumers make smart buying decisions. We try to include a wide variety of device types, vendors, software platforms, and price points in our inventory. The devices also tend to fall into one of two main groups: popular new devices generating a lot of interest and devices that have unique form factors or unusual features.

To date, we’ve featured 56 devices: 16 phones, 11 laptops, 10 two-in-ones, 9 tablets, 4 consoles, 3 all-in-ones, and 3 small-form-factor PCs. The operating systems these devices run include Android, Chrome OS, iOS, macOS, OS X, Windows, and an array of vendor-specific OS variants and skins.

As much as possible, we test using out-of-the-box (OOB) configurations. We want to present test results that reflect what everyday users will experience on day one. Depending on the vendor, the OOB approach can mean that some devices arrive bogged down with bloatware while others are relatively clean. We don’t attempt to “fix” anything in those situations; we simply test each device “as is” when it arrives.

If devices arrive with outdated OS versions (as is often the case with Chromebooks), we update to current versions before testing, because that’s the best reflection of what everyday users will experience. In the past, that approach would’ve been more complicated with Windows systems, but the Microsoft shift to “Windows as a service” ensures that most users receive significant OS updates automatically by default.

The OOB approach also means that the WebXPRT scores we publish reflect the performance of each device’s default browser, even if it’s possible to install a faster browser. Our goal isn’t to perform a browser shootout on each device, but to give an accurate snapshot of OOB performance. For instance, last week’s Alienware Steam Machine entry included two WebXPRT scores: a 356 on the SteamOS browser app and a 441 on Iceweasel 38.8.0 (a Firefox variant used in the device’s Linux-based desktop mode). That’s a significant difference, but the main question for us was which browser was more likely to be used in an OOB scenario. With the Steam Machine, the answer was truly “either one.” Many users will use the browser app in the SteamOS environment, and many will take the few steps needed to access the desktop environment. In that case, even though one browser was significantly faster than the other, omitting one score in favor of the other would have excluded results from an equally likely OOB environment.

We’re always looking for ways to improve Spotlight. We recently began including more photos for each device, including ones that highlight important form-factor elements and unusual features. Moving forward, we plan to expand Spotlight’s offerings to include automatic score comparisons, additional system information, and improved graphical elements. Most importantly, we’d like to hear your thoughts about Spotlight. What devices and device types would you like to see? Are there specs that would be helpful to you? What can we do to improve Spotlight? Let us know!

Justin

TouchXPRT’s future

If you’ve been following the blog, you know that we’ve been reviewing each part of the XPRT portfolio. If you missed our discussions of HDXPRT, BatteryXPRT, WebXPRT, and CrXPRT, we encourage you to check them out and send us any thoughts you may have. This week, we continue that series by discussing the state of TouchXPRT and what we see down the road for it in 2017.

We released TouchXPRT 2016, an app for evaluating the performance of Windows 10 and Windows 10 Mobile devices, last February. We built the app by porting the TouchXPRT 2014 performance workloads to the Universal Windows Platform (UWP) app format, which allows a single app package to run on PCs, phones, tablets, and even consoles.

TouchXPRT 2016 installation is quick and easy, and the test completes in under 15 minutes on most devices. The app runs tests based on five everyday tasks (Beautify Photos, Blend Photos, Convert Videos for Sharing, Create Music Podcast, and Create Slideshow from Photos). It measures how long your device takes to complete each task, produces results for each scenario, and gives you an overall score.
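TouchXPRT’s documentation covers its exact scoring math, but for readers curious about the general technique, here’s a sketch in Python of one common approach: normalize each task’s completion time against a calibration (baseline) device, then combine the normalized results with a geometric mean. All the times below are made up, and TouchXPRT’s actual formula may differ:

```python
# A sketch of one common way benchmarks turn per-task completion times into
# an overall score: normalize each time against a calibration (baseline)
# device, then take the geometric mean. This illustrates the general
# technique only; TouchXPRT's actual scoring formula may differ, and all
# numbers below are hypothetical.
from math import prod

baseline_seconds = {  # hypothetical calibration-device times
    "Beautify Photos": 20.0,
    "Blend Photos": 25.0,
    "Convert Videos for Sharing": 60.0,
    "Create Music Podcast": 45.0,
    "Create Slideshow from Photos": 30.0,
}
measured_seconds = {  # hypothetical times from the device under test
    "Beautify Photos": 14.2,
    "Blend Photos": 19.8,
    "Convert Videos for Sharing": 51.5,
    "Create Music Podcast": 36.0,
    "Create Slideshow from Photos": 22.7,
}

# Lower times are better, so each task ratio is baseline / measured.
ratios = [baseline_seconds[t] / measured_seconds[t] for t in baseline_seconds]
overall = prod(ratios) ** (1 / len(ratios)) * 100  # scaled so baseline = 100
print(f"Overall score: {overall:.0f}")
```

A geometric mean keeps any single task from dominating the overall score, which is one reason many benchmarks use it to combine workload results.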

As we think about the path forward for TouchXPRT, we’re aware that many expect 2017 to be a year of significant change in the Windows world, with two updates scheduled for release. Microsoft is slated to release the Windows 10 Creators Update (version 1703) in April, and a subsequent version of Windows codenamed Redstone 3 may arrive this fall. Many tech observers believe that the Creators Update will introduce new creativity and gaming features, along with a UI upgrade named Project NEON. Major foundational shifts in the OS’s structure are more likely to appear with Redstone 3. At this point, quite a lot is still up in the air, but we’ll be following developments closely.

As we learn more about upcoming changes, we’ll have the opportunity to reevaluate TouchXPRT workloads and determine the best way to incorporate new technologies. Virtual reality, 3D, and 4K are especially exciting, but it’s too soon to know how we might incorporate them in a future version of TouchXPRT.

Because TouchXPRT 2016 continues to run well on a wide range of Windows 10 devices, we think it’s best to keep supporting the current version until we get a better idea of what’s in store for Windows.

If you have any thoughts on the future of Windows performance testing, please let us know!

Bill

Reflecting on 2016

The beginning of a new year is a good time to look back on the previous 12 months and take stock of everything that happened. Here’s a quick recap of a very busy year:

In 2016, the XPRTs traveled quite a bit. Eric went to CES in Las Vegas, Mark attended MWC in Barcelona, and Bill flew out to IDF16 in Shenzhen.

We also sent a team to Seattle for the first XPRT Women Code-a-thon, an event we’re very proud to have sponsored and co-hosted along with ChickTech, a nonprofit organization dedicated to increasing the number of women in tech-related fields. The Code-a-thon also served as inspiration for an eight-part video series entitled Women Coding for Change. The series explains the motivation behind the Code-a-thon and profiles several of the participants. If you haven’t watched the videos, check them out. They’re well worth the time.

Speaking of videos, we also published one about Nebula Wolf, the mini-game workload produced through our first collaboration with the North Carolina State Senior Design Center. That experience was promising enough for us to partner with another student team this past fall, which resulted in a virtual reality app that we hope to share with the community in the near future.

Of course, we also continued work on our suite of benchmark tools and related resources. We released TouchXPRT 2016 to the public, published the Exploring TouchXPRT 2016 white paper, and released the TouchXPRT 2016 source code to community members.

2016 also saw the debut of the XPRT Weekly Tech Spotlight, a new way for device vendors and manufacturers to share verified test results with buyers around the world. We put 46 devices in the spotlight throughout the year and published Back-to-School, Black Friday, and Holiday device showcases.

In the last quarter of 2016, we celebrated our most widely used benchmark, WebXPRT, passing the 100,000-run milestone. WebXPRT is still going strong and is as useful and relevant as ever!

Finally, we ended the year with the exciting news that we’re moving forward with efforts to develop a machine-learning performance evaluation tool. We look forward to engaging with the community in the coming year as we tackle this challenge!

As always, we’re grateful to everyone who’s helped make the BenchmarkXPRT Development Community a strong, vibrant, and relevant resource for people all around the world. Here’s to a great 2017!

Justin

Principled Technologies and the BenchmarkXPRT Development Community announce new effort to evaluate machine learning performance

Durham, NC – Principled Technologies (PT) and the BenchmarkXPRT Development Community, which PT administers, are pleased to announce an initiative to develop a new tool for evaluating systems’ machine learning performance.

Machine learning is a disruptive technology that has the potential to influence a broad range of industries. While many consumer and commercial applications already use machine learning for computer vision, natural language processing, and data analytics, there is currently no comprehensive machine learning or deep learning benchmark that covers home, automotive, industrial, and retail use cases.

“These are still the early days of the technology. A fragmented software and hardware landscape and lack of standardization make it complex and challenging to evaluate performance in machine learning,” said Bill Catchings, Co-Founder and Co-Owner of PT. “We’ve decided to take on that challenge, and we invite all interested parties to participate in the creation, evaluation, and testing of this new tool.”

The new tool will join the rest of the industry-standard benchmarks from the BenchmarkXPRT Development Community: WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, BatteryXPRT, and HDXPRT.

To learn more about and join the BenchmarkXPRT Development Community, go to www.BenchmarkXPRT.com.

About Principled Technologies, Inc.

Principled Technologies, Inc. is a leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.PrincipledTechnologies.com.

Company Contact

Bill Catchings
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
bcatchings@principledtechnologies.com
