

Following up on CrXPRT 2 battery life errors

A few weeks ago, we discussed error messages that a tester received when starting up CrXPRT 2 after a battery life test. CrXPRT 2 battery life tests require a full battery rundown, after which the tester plugs in the Chromebook, turns it on, opens the CrXPRT 2 app, and sees the test results. In the reported cases, the tester opened the app after a battery life test that seemed successful, but saw “N/A” or “test error” messages instead of the results they expected.

During discussions about the end-of-test system environment, we realized that some testers might be unclear about how to tell that the battery has fully run down. During the system idle portion of each CrXPRT 2 battery life test iteration, the Chromebook screen turns black and a small cursor appears somewhere on the screen to let testers know the test is still in progress. We suspect that some testers, seeing the black screen but not noticing the cursor, assume the system has shut down. Restarting CrXPRT 2 before the battery life test is complete could explain some of the “N/A” or “test error” messages users have reported.

If you see a black screen without a cursor, you can check whether the test is complete by looking for the small system power indicator light on the side or top of most Chromebooks. The light is usually red, orange, or green, but if a light of any color is on, the test is still underway. When the light goes out, the test has ended, and you can plug in the system and power it on to see the results.

Note that some Chromebooks provide low-battery warnings onscreen. During CrXPRT 2 battery life runs, testers should ignore these.

We hope this clears up any confusion about how to know when a CrXPRT 2 battery life test has ended. If you’ve received repeated “N/A” or “test error” messages during your CrXPRT 2 testing and the information above does not help, please let us know!

Justin

Our results database, your resource

Testers who have started using the XPRT benchmarks recently may not know about one of the free resources we offer. The XPRT results database currently holds more than 2,400 test results from over 90 sources, including major tech review publications around the world, OEMs, and independent testers. It offers a wealth of current and historical performance data across all the XPRT benchmarks and hundreds of devices.

We update the results database several times a week, adding selected results from our own internal lab testing, end-of-test user submissions, and reliable tech media sources. (After you run one of the XPRTs, you can choose to submit the results, but they don’t automatically appear in the database.)

Before adding a result, we evaluate whether the score makes sense and is consistent with general expectations, which we can do only when we have sufficient system information details. For that reason, we encourage testers to disclose as much hardware and software information as possible when publishing or submitting a result.

We encourage visitors to our site to explore the XPRT results database. There are three primary ways to do so. The first is by visiting the main BenchmarkXPRT results browser, which displays results entries for all of the XPRT benchmarks in chronological order (see the screenshot below). Users can narrow the results by selecting a benchmark from the drop-down menu and can type values, such as vendor or the name of a tech publication, into the free-form filter field. For results we produced in our lab, clicking “PT” in the Source column takes you to a page with additional disclosure information for the test system. For sources outside our lab, clicking the source name takes you to the original article or review that contains the result.
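For readers who think in code, here’s a rough Python sketch of the kind of filtering the results browser performs. The entries, field names, and filter_results() helper are hypothetical; the actual browser is a web page, not an API.

```python
# A rough sketch of the two filters the results browser offers: a benchmark
# drop-down and a free-form text field. The entries, field names, and
# filter_results() helper are hypothetical; the real browser is a web page.

results = [
    {"benchmark": "WebXPRT 3", "device": "Example Laptop A", "source": "PT"},
    {"benchmark": "CrXPRT 2", "device": "Example Chromebook", "source": "Example Tech Review"},
    {"benchmark": "WebXPRT 3", "device": "Example Tablet", "source": "Example Tech Review"},
]

def filter_results(entries, benchmark=None, text=""):
    """Keep entries that match the selected benchmark and the free-form text."""
    text = text.lower()
    return [
        entry for entry in entries
        if (benchmark is None or entry["benchmark"] == benchmark)
        and any(text in str(value).lower() for value in entry.values())
    ]

# Example: show only WebXPRT 3 results whose fields mention "laptop".
for entry in filter_results(results, benchmark="WebXPRT 3", text="laptop"):
    print(entry)
```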

The second way to access our published results is by visiting the results page for each individual XPRT benchmark. Go to the page of the benchmark you’re interested in, and look for the blue View Results button. Clicking it takes you to a page that displays results for only that benchmark. You can use the free-form filter on the page to filter those results, and can use the Benchmarks drop-down menu to jump to the other individual XPRT results pages.

The third way to view information in our results database is with the WebXPRT Processor Comparison Chart. When we publish a new WebXPRT result, the score automatically appears in the processor comparison chart as well. For each processor, the chart shows a bar representing the average score. Mousing over the bar displays a popup indicating the number of WebXPRT results we currently have for that processor, and clicking the bar lets you view the results. You can change the number of results the chart displays on each page, and use the drop-down menu to toggle back and forth between the WebXPRT 3 and WebXPRT 2015 charts.
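As a rough illustration of the aggregation behind the chart, the Python sketch below groups result entries by processor and reports an average score and a result count. The processor names and scores are placeholders, not values from our database.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical WebXPRT result entries; the processor names and scores are
# placeholders, not values from the XPRT results database.
results = [
    {"processor": "Processor A", "score": 150},
    {"processor": "Processor A", "score": 158},
    {"processor": "Processor B", "score": 95},
]

# Group the scores by processor, then report an average and a result count,
# which is roughly the information the chart's bars and popups convey.
scores_by_processor = defaultdict(list)
for entry in results:
    scores_by_processor[entry["processor"]].append(entry["score"])

for processor, scores in sorted(scores_by_processor.items()):
    print(f"{processor}: average score {mean(scores):.1f} across {len(scores)} result(s)")
```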

We hope you’ll take some time to browse the information in our results database. We welcome your feedback about what you’d like to see in the future and suggestions for improvement. Our database contains the XPRT scores that we’ve gathered, but we publish them as a resource for you. Let us know what you think!

Justin

CloudXPRT is up next, and we’re thinking about how to handle results submission and publication

Last month, we provided an update on the CloudXPRT development process and more information about the three workloads that we’re including in the first build. We’d initially hoped to release the build at the end of April, but several technical challenges have caused us to push the timeline out a bit. We believe we’re very close to ready, and look forward to posting a release announcement soon.

In the meantime, we’d like to hear your thoughts about the CloudXPRT results publication process. Traditionally, we’ve published XPRT results on our site on a rolling basis. When we complete our own tests, receive results submissions from other testers, or see results published in the tech media, we authenticate them and add them to our site. This lets testers make their results public on their timetable, as frequently as they want.

Some major benchmark organizations use a different approach, and create a schedule of periodic submission deadlines. After each deadline passes, they review the batch of submissions they’ve received and publish all of them together on a single later date. In some cases, they release results only two or three times per year. This process offers a high level of predictability. However, it can pose significant scheduling obstacles for other testers, such as tech journalists who want to publish their results in an upcoming device review and need official results to back up their claims.

We’d like to hear what you think about the different approaches to results submission and publication that you’ve encountered. Are there aspects of the XPRT approach that you like? Are there things we should change? Should we consider periodic results submission deadlines and publication dates for CloudXPRT? Let us know what you think!

Justin

Results details and unexpected behavior with the CrXPRT 2 battery life test

It’s been two weeks since the CrXPRT 2 public release, and we’re happy to see widespread interest in the test right out of the gate!

This week, we received a couple of questions about its battery life test from Melissa Riofrio at PCWorld. First, she asked for clarification about the relationship between the rundown time and the 30-minute increments that appear in the iteration details table for each battery life run. Second, she asked what could be causing her to get “N/A” and “test error” battery life results at the end of what appeared to be successful tests. Both topics may be of interest to other CrXPRT 2 testers, so we’ve decided to address them here in the blog and invite our readers to provide any relevant feedback.

Rundown time vs. elapsed time

When you’re viewing previous CrXPRT 2 test results and click the Details link for a specific battery life test run, a window displaying additional test information appears (the screenshot below shows an example). The window first provides performance details, then presents a table with several data points for each iteration.

The data point in the far-right column, elapsed time, can be slightly confusing. Each test iteration runs for 30 minutes, and this column provides a cumulative total of these 30-minute increments. In some instances, these totals accurately reflect the actual time elapsed since testing began. However, if the test system shuts down for some reason before completing an iteration, this table will still show the entire 30 minutes allotted for that iteration. In these cases, the cumulative elapsed time value in the far-right column will not match the rundown time that the test reports for the system’s battery life. For that reason, testers should always consider rundown time the definitive value for battery life.
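A quick, hypothetical example may help illustrate the difference. In the Python sketch below, a run lasts seven full 30-minute iterations plus 12 minutes of an eighth before the battery dies; because the details table counts the partial iteration as a full block, the cumulative elapsed time exceeds the reported rundown time.

```python
# Hypothetical numbers illustrating elapsed time vs. rundown time.
# Each CrXPRT 2 battery life iteration is logged as a full 30-minute block,
# even if the system shuts down partway through the final iteration.

ITERATION_MINUTES = 30

completed_iterations = 7           # full iterations before the battery died
minutes_into_final_iteration = 12  # partial final iteration

# The details table counts the partial iteration as a full block...
elapsed_time_minutes = (completed_iterations + 1) * ITERATION_MINUTES      # 240

# ...while the rundown time reflects when the battery actually ran out.
rundown_time_minutes = (completed_iterations * ITERATION_MINUTES
                        + minutes_into_final_iteration)                    # 222

print(f"Cumulative elapsed time: {elapsed_time_minutes} minutes")
print(f"Reported rundown time:   {rundown_time_minutes} minutes (the definitive value)")
```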


“N/A” and “test error” battery life results after apparently successful tests

We’re actively investigating this issue at present. We’ve tested a wide range of Chromebooks, both old and new, on several versions of Chrome OS, including the latest versions, and have been unable to reproduce the problem. Have you witnessed this behavior at the end of a CrXPRT 2 battery life test? If so, we’d love to get more information from you about the system under test and your testing procedures, so please contact us.

We’re grateful to Melissa for raising these questions, and we appreciate everyone’s feedback on CrXPRT 2. Hopefully, we’ll soon be able to determine the cause of the “N/A” and “test error” results and find a solution. We’ll be sure to share that information here in the blog once we do.

Justin

Make confident choices about your company’s future tech with the XPRTs

Durham, NC, April 23, 2020 — Principled Technologies and the BenchmarkXPRT Development Community have released a video on the benefits of consulting the XPRTs before committing to new technology purchases.

AIXPRT, one of the XPRT family of benchmark tools, runs image-classification and object-detection workloads to determine how well tech handles AI and machine learning.

CloudXPRT, another XPRT tool, accurately measures the end-to-end performance of modern, cloud-first applications deployed on infrastructure as a service (IaaS) platforms – allowing corporate decision-makers to select the best configuration for every objective.

All of the XPRTs give companies the real-world information necessary to determine which prospective future tech purchases will pay off – and which will disappoint.

According to the video, “The XPRTs don’t just look at specs and features; they gauge a technology solution’s real-world performance and capabilities. So you know whether switching environments is worth the investment. How well solutions support machine learning and other AI capabilities. If next-gen releases beat their rivals or fall behind the curve.”

Watch the video at facts.pt/pyt88k5. To learn more about how AIXPRT, CloudXPRT, WebXPRT, MobileXPRT, TouchXPRT, CrXPRT, and HDXPRT can help IT decision-makers make confident choices about future purchases, go to www.BenchmarkXPRT.com.

About Principled Technologies, Inc.
Principled Technologies, Inc. is the leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, USA. For more information, please visit www.principledtechnologies.com.

Company Contact
Justin Greene
BenchmarkXPRT Development Community
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703
BenchmarkXPRTsupport@PrincipledTechnologies.com

Adapting to a changing tech landscape

The BenchmarkXPRT Development Community started almost 10 years ago with the development of the High Definition Experience & Performance Ratings Test, also known as HDXPRT. Back then, we distributed the benchmark to interested parties by mailing out physical DVDs. We’ve come a long way since then, as testers now freely and easily access six XPRT benchmarks from our site and major app stores.

Developers, hardware manufacturers, and tech journalists—the core group of XPRT testers—work within a constantly changing tech landscape. Because of our commitment to providing those testers with what they need, the XPRTs grew as we developed additional benchmarks to expand the reach of our tools from PCs to servers and all types of notebooks, Chromebooks, and mobile devices.

As today’s tech landscape continues to evolve at a rapid pace, our desire to play an active role in emerging markets continues to drive us to expand our testing capabilities into areas like machine learning (AIXPRT) and cloud-first applications (CloudXPRT). While these new technologies carry the potential to increase efficiency, improve quality, and boost the bottom line for companies around the world, it’s often difficult to decide where and how to invest in new hardware or services. The ever-present need for relevant and reliable data is the reason many organizations use the XPRTs to help make confident choices about their company’s future tech.

We just released a new video that helps to explain what the XPRTs provide and how they can play an important role in a company’s tech purchasing decisions. We hope you’ll check it out!

We’re excited about the continued growth of the XPRTs, and we’re eager to meet the challenges of adapting to the changing tech landscape. If you have any questions about the XPRTs or suggestions for future benchmarks, please let us know!

Justin
