How to submit WebXPRT 4 results for publication

Each new XPRT benchmark release attracts new visitors to our site. Those who haven’t yet run any of our benchmarks may not know how everything works. For those folks, as well as longtime testers who may not be aware of everything the XPRTs have to offer, we like to occasionally revisit the basics here in the blog. Today, we cover the simple process of submitting WebXPRT 4 test results for publication in the WebXPRT 4 results viewer.

Unlike sites that publish all results that users submit, we publish only results—from internal lab testing, user submissions, and reliable tech media sources—that meet our evaluation criteria. Scores must be consistent with general expectations and, for sources outside of our lab, must include enough detailed system information that we can determine whether the score makes sense. Every score in the WebXPRT results viewer and on the general XPRT results page meets these criteria.

Everyone who runs a WebXPRT 4 test is welcome to submit scores for us to consider for publication. The process is quick and easy. At the end of the WebXPRT test run, click the Submit your results button below the overall score, complete the short submission form, and click Submit again. Please be as specific as possible when filling in the system information fields. Detailed device information helps us assess whether individual scores represent valid test runs. The screenshot below shows how the form would look if I submitted a score at the end of a WebXPRT 4 run on my personal system.

After you submit your score, we’ll contact you to confirm how we should display the source. You can choose one of the following:

  • Your first and last name
  • “Independent tester” (for those who wish to remain anonymous)
  • Your company’s name, provided that you have permission to submit the result on the company’s behalf. If you want to use a company name, please provide a valid company email address.

We will not publish any additional information about you or your company without your permission.

We look forward to seeing your score submissions. If you have suggestions for the WebXPRT 4 results viewer or any other aspect of the XPRTs, let us know!

Justin

Answering questions about the AIXPRT Community Preview

Over the last two weeks, we’ve received a few questions about the AIXPRT Community Preview. Specifically, community members have asked about the project’s focus, possible future steps, and the results table. We decided to answer each of these here in the blog, since others are likely to have the same questions. We encourage folks to submit any new questions they may have.

PT previously stated that AIXPRT would be focused on edge devices. The current published results are from desktops and laptops. Is the focus of AIXPRT changing?

In the past, we did say that the focus of AIXPRT would be edge inference devices. After receiving much feedback, we’ve come to understand that this focus is probably too restrictive. PCs and laptops increasingly run machine learning inference, and until phones are capable enough to handle these workloads, a significant amount of inference will take place on servers in the cloud. We now see all of these devices as potential targets for AIXPRT.

How did you choose the current results in your database?

We ran the AIXPRT CP on some of the systems we used during development and testing. We will continue to publish additional results as we test available systems in our lab. We’d love to get results from the community that cover a wider base of devices.

Will you be publishing results from servers?

We welcome server results submissions from the community and will review them for publication on our site.

Will AIXPRT ever be available for Windows systems?

This is a possibility we’re actively exploring, and we hope to be able to share more about it soon.

What’s the best way to navigate the results table?

AIXPRT can run three toolkits, use two networks, and target CPU or GPU hardware. Together, these configuration options produce a lot of data points; the short sketch after the tips below illustrates how quickly the combinations add up. To make it easier to handle all these variables, we’re working to improve the navigation, sorting, and filtering capabilities of the results table. In the meantime, a few tips:

  • There are two tabs at the top of the table, one for the ResNet-50 network and one for the SSD-MobileNet network. You can click the tabs to move between results for these networks.
  • Clicking any of the column headers will sort the data in that column A-Z (with the first click) or Z-A (with a second click).
  • To see if an individual test targeted a system’s CPU or GPU, read the description in the Summary column, e.g. Intel Core i7-7600U GPU / OpenVINO.
  • Clicking the entry in the Source column will take you to a more detailed page listing additional test configuration and system hardware information.
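
To get a feel for how quickly those options multiply, here is a minimal Python sketch that enumerates the combinations of network, toolkit, and target hardware behind the table. The toolkit names are taken from our Community Preview description; note that not every combination is valid on every system (TensorRT, for example, targets NVIDIA GPUs).

```python
# Minimal sketch: enumerate the AIXPRT results-table configuration space
# (two networks x three toolkits x CPU or GPU). Not every combination is
# valid on every system.
from itertools import product

networks = ["ResNet-50", "SSD-MobileNet"]
toolkits = ["OpenVINO", "TensorFlow", "TensorFlow + TensorRT"]
targets = ["CPU", "GPU"]

combos = list(product(networks, toolkits, targets))
print(f"{len(combos)} possible combinations")  # 12, before counting precision levels
for network, toolkit, target in combos:
    print(f"{network} / {toolkit} / {target}")
```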

We’ll continue to share more information about AIXPRT in the coming weeks. Do you have additional questions or comments about AIXPRT? Let us know.

Justin

Preparing for the AIXPRT Community Preview

Thanks to everyone who downloaded the AIXPRT Request for Comments (RFC) preview build. Next week, we’re planning to publish the AIXPRT Community Preview (CP). The AIXPRT CP build includes support for the Intel OpenVINO, TensorFlow (CPU and GPU), and TensorFlow with NVIDIA TensorRT toolkits to run image-classification workloads with ResNet-50 and SSD-MobileNet v1 networks. The test reports FP32, FP16, and INT8 levels of precision. As with the RFC build, the test systems must be running Ubuntu 16.04 LTS. The minimum CPU and GPU requirements vary according to the toolkit being used, and we will publish more details about the hardware minimums next week.

As with our other community previews, we think the AIXPRT CP candidate is solid enough to allow folks to start quoting test results. During CP periods, we generally allow members to publish their own results, but wait until the build is available to the public before we post results on our site. Because community feedback is especially important for AIXPRT, we will handle things a bit differently. During the CP period, we’ll publish results that we produce as well as those from other members of the community, which you’ll be able to view at AIXPRT.com.

We’ll also provide detailed instructions for publishing results and sending them to us. Because of the high number of variables in each potential test configuration, we’ll ask testers to disclose more test, software, and hardware information than in the past. We will make this information available along with the results on AIXPRT.com. Our goal is that others can reproduce these numbers and confirm that they get similar results.

Our CP periods typically last four to six weeks before we make the benchmark available to the general public. If that schedule holds, it would place the public AIXPRT release around the end of March. During the CP period, we welcome your thoughts and suggestions about all aspects of the benchmark.

Also, we normally restrict access to our CPs to BenchmarkXPRT Development Community members. However, because we’re seeking broad input from experts in this field, we’ll gladly make the CP available to anyone interested in participating who has a GitHub account. To gain access, please contact us and let us know your GitHub username. Once we receive it, we’ll send you an invitation to join the repository as a collaborator.

Please let us know if you have any questions. We look forward to hearing your feedback.

Bill

CrXPRT helps to navigate the changing Chromebook market

Some people envision Chromebooks as low-end, plastic-shelled laptops that large organizations buy in bulk because they’re inexpensive and easy to manage. While many sub-$200 Chromebooks are still available, the platform is no longer limited to budget chipsets and little memory. Consumers can now choose systems that feature up to 16 GB of RAM, 8th generation Intel Core CPUs, and Core i7 configurations for those willing to pay around $1,600. In addition, some Chromebooks can now run Android apps, Microsoft Office mobile apps, Linux apps, and even Windows apps. While Chromebooks still depend heavily on connectivity and cloud storage, an increasing number of Chrome apps let you perform substantial productivity tasks offline. The Chrome OS landscape has changed so much that for certain use cases, the practical hardware gap between Chromebooks and traditional laptops is narrowing.

More consumers might be interested in Chromebooks than was the case a few years ago, but how do they make sense of all the devices on the market? CrXPRT can help by providing objective data on Chromebook performance and battery life. Steven J. Vaughan-Nichols offered a great example of the value CrXPRT can provide in his recent ZDNet article on the new Core i7-based Google Pixelbook. The Pixelbook’s CrXPRT score of 226 showed that it performs everyday tasks faster than any of the Chromebooks in our results database. When trying to decide whether it’s worth spending a few hundred or even a thousand dollars more on a new Chromebook, having the right data in hand can transform guesses into well-informed decisions.

You don’t have to be a tech journalist or even a techie to use CrXPRT. If you’d like to learn more about CrXPRT, we encourage you to read the CrXPRT feature here in the blog or visit CrXPRT.com.

Justin

How to automate WebXPRT 3 testing

Yesterday, we published the WebXPRT 3 release notes, which contain instructions on how to run the benchmark, submit results, and use automation to run the tests.

Test automation is a helpful feature that lets you use scripts to run WebXPRT 3 and control specific test parameters. Below, you’ll find a description of those parameters and instructions for using automation.

Test type

The WebXPRT automation framework is designed to account for two test types: the six core workloads and any experimental workloads we might add in future builds. There are currently no experimental tests in WebXPRT 3, so the test type variable should always be set to 1.

  • Core tests: 1
  • Experimental tests: 2

Test scenario

This parameter lets you specify which tests to run by using the following codes:

  • Photo enhancement: 1
  • Organize album using AI: 2
  • Stock option pricing: 4
  • Encrypt notes and OCR scan: 8
  • Sales graphs: 16
  • Online homework: 32

To run an individual test, use its code. To run multiple tests, use the sum of their codes. For example, to run Stocks (4) and Notes (8), use 12. To run all core tests, use 63, the sum of all the individual test codes (1 + 2 + 4 + 8 + 16 + 32 = 63).
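
If you’d rather not add the codes by hand, a short script can compute the value for you. The following Python sketch is purely illustrative; the workload names and codes come from the list above.

```python
# Illustrative sketch: compute the test scenario value for WebXPRT 3
# automation by summing the codes of the workloads you want to run.
WORKLOAD_CODES = {
    "Photo enhancement": 1,
    "Organize album using AI": 2,
    "Stock option pricing": 4,
    "Encrypt notes and OCR scan": 8,
    "Sales graphs": 16,
    "Online homework": 32,
}

def tests_value(*workloads):
    """Return the sum of the codes for the selected workloads."""
    return sum(WORKLOAD_CODES[name] for name in workloads)

print(tests_value("Stock option pricing", "Encrypt notes and OCR scan"))  # 12
print(tests_value(*WORKLOAD_CODES))  # 63 (all six core workloads)
```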

Results format

This parameter lets you select the format of the results:

  • Display the result as an HTML table: 1
  • Display the result as XML: 2
  • Display the result as CSV: 3
  • Download the result as CSV: 4

To use the automation feature, start with the URL http://www.principledtechnologies.com/benchmarkxprt/webxprt/2018/3_v5/auto.php, append a question mark (?), and add the parameters and values separated by ampersands (&). For example, to run all the core tests and download the results, you would use the following URL: http://principledtechnologies.com/benchmarkxprt/webxprt/2018/3_v5/auto.php?testtype=1&tests=63&result=4
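
If you’re scripting your test runs, you can also assemble the automation URL programmatically to avoid typos. The sketch below uses only the three parameters documented above (testtype, tests, and result); the helper function name is ours, not part of WebXPRT.

```python
# Illustrative sketch: build a WebXPRT 3 automation URL from the three
# documented parameters. Only the base URL and the parameter names
# (testtype, tests, result) come from the instructions above.
from urllib.parse import urlencode

BASE_URL = "http://www.principledtechnologies.com/benchmarkxprt/webxprt/2018/3_v5/auto.php"

def automation_url(testtype=1, tests=63, result=4):
    """Defaults run all six core tests and download the results as CSV."""
    return f"{BASE_URL}?{urlencode({'testtype': testtype, 'tests': tests, 'result': result})}"

print(automation_url())
# http://www.principledtechnologies.com/benchmarkxprt/webxprt/2018/3_v5/auto.php?testtype=1&tests=63&result=4
```

Opening the resulting URL in the browser you want to test starts the run with those settings.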

We hope WebXPRT’s automation features will make testing easier for you. If you have any questions about WebXPRT or the automation process, please feel free to ask!

Justin

A look back at 2017

At the beginning of each new year, we like to look back on the previous 12 months and review everything that we accomplished. Here’s a quick recap of an eventful 2017 for the XPRTs:

We continued our tradition of travelling to the world’s biggest tech expos. Bill went to CES in Las Vegas and MWC Shanghai, and Mark attended MWC in Barcelona. Travelling to these shows provides us with great opportunities to monitor industry trends and meet with other BenchmarkXPRT Development Community members.

We also continued work on our suite of benchmark tools and related resources. We updated BatteryXPRT in response to changes in the Android development environment, released two new HDXPRT builds to make installation and test configuration easier on new Windows 10 builds, and released the much-anticipated WebXPRT 3 Community Preview.

We released new media, including a video about our sponsorship of a team of North Carolina State University students who created an experimental VR demo workload, and new interactive tools, such as the XPRT Selector tool and the WebXPRT Processor Comparison Chart.

We continued to improve the XPRT Weekly Tech Spotlight by adding more devices, photos, and site features. We put 51 devices in the spotlight throughout the year and published updated back-to-school, Black Friday, and holiday showcases to help buyers compare devices.

At the end of the year, our most popular benchmark, WebXPRT, passed the 200,000-run milestone. WebXPRT use continues to grow around the world, and it has truly become an industry-standard performance benchmark for OEM labs, vendors, and leading tech press outlets.

We also continued work on our most challenging project to date, a benchmark for machine learning. We look forward to sharing more information on that effort with the community over the next few months.

We’re thankful for everyone who used the XPRTs, joined the community, and sent in questions and suggestions throughout 2017. Each one of you helped to make the BenchmarkXPRT Development Community a strong, vibrant, and relevant resource for people all around the world. Here’s to a great 2018!

Justin
