A clarification from Brett Howse

A couple of weeks ago, I described a conversation I had with Brett Howse of AnandTech. Brett was kind enough to send a clarification of some of his remarks, which he gave us permission to share with you.

“We are at a point in time where the technology that’s been called mobile since its inception is now at a point where it makes sense to compare it to the PC. However we struggle with the comparisons because the tools used to do the testing do not always perform the same workloads. This can be a major issue when a company uses a mobile workload, and a desktop workload, but then puts the resulting scores side by side, which can lead to misinformed conclusions. This is not only a CPU issue either, since on the graphics side we have OpenGL well established, along with DirectX, in the PC space, but our mobile workloads tend to rely on OpenGL ES, with less precision asked of the GPU, and GPUs designed around this. Getting two devices to run the same work is a major challenge, but one that has people asking what the results would be.”
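Brett’s point about GPU precision is easy to demonstrate in miniature. The sketch below is my own illustration (NumPy on the CPU, not a real graphics API): it runs one accumulation workload at float32 and again at float16, and the lower-precision version doesn’t just lose a few digits, it stops making progress once the increment falls below what the format can represent at that magnitude.

```python
import numpy as np

# One workload, two precisions: a rough stand-in for running the same
# computation at desktop (fp32) versus reduced mobile (fp16) precision.
N = 10_000
INC = 0.1                 # true total should be N * 0.1 = 1000.0

acc32 = np.float32(0.0)
acc16 = np.float16(0.0)
for _ in range(N):
    acc32 = np.float32(acc32 + np.float32(INC))
    acc16 = np.float16(acc16 + np.float16(INC))

print(f"float32 total: {acc32:.1f}")  # ~1000.0, as expected
print(f"float16 total: {acc16:.1f}")  # stalls at 256.0: adding 0.1 no
                                      # longer changes the accumulator
```

Hardware and APIs designed around the lower precision can be much faster, but they are not doing the same work.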

I really appreciate Brett taking the time to respond. What are your thoughts on these issues? Please let us know!

Eric

Comparing apples and oranges?

My first day at CES, I had breakfast with Brett Howse from AnandTech. It was a great opportunity to get the perspective of a savvy tech journalist and frequent user of the XPRTs.

During our conversation, Brett raised concerns about comparing mobile devices to PCs. As mobile devices get more powerful, the performance and capability gaps between them and PCs are narrowing. That makes it more common to compare upper-end mobile devices to PCs.

People have long used different versions of benchmarks when comparing these two classes of devices. For example, the images for benchmarking a phone might be smaller than those for benchmarking a PC. Also, because of processor differences, the benchmarks might be built differently, say a 16- or 32-bit executable for a mobile device, and a 64-bit version for a PC. That was fine when no one was comparing the devices directly, but can be a problem now.

This issue is more complicated than it sounds. In cases where a benchmark uses a dumbed-down version of the workload for mobile devices, comparing the results is clearly not valid. However, let’s assume that the workload stays the same, and that you run a 32-bit benchmark on a tablet and a 64-bit version on a PC. Is the comparison valid? It may be, if you are talking about the day-to-day performance a user is likely to encounter. However, it may not be valid if you are making a statement about the potential performance of the device itself.
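One pragmatic safeguard is to hold the workload constant and attach the build and platform context to every score, so that a 32-bit tablet result never sits silently next to a 64-bit PC result. Here is a minimal sketch of the idea (my own illustration, not anything from the XPRTs; the hashing workload and the field names are hypothetical):

```python
import hashlib
import platform
import struct
import time

def run_workload(size_mb: int) -> float:
    """Hash size_mb megabytes of zeros; return elapsed seconds."""
    data = b"\x00" * (size_mb * 1024 * 1024)
    start = time.perf_counter()
    hashlib.sha256(data).hexdigest()
    return time.perf_counter() - start

SIZE_MB = 64                      # identical workload on every device
elapsed = run_workload(SIZE_MB)

result = {
    "score": round(SIZE_MB / elapsed, 1),       # MB/s, higher is better
    "pointer_bits": struct.calcsize("P") * 8,   # 32- vs. 64-bit process
    "machine": platform.machine(),              # e.g., x86_64, armv7l
    "workload": f"sha256 over {SIZE_MB} MB",
}
print(result)
```

A score that always carries its workload and bitness alongside the number makes it much harder to draw the misinformed conclusions Brett describes.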

Brett would like the benchmarking community to take charge of this issue and provide guidance about how to compare mobile devices and PCs. What are your thoughts?

Eric

The XPRT Women Code-a-Thon

As Justin explained last week, we’ve resolved the issue we found with the TouchXPRT Community Preview (CP). I’m happy to say that the testing went well and that we released CP3 this week.

It’s been only three weeks since we announced the XPRT Weekly Tech Spotlight, and we already have another big announcement! Principled Technologies has joined with ChickTech Seattle to host the first-ever XPRT Women Code-a-Thon! In this two-day event, participants will compete to create the best new candidate workload for WebXPRT or MobileXPRT. Entries can’t duplicate existing workloads, so we are looking forward to seeing the new ideas.

Judges will study all the workloads and award prizes to the top three: $2,500 for first place, $1,500 for second place, and $1,000 for third place. Anyone interested can register here.

PT and the BenchmarkXPRT Development Community are committed to promoting the advancement of women in STEM, and doing good pays off for us as well. As with the NCSU senior project, the BenchmarkXPRT Development Community will get some fresh perspectives and some new experimental test tools. Everyone wins!

So much has happened in 2016 and January isn’t even over yet. The year is off to a great start!

Eric

In the spotlight

I’m happy to be back in North Carolina, but I had a really great time at CES. I had some good conversations with over a dozen companies about the XPRTs and the XPRT Weekly Tech Spotlight. Hopefully, some of these companies’ devices will be among the first ones we showcase when the XPRT Weekly Tech Spotlight goes live next month.

Of course, I saw some really great tech at CES! Amazing TVs and cars, magic mirrors, all kinds of drones, and the list goes on. Before the show, the Internet of Things was predicted to be big this year, and boy, was it! Smart refrigerators, door locks, and thermostats were just the beginning. Some of my favorite examples were the chopsticks and footbath—both Bluetooth enabled—and “the world’s first remote controlled game shoe.”

Clearly, IoT is the Wild West of technology right now. We’ve had some conversations about how the XPRTs might be able to help consumers navigate the chaos. However, with a class of products this diverse, there are a lot of issues to consider. If you have any thoughts about this, let us know!

Eric

Another great year

A lot of great stuff happened this year! In addition to releasing new versions of the benchmarks, we published videos, infographics, and white papers, released our first-ever German UI, and sponsored our first student partnership, at North Carolina State University. We visited three continents to promote the XPRTs and saw XPRT results published on six continents (we’re still working on Antarctica).

Perhaps most exciting, we reached our fifth anniversary. Users have downloaded or run the XPRTs over 100,000 times.

As great as the year has been, we are sprinting into 2016. Though I can’t talk about them yet, there are some big pieces of news coming soon. Even sooner, I will be at CES next week. If you would like to talk about the XPRTs or the future of benchmarking, let me know and we’ll find a time to meet.

Whatever your holiday traditions are, I hope you are having a great holiday season. Here’s wishing you all the best in 2016!

Eric

Nebula Wolf

A couple of months ago, we talked about the senior project we sponsored with North Carolina State University. We asked a small team of students to take a crack at implementing a game that we could use as the basis of a benchmark test.

Last Friday, the project culminated with a presentation at the annual Posters and Pies event.

The team gave a great presentation, and I was impressed by how much they accomplished in three months. They implemented a space-themed rail shooter called Nebula Wolf (the NC State mascot is a wolf). You can play the game yourself, or click a button to run a scripted session for benchmarking purposes. In the scripted mode, Nebula Wolf unlocks the frame rate so the device can run at full speed.
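For the curious, the unlocked-frame-rate trick works something like the sketch below. This is my own illustration, not the team’s code: in normal play the render loop sleeps to hold roughly 60 FPS, while in benchmark mode the cap is removed and the average frame rate becomes the measurement.

```python
import time

def run_loop(benchmark_mode: bool, duration_s: float = 3.0) -> float:
    """Simulate a render loop; return the average frames per second."""
    frame_budget = None if benchmark_mode else 1 / 60
    frames = 0
    start = time.perf_counter()
    while (frame_start := time.perf_counter()) - start < duration_s:
        # ... update game state and render one frame here ...
        frames += 1
        if frame_budget is not None:
            # Normal play: sleep off the rest of this frame's budget.
            spent = time.perf_counter() - frame_start
            if spent < frame_budget:
                time.sleep(frame_budget - spent)
    return frames / duration_s

print(f"capped:   {run_loop(benchmark_mode=False):10.1f} FPS")  # ~60
print(f"uncapped: {run_loop(benchmark_mode=True):10.1f} FPS")   # device-limited
```

With no real rendering work in the loop, the uncapped number here is meaningless by itself; in the scripted game, it is the score.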

Over the next couple of weeks, we’re going to be testing Nebula Wolf, digging into the code and getting a deeper understanding of what the team did. We’re hoping to make the game available on our web site soon.

Tomorrow, AJ, Brien, and Rachel will present one last time, here at PT. It’s been a real pleasure working with them. I wish them all good luck as they finish college and start their careers.

Eric
