BenchmarkXPRT Blog banner

Category: Collaborative benchmark development

The big weekend is almost here!

A few weeks ago, I talked about the XPRT Women Code-a-Thon. Well, all the work that we and our friends at ChickTech Seattle have done is about to pay off! On Saturday, March 12, dozens of women will go to the Sole Repair Shop in Seattle for two days of coding, good food, and networking opportunities.

The goal of each team at the code-a-thon is to create a new workload that might be included in a future version of WebXPRT or MobileXPRT. Judges will award prizes to the top three workloads: $2,500 for first place, $1,500 for second place, and $1,000 for third place. I can’t wait to see the winning workloads!

We’re very fortunate to have Kristin Toth Smith as the keynote speaker. She is an avid supporter of women in tech, current COO of Dolly, and former CEO of Code Fellows.

It should be a great time for all. If you or anyone you know can get to Seattle this weekend, registration is still open.

Eric

Last week in the XPRTs
We published the XPRT Weekly Tech Spotlight on the ASUS ZenFone 2.
We added one new BatteryXPRT ’14 result.
We added five new WebXPRT ’15 results.

Is it hot in here?

One of the great meetings I had at CES was with another community member, Patrick Chang, Senior System Engineer in the Tablet Performance group at Dell. I was glad to hear that, when he tests his devices, Patrick makes frequent use of TouchXPRT.

While TouchXPRT stresses the system appropriately, Patrick’s job requires him to understand not only how well the device performs, but why it performs that way. He was wondering what we could do to help him correlate the temperature, power consumption, and performance of a device.

That information is not typically available to software and apps like the XPRTs. However, it may be possible to add some hooks that would let the XPRTs coordinate with third-party utilities and hardware that do.
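To make the idea of such hooks concrete, here is a hypothetical sketch (not a feature of any shipping XPRT): the benchmark emits timestamped markers at the start and end of each workload, and a third-party utility sampling temperature or power at its own rate can align its readings with those markers afterward. The names `mark` and `runWorkload` are illustrative, not real XPRT APIs.

```typescript
// Hypothetical hook: record timestamped markers around each workload so that
// externally captured power/temperature logs can be lined up afterward.
type Marker = { workload: string; phase: "start" | "end"; timestampMs: number };

const markers: Marker[] = [];

function mark(workload: string, phase: "start" | "end"): void {
  markers.push({ workload, phase, timestampMs: Date.now() });
}

function runWorkload(name: string, work: () => void): void {
  mark(name, "start");
  work();
  mark(name, "end");
}

// Example: run a trivial stand-in workload, then print the markers as JSON
// so a third-party logger could correlate them with its own sensor trace.
runWorkload("photoResize", () => {
  let sum = 0;
  for (let i = 0; i < 1_000_000; i++) sum += i;
});

console.log(JSON.stringify(markers));
```

Because the correlation happens offline, the benchmark itself never needs direct access to the sensors; it only has to publish when each workload ran.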

As always, the input from the community guides the design of the XPRTs. So, we’d love to know how much interest the community has in having this type of information. If you have thoughts about this, or other kinds of information you’d like the XPRTs to gather, please let us know!

Eric

Last week in the XPRTs
We published the XPRT Weekly Tech Spotlight on the Apple iPad Pro.
We added one new BatteryXPRT ’14 result.
We added one new CrXPRT ’15 result.
We added one new MobileXPRT ’13 result.
We added four new WebXPRT ’15 results.

XPRT Women Code-a-Thon: Make your voice heard and win a cash prize

DURHAM, NC – (Marketwired – March 1, 2016) – The BenchmarkXPRT Development Community and ChickTech are co-hosting the XPRT Women Code-a-Thon on March 12-13 in Seattle. The code-a-thon encourages Seattle software programmers to create small apps, or “workloads,” that mimic actions they take on their devices every day.

The top three participants or teams will receive cash prizes of up to $2,500, and all participants’ workloads will be considered for inclusion in future versions of the BenchmarkXPRT tools, or XPRTs. Any programmer familiar with Web development or Android development is encouraged to participate.

The XPRTs are apps that empower people all over the world to test how well devices handle everyday activities. They do this by running workloads that simulate common tasks – just like the workloads code-a-thon participants will be building.
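As a rough illustration of what "workload" means here (a hypothetical sketch, not an actual XPRT workload), a workload is essentially a timed function that performs a realistic everyday task and reports how long it took. This example times a simple grayscale conversion over synthetic pixel data:

```typescript
// Hypothetical workload sketch: perform a realistic chunk of work
// (a grayscale conversion over fake RGB pixel data) and time it.
function grayscale(pixels: number[][]): number[] {
  // Average the R, G, and B channels of each pixel.
  return pixels.map(([r, g, b]) => Math.round((r + g + b) / 3));
}

function timeWorkload(work: () => void): number {
  const start = Date.now();
  work();
  return Date.now() - start; // elapsed milliseconds: the raw material for a score
}

// Build a small synthetic image and run the workload once.
const image: number[][] = Array.from({ length: 100_000 }, (_, i) => [
  i % 256,
  (i * 7) % 256,
  (i * 13) % 256,
]);

const elapsed = timeWorkload(() => grayscale(image));
console.log(`grayscale workload finished in ${elapsed} ms`);
```

Real benchmark workloads repeat runs and normalize the timings into a score, but the core idea is the same: simulate a common task, then measure it.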

“We want the XPRTs to reflect how people actually use their technology every day,” said Jennie Faries. Faries is one of the code-a-thon’s judges and a developer at Principled Technologies, which administers the BenchmarkXPRT Development Community. “By gaining the perspectives of this group of women, we’re making the tools stronger and more realistic. And when the tools we use to measure technology get better, the technology itself gets better too.”

All participants will receive a t-shirt and locally sourced breakfast and lunch on both days of the code-a-thon. The event will include time for networking and conclude with a talk from a special keynote speaker.

Add your voice to the tools that measure today’s hottest tech. Register today at facts.pt/XPRTcodeathon2016_registration, learn more at facts.pt/XPRTcodeathon2016, and get all the details at facts.pt/XPRTcodeathon2016_FAQ.

About ChickTech

ChickTech envisions a safe, inclusive, and innovative technology future that includes equal pay, participation, and treatment of women. It is dedicated to retaining women in the technology workforce and increasing the number of women and girls pursuing technology-based careers. For more information, please visit http://chicktech.org

About the BenchmarkXPRT Development Community

The BenchmarkXPRT Development Community is a forum where registered members can contribute to the process of creating and improving the XPRTs. For more information, please visit http://www.principledtechnologies.com/benchmarkxprt

About Principled Technologies, Inc.

Principled Technologies, Inc. is a leading provider of technology marketing and learning & development services. It administers the BenchmarkXPRT Development Community.

Principled Technologies, Inc. is located in Durham, North Carolina, in NC’s Research Triangle Park region. For more information, please visit www.PrincipledTechnologies.com.

Company Contact
Jennie Faries
Principled Technologies, Inc.
1007 Slater Road, Suite #300
Durham, NC 27703

A clarification from Brett Howse

A couple of weeks ago, I described a conversation I had with Brett Howse of AnandTech. Brett was kind enough to send a clarification of some of his remarks, which he gave us permission to share with you.

“We are at a point in time where the technology that’s been called mobile since its inception is now at a point where it makes sense to compare it to the PC. However we struggle with the comparisons because the tools used to do the testing do not always perform the same workloads. This can be a major issue when a company uses a mobile workload, and a desktop workload, but then puts the resulting scores side by side, which can lead to misinformed conclusions. This is not only a CPU issue either, since on the graphics side we have OpenGL well established, along with DirectX, in the PC space, but our mobile workloads tend to rely on OpenGL ES, with less precision asked of the GPU, and GPUs designed around this. Getting two devices to run the same work is a major challenge, but one that has people asking what the results would be.”

I really appreciate Brett taking the time to respond. What are your thoughts on these issues? Please let us know!

Eric

TouchXPRT 2016 is here!

Today, we released TouchXPRT 2016, the latest version of our tool for evaluating the performance of Windows devices. The BenchmarkXPRT Development Community has been using a community preview for several weeks, but now anyone can run TouchXPRT 2016 and publish their results.

TouchXPRT 2016 is compatible with systems running Windows 10 and Windows 10 Mobile. The new release includes the same performance workloads as TouchXPRT 2014, but with updated content and in the form of a Universal Windows app.

TouchXPRT 2016 is available at TouchXPRT.com and in the Windows Store.

After trying TouchXPRT 2016, please submit your scores and send any comments to BenchmarkXPRTsupport@principledtechnologies.com. We’re eager to find out how you’ll use this tool!

Comparing apples and oranges?

My first day at CES, I had breakfast with Brett Howse from AnandTech. It was a great opportunity to get the perspective of a savvy tech journalist and frequent user of the XPRTs.

During our conversation, Brett raised concerns about comparing mobile devices to PCs. As mobile devices get more powerful, the performance and capability gaps between them and PCs are narrowing. That makes it more common to compare upper-end mobile devices to PCs.

People have long used different versions of benchmarks when comparing these two classes of devices. For example, the images for benchmarking a phone might be smaller than those for benchmarking a PC. Also, because of processor differences, the benchmarks might be built differently, say a 16- or 32-bit executable for a mobile device, and a 64-bit version for a PC. That was fine when no one was comparing the devices directly, but can be a problem now.

This issue is more complicated than it sounds. In cases where a benchmark uses a dumbed-down version of the workload for mobile devices, comparing the results is clearly not valid. However, let’s assume that the workload stays the same and that you run a 32-bit benchmark on a tablet and a 64-bit version on a PC. Is the comparison valid? It may be, if you are talking about the day-to-day performance a user is likely to encounter. However, it may not be valid if you are making a statement about the potential performance of the device itself.

Brett would like the benchmarking community to take charge of this issue and provide guidance about how to compare mobile devices and PCs. What are your thoughts?

Eric
