
Category: Collaborative benchmark development

I couldn’t wait!

We’ve created a series of videos about the XPRT Women Code-a-Thon. We’ll be talking more about the series, and an exciting new channel, “Women Coding for Change,” in next Thursday’s blog post.  However, we posted the first video today, and I wanted to let you know so you can check it out right away!

Eric

Watching students become masters

As you know, last year, PT sponsored a senior project at the Senior Design Center of North Carolina State University (NCSU). The students created Nebula Wolf, a mini game that might evolve into a future benchmark test. It was a valuable collaboration for us and a very educational experience for the students involved.

I’ve talked before about the emerging technologies we’re considering for new benchmarks. Today, I met with the folks at the NCSU Senior Design Center to discuss a possible future project. We’re hoping to harness the immense energy of these students by having them explore one of these new technologies, and then build on what they discover. Nothing is set yet, but we will, as always, keep you informed as things develop.

We’ll be sharing some exciting news about the XPRT Women Code-a-Thon tomorrow. Check back to find out more! Meanwhile, we hope you enjoy the University of Washington Tacoma article on student Viveret, the first-place winner of the XPRT Women Code-a-Thon, as much as we did.

Eric

Getting it right

Back in April, Bill announced that we are working on a cross-platform benchmark. We asked for your thoughts and comments, and you’ve been great! We really appreciate all the great ideas.

We’ve been using code from MobileXPRT and TouchXPRT as the basis for some experiments. In his post, Bill talked about the difficulty of porting applications. However, even though we have expertise in porting applications, it’s proving more difficult than we originally thought. Benchmarks are held to a higher standard than most applications. It’s not enough for the code to run reliably and efficiently; it must also compare the different platforms fairly.
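To illustrate what "comparing platforms fairly" demands, here is a minimal sketch (hypothetical, not XPRT code) of the core idea: every platform must execute byte-identical work, with only the timing differing. The function names and parameters below are invented for illustration.

```python
import hashlib
import time

def fixed_workload(iterations=200, size=1 << 16):
    """Hash the same deterministic buffer a fixed number of times,
    so every platform performs byte-identical work."""
    data = bytes(range(256)) * (size // 256)  # identical input everywhere
    digest = b""
    for _ in range(iterations):
        # Chain the digests so the loop cannot be optimized away
        digest = hashlib.sha256(data + digest).digest()
    return digest

def run_benchmark():
    """Time the fixed workload; the digest doubles as a correctness
    check that all platforms really did the same computation."""
    start = time.perf_counter()
    result = fixed_workload()
    elapsed = time.perf_counter() - start
    return result.hex(), elapsed
```

Because the final digest is deterministic, results from different platforms can be cross-checked: matching digests confirm the work was identical, so any difference in elapsed time reflects the platform, not the test.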

One thing we know for sure: getting it right is going to take a while. However, we owe it to you to make sure that the benchmark is reliable and fair on all platforms it supports. We will, of course, keep you informed as things progress.

In the meantime, keep sending your ideas!
Eric

Windows 10 upgrade?

We’ve gotten reports that HDXPRT 2014 no longer works on newer versions of Windows 10. We ran tests in our labs and found that to be true.

At least one user has reported that the problem may be the version of CPU-Z the benchmark uses.

We’re working to track down the problem and hope to provide a workaround in the near future and a more definitive fix (if necessary) later.

Please let us know if you’ve encountered this issue and if you’ve found any ways to work around it.

Eric

Personal preference

I saw an interesting article recently, Here’s why I gave up my beloved Galaxy S7 for a boring old iPhone. It’s only been a few weeks since we featured the Samsung S7 in the XPRT Weekly Tech Spotlight, so of course I had to read it. The interesting thing is that this guy really loved his Samsung S7, and even declared it “the best smartphone I’ve ever used.” He loved its VR capabilities, its camera, and its look. He even prefers Android as an operating system.

So why would he give it up for an iPhone 6s Plus? Simply put, battery life. As a self-described heavy user, he found his Samsung S7 dying before 5 PM every day. The iPhone 6s Plus lasted much longer.

This is a good reminder that people have different priorities. Your priority could be having the fastest phone, the longest battery life, the best screen, or the broadest compatibility. This is why there is no such thing as “the best device.”

This is why we are always asking for your input. Knowing your priorities helps the community build better tests!

Eric

Feedback

We’re excited by the high level of interest the community and vendors have shown in the upcoming cross-platform MobileXPRT benchmark. We’ve received general observations about what a cross-platform benchmark should be, along with detailed suggestions about tests, subsystems, and benchmark architecture. We appreciate all of the responses and welcome more, so please keep them coming!

The number-one concern we’ve heard is making sure that the benchmark tests all platforms fairly. Transparency will be essential to assure users that the tests are performing the same work on all platforms and performing the work in the appropriate way for each platform.

Fortunately, the XPRTs are well positioned to address that concern. From the beginning, we have used a community model. The source code is available to all members, which is the ultimate in transparency. (If you’re not a community member, it’s easy to join!)

Speaking of source code, we released TouchXPRT source code to the community this week. Members can download the source here (login required).

Eric
