
Search Results for: webxprt

Rolling with the changes

While WebXPRT 2015 has been running fine on earlier beta versions of Windows 10, we have found a problem on some recent versions. Starting with build 10122, the Local Notes test hangs when using the Microsoft Edge browser. (Note: This browser still identifies itself as Spartan in the builds we have seen.) Chrome and Firefox on Windows 10 have successfully run WebXPRT 2015, so the problem appears to be restricted to Edge/Spartan.

Because WebXPRT ran successfully on earlier builds of Windows 10, we are hoping that upcoming builds will resolve the problem. However, we have been investigating the issue in case there is something we can address on our end. The encrypted strings the test writes to localStorage are not being stored correctly, while unencrypted strings are stored without any problem.
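
The pattern behind this kind of failure can be sketched as a simple write-then-verify check. This is only an illustration, not WebXPRT's actual code: the `encrypt` function is a hypothetical stand-in (Base64 via Node's `Buffer`), and the in-memory shim exists only so the sketch runs outside a browser.

```javascript
// Write-and-verify sketch for localStorage, assuming a hypothetical
// encrypt() step. An in-memory shim stands in for localStorage in Node.
const storage = (typeof localStorage !== 'undefined')
  ? localStorage
  : (() => {
      const m = new Map();
      return {
        setItem: (k, v) => m.set(k, String(v)),
        getItem: (k) => (m.has(k) ? m.get(k) : null),
      };
    })();

// Hypothetical stand-in for the benchmark's encryption step.
function encrypt(text) {
  return Buffer.from(text, 'utf8').toString('base64');
}

// Returns false when the browser fails to store the string faithfully,
// which is the symptom we saw with encrypted strings on Edge/Spartan.
function writeAndVerify(key, value) {
  storage.setItem(key, value);
  return storage.getItem(key) === value;
}

const ok = writeAndVerify('note-1', encrypt('some local note text'));
console.log(ok); // true on a browser that stores the string correctly
```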

As soon as the problem gets resolved, we’ll let you know.

In other news, we’ve been looking at Android M. There are a lot of interesting changes coming, such as the difference in the way that Android M manages app permissions. We’ve decided to delay releasing the design document for the next version of MobileXPRT so that we can make sure that the design deals with these changes appropriately.

Eric

What makes a good benchmark?

As we discussed recently, we’re working on the design document for the next version of MobileXPRT, and we’re really interested in any ideas you may have. However, we haven’t talked much about what makes for a good benchmark test.

The things we measure need to be quantifiable. A reviewer can talk about the realism of game play, or the innovative look of a game, and those are valid observations. However, it is difficult to convert those kinds of subjective impressions to numbers.

The things we measure must also be repeatable. For example, the response time for an online service may depend on the time of day, number of people using the service at the time, network load, and other factors that change over time. You can measure the responsiveness of such services, but doing so requires repeating the test enough times under enough different circumstances to get a representative sample.
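
The repeat-and-sample idea above can be sketched in a few lines. Everything here is illustrative: the operation being timed is a placeholder, and summarizing with the median and range is just one reasonable choice for handling run-to-run variation.

```javascript
// Summarize repeated timings with the median rather than a single run,
// so one unlucky measurement doesn't dominate the result.
function median(values) {
  const s = [...values].sort((a, b) => a - b);
  const mid = Math.floor(s.length / 2);
  return s.length % 2 ? s[mid] : (s[mid - 1] + s[mid]) / 2;
}

// Time a (placeholder) operation `runs` times and report a summary.
function sampleTimes(operation, runs) {
  const times = [];
  for (let i = 0; i < runs; i++) {
    const start = Date.now();
    operation();
    times.push(Date.now() - start);
  }
  return {
    median: median(times),
    min: Math.min(...times),
    max: Math.max(...times),
  };
}
```

A wide gap between `min` and `max` is itself useful information: it tells you the measurement is sensitive to conditions that change over time.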

The possible things we can measure go beyond the speed of the device to include things such as battery life, compatibility with standards, and even fidelity or quality, such as the quality of photos or video. BatteryXPRT and CrXPRT test battery life, while the HTML5 tests in WebXPRT are among those that test compatibility. We are currently looking into quality metrics for possible future tools.

I hope this has given you some ideas. If so, let us know!

Eric

It’s not the same

We sometimes get questions about comparing results from older versions of benchmarks to the current version. Unfortunately, it’s never safe to compare the results from different versions of benchmarks. This principle has been around much longer than the XPRTs. A major update will use different workloads and test data, and will probably be built with updated or different tools.

To avoid confusion, we rescale the results every time we release a new version of an existing benchmark. By making the results significantly different, we hope to reduce the likelihood that results from two different versions will get mixed together.

As an example, we scaled the results from WebXPRT 2015 to be significantly lower than those from WebXPRT 2013. Here are some scores from the published results for WebXPRT 2013 and WebXPRT 2015.

[Table: WebXPRT 2013 vs. 2015 results]

Please note that the results above are not necessarily from the same device configurations, and are meant only to illustrate the difference in results between the two versions of WebXPRT.

If you have any questions, please let us know.

Eric

Looking at the data

We’re planning the general release of WebXPRT 2015 for late next week. The testing is looking good and the response has been positive.

We’ve been looking at the hundreds of runs in the database for the community preview. As we’ve said before, we’ve been looking at the information from the JavaScript navigator object in the hope that we could improve the disclosure information from WebXPRT 2015. However, the information is not reliable enough for us to use it at this time. Hopefully, that will improve in the near future.

For now, we will continue to use the information from the user agent string. We’ve discussed the user agent string before. It does give us some information about the device, although not as much as we are able to gather in some of the XPRTs.
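
As a rough illustration of what the user agent string gives us, here is a much-simplified sketch of extracting OS and browser hints from one. The patterns below are illustrative only; real user agent parsing needs far more cases, and the sample string is made up.

```javascript
// Simplified OS/browser detection from a user agent string.
// Order matters: Chrome UAs also contain "Safari", so check Chrome first,
// and Edge/Spartan UAs also contain "Chrome", so check those before Chrome.
function parseUserAgent(ua) {
  const os =
    /Windows/.test(ua) ? 'Windows' :
    /Android/.test(ua) ? 'Android' :
    /CrOS/.test(ua) ? 'Chrome OS' :
    /iPhone|iPad/.test(ua) ? 'iOS' :
    /Mac OS X/.test(ua) ? 'OS X' : 'Unknown';
  const browser =
    /Edge|Spartan/.test(ua) ? 'Edge' :
    /Chrome/.test(ua) ? 'Chrome' :
    /Firefox/.test(ua) ? 'Firefox' :
    /Safari/.test(ua) ? 'Safari' : 'Unknown';
  return { os, browser };
}

// Made-up sample string for illustration.
const sample =
  'Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 ' +
  '(KHTML, like Gecko) Chrome/42.0 Safari/537.36';
console.log(parseUserAgent(sample)); // { os: 'Windows', browser: 'Chrome' }
```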

Looking at the data, the most common OS has been Windows. This may be in part because you needed to be logged in to run the CP. However, Android devices represented over a third of the runs, and Chrome OS represented about 25 percent of the runs. While we had healthy numbers of iOS devices, there were only a handful of Mac OS X runs. Chrome was the most common browser. Other browsers identified themselves as Safari, Firefox, Opera, and MS IE.

As you can see, the new WebXPRT continues the tradition that WebXPRT 2013 started of running everywhere. We can’t wait to make it available to the general public!

Eric

Lots of things are happening!

The WebXPRT 2015 community preview hasn’t even been out two weeks yet, but there are already hundreds of runs in the database. If you haven’t checked it out yet, now is a good time! (login required)

As I mentioned last week, Bill and Justin are at Intel Developer Forum 2015 – Shenzhen. Here’s their home away from home this week.

[Photo: booth]

Bill and Justin have been having a lot of good conversations and have found a lot of interest in an open development community. We’re looking forward to having even more members in Asia soon!

On Monday we released the CrXPRT white paper. If you want to know more about the concepts behind CrXPRT 2015, how it was developed, how the results are calculated, or anything else about CrXPRT, the white paper is a great place to start.
Finally, the MobileXPRT 2015 design document is coming in the next couple of weeks. What would you like to see in the next version of MobileXPRT? 64-bit support? New types of tests? Improvements to the UI? Everything is on the table. This is the time to make your voice heard!

Eric

Finally!

We’re releasing the community preview of WebXPRT 2015 tomorrow. We’re very excited that it’s finally here. In the past few weeks, we’ve discussed some of the new features in WebXPRT 2015, such as test automation, its new and improved tests, and its Chinese UI. We think you’re really going to enjoy the new WebXPRT.

The design document (login required) specifies that WebXPRT will contain an experimental workload. That workload is not in the community preview, although we plan for it to be in the general release. However, because experimental workloads are not included in the overall score, this will not affect any results you generate.
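
The reason an experimental workload can never change your results can be shown in a few lines. To be clear, the field names and the geometric-mean formula below are illustrative only, not WebXPRT's actual scoring method.

```javascript
// Only workloads not flagged as experimental contribute to the overall
// score, so adding or removing an experimental workload changes nothing.
// (Geometric mean used here for illustration.)
function overallScore(results) {
  const scored = results.filter((r) => !r.experimental);
  const product = scored.reduce((acc, r) => acc * r.score, 1);
  return Math.pow(product, 1 / scored.length);
}

const withExperimental = overallScore([
  { name: 'Workload A', score: 100 },
  { name: 'Workload B', score: 400 },
  { name: 'Experimental Workload', score: 5, experimental: true },
]);
console.log(withExperimental); // 200 — the experimental entry is ignored
```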

We’re also investigating the use of the JavaScript navigator object to improve system disclosure, but we are still determining if we can get reliable enough information to display. So this information is not displayed in the community preview.

As with all the BenchmarkXPRT community previews, we’re not putting any publication restrictions on this preview release. Test at will, and publish your findings. We guarantee the results for the community preview will be comparable to results from the general release.

If you’re not a community member, join us and check out the new WebXPRT.

Eric
