BenchmarkXPRT Blog


Change is inevitable

As we get close to the beta version of HDXPRT 2012, I wanted to let you know how it compares with the original design specification. As inevitably happens in any software project, there are differences between the original design and the final product. Generally, things have stayed pretty close with HDXPRT 2012, but there are two changes worth noting.

First, the design specification called for Audacity 1.3.14 Beta in the Music Maker scenario, as that was the only version that supported Windows 7 at the time. Audacity 2.0, which supports Windows 7, debuted in the interim, so we are using that version instead.

The second and more significant change involved Picasa, which was to be part of the Media Organizer scenario. Unfortunately, we couldn’t create a stable script because scripting tools like AutoIt could not properly recognize some of Picasa’s UI elements. Somewhat reluctantly, we ended up replacing Picasa with Photoshop Elements. We still think the scenario is a good one and Photoshop Elements is an appropriate tool. I would have liked, however, to have Picasa in there.

There are probably some other minor differences between the beta and the design specification. We’ll let you know what they are when we have the beta ready in a couple of weeks. (Hopefully!) We’re looking forward to getting it into your hands and getting your feedback. If you’re not already a member of the Development Community, I encourage you to join so that you can get your copy of the beta when it is available.

Bill

Comment on this post in the forums

HDXPRT 2012 – Testing the test

We’re starting to test alpha versions of HDXPRT 2012, and to do that, we’re putting together a testbed. We have two goals for the testbed that are somewhat contradictory. The first is to make the testbed as diverse as possible in terms of vendors and configurations. We want notebooks and desktops from as many vendors as possible. We want to make sure we have systems that push the edges—both slower systems that may even fall below the minimum recommended configuration and faster ones representing the current latest and greatest. These systems will help us shake out bugs and provide some raw data that we can publish when the benchmark debuts in the new results database.

The second goal for the testbed is to have systems where we can easily change one variable at a time to help us understand the characteristics of the benchmark. Typically, these are white box systems where we can swap processors, disks, RAM, and so on. We will use the results from these systems in the benchmark characterization white paper we will create for the debut of HDXPRT 2012.

We’d like your opinions on what we should be certain to test. We think we have a good handle on what to include, but we want your ideas as well.

We are also looking for additional systems to include in our testbed. If you can supply some, please let me know. That is one way to make sure HDXPRT 2012 works on your system and to get your results into the results database. Rest assured, we will not publish those results without your permission. Regardless, the more systems we can test, the better the final product will be.

There will, of course, be opportunities for you to help with the testing as we get to the beta stage in the near future.

Bill


Back to the future of source code

Today I’m spending a good chunk of the day participating in a panel discussion on the Kermit file transfer protocol as part of an oral history project with the Computer History Museum. A little over 30 years ago, I worked at Columbia University on the original versions of Kermit. In preparing for the panel, I’ve been thinking about projects with available source code, like Kermit and HDXPRT.

Kermit was a protocol and set of programs for moving files before the Internet. We designed Kermit to work between a wide variety of computers—from IBM mainframes to DEC minicomputers to CP/M microcomputers. As such, we wrote the code to accommodate the lowest common denominator and assume as little as possible. That meant we could not assume that the computers all used ASCII characters (IBM mainframes used EBCDIC), that 8-bit characters would transmit over a phone line, or that packets of more than 100 characters were possible (DEC-20 computers specifically had an issue with that). The pair of Kermit programs negotiated what was possible at the beginning of a session and were able to work, often in situations where nothing else would.
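For the curious, here is a rough Python sketch of classic Kermit packet framing with its single-character checksum, based on the published protocol description. The helper names are mine, and real Kermit layers much more on top of this: control-character and 8-bit prefixing, parameter negotiation at the start of a session, and retransmission on errors.

```python
# Simplified sketch of classic Kermit packet framing (single-character
# checksum).  Everything on the wire is printable ASCII, which is what
# let Kermit survive 7-bit, EBCDIC-translating, and otherwise hostile links.

SOH = "\x01"  # packet mark (Ctrl-A)

def tochar(x):
    """Encode a small integer (0-94) as a printable ASCII character."""
    return chr(x + 32)

def build_packet(seq, ptype, data=""):
    """Frame a sequence number (mod 64), a type letter (e.g. 'S', 'D', 'Z'),
    and a data field into one Kermit-style packet."""
    # LEN counts everything after the LEN field: SEQ + TYPE + data + CHECK
    body = tochar(len(data) + 3) + tochar(seq % 64) + ptype + data
    # Single-character block check over LEN through the end of data
    s = sum(ord(c) for c in body)
    check = tochar((s + ((s & 0xC0) >> 6)) & 0x3F)
    return SOH + body + check

print(repr(build_packet(0, "D", "hello")))
```

Both sides would exchange "S" (send-init) packets like this to negotiate packet length, timeouts, and prefixing before any data moved, which is how the same protocol could span mainframes, minis, and micros.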

We developed Kermit before the open-source movement or Gnu. We just had the simple notion that the more people who had access to Kermit, the better. Because we did not want incompatible versions of Kermit or the code to be used for the wrong purposes, we retained control (via copyright) while allowing others to use the code to create their own versions. We also encouraged them to share their code back with us so that we could then share it with others. In this way, Kermit grew to support all sorts of computers, in just about every corner of the planet as well as outer space.

In many ways, what we are doing with HDXPRT and its source code is similar. We are working to create a community of interested people who will work together to improve the product. Our hope is that making the HDXPRT source code available to the Development Community will encourage openness, foster collaboration, and spark innovation.

I believe that what made Kermit successful was not so much the design as the community. I’m hoping that through the Development Community here, we can make HDXPRT, TouchXPRT, and who knows what else in the future just as successful. If you have not already joined, please do—the more folks we have, the better the community and its resulting benchmarks will be. Thanks!

Bill


Tentative TouchXPRT plan and schedule

Since the beginning of the year, and especially in the last couple of weeks, I’ve been discussing in this blog our thoughts on what should be in TouchXPRT. Based on those thoughts and on the feedback we’ve gotten, we are working on scenarios, apps, and workloads for two of the seven possible roles I mentioned in an earlier blog—consuming and manipulating media and browsing the Web. These seemed like two of the more important and common roles, and ones where performance might have a noticeable impact.

For the consuming and manipulating media role, we are building a limited app (or apps) that can perform some of the functions in the scenario I described in last week’s blog. We’re also working on the necessary content (photos, videos, and sound clips) for TouchXPRT to manipulate and show using the app(s). For the Web browsing role, we are putting together Web pages and HTML5 code that emulate sites and applications on the Web.

The goal is to release both of these roles as the first Community Build (CB1) of TouchXPRT by the end of April. As the name implies, CB1 will be available only to members of the Development Community. If you have not joined the Development Community, hopefully TouchXPRT CB1 will give you some additional incentive!

Once we have CB1 ready to release to the community, we will need your help with debugging, results gathering, and general critiquing. As always, thanks in advance for whatever help you are able to offer.

Bill


Thinking about TouchXPRT scenarios

Last week I looked at the roles in TouchXPRT that would make sense on a touch-based device like a tablet. I suggested seven possible ones. The next step is to create usage models and scenarios based on those roles. In turn, we would need to develop simple apps to do these things. To get the ball rolling, here are some activity and scenario ideas we came up with for one of the roles—consuming and manipulating media.

After doing email and reading books, this is one of the main things I do on my iPad. Originally, in this role I mostly showed pictures or videos (especially of my grandsons) to people. (Yes, people do hide when they see me coming with my iPad in hand saying, “You gotta see this!”) As the iPad and its apps have grown, I’ve found myself doing some cleaning up of photos, video, and even sound directly on the iPad. I think a person in this role is not necessarily an expert in media but, like most of us, enjoys playing with media. So, the person might do something like scale or trim a video, or add a nice cross-dissolve between two video clips. Maybe the person would even create a video montage by combining stock travel footage with personal video clips. Beyond simply rotating and cropping photos, the person might apply stock preset effects like making them sepia toned, adding titles, or creating a postcard. The person might create a slideshow based on a set of travel photos and use some visual or audio effects. They might also add sound by manipulating audio clips. Based on these kinds of usages, the apps would include some of the features found in apps like iMovie, Instagram, SnapSeed, PhotoGene, iPhoto, and GarageBand.

What do you think? How do those activities match your usage of touch-based devices? What would you add, subtract, or change? Do you have suggestions for the other roles? Thanks for your help in defining what TouchXPRT will be.

Bill


Here a core, there a core…

Earlier this week, Apple announced its latest iPad. While the improvements seem to be largely incremental, I can’t wait to get my hands on one. (As an aside, I wonder how much work and how many arguments it took to come up with the name “new iPad.” I thought Apple had finally gotten over its longstanding fear of the number 3 with the iPhone 3G, but I guess not.)

One of the incremental improvements that caught my eye, especially in light of trying to test the performance of touch devices, is the new iPad’s processor, the A5X. It’s hard to get a straight story: most reports refer to the chip as a quad-core processor, while Apple referred to quad-core graphics. As best I can ferret out amidst the hype, the A5X has four graphics cores but only two general-purpose CPU cores.

Regardless of the specifics of the chip, it does have multiple cores for general execution and for graphics. The move to multiple processing units has been an important trend over the last decade for processors in devices from PCs to tablets to phones. The interesting question to me is what the proper way is to benchmark devices in light of that trend. The problem is that for some tasks, the extra cores don’t help at all, while for others, two cores may be twice as fast as one. Similarly, additional dedicated processing units (such as for graphics) help only for particular operations.
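As a back-of-the-envelope way to see why extra cores help some workloads and not others, the textbook Amdahl's law model (this is an illustration, not anything HDXPRT or TouchXPRT measures directly) bounds the speedup by the fraction of the work that can run in parallel:

```python
# Textbook Amdahl's law: if a fraction p of a task can be parallelized,
# the best-case speedup on n cores is 1 / ((1 - p) + p / n).
# Illustrative model only, not a measurement from any XPRT benchmark.

def amdahl_speedup(p: float, n: int) -> float:
    """Best-case speedup on n cores when fraction p of the work is parallel."""
    return 1.0 / ((1.0 - p) + p / n)

# A fully serial task gains nothing from extra cores...
print(amdahl_speedup(0.0, 4))   # -> 1.0
# ...a fully parallel one scales linearly with core count...
print(amdahl_speedup(1.0, 2))   # -> 2.0
# ...and a half-parallel task gets only about 1.33x from a second core.
print(round(amdahl_speedup(0.5, 2), 2))  # -> 1.33
```

The same logic applies to dedicated units like graphics cores: they only accelerate the fraction of the workload that actually runs on them, which is why realistic usage scenarios matter so much.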

The right answer, to me, is to do what we are trying to do with both HDXPRT and TouchXPRT—start with what people really do. That means that some usage scenarios and applications will benefit from additional processing units, while others will not. That should correspond with what people really experience. To make the results more useful, it would be helpful to try to understand which operations are most affected by additional general or special-purpose processing units.

How do you think we should look at devices with multiple and varied processing units? I’d love to get your feedback and incorporate it into both HDXPRT and TouchXPRT over the coming months.

Bill

