BenchmarkXPRT Blog

Author Archives: Bill Catchings

HDXPRT 2012 – Testing the test

We’re currently starting to test alpha versions of HDXPRT 2012. To do that, we’re putting together a testbed. We have two goals for the testbed that are somewhat contradictory. The first is to make the testbed as diverse as possible in terms of vendors and configurations. We want notebooks and desktops from as many vendors as possible. We want to make sure we have systems that push the edges—both slower systems that may even fall below the minimum recommended configuration and faster ones representing the current latest and greatest. These systems will help us shake out bugs and provide raw data that we can publish when the benchmark debuts in the new results database.

The second goal for the testbed is to have systems where we can easily change one variable at a time to help us understand the characteristics of the benchmark. Typically, these are white box systems where we can swap processors, disks, RAM, and so on. We will use the results from these systems in the benchmark characterization white paper we will create for the debut of HDXPRT 2012.

We’d like your opinions on what we should be certain to test. We think we have a good handle on what to include, but we want your ideas as well.

We are also looking for additional systems to include in our testbed. If you can supply some, please let me know. That is one way to make sure HDXPRT 2012 works on your system and to get your results into the results database. Rest assured, we will not publish those results without your permission. Regardless, the more systems we can test, the better the final product will be.

There will, of course, be opportunities for you to help with the testing as we get to the beta stage in the near future.

Bill

Comment on this post in the forums

Touch: The finger versus the stylus

One advantage of being in the industry for a long time is seeing the development pendulum go back and forth. One such pendulum is the way of interacting with touch interfaces. Touch interfaces existed long before the current phone and tablet devices. I remember the HP-150, an early touchscreen PC, from my days working on Kermit in the 1980s. It was not a big seller, so you probably never used one. However, you may have used early touchscreen technology in devices like kiosks. While those touch interfaces were fairly simple, you used your fingertip on the screen to indicate your selections.

When PDAs became a big deal in the 1990s, the stylus rather than the fingertip became the way to touch the screen. If you lost your stylus or did not feel like pulling it out of the case, you could use your fingernail. I became very good at writing in the odd script that the Palm OS used. (I still sometimes write the letter A as an upside-down V.) Though the stylus was easier, you could do most things using your fingernail. I also used a stylus (and my fingernail) with Windows smartphones.

Smartphones, especially the iPhone, swung the pendulum back to touching the screen with your fingertip. It took me a fair bit of time to adjust to touching the screen that way. I also had to get used to staring at screens through fingerprints. The ability to multi-touch, however, made it worthwhile. (It also made me start carrying screen-cleaning cloths everywhere.)

Recent tablets have generally utilized multi-touch, fingertip interfaces. I still find myself wishing for a stylus at times. I’ve purchased a few different styli to use with my iPad, but the mushy, fingertip-like ends leave much to be desired. I just ordered an interesting compromise, the Adonit Jot Classic Stylus. I’m hopeful, but won’t be surprised if I’m disappointed.

The stylus on some Windows 7 tablets like the Dell Latitude ST shows what is possible with a stylus. The stylus can be really useful in some work environments. Hopefully, we’ll see more innovation in touch interfaces. In my ideal world, I could use a simple stylus or my fingernail some of the time and my fingertips when multi-touch is better—all on a single device, of course! For now, I just have to keep cleaning off my iPad’s screen while I try to find the ideal stylus.

Whatever way the touch interface pendulum swings, we’ll try to make sure that TouchXPRT will be the right tool to measure it.

Bill

Comment on this post in the forums

Back to the future of source code

Today I’m spending a good chunk of the day participating in a panel discussion on the Kermit file transfer protocol as part of an oral history project with the Computer History Museum. A little over 30 years ago, I worked at Columbia University on the original versions of Kermit. In preparing for the panel, I’ve been thinking about projects with available source code, like Kermit and HDXPRT.

Kermit was a protocol and set of programs for moving files before the Internet. We designed Kermit to work between a wide variety of computers—from IBM mainframes to DEC minicomputers to CP/M microcomputers. As such, we wrote the code to accommodate the lowest common denominator and assume as little as possible. That meant we could not assume that the computers all used ASCII characters (IBM mainframes used EBCDIC), that 8-bit characters would transmit over a phone line, or that packets of more than 100 characters were possible (DEC-20 computers specifically had an issue with that). The pair of Kermit programs negotiated what was possible at the beginning of a session and were able to work, often in situations where nothing else would.
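To give a flavor of what that lowest-common-denominator design meant in practice, here is a small, simplified sketch of Kermit-style packet framing in Python. It shows two of the tricks the paragraph above alludes to: quoting control characters so the payload survives links that only pass printable 7-bit ASCII, and a single-character checksum. This is an illustration rather than a faithful implementation of the full protocol; the function names and structure are mine.

```python
# Simplified sketch of Kermit-style 7-bit-safe packet framing.
# Illustrative only -- not the complete Kermit protocol.

SOH = 0x01          # start-of-packet marker
QUOTE = ord('#')    # control-character quote prefix

def tochar(x: int) -> int:
    """Map a small integer (0-94) into the printable ASCII range."""
    return x + 32

def quote_data(data: bytes) -> bytes:
    """Quote control characters so the payload uses only printable ASCII."""
    out = bytearray()
    for b in data:
        if b < 32 or b == 127:      # control character
            out.append(QUOTE)
            out.append(b ^ 64)      # toggle bit 6 to make it printable
        elif b == QUOTE:            # the quote character itself
            out.append(QUOTE)
            out.append(b)
        else:
            out.append(b)
    return bytes(out)

def checksum(body: bytes) -> int:
    """Fold the byte sum into a single printable check character."""
    s = sum(body)
    return tochar((s + ((s >> 6) & 3)) & 63)

def make_packet(seq: int, ptype: str, data: bytes) -> bytes:
    """Build one packet: SOH, length, sequence, type, quoted data, check."""
    payload = quote_data(data)
    length = len(payload) + 3       # seq + type + check fields
    body = bytes([tochar(length), tochar(seq)]) + ptype.encode('ascii') + payload
    return bytes([SOH]) + body + bytes([checksum(body)])
```

With framing like this, everything after the initial SOH is printable ASCII, so the packet can cross links that mangle or drop control characters and 8-bit data—exactly the kind of hostile path the paragraph above describes.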

We developed Kermit before the open-source movement or the GNU Project. We just had the simple notion that the more people who had access to Kermit, the better. Because we did not want incompatible versions of Kermit or the code to be used for the wrong purposes, we retained control (via copyright) while allowing others to use the code to create their own versions. We also encouraged them to share their code back with us so that we could then share it with others. In this way, Kermit grew to support all sorts of computers, in just about every corner of the planet as well as outer space.

In many ways, what we are doing with HDXPRT and its source code is similar. We are working to create a community of interested people who will work together to improve the product. Our hope is that by having the HDXPRT source code available to the Development Community, it will encourage openness, foster collaboration, and spark innovation.

I believe that what made Kermit successful was not so much the design as the community. I’m hoping that through the Development Community here, we can make HDXPRT, TouchXPRT, and who knows what else in the future just as successful. If you have not already joined, please do—the more folks we have, the better the community and its resulting benchmarks will be. Thanks!

Bill

Comment on this post in the forums

Tentative TouchXPRT plan and schedule

Since the beginning of the year and especially in the last couple of weeks, I’ve been discussing in the blog our thoughts on what should be in TouchXPRT. Based on those thoughts and on feedback we’ve gotten, we are working on scenarios, apps, and workloads for two of the seven possible roles I mentioned in an earlier blog—consuming and manipulating media and browsing the Web. These seemed like two of the more important and common roles and ones where performance might have a noticeable impact.

For the consuming and manipulating media portion, we are working on building a limited app (or apps) that can do some of the functions in the scenario I described in last week’s blog. We’re also working on the necessary content (photos, videos, and sound clips) for TouchXPRT to manipulate and show using the app(s). For the Web browsing role, we are putting together Web pages and HTML5 that emulate sites and applications on the Web.

The goal is to release both of these roles as the first Community Build (CB1) of TouchXPRT by the end of April. As the name implies, CB1 will be available only to members of the Development Community. If you have not joined the Development Community, hopefully TouchXPRT CB1 will give you some additional incentive!

Once we have CB1 ready to release to the community, we will need your help with debugging, results gathering, and general critiquing. As always, thanks in advance for whatever help you are able to offer.

Bill

Comment on this post in the forums

Thinking about TouchXPRT scenarios

Last week I looked at the roles in TouchXPRT that would make sense on a touch-based device like a tablet. I suggested seven possible ones. The next step is to create usage models and scenarios based on those roles. In turn, we would need to develop simple apps to do these things. To get the ball rolling, here are some activity and scenario ideas we came up with for one of the roles—consuming and manipulating media.

After doing email and reading books, this is one of the main things I do on my iPad. Originally, in this role I mostly showed pictures or videos (especially of my grandsons) to people. (Yes, people do hide when they see me coming with my iPad in hand saying, “You gotta see this!”) As the iPad and its apps have grown, I’ve found myself doing some cleaning up of photos, video, and even sound directly on the iPad. I think a person in this role is not necessarily an expert in media but, like most of us, enjoys playing with media. So, the person might do something like scale or trim a video or add a nice cross-dissolve between two video clips. Maybe the person would even create a video montage by combining stock travel footage with personal video clips. Beyond simply rotating and cropping photos, the person might add some stock preset effects like making them sepia toned, adding titles, or creating a postcard. The person might create a slideshow based on a set of travel photos and use some visual or audio effects. They might also add sound by manipulating audio clips. Based on these kinds of usages, the apps would include some of the features found in apps like iMovie, Instagram, SnapSeed, PhotoGene, iPhoto, and GarageBand.

What do you think? How do those activities match your usage of touch-based devices? What would you add, subtract, or change? Do you have suggestions for the other roles? Thanks for your help in defining what TouchXPRT will be.

Bill

Comment on this post in the forums

TouchXPRT update

We have been busy the last couple of months with TouchXPRT. We have been investigating and trying out things on Windows 8 Metro. While we are excited by the possibilities for a benchmark in that space, the task is a bit daunting.

The first key question is what people are likely to do with a device using a touch-based environment like Metro. The best way to answer that is to look at what people are currently doing with iOS- and Android-based devices. We have been playing with those as well as some units running the Metro beta. To create an initial list of roles, or usage categories, we spent some time looking at what is available in the iTunes App Store, the Google Play store, and the Windows Store. Here, in no particular order, is the list of uses we came up with:

  • Consume and manipulate media – Touch devices are heavily used for consuming media (music, photos, and video), but now are being used for some simple manipulation tasks like adding simple visual effects to video, mixing and changing audio, and enhancing photos.
  • Browse the Web – Touch devices are becoming one of the main ways people consume Web content, both normal Web pages and specially crafted “mobile” pages. A touch device is what I use to find the phone number for the nearest Chinese takeout.
  • Watch video for entertainment – Through movie apps like Netflix and TV network apps, touch devices (especially tablets) are becoming a major force in this area.
  • Play games – This is obviously something folks do on their touch devices. As best we can tell, no consumer device can ship without Angry Birds!
  • Interact with others – Through apps like Facebook and Foursquare, touch devices are becoming a big way that people interact with each other.
  • Get news and information – Another big area is general news and information, including things like stock quotes and weather.
  • Use utilities – This is a broad category—there are a ton of utilities for doing everything from moving files to backing up data.

That list covers a lot of ground and some of the areas, like games, would be particularly difficult to benchmark. We thought, however, that it would be best to get everything out and then figure out what to tackle first. The big challenges we face are the lack of apps available for Metro and having no good ability to script or drive applications. Our current thinking is to write some minimal sample apps that mimic common apps out there. These would not be complete apps, but would do some of the key functions. Then, we could build scenarios around these functions. That seems like the best approach to completing something in a timely fashion. Initially, we would aim for two or three of those areas and then add others over time.

As always, we need your feedback. Let us know what you think about the list of uses and the approach in general. And, let us know if you can help with any of the sample app development. Thanks!

Bill

Comment on this post in the forums
