BenchmarkXPRT Blog

Category: Let us know your thoughts

How do you use high-definition media?

The first step in our top-down process of defining HDXPRT 2012 is to look at what people actually do today with high-definition media. The obvious place to start is with what we do with photos, video, and audio. Or, in my case, with what I do.

I regularly do about six different things with high-definition media:

  1. Organize media – This is something I do often. Maybe I’m more of a curator than a creator! In this category I include everything from keeping track of media, to doing simple enhancements (for example, red eye removal from photos), to converting to different formats.
  2. Create media – I include here not only capturing the media, but also manipulating photos and videos. These usages may not be the most difficult ones in the applications, but we all do them, and we all wait on them.
  3. Photo blog – This covers the fancier photo work, typically with more complex editing using digital tools.
  4. Produce video – When you really work on a video, you end up with editing and manipulation tasks that seriously stress your system. I often find myself waiting for my computer to finish this sort of work.
  5. Create music – I’m not very good at this type of work, but mixing, editing, remixing, and sharing music can be a lot of fun!
  6. View video – Finally, we come to viewing video: watching videos in different HD formats. I’m much better at this.

Some tasks I do often, such as looking at photos or listening to music, don’t tend to be stressful enough on a computer to be worth measuring in the benchmark. Consequently, I didn’t include them here.

That’s just my take, though.

What do you do? What tasks would you like to see in HDXPRT 2012?

Bill

Comment on this post in the forums

An open, top-down process

We’ve been hard at work putting together the RFC for HDXPRT 2012. As a group of us sat around a table discussing what we’d like to see in the benchmark, it became clear to me how different this development process is from those of other benchmarks I’ve had a hand in creating (3D WinBench, Winstone, WebBench, NetBench, and many others). The big difference is not in the design or the coding or even the final product.

The difference is the process.

A sentiment that came up frequently in our meeting was “Sure, but we need to see what the community thinks.” That indicates a very different process from the one I am used to. Different from what companies developing benchmarks do and different from what benchmark committees do. What it represents, in a word, is openness. We want to include the Development Community in every step of the process, and we want to figure out how to make the process even more open over time. For example, we discussed ideas as radical as recording video of our brainstorming sessions.

Another part of the process I think is important is that we are trying to do things top-down. Rather than deciding which applications should be in the benchmark, we want to start by asking how people really use high-definition media. What do people typically do with video? What do they do to create it and how do they watch it? Similarly, what do people do with images and audio?

At least as importantly, we don’t want to include only our opinions and research on these questions; we want to pick your brains and get your input. From there, we will work on the workflows, the applications, and the RFC. Ultimately, that will lead to the scripts themselves. With your input and help, of course!

Please let us know any ideas you have for how to make the process even more open. And tell us what you think about this top-down approach. We’re excited and hope you are, too!

Bill


Suggestions?

In the midst of releasing the source code and results for HDXPRT 2011, we are beginning to plan for HDXPRT 2012. To make HDXPRT 2012 as good as possible, we want your suggestions. For the next four weeks, we want to hear what you would like to see in HDXPRT 2012. We probably won’t be able to do everything, but now is the time to dream.

To get you thinking, here are some areas where you might like to see changes or improvements:

  • Applications. What applications would you like to see in HDXPRT 2012? Should we add or remove any? Do you know of applications that would help us look at performance in areas we haven’t touched?
  • Workload scenarios. What activities should the applications carry out? Would you like to see other use case scenarios? Why?
  • Metrics. Are the current metrics easy enough to understand? Can you suggest improvements?
  • Execution. Are there issues with how HDXPRT 2011 runs that you would like to see improved?
  • Installation. Would you like to see any changes in how HDXPRT installs?
  • UI. Is the UI clear enough? Should we provide more progress feedback?
  • Documentation. Does the documentation give you what you need to run and understand HDXPRT? Would you like to see more white papers and results analysis?

To help make the suggestion period as interactive as possible, please check out the forum, http://www.hdxprt.com/forum/forumdisplay.php?11-HDXPRT-2012-Suggestions, and post your suggestions there. You can also send your suggestions to hdxprtsupport@hdxprt.com.

Bill


Looking deeper into results

A few weeks ago, I mentioned some questions we had about graphics performance using HDXPRT 2011 after releasing our results white paper. The issue was that HDXPRT 2011 gave results I had not expected—the integrated graphics outperformed discrete graphics cards. I suspected that this was both because HDXPRT 2011’s lack of 3D work lessens the advantage of discrete graphics cards and because the integrated graphics on the second-generation Intel Core processors we used performed well.

We ran some tests with discrete graphics cards on an older processor (an Intel Core 2 Quad processor Q6600) and report our findings in a second results white paper. My suspicions were correct: On the older processor, the discrete graphics cards performed 21 to 36 percent better than the integrated graphics.

As an aside, we are looking into putting our test results on the Web site in some easy-to-access fashion so you can look at them in more detail. My hope is that doing so will facilitate sharing of results among all of us in the HDXPRT Development Community.

Based on this second results white paper, I would love to hear your responses to two questions. First, do you think that future versions of HDXPRT should include 3D graphics? Second, what other areas of HDXPRT 2011 would you like to see us look into?

Bill


Helping hands

We ran into a problem last week with HDXPRT 2011: the benchmark would fail during installation. One of the biggest problems for application-based benchmarks like HDXPRT 2011 is dealing with existing applications on the system. Even more difficult to account for are the many DLLs, drivers, and Registry settings that can collide between applications and between different versions of the same application.

After a lot of effort, we found the problem was indeed a conflict between some of the pre-installed software on the system and the HDXPRT 2011 installer. We were able to narrow down which applications caused the problem and posted on the site some instructions for how to work around the issues. (For more details, log into the forum and then see http://www.hdxprt.com/forum/showthread.php?18-Troubleshooting-Installation-problems-on-Dell-Latitude-notebooks. You won’t be able to read that message if you’re not logged in.)

My hope is that if you run into issues with HDXPRT 2011, you’ll share them. And, share the workarounds you find as well! So, please let us know any tips, tricks, or issues you find with the benchmark by sending email to hdxprtsupport@hdxprt.com. The more we work together, the better we can make both HDXPRT 2011 and the future versions. Thanks!

Next week, we’ll return to looking at the results HDXPRT 2011 provides.

Bill


Anatomy of a benchmark, part II

As we discussed last week, benchmarks (including HDXPRT 2011) are made up of a set of common major components. Last week’s components included the Installer, User Interface (UI), and Results Viewer.  This week, we’ll look more at the guts of a benchmark—the parts that actually do the performance testing.

Once the UI gets the necessary commands and parameters from the user, the Test Harness takes over. This part is the logic that runs the individual Tests or Workloads using the parameters the user specified. For application-based benchmarks, the harness is particularly critical, because it has to deal with running real applications. (Simpler benchmarks may mix the harness and test code in a single program.)

The next component consists of the Tests or Workloads themselves. Some folks use those terms interchangeably, but I try to avoid that practice. I tend to think of tests as specially crafted code designed to gauge some aspect of a system’s performance, while workloads consist of a set of actions that an application must take as well as the necessary data for those actions. In HDXPRT 2011, each workload is a set of data (such as photos) and actions (e.g., manipulations of those photos) that an application (e.g., Photoshop Elements) performs. Application-based benchmarks, such as HDXPRT 2011, typically use some other program or technology to pass commands to the applications. HDXPRT uses a combination of AutoIt and C code to drive the applications.

When the Harness finishes running the tests or workloads, it collects the results. It then either passes those results to the Results Viewer or writes them to a file for viewing in Excel or some other program.
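The harness-and-workload flow described above can be sketched in a few lines of Python. Everything here is illustrative, not HDXPRT’s actual implementation (which drives real Windows applications via AutoIt and C): the workload shape, the function names, and the best-of-N timing policy are all assumptions made for the sake of the example.

```python
import json
import time

def run_harness(workloads, iterations=1):
    """Hypothetical harness: run each workload, time it, collect results.

    Each workload pairs input data with a list of actions, mirroring how
    HDXPRT pairs data (e.g., photos) with actions (e.g., manipulations).
    """
    results = {}
    for name, workload in workloads.items():
        timings = []
        for _ in range(iterations):
            start = time.perf_counter()
            for action in workload["actions"]:
                action(workload["data"])   # stand-in for driving a real app
            timings.append(time.perf_counter() - start)
        results[name] = min(timings)       # report the best run, in seconds
    return results

# Toy workload: the "data" is a list of numbers; the "actions" transform it.
workloads = {
    "photo_edit": {
        "data": list(range(1000)),
        "actions": [lambda d: [x * 2 for x in d],
                    lambda d: sorted(d, reverse=True)],
    },
}

results = run_harness(workloads, iterations=3)
# Like a real harness, hand the results off for viewing -- here, as JSON.
print(json.dumps(results))
```

In a real application-based benchmark, the lambdas would instead be scripted UI automation steps, and the JSON output would feed the Results Viewer.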

As we look to improve HDXPRT for next year, what improvements would you like to see in each of those areas?

Bill

