As we discussed last week, benchmarks (including HDXPRT 2011) are made up of a set of common major components. Last week’s components included the Installer, User Interface (UI), and Results Viewer. This week, we’ll look more at the guts of a benchmark—the parts that actually do the performance testing.
Once the UI gathers the necessary commands and parameters from the user, the Test Harness takes over. The harness is the logic that runs the individual Tests or Workloads with the parameters the user specified. For application-based benchmarks, the harness is particularly critical because it has to deal with launching and controlling real applications. (Simpler benchmarks may combine the harness and test code in a single program.)
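To make the harness's job concrete, here is a minimal sketch of that run loop in Python. Everything in it is illustrative, not HDXPRT's actual implementation: the function names, the idea of timing each run with a timer, and the choice to keep the best of several runs are all assumptions for the sake of the example.

```python
import time

# Hypothetical harness loop -- names and structure are illustrative,
# not how HDXPRT is actually written.
def run_workloads(workloads, run_one, iterations=3):
    """Run each workload `iterations` times and keep the best elapsed time."""
    results = {}
    for name, command in workloads.items():
        timings = []
        for _ in range(iterations):
            start = time.perf_counter()
            run_one(command)          # launch and wait on the real application
            timings.append(time.perf_counter() - start)
        results[name] = min(timings)  # best of several runs reduces one-off noise
    return results

# Stand-in for actually launching an application:
timings = run_workloads({"photo-edit": ["app.exe", "edit-script"]},
                        run_one=lambda cmd: time.sleep(0.01))
```

Injecting `run_one` as a callback keeps the timing logic separate from the messy business of driving a real application, which is the part that makes application-based harnesses hard.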
The next component consists of the Tests or Workloads themselves. Some folks use those terms interchangeably, but I try to avoid that practice. I tend to think of tests as specially crafted code designed to gauge some aspect of a system’s performance, while workloads consist of a set of actions that an application must take, along with the data those actions require. In HDXPRT 2011, each workload is a set of data (such as photos) and actions (e.g., manipulations of those photos) that an application (e.g., Photoshop Elements) performs. Application-based benchmarks, such as HDXPRT 2011, typically use some other program or technology to pass commands to the applications. HDXPRT uses a combination of AutoIT and C code to drive the applications.
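The "data plus actions" definition of a workload can be modeled in a few lines. This is only an illustration of the idea; HDXPRT itself drives the real applications through AutoIT and C rather than through a class like this, and the names and fields below are invented for the example.

```python
from dataclasses import dataclass, field

# Illustrative model of "workload = data + actions" only; HDXPRT's real
# workloads are driven through AutoIT and C, not a Python class.
@dataclass
class Workload:
    name: str
    data_files: list                              # e.g., the photos to process
    actions: list = field(default_factory=list)   # ordered steps to replay

    def run(self, driver):
        """Replay every action on every data file via a driver callback."""
        for item in self.data_files:
            for action in self.actions:
                driver(action, item)

photo_edit = Workload("photo-edit",
                      data_files=["beach.jpg", "city.jpg"],
                      actions=["open", "auto-fix", "resize", "save"])
```

The `driver` callback stands in for whatever technology passes the commands to the application, which is exactly the role AutoIT and C play in HDXPRT.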
When the Harness finishes running the tests or workloads, it collects the results. It then either passes those results to the Results Viewer or writes them to a file for viewing in Excel or some other program.
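For the file-output path, a two-column CSV is one common choice, since Excel opens it directly. This is a hedged sketch only; the actual format HDXPRT writes is not specified here.

```python
import csv
import io

# Hedged sketch: HDXPRT's real results format is not specified in the post;
# CSV is simply one format Excel can open directly.
def write_results(results, stream):
    """Write a {workload_name: seconds} mapping as a two-column CSV."""
    writer = csv.writer(stream)
    writer.writerow(["Workload", "Seconds"])
    for name, seconds in sorted(results.items()):
        writer.writerow([name, f"{seconds:.2f}"])

buf = io.StringIO()
write_results({"photo-edit": 42.5, "video-convert": 88.1}, buf)
```

Writing to a stream rather than a filename makes the function easy to point at either a file on disk or an in-memory buffer for testing.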
As we look to improve HDXPRT for next year, what changes would you like to see in each of these areas?
Bill