
Tag Archives: machine learning

Answering questions about the AIXPRT Community Preview

Over the last two weeks, we’ve received a few questions about the AIXPRT Community Preview. Specifically, community members have asked about the project’s focus, possible future steps, and the results table. We decided to answer each of these here in the blog, since others are likely to have the same questions. We encourage folks to submit any new questions they may have.

PT previously stated that AIXPRT would be focused on edge devices. The current published results are from desktops and laptops. Is the focus of AIXPRT changing?

In the past, we did say that the focus of AIXPRT would be edge inference devices. After much feedback, we’ve come to understand that this focus is probably too restrictive. PCs and laptops already run machine learning inference, and until phones are capable enough to handle the workloads themselves, a fair amount of inference will take place on servers in the cloud. We now see all of these devices as potential targets for AIXPRT.

How did you choose the current results in your database?

We ran the AIXPRT CP on some of the systems we used during development and testing. We will continue to publish additional results as we test available systems in our lab. We’d love to get results from the community that cover a wider base of devices.

Will you be publishing results from servers?

We welcome server results submissions from the community, and will review them for publication on our site.

Will AIXPRT ever be available for Windows systems?

This is a possibility we’re actively exploring, and we hope to be able to share more about it soon.

What’s the best way to navigate the results table?

AIXPRT can run three toolkits, use two networks, and target CPU or GPU hardware. Together, these configuration options produce a lot of data points: three toolkits × two networks × two hardware targets can yield up to twelve result combinations per system, before you even count precision levels. To make it easier to handle all these variables, we’re working to improve the navigation, sorting, and filtering capabilities of the results table. In the meantime, a few tips:

  • There are two tabs at the top of the table, one for the ResNet-50 network and one for the SSD-MobileNet network. You can click the tabs to move between results for these networks.
  • Clicking any of the column headers will sort the data in that column A-Z (with the first click) or Z-A (with a second click).
  • To see if an individual test targeted a system’s CPU or GPU, read the description in the Summary column, e.g. Intel Core i7-7600U GPU / OpenVINO.
  • Clicking the entry in the Source column will take you to a more detailed page listing additional test configuration and system hardware information.


We’ll continue to share more information about AIXPRT in the coming weeks. Do you have additional questions or comments about AIXPRT? Let us know.

Justin

More, faster, better: The future according to Mobile World Congress 2019

More is more data, which the trillions of devices in the coming Internet of Things will be pumping through our air into our (computing) clouds in hitherto unseen quantities.

Faster is the speed at which tomorrow’s 5G networks will carry this data—and the responses and actions from our automated assistants (and possibly overlords).

Better is the quality of the data analysis and recommendations, thanks primarily to the vast army of AI-powered analytics engines that will be poring over everything digital the planet has to say.

Swimming through this perpetual data tsunami will be we humans and our many devices, our laptops and tablets and smartphones and smart watches and, ultimately, implants. If we are to believe the promise of this year’s Mobile World Congress in Barcelona—and of course I do want to believe it, who wouldn’t?—the result of all of this will be a better world for all humanity, no person left behind. As I walked the show floor, I could not help but feel and want to embrace its optimism.

The catch, of course, is that we have a tremendous amount of work to do between where we are today and this fabulous future.

We must, for example, make sure that every computing node that will contribute to these powerful AI programs is up to the task. From the smartphone to the datacenter, AI will end up being a very distributed and very demanding workload. That’s one of the reasons we’ve been developing AIXPRT. Without tools that let us accurately compare different devices, the industry won’t be able to keep delivering the levels of performance improvements that we need to realize these dreams.

We must also think a lot about how to accurately measure all other aspects of our devices’ performance, because the demands this future will place on them are going to be significant. Fortunately, the always evolving XPRT family of tools is up to the task.

The coming 5G revolution, like all tech leaps forward before it, will not come evenly. Different 5G devices will end up behaving differently, some better and some worse. That fact, plus our constant and growing reliance on bandwidth, suggests that maybe the XPRT community should turn its attention to the task of measuring bandwidth. What do you think?
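What would that look like in practice? At its simplest, measuring bandwidth means moving a known amount of data and dividing by the elapsed time, as in the naive Python sketch below. The URL is a hypothetical placeholder; a real tool would need controlled test servers and would have to account for protocol overhead, congestion, and connection warm-up.

    # Naive download-throughput sketch (illustrative only).
    # The URL is a hypothetical placeholder, not a real test server.
    import time
    import urllib.request

    TEST_URL = "https://example.com/testfile.bin"

    start = time.time()
    with urllib.request.urlopen(TEST_URL) as response:
        data = response.read()
    elapsed = time.time() - start

    megabits = len(data) * 8 / 1_000_000
    print(f"{len(data)} bytes in {elapsed:.2f} s "
          f"({megabits / elapsed:.1f} Mbps)")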

One thing is certain: we at the BenchmarkXPRT Development Community have a role to play in building the tools necessary to test the tech the world will need to deliver on the promise of this exciting trade show. We look forward to that work.

All about the AIXPRT Community Preview

Last week, Bill discussed our plans for the AIXPRT Community Preview (CP). I’m happy to report that, despite some last-minute tweaks and testing, we’re close to being on schedule. We expect to take the CP build live in the coming days, and will send a message to community members to let them know when the build is available in the AIXPRT GitHub repository.

As we mentioned last week, the AIXPRT CP build includes support for the Intel OpenVINO, TensorFlow (CPU and GPU), and TensorFlow with NVIDIA TensorRT toolkits to run image-classification workloads with ResNet-50 and SSD-MobileNet v1 networks. The test reports results at FP32, FP16, and INT8 levels of precision. Although the minimum CPU and GPU requirements vary by toolkit, all test systems must be running Ubuntu 16.04 LTS. You’ll be able to find more detail on those requirements in the installation instructions that we’ll post on AIXPRT.com.
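For readers who want a concrete sense of what these image-classification workloads involve, here is a minimal Python sketch of a timed ResNet-50 inference run using TensorFlow’s Keras API. This is not AIXPRT code; the batch size, iteration count, dummy input data, and images-per-second metric are illustrative assumptions only.

    # Rough sketch of a timed image-classification inference loop
    # (illustrative only; not the AIXPRT harness).
    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.applications.ResNet50(weights="imagenet")
    batch = np.random.rand(8, 224, 224, 3).astype("float32")  # dummy images

    _ = model.predict(batch)  # warm-up run so one-time setup isn't timed

    iterations = 50
    start = time.time()
    for _ in range(iterations):
        _ = model.predict(batch)
    elapsed = time.time() - start

    # Throughput in images per second, a common inference metric.
    print(f"{(iterations * batch.shape[0]) / elapsed:.1f} images/sec")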

We’re making the AIXPRT CP available to anyone interested in participating, but you must have a GitHub account. To gain access to the CP, please contact us and let us know your GitHub username. Once we receive it, we’ll send you an invitation to join the repository as a collaborator.

We’re allowing folks to quote test results during the CP period, and we’ll publish results from our lab and other members of the community at AIXPRT.com. Because this testing involves so many complex variables, we may contact testers if we see published results that differ significantly from those of comparable systems. On the AIXPRT results page, we’ll provide detailed instructions on how to send us your results for publication on our site. For each set of results we receive, we’ll disclose all of the detailed test, software, and hardware information that the tester provides. In doing so, our goal is to make it possible for others to reproduce the test and confirm that they get similar numbers.

If you make changes to the code during testing, we ask that you email us and describe those changes. We’ll evaluate whether those changes should become part of AIXPRT. We also require that testers not publish results from modified versions of the code during the CP period.

We expect the AIXPRT CP period to last about four to six weeks, placing the public release around the end of March or beginning of April. In the meantime, we welcome your thoughts and suggestions about all aspects of the benchmark.

Please let us know if you have any questions. Stay tuned to AIXPRT.com and the blog for more developments, and we look forward to seeing your results!

JNG

Preparing for the AIXPRT Community Preview

Thanks to everyone who downloaded the AIXPRT Request for Comments (RFC) preview build. Next week, we’re planning to publish the AIXPRT Community Preview (CP). The AIXPRT CP build includes support for the Intel OpenVINO, TensorFlow (CPU and GPU), and TensorFlow with NVIDIA TensorRT toolkits to run image-classification workloads with ResNet-50 and SSD-MobileNet v1 networks. The test reports results at FP32, FP16, and INT8 levels of precision. As with the RFC build, the test systems must be running Ubuntu 16.04 LTS. The minimum CPU and GPU requirements vary according to the toolkit being used, and we will publish more details about the hardware minimums next week.

As with our other community previews, we think the AIXPRT CP candidate is solid enough to allow folks to start quoting test results. During CP periods, we generally allow members to publish their own results, but wait until the build is available to the public before we post results on our site. Because community feedback is especially important for AIXPRT, we will handle things a bit differently. During the CP period, we’ll publish results that we produce as well as those from other members of the community, which you’ll be able to view at AIXPRT.com.

We’ll also provide detailed instructions for publishing results and sending them to us. Because of the high number of variables in each potential test configuration, we’ll ask testers to disclose more test, software, and hardware information than in the past. We will make this information available along with the results on AIXPRT.com. Our goal is to make it possible for others to reproduce the tests and confirm that they get similar results.

Our CP periods typically last four to six weeks before we make the benchmark available to the general public. If that schedule holds, it would place the public AIXPRT release around the end of March. During the CP period, we welcome your thoughts and suggestions about all aspects of the benchmark.

Also, we normally restrict access to our CPs to BenchmarkXPRT Development Community members. However, because we’re seeking broad input from experts in this field, we’ll gladly make the CP available to anyone interested in participating who has a GitHub account. To gain access, please contact us and let us know your GitHub username. Once we receive it, we’ll send you an invitation to join the repository as a collaborator.

Please let us know if you have any questions. We look forward to hearing your feedback.

Bill

XPRT collaborations: North Carolina State University

For those of us who work on the BenchmarkXPRT tools, a core goal is involving new contributors and interested parties in the benchmark development process. Adding voices to the discussion fosters the collaboration and innovation that lead to powerful benchmark tools with lasting relevance.

One vehicle for outreach that we especially enjoy is sponsoring a student project through North Carolina State University. Each semester, the Senior Design Center in the university’s Department of Computer Science partners with external companies and organizations to provide student teams with an opportunity to work on real-world programming projects. If you’ve followed the XPRTs for a while, you may remember previous student projects such as Nebula Wolf, a mini-game that shows how well different devices handle games, and VR Demo, a virtual reality prototype workload based on a room escape scenario.

This fall, a team of NC State students is developing a software console for automating machine learning tests. Ideally, the tool will let future testers specify custom workload combinations, compute a performance metric, and upload results to our database. The project will also assess the impact of the framework on performance scores. In fact, the console will perform many of the same functions we plan to implement with AIXPRT.
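As a rough illustration of the concept, a console like this boils down to running the tester’s chosen workload combinations and summarizing the results. The sketch below is invented for this post; the workload names and the geometric-mean score are assumptions, not the students’ actual design.

    # Toy sketch of a test-automation console (hypothetical names).
    import time
    import statistics

    def run_workload(name):
        """Stand-in for launching a real ML workload; returns seconds."""
        start = time.time()
        time.sleep(0.1)  # placeholder for real work
        return time.time() - start

    selected = ["resnet50-cpu", "ssd_mobilenet-gpu"]  # tester's choices
    times = {name: run_workload(name) for name in selected}

    # One simple summary metric: the geometric mean of workload times.
    score = statistics.geometric_mean(times.values())
    print(times, f"geomean: {score:.3f} s")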

The students have worked very hard on the project, and have learned quite a bit about benchmarking practices and several new software tools. The project will wrap up in the next couple of weeks, and we’ll share additional details as soon as possible. Early next year, we’ll publish a video about the experience.

If you’d like to join the NC State students and hundreds of other XPRT community members in the future of benchmark development, please let us know!

Justin

Machine learning everywhere!

I usually think of machine learning as an emerging technology that will have a big impact on our lives in the not-too-distant future through applications like autonomous driving. Everywhere I look, however, I see areas where machine learning will affect our lives much sooner, in a myriad of smaller ways.

A recent article in Wired described one such example: the work some MIT and Google researchers have done using machine learning to retouch photos. I would retouch a photo by using a photo-editing program to do something like adjusting the color saturation of the whole image. Their algorithm, by contrast, applies different filters to different parts of a photo, so faces in the foreground might get different treatment than the sunset in the background.

The researchers train the neural network using professionally retouched photos. I love the idea of a program that automatically improves the look of my less-than-professional personal photos.
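To make the per-region idea concrete, here is a toy Python sketch using the Pillow imaging library. It just applies different fixed adjustments to the top and bottom halves of a photo; the researchers’ neural network instead learns smooth, content-aware adjustments, but the basic notion of treating regions differently is the same. The file names are hypothetical.

    # Toy per-region editing: saturate the top half, brighten the bottom.
    from PIL import Image, ImageEnhance

    photo = Image.open("photo.jpg").convert("RGB")  # hypothetical input
    w, h = photo.size

    top = photo.crop((0, 0, w, h // 2))
    bottom = photo.crop((0, h // 2, w, h))

    top = ImageEnhance.Color(top).enhance(1.4)             # richer sky
    bottom = ImageEnhance.Brightness(bottom).enhance(1.2)  # lift shadows

    result = Image.new("RGB", (w, h))
    result.paste(top, (0, 0))
    result.paste(bottom, (0, h // 2))
    result.save("retouched.jpg")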

What I found more exciting, however, was that the researchers were able to make their software efficient enough to run on a smartphone in a fraction of a second. That makes it significantly more useful.

This technology is not yet available, but it seems like something that could show up in existing photo or camera apps before long. I hope to see it soon on a smartphone in my hand!

All of that made me think about how we might incorporate such an algorithm in the XPRTs. When I started reading the article, I was thinking it might fit well in our upcoming machine-learning XPRT. By the time I finished it, however, I realized it might belong in a future version of one of the other XPRTs, like MobileXPRT. What do you think?

Bill
