Wednesday, September 21, 2016 • 3:15pm - 4:15pm
Introduction to Vector Parallelism

Parallel programming is a hot topic, and everybody knows that multicore processors and GPUs can be used to speed up calculations. What many people don't realize, however, is that CPUs provide another way to exploit parallelism, one that predates recent multicore processors, has less overhead, requires no runtime scheduler, and can be combined with multicore processing to achieve even more speedup. It's called vector parallelism, and the hardware that implements it goes by brand names like SSE, AVX, NEON, and AltiVec. If your parallel program does not use vectorization, you could be leaving a factor of 4 to 16 in performance on the floor.

In some ways, vector programming is easier than thread-based parallel programming because it provides ordering guarantees that more closely resemble those of serial programming. Without an intuitive framework for interpreting them, however, the ordering rules can be confusing, and vector code is subject to restrictions that don't apply to thread-parallel code. In this talk, we'll introduce you to the common elements of most vector hardware, show what kind of C++ code can be automatically vectorized by a smart compiler, and discuss programmer-specified vectorization in OpenMP as well as proposals making their way through the C++ standards committee. You'll come away understanding the rules of vectorization, so that you can begin to take advantage of the vector units already in your CPU.

A basic understanding of C++11 lambda expressions is helpful.

Speakers
Pablo Halpern

Software Engineer, Intel Corp.
Pablo Halpern has been programming in C++ since 1989 and has been a member of the C++ Standards Committee since 2007. He is currently an engineer at Intel Corp., where he works on high-performance computing. As the former chairman of the Parallel Programming Models Working Group at Intel, he coordinated the efforts of teams working on Cilk Plus, TBB, OpenMP, and other parallelism languages, frameworks, and tools targeted to C++, C, and Fortran...


Kantner (Room 403) Meydenbauer Center