My research group investigates parallel computing systems, recently concentrating on the use of the graphics processing unit (GPU) as a general-purpose processor. GPUs can potentially deliver substantially greater performance than their CPU counterparts on a broad range of problems, but effectively mapping problems to a parallel programming model with an immature programming environment remains a significant research challenge. As the computing industry moves to parallel hardware and software, the lessons learned from the GPU, the first commodity parallel processor, become even more important; our field of “general-purpose computation on the GPU” (GPGPU), also called “GPU computing”, has had a substantial and growing impact on mainstream computing. We are primarily interested in the intersection between hardware and software: how to build software that best utilizes the hardware, how to build hardware that is programmable and a good target for software, and how to characterize a programming model that connects the two.