
GPU PROGRAMMING


For Doctoral students



Many research questions involve time-consuming calculations when standard hardware is used. It is therefore common to use large computer clusters or cloud computing to reduce the processing time, but being able to run the calculations on your own computer (equipped with one or several graphics cards) has several advantages: the researcher does not have to transfer data to and from the cluster, can install whatever software the research project requires, and does not have to wait for other researchers' jobs to finish at the cluster.

Modern graphics cards (GPUs) are powerful and flexible, and can therefore perform parallel calculations for many different applications (e.g. image processing, statistics, machine learning, physics simulations). The reason is that modern graphics cards contain several thousand processor cores and extremely fast memory (originally intended to handle advanced computer graphics for computer games). Some software packages have built-in support for taking advantage of graphics cards, while other research problems require that the researcher modifies existing code (e.g. Matlab), or translates the code to another programming language (e.g. CUDA or OpenCL).
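As a small taste of the programming model taught in the course, the sketch below (a hypothetical example, not part of the course material) shows the typical CUDA pattern of launching one GPU thread per data element, here for adding two arrays:

```cuda
#include <cuda_runtime.h>
#include <stdio.h>

// Each GPU thread computes one element of c = a + b.
__global__ void add(const float *a, const float *b, float *c, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)                 // guard threads beyond the array end
        c[i] = a[i] + b[i];
}

int main()
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory is accessible from both CPU and GPU
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    add<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Compiled with nvcc, the thousands of threads created by the kernel launch are spread across the GPU's processor cores, which is what makes this style of code fast for data-parallel problems.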

This course in GPU programming has the following goals:

- To understand concepts about graphics hardware and different memory types

- To be able to write basic code in the programming languages CUDA and OpenCL

- To understand the basic concepts of how GPU performance can be optimized

Required prior knowledge:

- C++ programming

The course contains the following lectures:

- Lecture 1: GPU hardware and basics of CUDA programming

- Lecture 2: 2D convolution in CUDA using different memory types, performance optimization

- Lecture 3: OpenCL programming

- Lecture 4: Using CUDA libraries, and accelerating Matlab and Python code

The course contains the following computer lab sessions:

- Lab 1: Getting started with CUDA, implementing a simple kernel, comparing performance
using different configurations

- Lab 2: 2D image convolution with CUDA, comparing performance using global memory,
texture memory and shared memory

- Lab 3: 2D image convolution with OpenCL

- Lab 4: Using CUDA libraries, reading the CUDA documentation to use the functions available in
CUDA libraries
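As an illustration of what the convolution lab involves, the sketch below (hypothetical, with an assumed 5x5 filter stored in constant memory) shows the core idea of using fast on-chip shared memory: each thread block first loads an image tile, including a halo of border pixels, and all threads then convolve out of that tile instead of repeatedly reading slow global memory:

```cuda
#define RADIUS 2          // assumed filter radius (5x5 filter)
#define TILE   16         // assumed threads per block dimension

// Filter coefficients, assumed to be copied here by the host
// with cudaMemcpyToSymbol before the kernel is launched
__constant__ float d_filter[(2*RADIUS+1) * (2*RADIUS+1)];

__global__ void conv2d_shared(const float *in, float *out, int w, int h)
{
    // Image tile plus a halo of RADIUS pixels on each side
    __shared__ float tile[TILE + 2*RADIUS][TILE + 2*RADIUS];

    int x = blockIdx.x * TILE + threadIdx.x;
    int y = blockIdx.y * TILE + threadIdx.y;

    // Cooperatively load the tile (halo pixels clamped to the image edge)
    for (int dy = threadIdx.y; dy < TILE + 2*RADIUS; dy += TILE)
        for (int dx = threadIdx.x; dx < TILE + 2*RADIUS; dx += TILE) {
            int gx = min(max(blockIdx.x * TILE + dx - RADIUS, 0), w - 1);
            int gy = min(max(blockIdx.y * TILE + dy - RADIUS, 0), h - 1);
            tile[dy][dx] = in[gy * w + gx];
        }
    __syncthreads();      // wait until the whole tile is loaded

    if (x < w && y < h) {
        float sum = 0.0f;
        for (int fy = 0; fy <= 2*RADIUS; fy++)
            for (int fx = 0; fx <= 2*RADIUS; fx++)
                sum += d_filter[fy * (2*RADIUS+1) + fx] *
                       tile[threadIdx.y + fy][threadIdx.x + fx];
        out[y * w + x] = sum;
    }
}
```

Comparing this version against a naive kernel that reads every pixel directly from global memory, and against one that reads via texture memory, is exactly the kind of performance experiment the labs ask for.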

Project

Completing the lab sessions gives 3 course credits. Each student can also carry out a project adapted to the student's own research, to obtain 3 additional credits.

Course book

To be decided

Hardware

Each student is expected to have access to an Nvidia graphics card, and to carry out the lab sessions on their own computer. Students can ask their supervisors to apply for a free graphics card from Nvidia: https://www.developer.nvidia.com/academic_gpu_seeding

Start date

Beginning of April


Contact

Senior lecturer Anders Eklund, Department of Biomedical Engineering

 


Page responsible: Webmaster
Last updated: 2019-01-16