Setting up OpenCL

HOWTO: Setup for OpenCL development in Visual Studio 2008

  1. Download and install the latest NVIDIA drivers. (NOTE: "developer" drivers are available, but I'm not sure they add anything if you just use OpenCL - will investigate later.)
  2. Download and install the NVIDIA GPU Computing Toolkit, a.k.a. the CUDA Toolkit (I'm using version 3.2, 32-bit), from here:
  3. Create your OpenCL project in Visual Studio (I'm using 2008, but I imagine it applies to other versions).
  4. In Project Properties, go to the "C/C++" properties and add this path to your include directories:
    • Win7: C:\Program Files (x86)\NVIDIA GPU Computing Toolkit\CUDA\v3.2\include
    • WinXP: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v3.2\include
  5. Still in Project Properties, go to the "Linker -> General" settings, add this path to Additional Library Directories:
    • Windows 7: C:\Program Files (x86)\NVIDIA GPU Computing Toolkit\CUDA\v3.2\lib\Win32
    • WinXP: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v3.2\lib\Win32
  6. STILL doing Linker settings, under "Linker -> Input", add "OpenCL.lib" to "Additional Dependencies".
    • Alternative: put this in one of your files: #pragma comment(lib, "OpenCL.lib" )
  7. You are good to go!
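
To confirm the setup works end to end, here is a minimal sanity-check sketch (untested against every driver version; it assumes the include/library paths from steps 4-6) that queries the first OpenCL platform and GPU device and prints their names:

```c
// Minimal OpenCL sanity check: prints the first platform and GPU device name.
// Assumes the include path and OpenCL.lib from steps 4-6 are configured.
#include <stdio.h>
#include <CL/opencl.h>

#pragma comment(lib, "OpenCL.lib")

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        printf("No OpenCL platform found - check the driver install.\n");
        return 1;
    }
    clGetPlatformInfo(platform, CL_PLATFORM_NAME, sizeof(name), name, NULL);
    printf("Platform: %s\n", name);

    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Device:   %s\n", name);
    return 0;
}
```

If this compiles, links, and prints your card's name, the paths and library settings are correct.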

HOWTO: Basic OpenCL programming

Use this link to just get some simple code that will compile and run quickly:


  1. #include <CL/opencl.h>
  2. Setup code that I imagine wouldn't change much (get the platform and device, create a context, and create the command queue where commands will be stored)
  3. Allocate memory on the GPU with clCreateBuffer(...)
  4. Allocate any memory you may need locally too.
  5. Write your "kernel"/"kernels"
    • Kernels are written in C-like language.
    • A kernel is a single program, and many copies of it are run simultaneously, each in a different thread.
    • Each kernel can call a special function, get_global_id(...), that tells it what index it is. For example, if you were adding two vectors A+B=C, you would run a kernel for each element of C, and get_global_id would tell you which element you should be calculating.
      • So: int i = get_global_id(0); C[i] = A[i] + B[i];
    • The trick to writing kernels is to:
      • figure out how to break your operation into its parallel parts (not too bad - it's usually just the inside of your FOR loop)
      • be as smart as you can with memory accessing.
    • To expand on that second point: to get the best results you have to be really clever with memory. By that I don't just mean moving data to the GPU (although moving data between the GPU and CPU all the time will kill performance fast); I mean the ratio of memory accesses to arithmetic operations, as well as the way memory is arranged in GPU RAM. This is why vector addition will never beat the CPU - two memory reads for one addition is not a good deal.
  6. Compile the kernel at runtime (!!! - an OpenCL pain point; CUDA can compile ahead of time) and create a kernel object.
  7. Set the inputs of your kernel using clSetKernelArg(...) (e.g. the first argument points to the A vector in GPU RAM, the second to B, the third to C). You can also set inputs that live in CPU RAM (e.g. an integer that contains the size of a matrix), but I'm not sure whether it'd be better to keep them on the GPU.
  8. Initialise everything (GPU and CPU). Memory is copied to GPU using clEnqueueWriteBuffer(...).
  9. Perform your operation using clEnqueueNDRangeKernel(...). There are three key arguments. First is the kernel to run. Second is the global problem size (e.g. the length of the vector). Third is the local work-group size - roughly, the chunk of the problem assigned to each group of threads. You may think that setting it to 1 makes sense (i.e. one element per thread, which the above example link does), but it turns out you can pass NULL and let the driver figure it out. When I did this, I got better performance, so that may be the way to go.
  10. Pull your result back into local memory with clEnqueueReadBuffer(...).
  11. Standard de-allocation code to finish up.
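
The steps above can be sketched as host code for the A+B=C vector-add example. This is a compressed outline (error checking omitted, names illustrative; "source" would hold the kernel text as a string), not a drop-in implementation:

```c
// Sketch of steps 1-11 for the A+B=C vector-add example.
// All error checking omitted for brevity.
#include <CL/opencl.h>

void vector_add(const float *A, const float *B, float *C,
                size_t n, const char *source)
{
    // 2. Setup: platform, device, context, command queue
    cl_platform_id platform;  clGetPlatformIDs(1, &platform, NULL);
    cl_device_id device;
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    // 3. Allocate memory on the GPU
    cl_mem a = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n*sizeof(float), NULL, NULL);
    cl_mem b = clCreateBuffer(ctx, CL_MEM_READ_ONLY,  n*sizeof(float), NULL, NULL);
    cl_mem c = clCreateBuffer(ctx, CL_MEM_WRITE_ONLY, n*sizeof(float), NULL, NULL);

    // 6. Compile the kernel at runtime, create the kernel object
    cl_program prog = clCreateProgramWithSource(ctx, 1, &source, NULL, NULL);
    clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
    cl_kernel kernel = clCreateKernel(prog, "vector_add", NULL);

    // 7. Set the kernel arguments
    clSetKernelArg(kernel, 0, sizeof(cl_mem), &a);
    clSetKernelArg(kernel, 1, sizeof(cl_mem), &b);
    clSetKernelArg(kernel, 2, sizeof(cl_mem), &c);

    // 8. Copy input data to the GPU
    clEnqueueWriteBuffer(queue, a, CL_TRUE, 0, n*sizeof(float), A, 0, NULL, NULL);
    clEnqueueWriteBuffer(queue, b, CL_TRUE, 0, n*sizeof(float), B, 0, NULL, NULL);

    // 9. Run: global size = n, local size = NULL (let the driver pick)
    clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &n, NULL, 0, NULL, NULL);

    // 10. Pull the result back into local memory (blocking read)
    clEnqueueReadBuffer(queue, c, CL_TRUE, 0, n*sizeof(float), C, 0, NULL, NULL);

    // 11. Standard de-allocation
    clReleaseKernel(kernel); clReleaseProgram(prog);
    clReleaseMemObject(a); clReleaseMemObject(b); clReleaseMemObject(c);
    clReleaseCommandQueue(queue); clReleaseContext(ctx);
}
```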

Benchmarking CPU and GPU performance

Parameters: number of times to run test, vector size n.

TEST: Matrix * Vector = Vector


  1. Allocate memory for matrix A (size n by n) and vector B (size n) on CPU. Allocate memory for destination vector C (size n)
  2. Initialise each element to a float related to the memory index (just to put something in there)
  3. Setup OpenCL
  4. Create the OpenCL context and command queue
  5. Load and compile the vector-matrix multiplication kernel (A is assumed to be stored column-major, so A[i + j*m] is element (i, j)):
    • __kernel void vector_matrix(__global const float *A, __global const float *B, __global float *C, int m) {
    •   // Get the index of the current element (row) to be processed
    •   int i = get_global_id(0);
    •   // Dot product of row i of A with the vector B
    •   float result = 0.0f;
    •   for (int j = 0; j < m; j++) {
    •     result += A[i + j*m] * B[j];
    •   }
    •   C[i] = result;
    • }
  6. Allocate memory on the GPU for A, B, C
  7. Copy A,B to GPU, and set A, B, C as the arguments for the kernel.
  8. GPU timing:
    1. Start timer
    2. Run clEnqueueNDRangeKernel for however many times you want (I used 1000 times).
    3. Make sure all the operations actually complete by calling clFinish(command_queue) (the GTX480 seemed to buffer them a lot more)
    4. End timer
  9. CPU timing
    1. Start timer
    2. Run the code however many times you want (e.g. 1000 times)
    3. End timer
  10. Done! Run this for a variety of sizes n and plot the results


| Size n                      |    10 |    50 |   100 |   500 |  1000 |   2000 |
| NVIDIA Quadro FX580         | 0.031 | 0.062 | 0.093 | 0.516 | 1.486 |  5.906 |
| Intel Xeon W3520 @ 2.67 GHz | 0.008 | 0.016 | 0.031 | 0.969 | 8.907 | 38.296 |
| NVIDIA GTX 480              | 0.008 | 0.024 | 0.032 | 0.205 | 0.397 |  0.791 |
| Intel Core i7 @ 3.07 GHz    | 0.008 | 0.008 | 0.027 | 0.805 | 7.661 | 32.483 |
| n^2 / 100,000               | 0.001 | 0.025 | 0.1   | 2.5   | 10    | 40     |


  1. All done with floats.
  2. Note that no memory is exchanged between CPU and GPU during the timing period. This is really crucial. If I did include it, it would cripple the GPU because it would spend most of its time thrashing memory. A good algorithm that utilised the GPU would not move much memory between CPU and GPU, especially not whole matrices. Certainly RSM never needs to move the basis between GPU and CPU, only vectors.
  3. The performance of the CPUs is in line with the problem complexity.
  4. The Xeon is a quad core, and the i7 is too (8 threads with hyperthreading), but only one core is being used. The improvement from the Xeon to the i7 is down to clock speed and any caching improvements. Even if all 8 hyperthreads could be used as efficiently as the single thread is being used, it still would not be faster than the GTX480.
  5. The GTX 480 has 480 "CUDA cores"; the Quadro has 32. It also has a better clock speed, and GDDR5 RAM vs GDDR3 RAM. It performs about 7 times better than the Quadro at n = 2000.
  6. Fitting a linear trendline to the GTX480 numbers using Excel gives an R^2 value of 0.9997, which is awesome considering it's an n^2 problem!
Topic revision: r5 - 2012-01-15 - TWikiAdminUser