Sometimes I believe I can speed up my algorithm by manually implementing specific kernels such as a vector sum, a vector multiply, or a matrix multiply. I usually spend a lot of time coding and an impressive amount of time debugging. But is it really necessary? I will take a short look at the matrix-matrix multiply, which has been discussed several times on my blog. It is a compute-intensive task with high memory usage. The naive implementation is very easy to write, but its bottleneck lies in the limited throughput of the GPU memory. A method called "tiling" can therefore be used together with the GPU's shared memory to reduce the memory traffic and cut the computing time. This implementation, however, is no longer as easy, and at least some basic knowledge of the GPU hardware is required. I have recently been testing whether it is worth optimizing my own kernel, or whether I should just use NVIDIA's cuBLAS library. The result is somewhat fascinating.
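To illustrate the tiling idea, here is a minimal sketch of a shared-memory tiled kernel for C = A × B. The tile size, the square row-major matrices, and the kernel name are my assumptions for the example, not the exact kernel benchmarked below.

```cuda
// Sketch of a tiled matrix-matrix multiply (C = A * B) using shared memory.
// Assumes square N x N row-major matrices; TILE is an illustrative choice.
#define TILE 16

__global__ void matMulTiled(const float *A, const float *B, float *C, int N)
{
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    // Walk over the tiles of A (along the row) and B (along the column).
    for (int t = 0; t < (N + TILE - 1) / TILE; ++t) {
        // Stage one tile of A and one tile of B into shared memory (guard the edges).
        As[threadIdx.y][threadIdx.x] =
            (row < N && t * TILE + threadIdx.x < N) ? A[row * N + t * TILE + threadIdx.x] : 0.0f;
        Bs[threadIdx.y][threadIdx.x] =
            (col < N && t * TILE + threadIdx.y < N) ? B[(t * TILE + threadIdx.y) * N + col] : 0.0f;
        __syncthreads();

        // Each thread accumulates one element of the output tile from shared memory,
        // so each global value is loaded once per tile instead of once per product term.
        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }

    if (row < N && col < N)
        C[row * N + col] = acc;
}
```

Launched with a TILE × TILE thread block and a grid covering the output matrix, this already cuts the global-memory traffic considerably compared with the naive kernel.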

Custom Global & Shared Impl.

 

Although one will likely encounter a situation where MM has to be implemented manually, my advice is to use the power of NVIDIA's libraries wherever they can be used. The reason is given below (tested on an NVIDIA Quadro M2000 with CUDA 8.0, 64-bit; the measured results are the average of 1000 runs). The average speedup of cuBLAS over the shared-memory implementation is roughly 10x (for non-trivial matrices). The difference could possibly be reduced by using intrinsic functions, enabling fast math, unrolling loops, and eliminating divergent branches. Therefore, I do believe that beating the BLAS library is not an easy task.

[Figure: matrix-matrix multiply benchmark — custom global and shared-memory kernels vs. cuBLAS]
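For comparison, using the library boils down to a single cublasSgemm call once the data is on the device. The sketch below assumes square N × N single-precision matrices, device pointers, and a pre-created handle; the operand swap for row-major data is a common convention, not something taken from my benchmark code.

```cuda
// Sketch of the cuBLAS call for C = A * B on square N x N matrices
// already resident on the device. Names and sizes are placeholders.
#include <cublas_v2.h>

void gemmWithCublas(cublasHandle_t handle,
                    const float *dA, const float *dB, float *dC, int N)
{
    const float alpha = 1.0f;
    const float beta  = 0.0f;

    // cuBLAS expects column-major storage. For row-major data a common trick
    // is to compute B*A instead: read back as row-major, dC then holds A*B.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                N, N, N,
                &alpha, dB, N, dA, N,
                &beta,  dC, N);
}
```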

The only interesting part is my own implementation of the vector sum. It is a bit faster than the cuBLAS version, but the difference probably comes from the missing transfer of the result back to the CPU, since I do not usually transfer the result. The sum of a 1024-element vector takes 8.1 μs, while cuBLAS needs 115 μs (MSI GTX 960). I still wonder what causes this difference :)
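For reference, a typical shared-memory block reduction looks like the sketch below. This is the generic pattern rather than my exact kernel, and the block size and the second-pass finish are assumptions. Note also that if the library routine used is cublasSasum, its default pointer mode writes the result to host memory, which adds a synchronizing device-to-host copy that a purely on-device reduction skips.

```cuda
// Sketch of a shared-memory block reduction for the vector sum.
// Assumes a block size of 256 threads; each block writes one partial sum.
__global__ void vectorSum(const float *in, float *blockSums, int n)
{
    __shared__ float sdata[256];

    int tid = threadIdx.x;
    int i   = blockIdx.x * blockDim.x * 2 + threadIdx.x;

    // Each thread loads up to two elements, halving the number of blocks needed.
    float v = (i < n) ? in[i] : 0.0f;
    if (i + blockDim.x < n) v += in[i + blockDim.x];
    sdata[tid] = v;
    __syncthreads();

    // Tree reduction in shared memory.
    for (int s = blockDim.x / 2; s > 0; s >>= 1) {
        if (tid < s) sdata[tid] += sdata[tid + s];
        __syncthreads();
    }

    // One partial sum per block; a second pass (or an atomicAdd) can finish the
    // sum entirely on the GPU, avoiding the device-to-host copy mentioned above.
    if (tid == 0) blockSums[blockIdx.x] = sdata[0];
}
```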

 
