Memory Coalescing Techniques¶
To take full advantage of the high memory bandwidth of the GPU, the reads from global memory must also run in parallel. We consider memory coalescing techniques to organize the execution of the load instructions issued by a warp.
Memory Coalescing Techniques¶
Consider two ways of accessing the elements in a matrix:
elements are accessed row after row; or
elements are accessed column after column.
These two ways are shown in Fig. 112.

Fig. 112 Two ways of accessing elements in a matrix.¶
Recall the linear address system to store a matrix. In C, the matrix is stored row wise as a one dimensional array, see Fig. 58.
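As a small illustration, not taken from the original notes, the program below stores an n-by-m matrix in a one dimensional array: the element in row i and column j sits at index i*m + j.
#include <stdlib.h>

int main ( void )
{
   const int n = 3, m = 4;   /* example sizes for the illustration */
   float *A = (float*) calloc(n*m, sizeof(float));

   /* the element in row i and column j of the n-by-m matrix
      is stored at position i*m + j of the one dimensional array */
   for(int i=0; i<n; i++)
      for(int j=0; j<m; j++)
         A[i*m + j] = 10.0f*i + j;

   free(A);
   return 0;
}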
Threads \(t_0, t_1, t_2\), and \(t_3\) access the elements on the first two columns, as shown in Fig. 113.

Fig. 113 Accessing elements column after column.¶
Four threads \(t_0, t_1, t_2\), and \(t_3\) access elements on the first two rows, as shown in Fig. 114.

Fig. 114 Accessing elements row after row.¶
The differences between uncoalesced and coalesced memory accesses are shown in Fig. 115.

Fig. 115 Uncoalesced versus coalesced access.¶
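To make the contrast concrete, consider the sketch below, which is not part of the original code and uses the hypothetical kernels sumrows and sumcols to sum an n-by-n matrix stored row-wise. In sumrows every thread runs over its own row, so at each load the threads of a warp are n floats apart; in sumcols every thread runs over its own column, so at each load adjacent threads read adjacent floats and the access is coalesced.
__global__ void sumrows ( const float *A, float *s, int n )
{
   int i = blockIdx.x*blockDim.x + threadIdx.x;   // one thread per row
   float v = 0.0f;
   for(int j=0; j<n; j++)
      v += A[i*n + j];    // uncoalesced: adjacent threads are n floats apart
   s[i] = v;
}

__global__ void sumcols ( const float *A, float *s, int n )
{
   int j = blockIdx.x*blockDim.x + threadIdx.x;   // one thread per column
   float v = 0.0f;
   for(int i=0; i<n; i++)
      v += A[i*n + j];    // coalesced: adjacent threads read adjacent floats
   s[j] = v;
}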
We can use shared memory for coalescing. Consider Fig. 71 for the tiled matrix-matrix multiplication.
For \(\displaystyle C_{i,j} = \sum_{k=1}^{m/w} A_{i,k} \cdot B_{k,j}\), \(A \in {\mathbb R}^{n \times m}\), \(B \in {\mathbb R}^{m \times p}\), \(A_{i,k}, B_{k,j}, C_{i,j} \in {\mathbb R}^{w \times w}\), every warp reads one tile \(A_{i,k}\) of \(A\) and one tile \(B_{k,j}\) of \(B\): every thread in the warp reads one element of \(A_{i,k}\) and one element of \(B_{k,j}\).
The number of threads in each dimension of a block equals w, the width of one tile, and the threads are identified by tx = threadIdx.x and ty = threadIdx.y.
The block indices by = blockIdx.y and bx = blockIdx.x correspond respectively to the first and the second index of each tile, so we have row = by*w + ty and col = bx*w + tx.
Row wise access to A uses A[row*m + (k*w + tx)].
For B: B[(k*w+ty)*m + col] = B[(k*w+ty)*m + bx*w + tx].
Adjacent threads in a warp have adjacent tx values, so the access to B is coalesced as well.
The tiled matrix multiplication kernel is below:
#define w 32   /* the tile width w must be known at compile time; 32 is an assumed value */

__global__ void mul ( float *A, float *B, float *C, int m )
{
   // computes C = A*B for square m-by-m matrices stored row-wise,
   // assuming m is divisible by the tile width w

   __shared__ float As[w][w];    // tile of A in shared memory
   __shared__ float Bs[w][w];    // tile of B in shared memory

   int bx = blockIdx.x;  int by = blockIdx.y;
   int tx = threadIdx.x; int ty = threadIdx.y;
   int col = bx*w + tx;  int row = by*w + ty;

   float Cv = 0.0;

   for(int k=0; k<m/w; k++)
   {
      As[ty][tx] = A[row*m + (k*w + tx)];   // coalesced load of a tile of A
      Bs[ty][tx] = B[(k*w + ty)*m + col];   // coalesced load of a tile of B
      __syncthreads();                      // wait until both tiles are loaded

      for(int ell=0; ell<w; ell++)
         Cv += As[ty][ell]*Bs[ell][tx];

      __syncthreads();            // do not overwrite the tiles too early
   }
   C[row*m + col] = Cv;           // C is stored row-wise as a 1-D array
}
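A possible host-side launch of this kernel, not part of the original notes, is sketched below: one w-by-w block of threads is created per tile of C, assuming square m-by-m matrices with m divisible by w and with A_d, B_d, C_d already allocated on the device.
// hypothetical host code to launch the tiled kernel mul
void launch_mul ( float *A_d, float *B_d, float *C_d, int m )
{
   dim3 dimBlock(w, w);      // one thread per element of a w-by-w tile
   dim3 dimGrid(m/w, m/w);   // one block per tile of C
   mul<<<dimGrid, dimBlock>>>(A_d, B_d, C_d, m);
   cudaDeviceSynchronize();  // wait for the kernel to finish
}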
Avoiding Bank Conflicts¶
All threads in the same warp execute the same instruction. When retrieving/storing data from global memory, one instruction in a kernel defines the retrieval/storage of 32 data elements.
With memory coalescing, retrieving/storing 32 data elements requires as much time as retrieving/storing one data element.
Arranging data involves positioning the input and output data so that adjacent data elements are accessed by adjacent threads.
Consider an array of complex numbers and/or multiple doubles.
The elements of such arrays are composite.
Every complex number has a real and imaginary part.
These parts can be one double, or a multiple double.
A quad double consists of a most significant double, followed by the second, third, and fourth most significant doubles.
Using the straightforward representation of such composite types leads to bank conflicts.
Instead of an array of complex doubles, use two arrays:
one array with the real doubles,
another array with the imaginary doubles.
An array of complex quad doubles is stored in 8 arrays.
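The sketch below, with hypothetical names, contrasts the two layouts for 32 complex doubles inside a kernel: the array of structures interleaves the real and imaginary parts, while the structure of arrays keeps them in separate shared arrays so that adjacent threads access adjacent doubles.
typedef struct { double re; double im; } cmplx;   // hypothetical complex type

__global__ void layouts ( void )
{
   // array of structures: real and imaginary parts are interleaved,
   // so 32 threads reading the real parts access every other double
   __shared__ cmplx z[32];

   // structure of arrays: one array with the real doubles and
   // another array with the imaginary doubles
   __shared__ double zre[32];
   __shared__ double zim[32];

   int i = threadIdx.x;
   zre[i] = z[i].re;   // stride-2 reads from z, unit-stride writes to zre
   zim[i] = z[i].im;
}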
Consider the following problem:
On input are \(x_0, x_1, x_2, \ldots x_{31}\), all of type float.
The output is \(x_i^2, x_i^3, x_i^4, \ldots, x_i^{33}\), for \(i = 0, 1, 2, \ldots, 31\).
This gives the 32 threads in a warp 1,024 multiplications to do. Assume the input and output reside in shared memory. How can we compute without bank conflicts?
Suppose we observe the order of the output sequence. If thread \(i\) computes \(x_i^2, x_i^3, x_i^4, \ldots, x_i^{33}\), then after the first step, all threads write \(x_0^2, x_1^2, x_2^2, \ldots, x_{31}^2\) to shared memory. If the stride is 32, all threads write into the same bank. Instead of a simultaneous computation of 32 powers at once, the writing to shared memory will be serialized.
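To see why, recall that on current NVIDIA GPUs shared memory is divided into 32 banks and successive 32-bit words belong to successive banks, so the word with index \(k\) lies in bank \(k \bmod 32\). With a stride of 32, thread \(i\) writes the word with index \(32\,i\), and \(32\,i \bmod 32 = 0\) for every \(i\): all 32 writes target the same bank.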
Suppose we alter the order in the output sequence.
After the first step, thread \(i\) writes \(x_i^2\) in adjacent memory, next to \(x_{i-1}^2\) (if \(i > 0\)) and \(x_{i+1}^2\) (if \(i < 31\)). Without bank conflicts, the speedup will be close to 32.
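Before the Julia version below, here is a CUDA sketch of this conflict-free ordering; the kernel name powers32 and the arrays x and y are hypothetical and not part of the original code. At every step the 32 threads write 32 adjacent words of shared memory, one word per bank.
// one block of 32 threads, thread i computes the powers 2, 3, ..., 33
// of x[i], storing the power p at position (p-2)*32 + i
__global__ void powers32 ( const float *x, float *y )
{
   __shared__ float s[32*32];

   int i = threadIdx.x;
   float v = x[i];
   float p = v*v;               // x[i]^2
   s[i] = p;                    // 32 adjacent words: no bank conflict
   for(int k=1; k<32; k++)
   {
      p = p*v;                  // the next power of x[i]
      s[k*32 + i] = p;          // again 32 adjacent words
   }
   __syncthreads();
   for(int k=0; k<32; k++)      // coalesced copy to global memory
      y[k*32 + i] = s[k*32 + i];
}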
Below is a basic Julia version of the kernel to solve this problem.
using CUDA
"""
function gpupwr32!(a, b)
raises the elements in the array a
to the powers 2, 3, .., 33,
writing the results in the array b.
"""
function gpupwr32!(a, b)
i = threadIdx().x # starts at 1
idx = 1 + 32*(i-1)
b[idx] = a[i]*a[i]
idx = idx + 1
for p=3:33
b[idx] = a[i]*b[idx-1]
idx = idx + 1
end
return nothing
end
The main program to launch the kernel follows:
dx = convert(Float32, 0.2/31)
x_h = [0.9f0 + (k-1)*dx for k=1:32]
y_h = [0.0f0 for k=1:32*32] # output
println("the input numbers : ", x_h)
x_d = CuArray(x_h)
y_d = CuArray(y_h)
# run with 32 threads
@cuda threads=32 gpupwr32!(x_d, y_d)
For correctness, the code continues with a comparison with the vector computed on the host.
Exercises¶
1. Run copyKernel for large enough arrays, for zero offset and an offset equal to two. Measure the timings and deduce the differences in memory bandwidth between the two different values for offset.
2. Consider the kernel of matrixMul in the GPU computing SDK. Is the loading of the tiles into shared memory coalesced? Justify your answer.
3. Write a CUDA program for the computation of consecutive powers, using coalesced access of the values for the input elements. Compare the two orders of storing the output sequence in shared memory: once with and once without bank conflicts.