CUDA code:
__global__ void matrixAdd1(int *d_a, int *d_b, int *d_c, int width, int height) {
    // Calculate the row and column indices for this thread
    // RECOMMENDED VERSION:
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    // ALTERNATIVE (roles of x and y swapped):
    // int col = blockIdx.y * blockDim.y + threadIdx.y;
    // int row = blockIdx.x * blockDim.x + threadIdx.x;

    // Make sure we don't go out of bounds
    if (row < height && col < width) {
        // Calculate the linear (row-major) index into the matrices
        int idx = row * width + col;
        // Perform the addition
        d_c[idx] = d_a[idx] + d_b[idx];
    }
}
I tested the two ways of computing the data index on arrays with millions of elements, repeated a number of times, and got EDIT only a small /EDIT performance difference. (Grace Hopper GPU)
The block size used is 16x16.
Should there be a performance difference? Can I demonstrate a "right" and a "wrong" way to index?
EDIT It turns out that GPUs have the same performance problems with "streaming" operations that CPUs have. I edited both code variants so that they essentially repeat the addition operation a couple of times. Now there is a clear factor-of-3 performance difference.
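For reference, the edited kernel might look something like the sketch below. The repetition count `REPS` and the exact form of the loop are my assumptions, not the original edit; the point is only to issue enough arithmetic per loaded element that the memory access pattern dominates the runtime.

```cuda
#define REPS 8  // assumed repetition count, not from the original post

__global__ void matrixAddRepeated(int *d_a, int *d_b, int *d_c,
                                  int width, int height) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < height && col < width) {
        int idx = row * width + col;
        int sum = 0;
        // Repeat the addition so each thread does more work per element;
        // the loads still go through the same (coalesced or strided) pattern
        for (int r = 0; r < REPS; r++)
            sum += d_a[idx] + d_b[idx];
        d_c[idx] = sum;
    }
}
```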
Using the profiler I get the following stats.
Right way:
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 2.62
SM Frequency Ghz 1.51
Elapsed Cycles cycle 272,148
Memory Throughput % 69.18
DRAM Throughput % 40.12
Duration us 179.58
L1/TEX Cache Throughput % 66.13
L2 Cache Throughput % 87.64
SM Active Cycles cycle 263,722.43
Compute (SM) Throughput % 48.22
----------------------- ----------- ------------
Wrong way:
----------------------- ----------- ------------
Metric Name Metric Unit Metric Value
----------------------- ----------- ------------
DRAM Frequency Ghz 2.62
SM Frequency Ghz 1.53
Elapsed Cycles cycle 743,618
Memory Throughput % 66.57
DRAM Throughput % 14.89
Duration us 486.11
L1/TEX Cache Throughput % 60.71
L2 Cache Throughput % 66.57
SM Active Cycles cycle 739,352.92
Compute (SM) Throughput % 17.57
----------------------- ----------- ------------
It is interesting that, of the measures expressed as percentages, only the DRAM throughput is significantly affected. I'll have to learn more about the architecture to understand why this is the case. Somewhat surprisingly, cache behavior doesn't seem to play a role here.
Tags: cuda · "Do contiguous threads need to access contiguous data" (Stack Overflow)